I'm using a topic modeling approach that works well on my computer in RStudio, except that it takes ages. So I'm now using a Linux cluster. However, even though I seem to request a lot of capacity, it doesn't really speed up:
I'm sorry, I'm a greenhorn... So this is what I'm running in the Pageant shell:
salloc -N 240 --mem=61440 -t 06:00:00 -p med
...and this is my batch script:
#!/bin/sh
#SBATCH --nodes=200
#SBATCH --time=06:00:00
#SBATCH --partition=med
#SBATCH --mem=102400
#SBATCH --job-name=TestJobUSERNAME
#SBATCH --mail-user=username@ddomain.com
#SBATCH --mail-type=ALL
#SBATCH --cpus-per-task=100
squeue -u username
cd /work/username/data
module load R
export OMP_NUM_THREADS=100
echo "sbatch: START SLURM_JOB_ID $SLURM_JOB_ID (SLURM_TASK_PID $SLURM_TASK_PID) on $SLURMD_NODENAME"
echo "sbatch: SLURM_JOB_NODELIST $SLURM_JOB_NODELIST"
echo "sbatch: SLURM_JOB_ACCOUNT $SLURM_JOB_ACCOUNT"
Rscript myscript.R
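For debugging, I was thinking about adding something like this at the top of myscript.R, just to see what the job actually gets (only a rough sketch; parallel ships with base R, so I assume it is available on the cluster):

library(parallel)

# how many cores does R see on the node it landed on?
cat("detectCores():", detectCores(), "\n")

# what did SLURM actually hand to this job/task?
cat("SLURM_CPUS_PER_TASK:", Sys.getenv("SLURM_CPUS_PER_TASK"), "\n")
cat("SLURM_JOB_NODELIST:", Sys.getenv("SLURM_JOB_NODELIST"), "\n")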
I'm pretty sure there's something wrong with my inputs, because:
- it isn't really faster (though of course my R code could also just be slow, so I tried several R scripts with different kinds of calculations)
- whether I'm using 1 or 200 nodes, the same R script takes almost exactly the same time to run (there should be at least 244 nodes available, though)
- the echo output does not give complete information, and I do not receive e-mail notifications
So these are my typical outcomes:
# just a very small request so I can copy/paste the results; usually I request the allocation shown above
[username@gw02 ~]$ salloc -N 2 --mem=512 -t 00:10:00 -p short
salloc: Granted job allocation 1234567
salloc: Waiting for resource configuration
salloc: Nodes cstd01-[218-219] are ready for job
Disk quotas for user username (uid 12345):
                   -- disk space --
Filesystem     limit     used    avail    used
/home/user       32G     432M      32G      2%
/work/user        1T     219M    1024G      0%
[username@gw02 ~]$ squeue -u username
  JOBID PARTITION     NAME     USER ST  TIME NODES NODELIST(REASON)
1234567     short     bash username  R  2:14     2 cstd01-[218-219]
# (cd into the working directory, module load, etc.)
# missing output for SLURM_TASK_PID and SLURMD_NODENAME:
[username@gw02 data]$ echo "sbatch: START SLURM_JOB_ID $SLURM_JOB_ID (SLURM_TASK_PID $SLURM_TASK_PID) on $SLURMD_NODENAME"
sbatch: START SLURM_JOB_ID 1314914 (SLURM_TASK_PID ) on
Can anybody help? Thank you so much!
EDIT: As Ralf Stubner points out in his comment, I don't do any parallelization in the R code. I have absolutely no idea how to do that. Here is one example calculation:
# Create the data frame
col1 <- runif(12^5, 0, 2)
col2 <- rnorm(12^5, 0, 2)
col3 <- rpois(12^5, 3)
col4 <- rchisq(12^5, 2)
df <- data.frame(col1, col2, col3, col4)

# Original R code: before vectorization and pre-allocation
system.time({
  for (i in 1:nrow(df)) {  # for every row
    if ((df[i, "col1"] + df[i, "col2"] + df[i, "col3"] + df[i, "col4"]) > 4) {  # check if > 4
      df[i, 5] <- "greater_than_4"  # assign 5th column
    } else {
      df[i, 5] <- "lesser_than_4"   # assign 5th column
    }
  }
})
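For comparison, here is my attempt at a vectorized version of the same calculation (just a sketch; I'm assuming ifelse() and rowSums() behave the way I think they do):

# vectorized version of the same row-wise calculation
system.time({
  df$V5 <- ifelse(rowSums(df[, c("col1", "col2", "col3", "col4")]) > 4,
                  "greater_than_4", "lesser_than_4")
})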
... and a shortened version of my "real" code:
library(NLP)
library(tm)
library(SnowballC)
library(topicmodels)
library(lda)
library(textclean)
# load data and create corpus
filenames <- list.files(getwd(), pattern = '*.txt')
files <- lapply(filenames, readLines)
docs <- Corpus(VectorSource(files))
# clean data (shortened, just two examples)
docs.adj <- tm_map(docs, removeWords, stopwords('english'))
docs.adj <- tm_map(docs.adj, content_transformer(tolower))
# create document-term matrix
dtm <- DocumentTermMatrix(docs.adj)
dtm_stripped <- removeSparseTerms(dtm, 0.8)
rownames(dtm_stripped) <- filenames
freq <- colSums(as.matrix(dtm_stripped))
ord <- order(freq,decreasing=TRUE)
### find optimal number of k
burnin <- 10000
iter <- 250
thin <- 50
seed <- list(3)
nstart <- 1
best <- TRUE
seq_start <- 2
seq_end <- length(files)
iteration <- floor(length(files)/5)
best.model <- lapply(seq(seq_start, seq_end, by = iteration), function(k) {
  LDA(dtm_stripped, k, method = 'Gibbs',
      control = list(nstart = nstart, seed = seed, best = best,
                     burnin = burnin, iter = iter, thin = thin))
})
best.model.logLik <- as.data.frame(as.matrix(lapply(best.model, logLik)))
best.model.logLik.df <- data.frame(topics = seq(seq_start, seq_end, by = iteration),
                                   LL = as.numeric(as.matrix(best.model.logLik)))
optimal_k <- best.model.logLik.df[which.max(best.model.logLik.df$LL),]
print(optimal_k)
### do topic modeling with more iterations on optimal_k
burnin <- 4000
iter <- 1000
thin <- 100
seed <- list(2003, 5, 63)
nstart <- 3
best <- TRUE
ldaOut <- LDA(dtm_stripped, optimal_k$topics, method = 'Gibbs',
              control = list(nstart = nstart, seed = seed, best = best,
                             burnin = burnin, iter = iter, thin = thin))
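If doing the parallelization inside R is the right way to go, would something along these lines be a sensible replacement for the lapply() in the k-search above? This is only a sketch using parallel::mclapply() (base R, Linux only); the core count is an assumption on my part and would presumably have to match what SLURM actually allocates to the job:

library(parallel)

# guess the core count from SLURM; fall back to 4 if the variable is not set (assumption)
n_cores <- as.integer(Sys.getenv("SLURM_CPUS_PER_TASK", unset = "4"))

# fit the candidate-k models in parallel instead of sequentially
best.model <- mclapply(seq(seq_start, seq_end, by = iteration), function(k) {
  LDA(dtm_stripped, k, method = 'Gibbs',
      control = list(nstart = nstart, seed = seed, best = best,
                     burnin = burnin, iter = iter, thin = thin))
}, mc.cores = n_cores)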