
I had written a previous (similar) post here where I was trying to create a wide table rather than a long one. I have since realized that it's best to keep my table in the long format, so I am posting this as a separate question. I am also posting what I have tried.

I am using R to rbind ~11,000 files using:

library(data.table)  # for rbindlist and fread
library(parallel)    # for mclapply

# get the list of ~11000 files
lfiles <- list.files(pattern = "*.tsv", full.names = TRUE)

# row-bind the files:
# - rbindlist to bind, fread to read each file
# - mclapply with 32 cores to read in parallel
# - add the file basename (without extension) as an id column to identify rows
dat <- rbindlist(mclapply(lfiles, function(X) {
  data.frame(id = basename(tools::file_path_sans_ext(X)),
             fread(X))
}, mc.cores = 32))

I am using R because my downstream processing, such as creating plots, is in R. I have two questions:

1. Is there a way to make my code more efficient/faster? I know the number of rows expected in the final table, so would it help to preallocate the data frame?

2. In what format should I save this huge table: as an .RData file, in a database, or something else?

As additional info: I have three types of files for which I want this done. They look like this:

[centos@ip data]$ head C021_0011_001786_tumor_RNASeq.abundance.tsv
target_id   length  eff_length  est_counts  tpm
ENST00000619216.1   68  26.6432 10.9074 5.69241
ENST00000473358.1   712 525.473 0   0
ENST00000469289.1   535 348.721 0   0
ENST00000607096.1   138 15.8599 0   0
ENST00000417324.1   1187    1000.44 0.0673096   0.000935515
ENST00000461467.1   590 403.565 3.22654 0.11117
ENST00000335137.3   918 731.448 0   0
ENST00000466430.5   2748    2561.44 162.535 0.882322
ENST00000495576.1   1319    1132.44 0   0

[centos@ip data]$ head C021_0011_001786_tumor_RNASeq.rsem.genes.norm_counts.hugo.tab
gene_id C021_0011_001786_tumor_RNASeq
TSPAN6  1979.7185
TNMD    1.321
DPM1    1878.8831
SCYL3   452.0372
C1orf112    203.6125
FGR 494.049
CFH 509.8964
FUCA2   1821.6096
GCLC    1557.4431

[centos@ip data]$ head CPBT_0009_1_tumor_RNASeq.rsem.genes.norm_counts.tab
gene_id CPBT_0009_1_tumor_RNASeq
ENSG00000000003.14  2005.0934
ENSG00000000005.5   5.0934
ENSG00000000419.12  1100.1698
ENSG00000000457.13  2376.9100
ENSG00000000460.16  1536.5025
ENSG00000000938.12  443.1239
ENSG00000000971.15  1186.5365
ENSG00000001036.13  1091.6808
ENSG00000001084.10  1602.7165

Any help would be much appreciated!

Thanks!

– Komal Rathi

2 Answers


You can't do this faster in R than with fread and rbindlist. However, you should not use data.frame, which copies the data. Instead, assign the id by reference:

DF <- fread(X)                                        # returns a data.table
DF[, id := basename(tools::file_path_sans_ext(X))]   # add id by reference, no copy
return(DF)
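
In full, a minimal sketch of how this slots into the mclapply/rbindlist call from your question (read_one is just an illustrative name):

# read one file and tag it with its basename, assigning the id by reference
read_one <- function(X) {
  DF <- fread(X)
  DF[, id := basename(tools::file_path_sans_ext(X))]
  DF
}

dat <- rbindlist(mclapply(lfiles, read_one, mc.cores = 32))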

However, you should consider using a database.

PS: The correct regex is ".+\\.tsv$". It matches any file name consisting of one or more characters, followed by a dot, followed by "tsv" at the end of the name.

– Roland

Regarding question 1: I can't say for sure whether there will be a noticeable difference, but you could try the following to avoid the data.frame calls (as @Roland mentioned in his answer):

# list the files and name the vector with the file basenames
lfiles <- list.files(pattern = ".*\\.tsv$", full.names = TRUE)
setattr(lfiles, "names", basename(lfiles))

# read in parallel; idcol = "id" turns the list names into an id column
dat <- rbindlist(mclapply(lfiles, fread, mc.cores = 32), idcol = "id")

Here, you make use of the idcol argument of rbindlist.
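
One small difference from your original code: basename() keeps the .tsv extension, so the id column will contain the full file name. If you want the id without the extension, you could name the vector with the extension stripped instead, for example:

setattr(lfiles, "names", tools::file_path_sans_ext(basename(lfiles)))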

Regarding question 2: I guess it depends on what you want to do later in your analysis.
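
For what it's worth, a minimal sketch of two common options (assuming the combined table is called dat as above; the file names are just placeholders): saveRDS() keeps a compressed R-native copy that reads back quickly with readRDS(), while fwrite() writes a plain TSV that other tools can also read.

# R-native, compressed; read back with readRDS()
saveRDS(dat, "combined_counts.rds")

# plain text, readable outside R as well
fwrite(dat, "combined_counts.tsv", sep = "\t")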

– talat