
Can anyone help me with how to read a huge data set (33 rows, 38 columns, and a header line starting with # every 39th line, from daily weather data 1951-2015) into R? I have the following data in a .txt file named test2.txt:

# 1950-01-01 00:59:00
  1 5 5 5 9
  2 3 4 5 2
# 1950-01-02 00:59:00
  4 5 4 4 3
  9 4 3 3 3
# 1950-01-03 00:59:00
  4 2 3 3 3
  2 2 2 3 9

I am trying to read it into R, creating an array or a reasonable matrix to do calculations on afterwards. I tried using read.csv and scan, but I guess I am going about it completely the wrong way. Does anyone have an idea which command to use?

 read.csv("test2.txt", header=FALSE, sep="")

Additionally, I want to add names to the rows and columns afterwards, but this can also happen in a second step. The row names should be A, B and the column names C, D, E, F, G, so in the end the array would look like this, since I assume the header names (e.g. # 1950-01-03 00:59:00) would be lost:

  , , 1 
  1 5 5 5 9
  2 3 4 5 2
  , , 2
  4 5 4 4 3
  9 4 3 3 3
  , , 3
  4 2 3 3 3
  2 2 2 3 9

4 Answers


I provide two separate one-line solutions.

Treating the file as fixed-width format

read.fwf("test2.txt", 
         widths = list(21, c(1, rep(2, 4)), rep(2, 5)), 
         comment.char = "")

I illustrate:

file <- "# 1950-01-01 00:59:00
1 5 5 5 9
2 3 4 5 2
# 1950-01-02 00:59:00
4 5 4 4 3
9 4 3 3 3
# 1950-01-03 00:59:00
4 2 3 3 3
2 2 2 3 9"

read.fwf(textConnection(file), 
         widths = list(21, c(1, rep(2, 4)), rep(2, 5)), 
         comment.char = "")

                     V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11
1 # 1950-01-01 00:59:00  1  5  5  5  9  2  3  4   5   2
2 # 1950-01-02 00:59:00  4  5  4  4  3  9  4  3   3   3
3 # 1950-01-03 00:59:00  4  2  3  3  3  2  2  2   3   9
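
Because this fixed-width version keeps the date headers in V1, the names you were worried about losing are still available. Here is a small sketch of how you could fold the ten value columns into the 2 x 5 x 3 array from your question, with the dates along the third dimension (df is just my name for the result above):

df <- read.fwf(textConnection(file), 
               widths = list(21, c(1, rep(2, 4)), rep(2, 5)), 
               comment.char = "")

# V2:V11 hold the two rows of five readings per date;
# fold them into a 2 x 5 x 3 array with the dates as the third dimension
arr <- aperm(array(t(as.matrix(df[, -1])), dim = c(5, 2, nrow(df))),
             c(2, 1, 3))
dimnames(arr) <- list(c("A", "B"), c("C", "D", "E", "F", "G"),
                      sub("^#\\s*", "", df$V1))
arr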

Treating the file as a whitespace-separated table

You only need one line of R code to do this:

read.table("test2.txt", comment.char = "#", header = FALSE)

This works because comment.char specifies a character after which the rest of a line is ignored. Since your header lines start with #, read.table() skips them entirely.

I illustrate:

file <- "# 1950-01-01 00:59:00
  1 5 5 5 9
2 3 4 5 2
# 1950-01-02 00:59:00
4 5 4 4 3
9 4 3 3 3
# 1950-01-03 00:59:00
4 2 3 3 3
2 2 2 3 9"

read.table(text = file, comment.char = "#", header = FALSE)

  V1 V2 V3 V4 V5
1  1  5  5  5  9
2  2  3  4  5  2
3  4  5  4  4  3
4  9  4  3  3  3
5  4  2  3  3  3
6  2  2  2  3  9
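
If you also want the column names from your question (C to G) in the same step, a small variation of the call above sets them directly via col.names:

read.table(text = file, comment.char = "#", header = FALSE,
           col.names = c("C", "D", "E", "F", "G"))

Row names in a data frame must be unique, so the repeated A/B labels are easier to add once you reshape the result into an array (for example along the lines sketched under the fixed-width solution).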
Andrie

For the sample text, I used the following code:

library(stringr)
nrrep <- 3 # or 39 in your case
ncols <- 5
dump <- readLines("test2.txt")
# header lines sit at positions 1, 1 + nrrep, 1 + 2*nrrep, ...
namelines <- str_trim(dump[(1+nrrep*(0:((length(dump))/nrrep -1 )))])
goodlines <- str_trim(dump[-(1+nrrep*(0:((length(dump))/nrrep -1 )))])
# byrow = TRUE so each data line becomes one row of the matrix
mymat <- matrix(unlist(str_split(goodlines, " ")), ncol=ncols, byrow=TRUE)
rownames(mymat) <- rep(namelines, each=nrrep-1)
colnames(mymat) <- paste0("Col",LETTERS[1:ncols])
mymat

                      ColA ColB ColC ColD ColE
# 1950-01-01 00:59:00 "1"  "5"  "5"  "5"  "9" 
# 1950-01-01 00:59:00 "2"  "3"  "4"  "5"  "2" 
# 1950-01-02 00:59:00 "4"  "5"  "4"  "4"  "3" 
# 1950-01-02 00:59:00 "9"  "4"  "3"  "3"  "3" 
# 1950-01-03 00:59:00 "4"  "2"  "3"  "3"  "3" 
# 1950-01-03 00:59:00 "2"  "2"  "2"  "3"  "9" 
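
One caveat: mymat is a character matrix, so convert it to numeric before doing any calculations. A small sketch, reusing the objects above (num is just my name for the converted copy):

# mymat holds character values; rebuild it as numeric with the same dimnames
num <- matrix(as.numeric(mymat), nrow = nrow(mymat),
              dimnames = dimnames(mymat))
num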
Serban Tanasa

I'm not sure the format you think you want to end up with in R is going to help you analyze the data. Without knowing more about the readings, here's a way to do it with base R, and then how to reshape the resulting data frame from wide to long with tidyr:

readings_raw <- readLines(textConnection("# 1950-01-01 00:59:00
  1 5 5 5 9
  2 3 4 5 2
# 1950-01-02 00:59:00
  4 5 4 4 3
  9 4 3 3 3
# 1950-01-03 00:59:00
  4 2 3 3 3
  2 2 2 3 9"))


readings_wide <- do.call(rbind, lapply(seq(1, length(readings_raw), 3), function(i) {
  tmp <- read.table(text=paste(readings_raw[(i+1):(i+2)], collapse=""),
                    col.names=LETTERS[1:10])
  tmp$date <- as.POSIXct(gsub("^# | *$", "", readings_raw[i]))
  tmp
}))

readings_wide
##   A B C D E F G H I J                date
## 1 1 5 5 5 9 2 3 4 5 2 1950-01-01 00:59:00
## 2 4 5 4 4 3 9 4 3 3 3 1950-01-02 00:59:00
## 3 4 2 3 3 3 2 2 2 3 9 1950-01-03 00:59:00
tidyr::gather(readings_wide, reading, value, -date)
##                   date reading value
## 1  1950-01-01 00:59:00       A     1
## 2  1950-01-02 00:59:00       A     4
## 3  1950-01-03 00:59:00       A     4
## 4  1950-01-01 00:59:00       B     5
## 5  1950-01-02 00:59:00       B     5
## 6  1950-01-03 00:59:00       B     2
## 7  1950-01-01 00:59:00       C     5
## 8  1950-01-02 00:59:00       C     4
## 9  1950-01-03 00:59:00       C     3
## 10 1950-01-01 00:59:00       D     5
## 11 1950-01-02 00:59:00       D     4
## 12 1950-01-03 00:59:00       D     3
## 13 1950-01-01 00:59:00       E     9
## 14 1950-01-02 00:59:00       E     3
## 15 1950-01-03 00:59:00       E     3
## 16 1950-01-01 00:59:00       F     2
## 17 1950-01-02 00:59:00       F     9
## 18 1950-01-03 00:59:00       F     2
## 19 1950-01-01 00:59:00       G     3
## 20 1950-01-02 00:59:00       G     4
## 21 1950-01-03 00:59:00       G     2
## 22 1950-01-01 00:59:00       H     4
## 23 1950-01-02 00:59:00       H     3
## 24 1950-01-03 00:59:00       H     2
## 25 1950-01-01 00:59:00       I     5
## 26 1950-01-02 00:59:00       I     3
## 27 1950-01-03 00:59:00       I     3
## 28 1950-01-01 00:59:00       J     2
## 29 1950-01-02 00:59:00       J     3
## 30 1950-01-03 00:59:00       J     9
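
gather() still works, but it has since been superseded in tidyr; if you prefer the newer interface, a roughly equivalent call on the same readings_wide would be:

# newer tidyr interface; rows come out ordered by date rather than by reading
tidyr::pivot_longer(readings_wide, cols = -date,
                    names_to = "reading", values_to = "value")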
hrbrmstr
file <- "# 1950-01-01 00:59:00
  1 5 5 5 9
  2 3 4 5 2
# 1950-01-02 00:59:00
  4 5 4 4 3
  9 4 3 3 3
# 1950-01-03 00:59:00
  4 2 3 3 3
  2 2 2 3 9"

library(dplyr)
library(stringr)
Imported <- data.frame(raw = readLines(textConnection(file)),
                       stringsAsFactors = FALSE) %>%
  mutate(index = cumsum(grepl("#", raw)))   # tag each line with its date block

Dates <- filter(Imported, grepl("#", raw))

ColumnsData <- filter(Imported, !grepl("#", raw)) %>%
  group_by(index) %>%
  mutate(sub_index = 1:n())                 # row number within each date block

# split each data line into single characters, drop the spaces,
# and convert what is left to numeric (assumes single-digit values)
Columns <- 
  do.call("rbind",
          lapply(1:nrow(ColumnsData),
                 function(i){
                   cols <- unlist(str_split(ColumnsData$raw[i], ""))
                   cols <- cols[cols != " "]
                   as.numeric(cols)
                 }
          ))
Columns <- cbind(ColumnsData, as.data.frame(Columns))
Columns <- merge(Dates, Columns,
                 by = "index")

> Columns
  index                 raw.x       raw.y sub_index V1 V2 V3 V4 V5
1     1 # 1950-01-01 00:59:00   1 5 5 5 9         1  1  5  5  5  9
2     1 # 1950-01-01 00:59:00   2 3 4 5 2         2  2  3  4  5  2
3     2 # 1950-01-02 00:59:00   4 5 4 4 3         1  4  5  4  4  3
4     2 # 1950-01-02 00:59:00   9 4 3 3 3         2  9  4  3  3  3
5     3 # 1950-01-03 00:59:00   4 2 3 3 3         1  4  2  3  3  3
6     3 # 1950-01-03 00:59:00   2 2 2 3 9         2  2  2  2  3  9

Not a particularly elegant solution, but it has the advantage of indexing the row numbers within each date as well.
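
If you also want a proper date-time column rather than the raw header text, the raw.x strings can be parsed afterwards. A small sketch on the Columns data frame above (the column name date is my own choice):

# strip the leading "# " from the header text and parse it as a date-time
Columns$date <- as.POSIXct(sub("^#\\s*", "", Columns$raw.x))
# keep the parsed date, the within-date row index, and the value columns
Columns[, c("date", "sub_index", paste0("V", 1:5))]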

Benjamin