
I have a data frame with time series values. The time stamp column is formatted as

> trial_sub$time[8:15]
[1] "500.0008" "500.0009" "500.0010" "500.0011" "500.0012" "500.0013" "500.0014" "500.0015"

When I try to export the data frame as a csv, no matter which method I try, the trailing zeros seem to get dropped:

> print(trial_sub_new$time[8:15])
       time
8  500.0008
9  500.0009
10  500.001
11 500.0011
12 500.0012
13 500.0013
14 500.0014
15 500.0015

This messes up the cross-validation.

How can I export a csv with the zeros at the end?
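One thing worth checking on the export side (this sketch is not from the thread; the column name `time` comes from the post, while the file name and the four decimal places are assumptions): `write.csv()` only drops trailing zeros when the column is numeric, so forcing it to a fixed-width character column before writing keeps "500.0010" intact in the file.

    # Format the time stamps as fixed-width text (4 decimals assumed from the
    # values shown above) so write.csv writes "500.0010" rather than 500.001.
    trial_sub$time <- sprintf("%.4f", as.numeric(trial_sub$time))
    write.csv(trial_sub, "trial_sub.csv", row.names = FALSE)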

JaredS
  • Read it as a character column by specifying the column type with the `colClasses` argument in `read.csv`/`read.table`, and use `stringsAsFactors = FALSE` – akrun Mar 02 '20 at 19:32
  • When you say the zeros get dropped---did you look in the CSV file and see that they are not there? Or do you just notice the dropping when you read it back into R? – Gregor Thomas Mar 02 '20 at 19:38
  • I'm also curious about how this messes up the cross-validation. What goes wrong on the back-end? – Aaron Montgomery Mar 02 '20 at 19:50
  • I checked the CSV with Excel (mistake!) and they weren't there. When I opened it in Notepad, they were there after using `colClasses`. Thanks – JaredS Mar 02 '20 at 19:53
  • After export it's being validated against `character` columns that have the zeros. Since they don't match, validation scores go way down. – JaredS Mar 02 '20 at 19:55
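A minimal sketch of the read-back that akrun's comment describes, assuming the column is named `time` and the file is `trial_sub.csv` (the file name is a guess): declaring the column as character stops `read.csv` from converting "500.0010" back into the numeric 500.001.

    # Read the time column back as plain character so trailing zeros survive.
    # stringsAsFactors = FALSE matters on R < 4.0, where strings default to factors.
    trial_sub_new <- read.csv("trial_sub.csv",
                              colClasses = c(time = "character"),
                              stringsAsFactors = FALSE)
    print(trial_sub_new$time[8:15])

With the column kept as character on both sides, the values read back should match the `character` reference columns used in the validation step.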

0 Answers