
I'm attempting to decompress a CSV file on my EC2 instance. The instance should definitely be large enough, so I guess it has to do with partitioning, but I'm new to that stuff and don't really understand the posts I've found here and here, or whether they apply to me. (I'm not using Hadoop, nor do I have a full `/tmp` folder.) The .csv.gz file is 1.6 GB and should be 14 GB decompressed. Running `gzip -d data.csv.gz`, I get the error `gzip: data.csv: No space left on device`, and `df -h` shows:

```
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  2.8G  5.0G  36% /
devtmpfs         15G   56K   15G   1% /dev
tmpfs            15G     0   15G   0% /dev/shm
```
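For reference, the decompressed size can be verified without writing anything to disk; a minimal sketch (assuming GNU gzip's `zcat`, as shipped on Amazon Linux):

```
# Stream the archive through wc to count decompressed bytes; nothing hits disk.
zcat data.csv.gz | wc -c
# `gzip -l data.csv.gz` is faster, but it stores the size in a 32-bit field,
# so it under-reports anything larger than 4 GiB.
```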

Thanks for your help!

  • Your root drive (/dev/xvda1) has 5 GB of free space; try running it within `/dev/shm` (a sketch of this follows below). – Max Mar 21 '15 at 20:01
  • Yeah, alright. I wasn't sure what tmpfs was, so I didn't want to mess with it. I didn't realize that the storage you get assigned with larger instances isn't on your root volume and, worse, as far as I can tell isn't anywhere in your directory structure at all. I realized you have to create and attach volumes, and then mount them (correct me if there's another way; see the second sketch below). I'm still learning about this, but I'll leave this comment up for other clueless beginners. – user Mar 23 '15 at 22:53
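A minimal sketch of the tmpfs route Max suggests, assuming the 14 GB output actually fits in the 15 GB `/dev/shm` (tmpfs is RAM-backed, so the file consumes memory and disappears on reboot, and with roughly 1 GB of headroom it is a tight fit):

```
# Decompress to stdout and redirect into tmpfs instead of the 7.8G root volume.
gzip -dc data.csv.gz > /dev/shm/data.csv
```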
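And a sketch of the create-attach-mount route from the follow-up comment; the size, availability zone, IDs, device name, and mount point below are placeholders to substitute with your own:

```
# Create a 30 GB EBS volume in the instance's availability zone (hypothetical zone).
aws ec2 create-volume --size 30 --availability-zone us-east-1a
# Attach it to the instance (hypothetical IDs).
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf
# On the instance: /dev/sdf typically appears as /dev/xvdf on Xen-based instances.
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir /data
sudo mount /dev/xvdf /data
# Decompress onto the new volume.
gzip -dc data.csv.gz > /data/data.csv
```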

0 Answers