I've run into bizarre behavior in a Jupyter notebook when I load my own CSV: on one GCE instance the read crashes, while on another it works fine. Probably I need to set up the new instance better; maybe I've missed something trivial here.
My code is below; most of it is basically tutorial stuff. The bucket and file names are correct, because the same code works on the other instance. The new GCE instance has more RAM and more CPUs, but I don't think that should be the problem. Any ideas what could go wrong? (The issue occurs on both Python 2 and Python 3.)
from google.datalab import Context
import google.datalab.bigquery as bq
import google.datalab.storage as storage
import pandas as pd
try:
    from StringIO import StringIO  # Python 2
except ImportError:
    from io import BytesIO as StringIO  # Python 3: the data comes back as bytes
%%gcs read --object gs://ls_w/tmax_out_save.csv --variable bla # crashes here
I've also tried reading the object directly through the storage API:
bla = storage.Object('ls_w', 'tmax_out_save.csv').read_stream() # crashes here
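For completeness, here is a minimal sketch of what I'm ultimately trying to do with the file, assuming read_stream() returns the raw CSV contents; the final pd.read_csv step is my intent, not code that has ever run on the broken instance:

import pandas as pd
import google.datalab.storage as storage
try:
    from StringIO import StringIO  # Python 2
except ImportError:
    from io import BytesIO as StringIO  # Python 3

# Fetch the object's contents and parse them into a DataFrame.
obj = storage.Object('ls_w', 'tmax_out_save.csv')
data = obj.read_stream()          # this is the call that crashes on the new instance
df = pd.read_csv(StringIO(data))  # wrap the raw data so pandas can parse it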