I have 100 large JSON files in GCS and want to load them into a pandas DataFrame. I've used something like below in dask:
dd.read_json('gs://dask_poc/2018-04-18/data-*.json')
But when I used:
pd.read_json('gs://dask_poc/2018-04-18/data-*.json')
I got the following error: ValueError: Expected object or value
Is it that pandas can't aggregate all the files together the way dask does?
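For what it's worth, pandas' read_json does not expand glob patterns, it treats the whole string as a single path, which is why the ValueError appears. One workaround is to expand the pattern yourself and concatenate the pieces. The sketch below demonstrates the pattern against local temp files (the file contents and names are made up for the demo); for gs:// paths you would list the matching objects with something like gcsfs.GCSFileSystem().glob(...) instead of glob.glob.

```python
# Sketch: expand the glob manually, read each file, then concat.
# For GCS, swap glob.glob for a gcsfs/fsspec glob over the bucket.
import glob
import json
import os
import tempfile

import pandas as pd

def read_json_glob(pattern, **kwargs):
    """Read every file matching `pattern` and concat into one DataFrame."""
    paths = sorted(glob.glob(pattern))
    if not paths:
        raise FileNotFoundError(f"no files match {pattern!r}")
    return pd.concat((pd.read_json(p, **kwargs) for p in paths),
                     ignore_index=True)

# Local demo: two small line-delimited JSON files (hypothetical data).
tmp = tempfile.mkdtemp()
for i in range(2):
    with open(os.path.join(tmp, f"data-{i}.json"), "w") as f:
        for j in range(3):
            f.write(json.dumps({"id": i * 10 + j, "part": i}) + "\n")

df = read_json_glob(os.path.join(tmp, "data-*.json"), lines=True)
print(len(df))  # 6 rows from the 2 files
```

This is essentially what dd.read_json does for you under the hood, minus the lazy, partitioned execution.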