I'm referring to this question because I'm facing weird behaviour in the column types before and after writing and re-reading the same dataframe from a .csv. Starting from:
In [137]: df
Out[137]:
node1 node2 lang w c1 c2
0 1 2 it 1 a a
1 1 2 en 1 a a
2 2 3 es 2 a b
3 3 4 it 1 b b
4 5 6 it 1 c c
5 3 5 tg 1 b c
6 1 7 it 1 a a
7 7 1 es 1 a a
8 3 8 es 1 b b
9 8 4 es 1 b b
10 1 9 it 1 a a
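For reproducibility, the frame above can be rebuilt like this (a minimal reconstruction from the printed output; the dtypes are whatever pandas infers):

```python
import pandas as pd

# Reconstruction of the example dataframe shown above.
df = pd.DataFrame({
    'node1': [1, 1, 2, 3, 5, 3, 1, 7, 3, 8, 1],
    'node2': [2, 2, 3, 4, 6, 5, 7, 1, 8, 4, 9],
    'lang':  ['it', 'en', 'es', 'it', 'it', 'tg', 'it', 'es', 'es', 'es', 'it'],
    'w':     [1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1],
    'c1':    ['a', 'a', 'a', 'b', 'c', 'b', 'a', 'a', 'b', 'b', 'a'],
    'c2':    ['a', 'a', 'b', 'b', 'c', 'c', 'a', 'a', 'b', 'b', 'a'],
})
```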
Then I perform a groupby:
In [138]: g = df.groupby(['c1','c2'])['lang'].unique().reset_index()
In [139]: g
Out[139]:
c1 c2 lang
0 a a [it, en, es]
1 a b [es]
2 b b [it, es]
3 b c [tg]
4 c c [it]
Getting the values of the lang column returns:
In [148]: g['lang'].values
Out[148]:
array([array(['it', 'en', 'es'], dtype=object),
array(['es'], dtype=object), array(['it', 'es'], dtype=object),
array(['tg'], dtype=object), array(['it'], dtype=object)], dtype=object)
Then, if I write the dataframe out and read it back:
In [141]: g.to_csv('g.csv',index=False)
In [142]: g = pd.read_csv('g.csv')
In [143]: g
Out[143]:
c1 c2 lang
0 a a ['it' 'en' 'es']
1 a b ['es']
2 b b ['it' 'es']
3 b c ['tg']
4 c c ['it']
In [145]: g['lang'].values
Out[145]: array(["['it' 'en' 'es']", "['es']", "['it' 'es']", "['tg']", "['it']"], dtype=object)
So reading the dataframe back from .csv yields an array of strings, which is much harder to handle than the original array of arrays from before the write/read round trip. Does anyone know whether there is a way to keep the same format after reading the dataframe from file?
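One workaround I've been considering (just a sketch, using plain lists instead of the numpy arrays that `unique()` produces): serialise each group's languages as a single delimited string before writing, and split it back on read. An `io.StringIO` buffer stands in for `'g.csv'` here to keep the example self-contained.

```python
import io
import pandas as pd

# Stand-in for the grouped result shown above, with lists instead of arrays.
g = pd.DataFrame({
    'c1': ['a', 'a', 'b', 'b', 'c'],
    'c2': ['a', 'b', 'b', 'c', 'c'],
    'lang': [['it', 'en', 'es'], ['es'], ['it', 'es'], ['tg'], ['it']],
})

# Join each cell into a single delimited string before writing...
g['lang'] = g['lang'].apply('|'.join)

buf = io.StringIO()          # stands in for 'g.csv'
g.to_csv(buf, index=False)
buf.seek(0)

# ...and split the string back into a list on read.
g2 = pd.read_csv(buf)
g2['lang'] = g2['lang'].str.split('|')
```

After the round trip, each cell of `g2['lang']` is again a Python list rather than one opaque string. An alternative that skips the conversion entirely is `to_pickle`/`read_pickle`, which preserves the object dtype exactly, at the cost of the file no longer being a readable .csv.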