
I'm creating a NumPy array which will then be exported into a Django model. As the dtype argument I pass None, but I have one column which should be an integer that admits NULL values. When the string 'NULL' is found in the CSV, the type of that column is changed to bool, which doesn't admit None.

Is it possible to do something like dtype={'name of the column': 'int'} just for that column, leave the rest as None, and let NumPy decide their types?

foebu

1 Answer


Assuming you are talking about genfromtxt, you can set the dtype using the dtype parameter:

For example, if your file contains

1.0 2.0 3.0 4.0
1.0 2.0 3.0 4.0
1.0 2.0 3.0 4.0
1.0 2.0 3.0 hello

Then

import numpy as np

a = np.genfromtxt('vlen.txt',
                  dtype=[('col0', 'i4'), ('col1', '<f8'),
                         ('col2', '<f8'), ('col3', 'S5')])

a['col0']
array([1, 1, 1, 1])  # note ints, not floats
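If the goal is to keep dtype=None for most columns and only coerce the one that contains 'NULL', genfromtxt's converters parameter may help: it applies a function to a single column before type inference, while the remaining columns are still inferred automatically. A minimal sketch, assuming the nullable column is at index 0 and mapping 'NULL' to a -1 sentinel (a NumPy integer array cannot hold None):

```python
import io
import numpy as np

# Sample whitespace-separated data; column 0 may contain the string 'NULL'.
data = io.StringIO("1 2.0 3.0\nNULL 2.0 3.0\n4 2.0 3.0")

# Convert column 0 ourselves: 'NULL' becomes the sentinel -1, everything
# else becomes int. encoding='utf-8' makes the converter receive str,
# not bytes. The other columns keep dtype=None auto-detection.
a = np.genfromtxt(
    data,
    dtype=None,
    encoding='utf-8',
    converters={0: lambda s: -1 if s == 'NULL' else int(s)},
)

a['f0']  # the converted integer column (default field name)
```

Note the sentinel is a workaround, not a true NULL; if the Django field really needs None, a masked array (np.genfromtxt's missing_values/usemask options) or pandas is a better fit.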
Lee
  • I was wondering if I have to write a tuple for every column or if I can just define one and leave the others to numpy. In the end I solved it with pandas. – foebu Mar 31 '14 at 20:34
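For reference, the pandas route the commenter mentions can be sketched as below (a minimal example under assumed data, not the commenter's actual code, using a hypothetical column name 'count'): read_csv treats the literal string 'NULL' as missing via na_values, and the nullable 'Int64' dtype (capital I, available in reasonably recent pandas) keeps the column integer-typed despite the missing value:

```python
import io
import pandas as pd

csv = io.StringIO("name,count\na,1\nb,NULL\nc,3")

# 'NULL' is parsed as missing; 'Int64' is pandas' nullable integer
# dtype, so the column stays integer instead of falling back to float.
df = pd.read_csv(csv, na_values=['NULL'], dtype={'count': 'Int64'})

df['count']
```

The missing entry surfaces as pd.NA, which maps naturally to a Django IntegerField(null=True).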