
A sample of my data file is below. Note that I have not included all of the header. Note also that a specific data value is often left blank (in this case CALL for rows 1 and 5, but it can be other columns too).

USAF   WBAN  STATION NAME                  CTRY ST CALL  LAT     LON      ELEV(M) BEGIN    END
703165 99999 SAND POINT                    US   AK       +55.333 -160.500 +0006.0 19730107 20041231
703210 25513 DILLINGHAM AIRPORT            US   AK PADL  +59.050 -158.517 +0026.2 20060101 20200516
703210 99999 DILLINGHAM MUNI               US   AK PADL  +59.050 -158.517 +0029.0 19730101 20051231
703260 25503 KING SALMON AIRPORT           US   AK PAKN  +58.683 -156.656 +0020.4 19420110 20200516
703263 99999 KING SALMON                   US   AK       +58.683 -156.683 +0017.0 19801002 19960630

I'd like to simply read each column in as a different 1-dimensional numpy array. I've used the following code:

usaf, wban, name, ctry, st, call, lat3, lon3, elv, begin, end = \
  np.genfromtxt('./documentation/isd-history.txt', \
  dtype=('S6', 'S6', 'S30', 'S3', 'S5', 'S5', float, float, float, int, int), \
  comments='None', delimiter=[6, 6, 30, 3, 5, 5, 9, 9, 8, 9, 9 ],\
  skip_header=22, unpack=True)

I get the following error:

ValueError: too many values to unpack

This seems like a pretty straightforward procedure but clearly I'm missing something. Any advice is appreciated.

twhawk
1 Answer


Your sample with the delimiter does produce 11 fields:

In [83]: data = np.genfromtxt(txt.splitlines(),delimiter=[6, 6, 30, 3, 5, 5, 9, 9, 8, 9, 
    ...: 9 ],names=True, dtype=None, encoding=None)                                      
In [84]: data                                                                            
Out[84]: 
array([(703165, 99999, ' SAND POINT                   ', ' US', '   AK', '     ', 55.333, -160.5  ,  6. , 19730107, 20041231),
       (703210, 25513, ' DILLINGHAM AIRPORT           ', ' US', '   AK', ' PADL', 59.05 , -158.517, 26.2, 20060101, 20200516),
       (703210, 99999, ' DILLINGHAM MUNI              ', ' US', '   AK', ' PADL', 59.05 , -158.517, 29. , 19730101, 20051231),
       (703260, 25503, ' KING SALMON AIRPORT          ', ' US', '   AK', ' PAKN', 58.683, -156.656, 20.4, 19420110, 20200516),
       (703263, 99999, ' KING SALMON                  ', ' US', '   AK', '     ', 58.683, -156.683, 17. , 19801002, 19960630)],
      dtype=[('USAF', '<i8'), ('WBAN', '<i8'), ('STATION_NAME', '<U30'), ('CT', '<U3'), ('RY_ST', '<U5'), ('CALL', '<U5'), ('LAT', '<f8'), ('LON', '<f8'), ('ELEVM', '<f8'), ('BEGIN', '<i8'), ('END', '<i8')])
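With names=True you can also pull out any column by its header-derived field name; a quick illustration using the array above (note that CTRY ST gets split into the odd CT and RY_ST names by the field widths):

data['CALL']   # array(['     ', ' PADL', ' PADL', ' PAKN', '     '], dtype='<U5')
data['LAT']    # array([55.333, 59.05 , 59.05 , 58.683, 58.683])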

Or with the rest of your command:

In [93]: data=np.genfromtxt(txt.splitlines(), \ 
    ...:   dtype=('S6', 'S6', 'S30', 'S3', 'S5', 'S5', float, float, float, int, int), \ 
    ...:   comments='None', delimiter=[6, 6, 30, 3, 5, 5, 9, 9, 8, 9, 9 ],\ 
    ...:   skip_header=1 
    ...: )                                                                               
In [94]: data.dtype.fields                                                               
Out[94]: 
mappingproxy({'f0': (dtype('S6'), 0),
              'f1': (dtype('S6'), 6),
              'f2': (dtype('S30'), 12),
              'f3': (dtype('S3'), 42),
              'f4': (dtype('S5'), 45),
              'f5': (dtype('S5'), 50),
              'f6': (dtype('float64'), 55),
              'f7': (dtype('float64'), 63),
              'f8': (dtype('float64'), 71),
              'f9': (dtype('int64'), 79),
              'f10': (dtype('int64'), 87)})

unpack=True doesn't change that.

unpack : bool, optional
    If True, the returned array is transposed, so that arguments may be
    unpacked using ``x, y, z = loadtxt(...)``

With a compound dtype (fields), the array is 1d:

In [99]: data.shape                                                                      
Out[99]: (5,)

and the transpose does nothing. That kind of unpacking only works if the dtype is simple and the result is a 2d array (e.g. (5,11), which transposes to (11,5) and unpacks into 11 variables). The unpack documentation could be clearer about that distinction.
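For contrast, a minimal sketch (with made-up numeric data, not your file) of the simple-dtype case where unpack=True behaves as you expected:

import numpy as np
from io import StringIO

# three uniform float columns -> an ordinary (3, 3) array, not a structured one
txt = StringIO("1 2 3\n4 5 6\n7 8 9")

# with a simple dtype the 2d result is transposed, so each variable gets one column
a, b, c = np.genfromtxt(txt, dtype=float, unpack=True)
# a -> array([1., 4., 7.]), b -> array([2., 5., 8.]), c -> array([3., 6., 9.])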

You can unpack fields with individual assignment:

In [100]: data['f0']                                                                     
Out[100]: array([b'703165', b'703210', b'703210', b'703260', b'703263'], dtype='|S6')
In [101]: data['f2']                                                                     
Out[101]: 
array([b' SAND POINT                   ',
       b' DILLINGHAM AIRPORT           ',
       b' DILLINGHAM MUNI              ',
       b' KING SALMON AIRPORT          ',
       b' KING SALMON                  '], dtype='|S30')

In [102]: data.dtype.names                                                               
Out[102]: ('f0', 'f1', 'f2', 'f3', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10')
In [103]: foo,bar,baz = [data[name] for name in data.dtype.names[:3]]                    
In [104]: bar                                                                            
Out[104]: array([b' 99999', b' 25513', b' 99999', b' 25503', b' 99999'], dtype='|S6')
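So for your original call, a minimal sketch (keeping your dtype, comments, delimiter and skip_header arguments) that puts each column into its own 1d array is:

import numpy as np

data = np.genfromtxt('./documentation/isd-history.txt',
    dtype=('S6', 'S6', 'S30', 'S3', 'S5', 'S5', float, float, float, int, int),
    comments='None', delimiter=[6, 6, 30, 3, 5, 5, 9, 9, 8, 9, 9],
    skip_header=22)

# one 1d array per field (f0 ... f10) of the structured array
usaf, wban, name, ctry, st, call, lat3, lon3, elv, begin, end = \
    (data[f] for f in data.dtype.names)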
hpaulj
  • Thanks! This fixed the problem. Very helpful information re: unpack. What is the purpose of .splitlines? My code works fine with my entire dataset w/o splitlines but I get the following error when I include splitlines: next(fhd) StopIteration – twhawk May 31 '20 at 13:32
My `txt` is a copy-n-paste of your sample. The splitlines turns it into a list of strings, which is used just like a file. – hpaulj May 31 '20 at 15:03