
I have a set of netCDF datasets that basically look like a CSV file with columns for latitude, longitude and value. These are points along tracks that I want to aggregate onto a regular grid of (say) 1 degree, spanning -90 to 90 and -180 to 180 degrees, for example by calculating the mean and/or standard deviation of all points that fall within a given cell.

This is quite easily done with a loop:

D = np.zeros((180, 360))
for ilat in np.arange(-90, 90, 1, dtype=int):
    for ilon in np.arange(-180, 180, 1, dtype=int):
        # Half-open intervals so boundary points are not counted twice
        p1 = np.logical_and(ds.lat >= ilat,
                            ds.lat < ilat + 1)
        p2 = np.logical_and(ds.lon >= ilon,
                            ds.lon < ilon + 1)
        if np.sum(p1 * p2) == 0:
            D[90 + ilat, 180 + ilon] = np.nan
        else:
            D[90 + ilat, 180 + ilon] = np.mean(ds.var.values[p1 * p2])
            # D[90 + ilat, 180 + ilon] = np.std(ds.var.values[p1 * p2])

Other than using numba/cython to speed this up, I was wondering whether this is something you can directly do with xarray in a more efficient way?

Jose

2 Answers


You should be able to solve this using pandas and xarray.

You will first need to convert your data set to a pandas data frame.

Once this is done, assuming df is the dataframe and the longitude and latitude columns are lon/lat, round the lon/lat values to the nearest integer and then calculate the mean for each lon/lat pair. The groupby leaves lat/lon as the index, so you can use pandas' to_xarray directly to convert the result to an xarray Dataset:

import xarray as xr
import pandas as pd
import numpy as np
df = df.assign(lon=lambda x: np.round(x.lon))
df = df.assign(lat=lambda x: np.round(x.lat))
# groupby already sets lat/lon as the index, so no set_index is needed
df = df.groupby(["lat", "lon"]).mean()
ds = df.to_xarray()
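As a quick self-contained check of this approach, here is a sketch with synthetic track data (the column names lat/lon/var are hypothetical, chosen to match the question); the result comes out as an xarray Dataset with lat and lon dimensions:

```python
import numpy as np
import pandas as pd

# Synthetic track data (hypothetical): scattered lat/lon points with a value
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "lat": rng.uniform(-90, 90, 1000),
    "lon": rng.uniform(-180, 180, 1000),
    "var": rng.normal(size=1000),
})

# Round to the nearest integer degree and average per cell
df = df.assign(lon=lambda x: np.round(x.lon),
               lat=lambda x: np.round(x.lat))
ds = df.groupby(["lat", "lon"]).mean().to_xarray()
print(ds["var"].dims)  # ('lat', 'lon')
```

Cells that contain no points come out as NaN in the resulting grid, which matches the behaviour of the loop in the question.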
Robert Wilson

I used @robert-wilson's answer as a starting point, and to_xarray is indeed part of my solution. Other inspiration came from here. The approach I used is shown below. It's probably slower than numba-ing my loop above, but much simpler.

import netCDF4
import numpy as np
import xarray as xr
import pandas as pd

fname = "super_funky_file.nc"

f = netCDF4.Dataset(fname)

lat = f.variables['lat'][:]
lon = f.variables['lon'][:]
vari = f.variables['super_duper_variable'][:]

df = pd.DataFrame({"lat":lat,
                   "lon":lon,
                   "vari":vari})

# Simple functions to calculate the grid location in rows/cols
# using lat/lon as inputs, for a global 0.5 deg grid.
# Remember to cast to integer.
to_col = lambda x: np.floor((x + 90) / 0.5).astype(int)
to_row = lambda x: np.floor((x + 180.) / 0.5).astype(int)

# Map the latitudes to columns
# Map the longitudes to rows
df['col'] = df.lat.map(to_col)
df['row'] = df.lon.map(to_row)

# Aggregate by row and col
gg = df.groupby(['col', 'row'])

# Now, create an xarray dataset with 
# the mean of vari per grid cell
ds = gg.mean().to_xarray()
dx = gg.std().to_xarray()
ds['stdi'] = dx['vari']
dx = gg.count().to_xarray()
ds['counti'] = dx['vari']
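The resulting Dataset is indexed by the integer col/row grid indices rather than physical coordinates. If you want lat/lon labels, one way (a sketch with synthetic data, using the same 0.5-degree binning as above) is to map each index back to its cell centre and rename the dimensions:

```python
import numpy as np
import pandas as pd

# Synthetic track data (hypothetical), binned the same way as above
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "lat": rng.uniform(-90, 90, 500),
    "lon": rng.uniform(-180, 180, 500),
    "vari": rng.normal(size=500),
})
df["col"] = np.floor((df.lat + 90) / 0.5).astype(int)
df["row"] = np.floor((df.lon + 180.) / 0.5).astype(int)

# Keep only the value and grid indices before aggregating, so the
# original lat/lon columns don't clash with the renamed dimensions
ds = df[["vari", "col", "row"]].groupby(["col", "row"]).mean().to_xarray()

# Replace the integer indices with cell-centre coordinates and rename
ds = ds.assign_coords(
    col=-90 + 0.5 * ds["col"] + 0.25,
    row=-180 + 0.5 * ds["row"] + 0.25,
).rename({"col": "lat", "row": "lon"})
```

Dropping the original lat/lon columns before the groupby is what makes the rename safe; otherwise their per-cell means would appear as data variables with the same names.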
Jose