I have an array of terrain data for the US, 14000 columns by 7000 rows, with points equally spaced 500 m apart. I also have the grid header, including the lower-left corner's longitude and latitude:
ncols = 14000
nrows = 7000
xllcorner = -130
yllcorner = 20
cellsize = 0.05
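For context, I build the terrain grid's coordinate axes from this header roughly as follows (a sketch assuming NumPy, and that the header gives the lower-left cell corner in degrees, as in an ESRI ASCII grid; add cellsize/2 if the values should be cell centres):

import numpy as np

# terrain grid axes from the header above, in degrees
ncols, nrows = 14000, 7000
xllcorner, yllcorner, cellsize = -130.0, 20.0, 0.05
terrain_lons = xllcorner + cellsize * np.arange(ncols)  # ascending west -> east
terrain_lats = yllcorner + cellsize * np.arange(nrows)  # ascending south -> north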
I also have a second dataset, radar data converted from polar to Cartesian coordinates, which is already in a projected coordinate system:
# radial (range/azimuth) radar data being converted to Cartesian x/y in metres
import numpy as np
from pyproj import Proj

# rangee: 1-D array of gate ranges (m); az: 1-D array of azimuths (deg)
x = rangee * np.sin(np.deg2rad(az))[:, None]
y = rangee * np.cos(np.deg2rad(az))[:, None]

# stereographic projection centred on the radar site
latitude = 35.9339
longitude = -80.0212
dataproj = Proj(f"+proj=stere +lat_0={latitude} +lat_ts={latitude} +lon_0={longitude} +ellps=WGS84 +units=m")

# invert the projection to get geographic coordinates for every radar pixel
lons, lats = dataproj(x, y, inverse=True)
Note that the terrain data spans the entire US, whereas the radar data covers only an area over North Carolina. So I have two separately gridded datasets, and I would like to match the terrain data to the radar data as closely as possible: whether through interpolation or some other method, there should be one terrain value for each [x, y] location of the radar grid.
How could one achieve this?
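For concreteness, this is the kind of approach I have been considering (a sketch, assuming SciPy's RegularGridInterpolator, the terrain_lats/terrain_lons axes from above, and a 2-D terrain array whose row 0 sits at yllcorner). Is something along these lines reasonable?

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# terrain: 2-D array of shape (nrows, ncols), row 0 at the southern edge
interp = RegularGridInterpolator(
    (terrain_lats, terrain_lons), terrain,
    bounds_error=False, fill_value=np.nan,  # NaN for points outside the terrain grid
)

# sample the terrain at every radar gate's (lat, lon)
pts = np.column_stack([lats.ravel(), lons.ravel()])
terrain_at_radar = interp(pts).reshape(lats.shape)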