I have Python dictionaries that look like this:
a = {0.0: {0.0: 343, 0.5: 23, 1.0: 34, 1.5: 9454, ...}, 0.5: {0.0: 359, 0.5: -304, ...}, ...}
So they are nested dictionaries that represent 2D grids of m * n elements.
The keys (coordinates) are evenly spaced, but the dictionaries may have different "resolutions". For instance, in the example above the keys are separated by 0.5, while another dictionary has a separation of 1.0. Further, the range of the keys also varies, for example:
- one dictionary has an x range from -50.0 to 50.0 and a y range from -60.0 to 60.0, with a resolution of 0.5
- another dictionary has an x range from -100.0 to 100.0 and a y range from -100.0 to 100.0, with a resolution of 5.0
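For concreteness, a dictionary like the first one could be generated roughly like this (with dummy values; the variable names are just placeholders):

```python
import numpy as np

# x from -50.0 to 50.0, y from -60.0 to 60.0, resolution 0.5
xs = np.arange(-50.0, 50.0 + 0.5, 0.5)
ys = np.arange(-60.0, 60.0 + 0.5, 0.5)
a = {float(x): {float(y): 0.0 for y in ys} for x in xs}
```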
I need to create a function that takes a 2D point (e.g. (3.49, 20.31)) and returns the interpolated value from the grid.
How can I do that?
I guess it would help to first convert this structure to a NumPy array, but I don't know how to do that.
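This is the kind of conversion I have in mind, assuming every inner dictionary has the same keys (dict_to_grid is just a name I made up):

```python
import numpy as np

def dict_to_grid(d):
    """Turn a nested {x: {y: value}} dict into sorted x/y coordinate
    arrays and an (m, n) array of values."""
    xs = np.array(sorted(d))                        # sorted x coordinates
    ys = np.array(sorted(next(iter(d.values()))))   # sorted y coordinates
    values = np.array([[d[x][y] for y in ys] for x in xs])
    return xs, ys, values
```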
Edit:
- the input 2D points will always lie within the grid; they cannot be outside its range.
- I have no preference for the interpolation method; linear interpolation is fine.
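In case it helps, this is roughly the usage I'm imagining on top of that conversion, sketched with scipy.interpolate.RegularGridInterpolator (dict_to_grid is the hypothetical helper from above, and a is one of the dictionaries):

```python
from scipy.interpolate import RegularGridInterpolator

def make_interpolator(d):
    """Build a linear interpolator over the regular grid described by
    a nested {x: {y: value}} dict."""
    xs, ys, values = dict_to_grid(d)
    # Query points are guaranteed to be inside the grid, so the default
    # bounds_error=True is acceptable here.
    return RegularGridInterpolator((xs, ys), values, method="linear")

interp = make_interpolator(a)
print(interp((3.49, 20.31)))  # interpolated value at x=3.49, y=20.31
```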