
I am trying to read the gzipped grib2 files at this URL: https://mtarchive.geol.iastate.edu/2022/12/24/mrms/ncep/SeamlessHSR/

I want to read the grib file into an xarray Dataset. I know I could write a script to download the file to disk, decompress it, and read it in, but ideally I want to do this entirely in memory.

I feel like I should be able to do this with some combination of the urllib and gzip packages, but I can't quite figure it out.

I have the following code so far:

import urllib.request
import io
import gzip

URL = 'https://mtarchive.geol.iastate.edu/2022/12/24/mrms/ncep/SeamlessHSR/SeamlessHSR_00.00_20221224-000000.grib2.gz'

response = urllib.request.urlopen(URL)
compressed_file = io.BytesIO(response.read())
decompressed_file = gzip.GzipFile(fileobj=compressed_file)

But I can't figure out how to read decompressed_file into xarray.

Bonus points if you can figure out how to open_mfdataset on all of the URLs there at once.

hm8

1 Answer


One way that works for me is to write the decompressed data to a temporary file, which xarray can then open. As far as I know, the cfgrib engine relies on ecCodes, which expects a real file on disk, so a purely in-memory read is not straightforward.

import urllib.request
import gzip
import tempfile

import xarray as xr

URL = 'https://mtarchive.geol.iastate.edu/2022/12/24/mrms/ncep/SeamlessHSR/SeamlessHSR_00.00_20221224-000000.grib2.gz'


response = urllib.request.urlopen(URL)
compressed_file = response.read()

with tempfile.NamedTemporaryFile(suffix=".grib2") as f:
    f.write(gzip.decompress(compressed_file))
    # xarray can auto-detect the GRIB format here if the cfgrib engine is installed
    xx = xr.load_dataset(f.name)

display(xx)
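For the bonus question, one sketch (not tested against the full archive) is to extend the same temp-file trick to a list of filenames and hand the resulting paths to xr.open_mfdataset. The filenames below and the concat_dim="time" choice are assumptions; in practice you would scrape the directory listing to build the list, and the right combine kwargs depend on the coordinates cfgrib assigns to these files.

```python
import gzip
import tempfile
import urllib.request

BASE = "https://mtarchive.geol.iastate.edu/2022/12/24/mrms/ncep/SeamlessHSR/"


def gunzip_to_tempfile(gz_bytes):
    """Write decompressed bytes to a named .grib2 temp file and return its path."""
    # delete=False so the file outlives this function; remove it when done
    with tempfile.NamedTemporaryFile(suffix=".grib2", delete=False) as f:
        f.write(gzip.decompress(gz_bytes))
        return f.name


def open_all(names):
    """Download, decompress, and open several .grib2.gz files as one Dataset."""
    # imported lazily so gunzip_to_tempfile works without xarray installed
    import xarray as xr

    paths = [
        gunzip_to_tempfile(urllib.request.urlopen(BASE + name).read())
        for name in names
    ]
    # combine="nested" stacks the files along a new dimension; adjust to
    # whatever coordinates cfgrib actually produces for these files
    return xr.open_mfdataset(paths, engine="cfgrib",
                             combine="nested", concat_dim="time")


# hypothetical filenames, one per 2-minute MRMS timestep
names = [
    "SeamlessHSR_00.00_20221224-000000.grib2.gz",
    "SeamlessHSR_00.00_20221224-000200.grib2.gz",
]
# ds = open_all(names)
```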

(screenshot of the loaded Dataset omitted)

Val