Short answer
You can't do this with a single large gzipped file, because the gzip format does not allow for random access.
Long answer
Usually with large files Dask will pull out blocks of data of a fixed size, like 128MB, and process them independently. However, some compression formats, like gzip, don't allow for easy chunked access like this. You can still use gzipped data with Dask if you have many small files, but each file will be treated as a single chunk. If those files are large, then you'll run into memory errors, as you have experienced.
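As a minimal sketch of the many-small-files case, assuming your data is split across several gzipped CSVs matched by a hypothetical pattern like `data-*.csv.gz`:

```python
import dask.dataframe as dd

# gzip can't be split, so blocksize must be None:
# each gzipped file becomes exactly one partition.
df = dd.read_csv('data-*.csv.gz', compression='gzip', blocksize=None)

print(df.head())
```

Each partition still has to be decompressed in full, so this only helps if every individual file fits comfortably in memory.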
You can use dask.bag, which is usually pretty good about streaming through results. You won't get the Pandas semantics, though, and you won't get any parallelism within a single file.
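A rough sketch of the dask.bag approach, assuming a hypothetical file named `large-file.csv.gz` where you parse each line yourself instead of relying on Pandas:

```python
import dask.bag as db

# read_text infers gzip compression from the extension and yields lines;
# the whole file still lands in a single partition.
lines = db.read_text('large-file.csv.gz')

# Parse lines manually since there are no Pandas semantics here.
records = lines.map(lambda line: line.rstrip('\n').split(','))

print(records.take(5))
```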