
I'm currently working with hyperspectral data where each image is a .tiff file of about 900 MB. I have roughly 1000 of these images, and the end goal is to classify them with a CNN. But it seems the images are far too large for that to be feasible as-is.

Is there a general way to compress or reduce hyperspectral images to make them usable for ML? What's the standard practice here?

Thanks a lot!

taylorSeries
  • How many bands do you have? Have you thought about dimensionality-reduction techniques such as Principal Component Analysis (PCA)? In what floating-point precision is the data stored? If it is float64, you might get away with float32 and thereby halve the data size. Are you performing classification on whole images? Often you'll see patch-based approaches that take a certain window (e.g. 15x15xB with B = bands) to classify a single pixel. – Lennert Antson Feb 27 '23 at 16:32
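
To make the comment's suggestions concrete, here is a minimal Python sketch of the three ideas it mentions: casting to float32, reducing the spectral dimension with PCA, and extracting a 15x15 patch around a pixel for patch-based classification. It assumes the .tiff loads as a (bands, height, width) array via tifffile; the file name, number of components, and patch size are placeholders, not values from the question.

```python
# Minimal sketch, assuming the .tiff loads as (bands, H, W) via tifffile.
# File name, component count, and patch size are hypothetical.
import numpy as np
import tifffile
from sklearn.decomposition import PCA

cube = tifffile.imread("scene_0001.tiff")           # hypothetical file name
cube = np.moveaxis(cube, 0, -1).astype(np.float32)  # -> (H, W, B); float64 -> float32 halves size

H, W, B = cube.shape
n_components = 30                                   # e.g. keep 30 principal components

# Treat each pixel as a length-B spectrum and reduce the spectral dimension with PCA.
pixels = cube.reshape(-1, B)
pca = PCA(n_components=n_components)
cube_pca = pca.fit_transform(pixels).astype(np.float32).reshape(H, W, n_components)

def extract_patch(img, row, col, size=15):
    """Return a size x size x C window centred on (row, col), reflect-padded at edges."""
    half = size // 2
    padded = np.pad(img, ((half, half), (half, half), (0, 0)), mode="reflect")
    return padded[row:row + size, col:col + size, :]

# A single training sample for patch-based classification of the pixel at (100, 200).
patch = extract_patch(cube_pca, row=100, col=200)   # shape (15, 15, n_components)
```

With this kind of pipeline the CNN never sees a whole 900 MB scene at once, only small spectral-reduced patches, which is what makes the dataset size manageable.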

0 Answers