I have a dataset of grayscale drone-view images of size 4000x6000. Each pixel value corresponds to a class (I have 20 classes in total), so a pixel value of 3 would mean "tree", for example. From the original image I can very easily create binary masks for all 20 classes with equality operators in NumPy, and I get pixel-perfect masks.
Here's an example of what one row would look like:
[[2, 2, 2, 2, ...... , 5, 5, 5]]
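For reference, this is roughly how I extract the masks (the file path is just a placeholder, and 3 is my "tree" class id):

```python
import numpy as np
from PIL import Image

# Load the 4000x6000 grayscale label map; each pixel holds a class id in 0-19.
label_map = np.array(Image.open("label_map.png"))  # placeholder path

# A simple equality test gives a pixel-perfect binary mask for one class,
# e.g. class id 3 ("tree").
tree_mask = (label_map == 3).astype(np.uint8)
```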
However, 4000x6000 is much too big for my purposes, and I want to resize these segmentation targets to something more manageable, such as 400x400 or 400x600. I've tried a few different Python libraries, but all of them interpolate and convert my pixel values to different float values, so I lose my segmentation map labels. Is there any method (other than cropping) by which I can resize my segmentation target maps AND the original RGB input images without losing my labels?
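To illustrate the problem, here is a minimal reproduction with scikit-image (a toy random array stands in for my real label map; the other libraries I tried behaved similarly):

```python
import numpy as np
from skimage.transform import resize

# Toy stand-in for the real 4000x6000 label map (class ids 0-19).
label_map = np.random.randint(0, 20, size=(4000, 6000), dtype=np.uint8)

# The default resize interpolates between neighbouring pixels and rescales
# the values to floats, so the integer class ids are destroyed.
small = resize(label_map, (400, 600))
print(small.dtype)            # float64
print(np.unique(small)[:5])   # blended float values, no longer valid class ids
```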