I am working on some HD imaging software, and although very large textures are not planned for this version, I wanted to do some future-proofing.
If I were dealing with very large textures (e.g. 1095630 x 939495), could I use the standard combination of compressed textures and mipmapping, or are these textures going to be far too large to store in texture memory?
My assumption is that they would be too big (at 4 bytes per pixel, an image that size is roughly 4 TB uncompressed) and that I would have to do a pseudo-manual mipmap on the CPU, i.e. grab the very large data, create a sensibly compressed version for the full zoom-out, and then, as the user zooms in, send sub-sections of the large texture to the GPU.
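Something like this is what I had in mind for streaming the sub-sections into a GPU-sized texture (a rough sketch only, assuming desktop OpenGL and 8-bit RGBA source data; `viewW`/`viewH`, `destX`/`destY`, `tileW`/`tileH` and `pixelData` are placeholders for my own variables):

```c
/* Allocate one texture sized to what the GPU can hold, then stream
   sub-regions of the huge source image into it as the view zooms in. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Allocate storage once; viewW x viewH is chosen to fit the GPU limit. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, viewW, viewH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* Later, for each visible chunk of the big image: */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                destX, destY,      /* where this chunk lands in the texture */
                tileW, tileH,      /* chunk dimensions */
                GL_RGBA, GL_UNSIGNED_BYTE,
                pixelData);        /* CPU-side pixels for this chunk */
```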
Doing that compression on the CPU would be awfully slow, so my plan is to tile the data and send it in chunks to the GPU for compression. In which case, how do I find out, preferably dynamically, the maximum texture size the GPU can handle?
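As far as I can tell, plain OpenGL exposes the limits via glGetIntegerv, and proxy textures give a more precise "will this particular size and format fit" check (sketch, assuming a current GL context):

```c
GLint maxTex2D = 0, maxTex3D = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTex2D);     /* max width/height of 2D textures */
glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &maxTex3D);  /* max dimension of 3D textures    */

/* A proxy texture asks the driver whether a specific size/format would succeed
   without actually allocating it; the queried width comes back 0 on failure. */
glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, 16384, 16384, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
GLint probedWidth = 0;
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &probedWidth);
if (probedWidth == 0) {
    /* 16384 x 16384 with this internal format will not fit; use smaller tiles */
}
```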
I am new to the TIFF format, but from the looks of it the data is already stored as tiles; is this correct? I hope to play a bit with libtiff, but I haven't found many examples of its use (my Google skills fail me today, apologies).
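From what I can gather from the libtiff headers, reading one tile out of a tiled TIFF would look roughly like this (an untested sketch; "huge.tif" and the tile coordinates are placeholders):

```c
#include <stdint.h>
#include <tiffio.h>

TIFF *tif = TIFFOpen("huge.tif", "r");          /* placeholder filename */
if (tif && TIFFIsTiled(tif)) {
    uint32_t imgW, imgH, tileW, tileH;
    TIFFGetField(tif, TIFFTAG_IMAGEWIDTH,  &imgW);
    TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &imgH);
    TIFFGetField(tif, TIFFTAG_TILEWIDTH,   &tileW);
    TIFFGetField(tif, TIFFTAG_TILELENGTH,  &tileH);

    /* One tile's worth of 32-bit RGBA pixels, decoded by libtiff. */
    uint32_t *raster = (uint32_t *)_TIFFmalloc(tileW * tileH * sizeof(uint32_t));

    /* Read only the tile whose upper-left corner is at (x, y). */
    uint32_t x = 0, y = 0;                       /* placeholder tile coordinates */
    if (TIFFReadRGBATile(tif, x, y, raster)) {
        /* raster now holds tileW x tileH pixels, ready to upload to the GPU */
    }

    _TIFFfree(raster);
    TIFFClose(tif);
}
```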
For existing examples, I am hoping to get some pointers from these two:
- BioView3D (open source ftw)
- BigTiffViewer
In summary:
- How do I find out the maximum texture size the GPU can handle (preferably the 3D texture size)?
- What's the best way to break up the large texture and compress it to something the GPU can handle (see the sketch after this list)?
- What's the best way to facilitate the zooming?
- Any pointers on using libtiff?
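On the second point, my current thinking is to let the driver compress each uploaded tile by requesting a compressed internal format, roughly like this (a sketch, assuming the EXT_texture_compression_s3tc extension is available; `tileW`, `tileH` and `tileData` are placeholders):

```c
/* Upload one CPU-side tile and ask the driver to compress it to S3TC/DXT1. */
glTexImage2D(GL_TEXTURE_2D, 0,
             GL_COMPRESSED_RGB_S3TC_DXT1_EXT,   /* driver compresses on upload */
             tileW, tileH, 0,
             GL_RGB, GL_UNSIGNED_BYTE, tileData);

/* Check whether the driver actually stored it compressed. */
GLint compressed = GL_FALSE;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED, &compressed);
```

Is that a sensible approach, or is there a better way to get the compression done on the GPU?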