A little background: I am trying to get a qualitative/quantitative judgement on whether a convolutional neural network can arrive at a useful solution (if any) for a set of synthetic images containing 3 classes.
Now I am trying to run t-SNE on a folder containing 3195 RGB images of resolution 256x256.
My first question: am I converting my image folder into an appropriate format for use with t-SNE? The Python code can be seen here: https://i.stack.imgur.com/79gNy.png.
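To make the question self-contained (the actual code is only in the screenshot), here is a minimal sketch of the kind of conversion I mean; the image count and contents are stand-ins, but the target shape is what t-SNE expects:

```python
import numpy as np

# Stand-in for my 3195 loaded RGB images of 256x256 (the real code reads them
# from disk); using a small random batch here just to show the shapes.
n = 8
imgs = np.random.randint(0, 256, size=(n, 256, 256, 3), dtype=np.uint8)

# t-SNE wants a 2-D (n_samples, n_features) float array, so each image is
# flattened to a 256*256*3 = 196608-dimensional row and scaled to [0, 1].
X = imgs.reshape(n, -1).astype(np.float32) / 255.0
```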
Secondly, I managed to get t-SNE to run, although I am not sure if I am using it correctly, which can be seen here: https://i.stack.imgur.com/ZtOlR.png. The source code is basically a slight modification of Alexander Fabisch's MNIST example in a Jupyter Notebook (apologies, I cannot post more than two links since reputation <10). So, I would like to ask: is there anything blatantly wrong with forcing a t-SNE setup built for the MNIST dataset onto a set of RGB images?
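For reference, the embedding step I adapted is roughly equivalent to this sketch (sample counts are stand-ins; the PCA step to ~50 components before t-SNE is something the scikit-learn docs suggest for dense high-dimensional data, and part of what I am unsure about for RGB input):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.RandomState(0)
# Stand-in for my flattened images: 60 samples of 256*256*3 = 196608 features.
X = rng.rand(60, 196608).astype(np.float32)

# 196608 raw features is heavy for t-SNE; reduce with PCA first.
X_50 = PCA(n_components=50, random_state=0).fit_transform(X)

# Embed into 2-D for plotting.
X_embedded = TSNE(n_components=2, random_state=0).fit_transform(X_50)
```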
Lastly, with the code in the second imgur link above, I ran into a difficulty in the following snippet:
imagebox = offsetbox.AnnotationBbox(
    offsetbox.OffsetImage(X[i].reshape(256, 256)), X_embedded[i])
The argument to offsetbox.OffsetImage is a 256x256 image (because that is my image resolution), which basically covers my entire screen and obscures the results, but I get an error when I try to change the reshape dimensions:
ValueError: total size of new array must be unchanged
So, how can I reduce the size of the images being plotted (or otherwise work around the issue)?
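For context, here is a minimal reproduction of the plotting loop with two changes I suspect are needed, though I have not confirmed they are the right approach: reshaping to (256, 256, 3) since RGB data has 3 channels (which would explain the "total size" error when reshaping to (256, 256)), and the `zoom` argument of `OffsetImage` as a guess at shrinking the thumbnails without resizing the pixel data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line in a notebook
import matplotlib.pyplot as plt
from matplotlib import offsetbox

# Stand-ins for my real data: flattened RGB images and their 2-D embedding.
X = np.random.rand(5, 256 * 256 * 3)
X_embedded = np.random.rand(5, 2)

fig, ax = plt.subplots()
ax.scatter(X_embedded[:, 0], X_embedded[:, 1])
for i in range(len(X)):
    # RGB needs the 3-channel shape; reshape(256, 256) only matches a
    # grayscale image, hence "total size of new array must be unchanged".
    img = X[i].reshape(256, 256, 3)
    # zoom < 1 shrinks the rendered thumbnail on the plot.
    imagebox = offsetbox.AnnotationBbox(
        offsetbox.OffsetImage(img, zoom=0.1), X_embedded[i])
    ax.add_artist(imagebox)
```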