I have a program where I get one image as input and I have to compare it with ~640 known images to see which one is the most similar. To do this I was thinking of using OpenCV's matchTemplate, as it seems fast and effective for this kind of task.
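For reference, this is the kind of brute-force loop I have in mind (just a rough sketch; the file paths and the TM_CCOEFF_NORMED method are placeholders, not my actual setup):

```python
import cv2
import glob

# Sketch of the brute-force approach: compare the (already cropped) 400x240
# input against each reference portrait one at a time.
# "query.png" and "portraits/*.png" are placeholder paths.
query = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)

best_score, best_path = -1.0, None
for path in sorted(glob.glob("portraits/*.png")):
    ref = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Both images are 400x240, so matchTemplate returns a single 1x1 score.
    score = cv2.matchTemplate(query, ref, cv2.TM_CCOEFF_NORMED)[0, 0]
    if score > best_score:
        best_score, best_path = score, path

print(best_path, best_score)
```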
I noticed that matching two images, both 400x240 px, 1000 times is much slower than matching a 400x240 px template in a single 1400x240 px image, even though both amount to roughly 1000 match positions. My idea was therefore to combine the 640 images into one big image that contains them in a grid (easy to do, since they all have the same size).
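Building that grid is straightforward, something like this (assuming the 640 portraits are already loaded into a list; the 20x32 layout is arbitrary):

```python
import cv2

# Sketch: tile 640 portraits (each 240 rows x 400 cols) into one big image.
# The 20x32 layout is arbitrary, as long as rows * cols == 640.
# `portraits` is assumed to be a list of 640 same-sized grayscale arrays.
rows, cols = 20, 32
grid_rows = [cv2.hconcat(portraits[r * cols:(r + 1) * cols]) for r in range(rows)]
big_reference = cv2.vconcat(grid_rows)  # shape: (20 * 240, 32 * 400)
```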
Doing this, I could really speed up the process if I could restrict matchTemplate to only some positions in the big combined reference: the ones whose top-left corner lies on a "grid pixel", i.e. where the candidate subimage is exactly one of the 640 images I combined into the big reference. However, there doesn't seem to be a way to tell matchTemplate to evaluate only a specified set of positions.
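Just to make the position constraint concrete, these are the anchors I mean (a sketch only; done this way it degenerates back into one comparison per portrait, which is exactly what I'm trying to avoid):

```python
import cv2

# Sketch of what I wish matchTemplate could do internally: only score the
# positions whose top-left corner is a grid anchor, i.e. the 640 tiles.
# `big_reference` is the tiled image from the snippet above,
# `query` is the 400x240 input.
tile_h, tile_w = 240, 400
scores = {}
for r in range(20):
    for c in range(32):
        tile = big_reference[r * tile_h:(r + 1) * tile_h,
                             c * tile_w:(c + 1) * tile_w]
        scores[(r, c)] = cv2.matchTemplate(query, tile, cv2.TM_CCOEFF_NORMED)[0, 0]

best_anchor = max(scores, key=scores.get)
```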
How could I go about speeding up this process? Is there a different library (I'm working in Python) that does something similar to matchTemplate but lets me specify which subimages to match against? Or is there an entirely different approach that better suits my goal?
EDIT: Basically, what I have to do is this: I'm taking a screenshot from a videogame, and in the screenshot there's a clean portrait of some character (400x240 px). I have clean portraits of all the characters (80 of them, with 8 skins each, for a total of 640 portraits) and I want to find which portrait is closest to the one in the screenshot, so that I can identify the character being played. It would also be great if I could "mask" some pixels, like matchTemplate from OpenCV allows, since some specific parts of the 400x240 px rectangle change from player to player and I'd rather not have to account for them; I'd like to just mask out those (known) pixels.
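To show what I mean by masking, this is the matchTemplate feature I'm referring to (the file names and the masked-out rectangle are just invented examples; TM_CCORR_NORMED is one of the methods documented to accept a mask):

```python
import cv2
import numpy as np

# Sketch of the masking I mean: the optional `mask` argument of matchTemplate.
# The mask must have the same size as the template; zero pixels are ignored.
screenshot = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)
portrait = cv2.imread("portraits/char_01_skin_01.png", cv2.IMREAD_GRAYSCALE)

mask = np.full(portrait.shape, 255, dtype=np.uint8)
mask[200:240, 0:120] = 0  # invented example: ignore a region that varies per player

# TM_CCORR_NORMED is one of the methods that supports a mask.
result = cv2.matchTemplate(screenshot, portrait, cv2.TM_CCORR_NORMED, mask=mask)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
```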