I've been breaking my brain on the code below. I believe I found a rather clean way to tackle the issue: scan an area (or input picture) for one specific color (later maybe multiple colors). Every pixel that is not equal to the input color "targetColorPix" is cleaned, i.e. set to white.
However, what I've noticed in my output (see below) is that areas without the target color are still included.
Question: Could someone help me out? Does np.where actually check that a value is equal to the full RGB/BGR triple, or only to a part of it (each channel individually)? It seems odd that the Windows areas are still shown, since that color on my PC is black, not light blue (255, 255, 0).
[Output image with (in)correct areas highlighted][1]
import cv2
import numpy as np
import pyautogui
from PIL import Image

def function(image, color):
    targetColor = cv2.imread(color)  # color image: 3D ndarray of shape (height, width, 3), in BGR order
    targetColorPix = targetColor[1, 1]  # select one pixel of the input file as the target color
    b, g, r = targetColorPix
    targetColorPix = [r, g, b]  # reorder BGR -> RGB to match the screenshot
    im = pyautogui.screenshot()
    im.save('inputfile.png')  # before
    img_rgb = np.array(im)
    img_rgb2 = np.where(img_rgb == targetColorPix, img_rgb, 255)  # if equal to the selected color keep it, otherwise white
    rescaled = (255.0 / img_rgb2.max() * (img_rgb2 - img_rgb2.min())).astype(np.uint8)
    im = Image.fromarray(rescaled)
    im.save('cleaned.png')
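
To make clearer what I mean by "only a part of it": here is a minimal sketch on a tiny made-up 2x2 array (the values are invented just for this example), comparing a per-channel comparison with a per-pixel one. The np.all variant is only my guess at what I actually intended, not something from my real code.

import numpy as np

# Made-up 2x2 RGB image: one pixel is exactly the target color, one is black,
# two are other colors (values chosen only for this example).
img = np.array([[[  0, 255, 255], [  0,   0,   0]],
                [[255,   0,   0], [  0, 255,   0]]], dtype=np.uint8)

target = [0, 255, 255]  # the color I want to keep (RGB)

# What my code above does: the == comparison broadcasts over the last axis,
# giving a (2, 2, 3) boolean array -- one True/False per channel, not per pixel.
channel_mask = img == target
print(channel_mask.shape)                       # (2, 2, 3)
print(np.where(channel_mask, img, 255)[0, 1])   # black pixel -> [  0 255 255], not white

# What I think I actually intended: a per-pixel mask that is True only when
# all three channels match at once (this is my assumption, not my current code).
pixel_mask = np.all(img == target, axis=-1)
print(pixel_mask.shape)                              # (2, 2)
print(np.where(pixel_mask[..., None], img, 255)[0, 1])  # black pixel -> [255 255 255]

In this sketch the black pixel ends up as the target color under the per-channel comparison, which looks a lot like what I'm seeing in my output.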