
Well, actually I don't really understand how to use this OpenCL T-API; I'm still a newbie at it. In the documentation https://www.learnopencv.com/opencv-transparent-api/

cv2.UMat is applied to the image file, which is then read and processed. In my case I want to apply the OpenCL T-API to my recognizer.predict line, because the image is captured and processed while streaming from the camera:

import cv2
import pickle

# Haar cascade for face detection (path assumed from the OpenCV data folder)
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

recognizer = cv2.face.LBPHFaceRecognizer_create()
#colec = cv2.face.MinDistancePredictCollector()
recognizer.read("trainer_data_array.yml")

labels = {"persons_name": 0}
with open("labels.pickle", "rb") as f:
    og_labels = pickle.load(f)
    labels = {v: k for k, v in og_labels.items()}


cap = cv2.VideoCapture(0)

while True:
    # video capture
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)

    for (x, y, w, h) in faces:
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = frame[y:y+h, x:x+w]

        # recognize how?
        id_, conf = recognizer.predict(roi_gray)  # errors on OpenCV 3.1.0 (reported as a bug);
                                                  # fix: upgrade to OpenCV 3.3+, or use MinDistancePredictCollector(...)
        if 45 <= conf <= 85:
            print(id_)
            print(labels[id_])
            font = cv2.FONT_HERSHEY_SIMPLEX
            name = labels[id_]
            color = (255, 255, 255)
            stroke = 2
            cv2.putText(frame, name, (x, y), font, 1, color, stroke, cv2.LINE_AA)
        elif conf > 85:
            print("unknown")

Can someone help me with how to do this? If I just wrap it raw, like id_, conf = cv2.UMat(recognizer.predict(roi_gray)), it gives me the error: 'cv2.UMat' object is not iterable.

Without the T-API this program may still give a good frame rate, but after adding more modifications and processing it runs at a low frame rate when detecting/recognizing people's faces.

That is why I want to use OpenCL: running on the GPU might give me a pretty good frame rate.

  • Maybe consider threading or multiprocessing instead - the Odroid has 4/8 cores. You could just acquire continuously on core 1 and then process frame 1 on core 2, process frame 2 on core 3, frame 3 on core 4 and frame 4 back on core 2 again. – Mark Setchell Aug 20 '18 at 17:45
  • I also implemented multithreading for the frames to increase the FPS, and it did improve it, but it's still not at ... I don't know how to say it, the default FPS of a normal camera. I think it's because too many processes are running while the camera streams. That's why, if there is a way, I also want to implement the T-API on it. – Shinogami Aug 21 '18 at 06:47
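The threaded-acquisition idea from the comments can be sketched as a "latest frame" grabber: one background thread reads continuously so the processing loop never blocks on camera I/O and always sees the freshest frame. The class and names here are illustrative, not from any library; `source` can be a cv2.VideoCapture(0):

```python
import threading

class LatestFrameGrabber:
    """Reads from a capture source on a background thread, keeping only
    the most recent frame. `source` is anything with a
    read() -> (ok, frame) method, e.g. cv2.VideoCapture(0)."""

    def __init__(self, source):
        self.source = source
        self.frame = None
        self.lock = threading.Lock()
        self.running = True
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        # Grab frames as fast as the source delivers them,
        # overwriting any frame the consumer hasn't picked up yet
        while self.running:
            ok, frame = self.source.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def read(self):
        # Return the most recent frame (None until the first arrives)
        with self.lock:
            return self.frame

    def stop(self):
        self.running = False
        self.thread.join()
```

Dropping stale frames like this trades completeness for latency, which usually suits live detection better than processing every frame.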

0 Answers