
I'm new to this field. I'm using YOLOv8 to detect custom objects in real time, and the detection part works. Next, I want to use the YOLOv8 output as input for another step: I want to make a program that speaks the sign language alphabet out loud. The classes are 'A', 'B', 'C', ... through 'Z'. I don't know how to do this. Please help me with it.

What I'm hoping for is sample code that detects the sign language alphabet with YOLOv8 in real time and adds text-to-speech output for the detected letters.
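
From what I understand, the detected letters can be read out of a single results object roughly like this (a minimal sketch; the image path sample_frame.jpg is just a placeholder, and I'm assuming the class indices are in results[0].boxes.cls and the index-to-letter mapping is in model.names):

from ultralytics import YOLO

model = YOLO('models/best.pt')
results = model('sample_frame.jpg')  # placeholder test image

# Map each detected class index back to its letter name.
detected_letters = [model.names[int(cls)] for cls in results[0].boxes.cls]
print(detected_letters)  # e.g. ['A', 'B']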

import cv2
from ultralytics import YOLO

# Load the custom-trained sign language alphabet model.
model = YOLO('models/best.pt')

# Open the default webcam at 640x480.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

frame_rate = 30
delay = int(1000 / frame_rate)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:  # stop if the camera stops returning frames
        break

    # Run detection on the current frame.
    results = model(frame)

    # plot() returns the annotated frame as a BGR numpy array.
    cv2.imshow('Detection-Bisindo', results[0].plot())

    if cv2.waitKey(delay) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
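
Below is a rough sketch of what I'm aiming for. It is not tested: using pyttsx3 for offline text-to-speech, the confidence-based pick of one letter per frame, and the last_spoken check (so the same letter isn't repeated every frame) are all my assumptions, not something from an existing example.

import cv2
import pyttsx3
from ultralytics import YOLO

model = YOLO('models/best.pt')

# Offline text-to-speech engine (assumption: pyttsx3 installed via pip).
engine = pyttsx3.init()
engine.setProperty('rate', 150)

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

last_spoken = None  # remember the last letter so it isn't repeated every frame

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    results = model(frame)
    boxes = results[0].boxes

    if len(boxes) > 0:
        # Take the highest-confidence detection in this frame.
        best = boxes.conf.argmax()
        letter = model.names[int(boxes.cls[best])]

        # Speak only when the detected letter changes.
        if letter != last_spoken:
            engine.say(letter)
            engine.runAndWait()  # blocks briefly while the letter is spoken
            last_spoken = letter

    cv2.imshow('Detection-Bisindo', results[0].plot())

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

One thing I'm unsure about is that engine.runAndWait() blocks the loop while speaking, which pauses the video for a moment; is there a better way to handle that?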
