
I'm getting different pixel values when decoding the same video with OpenCV in Python and with AVFoundation in Swift on iOS.

In Python, I read the video with vidcap.read() and the first pixel is BGR (79, 82, 90).

In Swift, I read the video through an AVURLAsset/AVAssetReader and the first pixel is BGR (80, 84, 92).

The values are very close, so maybe some rounding is happening somewhere? How can I deal with it? I really appreciate your help.

Here is some code to reproduce:

Python:

import cv2

vidcap = cv2.VideoCapture('test_video.mp4')
success, image = vidcap.read()
assert success, 'could not read the first frame'

print(image[0, 0])  # first pixel in BGR order: [79 82 90]
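A likely source of the discrepancy is the YUV-to-RGB conversion step rather than the decode itself: OpenCV's FFmpeg backend and AVFoundation may apply different color matrices (BT.601 vs BT.709) or different rounding. A minimal sketch, assuming video-range input and the standard BT.601/BT.709 coefficients (the sample YUV triple below is made up for illustration, not taken from the video):

```python
import numpy as np

def yuv_to_bgr(y, u, v, matrix='bt601'):
    """Convert one video-range (Y 16-235, U/V 16-240) YUV pixel to 8-bit BGR.

    Coefficients are the textbook BT.601/BT.709 ones; real decoders may
    additionally differ in rounding and clipping behavior.
    """
    yf = (y - 16) * 255.0 / 219.0
    uf = (u - 128) * 255.0 / 224.0
    vf = (v - 128) * 255.0 / 224.0
    if matrix == 'bt601':
        r = yf + 1.402 * vf
        g = yf - 0.344136 * uf - 0.714136 * vf
        b = yf + 1.772 * uf
    else:  # bt709
        r = yf + 1.5748 * vf
        g = yf - 0.187324 * uf - 0.468124 * vf
        b = yf + 1.8556 * uf
    # Round and clip to 8 bits, returning in BGR order like OpenCV
    return [int(round(max(0.0, min(255.0, c)))) for c in (b, g, r)]

# The same decoded YUV sample maps to slightly different 8-bit BGR
# values depending on which matrix the converter applies:
print(yuv_to_bgr(81, 125, 130, 'bt601'))
print(yuv_to_bgr(81, 125, 130, 'bt709'))
```

Off-by-one or off-by-two differences per channel, like the ones in the question, are exactly what this kind of matrix/rounding mismatch produces.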

Swift:

DispatchQueue.global().async {
    while reader?.status == .reading {
        // Call copyNextSampleBuffer() only once per iteration;
        // calling it twice per loop would silently drop every other frame.
        if let sampleBuffer = rout.copyNextSampleBuffer() {
            // Buffer of the frame to perform processing on
            let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        }
        usleep(10_000)
    }
}

Extracting the first pixel from the first pixelBuffer gives BGR (80, 84, 92).
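If bit-exact agreement across the two decode paths isn't achievable, one practical approach is to compare pixels within a small tolerance instead of exactly. A sketch, assuming both frames are available as NumPy arrays (the helper name and tolerance are illustrative):

```python
import numpy as np

def frames_match(a, b, tol=4):
    """Return True if two uint8 frames agree within `tol` per channel.

    Differences of a few code values per channel between decoders are
    normal; pick `tol` based on what downstream processing can absorb.
    """
    a = np.asarray(a, dtype=np.int16)  # widen to avoid uint8 wraparound
    b = np.asarray(b, dtype=np.int16)
    return a.shape == b.shape and np.max(np.abs(a - b)) <= tol

# The two first-pixel readings from the question differ by at most 2:
python_px = np.array([79, 82, 90])  # OpenCV / FFmpeg
swift_px = np.array([80, 84, 92])   # AVFoundation
print(frames_match(python_px, swift_px))
```

Widening to int16 before subtracting matters: subtracting uint8 arrays directly would wrap around and report huge spurious differences.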

    a [mre] must include the data required to run it. -- video compression is usually lossy. different decoders may produce different outputs. additionally, color space transforms may happen, or not. – Christoph Rackwitz Jun 08 '23 at 10:06
