
My app takes a snapshot of a view (a ZStack: an image with opacity 0.4, a white rectangle with opacity 0.25, then text), saves it as an image, and then lets the user generate a video from that image plus some audio. I followed these tutorials:

- https://img.ly/blog/how-to-make-videos-from-still-images-with-avfoundation-and-swift/
- http://twocentstudios.com/2017/02/20/creating-a-movie-with-an-image-and-audio-on-ios/
- https://www.hackingwithswift.com/quick-start/swiftui/how-to-convert-a-swiftui-view-to-an-image
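
For reference, the snapshotted view looks roughly like this (an illustrative sketch; the view name, asset name, text, and size are assumptions, not the app's actual code):

import SwiftUI

// Illustrative sketch of the snapshotted view: an image at 0.4 opacity,
// a translucent white rectangle, and text layered in a ZStack.
struct SnapshotCard: View {
    var body: some View {
        ZStack {
            Image("background")              // assumed asset name
                .resizable()
                .opacity(0.4)
            Rectangle()
                .fill(Color.white)
                .opacity(0.25)
            Text("Hello")                    // assumed text
        }
        .frame(width: 390, height: 390)      // assumed fixed size
    }
}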

The video is generated successfully with audio and image; however, the output video is always darker than the snapshot image it was made from.

Saved image and video from the Photos app:

[Image: the saved snapshot, which looks exactly as it appears on device]

[Video: the output video, noticeably darker than the image it was made from]

Here are the relevant functions.

snapshot

extension View {
    /// Renders this SwiftUI view into a UIImage via UIHostingController.
    func snapshot() -> UIImage {
        let controller = UIHostingController(rootView: self.ignoresSafeArea(.all))
        let view = controller.view

        // Size the hosting view to the content's intrinsic size.
        let targetSize = controller.view.intrinsicContentSize
        view?.bounds = CGRect(origin: .zero, size: targetSize)
        view?.backgroundColor = .clear // snapshot is rendered over a clear background

        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            view?.drawHierarchy(in: controller.view.bounds, afterScreenUpdates: true)
        }
    }
}
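
For example (using the illustrative SnapshotCard from above):

// Hypothetical call site: render the view to a UIImage,
// then feed that image to the video writer below.
let image = SnapshotCard().snapshot()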

Create video using image

func writeSingleImageToMovie(image: UIImage, movieLength: TimeInterval, outputFileURL: URL, completion: @escaping (Error?) -> ()) {
    print("writeSingleImageToMovie is called")

    do {
        let imageSize = image.size

        // Configure an H.264 writer sized to the image.
        let videoWriter = try AVAssetWriter(outputURL: outputFileURL, fileType: AVFileType.mp4)
        let videoSettings: [String: Any] = [AVVideoCodecKey: AVVideoCodecType.h264,
                                            AVVideoWidthKey: imageSize.width,
                                            AVVideoHeightKey: imageSize.height]

        let videoWriterInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoSettings)
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput, sourcePixelBufferAttributes: nil)

        if !videoWriter.canAdd(videoWriterInput) { throw NSError() }
        videoWriterInput.expectsMediaDataInRealTime = true
        videoWriter.add(videoWriterInput)

        videoWriter.startWriting()
        let timeScale: Int32 = 4 // 600 is recommended in CMTime for movies.
        let startFrameTime = CMTimeMake(value: 0, timescale: timeScale)
        let endFrameTime = CMTimeMakeWithSeconds(movieLength, preferredTimescale: timeScale)
        videoWriter.startSession(atSourceTime: startFrameTime)

        guard let cgImage = image.cgImage else { throw NSError() }

        // Convert the snapshot into a pixel buffer (see the extension below).
        let buffer: CVPixelBuffer = try CGImage.pixelBuffer(fromImage: cgImage, size: imageSize)

        // Append the same frame at the start and end; the writer holds the
        // first frame for the whole duration.
        while !adaptor.assetWriterInput.isReadyForMoreMediaData { usleep(10) }
        adaptor.append(buffer, withPresentationTime: startFrameTime)
        while !adaptor.assetWriterInput.isReadyForMoreMediaData { usleep(10) }
        adaptor.append(buffer, withPresentationTime: endFrameTime)

        videoWriterInput.markAsFinished()
        videoWriter.finishWriting {
            completion(videoWriter.error)
        }
    } catch {
        print("CATCH Error in writeSingleImageToMovie")
        completion(error)
    }
}
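
A call might look like this (the output URL and duration are illustrative):

// Hypothetical usage: write a 10-second movie to a temporary file.
let outputURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("output.mp4")
try? FileManager.default.removeItem(at: outputURL) // AVAssetWriter fails if the file already exists
writeSingleImageToMovie(image: image, movieLength: 10, outputFileURL: outputURL) { error in
    print(error?.localizedDescription ?? "movie written to \(outputURL.path)")
}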

Here is the function that creates the CVPixelBuffer. I tried creating the buffer using CIImage as well, but got the same result:

extension CGImage {

    /// Creates a 32ARGB CVPixelBuffer and draws the image into it via CGContext.
    static func pixelBuffer(fromImage image: CGImage, size: CGSize) throws -> CVPixelBuffer {
        print("pixelBuffer from CGImage")
        let options: CFDictionary = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                                     kCVPixelBufferCGBitmapContextCompatibilityKey as String: true] as CFDictionary
        var pxbuffer: CVPixelBuffer? = nil
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height),
                                         kCVPixelFormatType_32ARGB, options, &pxbuffer)
        guard let buffer = pxbuffer, status == kCVReturnSuccess else { throw NSError() }

        CVPixelBufferLockBaseAddress(buffer, [])
        guard let pxdata = CVPixelBufferGetBaseAddress(buffer) else { throw NSError() }
        let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)

        // Draw the image into the buffer's memory via a CGContext.
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        guard let context = CGContext(data: pxdata,
                                      width: Int(size.width),
                                      height: Int(size.height),
                                      bitsPerComponent: 8,
                                      bytesPerRow: bytesPerRow,
                                      space: rgbColorSpace,
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
            print("error in CGContext")
            throw NSError()
        }
        context.concatenate(CGAffineTransform(rotationAngle: 0)) // identity; no rotation
        context.draw(image, in: CGRect(x: 0, y: 0, width: size.width, height: size.height))

        CVPixelBufferUnlockBaseAddress(buffer, [])

        return buffer
    }
}
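
For comparison, a CIImage-based variant (the route mentioned above that gave the same result) might look like this sketch; it is not the app's actual code, but CIContext renders into the same kind of buffer, so semi-transparent pixels would still be composited over the buffer's initial contents:

import CoreImage

extension CGImage {
    // Sketch of the CIImage route. CIContext.render(_:to:) draws directly
    // into the pixel buffer, so transparency in the source is still blended
    // against whatever the buffer already contains, which would match the
    // "same result" observation.
    static func pixelBufferViaCIImage(fromImage image: CGImage, size: CGSize) throws -> CVPixelBuffer {
        var pxbuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height),
                                         kCVPixelFormatType_32ARGB, nil, &pxbuffer)
        guard let buffer = pxbuffer, status == kCVReturnSuccess else { throw NSError() }
        CIContext().render(CIImage(cgImage: image), to: buffer)
        return buffer
    }
}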

I'm stuck on this problem and can't seem to find a solution; any hint would be appreciated.

I created a mini showcase app: https://github.com/digitallegend/view2video

user2814778
  • It will be ***much easier*** to help you diagnose this if you can create a [mre] (post it somewhere like GitHub) so we can run and test it. – DonMag Jan 29 '23 at 15:34
  • Tried to reproduce your issue with a simple showcase app: https://github.com/kasimok/75029229 - the image and video look just fine. – kakaiikaka Jan 30 '23 at 12:34
  • Please look again: the image is brighter than the video, even though the video uses this image. @kakaiikaka – user2814778 Jan 31 '23 at 18:35
  • You mean in my showcase app? – kakaiikaka Jan 31 '23 at 23:55
  • Thanks, I tried your code and it looks fine. Have you made any edits to the CVPixelBuffer? In my case the image is a snapshot of the view (text and an image with .opacity(0.4)), saved as a UIImage, then converted to a CGImage to create the CVPixelBuffer. – user2814778 Feb 01 '23 at 07:20
  • @DonMag Thanks for being interested. I added more details about the view; it may help in figuring out the issue. – user2814778 Feb 01 '23 at 07:36
  • @user2814778 - you need to post enough code for a [mre] so we can run it directly to see exactly what you're seeing. Based on your comment to **kakaiikaka**, though: are you setting `opacity` to `0.4` on both the image and text? If so, that would explain the result you're getting. – DonMag Feb 01 '23 at 12:25
  • Please check this: https://github.com/digitallegend/view2video @DonMag – user2814778 Jul 12 '23 at 16:21

1 Answer


It appears you are generating the image over a black (or nil) background...

In your ShareVideoSheet file, give the ZStack a white background:

var imageTextBackground: some View {
   
    ZStack {

        // everything as you have it...

    }
    .background(Color.white)

}

That should fix the "dark video output" issue.

Alternatively, in your:

extension View {
    func snapshot() -> UIImage {

change:

    view?.backgroundColor = .clear

to:

    view?.backgroundColor = .white
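
Or, if you want to keep the snapshot transparent, you could flatten it when building the pixel buffer instead: fill the CGContext with opaque white before `context.draw`, so semi-transparent pixels composite over white (as they do on screen) rather than over the buffer's uninitialized contents (typically black). A sketch, placed just before the `context.draw(...)` call in `pixelBuffer(fromImage:size:)`:

// Assumed addition, not part of the original answer: pre-fill the
// context with opaque white so transparency blends as it does on screen.
context.setFillColor(UIColor.white.cgColor)
context.fill(CGRect(origin: .zero, size: size))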
DonMag