Some iOS devices have cameras capable of capturing 720p video, while others can capture 1080p.
Holding the display size fixed, the 1080p capture will obviously give a better picture, since we are fitting more pixels into the same screen area.
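For concreteness, I am assuming the capture resolution is chosen up front via the session preset, roughly like the sketch below (the preset fallback logic is just an assumption for this question, not part of my actual setup):

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    // Prefer 1080p where the hardware supports it, otherwise fall back to 720p.
    if ([session canSetSessionPreset:AVCaptureSessionPreset1920x1080]) {
        session.sessionPreset = AVCaptureSessionPreset1920x1080;
    } else if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
        session.sessionPreset = AVCaptureSessionPreset1280x720;
    }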
But if we wanted to manipulate pixels using:
-(void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
and, for the sake of argument, we will not be rendering the frames anywhere but only running calculations on them.
Obviously, the buffer width and height will be larger for 1080p. But does the 1080p camera capture more pixels because of a wider field of view, in which case there is no real gain in detail? Or does it work within the same field of view as the 720p camera and simply capture more pixels per unit of that view, so that even if I never output the buffer to an image, I should expect more grain/detail in my frame buffer?
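To make the question concrete, here is a rough sketch of the kind of per-frame calculation I mean; the 32BGRA pixel format and the blue-channel averaging are just assumptions for illustration:

    #import <AVFoundation/AVFoundation.h>
    #import <CoreMedia/CoreMedia.h>
    #import <CoreVideo/CoreVideo.h>

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        if (pixelBuffer == NULL) {
            return;
        }

        // Lock the base address before reading the raw pixel data.
        CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

        size_t width       = CVPixelBufferGetWidth(pixelBuffer);   // 1280 for 720p, 1920 for 1080p
        size_t height      = CVPixelBufferGetHeight(pixelBuffer);  //  720 for 720p, 1080 for 1080p
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
        uint8_t *base      = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);

        // Example calculation only (assumes kCVPixelFormatType_32BGRA):
        // average the blue channel without ever rendering the frame.
        uint64_t blueSum = 0;
        for (size_t y = 0; y < height; y++) {
            uint8_t *row = base + y * bytesPerRow;
            for (size_t x = 0; x < width; x++) {
                blueSum += row[x * 4];
            }
        }
        double averageBlue = (double)blueSum / (double)(width * height);
        NSLog(@"%zux%zu buffer, average blue = %f", width, height, averageBlue);

        CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    }

The calculation itself is a placeholder; the point is whether the extra pixels in the 1080p buffer actually carry more detail of the same scene.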
Thanks