VideoHelp Forum
  1. hoping someone can explain this in an easy to understand way.

    i was reading an announcement about blackmagic's new 17k camera, which led me to a review of the original blackmagic 12k camera where the reviewer said something odd.

    she said that the 12k camera was not a real 12k camera because it didn't have 12 thousand white pixels, 12k red, 12k green and 12k blue pixels; it only had about 6k white and 2k each of red, green and blue, so the effective resolution was more like 8k.

    this got me googling, which led me to a bunch of what look like professional forums with people that seem to have legit resumes with dozens of feature films under their belts, and the more i read the more confused i got.

    one of the examples i found was that the alexa 4k was not actually 4k, the effective resolution was more like 2.5k, which is why people didn't see any difference in quality between the newer 4k alexa and the original 2.5k models.

    what i have been able to gather is that effective resolution is somehow tied to the sensor size and the number of photosites in the camera. apparently there are also different pixel sizes in sensors, so a camera with fewer but larger pixels can actually produce better quality than one with more but smaller pixels, because each pixel can record more light.

    so if someone can give me a nice, easy to understand explanation of what effective resolution is, how it's mapped to the storage resolution of the digital file and how to calculate the effective resolution from the sensor size, that would be greatly appreciated.
  2. It has to do with the way the sensors are laid out on the silicon chip:

    https://en.wikipedia.org/wiki/Bayer_filter

    That 8x8 grid would be called "64 pixel resolution" by a camera manufacturer. But there's no way it can actually resolve a checkerboard pattern of 64 black and white (or other color) squares, or an alternating pattern of 8 vertical (or horizontal) black and white lines. This is nothing new. It's pretty much always been the case. The manufacturers have always exaggerated this.
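    You can see the checkerboard failure numerically. Below is a rough numpy sketch (my own toy example, not from any camera's actual pipeline): a black/white checkerboard at the sensor's pixel pitch is sampled through an RGGB Bayer mosaic and then "demosaiced" by the crudest possible method, averaging each 2x2 cell. The pixel-level detail is not just softened, it vanishes entirely into a false color.

    ```python
    import numpy as np

    # Toy sketch: an 8x8 black/white checkerboard seen through an RGGB Bayer
    # mosaic. The sensor would be sold as "64 pixels", but the checkerboard
    # detail at that pitch cannot be resolved at all.

    # 8x8 checkerboard: 1.0 = white square, 0.0 = black square
    checker = np.indices((8, 8)).sum(axis=0) % 2
    scene = np.stack([checker] * 3, axis=-1).astype(float)  # same in R, G, B

    # RGGB colour filter array: each photosite records only one channel
    cfa = np.zeros((8, 8), dtype=int)   # 0 = R, 1 = G, 2 = B
    cfa[0::2, 1::2] = 1                 # G
    cfa[1::2, 0::2] = 1                 # G
    cfa[1::2, 1::2] = 2                 # B
    raw = np.take_along_axis(scene, cfa[..., None], axis=-1)[..., 0]

    # Crudest demosaic: build one RGB pixel from each 2x2 cell of samples
    rgb = np.zeros((4, 4, 3))
    for y in range(4):
        for x in range(4):
            cell = raw[2*y:2*y+2, 2*x:2*x+2]
            rgb[y, x] = [cell[0, 0], (cell[0, 1] + cell[1, 0]) / 2, cell[1, 1]]

    # Every R site landed on black and every G site on white, so the whole
    # black/white checkerboard comes out as a solid field of pure green.
    print(rgb[0, 0])   # [0. 1. 0.]
    ```

    Real demosaicing is much smarter than this 2x2 averaging, but the underlying problem is the same: each photosite sees only one color, so per-channel detail at the photosite pitch is guessed, not measured.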

    There was one manufacturer, Foveon, that used stacked RGB sensors and could resolve more detail. But they don't seem to have gone anywhere.

    https://en.wikipedia.org/wiki/Foveon_X3_sensor
    https://en.wikipedia.org/wiki/Foveon
  3. Originally Posted by jagabo
    It has to do with the way the sensors are laid out on the silicon chip:

    https://en.wikipedia.org/wiki/Bayer_filter

    That 8x8 grid would be called "64 pixel resolution" by a camera manufacturer. But there's no way it can actually resolve a checkerboard pattern of 64 black and white (or other color) squares, or an alternating pattern of 8 vertical (or horizontal) black and white lines. This is nothing new. It's pretty much always been the case. The manufacturers have always exaggerated this.

    There was one manufacturer, Foveon, that used stacked RGB sensors and could resolve more detail. But they don't seem to have gone anywhere.

    https://en.wikipedia.org/wiki/Foveon_X3_sensor
    https://en.wikipedia.org/wiki/Foveon
    so to understand, how would that apply to the following hypothetical cameras:

    camera records raw, lacks in-camera debayer filter, claims it records 4k, this is not effective 4k, correct?

    camera records raw, has in-camera debayer filter, claims it records 4k, this is effective 4k, correct?

    camera records prores which does not need debayering, claims it records 4k, this is effective 4k, correct?

    there's a lot of confusing stuff out there, like this:

    https://gafpagear.com/download-kinefinity-mavo-edge-8k-log-footage/

    where they talk about how this new 12 thousand dollar camera does not shoot raw, only prores, and from what i can find it seems that all new cine cameras are moving to prores 4444xq instead of raw, and from what i can tell they end up with higher quality because raw needs to be debayered and is also compressed most of the time due to the huge file sizes.

    thanks for your help, if you can either explain a bit more or point me to some easy to understand sources, i would appreciate it.
  4. i'm guessing no one knows the exact answer to my question, judging by the lack of responses, except for the one of course, for which i am grateful.

    i will take my question to another forum where maybe i will have better luck.

    thanks anyway guys.
  5. Originally Posted by frank_footer
    so to understand, how would that apply to the following hypothetical cameras:

    camera records raw, lacks in-camera debayer filter, claims it records 4k, this is not effective 4k, correct?

    camera records raw, has in-camera debayer filter, claims it records 4k, this is effective 4k, correct?

    camera records prores which does not need debayering, claims it records 4k, this is effective 4k, correct?
    I think those will all have the same effective resolution. Adding ProRes compression will reduce quality a little. Even if the camera gives you raw Bayer data, it needs to be converted to normal RGB pixels to be seen/used. Doing that in software on a computer may give you more options -- for example, more or less sharpening.
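    The point that all three cameras land in the same place can be turned into a back-of-the-envelope estimate. A commonly quoted rule of thumb is that a Bayer sensor resolves only about 70-80% of its photosite count per axis after demosaicing and the optical low-pass filter; the exact factor below is an assumption for illustration, not a measured figure for any specific camera.

    ```python
    # Rule-of-thumb sketch (illustrative numbers, not manufacturer data):
    # a Bayer sensor is often said to resolve roughly 70-80% of its
    # photosite count per axis once demosaicing and the OLPF are accounted
    # for. The default factor here is an assumption, not a measurement.

    def effective_resolution(photosites_w, photosites_h, bayer_factor=0.75):
        """Estimate resolvable detail per axis from the photosite grid."""
        return round(photosites_w * bayer_factor), round(photosites_h * bayer_factor)

    # A hypothetical "4K" camera with a 4096x2160 photosite Bayer sensor:
    print(effective_resolution(4096, 2160))   # (3072, 1620) -- nearer "3K"
    ```

    The estimate depends only on the photosite grid, which is why recording raw, debayering in-camera, or encoding to ProRes all inherit the same ceiling; the codec choice changes compression artifacts, not the sensor's resolving power.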

    Originally Posted by frank_footer
    there's a lot of confusing stuff out there, like this:

    https://gafpagear.com/download-kinefinity-mavo-edge-8k-log-footage/

    where they talk about how this new 12 thousand dollar camera does not shoot raw, only prores, and from what i can find it seems that all new cine cameras are moving to prores 4444xq instead of raw, and from what i can tell they end up with higher quality because raw needs to be debayered and is also compressed most of the time due to the huge file sizes.
    I'm not really familiar with RGBW, but I believe it collects R, G, B and W at each pixel (instead of sharing samples across pixels, which results in blurring), and hence gives better resolution.
  6. thanks for your replies jagabo, i think i have found an answer.

    i found an article where the author said that the limiting factor in so far as resolution is concerned is the number of photosites on the sensor and the size of the sites controls how much light can be processed; the larger the sites, the better the image quality.

    using the alexa mini as an example, it has 3424 x 2202 sites, so this is the maximum effective resolution that the camera is capable of:

    https://www.arri.com/en/camera-systems/cameras/legacy-camera-systems/alexa-mini

    i guess when recording to a 3840 x 2160 resolution, it interpolates the missing pixels.
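    The mismatch between those two grids is easy to check with quick arithmetic (my calculation, using the photosite figures above): the two axes don't even scale by the same factor, so a 3840x2160 file from this sensor is partly upscaled and partly downscaled.

    ```python
    # Quick arithmetic on the Alexa Mini figures quoted above: mapping the
    # 3424x2202 photosite grid onto a 3840x2160 UHD file scales each axis
    # by a different factor, so some delivered pixels are interpolated
    # rather than directly sensed.

    sensor_w, sensor_h = 3424, 2202   # photosites (from the ARRI page above)
    file_w, file_h = 3840, 2160       # UHD delivery resolution

    print(f"horizontal scale: {file_w / sensor_w:.3f}x")  # > 1: upscaled
    print(f"vertical scale:   {file_h / sensor_h:.3f}x")  # < 1: downscaled
    ```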

    i suppose it also helps explain this:

    https://www.eoshd.com/news/discovery-4k-8bit-420-panasonic-gh4-converts-1080p-10bit-444/

    where an 8-bit 4k 420 was claimed to be equivalent to a 10-bit 1080p 444.
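    The arithmetic behind that claim can be sketched in a few lines (a simplified illustration of the article's argument, not its actual measurements): in 4K 4:2:0 there is one chroma sample per 2x2 block of luma, so after a 2x downscale each 1080p pixel gets its own chroma sample (4:4:4), and averaging four 8-bit luma values produces quarter-steps, i.e. roughly two extra bits of tonal precision.

    ```python
    import numpy as np

    # Simplified sketch of the 4K 8-bit 4:2:0 -> 1080p 10-bit 4:4:4 claim:
    # each 1080p pixel is built from a 2x2 block of 4K samples. Averaging
    # four 8-bit luma values lands between 8-bit steps, so storing the
    # result faithfully needs ~2 extra bits (10-bit, 0..1023).

    luma_4k = np.array([[100, 101],
                        [102, 100]], dtype=np.uint16)  # one 2x2 block of 8-bit samples

    avg = luma_4k.sum() / 4     # 100.75: falls between two 8-bit code values
    ten_bit = luma_4k.sum()     # 403: the same information as a 10-bit value

    print(avg)       # 100.75 -- not representable in 8-bit steps
    print(ten_bit)   # 403 -- fits comfortably in 10 bits
    ```

    The chroma half of the claim needs no math at all: 4:2:0 at 4K already carries one chroma sample per 2x2 luma block, which is exactly one per pixel once the frame is halved to 1080p.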

    i guess in the end it probably doesn't matter all that much to the end user, by the time we receive the content it's been scaled so many times, starting within the camera itself, that we are lucky it's viewable at all.
  7. lordsmurf (Video Restorer, joined Jun 2003):
    Also: glass (its optical resolving power) is a primary limiter in cameras.


