Because of the size of the GPU's "decoded picture buffer" (DPB), or of the CPU's L3 cache, there is apparently a limit on the maximum resolution that decodes efficiently, at any framerate.
But does framerate also have such a limit?
For example: 3840×2160@30fps has 248832000 pixels per second.
10×10@2488320fps would also have 248832000 pixels per second.
Could the same GPU or CPU that decodes and encodes 2160p@30fps smoothly also decode/encode 10×10@2488320fps smoothly?
I know, 10×10@2488320fps is somewhat hypothetical, but I am just asking this for case study purposes.
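The arithmetic in the question is easy to sanity-check; a quick check in Python, using the resolutions and rates from the post above:

```python
# Pixel rate = width * height * frames per second
uhd_rate  = 3840 * 2160 * 30        # 2160p at 30 fps
tiny_rate = 10 * 10 * 2488320       # 10x10 at 2,488,320 fps
print(uhd_rate, tiny_rate)          # both come to 248,832,000 pixels/s
```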
Last edited by LG7; 7th Feb 2020 at 12:43. Reason: Title character limit.
You might get different results by just decoding raw YUV or RGB, but with low complexity H.264 I didn't get anything close to your estimate.
[Attachment 51900]
I encoded my own 60x32 H.264 video at CRF 23, which produced a 1MB file even though the video is 10 minutes long. No audio, no CABAC, veryfast preset. Using DirectShow and LAV Filters to pipe it through my AMD GPU for decoding, I averaged a decode speed of 1247fps. The same GPU can decode 1080p at 12Mbit at around 300fps.
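For comparison, the pixel throughput of those two benchmarks (numbers taken from the post above) differs by a large factor, which suggests per-frame overhead rather than pixel throughput is the bottleneck at tiny resolutions:

```python
tiny_px = 60 * 32 * 1247            # 60x32 decoded at ~1247 fps
hd_px   = 1920 * 1080 * 300         # 1080p decoded at ~300 fps
print(tiny_px, hd_px)
print(hd_px / tiny_px)              # 1080p case moves ~260x more pixels/s
```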
As far as I was told, these factors limit the upper resolution of a video at any framerate.
Videos above that resolution would require very inefficient frame-by-frame rendering, because one frame does not fit into the DPB or L3 cache anymore.
But I am not yet aware of an equivalent limitation for framerates.
An old computer might play 1920×1080@60fps smoothly, but not 3840×2160@15fps due to the DPB or L3 cache size, despite the same pixel rate.
Let's assume a 1×1 pixel video at 124416000 frames per second. It would have the exact same number of pixels per second (also known as pixel rate) as a 1920×1080 video at 60 fps.
Could a device that renders the latter smoothly also render the former, or does it hit a wall?
(I.e. how far can framerate be increased while maintaining steady pixel rate?)
There's another factor to include. The monitor's refresh rate. Most monitors refresh at 60Hz, meaning the picture being displayed is refreshed 60 times per second. For a 30fps video, it means each frame is displayed for two screen refreshes. For 24fps, the frames alternate displaying for two refreshes and three refreshes. For a 60Hz display, 60fps is the limit, otherwise frames must be dropped.
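The frame-repetition pattern described above can be sketched in a few lines of Python (a simple model assuming plain frame repetition, no blending or interpolation):

```python
def cadence(fps, hz, frames=8):
    """For each of the first `frames` source frames, count how many
    display refreshes show it (simple frame repetition, no blending)."""
    shows = [0] * frames
    refreshes = frames * hz // fps          # refreshes spanning those frames
    for r in range(refreshes):
        f = min(r * fps // hz, frames - 1)  # which frame is due at refresh r
        shows[f] += 1
    return shows

print(cadence(30, 60))  # [2, 2, 2, 2, 2, 2, 2, 2] - each frame shown twice
print(cadence(24, 60))  # [3, 2, 3, 2, 3, 2, 3, 2] - the alternating 3:2 cadence
```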
Of course there are displays that'll refresh at higher rates these days, but there's still a limit. GPUs can decode video more efficiently than a CPU, which is the "jack of all trades" so to speak, but the DPB probably has limitations. I know almost nothing about how that works, but the DPB stores pictures, not pixels as such, because modern codecs use B and P frames that need to reference past and future frames in order to be decoded correctly. So there are limits to the number of reference frames that can be used when encoding video, and those limits change with resolution, because the DPB has to store the appropriate number of pictures. Obviously it can store more pictures at lower resolutions, but there's probably still a limit regardless of their size, and a limit to the speed at which they can enter or leave the DPB etc. For a CPU, the DPB is probably the PC's RAM, and there's a limit to its speed.
I just had a look at Wikipedia and the formula for calculating the capacity of the DPB for h264 uses macroblocks, not pixels.
DpbCapacity = min(floor(MaxDpbMbs / (PicWidthInMbs * FrameHeightInMbs)), 16)
If you scroll up there's a list of levels, the resolutions they support, and the maximum number of reference frames and maximum frame rate for those resolutions, although it only lists typical resolutions and their maximums. Try the formula above; unless I'm interpreting it incorrectly, it indicates 16 pictures is the limit. Different levels support different maximum bitrates, more reference frames and higher frame rates at lower resolutions etc, but it doesn't appear to be as simple as the number of pixels per frame relating directly to the maximum frame rate.
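Plugging numbers into that formula bears this out. A small sketch (MaxDpbMbs = 32768 is the Level 4.0 value from the H.264 level table; coded dimensions round up to 16-pixel macroblocks, so 1080p is coded as 1920x1088):

```python
def dpb_capacity(width, height, max_dpb_mbs):
    # Macroblocks are 16x16; coded dimensions round up to a multiple of 16.
    mbs_per_frame = ((width + 15) // 16) * ((height + 15) // 16)
    return min(max_dpb_mbs // mbs_per_frame, 16)

# Level 4.0 allows MaxDpbMbs = 32768
print(dpb_capacity(1920, 1080, 32768))  # 4 reference pictures at 1080p
print(dpb_capacity(60, 32, 32768))      # capped at 16 despite the tiny frames
```

So at very low resolutions the DPB stops being the constraint: the hard cap of 16 pictures kicks in long before the buffer fills.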
For h265 there's similar information here.
Last edited by hello_hello; 8th Feb 2020 at 14:29.
Obviously, a screen cannot display 120 million frames per second. But as for the hardware, there is apparently a limit at some point.
I wonder how much higher.
Maybe a video editor (e.g. KDEnlive) could generate a very long, super-low-resolution video (e.g. 10×10), and FFmpeg could test how many frames per second it decodes.
I believe I've explained things clearly in the original post.
Why can't we have a CPU with an infinitely high core frequency?
For all finite beings are beings in a finite world...
Take it as an analogy:
Say you only visit your home once a day. You might never realize that some malicious magician deliberately sets your house on fire while you are out... and purposely restores everything right before you return.
Your life would proceed as if nothing had ever happened, even if that magician does it on a daily basis...
If you have a video that should display at "x" number of frames per second, even if that's lower than the refresh rate, and the video isn't being decoded/processed fast enough, then either the frame rate slows so all the frames are displayed, or playback continues at the same rate and frames that don't make it to the output buffer in time, or don't leave it in time etc., are dropped. If it happens regularly the result is "jerky" playback and you'd probably notice something is happening. There are lots of potential causes for that sort of thing.
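That frame-dropping behaviour can be sketched with a toy model (hypothetical numbers; real players also buffer ahead, and skipped frames may still need decoding if they are reference frames):

```python
def play(decode_ms, fps, total_frames=300):
    # Toy model: each frame must finish decoding before its presentation
    # deadline, or it is dropped. Ignores read-ahead buffering.
    interval = 1000.0 / fps
    clock = 0.0                     # time when the decoder becomes free
    shown = dropped = 0
    for i in range(total_frames):
        deadline = (i + 1) * interval
        if clock + decode_ms <= deadline:
            clock += decode_ms
            shown += 1
        else:
            dropped += 1            # too late: skip this frame
    return shown, dropped

print(play(10, 60))   # fast enough: nothing dropped
print(play(20, 60))   # decoder only manages ~50 fps: regular drops, "jerky"
```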
MadVR will tell you the same thing.
Last edited by hello_hello; 9th Feb 2020 at 22:45.
It was just to clarify and emphasize the hypothetical character of the subject.
this thread (which reveals ideas on how the video could be decoded).
Even if it were supported, have you thought of the combination of a weak GPU and a super-powerful CPU? (Which would effectively become hardware deceleration...)
And there are many more possibilities.