VideoHelp Forum

  1. A lower value means better quality, right? I've read some users saying that faster settings mean better quality video, and others saying the opposite.

    I'm using DAIN APP to interpolate videos; its default CRF is 16. But because I have to use the reduced VRAM usage setting, a 1080p file is reduced to 540p in the output file, so I'm losing quality. Would lowering the default CRF help in any way? Thanks.
  2. Dinosaur Supervisor KarMa's Avatar
    Join Date
    Jul 2015
    Location
    US
    Yes, lower is better but uses more bitrate.
  3. I'm a Super Moderator johns0's Avatar
    Join Date
    Jun 2002
    Location
    canada
    It only gives you better quality as long as the source is very good to excellent quality; it will never improve on the original quality.
    I think, therefore I am a hamster.
  4. Lower CRF with x264/x265 yields diminishing returns (quality vs. bitrate). CRF 16 is already quite low.

    Higher resolution with higher crf typically gives a better result.
  5. Originally Posted by butterw View Post
    Higher resolution with higher crf typically gives a better result.
    I hate it when people say stupid things.
  6. @OP, CRF is a rate control method; lower CRF values will, in theory, produce higher quality because they use more bit rate.

    As was mentioned by someone else, the practical limit is that it can never give you more quality than the source. An equal limit is the settings of all the so-called "psy optimizations" (AQ, MB-tree, Psy-RD, Psy-Trellis, Trellis, etc.), which all work together to distribute the bit rate intra- and inter-frame in what the developers hope will result in optimal visual quality at any given bit rate range.

    It's all a bunch of horse feathers of course, but that is the theory.
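    To put rough numbers on the CRF/bitrate trade-off: a widely cited rule of thumb for x264/x265 (from the ffmpeg H.264 encoding guide, not from anyone in this thread) is that raising CRF by 6 roughly halves the output bitrate. A minimal sketch, assuming that rule holds; the real ratio depends heavily on the content:

    ```python
    # Rule of thumb for x264/x265: +6 CRF roughly halves the bitrate.
    # The exact ratio varies a lot with content; treat this as an estimate only.
    def estimated_bitrate_ratio(crf_from: float, crf_to: float) -> float:
        """Estimated bitrate multiplier when moving from crf_from to crf_to."""
        return 0.5 ** ((crf_to - crf_from) / 6)

    # Going from the DAIN default of CRF 16 down to CRF 14 costs roughly 26% more bitrate:
    print(f"{estimated_bitrate_ratio(16, 14):.2f}x")  # 1.26x
    ```

    This is why "lower is better but uses more bitrate" and "diminishing returns" are both true: every step down in CRF buys less visible improvement for a steadily growing bitrate.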
  7. Well, in reality and not just in theory, what butterw says is valid. For an HD source you can get away with a higher CRF, say 19-20. For SD resolution watched on a big TV screen, the content gets upscaled and every artifact is blown up, so the equivalent of 19-20 for HD might show up as worse; the CRF is better kept lower, say 16, 17 or 18.

    Of course, once you move to a 120" TV screen, then again that CRF 19-20 for HD content might not be enough. That is the relativity of these things. But I'm sure he is talking about a given screen and viewing scenario, using similar or the same settings, so that the comparison makes sense, not about what individual settings do.

    The encoding business IS a business dealing with illusion. Bringing up what a setting does and criticizing on that basis makes only partial sense, because in the end it is the illusion that matters. That's the point.
  8. Originally Posted by _Al_ View Post
    Well in reality and not some theory, what butterw says is valid. ...
    This is absurd; it hurts to read. HD has more pixels to encode than SD, and a higher CRF uses less bit rate than a lower CRF, so when you combine HD with a higher CRF you end up with a worse pixel/bit ratio.

    It's advice such as yours and the other guys' that results in constant questions on this and other forums about why their encodes don't look good.

    Think before you type.

    As for the other hogwash about illusions,
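    The pixel/bit ratio argument above can be made concrete with simple arithmetic. The numbers below are purely illustrative (a made-up 4000 kbps stream at 24 fps), not measurements from any real encode:

    ```python
    def bits_per_pixel(bitrate_kbps: float, width: int, height: int, fps: float) -> float:
        """Average bits available per pixel per frame at a given bitrate."""
        return bitrate_kbps * 1000 / (width * height * fps)

    # The same bitrate spreads four times thinner over 1080p than over 960x540,
    # because 1080p has four times as many pixels per frame.
    hd = bits_per_pixel(4000, 1920, 1080, 24)
    sd = bits_per_pixel(4000, 960, 540, 24)
    print(f"HD: {hd:.3f} bpp, SD: {sd:.3f} bpp")
    ```

    So at a fixed bitrate, higher resolution means fewer bits per pixel; CRF mode compensates by spending more total bitrate on the bigger frame, which is what the two sides of this argument keep talking past each other about.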
  9. You are what you are.

    In the end, the size of the content and the upscaling while watching matter, because it is us who are watching it, not some rule book. I can encode a 100x100 pixel video at CRF 0, and guess how it is going to look on my HD TV? Can you comprehend this?
    Last edited by _Al_; 19th Aug 2020 at 11:40.
  10. As long as they use NotEnoughAV1Encodes, their encodes will look great.
  11. Member Alkl's Avatar
    Join Date
    Mar 2018
    Location
    Germany
    Artifacts in an SD encode are much bigger on an HD screen than artifacts in an HD encode on the same screen.

    HD still needs more bits, of course, and a fixed CRF will of course generate a much higher bitrate for the HD encode than for the SD encode. However, it's often good to use a lower CRF for SD content.

    It's common knowledge to use higher CRF values as the resolution goes up.

    From the official Handbrake Documentation:

    Recommended settings for x264 and x265 encoders:
    RF 18-22 for 480p/576p Standard Definition
    RF 19-23 for 720p High Definition
    RF 20-24 for 1080p Full High Definition
    RF 22-28 for 2160p 4K Ultra High Definition

    Imperfections tend to be more noticeable at larger display sizes and closer viewing distances. This is especially true for lower resolution videos (less than 720p), which are typically scaled or “blown up” to fill your display, magnifying even minor imperfections in quality.

    You may wish to slightly increase quality (lower CRF) for viewing on larger displays (50 inches / 125 cm diagonal or greater), or where viewing from closer than average distances. Reduced quality may be acceptable for viewing on smaller screens or where storage space is limited, e.g. mobile devices.
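    The quoted Handbrake table can be wrapped in a tiny lookup helper for picking a starting range by output height. `recommend_rf` is a hypothetical function written for this post, not part of Handbrake or any encoder; the ranges are copied straight from the documentation quoted above:

    ```python
    # RF ranges as quoted from the Handbrake documentation above.
    RF_RANGES = {
        576:  (18, 22),  # 480p/576p Standard Definition
        720:  (19, 23),  # 720p High Definition
        1080: (20, 24),  # 1080p Full High Definition
        2160: (22, 28),  # 2160p 4K Ultra High Definition
    }

    def recommend_rf(height: int) -> tuple[int, int]:
        """Return the quoted RF range for the smallest listed tier >= height."""
        for tier in sorted(RF_RANGES):
            if height <= tier:
                return RF_RANGES[tier]
        return RF_RANGES[2160]

    print(recommend_rf(540))   # 540p SD -> (18, 22)
    print(recommend_rf(1080))  # 1080p   -> (20, 24)
    ```

    Note how the recommended ranges shift upward with resolution, which is exactly the "higher resolution, higher CRF" convention being argued about in this thread.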
  12. Originally Posted by Alkl View Post
    From the official Handbrake Documentation:
    Handbrake? This is your checkmate?
  13. Originally Posted by sophisticles View Post
    Originally Posted by Alkl View Post
    From the official Handbrake Documentation:
    Handbrake? This is your checkmate?
    You only have to look at the encodes you uploaded yourself a while ago, where you failed to prove another encoder was better than x264 once again.
    https://forum.videohelp.com/threads/397165-X264-vs-Intel-QSV-on-Ubuntu-Linux#post2582807
    You encoded the 4k samples at CRF24, and they looked great. I would never encode a SD source at CRF24. Try downscaling one of your 4k sources to SD and encode at CRF24, then upscale the encode for playback on a 4k display. It's not hard to deduce which encode is going to have more visible encoding artefacts, when viewed at the same resolution.
  14. Well, the original source is 1080p HD, but as I said, due to the program's VRAM limitations (720p requires 10-11 GB of VRAM) it must be halved in resolution (960x540) before interpolation. So the output is SD, albeit very good quality SD. I then use AI to upscale to 4K, which produces a folder of several TB of TIFF images, which is then compressed to H.265, where I aim for a bitrate similar to that of 4K UHD discs, 60-90 Mbps.

    My follow-up questions are about the resulting SD files from the DAIN APP. If I set the CRF for DAIN to 14, that gives a higher bitrate SD file. So if I'm looking for the best quality, is it reasonable to try to get a bitrate that is half that of the original file, given that it is half the resolution? Did that make sense? Ultimately this is about the quality I feed the AI upscaling software, as that creates (literally) uncompressed 4K data.
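    One arithmetic detail worth checking in that plan: halving both dimensions (1920x1080 to 960x540) quarters the pixel count, so at a constant bits-per-pixel budget the matching bitrate would be roughly a quarter, not half, of the original. A quick sketch of the arithmetic (not encoder advice; in practice bitrate does not scale linearly with pixel count):

    ```python
    def pixel_ratio(w1: int, h1: int, w2: int, h2: int) -> float:
        """How many times more pixels the first frame size has than the second."""
        return (w1 * h1) / (w2 * h2)

    # "Half resolution" (halved on each axis) is a quarter of the pixels:
    print(pixel_ratio(1920, 1080, 960, 540))  # 4.0
    ```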
  15. Member
    Join Date
    Mar 2011
    Location
    Nova Scotia, Canada
    Originally Posted by sophisticles View Post
    Originally Posted by _Al_ View Post
    Well in reality and not some theory, what butterw says is valid. ...
    This is absurd, it hurts reading. ...
    Agreed, utter nonsense. I'm pretty used to this sort of crap from the newbie-oriented Linux distro forums I read.
  16. Would you care to say why it is crap? Encoding UHD and SD is not the same, because size does matter.
  17. @Chauceratemyhamster
    What CRF values mean actually depends on which encoder you are using (e.g. x264); see the ffmpeg H.264 Encoding Guide: https://trac.ffmpeg.org/wiki/Encode/H.264
    I don't know what DAIN APP is or what it outputs.
    540p is mod4 (544 would be mod16). 540 to UHD is 4x on height for AI upscaling.

    Given your very specific application, it's probably a good idea to test out what works best for you on a small representative sample of your sources.
  18. Originally Posted by Hoser Rob View Post
    Agreed, utter nonsense. I'm pretty used to this sort of crap on the newbie oriented distro Linux forums I read.
    Have you ever encoded a video? You appear to be posting with a level of wrongness similar to sophisticles'.

    I know almost nothing about how video encoding works, but take the use of motion vectors as an example. You have a picture that's spread over 4K's worth of pixels. The picture changes from frame to frame, but isn't the encoder more likely to find a good match for a block at a higher resolution, because neighbouring blocks are more likely to be different at lower resolutions? Just a thought....



    Screenshots below.
    1: A 4k frame, encoded at CRF24 (x264), downscaled to 1080p on playback.
    2: The same frame, downscaled to 960x400 and encoded at CRF24 (default settings), then upscaled to 1080p on playback.
    3: The same frame, downscaled to 960x400 and encoded at CRF18 (default settings), then upscaled to 1080p on playback.

    Notice how the SD CRF24 encode looks pretty average next to the CRF24 UHD source at the same resolution (1080p)?
    Notice how the SD CRF18 encode compares far more favourably?

    Conclusion: when upscaling SD to HD or UHD on playback, the encoding quality, or lack thereof, is more noticeable. Therefore lower resolutions require lower CRF values, while at higher resolutions you can get away with higher CRF values.
    Attached screenshots:
    1: UHD at 1080p and CRF24.png (1.94 MB)
    2: SD at 1080p and CRF24.png (1.21 MB)
    3: SD at 1080p and CRF18.png (1.27 MB)
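    The magnification effect shown in those screenshots is easy to quantify: scaling 960x400 up to fill a 1920x1080 frame stretches every encoded pixel, and therefore every blocking artifact, by the same per-axis factors (simple geometry, ignoring aspect-ratio letterboxing):

    ```python
    def upscale_factors(src_w: int, src_h: int, dst_w: int, dst_h: int) -> tuple[float, float]:
        """Per-axis stretch factors when scaling a frame up to a display size."""
        return dst_w / src_w, dst_h / src_h

    # Each encoded pixel of the 960x400 test clip covers a 2.0 x 2.7 block
    # of display pixels at 1080p, so its artifacts are magnified accordingly.
    print(upscale_factors(960, 400, 1920, 1080))  # (2.0, 2.7)
    ```

    This is the geometric reason the SD CRF24 encode looks worse than the UHD CRF24 encode when both are viewed at 1080p: the SD encode's artifacts are enlarged on playback while the UHD encode's are shrunk.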