VideoHelp Forum
Page 2 of 2
Results 31 to 52 of 52
  1. @mindphaser: I don't really see the connection of your settings in regard to uploading content to YouTube.
    (encoding, interlaced 4:3 SD content with ac3 and Blu-ray restrictions seems like a rather unfitting choice when aiming to upload the content to YouTube)
    users currently on my ignore list: deadrats, Stears555
  2. Originally Posted by Selur View Post
    I agree that you either need to keep the quality (using some measurement) or the file size constant to draw (useful) conclusions.

    AFAIK NVenc doesn't have a CRF equivalent, it always uses QP for 1-pass encoding.
    iirc NVEncC only does:
    1. one pass abr (with optional lookahead)
    2. one pass vbr (with optional lookahead)
    3. one pass cq
    4. one pass vbr (with optional lookahead and reencoding of frames) <- this is sometimes referred to as 2-pass encoding, but it is something totally different from the classic x-pass encoding
    Thanks for the hint with NVEncC.
    I had mainly NVEnc (ffmpeg) in mind. There I am still confused about how to use the -cq (constant quality?) parameter. Is it some kind of x264 CRF?
    What -qp (constant quantizer) does is clearer, I think. Is there good documentation somewhere? I didn't find any yet.
    Well, I am afraid I am leaving the topic …….
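    For what it's worth, here is how the two NVEnc rate-control modes in question are typically selected with ffmpeg. This is only a sketch: the file names are placeholders, and the flags should be double-checked against `ffmpeg -h encoder=h264_nvenc` for your particular build.

    ```python
    # Sketch: selecting ffmpeg's h264_nvenc rate-control modes.
    # "input.mp4"/"out_*.mp4" are placeholder names.

    # -rc constqp -qp 23: fixed quantizer for every frame (NVENC's CQP mode)
    cqp_cmd = [
        "ffmpeg", "-i", "input.mp4",
        "-c:v", "h264_nvenc", "-rc", "constqp", "-qp", "23",
        "out_qp.mp4",
    ]

    # -rc vbr -cq 23 -b:v 0: quality-targeted VBR; -cq sets the quality target
    # and -b:v 0 removes the bitrate cap -- the closest NVENC gets to x264's CRF
    cq_cmd = [
        "ffmpeg", "-i", "input.mp4",
        "-c:v", "h264_nvenc", "-rc", "vbr", "-cq", "23", "-b:v", "0",
        "out_cq.mp4",
    ]

    print(" ".join(cqp_cmd))
    print(" ".join(cq_cmd))
    ```

    So -qp pins the quantizer outright, while -cq is a quality target that still lets the encoder vary the per-frame quantizer.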
  3. Originally Posted by Selur View Post
    @mindphaser: I don't really see the connection of your settings in regard to uploading content to YouTube.
    (encoding, interlaced 4:3 SD content with ac3 and Blu-ray restrictions seems like a rather unfitting choice when aiming to upload the content to YouTube)
    Not to mention the fact that movies aren't 29.97fps. Or is there something in there performing an IVTC? I understand none of the ffmpeg gobbledygook.
  4. Originally Posted by Selur View Post
    @mindphaser: I don't really see the connection of your settings in regard to uploading content to YouTube.
    (encoding, interlaced 4:3 SD content with ac3 and Blu-ray restrictions seems like a rather unfitting choice when aiming to upload the content to YouTube)
    I use this for personal movie backup, to reduce the space; I have tried this combination for many years. Maybe I'm wrong? I do not upload files to YouTube; it is only a personal backup of 1080p mp4 files.
  5. Originally Posted by Sharc View Post
    I understand, but it doesn't make sense to compare the quality of encodes with a huge difference in file size (bitrate) such as +69% in the case of Swan Goose, does it?
    It makes sense in the context of the secondary discussion that developed within this thread: I took the stance that QP is a superior encoding method to CRF and laid out the reasons for my stance, so I did test encodes that directly compare CRF with the default preset against the settings I advocate. In some instances CRF results in a smaller file size; in some, the settings I advocate result in a smaller file size.

    The difference is this: if you use CRF, with or without any Psy "optimizations" and with the default I/P or P/B ratios, all you know is that the average quantizer used was, for instance, 23, but the instantaneous quantizer used can and does fluctuate wildly; with the settings I advocate you know the instantaneous quantizer used and can be sure of uniform quality from frame to frame and within a frame. You end up with a more pleasing, even experience with what I propose versus CRF.

    That's what this test was designed to show.
  6. all you know is that the average quantizer used was for instance 23,
    No, you don't... crf 23 doesn't mean average quantizer 23. Rate factor and quantizer are different things.
    I don't really care that you prefer constant quantizer over constant rate factor, but the statement is simply wrong.

    2.1.4 Constant rate-factor
    This is a one-pass mode that is optimal if the user does not desire a specific bitrate, but specifies quality instead. It is the same as ABR, except that the scaling factor is a user defined constant and no overflow compensation is done.
    2.1.5 Constant quantizer
    This is a one-pass mode where QPs are simply based on whether the frame is I-, P- or B-frame
    see: https://www.researchgate.net/publication/4289294_Improved_Rate_Control_and_Motion_Esti...r_H264_Encoder
    Also interesting read https://www.researchgate.net/publication/224257610_A_rate_control_algorithm_for_X264_h...o_conferencing about DCRF.

    And no, quality and quantizer are also two different things.
    btw. crf was initially called '1pass quality-based VBR' when it was introduced ~13 years ago in x264.

    To fully grasp why crf is more like abr than cq you will probably have to read and understand the source code, but crf X does not aim for an average quantizer at all.

    Cu Selur
  7. Originally Posted by Selur View Post
    No, you don't... crf 23 doesn't mean average quantizer 23. Rate factor and quantizer are different things.
    I don't really care that you prefer constant quantizer over constant rate factor, but the statement is simply wrong.
    It really is a shame that the developer of Hybrid doesn't understand what CRF actually means:

    https://slhck.info/video/2017/02/24/crf-guide.html

    CRF is a “constant quality” encoding mode, as opposed to constant bitrate (CBR). Typically you would achieve constant quality by compressing every frame of the same type the same amount, that is, throwing away the same (relative) amount of information. In tech terminology, you maintain a constant QP (quantization parameter). The quantization parameter defines how much information to discard from a given block of pixels (a Macroblock). This typically leads to a hugely varying bitrate over the entire sequence.

    Constant Rate Factor is a little more sophisticated than that. It will compress different frames by different amounts, thus varying the QP as necessary to maintain a certain level of perceived quality. It does this by taking motion into account. A constant QP encode at QP=18 will stay at QP=18 regardless of the frame (there is some small offset for different frame types, but it is negligible here). Constant Rate Factor at CRF=18 will increase the QP to, say, 20, for high motion frames (compressing them more) and lower it down to 16 for low motion parts of the sequence. This will essentially change the bitrate allocation over time.
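    The behaviour that quote describes can be sketched with a toy model. To be clear, the numbers and the motion rule below are invented purely for illustration; this is not x264's actual algorithm, which works in qscale space with lookahead.

    ```python
    # Toy illustration: constant QP ignores content, while a CRF-like
    # controller raises QP on high-motion frames (compressing them more)
    # and lowers it on static ones. The +/-2 offset rule is made up.

    def qp_constant(qp, motion):
        return qp  # same quantizer regardless of content

    def qp_crf_like(rf, motion):
        if motion > 0.7:   # high motion: compress harder
            return rf + 2
        if motion < 0.3:   # static scene: spend more bits
            return rf - 2
        return rf

    motions = [0.1, 0.5, 0.9]  # per-frame "motion" scores, made up
    print([qp_constant(18, m) for m in motions])   # [18, 18, 18]
    print([qp_crf_like(18, m) for m in motions])   # [16, 18, 20]
    ```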
    It is a sad state of affairs that you seem to believe that CRF is the same as ABR (average bit rate) and if this part is true:

    crf was initially called '1pass quality-based VBR' when it was introduced ~13 years ago in x264
    Then it speaks volumes about CRF and about those that advocate it; most people claim that 1-pass ABR is among the lowest quality you can have, with 1-pass VBR slightly higher. If CRF did in fact function like a 1-pass quality-based VBR, then NVENC does support CRF, because it offers a 1-pass quality-based VBR encoding mode.

    I can prove that CRF attempts to maintain an average bit rate over time: fire up AviDemux and start a CRF encode; the dialog box that pops up has an advanced detail option; click on that and you can see the quantizer used per frame as the encoding is done. You will see the quantizer fluctuate up and down from CRF 23 as the encoding progresses.

    When I get home tonight I will go dumpster diving in the code to show you exactly what it is doing.

    Straight from the x265 docs:

    https://x265.readthedocs.io/en/default/cli.html#quality-rate-control-and-rate-distortion-options

    Quality-controlled variable bitrate. CRF is the default rate control method; it does not try to reach any particular bitrate target, instead it tries to achieve a given uniform quality and the size of the bitstream is determined by the complexity of the source video. The higher the rate factor the higher the quantization and the lower the quality. Default rate factor is 28.0.
    How do you think it's determining quality, when these developers constantly claim that artificial metrics like PSNR, SSIM and VMAF are not reliable quality measurements?
    How do you think it's determining quality, when these developers constantly claim that artificial metrics like PSNR, SSIM and VMAF are not reliable quality measurements?
    Look at the code, but quality is not the same as quantizer.

    Sadly Loren Merritt aka pengvado/akupenguin the author of the paper I linked to and the main dev behind crf in x264 isn't really active in the doom9 forum otherwise this could be easily settled by him.

    I can prove that CRF attempts to maintain an average bit rate over time: fire up AviDemux and start a CRF encode; the dialog box that pops up has an advanced detail option; click on that and you can see the quantizer used per frame as the encoding is done. You will see the quantizer fluctuate up and down from CRF 23 as the encoding progresses.
    I never questioned that the quantizer is fluctuating. I questioned your statement that crf X means that x264 aims for an average quantizer of X; that is simply false, and your statement supports this too, since an average bitrate also isn't the same as an average quantizer.

    -> Sorry, but atm. I'm still convinced that you are the one that is misunderstanding things here.
  9. Originally Posted by sophisticles View Post
    Originally Posted by Sharc View Post
    I understand, but it doesn't make sense to compare the quality of encodes with a huge difference in file size (bitrate) such as +69% in the case of Swan Goose, does it?
    It makes sense in the context of the secondary discussion that developed within this thread: I took the stance that QP is a superior encoding method to CRF and laid out the reasons for my stance, so I did test encodes that directly compare CRF with the default preset against the settings I advocate. In some instances CRF results in a smaller file size; in some, the settings I advocate result in a smaller file size.

    The difference is this: if you use CRF, with or without any Psy "optimizations" and with the default I/P or P/B ratios, all you know is that the average quantizer used was, for instance, 23, but the instantaneous quantizer used can and does fluctuate wildly; with the settings I advocate you know the instantaneous quantizer used and can be sure of uniform quality from frame to frame and within a frame. You end up with a more pleasing, even experience with what I propose versus CRF.

    That's what this test was designed to show.
    Using the same value for CRF or QP does not make sense, as the principles are very different. Moreover, one does not normally view a movie by stepping through it frame by frame. Someone once said it's like judging the beauty of a waterfall by inspecting single droplets. One will always find frames which look better for one method or the other. QP may for example give better pictures in fast action scenes where one would however not see any details when watching in real time. On the other hand - under the constraint of the same file size - other scenes will suffer. Again, for comparing apples with apples you would have to adjust the CRF until you end up with the same file size as you got for the QP encode, and then do some blind tests or perhaps apply a PSNR comparison. Have fun!
  10. I tested 2 more clips, you guys can see the results for yourselves.


    Re:
    "Using the same value for CRF or QP does not make sense as the principles are very different."

    No they are not, I will demonstrate shortly with code directly from x264.


    Re:
    "comparing apples with apples"

    This saying drives me up the wall; it's just about the dumbest thing anyone has ever conceived. There are different types of apples, and if we were to normalize to, say, Granny Smith apples, a comparison between 2 of these would be of limited value. All you could say is that one was bigger than the other, one weighed more, one was fresher, but really that's about it. A comparison between Red Delicious and Golden Delicious is valid, as is a comparison between an apple and an orange, and in fact the latter comparison would yield more information, such as which spikes your blood sugar more, which has more nutrients, which is easier to digest, as well as the physical attributes that an apple-to-apple comparison limits us to.


    The next time you want to say only an "apples to apples" comparison is valid, take a minute to think. To help you realize how silly that belief is, imagine if someone said that only a Ford Mustang to Ford Mustang comparison is valid, and that a Ford Mustang to Chevy Camaro to Dodge Charger comparison is invalid because they are made by different manufacturers to different specifications. I hope now you guys realize why anytime someone says to do an "apples to apples" comparison they sound silly.


    Regarding CRF, I did a very quick dumpster dive into the x264 code, and believe me it's a dumpster all right, this thing seems to work despite its best efforts, not because of them, and here are some interesting tidbits:


    https://code.videolan.org/videolan/x264/blob/master/encoder/ratecontrol.c


    https://code.videolan.org/videolan/x264/blob/master/encoder/ratecontrol.h


    _________________


    /* Completely arbitrary. Ratecontrol lowers relative quality at higher framerates
     * and the reverse at lower framerates; this serves as the center of the curve.
     * Halve all the values for frame-packed 3D to compensate for the "doubled"
     * framerate. */
    #define BASE_FRAME_DURATION (0.04f / ((h->param.i_frame_packing == 5)+1))
    _________________


    if( h->param.rc.i_rc_method == X264_RC_CRF )
    {
        h->param.rc.i_qp_constant = h->param.rc.f_rf_constant + QP_BD_OFFSET;
        h->param.rc.i_bitrate = 0;
    }
    ____________________


    if( h->param.rc.i_rc_method == X264_RC_CRF )
    {
        /* Arbitrary rescaling to make CRF somewhat similar to QP.
         * Try to compensate for MB-tree's effects as well. */
        double base_cplx = h->mb.i_mb_count * (h->param.i_bframe ? 120 : 80);
        double mbtree_offset = h->param.rc.b_mb_tree ? (1.0-h->param.rc.f_qcompress)*13.5 : 0;
        rc->rate_factor_constant = pow( base_cplx, 1 - rc->qcompress )
                                 / qp2qscale( h->param.rc.f_rf_constant + mbtree_offset + QP_BD_OFFSET );
    _____________________


    if( rc->b_abr )
    {
        /* FIXME ABR_INIT_QP is actually used only in CRF */
    #define ABR_INIT_QP (( h->param.rc.i_rc_method == X264_RC_CRF ? h->param.rc.f_rf_constant : 24 ) + QP_BD_OFFSET)
        rc->accum_p_norm = .01;
        rc->accum_p_qp = ABR_INIT_QP * rc->accum_p_norm;
        /* estimated ratio that produces a reasonable QP for the first I-frame */
        rc->cplxr_sum = .01 * pow( 7.0e5, rc->qcompress ) * pow( h->mb.i_mb_count, 0.5 );
        rc->wanted_bits_window = 1.0 * rc->bitrate / rc->fps;
        rc->last_non_b_pict_type = SLICE_TYPE_I;
    _________________


    if( frame_num >= rc->num_entries )
    {
        /* We could try to initialize everything required for ABR and
         * adaptive B-frames, but that would be complicated.
         * So just calculate the average QP used so far. */
        h->param.rc.i_qp_constant = (h->stat.i_frame_count[SLICE_TYPE_P] == 0) ? 24 + QP_BD_OFFSET
            : 1 + h->stat.f_frame_qp[SLICE_TYPE_P] / h->stat.i_frame_count[SLICE_TYPE_P];
    ________________________


    /* FIXME ABR_INIT_QP is actually used only in CRF */
    #define ABR_INIT_QP (( h->param.rc.i_rc_method == X264_RC_CRF ? h->param.rc.f_rf_constant : 24 ) + QP_BD_OFFSET)
    _______________________


    if( h->param.rc.i_rc_method == X264_RC_CRF )
    {
        q = get_qscale( h, &rce, rcc->rate_factor_constant, h->fenc->i_frame );
    }
    __________________________

    else if( h->param.rc.i_rc_method == X264_RC_CRF && rcc->qcompress != 1 )
    {
        q = qp2qscale( ABR_INIT_QP ) / fabs( h->param.rc.f_ip_factor );
    }
    rcc->qp_novbv = qscale2qp( q );
    _________________________


    I will admit the above isn't conclusive, but the code is far from straightforward; it's obviously the work of various developers over the course of numerous years, and unfortunately this leads to kludgy code that is hard to decipher.


    Tomorrow after work I will dive into the x265 code in the hope that, since it's newer, it may be better documented and more straightforward.


    There is no question though, from reading the x264 code that CRF is intricately tied to QP.
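    For anyone who wants to poke at the quoted math, here is a rough Python transcription of x264's QP-to-qscale mapping and the rate_factor_constant formula from the snippets above. I'm assuming 8-bit encoding (QP_BD_OFFSET = 0), the 0.85 * 2^((qp-12)/6) form of qp2qscale, and an example macroblock count; treat it as a sketch, not the actual encoder.

    ```python
    import math

    # Rough transcription of the quoted snippets, 8-bit assumption.
    QP_BD_OFFSET = 0

    def qp2qscale(qp):
        # x264's mapping from QP to internal qscale units
        return 0.85 * 2.0 ** ((qp - 12.0) / 6.0)

    def qscale2qp(qscale):
        # inverse mapping
        return 12.0 + 6.0 * math.log2(qscale / 0.85)

    def rate_factor_constant(rf, mb_count, qcompress=0.6, bframes=True, mbtree=True):
        # mirrors the "Arbitrary rescaling to make CRF somewhat similar to QP" block
        base_cplx = mb_count * (120 if bframes else 80)
        mbtree_offset = (1.0 - qcompress) * 13.5 if mbtree else 0.0
        return base_cplx ** (1 - qcompress) / qp2qscale(rf + mbtree_offset + QP_BD_OFFSET)

    # 8160 macroblocks ~ a 1920x1088 frame in 16x16 blocks (example value)
    print(qp2qscale(23))
    print(qscale2qp(qp2qscale(23)))
    print(rate_factor_constant(23.0, 8160))
    ```

    Note how the rate factor ends up as a single scaling constant fed into per-frame qscale decisions, rather than a target for the average quantizer.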
    Image Attached Files
  11. Originally Posted by sophisticles View Post


    Re:
    "Using the same value for CRF or QP does not make sense as the principles are very different."

    No they are not, I will demonstrate shortly with code directly from x264.

    He's probably saying "The principles are different" because they aim to achieve different goals.

    Hint: one uses a constant quantizer, one does not


    Re:
    "comparing apples with apples"

    This saying drives me up the wall

    Maybe it's a bad "saying" , but that's why it's in quotation marks - English speakers should get the gist of what it means in this context.

    The next time you want to say only an "apples to apples" comparison is valid, take a minute to think. To help you realize how silly that belief is, imagine if someone said that only a Ford Mustang to Ford Mustang comparison is valid, and that a Ford Mustang to Chevy Camaro to Dodge Charger comparison is invalid because they are made by different manufacturers to different specifications. I hope now you guys realize why anytime someone says to do an "apples to apples" comparison they sound silly.
    But you're the one doing the imagining...

    The point is - test what you set out to test and eliminate the other confounding variables. If you're strictly testing QP vs. CRF, you should use the same settings (or as close as possible). Otherwise you don't know what is causing what. It's basic scientific method, but some people need it spelled out.



    I will admit the above isn't conclusive, but the code is far from straightforward; it's obviously the work of various developers over the course of numerous years, and unfortunately this leads to kludgy code that is hard to decipher.


    Tomorrow after work I will dive into the x265 code in the hope that, since it's newer, it may be better documented and more straightforward.


    There is no question though, from reading the x264 code that CRF is intricately tied to QP.


    LOL of course they are tied to QP. Everything is.

    Macroblocks have quantizers. Gosh, what a huge revelation!

    Guess what? 1pass, 2pass, npass use quantizers too.

    You should read up on what "QP" (quantization parameter) means.

    x264 syntax is a bit lazy, because --qp should really be --cqp (c for constant).






    If you're doing tests to actually compare things like quality, it doesn't make sense to use the same value - because it's unlikely you're going to get the same filesize (bitrate).

    So the only encode pair that comes close in size is "Cat Eye". But whatever method you used messed up the levels compared to the 102MB h264 source (I'm assuming that's the one you used). The contrast is increased, levels crushed. I'm assuming the others are messed up too, but I didn't bother checking.
  12. Originally Posted by sophisticles View Post
    This saying drives me up the wall ...
    Ohhhhh .... now measure your blood pressure and compare it with .... whatever
    My point was to compare the (visual or measured) quality at equal file size = equal compression strength with respect to the source, rather than fixing the value for CRF and QP to, say, 23. No one doubts that CRF adjusts the quantizer dynamically as a means to deliver visual-quality-based 1-pass encoding. It may not be perfect for all cases and scenarios, though.
    For your CRF samples you are using "medium" for the encoder settings. Would you mind telling us what "Custom" means for the QP samples?
  13. Originally Posted by poisondeathray View Post
    So the only encode pair that comes close in size is "Cat Eye". But whatever method you used messed up the levels compared to the 102MB h264 source (I'm assuming that's the one you used). The contrast is increased, levels crushed. I'm assuming the others are messed up too, but I didn't bother checking.
    That's my mistake, I neglected to mention this. For all sources, I used the PhotoJPEG version as source when available, I believe only the Squirrel and Chipmunk had a H264 source because that's all that's available, I am at work right now but I am almost certain that Cat Eye has a PhotoJPEG version available.
  14. Originally Posted by Sharc View Post
    For your CRF samples you are using "medium" for the encoder settings. Would you mind telling us what "Custom" means for the QP samples?
    I used Medium and CRF 23 because that's the x264 default if you were to fire up the command line and simply point the encoder at a file without any additional parameters other than a source name and destination name.

    "Custom" I have already explained, I set the QP to 23 (though in practice I always use between 15 and 18), I disabled AQ (which naturally means I MB-Tree is off), Psy-RD is set to zero (because I couldn't find a way in AviDemux to simply disable all Psy "optimizations", though I supposed I could have just set tune to PSNR and accomplished the same thing), I set I/P and P/B ratio to 1 so that each frame is encoded with the same QP and I disabled B frames as references.

    This ensures uniform QP throughout the file, and if I were encoding something for public consumption, for instance if I owned a site that sold videos, I would also employ a segmented video encoding strategy like professional optical media producers do. If I were using x264 or x265 (most likely the latter) I would only encode with closed GOPs 1 second long, I would do a single encode with QP 18 and the above settings, then watch the video to see if there are any scenes where the quality isn't up to snuff. If I found a scene where there were visual artifacts, I would either reencode just that scene with a lower QP, or I might reencode the whole thing using the zones option in x264/x265 to specify that a lower QP be used just for those scenes with visible quality shortcomings.

    This is one of the main reasons I have at times considered encoder comparisons to be meaningless, like the famous MSU tests, because no matter which encoder wins it means next to nothing to a professional. You give the average guy x264 (which people on boards such as this one seem to have a love affair with) and a pro Apple's H264 encoder, and I guarantee you the pro's encode will look nicer, because the average guy will fire up x264, choose that idiotic CRF mode, choose a value, maybe tweak those absurd "psy optimizations" and call it a day. The pro will go through scene by scene and reencode every scene that doesn't look perfect until the whole thing looks pristine.

    Same thing with the guys that think they can restore video with avisynth: you see them all the time, posting avisynth scripts, trying to tweak parameters, and many of them believe that's how "Hollywood" does it. True restoration is long, tedious, time-consuming work; often they will work with TIFFs, going frame by frame, manually adjusting each frame in a photo editor until they are happy with the results.

    I know grading is done on a scene by scene basis, one segment at a time, as is most filtering.

    There was a time when I was like most hobbyists: I simply fired up Handbrake or Media Coder or some similar app, picked a setting for the encoder and called it a day.

    After spending time with the Tears of Steel sources and reading up on how they produced the movie, including how they made the master and then decided to regrade it and create a higher quality master, as well as becoming aware of the Valkaama project, downloading the hundreds of gigabytes of footage they made available, and then playing around with the horror scene footage I posted about, I have come to realize that I, and all hobbyists, have been doing things the wrong way, the lazy way, all along. There's a reason why Hollywood hasn't embraced x264 or x265 wholesale, despite both being legally free to use.

    In the end, it just doesn't matter.
  15. Originally Posted by sophisticles View Post
    This ensures uniform QP throughout the file, and if I were encoding something for public consumption,
    There is a reason why absolutely no professional uses CQP for public consumption. Have a look at any professional release on any media - blu-ray, web/streaming, or terrestrial or sat broadcast (Europe for AVC). They understand that CQP does not mean "constant quality", just like CRF does not mean "constant quality". It's a common misunderstanding. They understand that CQP is compromised in many ways. Professionals only use CQP for intermediates, at very high bitrates, because of the efficiency losses.

    for instance if I owned a site that sold videos, I would also employ a segmented video encoding strategy like professional optical media producers do. If I were using x264 or x265 (most likely the latter) I would only encode with closed GOPs 1 second long, I would do a single encode with QP 18 and the above settings, then watch the video to see if there are any scenes where the quality isn't up to snuff. If I found a scene where there were visual artifacts, I would either reencode just that scene with a lower QP, or I might reencode the whole thing using the zones option in x264/x265 to specify that a lower QP be used just for those scenes with visible quality shortcomings.
    x264/x265 zones are useful, but it takes a lot of time going back and forth, and more importantly it's not 100% VBV compliant - you can have buffer underruns/overruns, so you cannot use it properly for professional optical media or other VBV-constrained situations. Supposedly x264 was going to get proper segment encoding way back, but it never materialized.

    This is one of the main reasons I have at times considered encoder comparisons to be meaningless, like the famous MSU tests, because no matter which encoder wins it means next to nothing to a professional.
    They are limited in scope, with many issues. But all tests have some limited value if you pick them apart. You don't necessarily have to put a lot of weight on them. You take everything in context, and when you combine those test results with other types of tests, you get a better picture of what is going on.


    You give the average guy x264 (which people on boards such as this one seem to have a love affair with) and a pro Apple's H264 encoder, and I guarantee you the pro's encode will look nicer, because the average guy will fire up x264, choose that idiotic CRF mode, choose a value, maybe tweak those absurd "psy optimizations" and call it a day. The pro will go through scene by scene and reencode every scene that doesn't look perfect until the whole thing looks pristine.
    Well, a pro will spend more time doing things; it's common sense, since it's their job.

    But not Apple's h264 encoder. Professionals on Mac/FCP/X boards all tend to use x264. Also, Apple's h264 encoder does not have proper segment encoding, nor does it even have zones.


    Same thing with the guys that think they can restore video with avisynth, you see them all the time, posting avisynth scripts, trying to tweak parameters and many of them believe that's how "Hollywood" does it. True restoration is long, tedious, time consuming work, often they will work with TIFFs, going frame by frame, manually adjusting each frame in a photo editor until they are happy with the results.
    It's not exclusively one or the other. They are just tools.

    Pros use the best tools at hand, just like professionals in all areas, not just AV-related ones. "It's not one or the other." That means a combination of tools in the workflow, including Photoshop, rotoscoping, compositing, in-house custom restoration software, and scripts - yes, sometimes avisynth is used in Hollywood restoration for parts of the workflow. Avisynth is good at what it does. It's not so good at other operations. It's a tool that has various pros and cons, just as Photoshop or any other software isn't suitable for some operations.

    I know grading is done on a scene by scene basis, one segment at a time, as is most filtering.
    Yes

    There's a reason why Hollywood hasn't embraced x264 or x265 wholesale, despite both being legally free to use.
    It's not "free" for commercial purposes, there are various licencing fees
  16. Originally Posted by poisondeathray View Post
    There is a reason why absolutely no professional uses CQP for public consumption. Have a look at any professional release on any media - blu-ray, web/streaming, or terrestrial or sat broadcast (Europe for AVC). They understand that CQP does not mean "constant quality", just like CRF does not mean "constant quality". It's a common misunderstanding. They understand that CQP is compromised in many ways. Professionals only use CQP for intermediates, at very high bitrates, because of the efficiency losses.
    For optical media, as you pointed out earlier, there is a maximum file size they are allowed to hit, and so they use a bit-rate-based encoding mode to ensure they remain within the bit rate ceiling, but they definitely use segmented video encoding techniques, especially on the better-done releases; it's obvious just from watching the video.

    For web streaming and broadcast, based on what I have seen, it seems like they don't give a flying fig; I have seen TV shows in prime time on a major network where there were artifacts and macro-blocking galore.

    x264/x265 zones are useful, but it takes a lot of time going back and forth, and more importantly it's not 100% VBV compliant - you can have buffer underruns/overruns, so you cannot use it properly for professional optical media or other VBV-constrained situations. Supposedly x264 was going to get proper segment encoding way back, but it never materialized.
    Like I said, pros don't use these encoders; they spend the money on a pro-caliber encoder, like Main Concept; even the x265 people realize that their encoder is substandard and allow encoding using Intel's SVT.

    But not Apple's h264 encoder. Professionals on Mac/FCP/X boards all tend to use x264. Also, Apple's h264 encoder does not have proper segment encoding, nor does it even have zones.
    That's because the guys on the Mac/FCP/X boards are hobbyists that do not know what they are doing. I have seen 1080p @ 5 Mbps that was some of the cleanest, most pristine work I ever laid eyes on.

    An encoder doesn't necessarily have to explicitly support segmented encoding in order to do segmented encoding, it just makes things a lot easier and less time consuming. You can do segmented encoding with x264/x265, if you're willing to take the time and here's how:

    Do a test encode of a video using a base setting; since you guys like CRF so much, let's say CRF 20.

    Watch the video and see if there are any areas that you would like to see higher quality or where noticeable visual artifacts exist.

    Make a note of the start and end time of where these points are.

    Use ffmpeg to create chunks of the video at the points where the video needs higher quality, choosing a lower CRF for those sections, for instance maybe CRF 18.

    After all the segments are done, go back and check the quality. If you're happy with it, then use ffmpeg to concatenate them to a final product. If you're not happy with them, go back and repeat until you are happy with the results.
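    The steps above can be sketched as ffmpeg invocations. Everything here - file names, timestamps, segment boundaries, CRF values - is hypothetical, and the commands are only constructed, not run; also note the concat demuxer only joins cleanly when every segment shares identical codec parameters, which is why each piece is re-encoded rather than stream-copied.

    ```python
    # Sketch of the segmented-CRF workflow described above.

    def encode_segment(src, start, end, crf, out):
        # build one ffmpeg re-encode command for a time range
        return ["ffmpeg", "-ss", start, "-to", end, "-i", src,
                "-c:v", "libx264", "-crf", str(crf), out]

    # base quality at CRF 20, with one troublesome scene redone at CRF 18
    cmds = [
        encode_segment("source.mkv", "00:00:00", "00:12:30", 20, "seg1.mp4"),
        encode_segment("source.mkv", "00:12:30", "00:13:10", 18, "seg2.mp4"),
        encode_segment("source.mkv", "00:13:10", "00:48:00", 20, "seg3.mp4"),
    ]

    # concat demuxer list file, then a lossless join of the finished segments
    concat_list = "\n".join(f"file '{n}'" for n in ("seg1.mp4", "seg2.mp4", "seg3.mp4"))
    join_cmd = ["ffmpeg", "-f", "concat", "-safe", "0", "-i", "list.txt",
                "-c", "copy", "joined.mp4"]

    for c in cmds:
        print(" ".join(c))
    print(concat_list)
    print(" ".join(join_cmd))
    ```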

    It's not "free" for commercial purposes; there are various licensing fees
    LOL!!! It is GPL'd software, it is free for all purposes. You are confusing the x264 LLC and x265 LLC licensed variants with the MPEG-LA licensing costs associated with the H264 and H265 standards, which have patents owned by various corporate entities.

    X264 and X265 are GPL'd software, free to use, free to modify and free to include in your own software, so long as you provide the source code to the original, the source code to any modifications you make and the GPL license with your product.

    X264 LLC and X265 LLC came about because of the way the GPL reads: if one were to integrate a GPL'd program into a non-GPL'd program, the GPL demands that the source for the non-GPL'd program also be made available. So, for instance, when the Pegasys people decided to add x264 support to TMPGEnc, they would have also needed to release the source code to any program that integrated x264 into their framework.

    Enter X264 LLC, which is released under a different license. This allowed companies to incorporate x264 into their product without having to release the code to their main program; I don't know about x264, but I do know, based on what reps from x265 have said, that the X265 LLC license even allows a company to make modifications to x265 without needing to release their modifications upstream.

    The "licensing fees" you are alluding to are MPEG-LA patent fees, which do not depend on which encoder or decoder you use; they are standards dependent, meaning that you pay the same fee whether you use X265 or Main Concept or Ateme or a hardware encoder or whatever.

    The "free" I was talking about is the software fee associated with a specific app; for instance, Intel's HEVC license was 5 grand the last time I checked, and Main Concept's encoder costs $1500, with different fees for broadcasting, etc. The GPL'd variants of x264 and x265 are free to use in these scenarios, and pros still choose not to use them.

    Because they are like a free cheeseburger, they are good enough but eventually they will make you want to s***.
    Quote Quote  
  17. Originally Posted by sophisticles View Post

    For web streaming and broadcast, based on what I have seen, it seems like they don't give a flying fig; I have seen TV shows in prime time on a major network where there were artifacts and macro-blocking galore.
    And just think - if they used CQP, it would be even worse !



    Like I said, pros don't use these encoders, they spend the money on a pro caliber encoder , like Main Concept; even the x265 people realize that their encoder is substandard and allow encoding using Intel's SVT.
    Have you seen any good comparisons with SVT? MSU has a bunch, but they are of limited value




    But not Apple's h264 encoder. Professionals on Mac, FCP/X boards all tend to use x264. Also, Apple's h264 encoder does not have proper segment encoding, nor does it even have zones
    I have seen 1080P @ 5mb/s that was some of the cleanest, most pristine work I ever laid eyes on.
    I doubt it. Definitely not the Apple encoder. It's the absolute worst AVC encoder.

    "Clean" doesn't necessarily indicate a good reproduction of the source.

    Did you do a proper "apples to apples" comparison? Did you look at the source? What if encoders "B" and "C" achieved 2x more "pristine"?

    An encoder doesn't necessarily have to explicitly support segmented encoding in order to do segmented encoding, it just makes things a lot easier and less time consuming. You can do segmented encoding with x264/x265, if you're willing to take the time and here's how:

    Do a test encode of a video using a base setting; since you guys like CRF so much, let's say CRF 20.

    Watch the video and see if there are any areas that you would like to see higher quality or where noticeable visual artifacts exist.

    Make a note of the start and end time of where these points are.

    Use ffmpeg to create chunks of the video at the points where the video needs higher quality, choosing a lower CRF for those sections, for instance maybe CRF 18.

    After all the segments are done, go back and check the quality. If you're happy with it, then use ffmpeg to concatenate them to a final product. If you're not happy with them, go back and repeat until you are happy with the results.
    Yes, but this doesn't work smoothly all the time, because of the way ffmpeg cuts.

    There are often multiple issues, especially on streams with non-IDR frames, but also on normal streams. (There are ways to do this properly, but you have to cut on IDR frames, using other tools.)

    Also it's very clunky and time consuming. Proper segment encoding has a nice GUI; you can do it right there all at once, preview the results, and adhere to VBV constraints
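    One workaround for the cut-point problem (a sketch; the timestamps are hypothetical): force IDR frames at the intended segment boundaries during the base encode, so that later stream-copy cuts at those exact timestamps start on an IDR frame.

```shell
# Force keyframes at the planned segment boundaries; -c copy cuts made
# at these timestamps will then begin on an IDR frame and join cleanly.
ffmpeg -i source.mkv -c:v libx264 -crf 20 \
       -force_key_frames 00:10:00,00:12:30 -c:a copy base.mkv
```

    This only helps if you know the boundaries in advance, which is why a dedicated segment-encoding GUI is still less clunky.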



    It's not "free" for commercial purposes; there are various licensing fees
    LOL!!! It is GPL'd software, it is free for all purposes. You are confusing the x264 LLC and x265 LLC licensed variants with the MPEG-LA licensing costs associated with the H264 and H265 standards, which have patents owned by various corporate entities.

    The "licensing fees" you are alluding to are MPEG-LA patent fees, which do not depend on which encoder or decoder you use; they are standards dependent, meaning that you pay the same fee whether you use X265 or Main Concept or Ateme or a hardware encoder or whatever.
    Yes, I'm talking about MPEG-LA fees.

    When you do it for commercial purposes, MPEG-LA charges you a fee based on the product. When I make a BD for distribution, the fee depends on tiers based on the number of units replicated.

    Small "professional" outfits, such as some local video producer doing maybe wedding videos, often fly under the radar.
    Quote Quote  
  18. He is successfully derailing from his previous abysmal amount of bs statements, do not fall for that.
    Quote Quote  
  19. Originally Posted by poisondeathray View Post
    Have you seen any good comparisons with SVT ? MSU has a bunch , but they are of limited value

    If only someone on these very boards had tested SVT-HEVC:


    https://forum.videohelp.com/threads/392544-Intel-SVT-HEVC-encoder-test


    You probably missed it; it didn't get much attention. I received zero replies, and some of the files were only viewed 138 times.
    Quote Quote  
  20. Thanks, I missed it. 139 now. I'll take a closer look at it later.

    Which SVT version did you use ?


    Because, meanwhile I was doing some tests today too . Some early observations

    So far it looks fast, even at the slower presets (like 1 or 2), even on a meagre desktop computer with 4C/8T. (It's supposed to be a zillion times faster than x265 with the same quality as veryslow, if you believe the marketing flyer - but maybe that's on a monster server, and they only tested (C)QP. Still, it's supposedly 100-200x faster.) I didn't take a close look at speed, but it's easily 3-4x faster than x265 "slower" when using "-encMode 2"
    https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/aws-visual-c...hnology-wp.pdf
    https://github.com/OpenVisualCloud/SVT-HEVC/blob/master/Docs/svt-hevc_encoder_user_guide.md

    No CRF mode, only QP and 1-pass VBR.

    Encodes are clean, with almost a denoised look to them - loss of fine details. There seems to be excessive blurring and problems with dark areas, banding, and fades, worse than x265, at least with -tune 0. Using -brr 0 (disabling the bitrate reduction mode) does not seem to recover the details or fix the banding. I only tested 1080p, but I can't help but think that CRF and AQ would help in those types of scenarios (they do with x264/x265) - and so would the ability to control the strength, not just on/off, and the ability to use it at all resolutions

    ImproveSharpness
    This is a visual quality knob that allows the use of adaptive quantization within the picture and enables visual quality algorithms that improve the sharpness of the background. This feature is only available for 4k and 8k resolutions (no support for –tune 1 or -tune 2)
    0 = OFF, 1 = ON
    BitRateReduction
    Enables visual quality algorithms to reduce the output bitrate with minimal or no subjective visual quality impact. (no support for –tune 1 or -tune 2)
    0 = OFF, 1 = ON
    I'm thinking -tune 2, optimized for VMAF, might yield better results; I'll try that next. My early impressions are that it looks like early development and could be improved - similar problems to what x265 had a few years ago. Less control/customizability than x265, but x265 has had a head start. x265 also had problems with excessive blurring, banding, and fades (and still does to an extent), but with the correct settings you can overcome that now
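    For reference, a command line along the lines of what I tested, using the sample app and the flags discussed above. This is a sketch based on my reading of the user guide linked earlier; the input/resolution/fps values are placeholders, and -rc 0 (constant QP) plus the I/O flags should be checked against your build of SvtHevcEncApp:

```shell
# Slower preset (-encMode 2), constant QP 28, visual-quality tuning,
# bitrate reduction disabled; raw YUV in, raw HEVC bitstream out
SvtHevcEncApp -i input.yuv -w 1920 -h 1080 -fps 24 \
              -rc 0 -q 28 -encMode 2 -tune 0 -brr 0 \
              -b output.hevc
```

    Swapping -tune 0 for -tune 2 is the VMAF-optimized variant I want to try next.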



    even the x265 people realize that their encoder is substandard and allow encoding using Intel's SVT.
    That is what got me thinking .

    This doesn't make logical sense to me, as they are "competitors" - at least in a limited sense. You might ship a competitor's product alongside yours only if you *know* yours is bad; that would make more sense. When you buy a Ford, does the dealer hand you the keys to a Chevy to go along with your purchase? Only if they know it's junk

    Looking around, it seems like SVT-HEVC encodes fast at the expense of quality, and my early tests and impressions are beginning to look like that too. It's early days, and the speed is really nice. But they need to fix the blurring and banding (and that was with 10-bit HEVC, which is less prone to banding). For me that's the most glaring problem.
    Last edited by poisondeathray; 23rd Jun 2019 at 17:29.
    Quote Quote  
  21. Dinosaur Supervisor KarMa's Avatar
    Join Date
    Jul 2015
    Location
    US
    In the AV1 encoding world, SVT also has an AV1 encoding variant, and it's generally considered fast but the worst, maybe above the slowest x264. Aomenc (libaom) is considered the best but slowest AV1 encoder out there.
    Last edited by KarMa; 25th Jun 2019 at 01:29. Reason: swapped a for an
    Quote Quote  
  22. SVT performance probably scales much better on 2P servers than other software solutions. I think it's meant for the speed-over-quality tradeoff. But for SVT-HEVC, even at the highest quality, slowest setting, there are the same issues with detail loss, blurring, and banding. I think NVEnc with Quadros makes a more cost-effective solution for those fast, multiple-stream server scenarios
    Quote Quote  


