VideoHelp Forum




Page 3 of 4
Results 61 to 90 of 92
  1. I reassembled everything again and the results were the same. The file sizes are different, but the picture is the same. By the way, here is another strange thing: the video is the same, but the file size may differ by a few bytes after encoding.
  2. Originally Posted by gelo333 View Post
    I reassembled everything again and the results were the same. The file sizes are different, but the picture is the same. By the way, here is another strange thing: the video is the same, but the file size may differ by a few bytes after encoding.
    This is not supposed to happen. The encoder is deterministic - same results every time. The output should be bit-for-bit identical.

    You have to enable non-deterministic mode on purpose for results to vary. This switch is off by default:

    Code:
          --non-deterministic     Slightly improve quality of SMP, at the cost of repeatability
    Maybe something else is wrong with your computer? Overclocked?


    On the script side, some filters add "randomness" on purpose - some types of noise generation and dithering filters, for example. If there is a "random" seed, encoding the same script may produce very slightly different results.
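    A quick way to check this on your side is to run the same encode twice and compare the outputs bit for bit by hashing them. A minimal sketch (the encode file names are hypothetical):

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names: two encodes of the same script with identical settings.
# If the encoder is deterministic, the two digests match exactly.
# print(sha256_of("encode1.264") == sha256_of("encode2.264"))
```

    If the digests differ, something outside the encoder (script randomness, the container, hardware) is injecting the variation.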
  3. Side note: the container might not be deterministic (IIRC, at least mkvtoolnix adds a random number).
    users currently on my ignore list: deadrats, Stears555, marcorocchini
  4. I was already asked on the first page about overclocking and overheating of the computer. No, not even close to overheating. The processor is not overclocked. Perhaps there is only something wrong with the motherboard; it is no longer new. Could the motherboard be at fault? The container and codec are standard from the FFMPEG 6.0 package. Clarification: in the video file properties, the file size is different, but the disk space is the same.

    System parameters.
    Intel Xeon CPU E3-1230 V2
    Gigabyte H77-DS3H motherboard
    8 GB DDR3 RAM
    Windows 10 Home 22H2
  5. The container and codec are standard from the FFMPEG 6.0 package. Clarification: in the video file properties, the file size is different, but the disk space is the same.
    Any details on that?
    How much of a difference are we talking about? What container is used?
    Like I wrote before, depending on the settings, some containers will have random numbers in them, so even just remuxing a file will not give the exact same output.
    (the extracted streams would still be identical)
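    One way to check this is to take the container out of the comparison: extract the elementary video stream from each output and hash those instead of the .mp4/.mkv files. A sketch (file names are placeholders):

```
ffmpeg -i out1.mp4 -map 0:v:0 -c copy -f h264 out1.h264
ffmpeg -i out2.mp4 -map 0:v:0 -c copy -f h264 out2.h264
```

    If the .h264 hashes match while the container hashes differ, the variation comes from the muxer, not the encoder.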

    Since this whole thread does not contain full details about the calls, tools, input and output used, I wish everyone much fun playing the 'wild guessing' game.

    Cu Selur

    PS: if you use GPU filters, also make sure the GPU isn't overclocked. I once spent a few weeks searching with a Hybrid user for why he got random artifacts, until we realized his GPU was factory overclocked; after clocking it down to the manufacturer's stock values, the issues disappeared.
  6. Originally Posted by Selur View Post
    I once spent a few weeks searching with a Hybrid user for why he got random artifacts, until we realized his GPU was factory overclocked
    Yes, this is true. I remembered that I have a video card that is overclocked at the factory - the so-called Black Edition. Full name:
    XFX AMD RX570 4 GB Black Edition.

    KNLMeansCL uses the GPU.

    I have MSI Afterburner installed, which can reduce the frequency. I need to understand which parameters to reduce and what values to set.

    Last edited by gelo333; 16th Nov 2023 at 10:22.
  7. I'm using an MP4 container.

    Codec Libx264. I tried x264 mod by Patman in the Staxrip application, the results were the same. I can also use libx265.
  8. Originally Posted by gelo333 View Post
    I'm using an MP4 container.

    Codec Libx264. I tried x264 mod by Patman in the Staxrip application, the results were the same. I can also use libx265.
    "results were the same" - as in encodes are still slightly different each time despite resetting the GPU overclock? or encoding results are now the same (the problem was GPU overclock)?
  9. Originally Posted by poisondeathray View Post
    Originally Posted by gelo333 View Post
    I'm using an MP4 container.

    Codec Libx264. I tried x264 mod by Patman in the Staxrip application, the results were the same. I can also use libx265.
    "results were the same" - as in encodes are still slightly different each time despite resetting the GPU overclock? or encoding results are now the same (the problem was GPU overclock)?
    I haven't changed anything yet because I don't understand anything about it.
  10. Try whether lowering the core clock from 1286 MHz to 1150 MHz changes anything.
  11. There is no change. I'm not even sure that there is a reduction in encoding speed. But the frequency is exactly 1150 MHz; this can be seen in GPU-Z.
  12. Then your problem might not be due to an overclocked GPU.
  13. Another attempt to show that there is no relationship between sharpening and the presence of obvious stripes.

    Original. Pay attention to the chin, neck, wrist.
    chin - defects are barely noticeable
    neck - no defects
    wrist - defects are barely noticeable


    Settings: FineSharp(mode=1, sstr=2.0, cstr=0.2, xstr=0, lstr=1.8, pstr=7, ldmp=0.1). I will increase "sstr" from 2.0 to 2.5.

    sstr=2.0
    chin - defects are barely noticeable
    neck - no defects
    wrist - no defects



    sstr=2.1
    chin - no defects
    neck - no defects
    wrist - defects are barely noticeable



    sstr=2.2
    chin - defects are visible
    neck - defects are visible
    wrist - defects are visible



    sstr=2.3
    chin - defects are visible
    neck - no defects
    wrist - defects are barely noticeable



    sstr=2.4
    chin - defects are barely noticeable
    neck - no defects
    wrist - defects are barely noticeable



    sstr=2.5
    chin - no defects
    neck - no defects
    wrist - defects are visible
    Last edited by gelo333; 22nd Nov 2023 at 05:15.
  14. Originally Posted by gelo333 View Post
    Another attempt to show that there is no relationship between sharpening and the presence of obvious stripes.

    You mean after qp 26 encoding. There is some expected "randomness" as a result of low bitrate lossy encoding; this was discussed earlier - see post #2 if you've forgotten. Check other frames within the same encode, and between the sstr encodes, and you will see deviations as well. The higher the bitrate, the higher the quality of the encoding settings, and the closer the IPB ratios => the more similar to the source and the more predictable the results. I get slightly different results than you, again at qp 26, but there is some "randomness" and inconsistency that you would expect at qp 26. E.g. qp 10 is less "random"; qp 0 is not "random" at all - all the stripes and defects remain.

    Increasing sstr increases the severity of the sharpening artifacts that you send to the encoder - and that relationship is very predictable. Try qp 0; it's 100% definitive proof that increasing sstr from 0 to 1, to 2, to 2.2, to 2.5, to 3, etc. increases the artifacts. qp 0 eliminates all the "randomness" of low bitrate lossy encoding. You get exactly what you send.
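    For instance, a lossless control encode of the same script could be made with a command along these lines (file names are placeholders; an x264 build with AviSynth input support is assumed, and the rest of your usual settings can stay as they are):

```
x264 --qp 0 --preset medium --output control_qp0.mkv script.avs
```

    Repeating this at --qp 26, 10, 5 and 0 shows the progression from "random" to fully reproduced output.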

    This is the script output with increasing sstr, cropped to the region of interest with nearest-neighbor enlargement x4. As you can see, you're sending significant oversharpening artifacts ("stripes") and "hoping" that the encoder deals with them appropriately. It's just common sense - stronger sharpening leads to stronger oversharpening artifacts. Not a conventional compression strategy... Most people wouldn't send oversharpened "garbage" to the encoder in the first place - i.e. fix your script. See post #58 again.



    Originally Posted by poisondeathray View Post
    It's a more predictable strategy to filter out the problems in the source to begin with than to blindly "hope" that the artifacts are taken care of by the encoder. You have more control over the filtering stage to deal with artifacts than over how the encoder deals with them. But if you don't even look at the preview, you will not know if there are artifacts remaining that you didn't address. Or, conversely, you will not know that you used too-strong denoising and removed details that didn't need to be removed. Most people will agree it's better to use the correct filters for that source, instead of sending problems to the encoder and "hoping" the encoder deals with them correctly.
    What you're calling "defects" or "stripes" are actually "details" to the encoder. When there are no "stripes" in the lossy result, that is actually a compression artifact, because it's different from the input. There is not enough bitrate to preserve all the stripes. Why send "stripes" and garbage in the first place? Just don't sharpen so much.

    You have additional problems. Since you get "random" results each time in the first place - aka non-deterministic (different filesizes using the same script settings and the same encode settings) - how do you know it's not the "randomness" of your computer that produced the results you've been getting, rather than the sstr? Another way to put it: encode the same script, with the same encode settings, 10 times. If you get different results every time, you cannot draw any solid conclusions, because there are some other factor(s) contaminating your results.
  15. Another way to put it is encode the same script , same encode settings 10 times
    The picture will be the same. The file size will also be the same. But if I restart the computer, or do all the operations again in another folder, the file size may differ by no more than 1 kilobyte. But the picture is still the same.

    I, of course, used the preview in the AvsPmod and Staxrip applications. The results after encoding are not consistent with the preview.

    As you can see you're sending significant oversharpening artifacts ("stripes") and "hoping" that the encoder deals with them appropriately. It's just common sense - stronger sharpening leads to stronger oversharpening artifacts.
    I expect KNLMeansCL to remove these stripes.
    Last edited by gelo333; 22nd Nov 2023 at 15:58.
  16. Originally Posted by gelo333 View Post
    I, of course, used the preview in the AvsPmod and Staxrip applications. The results after encoding are not consistent with the preview.
    Not after lossy encoding - that's what "lossy" implies: quality loss. If the results don't look 100% identical after qp 0 encoding, then there is something wrong. The lower the bitrate and the lower the quality of the encoding settings, the more deviation from the preview.

    Instead of preserving the stripes (because that's what your script is sending), lossy encoding only preserves some of the "stripes", on some frames, on some parts of frames. Removing the "stripes" in your case is actually an encoding artifact.

    I expect KNLMeansCL to remove these stripes.
    You enhance "stripes" with FineSharp. That APNG in the post above is FineSharp applied after KNLMeansCL:

    Code:
    LSmashVideoSource("HHH.mp4")
    Spline36ResizeMT(480, 270)
    KNLMeansCL(D=1, A=1, h=2, device_type="auto")
    FineSharp(mode=1, sstr=x, cstr=0.2, xstr=0, lstr=1.8, pstr=7, ldmp=0.1)
    Where "x" in sstr=x is the changing value of sstr
  17. Removing the "stripes" in your case is actually an encoding artifact
    It doesn't matter. I need to get predictable encoding. Then the question becomes different: how do I get rid of the artifact blur?

    At sstr=2.2, stripes appeared on the neck. They are not in the original. They are not present at other sstr values. Why did this artifact appear here? And why did all the other videos show artifact blur instead?

    "sstr" values of 2.0 - 2.5 are not oversharpening. That is an average amount. Moreover, a denoiser is used before it.
    Last edited by gelo333; 22nd Nov 2023 at 16:30.
  18. Originally Posted by gelo333 View Post
    Removing the "stripes" in your case is actually an encoding artifact
    It doesn't matter. I need to get predictable encoding. Then the question becomes different: how do I get rid of the artifact blur?
    Use a higher bitrate and the stripes will appear all over - similar to the APNG.

    At sstr=2.2, stripes appeared on the neck. They are not in the original. They are not present at other sstr values. Why did this artifact appear here? And why did all the other videos show artifact blur instead?
    The "stripes" are in the input source. Again, the "original" does not matter to the encoder. The encoder doesn't use the "original"; it uses the direct source, which is the output of the script - which has "stripes" everywhere and other oversharpening artifacts - and the encoder cannot preserve all the "stripes" at that low bitrate. So some are blurred, and others create different types of artifacts.

    But if you check other frames, there are other "stripes" in other parts of frames, and in different frames. The "2.2" does not matter. They occur at 2, 2.2, 2.5, etc. If not on that current frame, then on other frames - that's part of the "randomness": the bitrate used is not sufficient, so it cannot preserve all the "stripes" (to the encoder, those sharp edges are "details", not artifacts). Only some are preserved; others are blurred. If you used adequate bitrate, there would be stripes everywhere - similar to the input source (the script). Encode it at QP 20, 15, 10, 5, etc. and you can see the progression of stripes increasing - because the encoder is preserving more detail.

    It's analogous to encoding a grainy source (forget sharpening for a second). At low bitrates, the encoder cannot preserve all the grain, so you get a splotchy, almost "random", sparse grain pattern. Parts look blurred, but parts look like grain. The higher the bitrate you use, the more the grain returns to look like the source - more even, instead of splotchy or random.

    "sstr" values of 2.0 - 2.5 are not oversharpening. That is an average amount. Moreover, a denoiser is used before it.
    Most people would call it oversharpening on that source, because it enhances artifacts. Look at the APNG: the artifacts are clearly less severe at sstr 0 or 0.5.

    That's just the chin on that single frame, almost every other frame has other types of oversharpened artifacts as well . Direct correlation with sstr. I can highlight some problems if you like - you can see the problems definitely worsen at sstr 2.
  19. So, can we come to the conclusion that randomness still happens in the encoder's work? Is there any way to avoid this? I don't think that 800 kbps is insufficient for a 480x270 video.

    Can weightp, weightb and mixed references affect this? I don't really understand them yet.
    Last edited by gelo333; 22nd Nov 2023 at 18:12.
  20. Another question. Preset faster in FFMPEG and preset faster in x264 mod by Patman give different results. Is this fine? If I set preset medium in both methods, then both results are identical.
  21. Originally Posted by gelo333 View Post
    So, we can come to the conclusion that randomness still happens in the work of a coder?
    It's not really "random" per se - the encoder has fixed logic - but it may appear "random" in a generic sense. This was discussed earlier; if you've forgotten, look back a few pages. It's the nature of lossy encoding.

    => Remember you encoded qp 0, and it was completely consistent, but didn't "look as good". It doesn't "look as good" because what you're sending is not good in the first place - it's full of "stripes" and other artifacts from oversharpening.

    See post #22 about qp 0: "But there is no unpredictability, the changes have become logical, yes." The higher the bitrate and the better the encoding settings, the more predictable. At qp 0 it is 100% predictable: you get the same as the input, no variation at all, input = output. qp 1 would yield 99.99% predictable results, more predictable than, say, qp 10. But you would expect significantly more quality fluctuation and unpredictability at qp 26 - that is the nature of lossy encoding.


    Is there any way to avoid this? I don't think that 800 kbps is insufficient for a 480x270 video.
    See above. But low bitrate is only part of your problem with the inconsistencies in quality between frames (and within frames). The other main problem is your script and the sharpening.

    "enough bitrate" is a relative statement and depends on situation, source, context.

    The lower the bitrate: the more inconsistencies, the more fluctuations, and more quality issues


    Can weightp, weightb and mixed references affect this? I don't really understand them yet.
    Not really, and you're still ignoring the other main problem - you're sending artifacts, an oversharpened input, to the encoder. Look again at the APNG. A higher bitrate and/or better encoding settings will just preserve the "stripes" and other artifacts better, with more coverage, and more consistently. I.e. more artifacts, not fewer - because that's what you're sending.

    There are other types of artifacts that are enhanced in the script and sent to the encoder as well, such as typical ringing and edge artifacts.


    There are several other types and examples I can post too - they are all enhanced by sstr

    The point is that you are sending oversharpening artifacts to the encoder. What do you expect it to do? Again, sharpening is counterproductive for low bitrate encoding - it requires more bitrate (larger filesizes) and is prone to generating artifacts. If you didn't sharpen so much, the "stripes" and other artifacts would not look as bad.

    An encoder's primary job is to attempt to reproduce its input, not to remove "stripes" or other artifacts. When you send "stripes" (or any "artifact") as the input, it tries to reproduce them. At low bitrates, it succeeds in some areas on some frames (you see stripes) and fails on others (blurred). In this case, blurred might be preferred, but it's a deviation from the input - so "blurred" is actually an artifact for your script. Encoders are currently not "smart" enough to tell what is a "detail" and what is an "artifact"; the encoder thinks the "stripes" that you send are "details". At qp 0, it reproduces 100% of the artifacts and details you send. This is a big reason why people denoise/degrain for low bitrate encoding - to make it "easier" for the encoder. When you sharpen, you make it more difficult for the encoder; sharpening is counterproductive if your goal was to reduce filesize.

    If you send a red frame and get a purple frame as the result, that purple frame is an "artifact". Even if you "like" the purple frame better, it's a deviation from the input red frame. A perfect result would be the same red frame - that would be PSNR infinity. You could argue that if you wanted a purple frame, then why not send a purple frame in the first place? It's the same thing here: if you didn't want "stripes", then why are you sending "stripes" and other artifacts to the encoder? The encoder is going to try to reproduce the stripes (and it succeeds on some parts of some frames). The insufficient bitrate results in the "randomness" of the reproduction. If you test higher bitrates as I suggested - CQ 10, 5, 1, etc. - you get a closer approximation of the input, and much more predictable results: many more "stripes" and artifacts are reproduced, because that's what you sent to the encoder.
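    The "PSNR infinity" remark can be made concrete: PSNR = 10·log10(MAX²/MSE), so when the output equals the input the mean squared error is 0 and the PSNR is infinite; any deviation (blur, stripes, the purple frame) makes it finite. A toy sketch over raw sample values (the "frames" here are made-up numbers, not real video):

```python
import math

def psnr(reference, output, max_value=255):
    """PSNR in dB between two equal-length sequences of sample values.
    Identical sequences give MSE 0, i.e. infinite PSNR."""
    mse = sum((r - o) ** 2 for r, o in zip(reference, output)) / len(reference)
    if mse == 0:
        return math.inf
    return 10 * math.log10(max_value ** 2 / mse)

# Toy 8-bit "frames": a perfect copy scores infinity, any deviation is finite.
red_frame = [200, 30, 40] * 100
purple_frame = [150, 30, 150] * 100
```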


    Originally Posted by gelo333 View Post
    Another question. Preset faster in FFMPEG and preset faster in x264 mod by Patman give different results. Is this fine? If I set preset medium in both methods, then both results are identical.
    Not sure, because you had non-deterministic results before and could not track down the problem. If you do it 10 times and get the same results 10 times, then there might be a real difference in the faster preset. But maybe you do it twice on medium and now it's different the 2nd time (it shouldn't be, but you have some other issues on your system).

    You can also track down the core version and revision and look at the changelog. There might have been material changes that resulted in preset differences between versions.
  22. if you didn't want "stripes", then why are you sending "stripes" and other artifacts to the encoder ???
    I want to get an acceptable image. But the encoder works differently. If the stripes were consistent, I would simply reduce the sharpness, but there is a chance of getting too much inconsistent blur.
    This is the main point of my problem.
  23. There is another problem that I mentioned in passing earlier. I can't find any examples yet. In short, the problem looks like this.

    original video
    frame 101 - worst image
    frame 102 - improvement
    frame 103 - even greater improvement
    frame 104 - good

    one way to configure filters
    101 - worst
    102 - worst
    103 - worst
    104 - worst

    second way
    101 - worst image
    102 - improvement
    103 - even greater improvement
    104 - good
    everything looks almost the same as in the original.

    This probably depends on the arrangement of I/B/P frame types. How do I control this? I only found a dependency on "keyint" and "keyint_min". Increasing them improves the image. I use keyint=10/keyint_min=1, or 20/1, or 30/1, or 30/5. The problems (all of which I talked about in this thread) are most visible at 30/5.
  24. Originally Posted by gelo333 View Post
    if you didn't want "stripes", then why are you sending "stripes" and other artifacts to the encoder ???
    I want to get an acceptable image. But the encoder works differently. If the stripes were consistent, I would simply reduce the sharpness, but there is a chance of getting too much inconsistent blur.
    This is the main point of my problem.
    This was already answered

    Your main problem with consistency is inadequate bitrate .

    Oversharpening requires even more bitrate , otherwise it results in even more inconsistencies and other problems

    See the previous post; it explains why you are getting those results. Did you try it? QP 0 = 100% consistent for the encoder; QP 26 = inconsistent. In between, consistency is in between.


    And your other problem is your script and source. There are unfiltered inconsistencies sent to the encoder as well - for example, the cheek keyframe pop I mentioned earlier, things like that. Frame quality fluctuation in the source from its low bitrate.


    Originally Posted by gelo333 View Post
    There is another problem that I mentioned in passing earlier. I can't find any examples yet. In short, the problem looks like this.

    original video
    frame 101 - worst image
    frame 102 - improvement
    frame 103 - even greater improvement
    frame 104 - good

    one way to configure filters
    101 - worst
    102 - worst
    103 - worst
    104 - worst

    second way
    101 - worst image
    102 - improvement
    103 - even greater improvement
    104 - good
    everything is framed almost the same as in the original.

    This probably depends on the arrangement of I/B/P frame types. How do I control this? I only found a dependency on "keyint" and "keyint_min". Increasing them improves the image. I use keyint=10/keyint_min=1, or 20/1, or 30/1, or 30/5. The problems (all of which I talked about in this thread) are most visible at 30/5.

    This was already addressed earlier

    Everything relates to inadequate bitrate . In your source and/or your encode.

    You're not going to be able to fix this at lower bitrate ranges. This is the nature of temporal lossy encoding.



    So when the source already has fluctuations, you have to filter it if you want more consistent quality (the problem was low bitrate for your source in the first place).

    But even if you fix it with proper filters (pretend you have a perfect source now, or another perfect source), if you encode with low bitrates, you will generate new consistency problems - just like the source you have now. Do you see the problem? They all come from inadequate bitrate .




    Some things you can experiment with, but none will help very much, unless you have adequate bitrate:

    For example, you can use all I-frames - no P, no B. This should give more evenness to the quality, but the compression ratio will be poor; you'd probably need 2-3x the bitrate for a similar level of quality.

    It was mentioned earlier: you can override "auto" control using a --qpfile to manually specify parameters per frame for full control - not only the frame type, but the quantizer too. Another method of control is --zones: you can specify different settings for ranges of frames, or a bitrate multiplier, for example. This gives you more manual control if you don't like what the encoder is doing.

    A larger keyint means a longer GOP before a new keyframe is "forced" to start a new GOP. At low bitrates, a longer GOP can help efficiency (or give higher quality at a given bitrate), because you have more frequent, lower-cost P- and B-frames instead of "expensive" I-frames. The default keyint is 250. For low bitrate encoding, you generally want lots of B-frames and longer keyframe intervals, because it helps with efficiency. There are diminishing returns with keyint length.

    For higher bitrate encoding, you generally want a shorter keyframe interval. E.g. film BDs use 24 - a 1-second interval - and they have adequate bitrate.

    As mentioned earlier, you can experiment with adjusting the IP and PB ratios for a more even distribution of bitrate between frame types (in general, you want them closer to 1 for higher bitrates, because you can afford it; for example, many BDs are done with 1.1 and 1.1 instead of 1.4 and 1.3 - but you generally can't afford that at low bitrates, where you want lower-cost B-frames), or adjust the maximum number of consecutive B-frames (0 to 16; in general, you want more for lower bitrates).
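    For reference, the manual-control switches mentioned above take roughly the following forms (the frame numbers, QP values and ranges below are placeholders, not recommendations):

```
# qpfile.txt: one line per forced frame, "<frame> <type> <QP>"
# (types I/i/P/B/b; a QP of -1 lets the encoder choose)
0 I 18
30 P 24

x264 --qpfile qpfile.txt ...              per-frame type/quantizer control
x264 --zones 0,99,q=20/100,199,b=0.5 ...  fixed QP or bitrate multiplier per frame range
x264 --keyint 1 ...                       forces all I-frames
```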
  25. As a side note: sometimes it helps to look at a source using Retinex to see where the 'hidden' artifacts/details are that might 'pop up' through filtering (or that should be dealt with through filtering).
  26. Oversharpening requires even more bitrate , otherwise it results in even more inconsistencies and other problems
    So if I reduce the sharpness the output will be more predictable?
  27. Originally Posted by Selur View Post
    As a side note: sometimes it helps to look at a source using Retinex to see where the 'hidden' artifacts/details are that might 'pop up' through filtering (or that should be dealt with through filtering).
    I would accept a moderate presence of these artifacts. The main thing is to understand what the encoder will give me at the output. Right now I slightly change one setting in the filter, but the output result is very different, not at all logical. I can increase the sharpness, but the stripes, on the contrary, blur. Or vice versa: I reduce the sharpness, and the stripes become sharper. I reduce it even more, and the stripes disappear again.
    Quote Quote  
  28. For example, take the post where I showed pictures of changes in the sharpness settings. If we accept that blur is an artifact, then the conclusion is:
    artifacts - 2
    middle artifacts - 2
    no artifacts - 2

    Out of 6 attempts, only 2 files were encoded without artifacts. If this behavior is normal for x264, then it is not a program, it is junk!
  29. Originally Posted by gelo333 View Post
    Oversharpening requires even more bitrate , otherwise it results in even more inconsistencies and other problems
    So if I reduce the sharpness the output will be more predictable?
    Yes - it's been mentioned several times already.


    Originally Posted by gelo333 View Post
    If this behavior is normal for x264, then this is not a program, it is junk!
    It's the same behaviour for all encoders. It's the nature of lossy low bitrate encoding.

    Adjusting script and encoding settings can help a bit, but at the end of the day, bitrate trumps everything. High bitrate = more consistent for all encoders

    You can use something better as suggested, like HEVC (x265), AV1 (AOM-AV1 or SVT-AV1) . They are definitely better - especially at lower bitrates - but not "magical". If your script is not good, and/or you use low bitrates, you will still get problems
  30. After editing the posts, the messages are on the next page...
    Last edited by gelo333; 18th Dec 2023 at 21:57.


