VideoHelp Forum
  1. Originally Posted by Selur View Post
    Why is this not a measurement of video quality that I can see in ffmpeg console output?
    Because you make the assumption that the original video isn't already 3fps.
    If A already has an fps of 3, the above command would not change the frame rate, and thus no additional choppiness would be added.
    You are using information which, in your example only your brother has.
    You are changing my test. Again, when I get the two files from my brother, I KNOW that one is the original and one has been degraded in quality - but I don't know which is which. If I look at the ffmpeg console output of both files after the conversion, I see where one has 3fps and the other has 23fps. I didn't know before the conversion that A was 23fps and that B was going to be 3fps or vice-versa. All I know is that my brother changed ONE thing. I look at the two console outputs and the only difference I see is 23fps vs 3fps. Can't I say that the 3fps file has lesser quality?
  2. Originally Posted by Selur View Post
    The premise is that I run across two computer videos files where one of the files is the result of REDUCING the quality of the other. So, yes, you can say "B is better than A" and "A is worse than B", but, in my premise here, the difference is because of REDUCING the video quality.
    Yes, but I can reduce the playback quality of a video by:
    a. reencoding it while increasing the bitrate (simply because the compression isn't lossless)
    b. increasing the frame rate

    Again, you are introducing more than one factor in the ffmpeg command
    No, I'm not. No clue why you think that.
    Why do you think that doing your "a" and "b" is doing ONLY ONE THING?!?!?! Again, look at my two commands where I use "-b:v" and "-r"
  3. Originally Posted by Selur View Post
    The premise is that I run across two computer videos files where one of the files is the result of REDUCING the quality of the other. So, yes, you can say "B is better than A" and "A is worse than B", but, in my premise here, the difference is because of REDUCING the video quality.
    Yes, but I can reduce the playback quality of a video by:
    a. reencoding it while increasing the bitrate (simply because the compression isn't lossless)
    b. increasing the frame rate
    OK, what is your ONE ffmpeg statement where you use only ONE switch (in addition to "-i") to accomplish what you are saying?
  4. In all I mentioned I only would change the frame rate or bit rate and keep all other settings the same.

    Why do you think that doing your "a" and "b" is doing ONLY ONE THING?!?!?! Again, look at my two commands where I use "-b:v" and "-r"
    a. and b. are examples of things one can do.

    -------------------------

    Can you agree to these statements:

    1. Reencoding a video (with lossy compression, as ffmpeg uses by default) and increasing the bitrate will still lower the viewing quality?

    2. Reencoding a video and changing the frame rate will lower viewing quality?


    Can't I say that the 3fps file has lesser quality?
    Since you do not know which is the original, you don't know whether the original was slowed down or sped up. So the 23fps file might be the one with lesser viewing quality, since it plays the video nearly 8 times faster than intended.
    users currently on my ignore list: deadrats, Stears555
  5. Originally Posted by Selur View Post
    In all I mentioned I only would change the frame rate or bit rate and keep all other settings the same.

    Why do you think that doing your "a" and "b" is doing ONLY ONE THING?!?!?! Again, look at my two commands where I use "-b:v" and "-r"
    a. and b. are examples of things one can do.

    -------------------------

    Can you agree to these statements:

    1. Reencoding a video (with lossy compression, as ffmpeg uses by default) and increasing the bitrate will still lower the viewing quality?

    2. Reencoding a video and changing the frame rate will lower viewing quality?
    Your #1 seems to be doing TWO things: compressing AND changing the bitrate. But, show me how you would accomplish that in ONE ffmpeg statement using ONLY ONE switch other than "-i".

    Your #2 seems to be doing TWO things: reencoding AND reducing the frame rate. But, show me how you would accomplish that in ONE ffmpeg statement using ONLY ONE switch other than "-i".



    OK, one more try asking a different way. Consider this statement:
    Code:
    ffmpeg -i A.mp4 -b:v 20k Z.mp4
    Do you consider that statement to be changing: A) one thing B) more than one thing C) nothing

    Knowing NOTHING else beyond that ONE ffmpeg statement, would you say that: A) A.mp4 is better quality B) Z.mp4 is better quality C) A.mp4 and Z.mp4 are the same quality D) there is nothing in the statement that can tell you any difference in quality in A.mp4 and Z.mp4 (and tell me why)

    If you were to know NOTHING else beyond that ONE ffmpeg statement where ONE switch was used (but you don't know which was input and which was output) and you were to ONLY view the ffmpeg console output from A.mp4 and Z.mp4, would you find sufficient information in the console outputs to say that: A) A.mp4 is better quality B) Z.mp4 is better quality C) A.mp4 and Z.mp4 are the same quality D) there is nothing in ONLY the console outputs that can tell you any difference in quality in A.mp4 and Z.mp4 (and tell me why)



    Next, consider this statement:
    Code:
    ffmpeg -i A.mp4 -r 3 Z.mp4
    Do you consider that statement to be changing: A) one thing B) more than one thing C) nothing

    Knowing NOTHING else beyond that ONE ffmpeg statement, would you say that: A) A.mp4 is better quality B) Z.mp4 is better quality C) A.mp4 and Z.mp4 are the same quality D) there is nothing in the statement that can tell you any difference in quality in A.mp4 and Z.mp4 (and tell me why)

    If you were to know NOTHING else beyond that ONE ffmpeg statement where ONLY ONE switch was used beyond "-i" (but you don't know which was input and which was output) and you were to ONLY view the ffmpeg console output from A.mp4 and Z.mp4, would you find sufficient information in the console outputs to say that: A) A.mp4 is better quality B) Z.mp4 is better quality C) A.mp4 and Z.mp4 are the same quality D) there is nothing in ONLY the console outputs that can tell you any difference in quality in A.mp4 and Z.mp4 (and tell me why)
    Last edited by the_steve_randolph; 5th Jun 2021 at 03:09.
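The two one-switch statements in the post above can be tried end to end without any outside media, using ffmpeg's built-in test pattern. This is only an illustrative sketch: the filenames and the testsrc parameters are arbitrary choices, and it assumes an ffmpeg build with a default H.264 encoder for .mp4 output.

```shell
# Generate a known 23fps source from ffmpeg's synthetic test pattern
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=2:rate=23 A.mp4

# The two one-switch conversions under discussion
ffmpeg -loglevel error -y -i A.mp4 -b:v 20k Z1.mp4   # lower only the target bitrate
ffmpeg -loglevel error -y -i A.mp4 -r 3 Z2.mp4       # lower only the frame rate

# Read back the stream properties (the "console output" being compared)
ffprobe -v error -select_streams v:0 \
  -show_entries stream=bit_rate,r_frame_rate \
  -of default=noprint_wrappers=1 Z2.mp4
```

With a known source like this, both sides of the argument can check what actually ends up in the output files rather than reasoning about it abstractly.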
  6. Originally Posted by Selur View Post
    Can't I say that the 3fps file has lesser quality?
    Since you do not know which is the original, you don't know whether the original was slowed down or sped up. So the 23fps file might be the one with lesser viewing quality, since it plays the video nearly 8 times faster than intended.
    If you are looking at the console output and the ONLY difference you see is the fps, what else could have been changed when the "-r" is the ONLY switch used?
  7. Your #1 seems to be doing TWO things: compressing AND changing the bitrate. But, show me how you would accomplish that in ONE ffmpeg statement using ONLY ONE switch other than "-i".
    You can't change the bit rate without recompressing.

    Your #2 seems to be doing TWO things: reencoding AND reducing the frame rate. But, show me how you would accomplish that in ONE ffmpeg statement using ONLY ONE switch other than "-i".
    ffmpeg -i A.mp4 -r 3 Z.mp4
    recompresses and changes the bit rate; you would use '-c:v copy' to not change the compression and only change the fps.

    Do you consider that statement to be changing: A) one thing B) more than one thing C) nothing
    That statement will:
    a. recompress with the default encoder (currently x264 for mp4 container in ffmpeg iirc)
    b. change the bit rate during that recompression to 20k

    Knowing NOTHING else beyond that ONE ffmpeg statement, would you say that: A) A.mp4 is better quality B) Z.mp4 is better quality C) A.mp4 and Z.mp4 are the same quality D) there is nothing in the console outputs that can tell you any difference in quality in A.mp4 and Z.mp4 (and tell me why)
    A, since a lossy compression is used, which will always result in quality loss.

    If you were to know NOTHING else beyond that ONE ffmpeg statement where ONE switch was used (but you don't know which was input and which was output) and you were to ONLY view the ffmpeg console output from A.mp4 and Z.mp4, would you find sufficient information in the console outputs to say that: A) A.mp4 is better quality B) Z.mp4 is better quality C) A.mp4 and Z.mp4 are the same quality D) there is nothing in the console outputs that can tell you any difference in quality in A.mp4 and Z.mp4 (and tell me why)
    D. nothing in the console output of 'ffmpeg -i A.mp4' and 'ffmpeg -i Z.mp4' can tell me about quality differences since for that I would need to know more.

    Cu Selur
  8. If you are looking at the console output and the ONLY difference you see is the fps, what else could have been changed when the "-r" is the ONLY switch used?
    video was reencoded since '-c:v copy' wasn't used.
  9. Originally Posted by Selur View Post
    Your #1 seems to be doing TWO things: compressing AND changing the bitrate. But, show me how you would accomplish that in ONE ffmpeg statement using ONLY ONE switch other than "-i".
    You can't change the bit rate without recompressing.

    Your #2 seems to be doing TWO things: reencoding AND reducing the frame rate. But, show me how you would accomplish that in ONE ffmpeg statement using ONLY ONE switch other than "-i".
    ffmpeg -i A.mp4 -r 3 Z.mp4
    recompresses and changes the bit rate; you would use '-c:v copy' to not change the compression and only change the fps.

    Do you consider that statement to be changing: A) one thing B) more than one thing C) nothing
    That statement will:
    a. recompress with the default encoder (currently x264 for mp4 container in ffmpeg iirc)
    b. change the bit rate during that recompression to 20k

    Knowing NOTHING else beyond that ONE ffmpeg statement, would you say that: A) A.mp4 is better quality B) Z.mp4 is better quality C) A.mp4 and Z.mp4 are the same quality D) there is nothing in the console outputs that can tell you any difference in quality in A.mp4 and Z.mp4 (and tell me why)
    A, since a lossy compression is used, which will always result in quality loss.

    If you were to know NOTHING else beyond that ONE ffmpeg statement where ONE switch was used (but you don't know which was input and which was output) and you were to ONLY view the ffmpeg console output from A.mp4 and Z.mp4, would you find sufficient information in the console outputs to say that: A) A.mp4 is better quality B) Z.mp4 is better quality C) A.mp4 and Z.mp4 are the same quality D) there is nothing in the console outputs that can tell you any difference in quality in A.mp4 and Z.mp4 (and tell me why)
    D. nothing in the console output of 'ffmpeg -i A.mp4' and 'ffmpeg -i Z.mp4' can tell me about quality differences since for that I would need to know more.

    Cu Selur

    When you do this statement:
    Code:
    ffmpeg -i A.mp4 -b:v 20k -c:v copy Z.mp4
    that PREVENTS the bitrate change because "-c:v copy" means to NOT CHANGE THE VIDEO STREAM. When I do that statement and I view the console outputs of A.mp4 and Z.mp4, they have the SAME video bitrate. So, what are you claiming was changed with that statement?

    When you do this statement:
    Code:
    ffmpeg -i A.mp4 -r 3 -c:v copy Z.mp4
    that PREVENTS the frame-rate change because "-c:v copy" means to NOT CHANGE THE VIDEO STREAM. When I do that statement and I view the console outputs of A.mp4 and Z.mp4, they have the SAME video fps. So, what are you claiming was changed with that statement?
  10. You are right with the '-b:v 20k -c copy', sorry my mistake. (b:v is a video stream parameter)

    You are wrong with '-r 3 -c copy' this works fine, since '-r 3' is a container parameter.
  11. Originally Posted by Selur View Post
    You are right with the '-b:v 20k -c copy', sorry my mistake. (b:v is a video stream parameter)
    So, is my statement changing ONLY ONE THING and you can tell the difference in the console outputs?

    You are wrong with '-r 3 -c copy' this works fine, since '-r 3' is a container parameter.
    Actually, I mistyped that. It should have been "-r 3 -c:v copy". BUT, using both of those ("-c copy" and "-c:v copy"), the result is NO change in the fps in Z.mp4. If you believe that fps is a "container parameter", please post a link to an ffmpeg site page that states that. And/or, tell me where in the console output the fps information is in the header/metadata information and not in the stream information.
  12. So, is my statement changing ONLY ONE THING and you can tell the difference in the console outputs?
    Sure you can tell differences in the console output, but you can't tell which is the original and which is the reencode and thus those differences can't be used for conclusions.


    You are right, I confused '-r' with '-framerate', so you get frames added or removed.
    Still, without knowing which is the original, you have two options (assuming A has 23fps and B has 3fps): either
    1. A was the source and B was produced by reencoding and dropping frames, so B would provide a lower viewing quality.
    or
    2. B was the source and A was produced by reencoding and adding duplicate frames, so A would provide the lower viewing quality.
    Since you can't decide whether the case is 1. or 2. you can't tell which file offers a better viewing quality.
  13. Originally Posted by Selur View Post
    So, is my statement changing ONLY ONE THING and you can tell the difference in the console outputs?
    Sure you can tell differences in the console output, but you can't tell which is the original and which is the reencode and thus those differences can't be used for conclusions.


    You are right, I confused '-r' with '-framerate', so you get frames added or removed.
    Still, without knowing which is the original, you have two options (assuming A has 23fps and B has 3fps): either
    1. A was the source and B was produced by reencoding and dropping frames, so B would provide a lower viewing quality.
    or
    2. B was the source and A was produced by reencoding and adding duplicate frames, so A would provide the lower viewing quality.
    Since you can't decide whether the case is 1. or 2. you can't tell which file offers a better viewing quality.
    Once again, I AM NOT TRYING TO DETERMINE WHICH WAS THE ORIGINAL!!!!!!!!!!!!!

    I simply want to know: if I look at the ffmpeg console output and I ONLY see a difference in video bitrate, and I assume that nothing else has EVER been changed in the two files, is "video bitrate" a measurement of video quality where I can state that the video with the higher bitrate is better quality?
  14. Originally Posted by Selur View Post
    Still without knowing which is the original you got two options (Assuming A has 23fps and B has 3 fps), either:
    1. A was the source and B was produced by reencoding and dropping frames, so B would provide a lower viewing quality.
    or
    2. B was the source and A was produced by reencoding and adding duplicate frame, so A would provide the lower viewing quality.
    Since you can't decide whether the case is 1. or 2. you can't tell which file offers a better viewing quality.
    Concerning your statement #2, why are you saying that "A would provide lower viewing quality"? As stated above, B (converted from A at 23fps) at 3fps is jumpy. If B at 3fps is the "original" and it was converted to A by changing the fps to 23, then my understanding is that extra frames are created, but they are duplicates of one of the frames next to them. That is, in one second, B frames 1 & 2 & 3 would be converted in A to 1 & 1 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 3 & 3 & 3 etc. until there are 23 frames in that one second. In that case, I would think that those 23 frames in that one second in A, being only 3 DIFFERENT frames, would have the same viewing quality as the original 3 frames in B. But, that is not what I am asking/talking about.

    Now, if you were to have video X that was 23fps and it was converted to video B with 3fps and then B at 3fps was converted to A at 23fps, then, yes, A would look worse than X. But, that is NOT what I have been asking/talking about in this thread.
  15. Once again, I AM NOT TRYING TO DETERMINE WHICH WAS THE ORIGINAL!!!!!!!!!!!!!
    No clue why you are screaming,....

    I ONLY see a difference in video bitrate and I assume that nothing else has EVER been changed in the two files, is "video bitrate" a measurement of video quality where I can state the the video with the higher bitrate is better quality?
    No, not without knowing which was the original.

    I know you will complain about me getting too complicated for you, but try to bear with it:
    If you have an original O.
    And do a reencode A with a bitrate of 20k and a reencode B with a bitrate of 2000k
    and you know that:
    a. both are reencode of the same source
    b. both use the same reencoding configuration aside from the bitrate
    then you can take the bitrate as a quality measurement.
    But you need both of these (a., b.) to be true.
    If you are not sure of a., then one could have first converted O to another file with a bitrate of 10k and then to B using a bitrate of 2000k. In this case A with a bitrate of 20k would have better quality than B with a bitrate of 2000k.
    If you are not sure of b., the problem is that one of the encoding parameters might negate the bitrate difference.

    -> were you able to understand that?

    Cu Selur
  16. Now, if you were to have video X that was 23fps and it was converted to video B with 3fps and then B at 3fps was converted to A at 23fps, then, yes, A would look worse than X. But, that is NOT what I have been asking/talking about in this thread.
    Okay, good. So you do get, that based on the frame rate you can't tell whether frames were reduced or added to the file and thus frame rate info can't be used as a criteria to decide whether the viewing quality of the file is better than another file?
  17. Originally Posted by Selur View Post
    Once again, I AM NOT TRYING TO DETERMINE WHICH WAS THE ORIGINAL!!!!!!!!!!!!!
    No clue why you are screaming,....
    Because you are continuing to misunderstand what I am asking!

    I ONLY see a difference in video bitrate and I assume that nothing else has EVER been changed in the two files, is "video bitrate" a measurement of video quality where I can state the the video with the higher bitrate is better quality?
    No, not without knowing which was the original.

    I know you will complain about me getting too complicated for you, but try to bear with it:
    If you have an original O.
    STOP!

    THERE IS NO "O"!

    In my examples throughout this entire thread, A is converted to B. There is no "original", there is no "O". I don't care where A came from. I don't care if A is the original or if some other "O" is "the original". I don't want to think about, know about, or ask anything here about any file other than A and B. I START with A. I convert it to B by ONLY reducing the video bitrate. How do A and B compare?

    And do a reencode A with a bitrate of 20k and a reencode B with a bitrate of 2000k
    and you know that:
    a. both are reencode of the same source
    b. both use the same reencoding configuration aside from the bitrate
    then you can take the bitrate as a quality measurement.
    But you need both of these (a., b.) to be true.
    If you are not sure of a., then one could have first converted O to another file with a bitrate of 10k and then to B using a bitrate of 2000k. In this case A with a bitrate of 20k would have better quality than B with a bitrate of 2000k.
    If you are not sure of b., the problem is that one of the encoding parameters might negate the bitrate difference.

    -> were you able to understand that?

    Cu Selur
    Yes, I understand what you are saying. But, that is not what I am asking! In my examples/question, A is NOT converted from "O". B is NOT converted from "O". A, B, and O are THREE files! I only care about TWO files - A and B. I am ONLY asking about converting A to B!
  18. Originally Posted by Selur View Post
    Now, if you were to have video X that was 23fps and it was converted to video B with 3fps and then B at 3fps was converted to A at 23fps, then, yes, A would look worse than X. But, that is NOT what I have been asking/talking about in this thread.
    Okay, good. So you do get, that based on the frame rate you can't tell whether frames were reduced or added to the file and thus frame rate info can't be used as a criteria to decide whether the viewing quality of the file is better than another file?
    Again, you have misunderstood. I am not trying to compare A and B in how they were differently converted from O. In ALL of my examples and questions, there is no "O". There is only A and B.
    In my examples throughout this entire thread, A is converted to B. There is no "original", there is no "O". I don't care where A came from.
    If A is converted to B, normal convention is to say that A is the 'original' and B is the 'reencode'.

    I am ONLY asking about converting A to B!
    And just by looking at the ffmpeg header info (ffmpeg -i) there is nothing that can tell you whether A has better quality than B without knowing whether A was the 'original' or the 'reencode'.
    Looking at any of the information from 'ffmpeg -i' and trying to tell whether two files have the same quality is like tossing an (ideal) coin. You are right in 50% of the cases and wrong in the other 50%.
  21. Originally Posted by Selur View Post
    In my examples throughout this entire thread, A is converted to B. There is no "original", there is no "O". I don't care where A came from.
    If A is converted to B, normal convention is to say that A is the 'original' and B is the 'reencode'.
    OK, fine, that is understandable. But, beyond A and B, I don't care about any other files.

    I am ONLY asking about converting A to B!
    And just by looking at the ffmpeg header info (ffmpeg -i) there is nothing that can tell you whether A has better quality than B without knowing wether A was the 'original' or the 'reencode'.
    I don't think so. If my brother converts A to B by REDUCING the bitrate or the fps, and changes the names of the files to X and Y without telling me which is the "original", I can look at the console output and see that one of X or Y has a higher bitrate or fps. That is a fact. One will have 23fps and the other will have 3fps. I have tested and proved that in the past hours during our back and forth. And, knowing that INCREASING the bitrate or fps does not actually increase video quality (and discarding that possibility in my examples/questions), and knowing that bitrate or fps is the ONLY difference I see in the console outputs, why can't it be stated as fact that, of X and Y, the file with the higher bitrate or fps has better quality with everything else being equal?

    Sure, if the conversion was to INCREASE the fps from 3 to 23 (your example), the playback viewing quality would not be different. But, that increase would cause the converted file to have a larger O/S size. Why would somebody want to do that? My guess is that people are reencoding a video to decrease the size of the file (for a multitude of reasons), not increase the size without increasing quality (which is your example). Since doing this does not make any sense, why can't we assume that it doesn't happen for my supposedly simple/basic question here?

    I am not looking for all of the possible combinations or individual things that some idiot could do to a video. As stated in the very first sentence of my OP, I am new to this and trying to learn about what I would consider to be very basic information that I can start with when I want to compare two files. That is, from what I have learned from this thread and subsequent research, if the ONLY difference between two files after a conversion between those two files is bitrate, then, I should (in the vast majority of "reasonable" cases) be able to tell from ffmpeg console output that the file with the higher bitrate is better quality. And, fps is another basic quality measurement. If we can agree on those two measurements given the limited scenario that I am presenting in this thread, are there any other basic quality measurements that can be isolated in a conversion and where I am able to see that measurement difference in ffmpeg console output?
  22. Originally Posted by Selur View Post
    Looking at any of the information from 'ffmpeg -i' and trying to tell whether two files have the same quality is like tossing an (ideal) coin. You are right in 50% of the cases and wrong in the other 50%.
    In general, yes, where you don't know anything about the origin of the two files. But, again, that is not what I am asking about. I am trying to implement "the scientific method" or WTH is it called where you compare two things where you KNOW that they did not start as two different things and there is only one thing that is different about the two things. For this test/question/thread, the assumption is that A was converted to B. Nothing else.

    **** EDIT: OK, let me try that again. I am trying to create a scenario where everything not stated in the test/experiment is ignored and/or assumed to be something reasonable. The universe for this scenario is that there are these three facts: 1) there are two video files, 2) one was converted from the other with a single ffmpeg command using a single switch (other than "-i"), and 3) this conversion was done to reduce the quality. The test subject (me) does not know anything about the files except for those three facts only. What are the possible quality measurement(s) visible in ffmpeg console output that I can use to determine which of the two files has the better quality?
    Last edited by the_steve_randolph; 5th Jun 2021 at 05:03. Reason: clarifications
    If my brother converts A to B by REDUCING the bitrate or the fps, and changes the names of the files to X and Y without telling me which is the "original", I can look at the console output and see that one of X or Y has a higher bitrate or fps. That is a fact.
    A fact I agree on.
    One will have 23 fps and the other will have 3fps.
    Correct.
    And, knowing that INCREASING the bitrate or fps does not actually increase video quality (and discarding that possibility in my examples/questions), and knowing that bitrate or fps is the ONLY difference I see in the console outputs, why can't it be stated as fact that, of X and Y, the file with the higher bitrate or fps has better quality with everything else being equal?
    you overlooked that it can hurt the quality!
    By adding/dropping frames.
    By the needed recompression.

    Sure, if the conversion was to INCREASE the fps from 3 to 23 (your example), the playback viewing quality would not be different.
    I would disagree with that.
    But, that increase would cause the converted file to have a larger O/S size.
    That wouldn't be possible since you said that only one thing is changed, so the bitrate and thus the file size would still be the same.

    Why would somebody want to do that?
    Because people often do things that are not optimal, for various reasons.
    -> do you want to add as additional knowledge that the person who did the conversion only did objectively meaningful things?

    My guess is that people are reencoding a video to decrease the size of the file (for a multitude of reasons), not increase the size without increasing quality (which is your example).
    People often convert to other formats to get playback compatibility with different players. For example convert 10bit content to 8bit content and similar.

    Since doing this does not make any sense, why can't we assume that it doesn't happen for my supposedly simple/basic question here?
    Sure we can assume quite a bit, but you did not state that you did this.
    If you assume that, then the only reason to change the fps should be to be compatible with some playback device.
    So adding duplicate frames might be needed to achieve a specific fps, since the player wouldn't play back the content otherwise, and any fps different from the supported fps of the player would indicate a low-quality version (not caring about the image quality).
    Same with the bitrate: if you assume the content of the video is the same (color, resolution, etc., and neither has artifacts which would require a higher bitrate or lower the quality) and whoever reencoded used the optimal average bitrate, then bitrate would be a proper metric.

    I am not looking for all of the possible combinations or individual things that some idiot could do to a video.
    You should have started with that.


    If we can agree on those two measurements given the limited scenario that I am presenting in this thread, are there any other basic quality measurements that can be isolated in a conversion and where I am able to see that measurement difference in ffmpeg console output?
    Okay, let's assume the ideal human for a quick thought experiment:

    Then you can say:
    a. higher bitrate = better quality
    b. higher color sampling = better quality
    c. higher frame rate = better quality
    d. vfr = higher quality
    basically you always can say 'higher value' = 'higher quality'.

    Cu Selur
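The properties in that list (bitrate, frame rate, color sampling) don't have to be scraped from ffmpeg's console banner; ffprobe exposes them as named stream fields. A self-contained sketch, with the input again synthesized for illustration:

```shell
# Synthesize an input file so the example needs no outside media
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=2:rate=23 sample.mp4

# bit_rate, avg_frame_rate and pix_fmt correspond to items a., c. and b.
# of the list above (pix_fmt encodes the chroma subsampling, e.g. yuv420p)
ffprobe -v error -select_streams v:0 \
  -show_entries stream=bit_rate,avg_frame_rate,pix_fmt \
  -of default=noprint_wrappers=1 sample.mp4
```

Reading named fields this way avoids the ambiguity of eyeballing the banner output that much of this thread turns on.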
  24. I am trying to implement "the scientific method" or WTH is it called where you compare two things where you KNOW that they did not start as two different things and there is only one thing that is different about the two things. For this test/question/thread, the assumption is that A was converted to B. Nothing else.
    You also added a humongous assumption to that in your last post...
  25. Originally Posted by Selur View Post
    If my brother converts A to B by REDUCING the bitrate or the fps, changes the names of the files to X and Y without telling me which is the "original", I can look at the console output and see that one of X or Y has a higher bitrate or fps. That is a fact.
    A fact I agree on.
    One will have 23 fps and the other will have 3fps.
    Correct.
    And, knowing that INCREASING the bitrate or fps does not actually increase video quality (and discarding that possibility in my examples/questions), and knowing that bitrate or fps is the ONLY difference I see in the console outputs, why can't it be stated as fact that, of X and Y, the file with the higher bitrate or fps has better quality with everything else being equal?
    you overlooked that it can hurt the quality!
    By adding/dropping frames.
    By the needed recompression.
    I am not overlooking these possibilities. I am trying to tell you that I am wanting to reduce the variables in my questions/examples to drill down to some basic information

    Sure, if the conversion was to INCREASE the fps from 3 to 23 (your example), the playback viewing quality would not be different.
    I would disagree with that.
    OK, let me restate, given what I have already stated in previous posts. If I take a file that is 3fps and convert it to 23fps into another file. Since we already agree that converting in that direction turns one second of frames 1 & 2 & 3 at 3fps into frames 1 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 3 & 3 & 3 (and so on, until there is a total of 23 frames) at 23fps, then how would the playback viewing quality be different? If I am watching frame #1 for one-third of a second, how is that different from watching frame #1 displayed 8 times in one-third of a second?
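    For what it's worth, that duplication pattern can be sketched in a few lines; this assumes simple nearest-frame duplication (roughly what ffmpeg's fps filter does when increasing the rate), which is my assumption for this example:

    ```python
    # Sketch: which source frame each output frame shows when duplicating
    # 3 fps up to 23 fps over one second. Assumes simple nearest-frame
    # duplication (an assumption, roughly how ffmpeg's fps filter behaves).

    def duplication_pattern(src_fps, dst_fps):
        # Output frame i covers time i/dst_fps; show the source frame
        # whose display interval contains that moment (1-based numbering).
        return [int(i * src_fps / dst_fps) + 1 for i in range(dst_fps)]

    pattern = duplication_pattern(3, 23)
    print(pattern)
    print(len(pattern))  # 23 output frames in one second
    print(pattern.count(1), pattern.count(2), pattern.count(3))  # 8 8 7
    ```

    At 23fps the 3 source frames end up shown roughly 8, 8, and 7 times each, so frame #1 is on screen for about the same one-third of a second either way.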

    But, that increase would cause the converted file to have a larger O/S size.
    That wouldn't be possible since you said that only one thing is changed, so the bitrate and thus the file size would still be the same.
    Then, we don't agree. Isn't converting from 3fps to 23fps creating duplicate frames? Aren't these duplicated frames using up more O/S file space?

    Why would somebody want to do that?
    Because people often do things that are not optimal due to various reasons..
    -> do you want to add as additional knowledge that the person who did the conversion only did objectively meaningful things?
    YES! As I stated in the OP, I am new to this. Why should I confuse myself by trying to learn EVERY possibility at once? Divide and conquer. Have a specific idea/issue/problem and try to construct a question/scenario where you can eliminate a lot of noise so that I can focus in on one piece of information at a time. So, yes, to learn some basic things, I want to assume many things to be "meaningful" so that I can eliminate some larger things while focusing on a smaller thing.

    My guess is that people are reencoding a video to decrease the size of the file (for a multitude of reasons), not increase the size without increasing quality (which is your example).
    People often convert to other formats to get playback compatibility with different players. For example convert 10bit content to 8bit content and similar.
    Granted. But, that is actually the second part of what I am asking for in this thread (and will delve into that later). If my next step is to view A and B on my 1080p 20" monitor from my computer HDD, on my 20" monitor from my computer's internal Blu-ray player, on my 1080p 55" TV connected to the same video card with an HDMI cable, and on my 1080p 55" TV with my external Blu-ray player (which is connected directly to my TV), then that is a vastly more wide-open question/problem/solution. And, you are stating some information that might be the cause of problems and/or the thing to use/set to resolve the problems I would run into with those options to playback view the videos.

    As we have established so far, I NOW know that "bitrate" and "fps" are two quality measures that I can look at and/or change when playback viewing the videos and trying to determine which version has better quality (therefore allowing me to eliminate a change that reduced the quality too much). I didn't know that before starting the thread, and without knowing THE MOST basic quality measurements like "bitrate" and "fps", I can't have an intelligent start to resolving those more complicated attributes (frankly, those that you have kept bringing up that I tell you is not what I am asking about right now).

    I have to crawl before I can walk before I can run. "Device compatibility" is not a "crawl" issue. I would not have understood any discussions about it without having the building blocks of bitrate, fps, etc.

    Since doing this does not make any sense, why can't we assume that it doesn't happen for my supposedly simple/basic question here?
    Sure we can assume quite a bit, but you did not state that you did this.
    If you assume that then the only reason to change the fps should be to be compatible with some playback device.
    So adding duplicate frames might be needed to achieve a specific fps, since the player wouldn't play back the content otherwise, and any fps different from the supported fps of the player would indicate a low quality version. (not caring about the image quality)
    Same with the bitrate, if you assume the content of the video is the same (color, resolution, etc. both have no artifacts which would require higher bitrate or lower quality) and whoever reencoded used the optimal average bitrate then bitrate would be a proper metric.
    Yes, I just addressed that. Clearly we were not starting from the same question/test/problem, which is what I kept trying to explain.

    I am not looking for all of the possible combinations or individual things that some idiot could do to a video
    You should have started with that.
    How do you "start" when you are new at something, don't know where to start, and your question is "how do I start?" (i.e. "what are basic quality measurements?")? Yes, the OP might have been too general or not started in the right direction, but, again, I was just trying to "start" somewhere. Is there a more basic place to "start" when talking about "video quality" than bitrate, fps, etc.?


    If we can agree on those two measurements given the limited scenario that I am presenting in this thread, are there any other basic quality measurements that can be isolated in a conversion and where I am able to see that measurement difference in ffmpeg console output?
    Okay, let's assume the ideal human for a quick thought:
    LOL Nonono I am trying to eliminate the human factor by defining a test/universe where I can focus on one specific thing that I can use to build upon more understanding and more/better questions. Kinda like learning to drive a car. In that first lesson I don't need to be told how often drunk drivers will run stop signs or what to do if one of my tires blows out or what to do if I hydroplane in the rain. In my OP I am in the first lesson trying the hell to figure out where the pedals are, how to shift gears, and, whether or not closing my eyes is a good idea.

    Then you can say:
    a. higher bitrate = better quality
    b. higher color sampling = better quality
    c. higher frame rate = better quality
    d. vfr = higher quality
    basically you always can say 'higher value' = 'higher quality'.

    Cu Selur

    GREAT! That's what I was trying to ask for. Now, I will go visit Google and ask things like "ffmpeg change bitrate" and "ffmpeg change 'color sampling'" or "ffmpeg change 'frame rate'" or "ffmpeg change vfr". Prior to my OP I didn't know what terms and/or attributes (i.e. bitrate, fps, vfr, etc.) to use to get anything out of Google. Therefore, I started here with my (destined to be) imperfect OP, thinking that asking video "experts" on "videohelp.com" was a good place to start instead of hoping that Google knew what I was asking (which we know that many times it does not). I actually did start before the OP with Googling things like "ffmpeg video quality" and the results were using terminology that I didn't know the definition of (like "color sampling"), and there were tons of results and I had no clue which results to read and try to understand.
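    As a hedged starting point for those searches: the single-change conversions discussed in this thread each map to one ffmpeg option. The file names below are placeholders, and this sketch only assembles the command lines rather than running them; -b:v, -r and -pix_fmt are the actual ffmpeg options for average video bitrate, frame rate, and pixel format / chroma subsampling.

    ```python
    # Sketch: one ffmpeg option per quality knob from this thread.
    # "in.mp4"/"out.mp4" are placeholder names; the option names are
    # real ffmpeg flags, the values are just examples.

    knobs = {
        "bitrate":        ["-b:v", "1000k"],       # average video bitrate
        "frame rate":     ["-r", "23"],            # output frame rate
        "color sampling": ["-pix_fmt", "yuv420p"], # pixel format / subsampling
    }

    def command(args, infile="in.mp4", outfile="out.mp4"):
        return " ".join(["ffmpeg", "-i", infile] + args + [outfile])

    for name, args in knobs.items():
        print(f"{name}: {command(args)}")
    ```

    Each assembled line changes exactly one attribute, which fits the one-thing-at-a-time testing approach from this thread.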
  26. Originally Posted by Selur View Post
    I am trying to implement "the scientific method" or WTH is it called where you compare two things where you KNOW that they did not start as two different things and there is only one thing that is different about the two things. For this test/question/thread, the assumption is that A was converted to B. Nothing else.
    You also added a humongous assumption to that in your last post,...
    "Assumption" is the wrong word for me to use in that statement. As stated recently, that is a basic "fact" for this thread/example question. I thought I had made that clear throughout the thread, but, I guess not.

    **** EDIT: In my defense, go back and reread post #11 in this thread. I thought, and still think, that was a very basic/simple scenario. But, the responses I got kept making the scenario more complicated and using terminology that I didn't know. Again, I was trying to crawl and y'all thought I was trying to run.
    Last edited by the_steve_randolph; 5th Jun 2021 at 06:00.
  27. How would the playback viewing quality be different?
    That depends on the player, display and person watching such content.
    To some viewers the content will seem less smooth, which happens especially if the player/display would otherwise interpolate frames for smoother playback, or prefers specific frame rates.

    Then, we don't agree. Isn't converting from 3fps to 23fps creating duplicate frames? Aren't these duplicated frames using up more O/S file space?
    Not if the average bitrate per second stays the same, or if the codec is a modern one which notices that the frame didn't change and uses non-coded frames. (yes, a minimal increase in file size might be there, but not much)
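    To put rough numbers on that: at a fixed average bitrate, the file size is essentially bitrate times duration, no matter how many duplicate frames are packed inside. A quick sketch (the figures are only illustrative):

    ```python
    # Sketch: estimated file size from average bitrate and duration.
    # Frame count does not appear anywhere in the formula, which is why
    # duplicating frames at the same average bitrate barely changes size.

    def size_mib(video_kbps, audio_kbps, seconds):
        # kilobits -> bytes: 1000 bits per kb, 8 bits per byte
        total_bits = (video_kbps + audio_kbps) * 1000 * seconds
        return total_bits / 8 / 1024 / 1024

    # 90-minute video at 2000 kb/s video + 192 kb/s audio:
    print(round(size_mib(2000, 192, 90 * 60), 1))  # -> 1411.1
    ```

    The same function gives the same answer for a 3fps and a 23fps encode at equal average bitrate; what differs is how those bits are spread across frames.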

    How do you "start" when you are new at something
    Start with the basics, like how video compression works.

    If you understand how compression works, you should understand what:
    a. different bit rates and rate control means
    b. what different color samplings/spaces mean

    Atm. you are not starting with the basics.

    Cu Selur
  28. OK, now that we seem to have gotten the first part of this thread cleared up, I can move to the second part.

    This "project" is to figure out the best way "for me" to determine acceptable quality when copying DVDs and computer videos to disc for future watching. Ultimately, I will come to a conclusion as to whether or not doing this is a better long-term solution compared to streaming. A component of this is acknowledging that we simply don't know what "streaming" will be like in 10-20 years. But, right now, I am in possession of DVDs and computer videos. To watch them, I don't have to buy a ticket at the theater or have internet service and/or a streaming service. They are in my grubby hands, and I have a real good handle on how reliable computer HDDs and disc drives are. So, if I want to burn/rip/whatever my videos to disc, I want to move them to the discs in a way that does not lose "obvious" viewing quality. Right now, I can test on a computer HDD, internal disc drive, monitor, TV, and an external disc player connected directly to the TV. I know that I can tweak the quality via numerous measures (i.e. video bitrate, fps, etc.) to reduce the size of the files residing on discs. So, I am trying to find out what kind of trade-offs can be made to put them on discs, trying not to lose too much quality, and rely on the discs to be a better solution than whatever streaming might be like in 10-20 years.

    Yes, I cannot predict the future, but, I can find out what I can do to be more efficient on discs but not see the difference in the degraded quality when I view it on my monitor and on my 55" TV. Yes, this is subjective in what is acceptable to me compared to others. But, I want to limit the universe to what I have right now and do some testing to determine what is acceptable to me. So, I will now try to ask some narrow questions to help move me along and help keep me away from wild goose chases (which I was doing a week ago on a different issue).

    If I were to take a single quality attribute like bitrate or frame rate (fps), and convert the video by reducing only that attribute, what are some reasonable (i.e. "rule of thumb") increments that will make testing better and easier? For example, if I am starting with a video bitrate of 2000 kb/s, should I reduce it in 200 kb/s increments? Or, would it be better to go in increments of 300 or 400? What I will do is play the original next to the converted file on my 20" monitor and see if I can tell the difference. Like, would the average person be able to see a difference once it reaches 1200 kb/s? 900 kb/s? If I probably wouldn't see a difference until it gets down to 500 kb/s, then testing in increments of 100 kb/s would waste a lot of time as I go from 2000 to 1900 to 1800 to 1700 to 1600 and so on. Then, I would do the same testing on my 55" TV. (For this discussion, the quality of the cable to my monitor is the same as to my TV, and both the monitor and TV are 1080p.) An assumption is that I would notice a difference on my TV at a higher bitrate than on my monitor. That is, for example, I might end up determining that converting to 1000 kb/s is OK for watching on a 20" monitor, but, for me, I can't go below 1500 kb/s when viewing on my 55" TV.
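    One way to structure that testing is a coarse-to-fine ladder: big steps first to bracket where quality visibly drops, then small steps only inside that bracket. A sketch (the numbers are just examples, not recommendations):

    ```python
    # Sketch: coarse-to-fine bitrate ladder for side-by-side testing.
    # Bracket the visible-quality drop with big steps, then refine.

    def ladder(start_kbps, stop_kbps, step_kbps):
        # Descending list of bitrates from start down to (at least) stop.
        return list(range(start_kbps, stop_kbps - 1, -step_kbps))

    coarse = ladder(2000, 500, 400)
    print(coarse)  # -> [2000, 1600, 1200, 800]

    # Suppose 1200 still looked fine but 800 did not -> refine that bracket:
    fine = ladder(1200, 800, 100)
    print(fine)    # -> [1200, 1100, 1000, 900, 800]
    ```

    This way you make a handful of coarse encodes instead of fifteen 100 kb/s steps, and only spend the fine-grained effort where the difference actually shows up.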

    So, I am not asking for exact numbers and/or view settings because I know that the end result is going to be subjective. I am asking to get pointed in the right direction and for some terminology/settings to look at first. Again, I can only test on the equipment that I have at this moment. I understand that there can be device compatibility issues in the future, but, that is something I didn't know to look for when I started this thread. I am not really interested in editing the quality of the video/audio or "remastering" or fixing any quality attribute (beyond perhaps only a screwed up aspect ratio). At this time, I am not really interested in discussing/debating the comparison of various codecs, for example - unless there is a major difference, like, subtitles can be text-based or image-based. I am not interested in converting surround sound to stereo or more complicated/personal editing like that. As stated above, I entered this thread crawling. I might be doing a bit of walking right now, but, I am not ready for running yet. Just get me pointed in the right directions. Thanks.
  29. Since you do not want to restore/enhance content:

    I would advise keeping the originals, since displays keep getting higher resolutions and better color precision; anything that might look okay on your TV now may look bad on a new one.
    (depending on electricity and hardware costs, usually buying new hdds is cheaper than spending the time and energy on converting stuff)

    If you want to get rid of your DVDs, copy disc images of them to your HDD; if you do not care about the menus on DVDs, use for example MakeMKV to repackage them to mkv files.

    If you want to convert:
    a. do not mess with the frame rate unless you deinterlace or ivtc and you know what you are doing (which at this point you do not)
    b. do not lower the resolution (anything that is lost, is lost)
    c. do not create vfr content

    What I usually would recommend:
    a. use a tool like StaxRip, MeGui, Handbrake
    b. use x264 or x265 as encoder and use the 'slow' preset; depending on your hardware you can also try hardware encoders, which usually help with the encoding speed but provide a worse compression ratio (bitrate vs. quality)
    c. use crf / 'constant quality' encoding; create a bunch of encodes with crf values between 15 and 23 (do not use 1-pass vbr, abr, or cbr encoding; use 2-pass encoding if you need to achieve a specific file size)
    -> look at the results and stick with the crf value that gives you a good balance between file size and quality.
    Once you get a general 'feel' for your 'size vs. quality' perception, learn how to filter content and tweak settings.
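    A minimal sketch of that crf sweep, assuming ffmpeg with libx264 (the file names are placeholders, and this only assembles the command lines rather than running them; -crf and -preset are real x264/x265 options in ffmpeg):

    ```python
    # Sketch: build one ffmpeg command per CRF value, as suggested above.
    # "in.mkv" and the test_crf* output names are placeholders.

    def crf_commands(infile, crf_values, preset="slow"):
        cmds = []
        for crf in crf_values:
            out = f"test_crf{crf}.mkv"
            cmds.append(["ffmpeg", "-i", infile, "-c:v", "libx264",
                         "-preset", preset, "-crf", str(crf),
                         "-c:a", "copy", out])   # copy audio untouched
        return cmds

    for cmd in crf_commands("in.mkv", [15, 17, 19, 21, 23]):
        print(" ".join(cmd))
    ```

    Lower CRF means higher quality and bigger files, so comparing the five outputs by eye is exactly the 'size vs. quality' calibration described above.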

    Cu Selur
  30. Originally Posted by Selur View Post
    How would the playback viewing quality be different?
    That depends on the player, display and person watching such content.
    To some the content seems not as smooth, which usually happens especially if the player/display would otherwise interpolate frames for smoother playback or prefers specific frame rates.
    You left out a bit of my quote and again added more variables to the testing. If 3fps is converted to 23fps and EVERYTHING else is the same, how would somebody see a difference in quality? Are you saying that, if the only difference in two computer videos is that one has duplicated frames, there are specific devices that would not play the duplicated frames the same way as the non-duplicated frames?

    Then, we don't agree. Isn't converting from 3fps to 23fps creating duplicate frames? Aren't these duplicated frames using up more O/S file space?
    Not if the average bitrate per second stays the same or if the codec is a modern codec which notices that the frame didn't change and uses non-coded frames. (yes a minimal increase in file size might be there, but not much)
    OK, more information there. So, a "rule of thumb" might be that if the final file size is not larger, then I need to know more about codecs (duh). But, I don't anticipate trying to increase a quality attribute unless there is something obvious and agreed upon in the video community that I am not aware of. That is, if I start with a video file and I am not unhappy with its current fps, I would not try to increase the fps (i.e. I shouldn't be running across 3fps videos that I want to put on disc). But, y'all might tell me something like "if you really wanna retain quality, but reduce size, look to see if your file is using codec XYZ and convert it to codec UVW".

    How do you "start" when you are new at something
    Start with the basics, like how video compression works.

    If you understand how compression works, you should understand what:
    a. different bit rates and rate control means
    b. what different color samplings/spaces mean
    OK, so I started "wrong". Again, I had to start somewhere and I thought I was starting with a basic simple example.


