VideoHelp Forum
Page 4 of 6
Results 91 to 120 of 163
  1. Member DJRumpy (Dallas, Texas; joined Sep 2002)
    (Number of seconds) x (Average Kilobits per second) = Number of Kilobits
    I'm not arguing this point (at least not completely). Encoder variations aside, each additional pass made with multipass will shrink the resulting MPEG to a smaller size. When it comes down to it, you're still guessing. A good guess maybe, but it's still a guess. The only sure way to predict the output size is to use CBR.

    If you read my post above, I've observed that the average setting has no correlation to the output MPEG's average bitrate using multipass VBR mode with CCE.

    In the tests that I ran, I set the average directly between the min/max values on a 2-pass VBR M2V. The average bitrate was considerably higher (this was on an svcd). Again, as I stated above, this was in CCE. I don't know if TMPGenc does this as well.

    The avg setting was 1262. The Max setting was 2524, and the min setting was 0. Analyzing the resulting M2V file with Bitrate Viewer gave me an MPEG with an average bitrate of 2300 kb/sec, substantially higher than my average setting.
    Impossible to see the future is. The Dark Side clouds everything...
  2. Member (Rainy City, England; joined May 2002)
    Originally Posted by DJRumpy
    For VCD, use the 'Ultra Low Bitrate' setting. For SVCD, use the 'Low Bitrate Setting'.
    According to the manual, "Ultra low bitrate is intended to be used when the bitrate is 2Mbps or less", which would cover many SVCDs.
  3. Member DJRumpy (Dallas, Texas; joined Sep 2002)
    Most of my SVCDs seem to stay in the 2000 - 2500 range on average. Let's face it, you can barely squeeze a decent picture into SVCD or VCD, as they almost always have a bit shortage, so your averages are going to be on the high side anyway, unless you're watching something like 'On Golden Pond', where the actors are two days from the grave.

    If you're using CBR, this is a no-brainer. The settings are not set in stone though; each depends on your source. If you have a bitrate viewer, you can just load up your source, get the avg, and select accordingly. I'm too lazy, and assume that any SVCD I make is going to be 2000+. The quality works for me, which is all that's important in the end.

    I actually got that one wrong too. For VCD, I would suggest "Ultra Low". The SVCD setting should have been "Very Low". Sheesh... you would think they could have given clearer labels, like maybe the actual average bitrate each setting was good for.
  4. Originally Posted by C53248
    In the second case, you are second guessing the encoder and risk imposing your own ham-handed imperfection on it, so it's best to keep the minimum low and the maximum high. My observation is that it mostly doesn't stray too far anyway, especially towards the low end. There is no magic about the max. and min. being equally far from the average value. However, if either one of them is too close to the average value, you will get VBR that approximates CBR, because it has no room to vary.
    I agree with your posting in general, C53248, but I'd like to clarify a few points if I may:

    VBR works by "borrowing" excess bits from the low-motion scenes to "spend" on the high-motion scenes later, a strategy that allows the instantaneous bitrate to vary with the complexity of the source material while keeping the average bitrate constant.

    Every bit that gets added to a complex scene must be paid for by taking it from a low motion scene that doesn't need it, otherwise the average bitrate (and the resulting file size) would be totally unpredictable. That being said, the minimum and maximum bitrates aren't arbitrary figures. They really do need to be balanced (equidistant) from the average to the greatest extent possible, otherwise the encoder's output will be unbalanced also.

    If the minimum bitrate is too high, fewer bits will be available for the encoder to "borrow" to cover the high motion scenes when it needs them -- the same as if you had chosen a lower maximum. A VBR bitrate of 4000 / 5000 / 9000 has an effective maximum bitrate of 6000, because the encoder can't move more than 1000 to the other side of the average.

    If the minimum bitrate is too low, the encoder is obliged to overcompress the low-motion scenes whether it needs the extra bits or not. If it extracts more than it needs to cover the high motion scenes, it simply throws the surplus bits away, the same as if you had chosen a lower average. A VBR bitrate of 0 / 5000 / 9000 will actually shrink because the encoder can't need more than 4000, but you've told it to take up to 5000 away.

    The ideal minimum bitrate is therefore as far below the average as the maximum is above (min = avg-(max-avg)). A VBR bitrate of 1000 / 5000 / 9000 is ideally balanced because Paul can't possibly borrow more than Peter is willing to lend.
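    The balance rule above reduces to simple arithmetic. A quick Python sketch (not from the thread; the kbps values just echo the examples given):

    ```python
    # Balanced VBR settings, per the rule above (all values in kbps).
    def balanced_min(avg, max_rate):
        """Minimum equidistant below the average: min = avg - (max - avg)."""
        return avg - (max_rate - avg)

    def effective_max(avg, min_rate):
        """The encoder can't rise further above avg than it can dip below."""
        return avg + (avg - min_rate)

    print(balanced_min(5000, 9000))   # 1000 -- the balanced 1000/5000/9000 set
    print(effective_max(5000, 4000))  # 6000 -- 4000/5000/9000 acts like max 6000
    ```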
  5. How big is the "window"? If say you have a scene with nothing much happening, then 30 seconds later a scene that requires a higher data rate, can the encoder remember back 30 seconds and take the datarate from there? Is that how it works?

    Is the maximum data rate ever exceeded? If for an SVCD say, the limit is the speed at which the DVD player can read a CDR, then can it queue up data from the disc and then have a quick burst at a higher data rate? Or is it always limited to the maximum?

    Dave
  6. Member DJRumpy (Dallas, Texas; joined Sep 2002)
    There is no limit as to when/where VBR can take away or add bitrate. If the first 30 minutes of the film are all low motion, then the MPEG wouldn't require much bitrate, and the encoder wouldn't give it much as a result. It does NOT have to balance out (meaning the movie's avg bitrate will be the same as your AVG setting). Don't mistake the AVG setting as a rule. It simply defines the +/- swing for the variable bitrate between min and max. If your movie is very complex, and you set min 0, avg 500, and max 9000, the average bitrate for the movie could be 6000, 7000, or higher (just an example). With an AVG somewhere equidistant between your min and max, you give the encoder the greatest range between your min and max to use for variable bitrate.

    You can verify this with a quick test using a small video clip. Encode a minute of high action SVCD video, using min 0, avg 1260, max 2524. When it's done, use a bitviewer to see what the average bitrate for the clip is. You'll find it's substantially higher than your AVG setting.

    In regards to spikes in your bitrate (max exceeded), they happen all the time. A spike in your bitrate is normally handled by the player without issue, but as a rule, your encoder should try to stay within the limits you've set.
  7. Originally Posted by DEmberton
    How big is the "window"? If say you have a scene with nothing much happening, then 30 seconds later a scene that requires a higher data rate, can the encoder remember back 30 seconds and take the datarate from there? Is that how it works?
    I'm speaking metaphorically, of course. For simplicity let's assume you have an I-frame-only MPEG that you encode in CBR mode at 5000 and VBR mode at 1000 / 5000 / 9000:

    In CBR mode, the encoder quantizes (scales) the frames by whatever degree is necessary to achieve the target bitrate of 5000. Each frame gets the same amount of quantization applied to it even though the complexity of the frames varies from moment to moment according to the amount of motion these frames convey.

    If a simple scene only requires 2000 to encode but the bitrate is constant at 5000, the extra 3000 kbps is essentially wasted. Likewise if a complex scene requires 8000 kbps and only 5000 are available, the encoder has to overcompress the high motion scenes to bring them down to 5000 even if that generates artifacts as a result.

    In VBR mode, every frame gets the average bitrate (5000 kbps) by default, but the actual bitrate is allowed to vary with scene complexity. If a simple scene requires 2000 kbps, it gets 2000 kbps, while if a complex scene requires 8000 kbps, it gets 8000 kbps and so forth. So long as the encoder can recover as many bits from the simple scenes as it needs to cover the difficult ones, the average bitrate will remain stable at 5000 kbps even though the momentary bitrate varies from 2000 to 8000 throughout the length of the recording.

    The bitrate can dip as low as 1000 and rise as high as 9000 in this example, but the encoder won't compress the low-motion scenes any more than it needs to cover the high-motion scenes -- if it did, the average bitrate would change as a result. It makes its decisions regarding how much bitrate each scene needs based on one or more passes in which the complexity of the scenes are checked, and bitrate is reallocated from the simple scenes to the complex ones within the minimum and maximum bitrates it's given to work with.
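    As a toy illustration of the two-pass reallocation just described (my own sketch, not how any real encoder is implemented): pass one measures per-GOP complexity, pass two scales each GOP's bitrate with its complexity while holding the average on target, clamped to the configured limits.

    ```python
    # Toy two-pass VBR model (kbps). Pass 1: measure relative complexity of
    # each GOP. Pass 2: give each GOP bitrate proportional to its complexity,
    # clamped to [min, max], so the overall average stays on target.
    def two_pass_allocate(complexities, avg, min_rate, max_rate):
        scale = avg / (sum(complexities) / len(complexities))  # pass 1 result
        return [min(max_rate, max(min_rate, c * scale)) for c in complexities]

    # Invented complexity scores for five GOPs, targets 1000 / 5000 / 9000:
    rates = two_pass_allocate([2, 4, 5, 7, 2], avg=5000, min_rate=1000, max_rate=9000)
    print(rates)                    # [2500.0, 5000.0, 6250.0, 8750.0, 2500.0]
    print(sum(rates) / len(rates))  # 5000.0 -- the average is preserved
    ```

    Note that if the clamps ever kick in, a real encoder has to redistribute the clipped bits in another pass; this sketch only shows the unclipped case.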

    Is the maximum data rate ever exceeded? If for an SVCD say, the limit is the speed at which the DVD player can read a CDR, then can it queue up data from the disc and then have a quick burst at a higher data rate? Or is it always limited to the maximum?
    CDs are read using constant linear velocity (variable rotation rate, constant sector transfer rate), so what really changes from one scene to the next is the number of frames that get packed in a sector. Both the player and the decoder perform buffering at different points along the chain, the net result being that most or all of the burstiness is smoothed out by the time the bitstream is interpreted. Of course, if you exceed the data rate mandated by a particular format (an SVCD beyond roughly 2.6 Mbps, for example) either the player's read buffer or the decoder's VBV buffer can be saturated, causing playback problems, but that wouldn't be the MPEG encoder's fault.

    Originally Posted by DJRumpy
    There is no limit as to when/where VBR can take away, or add bitrate. If the first 30 minutes of the film are all low motion, then the mpeg wouldn't require much bitrate, and the encoder wouldn't give it much as a result. It does NOT have to balance out (meaning the movie's avg bitrate will be the same as your AVG setting).
    Yes it does, as a matter of fact: otherwise you could never predict the quality or size of the encoded file. A CBR encoding at 5000 kbps and a VBR encoding at an average bitrate of 5000 kbps have the same encoded file size, but the VBR encoding is bound to be of higher quality because the bitrate can dip as low as 1000 or rise as high as 9000 according to the complexity of each scene.

    If there was no relationship (or an arbitrary relationship) between the minimum, average and maximum, the file size would be proportional to the average scene complexity making it impossible to predict -- the simpler the average scene, the smaller the encoded file size regardless of its length, and vice versa. This may be the case for so-called "constant quality" encoding modes, but that's not the principle by which VBR works.

    If your movie is very complex, and you set min 0, avg 500, and max 9000, the average bitrate for the movie could be 6000, 7000, or higher (just an example).
    In this case (0 / 500 / 9000) the average encoded scene quality would be 500 kbps, with an effective maximum bitrate of 1000 kbps (assuming the encoder was infinitely efficient and could actually compress the easiest scenes to 0 kbps). In order to maintain an average of 500, the encoder couldn't spend more than 500 on the complex scenes regardless of the maximum bitrate available to it -- if it did, the average would be higher than 500 kbps and the recording wouldn't fit on the media you intended it for.

    I believe the encoder would have difficulty compressing even the easiest scenes to 500 kbps in this case, but assuming it couldn't, the average would be less than 1000 kbps rather than more than 6000. I could see how this might happen if the average bitrate was held proportional to the average scene complexity, but that just isn't the way VBR works.

    Encode a minute of high action SVCD video, using min 0, avg 1260, max 2524. When it's done, use a bitviewer to see what the average bitrate for the clip is. You'll find it's substantially higher than your AVG setting.
    I think this would give you misleading results:

    (1) There has to be a mixture of simple and complex scenes in order for VBR to do what it's intended for. 30 minutes would be a suitable length for an accurate test, but 1 minute is not.

    (2) The bitrate viewer may not be accurate, particularly for such a small test file. Does it add up the encoded size of all the frames and divide that figure by the length of the file, or does it read the MPEG header and make an estimate based on the figures it finds there?

    The most accurate way to determine the average (for a file of any length) is to divide the size of the file in bits by the length of the file in seconds. In other words, an encoded file of 350 MB that's 23 minutes long = (350 000 000 * 8) / (23 * 60) = 2 028 985 bits per second, or 2028 kbps.

    If the bitrate viewer gives you a figure different than this it's most definitely in error. Check the arithmetic by hand before relying on its results without question -- they may not be as accurate as you think.
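    The check described above is easy to script; a minimal Python version (the 350 MB / 23 minute figures are the ones from the example):

    ```python
    # Average bitrate = size in bits / duration in seconds, as described above.
    def avg_kbps(size_bytes, seconds):
        return (size_bytes * 8) / seconds / 1000

    # 350 MB file, 23 minutes long:
    print(int(avg_kbps(350_000_000, 23 * 60)))  # 2028
    ```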
  8. Hmmm, my impression of a VBR file is that even if the avg is set at say 1000, the min at say 300, and the max at 3000, you imply that the max will never be reached because the encoder will only go as far above the avg as it can go below. You also imply that the encoder can only "borrow" as many bits as it saves. This is true, but obviously if you have a 30 min clip and 28 mins are very low motion, then the bits "saved" (below the avg) can be fully expended in the final two minutes of high action, enabling the max bitrate to be reached, which can be way above the average. It would be interesting to encode a video of, say, 28 minutes of blackness and two minutes of high-speed action with various encoders and various settings of min/max/avg to see the exact behaviour and bitrates. It might actually put this argument to bed!
    I most heartily agree with the statement that the avg bitrate is file size divided by length of movie.
    Corned beef is now made to a higher standard than at any time in history.
    The electronic components of the power part adopted a lot of Rubycons.
  9. Originally Posted by RabidDog
    if the avg is set at say 1000 and the min is say 300 and the max is 3000 you imply that the max will never be reached because the encoder will only go as far above the avg as it can go below.
    Right. If you have a bitrate of 300 / 1000 / 3000, the amount the encoder can go above the average is determined by how much it can squeeze out of the picture below the average -- in the average case, of course. :)

    Now and again it may give a GOP the full maximum bitrate, but it can only do this on rare occasions because otherwise the average bitrate would increase. The majority of GOPs won't get more than 1700, and in order for them to get even that much other GOPs will have to be squished to the minimum, or the low-motion GOPs will have to outnumber the high-motion GOPs by a wide margin.

    Also you imply that the encoder can only "borrow" as many bits as it saves, this is true but obviously if you have a 30 min clip and 28 mins is very low motion then the bits "saved" (below the avg) can be fully expended in the final two minutes of high action thus enabling the max bitrate to be reached, which can be way above the average.
    Right again. If the average amount of motion is low, plenty of bandwidth will be available when it's needed. The two minutes of fast action can have all the bitrate they need up to the maximum available, though if they only need 7000 the encoder isn't going to give them 9000 just to "make up" for it. (You might say that VBR exercises a philosophy of GOP socialism: from each according to its surplus, to each according to its deficit.)

    These two minutes could occur anywhere in the recording -- the beginning, middle or the end -- and the encoder's behavior would be the same. VBR encoding is done in a plurality of passes. In the first pass, the encoder checks the complexity of each GOP and how much bandwidth it can extract or will give relative to the average; in the second pass, it encodes the GOPs based on the accumulated information.

    It might actually set this argument to bed!.
    I most heartily agree with the statement that the avg bitrate is file size divided by length of movie.
    It isn't an argument that I can see. The principles of VBR encoding are straightforward and can be validated using simple arithmetic. I may not be describing the process as clearly as it could be, but that doesn't alter the underlying facts:

    A CBR encoding at 5000 kbps is the same size as a VBR encoding at 1000 / 5000 / 9000 kbps. The VBR encoding may in fact be slightly smaller than the CBR due to better allocation of bits, but it can never be larger than the CBR because the average bitrate in both cases is the same.

    The VBR minimum bitrate is more important than it's given credit for because it determines how much bandwidth can be reallocated to the other side of the average. An ideal VBR bitrate has a balanced spread between the minimum, average and maximum bitrates. Eccentricity results in the encoder throwing away bits it could otherwise have used (minimum too low) or not making use of all the bits it has available to it (minimum too high).
  10. Member adam (United States; joined Sep 2000)
    DJRumpy, it sounds like something is very wrong somewhere; I can't think of what could be causing it, but this should not happen. In multipass encoding the resulting avg should be exactly as you set it, and the resulting filesize should be exactly as predicted. With both TMPGenc and CCE I hit my target size to the exact MB every time. Like I said, it's not even so much a mathematical process... the encoder essentially makes an initial CBR run at your specified avg, and it will not allocate higher than the avg what it does not first take away somewhere else. So the avg really cannot change.

    As for those questions you raised before about the amount of bitrate you specify and its effects on the encoder's analysis, well, it seems you raised a valid point there. I honestly don't know. Basically, what I was getting at was that it's all relative regardless of whether you are encoding in 1 pass or x-pass. You still have the same source for analysis, and even if the bitrate allocated has some effect, well, if you are going to compare 1 pass to 2 pass, of course you would have to assume the bitrate was the same.

    As someone said already, this thread seems aptly titled. A lot of people seem to be confused about the exact effects of min and max settings. It is true that the encoder cannot allocate above your avg what it doesn't first save by allocating less than your avg. But this does not mean that if your min is only 100 kbits less than your avg, your max will never exceed 100 kbits higher than your avg. ALL of the bitrate saved on low motion scenes effectively becomes fair game for allocation. The encoder will take this excess and allocate it as it is needed.

    So it is entirely possible to have a very high bitrate peak even if your min setting is very close to your avg setting, but logically, the closer your avg is to either your min or max, the less effective the bitrate allocation will be, and it should be pretty obvious why. If you are only saving 100 kbits at a time, when you could be saving 500 kbits, then overall you have less bitrate to apply to high motion scenes. As a result, your spikes above your avg setting will be smaller and less frequent. Basically, the closer your avg gets to either your max or min, the closer you get to encoding in CBR.
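    The banked-bits point is easy to demonstrate with invented numbers (a sketch, not encoder output): even with min only 100 kbps below avg, 28 low-motion GOPs bank enough to push two high-motion GOPs 1400 kbps above the average without moving the average at all.

    ```python
    # Toy bit-banking example (kbps per GOP); all figures are invented.
    min_rate, avg, max_rate = 4900, 5000, 9000
    low_gops, high_gops = 28, 2

    banked = low_gops * (avg - min_rate)   # 28 GOPs saving 100 each = 2800
    spike = avg + banked / high_gops       # each high GOP can draw 1400 extra
    rates = [min_rate] * low_gops + [spike] * high_gops

    print(spike)                    # 6400.0 -- well above avg despite the tight min
    print(sum(rates) / len(rates))  # 5000.0 -- the average is untouched
    ```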
  11. Member DJRumpy (Dallas, Texas; joined Sep 2002)
    I'm going to take groyal's suggestion and try a longer clip. I have a hard time thinking that there should be any difference between a 1 minute clip and a 30 minute clip, as each should hit its avg target bitrate, but this doesn't seem to be the case. I'll try a few more (longer) clips to see what I get. I also found an interesting research paper on IBM's site, which may shed light on it for me.
  12. Member DJRumpy (Dallas, Texas; joined Sep 2002)
    Adam, it looks like it sets the average by 'intervals' (usually GOPs; it calls them intervals), treating each interval (GOP) as CBR, but with each GOP's bitrate differing from the next according to the "softness" or "hardness" of encoding for that particular GOP, giving you a VBR stream. Bits are allocated across each GOP, with the averaging done across multiple GOPs, by testing the relative difficulty of adjacent GOPs to determine how to meet the AVG. Nowhere in the document does it say WHEN it must spend additional bits, at least not in plain English. I've scanned this multiple times... (makes yer head hurt).

    http://www.research.ibm.com/journal/rd/434/westerink.html

    To assign bit rates to specific segments of a video sequence such that we attain constant visual quality, it is optimal to have knowledge of the characteristics of the entire video. This can only be done by playing the whole sequence and gathering certain statistics over time. Thus, a VBR system for DVD, in which we wish to distribute the available bits optimally over the video, is essentially a multiple-pass system. A one-pass VBR system can be designed as well, but it will always be suboptimal.
    This holds with my opinion of CQ 1-Pass VBR. It has no knowledge of scene changes/fades. The bitrate allocation for these would come up short, as the quantization scale would be too high for the available bitrate.

    Adam, after re-reading this, it answered my question regarding re-using the VAF file. According to this document, the first pass AVG value sets the quantization scale for the additional passes:
    The rate at which this is run is determined by the total available number of bits and by the duration of the video. Typically, we expect this to be in the range of 4 to 5 Mb/s. We then gather various numerical values for each picture.
    This looks like it takes the length of the movie (fps), at the AVG number of bits per second, and uses that bit value to determine what can be allocated (total available bits). This would explain why the AVG value is important and how it affects your picture, as it would seem to relate directly to the quantization scale for the encode using that AVG value. Changing your values while still using the same VAF file could result in a video which uses a very different bit allocation than what was required.

    In regards to the question of where one can borrow bits from, this document describes the process much like banking. When a particular GOP that has excess bits after encoding is processed, the remaining bits are 'banked'. Bits exceeding the MAX value are also 'banked'. As the encoder processes each GOP, it allocates additional bits as needed, banking any bits that are in excess. It's unclear from this document when any leftover bits HAVE to be spent. Since the encoder already has foreknowledge of the bit requirements of each GOP, and the total available bitrate, it would seem logical to conclude (at least to me) that it could allocate the excess bits to any GOP which required them.
  13. Member DJRumpy (Dallas, Texas; joined Sep 2002)
    Still confused as to why a short clip's AVG setting should be off. Going to run more tests. According to the doc, the encoder can raise or lower the quantization scale on the second pass to effectively deal with scene changes/fades. I suppose if the clip were short enough, and the scene very complex, the encoder would not have sufficient time to adjust the total output bitrate. This doesn't make sense though, as this is multipass, and the encoder knows exactly how much bitrate is available and how to balance it accordingly.

    I'm going to retry this test. Perhaps I'm losing it here.
  14. Member (Rainy City, England; joined May 2002)
    Originally Posted by DJRumpy
    This holds with my opinion of CQ 1-Pass VBR. It has no knowledge of scene changes/fades. The bitrate allocation for these would come up short, as the quantization scale would be too high for the available bitrate.
    With TMPG CQ you are not specifying the available bitrate, so it can take what it needs. This is why there is no average setting, and the filesize is unpredictable.

    "This mode [CQ] will guarantee a high quality movie; however, movies with many scene with rapid motion can become rather large."
  15. I think a lot of people view x-pass as a miracle encoding method. It ain't!!

    Its usage also depends a lot on the source, and many times it's a waste of encoding time, with possibly a loss in quality or no change in quality in comparison to CBR.

    For example, for a 3 minute fast-moving video clip there would be absolutely no point in using x-pass VBR encoding.

    Another example: a 1 hour film with action scenes and slow scenes makes sense to use x-pass VBR.

    the alternative POLL - keep Porn Alive........
  16. Member DJRumpy (Dallas, Texas; joined Sep 2002)
    Thus, whereas most conventional MPEG-2 coding systems are designed for a constant bit rate (CBR), an MPEG-2 encoder for DVD can be designed for a variable bit rate (VBR). CBR systems typically produce a constant-bit-rate stream, but inevitably with a corresponding variable picture quality. VBR, however, has the potential to produce constant picture quality throughout an entire video sequence. Such constant quality can be obtained by appropriately distributing the total available bits over the different video segments.

    To assign bit rates to specific segments of a video sequence such that we attain constant visual quality, it is optimal to have knowledge of the characteristics of the entire video. This can only be done by playing the whole sequence and gathering certain statistics over time. Thus, a VBR system for DVD, in which we wish to distribute the available bits optimally over the video, is essentially a multiple-pass system. A one-pass VBR system can be designed as well, but it will always be suboptimal.
    Oracle, CBR has no foreknowledge of what it's encoding. It's pretty much been debated to death that CBR cannot produce the quality of multi-pass, simply because multipass handles fades and scene changes better.

    Banj, I know that CQ mode doesn't use an AVG. It cannot always accurately predict a scene change/fade, causing it to spike. During a fade, for example, it would essentially have to start throwing wasteful amounts of bitrate at every frame, simply because every frame looks different than the previous frame. If the MAX isn't high enough, your quality will suffer. If it is high enough (likely for DVD), then you get a much larger file than is necessary.
    As was described in Section 3, the picture target bits for the second pass can be based directly upon the first-pass quantization scale settings. However, at first-pass run time, these values were set on the basis of very limited knowledge of the future, and are not necessarily the best that could have been set. Particularly in special situations, such as immediately following scene changes or during fades, when the scene characteristics are changing in an unpredictable manner, the first-pass rate control can temporarily become unstable. In certain rare circumstances, the first-pass quantization scale settings can even be so far off that poor visual quality is the result.
    I agree that one pass is an excellent option for encoding. I also agree that the quality is usually excellent, but not always; it often produces a file that's too large. The new versions of CCE will let you do a single run using CQ mode while creating a VAF file. An existing VAF file can be used for later multipass if the file is too big or quality suffers.
  17. Oracle, CBR has no foreknowledge of what it's encoding. It's pretty much been debated to death that CBR cannot produce the quality of multi-pass, simply because multipass handles fades and scene changes better.
    Rumpy, the highlighted part of your comment is simply not true; read the 1st example of my post again.

    For example, for a 3 minute fast-moving video clip there would be absolutely no point in using x-pass VBR encoding.
    You do not have any excess bitrate to reallocate!! Another example: a 3 minute rock video with the whole band on stage jumping up and down all the time.
  18. Member DJRumpy (Dallas, Texas; joined Sep 2002)
    Of course there is. The stage the band is jumping up and down on isn't moving around. The colosseum behind the band isn't moving. Any piece of information that doesn't move from frame to frame benefits from MPEG compression, and from multipass. This includes changes in brightness/colors. If the camera pans, but the background brightness/color remains the same, then those portions would not need bitrate, assuming the encoder properly detects the change (or lack thereof). Simply throwing bitrate at a frame isn't guaranteed to give you good results. CBR is limited in what it can work with. If any random GOP contains 18 frames, 8 of which exceed the amount of bitrate allowed by CBR, then those 8 frames will simply suffer. Using VBR, the encoder can borrow from other GOPs that do not require all of their bitrate, and the additional bitrate is then applied to the 8 'needy' frames. According to the referenced doc, the first pass allocates bitrate at the GOP level; the second pass allows the encoder to allocate bitrate at the frame level.
  19. For any given bitrate, CBR will always give better quality pictures than VBR. The only difference is in filesize. If you are more concerned about filesize than picture quality, then use VBR, as this will give the best quality FOR A GIVEN FILESIZE. The alternative is to use CQ encoding, which will give you smaller filesizes than CBR but with the same quality and speed as CBR.
  20. If I can hijack the thread for a moment (actually it's a topical reply, but the subject has shifted since the last time I read it...)

    Let's set aside the question of how a VBR encoder works for the moment and review how VBR bitrates are estimated. I gather from this earlier post that you've found a set of values that work for you in most cases:

    Originally Posted by DJRumpy
    For CVD/SVCD I use min:300, avg:1412,max:2524.
    For VCD, which I rarely ever encode in, I use CBR: 1150
    For DVD, I use min:300, Avg:5000, Max 9000
    I can tell from your SVCD figure of 1412 kbps that you're expecting about an hour of encoded video per CD-R, because ((1412 * 1000) * 3600) / 8 = 635 400 000. But if the program were 45 minutes, you'd be cheating yourself of some picture quality at 1412 because the disc will hold a file encoded at 1881, and if the program were 30 minutes you'd really be having a V-8 moment because (635 000 000 / 1800) * 8 = 2822.

    For this reason, one doesn't take a "one size fits all" approach to VBR, encoding everything that comes along at a standard average bitrate. Instead we compute a custom average bitrate for each recording in order to ensure that we're getting the highest average bitrate possible.

    The arithmetic is easy: all we need to know is the media size (in bytes) and the program length (in seconds). If we have a 650 MB CD-R and we want to fit 36 minutes of recording time onto it, we estimate:

    650 000 000 bytes
    / 2160 seconds
    = 300 925 bytes per second
    * 8 bits per byte
    = 2407 kbps (VBR AVG).
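    The same estimate is just two divisions and a multiply; a quick sketch using the nominal 650 MB figure from above:

```python
# Estimate the VBR average bitrate for a 36-minute program on a CD-R 650.
disc_bytes = 650_000_000          # nominal media size in bytes
seconds = 36 * 60                 # program length: 2160 seconds

bytes_per_second = disc_bytes / seconds    # ~300 925 bytes per second
avg_kbps = bytes_per_second * 8 / 1000     # bits per second -> kbps
print(round(avg_kbps))                     # 2407
```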

    This calculation also works in reverse, allowing us to prove it's correct by finding the file size of a program encoded at the average bitrate:

    2407 kbps (VBR AVG)
    = 2 407 000 bits per second
    * 2160 seconds
    = 5 199 120 000 bits
    / 8 bits per byte
    = 649 890 000 bytes.

    Whether we encode CBR at 2407 or VBR AVG at 2407, the recording is never going to exceed 650 MB, no matter how many VBR passes we apply. But before we talk about setting the minimum and maximum bitrates, we first need to modify the formula we're using to take the audio into account.

    Audio is always encoded in CBR. I like to figure out how much room the audio tracks are going to occupy and subtract it from the disc space before finding the average. This way I can treat it as simple overhead and the method I've described remains the same:

    224 kbps (MPEG-1 Layer 2 Stereo)
    = 224 000 bits per second
    * 2160 seconds
    = 483 840 000 bits
    / 8 bits per byte
    = 60 480 000 bytes.

    Taking overhead into account, we estimate:

    650 000 000 bytes (CD-R 650)
    - 60 480 000 bytes (audio overhead)
    = 589 520 000 bytes
    / 2160 seconds
    = 272 925 bytes per second
    * 8 bits per byte
    = 2183 kbps (VBR AVG).
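    The overhead-adjusted estimate above, as a sketch:

```python
# Average video bitrate after reserving space for a 224 kbps audio track.
disc_bytes = 650_000_000
seconds = 36 * 60
audio_kbps = 224

audio_bytes = audio_kbps * 1000 * seconds / 8   # 60 480 000 bytes of overhead
video_bytes = disc_bytes - audio_bytes          # 589 520 000 bytes left for video
avg_kbps = video_bytes / seconds * 8 / 1000
print(round(avg_kbps))                          # 2183
```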

    Working these figures in reverse, I can prove to myself the multiplexed elementary streams are going to fit on the disc like this:

    video = (2 183 000 * 2160) / 8 = 589 410 000
    audio = (224 000 * 2160) / 8 = 60 480 000
    video + audio = 649 890 000

    -- perfect!

    The VBR maximum bitrate is simply the multiplex rate minus the audio bitrate. (There's some disagreement over what the "correct" SVCD multiplex rate is, but let's assume it's 2600 kbps for purposes of discussion):

    VBR MAX = (2600 - 224) = 2376

    Now, if you have a brief program -- 23 minutes, say -- the average bitrate would be 3544 kbps, substantially higher than the maximum. In this case we don't need to use VBR at all. The entire program will fit on the disc at the maximum bitrate, so the best quality we can achieve will be CBR at 2376 kbps.

    But if the average bitrate is less than the maximum we compute the minimum bitrate as follows:

    VBR MIN = AVG-(MAX-AVG) = 1990

    And now the figures are complete. VBR = 1990 / 2183 / 2376 for a 36-minute program with a 224 kbps soundtrack.

    Note that these bitrates are balanced -- the minimum is as far below the average as the maximum is above. This is optimal from the VBR encoder's perspective because it can never need more bits on the high side than the minimum will allow it to extract from the low.

    But what if the program were 64 minutes long rather than 36? In that case the average bitrate would be so low (1130 kbps) compared to the maximum (2376 kbps) that our minimum would be below zero, or -116 using the formula above. What then?

    The theoretical MPEG-2 minimum bitrate is zero, but real-world encoders can have problems (and sometimes crash) given that figure because a minimum bitrate of zero is as logical as a maximum bitrate of a bajillion: it just can't quantize a GOP to that size. If the minimum bitrate is lower than the size of the smallest GOP it can generate it's going to become very confused, so we'd be wise to impose an absolute minimum it can live with -- if the minimum bitrate is less than 64 kbps, use 64 kbps instead. This will ensure the encoder has the resources to generate a healthy GOP stream even under the most extreme circumstances.

    The rules for setting VBR bitrates can be summarized thusly:

    (1) AVG = (disc space - overhead) / program length
    (2) MAX = multiplex rate - audio bitrate
    (3) If AVG > MAX, encode CBR at MAX bitrate, otherwise
    (4) MIN = AVG-(MAX-AVG), and
    (5) If MIN < 64, use 64 as the minimum instead.
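    The five rules above can be sketched as a single function. This is an illustration of the arithmetic, not any particular encoder's logic, and the 2600 kbps multiplex rate is the working assumption from the discussion:

```python
def vbr_bitrates(disc_bytes, seconds, audio_kbps, mux_kbps, floor_kbps=64):
    """Return ('CBR'|'VBR', min, avg, max) in kbps per rules (1)-(5)."""
    audio_bytes = audio_kbps * 1000 * seconds / 8
    avg = round((disc_bytes - audio_bytes) / seconds * 8 / 1000)  # rule 1
    vmax = mux_kbps - audio_kbps                                  # rule 2
    if avg > vmax:                                                # rule 3
        return ("CBR", vmax, vmax, vmax)
    vmin = avg - (vmax - avg)                                     # rule 4
    vmin = max(vmin, floor_kbps)                                  # rule 5
    return ("VBR", vmin, avg, vmax)

# 36-minute program, 224 kbps audio, CD-R 650:
print(vbr_bitrates(650_000_000, 36 * 60, 224, 2600))  # ('VBR', 1990, 2183, 2376)
# A 23-minute program collapses to CBR at the maximum:
print(vbr_bitrates(650_000_000, 23 * 60, 224, 2600))  # ('CBR', 2376, 2376, 2376)
# A 64-minute program hits the 64 kbps floor:
print(vbr_bitrates(650_000_000, 64 * 60, 224, 2600))  # ('VBR', 64, 1130, 2376)
```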

    -----

    Now we can apply this information to the question of what an encoder does with surplus bandwidth and get a meaningful answer:

    As the encoder processes each gop, it allocates additional bits as needed, and gaining bits if any are in excess. It's unclear from this document as to when any leftover bits HAVE to be spent.
    If the spread between the minimum bitrate and the average is greater than the spread between the average and the maximum, the encoder may extract more bits from the low-motion scenes than it needs to cover the high. In this case it has to throw the excess bits away. It can't spend them on the high-motion scenes if they're already saturated, and it can't divide them equally among all the GOPs because doing so would increase the average bitrate. If it has no way to put the bits back where they came from it simply discards them instead, causing the encoded file to shrink in proportion to the surplus.

    You can prove this with an experiment: encode a test segment at 64 / 5000 / 6000. At 5 Mbps, you can expect the encoded file size to be 37 500 000 bytes per minute, but it ends up smaller because the encoder is forced to discard the bits it's extracted but can't use. The same segment encoded at 4000 / 5000 / 6000 exhibits little or no shrinkage because the minimum and maximum bitrates are balanced -- the encoder can't draw more from the low end than it could possibly need at the high, so none of the bits are wasted.
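    The 37 500 000 bytes-per-minute figure is just the average rate times time (a sketch):

```python
avg_bps = 5000 * 1000                 # 5 Mbps average bitrate
bytes_per_minute = avg_bps * 60 // 8  # 60 seconds, 8 bits per byte
print(bytes_per_minute)               # 37500000
```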
    Quote Quote  
  21. Member DJRumpy's Avatar
    Join Date
    Sep 2002
    Location
    Dallas, Texas
    Search Comp PM
    groyal, read the IBM research paper. It's very clear on the benefits and drawbacks of the two encoding methods.

    Now you're talking about calculating the average bitrate, which is kind of off topic. I know the math. The figures were general; however, I wouldn't be 'cheating' myself of video quality unless my average was skewed enough that the 'easy' scenes suffered visibly. It's not the high motion scenes that people notice, it's the low motion scenes. If my average bitrate is high enough that the quality on these is acceptable, while still giving me enough swing to produce a good variable bitrate, then you will see no difference between the two.

    I should also mention that a 650 MB CD-R will hold substantially more than 650MB. A 700MB CD-R will hold approximately 830MB with overburn, or 800MB without.

    The maximum allowed bitrate for svcd is 2748 (including audio).

    I have yet to see an encoder that crashes due to a 0 minimum bitrate. I'm beginning to believe this is a myth. I have encoded hundreds of VCDs, SVCDs/CVDs, and at least 30 DVDs using a 0 min, and not one player has ever had a problem with them. If the setting were an issue, it's not likely the encoder would allow it, don't you agree? Yet the setting is there, in every encoder on the market that uses VBR.
    The theoretical MPEG-2 minimum bitrate is zero, but real-world encoders can have problems (and sometimes crash) given that figure because a minimum bitrate of zero is as logical as a maximum bitrate of a bajillion: it just can't quantize a GOP to that size.
    Even an empty frame, still requires bits for the keyframes. The GOP would not be empty. Low bitrate yes, but not empty.
    the encoder may extract more bits from the low-motion scenes than it needs to cover the high. In this case it has to throw the excess bits away.
    And this would mean? Do you think, that since the high motion scene doesn't require these bits, that adding them to it, or to the low motion scenes that also did not need the bitrate, will make it better anyway? No, it will simply give you a bigger file. This is the nature (read: benefit) of VBR. It is functioning as intended.
    Impossible to see the future is. The Dark Side clouds everything...
    Quote Quote  
  22. One thing you must take into account is the difference between what an MPEG encoder thinks is a good picture, and what the human eye thinks is a good picture. Just because an MPEG encoder says that a given scene needs 1.5Mbps doesn't mean that it will look good at that low a bitrate. MPEG 2 looks pretty bad once you drop below 2Mbps and I don't want any of my encodes to drop below this (especially on "still" pictures which are rarely still at all!). Therefore I would not recommend setting the low bitrate below 2Mbps - and certainly not at or near zero!
    Quote Quote  
  23. Member DJRumpy's Avatar
    Join Date
    Sep 2002
    Location
    Dallas, Texas
    Search Comp PM
    The tests were done with HUMAN viewers. Did anyone actually read the document? The tests were done with subjective viewers on a variety of easy and difficult clips.

    Most people on this board will readily agree that a min of 300 is perfectly acceptable. We were discussing 0 as an option. Setting your MIN at 2000 is pointless.
    Impossible to see the future is. The Dark Side clouds everything...
    Quote Quote  
  24. Member
    Join Date
    May 2002
    Location
    Rainy City, England
    Search Comp PM
    Originally Posted by DJRumpy
    Did anyone actually read the document?
    Can you step through the maths with us? 8)
    Quote Quote  
  25. groyal, read the ibm research paper. It's very clear on the benefits, and drawbacks between the two encoding methods.
    I have read Westerink's paper. It's a classic work, but it mustn't be relied upon as gospel for two important reasons:

    (1) The text of the article supplements the mathematics but it doesn't substitute for them. If you consider the math without the text, or the text without the math, you're only getting half the message.

    (2) If you rely upon the paper as the definitive source of truth, the same set of facts can be used to support or refute anyone's interpretation of anything, depending on whether you agree with them or not(*).

    It's not the high motion scenes that people notice, it's the low motion scenes. If my average bitrate is high enough that the quality on these is acceptable, while still giving me enough swing to produce a good variable bitrate, then you will see no difference between the two.
    (*) Case in point: Westerink's paper says "In visual evaluations by different viewers, it was found that the visual quality of an entire video sequence is judged by the minimum quality across the whole sequence." If we accept this statement as infallible truth, the minimum quality for SVCD is going to be found among the busier scenes, those that require a bitrate higher than the format can possibly deliver.

    If the difference between motion and bitrate is great enough, the busy scenes are going to show artifacts (macroblocks) that the eye isn't going to forgive just because the low-motion scenes look okay. Someone might be distracted by the motion and not see all the artifacts that are present, but once they catch a macroblock that's what they're going to remember.

    I should also mention that a 650 MB CD-R will hold substantially more than 650MB. A 700MB CD-R will hold approximately 830MB with overburn, or 800MB without.

    The maximum allowed bitrate for svcd is 2748 (including audio).

    I have yet to see an encoder that crashes due to a 0 minimum bitrate. I'm beginning to believe this is a myth.
    Actually a 650 MB CD will hold 74 mins * 60 seconds * 75 sectors * 2324 bytes per mode 2 sector, or 773 892 000 bytes.

    The theoretical SVCD multiplex rate is 150 sectors * 2324 bytes * 8 bits, or 2 788 800 bits per second (2788 kbps).
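    Both of those figures fall straight out of the CD sector geometry (a sketch: 75 sectors per second at 1x, so 150 at the 2x rate SVCD is read at):

```python
SECTOR_BYTES = 2324                     # mode 2 payload per sector

capacity = 74 * 60 * 75 * SECTOR_BYTES  # 74-minute disc, 75 sectors/sec
print(capacity)                         # 773892000 bytes

mux_bps = 150 * SECTOR_BYTES * 8        # 150 sectors/sec at 2x read speed
print(mux_bps)                          # 2788800 bits/sec (~2788 kbps)
```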

    Cinema Craft's web site used to have a support section which is where I learned of the problem. Apparently it's not accessible to the public anymore, so I wasn't able to verify this. Since it can't be substantiated I withdraw it.

    Apparently I've said something to put you on the defensive, and I apologize if that's the case. I'm not interested in debating the facts, but in exchanging them, and discussing their practical applications. I'm sure you feel the same.

    I have encoded hundreds of VCDs, SVCDs/CVDs, and at least 30 DVDs using a 0 min, and not one player has ever had a problem with them. If the setting were an issue, it's not likely the encoder would allow it, don't you agree? Yet the setting is there, in every encoder on the market that uses VBR.
    The player is a different story, I think: but there are in fact plenty of MPEG-2 encoders that don't let you specify a minimum bitrate -- Canopus ProCoder, Ligos LSX and GoMotion, etc.

    the encoder may extract more bits from the low-motion scenes than it needs to cover the high. In this case it has to throw the excess bits away.
    And this would mean? Do you think, that since the high motion scene doesn't require these bits, that adding them to it, or to the low motion scenes that also did not need the bitrate, will make it better anyway? No, it will simply give you a bigger file. This is the nature (read: benefit) of VBR. It is functioning as intended.
    This would mean that the file will shrink (i.e., the average bitrate will decrease) proportional to the overage which will have a negative impact on quality.

    The encoder should only compress the low-motion scenes to the extent the high-motion scenes need the extra bits. A balanced bitrate guarantees this because the encoder can't "save" any more bits than it can possibly "spend". When the minimum is too low and the encoder starts throwing away bits it never needed to extract in the first place, the average bitrate decreases along with it.

    If your objective is to achieve "constant visual quality" relative to the average bitrate, it's to your benefit to maintain as high an average bitrate as possible. Consider what would happen to a file encoded at 0 / 5000 / 5000: no bits are needed above the average and the encoder can remove as many as it likes from below. Would you expect that encoding to look worse, the same, or better than an encoding at a constant bitrate of 5000?

    I would expect the VBR to look worse than the CBR because it's not removing bits for any important reason, it's just hemorrhaging them out of the picture for the hell of it. The VBR file will be smaller than the CBR file (my guess? 25% to 50% smaller) but it's not going to look any better. In fact I expect it to look worse, because the "constant quality" the encoder is supposed to maintain will be relative to a lower effective average.

    My point is simply that a balanced bitrate is desirable no matter what the average. For this reason, the minimum bitrate is more important than it's given credit for. Set it too low, and the encoder throws bits away. Set it too high, and the encoder can't save enough bits to cover the high-motion scenes when it needs them.

    It isn't a radical idea, but it's not an obvious one either.
    Quote Quote  
  26. Member DJRumpy's Avatar
    Join Date
    Sep 2002
    Location
    Dallas, Texas
    Search Comp PM
    LOL Banj. Thanks for that. I needed a little perspective! 8)

    Don't worry groyal..not getting defensive with you. You've held an excellent debate so far. I was flabbergasted at energy80s' response. The whole zero-minimum setting has almost become as legendary as the 'bad audio' myth for CCE. When we all sat down and had a discussion, it turned out that most everyone either just preferred the hands-on work with some other GUI, or they were happy with what they got. I'd be curious to know what settings those encoders use for a min on multipass VBR. Nowhere in the DVD specification can I find anything that says a low bitrate is not allowed (by low, I mean close to or equal to 0). They only list the max. Even if a screen is black and unchanging, it still takes bits to encode. The frame would be handled like any other keyframe, with no changes from frame to frame, until it reached another keyframe or the content changed.
    Impossible to see the future is. The Dark Side clouds everything...
    Quote Quote  
  27. Originally Posted by DJRumpy
    I was flabbergasted at energy80s' response.
    What's to be flabbergasted about? As MPEG2 at full D1 looks shite below 2Mbps, I don't want my encodes dropping below this so I set the minimum at 2Mbps. Hardly a difficult concept to grasp.
    Quote Quote  
  28. Member DJRumpy's Avatar
    Join Date
    Sep 2002
    Location
    Dallas, Texas
    Search Comp PM
    MPEG-2 is almost identical to MPEG-1 in every way, except for some additional features, like a higher maximum bitrate. The compression method is pretty much identical.

    If your MPEG-2s look bad when they're less than 2 Mbps, then you're probably doing something wrong.
    Impossible to see the future is. The Dark Side clouds everything...
    Quote Quote  
  29. MPEG2 and MPEG1 are NOT identical. MPEG 1 was designed for bitrates below 2Mbps whereas MPEG 2 is meant for bitrates above 2Mbps.
    Quote Quote  
  30. Member
    Join Date
    Mar 2002
    Location
    United States
    Search Comp PM
    The whole point, and only point (from TMPGEnc support), of using 2-pass is to reduce the size of the resulting MPG file without loss of quality. No other reason to use it, according to TMPGEnc. I can get 90-minute to 2-hour movies that look great to fit on 2 CDs where I could not before using 2-pass.

    BUT .... I have now given up on 2-pass because of TMPG's freezing when I cut the movies. It is terrible. Also, I have had some difficulty with my standalone DVD player.

    By using CBR and controlling MPG size/quality, it opens up the usage of M2-PRO and Honestech's MPEG editor to split and manipulate the video afterwards. It is just simpler for me, and CDs cost me .25 apiece.

    Just two more quick things: TMPG is free ONLY FOR VCD. TMPG is NOT free for SVCD features. It is still the best bargain around.
    Also, if it hasn't been mentioned, there is a cache setting that speeds up the 2-pass quite a bit.

    Jon
    Quote Quote  


