TMPGEnc doesn't make many of those adjustments on its own (it does when you tell it to), and neither do TMPGEnc's editors. I don't appreciate HCenc making those decisions for me. If I wanted to re-cut this video in an MPEG editor, that kind of uncontrolled or over-controlled grouping would give me no end of grief. Nor do I appreciate its nearly total lack of documentation concerning settings and matrices. I understand the thing is free, but that's no excuse for wasting my time through thousands of posted recommendations (mostly made by people who haven't used anything else because they're swayed by other reviews and/or too cheap to pay for anything), posts that have bandied HCenc about as an easy-to-use app for non-professional compressionists -- yes, there really is a pro job by that title. I'm absolutely convinced that HCenc and similar apps should never be used by amateurs without strict scene-by-scene professionally trained supervision. I'm with unclescoob on that score, although I have learned to exercise a little more patience with some of this free stuff. I don't assume that the physics required for anime are the same as the requirements for non-toon video, which has a far more extensive range of visual detail, subtlety, and motion to contend with.
Last edited by sanlyn; 23rd Mar 2014 at 11:41.
The TMPGEnc editor isn't free, but there are many free cutters around that can work just as well.
Last edited by sanlyn; 23rd Mar 2014 at 11:41.
Good, just looking for more info about the source and how you got your sample. Thanks for staying with it.
Last edited by sanlyn; 23rd Mar 2014 at 11:42.
TMPGEnc Plus 2.5. As you say, TMPGEnc will happily let you change the GOP structure, force open or closed GOPs, and detect scene changes.
If I wanted to re-cut this video in an MPEG editor, the kinds of uncontrolled or over-controlled grouping would give me no end of grief.
The problem with RGB is that 8-bit digital conversion from YUV<>RGB is poor. True, the camera and display will work in RGB, usually converting to/from YUV, but they're not limited to digital 8-bit. Any source you or I will have to work with will be 8-bit YUV. Any encode we ever do will be 8-bit YUV. Forcing 8-bit RGB in the middle without cause (as TMPGEnc does) is bad - it re-quantises the video, introduces banding, and hard clips blacker-than-black and whiter-than-white.
In AviSynth you can use the PC level matrix (0-255 > 0-255) when converting to/from RGB to avoid most clipping and the worst of the re-quantisation. But (as far as I know - again, you may be able to educate me here) with TMPGEnc you're stuck with the normal matrix (16-235 > 0-255 and the reverse), so these problems are inevitable.
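To make the clipping concrete, here's a toy Python sketch - not AviSynth or TMPGEnc code, just the standard BT.601 level scaling applied to the luma axis:

```python
# Toy sketch (not AviSynth code): mapping one luma value to an 8-bit RGB
# gray level with the "normal" (16-235 -> 0-255) matrix vs the PC
# (0-255 -> 0-255) matrix.  Standard BT.601 scaling, luma only.

def yuv_to_rgb_gray(y, full_range=False):
    """Convert an 8-bit luma code value to an 8-bit RGB gray level."""
    if full_range:
        r = y                            # PC matrix: levels pass through
    else:
        r = round((y - 16) * 255 / 219)  # normal matrix: stretch 16-235
    return min(255, max(0, r))           # hard clip to 8 bits

# Blacker-than-black (Y=8) and whiter-than-white (Y=240) survive the
# full-range trip but are crushed by the limited-range one:
print(yuv_to_rgb_gray(8), yuv_to_rgb_gray(8, full_range=True))      # 0 8
print(yuv_to_rgb_gray(240), yuv_to_rgb_gray(240, full_range=True))  # 255 240
```

With the normal matrix anything below Y=16 or above Y=235 is gone for good after the trip to RGB; the PC matrix at least keeps those levels alive.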
I find TMPGEnc easy to use, and like some of the tools that are built in. It's deceptively powerful. But it seems to me that HcEnc has better VBR quality control. It can look better with a given set of restrictions (average + peak bitrate). It's not perfect though, and it's possible to make it look worse. It's the opposite of user friendly, though the manual is pretty good IMO and the defaults are pretty good too.
They're both toys compared to the real stuff. Yes, I have heard of professional compressionists.
YUV <=> RGB is a lossy transformation, period. They are non-overlapping color models, and you incur rounding errors with each conversion. The chroma will degrade - it's more evident on animated content, which typically features crisp color edges that become more blurry.
The point is, it's avoidable quality loss. But if you have to take a trip into RGB land for whatever reason, plan the workflow so you do it only once, because the quality loss compounds with each conversion.
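A quick back-of-the-envelope demo of the rounding error - full-range BT.601 coefficients on single pixels, purely illustrative and not any particular program's code:

```python
# Toy sketch: pixels through an RGB <-> YCbCr round trip, rounding to
# 8-bit integers each way (full-range BT.601 coefficients; purely
# illustrative, not any particular encoder's code).

def rgb_to_ycbcr(r, g, b):
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return tuple(min(255, max(0, round(v))) for v in (y, cb, cr))

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return tuple(min(255, max(0, round(v))) for v in (r, g, b))

# Count how many sample colors fail to survive even ONE round trip:
changed = 0
for r in range(0, 256, 17):
    for g in range(0, 256, 17):
        for b in range(0, 256, 17):
            if ycbcr_to_rgb(*rgb_to_ycbcr(r, g, b)) != (r, g, b):
                changed += 1
print(changed, "of 4096 sample colors were altered by a single trip")
```

Many colors come back off by a count or two after just one trip - and each extra trip gets another chance to nudge them further.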
I love coming here with a simple question, and watch the whole thing turn into a Columbo episode.
...combined with Law & Order
Nah, it's more like CSI, because there is scientific proof for these best practices. It's easy to demonstrate the differences.
It's up to you to decide where you're willing to take "short cuts." For example, some people are fine with capturing in DV or high-bitrate MPEG2; others want lossless. "Best practices" dictate that you avoid quality loss, and taking an RGB trip is certainly avoidable if you are just using TMPGEnc for MPEG2 encoding.
For your "grid artifacts", it's just a process of elimination with the filters and settings, going step by step backwards. Check the preview in AvsPmod with Histogram("luma") before encoding. If the grid is there, something else is causing it (not the actual encoder or encoding settings).
Last edited by sanlyn; 23rd Mar 2014 at 11:42.
No problem, 2B, I know what you're saying and agree. I'd be using HCenc right now if I had time to master it, and I do take it up now and then. I've hit TMPGenc enough to know its limitations by now and have 3 copies, so in a rush I depend on it rather than struggle with something I haven't mastered yet to my satisfaction. But I'll get there. Eventually.
Mainconcept Reference is considered the highest-quality encoder out there (and expensive, at what..900 dollars or so?). I've tried it and the results look great - but indistinguishable from HCenc, and I'm sure TMPGEnc's results are no different. The difference? Mainconcept is a transcoder, so it can convert your files to other formats (Blu-ray included). HCenc and TMPGEnc only do MPEG-2.
So what exactly does the uh..."real stuff" do otherwise? Just curious.
And as far as this color space stuff is concerned, Sanlyn, on prior posts you have mentioned prepping your clips in Avisynth and then sending them over to Virtualdub for NeatVideo finishing touches. Now those who use Neat know that it converts your video to YCbCr. Here's what the workflow looks like:
#1 - Isn't the native video colorspace YV12? Correct me if I'm wrong.
#2 - Sending that to Virtualdub converts it to RGB32.
#3 - Loading NeatVideo then converts you to YCbCr.
#4 - Send it back to Avisynth (for encoding) and you have to re-convert it to YV12.
Here's my question: What do you do in Avisynth concerning colorspace conversion before you send it to V-dub for Neat?
In VirtualDub, if you use "fast recompress" mode, you can bypass colorspace conversions (but you can't use filters).
Pro MPEG2 encoders (e.g. CCE SP3) allow for segment re-encoding and better fine tuning.
IMO, one of HCEnc's weaknesses is gradients and banding. Unfortunately, anime has lots of gradients, so IMO it's not a great choice for animated content. But overall it's still a good encoder.
Anyway, in this context, they refer to "shades" of a color or hue, e.g. a texture from light grey to dark grey, or light blue to medium blue, etc. Simple animation typically has many gradients (as opposed to typical live-action content). You often see flickering, banding or blocking in things like dark scenes featuring gradients, or something like a blue sky - it comes from the way most encoders distribute bits to "flat gradient" areas.
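A toy illustration of the effect in Python - the coarse quantiser here just stands in for an encoder starving a flat area of bits, it's not real encoder behaviour:

```python
# Toy illustration: a smooth ramp quantised coarsely (standing in for an
# encoder spending too few bits on a flat area) collapses into bands.

ramp = [i / 4 for i in range(1024)]             # smooth ramp, 0.0 .. 255.75
step = 16                                       # coarse quantiser
banded = [min(255, round(v / step) * step) for v in ramp]

levels = sorted(set(banded))
print(len(levels), "discrete levels where there was a smooth ramp")
```

Instead of over a thousand gently changing values you get a handful of flat plateaus, and the plateau boundaries are the contour lines you see flickering in a sky or a dark scene.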
AQ is supposed to help in HCEnc, but it's not very effective in my experience (AQ is ported from x264, and it's very effective with x264)
Dithering and adding noise can help if you have enough bitrate (similarly, OVERdenoising can make it worse). I'm certain that I mentioned all these things in some of your previous threads
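Here's a toy sketch of why dither helps - a 1-D ramp through a deliberately coarse quantiser, illustrative only:

```python
# Toy illustration: adding a little random noise before a coarse
# quantiser trades long flat bands for fine noise (which the eye
# forgives far more readily than contour lines).
import random
random.seed(1)

ramp = [i / 4 for i in range(1024)]   # smooth ramp, 0.0 .. 255.75
step = 16                             # deliberately coarse quantiser

def quantise(v):
    return min(255, max(0, round(v / step) * step))

banded   = [quantise(v) for v in ramp]
dithered = [quantise(v + random.uniform(-step / 2, step / 2)) for v in ramp]

def longest_flat_run(xs):
    """Length of the longest run of identical consecutive values."""
    best = run = 1
    for a, b in zip(xs, xs[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

print("longest flat run:", longest_flat_run(banded), "banded vs",
      longest_flat_run(dithered), "dithered")
```

Both versions have the same average error against the true ramp, but the dithered one has no long flat plateaus, so there are no visible band edges - which is also why you need enough bitrate to keep the encoder from smoothing the noise right back out.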
IMO, TMPGEnc isn't much better in respect to gradients than HCEnc.
Mainconcept handles gradients better, but all have their pros/cons in different areas
Last edited by unclescoob; 18th Mar 2012 at 17:34.
Why it would work in RGBA doesn't make sense to me.
No, it works in I420 natively, and can work in RGB if requested (not very efficient, according to the official doc).
Neat Video is an external filter, and as such the video must be converted to RGB32 for whatever reason; thus it is converted as follows:
YV12 (source) > RGB32 (Neat, full processing mode) > I420 (inside Neat) > YV12
Had the developers cared to fix this issue, we would be using Neat in I420 from start to finish as we speak.
With AviSynth you could convert to RGB beforehand and avoid the first YV12 > RGB conversion in VDub, but the gain would be extremely small. I never do it, but perhaps that's bad practice.
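Worth remembering that every YV12 <-> RGB leg in a chain like that also resamples the half-resolution chroma. A 1-D toy sketch of what that does to a crisp colour edge - illustrative only, using simple pair-averaging rather than any real resampler:

```python
# 1-D toy sketch of the chroma resampling hidden in a YV12 <-> RGB trip:
# chroma is stored at half resolution, so it gets downsampled (here, a
# simple pair average) and upsampled back.  Real resamplers are better,
# but the edge smearing is the same in kind.

def down_up(chroma):
    """Average pairs down to half resolution, then duplicate back up."""
    half = [(chroma[i] + chroma[i + 1]) // 2
            for i in range(0, len(chroma), 2)]
    return [v for v in half for _ in (0, 1)]

edge_aligned = [90] * 8 + [170] * 8       # crisp edge on a pair boundary
print(down_up(edge_aligned))              # survives intact

edge_offset = [90] * 7 + [170] * 9        # crisp edge off the boundary
print(down_up(edge_offset))               # a 130 half-way value smears it
```

An edge that happens to sit on a sampling boundary survives, but one that doesn't picks up an in-between chroma value - the "crisp anime edges go blurry" effect, compounding a little with each trip.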
Last edited by themaster1; 18th Mar 2012 at 17:49.
Oh big friggin' deal. It's not like you're subsampling or anything. Look, these color space wars seem useless to me. And even if they held any weight, none of you can even agree with each other on the issue. Yeah, you have to dip into this particular color space for a moment here, and then go there to do that other type of work, but as stated, visually, it does not make much of a difference. It does not degrade anything, nor does it make it any less "crisp". So from this point on, I'd like to stick to the original subject here, which was....shit, I forgot! Because, as usual, people come here with one issue and then everyone with their cutesy little "HO's" (hehe) drift off into the land of confusion. You all start disagreeing with each other on jargon and crap that has nothing to do with the poster's question, and the poster is left with his hands in the air saying "errr....guys? uh....my question?" Only to receive some arrogant response (Like David did once), suggesting that unless the poster "pays our wages", he/she shouldn't "tell us what to do". Of course, the "I just ripped him a new one" statement is always sugar coated with the typical smiley.
So, if you'll all excuse me, I'm going to scroll up. Waaaaaay up, and see what my ORIGINAL question was, and if it was even answered. I think Sanlyn answered it.
The reason is, "gradient" in terms of video, image processing, photos, etc. is a fairly straightforward description.
Either way, I assumed you were genuine and tried to answer your question.
Here's something for you all to munch on in the meantime. Some of you apparently could use the basics:
YUV
The color encoding system used for analog television worldwide (NTSC, PAL and SECAM). The YUV color space (color model) differs from RGB, which is what the camera captures and what humans view. When color signals were developed in the 1950s, it was decided to allow black and white TVs to continue to receive and decode monochrome signals, while color sets would decode both monochrome and color signals.
Luma and Color Difference Signals
The Y in YUV stands for "luma," which is brightness, or lightness, and black and white TVs decode only the Y part of the signal. U and V provide color information and are "color difference" signals of blue minus luma (B-Y) and red minus luma (R-Y). Through a process called "color space conversion," the video camera converts the RGB data captured by its sensors into either composite analog signals (YUV) or component versions (analog YPbPr or digital YCbCr). For rendering on screen, all these color spaces must be converted back again to RGB by the TV or display system.
Mathematically Equivalent to RGB
YUV also saves transmission bandwidth compared to RGB, because the chroma channels (B-Y and R-Y) carry only half the resolution of the luma. YUV is not compressed RGB; rather, Y, B-Y and R-Y are the mathematical equivalent of RGB. See color space conversion and YUV/RGB conversion formulas.
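For reference, the standard BT.601 weights behind "Y, B-Y, R-Y", in full-range form as a tiny Python sketch:

```python
# The standard BT.601 weights behind "Y, B-Y, R-Y" (full-range form).

def luma(r, g, b):
    """Y = 0.299 R + 0.587 G + 0.114 B  (BT.601)"""
    return 0.299 * r + 0.587 * g + 0.114 * b

r, g, b = 200, 120, 40
y = luma(r, g, b)
b_y = b - y                     # "B-Y" colour-difference signal
r_y = r - y                     # "R-Y" colour-difference signal
print(round(y, 1), round(b_y, 1), round(r_y, 1))

# "Mathematically equivalent" to RGB: R and B come straight back from
# the difference signals, and G from the remainder of the luma sum.
r_back = r_y + y
b_back = b_y + y
g_back = (y - 0.299 * r_back - 0.114 * b_back) / 0.587
print(round(r_back), round(g_back), round(b_back))   # 200 120 40
```

Note that for a gray pixel (R = G = B) both difference signals are zero - which is exactly why a black-and-white set could simply ignore U and V.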
Composite Video and S-video
The original TV standard combined luma (Y) and both color signals (B-Y, R-Y) into one channel, which uses one cable and is known as "composite video." An option known as "S-video" or "Y/C video" keeps the luma separate from the color signals, using one cable, but with separate wires internally. S-video is a bit sharper than composite video.
When luma and each of the color signals (B-Y and R-Y) are maintained in separate channels, it is called "component video," designated as YPbPr when in the analog domain and YCbCr when it is digital. Component video is the sharpest of all.
The Term Is Generic
In practice, YUV refers to the color difference encoding system whether composite or component, and "YUV," "Y, B-Y, R-Y" and "YPbPr" are used interchangeably for analog signals. Sometimes, "YCbCr," which is digital, is used interchangeably as well, which does not help to clarify the subject. See YPbPr, YCbCr, luma, ITU-R BT.601, YIQ and chroma subsampling.
RGB
(Red Green Blue) The computer's native color space and the system for capturing and displaying color images electronically. All TV, computer and electronic display screens create color by generating red, green and blue (RGB) lights. This is because our eyes are sensitive to red, green and blue, and our brain mixes the colors together (see trichromaticity). See RGBW and RGBY.
Cameras and scanners capture color with sensors that record the varying intensities of red, green and blue at each pixel location in the frame. See 24-bit color, CCD, scanner and digital camera.
Display and Printing (RGB and CMYK)
For screen display, red, green and blue subpixels (dots) are energized to the appropriate intensity. When all three subpixels are turned on high, white is produced. As intensities are equally lowered, shades of gray are derived. The base color of the screen appears when all subpixels are turned off.
For printing on paper, the CMYK color space is used, not RGB. Combinations of cyan, magenta, yellow and black ink make up the colors. White is typically derived by using white paper and no ink for those areas; however, if white is of critical importance, a white spot color can be added to the CMYK process. See CMYK and spot color.
Video Processing (RGB or YUV)
TV/video signals are mostly in the YUV color space. They are converted to RGB in the computer for editing when RGB is the desired output. If YUV is the desired output, and the video editing program supports YUV, there is no need to convert to RGB for internal processing. However, no matter which color space is used for editing, all data must be converted to RGB for screen display. See YUV, Adobe RGB, sRGB and color space.
Color Mixing Methods
The two major ways to create colors are RGB and CMY. RGB uses red, green and blue pixels for the display screen. CMY uses cyan, magenta and yellow inks to print on paper. In theory, equal parts of cyan, magenta and yellow ink make black, but the blacks tend to be muddy. Thus, a pure black fourth ink is always used in the four color CMYK process (K for blacK).
Color Space Conversion
Since displays, printers and TV/video all use different color spaces, conversion between them is necessary and commonplace.
YCbCr
One of two primary color spaces used to represent digital component video (the other is RGB). The difference between YCbCr and RGB is that YCbCr represents color as brightness and two color difference signals, while RGB represents color as red, green and blue. In YCbCr, the Y is the brightness (luma), Cb is blue minus luma (B-Y) and Cr is red minus luma (R-Y). See component video.
YCbCr Is Digital
MPEG compression, which is used in DVDs, digital TV and Video CDs, is coded in YCbCr, and digital camcorders (MiniDV, DV, Digital Betacam, etc.) output YCbCr over a digital link such as FireWire or SDI. The ITU-R BT.601 international standard for digital video defines both YCbCr and RGB color spaces. See chroma subsampling.
YPbPr Is Analog
YPbPr is the analog counterpart of YCbCr. It uses three cables for connection, whereas YCbCr uses only a single cable (see YPbPr). See YUV, YUV/RGB conversion formulas and ITU-R BT.601.
I don't believe this is how it works. I had a look at some test charts. The output pin after Neat Video is RGB (e.g. use the null filter in VDub and "show image formats"), yet there is no evidence of colorspace conversion or quality loss (even using YCbCr in Neat). If it had converted to I420, there would be evidence of additional quality loss. This suggests it works internally in RGB.
I knew what gradients were, I had just forgotten
The term, anyway. Anyone who's used Photoshop knows what gradients are. Even if they forgot.
Neat Video now supports CUDA acceleration; for this reason alone it is more likely that it works in I420 natively, as I pointed out in another thread.
But since you're pointing out that it outputs RGB (I'm guessing for previewing frames, which makes plenty of sense), the situation is even worse (one more YUV > RGB conversion).
Perhaps you didn't understand what I wrote above.
If there were one more YCbCr => RGB conversion done by Neat, it would be evidenced by worsening chroma if you exported to an RGB format. This is not what is observed. This suggests it's not really working in I420; internally it's working in RGB/A. This type of chroma handling is typical of many programs - it works with "YCbCr equivalents", but no actual colorspace/colormodel conversion occurs (beyond VDub's conversion to RGB, if using a YV12 source).