VideoHelp Forum
Page 2 of 4
Results 31 to 60 of 119
  1. Member 2Bdecided's Avatar
    Join Date: Nov 2007
    Location: United Kingdom
    Originally Posted by unclescoob View Post
    Like HcEnc, right?
    Yes.
  2. Banned
    Join Date: Oct 2004
    Location: New York, US
    Originally Posted by unclescoob View Post
    My goal is a professional output and I can't rely on monitor calibration.
    Originally Posted by sanlyn View Post
    Why not? Seems a rather odd thing to say.
    What I mean is, I do not want the artifacts in my encode at all. I would not want to rely on someone else's monitor calibration in hopes that the screen doesn't display them. I just don't want them in my clip.
    IMHO, I don't think you understand the concept of monitor calibration. It seems you have it backwards.
  3. Banned
    Join Date: Oct 2004
    Location: New York, US
    -30-
    Last edited by sanlyn; 23rd Mar 2014 at 12:41.
  4. Banned
    Join Date: Oct 2004
    Location: New York, US
    -30-
    Last edited by sanlyn; 23rd Mar 2014 at 12:41.
  5. Member
    Join Date: Dec 2010
    Location: New York
    Originally Posted by sanlyn View Post
    vob and mpeg are the same thing
    *ahem* I know that, doofus

    Regarding the other stuff, I gotta run. But I'm going to post back this afternoon.
  6. Banned
    Join Date: Oct 2004
    Location: New York, US
    -30-
    Last edited by sanlyn; 23rd Mar 2014 at 12:42.
  7. Member 2Bdecided's Avatar
    Join Date: Nov 2007
    Location: United Kingdom
    Originally Posted by sanlyn View Post
    Originally Posted by 2Bdecided View Post
    It's far better to have an encoder that adapts the GOP structure to the content, than one that sticks with a dumb fixed pattern.

    e.g. It makes good sense to use an I-frame on any scene change. An encoder that uses a fixed-GOP pattern may use a B-frame on a scene change. Problem is, a B-frame only describes the differences between its reference frames and the current one, typically for the sake of using fewer bits because differences are often small. How dumb would it be to use this type of encoding on a frame that's completely different from all the previous ones?! Yet that's exactly what fixed-GOP encoding will do.

    A good encoder will vary the GOP structure while remaining DVD compliant, and every decoder+player in the world will play such a stream just fine.

    Cheers,
    David.
    Don't assume that TMPGenc doesn't make many of those adjustments (it does, when you tell it to), and so do TMPGenc's editors. I don't appreciate HCenc making those decisions for me. If I wanted to re-cut this video in an MPEG editor, the kinds of uncontrolled or over-controlled grouping would give me no end of grief. Nor do I appreciate its nearly total lack of documentation concerning settings and matrices. I understand the thing is free, but that's no excuse for wasting my time through thousands of posted recommendations (mostly made by people who haven't used anything else because they're swayed by other reviews and/or too cheap to pay for anything), posts that have bandied HCenc about as an easy-to-use app for non-professional compressionists -- yes, there really is a pro job by that title. I'm absolutely convinced that HCenc and similar apps should never be used by amateurs without strict scene-by-scene professionally trained supervision. I'm with unclescoob on that score, although I have learned to exercise a little more patience with some of this free stuff. I don't assume that the physics required for anime are the same requirements as for non-toon video, which has a far more extensive range of visual detail, subtlety, and motion to contend with.

    As long as we're into a rant, let's add that using RGB is not a crime against nature. Yep, you have to be careful with how you do it. But I believe some of the anti-RGB mob fail to understand the following, and fail completely:

    All video (including film) begins as RGB. Ultimately, it ends up as RGB on your TV, PC and movie screens. YUV/YCbCr/YCyCb/YPbPr etc. are storage techniques that make crap out of the original RGB event and strip 40 to 80% of the original event information, not the other way around.

    Think I've had enough coffee for today.
    Sanlyn, I use both HcEnc and TMPGEnc Plus 2.5. As you say, TMPGEnc will happily let you change the GOP structure, force open or closed GOPs, and detect scene changes.

    If I wanted to re-cut this video in an MPEG editor, the kinds of uncontrolled or over-controlled grouping would give me no end of grief.
    It's either OCD or naive to be restricting the GOP structure of projects aimed at DVD, just to make possible future MPEG editing easier. For one thing, decent MPEG editors can cope with any kind of GOP structure (e.g. VideoReDo) with minimal re-encoding. For another, reducing the quality of your encodes by crippling the GOP choice on the off chance that you might want to cut it on a GOP boundary in future is a strange priority. The GOP boundary might not be where you want it, and then you'll have crippled the quality for no benefit at all. Plus any DVD-compliant encode starts a new GOP every half second or so. Plus any open-GOP encode needs intelligent MPEG editing anyway. I guessed it from your first post, but now it seems even clearer: you have some strange hang-up about GOP structure which has no basis in fact (unless you know something I don't - quite possible!).
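For what it's worth, the scene-change logic being argued about can be sketched in a few lines. This is a toy illustration of adaptive GOP placement, not any real encoder's algorithm; the 18-frame NTSC GOP cap and a simple I/B/B/P sub-pattern are the only constraints modeled:

```python
MAX_GOP = 18  # DVD-compliant GOP limit for NTSC (15 for PAL)

def assign_frame_types(num_frames, scene_changes):
    """Toy scene-adaptive GOP placement, in display order."""
    scene_changes = set(scene_changes)
    types, gop_len = [], 0
    for n in range(num_frames):
        if n == 0 or n in scene_changes or gop_len >= MAX_GOP:
            types.append('I')   # new GOP at a cut or at the size limit
            gop_len = 1
        elif gop_len % 3 == 0:
            types.append('P')   # simple I B B P B B P ... sub-pattern
            gop_len += 1
        else:
            types.append('B')
            gop_len += 1
    return types

# A cut at frame 25 gets an I-frame, instead of the B-frame
# a fixed pattern would have put there.
print(''.join(assign_frame_types(40, scene_changes={25})))
```

A fixed-pattern encoder would emit the same I/B/P sequence regardless of `scene_changes`; here the cut always lands on an I-frame, which is the point 2Bdecided is making (real encoders additionally reorder B-frames into coding order and weigh actual bit costs).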

    The problem with RGB is that 8-bit digital conversion from YUV<>RGB is poor. True, the camera and display will work in RGB, usually converting to/from YUV, but they're not limited to digital 8-bit. Any source you or I will have to work with will be 8-bit YUV. Any encode we ever do will be 8-bit YUV. Forcing 8-bit RGB in the middle without cause (as TMPGEnc does) is bad - it re-quantises the video, introduces banding, and hard clips blacker-than-black and whiter-than-white.

    In AVIsynth you can use the PC level matrix (0-255>0-255) when converting to/from RGB - to avoid most clipping, and the worst of the re-quantisation. But (as far as I know - again, you may be able to educate me here) with TMPGEnc you're stuck with the normal matrix (16-235>0-255 and the reverse), so these problems are inevitable.
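The clipping David describes is easy to see with luma-only arithmetic. This is an illustrative sketch using BT.601-style scaling; a real conversion also involves the chroma planes:

```python
def y_to_rgb_studio(y):
    """'Normal'/TV matrix: 16-235 stretches to 0-255, the rest hard-clips."""
    return min(255, max(0, round((y - 16) * 255 / 219)))

def y_to_rgb_pc(y):
    """'PC' level matrix: 0-255 passes straight through, nothing clips."""
    return min(255, max(0, y))

for y in (10, 16, 235, 240):   # 10 and 240 are outside the nominal range
    print(y, '->', y_to_rgb_studio(y), 'vs', y_to_rgb_pc(y))
```

With the studio matrix, blacker-than-black (Y=10) and whiter-than-white (Y=240) are gone for good; the PC matrix leaves them intact, and since its scale factor is 1 it also re-quantizes less.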

    I find TMPGEnc easy to use, and like some of the tools that are built in. It's deceptively powerful. But it seems to me that HcEnc has better VBR quality control. It can look better with a given set of restrictions (average + peak bitrate). It's not perfect though, and it's possible to make it look worse. It's the opposite of user friendly, though the manual is pretty good IMO and the defaults are pretty good too.

    They're both toys compared to the real stuff. Yes, I have heard of professional compressionists.

    Cheers,
    David.
  8. Member
    Join Date: Sep 2007
    Location: Canada
    Originally Posted by 2Bdecided View Post
    The problem with RGB is that 8-bit digital conversion from YUV<>RGB is poor. True, the camera and display will work in RGB, usually converting to/from YUV, but they're not limited to digital 8-bit. Any source you or I will have to work with will be 8-bit YUV. Any encode we ever do will be 8-bit YUV. Forcing 8-bit RGB in the middle without cause (as TMPGEnc does) is bad - it re-quantises the video, introduces banding, and hard clips blacker-than-black and whiter-than-white.

    In AVIsynth you can use the PC level matrix (0-255>0-255) when converting to/from RGB - to avoid most clipping, and the worst of the re-quantisation. But (as far as I know - again, you may be able to educate me here) with TMPGEnc you're stuck with the normal matrix (16-235>0-255 and the reverse), so these problems are inevitable.
    David knows this already, but it's worth mentioning that levels and quantization are only part of the issue.

    YUV <=> RGB is a lossy transformation, period. They are non-overlapping color models, and you incur rounding errors with each conversion. The chroma will degrade; it's more evident on animated content, which typically features crisp color edges that become blurrier.

    The point is, it's avoidable quality loss. But if you have to take a trip into RGB land for whatever reason, plan the workflow so you do it only once, because the quality loss compounds with each conversion.
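A quick numeric sketch of that rounding loss, using full-range BT.601 formulas for simplicity (a real pipeline also subsamples chroma, which loses more):

```python
def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return tuple(min(255, max(0, round(v))) for v in (y, cb, cr))

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return tuple(min(255, max(0, round(v))) for v in (r, g, b))

before = (31, 77, 203)
after = ycbcr_to_rgb(*rgb_to_ycbcr(*before))
print(before, '->', after)   # the pixel comes back slightly off
```

Most pixels land within a step or two of where they started, which is invisible on its own; the problem is that every extra RGB trip rolls the dice again, and subsampled chroma edges (the crisp color edges mentioned above) fare much worse.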
  9. Member
    Join Date: Dec 2010
    Location: New York
    I love coming here with a simple question, and watch the whole thing turn into a Columbo episode.
  10. Member
    Join Date: Dec 2010
    Location: New York
    ...combined with Law & Order
  11. Member
    Join Date: Sep 2007
    Location: Canada
    Nah, it's more like CSI, because there is scientific proof for these best practices. It's easy to demonstrate the differences.

    It's up to you to decide where you're willing to take "shortcuts." Example: some people are fine with capturing in DV or high-bitrate MPEG2, others want lossless. "Best practices" dictate that you avoid quality loss, and taking an RGB trip is certainly avoidable if you are just using TMPGEnc for MPEG2 encoding.

    For your "grid artifacts", it's just a process of elimination with the filters and settings, going step by step backwards. Check the preview in AvsPmod with histogram("luma") before encoding. If the grid is there, something else is causing it (not the actual encoder or encoding settings).
  12. Banned
    Join Date: Oct 2004
    Location: New York, US
    -30-
    Last edited by sanlyn; 23rd Mar 2014 at 12:42.
  13. Banned
    Join Date: Oct 2004
    Location: New York, US
    No problem, 2B, I know what you're saying and agree. I'd be using HCenc right now if I had time to master it, and I do take it up now and then. I've hit TMPGenc enough to know its limitations by now and have 3 copies, so in a rush I depend on it rather than struggle with something I haven't mastered yet to my satisfaction. But I'll get there. Eventually.
  14. Member
    Join Date: Dec 2010
    Location: New York
    Originally Posted by 2Bdecided View Post

    They're both toys compared to the real stuff.
    What exactly does that mean? If anything, Mainconcept Reference is considered the highest-quality (and expensive, at what..900 dollars or so?) encoder out there. I've tried it and the results look great, but they're indistinguishable from HcEnc's, and I'm sure that TMPGEnc's results are no different. The difference? Mainconcept is a transcoder, so it can convert your files to other formats (Blu-ray included). HcEnc and TMPGEnc only do MPEG-2.

    So what exactly does the uh..."real stuff" do otherwise? Just curious.
  15. Member
    Join Date: Dec 2010
    Location: New York
    And as far as this color space stuff is concerned, Sanlyn, on prior posts you have mentioned prepping your clips in Avisynth and then sending them over to Virtualdub for NeatVideo finishing touches. Now those who use Neat know that it converts your video to YCbCr. Here's what the workflow looks like:

    #1- Isn't the native video colorspace YV12? Correct me if I'm wrong.

    #2 - Sending that to Virtualdub converts it to RGB32.

    #3 - Loading NeatVideo then converts you to YCbCr.

    #4 - Send it back to Avisynth (for encoding) and you have to re-convert it to YV12

    Here's my question: What do you do in Avisynth concerning colorspace conversion before you send it to V-dub for Neat?
  16. Member
    Join Date: Sep 2007
    Location: Canada
    Originally Posted by unclescoob View Post
    #1- Isn't the native video colorspace YV12? Correct me if I'm wrong.
    yes, native mpeg2 DVD video is YV12


    #2 - Sending that to Virtualdub converts it to RGB32.
    Not necessarily. Only if you use "full processing mode" and don't specify color depth options, or use filters that work in RGB (almost all of them in vdub). If you use "fast recompress" mode, you can bypass colorspace conversions (but you can't use filters).


    #3 - Loading NeatVideo then converts you to YCbCr.
    I'm fairly certain Neat works in RGB, and doesn't convert to YCbCr. Even when the dials say YCbCr, it's only "YCbCr equivalents". Many color correction & grading programs do that as well. Internally, they work in RGB/A; no additional conversions are actually performed.

    Here's my question: What do you do in Avisynth concerning colorspace conversion before you send it to V-dub for Neat?
    I'm guessing he's referring to making sure levels are within the Y'CbCr legal range (16 ≤ Y' ≤ 235, 16 ≤ CbCr ≤ 240), since otherwise you get clipping of superbrights/darks, or to using a full-range matrix when converting to RGB.
  17. Member
    Join Date: Sep 2007
    Location: Canada
    Originally Posted by unclescoob View Post
    Originally Posted by 2Bdecided View Post

    They're both toys compared to the real stuff.
    What exactly does that mean? If anything, Mainconcept Reference is considered the highest-quality (and expensive, at what..900 dollars or so?) encoder out there. I've tried it and the results look great, but they're indistinguishable from HcEnc's, and I'm sure that TMPGEnc's results are no different. The difference? Mainconcept is a transcoder, so it can convert your files to other formats (Blu-ray included). HcEnc and TMPGEnc only do MPEG-2.

    So what exactly does the uh..."real stuff" do otherwise? Just curious.

    Pro MPEG2 encoders (e.g. CCE SP3) allow for segment re-encoding and better fine tuning.

    IMO, one of HCEnc's weaknesses is gradients and banding. Unfortunately, anime has lots of gradients, so it's not a great choice for animated content. But overall it's still a good encoder.
  18. Member
    Join Date: Dec 2010
    Location: New York
    1. What are gradients?

    2. What is a better choice for animated content? I was considering TMPGenc
  19. Member
    Join Date: Sep 2007
    Location: Canada
    Originally Posted by unclescoob View Post
    1. What are gradients?
    really??

    Anyways, in this context, they refer to "shades" of a color or hue, e.g. a texture from light grey to dark grey, or light blue to medium blue, etc. Simple animation typically has many gradients (as opposed to typical live-action content). You often see flickering, banding or blocking in things like dark scenes featuring gradients, or something like a blue sky; it comes from the way most encoders distribute bits to "flat gradient" areas.

    AQ is supposed to help in HCEnc, but it's not very effective in my experience (AQ is ported from x264, and it's very effective with x264).

    Dithering and adding noise can help if you have enough bitrate (similarly, OVERdenoising can make it worse). I'm certain that I mentioned all these things in some of your previous threads.
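Both effects (banding on a smooth gradient, and dither breaking it up) can be simulated with a toy quantizer. The 16-step quantizer below just stands in for a bit-starved encoder; the numbers are otherwise arbitrary:

```python
import random
from itertools import groupby

random.seed(1)                      # deterministic for the example
STEP = 16                           # coarse quantizer ~ starved bitrate

def quantize(v):
    return min(255, max(0, round(v / STEP) * STEP))

ramp = [i * 255 / 999 for i in range(1000)]               # smooth gradient
banded = [quantize(v) for v in ramp]                      # flat bands
dithered = [quantize(v + random.uniform(-STEP, STEP)) for v in ramp]

def longest_run(seq):
    """Length of the longest flat stretch (a visible band)."""
    return max(len(list(g)) for _, g in groupby(seq))

print('longest band without dither:', longest_run(banded))
print('longest band with dither   :', longest_run(dithered))
```

Without dither the ramp collapses into wide flat bands; with noise the same quantizer produces short alternating runs that average back to a smooth gradient, which is why adding grain before encoding can hide banding (given enough bitrate to keep the grain).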


    2. What is a better choice for animated content? I was considering TMPGenc
    It depends on the type of animated content. Modern CGI stuff is different to encode than, say, this '80s Ghostbusters material.

    IMO, TMPGEnc isn't much better in respect to gradients than HCEnc.

    Mainconcept handles gradients better, but all have their pros/cons in different areas
  20. Member
    Join Date: Dec 2010
    Location: New York
    Originally Posted by poisondeathray View Post

    really??
    Was there an issue with the question, or have we forgotten our humble beginnings of the first time we wondered what the term meant?
    Last edited by unclescoob; 18th Mar 2012 at 18:34.
  21. Member
    Join Date: Dec 2010
    Location: New York
    double post
  22. Member themaster1's Avatar
    Join Date: Nov 2006
    Location: France
    Why it would work in RGBA doesn't make sense to me.

    No, it works in I420 natively and can work in RGB if requested (not very efficient, according to the official doc).

    Neat Video is an external filter and as such must be fed RGB32 for whatever reason, thus the video is converted as follows:

    YV12 (source) > RGB32 (Neat, full processing mode) > I420 (inside Neat) > YV12

    Had the developers cared to fix this issue, we would be using Neat in I420 from start to finish as we speak.

    With AviSynth you could convert to RGB beforehand and avoid the first YV12>RGB conversion in vdub, but the gain would be extremely small. I know I never do it, but perhaps that's bad practice.
    Last edited by themaster1; 18th Mar 2012 at 18:49.
  23. Member
    Join Date: Dec 2010
    Location: New York
    Oh big friggin' deal. It's not like you're subsampling or anything. Look, these color space wars seem useless to me. And even if they held any weight, none of you can even agree with each other on the issue. Yeah, you have to dip into this particular color space for a moment here, and then go there to do that other type of work, but as stated, visually, it does not make much of a difference. It does not degrade anything, nor does it make it any less "crisp". So from this point on, I'd like to stick to the original subject here, which was....shit, I forgot! Because, as usual, people come here with one issue and then everyone with their cutesy little "HO's" (hehe) drift off into the land of confusion. You all start disagreeing with each other on jargon and crap that has nothing to do with the poster's question, and the poster is left with his hands in the air saying "errr....guys? uh....my question?" Only to receive some arrogant response (Like David did once), suggesting that unless the poster "pays our wages", he/she shouldn't "tell us what to do". Of course, the "I just ripped him a new one" statement is always sugar coated with the typical smiley.

    So, if you'll all excuse me, I'm going to scroll up. Waaaaaay up, and see what my ORIGINAL question was, and if it was even answered. I think Sanlyn answered it.

  24. Member
    Join Date: Sep 2007
    Location: Canada
    Originally Posted by unclescoob View Post
    Originally Posted by poisondeathray View Post

    really??
    Was there an issue with the question, or have we forgotten our humble beginnings of the first time we wondered what the term meant?
    There was an issue. I wasn't sure if you had reverted to being a dick again or were genuinely asking a question.

    The reason is, "gradient" in terms of video, image processing, photos, etc. is a fairly straightforward description.

    Either way, I assumed that you were genuine and tried to answer your question.
  25. Member
    Join Date: Dec 2010
    Location: New York
    Here's something for you all to munch on in the meantime. Some of you apparently could use the basics:

    YUV:

    The color encoding system used for analog television worldwide (NTSC, PAL and SECAM). The YUV color space (color model) differs from RGB, which is what the camera captures and what humans view. When color signals were developed in the 1950s, it was decided to allow black and white TVs to continue to receive and decode monochrome signals, while color sets would decode both monochrome and color signals.

    Luma and Color Difference Signals
    The Y in YUV stands for "luma," which is brightness, or lightness, and black and white TVs decode only the Y part of the signal. U and V provide color information and are "color difference" signals of blue minus luma (B-Y) and red minus luma (R-Y). Through a process called "color space conversion," the video camera converts the RGB data captured by its sensors into either composite analog signals (YUV) or component versions (analog YPbPr or digital YCbCr). For rendering on screen, all these color spaces must be converted back again to RGB by the TV or display system.

    Mathematically Equivalent to RGB
    YUV also saves transmission bandwidth compared to RGB, because the chroma channels (B-Y and R-Y) carry only half the resolution of the luma. YUV is not compressed RGB; rather, Y, B-Y and R-Y are the mathematical equivalent of RGB. See color space conversion and YUV/RGB conversion formulas.
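That "mathematical equivalence" can be checked directly: with the BT.601 luma weights and the raw difference signals (no scaling or 8-bit rounding), the round trip to YUV and back is exact. A sketch:

```python
def rgb_to_yuv(r, g, b):
    """BT.601 luma plus unscaled colour-difference signals."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, b - y, r - y              # Y, U = B-Y, V = R-Y

def yuv_to_rgb(y, u, v):
    b = y + u
    r = y + v
    g = (y - 0.299 * r - 0.114 * b) / 0.587   # solve the luma equation for G
    return r, g, b

rgb = (0.25, 0.50, 0.75)
back = yuv_to_rgb(*rgb_to_yuv(*rgb))
print([round(c, 12) for c in back])   # recovers the input (up to float noise)
```

The losses discussed earlier in the thread come from 8-bit quantization and chroma subsampling of these signals, not from the transform itself.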

    Composite Video and S-video
    The original TV standard combined luma (Y) and both color signals (B-Y, R-Y) into one channel, which uses one cable and is known as "composite video." An option known as "S-video" or "Y/C video" keeps the luma separate from the color signals, using one cable, but with separate wires internally. S-video is a bit sharper than composite video.

    Component Video
    When luma and each of the color signals (B-Y and R-Y) are maintained in separate channels, it is called "component video," designated as YPbPr when in the analog domain and YCbCr when it is digital. Component video is the sharpest of all.

    The Term Is Generic
    In practice, YUV refers to the color difference encoding system whether composite or component, and "YUV," "Y, B-Y, R-Y" and "YPbPr" are used interchangeably for analog signals. Sometimes, "YCbCr," which is digital, is used interchangeably as well, which does not help to clarify the subject. See YPbPr, YCbCr, luma, ITU-R BT.601, YIQ and chroma subsampling.


    RGB:

    (Red Green Blue) The computer's native color space and the system for capturing and displaying color images electronically. All TV, computer and electronic display screens create color by generating red, green and blue (RGB) lights. This is because our eyes are sensitive to red, green and blue, and our brain mixes the colors together (see trichromaticity). See RGBW and RGBY.

    Capturing
    Cameras and scanners capture color with sensors that record the varying intensities of red, green and blue at each pixel location in the frame. See 24-bit color, CCD, scanner and digital camera.

    Display and Printing (RGB and CMYK)
    For screen display, red, green and blue subpixels (dots) are energized to the appropriate intensity. When all three subpixels are turned on high, white is produced. As intensities are equally lowered, shades of gray are derived. The base color of the screen appears when all subpixels are turned off.

    For printing on paper, the CMYK color space is used, not RGB. Combinations of cyan, magenta, yellow and black ink make up the colors. White is typically derived by using white paper and no ink for those areas; however, if white is of critical importance, a white spot color can be added to the CMYK process. See CMYK and spot color.

    Video Processing (RGB or YUV)
    TV/video signals are mostly in the YUV color space. They are converted to RGB in the computer for editing when RGB is the desired output. If YUV is the desired output, and the video editing program supports YUV, there is no need to convert to RGB for internal processing. However, no matter which color space is used for editing, all data must be converted to RGB for screen display. See YUV, Adobe RGB, sRGB and color space.


    Color Mixing Methods
    The two major ways to create colors are RGB and CMY. RGB uses red, green and blue pixels for the display screen. CMY uses cyan, magenta and yellow inks to print on paper. In theory, equal parts of cyan, magenta and yellow ink make black, but the blacks tend to be muddy. Thus, a pure black fourth ink is always used in the four color CMYK process (K for blacK).
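That paragraph translates directly into the usual textbook RGB-to-CMYK conversion (a naive formula, not a print-calibrated profile):

```python
def rgb_to_cmyk(r, g, b):
    """Naive conversion; inputs and outputs are in 0..1."""
    c, m, y = 1 - r, 1 - g, 1 - b      # CMY are the complements of RGB
    k = min(c, m, y)                   # common part, printed as pure black
    if k == 1:
        return 0.0, 0.0, 0.0, 1.0      # pure black: ink only
    return tuple((x - k) / (1 - k) for x in (c, m, y)) + (k,)

print(rgb_to_cmyk(1, 0, 0))   # pure red: no cyan, full magenta and yellow
```

Pulling the common component out into K is exactly the "muddy black" fix the text describes.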

    Color Space Conversion
    Since displays, printers and TV/video all use different color spaces, conversion between them is necessary and commonplace.

    YCbCr :

    One of two primary color spaces used to represent digital component video (the other is RGB). The difference between YCbCr and RGB is that YCbCr represents color as brightness and two color difference signals, while RGB represents color as red, green and blue. In YCbCr, the Y is the brightness (luma), Cb is blue minus luma (B-Y) and Cr is red minus luma (R-Y). See component video.

    YCbCr Is Digital
    MPEG compression, which is used in DVDs, digital TV and Video CDs, is coded in YCbCr, and digital camcorders (MiniDV, DV, Digital Betacam, etc.) output YCbCr over a digital link such as FireWire or SDI. The ITU-R BT.601 international standard for digital video defines both YCbCr and RGB color spaces. See chroma subsampling.

    YPbPr Is Analog
    YPbPr is the analog counterpart of YCbCr. It uses three cables for connection, whereas YCbCr uses only a single cable (see YPbPr). See YUV, YUV/RGB conversion formulas and ITU-R BT.601.
  26. Member
    Join Date: Sep 2007
    Location: Canada
    Originally Posted by themaster1 View Post
    Why it would work in RGBA doesn't make sense to me.

    No, it works in I420 natively and can work in RGB if requested (not very efficient, according to the official doc).

    Neat Video is an external filter and as such must be fed RGB32 for whatever reason, thus the video is converted as follows:

    YV12 (source) > RGB32 (Neat, full processing mode) > I420 (inside Neat) > YV12

    Had the developers cared to fix this issue, we would be using Neat in I420 from start to finish as we speak.

    With AviSynth you could convert to RGB beforehand and avoid the first YV12>RGB conversion in vdub, but the gain would be extremely small. I know I never do it, but perhaps that's bad practice.

    I don't believe this is how it works. I had a look at some test charts. The output pin after Neat Video is RGB (e.g., use the null filter in vdub and "show image formats"), yet there is no evidence of colorspace conversion or quality loss (even using YCbCr in Neat). If it had converted to I420, there would be evidence of additional quality loss. This suggests it works internally in RGB.
  27. Member
    Join Date: Dec 2010
    Location: New York
    Originally Posted by poisondeathray View Post
    I wasn't sure if you had reverted to being a dick again
    OoooOoooOooohhooo hooooooOOooo!!!!!! I love it when you do that!!!

    Yeah, that's how I like it, PDR!
  28. Member
    Join Date: Dec 2010
    Location: New York
    I knew what gradients were, I had just forgotten

    The term, anyway. Anyone who's used Photoshop knows what gradients are. Even if they forgot.
  29. Member themaster1's Avatar
    Join Date: Nov 2006
    Location: France
    Originally Posted by poisondeathray View Post
    Originally Posted by themaster1 View Post
    Why it would work in RGBA doesn't make sense to me.

    No, it works in I420 natively and can work in RGB if requested (not very efficient, according to the official doc).

    Neat Video is an external filter and as such must be fed RGB32 for whatever reason, thus the video is converted as follows:

    YV12 (source) > RGB32 (Neat, full processing mode) > I420 (inside Neat) > YV12

    Had the developers cared to fix this issue, we would be using Neat in I420 from start to finish as we speak.

    With AviSynth you could convert to RGB beforehand and avoid the first YV12>RGB conversion in vdub, but the gain would be extremely small. I know I never do it, but perhaps that's bad practice.

    I don't believe this is how it works. I had a look at some test charts. The output pin after Neat Video is RGB (e.g., use the null filter in vdub and "show image formats"), yet there is no evidence of colorspace conversion or quality loss (even using YCbCr in Neat). If it had converted to I420, there would be evidence of additional quality loss. This suggests it works internally in RGB.
    Neat Video now supports CUDA; for this reason alone it is more likely that it works in I420 natively, as I have pointed out in another thread.
    But since you're pointing out that it outputs RGB (I'm guessing for previewing frames, which makes plenty of sense), the situation is even worse (one more YUV>RGB conversion).
  30. Member
    Join Date: Sep 2007
    Location: Canada
    Originally Posted by themaster1 View Post

    Neat Video now supports CUDA; for this reason alone it is more likely that it works in I420 natively, as I have pointed out in another thread.
    But since you're pointing out that it outputs RGB (I'm guessing for previewing frames, which makes plenty of sense), the situation is even worse (one more YUV>RGB conversion).
    No, not from previewing frames. Even before you export, the null filter indicates it's RGB. If you export YV12, then you are causing colorspace conversion, not Neat. Neat exports RGB

    Perhaps you didn't understand what I wrote above.

    If there was one more YCbCr=>RGB conversion done by Neat, it would be evidenced by worsening chroma if you exported an RGB format. This is not what is observed. This suggests it's not really working in I420; internally it's working in RGB/A. This type of chroma handling is typical of many programs: it's "YCbCr equivalents", but no actual colorspace/color model conversion occurs (beyond vdub's conversion to RGB, if using a YV12 source).


