VideoHelp Forum
  1. Member (joined Jul 2014, France)
    Can a difference between x264vfw versions explain a shift in saturation/brightness? The long story: in 2008 I edited a DV movie which was smart rendered to a DV copy and also compressed with x264 core 105. Recently I revisited the same video, using the same DV source copy but compressing with x264 core 130. Settings in both cases were the out-of-the-box medium preset, with QTGMC to deinterlace. A side-by-side comparison shows the earlier encode to be somewhat darker, or more saturated. I doubt this is a playback issue: MPC-HC isn't using overlay, and the difference isn't as severe as you'd expect from the PC vs. TV color range problem. Curiously, the new encode is the one closer to the DV source. I suppose I could go back to the DV and give it a slight boost, but that would mean another generation loss.

    Another curiosity: x264 core 130 includes a 'convert to YV12' setting, while the core 105 encoder had no such 'front page' color space options at all. I include the relevant MediaInfo below in case it's of help. Thanks in advance for any hunches.

    x264 core 105 r1724 b02df7b
    Encoding settings : cabac=1 / ref=3 / deblock=1:0:0 / analyse=0x3:0x113 / me=hex / subme=7 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-2 / threads=3 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / constrained_intra=0 / bframes=3 / b_pyramid=2 / b_adapt=1 / b_bias=0 / direct=1 / weightb=1 / open_gop=0 / weightp=2 / keyint=250 / keyint_min=25 / scenecut=40 / intra_refresh=0 / rc_lookahead=40 / rc=crf / mbtree=1 / crf=20.0 / qcomp=0.60 / qpmin=10 / qpmax=51 / qpstep=4 / ip_ratio=1.40 / aq=1:1.00

    x264 core 130 r2274bm c832fe9
    Encoding settings : cabac=1 / ref=3 / deblock=1:-1:-1 / analyse=0x3:0x113 / me=hex / subme=7 / psy=1 / psy_rd=1.00:0.15 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-3 / threads=4 / lookahead_threads=4 / sliced_threads=1 / slices=4 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=0 / weightp=2 / keyint=250 / keyint_min=25 / scenecut=40 / intra_refresh=0 / rc=crf / mbtree=0 / crf=20.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / ip_ratio=1.40 / aq=1:1.00
  2. I'm not aware of any changes to x264(vfw) that would cause brightness/saturation changes. If the difference is real, it's more likely something else in your processing. Different DV decoder?

    Open the two videos you're comparing in the opposite order. Is the previously brighter video still the bright one?
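
    Or take the players out of it entirely and compare the two encodes through one identical decode path. A rough sketch with placeholder filenames; it assumes a DirectShow decoder (ffdshow, etc.) can open both files:
    Code:
    old = DirectShowSource("encode_2008_core105.avi")
    new = DirectShowSource("encode_2014_core130.avi")
    Interleave(old.Subtitle("x264 core 105"), new.Subtitle("x264 core 130"))  # step frame by frame, or use StackHorizontal() for side by side
    If they still differ when decoded identically, the difference really is baked into the encodes. (DirectShowSource is fine for a quick eyeball test like this, even if you wouldn't use it for frame-accurate processing.)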
  3. Member (joined Jul 2014, France)
    Yes, reversing the order was the first thing I tried, but no banana. I don't suppose a switch in hardware would have any effect (new computer, new GPU)? But the DV decoder might well have been different the first time around. I used/use Pinnacle Studio (for better or mostly worse). I don't know whether they switched decoders between v14 and v15, but wouldn't that only affect the rendered DV 'source'? The newer encode is more faithful to the original DV from several years back, but I simply like it less. I remember there were quality differences when I used to go the DVD route; the MPEG-2 codecs are, I think, known to vary. Can the same be said in the DV world? Are there known issues with certain DV decoders? Is one known to give a bit more sparkle than the others?
  4. Originally Posted by Sumsaris View Post
    Are there known issues with certain DV decoders? Is one known to give a bit more sparkle than the others?
    Quicktime screws up everything it touches. The Panasonic DV codec converts to RGB with a rec.601 matrix. Cedocida lets you specify almost all parameters.
  5. Member (joined Jul 2014, France)
    Went back and managed to find another DV to x264 core 105 conversion from 2011 done on the former computer hardware. No variations here whatsoever. The original DV source, old and new x264 encodes (same parameters) all produce similar brightness/density. So this effectively rules out x264 and graphics card as potential issues. I also tried qtgmc variations, encoding directly in virtualdubmod w/o avs script and also did a test from the old machine, but couldn't reproduce the difference. Oh well, it's pretty subtle, so I suppose I can live with it but it would still be nice to know what's going on.

    Thanks jagabo for the input, but does the DV codec really matter in my case? To be truthful, I've never paid attention to the decoders in the workflow, as they always seemed beneath the radar. But again, doesn't the smart rendering process just ignore whatever flavor of DV decoder you're using? And even if it didn't, aren't those decoders pretty much locked in within Pinnacle or Premiere to boot?
    If you're converting to h.264 you're not smart rendering, so the DV decoder you use matters. If you're only doing cut/paste editing with a smart renderer, the DV decoder doesn't matter (it's only used to display the video in the editor so you can see what you're doing).
  7. Member (joined Jul 2014, France)
    Sooo... in that case is it conceivable that using, say, your Cedocida decoder I might be able to replicate/correct my luminance issue? And I take it this would mean an instruction somewhere in my avs script? Which gets back to my point above: how do you know which DV decoder is being used in the first place? All I've ever seen listed in MediaInfo or the like has been dvcpro, which I believe is the codec that was internal to my old Panasonic GS400. Or am I way off third base here?

    And a slightly related question: does there exist a method to smart render an x264 encode? One that would allow you to put an AVC video on a timeline to make small corrections and not redo the whole shebang, re-deinterlace, etc.? I know it's doable in MPEG-2 (Womble), but as for MPEG-4, last I checked it wasn't an option anywhere. Thanks again jag.
    When I had the Canopus DV codec on my PC I used this line to load DV AVI video; I had to include the fourcc for the Canopus codec:
    AviSource("c:\DV.avi", fourcc="cdvc")

    After getting rid of that codec (restoring an older OS image, or a fresh OS install, or uninstalling the Canopus application) and installing Cedocida, I just use:
    AviSource("c:\DV.avi") #(default generic fourcc is dvsd ???)

    So not really much help, because I don't remember exactly how I got rid of that Canopus codec; most likely I went back to an OS image backup rather than tweaking any preferences, if that's even possible. But anyway, just mentioning those AviSynth lines.
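
    Whatever fourcc you end up forcing, one quick way to double check what the VFW decoder actually hands to AviSynth is to append Info(). Just a sketch, with a placeholder path and whichever VFW DV decoder happens to be installed:
    Code:
    AviSource("c:\DV.avi", fourcc="cdvc")  # or fourcc="dvsd", or leave fourcc out entirely
    Info()  # overlays the colorspace, frame size/rate and field parity reported by the source filter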
  9. Originally Posted by Sumsaris View Post
    Sooo... in that case is it conceivable that using, say, your Cedocida decoder I might be able to replicate/correct my luminance issue?
    Maybe. It would depend on what is/was wrong.

    Originally Posted by Sumsaris View Post
    And I take it this would mean an instruction somewhere in my avs script?
    Indirectly. For example, AviSource() will use whatever VFW decoder has the highest priority. DirectShowSource() will use whatever DirectShow decoder has the highest priority.

    Originally Posted by Sumsaris View Post
    Which gets back to my point above: how do you know which DV decoder is being used in the first place?
    Unfortunately, that can be difficult to ascertain. You can use a filter manager tool to specify which DV decoder has the highest priority. Though some programs will use their own built in decoder, not system installed decoders.

    Originally Posted by Sumsaris View Post
    All I've ever seen listed in media info or the like has been dvcpro, which I believe is the codec that was internal to my old Panasonic GS400. Or am I way off third base here?
    That's just a "friendly" name specified by the decoder. It doesn't necessarily indicate which decoder is being used. I.e., multiple decoders might use the same name.

    Originally Posted by Sumsaris View Post
    And a slightly related question: does there exist a method to smart render an x264 encode?
    There are a few editors that claim to support smart encoding of AVC. VideoRedo, for example.
  10. Member (joined Jul 2014, France)
    Thanks guys for the feedback. Thought I’d lost you. Yesterday I buckled down, as I do every 2-3 years, to attempt to sort out this color space thing again. Color space is a fascinating world, and the theory is not even all that difficult to ingest. It’s just that when it comes to practical application in your own world, one bogs down in an impenetrable jungle. Still, by rediscovering AviSynth's Info() and looking at the pin tabs in various places I’ve got a somewhat better grasp on how things flow and are handed off from one stage to the next. Yet a few things elude me.

    I nearly always use DirectShowSource and almost never ConvertToYV12. Since the letters RGB never appear anywhere in my workflow, I figure I’m usually good to go. YUY appears now and again, but as I can’t distinguish it from YUV without my glasses, I tend to ignore it as well. My new machine is sparsely populated: there’s Pinnacle 15, Premiere CS3 and ffdshow. With DV enabled in ffdshow, DV plays back under ffdshow (duh) and renders out to YV12; with it disabled, it runs under ‘DV decoder’ (would this be the MS decoder?) and renders out to YUY2. With ffdshow disabled, I ran both a DirectShowSource and an AviSource script on my DV AVI file. DSS produced the following: AviSynth Info() >> YUY2, VirtualDubMod file info >> decompressor YUY2. AviSource produced this: AviSynth Info() >> YV12, VirtualDubMod file info >> decompressor Xvid codec. The DSS route to x264 output a garbage file. The AviSource x264 encode went off without any hitches.

    Questions:

    - From where cometh xvid? It was not invited to the party. (With a similar test on my older more heavily laden computer the gate crasher was Helix YV12 YUV codec).
    - Are not the decoder and the decompressor more or less the same thing? Then what’s decompressing YUY2?
    - Why can MPC-HC play back the DV avi in YUY2 space, but not an h264 file? (the better question might be can one encode an h264 file that plays in YUY2?)
    - If all digital DV camcorders output YV12 4:2:0 in the capturing process, who or what is stepping in to turn the image into YUY2 4:2:2, and where is it happening? Is this good, bad or indifferent?
    - Should these YV12-YUY2-YV12 turnarounds impact quality in a discernible way?
    - Is what I’ve just described really happening, or is it only a figment of Charlie Kaufman’s imagination and something else is going on?

    This weekend I’m going to take a stab at recapturing some footage in different ways. Odd that I should come here not to find out why something failed to behave as expected, but why it succeeded in behaving unexpectedly.

    @Al, how did you find the Canopus DV decoder? Is it as good as their mpeg2 Procoder?
    @jagabo, thanks for general insight. It led to the above. Found that v 1.10 of standard virtualdub will do the smart rendering trick just fine. Why do I stay with vdubmod anyway?

    A couple of final thoughts. Could a DV >>MPEG2>>x264 transcode account for some marginal color ‘improvement’? Or conversely, would a DV master saved to MiniDV tape and then recaptured back suffer any possible degradation? I might actually have done one or the other several years ago. Well, that’s more than a mouthful for now. Later.
  11. There's a tool that will show you what filters are used by DirectShow: GraphStudio. Just opening (or drag/drop) a video file with the program will show you a graph of the DirectShow filters used to open the file and render it. That's usually the same filters as DirectShowSource() and Windows Media Player will use. You can right click on any of the filters to get further details. You can also build your own graphs if you care to.

    I generally recommend you avoid DirectShowSource() because it leads to uncertainty about what filters are being used and it's not frame accurate with some file types.

    I don't know what's going on with Xvid in your DSS example.

    VirtualDub has a built in DV decoder.

    PAL DV uses interlaced YUV 4:2:0 internally. NTSC DV uses YUV 4:2:2. So it's best to stick with those as the output of the DV decoder. One thing you need to watch out for: VirtualDub (and VirtualDubMod) don't handle interlaced YV12 properly; they will blend the colors of the two fields together. Of course, when you do your own filtering in AviSynth you need to keep track of whether the video is interlaced or not and handle it appropriately.
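
    For example, if a still-interlaced clip has to pass through an RGB preview or full processing somewhere, something along these lines avoids the field-blended chroma. Just a sketch with a placeholder filename, and only needed in that situation:
    Code:
    AviSource("c:\DV.avi")            # PAL DV decoded to interlaced YV12
    AssumeBFF()                       # DV is bottom field first (asserted here, not detected)
    ConvertToYUY2(interlaced=true)    # upsample chroma per field instead of across the two fields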

    DV >> MPEG2 >> x264 wouldn't necessarily lead to levels/color issues; they all use the same rec.601 matrix. But any time you convert video you could create levels/color problems if you aren't careful.
  12. Originally Posted by Sumsaris View Post
    how did you find the Canopus DV decoder? Is it as good as their mpeg2 Procoder?
    It came with Procoder, but those two things have nothing to do with each other: one is a DV decoder, the other an mpeg2 encoder. And the free HcEncoder is good enough nowadays.
    DV video is YV12 in AviSynth, so you do not need to work in YUY2 (4:2:2) to make a straight DV to mpeg2 or H.264 encode any better. YUY2 is useful as an intermediate, when you transfer your DV video to some lossless 4:2:2 codec, color correct, edit, etc., and then encode to a delivery format, which is something you don't seem to be doing.

    Did you try that simple line, ..., AviSource("C:my_video.avi") or AviSource("c:\DV.avi", fourcc="dvsd"), and drop it into MPC-HC?
    NTSC DV uses 4:1:1, by the way, not 4:2:2.
  13. Member (joined Jul 2014, France)
    Ok, well, there's good news and bad news. The good news is that thanks to Madame Foo's post at an obscure British website from a few years back, and against the prevailing wisdom of every Tom, Geek and Harry in the IT world I've talked to about it, my old firewire-only camcorder can now upload video onto my new USB-only computer with a simple cord. A real feat.

    On the other hand, in spite of Cedocida, GraphStudio (thanks for that, jagabo) and any number of capture permutations on XP SP3, Vista and Win 7, via firewire or USB, Pinnacle or Movie Maker, every test I’ve tried has had identical results, technical stats and visual quality alike. All DV captures loaded into GraphStudio reflect Microsoft’s DV decoder (qdv.dll) with pin in = dvsd, pin out = YUY2. Similarly, all AviSynth scripts, be they DirectShowSource or AviSource, kick out the following schema: file.avs > AVI compressor > color space converter > video renderer. The compressor inputs YV12 for AviSource and YUY2 for DirectShowSource. Both then shoot out RGB32 all the way through to the video renderer. Huh? For the actual captures, I still have no idea who’s doing what. Cedocida appears in VirtualDub’s codec list as well as the system32 folder, and is also the sole dvsd driver showing in VCSwap. I’d thought Cedocida was supposed to ‘automatically’ replace the previous DV decoder. Tried Al’s fourcc parameter in AviSynth, too. The upshot is that everything looks pretty much exactly the same no matter what I do. Oddly, the most pleasing, saturated image I’ve found is the one on the 2x3 inch camcorder screen as it downloads to the comp, lol. But perhaps that image is artificially cranked up.

    So either I'm still missing something or it looks like I'm back to square one. Although, as I said, I can live with it, I’m still out to lunch on a number of queries.

    - Is YUY2 coming off the camcorder and/or is my NLE outputting as such? Is this normal?
    - Is RGB indeed being sent to the x264 encoder via virtualdub and avisynth? How to avoid?
    - The DV codec comparative tests I’ve seen… do they mean anything in the real user world?
    - Am I blind, or are there discernible quality hits going on with these real/pseudo color space shifts? Some have said that with the way the shift in subsampling averages things out, one might even wind up with a file in 4:1:0 space without knowing it.

    Does anyone have a verifiable scheme for getting from step A to Z in a project without ever leaving the YV12 space? I mean, it's not a myth, right? It would also be cool if there were a recipe for an in-your-face example of how wrong settings in this domain can in fact screw up a project. Thanks for the continuing interest.
  14. Originally Posted by Sumsaris View Post
    - Is YUY2 coming off the camcorder and/or is my NLE outputting as such? Is this normal?
    - Is RGB indeed being sent to the x264 encoder via VirtualDub and AviSynth? How to avoid?
    - The DV codec comparative tests I’ve seen… do they mean anything in the real user world?
    - Am I blind, or are there discernible quality hits going on with these real/pseudo color space shifts?
    - Does anyone have a verifiable scheme for getting from step A to Z in a project without ever leaving the YV12 space?

    1) PAL DV is 4:2:0 (that would be YV12, not YUY2). NTSC DV is 4:1:1. The problem with interlaced 4:2:0 is that some programs do not upsample it correctly (vdub is one, where the author feels interlaced 4:2:0 doesn't exist, so it's upsampled as progressive, leaving notching artifacts in the RGB preview or if you convert to RGB). Forcing 4:2:2 usually alleviates the problems associated with interlaced 4:2:0 chroma in most programs, because each line now has a chroma value (interlaced YV12 only has one for every 2nd line).

    2) What the NLE outputs is determined by the NLE and its settings. Most can smart render DV with cuts-only editing, so indeed, if you put in PAL DV and have it set up correctly, it will output PAL DV without any losses: compression, colorspace or otherwise.

    3) Don't use GraphStudio to analyze an avs, because that runs it through DirectShow. Basically there are 2 subsystems, DirectShow and VFW (Video for Windows). When you use AviSource(), you are using VFW. Use Info() in the avs script to determine what colorspace comes from the source filter. If you use avs, you have full control over colorspace, sampling, the decoder used, etc.

    4) If using vdub, use Video => Fast recompress to prevent an inadvertent RGB conversion.

    5) Yes, going back and forth will result in discernible quality loss along color borders. Perhaps not to your average viewer under normal viewing conditions, but the loss is easily seen on graphics and titles, less easily on "normal" footage. The bigger danger is clipping superbrights. Most DV camcorders record usable superbrights; if you convert to RGB with a standard (Rec) matrix, you lose that data unless you "legalize" the values first.
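
    For example, one common way to "legalize" before any RGB conversion is to squeeze the camera's 16-255 luma range into legal 16-235 in the avs script. Only a sketch; the exact numbers are an illustration, not a universal recipe:
    Code:
    AviSource("c:\DV.avi")
    Levels(16, 1.0, 255, 16, 235, coring=false)  # compress superbrights into legal range before any ConvertToRGB() happens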
  15. Also, you can force the colorspace in AviSource() and some other input filters. For example:

    Code:
    AviSource("filename.avi", pixel_type="YV12")
    See the documentation for AviSource() for more details.
  16. Member (joined Jul 2014, France)
    So it finally sunk in. Couldn’t see the trees for the forest. And you guys have been very polite with me. Clearly the MS DV decoder isn’t squelching another decoder, because it only loads with DirectShowSource. Under AviSource, as I’ve only got Cedocida (dvsd) installed, it alone is being used, under a Video for Windows scenario (even if there’s no explicit way to visualize this). So that’s cool. It also necessarily means that not only are my captures in YV12, but that my NLEs, having smart rendered their edited capture footage, will also output in YV12 space (except for the transition and filtered segments, which are presumably decoded and re-encoded via RGB before rejoining the YV12 mother file). Right so far? So according to jagabo there’s hardly a need to force YV12, since in principle that color space will take priority over YUY2 in the pecking order. OK?

    Still, poisondeathray threw me for another loop: “What the NLE outputs is determined by the NLE and settings, if you… have it set up correctly.” Hold on. As far as I know, Pinnacle, CS3 and Movie Maker give absolutely no control to anyone over color space input or output in DV. CS3 only lists MS DV as its ‘hard-coded’ export codec. In general, these editors already use their own dedicated decoders/encoders, no? So PDR, could you elaborate a bit on this point?

    Where things might make a difference is at the encode-to-x264 stage. I understand the point regarding alternating chroma lines in YV12. But wouldn’t this mean I should open my AviSynth script in VirtualDub with a ConvertToYUY2(interlaced=true) instruction before sending it to the deinterlacer and on to the h264 encoder? Would this theoretically produce a superior encode compared to a YV12 straight shot?
  17. Originally Posted by Sumsaris View Post
    ...it necessarily means that not only are my captures in YV12, but that my NLEs, having smart rendered their edited capture footage, will also output in YV12 space... Right so far? So according to jagabo there’s hardly a need to force YV12, since in principle that color space will take priority over YUY2 in the pecking order. OK?
    Right, but it depends on the specific NLE or software used. Some filters actually work in YUV. In newer Premiere versions they are labelled "YUV", and even in older Premiere versions some filters worked in YUV, e.g. the "fast color corrector".

    Originally Posted by Sumsaris View Post
    As far as I know, Pinnacle, CS3 and Movie Maker give absolutely no control to anyone over color space input or output in DV... So PDR, could you elaborate a bit on this point?
    Not sure about WMM or Pinnacle, but in Premiere you have to set it up so the project/sequence settings and the render settings match. You can tell right away that it's working properly, because it exports very fast; if it were re-encoding, it would be slower. You can also tell indirectly because the sections you've applied filters to will suddenly have a "red render bar" while the other sections won't (those are the sections that will be smart rendered and passed through untouched). Similarly, you can tell right away something is amiss if you have a red render bar everywhere before you even do anything (just the untouched video on the timeline). It usually means you have the wrong sequence settings.

    Originally Posted by Sumsaris View Post
    Wouldn’t this mean I should open my AviSynth script in VirtualDub with a ConvertToYUY2(interlaced=true) instruction before sending it to the deinterlacer and on to the h264 encoder? Would this theoretically produce a superior encode compared to a YV12 straight shot?
    No. If you're doing this in AviSynth and deinterlacing there, it's not necessary; you'll actually get slightly lower quality. It's better to keep YV12 all the way.

    The problem mentioned before is the "chroma upsampling error", or CUE. It's when interlaced material gets upsampled as progressive, or vice versa, leading to various chroma issues. That doesn't affect you when you keep YV12: nothing is being upsampled by you (until it gets displayed, but that is determined by whatever software/hardware is being used). And once you've deinterlaced, it's progressive content, and hopefully encoded and flagged as progressive, so there is minimal risk of it being upsampled incorrectly as interlaced by other programs or hardware. So the times you might force YUY2 are when you know another program treats interlaced 4:2:0 as progressive (e.g. vdub) and you were using an RGB workflow.
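
    For reference, the whole "keep YV12 all the way" route can be as simple as this. Only a sketch: it assumes the Cedocida (or another VFW) DV decoder and the QTGMC plugin with its dependencies are installed, and the filename is a placeholder:
    Code:
    AviSource("c:\DV.avi", pixel_type="YV12")  # interlaced PAL DV stays 4:2:0 from here on
    AssumeBFF()                                # DV is bottom field first
    QTGMC(Preset="Medium")                     # deinterlace; output is still YV12
    # SelectEven()                             # uncomment for single-rate 25p instead of 50p
    Feed the script straight to the x264 encoder (or use VirtualDub's Fast recompress) and no RGB round trip ever happens.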
    Last edited by poisondeathray; 20th Oct 2014 at 13:59.
  18. Member (joined Jul 2014, France)
    Thanks, poisondeathray, things have pretty much cleared up and I think I can lay this thread to rest. Overall, the gist of the thread was merely to reassure myself that with PAL DV, from capture up until the final compression output (x264, mpeg2, etc.), there is little user input that will make a qualitative difference unless the user is purposely doing such an operation (e.g. a transition or a color filter). And secondly, that if you're only using a DV decoder (and not mastering in DV), whichever brand you use will ultimately not put a dent in your aesthetic sensibilities.

    You do seem to insist that, at least in Premiere (certainly not in Pinnacle or Movie Maker), there are such options for user intervention. I've humbly yet to see them. Now, I normally use Pinnacle for my main editing, doing all the basics, since it is, bar none, the most beautifully simple and intuitive software out there. Their built-in filters are another matter. Their slow motion widget is terrible; I'm convinced Studio does it by deinterlacing and blending, and it sucks. So I've preferred to send such snippets to Premiere, which will do it without blending. But when I then export those segments out again, the DV-related settings are set in stone, and they go by the name of... MS DV. Which leads you to believe that Premiere 'borrows' the system DV codec. There just aren't any render settings that don't match, since there is basically nothing to choose in the first place. (Unless you mean converting PAL to NTSC or reversing the fields, but who'd do that?) But anyway...

    So I'll leave the chroma bug vaccination to those who are infected. To be specific then, by saying

    the times you might force YUY2 are when you know another program treats interlaced 4:2:0 as progressive (e.g. vdub),
    you mean vdub in full processing mode (as opposed to fast proc) and/or vdub direct (as opposed to avisynth), right?

    Other than that, thanks guys for these exchanges, and perhaps I'll be back at some future time with an issue re x264. That's a domain where settings truly can make a difference.


    All the best...
  19. Member (joined Jul 2014, France)
    Blimey, I spoke too soon and it looks like I am not out of the woods yet.

    A few more facts. I uninstalled Xvid, installed Helix, and have ffdshow (CCCP variety) with the DV decoder box unchecked. With that in mind...

    An AviSource avs opened in VDubMod shows the file info decompressor as Helix YV12, which I'd assumed meant a 'hidden' Cedocida decoder (out of a false? presumption that a decompressor and a decoder were similar but different drivers that worked together). I'm not sure anymore, because I then tried to load the actual AVI into VDubMod and here it spat out decompressor 'Cedocida DV Codec (dvsd)'?!? In turn, I tried the same thing with VirtualDub 1.10, which returned decompressor values of 'internal DIB decoder (YV12)' with the avs and 'Cedocida DV Codec (dvsd)' with the AVI loaded directly. Loaded with DirectShowSource I got: 'internal DIB decoder (YUY2)'.

    How is this possible? Why does the avs cause vdubmod/vdub to load the helix/vdub decoders instead of the expected cedocida when the option to use the internal codec is unchecked? And why are they different with respect to the vdub versions?? And the still unanswered question: why does directshowsource cause the internal decoder to spit out yuy2 when it can clearly handle yv12? I'm still in PAL-land, yeah? None of this is transparent.

    Dunno, is someone up there telling me I should use MeGUI, for example, in order to get some coherency into all this? As always, your sleuthing talents would be more than welcome now.
  20. When opening AVS scripts VirtualDub has no idea what decoder was used to decompress the source. It receives uncompressed frames from AviSynth and can only tell you what it's using to interpret those uncompressed frames. VirtualDub has internal handlers for YV12, VirtualDubMod does not.
  21. Member (joined Jul 2014, France)
    Gotcha... sort of. There must still be more to what's going on under the hood. Vdub is preconfigured by Windows and/or itself to prefer the Cedocida codec. Now, whether or not it actually decodes the file or 'knows' the identity of AviSynth's mystery usurper, there seems to be no reason why it should step into the fray with some other garden variety tool. Unless this gets down to my still unanswered query about the nuance between a decoder and a decompressor. As AviSynth has already decoded the file (via Cedocida), by what necessity does vdub even need to call up the DIB or Helix decompressor? That step is already a done deal. Or is a back room arrangement being made in which vdub agrees to, in some sense, take over and finish up the job of handing the process over to the encoder (whatever that might mean)? On the other hand, this negotiation doesn't take place when loading a file directly into vdub, since in that case the decoder completes the task from A to Z. So what exactly is the DIB decoder doing? Possibly the answer lies with your mention of the term 'handler'. But in that case, what does the handler handle? Thanks
    Last edited by Sumsaris; 22nd Oct 2014 at 02:57.
  22. Originally Posted by Sumsaris View Post
    As AviSynth has already decoded the file (via Cedocida), by what necessity does vdub even need to call up the DIB or Helix decompressor?
    If VirtualDubMod doesn't understand the fourcc output by the codec, another filter will be used to convert it to something VirtualDubMod understands.

    "Decompressors" are a subset of "decoders". Both are filters that take in one thing and put out another. The main difference is that a decompressor decompresses compressed video, while a decoder may or may not decompress; it may only transform the data (IYUV to YV12, for example).

    Originally Posted by Sumsaris View Post
    So what exactly is the DIB decoder doing?
    It's VirtualDubMod's internal handler for the incoming fourcc, its way of recognizing the format of the bitmap data.

    http://fourcc.org/

    The process of opening an A/V file with VFW or DirectShow involves a negotiation between the program and the library. They discuss what colorspaces the program understands and what colorspaces the codec can output. If there is no common colorspace they will search for another transform filter to get them to a common colorspace. They build a pipeline of filters just like you see in GraphStudio. Note that GraphStudio uses DirectShow by default.
  23. Member (joined Jul 2014, France)
    Been gone a few days and wanted to return to thank jagabo again for his continuing followups.

    Yes, I finally see the nuance between the transformation of the color space and the actual decoding. I'd always assumed it was an all-in-one operation.

    By the way, I finally managed to home in on the source of that great but mysterious encode (which provoked me to open this thread). It turns out a change in gamma will reproduce this perceived change in quality. Not that I ever did this willfully. I wonder if there was a function in one of the earlier MeGUI presets that might have had something to do with it.

    Anyway, an offshoot of this investigation is that I not only got to really understand how gamma manipulates a file, but for the first time took a bath in the histogram pool. Very interesting and helpful. It takes very little to produce a fairly substantial effect. A ColorYUV(gamma_y=-20) is plenty to give a decent bit of density, whereas at -40 it's already stomping out detail in contrasty scenes. And oddly, an off_y=-10 brightness setting will darken those scenes even more than the gamma will. Then there's that 'gain' lever. Never liked it. Doom9 did a little wiki on it in conjunction with the off_y parameter. To me all it seems to do is erase the subtle shades of grey that I'd struggled to bring out with the brightness setting in the first place. Perhaps I'm misusing it. No matter.

    One final question for y'all regarding filtering. All else being equal, would a change in gamma applied during the editing stage in the NLE (presumably in RGB), followed by export to the encoder, suffer a greater, lesser or similar quality hit than if applied post-edit (as I've done) using the YUV AviSynth filters? Thanks
    In ColorYUV(), gain, offset, and contrast are all linear adjustments. Gamma is a non-linear adjustment, affecting darks more than brights. Which you use depends on what you need to do with the video.

    Gamma in RGB will have a different effect than gamma in YUV.

    Extract the "ColorYUV Animation" folder in the attached ZIP file and open greyramp.avs in VirtualDub. You can see the effects of gain_y, off_y, cont_y, and gamma_y visually and graphically as you scrub through the video. Frame 256 shows the neutral settings.
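
    If you just want a quick look at the difference on your own footage without the attachment, a minimal side-by-side sketch (placeholder filename, YV12 source assumed) would be something like:
    Code:
    src  = AviSource("c:\DV.avi", pixel_type="YV12")
    adj1 = src.ColorYUV(gamma_y=-20).Subtitle("gamma_y=-20")
    adj2 = src.ColorYUV(off_y=-10).Subtitle("off_y=-10")
    adj3 = src.ColorYUV(gain_y=-10).Subtitle("gain_y=-10")
    StackHorizontal(src.Subtitle("original"), adj1, adj2, adj3)
    Histogram("levels")  # per-channel level histograms make the shifts easy to see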
    Image Attached Files
    Last edited by jagabo; 28th Oct 2014 at 08:27.
  25. Originally Posted by Sumsaris View Post

    One final question for y'all regarding filtering. All else being equal, would a change in gamma applied during the editing stage in the NLE (presumably in RGB), followed by export to the encoder, suffer a greater, lesser or similar quality hit than if applied post-edit (as I've done) using the YUV AviSynth filters?
    The "losses" you are referring to are different. YUV <=> RGB is a different type of loss compared to 8bit vs. higher bit depth level of precision processing loss. Note some NLE's can work in YUV with YUV filters all the way through, and some "cheap" NLE's only work at 8bit

    Potentially, if you are working in higher bit depths in NLE, there is higher precision when making changes. If source material is 10bit or higher to begin with, then yes it will be a larger difference. But even on 8bit source material there can be a visible difference. Because avisynth works in 8bit precision, there can be more "banding" introduced when doing manipulations. It's more noticable on clean material, gradients, CGI. Be aware that there are "fake" stacked MSB/LSB 16bit workflows for avisynth, which may produce slightly better results than the standard 8bit filters
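
    If you want to see the 8-bit precision issue for yourself, a rough sketch (placeholder filename, deliberately strong adjustment): push gamma in 8 bits and look for comb-like gaps in the luma histogram, which show up as banding in smooth gradients.
    Code:
    AviSource("c:\DV.avi", pixel_type="YV12")
    ColorYUV(gamma_y=60)   # exaggerated 8-bit adjustment
    Histogram("levels")    # gaps/spikes in the Y histogram are codes that no longer exist after the remapping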
  26. Member (joined Jul 2014, France)
    @jagabo

    That little avs of yours looks to be an invaluably rich instructional tool. I'll have to take a good closer look. Also need to get used to the rotated histogram. In a way, though, it makes more sense that way.

    @poisondeathray

    I strongly doubt that, in terms of technical prowess, Pinnacle Studio is anything other than an off-the-rack NLE, even since going Avid. So it looks to be a toss-up as regards the generational quality hit. Also a reminder to myself to treat video like film: once it's on tape, consider it finished. Laziness just doesn't pay.
  27. Originally Posted by Sumsaris View Post
    need to get used to the rotated histogram.
    It's not really a histogram, it's a waveform display. It's based on what you see on an oscilloscope watching the luma channel. Television scans horizontally, so a horizontal waveform monitor (as used in my script) matches what you would see on a scope. And I arranged the videos so that the horizontal waveform monitor would be the most elucidating.

    In my images every scan line is the same. So you can consider the images to be only one scan line. Then the waveform monitor is just a 2d graph of that scan line -- where the brightness of the pixel is transformed to height in the graph.
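
    If you want to play with the same idea using only the built-in filter, one rough trick (placeholder filename; Histogram()'s classic mode normally puts a per-line luma graph down the right edge, so turning the clip first gives you a horizontal version instead):
    Code:
    AviSource("c:\DV.avi", pixel_type="YV12")
    TurnRight().Histogram().TurnLeft()  # a luma-vs-horizontal-position graph ends up along the top edge
    With frames like those in my script, where every scan line is identical, that strip becomes exactly the 2D graph described above.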