EDIT: Changed title to reflect how this thread expanded from maintaining PC levels in Avisynth to maintaining YUV411 color format as well when working with DV AVI. Both issues have been resolved and many thanks to all those who assisted me.
I have some DV AVI Type 2 video that is full range rather than clamped. The only way I have found to maintain Y [0...255] rather than Y [16...235] is by using ConvertToYUY2. This is as close to lossless as I have found (in other words, leaving out ConvertToYUY2 results in clamped output):
Code:
AVISource(V)
ConvertToYUY2
However, I am trying to de-interlace it using QTGMC, which doesn't support YUY2. I thought that using ConvertToYV12(matrix="PC.601") would avoid clamping, but it doesn't, even without the QTGMC filter. Does anyone know why this is happening? I am testing the results in PP after transcoding the avs input with ffmpeg:
Code:
ffmpeg -i in.avs -c:v copy -an out.avi
So, in summary, the following Avisynth script results in clamped output when the source video is full range:
Code:
AVISource(V)
ConvertToYV12(matrix="PC.601")
Last edited by SameSelf; 5th Mar 2016 at 18:30.
With NTSC sources your DV decoder should be putting out YUY2 to start with. If not, try forcing it by specifying pixel_type="YUY2" in the AviSource() line. If still not, get a different decoder.
Converting YUY2 to YV12 with ConvertToYV12() shouldn't clamp the levels.
ColorYUV() doesn't clamp levels by default but some of the other color filters do. Tweak() for example -- specify coring=false to prevent clamping.
There is a QTGMC variant that supports YUY2
Premiere is clamping (actually clipping) because you're using -c:v copy with an avs script, not one of the "special" uncompressed formats such as UYVY for 4:2:2 or IYUV for 4:2:0.
I don't know about the matrix part, but if the video is interlaced you are going to want interlaced=true in the conversion.
Some filters provide a coring parameter to disable the clamping to TV scale.
From the AviSynth Wiki about Levels():
Is SameSelf using "clamping" (clipping) to describe what's usually referred to as "converting" from one range of levels to another?
I virtually never work with DV video, but in theory
Code:
ConvertToYV12(matrix="Rec601")
would convert to limited range and
Code:
ConvertToYV12(matrix="PC.601")
would leave the levels untouched.
If poisondeathray is correct and Premiere is "clipping" the second output above, then it wouldn't have the same luminance levels as the first output, where the levels are "reduced" first, so they shouldn't be "clipped".
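In case it helps to see the arithmetic, the limited-range "reduction" discussed above works out to the standard 8-bit scaling. A rough Python sketch (illustrative only, not AviSynth's exact internal implementation):

```python
def full_to_limited(y):
    """Map full-range 8-bit luma [0..255] into limited range [16..235] (Rec.601 scaling)."""
    return round(16 + y * 219 / 255)

# Full-range black and white land exactly on limited-range black and white:
# full_to_limited(0) -> 16, full_to_limited(255) -> 235
```

Note that every value moves, including midtones, which is how a scaled output differs from a clipped one.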
You seem to assume that AviSource(V) first loads an RGB clip, so that there is an actual conversion using any RGB/YUV matrix?
If AviSource(V) already loads a YUV clip, then Convert will not convert from PC to TV range anymore.
In case of DV, you may want to read about ChromaInPlacement / ChromaOutPlacement. Especially NTSC DV with 4:1:1 chroma subsampling is a heavy loss of horizontal chrominance precision.
Last edited by LigH.de; 3rd Mar 2016 at 04:36.
It looks like the problem lies with PP interpreting the file correctly. Using the Histogram tool in Avisynth (thanks, jagabo for the tip), this is the input video using the following code:
[Attachment 35991]
As can be seen, there are lots of superblacks and superwhites in the video. Now, if I transcode the above using ffmpeg:
ffmpeg -i in.avs -c:v copy -an out.avi
[Attachment 35992]
However, when I load out.avi into PP, the vectorscope in PP is clamped at 7.5 IRE. I did notice that PP has trouble interpreting the video correctly and thinks it is progressive for some reason.
I need to run some additional tests, but I am fairly confident at this point that PP is the problem.
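For reference, the 7.5 IRE figure maps back to 8-bit code values like this, assuming the common digital convention where Y=16 corresponds to 0 IRE and Y=235 to 100 IRE (a hypothetical Python sketch, not anything PP actually exposes):

```python
def y_to_ire(y):
    """Map 8-bit limited-range luma to IRE (digital convention: 16 -> 0 IRE, 235 -> 100 IRE)."""
    return (y - 16) * 100 / 219

# The NTSC 7.5 IRE analog setup level corresponds to roughly Y = 32,
# since y_to_ire(32) is about 7.3
```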
That's probably in an INFO chunk. And beyond that:
Last edited by jagabo; 3rd Mar 2016 at 19:54.
This workflow seems to work so far. I just need to vet it for QTGMC.
1. NTSC DV AVI Type 2 720x480i29.97 4:1:1 video (that's a mouthful)
2. Load in Avisynth:
Code:
Coring=false # May not need this
AVISource(V,false).AssumeFPS(30000,1001).AssumeBFF()
3. Transcode to uncompressed UYVY with ffmpeg:
Code:
ffmpeg -i in.avs -pix_fmt uyvy422 -c:v rawvideo -an -vtag "UYVY" out.avi
Also ffmpeg reports the following warning:
[swscaler @ 029069c0] Warning: data is not aligned! This can lead to a speedloss
Coring=false at the top of your script is doing nothing. It's a parameter of certain functions. All you're doing is assigning a named variable that you aren't using.
To confirm the color format you're getting in Avisynth, add Info(). Better yet, force it to YUY2 as jagabo mentioned earlier.
How are you loading DV-AVI into avisynth ? With what decoder ?
How are you determining "nearly lossless" ? Comparing what to what with which decoder ?
Slight technicality, but PP is actually "clipping", not clamping. Clamping would imply everything gets "squished" from both ends. As a result the midtones would be compressed, and the curve function would look different even in the middle section. An 8-bit waveform would become "banded" with clamping as you try to "squeeze" 0-255 values into 16-235 "slots". Clamping implies at least some data is recoverable if you "unsquish" it; that is not the case here. It's just cut right off. The curve is otherwise the same. As mentioned in your other thread, the reason is PP treats uncompressed YUY2 as RGB, and it undergoes a Rec conversion to RGB. 0-15 and 236-255 are cut off. You need to use the special fourccs to get uncompressed YUV treatment.
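The clip-versus-squish distinction can be shown numerically. In this illustrative Python sketch (my own, not anything from PP or AviSynth), scaling moves every value including midtones, while clipping passes midtones through untouched and irrecoverably cuts off everything outside 16-235:

```python
def scale_to_limited(y):
    # "Squish" full range 0-255 into 16-235: every value moves, midtones included
    return round(16 + y * 219 / 255)

def clip_to_limited(y):
    # Cut off anything outside 16-235: midtones pass through unchanged
    return max(16, min(235, y))

print(scale_to_limited(128))  # 126: midtone shifts
print(clip_to_limited(128))   # 128: midtone unchanged
print(clip_to_limited(5))     # 16: superblack detail destroyed, not recoverable
```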
Interesting that it defaults to interpreting "progressive" here for the ffmpeg generated AVI, yet "fields" in the other scenario. Maybe it's making a "guess" based on resolution ?
Thanks again for the feedback guys.
1. I have deleted the Coring=false from my script. I suspected it wasn't doing anything for me, so thank you for confirming vaporeon800.
2. I inserted pixel_type="YUY2" into my Avisource() line but:
a. It doesn't seem to make any difference whether I include it or not for a simple transcode
b. When I insert QTGMC, it complains. So I believe it is better to leave it out versus inserting a ConvertToYV12, 16, or 24, since I don't need it anyway.
3. I am using Cedocida 0.2.3 as my DV AVI decoder. In fact, other than QT, this is the only codec I have installed on my system besides the defaults that come with W7 x64 Pro. Is there a better one you recommend? I know from testing I need something.
4. As for determining lossless, I simply look at the vectorscope in PP with the original and transcoded overlain in a timeline. If the Luma wiggles around then I assume it is not lossless. I know I could bring it into AE to be double sure. I guess I am just lazy.
5. As for PP clipping which results in lost information, that sounds correct. I didn't pay too close attention to the rest of the waveform. Hard to know for sure what PP is or isn't doing.
EDIT: One last thing, the UYVY transcodes are recognized by DaVinci Resolve, which is great, because Resolve won't recognize the original files and I was under the assumption that I would need to transcode to ProRes or something.
Last edited by SameSelf; 3rd Mar 2016 at 21:33.
Cedocida must be putting out YV12 since you aren't using it after AviSource. You can always verify with Info(). I prefer Cedocida for encoding and decoding DV because it gives you lots of control.
I like to get YUY2 out of NTSC DV decoders because that's closer to the source format. Then I convert to YV12 myself because that gives me control over the conversion.
What you really want to avoid is getting RGB out of a DV decoder (Panasonic DV codec only outputs RGB, for example) as that will clip superdarks and superbrights.
In theory, if you look at just the Y plane it should be the same. But you are assuming the DV is being decoded in the same way. It is not. The reason for the difference is that DV decoders actually have different outputs. If you take Adobe, or Cedocida, or ffmpeg/libav, or Sony, or Mainconcept, Microsoft, etc., they all have very slightly different output from the same DV video, even if you isolate the Y plane only. You can demonstrate and compare with amplified differences. You can do this in avisynth too, swapping out the decoder with vcswap for VFW, or constructing a DirectShow graph with GraphStudio for DirectShow decoders.
However, if you take the same decoder and compare the Y plane (e.g. with the Y in the YC waveform, or with a luma waveform), even with different pixel formats (e.g. 4:1:1 or 4:2:0 or 4:2:2 or 4:4:4), you will get the same output when comparing the Y plane with Greyscale() or ConvertToY8(). For example, if you use Adobe only, and use it to convert to UYVY, you will see the microscopic wiggle in the Y waveform disappear when comparing DV 4:1:1 to UYVY 4:2:2.
The comparison is problematic to do in AE with the layer amplified-difference method, because it definitely works in RGB only, and you'll obviously get differences from 4:1:1 compared to 4:2:2. You actually have more control over the amp/diff method in avisynth because you can compare planes separately. You can in AE too, but it actually operates in RGB under the hood.
(And the SD AVIs from ffmpeg are also interpreted as fields, unknown dominance, in CC, like the other thread, so not sure why the difference in CS6.)
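For what it's worth, the amplified-difference idea described above is easy to sketch outside of AE/avisynth. A hypothetical Python version operating on bare luma value lists rather than real decoder output (the gain and bias values are my own illustrative choices):

```python
def amplified_diff(plane_a, plane_b, gain=16, bias=128):
    """Per-pixel (a - b) * gain, offset around mid-grey so tiny decoder
    differences become plainly visible, clipped to the 8-bit range."""
    return [min(255, max(0, bias + (a - b) * gain)) for a, b in zip(plane_a, plane_b)]

# Two decoders that differ by a single code value stand out as a 144-vs-128 band:
# amplified_diff([100, 100], [100, 99]) -> [128, 144]
```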
Do you even know which video format AviSource(V) loads when you omit the parameter "pixel_type"? Please compare:
- AviSource(V, pixel_type="RGB24").Histogram(mode="levels").Info()
- AviSource(V, pixel_type="YUY2").Histogram(mode="levels").Info()
- AviSource(V, pixel_type="YV12").Histogram(mode="levels").Info()
PLEASE NOTE: Some variants above may provoke an error if the VfW codec handling your AVI file's video content does not support color space conversions while delivering the decoded video to the VfW API.
I believe that everyone is correct. Avisource with Cedocida decodes DV AVI as YV12. Looking at my picture above, it is not advised to go through 4:2:0 (a 2x2 configuration) on the way to 4:2:2 when the source is 4:1:1. So I agree with jagabo that forcing Cedocida to decode as YUY2 is preferred. I haven't been looking closely at the chroma just yet. But I think I now see the wisdom of the following:
Code:
AVISource(V,false,pixel_type="YUY2").AssumeFPS(30000,1001).AssumeBFF()
ConvertToYV16(interlaced=true) # Maybe this isn't needed since the QTGMC wiki says it supports YV12 and YUY2
QTGMC()
1. Force Cedocida to decode DV AVI as 4:2:2
2. Avoid 4:2:0 when converting to YV16 for QTGMC
Slowly this is all coming together and starting to make sense. Thanks for all the help.
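The reasoning about avoiding 4:2:0 on the way from 4:1:1 to 4:2:2 falls out of the chroma grid geometry. A small Python sketch of the standard subsampling divisors (the dict and function names are my own, for illustration):

```python
# Horizontal and vertical chroma divisors for common subsampling schemes.
# 4:1:1 keeps full vertical chroma resolution but only 1/4 horizontal;
# 4:2:0 keeps 1/2 horizontal but only 1/2 vertical. So a 4:1:1 -> 4:2:0 -> 4:2:2
# chain throws away vertical chroma that a direct 4:1:1 -> 4:2:2 conversion keeps.
SUBSAMPLING = {
    "4:4:4": (1, 1),  # (horizontal divisor, vertical divisor)
    "4:2:2": (2, 1),
    "4:2:0": (2, 2),
    "4:1:1": (4, 1),
}

def chroma_resolution(fmt, width=720, height=480):
    """Chroma plane dimensions for a given format at NTSC DV frame size."""
    h_div, v_div = SUBSAMPLING[fmt]
    return width // h_div, height // v_div

# NTSC DV 4:1:1 chroma is 180x480; detouring through 4:2:0 (360x240)
# halves the 480 lines of vertical chroma that 4:2:2 (360x480) would preserve.
```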
Another option is to use l-smash or ffms2, which will decode in the original 4:1:1 source format. You can then use your algorithm of choice to get what you want depending on your goals. The "negative" is that they require extra time for indexing the AVI, whereas AVISource() will open it up immediately.
Thanks, pdr. I didn't realize L-SMASH and FFMS2 default to 4:1:1 for DV AVI. With that said however, I tried opening my DV AVI with L-SMASH and for some reason it complained. I attempted to troubleshoot but quickly gave up and went back to AVISource(). I may give it another shot once I finish the AVISource workflow.
In my experience LSMASHVideoSource() works with very few files (maybe only MP4?). LWLibavVideoSource() works with a lot more, including DV AVI.
My experience isn't any different. But what I really don't like is how L-SMASH "pollutes" my disk with all the indexing files. Can someone explain to me what is the benefit of indexing versus AVIsource? I clearly get the advantage when it comes to loading AVCHD, but DV?
LSMASHVideoSource is only for MP4/MOV; the benefit over something like FFMS2 is that it doesn't require indexing. LWLibavVideoSource is for everything else. Both fall under "l-smash" because they use the same LSMASHSource.dll.
Indexing is recommended for non I-frame only formats, and especially for when you have situations that require non linear seeking with non intra only formats (e.g video editing). An example would be temporal filtering. Indexing is more accurate and you're less likely to mix up frames. AVISource is fine for DV because it's I-frame only. One of the reasons why dgindex is so reliable and consistent for mpeg2 sources is because of the indexing and "pollution" .