VideoHelp Forum




  1. DV PAL to MPEG2. I captured this with WinDV and then converted to type 2 with the Enosoft DV Processor's AVI repair tool. I assume this process, along with loading the video in AviSynth with FFMS2, kept the original DV chroma placement.

    AviSynth has this internally:
    Code:
    ConvertToYV12(matrix="Rec601", interlaced=true,ChromaInPlacement="DV",ChromaOutPlacement="MPEG2",chromaresample="spline36")
    but I would like to learn how to do it manually, both to understand the whole chroma placement concept and to be able to use it with the Dither tools package, specifically its Dither_resize16() function.

    This is what I have so far, based on the diagram in this link.
    Code:
    separatefields()
    Y = ConvertToY8()
    # 360x144 is the chroma plane size of each 720x288 field
    U = UToY8().Spline36Resize(360, 144, src_top=0.375)
    V = VToY8().Spline36Resize(360, 144, src_top=-0.125)
    YToUV(U, V, Y)
    To conform to MPEG2, the U plane needs to go up by 3/8 of its pixel size, and the V plane by 1/8 of its pixel size. Is this correct? When I test it I don't get the same image. edit: Or maybe 0.250 on U and -0.250 on V, if it's expressed relative to luma pixels; either way I can't match the results.
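    If I understand src_top correctly, it is measured in pixels of the clip being resized, which here is the extracted chroma plane at half the height of the separated field, so a shift written in luma lines has to be halved before it goes into the chroma resize. A sketch of that bookkeeping (just the unit conversion, nothing verified against a spec):
    Code:
    # src_top is in pixels of the plane being resized (the 360x144 chroma plane here),
    # so shifts expressed in luma lines are halved:
    #   0.75 luma lines -> src_top = 0.375
    #   0.25 luma lines -> src_top = 0.125
    U = UToY8().Spline36Resize(360, 144, src_top=0.375)  # 3/8 chroma pixel = 3/4 luma line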
    Last edited by Dogway; 18th Jan 2015 at 13:25.
  2. I think there is an additional problem with the even/odd field offset for PAL DV. That chart at the bottom of the 1st link says so too.

    When you use separatefields() only, the output is even/odd/even/odd..., so I think you have to group the even and odd fields, apply the appropriate transform to each group separately, and weave (roughly as sketched below).
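    As a rough outline only (untested, just the grouping pattern; the actual per-field shifts are the open question):
    Code:
    # rough outline of the even/odd grouping idea, not a tested conversion
    separatefields()
    e = selectevery(2, 0)   # temporally first fields
    o = selectevery(2, 1)   # temporally second fields
    # ...apply the appropriate chroma shift to e and to o separately...
    interleave(e, o)
    weave()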

    I know you don't like going to that neighborhood, but someone at doom9 probably already knows how to do this manually.
  3. What do you mean? Offsets are relative to luma, and those don't change with field parity. At most, I think I have to be aware of the offsets, making them half of what they would be for a full frame, but I'm not sure. I posted yesterday on doom9; there's not much action there in any case. :/
    Last edited by Dogway; 18th Jan 2015 at 14:10.
  4. Yes, I think you're right. I was confused by the grid schematic. It's the grey dots that are supposed to be luma samples, not the grey lines, so they do reflect the correct field offset.

    Crap, I'm not even looking at the right one; I'm looking at DV. You want to go from DV to MPEG2. But for MPEG2 there is a different even/odd offset, isn't there? Upper gets (0, +1/4) for both U and V; lower gets (0, +3/4) for both U and V. So don't the source DV chroma samples have to be mapped to the MPEG2 configuration using that different offset for upper vs. lower?

    Go PM gavino
    Last edited by poisondeathray; 18th Jan 2015 at 14:16.
  5. I removed my edit above, since the linked post is clearly a very specific situation where the OP wants to keep the fields as frames, so to avoid bobbing he needs to use +0.25 and -0.25 on the planes; it's unrelated to chroma placement.

    MPEG1 to MPEG2, and format changes within MPEG2, are well documented. DV to MPEG2, on the other hand, is not. It should be easy, but for some reason I'm not getting the correct results, or maybe I am and I'm just not aware of it...
  6. Well, as you removed yours, I added an edit.

    Since the DV chroma samples have the same offset for the upper and lower fields, it should be easy to map to MPEG2, just with a different offset for upper vs. lower, shouldn't it?

    You start at Cb (0, +1) and Cr (0, 0) for both upper and lower fields in PAL DV, and want to go to Cb/Cr upper (0, +0.25) and Cb/Cr lower (0, +0.75) for MPEG2.

    So move Cb upper by y = -0.75 and Cr upper by y = +0.25; Cb lower by y = -0.25 and Cr lower by y = +0.75.
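    Spelled out, that's just target minus source in luma lines, taking the positions above at face value:
    Code:
    # deltas = MPEG2 target position - DV source position, in luma lines
    # upper field:  Cb: 0.25 - 1 = -0.75     Cr: 0.25 - 0 = +0.25
    # lower field:  Cb: 0.75 - 1 = -0.25     Cr: 0.75 - 0 = +0.75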
  7. Yes, I think I need to divide into evens and odds. Still, as I said, the values are relative to chroma pixels, so 0.375 instead of 0.75 and 0.125 instead of 0.25.

    edit: this is what I have so far. It looks a bit wrong.

    Code:
    separatefields()
    # source is BFF
    raw=last
    selectevery(2,0)
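    # temporally first field of each frame (the bottom field, since the source is BFF)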
    Y = ConvertToY8()
    U = UToY8().Spline36Resize(width()/2, height()/2, src_top=0.125)
    V = VToY8().Spline36Resize(width()/2, height()/2, src_top=-0.375)
    even=YToUV(U, V, Y)
    raw
    selectevery(2,1)
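    # temporally second field of each frame (the top field for BFF)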
    Y = ConvertToY8()
    U = UToY8().Spline36Resize(width()/2, height()/2, src_top=0.375)
    V = VToY8().Spline36Resize(width()/2, height()/2, src_top=-0.125)
    odd=YToUV(U, V, Y)
    interleave(even,odd)
    Last edited by Dogway; 18th Jan 2015 at 15:34.
  8. Not sure why it's not working out

    I would PM gavino or Ian B. at doom9
  9. Yes, I fixed a few things; it should be working as it is now.

    There are a few things I don't understand, and even a few things people said in a doom9 thread that I think aren't correct (i.e. whoever said to just use a plain Spline36Resize() has it wrong, wtf!). Maybe the linked page above is not correct, or we are reading it wrong, who knows.
  10. (You forgot to weave() the fields back, but it still doesn't work the same.)

    I think you have to look at the code and see what

    ConvertToYV12(matrix="Rec601", interlaced=true,ChromaInPlacement="DV",ChromaOutPlacement="MPEG2",chromaresample="spline36")

    is actually doing. Whether or not it's "right" or "wrong" according to some standard is another issue

    I would have thought the two would be functionally equivalent, if AviSynth were using the same offsets as in that link.
  11. I don't know programming; I think I would have a hard time trying to dig through that, the variables, etc. It's not so easy.

    If you check the MPEG2 placement in my link above and the MPEG2 placement in this PDF, they are different. They do match if you rotate the view 90º, but then some of the other schemes, like 4:2:2 or 4:1:1, don't match.
  12. Well, Poynton is like the "Godfather" for this stuff, so I would say he's the authority.

    But that PDF is for progressive 4:2:0 MPEG:
    Figure 2 shows MPEG-2’s 4:2:0 subsampling for frame-coded (progressive) pictures. For field-coded interlaced (top and bottom) pictures, the situation is more complicated;

    a description of chroma subsampling for field-coded pictures is outside the scope of this document
  13. Have you looked at the dither documentation, or just the "colorspace-subsampling.png"? The diagrams in there look different for 4:2:0 MPEG2 top and bottom fields. Cretindesalpes/firesledge is another person who will know what is going on with these equations.

    I've been following along that other thread you mentioned too, also trying to make sense of it
  14. I don't know Poynton, but I'm not questioning him. Ignoring MPEG2 for a moment (which coincidentally is one of the few schemes that matches my link), the rest of the schemes don't match... in this regard it's difficult to know which one to adhere to. Poynton's is a bit more difficult to make sense of due to the lack of a grid and the oversimplification.

    I'm asking things here and there, maybe not without reason, since so many of these things are explained so poorly and so sparsely.


    edit: thanks a lot. Yes, I forgot there was that. I think both are right; it's just that the link only shows the pixel 'centers', while the PDF and cretindesalpes' pic show the whole pixels. From the diagram it's hard to tell what is U and what is V, though.
    Last edited by Dogway; 18th Jan 2015 at 17:03.
  15. Originally Posted by Dogway
    I think both are right; it's just that the link only shows the pixel 'centers', while the PDF and cretindesalpes' pic show the whole pixels. From the diagram it's hard to tell what is U and what is V, though.
    Which makes it even more confusing, because if you look at "top field", for example, the chroma (regardless of whether it's Cb or Cr) is placed left of and above the Y samples. Or I'm not interpreting it correctly. I get that the Y samples are an 8x4 grid and the chroma a 4x2 grid, so it makes sense that one chroma sample is stretched to fit, but I don't understand the placement.
  16. I think he is accounting for the field bob, which for chroma, having less resolution, is larger than for luma, so the chroma ends up above the luma for TFF and below it for BFF.
    But I didn't know I had to take that into account too... Why that detail isn't reflected on the webpage is beyond me.
  17. Member 2Bdecided
    Why are you doing this in AVIsynth?

    1. You should be using Cedocida as your DV decoder. All others are inferior.
    2. Cedocida lets you pick what chroma format you want to output, and hence does the conversion for you.

    Have you found a problem with it? I looked many years ago with test images, and could not find a problem.

    I also found that, with my (admittedly rubbish) SD camcorder, the images were rarely good enough to detect any problem from using completely the wrong format anyway (i.e. taking DV chroma and using it as if it was MPEG-2 chroma). If I A/B'd right/wrong I could see a slight difference, but without a reference the camcorder's image just wasn't good enough to be able to say what you were looking at was right or wrong.

    Cheers,
    David.
  18. Now that I'm at it, partly for learning too. But I keep hearing about Cedocida and I wonder what is so special about it... I want to back up the masters untouched, without conversion, and if necessary do the conversion myself in AviSynth when remastering or editing. AFAIK WinDV does a bit-accurate transfer (it doesn't decode), but don't hold me to that.
    Last edited by Dogway; 19th Jan 2015 at 05:42.
  19. Member 2Bdecided
    Of course WinDV does a bit-accurate transfer. If your camcorder plays the tape perfectly. Except with certain audio format changes. Except with blank sections and certain camcorders. When it loses audio completely, drops hundreds of frames, etc.

    For pure cut-and-paste editing you should use something that does direct stream copy (Many NLEs can; VirtualDUB can). You should not run DV-AVI through AVIsynth for pure cut-and-paste editing, because you are then needlessly re-encoding.

    When doing more advanced processing, you should use Cedocida as the DV codec when opening DV-AVI files in AVIsynth, using AVIsource.
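    Something like this, roughly (the filename is just a placeholder, and as far as I recall the chroma/output options are picked in Cedocida's own configuration rather than in the script):
    Code:
    # minimal sketch of the suggested route (placeholder filename)
    AVISource("capture_type2.avi")   # decoded through VfW, i.e. by Cedocida if it is the installed DV decoder
    AssumeBFF()                      # DV PAL is bottom field first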

    Sorry if I'm telling you things that are obvious to you.

    Cheers,
    David.
  20. Well, yeah, I'm closing the project right now; I've been at it for a month. Here is the original thread.
    I don't mind being reminded of accurate info once in a while, but I didn't want to derail too much.

    I used WinDV for capturing to Type 1 AVI; Premiere needs Type 2, so I used the Enosoft AVI repair tool to convert to that. I think that's a lossless conversion. I could test it, but I bet I already did at the time, wary as I am, so let's assume the chroma placement of the WinDV capture is maintained through the process, which if correct should be DV-PAL. That's for the edited part. After that, I exported to Lagarith and encoded to H.264 through avs.

    For the remaster part I am indeed loading the DV-AVI with FFMS2. I'm 2 tapes away from the total of 7 to finish the remaster, but do you suggest I should install Cedocida and load with AVISource() in AviSynth instead?
    I'm going to install it to check how it compares to other solutions; as you can see in the other thread, poisondeathray and I had a heated discussion about that. I posted my findings here.


    edit: Cedocida aligns with the new-FFMS2 avs test, which is eerie. The chroma placement seems fine, just like the blue-colored layers, but there's something going on with the quantization matrix. Compared to Cedocida's, the blue decoders seem to favour a quantization with an acutance matrix. Not saying one is wrong; it's possible they are just two accepted methods of decoding, but until someone confirms...
    Last edited by Dogway; 19th Jan 2015 at 11:29.
  21. I ran some tests, trying to mimic manually the conversion done by Cedocida (to MPEG2) and by AviSynth's internal ConvertToYV12 shown in the OP.

    Interesting finds:

    Cedocida matches my manual snippet above; I repost it here:

    Code:
    separatefields()
    raw=last
    selectevery(2,0)
    Y = ConvertToY8()
    U = UToY8().Spline36Resize(width()/2, height()/2, src_top=-0.125)
    V = VToY8().Spline36Resize(width()/2, height()/2, src_top=0.375)
    even=YToUV(U, V, Y)
    raw
    selectevery(2,1)
    Y = ConvertToY8()
    U = UToY8().Spline36Resize(width()/2, height()/2, src_top=-0.375)
    V = VToY8().Spline36Resize(width()/2, height()/2, src_top=0.125)
    odd=YToUV(U, V, Y)
    interleave(even,odd)
    weave()
    This instead does something very strange:
    Code:
    ConvertToYV12(matrix="Rec601", interlaced=true,ChromaInPlacement="DV",ChromaOutPlacement="MPEG2",chromaresample="spline36")
    is equivalent to:
    Code:
    separatefields()
    raw=last
    selectevery(2,0)
    Y = ConvertToY8()
    U = UToY8().Spline36Resize(width()/2, height()/2, src_top=0.125)
    V = VToY8().Spline36Resize(width()/2, height()/2, src_top=-0.125)
    even=YToUV(U, V, Y)
    raw
    selectevery(2,1)
    Y = ConvertToY8()
    U = UToY8().Spline36Resize(width()/2, height()/2, src_top=0.125)
    V = VToY8().Spline36Resize(width()/2, height()/2, src_top=-0.125)
    odd=YToUV(U, V, Y)
    interleave(even,odd)
    weave()
    Someone needs to explain the latter to me, or maybe fix it. I think I'm going to go with Cedocida and my intuition.
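    Lining up the shifts from the two snippets, just reading the numbers off the code above (all in chroma-plane pixels):
    Code:
    #                          U (Cb)     V (Cr)
    # Cedocida-match,  even:   -0.125     +0.375
    # Cedocida-match,  odd:    -0.375     +0.125
    # ConvertToYV12 path:      +0.125     -0.125   (same for both fields)
    # i.e. the internal converter seems to apply one fixed offset regardless
    # of field parity, which may be where the difference comes from.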
    Last edited by Dogway; 19th Jan 2015 at 12:50.
  22. Member Skiller
    Originally Posted by Dogway
    Someone needs to explain the latter to me, or maybe fix it. I think I'm going to go with Cedocida and my intuition.
    I cannot explain AviSynth's behaviour but I noticed the very same thing a couple of months ago when I was editing some DV footage. Like you I compared AviSynth's DV to MPEG2 chroma placement conversion with Cedocida's. I also think Cedocida is right and AviSynth is wrong.
  23. Yes, I observed a few colored sharp frames/fields, and it seems to fix things slightly; it does indeed look better. The manual way is blurrier, not sure why, so it's preferable to use Cedocida. I'm assuming that choosing "MPEG2 interlaced" means convert to "MPEG2 interlaced", not that it defines the input type.