VideoHelp Forum
  1. I have a video in which some frames are interlaced whereas others are not. Please look at the attached pictures and short video. Is it a problem with the camera or bad processing of the video?

    I would like to fix this problem, but the interlaced frames are randomly positioned in the video. Are there tools to detect whether a frame is interlaced? And then which filter should I use? Yadif?


    [Attachment: 00000.jpeg]
    [Attachment: 00001.jpeg]

    http://www.mediafire.com/?2v3jb55nyr9e5zu


    Thank you in advance for any advice.
  2. I think this is actually progressive content from a PAL source; the timecode overlay gives it away.

    You should be able to field match and decimate the duplicates. This would be better than deinterlacing and then decimating the duplicates:

    Code:
    MPEG2Source()              # load the MPEG-2 source (point this at your DGIndex .d2v project)
    AssumeTFF()                # top field first
    TFM()                      # field matching (TIVTC)
    TDecimate(mode=2, rate=25) # decimate the duplicates down to 25 fps

    The black level is high; you might want to bring it down with Levels or SmoothLevels, but I'll leave that up to you.
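
    For example, a minimal Levels sketch (the input black point of 32 is only a guess; check it against Histogram() first):

    Code:
    Levels(32, 1.0, 235, 16, 235, coring=false)  # map blacks of ~32 down to 16, leave whites alone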



    Which MJ concert was this from?
  3. You have to add AssumeFPS(25) to make it exactly 25.0 fps. (It's fractionally off on that short clip.)
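
    Combined with the script above, a minimal sketch (the source line is whatever matches your file):

    Code:
    MPEG2Source()              # or your actual source filter
    AssumeTFF()
    TFM()
    TDecimate(mode=2, rate=25)
    AssumeFPS(25)              # force exactly 25.000 fps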

    It looks like a Japan concert, judging from the billboard advertisements?
  4. I'm glad to see someone had the good sense to do a 3:2:3:2:2 pulldown rather than field blending.
  5. Originally Posted by jagabo View Post
    I'm glad to see someone had the good sense to do a 3:2:3:2:2 pulldown rather than field blending.
    Yes, that's fantastic!

    I don't think I've ever seen that; all that's ever posted are samples with bad field blends.
  6. About 10 years ago I recorded a show off cable TV that used pulldown rather than field blends. You don't see it very often.
  7. What would be the script to do that in avisynth? (if I was converting to NTSC from a PAL source, and not through DGPulldown TFF/RFF flags)
  8. Originally Posted by poisondeathray View Post
    What would be the script to do that in avisynth? (if I was converting to NTSC from a PAL source, and not through DGPulldown TFF/RFF flags)
    The 'hard pulldown' part (resizing may also be required) is most simply done as follows:

    Code:
    ChangeFPS(60000, 1001)  # duplicate frames up to 59.94 fps
    AssumeTFF()             # or BFF, as desired
    SeparateFields()        # 119.88 fields per second
    SelectEvery(4, 0, 3)    # keep the top field of one frame and the bottom field of the next
    Weave()                 # back to 29.97 fps, interlaced

    Note that this script is independent of the source rate (i.e. the source can be PAL, or anything else).
    See http://forum.doom9.org/showthread.php?p=1413536#post1413536
  9. Yes, basically you're just pulling alternating top and bottom fields out of the duplicated frames.
  10. Hi

    First of all, I would like to apologize for my late answer. I was unexpectedly very busy these days and could only study your answers today. I have some questions, not necessarily related to the original issue, but I think it's better to post them here:

    • First, thank you poisondeathray for your script, it works perfectly. In fact, I'll basically reconvert the video to PAL... the aspect ratio also looks more natural at 720x576. Just one question: AssumeFPS(25) will slightly change the duration of my video, right? As I have demuxed the audio and video, I guess they won't match exactly when I join them after processing the video. How would you solve this problem? Is it possible to add an AC3 source to my clip so that AssumeFPS(25) automatically speeds up the audio to match the video?
    • I have edited the levels as advised, but I don't know if I did it right. I wrote:
      Code:
      SmoothLevels(16, 1, 235, 0, 255, chroma=0, limiter=2)
      I set chroma=0, otherwise the colors seem a bit altered. What makes me doubt I used the right parameters is that the black areas are sometimes "noisy". Please look at this short sample and you'll understand the problem: http://www.mediafire.com/?00gxmv0bmek7qza There are many other options for SmoothLevels(), but I don't know if they can be used to get rid of that noise... or maybe it's not a problem with the levels?
    • The last question is about the file size. The original file is more than 1 GB (for 19 min), but when I use the Avidemux auto settings to make a DVD (MPEG-2 avcodec), the file is only 500 MB. Why this difference? Am I using the right tool and codec to make a DVD?
    Thank you in advance for enlightening me on these points.

    PS: the concert is indeed from Japan. It's from the Bad tour, 1987, held in Tokyo. MJ was wearing a red shirt at the beginning of the tour. If you want the whole footage, I can upload it.
  11. Originally Posted by mathmax View Post
    First, thank you poisondeathray for your script, it works perfectly. In fact, I'll basically reconvert the video to PAL... the aspect ratio also looks more natural at 720x576. Just one question: AssumeFPS(25) will slightly change the duration of my video, right? As I have demuxed the audio and video, I guess they won't match exactly when I join them after processing the video. How would you solve this problem? Is it possible to add an AC3 source to my clip so that AssumeFPS(25) automatically speeds up the audio to match the video?
    No, the duration will be fine. That's one reason why this method of pulldown is used to convert PAL=>NTSC: the original audio and pitch are preserved. When you convert it back with that script to PAL, the duration is the same and the audio isn't touched. The AssumeFPS is just to make it exact (there are some slight ms errors, probably because of the short length of the sample; on a longer sample it would be closer).

    You can test it out with audiodub() and play it in a media player. It should be in sync; you shouldn't have to adjust the audio. I would leave AssumeFPS in, however.
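
    For example, a quick sync-check sketch appended after the script above ("concert.ac3" is a hypothetical name for your demuxed AC3; one way to load it is NicAC3Source from the NicAudio plugin):

    Code:
    audio = NicAC3Source("concert.ac3")
    AudioDub(last, audio)   # open the .avs in a media player and check the sync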






    • I have edited the levels as advised, but I don't know if I did it right. I wrote:
      Code:
      SmoothLevels(16, 1, 235, 0, 255, chroma=0, limiter=2)
      I set chroma=0, otherwise the colors seem a bit altered. What makes me doubt I used the right parameters is that the black areas are sometimes "noisy". Please look at this short sample and you'll understand the problem: http://www.mediafire.com/?00gxmv0bmek7qza There are many other options for SmoothLevels(), but I don't know if they can be used to get rid of that noise... or maybe it's not a problem with the levels?

    Ideally you would probably have to adjust the levels per scene, because different parts might be lit differently. Most people probably would not use a limiter (clamping the footage).

    In that AVI footage, the black level can be brought down a tad lower. You can use Histogram() to see this. The goal is to keep roughly within those brown borders, which represent Y=16 and Y=235.
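
    For instance, just append the classic histogram while you tune the values (and remove it before encoding):

    Code:
    Histogram()   # the shaded bands at the edges mark Y=16 and Y=235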

    If you are referring to the static "dots" in the black background, it's probably from your temporal denoiser trying to stabilize the background. If you don't filter it properly before applying a temporal stabilizer, some "dots" can get stuck once you use that filter. Sometimes changing the order of filters can help (e.g. fix the levels before your denoising filters, or sometimes the reverse). Sometimes you have to change the filters or the settings.
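
    As a concrete sketch of "levels before the denoiser" (just one ordering to try; the SmoothLevels values are the ones you already posted above):

    Code:
    SmoothLevels(16, 1, 235, 0, 255, chroma=0, limiter=2)
    # ...then the denoising/sharpening part of Fred's script afterwards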


    The last question is about the file size. The original file is more than 1 GB (for 19 min), but when I use the Avidemux auto settings to make a DVD (MPEG-2 avcodec), the file is only 500 MB. Why this difference? Am I using the right tool and codec to make a DVD?
    Filesize = bitrate x running time
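
    To put rough numbers on it (the bitrates below are only illustrative guesses consistent with those two file sizes):

    Code:
    19 min = 1140 s
    ~7.0 Mbit/s x 1140 s / 8 = ~1000 MB (about 1 GB)
    ~3.5 Mbit/s x 1140 s / 8 = ~500 MB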

    When making a DVD, there are limitations on what you can use.

    You should use a bitrate calculator to get the optimum amount. Search for "DVD bitrate calculator"; there are many, including one on this site.

    Don't use Avidemux to encode a DVD; it uses a very poor MPEG-2 encoder. Use HCEnc, or if you want a GUI that encodes and authors for you, try AVStoDVD (it uses HCEnc).




    PS: the concert is indeed from Japan. It's from the Bad tour, 1987, held in Tokyo. MJ was wearing a red shirt at the beginning of the tour. If you want the whole footage, I can upload it.
    No, but thanks for the offer

    I'm curious what you used to remove the timecode overlay? Just my opinion, but I'm not sure it's better than leaving it on; it flickers and seems intrusive.
  12. Thank you for your answer

    Now I use NicAC3Source() + AudioDub(); it's easier than joining the audio and video later, and they both match.

    About the levels, do you think I should scale the chroma too? (chroma=100 or an intermediate value?)

    Yes, I'm referring exactly to the static dots in the black background. And your assumption seems to be right, because it appears after applying an AviSynth script that I use to denoise and sharpen the video.
    This is the script : http://forum.doom9.org/showthread.php?t=144271
    You might know it. Usually it gives me good results, but I tried playing with the parameter values and these static dots won't disappear. Maybe I'm not using the right script... what do you think?

    Thank you for the advice about the DVD. I'll test the tool when the encoding is finished.

    Oh, and I used InpaintFunc() to remove the logo... the best I could find so far.
  13. Originally Posted by mathmax View Post

    Now I use NicAC3Source() + AudioDub(); it's easier than joining the audio and video later, and they both match.

    If you are reusing the original audio, don't use audiodub() for the encoding, because when you process through AviSynth you would have to re-encode the audio (quality loss). I only meant using audiodub() to check the sync.

    About the levels, do you think I should scale the chroma too? (chroma=100 or an intermediate value?)
    If you are using recent versions, chroma ranges from 0 to 200; 100 is an arbitrary intermediate value. It's a personal choice, so try different values. There is no "right" or "wrong".
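
    For instance, the same call as above with an intermediate chroma value, just to compare:

    Code:
    SmoothLevels(16, 1, 235, 0, 255, chroma=100, limiter=2)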

    Yes, I'm referring exactly to the static dots in the black background. And your assumption seems to be right, because it appears after applying an AviSynth script that I use to denoise and sharpen the video.
    This is the script : http://forum.doom9.org/showthread.php?t=144271
    You might know it. Usually it gives me good results, but I tried playing with the parameter values and these static dots won't disappear. Maybe I'm not using the right script... what do you think?
    I've seen this type of effect before, and I described above what typically causes it. So you either use a stronger denoiser before the temporal stabilizer (I haven't looked closely, but I think it's mvdegrainmulti that is causing this), or different settings, or different filters. Another approach is to add grain.

    A lot of this is personal preference.
  14. Originally Posted by poisondeathray View Post
    I've seen this type of effect before, and I described above what typically causes it. So you either use a stronger denoiser before the temporal stabilizer (I haven't looked closely, but I think it's mvdegrainmulti that is causing this), or different settings, or different filters. Another approach is to add grain.

    A lot of this is personal preference.
    I tried to change the parameters of mvdegrainmulti() this way:

    Code:
    denoising_strenght = 0  # denoising level of second denoiser: MVDegrainMulti()
    denoising_frames = 1    # number of frames for averaging (forwards and backwards); 3 is a good start value
    block_size = 16         # block size of MVDegrainMulti()
    block_size_v = 16
    block_over = 8          # block overlapping of MVDegrainMulti()
    but even those values can't get rid of these fixed dots.

    I guess you mean I should add a spatial denoiser before that function to soften these static areas, and then add grain at the end of the script to "dilute" them.

    I tried to find a spatial denoiser (I guess it must be spatial, otherwise it would only strengthen the fixed areas). I came across RemoveGrain(), but I don't know if it is the best denoiser, nor which mode I should use. It only removes the small static dots and also removes a lot of detail from the rest of the video. I thought I could use a mask to apply the filter only to the dark areas, but I think it would be better to fix what's wrong in the script rather than trying to compensate for this bad effect.
  15. I'm pretty sure it's mvdegrainmulti; I've had similar problems using it in the past.

    If you post a sample of that section of the source, your current script, and what you have tried so far (maybe at Doom9 as well), you may get better suggestions on how to treat the "noise".
  16. I simplified the script, trying to isolate the part responsible for the problem. It indeed seems to be mostly caused by MVDegrainMulti(), although RemoveDirtMC() generates some static dots too.

    Here is a package containing the simplified script and a sample of the original file.

    http://www.mediafire.com/?1qgyxxz3797zibb
  17. My personal preference would be to use a very mild denoiser. This is a pretty clean source, all things considered. IMO, the fine details, like MJ's curly hair, are more important to preserve than giving him "plastic surgery" (what I mean is that when you use a strong denoiser and then sharpen, everything looks like "plastic dolls" and very unnatural).

    Even something like MCTemporalDenoise(settings="low") is almost too strong IMO

    Remember, Fred's script was for Super 8 transfers; they are usually full of dirt and crap, never this clean. I'm not sure those would be the "best" filters for this scenario.

    I don't like over-denoising live-action footage, because the grain actually helps with fine detail and with banding along gradients. Too smooth can actually work against you once you encode the DVD. Many people actually add fine grain or dither before they encode.
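
    As an illustration only (assuming the AddGrain plugin is installed; the strength value is just a guess), a touch of grain added as the last step before encoding:

    Code:
    AddGrainC(1.5, 0.0)   # light luma-only grain to help hide banding on gradients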

    But we probably have different expectations and tastes. If you describe what you want, or what your expectations are, more clearly, maybe you will get better suggestions.
  18. Well... I like the result of Fred's script even on that video. Maybe it could be better, but I guess I'm not experienced enough to know.

    How to describe it... well, it's sharper and less noisy; in fact only these black areas are a problem.

    before:
    [Attachment: before.jpg]

    after:
    [Attachment: after.jpg]
  19. If you like those results, you'll probably have to work within that script and play with the settings

    Or try other filters to get similar results. Most of the difference in that screenshot is from the levels.
  20. Finally, I was able to deal with the problem by using a smoother and then adding grain to the final clip:

    Code:
    smoothedclip = result.ConvertToRGB().VD_SmartSmoothHiQ(9, 40, 254, "weighted").ConvertToYV12().GrainFactory3(g1str=2, g2str=4, g3str=6)
    mymask = film.Lanczos4Resize(W,H).tweak(sat=0).Invert().levels(100,0.5, 255, 0, 240).blur(1.58).blur(1.58).blur(1.58).blur(1.58).blur(1.58).blur(1.58).blur(1.58).blur(1.58).blur(1.58).blur(1.58).blur(1.58)
    
    Overlay(result, smoothedclip , mask= mymask)
    I use the mask to apply this correction only to the dark areas and keep the rest as sharp as possible. I'm quite satisfied with the final result.

    But I still have one last problem. After processing the clip, I realized that some frames have a problem that I can better describe by posting an example. The first frame is OK, the second one is badly processed:

    [Attachment: 20000.jpeg]
    [Attachment: 20001.jpeg]

    I have already seen this kind of problem before, but I don't know where it comes from. Do you know what it is?
  21. Originally Posted by jagabo View Post
    I'm glad to see someone had the good sense to do a 3:2:3:2:2 pulldown rather than field blending.
    why? Is it better than blending the fields? For watching on TV, which method gives the best result?
  22. I have already seen this kind of problem before, but I don't know where it comes from. Do you know what it is?
    The artifacts are likely from an imperfect mask, e.g. grey areas, so you get partial overlap of the top and bottom layers.
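
    If grey areas in the mask are the cause, one option (not something suggested in the thread, and it assumes the MaskTools2 plugin) would be to binarize your mymask clip so every pixel is fully on or off:

    Code:
    mymask = mymask.mt_binarize(threshold=128)   # hard 0/255 mask, no partial blends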

    Is it better than blending the fields? For watching on TV, which method gives the best result?
    Watching on an NTSC TV, that pulldown method for PAL=>NTSC conversion is better because:

    1) the original audio is preserved
    2) there are no blends (with the other method, the blends are blurry)
    3) you can recover the original progressive frames a lot more easily, like you are doing now (blends are hard to reverse)

    Some viewers might notice a slight irregular judder, but most will not.
  23. Originally Posted by poisondeathray View Post
    The artifacts are likely from not perfect mask, like grey areas. So you have partial overlap of top & bottom layers
    Mmm... when I run the script again, I don't get the same error on the same frames... each time it's on different frames, which is strange...
    So I combined several videos... but it took me time to find the damaged parts...

    Originally Posted by poisondeathray View Post
    Watching on an NTSC TV, that pulldown method for PAL=>NTSC conversion is better because:

    1) the original audio is preserved
    2) there are no blends (with the other method, the blends are blurry)
    3) you can recover the original progressive frames a lot more easily, like you are doing now (blends are hard to reverse)

    Some viewers might notice a slight irregular judder, but most will not.
    Thank you for the explanations

    Why is the audio altered in the other method?

    Does the blend method blur each frame, or does it insert blended intermediate frames from time to time? In that case it should give a better result than interlacing half of the frames in an order which is moreover not exact (as in the pulldown method), shouldn't it?

    yes that irregular judder is what I didn't like when I watched the video on my PC because the order of the fields is not exact... but maybe it's better on TV.
  24. Originally Posted by mathmax View Post
    Mmm... when I run the script again, I don't get the same error on the same frames... each time it's on different frames, which is strange...
    Are you using DirectShowSource()? That's not frame accurate and will lead to problems like that.

    Originally Posted by mathmax View Post
    Why is the audio altered in the other method?
    With the field blending method the audio isn't altered. But with the slowdown method (25 fps to 23.976 fps, then 3:2 pulldown) the audio has to be slowed along with the video.
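
    For reference, the slowdown route would look roughly like this in AviSynth (a sketch; the 3:2 pulldown flags themselves are added at encode time, e.g. with DGPulldown):

    Code:
    AssumeFPS(24000, 1001, sync_audio=true)   # 25p -> 23.976p; the audio is slowed (and pitch-shifted) to match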

    Originally Posted by mathmax View Post
    Does the blend method blur each frame, or does it insert blended intermediate frames from time to time?
    About 1/3 of the fields are blends of two film frames.

    Originally Posted by mathmax View Post
    In that case it should give a better result than interlacing half of the frames in an order which is moreover not exact (as in the pulldown method), shouldn't it?
    I don't know what you mean by "not exact". When properly displayed you see one field at a time, not one frame at a time. So 3:2:3:2:2 pulldown looks very similar to 3:2 pulldown when you watch it.

    Originally Posted by mathmax View Post
    yes that irregular judder is what I didn't like when I watched the video on my PC because the order of the fields is not exact... but maybe it's better on TV.
    I think you have some other display problem.
  25. Originally Posted by jagabo View Post
    Originally Posted by mathmax View Post
    Why is the audio altered in the other method?
    With the field blending method the audio isn't altered. But with the slowdown method (25 fps to 23.976 fps, then 3:2 pulldown) the audio has to be slowed along with the video.
    Sorry, I was confusing the PAL slowdown method with the field-blended conversion method.

    Originally Posted by mathmax View Post
    yes that irregular judder is what I didn't like when I watched the video on my PC because the order of the fields is not exact... but maybe it's better on TV.
    I think you have some other display problem.
    It will look better on TV. It depends on what PC software you are using; if you are using dedicated DVD software, it should look normal.
  26. Originally Posted by jagabo View Post
    Are you using DirectShowSource()? That's not frame accurate and will lead to problems like that.
    No, I use Avisource(). But I think it's an error in Fred's script itself (maybe a problem with unsharpmask(), as poisondeathray pointed out?), because it already happened when I used it on other videos before.


    Originally Posted by jagabo View Post
    I don't know what you mean by "not exact". When properly displayed you see one field at a time, not one frame at a time. So 3:2:3:2:2 pulldown looks very similar to 3:2 pulldown when you watch it.
    Well, I don't know the details of 3:2:3:2:2 pulldown; I couldn't find it in the documentation. I would be interested to know more about it.
    But for 3:2 pulldown, it's written:
    source: AtAb BtBb CtCb DtDb (four frames)

    3:2 pulldown: AtAb AtBb BtCb CtCb DtDb (five frames)
    for example Bb comes before Bt and Cb comes before Ct... that's what I mean by not the exact order.. and I guess that causes the "irregular judder" poisondeathray mentioned before.

    Originally Posted by jagabo View Post
    About 1/3 of the fields are blends of two film frames.
    why is it necessary to blend 1/3 of the frames? Why not blend only the inserted frames?

    Originally Posted by poisondeathray View Post
    It will look better on TV. It depends on what PC software you are using; if you are using dedicated DVD software, it should look normal.
    I use VLC
  27. Originally Posted by mathmax View Post
    for 3:2 pulldown, it's written:
    source: AtAb BtBb CtCb DtDb (four frames)

    3:2 pulldown: AtAb AtBb BtCb CtCb DtDb (five frames)
    for example Bb comes before Bt and Cb comes before Ct... that's what I mean by not the exact order.. and I guess that causes the "irregular judder" poisondeathray mentioned before.
    Whether top or bottom fields are displayed first doesn't matter. The cause of the judder is the fact that some frames are seen for 2/60 of a second and some are seen for 3/60 of a second.

    Originally Posted by mathmax View Post
    Well, I don't know the details of 3:2:3:2:2 pulldown; I couldn't find it in the documentation. I would be interested to know more about it.
    It's pretty much the same thing. It pulls fields from the frames, but instead of repeating in a 3:2... pattern (2 frames become 5 fields) they repeat in a 3:2:3:2:2... pattern (5 frames become 12 fields), except that roughly every 1000 fields (about every 500 output frames) one of the threes becomes a two, to make up for the difference between 59.94 fields per second and 60 fields per second:

    At Ab At Bb Bt Cb Ct Cb Dt Db Et Eb...

    Originally Posted by mathmax View Post
    Originally Posted by jagabo View Post
    About 1/3 of the fields are blends of two film frames.
    why is it necessary to blend 1/3 of the frames? Why not blend only the inserted frames?
    Each frame of a 25 fps source is to be viewed for 1/25 second. When displayed at 60 fields per second with blending, each field that crosses a 1/25 second boundary is blended: a double exposure. Think of what you would see if you filmed a 25 fps source at 60 fps. Most of the 60 fps frames would be exposed to only one of the 25 fps frames, but some would be exposed to two consecutive source frames:

    [Attachment: graph.png]

    In that bar chart the horizontal axis represents time, increasing to the right. The top row of blocks represents five 25 fps progressive frames, i.e. the width of each colored block is 40 ms (1/25 second). The bottom row represents 12 NTSC fields of ~17 ms (1/59.94 second) each. The total length of the bars represents 200 ms (5/25 second). The first two NTSC fields are exposed to only the first frame of the 25 fps source, so there's no blending. The third NTSC field is exposed to both the first and second frames of the 25 fps source: a blended field. And so on. So out of 12 NTSC fields, 4 are double exposures.
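
    Purely to illustrate where those blends come from, a rough sketch of the blended conversion in AviSynth (not a recommendation, and not necessarily how any given converter does it):

    Code:
    # starting from the 25p progressive frames
    ConvertFPS(60000, 1001)                         # 59.94 frames/s; frames straddling a 1/25 s boundary become blends
    AssumeTFF()
    SeparateFields().SelectEvery(4, 0, 3).Weave()   # re-interlace to 29.97 fps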
  28. Originally Posted by mathmax View Post
    for example Bb comes before Bt and Cb comes before Ct... that's what I mean by not the exact order..
    As jagabo says,
    Originally Posted by jagabo View Post
    Whether top or bottom fields are displayed first doesn't matter.
    The reason it doesn't matter is that Bb and Bt both correspond to the same moment (and the same goes for Cb and Ct), so either one could be shown first without disturbing the temporal sequence.


