VideoHelp Forum
  1. Did you cap in RGB?

    Code:
    AVISource("snimak eden.avi")
    Info()
    What does that say about the colorspace?


    Nope; that way I get "invalid mode 20" in RemoveGrain?!
    This is a PITA. There are like 10 different versions of RemoveGrain. I think the one I'm using is from the QTGMC thread on doom9; there is a package there.
  2. Yes, it's the QTGMC version. Only RemoveGrainSSE2.dll, no other versions on that computer. (no RemoveGrainSSE3.dll, no RemoveGrain.dll, etc...)

    There is no version number, the modified date is
    Sunday, July 31, 2005, 11:08:11 PM

    Which is different than the dozen or so other versions of RemoveGrain____.dll. There is a PR (pre-release) version, a 1.0b version, a RemoveGrainHD, an SSE Tools version... each with SSE2 and SSE3 variants.

    It's a nightmare. I have different versions on different computers. Some work for YUY2 in QTGMC, some don't. What a mess.
  3. Originally Posted by mammo1789
    What type of file are you trying to open?
    A Lagarith YUY2 file is the source
    Then you want to use AviSource("filename.avi"). I'm pretty sure ffmpeg doesn't include a Lagarith decoder.

    Originally Posted by mammo1789
    If I use AviSource I now get a different error: "only planar" source?
    Some of the filters work only in YV12. Use: AviSource("filename.avi").ConvertToYV12(interlaced=true).

    Originally Posted by mammo1789
    Should I use YV12? I thought no color conversion was happening?
    You will be converting from YUY2 (?) to YV12. That will reduce the chroma resolution by half on the vertical axis.
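
    For illustration, a minimal sketch of that check-and-convert step (the filename is just a placeholder, and it assumes the Lagarith VFW codec is installed):

    Code:
    AviSource("capture.avi")         # placeholder name; decoded by the installed Lagarith VFW codec
    Info()                           # overlay should report YUY2 for a YUY2 capture; remove before filtering
    ConvertToYV12(interlaced=true)   # interlaced-aware 4:2:2 -> 4:2:0; halves vertical chroma resolution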
  4. Did you cap in RGB?
    Of course not, I captured in YUY2
    What does that say about the colorspace?
    YUY2

    Modified Sunday, May 01, 2005, 2:41:16 PM for RemoveGrain, and Thursday, May 05, 2005, 10:16:08 PM for the SSE2 version

    Which is different than the ten or so other versions of RemoveGrain____.dll

    It's a nightmare. I have different versions on different computers
    So should I use only that July 31, 2005 SSE2 version? (I have an SSE3-capable Q6600 - will that matter for speed or is it irrelevant?) Where do I find that version, and should I delete the other 2 files?
  5. Originally Posted by jagabo
    I'm pretty sure ffmpeg doesn't include a Lagarith decoder.
    Actually, ffmpeg does now; it has a decoder for the Ut Video codec as well.

    But I don't know if that has made it into the FFMS2.dll builds for AviSynth yet.
  6. Originally Posted by mammo1789
    Did you cap in RGB?
    Of course not, I captured in YUY2
    What does that say about the colorspace?
    YUY2
    There might be a workaround in AviSynth 2.6.x: you can use YV16 (planar 4:2:2) for some filters.

    (of course you need the right .dll combination, some don't work )
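
    For example, a rough sketch of that 2.6.x route (assuming the filters in your chain accept YV16; the filename is a placeholder):

    Code:
    AviSource("capture.avi")   # placeholder; a YUY2 Lagarith capture
    ConvertToYV16()            # lossless repack of YUY2 into planar 4:2:2 - no chroma downsampling
    # ...YV16-capable filters here...
    ConvertToYUY2()            # repack back to YUY2 if the rest of the chain needs it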

    So should I use only that July 31, 2005 SSE2 version? (I have an SSE3-capable Q6600 - will that matter for speed or is it irrelevant?) Where do I find that version, and should I delete the other 2 files?
    It's less than a 1% difference in speed. But I read somewhere from jedi master Didée that the SSE3 versions of some filters are unreliable and can cause crashes, so I stay away from them (I can't remember which ones, so I avoid them all). You can use whatever you want; feel free to experiment.
  7. Then you want to use AviSource("filename.avi"). I'm pretty sure ffmpeg doesn't include a Lagarith decoder.
    I tried that, as I said, but in that case I lose the FFVideoSource settings that poisondeathray was suggesting for better results.
    Yes, AVISource is fine. The reason that was used for FFVideoSource was that sometimes the frame rate is a bit off, and seekmode=0 gives a bit more frame accuracy (when you do non-linear seeks while tweaking the script, it can lose its place or give slightly different results; if you go linearly you will get consistent results).

    Some of the filters work only in YV12. Use: AviSource("filename.avi").ConvertToYV12(interlaced=true).
    I know that, and I tried that first, but it gives an error, as I said: RemoveGrain "invalid mode 20"? Is poisondeathray doing a YV12 conversion or not? Where do I find the correct version of RemoveGrain that you mentioned? I downloaded mine from the official site.
  8. ^ AVISource is fine; the problem is with FFVideoSource for some file types. But the non-linear seek problem still occurs with AVISource or FFVideoSource - it has to do with some of the temporal filters. E.g. say you're tweaking a script and you jump to a frame to preview the result - the result might be different than if you had started a long way back, say 100 frames earlier, and stepped forward frame by frame.

    Is poisondeathray doing a YV12 conversion or not?
    Your test file was YV12 (because you re-encoded with x264 in YV12 - although x264 supports YUY2 now...)


    But one might argue VHS chroma resolution (even on pristine quality VHS) is so low anyway, so why bother? ....
  9. Avisource ("D:\VIDEO OBR\snimak eden.avi")
    ConvertToYV12(interlaced=true)
    AssumeTFF()
    SeparateFields()
    a=last

    clense(reduceflicker=false).merge(last,0.5).clense(reduceflicker=false)
    mot=removegrain(11,0).removegrain(20,0).DepanEstimate(range=2)
    take2=a.depaninterleave(mot,prev=2,next=2,subpixel=2)
    clean1=take2.TMedian2().selectevery(5,2)

    sup1 = clean1.minblur(1).removegrain(11,0).removegrain(11,0)
    \ .mt_lutxy(clean1,"x 1 + y < x 2 + x 1 - y > x 2 - y ? ?",U=2,V=2)
    \ .msuper(pel=2,sharp=0)
    sup2 = a.msuper(pel=2,levels=1,sharp=2)

    bv22=sup1.manalyse(isb=true, truemotion=false,global=true,delta=2,blksize=16,overlap=8,search=5,searchparam=4,DCT=5)
    bv21=sup1.manalyse(isb=true, truemotion=false,global=true,delta=1,blksize=16,overlap=8,search=5,searchparam=4,DCT=5)
    fv21=sup1.manalyse(isb=false,truemotion=false,global=true,delta=1,blksize=16,overlap=8,search=5,searchparam=4,DCT=5)
    fv22=sup1.manalyse(isb=false,truemotion=false,global=true,delta=2,blksize=16,overlap=8,search=5,searchparam=4,DCT=5)

    interleave(a.mcompensate(sup2,fv22),a.mcompensate(sup2,fv21),a,a.mcompensate(sup2,bv21),a.mcompensate(sup2,bv22))
    TMedian2().selectevery(5,2)

    sup3 = last.msuper(pel=2,sharp=2)
    bv33=sup3.manalyse(isb=true, truemotion=false,global=true,delta=3,blksize=8,overlap=4,search=5,searchparam=4,DCT=5)
    bv32=sup3.manalyse(isb=true, truemotion=false,global=true,delta=2,blksize=8,overlap=4,search=5,searchparam=4,DCT=5)
    bv31=sup3.manalyse(isb=true, truemotion=false,global=true,delta=1,blksize=8,overlap=4,search=5,searchparam=4,DCT=5)
    fv31=sup3.manalyse(isb=false,truemotion=false,global=true,delta=1,blksize=8,overlap=4,search=5,searchparam=4,DCT=5)
    fv32=sup3.manalyse(isb=false,truemotion=false,global=true,delta=2,blksize=8,overlap=4,search=5,searchparam=4,DCT=5)
    fv33=sup3.manalyse(isb=false,truemotion=false,global=true,delta=3,blksize=8,overlap=4,search=5,searchparam=4,DCT=5)

    last.mdegrain3(sup3,bv31,fv31,bv32,fv32,bv33,fv33, thSAD=499)

    Interleave()
    Weave()

    ###
    #####interpolate bad frames and residual cleaning
    ###

    Super = msuper()
    bv1 = manalyse(Super, isb=true, delta=2)
    fv1 = manalyse(Super, isb=false, delta=2)
    bv2 = manalyse(Super, isb=true, delta=3)
    fv2 = manalyse(Super, isb=false, delta=3)
    global CandidatesForN = mflowinter(Super, bv2, fv2, time=33.3, ml=100)
    global CandidatesForO = mflowinter(Super, bv2, fv2, time=66.7, ml=100)
    global CandidatesForC = mflowinter(Super, bv1, fv1, time=50.0, ml=100)

    last
    rx(104,12) #104-116 replaced

    AssumeTFF()
    SeparateFields()
    f1=SelectEven().RemoveDirtMC(500,false).LSFMod(strength=50)
    f2=SelectOdd().RemoveDirtMC(500,false).LSFMod(strength=50)
    Interleave(f1,f2)
    Weave()





    function MinBlur(clip clp, int r, int "uv")
    {
    uv = default(uv,3)
    uv2 = (uv==2) ? 1 : uv
    rg4 = (uv==3) ? 4 : -1
    rg11 = (uv==3) ? 11 : -1
    rg20 = (uv==3) ? 20 : -1
    medf = (uv==3) ? 1 : -200

    RG11D = (r==0) ? mt_makediff(clp,clp.sbr(),U=uv2,V=uv2)
    \ : (r==1) ? mt_makediff(clp,clp.removegrain(11,rg11),U=uv2,V=uv2)
    \ : (r==2) ? mt_makediff(clp,clp.removegrain(11,rg11).removegrain(20,rg20),U=uv2,V=uv2)
    \ : mt_makediff(clp,clp.removegrain(11,rg11).removegrain(20,rg20).removegrain(20,rg20),U=uv2,V=uv2)
    RG4D = (r<=1) ? mt_makediff(clp,clp.removegrain(4,rg4),U=uv2,V=uv2)
    \ : (r==2) ? mt_makediff(clp,clp.medianblur(2,2*medf,2*medf),U=uv2,V=uv2)
    \ : mt_makediff(clp,clp.medianblur(3,3*medf,3*medf),U=uv2,V=uv2)
    DD = mt_lutxy(RG11D,RG4D,"x 128 - y 128 - * 0 < 128 x 128 - abs y 128 - abs < x y ? ?",U=uv2,V=uv2)
    clp.mt_makediff(DD,U=uv,V=uv)
    return(last)
    }

    # median of 5 clips from Helpers.avs by G-force
    Function Median2(clip "input_1", clip "input_2", clip "input_3", clip "input_4", clip "input_5", string "chroma")
    {
    chroma = default(chroma,"process") #default is "process". Alternates: "copy first" or "copy second"
    #MEDIAN(i1,i3,i5)
    Interleave(input_1,input_3,input_5)
    chroma == "process" ? Clense(reduceflicker=false) : Clense(reduceflicker=false,grey=true)
    m1 = selectevery(3,1)
    #MAX(MIN(i1,i3,i5),i2)
    m2 = input_1.MT_Logic(input_3,"min",chroma=chroma).MT_Logic(input_5,"min",chroma=chroma).MT_Logic(input_2,"max",chroma=chroma)
    #MIN(MAX(i1,i3,i5),i4)
    m3 = input_1.MT_Logic(input_3,"max",chroma=chroma).MT_Logic(input_5,"max",chroma=chroma).MT_Logic(input_4,"min",chroma=chroma)
    Interleave(m1,m2,m3)
    chroma == "process" ? Clense(reduceflicker=false) : Clense(reduceflicker=false,grey=true)
    selectevery(3,1)
    chroma == "copy first" ? last.MergeChroma(input_1) : chroma == "copy second" ? last.MergeChroma(input_2) : last
    Return(last)
    }

    function TMedian2(clip c) {
    Median2( c.selectevery(1,-2), c.selectevery(1,-1), c, c.selectevery(1,1), c.selectevery(1,2) ) }



    function RemoveDirt(clip input, int limit, bool _grey)
    {
    clensed=input.Clense(grey=_grey, cache=4)
    alt=input.RemoveGrain(2)
    return RestoreMotionBlocks(clensed,input,alternative=alt, pthreshold=6,cthreshold=8, gmthreshold=40,dist=3, dmode=2,debug=false,noise=limit,noisy=4, grey=_grey)

    # Alternative settings
    # return RestoreMotionBlocks(clensed,input,alternative=alt, pthreshold=4,cthreshold=6, gmthreshold=40,dist=1,dmode=2,debug=false,noise=limit,noisy=12,grey=_grey,show=true)
    # return RestoreMotionBlocks(clensed,input,alternative=alt, pthreshold=6,cthreshold=8, gmthreshold=40,dist=3,tolerance=12,dmode=2,debug=false,noise=limit,noisy=12,grey=_grey,show=false)
    }

    function RemoveDirtMC(clip,int limit, bool "_grey")
    {
    _grey=default(_grey, false)
    limit = default(limit,6)
    i=MSuper(clip,pel=2)
    bvec = MAnalyse(i,isb=false, blksize=8, delta=1, truemotion=true)
    fvec = MAnalyse(i,isb=true, blksize=8, delta=1, truemotion=true)
    backw = MFlow(clip,i,bvec)
    forw = MFlow(clip,i,fvec)
    clp=interleave(backw,clip,forw)
    clp=clp.RemoveDirt(limit,_grey)
    clp=clp.SelectEvery(3,1)
    return clp
    }



    function R(clip Source, int N)
    {
    # N is number of the frame in Source that needs replacing.
    # Frame N will be replaced.

    Source.trim(0,-N) ++ CandidatesForC.trim(N-1,-1) ++ Source.trim(N+1,0)
    }


    function RP(clip Source, int N)
    {
    # N is number of the first frame in Source that needs replacing.
    # Frames N and N+1(O) will be replaced.

    Source.trim(0,-N) ++ CandidatesForN.trim(N-1,-1) \
    ++ CandidatesForO.trim(N-1,-1) ++ Source.trim(N+2,0)
    }


    function RX(clip Source, int N, int X)
    {
    # N is number of the 1st frame in Source that needs replacing.
    # X is total number of frames to replace
    #e.g. RX(101, 5) would replace 101,102,103,104,105 , by using 100 and 106 as reference points for mflowfps interpolation

    start=Source.trim(N-1,-1) #one good frame before, used for interpolation reference point
    end=Source.trim(N+X,-1) #one good frame after, used for interpolation reference point

    start+end
    AssumeFPS(1) #temporarily FPS=1 to use mflowfps

    super = MSuper()
    backward_vec = MAnalyse(super, isb = true)
    forward_vec = MAnalyse(super, isb = false)
    MFlowFps(super, backward_vec, forward_vec, blend=false, num=X+1, den=1) #num=X+1
    AssumeFPS(FrameRate(Source)) #return back to normal source framerate for joining
    Trim(1, framecount-1) #trim ends, leaving replacement frames

    Source.trim(0,-N) ++ last ++ Source.trim(N+X+1,0)
    }


    I also put in the QTGMC version of RemoveGrain and now it is loading in avsmode,
    but it could not render in Media Player Classic

    D:\script\scrip 5.avs::Avisynth video #1

    Media Type 0:
    --------------------------
    Video: YV12 720x576 25.00fps

    AM_MEDIA_TYPE:
    majortype: MEDIATYPE_Video {73646976-0000-0010-8000-00AA00389B71}
    subtype: MEDIASUBTYPE_YV12 {32315659-0000-0010-8000-00AA00389B71}
    formattype: FORMAT_VideoInfo {05589F80-C356-11CE-BF01-00AA0055595A}
    bFixedSizeSamples: 1
    bTemporalCompression: 0
    lSampleSize: 622080
    cbFormat: 88

    VIDEOINFOHEADER:
    rcSource: (0,0)-(0,0)
    rcTarget: (0,0)-(0,0)
    dwBitRate: 0
    dwBitErrorRate: 0
    AvgTimePerFrame: 400000

    BITMAPINFOHEADER:
    biSize: 40
    biWidth: 720
    biHeight: 576
    biPlanes: 1
    biBitCount: 12
    biCompression: YV12
    biSizeImage: 622080
    biXPelsPerMeter: 0
    biYPelsPerMeter: 0
    biClrUsed: 0
    biClrImportant: 0

    pbFormat:
    0000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    0010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    0020: 00 00 00 00 00 00 00 00 80 1a 06 00 00 00 00 00 ........€.......
    0030: 28 00 00 00 d0 02 00 00 40 02 00 00 01 00 0c 00 (...Ð...@.......
    0040: 59 56 31 32 00 7e 09 00 00 00 00 00 00 00 00 00 YV12.~..........
    0050: 00 00 00 00 00 00 00 00 ........
  10. I cannot use this part anymore - fpsnum=25, fpsden=1, seekmode=0) - is it important for the script?
    I really would like to get a result like you did with the sample; it looked great.
  11. I have no idea what the problem is

    Maybe they changed the versions since I downloaded it

    I'll upload the zip file where I got it from. Only take that one .dll, "RemoveGrainSSE2.dll" (don't dump all the .dll's into the plugins folder - that's the fastest way to get AviSynth problems and conflicts. Also remove all the other versions from your plugins folder).
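
    If autoloading keeps picking up the wrong copy, one option (just a sketch, the path is hypothetical) is to keep that one DLL outside the plugins folder and load it explicitly:

    Code:
    LoadPlugin("C:\filters\RemoveGrainSSE2.dll")     # hypothetical path, outside the autoload plugins folder
    AviSource("capture.avi").ConvertToYV12(interlaced=true)
    RemoveGrain(20)                                  # mode 20 should now come from the explicitly loaded DLL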

    I cannot use this part anymore - fpsnum=25, fpsden=1, seekmode=0) - is it important for the script?
    It doesn't affect anything if you use AVISource(). If you preview and encode linearly you should get the same result.

    If you want to add AssumeFPS(25), that would be the same thing as fpsnum/fpsden. When I opened your mkv with FFVideoSource it returned an FPS of 25.001, not exactly 25.0. That was the reason.
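
    In other words, either of these (sketched with placeholder filenames) pins the rate to 25.000:

    Code:
    # FFMS2 route, forcing the rate and more accurate linear seeking:
    # FFVideoSource("capture.mkv", fpsnum=25, fpsden=1, seekmode=0)

    # AviSource route - the AVI header normally carries the rate already;
    # force it only if the reported rate is slightly off (e.g. 25.001):
    AviSource("capture.avi")
    AssumeFPS(25)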
    Image Attached Files
  12. Yes, I put it in already and it works now; it opens in VDub. Holy smoke, thanks a million poisondeathray. It's slow and sometimes hangs in preview.

    I will try to edit now and let you know in a minute.
    Thanks again.

    When I slide through the preview it says "(decoding frame)" at the bottom in VDub - is this normal? And it moves very slowly in preview, is that also normal? I have a quad core at 3.2 GHz.
  13. What was the problem? Was it that version of that .dll ?

    Yes, it's a slow script, not multithreaded. (Another reason to do it in segments with lossless intermediates instead of stacking the whole thing.)

    (Personally, I don't use MT for temporal filters, because the results can be bizarre, with errors - and you can get different results each time, very inconsistent. But some people have got MT to work with temporal filters; you can experiment if you want.)
  14. Thanks again poisondeathray, you helped me a lot. For the chroma shift on the clips I tried one script:

    AVISource("D:\VIDEO OBR\snimak eden.avi")
    FixChromaBleeding()

    Function FixChromaBleeding (clip input) {

    # prepare to work on the V channel and reduce to speed up and filter noise
    area = input.tweak(sat=4.0).VtoY.ReduceBy2

    # select and normalize both extremes of the scale
    red = area.Levels(255,1.0,255,255,0)
    blue = area.Levels(0,1.0,0,0,255)

    # merge both masks
    mask = MergeLuma(red, blue, 0.5).Levels(250,1.0,250,255,0)

    # expand to cover beyond the bleeding areas and shift to compensate the resizing
    mask = mask.ConvertToRGB32.GeneralConvolution(0,"0 0 0 0 0 1 1 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0").ConvertToYV12

    # back to full size and binarize (also a trick to expand)
    mask = mask.BilinearResize(Width(input),Height(input)).Levels(10,1.0,10,0,255)

    # prepare a version of the image that has its chroma shifted and less saturated
    input_c = input.ChromaShift(C=-4).tweak(sat=0.8)

    # combine both images using the mask
    return input.overlay(input_c,mask=mask,mode="blend",opacity=1)
    }

    http://img845.imageshack.us/img845/845/91391724.png

    But it didn't help much. The problem is that the chroma shift is probably in the upper fields only (I had a very similar problem with another tape, which I'm planning to use your script on for the rolling lines).
    Sorry for the million questions, guys.
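
    A hedged sketch of testing that "upper fields only" idea, reusing the per-field pattern from the main script (the -4 offset is only an example to experiment with):

    Code:
    AVISource("D:\VIDEO OBR\snimak eden.avi")
    AssumeTFF()
    SeparateFields()
    t = SelectEven().ChromaShift(C=-4)   # shift chroma only in the top fields; tune or swap as needed
    b = SelectOdd()                      # leave the bottom fields untouched
    Interleave(t, b)
    Weave()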


    What was the problem? Was it that version of that .dll ?
    Yep, I think so. I changed to the other version and it didn't show the same error, so it must have been that.

    Yes , it's a slow script, non multithreaded. (Another reason to do it in segments with lossless intermediates instead of stacking the whole thing)
    Like you said earlier? Part by part, you mean?
    I don't mind the slowness. (Is it possible to use MTsource() to make it multithreaded, and will that speed it up? I get 0.75 fps - I guess that's normal for this kind of script, right? How much are you getting?)

    And it's cold in here, and my quad core is only reaching 20°C on the cores while working on the script.

    Also, is it possible to run 2 scripts at the same time - one working on one segment (one trimmed video file), the other on another file - in 2 instances of VDub? Just asking.
    Last edited by mammo1789; 7th Feb 2012 at 18:32.
  15. [Attached image: kraen.png (end)]
    [Attached image: pocetok.png (beginning)]

    I used Neat Video on top of the script (in VDub) and ColorMill in VDub; I lowered the levels a bit and the color (I think it has a more natural look).
    Only the chroma bleed now, and that blue cast on the tuxedo. I think I'm getting somewhere here - much better than before.
  16. Sorry, not sure what to do about that bleed besides manually fixing it - I think it's too much for the typical "automatic" AviSynth filters. MergeChroma(AwarpSharp2(#some settings)) can help with minor bleed, but the extent is too much here. But maybe someone has better ideas.

    It's not a simple chroma shift, because if you shift the blue right, the bleed on the subject's left side of the face will get worse.

    Did you have any chroma noise filters enabled when capturing, or earlier in the filter chain? Because that will exacerbate this.
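
    For reference, a rough sketch of that MergeChroma(AwarpSharp2()) idea for minor bleed (the depth value is only an illustration; on interlaced material run it on separated fields):

    Code:
    AVISource("D:\VIDEO OBR\snimak eden.avi")
    ConvertToYV12(interlaced=true)
    AssumeTFF()
    SeparateFields()
    MergeChroma(aWarpSharp2(depth=16))   # warp chroma toward luma edges; luma is left untouched
    Weave()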
  17. Did you have any chroma noise filters enabled when capturing, or earlier in the filter chain? Because that will exacerbate this.
    I have the noise reduction on the VCR turned on and the TBC off; the line-in DNR on the DVD recorder is also on. But I turned them both off and there is not much difference in the chroma noise, only much more grain.

    The chroma "shift" on the logo is because of a poorly aligned antenna (analog TV station signal). Because of the different transponders in one city, one channel is excellent and the other is shifted (like the problem in the picture). I have used satellite TV for over two decades, and digital cable for local stations now, but back then, more than a decade ago, I caught the local stations with a regular VHF/UHF antenna.
    The shift is in the broadcast picture itself, not from degradation of the tape, and that's why it's more difficult to solve than a simple chroma shift, I guess.
    Oh, and a little tip: it's better to kill the audio and process it separately (I do it in Audition and Audacity) and then merge it back, because I noticed that with the audio left in the script it leaves some very strange artifacts and digital gaps in the audio signal (because of the pixel processing, I guess).
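
    A minimal sketch of that workflow (filenames are placeholders):

    Code:
    # filtering script: video only
    AviSource("capture.avi")
    KillAudio()        # drop the audio before the heavy filtering
    # ...filter chain here...

    # final mux script, after cleaning the audio externally (Audition/Audacity):
    # AudioDub(AviSource("filtered.avi"), WavSource("cleaned.wav"))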
    Jagabo, I tried the median script on the clips (and on the garbage area from the loose oxide, which shows up randomly). It has some improvement, although I didn't notice much improvement in the noisy areas and the random dropout areas (they are still there): at the same spot there is a signal drop on one clip but not on clip number 2, while clip number 2 has a dropout in another place where clip 1 does not, and so on (median of 3)?
    So after the script I should get frames with no signal drop at position x and y, and the signal should be continuously good without dropouts, right? Or would a blend technique be better, or can it not address that?
    I think the tape is slowly showing its death, and I will try to bake it, if it's OK to do that in a regular oven.
    I must say that poisondeathray helped me with this script for another tape that had a similar problem:
    https://forum.videohelp.com/threads/339642-Tbc-makes-bad-picture-worse - and back then everybody told me it wasn't repairable, and it's even more visible there.

    Oh, and I think I made the script quasi-MT: I opened 2 instances of VDub with 2 scripts (one clip divided into two segments, each instance processing one segment). That way a 10-minute clip becomes 2 x 5 minutes and the processing time is roughly half.
    [Attached image: Untitled.png]

    I will try with 4 instances and let you know

    Thanks again
    Last edited by mammo1789; 8th Feb 2012 at 07:34.
  18. Be careful where you divide the cuts to process in parallel - where you start will give you different results with temporal filters. Notice the ends are always dirty (because there is no beginning or end frame to reference). You will also run into the non-linear seek problem: starting somewhere in the middle might give you slightly different results than linear processing from beginning to end.
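
    One hedged workaround (not from this thread) is to cut the segments with overlapping handle frames and trim the handles off after filtering; the frame numbers below are only placeholders:

    Code:
    # segment 1: frames 0-7499 wanted, rendered with 50 extra tail frames as a handle
    AviSource("capture.avi").Trim(0, 7549)
    # segment 2 (separate script): frames 7500-14999 wanted, with 50 lead-in frames
    # AviSource("capture.avi").Trim(7450, 15049)
    # after filtering, trim the handles off each rendered segment before joining them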
  19. Originally Posted by mammo1789
    Jagabo, I tried the median script on the clips (and on the garbage area from the loose oxide, which shows up randomly). It has some improvement, although I didn't notice much improvement in the noisy areas and the random dropout areas (they are still there): at the same spot there is a signal drop on one clip but not on clip number 2, while clip number 2 has a dropout in another place where clip 1 does not, and so on (median of 3)?
    Yes. The theory is that occasional outliers can be eliminated with a median function. But if those horizontal lines occur in the same place in each cap (or if there are a very large number of them in each frame) a median function won't help.
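
    For reference, the core of that idea for three frame-aligned captures (clip names are placeholders; it is the same Clense trick the Median2 helper above uses):

    Code:
    c1 = AviSource("cap1.avi")
    c2 = AviSource("cap2.avi")
    c3 = AviSource("cap3.avi")
    Interleave(c1, c2, c3)
    Clense(reduceflicker=false)   # temporal median of prev/current/next = per-pixel median of the 3 caps
    SelectEvery(3, 1)             # keep only the middle (medianed) frame of each triplet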
  20. Yes. The theory is that occasional outliers can be eliminated with a median function. But if those horizontal lines occur in the same place in each cap (or if there are a very large number of them in each frame) a median function won't help.
    Jagabo, hmm, theory is tough sometimes. I even tried median 5/3/9 and it doesn't get rid of, say, the signal drop on frame number x (it's on the 3rd copy but not on copies 1 and 2; those copies have other issues though, which is why I can't use only the 3rd copy and tried the median function). The thing is that the "dropout" is a big, roughly quarter-screen "TV noise" patch, like a lost signal, and it shows as a blink (probably 2 frames, I guess) only on copy number 3 and not on the previous copies (and now that I tried median 5, it is there on copies 4 through 9 - probably the tape got damaged even more than before; as I said, it's nearly its bedtime).
    The other thing I noticed is that with the blend (from 3 clips) script you posted in the other thread we discussed, the noise was significantly reduced (like suppressed), probably because that was from an 8mm camera (the playback was adding noise that the script got rid of, leaving only the material noise), whereas this material is from VHS that plays back better and the hardware is not adding more noise?

    Be careful where you divide the cuts to process in parallel - where you start will give you different results with temporal filters. Notice the ends are always dirty (because there is no beginning or end frame to reference). You will also run into the non-linear seek problem: starting somewhere in the middle might give you slightly different results than linear processing from beginning to end.
    Yes, I noticed that on the last frame. The thing is that one tape has 30 minutes of material (10 minutes from one television station, the rest from another, and 8 minutes of junk in the middle), so I cut them into separate clips to run the script (not the same clip twice), so it can work on both clips at the same time instead of waiting for one and then the other because of the slowness.

    chroma == "process" ? Clense(reduceflicker=false) : Clense(reduceflicker=false,grey=true)
    If I put reduceflicker=true, what will happen? (I think I have flicker - or should I use Deflicker/MSU?) And what is grey=true?
  21. poisondeathray, thanks to you I think I found the holy grail of dropout-fix scripts. I found an old tape that I recorded 6 years ago (before I even knew about this site and had very limited to no knowledge of video transfer) with a cheap LG VHS deck over a composite cable, and had kept it on my PC as HuffYUV for better times (it had a lot of dropouts). I tried it tonight with your script (it's still rendering the whole tape, around 2 hours, so I will wait 30+ hours; I first tried it on a small 1-minute segment and it is incredible how it masks the errors).
    So I have a few questions:
    It seems that some temporal noise reduction is happening here, right? What are the safe parameters and values that I can play with and change to increase or decrease it?
    You said that someone used MT to make it multithreaded - is there something that can be put in the script to make that happen?
    Is there something that can make the script use more resources and accordingly run faster (like more RAM and CPU time), like Neat Video or x264, for example, which utilize all the cores at 90-100%?
    Now the script uses barely 20% of my processor time. Not that I'm complaining.
  22. You have to play with it; I cannot answer those questions because they are too vague and every source is different. It's just meant as a suggestion or starting point - you're supposed to tweak the settings to get better results

    Personally, I'd only use this as a starting point - there is a lot more you can do by combining multiple filtered versions/layers and compositing in other programs. For me, it's only meant to reduce the amount of work in other programs. For example, there is still a residual rolling band in that sample; it would be fairly easy to fix in other programs by replacing the background. Same with the color bleed issues - I would motion track it and replace those parts. It depends how much time/effort you want to put in, or how important it is to you. Those types of repairs are more difficult with AviSynth.


    You can experiment with MT for some of the filters. I already explained why I personally don't do it with most types of temporal filtering, but feel free to try; some people have been able to do it successfully with some of the temporal filters.
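
    If you do want to try it, the usual idiom with the MT-enabled AviSynth builds looks roughly like this (this assumes you are on such a build; the thread count and modes are only examples, and temporal filters may still misbehave as noted above):

    Code:
    SetMTMode(5, 4)            # MT build only: mode 5 for the source filter, 4 threads
    AviSource("capture.avi")   # placeholder source
    SetMTMode(2)               # switch to mode 2 for the rest of the chain
    ConvertToYV12(interlaced=true)
    # ...filter chain...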

    If you do it by scene, you can certainly process it in parallel; that's how I'd do it, with multiple instances per node on multiple computers.

    Make sure you change the interpolation part of the script, because the interpolated frame numbers need to be manually adjusted for each clip (another reason to break the script out into segments). Some clips might not need interpolation. Some might not need residual cleaning. Some might need more, etc. Some of the settings might not be appropriate for other sources, so make sure you experiment.
  23. Originally Posted by mammo1789
    I used Neat Video on top of the script (in VDub) and ColorMill in VDub; I lowered the levels a bit and the color (I think it has a more natural look).
    Only the chroma bleed now, and that blue cast on the tuxedo. I think I'm getting somewhere here - much better than before.
    For what it's worth, here is my tweak
    http://imageupload.org/en/file/177171/mytweak.jpg.html
  24. For what it's worth, here is my tweak
    http://imageupload.org/en/file/177171/mytweak.jpg.html
    It seems the blue cast on the tuxedo is barely visible now (did you do it in ColorMill or something else, and what settings did you use?). It's better; I just think the levels are a bit too high (the background behind the guy). I will try it on the cleaned video.
  25. Personally, I'd only use this as a starting point - there is a lot more you can do by combining multiple filtered versions/layers and compositing in other programs. For me, it's only meant to reduce the amount of work in other programs. For example, there is still a residual rolling band in that sample; it would be fairly easy to fix in other programs by replacing the background. Same with the color bleed issues - I would motion track it and replace those parts. It depends how much time/effort you want to put in, or how important it is to you. Those types of repairs are more difficult with AviSynth.
    Could you suggest which programs - After Effects maybe, Photoshop? Is there something on the web to learn about motion tracking and replacing? (Found it, I will study it now.)

    But which lines are the breaking points where I can split the script into separate scripts? (Let's say the first part is stabilization with Depan, written to one Lagarith file; then the second part is cleaning with RemoveGrain; then a third part, and so on.) Is this where the second part starts?
    ###
    #####interpolate bad frames and residual cleaning
    ###

    Super = msuper()
    bv1 = manalyse(Super, isb=true, delta=2)
    fv1 = manalyse(Super, isb=false, delta=2)
    bv2 = manalyse(Super, isb=true, delta=3)
    fv2 = manalyse(Super, isb=false, delta=3)
    global CandidatesForN = mflowinter(Super, bv2, fv2, time=33.3, ml=100)
    global CandidatesForO = mflowinter(Super, bv2, fv2, time=66.7, ml=100)
    global CandidatesForC = mflowinter(Super, bv1, fv1, time=50.0, ml=100)

    last
    rx(104,12) #104-116 replaced

    AssumeTFF()

    Thanks
  26. I made a zip with all the filters needed + the .vcf
    http://www.sendspace.com/file/q31zg6
  27. Originally Posted by mammo1789

    But which lines are the breaking points where I can split the script into separate scripts? (Let's say the first part is stabilization with Depan, written to one Lagarith file; then the second part is cleaning with RemoveGrain; then a third part, and so on.) Is this where the second part starts?
    I just did it in 2 parts, as described earlier: everything before the "#####interpolate bad frames and residual cleaning" line in the 1st part, and everything after it in the 2nd part. The reason you can't reuse the 2nd part exactly is that the frame numbers for the interpolation will be different. You need to manually identify the replacement frames (if any), but otherwise it's pretty much "automatically" fixed. Interpolation doesn't always work well; sometimes you get weird morphing edges and artifacts.

    You also need to decide whether or not you need to stack other filters (maybe some defects still remain) - in this example RemoveDirtMC was stacked because there were residual defects missed in the 1st part, and even after that some defects remain (those I would do in another program). If you can get away with fewer filters, not only will it be faster, but the quality will be better and more detail will be preserved.
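
    Sketched with hypothetical filenames, the split looks roughly like this:

    Code:
    # part1.avs: everything up to "#####interpolate bad frames and residual cleaning",
    # rendered losslessly (e.g. Lagarith YV12) to part1_out.avi

    # part2.avs: open the intermediate and continue (the R/RP/RX helpers and the mvtools
    # globals from the original script still need to be defined or imported here)
    AviSource("part1_out.avi")
    Super = msuper()
    bv1 = manalyse(Super, isb=true, delta=2)
    fv1 = manalyse(Super, isb=false, delta=2)
    global CandidatesForC = mflowinter(Super, bv1, fv1, time=50.0, ml=100)
    last
    rx(104,12)   # example only - the bad frame numbers must be re-identified for this clip
    # ...then the AssumeTFF/SeparateFields/RemoveDirtMC/LSFMod/Weave tail as in the original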


    Could you suggest which programs - After Effects maybe, Photoshop? Is there something on the web to learn about motion tracking and replacing? (Found it, I will study it now.)
    Probably AE +/- mocha for tracking/rotoscoping, although most tracking and roto can be done in AE alone
  28. [Attached image: dehalo.png]
    Thanks, guys, once more.
    I wanted to bother you just slightly: I'm trying to dehalo a little bit (the script seems to have reduced it a bit, or just masked it, I don't know, compared with the original footage).
    As you can see in the picture, I tried a dehalo script in AviSynth and an fft3-based implementation (from their site, about a dehaloing script) and played a little with the parameters, but no luck - it is like nothing is happening. Am I expecting too much, or is there something else I should try?
  29. You can use dehalo filters, but they are among the most damaging filters for fine detail, IMO.

    This would be a case for using multiple filtered versions and layers, and applying the dehaloed version through rotoscoping, so you preserve the details and remove only the halos.

    You would apply the filter on separated, grouped fields (even/odd), because most dehalo filters do not have an interlaced mode. You cannot apply progressive filters to interlaced content unless they are interlace-aware with an interlace switch.

    Also, you would probably adjust the script to dehalo before sharpening, because sharpening will exacerbate the halos (notice LSFMod and RDMC were applied in this fashion, on grouped even/odd fields, in the original example script). If you still wanted to sharpen, you would probably do it after dehaloing.


    Here I applied dehalo_alpha(darkstr=0.8, brightstr=1.1, rx=2, ry=2) on the separated grouped fields

    Save these to your desktop (view at 100%) and flip back and forth; notice how the halos are improved, but the fine details - hair, eyes - are all eroded.
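
    For example, the same pattern used above for LSFMod/RemoveDirtMC, applied to the dehalo step (settings copied from the example above; the source name is a placeholder):

    Code:
    AviSource("stage_to_dehalo.avi")   # placeholder: whatever stage of the chain you are dehaloing
    AssumeTFF()
    SeparateFields()
    f1 = SelectEven().dehalo_alpha(darkstr=0.8, brightstr=1.1, rx=2, ry=2)
    f2 = SelectOdd().dehalo_alpha(darkstr=0.8, brightstr=1.1, rx=2, ry=2)
    Interleave(f1, f2)
    Weave()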
    [Attached images: before lsfmod.png, after dehaloing.png]
  30. Interpolation doesn't always work well. Sometimes you get weird morphing edges and artifacts
    You mean like this one? http://www.mediafire.com/?ra82f0br392g9f3
    Do I get an interlaced file at the end, or an interpolated (deinterlaced) one?


    I encoded the three files to mkv (x264 L3, 5000 kbps CBR).


    I'm thinking that this script is doing some kind of jitter reduction http://www.mediafire.com/?o5m12hvaszz2wwl (with a little blurry, watery effect). The Neat Video version seems to have more detail http://www.mediafire.com/?o5m12hvaszz2wwl (or it suppressed more noise, hence the lower file size at the same settings), but it jitters a lot (it's a 1st-generation copy directly from a TV program). The Panasonic FS200 VCR TBC seems to have no impact on this error, on or off (full TBC?). I didn't try the DVD recorder (ES15) passthrough because it's hooked up to my living room TV temporarily right now.



    I see that a median is part of this script - does that mean 3 layers of the video are laid on top of one another and the median of them is taken as the final exported file, and what effect does that have?
    Last edited by mammo1789; 13th Feb 2012 at 09:27.


