Hello,
Will someone please post some examples of video restoration processing? Will someone also explain what the lines of the script do? Thank you.
-
Script for what? Which program are you asking about?
What format? What type of restoration?
If it's AVISynth, start here: http://avisynth.org/mediawiki/Main_Page
You would have to provide a lot more information before anyone could give you any advice.
Describe what you want to accomplish. -
Original post:
I don't use or need AVISynth. I use hardware, VirtualDub, TMPGEnc Plus, Premiere and some other specialty tools (anti-shake video, audio, etc.). Scripting would piss me off and make an already lengthy process take a hell of a lot longer than it needs to.
Update: I've started to add more Avisynth as needed, but only when absolutely required.
Last edited by lordsmurf; 9th Jul 2011 at 06:32.
Want my help? Ask here! (not via PM!)
FAQs: Best Blank Discs • Best TBCs • Best VCRs for capture • Restore VHS -
I use VD most of the time also. I really like the 'gradation curves' filter for adjusting contrast and brightness levels. It's saved a couple of flat-looking videos. I used the 'levels' filter more in the past, but 'gradation curves' does what I want. And I use ColorMill a lot; it has most of the adjustments you need in one filter.
I've used De-Shaker a few times with good results. And I still use Audacity and ffmpegGUI for audio most of the time. I'm just used to them.
On a really noisy video, I've had good luck with the Neat Video filter, though it's a bit expensive. I could probably use a combination of other filters to accomplish the same thing, but I find it quick and easy. I don't mind a slightly 'plastic' looking video if the original was mostly noise anyway. But I keep the original in case I want to spend a bit more time with it.
I use AVIsynth mostly for feeding RM(VB) and WMV to VirtualDub. I just haven't taken the time to understand it for other uses.
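For anyone curious, a minimal sketch of the kind of bridge script I mean (the filename and fps here are just placeholders, and it assumes the right DirectShow splitters are installed for the format):
Code:
# open a WMV or RMVB through DirectShow so VirtualDub can load it via this .avs
DirectShowSource("C:\clips\example.wmv", fps=25, convertfps=true)
ConvertToYV12() # hand VirtualDub a predictable colorspace
-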
I have to use ColorMill for the first time this weekend.
I'll try gradation curves again, since you like it. I'm still using levels; stuck in a rut, I guess. But hey, it seems to work when I need it. Generally I fix those problems with a proc amp anyway. I only need these filters for already-digital sources somebody else screwed up!
Want my help? Ask here! (not via PM!)
FAQs: Best Blank Discs • Best TBCs • Best VCRs for capture • Restore VHS -
I use the FixEverythingThat'sWrongWithThisVideo() filter. Works perfectly every time.
-
*** DIGITIZING VHS / ANALOG VIDEOS SINCE 2001**** GEAR: JVC HR-S7700MS, TOSHIBA V733EF AND MORE
-
Some nice work by VideoFred over at Doom9
The script is quite complex, but you can learn a lot from it. I don't understand all of it (yet). If you are polite and respectful, I'm sure you can ask specific questions in that thread.
You can find it in this post
http://forum.doom9.org/showthread.php?t=144271
http://www.vimeo.com/2823934
Code:
# film restoring script by videoFred.
# denoising, resizing, stabilising, sharpening, auto-levels and auto-white balance.

film="F:\002_dodcaps_tebewaren\privé 64\privé_64T_0014.avi" # source clip, you must specify the full path here
short="L" # L=long clip S=short clip try it!
result="resultS3" # specify the wanted output here

trim_begin=6 trim_end=6 play_speed=18.75 #trim frames and play speed (PAL: 16.6666 or 18.75)
saturation=1.0 #saturation
X=4 gamma=0.7 # X is a special parameter for reducing the autolevels effect
black_level=0 white_level=255 output_black=0 output_white=255 # manual levels, when returning result4
AGC_max_sat=2 AGC_max_gain=1.0 #parameters of HDRAGC filter, improves colors and shadows
blue=-4 red=2 #manual color adjustment, when returning result2. Values can be positive or negative

denoising_strenght=800 #denoising level of first denoiser: MVDegrainMulti()
denoising_frames= 4 #number of frames for averaging (forwards and backwards) 3 is a good start value
block_size= 16 #block size of MVDegrainMulti()
block_over= 4 #block overlapping of MVDegrainMulti()
temp_radius=20 temp_luma=6 temp_chroma=6 #second denoiser: TemporalSoften
grain_luma=10 grain_chroma=10 # this will add some digital grain to the final result, set it to zero if you do not want it.

LSF_sharp_ness=250 LSF_radi_us=3 LSF_sub=1.5 #first sharpening parameters (LimitedSharpenFaster) sub=subsampling
USM_sharp_ness=40 USM_radi_us=2 USM_thres_hold=0 #second sharpening parameters (UnsharpMask)
USM_sharp_ness2=20 USM_radi_us2=1 USM_thres_hold2=0 #third sharpening parameters (UnsharpMask)

maxstab=60 #maximum values for the stabiliser (in pixels) 20 is a good start value
est_left=20 est_top=20 est_right=20 est_bottom=20 est_cont=0.8 #crop values for special Estimate clip
CLeft=30 CTop=30 CRight=30 CBottom=30 #crop values after Depan and before final resizing (40,30,40,30)

W=720 H=576 #final size from the returned clip
bord_left=0 bord_top=0 bord_right=0 bord_bot=0 #you can add black borders after resizing, final size is then size + borders!!

# End variables, begin script
#====================================================================================================

SetMemoryMax(1024) #set this to 1/3 of the available memory

Loadplugin("Depan.dll")
LoadPlugin("DepanEstimate.dll")
Loadplugin("removegrain.dll")
LoadPlugin("AGC.dll")
LoadPlugin("MVTools.dll")
Loadplugin("mt_masktools.dll")
LoadPlugin("MaskTools.dll")
LoadPlugin("warpsharp.dll")
LoadPlugIn("LimitedSupport_09Jan06B.dll")
LoadPlugin("MT.dll")
LoadPlugin("autolevels.dll")
LoadPlugin("AddGrainC.dll")
Import("LimitedSharpenFaster.avs")

SetMTMode(5)

source1= Avisource(film).assumefps(play_speed).trim(trim_begin,0).converttoYV12()

end= source1.framecount()
end2= end-trim_end
frames=end+trim_begin
skip= end2/5
skip0=skip+3
skipend= 3
skipend1= skip0+3
skip2= skipend1+skip
skipend2= skip2+3
skip3= skipend2+skip
skipend3= skip3+3
skip4= skipend3+skip
skipend4= skip4+3
skip5= end2-3

L= trim(source1,0,end2)

LS= trim(source1,0,end2).scriptclip("""subtitle("frame "+string(trim_begin+current_frame)\
+" from "+string(frames)+" trim_begin-"+string(trim_begin)+" trim_end-"\
+string(trim_end),x=100,y=60,size=32)""")

sourceT1= trim(source1,0,skipend).scriptclip("""subtitle("frame "+string(trim_begin+current_frame)\
+" from "+string(frames)+" trim_begin-"+string(trim_begin)+" trim_end-"+string(trim_end)\
+" steps: "+string(skip),x=100,y=60,size=32)""")

sourceT2= trim(source1,skip0,skipend1).scriptclip("""subtitle("frame "+string(trim_begin+skip0+current_frame)\
+" from "+string(frames)+" trim_begin-"+string(trim_begin)+" trim_end-"+string(trim_end)+" steps: "\
+string(skip),x=100,y=60,size=32)""")

sourceT3= trim(source1,skip2,skipend2).scriptclip("""subtitle("frame "+string(trim_begin+skip2+current_frame)\
+" from "+string(frames)+" trim_begin-"+string(trim_begin)+" trim_end-"+string(trim_end)\
+" steps: "+string(skip),x=100,y=60,size=32)""")

sourceT4= trim(source1,skip3,skipend3).scriptclip("""subtitle("frame "+string(trim_begin+skip3+current_frame)\
+" from "+string(frames)+" trim_begin-"+string(trim_begin)+" trim_end-"+string(trim_end)\
+" steps: "+string(skip),x=100,y=60,size=32)""")

sourceT5= trim(source1,skip4,skipend4).scriptclip("""subtitle("frame "+string(trim_begin+skip4+current_frame)\
+" from "+string(frames)+" trim_begin-"+string(trim_begin)+" trim_end-"+string(trim_end)\
+" steps: "+string(skip),x=100,y=60,size=32)""")

sourceT6= trim(source1,skip5,end2).scriptclip("""subtitle("frame "+string(trim_begin+skip5+current_frame)\
+" from "+string(frames)+" trim_begin-"+string(trim_begin)+" trim_end-"+string(trim_end)\
+" steps: "+string(skip),x=100,y=60,size=32)""")

SS= sourceT1+sourceT2+sourceT3+sourceT4+sourceT5+sourceT6

sourceT10= trim(source1,0,skipend)
sourceT20= trim(source1,skip0,skipend1)
sourceT30= trim(source1,skip2,skipend2)
sourceT40= trim(source1,skip3,skipend3)
sourceT50= trim(source1,skip4,skipend4)
sourceT60= trim(source1,skip5,end2)

S= sourceT10+sourceT20+sourceT30+sourceT40+sourceT50+sourceT60

stab_reference= eval(short).crop(est_left,est_top,-est_right,-est_bottom)\
.tweak(cont=est_cont).binarize(threshold=80).greyscale().invert()

mdata=DePanEstimate(stab_reference,trust=1.0,dxmax=maxstab,dymax=maxstab)
stab=DePanStabilize(eval(short),data=mdata,cutoff=0.5,dxmax=maxstab,dymax=maxstab,method=1,mirror=15)
stab2= stab.crop(CLeft,CTop,-CRight,-CBottom).tweak(sat=saturation)
stab3=DePanStabilize(eval(short),data=mdata,cutoff=0.5,dxmax=maxstab,dymax=maxstab,method=1,info=true)

WS= width(stab)
HS= height(stab)

stab4= stab3.addborders(10,10,10,10,$B1B1B1).Lanczos4Resize(WS,HS)

vectors= stab2.MVAnalyseMulti(refframes=denoising_frames, pel=2, blksize=block_size, overlap=block_over, idx=1)
denoised= stab2.MVDegrainMulti(vectors, thSAD=denoising_strenght, SadMode=1, idx=1).tweak(sat=saturation)

leveled= denoised.HDRAGC(coef_gain=2.0,max_gain=AGC_max_gain,min_gain=0.5,max_sat=AGC_max_sat,shadows=true)

sharp1=limitedSharpenFaster(leveled,smode=1,strength=LSF_sharp_ness,overshoot=50,\
radius=LSF_radi_us, ss_X=LSF_sub, SS_Y=LSF_sub, dest_x=W,dest_y=H)
sharp2= unsharpmask(sharp1,USM_sharp_ness,USM_radi_us,USM_thres_hold)
sharpX= unsharpmask(sharp2,USM_sharp_ness2,USM_radi_us2,USM_thres_hold2)
sharp3= sharpX.TemporalSoften(temp_radius,temp_luma,temp_chroma,15,2).addgrainC(grain_luma,grain_chroma,0.2,0.2,5)

#backward_vectors = sharp3.MVAnalyse(isb = true,truemotion=true,idx=2)
#forward_vectors = sharp3.MVAnalyse(isb = false,truemotion=true,idx=2)
#frameclip=MVFlowFps(sharp3,backward_vectors, forward_vectors, num=25, den=1, ml=100, idx=2)

result1= sharp3.addborders(X,0,0,0,$FFFFFF).levels(0,gamma,255,0,255).autolevels().coloryuv(autowhite=true)\
.crop(X,0,-0,-0).addborders(bord_left, bord_top, bord_right, bord_bot)
result2= sharp3.levels(black_level,gamma,white_level,0,255).coloryuv(autowhite=true)\
.addborders(bord_left, bord_top, bord_right, bord_bot)
result3= sharp3.coloryuv(off_U=blue,off_V=red).levels(0,gamma,255,0,255).addborders(X,0,0,0,$FFFFFF)\
.autolevels().crop(X,0,-0,-0).addborders(bord_left, bord_top, bord_right, bord_bot)
result4= sharp3.coloryuv(off_U=blue,off_V=red).levels(black_level,gamma,white_level,0,255)\
.addborders(bord_left, bord_top, bord_right, bord_bot)
result5= overlay(eval(short),stab_reference,x=est_left,y=est_top).addborders(2,2,2,2,$FFFFFF).Lanczos4Resize(WS,HS)

W2= W+bord_left+bord_right
H2= H+bord_top+bord_bot

short2=short+"S"
source2=Lanczos4Resize(eval(short2),W2,H2)
source3=Lanczos4Resize(eval(short2),W,H)

resultS1= stackhorizontal(subtitle(source2,"original",size=32,align=2)\
,subtitle(result1,"autolevels, autowhite",size=28,align=2))
resultS2= stackhorizontal(subtitle(source2,"original",size=32,align=2)\
,subtitle(result2,"autowhite, manual levels correction",size=28,align=2))
resultS3= stackhorizontal(subtitle(source2,"original",size=32,align=2)\
,subtitle(result3,"autolevels + manual color correction",size=28,align=2))
resultS4= stackhorizontal(subtitle(source2,"original",size=32,align=2)\
,subtitle(result4,"manual colors and levels correction",size=28,align=2))

resultS2H= stackhorizontal(subtitle(source2,"original",size=32,align=2)\
,subtitle(result2,"autowhite, manual levels correction",size=28,align=2).histogram(mode="levels"))
resultS3H= stackhorizontal(subtitle(source2,"original",size=32,align=2)\
,subtitle(result3,"autolevels + manual color correction",size=28,align=2).histogram(mode="levels"))
resultS4H= stackhorizontal(subtitle(source2,"original",size=32,align=2)\
,subtitle(result4,"manual colors and levels correction",size=28,align=2).histogram(mode="levels"))

result6= stackhorizontal(subtitle(result5,"baseclip for stabiliser -only the B/W clip is used",size=32,align=2)\
,subtitle(stab4,"test stabiliser: dx=horizontal, dy=vertical",size=32,align=5))

Eval(result)
-
Just a quick word about 'gradation curves'. I ran into a few videos that were flat and low-contrast, and the various VD contrast/brightness filters had no real effect, so I tried gradation curves. I grab the graph line at about the 90 percent point and drag it sideways, and do the same with the bottom of the graph. Now I have contrast, and I can easily adjust the brightness and darkness levels at the same time. Really nice results with some video.
One version: http://members.chello.at/nagiller/vdub/index.html
And if you haven't tried Color Mill, it's definitely recommended: http://fdump.narod.ru/rgb.htm
Other VD filters: http://www.thedeemon.com/VirtualDubFilters/ -
Deemon craps itself on my Phenom X4, refuses to run correctly.
Want my help? Ask here! (not via PM!)
FAQs: Best Blank Discs • Best TBCs • Best VCRs for capture • Restore VHS -
deemon rox
(& I've got a P4 3GHz with HT)
*** DIGITIZING VHS / ANALOG VIDEOS SINCE 2001**** GEAR: JVC HR-S7700MS, TOSHIBA V733EF AND MORE -
Deemon is the filter website, not the Daemon Tools program, if that's what you mean.
For VD filters, there is also: http://neuron2.net/ or MSU: http://compression.ru/video/public_filters.htm
I haven't used Daemon Tools for a while, but I did have problems with it making up a bunch of 'virtual drives' that conflicted with some other programs.
Or do you mean 'Dee Mon Video Enhancer 1.9.2'? http://www.ipmart-forum.com/showthread.php?t=320766 I don't recall if I've tried that. -
The last one, Deemon Video Enhancer, the site you linked to. It's supposed to use most VirtualDub filters, but unlike VDub it does so with multi-core support. For me it never works: it either runs slower than single-core VDub, or it just plain ol' craps itself with an error and the app closes.
This was on a Phenom X4; I haven't tried it on my Core 2 Duo yet.
Want my help? Ask here! (not via PM!)
FAQs: Best Blank Discs • Best TBCs • Best VCRs for capture • Restore VHS -
I frameserve from VDub to the devilish Deemon & it does an awesome job
(I frameserve because Neat Video and a couple of other filters don't work with it)
Try that one day with your dual cores...
If it crashes, I assume you don't have a valid copy.
*** DIGITIZING VHS / ANALOG VIDEOS SINCE 2001**** GEAR: JVC HR-S7700MS, TOSHIBA V733EF AND MORE -
Avisynth function TemporalDegrain(degrain=3)
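For anyone who wants to try it, here's a rough sketch of how that call could sit in a script (the source path is just an example, and TemporalDegrain.avsi plus its plugin dependencies, such as MVTools and FFT3DFilter, need to be in your plugins folder):
Code:
AviSource("D:\capture\tape01.avi") # example source, use your own capture
ConvertToYV12() # TemporalDegrain expects YV12
TemporalDegrain(degrain=3) # heavier degraining, as suggested above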
Xvid AVI
compare2.avi
"Quality is cool, but don't forget... Content is King!" -
Hey everyone,
I just want to bring this thread back to life. Will the thread made by poisondeathray work? What do I have to download? What do I need to write in my AVS script? This would be for AviSynth restoration. -
Why? It's pointless. What you're asking here is something like "I want to learn how to fix cars. Can everyone upload instructions on how to fix cars?" You need to start at the beginning. Take one video you need to fix. Figure out what's wrong with it. Find filters that will fix those problems. Ask for help with particular problems. Nobody has time to write a book on video restoration for you.
Start here: http://avisynth.org/mediawiki/Main_Page
Thread? You mean script? Yes.
AviSynth, a bunch of filters (note all the LoadPlugin() commands), and the script. Then you can spend a year of your life analyzing the script and what it does.
You need a text editor to create and write your script. You need to write commands to do what needs to be done to fix your video.
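Purely as an illustration of that pattern (the paths and the filter choice are placeholders, not a recipe), the skeleton is always the same: load any plugin DLLs, open the source, apply filters.
Code:
LoadPlugin("C:\Program Files\AviSynth 2.5\plugins\RemoveGrain.dll") # load a third-party filter DLL
AviSource("D:\capture\mytape.avi") # open your source clip
ConvertToYV12()
RemoveGrain(2) # one simple spatial cleaner, just as an example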
No, it's for video restoration.
Last edited by jagabo; 16th Mar 2010 at 17:39.
-
I remember seeing that goofy script years ago...have fun!
-
I can't find AGC, or autolevels, or the LimitedSharpenFaster script for the Import() line. No, I can't post at Doom9. I requested more than 2 or 3 times to be removed or banned. I'm not sure if he banned me, but I know I've been removed from the database.
-
AGC:
http://strony.aster.pl/paviko/hdragc.htm
AutoLevels:
http://www.avisynth.info/?plugin=attach&refer=%A5%A2%A1%BC%A5%AB%A5%A4%A5%D6&openfile=...olevels0.3.zip
LimitedSharpenFaster:
http://avisynth.org/mediawiki/LimitedSharpen
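Note that LimitedSharpenFaster is an .avs script rather than a DLL, so it gets pulled in with Import() instead of LoadPlugin(). For example (the path is only a guess, point it at wherever you saved the file):
Code:
Import("C:\Program Files\AviSynth 2.5\plugins\LimitedSharpenFaster.avs") # scripts are Import()ed, DLLs are LoadPlugin()ed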
-
How does this script look? I get an error in VirtualDub.
Code:
DirectShowSource("E:\The Godfather 1 (1972)\video.avi")
Import("C:\Program Files\AviSynth 2.5\plugins\LimitedSupportFaster.avs)
SetMTMode(5)
assumefps(play_speed).trim(trim_begin,0).converttoYV12()

end= source1.framecount()
end2= end-trim_end
frames=end+trim_begin
skip= end2/5
skip0=skip+3
skipend= 3
skipend1= skip0+3
skip2= skipend1+skip
skipend2= skip2+3
skip3= skipend2+skip
skipend3= skip3+3
skip4= skipend3+skip
skipend4= skip4+3
skip5= end2-3

L= trim(source1,0,end2)

LS= trim(source1,0,end2).scriptclip("""subtitle("frame "+string(trim_begin+current_frame)\
+" from "+string(frames)+" trim_begin-"+string(trim_begin)+" trim_end-"\
+string(trim_end),x=100,y=60,size=32)""")

sourceT1= trim(source1,0,skipend).scriptclip("""subtitle("frame "+string(trim_begin+current_frame)\
+" from "+string(frames)+" trim_begin-"+string(trim_begin)+" trim_end-"+string(trim_end)\
+" steps: "+string(skip),x=100,y=60,size=32)""")

sourceT2= trim(source1,skip0,skipend1).scriptclip("""subtitle("frame "+string(trim_begin+skip0+current_frame)\
+" from "+string(frames)+" trim_begin-"+string(trim_begin)+" trim_end-"+string(trim_end)+" steps: "\
+string(skip),x=100,y=60,size=32)""")

sourceT3= trim(source1,skip2,skipend2).scriptclip("""subtitle("frame "+string(trim_begin+skip2+current_frame)\
+" from "+string(frames)+" trim_begin-"+string(trim_begin)+" trim_end-"+string(trim_end)\
+" steps: "+string(skip),x=100,y=60,size=32)""")

sourceT4= trim(source1,skip3,skipend3).scriptclip("""subtitle("frame "+string(trim_begin+skip3+current_frame)\
+" from "+string(frames)+" trim_begin-"+string(trim_begin)+" trim_end-"+string(trim_end)\
+" steps: "+string(skip),x=100,y=60,size=32)""")

sourceT5= trim(source1,skip4,skipend4).scriptclip("""subtitle("frame "+string(trim_begin+skip4+current_frame)\
+" from "+string(frames)+" trim_begin-"+string(trim_begin)+" trim_end-"+string(trim_end)\
+" steps: "+string(skip),x=100,y=60,size=32)""")

sourceT6= trim(source1,skip5,end2).scriptclip("""subtitle("frame "+string(trim_begin+skip5+current_frame)\
+" from "+string(frames)+" trim_begin-"+string(trim_begin)+" trim_end-"+string(trim_end)\
+" steps: "+string(skip),x=100,y=60,size=32)""")

SS= sourceT1+sourceT2+sourceT3+sourceT4+sourceT5+sourceT6

sourceT10= trim(source1,0,skipend)
sourceT20= trim(source1,skip0,skipend1)
sourceT30= trim(source1,skip2,skipend2)
sourceT40= trim(source1,skip3,skipend3)
sourceT50= trim(source1,skip4,skipend4)
sourceT60= trim(source1,skip5,end2)

S= sourceT10+sourceT20+sourceT30+sourceT40+sourceT50+sourceT60

stab_reference= eval(short).crop(est_left,est_top,-est_right,-est_bottom)\
.tweak(cont=est_cont).binarize(threshold=80).greyscale().invert()

mdata=DePanEstimate(stab_reference,trust=1.0,dxmax=maxstab,dymax=maxstab)
stab=DePanStabilize(eval(short),data=mdata,cutoff=0.5,dxmax=maxstab,dymax=maxstab,method=1,mirror=15)
stab2= stab.crop(CLeft,CTop,-CRight,-CBottom).tweak(sat=saturation)
stab3=DePanStabilize(eval(short),data=mdata,cutoff=0.5,dxmax=maxstab,dymax=maxstab,method=1,info=true)

WS= 640(stab)
HS= 352(stab)

stab4= stab3.addborders(10,10,10,10,$B1B1B1).Lanczos4Resize(WS,HS)

vectors= stab2.MVAnalyseMulti(refframes=denoising_frames, pel=2, blksize=block_size, overlap=block_over, idx=1)
denoised= stab2.MVDegrainMulti(vectors, thSAD=denoising_strenght, SadMode=1, idx=1).tweak(sat=saturation)
leveled= denoised.HDRAGC(coef_gain=2.0,max_gain=AGC_max_gain,min_gain=0.5,max_sat=AGC_max_sat,shadows=true)

sharp1=limitedSharpenFaster(leveled,smode=1,strength=LSF_sharp_ness,overshoot=50,\
radius=LSF_radi_us, ss_X=LSF_sub, SS_Y=LSF_sub, dest_x=W,dest_y=H)
sharp2= unsharpmask(sharp1,USM_sharp_ness,USM_radi_us,USM_thres_hold)
sharpX= unsharpmask(sharp2,USM_sharp_ness2,USM_radi_us2,USM_thres_hold2)
sharp3= sharpX.TemporalSoften(temp_radius,temp_luma,temp_chroma,15,2).addgrainC(grain_luma,grain_chroma,0.2,0.2,5)

#backward_vectors = sharp3.MVAnalyse(isb = true,truemotion=true,idx=2)
#forward_vectors = sharp3.MVAnalyse(isb = false,truemotion=true,idx=2)
#frameclip=MVFlowFps(sharp3,backward_vectors, forward_vectors, num=25, den=1, ml=100, idx=2)

result1= sharp3.addborders(X,0,0,0,$FFFFFF).levels(0,gamma,255,0,255).autolevels().coloryuv(autowhite=true)\
.crop(X,0,-0,-0).addborders(bord_left, bord_top, bord_right, bord_bot)
result2= sharp3.levels(black_level,gamma,white_level,0,255).coloryuv(autowhite=true)\
.addborders(bord_left, bord_top, bord_right, bord_bot)
result3= sharp3.coloryuv(off_U=blue,off_V=red).levels(0,gamma,255,0,255).addborders(X,0,0,0,$FFFFFF)\
.autolevels().crop(X,0,-0,-0).addborders(bord_left, bord_top, bord_right, bord_bot)
result4= sharp3.coloryuv(off_U=blue,off_V=red).levels(black_level,gamma,white_level,0,255)\
.addborders(bord_left, bord_top, bord_right, bord_bot)
result5= overlay(eval(short),stab_reference,x=est_left,y=est_top).addborders(2,2,2,2,$FFFFFF)

LanczosResize(640,352)
-
What's the message? You can't just copy and use his script. You have to first understand it and then adapt it to your own video.
What's LimitedSupportFaster? Did you rename LSF, or is that a typo? And you're trying to use this on a downloaded AVI? Why not save yourself the trouble and get the DVD? What you're trying to do is a complete waste of time when there's a better source available. In my opinion. -
rocky12, you obviously have no idea what you're doing. Start with something simple so you can learn something.
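Something simple meaning a script you can read top to bottom and understand every line. Purely as a hypothetical starting point, using only built-in filters and an example path:
Code:
AviSource("D:\capture\myclip.avi") # an AVI you captured yourself, not a downloaded movie
ConvertToYV12()
Crop(8, 4, -8, -4) # trim ragged edges / head-switching noise
TemporalSoften(2, 4, 8, 15, 2) # very mild temporal noise reduction, built into AviSynth
Levels(16, 1.0, 235, 16, 235) # gentle levels tweak, adjust by eye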
-
Ugh. See, this is why I stopped contributing to MAME. I hate coding. I see those examples and cringe.
-