Thanks for all the efforts! I am still busy capturing/sorting/etc. But, at some stage, I will get there!
Albie
-
-
I know how it is. I've transferred many of my old tapes from the 1990's, but I still have about 200 hours remaining!
-
I am back again!
All the capturing has been done with VD and duplicate scenes/videos were deleted. I do not know how many clips I have, but it is more than enough. I have sorted them into folders per year, from 1986 to 1995, and numbered them. This was all done in VD.
I want to try out the scripts that were posted on short clips and I like 2Bdecided's idea:
You can do a split-screen effect of original vs processed, or process1 vs process2 in AVIsynth by using crop to crop each clip in half, and then stack horizontal to put them side-by-side. Then at least you can see the difference easily - though of course just watching one after the other is more representative of normal viewing.
Regards
Albie -
Show center portion of two videos, vid1 and vid2:
Code:StackHorizontal(vid1.Crop(vid1.width/4, 0, vid1.width/2, vid1.height), vid2.Crop(vid2.width/4, 0, vid2.width/2, vid2.height))
Show the full frames side by side, stacked vertically, or interleaved in time:
Code:StackHorizontal(vid1, vid2)
Code:StackVertical(vid1, vid2)
Code:Interleave(vid1, vid2)
In general, stacking is better when looking for motion artifacts, interleaving is better when looking for still artifacts.
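A small variation on the same idea (an added illustration, not from the original post): label each clip with Subtitle() before interleaving so you always know which frame you're looking at:
Code:
# vid1/vid2 are the same placeholder clips as above
Interleave(vid1.Subtitle("original"), vid2.Subtitle("processed"))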
Using a screen magnifier can be very helpful. I use Windows 7's built-in magnifier. Start -> All Programs -> Accessories -> Ease Of Access -> Magnifier. -
Thanks Jagabo, will definitely try.
When the family complains, I just comment that this is their heritage from me! -
The progress has been much slower than I anticipated. I have now eventually divided the videos into numerous clips and the duplicate clips were deleted.
In total, I have 264 clips and this totals 967 GB.
I have now subdivided the clips into folders per camera, and I have moved the extra-dark clips to separate folders.
This was all done with VD.
My next step is to look at the levels.
(The whole script does not appear)
If I use this script, the files combine as one file, but I want them as separate files, as they are all mixed now.
Is there a way in which I can "edit" the files, but they still remain separate files and are not combined?
Thanks
Albie -
Note: You do not have to make an image of a page in Notepad. You can copy the text and paste it into your post.
The first line in your example script uses "+" to join clips together as one video.
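As a minimal illustration (placeholder filenames, not from your script), "+" splices clips end to end, so
Code:
AviSource("a.avi") + AviSource("b.avi") + AviSource("c.avi")
returns the three captures as one continuous video. ("++" does the same while keeping the audio in sync with the video.)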
The "Levels" line darkens all the darks in your video, lowering them fom RGB 16 to RGB 1. Instead of
Code:Levels(16,1.0,235,1,235, coring = false)
Code:Levels(16,1.0,235,16,235, coring = false)
Code:Levels(2,1.0,235,12,235, coring = false)
Or you can use ColorYUV and raise luma offset ("off_y") by a few pointsto make everything brighter, and then use levels to bring darks and brights into line:
Code:ColorYUV(off_y=10) Levels(16,1.0,255,16,235, coring = false)
-
To keep clips separate you need to name each:
Code:v1 = AviSource("filename1.avi").Levels(v1, 16, 1, 235, 0, 255, coring=false) v2 = AviSource("filename2.avi") v3 = AviSource("filename3.avi") v2 = ColorYUV(v2, cont_y=42, off_y=2) # similar to Levels(v1, 16, 1, 235, 0, 255) v3 = v3.Tweak(cont=1.16, bright=-18, coring=false) # similar to Levels(v1, 16, 1, 235, 0, 255) return(v1++v2++v3) # output them as one video with aligned audio
-
With regards to arranging all my clips, I have made 4 "big" folders, each folder containing all the files/clips that originated from one camera.
Each of these folders contains folders according to the general exposure (dark/light) of the clips. (I hope this is clear.)
The "very dark" folder will need individual assessment of each clip and therefore a script per clip.
But then I have a folder called "average", where the light/dark composition seems to be quite adequate. I am not sure if these clips actually need to be filtered.
So, during this process, I would like to keep each file separate and not join the files, as there is quite a mix of clips per folder and eventually they will need to go back to their original folders. In other words, the clips are currently not in chronological order (as I initially sorted them) but in "quality" order.
To keep clips separate you need to name each:
Code:
v1 = AviSource("filename1.avi").Levels(v1, 16, 1, 235, 0, 255, coring=false) v2 = AviSource("filename2.avi") v3 = AviSource("filename3.avi") v2 = ColorYUV(v2, cont_y=42, off_y=2) # similar to Levels(v1, 16, 1, 235, 0, 255) v3 = v3.Tweak(cont=1.16, bright=-18, coring=false) # similar to Levels(v1, 16, 1, 235, 0, 255) return(v1++v2++v3) # output them as one video with aligned audio
If you want to save them as separate videos you would use separate scripts. Or using the above script, you could return only one video with return(v1), encode the clip, then go back and edit the script to return(v2), encode, etc.
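A rough sketch of that edit-and-re-encode approach (filenames and settings are placeholders):
Code:
v1 = AviSource("filename1.avi").Levels(16, 1.0, 235, 16, 235, coring=false)
v2 = AviSource("filename2.avi").ColorYUV(cont_y=42, off_y=2)
return(v1)  # encode this, then change the line to return(v2) and encode again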
I used this script for one clip this afternoon:
Code:AviSource("C:\Users\User\Videos\1. Projek\a.avi").Levels(2,1.0,235,5,235, coring = false)
The "Levels" line darkens all the darks in your video, lowering them fom RGB 16 to RGB 1. Instead of
Code:
Levels(16,1.0,235,1,235, coring = false)
You should have: Code:
Levels(16,1.0,235,16,235, coring = false)
You can experiment in several ways with videos that are too dark. You can use "Levels" to raise all darks that are at RGB 2 and raise them to about RGB 8 or 10 and see how that works:
Code:
Levels(2,1.0,235,10,235, coring = false)
which takes everything from RGB 2 and raises it to RGB 10.
Or you can use ColorYUV and raise luma offset ("off_y") by a few points to make everything brighter, and then use levels to bring darks and brights into line:
Code:
ColorYUV(off_y=10)
Levels(16,1.0,255,16,235, coring = false)
Just some explanation please:
Code:Levels(16,1.0,235,16,235, coring = false)
However, where does the second 16 originate?
And, if the general exposure is too light, is it best to adjust the second value (1.0) or the last 16?
Thanks
Albie -
I don't understand what you mean. You want to use separate scripts for each video, then join them together? You can open AviSynth scripts from within another AviSynth script with AviSource().
Code:AviSource("script1.avs")++AviSource("script2.avs")
Code:Levels(16,1.0,235,16,235, coring = false)
However, where does the second 16 originate?
The arguments are the same as VirtualDub's Levels filter.
The second argument is the gamma (1.0 is neutral, linear). It's used to bring out details in the shadows or brights without crushing brights or darks. Ie, it's a non-linear control. Play with VirtualDub's Levels filter and you'll understand it. Whether you want to use gamma or pull down the black depends on the black level of the source.
(Images: equally spaced grey bars with the waveform monitor (Histogram in AviSynth), shown at gamma 1.0 (neutral), gamma 1.5 (enhances shadow detail), and gamma 0.66 (enhances bright detail).)
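Since the attached images aren't reproduced here, here is a quick way to see the same comparison yourself (my own sketch; "source.avi" is a placeholder): stack the three gamma settings with AviSynth's Histogram and step through the frames:
Code:
v = AviSource("source.avi").ConvertToYV12()
g_neutral = v.Levels(16, 1.0,  235, 16, 235, coring=false).Histogram()  # gamma 1.0, no change
g_shadows = v.Levels(16, 1.5,  235, 16, 235, coring=false).Histogram()  # brings out shadow detail
g_brights = v.Levels(16, 0.66, 235, 16, 235, coring=false).Histogram()  # brings out bright detail
return StackVertical(g_neutral, g_shadows, g_brights)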
-
He wants to apply the same script to 100 separate video files, and end up with 100 separate processed video files. Batch processing.
I can't remember the elegant way of doing this. When I did it, I cheated, and created (automatically, using a MATLAB script) 100 separate AVIsynth scripts, all identical apart from the filename they loaded and the filename they were saved to, and then just batch ran the 100 scripts using either VirtualDUB or avs2avi. Anyone know an easier way?
Cheers,
David. -
Using levels, the way of making no change is
levels(0,1.0,255,0,255,coring=false)
Whereas using
levels(16,1.0,235,16,235,coring=false)
will clip all the blacker-than-blacks and whiter-than-whites, which isn't always something you want to do. It will make no change within the normal/legal video range though.
With a gamma of 1.0, what levels does to luma is really simple - it maps the input range to the output range linearly, and clips everything outside of either range.
It also tries to do something appropriate to the chroma (based on the range you specified) but sometimes that's undesirable/surprising/confusing when working in YUV/YV12. I often put the chroma back to how it was when I don't want levels to touch it, e.g.
a=last
levels(input_low,gamma,input_high,output_low,output_high,coring=false)
mergechroma(a)
When working in RGB, levels just does the same thing to R, G and B which usually gives the result you'd expect with no unexpected colour change - but that's no reason to work in RGB.
If you are tweaking values solely in AVIsynth, make sure you're looking at the results properly. e.g. add a bob().converttorgb() at the end of the script JUST for previewing the results so you can see what they really look like in VirtualDUB or whatever. Then take it out again before processing! With default settings in most programmes it's not necessary to add this line, but there are times when the RGB conversion doesn't happen as you expect and what you see on the preview display isn't what you'll see after encoding - adding this line usually ensures that it is.
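For example, the preview-only tail could simply be (and comes out again before encoding):
Code:
bob()           # deinterlace for display only
ConvertToRGB()  # force the RGB conversion you will actually judge the picture by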
Cheers,
David. -
Filter.avs:
Code:FlipVertical()
FilterOne.bat:
Code:
echo AviSource("%~d1%~p1%~n1%~x1") > "%~d1%~p1%~n1.avs"
echo import("Filter.avs") >> "%~d1%~p1%~n1.avs"
x264 --preset="veryfast" --output "%~d1%~p1%~n1.mkv" "%~d1%~p1%~n1.avs"
FilterAll.bat:
Code:for %%F in (*.avi) do FilterOne.bat "%%F"
FilterOne.bat builds an AviSynth script for a particular AVI file. It includes importing of Filter.avs for filtering. It then calls x264 to convert the video. Obviously you can change this to whatever encoder you want; or leave out the encoding to just create an AVS file. You can drag/drop individual AVI files onto this bat file to process them. Or...
FilterAll.bat calls FilterOne.bat for each AVI file in a folder. Ie, an AVS script is built for each AVI file, then that script is encoded with x264.
You should be able to use these as a starting point for whatever you want to do. -
I really struggle to make myself clear!
My idea is/was to, for example, work out levels for e.g. 4 clips per evening.
It might look something like this:
Code:v1 = AviSource("filename1.avi").Levels(v1, 16, 1, 235, 10, 235, coring=false) v2 = AviSource("filename2.avi").Levels(v2, 2, 1, 235, 5, 235, coring=false) v3 = AviSource("filename6.avi").Levels(v3, 16, 1, 235, 10, 235, coring=false) v4 = AviSource("filename21.avi").Levels(v4, 2, 1, 235, 0, 235, coring=false)
He wants to apply the same script to 100 separate video files, and end up with 100 separate processed video files. Batch processing.
Once done, I would like to start the process and go to bed, not waiting for v1 to finish before needing to manually start v2.
In the morning, I would like to see 4 clips and not one combined clip/video because, as I stated before, the clips are currently sorted (1) by the camera used and (2) by the quality of the clips. These clips eventually need to be moved back to their chronological order.
Having done this process, I want to experiment with the various scripts which were produced by everyone, especially Sanlyn. Most of those clips will need to be done individually.
The second argument is the gamma (1.0 is neutral, linear). It's used to bring out details in the shadows or brights without crushing brights or darks. Ie, it's a non-linear control.
So, is there anyone that can help? -
Code:
v1 = AviSource("filename1.avi").Levels(16, 1, 235, 10, 235, coring=false) v2 = AviSource("filename2.avi").Levels(2, 1, 235, 5, 235, coring=false) v3 = AviSource("filename6.avi").Levels(16, 1, 235, 10, 235, coring=false) v4 = AviSource("filename21.avi").Levels(2, 1, 235, 0, 235, coring=false) return v1 ++ v2 ++ v3 ++ v4
-
Then you need to set up a script for each video and encode it separately via a batch file:
Code:
x264 --output video1.filtered.mkv video1.avs
x264 --output video2.filtered.mkv video2.avs
x264 --output video3.filtered.mkv video3.avs
-
That's easy. The only thing is, running a simple levels command on a lossless or DV-AVI clip isn't going to take long at all.
Anyway, it's easy.
You start with
myvideo1.avi
myvideo2.avi
myvideo3.avi
etc
you create myvideo1.avs with the contents
avisource("myvideo1.avi").levels(5,1.0,235,0,255,c oring=false)
you create myvideo2.avs with the contents
avisource("myvideo2.avi").levels(5,1.0,200,0,255,c oring=false)
you create myvideo3.avs with the contents
avisource("myvideo3.avi").levels(5,1.0,220,0,255,c oring=false)
etc
You then use VirtualDUB with Direct Stream Copy and either
a) after you've previewed each .avs file and got the levels you want, you use Queue batch operation>Save as AVI, or
b) after you've got all the levels as you want them in all files, you use File > Queue batch operation > Batch Wizard to load up all the .avs files into one list, then at the bottom of the list click "add to queue" "re-save as AVI".
Then you use the Job Control to set it all off and leave it going.
Honestly though, unless you've got a really slow computer or really long clips, you're not going to be running this overnight. Getting the values in the levels command right is going to take far longer than applying the result.
I think it's also worth saying that, if you want to do individual level tweaks to every clip, most people would use an NLE rather than AVIsynth, and many would do it on the timeline of the finished project rather than the individual clips (to make any poor matches immediately obvious so they could be corrected, and to avoid correcting footage that doesn't make it to the final cut).
Cheers,
David. -
Slow progress, but at least some progress!
I have identified the "dark" clips and with individual scripts adjusted the levels.
I have now started with the clip in post #64.
Code:
#--- Avisynth plugins & scripts:
#- QTGMC-3.32.avsi
#- Stab.avs
#- ChromaShift.dll
#- mvtools.dll (v1.11.4.5)
#- aWarpSharp.dll (v2, March 2012)
#- TTempSmooth.dll
#- LSFmod.avsi
#-----VirtualDub plugins
#- temporal smoother (built-in, set at 4)
#- CamCorder Color Denoise (set at 24)
AviSource("J:\1. VHS Video projek 2013\1.0 Izak\1 Izak.avi")
COlorYUV(cont_y=10,off_y=-17,gamma_y=90)
Tweak(coring=false,sat=0.75)
ConvertToYV12(interlaced=true)
ChromaShift(L=2).MergeChroma(awarpsharp2(depth=30))
QTGMC(preset="very fast",sharpness=0.6)
Stab()
Crop(14,8,-6,-8).AddBorders(10,8,10,8)
# --- Denoiser and deinterlace/reinterlace via 2BDecided ------
source=last #save original
#denoiser:
backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
source.MVDegrain2(backward_vec1,forward_vec1,backward_vec2,forward_vec2,thSAD=400,idx=1)
clean=last #save cleaned version
#return clean # return cleaned version to check it if required
diff1=subtract(source,clean).Blur(0.25)
diff2=diff1.blur(1.5,0)
diff3=subtract(diff1,diff2) #diff3 is high-ish frequency noise only
sharpen(0.4,0.0) # sharpen cleaned version a little
#mix high frequency noise back in
overlay(last,diff3.levels(128,1.0,255,0,127,coring=false),mode="add", opacity=0.7)
overlay(last,diff3.levels(0,1.0,127,128,255,coring=false).Invert(),mode="subtract", opacity=0.7)
#put cleaned chroma back in with warp sharpening
mergechroma(clean.aWarpSharp(depth=20.0, thresh=0.75, blurlevel=2, cm=1))
#re-interlace:
assumetff().separatefields().selectevery(4,0,3).weave()
# ---- END of MVdegrain2 cleaner ----------
TTempSMooth()
# ---- To RGB for VirtualDub filters
ConvertToRGB32(matrix="Rec601", interlaced=true)
return last
"There is no function named "Stab" and there is no function named "TTempSMooth"
I did not have those 2 in my "To Avisynth Plugins Folder".
I found this "stab" script:
Code:
temp = last.TemporalSoften(7,255,255,25,2)
Interleave(temp.Repair(last.TemporalSoften(1,255,255,25,2)),last)
DePan(last,data=DePanEstimate(last,trust=0,dxmax=10,dymax=10),offset=-1)
SelectEvery(2,0)
I also downloaded TTempSmoothv094 and added TTempSmooth.dll to that folder, but I still get the error message.
Being such a newbie in this scripting business, I suppose it must be something minor that I am doing wrong.
Advice please? -
That's not the full Stab script:
Code:
##############################################################################
#Original script by g-force converted into a stand alone script by McCauley #
#latest version from December 10, 2008                                      #
##############################################################################
function Stab (clip clp, int "range", int "dxmax", int "dymax") {
range = default(range, 1)
dxmax = default(dxmax, 8)
dymax = default(dymax, 8)
temp = clp.TemporalSoften(7,255,255,25,2)
inter = Interleave(temp.Repair(clp.TemporalSoften(1,255,255,25,2)),clp)
mdata = DePanEstimate(inter,range=range,trust=0,dxmax=dxmax,dymax=dymax)
DePan(inter,data=mdata,offset=-1)
SelectEvery(2,0)
}
Code:import("C:\Program Files (x86)\AviSynth 2.5\plugins\Stab.avs")
TTempSmooth.dll should autoload if it's in AviSynth's plugins folder. If it doesn't then import it manually in your script:
Code:LoadPlugin("C:\Program Files (x86)\AviSynth 2.5\plugins\TTempSmooth.dll")
-
Thanks for the advice.
I feel really dumb with this. I did what you said, but the next message was:
I downloaded DePanEstimate version 1.9.2 and pasted the dll in C:\Program Files (x86)\AviSynth 2.5\plugins as well as in J:\1. VHS Video projek 2013\1. Projek\To Avisynth Plugins Folder.
Still no luck. I get the same error message. -
You also need depan.dll from depan tools 1.10.1. At the bottom of this page:
http://avisynth.org.ru/depan/depan.html
I think the next problem you'll have is a missing FFTW3 library. There are instructions on where to get it and where to put it in the middle of the above page. If you're running 32 bit AviSynth on 64 bit Windows it goes in c:\windows\syswow64\, not c:\windows\system32\. -
I thought the FFTW3 syslibs came with the QTGMC package. But here they are, with instructions, attached.
-
If you rename the scripts/functions in the plugins folder from .avs to .avsi they will be automatically loaded and you won't need to import them. However, some people prefer to import them explicitly in every script they write which needs them. The full set of import lines can be longer than the script, but at least you know which functions you are and are not loading. It can help with debugging if you have multiple versions of one function. Being lazy, I have all the ones I use called .avsi and I never need to use import in a script.
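For instance (a made-up helper, just to show the naming convention): save something like this as MyLevelsFix.avsi in the plugins folder and any script can call MyLevelsFix() without an import line:
Code:
# MyLevelsFix.avsi -- hypothetical example function, autoloaded from the plugins folder
function MyLevelsFix(clip c) {
    return c.Levels(16, 1.0, 235, 16, 235, coring=false)
}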
I'll say again: I think it's also worth saying that, if you want to do individual level tweaks to every clip, most people would use an NLE rather than AVIsynth.
Cheers,
David. -
I'll say again: I think it's also worth saying that, if you want to do individual level tweaks to every clip, most people would use an NLE rather than AVIsynth.
I subdivided the clips per camera (4 big folders) and also grouped the “darker” clips in folders. The dark clips were improved one by one by changing the levels, a big job, but not so big.
As you state, 2Bdecided, it is physically impossible to do one clip at a time, individualizing every clip.
Sanlyn did a marvelous job with difficult clips. These scripts I will use and adapt for some of those difficult clips.
BUT, going through the scripts, I found a lot of similarities in the scripts. From this I can make a generic script (as I am really not looking for perfect) and do some more adjusting in an NLE.
Here is the “general” "2Bdecided" script:
Code:avisource("a.avi")+avisource("b.avi")+avisource("c.avi")+avisource("d.avi")+avisource("e.avi")+avisource("g.avi")+avisource("h.avi")+avisource("i.avi")+avisource("j.avi")+avisource("k.avi")+avisource("l C.avi")+avisource("m C.avi")+avisource("n.avi")+avisource("o.avi") assumetff() bob(0.0, 1.0) # lossless (perfectly reversible) bob deinterlace #o=last #a=last.levels(0,1.0,255,10,250,coring=false) #b=last.hdragc() #overlay(a,b,opacity=0.5) levels(0,1.0,255,10,255,coring=false) # raise black level converttoyv12() # need YV12 for denoiser source=last #save original #denoiser: backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1) backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1) forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1) forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1) source.MVDegrain2(backward_vec1,forward_vec1,backward_vec2,forward_vec2,thSAD=400,idx=1) clean=last #save cleaned version #return clean # return cleaned version to check it if required diff1=subtract(source,clean).Blur(0.25) diff2=diff1.blur(1.5,0) diff3=subtract(diff1,diff2) #diff3 is high-ish frequency noise only sharpen(0.4,0.0) # sharpen cleaned version a little #mix high frequency noise back in overlay(last,diff3.levels(128,1.0,255,0,127,coring=false),mode="add", opacity=0.7) overlay(last,diff3.levels(0,1.0,127,128,255,coring=false).Invert(),mode="subtract", opacity=0.7) #put cleaned chroma back in with warp sharpening mergechroma(clean.aWarpSharp(depth=20.0, thresh=0.75, blurlevel=2, cm=1)) #crop junk in borders... #very clean: left=18, top=6, right=10,bottom=6,w=704, h=576 #OK for DVD: left=12, top=2, right=8, bottom=0, w=704, h=576 crop(left,top,-right,-bottom) # add equal (mod2) borders nleft=((w-width(last))/4)*2 nright=w-width(last)-nleft ntop=((h-height(last))/4)*2 nbottom=h-height(last)-ntop addborders(nleft,ntop,nright,nbottom) #return last # preview deinterlaced version if required #re-interlace: assumetff() separatefields() selectevery(4,0,3) weave()
1. Denoiser
Code:
#denoiser:
backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
source.MVDegrain2(backward_vec1,forward_vec1,backward_vec2,forward_vec2,thSAD=400,idx=1)
2. Sharpen
Code:sharpen(0.4,0.0) # sharpen cleaned version a little
3. High frequency noise
Code:
#mix high frequency noise back in
overlay(last,diff3.levels(128,1.0,255,0,127,coring=false),mode="add", opacity=0.7)
overlay(last,diff3.levels(0,1.0,127,128,255,coring=false).Invert(),mode="subtract", opacity=0.7)
4. Put cleaned chroma back in with warp sharpening
Code:
#--- put cleaned chroma back in with warp sharpening ---
mergechroma(clean.aWarpSharp(depth=20.0, thresh=0.75, blurlevel=2, cm=1))
Code:
MergeChroma(awarpsharp2(depth=30))
ChromaShift(C=-2)
Code:MergeChroma(MCTemporalDenoise(settings="high",interlaced=true)) ChromaShift(c=-4).MergeChroma(aWarpSharp2(depth=30))
Code:ChromaShift(L=2).MergeChroma(awarpsharp2(depth=30))
Code:MergeChroma(MCTemporalDenoise(settings="very high",interlaced=true))
Code:ChromaShift(L=-2,C=-2,v=2)
5. Color adjustments (ColorYUV)
Code:COlorYUV(cont_u=-25,off_u=-10,cont_v=-25)
Code:COlorYUV(cont_y=15,gamma_y=-25,cont_v=-40,cont_u=-20,off_u=-3)
Code:COlorYUV(cont_y=10,off_y=-17,gamma_y=90)
Code:COlorYUV(cont_y=-15,off_y=-4,gamma_y=10,cont_v=-30,cont_u=-30,off_u=-3)
Code:ColorYUV(cont_y=10,off_y=5,gamma_y=-25)
Code:ColorYUV(off_y=12,cont_y=45,cont_v=-20)
6. Deinterlace
Code:QTGMC(preset="very fast",sharpness=0.6)
Code:QTGMC(preset="medium")
7. Even and odd
Code:
AssumeTFF().SeparateFields()
a=last              # -- save starting point as "a"
e1=a.SelectEven()   # -- filter "e1" EVEN fields, keep results as "e2"
e1
chubbyrain2()
smoothuv(radius=7)
e2=last
o1=a.SelectOdd()    # -- filter "o1" ODD fields, keep results as "o2"
o1
chubbyrain2()
smoothuv(radius=7)
o2=last
Interleave(e2,o2)   # -- rejoin e2 + o2, crop and keep as "b"
SmoothTweak(hue1=5,hue2=-5)
crop(0,0,0,-276,true)
b=last
overlay(a,b,x=0)    # -- overlay top border "b" onto "a"
weave()
What should be included and what not?
Thanks
Albie -
All of the code pieces you quoted are filters and procedures that were customized for individual videos. In some cases you can use the same line of code for many clips -- but this assumes that the clips on which you use that code have the same problems. For example, you had a couple of short video samples that had bright, oversaturated, flashing red. If you look at those clips again you'll see that the code used to clean them is similar. You had other clips with bright flashing bars and discoloration along the top borders; the same procedures were used to clean that noise. You wouldn't want to apply that same code to clips that don't have those problems.
The code that you quoted from 2BDecided's "original" script is OK as it is, but if you look at the scripts submitted earlier, some of the original code was modified. The first line of his original code is:
Code:
assumetff()
bob(0.0, 1.0) # lossless (perfectly reversible) bob deinterlace
The first two lines of 2BDecided's denoiser and the last several lines are used to deinterlace and then to re-interlace after running the filters and cleaners. You quoted separately from several sections of that same denoising procedure -- they all belong together as a single process with many steps.
Both of those QTGMC statements accomplish the same thing: they deinterlace the video. QTGMC has many settings that yield different effects. The preset is the most common parameter. Presets such as "very fast" and "fast" deinterlace quickly. Slower presets such as "medium" or "slow" do more denoising and cleanup, which is why they run slower. It is possible to do too much denoising, so the faster presets are used to prevent it. The "sharpness" parameter obviously sharpens. The default value is 1.0. Smaller numbers mean less sharpening; when you see the sharpness value set below 1.0, it's usually done purposely to prevent over-sharpening, especially when other sharpeners are used later in the processing. In all of my submitted scripts I used QTGMC to deinterlace. Bob() and yadif are two other deinterlacers, but they don't have the same quality output as QTGMC. Two safe preset settings that you can use with QTGMC are "medium" or "very fast". The default preset is "Slower", which can often give an overly filtered look and will remove some fine detail. Many people will use either bob() or yadif to quickly run a rough deinterlace for testing, but in their final script they'll replace those with QTGMC.
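If you want to judge the presets for yourself, a quick check (my own sketch, borrowing the split-screen idea from earlier in the thread) is to put two of them side by side:
Code:
# assumes the interlaced source is already loaded as "last"
a = last.QTGMC(preset="very fast", sharpness=0.6).Subtitle("very fast")
b = last.QTGMC(preset="medium").Subtitle("medium")
StackHorizontal(a.Crop(0, 0, width(a)/2, 0), b.Crop(width(b)/2, 0, 0, 0))  # left half vs right half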
You also quoted MCTemporalDenoise (MCTD), used inside MergeChroma with settings that varied per clip:
Code:MergeChroma(MCTemporalDenoise(various settings))
You also quoted the use of chubbyrain2, which was used in two basic ways. The way you posted used Even and Odd fields, but chubbyrain2 was also used without processing Even and Odd fields separately. It was used to accomplish the same purpose as MCTD, to clean flashing chroma noise and to smooth out "spikes" of oversaturated color. Chubbyrain2 is a temporal filter; that is to say, it observes differences between multiple frames and decides which of the disturbances is noise and which is not. If the noise takes up more than one frame, such as lasting for three or four frames, a temporal filter would ignore that disturbance as not being noise. But if you separate the fields so that some fields show the noise for a shorter span of time, a temporal filter can be more effective. If the same noise is in frames 1, 2, and 3, the filter will see the same thing in all 3 frames and will assume that the noise isn't noise. But if you separate the fields, the noise in the Even frames will appear only in one frame (frame 2). In odd frames, the noise would appear only in 2 frames (frames 1 and 3) but not in the others, so this would also be interpreted as noise. If the original noise lasts for only a frame or two, SeparateFields() would likely be unnecessary.
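In general terms the field-splitting pattern looks like this (a bare skeleton of my own, with TemporalSoften standing in for whichever temporal filter is actually used, such as chubbyrain2):
Code:
AssumeTFF().SeparateFields()
f = last
e = f.SelectEven().TemporalSoften(2, 4, 8, 15, 2)  # filter the even fields on their own
o = f.SelectOdd().TemporalSoften(2, 4, 8, 15, 2)   # filter the odd fields on their own
Interleave(e, o)  # back to the original field order
Weave()           # reassemble the interlaced frames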
Sharpeners: Different sharpeners have different effects. All of them can be overused. Avisynth has its own built-in sharpener (sharpen()), but LSFMod is another one that has more than a dozen parameters. LSFMOD can be set to sharpen edges only, to ignore edges, or to avoid posterization or "clay face" effects.
ChromaShift is used to move chroma bleed to the left, right, up, or down. The settings depend on how you need to displace the bad colors. You can't use the same shifts for every video if those shifts are too wide or too narrow for the clip being processed. In one case you would shift the chroma too far in one direction, so that you create chroma bleed in the opposite direction. In other cases you might not shift far enough, which is a waste of the filter.
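One way to find the right amount (my own sketch, not from the thread) is to preview a couple of candidate shifts side by side on a frame with strong color edges:
Code:
# assumes the clip is already loaded as "last"; C shifts the chroma horizontally in pixels
a = last.ChromaShift(C=-2).Subtitle("C=-2")
b = last.ChromaShift(C=-4).Subtitle("C=-4")
StackHorizontal(a, b)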
You are correct in that many procedures were used frequently, and in some cases the particular combination of filters or filter settings would be different. That's because the video being processed required either more or less of the work. There is no "universal script" for everything. I do indeed wish that such a script existed. I also wish that every video I see had the same, identical problems. But those videos don't exist.
On my PC's I have two text files that contain nothing but coded lines for various filters, settings, ways of opening a video, deinterlacing, separating fields, etc. I copy those procedures and filters line by line, one at a time, into a script for testing any particular video. I don't use all of those coded lines and procedures at the same time, and not for every video. Those two text files are merely templates from which I can copy lines of code as needed.
Take, for instance, this line of sample code in one of my text files:
Code:AviSource("")
Code:AviSource("E:\forum\avz10_B\4.avi")
Here are some other lines of template code from one of my text files:
Code:ppath="D:\Avisynth 2.5\plugins\" Import(ppath+"SmoothD2c.avs") Import(ppath+"RemoveDirt.avs") Import(ppath+"RemoveSpots.avs") Import(ppath+"TemporalDeGrain.avs") Import(ppath+"QTGMC-3.32.avs") Import(ppath+"FastLineDarken 1.3.avs")
Code:ppath="D:\Avisynth 2.5\plugins\" Import("D:\Avisynth 2.5\plugins\SmoothD2c.avs") Import("D:\Avisynth 2.5\plugins\RemoveDirt.avs") Import("D:\Avisynth 2.5\plugins\RemoveSpots.avs") Import("D:\Avisynth 2.5\plugins\TemporalDeGrain.avs") Import("D:\Avisynth 2.5\plugins\QTGMC-3.32.avs") Import("D:\Avisynth 2.5\plugins\FastLineDarken 1.3.avs")
I also have lines of code that are pre-composed for many procedures:
Code:
AssumeBFF().SeparateFields()
AssumeTFF().SeparateFields()
Code:Sangnom order 0=TFF, 1=BFF, default = BFF. default strength = 48
This version of my changes simply uses SangNom with its default values:
Code:Sangnom()
Code:Sangnom(order=1, strength = 24)
Here is a procedure that I copied from the Doom9 forum:
Code:
# ----- repair broken lines + edges ----
w = width
h = height
nnedi3_rpow2(opt=2,rfactor=2,cshift="spline64resize").TurnLeft().\
NNEDI3().TurnRight().NNEDI3().spline64resize(w,h)
Here is 2Bdecided's denoiser as I have it in my sample file, made a little neater than the original. In this case I've converted it to a function that I can call from any place in my script. This version begins by deinterlacing, and ends by re-interlacing. I call it in my script with one line:
MVDegrain2B_QTGMC(last)
Code:
#----- 2BDecided MVDegrain idea (requires old mvtools.dll) ------------
function MVDegrain2B_QTGMC (clip c) {
c.AssumeTFF().QTGMC(preset="very fast",sharpness=0.6)
source=last #save original
#denoiser:
backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
source.MVDegrain2(backward_vec1,forward_vec1,backward_vec2,forward_vec2,thSAD=400,idx=1)
clean=last #save cleaned version
diff1=subtract(source,clean).Blur(0.25)
diff2=diff1.blur(1.5,0)
diff3=subtract(diff1,diff2) #diff3 is high-ish frequency noise only
sharpen(0.4,0.0) # sharpen cleaned version a little
#mix high frequency noise back in
overlay(last,diff3.levels(128,1.0,255,0,127,coring=false),mode="add", opacity=0.7)
overlay(last,diff3.levels(0,1.0,127,128,255,coring=false).Invert(),mode="subtract", opacity=0.7)
#put cleaned chroma back in with warp sharpening
mergechroma(clean.aWarpSharp(depth=20.0, thresh=0.75, blurlevel=2, cm=1))
#re-interlace:
AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()
return last
}
There is also a version without the deinterlace/re-interlace steps, which I call with:
MVDegrain2B(last)
Code:
function MVDegrain2B (clip c) {
source=c #save original
#denoiser:
backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
source.MVDegrain2(backward_vec1,forward_vec1,backward_vec2,forward_vec2,thSAD=400,idx=1)
clean=last #save cleaned version
diff1=subtract(source,clean).Blur(0.25)
diff2=diff1.blur(1.5,0)
diff3=subtract(diff1,diff2) #diff3 is high-ish frequency noise only
sharpen(0.4,0.0) # sharpen cleaned version a little
#mix high frequency noise back in
overlay(last,diff3.levels(128,1.0,255,0,127,coring=false),mode="add", opacity=0.7)
overlay(last,diff3.levels(0,1.0,127,128,255,coring=false).Invert(),mode="subtract", opacity=0.7)
#put cleaned chroma back in with warp sharpening
mergechroma(clean.aWarpSharp(depth=20.0, thresh=0.75, blurlevel=2, cm=1))
return last
}
Work flow: why not keep all the gigabytes of gathered clips as they are, but work with only one series of them at a time? For example, work on 15 or 20 minutes of finished video, get rid of the intermediate work AVIs, and just save the final MPEG output. Then work on another few minutes, save those final MPEGs, and start on another section. The MPEGs can be joined in an editor later. Because I don't know exactly how you're arranging those clips, this is just a suggestion.
-
I've not tried chubbyrain2, so would offer the general advice to only deinterlace once, and never to work on separated fields separately. It's true that loads of scripts are written to work with separated fields, but I have VHS camcorder tapes that will not denoise properly like that: one field is subtly but consistently different from the other - the difference is swamped by the noise originally, but is brought out by denoising. Denoising deinterlaced frames smooths out this difference (a good thing). Denoising separated fields in two separate filter chains maintains the difference, which leads to a slight flicker in the final video.
If you are re-interlacing at the end, and if you don't crop or stabilise or trim by 1 field etc (essentially if you return the lines from the original fields at the end of the script, rather than the lines that were invented by the deinterlacer) then the choice of deinterlacer is much less important. If you somehow keep/output the invented lines at the end of the script (e.g. full progressive output, cropping in a way that swaps the fields over, vertical scaling, etc) then the choice of deinterlacer is crucial.
All my tapes that have been through the same process have the same chroma offset. Different generation tapes have a different vertical chroma offset. Different camcorders + VCRs generate a different horizontal offset. Vertical offsets are usually a specific number of lines - no subjectivity about it, and the right answer will be obvious on critical content. Horizontal offsets are more subjective, and sometimes content dependent. The warpsharp trick warps the chroma edges to the luma edges - it'll fix-up small chroma offsets anyway, but it's best to put it right manually first.
Cheers,
David. -
Hmm. Definitely some info that deserves notice. Thanks for that.
-
-
I don't think so. But you could easily do that yourself by recombining its output with the original video. It will likely look bad though. Example:
Code:s=AviSource("filename.avi") q=QTGMC(s) sfields=SeparateFields(s) # the original fields qfields=SeparateFields(q).SelectEvery(4,1,2) # throw out fields that correspond to the original fields Interleave(sfields, qfields) # weave the original fields with the remaining qtgmc fields SelectEvery(4, 0,1,3,2) # the last two are out of order, correct the order Weave() # weave the fields back into frames
-
I am progressing very slowly.
I have been experimenting with the clips and struggling with the scripts!, but at least I have a few clips that I can compare.
I tried HCEnc, but with even more scripts involved, I just thought I had had enough.
I bought TMPGEnc Video Mastering Works 5, but would like to get comments on the settings:
This one seems fine
I chose the MPEG setting, but perhaps the setting under Custom Output Template outputs might also be suitable
My biggest uncertainty is the bitrate. The first screen clip is the default. In this screen clip, the fps, the rate control mode and the display mode are wrong.
I changed the fps to 25 fps;
the constant bitrate to VBR (VBR constant quality or VBR average bitrate: which one should I choose? I chose average);
and progressive to interlaced.
With regards to the bitrate, I have read different opinions. Most feel that the bitrate should be high (6000-9000 kbps).
The options here are:
- bitrate
- maximum bitrate
- minimum bitrate
Thanks for any opinions