
# Advice on restoration with AviSynth

1. There is only one denoiser in the script, at least that I can see (all those other calls are commented out). However, once the video is progressive, it doesn't need to be fed to the interlaced MDegrain function (MDegrain2i2). I don't think it will cause major artifacts, but it will certainly cause minor problems. Also, as already pointed out the denoising strength (thSAD) is too high, although it looks like you have edited your original post to change that back from 1700 to 400.
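As a sketch of that order of operations, assuming QTGMC and the MDegrain2p function defined later in this thread are available (the preset is just an example):

Code:
AssumeTFF().QTGMC(preset="fast")   # deinterlace first; output is progressive
MDegrain2p()                       # then denoise with the progressive function, not MDegrain2i2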
2. Thanks for the new2 sample. Yep, there's a bluish light off to the right. I fixed the horizontal rip in frame 1036, thanks to jagabo's ReplaceFramesMC idea. SmoothLevels and ColorYUV are used to refine the results of HDRAGC; you can adjust the values in HDRAGC, SmoothLevels, and ColorYUV to suit. The denoisers are dfttest inside QTGMC and Avisynth's TemporalSoften. I still don't know what to do about that volcanic sparkle in the background window.

Code:
# ####################################
# Imported plugin: ReplaceFramesMC.avs
# ####################################
Avisource("Drive:\path\to\video\new2.00.avi")
Santiag()
ConvertToYV12(interlaced=true)
HDRAGC(corrector=0.6)
SmoothLevels(16,1.0,255,16,245,chroma=200,limiter=0,tvrange=true,dither=100,protect=6)
SmoothTweak(hue1=-3,hue2=1)
ColorYUV(off_u=-3)
AssumeTFF().QTGMC(preset="fast",denoiser="dfttest",sharpness=0.7)
ChromaShift(c=2)
MergeChroma(awarpsharp2(depth=30))
ReplaceFramesMC(2072,2)             #<- 2072 is deinterlaced frame 1036
TemporalSoften(4,4,8,15,2)
AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()
return last
3. Thanks LMotlow, now the video looks very good. Thanks for the help.
4. Thanks. Keep in mind, exactly the same filters and settings won't work for every scene especially with home videos. That first scene in your earliest samples had much darker shadows and darks, so some adjustments have to be made.
5. Given the bluish cast, I tried to warm the colors up a bit so the girls looked a little more natural:

Code:
AviSource("new2.00.avi")
AssumeTFF()
ColorYUV(gain_y=-15, off_u=-3) # pull the brights down to legal levels, fix U black level

ConvertToRGB(interlaced=true)
RGBAdjust(r=1.10, b=0.90) # warmer colors

converttoyv12(interlaced=true)
QTGMC(Preset="slow") # keep temporal resolution

DeHalo_alpha(rx=1, ry=3.5) # remove vertical halos

Spline36Resize(432, height) # downscale
MergeChroma(aWarpSharp(depth=20)) # sharpen colors
TurnRight()
nnedi3(dh=true) # upscale
TurnLeft()
Spline36Resize(720,height) # back to normal width
ChromaShift(c=2)
Crop(8,2,-8,-10)
AddBorders(0,8,0,4)
I don't think it really needs any noise reduction. But if you want, you can try QTGMC's NR feature: QTGMC(Preset="slow", EZDenoise=1.0). Bigger values give more noise reduction. But I hate it when features like wood grain turn to mush.

You should learn to use Histogram() or VideoScope() to check your levels (and with the latter, chroma channels). Another useful tool is CSamp. It reports RGB values under the mouse.
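A minimal sketch of checking levels this way (Histogram is built into AviSynth; VideoScope is a separate plugin, and the file name is a placeholder):

Code:
AviSource("sample.avi")
Histogram(mode="levels")    # Y/U/V histograms; luma outside 16-235 means illegal levels
#VideoScope("both")         # alternative waveform/vectorscope view, if the plugin is loaded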
6. Originally Posted by jagabo
You should learn to use Histogram() or VideoScope() to check your levels (and with the latter, chroma channels). Another useful tool is CSamp. It reports RGB values under the mouse.
Very good suggestion and essential for checking color and levels. But ye olde Csamp.exe is getting tough to find these days. Csamp has no installer, it's a small standalone app that puts its icon on your desktop. Click to run it, and the Csamp panel appears. Simple instructions included in the attached zip.

It deactivates if you click on other windows, so it puts a tiny blue icon in your taskbar tray that you can click if the panel disappears.

If you want something that runs all the time you can try the free ColorPic from http://www.iconico.com/colorpic/. The window has several panels that you can open and close. Below is ColorPic with three of its panels open and the others closed:

7. As jagabo and others showed earlier, there's more than one way to do things. Often you can use the same filters on most scenes, even if you might have to change a setting or two. But sometimes you have to change filters and use other techniques. The very first camera shot in your earlier new.00.avi has badly crushed darks with a lot of dark-level noise. The brighter scenes need little or no denoising, but that first scene is both dark and noisy. There are limits to how much dark detail you can bring out before the image starts to look a little unreal.

That scene has some bad aliasing and line twitter, too. You can just live with it (common with consumer cameras) or try getting rid of it. Strong anti-alias filters will soften those sawtooth edges -- but they'll also soften everything else! It's probably overkill, but I used an anti-alias filter to smooth those edges and to show what happens when you try to make too many "improvements". Sometimes you just have to live with a few defects. The filter used was maa(), but if you overdo any anti-alias filter you'll get similar results.

In the attached mp4, the first shot was much darker than the second shot with the girls singing, so I used a different filter setup for each scene. The first script handles the first scene, the second script handles the rest of the clip, and a third script joins the two sections together. The second scene had excessive interlace combing that could use some cleaning.

Script for Part1 (very dark scene):
Code:
# ###################################################
# imported Avisynth plugins: ContrastMask.avs
#                            LimitedSharpenFaster.avs
# ###################################################

Avisource("Drive:\path\to\new.00.avi")
Trim(0,584)    # <- include a few extra frames for audio
#    delay and filtering of last frames.
ConvertToYV12(interlaced=true)
HDRAGC(corrector=0.6,reducer=1.0)
SmoothLevels(16,0.9,255,16,255,chroma=200,limiter=0,tvrange=true,smooth=200,dither=100,protect=6)
AssumeTFF().SeparateFields()
Mdegrain2p()
MergeChroma(awarpsharp2(depth=30))
LimitedSharpenFaster(strength=75,edgemode=2)
Weave()
maa()
maa()
return last

function MDegrain2p(clip source, int "blksize", int "overlap", int "dct")
{
#### --- for progressive or field-separated video --- ####

overlap=default(overlap,0) # overlap value (0 to 4 for blksize=8)
dct=default(dct,0) # use dct=1 for clip with light flicker

super = source.MSuper(pel=2, sharp=1)
backward_vec2 = super.MAnalyse(isb = true, delta = 2, blksize=blksize, overlap=overlap, dct=dct)
forward_vec2 = super.MAnalyse(isb = false, delta = 2, blksize=blksize, overlap=overlap, dct=dct)
backward_vec4 = super.MAnalyse(isb = true, delta = 4, blksize=blksize, overlap=overlap, dct=dct)
forward_vec4 = super.MAnalyse(isb = false, delta = 4, blksize=blksize, overlap=overlap, dct=dct)

source.MDegrain2(super, backward_vec2, forward_vec2, backward_vec4, forward_vec4, thSAD=400)
return last
}
The script for Part 2 (brighter scene):
Code:
# ###################################################
# imported Avisynth plugins: ContrastMask.avs
#                            LimitedSharpenFaster.avs
# ###################################################

Avisource("Drive:\path\to\new.00.avi")
Trim(564,0)    # <- 0 = through the last frame
ConvertToYV12(interlaced=true)
SmoothLevels(16,1.05,255,16,255,chroma=200,limiter=0,tvrange=true,smooth=200,dither=100,protect=6)
SmoothTweak(hue1=-3,hue2=4)
ColorYUV(off_v=-2,cont_u=-30,off_u=-3)
AssumeTFF().QTGMC(preset="super fast")
vInverse()
ChromaShift(c=2)
MergeChroma(awarpsharp2(depth=20))
DeHalo_Alpha()
LimitedSharpenFaster(edgemode=2)
AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()
santiag(2,2)
return last
The script to join Parts 1 and 2:
Code:
Part1 = Avisource("Drive:\path\to\new00_Part1.avi").Trim(0,563)
Part2 = Avisource("Drive:\path\to\new00_Part2.avi")
vid=Part1+Part2

return vid
Frankly, I wouldn't struggle with maa(). Running two passes really softens the video, as you can see in the attached mp4. Also attached is ContrastMask, the extra plugins needed to use it, and another old timer that's getting tough to find (LimitedSharpenFaster).

IMO some of these filters are hardly needed. I didn't see much use for DeHalo_Alpha or ContrastMask in Part 2 (the dark dresses could have been brightened another way), and you don't really need Santiag because vInverse cleared most of the sloppy interlace edges and combing. Except for the edge noise, you could very well use Part 2 as-is.
8. Thanks. My home tape has many scenes, so I had to trim all the different parts, restore each with a different script, then join them with the last script LMotlow referred to. But LMotlow, can you look at samples 1 and 2 in the first post? They are in very low light and have a lot of low-light noise. Any suggestions for those two samples? Thanks in advance.
9. You would have to treat those earlier samples in a manner similar to the ContrastMask, HDRAGC, and MDegrain used for the dark scene above. The contrast filters have settings to give you some control. I'll try a part of one of the earlier samples later, when it's not so busy here at home.

The problem with the birthday cake scene is horrible aliasing and line twitter. That will take some real work. Meanwhile, I'm surprised that gurus who are better experts than I am haven't offered some ideas.
10. LMotlow, I was trying the script for Part 1 (very dark scene) but cannot get it to work. There are two errors: the first is that SmoothLevels does not have an argument "smooth", and the second is that there is no function named "maa". I have all the DLLs related to these functions inside my plugin folder: smoothadjust.dll for the first error, and masktools-26.dll and SangNom.dll for the second.
11. You're using a different version of SmoothAdjust. Not a problem. Remove the "smooth" value in this statement:
Code:
SmoothLevels(16,1.05,255,16,255,chroma=200,limiter=0,tvrange=true,smooth=200,dither=100,protect=6)
and change it to:
Code:
SmoothLevels(16,1.05,255,16,255,chroma=200,limiter=0,tvrange=true,dither=100,protect=6)
You don't have to use maa(). You can just use two instances of Santiag(). Change this:
maa()
maa()

To this:
santiag(2,2)
santiag()

Santiag is similar but doesn't soften as much. maa() is getting very old these days and works better for anime.
12. Hello again, thanks for all the help. One more question: like this sample, my home movie has various scenes in one video, some very dark and some bright. I think I have to apply different settings to each scene. Please help me with how to trim the various scenes, apply a different script to each with AviSynth, and rejoin them back into one clip. A second problem is audio out of sync; how do I fix that? Thanks.
13. Originally Posted by navi82
I think I have to apply different settings to each scene. Please help me with how to trim the various scenes, apply a different script to each with AviSynth, and rejoin them back into one clip. A second problem is audio out of sync; how do I fix that? Thanks.
This post and the one after it describe two ways of applying different filters to different segments:

Use ++ instead of + to help avoid audio sync problems. Eg:

Code:
part1 ++ part2 ++ part3 ++ part4
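Put together, a minimal join sketch (file names are placeholders):

Code:
part1 = AviSource("new00_Part1.avi")
part2 = AviSource("new00_Part2.avi")
part1 ++ part2   # AlignedSplice: keeps audio aligned with video across the join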
14. Hello, I want to apply two different sets of filters to two different scenes in one video and join them. If I want to join these two scripts into one, how can I do it? Can I have an example with these scripts? I am trying but cannot get the scripts working.

Script for Part1 (very dark scene):
Code:
# ###################################################
# imported Avisynth plugins: ContrastMask.avs
#                            LimitedSharpenFaster.avs
# ###################################################

Avisource("D:\recording(20151025-0138).avi")
Trim(28,27327)
ConvertToYV12(interlaced=true)
HDRAGC(corrector=0.6,reducer=1.0)
SmoothLevels(16,0.9,255,16,255,chroma=200,limiter=0,tvrange=true,smooth=200,dither=100,protect=6)
AssumeTFF().SeparateFields()
Mdegrain2p()
MergeChroma(awarpsharp2(depth=30))
LimitedSharpenFaster(strength=75,edgemode=2)
Weave()
maa()
maa()
return last

function MDegrain2p(clip source, int "blksize", int "overlap", int "dct")
{
#### --- for progressive or field-separated video --- ####

overlap=default(overlap,0) # overlap value (0 to 4 for blksize=8)
dct=default(dct,0) # use dct=1 for clip with light flicker

super = source.MSuper(pel=2, sharp=1)
backward_vec2 = super.MAnalyse(isb = true, delta = 2, blksize=blksize, overlap=overlap, dct=dct)
forward_vec2 = super.MAnalyse(isb = false, delta = 2, blksize=blksize, overlap=overlap, dct=dct)
backward_vec4 = super.MAnalyse(isb = true, delta = 4, blksize=blksize, overlap=overlap, dct=dct)
forward_vec4 = super.MAnalyse(isb = false, delta = 4, blksize=blksize, overlap=overlap, dct=dct)

source.MDegrain2(super, backward_vec2, forward_vec2, backward_vec4, forward_vec4, thSAD=400)
return last
}

The script for Part 2 (brighter scene):
Code:
# ###################################################
# imported Avisynth plugins: ContrastMask.avs
#                            LimitedSharpenFaster.avs
# ###################################################

Avisource("D:\recording(20151025-0138).avi")
Trim(27328,67798)
ConvertToYV12(interlaced=true)
SmoothLevels(16,1.05,255,16,255,chroma=200,limiter=0,tvrange=true,smooth=200,dither=100,protect=6)
SmoothTweak(hue1=-3,hue2=4)
ColorYUV(off_v=-2,cont_u=-30,off_u=-3)
AssumeTFF().QTGMC(preset="super fast")
vInverse()
ChromaShift(c=2)
MergeChroma(awarpsharp2(depth=20))
DeHalo_Alpha()
LimitedSharpenFaster(edgemode=2)
AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()
santiag(2,2)
return last

The script to join Parts 1 and 2:
Code:
Part1 = Avisource("D:\recording(20151025-0138).avi").Trim(28,27327)
Part2 = Avisource("D:\recording(20151025-0138).avi").Trim(27328,67798)
vid=Part1+Part2

return vid
15. Keep in mind that when you don't specify a stream by name AviSynth will use the name "last" instead. And most filters take a stream as input and output a new stream. So when you use a filter like:

Code:
MyFilter()
What you're doing is equivalent to:

Code:
last = MyFilter(last)
So you can join your scripts like this:

Code:
function MDegrain2p(clip source, int "blksize", int "overlap", int "dct")
{
#### --- for progressive or field-separated video --- ####

overlap=default(overlap,0) # overlap value (0 to 4 for blksize=8)
dct=default(dct,0) # use dct=1 for clip with light flicker

super = source.MSuper(pel=2, sharp=1)
backward_vec2 = super.MAnalyse(isb = true, delta = 2, blksize=blksize, overlap=overlap, dct=dct)
forward_vec2 = super.MAnalyse(isb = false, delta = 2, blksize=blksize, overlap=overlap, dct=dct)
backward_vec4 = super.MAnalyse(isb = true, delta = 4, blksize=blksize, overlap=overlap, dct=dct)
forward_vec4 = super.MAnalyse(isb = false, delta = 4, blksize=blksize, overlap=overlap, dct=dct)

source.MDegrain2(super, backward_vec2, forward_vec2, backward_vec4, forward_vec4, thSAD=400)
return last
}

src=Avisource("D:\recording(20151025-0138).avi")

last=src
Trim(28,27327)
ConvertToYV12(interlaced=true)
HDRAGC(corrector=0.6,reducer=1.0)
SmoothLevels(16,0.9,255,16,255,chroma=200,limiter=0,tvrange=true,smooth=200,dither=100,protect=6)
AssumeTFF().SeparateFields()
Mdegrain2p()
MergeChroma(awarpsharp2(depth=30))
LimitedSharpenFaster(strength=75,edgemode=2)
Weave()
maa()
maa()
part1=last

last=src
Trim(27328,67798)
ConvertToYV12(interlaced=true)
SmoothLevels(16,1.05,255,16,255,chroma=200,limiter=0,tvrange=true,smooth=200,dither=100,protect=6)
SmoothTweak(hue1=-3,hue2=4)
ColorYUV(off_v=-2,cont_u=-30,off_u=-3)
AssumeTFF().QTGMC(preset="super fast")
vInverse()
ChromaShift(c=2)
MergeChroma(awarpsharp2(depth=20))
DeHalo_Alpha()
LimitedSharpenFaster(edgemode=2)
AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()
santiag(2,2)
part2=last

part1+part2
16. Any suggested script for the low-light noisy part of the video? In this sample there is too much low-light noise. Please help me with a good script for this one. Thanks in advance, and sorry for my bad English.
17. You could remove more noise from the chroma than the luma. Something like this after deinterlacing:

Code:
MergeChroma(TemporalDegrain(SAD1=420, SAD2=150, Sigma=8), TemporalDegrain(SAD1=400, SAD2=300, Sigma=16))
I'd brighten that video up first with something like:

Code:
ColorYUV(gain_y=25, off_y=-12, gamma_y=100, cont_u=50, cont_v=50)
18. jagabo, I cannot get it to work. Can you help me with a complete script for this video, please?
19. Try something like:
Code:
import("C:\Program Files (x86)\AviSynth\plugins\TemporalDegrain.avs")

AviSource("capture.avi")
AssumeTFF()
ColorYUV(gain_y=25, off_y=-12, gamma_y=100, cont_u=50, cont_v=50)
ConvertToYV12(interlaced=true)
QTGMC(preset="fast")
dehalo_alpha(rx=1, ry=3)
Change the path to TemporalDegrain.AVS if yours is in a different location. The script assumes all the other filters have already autoloaded (the dll files are in AviSynth's plugins folder).
20. Why do I get a 50fps framerate when I run the AviSynth script? My video was 25fps before. What can I do to get the same framerate as before?
21. Originally Posted by navi82
Why do I get a 50fps framerate when I run the AviSynth script...
QTGMC is a bobber. It makes each field into a full frame, doubling both the framecount and the framerate. I haven't seen the sample(s) but it's often done that way when the source is interlaced.
...my video was 25fps before. What can I do to get the same framerate as before?
When bobbed it keeps the smoothness of the interlaced source. By removing half the frames the video will play more 'jerky', which may or may not bother you. To return it to 25fps, add:

SelectEven()

after the QTGMC line.
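In context, the relevant lines look something like this (the preset is just an example):

Code:
AssumeTFF()
QTGMC(preset="fast")   # 25fps interlaced -> 50fps progressive
SelectEven()           # keep every other frame: back to 25fps progressive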
22. Or, if you need to go back to interlaced video use SeparateFields().SelectEvery(4,0,3).Weave().
23. I want to encode this video to MPEG with TMPGEnc and play it on a TV or PC.
24. Watch the video in this post:

Note the difference between the 60 fps and 30 fps bars. The difference between 50 and 25 fps will be similar. Of course, white bars moving over a black background is a worst case scenario. It's less obvious with most video.

If you're making DVDs and you want fluid motion you need to use 25i because DVD doesn't support 50p. If you're using a media player that supports 50p you can use 50p.
25. Originally Posted by jagabo
If you're making DVDs and you want fluid motion you need to use 25i because DVD doesn't support 50p.
In other words, use this code that jagabo just posted:

Code:
SeparateFields().SelectEvery(4,0,3).Weave()
26. How do I convert a 4:3 video to 16:9? On my 48" TV there are black borders on the sides, and I want my video in full screen. When should I do it: when capturing, when processing with AviSynth, or when encoding with TMPGEnc? And how?
27. Here we go again.

A 4:3 video stretched to 16:9 will be distorted (stretched). Circles will be ovals. Squares will be rectangles. People will look fat. The wheels on a car will look like eggs lying on their side. The image will also be more blurred.

If you want to see how your 4:3 video will look when it is stretched to 16:9, use the picture controls on your TV to make it play at wide screen. All modern TV's have a setting that will do this.

You can distort your video in several ways:
- encode the 4:3 video at a 16:9 display aspect ratio. Your encoder can do this. On playback, the video will be stretched horizontally.
- resize the image so that it is stretched horizontally to 16:9 proportions. The video will look distorted during playback.
- Enlarge the image on all sides until it is wide enough for 16:9 display. You will have to crop off a large number of pixels from the top and bottom of the movie to make it fit a 16:9 screen.
- For DVD encoding, your frame size must be 720x576. Set your encoder for a 16:9 display aspect ratio.

Most people would prefer to view a video the way the original was created. But many like to distort the picture. Your choice.

We could get into more detail if we could see a sample or image from the 4:3 video you want to stretch.
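A sketch of the crop-and-enlarge approach for a 720x576 PAL source (the crop values are illustrative, and the source should be deinterlaced first, since vertically resizing interlaced frames needs special handling):

Code:
AviSource("sample.avi")                         # placeholder file name
AssumeTFF().QTGMC(preset="fast").SelectEven()   # progressive 25fps
Crop(0,72,0,-72)          # keep the middle 432 lines (576 * 3/4)
Spline36Resize(720,576)   # back to full DVD frame size
# then set a 16:9 display aspect ratio in the encoder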
28. Originally Posted by navi82
How do I convert a 4:3 video to 16:9? On my 48" TV there are black borders on the sides, and I want my video in full screen. When should I do it: when capturing, when processing with AviSynth, or when encoding with TMPGEnc? And how?
Best not to do it at all. ALWAYS do the least violence to your video, because anything you do to it before encoding is "baked in" for all time. If you really want to stretch the 4:3 video to 16:9, in order to fulfill some inner desire to see your entire screen filled up, then simply use the "stretch" function on your TV. Of course everything will be horribly distorted, and all your friends and family (if these are personal videotapes) will have gained fifty pounds.
29. Originally Posted by navi82
How do I convert a 4:3 video to 16:9? On my 48" TV there are black borders on the sides, and I want my video in full screen. When should I do it: when capturing, when processing with AviSynth, or when encoding with TMPGEnc? And how?
You can also use the zoom on your remote control or the 'format' function (or whatever your television calls it) to fill the screen while keeping the aspect ratio, at the cost of losing about 25% of the picture from the top and bottom.

As johnmeyer says, better not to do it at all. This is Videohelp.com, not Videoruin.com
30. One question: can I feed my AviSynth script directly to TMPGEnc Video Mastering Works 5 instead of processing with VirtualDub, since VirtualDub processing also produces a big file? And what is the best output format to encode for playback on a smart TV via USB? I have an LG LED smart TV.
