VideoHelp Forum




  1. Member
    Join Date
    Jan 2006
    Location
    South Africa
    Thanks for all the efforts! I am still busy capturing/sorting/etc. But, at some stage, I will get there!

    Albie
  2. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    I know how it is. I've transferred many of my old tapes from the 1990s, but I still have about 200 hours remaining!
    Last edited by sanlyn; 19th Mar 2014 at 11:21.
  3. Member
    Join Date
    Jan 2006
    Location
    South Africa
    I am back again!

    All the capturing has been done with VD and duplicate scenes/videos were deleted. I do not know how many clips I have, but it is more than enough. I have sorted them into folders per year, from 1986 to 1995, and numbered them. This was all done in VD.

    I want to try out the scripts that were posted on short clips and I like 2Bdecided's idea:

    You can do a split-screen effect of original vs processed, or process1 vs process2, in AVIsynth by using Crop to crop each clip in half, and then StackHorizontal to put them side by side. Then at least you can see the difference easily - though of course just watching one after the other is more representative of normal viewing.
    How will I be able to capture this on my PC, so that I can write these examples to a DVD and play them on TV? Is this possible, or can it only be done on one's PC? I really find it difficult when one plays one clip after the other and then needs to decide which one is best!


    Regards


    Albie
  4. Show center portion of two videos, vid1 and vid2:

    Code:
    StackHorizontal(vid1.Crop(vid1.width/4, 0, vid1.width/2, vid1.height), vid2.Crop(vid2.width/4, 0, vid2.width/2, vid2.height))
    For comparison purposes you don't need to show only the center portion; you can just stack the full frames:

    Code:
    StackHorizontal(vid1, vid2)
    Stacking them vertically may work better for widescreen material:

    Code:
    StackVertical(vid1, vid2)
    Often it's more useful to interleave the frames of two videos:

    Code:
    Interleave(vid1, vid2)
    Then you can flip back and forth between frames of the two videos in an editor (using the left and right arrows in VirtualDub, for example). It's much easier to see subtle differences this way.

    In general, stacking is better when looking for motion artifacts, interleaving is better when looking for still artifacts.

    Using a screen magnifier can be very helpful. I use Windows 7's built in magnifier. Start -> All Programs -> Accessories -> Ease Of Access -> Magnifier.
  5. Member
    Join Date
    Jan 2006
    Location
    South Africa
    Thanks Jagabo, will definitely try.

    When the family complains, I just comment that this is their heritage from me!
  6. Member
    Join Date
    Jan 2006
    Location
    South Africa
    The progress has been much slower than I anticipated. I have now eventually divided the videos into numerous clips and deleted the duplicate clips.

    In total, I have 264 clips and this totals 967 GB.

    I have now subdivided the clips into folders per camera, and I have moved the extra-dark clips to separate folders.
    This was all done with VD.

    My next step is to look at the levels.

    [Attachment: Image 10.jpg]
    (The whole script does not appear)

    If I use this script, the files combine as one file, but I want them as separate files, as they are all mixed now.

    Is there a way in which I can "edit" the files, but they still remain separate files and are not combined?

    Thanks
    Albie
  7. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Note: You do not have to make an image of a page in Notepad. You can copy the text and paste it into your post.

    The first line in your example script uses "+" to join clips together as one video.

    The "Levels" line darkens all the darks in your video, lowering them from RGB 16 to RGB 1. Instead of
    Code:
    Levels(16,1.0,235,1,235, coring = false)
    You should have:
    Code:
    Levels(16,1.0,235,16,235, coring = false)
    You can experiment in several ways with videos that are too dark. You can use "Levels" to take darks that are down near RGB 2, raise them to somewhere around RGB 8 to 12, and see how that works:
    Code:
    Levels(2,1.0,235,12,235, coring = false)
    which takes everything from RGB 2 and raises it to RGB 12.

    Or you can use ColorYUV and raise luma offset ("off_y") by a few points to make everything brighter, and then use levels to bring darks and brights into line:
    Code:
    ColorYUV(off_y=10)
    Levels(16,1.0,255,16,235, coring = false)
    Why don't you move some of those selected clips to another area and work on a smaller group of clips?
    Last edited by sanlyn; 19th Mar 2014 at 11:22.
  8. To keep clips separate you need to name each:

    Code:
    v1 = AviSource("filename1.avi").Levels(16, 1, 235, 0, 255, coring=false)
    v2 = AviSource("filename2.avi")
    v3 = AviSource("filename3.avi")
    
    v2 = ColorYUV(v2, cont_y=42, off_y=2) # similar to Levels(v1, 16, 1, 235, 0, 255)
    v3 = v3.Tweak(cont=1.16, bright=-18, coring=false)  # similar to Levels(v1, 16, 1, 235, 0, 255)
    
    return(v1++v2++v3) # output them as one video with aligned audio
    If you want to save them as separate videos you would use separate scripts. Or using the above script, you could return only one video with return(v1), encode the clip, then go back and edit the script to return(v2), encode, etc.
  9. Member
    Join Date
    Jan 2006
    Location
    South Africa
    With regards to arranging all my clips, I have made 4 "big" folders, each containing all the files/clips that originated from one camera.

    Each of these folders contains folders according to the general impression of the exposure (dark/light) of the clips. (I hope this is clear.)

    The "very dark" folder will need individual assessment of each clip and therefore a script per clip.

    But, then I have a folder called "average", where the light/dark composition seems to be quite adequate. I am not sure if these clips actually need to be filtered.

    So, during this process, I would like to keep each file separate and not join the files, as there is quite a mix of clips per folder and eventually they will need to go back to the original folder. In other words, the clips are currently not in chronological order (as I initially sorted them) but in "quality" order.

    To keep clips separate you need to name each:

    Code:
    v1 = AviSource("filename1.avi").Levels(v1, 16, 1, 235, 0, 255, coring=false)
    v2 = AviSource("filename2.avi")
    v3 = AviSource("filename3.avi")
    v2 = ColorYUV(v2, cont_y=42, off_y=2) # similar to Levels(v1, 16, 1, 235, 0, 255)
    v3 = v3.Tweak(cont=1.16, bright=-18, coring=false) # similar to Levels(v1, 16, 1, 235, 0, 255)
    return(v1++v2++v3) # output them as one video with aligned audio
    If you want to save them as separate videos you would use separate scripts. Or using the above script, you could return only one video with return(v1), encode the clip, then go back and edit the script to return(v2), encode, etc.
    The initial part is quite clear, but I am not sure about the second part.

    I used this script for one clip this afternoon:

    Code:
    AviSource("C:\Users\User\Videos\1. Projek\a.avi").Levels(2,1.0,235,5,235, coring = false)
    But from what I read in your post, jagabo, it does not seem to be possible to set up a script which will allow one clip to follow automatically after the other, even if I chose the same settings (e.g. levels), but obviously a different filename.

    The "Levels" line darkens all the darks in your video, lowering them from RGB 16 to RGB 1. Instead of
    Code:
    Levels(16,1.0,235,1,235, coring = false)
    You should have:
    Code:
    Levels(16,1.0,235,16,235, coring = false)
    You can experiment in several ways with videos that are too dark. You can use "Levels" to raise all darks that are at RGB 2 and raise them to about RGB 8 or 10 and see how that works:
    Code:
    Levels(2,1.0,235,10,235, coring = false)
    which takes everything from RGB 2 and raises it to RGB 10.

    Or you can use ColorYUV and raise luma offset ("off_y") by a few points to make everything brighter, and then use levels to bring darks and brights into line:
    Code:
    ColorYUV(off_y=10)
    Levels(16,1.0,255,16,235, coring = false)
    Thank you for this explanation. I have processed a few clips with these settings; although the difference was sometimes hard to see, in some cases it was obvious.

    Just some explanation please:
    Code:
    Levels(16,1.0,235,16,235, coring = false)
    I am quite clear, when I examine a histogram, where the first three values originate: 16 on the left, 1.0 in the middle and 235 on the right.
    However, where does the second 16 originate?
    And, if the general exposure is too light, is it best to adjust the second value (1.0) or the second 16?

    Thanks
    Albie
  10. Originally Posted by avz10 View Post
    Code:
    AviSource("C:\Users\User\Videos\1. Projek\a.avi").Levels(2,1.0,235,5,235, coring = false)
    But from what I read in your post, jagabo, it does not seem to be possible to set up a script which will allow one clip to follow automatically after the other, even if I chose the same settings (e.g. levels), but obviously a different filename.
    I don't understand what you mean. You want to use separate scripts for each video, then join them together? You can open AviSynth scripts from within another AviSynth script with AviSource().

    Code:
     AviSource("script1.avs")++AviSource("script2.avs")
    Originally Posted by avz10 View Post
    Just some explanation please:
    Code:
    Levels(16,1.0,235,16,235, coring = false)
    I am quite clear when I examine a histogram where the first three values originate: 16 on the left, 1.0 in the middle and 235 on the right.
    However, where does the second 16 originate?

    The arguments are the same as VirtualDub's Levels filter.

    Originally Posted by avz10 View Post
    And, if the general exposure is too light, is it best to adjust the second value (1.0) or the second 16?


    The second argument is the gamma (1.0 is neutral, linear). It's used to bring out details in the shadows or brights without crushing brights or darks. Ie, it's a non-linear control. Play with VirtualDub's Levels filter and you'll understand it. Whether you want to use gamma or pull down the black depends on the black level of the source.
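    The gamma behaviour is easier to see with numbers. As a rough sketch (not AviSynth's exact internals, and ignoring chroma), the Levels luma mapping with coring=false works like this in Python:

```python
def levels(x, in_low, gamma, in_high, out_low, out_high):
    """Approximate AviSynth Levels(coring=false) for a single 8-bit luma value."""
    # Normalize into the input range and clamp (values outside get clipped).
    t = (x - in_low) / float(in_high - in_low)
    t = min(max(t, 0.0), 1.0)
    # Gamma is applied as a power of 1/gamma: gamma > 1 lifts the shadows.
    t = t ** (1.0 / gamma)
    # Scale to the output range.
    return round(out_low + t * (out_high - out_low))

print(levels(16, 16, 1.0, 235, 16, 235))   # neutral: 16 stays 16
print(levels(235, 16, 1.0, 235, 16, 235))  # neutral: 235 stays 235
print(levels(60, 16, 1.5, 235, 16, 235))   # gamma 1.5 lifts this shadow to 91
```

    So with gamma 1.0 the mapping is a straight line; raising gamma bends the curve upward in the shadows, which is why it brings out dark detail without clipping the blacks.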

    Equally spaced grey bars video and waveform monitor (Histogram in AviSynth)

    gamma 1.0:
    [Attachment: gamma1.0.png]

    gamma 1.5 (enhance shadow detail):
    [Attachment: gamma1.5.png]

    gamma 0.66 (enhance bright detail):
    [Attachment: gamma0.66.png]
    Last edited by jagabo; 8th Oct 2013 at 15:44.
  11. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    He wants to apply the same script to 100 separate video files, and end up with 100 separate processed video files. Batch processing.

    I can't remember the elegant way of doing this. When I did it, I cheated, and created (automatically, using a MATLAB script) 100 separate AVIsynth scripts, all identical apart from the filename they loaded and the filename they were saved to, and then just batch ran the 100 scripts using either VirtualDUB or avs2avi. Anyone know an easier way?


    Cheers,
    David.
  12. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    Using levels, the way of making no change is levels(0,1.0,255,0,255,coring=false)


    Whereas using
    levels(16,1.0,235,16,235,coring=false)
    will clip all the blacker-than-blacks and whiter-than-whites, which isn't always something you want to do. It will make no change within the normal/legal video range though.


    With a gamma of 1.0, what levels does to luma is really simple - it maps the input range to the output range linearly, and clips everything outside of either range.
    It also tries to do something appropriate to the chroma (based on the range you specified) but sometimes that's undesirable/surprising/confusing when working in YUV/YV12. I often put the chroma back to how it was when I don't want levels to touch it, e.g.


    a=last
    levels(input_low,gamma,input_high,output_low,output_high,coring=false)
    mergechroma(a)


    When working in RGB, levels just does the same thing to R, G and B which usually gives the result you'd expect with no unexpected colour change - but that's no reason to work in RGB.




    If you are tweaking values solely in AVIsynth, make sure you're looking at the results properly. E.g. add a bob().converttorgb() at the end of the script JUST for previewing the results, so you can see what they really look like in VirtualDUB or whatever. Then take it out again before processing! With default settings in most programs it's not necessary to add this line, but there are times when the RGB conversion doesn't happen as you expect and what you see in the preview isn't what you'll see after encoding; adding this line usually ensures that it is.


    Cheers,
    David.
  13. Originally Posted by 2Bdecided View Post
    He wants to apply the same script to 100 separate video files, and end up with 100 separate processed video files. Batch processing.
    Filter.avs:
    Code:
    FlipVertical()
    FilterOne.bat:
    Code:
    echo AviSource("%~d1%~p1%~n1%~x1") > "%~d1%~p1%~n1.avs"
    echo import("Filter.avs") >> "%~d1%~p1%~n1.avs"
    x264 --preset="veryfast" --output "%~d1%~p1%~n1.mkv" "%~d1%~p1%~n1.avs"
    FilterAll.bat:
    Code:
    for %%F in (*.avi) do call FilterOne.bat "%%F"
    Filter.avs is the filter sequence you want to use for each video. I'm just using FlipVertical() because it's easy to see the result.

    FilterOne.bat builds an AviSynth script for a particular AVI file. It includes importing of Filter.avs for filtering. It then calls x264 to convert the video. Obviously you can change this to whatever encoder you want; or leave out the encoding to just create an AVS file. You can drag/drop individual AVI files onto this bat file to process them. Or...

    FilterAll.bat calls FilterOne.bat for each AVI file in a folder. Ie, an AVS script is built for each AVI file, then that script is encoded with x264.

    You should be able to use these as a starting point for whatever you want to do.
  14. Member
    Join Date
    Jan 2006
    Location
    South Africa
    I really struggle to make myself clear!

    My idea is/was to, for example, work out levels for e.g. 4 clips per evening.

    It might look something like this:

    Code:
    v1 = AviSource("filename1.avi").Levels(v1, 16, 1, 235, 10, 235, coring=false) 
    v2 = AviSource("filename2.avi").Levels(v2, 2, 1, 235, 5, 235, coring=false)
    v3 = AviSource("filename6.avi").Levels(v3, 16, 1, 235, 10, 235, coring=false)
    v4 = AviSource("filename21.avi").Levels(v4, 2, 1, 235, 0, 235, coring=false)
    He wants to apply the same script to 100 separate video files, and end up with 100 separate processed video files. Batch processing.
    This is not totally correct. I expect that a number of clips will need the same level values and that I can batch those, but quite a lot will need some tweaking with different values.
    Once done, I would like to start the process and go to bed, not waiting for v1 to finish before needing to manually start v2.
    In the morning, I would like to see 4 clips and not one combined clip/video because, as I stated before, the clips are currently sorted (1) by the camera used and (2) by the quality of the clips. These clips eventually need to be moved back to their chronological order.

    Having done this process, I want to experiment with the various scripts which were produced by everyone, especially Sanlyn. Most of those clips will need to be done individually.

    The second argument is the gamma (1.0 is neutral, linear). It's used to bring out details in the shadows or brights without crushing brights or darks. Ie, it's a non-linear control.
    Now that you have showed it to me, it is very clear, almost like Curves in Photoshop!

    So, is there anyone that can help?
  15. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Originally Posted by avz10 View Post
    Code:
    v1 = AviSource("filename1.avi").Levels(v1, 16, 1, 235, 10, 235, coring=false) 
    v2 = AviSource("filename2.avi").Levels(v2, 2, 1, 235, 5, 235, coring=false)
    v3 = AviSource("filename6.avi").Levels(v3, 16, 1, 235, 10, 235, coring=false)
    v4 = AviSource("filename21.avi").Levels(v4, 2, 1, 235, 0, 235, coring=false)
    The clip is already the implicit first argument of a chained call, so drop the clip names from inside Levels():
    Code:
    v1 = AviSource("filename1.avi").Levels(16, 1, 235, 10, 235, coring=false) 
    v2 = AviSource("filename2.avi").Levels(2, 1, 235, 5, 235, coring=false)
    v3 = AviSource("filename6.avi").Levels(16, 1, 235, 10, 235, coring=false)
    v4 = AviSource("filename21.avi").Levels(2, 1, 235, 0, 235, coring=false)
    return v1 ++ v2 ++ v3 ++ v4
    Last edited by sanlyn; 19th Mar 2014 at 11:22.
  16. Originally Posted by avz10 View Post
    I expect that a number of clips will need the same level values and that I can batch, but quite a lot will need some tweaking with different values.
    Once done, I would like to start the process and go to bed, not waiting for v1 to finish before needing to manually start v2
    In the morning, I would like to see 4 clips and not one combined clip/video
    Then you need to set up a script for each video and encode it separately via a batch file:

    Code:
    x264 --output video1.filtered.mkv video1.avs
    x264 --output video2.filtered.mkv video2.avs
    x264 --output video3.filtered.mkv video3.avs
    Or use an encoder that supports batch operations on AVS scripts.
  17. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    Originally Posted by avz10 View Post
    ...quite a lot will need some tweaking with different values.
    Once done, I would like to start the process and go to bed, not waiting for v1 to finish before needing to manually start v2
    In the morning, I would like to see 4 clips and not one combined clip/video
    That's easy. The only thing is, running a simple levels command on a lossless or DV-AVI clip isn't going to take long at all.


    Anyway, it's easy.


    You start with
    myvideo1.avi
    myvideo2.avi
    myvideo3.avi
    etc


    you create myvideo1.avs with the contents
    avisource("myvideo1.avi").levels(5,1.0,235,0,255,coring=false)


    you create myvideo2.avs with the contents
    avisource("myvideo2.avi").levels(5,1.0,200,0,255,coring=false)



    you create myvideo3.avs with the contents
    avisource("myvideo3.avi").levels(5,1.0,220,0,255,coring=false)



    etc




    You then use VirtualDUB with Direct Stream Copy and either
    a) after you've previewed each .avs file and got the levels you want, you use Queue batch operation > Save as AVI, or
    b) after you've got all the levels as you want them in all files, you use File > Queue batch operation > Batch Wizard to load up all the .avs files into one list, then at the bottom of the list click "add to queue" "re-save as AVI".
    Then you use the Job Control to set it all off and leave it going.


    Honestly though, unless you've got a really slow computer or really long clips, you're not going to be running this overnight. Getting the values in the levels command right is going to take far longer than applying the result.


    I think it's also worth saying that, if you want to do individual level tweaks to every clip, most people would use an NLE rather than AVIsynth, and many would do it on the timeline of the finished project rather than the individual clips (to make any poor matches immediately obvious so they could be corrected, and to avoid correcting footage that doesn't make it to the final cut).


    Cheers,
    David.
  18. Member
    Join Date
    Jan 2006
    Location
    South Africa
    Slow progress, but at least some progress!

    I have identified the "dark" clips and with individual scripts adjusted the levels.

    I have now started with the clip in post #64.

    Code:
    #--- Avisynth plugins & scripts:
    #- QTGMC-3.32.avsi
    #- Stab.avs
    #- ChromaShift.dll
    #- mvtools.dll (v1.11.4.5)
    #- aWarpSharp.dll (v2, March 2012)
    #- TTempSmooth.dll
    #- LSFmod.avsi
    #-----VirtualDub plugins
    #- temporal smoother (built-in, set at 4)
    #- CamCorder Color Denoise (set at 24)
    
    AviSource("J:\1. VHS Video projek 2013\1.0 Izak\1 Izak.avi")
    
    COlorYUV(cont_y=10,off_y=-17,gamma_y=90)
    Tweak(coring=false,sat=0.75)
    ConvertToYV12(interlaced=true)
    ChromaShift(L=2).MergeChroma(awarpsharp2(depth=30))
    
    QTGMC(preset="very fast",sharpness=0.6)
    Stab()
    Crop(14,8,-6,-8).AddBorders(10,8,10,8)
    
    # --- Denoiser and deinterlace/reinterlace via 2BDecided ------
    source=last #save original
    
    #denoiser:
    backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
    backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
    forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
    forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
    source.MVDegrain2(backward_vec1,forward_vec1,backward_vec2,forward_vec2,thSAD=400,idx=1)
    
    clean=last #save cleaned version
    #return clean # return cleaned version to check it if required
    
    diff1=subtract(source,clean).Blur(0.25)
    diff2=diff1.blur(1.5,0)
    diff3=subtract(diff1,diff2) #diff3 is high-ish frequency noise only
    
    sharpen(0.4,0.0) # sharpen cleaned version a little
    
    #mix high frequency noise back in
    overlay(last,diff3.levels(128,1.0,255,0,127,coring=false),mode="add", opacity=0.7)
    overlay(last,diff3.levels(0,1.0,127,128,255,coring=false).Invert(),mode="subtract", opacity=0.7)
    
    #put cleaned chroma back in with warp sharpening
    mergechroma(clean.aWarpSharp(depth=20.0, thresh=0.75, blurlevel=2, cm=1))
    #re-interlace:
    assumetff().separatefields().selectevery(4,0,3).weave()
    # ---- END of MVdegrain2 cleaner ----------
    
    TTempSMooth()
    # ---- To RGB for VirtualDub filters
    ConvertToRGB32(matrix="Rec601", interlaced=true)
    return last
    If I use that script, I get 2 error messages:
    "There is no function named 'Stab'" and "There is no function named 'TTempSmooth'".

    I did not have those 2 in my "To Avisynth Plugins Folder".

    I found this "stab" script:

    Code:
    temp = last.TemporalSoften(7,255,255,25,2)
    Interleave(temp.Repair(last.TemporalSoften(1,255,255,25,2)),last)
    DePan(last,data=DePanEstimate(last,trust=0,dxmax=10,dymax=10),offset=-1)
    SelectEvery(2,0)
    I saved it as "Stab.avs" in the "To Avisynth Plugins Folder".



    I also downloaded TTempSmoothv094 and added TTempSmooth.dll to that folder, but I still get the error message.

    Being such a newbie in this scripting business, I suppose it must be something minor that I am doing wrong.

    Advice please?
  19. That's not the full Stab script:

    Code:
    ##############################################################################
    #Original script by g-force converted into a stand alone script by McCauley  #
    #latest version from December 10, 2008                                       #
    ##############################################################################
    
    function Stab (clip clp, int "range", int "dxmax", int "dymax")
    {
    
        range = default(range, 1)
        dxmax = default(dxmax, 8)
        dymax = default(dymax, 8)
    
        temp  = clp.TemporalSoften(7,255,255,25,2)
        inter = Interleave(temp.Repair(clp.TemporalSoften(1,255,255,25,2)),clp)
        mdata = DePanEstimate(inter,range=range,trust=0,dxmax=dxmax,dymax=dymax)
    
        DePan(inter,data=mdata,offset=-1)
        SelectEvery(2,0)
    }
    Import that into your main script with
    Code:
     import("C:\Program Files (x86)\AviSynth 2.5\plugins\Stab.avs")
    Change the path to match the setup on your computer.

    TTempSmooth.dll should autoload if it's in AviSynth's plugins folder. If it doesn't then import it manually in your script:

    Code:
    LoadPlugin("C:\Program Files (x86)\AviSynth 2.5\plugins\TTempSmooth.dll")
    Last edited by jagabo; 20th Oct 2013 at 12:41.
  20. Member
    Join Date
    Jan 2006
    Location
    South Africa
    Thanks for the advice.

    I feel really dumb with this. I did what you said, but the next message was:



    I downloaded DePanEstimate version 1.9.2 and pasted the dll in C:\Program Files (x86)\AviSynth 2.5\plugins as well as in J:\1. VHS Video projek 2013\1. Projek\To Avisynth Plugins Folder.

    Still no luck. I get the same error message.
  21. You also need depan.dll from depan tools 1.10.1. At the bottom of this page:
    http://avisynth.org.ru/depan/depan.html

    I think the next problem you'll have is a missing FFTW3 library. There are instructions on where to get it and where to put it in the middle of the above page. If you're running 32 bit AviSynth on 64 bit Windows it goes in c:\windows\syswow64\, not c:\windows\system32\.
  22. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    I thought the FFTW3 syslibs came with the QTGMC package. But here they are, with instructions, attached.
    Last edited by sanlyn; 19th Mar 2014 at 11:23.
  23. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    If you rename the scripts/functions in the plugins folder from .avs to .avsi, they will be automatically loaded and you won't need to import them. However, some people prefer to import them explicitly in every script that needs them. The full set of import lines can be longer than the script itself, but at least you know which functions you are and are not loading, and it can help with debugging if you have multiple versions of one function. Being lazy, I have all the ones I use called .avsi and I never need to use import in a script.

    I'll say again: I think it's also worth saying that, if you want to do individual level tweaks to every clip, most people would use an NLE rather than AVIsynth.


    Cheers,
    David.
  24. Member
    Join Date
    Jan 2006
    Location
    South Africa
    I'll say again: I think it's also worth saying that, if you want to do individual level tweaks to every clip, most people would use an NLE rather than AVIsynth.
    When I initially started with the project, I did not have an idea how big it would become. My initial idea was to have a “generic” script, a “one size fits all” as some of our population say. I ended up with 264 clips, totalling 967 GB!!

    I subdivided the clips per camera (4 big folders) and also grouped the “darker” clips in folders. The dark clips were improved one by one by changing the levels, a big job, but not so big.


    As you state, 2Bdecided, it is practically impossible to do one clip at a time, individualizing every clip.

    Sanlyn did a marvelous job with difficult clips. These scripts I will use and adapt for some of those difficult clips.


    BUT, going through the scripts, I found a lot of similarities. From these I can make a generic script (as I am really not looking for perfect) and do some more adjusting in an NLE.


    Here is the “general” "2Bdecided" script:

    Code:
    avisource("a.avi")+avisource("b.avi")+avisource("c.avi")+avisource("d.avi")+avisource("e.avi")+avisource("g.avi")+avisource("h.avi")+avisource("i.avi")+avisource("j.avi")+avisource("k.avi")+avisource("l C.avi")+avisource("m C.avi")+avisource("n.avi")+avisource("o.avi")
      assumetff()
      bob(0.0, 1.0) # lossless (perfectly reversible) bob deinterlace
      #o=last
      #a=last.levels(0,1.0,255,10,250,coring=false)
      #b=last.hdragc()
      #overlay(a,b,opacity=0.5)   
      levels(0,1.0,255,10,255,coring=false) # raise black level
      converttoyv12() # need YV12 for denoiser
      source=last #save original
      #denoiser:
      backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
      backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
      forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
      forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
      source.MVDegrain2(backward_vec1,forward_vec1,backward_vec2,forward_vec2,thSAD=400,idx=1)
      clean=last #save cleaned version
      #return clean # return cleaned version to check it if required
      diff1=subtract(source,clean).Blur(0.25)
      diff2=diff1.blur(1.5,0)
      diff3=subtract(diff1,diff2) #diff3 is high-ish frequency noise only   
      sharpen(0.4,0.0) # sharpen cleaned version a little
      #mix high frequency noise back in
      overlay(last,diff3.levels(128,1.0,255,0,127,coring=false),mode="add", opacity=0.7)
      overlay(last,diff3.levels(0,1.0,127,128,255,coring=false).Invert(),mode="subtract", opacity=0.7)
      #put cleaned chroma back in with warp sharpening
      mergechroma(clean.aWarpSharp(depth=20.0, thresh=0.75, blurlevel=2, cm=1))
      #crop junk in borders...
      #very clean: left=18, top=6, right=10,bottom=6,w=704, h=576
      #OK for DVD:
      left=12
      top=2
      right=8
      bottom=0
      w=704
      h=576
      crop(left,top,-right,-bottom)
      # add equal (mod2) borders
      nleft=((w-width(last))/4)*2
      nright=w-width(last)-nleft
      ntop=((h-height(last))/4)*2
      nbottom=h-height(last)-ntop
      addborders(nleft,ntop,nright,nbottom)
      #return last # preview deinterlaced version if required
      #re-interlace:
      assumetff()
      separatefields()
      selectevery(4,0,3)
      weave()
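    The crop-and-pad arithmetic at the end of the script (restoring 704x576 with even borders) can be sanity-checked outside AviSynth. A small Python sketch, assuming AviSynth's integer division behaves like Python's //:

```python
def mod2_borders(w, h, cropped_w, cropped_h):
    """Split the missing width/height into left/right and top/bottom borders,
    forcing the left/top border to an even (mod2) size, as the script does."""
    nleft = ((w - cropped_w) // 4) * 2
    nright = w - cropped_w - nleft
    ntop = ((h - cropped_h) // 4) * 2
    nbottom = h - cropped_h - ntop
    return nleft, ntop, nright, nbottom

# The "OK for DVD" crop: left=12, top=2, right=8, bottom=0 from a 704x576 frame.
cropped_w = 704 - 12 - 8  # 684
cropped_h = 576 - 2 - 0   # 574
print(mod2_borders(704, 576, cropped_w, cropped_h))  # -> (10, 0, 10, 2)
```

    The borders add back exactly what was cropped, so the frame returns to 704x576 for DVD authoring.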
    I am not sure that the cropping is necessary. What does everyone think?

    1. Denoiser

    Code:
    #denoiser:
      backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
      backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
      forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
      forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
      source.MVDegrain2(backward_vec1,forward_vec1,backward_vec2,forward_vec2,thSAD=400,idx=1)
    This denoiser has been used in quite a number of scripts.

    2. Sharpen

    Code:
    sharpen(0.4,0.0) # sharpen cleaned version a little
    Sharpen has been used in quite a number of scripts.

    3. High frequency noise

    Code:
    #mix high frequency noise back in
      overlay(last,diff3.levels(128,1.0,255,0,127,coring=false),mode="add", opacity=0.7)
      overlay(last,diff3.levels(0,1.0,127,128,255,coring=false).Invert(),mode="subtract", opacity=0.7)
    Adding the high frequency noise back in has also been used in quite a number of scripts.
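The levels/overlay pair looks cryptic, but the idea is simple: an 8-bit difference clip cannot hold negative numbers, so the signed difference is stored biased around a mid grey (the levels calls in the script treat 128 as that bias). One Overlay adds the positive half back, the other subtracts the negative half. A toy Python sketch of the per-pixel arithmetic, assuming that 128 bias:

```python
# Toy model of the "mix high frequency noise back in" step: the difference
# clip stores (original - cleaned + 128), so values above 128 are detail to
# add and values below 128 are detail to subtract, each at 0.7 opacity.
def remix_noise(cleaned, diff_biased, opacity=0.7):
    signed = diff_biased - 128             # recover the signed difference
    out = cleaned + opacity * signed       # add positives, subtract negatives
    return max(0, min(255, round(out)))    # clamp to the 8-bit range

print(remix_noise(100, 138))   # +10 of noise at 0.7 opacity -> 107
print(remix_noise(100, 118))   # -10 of noise at 0.7 opacity -> 93
```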

    4. Put cleaned chroma back in with warp sharpening

    Code:
    #--- put cleaned chroma back in with warp sharpening ---
      mergechroma(clean.aWarpSharp(depth=20.0, thresh=0.75, blurlevel=2, cm=1))
    Used in at least 4 scripts

    Code:
    MergeChroma(awarpsharp2(depth=30))
      ChromaShift(C=-2)
    Code:
    MergeChroma(MCTemporalDenoise(settings="high",interlaced=true))
      ChromaShift(c=-4).MergeChroma(aWarpSharp2(depth=30))
    Code:
    ChromaShift(L=2).MergeChroma(awarpsharp2(depth=30))
    Code:
    MergeChroma(MCTemporalDenoise(settings="very high",interlaced=true))
    Code:
    ChromaShift(L=-2,C=-2,v=2)
    5. Colour

    Code:
    ColorYUV(cont_u=-25,off_u=-10,cont_v=-25)
    Code:
    ColorYUV(cont_y=15,gamma_y=-25,cont_v=-40,cont_u=-20,off_u=-3)
    Code:
    ColorYUV(cont_y=10,off_y=-17,gamma_y=90)
    Code:
    ColorYUV(cont_y=-15,off_y=-4,gamma_y=10,cont_v=-30,cont_u=-30,off_u=-3)
    Code:
    ColorYUV(cont_y=10,off_y=5,gamma_y=-25)
    Code:
    ColorYUV(off_y=12,cont_y=45,cont_v=-20)
    I suppose that one should use very “average” settings.

    6. Deinterlace

    Code:
    QTGMC(preset="very fast",sharpness=0.6)
    Code:
    QTGMC(preset="medium")
    I am not sure when one should use which of the above.

    7. Even and odd

    Code:
    AssumeTFF().SeparateFields()
      a=last            # -- save starting point as "a"
      e1=a.SelectEven()    # -- filter "e1" EVEN fields, keep results as "e2"
      e1
      chubbyrain2()
      smoothuv(radius=7)
      e2=last
      o1=a.SelectOdd()        # -- filter "o1" ODD fields, keep results as "o2"
      o1
      chubbyrain2()
      smoothuv(radius=7)
      o2=last
      Interleave(e2,o2)    # -- rejoin e2 + o2, crop and keep as "b"
      SmoothTweak(hue1=5,hue2=-5)
      crop(0,0,0,-276,true)
      b=last
      overlay(a,b,x=0)        # -- overlay top border "b" onto "a"
      weave()
    Any comments on the possibility of creating a basic script that will improve the image to some extent? If you agree, any suggestions?
    What should be included or left out?

    Thanks

    Albie
    Quote Quote  
  25. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    All of the code pieces you quoted are filters and procedures that were customized for individual videos. In some cases you can use the same line of code for many clips -- but this assumes that the clips on which you use that code have the same problems. For example, you had a couple of short video samples that had bright, oversaturated, flashing red. If you look at those clips again you'll see that the code used to clean them is similar. You had other clips with bright flashing bars and discoloration along the top borders; the same procedures were used to clean that noise. You wouldn't want to apply that same code to clips that don't have those problems.

    The code that you quoted from 2BDecided's "original" script is OK as it is, but if you look at the scripts submitted earlier, some of the original code was modified. The first line of his original code is:

    Code:
      assumetff()
      bob(0.0, 1.0) # lossless (perfectly reversible) bob deinterlace
    Those two lines make an assumption about the video (top field first) and then deinterlaces the video using the bob() deinterlacer. In the scripts used earlier, the code was applied to video that was already deinterlaced with QTGMC -- so in some cases those lines don't appear in the script that was actually used.

    The first two lines of 2BDecided's denoiser and the last several lines are used to deinterlace and then to re-interlace after running the filters and cleaners. You quoted separately from several sections of that same denoising procedure -- they all belong together as a single process with many steps.

    Originally Posted by avz10 View Post
    Code:
    QTGMC(preset="very fast",sharpness=0.6)
    Code:
    QTGMC(preset="medium")
    I am not sure when one should use which of the above?
    Both of those QTGMC statements accomplish the same thing: they deinterlace the video. QTGMC has many settings that yield different effects. The preset is the most commonly used parameter. Presets such as "very fast" and "fast" deinterlace quickly. Slower presets such as "medium" or "slow" do progressively more denoising and cleanup, which is why they run slower. It is possible to do too much denoising, so the faster presets are used to prevent it. The "sharpness" parameter obviously sharpens. The default value is 1.0. Smaller numbers mean less sharpening; when you see the sharpness value set below 1.0, it's usually done purposely to prevent over sharpening, especially when other sharpeners are used later in the processing. In all of my submitted scripts I used QTGMC to deinterlace. Bob() and yadif are two other deinterlacers, but they don't have the same quality output as QTGMC. Two safe preset settings that you can use with QTGMC are "medium" or "very fast". The default preset is "Slower", which can often give an overly filtered look and will remove some fine detail. Many people will use either bob() or yadif for a quick deinterlace while testing, but in their final script they'll replace those with QTGMC.

    Code:
    MergeChroma(MCTemporalDenoise(various settings))
    The above line with MCTD at various "settings" values was used to curb the red and/or blue flashing color and to smooth very bad chroma noise, or to help clean those flashing colors along the top border and/or stains along the side borders. This was used with very strong "High" or "Very High" settings that process only the chroma (color); if you used very strong settings without MergeChroma you would also filter luma, which could remove a great deal of detail and give a soft image in most cases.

    You also quoted the use of chubbyrain2, which was used in two basic ways. The way you posted used Even and Odd fields, but chubbyrain2 was also used without processing Even and Odd fields separately. It was used to accomplish the same purpose as MCTD, to clean flashing chroma noise and to smooth out "spikes" of oversaturated color. Chubbyrain2 is a temporal filter; that is to say, it observes differences between multiple frames and decides which of the disturbances is noise and which is not. If the noise takes up more than one frame, such as lasting for three or four frames, a temporal filter would ignore that disturbance as not being noise. But if you separate the fields so that some fields show the noise for a shorter span of time, a temporal filter can be more effective. If the same noise is in frames 1, 2, and 3, the filter will see the same thing in all 3 frames and will assume that the noise isn't noise. But if you separate the fields, the noise in the Even frames will appear only in one frame (frame 2). In odd frames, the noise would appear only in 2 frames (frames 1 and 3) but not in the others, so this would also be interpreted as noise. If the original noise lasts for only a frame or two, SeparateFields() would likely be unnecessary.
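sanlyn's even/odd explanation can be demonstrated with a toy temporal filter. A rough Python sketch (a 3-frame median over a single pixel value, standing in for a real temporal denoiser like chubbyrain2; real filters are far more elaborate, but the persistence argument is the same):

```python
# A 3-frame temporal median on one pixel, standing in for a real temporal
# denoiser: it removes a disturbance only when the disturbance is brief.
def temporal_median3(seq):
    out = list(seq)
    for i in range(1, len(seq) - 1):
        out[i] = sorted(seq[i - 1:i + 2])[1]
    return out

full = [0, 9, 9, 9, 0, 0]       # noise spans frames 1-3 of the full stream
print(temporal_median3(full))   # -> [0, 9, 9, 9, 0, 0]  (noise survives)

even = full[0::2]               # one field stream sees it for one sample only
print(temporal_median3(even))   # -> [0, 0, 0]  (noise removed)
```

In the full-rate stream the disturbance persists across three samples and the filter accepts it as real content; in the separated stream it lasts one sample and is rejected as noise.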

    Sharpeners: Different sharpeners have different effects. All of them can be overused. Avisynth has its own built-in sharpener (sharpen()), but LSFMod is another one that has more than a dozen parameters. LSFMOD can be set to sharpen edges only, to ignore edges, or to avoid posterization or "clay face" effects.

    ChromaShift is used to move chroma bleed to the left, right, up, or down. The settings depend on how you need to displace the bad colors. You can't use the same shifts for every video if those shifts are too wide or too narrow for the clip being processed. In one case you would shift the chroma too far in one direction, so that you create chroma bleed in the opposite direction. In other cases you might not shift far enough, which is a waste of the filter.
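To visualize what a horizontal chroma shift does, here is a toy Python sketch on one row of chroma samples (the edge handling here is only a guess for illustration; the real filter's edge behavior may differ):

```python
# One row of chroma samples; shifting slides the whole row, and this toy
# version repeats the edge sample to fill the gap left behind.
def shift_row(row, n):
    """Shift right for n > 0, left for n < 0."""
    if n > 0:
        return [row[0]] * n + row[:-n]
    if n < 0:
        return row[-n:] + [row[-1]] * (-n)
    return list(row)

row = [10, 10, 200, 200, 200, 10]
print(shift_row(row, 1))    # color pushed one sample to the right
print(shift_row(row, -1))   # or pulled one sample to the left
```

The right shift amount is the one that lines the color edge back up with the luma edge; shift too far and the bleed just reappears on the other side.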

    You are correct in that many procedures were used frequently, and in some cases the particular combination of filters or filter settings would be different. That's because the video being processed required either more or less of the work. There is no "universal script" for everything. I do indeed wish that such a script existed. I also wish that every video I see had the same, identical problems. But those videos don't exist.

    On my PC's I have two text files that contain nothing but coded lines for various filters, settings, ways of opening a video, deinterlacing, separating fields, etc. I copy those procedures and filters line by line, one at a time, into a script for testing any particular video. I don't use all of those coded lines and procedures at the same time, and not for every video. Those two text files are merely templates from which I can copy lines of code as needed.

    Take, for instance, this line of sample code in one of my text files:

    Code:
    AviSource("")
    I copy that line into a script I'm writing, exactly as shown above. Then I change the text that lies inside the empty quotation marks, as follows:

    Code:
    AviSource("E:\forum\avz10_B\4.avi")
    I still have to type the path and the name of the video, but at least I don't have to type the entire line from scratch.

    Here are some other lines of template code from one of my text files:

    Code:
    ppath="D:\Avisynth 2.5\plugins\"
    Import(ppath+"SmoothD2c.avs")
    Import(ppath+"RemoveDirt.avs")
    Import(ppath+"RemoveSpots.avs")
    Import(ppath+"TemporalDeGrain.avs")
    Import(ppath+"QTGMC-3.32.avs")
    Import(ppath+"FastLineDarken 1.3.avs")
    Notice the word "ppath" in the top line. That word is something I invented. It's nothing more than a name I give to a place in memory that will hold some text characters that I assign to it. In this case, the text that I assigned to ppath is "D:\Avisynth 2.5\plugins\". Note that I included the trailing backslash "\" which separates a path name from a file name. Then I copy and paste some lines that join ppath, with the "+" symbol, to whatever text follows the "+" sign. Therefore, when Avisynth reads that script it inserts the characters from ppath and joins them to whatever text follows it. So when the script runs, those lines are "interpreted" this way:

    Code:
    ppath="D:\Avisynth 2.5\plugins\"
    Import("D:\Avisynth 2.5\plugins\SmoothD2c.avs")
    Import("D:\Avisynth 2.5\plugins\RemoveDirt.avs")
    Import("D:\Avisynth 2.5\plugins\RemoveSpots.avs")
    Import("D:\Avisynth 2.5\plugins\TemporalDeGrain.avs")
    Import("D:\Avisynth 2.5\plugins\QTGMC-3.32.avs")
    Import("D:\Avisynth 2.5\plugins\FastLineDarken 1.3.avs")
    You can see that my use of a user-created variable like ppath has saved the effort of having to type the path statement in every one of those copied lines. One of those template text files contains about 60 lines of code like those shown above, each line having the name of a different script or plugin. That saves a lot of time, especially for plugins/scripts whose names I can't remember, and I copy a line for any avs I choose to import.
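In case the variable idea is unfamiliar, the same trick looks like this in Python terms (the file names below are just examples taken from the list above):

```python
# Build the Import lines from one path prefix plus a list of file names,
# instead of retyping the full path on every line.
ppath = "D:\\Avisynth 2.5\\plugins\\"
plugins = ["SmoothD2c.avs", "RemoveDirt.avs", "QTGMC-3.32.avs"]

lines = ['Import("{}{}")'.format(ppath, name) for name in plugins]
print("\n".join(lines))
```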

    I also have lines of code that are pre-composed for many procedures:

    Code:
    AssumeBFF().SeparateFields()
    AssumeTFF().SeparateFields()
    Which of those lines I copy depends on the video I'm working with. After you type either of those lines in a dozen scripts or more, you'll see how much time can be saved with copy and paste.

    Code:
    Sangnom order 0=TFF, 1=BFF, default = BFF. default strength = 48
    The line above is just a notation. I got tired of looking up the value settings for SangNom, which is an anti-alias filter. Many filters use numbers like 0, 1, 2, etc., for various settings. They are not consistent between filters. So I copy that line into my script, and then I change it as required:

    This version of my changes simply uses SangNom with its default values:

    Code:
    Sangnom()
    But sometimes the video requires different settings:
    Code:
    Sangnom(order=1, strength = 24)
    When I see an interesting script in a forum, I copy it and save it into any one of several text files that contain oddball or specialized scripts, usually with some notes from the thread that explains what the script is doing or gives the names of any additional support files required. Sometimes those little files become collections: I have an entire folder of scripts and functions from jagabo and another for scripts from poisondeathray and others. I can copy those ideas one at a time into a script and test them on a few frames. If they work well, I keep them in the script. If they don't, I move on. Very little time lost.

    Here is a procedure that I copied from the Doom9 forum:

    Code:
    # ----- repair broken lines + edges ----
    w = width
    h = height
    nnedi3_rpow2(opt=2,rfactor=2,cshift="spline64resize").TurnLeft().\
      NNEDI3().TurnRight().NNEDI3().spline64resize(w,h)
    I've used those lines in countless scripts. All I had to do was copy it and try it.

    Here is 2Bdecided's denoiser as I have it in my sample file, made a little neater than the original. In this case I've converted it to a function that I can call from any place in my script. This version begins by deinterlacing, and ends by re-interlacing. I call it in my script with one line:

    MVDegrain2B_QTGMC(last)

    Code:
    #----- 2BDecided MVDegrain idea (require old mvtools.dll) ------------
    function MVDegrain2B_QTGMC (clip)
    {
    AssumeTFF().QTGMC(preset="very fast",sharpness=0.6)
    
        source=last #save original
    
        #denoiser:
        backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
        backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
        forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
        forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
        source.MVDegrain2(backward_vec1,forward_vec1,backward_vec2,forward_vec2,thSAD=400,idx=1)
    
        clean=last #save cleaned version
    
        diff1=subtract(source,clean).Blur(0.25)
        diff2=diff1.blur(1.5,0)
        diff3=subtract(diff1,diff2) #diff3 is high-ish frequency noise only
    
        sharpen(0.4,0.0) # sharpen cleaned version a little
    
        #mix high frequency noise back in
        overlay(last,diff3.levels(128,1.0,255,0,127,coring=false),mode="add", opacity=0.7)
        overlay(last,diff3.levels(0,1.0,127,128,255,coring=false).Invert(),mode="subtract", opacity=0.7)
    
        #put cleaned chroma back in with warp sharpening
        mergechroma(clean.aWarpSharp(depth=20.0, thresh=0.75, blurlevel=2, cm=1))
        #re-interlace:
        AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()
    return last 
    }
    But what if the clip has already been deinterlaced? The function should not deinterlace and re-interlace a second time; that would be an error. So I have an alternate version of the same function, which does not include QTGMC or re-interlacing:

    MVDegrain2B(last)

    Code:
    function MVDegrain2B (clip)
    {
        source=clip #save original
    
        #denoiser:
        backward_vec2 = source.MVAnalyse(isb = true, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
        backward_vec1 = source.MVAnalyse(isb = true, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
        forward_vec1 = source.MVAnalyse(isb = false, delta = 1, pel = 2, overlap=4, sharp=1, idx = 1)
        forward_vec2 = source.MVAnalyse(isb = false, delta = 2, pel = 2, overlap=4, sharp=1, idx = 1)
        source.MVDegrain2(backward_vec1,forward_vec1,backward_vec2,forward_vec2,thSAD=400,idx=1)
    
        clean=last #save cleaned version
    
        diff1=subtract(source,clean).Blur(0.25)
        diff2=diff1.blur(1.5,0)
        diff3=subtract(diff1,diff2) #diff3 is high-ish frequency noise only
    
        sharpen(0.4,0.0) # sharpen cleaned version a little
    
        #mix high frequency noise back in
        overlay(last,diff3.levels(128,1.0,255,0,127,coring=false),mode="add", opacity=0.7)
        overlay(last,diff3.levels(0,1.0,127,128,255,coring=false).Invert(),mode="subtract", opacity=0.7)
    
        #put cleaned chroma back in with warp sharpening
        mergechroma(clean.aWarpSharp(depth=20.0, thresh=0.75, blurlevel=2, cm=1))
    return last 
    }
    It's the same function, but the first and last lines are omitted.

    Work flow: why not keep all the gigabytes of gathered clips as they are, but work with only one series of them at a time? For example, work on 15 or 20 minutes of finished video, get rid of the intermediate work AVIs, and just save the final MPEG output. Then work on another few minutes, save those final MPEGs, and start on another section. The MPEGs can be joined in an editor later. Because I don't know exactly how you're arranging those clips, this is just a suggestion.
    Last edited by sanlyn; 19th Mar 2014 at 11:23.
    Quote Quote  
  26. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    Search Comp PM
    I've not tried chubbyrain2, so would offer the general advice to only deinterlace once, and never to work on separated fields separately. It's true that loads of scripts are written to work with separated fields, but I have VHS camcorder tapes that will not denoise properly like that: one field is subtly but consistently different from the other - the difference is swamped by the noise originally, but is brought out by denoising. Denoising deinterlaced frames smooths out this difference (a good thing). Denoising separated fields in two separate filter chains maintains the difference, which leads to a slight flicker in the final video.

    If you are re-interlacing at the end, and if you don't crop or stabilise or trim by 1 field etc (essentially if you return the lines from the original fields at the end of the script, rather than the lines that were invented by the deinterlacer) then the choice of deinterlacer is much less important. If you somehow keep/output the invented lines at the end of the script (e.g. full progressive output, cropping in a way that swaps the fields over, vertical scaling, etc) then the choice of deinterlacer is crucial.


    All my tapes that have been through the same process have the same chroma offset. Different generation tapes have a different vertical chroma offset. Different camcorders + VCRs generate a different horizontal offset. Vertical offsets are usually a specific number of lines - no subjectivity about it, and the right answer will be obvious on critical content. Horizontal offsets are more subjective, and sometimes content dependent. The warpsharp trick warps the chroma edges to the luma edges - it'll fix-up small chroma offsets anyway, but it's best to put it right manually first.


    Cheers,
    David.
    Quote Quote  
  27. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    Hmm. Definitely some info that deserves notice. Thanks for that.
    Last edited by sanlyn; 19th Mar 2014 at 11:23.
    Quote Quote  
  28. Formerly 'vaporeon800' Brad's Avatar
    Join Date
    Apr 2001
    Location
    Vancouver, Canada
    Search PM
    Originally Posted by 2Bdecided View Post
    If you are re-interlacing at the end, and if you don't crop or stabilise or trim by 1 field etc (essentially if you return the lines from the original fields at the end of the script, rather than the lines that were invented by the deinterlacer)
    Is there a mode for QTGMC that keeps the original fields intact? By default they are unrecoverable (same with Bob).
    Quote Quote  
  29. Originally Posted by vaporeon800 View Post
    Is there a mode for QTGMC that keeps the original fields intact?
    I don't think so. But you could easily do that yourself by recombining its output with the original video. It will likely look bad though. Example:

    Code:
    s=AviSource("filename.avi")
    q=QTGMC(s)
    
    sfields=SeparateFields(s) # the original fields
    qfields=SeparateFields(q).SelectEvery(4,1,2) # throw out fields that correspond to the original fields
    Interleave(sfields, qfields) # weave the original fields with the remaining qtgmc fields
    SelectEvery(4, 0,1,3,2) # the last two are out of order, correct the order
    Weave() # weave the fields back into frames
    The original fields won't have the noise reduction and edge smoothing that the qtgmc fields have.
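If it helps, jagabo's field bookkeeping can be traced on paper with symbolic field names. A rough Python sketch (TFF assumed, one source frame; these helpers only mimic what the AviSynth calls do to the field order, nothing more):

```python
# Mimic SeparateFields / SelectEvery / Interleave / Weave on symbolic field
# names (TFF assumed, one source frame) to see where the originals end up.
def separate_fields(frames):
    return [f for frame in frames for f in frame]   # (top, bottom) pairs

def select_every(fields, n, *offsets):
    return [fields[i + k] for i in range(0, len(fields), n) for k in offsets]

def interleave(a, b):
    return [f for pair in zip(a, b) for f in pair]

def weave(fields):
    return list(zip(fields[0::2], fields[1::2]))

s = [("s0t", "s0b")]                    # one original interlaced frame
q = [("q0t", "q0b"), ("q1t", "q1b")]    # QTGMC doubles the frame rate

sf = separate_fields(s)
qf = select_every(separate_fields(q), 4, 1, 2)
out = weave(select_every(interleave(sf, qf), 4, 0, 1, 3, 2))
print(out)   # -> [('s0t', 'q0b'), ('q1t', 's0b')]
```

Each output frame ends up holding one untouched original field plus one QTGMC-invented field, which is exactly the mismatch jagabo warns will likely look bad.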
    Last edited by jagabo; 26th Oct 2013 at 17:26.
    Quote Quote  
  30. Member
    Join Date
    Jan 2006
    Location
    South Africa
    Search Comp PM
    I am progressing very slowly.

    I have been experimenting with the clips and struggling with the scripts, but at least I have a few clips that I can compare.

    I tried HCEnc, but with still more scripts to deal with, I just thought I'd had enough.
    I bought TMPGEnc Video Mastering Works 5, but would like to get comments on the settings:

    This one seems fine



    I chose the MPEG setting, but perhaps the setting under Custom Output Template outputs might also be suitable



    My biggest uncertainty is the bitrate. The first screen clip is the default. In this screen clip, the fps, the rate control mode and the display mode are wrong.



    I changed the fps to 25 fps;
    the constant bitrate to VBR (VBR constant quality or VBR average bitrate: which one should I choose? I chose average);
    and progressive to interlaced.

    With regards to the bitrate, I have read different opinions. Most feel that the bitrate should be high (6000-9000).

    The options here are:
    • bitrate
    • maximum bitrate
    • minimum bitrate
    I gave maximum and minimum both the 9000 value. Advice here, please?
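One common sanity check when picking an average VBR bitrate is to work backwards from disc capacity and running time. A rough Python sketch (the 4.37 GiB usable capacity and 256 kbps audio figures below are assumptions for illustration, not recommendations, and DVD-Video also has a hard ceiling of about 9.8 Mbps regardless of capacity):

```python
# Work backwards from disc capacity and running time to an average bitrate.
def dvd_avg_video_kbps(minutes, audio_kbps=256, capacity_gib=4.37):
    total_kbits = capacity_gib * 1024**3 * 8 / 1000   # disc size in kilobits
    return total_kbits / (minutes * 60) - audio_kbps  # leave room for audio

print(round(dvd_avg_video_kbps(60)))    # an hour leaves a very high average
print(round(dvd_avg_video_kbps(120)))   # two hours needs a lower average
```

So for short programs the 6000-9000 range is easily affordable; for longer programs the average has to come down or the material has to be split across discs.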



    Thanks for any opinions
    Quote Quote  


