VideoHelp Forum

Page 3 of 5
Results 61 to 90 of 128
  1. I chose to make the lighter lines darker because the video seemed a little too bright overall. There's virtually nothing down near Y=16, and there are peaks over Y=235, some even pegged at 255.

    Regarding the threshold and multiplier: after subtracting 120 the values range from 0 to 135. When you multiply that by 3, any value that was 85 or over becomes 255. 85 corresponds to pixels of Y = 85+120 = 205 in the original video. I chose the multiplier of 3 somewhat arbitrarily.

    The Levels filter is more intuitive for this. But I always forget whether it "cores" (limits values to 16-235) before the conversion, after it, or both, and what the argument is to prevent that... I just checked: by default it cores both before and after the conversion. If you want to use Levels() you should add "coring=false" to prevent it from clipping superblacks and superwhites before the operation, and to allow the mask to range all the way from 0 to 255.

    Code:
    ...Levels(85,1,205,0,255, coring=false)...
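    The subtract-then-multiply arithmetic described above can be sketched in Python (a stand-in for the AviSynth pixel math, not its actual implementation; the 120 offset and ×3 multiplier are the values from this post, and the clamp mirrors what the 8-bit pipeline does):

```python
def mask_value(y, offset=120, mult=3):
    """Build a mask level from a luma value: subtract, scale, clamp to 8 bits."""
    v = (y - offset) * mult
    return max(0, min(255, v))

# Pixels at or below Y=120 produce a fully transparent mask (0),
# and anything at Y=205 or above is fully opaque (255).
print(mask_value(120), mask_value(205), mask_value(255))  # 0 255 255
```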
  2. Try this:

    Code:
    AviSource("1986_1011 Pool Sample.avi") 
    src = last
    
    SeparateFields()
    AssumeFrameBased()
    SeparateFields()
    evenlines = SelectEven()
    oddlines = SelectOdd()
    
    umask = oddlines.UtoY().BilinearResize(oddlines.width, oddlines.height).Levels(128,1,140,0,255, coring=false)
    oddlines = Overlay(oddlines, oddlines.ColorYUV(gain_y=-6), mask=umask)
    
    Interleave(evenlines, oddlines)
    Weave()
    AssumeFieldBased()
    Weave()
    
    Interleave(src, last, umask.SelectEven().PointResize(width,height))
    I used the U channel to build the mask. The last line interleaves the original video, the processed video, and the mask that was used.
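    Roughly speaking, Overlay with a mask does a per-pixel linear blend weighted by the mask: 0 keeps the base pixel, 255 takes the overlay pixel. A small Python sketch of that idea (an approximation for illustration, not Overlay's exact integer math):

```python
def blend(base, overlay, mask):
    """Approximate per-pixel behavior of Overlay(base, overlay, mask=mask):
    mask 0 keeps the base pixel, mask 255 takes the overlay pixel."""
    return round(base + (overlay - base) * mask / 255)

# Where the U-based mask is opaque, the darkened (gain_y=-6) pixel wins.
print(blend(100, 94, 0), blend(100, 94, 255))  # 100 94
```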
  3. Thanks! This is very helpful.

    Researching gain, gamma, offset, etc., I found this at https://www.provideocoalition.com/whats-in-a-name/:
    Image
    [Attachment 53117 - Click to enlarge]


    I'm not finding any detail specific to Avisynth, but based on this, gain is applied (increasing linearly) from 0 at Y=0 to 100% of the multiplier (3 in your code) at Y=255. Is that correct for Avisynth?

    Edit: I posted this before I saw your last post... Thanks again!
  4. Originally Posted by jagabo View Post
    Try this:

    I used the U channel to build the mask. The last line interleaves the original video, the processed video, and the mask that was used.
    Now that is VERY cool!

    I'm going to study that for a while!!!
  5. Try this:

    Code:
    function GreyRamp()
    {
       black = BlankClip(color=$000000, width=1, height=256, pixel_type="RGB32", length=512)
       white = BlankClip(color=$010101, width=1, height=256, pixel_type="RGB32", length=512)
       StackHorizontal(black,white)
       StackHorizontal(last, last.RGBAdjust(rb=2, gb=2, bb=2))
       StackHorizontal(last, last.RGBAdjust(rb=4, gb=4, bb=4))
       StackHorizontal(last, last.RGBAdjust(rb=8, gb=8, bb=8))
       StackHorizontal(last, last.RGBAdjust(rb=16, gb=16, bb=16))
       StackHorizontal(last, last.RGBAdjust(rb=32, gb=32, bb=32))
       StackHorizontal(last, last.RGBAdjust(rb=64, gb=64, bb=64))
       StackHorizontal(last, last.RGBAdjust(rb=128, gb=128, bb=128))
    }
    
    
    function AnimateGain(clip vid, int gain)
    {
    	ColorYUV(vid, gain_y = gain)
    	Subtitle("gain="+String(gain))
    }
    
    
    function AnimateOff(clip vid, int off)
    {
    	ColorYUV(vid, off_y = off)
    	Subtitle("off="+String(off))
    }
    
    function AnimateCont(clip vid, int cont)
    {
    	ColorYUV(vid, cont_y = cont)
    	Subtitle("cont="+String(cont))
    }
    
    
    function AnimateGamma(clip vid, int gamma)
    {
            gamma = gamma < -255 ? -255 : gamma
    
    	ColorYUV(vid, gamma_y = gamma)
    	Subtitle("gamma="+String(gamma))
    }
    
    GreyRamp()
    ConvertToYUY2(matrix="PC.601")
    
    v1=Animate(0,512, "AnimateGain", -256, 256)
    v2=Animate(0,512, "AnimateOff", -256, 256)
    v3=Animate(0,512, "AnimateCont", -256, 256)
    v4=Animate(0,512, "AnimateGamma", -256, 256)
    
    StackHorizontal(v1,v2,v3,v4)
    
    TurnRight().Histogram().TurnLeft()
    ConvertToRGB(matrix="pc.601")
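    The chain of StackHorizontal/RGBAdjust doublings in GreyRamp builds a 256-column ramp containing every 8-bit value exactly once. The same construction in Python, with list values standing in for column brightness:

```python
# Start with one black (0) and one near-black (1) column, then repeatedly
# append a copy of the ramp with the next power of two added to every column.
ramp = [0, 1]
for step in [2, 4, 8, 16, 32, 64, 128]:
    ramp = ramp + [v + step for v in ramp]

print(len(ramp))                  # 256 columns
print(ramp == list(range(256)))   # True: every value 0..255, in order
```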
    Last edited by jagabo; 4th May 2020 at 19:05.
  6. Originally Posted by jagabo View Post
    Try this:
    An animated answer!

    More to study!
  7. Originally Posted by jagabo View Post

    So I tried using a mask based on brightness to only apply it to brighter parts of the picture:

    Code:
    ...
    bmask = oddlines.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
    ...
    I have been trying to fully understand how the masks work. I modified the Greyramp code you just sent, and I believe I have reproduced the mask you coded back in post #45.
    Code:
    function GreyRampOffGain()
    {
       black = BlankClip(color=$000000, width=1, height=256, pixel_type="RGB32", length=512)
       white = BlankClip(color=$010101, width=1, height=256, pixel_type="RGB32", length=512)
       StackHorizontal(black,white)
       StackHorizontal(last, last.RGBAdjust(rb=2, gb=2, bb=2))
       StackHorizontal(last, last.RGBAdjust(rb=4, gb=4, bb=4))
       StackHorizontal(last, last.RGBAdjust(rb=8, gb=8, bb=8))
       StackHorizontal(last, last.RGBAdjust(rb=16, gb=16, bb=16))
       StackHorizontal(last, last.RGBAdjust(rb=32, gb=32, bb=32))
       StackHorizontal(last, last.RGBAdjust(rb=64, gb=64, bb=64))
       StackHorizontal(last, last.RGBAdjust(rb=128, gb=128, bb=128))
    }
    
    
    function AnimateOffGain(clip vid, int gain)
    {
    	ColorYUV(vid, gain_y = gain, off_y = -120)
    	Subtitle("gain/off="+String(gain)+"/"+"-120")
    }
    
    
    GreyRampOffGain()
    ConvertToYUY2(matrix="PC.601")
    
    Animate(0,512, "AnimateOffGain", 512, 512)
    
    TurnRight().Histogram().TurnLeft()
    ConvertToRGB(matrix="pc.601")
    Visualizing these masks is helpful for me.
  8. Originally Posted by jagabo View Post
    Try this:
    Code:
    ...
    umask = oddlines.UtoY().BilinearResize(oddlines.width, oddlines.height).Levels(128,1,140,0,255, coring=false)
    oddlines = Overlay(oddlines, oddlines.ColorYUV(gain_y=-6), mask=umask)
    ...
    I used the U channel to build the mask. The last line interleaves the original video, the processed video, and the mask that was used.
    This looks great! I don't see how it could get any better with regard to removing the Weird Lines.

    Can we call this success?
  9. I think this is what you were trying to do:

    Code:
    function GreyRampOffGain()
    {
       black = BlankClip(color=$000000, width=1, height=256, pixel_type="RGB32", length=512)
       white = BlankClip(color=$010101, width=1, height=256, pixel_type="RGB32", length=512)
       StackHorizontal(black,white)
       StackHorizontal(last, last.RGBAdjust(rb=2, gb=2, bb=2))
       StackHorizontal(last, last.RGBAdjust(rb=4, gb=4, bb=4))
       StackHorizontal(last, last.RGBAdjust(rb=8, gb=8, bb=8))
       StackHorizontal(last, last.RGBAdjust(rb=16, gb=16, bb=16))
       StackHorizontal(last, last.RGBAdjust(rb=32, gb=32, bb=32))
       StackHorizontal(last, last.RGBAdjust(rb=64, gb=64, bb=64))
       StackHorizontal(last, last.RGBAdjust(rb=128, gb=128, bb=128))
    }
    
    
    function AnimateOffGain(clip vid, int gain)
    {
            ColorYUV(vid, off_y=-120) # always subtract 120 from Y
    	ColorYUV(gain_y = gain) # apply requested gain
    	Subtitle("gain/off="+String(gain)+"/"+"-120")
    }
    
    
    GreyRampOffGain()
    ConvertToYUY2(matrix="PC.601")
    
    Animate(0,512, "AnimateOffGain", 0, 512) # step through gain values from 0 to 512
    
    TurnRight().Histogram().TurnLeft()
    ConvertToRGB(matrix="pc.601")
    I had to change your line:

    Code:
    ColorYUV(vid, gain_y = gain, off_y = -120)
    Because when you use both, gain_y is applied before off_y.

    Logically:

    Y' = (Y * gain) + off

    not

    Y' = (Y + off) * gain
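    The difference is easy to check numerically. A quick Python sketch (treating gain as a plain multiplier, as in the formulas above, and ignoring the 8-bit clamp):

```python
def gain_then_offset(y, gain, off):
    # What ColorYUV does when both parameters are given: gain first, then offset.
    return y * gain + off

def offset_then_gain(y, gain, off):
    # What the single-call version was hoped to do.
    return (y + off) * gain

print(gain_then_offset(150, 3, -120))  # 330
print(offset_then_gain(150, 3, -120))  # 90
```

    Same inputs, very different results, which is why the two ColorYUV calls have to be split.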
  10. Originally Posted by jagabo View Post
    Because when you use both gain_y is applied before off_y.

    logically:

    Y' = (Y * gain) + off

    not

    Y' = (Y + off) * gain
    Thanks for teaching me that the order of the parameters matters. This clarifies the interaction for me... and now I have a script to visualize the result!

    Actually, I guess I need to follow the order presented on the Avisynth page:
    ColorYUV(clip [,
    float gain_y, float off_y, float gamma_y, float cont_y,
    ... )
  11. If you specify the parameters by name they don't have to be in order. But that doesn't mean they are applied in the order you specify them; they're always applied in the filter's fixed internal order. If you just enter the numbers, they have to be in that specified order. So

    Code:
    ColorYUV(1, 2, 3, 4)
    ColorYUV(gain_y=1, off_y=2, gamma_y=3, cont_y=4)
    ColorYUV(cont_y=4, gamma_y=3, off_y=2, gain_y=1)
    all do the same thing.
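    The same point holds in any language with keyword arguments. A Python analogy (apply_color is a hypothetical stand-in for ColorYUV's fixed internal order, not its real code):

```python
def apply_color(y, gain_y=1, off_y=0):
    # Internally the order is fixed: gain first, then offset,
    # no matter how the caller orders the keywords.
    return y * gain_y + off_y

# Both calls are identical despite the different keyword order.
print(apply_color(100, gain_y=3, off_y=-120))  # 180
print(apply_color(100, off_y=-120, gain_y=3))  # 180
```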
  12. Regarding the script that used the U values to generate a mask -- I don't know if it will work for all your videos. There are only a few basic colors in the pool sample. It may just be coincidental that the mask works. It may not work in shots with a lot of other colors. You'll have to check.
  13. Originally Posted by jagabo View Post
    Regarding the script that used the U values to generate a mask -- I don't know if it will work for all your videos. There's are only a few basic colors in the pool sample. It may just be coincidental that the mask works. It may not work in shots with a lot of other colors. You'll have to check.
    Thanks, I'll do that. I was also thinking about (for testing) reversing your code to brighten the even lines instead, and increasing the brightening levels to check the fringes and tweak settings (if necessary in the future).

    Another failed (so far) experiment is running your code on a V410 video. I tried various permutations of this:
    Code:
    #AviSource("1986_1011 Meema Pool V410 unc.avi")
    AviSource("1986_1011 Meema Pool V410 unc.avi", "fourcc"="v410")
    #DirectShowSource("1986_1011 Meema Pool V410 unc.avi")
  14. I don't have a VFW decoder for v410. I would use:

    LWLibavVideoSource("1986_1011 Meema Pool V410 unc samp.avi", format="YUY2")

    You'll need the LSMASH package for AviSynth. That reduces the video to 8 bit though.
  15. Intermittent fields in the original have lizard skin:
    Image
    [Attachment 53123 - Click to enlarge]


    Is that just a limitation of the original format?

    Edit: Actually, I should go back and verify if these captures were done with the camera (player) DNR on or off...
    Last edited by GrouseHiker; 5th May 2020 at 10:09.
  16. Originally Posted by jagabo View Post
    I don't have a VFW decoder for v410. I would use:

    LWLibavVideoSource("1986_1011 Meema Pool V410 unc samp.avi", format="YUY2")

    You'll need the LSMASH package for AviSynth. That reduces the video to 8 bit though.
    Thanks - Maybe I go down that rabbit hole later...
  17. Something has blown up for me, so I went back to simplicity and tried running this:
    Code:
    DirectShowSource("1986_1011 Pool Sample DV.avi")
    SeparateFields()
     AssumeFrameBased().SeparateFields()
    This also generated errors so I tried one SeparateFields on a video with 117 frames. This produced 234 fields as expected.

    When I run the code above, it produces 32,496 "images" or something, but only the first image shows. The remainder are not visible. At the bottom it reads:
    "Error reading source frame 1: vdffvideosource - DecodeFrame - discontinuity larger than in options (want stream:70 target:70, got 139, 69 bad frames)"

    Same problem on YUY2 files.

    I restarted, reinstalled AviSynth+, and reinstalled VirtualDub... restarted again...

    Added: went back and downloaded the first file I posted... same problem. By the way, this computer has Norton Security, which watches every move...
    Last edited by GrouseHiker; 5th May 2020 at 20:51.
  18. Tried running the Avisynth script in MPC-HC & seems ok:
    Image
    [Attachment 53128 - Click to enlarge]


    Also tried VirtualDub 32 bit - same problem.

    This is the VirtualDub Info. Strange it says 422, since the original is DV 4:1:1.
    Image
    [Attachment 53132 - Click to enlarge]
    Last edited by GrouseHiker; 5th May 2020 at 21:27.
  19. I don't know why your double SeparateFields script is failing. But DirectShowSource() is usually the last choice of source filters. It's not frame accurate (DV AVI should be OK though) and its behavior depends on what DirectShow filters you have installed. I've never seen the issue you're having though. Is AviSource() not working? That's pretty much the standard for DV AVI. If that doesn't work you can try LWlibavVideoSource() as I mentioned earlier.

    Almost all DV decoders output YUV 4:2:2. Usually YUY2.

    With DirectShowSource() try specifying pixel_type="YUY2".

    Code:
    DirectShowSource("1986_1011 Pool Sample DV.avi", pixel_type="YUY2")
  20. Originally Posted by jagabo View Post
    With DirectShowSource() try specifying pixel_type="YUY2".

    Code:
    DirectShowSource("1986_1011 Pool Sample DV.avi", pixel_type="YUY2")
    Tried it - same issue.

    Same problem occurs on YUY2 files that were captured with the Magewell card. Those are 4:2:2.

    Media info for this (1986_1011 Pool Sample DV.avi) file is:
    Format : DV
    Commercial name : DVCPRO
    Codec ID : dvsd
    Codec ID/Hint : Sony
    Duration : 3 s 904 ms
    Bit rate mode : Constant
    Bit rate : 24.4 Mb/s
    Width : 720 pixels
    Height : 480 pixels
    Display aspect ratio : 4:3
    Frame rate mode : Constant
    Frame rate : 29.970 (30000/1001) FPS
    Original frame rate : 29.970 (29970/1000) FPS
    Standard : NTSC
    Color space : YUV
    Chroma subsampling : 4:1:1
    Bit depth : 8 bits
    Scan type : Interlaced
    Scan order : Bottom Field First

    Since the script runs in MPC-HC, I guess it must be a VirtualDub problem.
  21. By the way, I can't get DV files to run via avisource(). I had searched and found DirectShowSource() as an alternate.

    Code:
    #avisource("1986_1011 Pool Sample2 YUY2 unc.avi")
    DirectShowSource("1986_1011 Pool Sample DV.avi", pixel_type="YUY2")
    The strange thing is that a single SeparateFields() works fine.
  22. Tried LWlibavVideoSource and got the same message I get with AviSource on DV files - "av-SeparateFields.avs does not have a video stream."
  23. Looks like I recently installed MagicYUV and FFMpegPlugin.
    Image
    [Attachment 53134 - Click to enlarge]


    When I reinstalled VirtualDub earlier this evening, I didn't delete all the plugin folders and start over...
  24. Originally Posted by GrouseHiker View Post
    Tried LWlibavVideoSource and got the same message I get with AviSource on DV files - "av-SeparateFields.avs does not have a video stream."
    What exactly is that script? Are you using VirtualDub2?
  25. Yes - VirtualDub2-64 and also tried 32 bit version.

    This one won't run:
    Code:
    LWlibavVideoSource("1986_1011 Pool Sample DV.avi")
    SeparateFields()
     AssumeFrameBased().SeparateFields()
    This one will (edit: but still produces the problem):
    Code:
    DirectShowSource("1986_1011 Pool Sample DV.avi")
    SeparateFields()
     AssumeFrameBased().SeparateFields()
    However, I have been using DirectShowSource all along. Sounds like I should work out something different for DV files, but I'm really not planning to use DV via FireWire anyway. These were brought in from the camera before I got the Magewell card.

    Also, just tried LWlibavVideoSource("1986_1011 Pool Sample DV.avi", format="YUY2") - wouldn't load.


    I'll try a different computer tomorrow... Thanks for helping!
    Last edited by GrouseHiker; 6th May 2020 at 15:49. Reason: Clarification: Problem still remains
  26. Try running the AviSynth diagnostic tool, AviSynth Info Tool. See if it reports any errors.

    Try installing Cedocida -- a VFW DV codec. You need a VFW DV decoder to use AviSource().

    By the way, when something doesn't work include any error messages you get. There are often hints there.
  27. The only error message I found (bottom of VirtualDub screen) was the one I posted above:
    "Error reading source frame 1: vdffvideosource - DecodeFrame - discontinuity larger than in options (want stream:70 target:70, got 139, 69 bad frames)"

    I searched for menu items that output error messages, but couldn't find anything.

    I had already been searching on DV codecs for a few hours and had downloaded cedocida_0.2.3_bin.zip. I got hung up on their Install.txt file, which states:
    For installing Cedocida DV, DVCPRO25 and DVCPRO50 Codec (caution: your current installed DV/DVCPRO25/DVCPRO50 codecs will be replaced):

    ==> right click on "cedocida.inf" and select install

    For installing *only* Cedocida DV Codec and not DVCPRO25/50 (caution: your current installed DV codec will be replaced):

    ==> right click on "cedocida_dv_only.inf" and select install
    I was trying to figure out if the install will wipe out a codec that should stay:
    Image
    [Attachment 53147 - Click to enlarge]


    I tried to ID what these codecs do but gave up.

    Should I go ahead and install?

    AvisynthInfo (Opti790):
    [OS/Hardware info]
    Operating system: Windows 10 (x64) (Build 17763)

    CPU: Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz / Sandy Bridge (Core i5)
    MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX
    4 physical cores / 4 logical cores


    [Avisynth info]
    VersionString: AviSynth+ 3.5 (r3106, 3.5, x86_64)
    VersionNumber: 2.60
    File / Product version: 3.5.0.0 / 3.5.0.0
    Interface Version: 7
    Multi-threading support: Yes
    Avisynth.dll location: C:\WINDOWS\SYSTEM32\avisynth.dll
    Avisynth.dll time stamp: 2020-04-02, 22:27:26 (UTC)
    PluginDir2_5 (HKLM, x64): C:\Program Files (x86)\AviSynth+\plugins64
    PluginDir+ (HKLM, x64): C:\Program Files (x86)\AviSynth+\plugins64+


    [C++ 2.6 Plugins (64 Bit)]
    C:\Program Files (x86)\AviSynth+\plugins64+\ConvertStacked.dll [2020-04-02]
    C:\Program Files (x86)\AviSynth+\plugins64+\DirectShowSource.dll [2020-04-02]
    C:\Program Files (x86)\AviSynth+\plugins64+\ImageSeq.dll [2020-04-02]
    C:\Program Files (x86)\AviSynth+\plugins64+\Shibatch.dll [2020-04-02]
    C:\Program Files (x86)\AviSynth+\plugins64+\TimeStretch.dll [2020-04-02]
    C:\Program Files (x86)\AviSynth+\plugins64+\VDubFilter.dll [2020-04-02]

    [Scripts (AVSI)]
    C:\Program Files (x86)\AviSynth+\plugins64+\colors_rgb.avsi [2020-03-12]

    [Uncategorized files]
    C:\Program Files (x86)\AviSynth+\plugins64+\colors_rgb.txt [2020-03-12]
    Last edited by GrouseHiker; 6th May 2020 at 15:39.
  28. Originally Posted by GrouseHiker View Post

    also, just tried LWlibavVideoSource("1986_1011 Pool Sample DV.avi", format="YUY2") - wouldn't load
    Are you using a recent lsmash version?

    Try this branch. Frequently updated and stable, works with DV
    https://github.com/HolyWu/L-SMASH-Works/releases

    One benefit of using lsmash or ffms2 is that they can return the original 4:1:1 format for NTSC DV. This gives you options to convert the chroma in the manner you see best.

    LWlibavVideoSource("1986_1011 Pool Sample DV.avi") - That will return the original 4:1:1

    Other decoders will upsample or convert, often using a nearest neighbor algorithm. The chroma samples are simply duplicated, leaving you with blocky color edges. It might not be as noticeable on VHS or lower quality sources. Cedocida cannot output 4:1:1.

    There are times when you might want to use nearest neighbor, if you're planning to go back to 4:1:1 for some reason, it can be lossless as chroma samples are just duplicated and discarded. (But avisynth does not perform a true nearest neighbor resize because of the way it interprets the chroma center; there are some workarounds you have to use)
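    The lossless round trip described above can be sketched in Python: duplicating each 4:1:1 chroma sample gives 4:2:2, and discarding every second sample recovers the original exactly (an idealized sketch of nearest neighbor resampling, ignoring the chroma-center caveat mentioned above):

```python
def upsample_411_to_422(chroma):
    """Nearest-neighbor: duplicate each 4:1:1 chroma sample to make 4:2:2."""
    out = []
    for c in chroma:
        out.extend([c, c])
    return out

def downsample_422_to_411(chroma):
    """Drop every second sample to go back to 4:1:1."""
    return chroma[::2]

row = [100, 130, 90, 200]        # one row of 4:1:1 chroma samples
up = upsample_411_to_422(row)    # [100, 100, 130, 130, 90, 90, 200, 200]
print(downsample_422_to_411(up) == row)  # True: the round trip is lossless
```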
  29. Originally Posted by poisondeathray View Post
    Try this branch. Frequently updated and stable, works with DV
    https://github.com/HolyWu/L-SMASH-Works/releases
    Thanks! Copied appropriate LSMASHSource.dll to:
    C:\Program Files (x86)\AviSynth+\plugins+
    C:\Program Files (x86)\AviSynth+\plugins64+

    This didn't work:
    LSMASHVideoSource("1986_1011 Pool Sample DV.avi").info
    Got the same error "...does not have a video stream."
    Do I need parameters?

    UPDATE:
    Code:
    LWlibavVideoSource("1986_1011 Pool Sample DV.avi").info
    works
    Last edited by GrouseHiker; 6th May 2020 at 12:41.
  30. This loads the file and runs, but the double SeparateFields problem remains:
    Code:
    LWlibavVideoSource("1986_1011 Pool Sample DV.avi")
    SeparateFields()
     AssumeFrameBased().SeparateFields()
    I will try on a different computer in a few hours


