VideoHelp Forum
  1. If I load the script into MeGUI, I can see the green bar in the preview window prior to encoding. It's just barely visible when resizing to 704x396, but without resizing it is very in-your-face. I'll try a different editor to see if it's just something weird with MeGUI.

    I may end up doing both a new DVD and an x264 encode, and if I did a new DVD then I'd look at those options for getting an exact 25fps. But also like I mentioned a few posts up, someone is mailing me a copy of their DVDR version which is encoded in PAL rather than NTSC and I believe it looks like the original DVD recording before someone converted it to NTSC. So in this case, the script used to improve the PQ would still be applied but the issue with converting back to 25fps should already be taken care of. In any case, I do enjoy the opportunity to learn more about using Avisynth for these projects.
  2. Still waiting on the new copy to arrive in mail before proceeding further with this one, but I also have some other VHS captures that could benefit from similar filtering with color corrections.

    Are there any good guides covering the basics of performing color correction on video such as this? I can experiment with settings and figure it out, but it would be helpful if there were a basic tutorial covering how to go about it all.
  3. The green bar at the right may be a problem with one of the filters in AviSynth. In the past some required mod16 or mod8 frame sizes. If the versions you have are very old try updating them one at a time to see if it goes away. This is true of encoders and playback decoders too.

    Color correction in AviSynth is pretty difficult. Partly because the feedback loop is slow: edit script, reload video, edit script, reload video... compared to an NLE, where you move a slider and watch the image change. And partly because much of it works in YUV, which isn't as intuitive as RGB.

    What I did for the green droop across the frame was create two videos, the original and one where the V channel was moved up 12 units. Then I used a linear horizontal gradient as an alpha channel to mix the two images together with Overlay(). The left edge of the resulting image was 100 percent the new V, the right was 100 percent the original V, the middle was a 50:50 mix, etc. The net result was to pull up the V channel more at the left.
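    In script form the technique looks roughly like this (a minimal sketch; it assumes a GreyRamp() helper that generates a left-to-right black-to-white gradient, like the one posted later in this thread):

    Code:
    # source clip is in "last"
    ramp = GreyRamp().BilinearResize(width, height) # horizontal gradient, black at left
    shifted = ColorYUV(off_v=12)                    # copy with the V channel raised 12 units
    Overlay(shifted, last, mask=ramp)               # left = shifted V, right = original V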

    White balance can sometimes be done in RGB with RGBAdjust(). But with this video I worked in YUV. First I located a dark part of the image that I thought should be nearly black. I used VideoScope() to view the U and V channels:

    Code:
    ConvertToYUY2() # videoscope requires YUY2
    VideoScope("both", true, "U", "V", "UV")
    Image
    [Attachment 47514 - Click to enlarge]


    At the top left is the dark patch from the video. The rest are graphs of the U and V channels (U increases left to right, V increases bottom to top). What you want to do is move the little white dot to where the crosshairs meet (perfect grey). In this example U needs to move about 4 units to the right (+4) and the V channel needs to move about 11 units up (+11). In the original sample I used it was 4 and 9. I did the same for the brights and ended up with two versions of the video:

    Code:
    darks = ColorYUV(off_u=4, off_v=9)
    brights = ColorYUV(off_u=18, off_v=3)
    Then I blended the two videos together using an alpha mask generated from the Y channel (brightness).

    Code:
    Overlay(darks, brights, mask=ColorYUV(cont_y=50))
    The end result is that dark parts of the picture get the "darks" adjustment, the bright parts of the picture get the "brights" adjustment, and parts in the middle get a weighted adjustment based on their brightness.
  4. thanks again jagabo

    I'll try your suggestions regarding the green bar and see if that solves it.

    And thanks for taking the time to lay out your approach to color correction here. I will take this info and do my own experiments with some other videos and see if I can make some progress. I don't want to be a pest and keep posting for help with every video I'd like to work on, so it would be great to learn these skills myself. You've been a huge help.
  5. +1 jagabo: color and gamma correction are MUCH easier to do in your NLE. I use Vegas, but almost any NLE will have pretty good correction tools.
  6. Is the color correction better quality in an NLE such as Vegas? I'm fairly well versed in Vegas, but I was under the impression that Avisynth gives better results.

    I'm more concerned with quality than ease of use
  7. I doubt AVISynth would give better results. Its main advantage over an NLE is that AVISynth lets the user write scripts that can adapt to what is found in each pixel on each frame (or field) of video. However, its lack of interactivity makes it a lousy tool for any activity that requires feedback. This is particularly true of gamma and color correction.

    If you are interested in quality, then the first thing you must do is get a colorimeter (like a Spyder) and calibrate your monitor. Otherwise you will be adjusting all of your videos to a false reference and will end up with a mess. Then, you should learn to use not only the color corrector tools built into Vegas (color corrector, secondary color corrector, levels, color curve, etc.) but also some of the amazing color tools available as plugins, like those from Hitfilm.

    Finally, you need to understand and then use the various "scopes" available in Vegas which provide you with important graphic feedback on color and gamma levels.
  8. You can use the Animate() function in AviSynth to get some interactivity. For example:

    Code:
    ##########################################################################
    
    function _VaryHue(clip v, float value)
    {
        Tweak(v, hue=value) # adjust hue
        Subtitle("hue="+string(value)) #show hue value used
    }
    
    function VaryHue(clip v, int "frame_num", int "num_frames", float "start_val", float "end_val")
    {
        Trim(v, frame_num,frame_num)
        Loop(num_frames, 0, 0)
        Animate(0,num_frames, "_VaryHue", last, start_val, last,end_val)
    }
    
    
    ##########################################################################
    
    
    Mpeg2Source("clip4.d2v", Info=3) 
    TFM()
    TDecimate()
    VaryHue(frame_num=1630, num_frames=360, start_val=0.0, end_val=360.0)
    That will vary hue from 0.0 to 360.0 over 360 copies of frame 1630 (after IVTC). You can open the script in VirtualDub and scrub around to see the result. Once you've found the hue value you want you would replace VaryHue() with Tweak().
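    For example, if the sweep shows the look you want at around hue=12 (a hypothetical value), the final script would simply use:

    Code:
    Tweak(hue=12.0) # the value read off the VaryHue() subtitle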
    Last edited by jagabo; 17th Dec 2018 at 11:56.
  9. So I've been trying to apply the tips in this thread to another project - an old TV recording of another western. This one was very dark and had some greenish tint to the picture. Attached is what I was able to do... just wondering if there are any tips on if I am on the right track with this or suggestions for improvement?

    Attached is a few clips joined in one video file, and the 2nd is my filtered video in side-by-side comparison.

    This one is PAL, encoded/flagged as interlaced, but I don't think it's true interlacing as TFM() handles it fine.

    One thing I am not sure about - in the last 2 clips where there is movement in the picture, TFM() has taken care of the combing but if you look at the bottom, there is still one horizontal line showing that looks like combing.

    Here is the code I am using for this. I have Stab commented out because I wasn't sure if I wanted to apply it, but the picture does seem to benefit from it:
    Code:
    Mpeg2Source("newclips.d2v", CPU2="ooooxx", Info=3) 
    
    TFM()
    
    # remove black borders
    Crop(4,78,700,-90)
    src=last
    
    # adjust levels, saturation, color
    Tweak(bright=2, cont=1.2, sat=1.3, hue=3)
    
    # antialiasing
    Santiag()
    
    # stabilize frame
    #Stab(mirror=15)
    
    #sharpen, denoise
    ConvertToYV12()
    QTGMC(InputType=1, Preset="Medium", EzDenoise=2.0, DenoiseMC=true)
    LSFMod(Strength=75)
    GradFun3()
    I also have received the new copy of the film I originally posted about, so I'll be loading that up soon as well.
    Image Attached Files
  10. Here's a more generalized version of my earlier script for varying hue on a single frame.

    Code:
    ##########################################################################
    
    function _VaryTweak(clip v, string property, float value)
    {
        (property == "hue") ? Tweak(v, hue=value, coring=false) : v
        (property == "sat") ? Tweak(v, sat=value, coring=false) : last
        (property == "bright") ? Tweak(v, bright=value, coring=false) : last
        (property == "cont") ? Tweak(v, cont=value, coring=false) : last
    
        Subtitle(property+"="+string(value)) #show property and value used
    }
    
    function VaryTweak(clip v, int "frame_num", int "num_frames", string "property", float "start_val", float "end_val")
    {
        Trim(v, frame_num,frame_num)
        Loop(num_frames, 0, 0)
        Animate(0,num_frames, "_VaryTweak", last,property,start_val, last,property,end_val)
    }
    
    ##########################################################################
    With this script you can vary any of Tweak's four main variables: hue, sat, bright, cont. Call like:

    Code:
    VaryTweak(last, 300, 100, "cont", 0, 2.0)
    Similar to the earlier script, that displays frame #300 100 times with contrast varying from 0 to 2.0.
    Last edited by jagabo; 27th Dec 2018 at 09:06. Reason: fixed bug
  11. Thanks! Going to experiment with this script tonight

    Edit: wow, this is a great script! Very cool
    Last edited by autephex; 27th Dec 2018 at 23:37.
  12. Here's a similar "vary" function for ColorYUV():

    Code:
    ##########################################################################
    
    function _VaryColorYUV(clip v, string property, int value)
    {
        (property == "cont_y") ? ColorYUV(v, cont_y=value) : v
        (property == "cont_u") ? ColorYUV(v, cont_u=value) : last
        (property == "cont_v") ? ColorYUV(v, cont_v=value) : last
    
        (property == "gain_y") ? ColorYUV(v, gain_y=value) : last
        (property == "gain_u") ? ColorYUV(v, gain_u=value) : last
        (property == "gain_v") ? ColorYUV(v, gain_v=value) : last
    
        (property == "off_y") ? ColorYUV(v, off_y=value) : last
        (property == "off_u") ? ColorYUV(v, off_u=value) : last
        (property == "off_v") ? ColorYUV(v, off_v=value) : last
    
        (property == "gamma_y") ? ColorYUV(v, gamma_y=value) : last
        # gamma_u not supported
        # gamma_v not supported
    
        Subtitle(property+"="+string(value)) #show property and value used
    }
    
    function VaryColorYUV(clip v, int "frame_num", int "num_frames", string "property", int "start_val", int "end_val")
    {
        Trim(v, frame_num,frame_num)
        Loop(num_frames, 0, 0)
        Animate(0,num_frames, "_VaryColorYUV", last,property,start_val, last,property,end_val)
    }
    
    ##########################################################################
    Similar calling sequence:
    Code:
    VaryColorYUV(last, 300, 100, "off_u", -256, 256)
    Supported parameters are gain_y, cont_y, off_y, gamma_y, gain_u, cont_u, off_u, gain_v, cont_v, off_v. gamma_u and gamma_v don't really make sense and aren't supported by ColorYUV() -- they don't do anything.
  13. Thanks again for another useful script, jagabo - I'm currently using your two scripts provided here to do some tweaking on the last video I posted.

    I've now got what I believe is the original PAL recording of the video I posted clips from previously. This looks like it's the original DVDR recording before someone converted it to NTSC (probably an American bootleg seller). Attached are the same clips as I originally posted, but from the PAL disc.

    It seems that just applying TFM() on this PAL version works while leaving at 25fps. I wasn't sure though if it would also be better to restore back to 23.976fps

    The commented out code achieves 23.976 fps, but I'm not sure if this is the correct way to do it. But leaving at just TFM() without the commented code, it remains at 25fps and seems to look alright. Everything else in the script I've left the same so far, except the new crop values since the PAL resolution is different.

    Code:
    TFM()
    
    #Interleave(TFM(field=1, pp=0), TFM(field=0, pp=0))
    #vInverse()
    #Dup(threshold=4, blend=true, show=false)
    
    # deblend the 50fps video back to 23.976fps
    #SRestore() 
    
    #new crop values for the PAL resolution
    Crop(22,64,-10,-70)

    I would like to make the black levels a bit darker/more black, so I may use your above script to experiment with VaryColorYUV, although I still don't understand the ColorYUV() function very well yet.

    I have also started looking at some other scenes, and may have to do some different settings for certain scenes - there are a few that take place outside and the brightness/colors are a bit off, and also some very dark night time scenes where the darks look too lightened. I can attach some example clips of those also in next posts if interested in seeing.
    Image Attached Files
  14. Originally Posted by autephex View Post
    It seems that just applying TFM() on this PAL version works while leaving at 25fps. I wasn't sure though if it would also be better to restore back to 23.976fps
    It's already progressive. You don't need either TFM or SRestore. If you want to return it to film speed then use AssumeFPS and slow the audio to match.
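    That conversion is a one-liner; a sketch, assuming the clip is the 25fps PAL version with its audio loaded:

    Code:
    AssumeFPS(24000, 1001, sync_audio=true) # slow video to film speed; audio is slowed to match
    ResampleAudio(48000)                    # optional: bring the audio back to a standard rate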
    I would like to make the black levels a bit darker/more black...
    No you don't. It's the contrast that's messed up, and it's the contrast you want to fix. I did it this way:

    Tweak(Bright=10,Cont=0.8,Coring=False)

    Others might fix it differently. It's slightly 'tilted' and you might want to use Rotate to correct that. I used:

    Rotate(-0.1)

    I got slightly different crop values than you did, and you have the additional problem of the crop values changing during different parts. I only looked at Brawl2.
  15. I had already tried changing the contrast a bit but I figured that changing the settings of the original script where it adjusts the black and white levels would be the better way to do it. Maybe you're confused because you didn't notice the original script being used for color/brightness/contrast.

    Regarding not needing TFM, I disagree... it may not be needed for a DVD encoding but I am working on an encode that will not be DVD video.

    It is encoded as interlaced despite being progressive, and as I understand it, it needs field matching performed by TFM(). Or am I not understanding this correctly?
    Last edited by autephex; 29th Dec 2018 at 16:09.
  16. Well I just did a test encoding without TFM and it looks like you're right... with and without TFM the encodes look identical as far as I can tell.

    I'm confused though because I thought progressive video encoded as interlaced required field matching to restore progressive. Also the original video has what looks similar to combing and I thought this was fixed by TFM() but I guess not.

    Also adjusted crop values to the following:
    Crop(22,74,-10,-70)

    Like you say, the black areas shift a bit, so sometimes they move into the frame slightly with this value. I may add a resize to get true 16x9, or could crop vertically a bit more to 688x387, which would be true 16x9 but would crop out a bit of the frame.
    Last edited by autephex; 29th Dec 2018 at 16:03.
  17. Originally Posted by autephex View Post
    Maybe you're confused because you didn't notice the original script being used for color/brightness/contrast.
    I'm not 'confused', but you're right in that I checked nothing but the first M2V of the four you uploaded and only your previous post. However, you also mentioned not understanding ColorYUV and I corrected the contrast using the easier-to-understand Tweak.

    I'm confused though because I thought progressive video encoded as interlaced required field matching to restore progressive.
    You thought wrong. If the content is progressive, no matter how it's encoded, then treat it as progressive to begin with. This is a common problem people have - not understanding that there's often a difference between the content and how it's encoded.

    ...could crop vertically a bit more for 688x387 which would be true 16x9, but would crop out a bit of the frame
    AviSynth won't let you do that as all crops must be in multiples of two (at least, depending on whether or not the content is really interlaced). There are a number of ways to handle different crops for different parts of a video. You could crop the different parts differently followed by resizing those different parts to the same resolution. Then join them all together. That's one way, but one would have to see the whole thing (not asking for it to be uploaded) to make a decision because some of it will have a very slightly wrong aspect ratio. You want to avoid cutting into the active video as much as possible, even with crap DVDs such as yours.
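    That per-segment approach can be sketched like this (the frame numbers and the second set of crop values here are made up purely for illustration):

    Code:
    src = Mpeg2Source("grave.d2v")
    part1 = src.Trim(0, 9999).Crop(22, 74, -10, -70)   # first segment, its own crop
    part2 = src.Trim(10000, 0).Crop(20, 72, -12, -68)  # second segment, different crop
    # resize both parts to a common frame size, then join them
    part1.LanczosResize(704, 396) ++ part2.LanczosResize(704, 396)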
  18. I didn't mean that you're confused about what you're doing, only that you missed that I was still applying the previous script by jagabo on the current video. This is why I mentioned using his original method with ColorYUV regarding the levels, because he already did an excellent job with the colors/levels but I think it could just use some slight tweaking regarding the darks/blacks. His work looks much better than simple Tweak adjustments...

    I do understand there's a difference between content and what it's reported as in flags, etc., but I've read several threads where posters here have even stated that actually-progressive footage which is encoded as interlaced still needs field matching (when it does not display the usual interlaced combing)....
  19. I think I figured out why I was confused about the progressive/interlaced thing... the other video I'm working on right now, which I posted 2 clips from above (newclips.demuxed.m2v, barclips-muxed.mp4), is also PAL/progressive and encoded as interlaced. But it actually has combing present and TFM() field matching removes the combing.

    This one though does not seem to have the combing present, although I initially thought it did but I guess I'm just seeing jagged edges or something.

    so here's the full script I am running right now:

    Code:
    import("C:\Program Files (x86)\AviSynth 2.6\plugins\TemporalDegrain.avs") 
    import("C:\Program Files (x86)\AviSynth 2.6\plugins\Ramp.avs") 
    import("C:\Program Files (x86)\AviSynth 2.6\plugins\Santiag.avs") 
    import("C:\Program Files (x86)\AviSynth 2.6\plugins\Stab.avs") 
    
    Mpeg2Source("grave.d2v", CPU2="ooooxx", Info=3) 
    
    
    # remove black borders
    Crop(22,74,-10,-70)
    src=last
    
    # fix droop of V channel at left
    ramp = GreyRamp().BilinearResize(width, height)
    Overlay(last, ColorYUV(off_v=-12), mask=ramp)
    
    # adjust levels
    ColorYUV(gain_y=-20, gamma_y=50, off_y=-2)
    
    # white balance darks, brights
    darks = ColorYUV(off_u=4, off_v=9)
    brights = ColorYUV(off_u=18, off_v=3)
    Overlay(darks, brights, mask=ColorYUV(cont_y=50))
    
    # increase saturation, maybe too much
    ColorYUV(cont_u=30, cont_v=30)
    
    # make reds less orange, set contrast a bit darker
    Tweak(hue=5)
    
    # remove residual combing from bad time base, AGC, compression
    vInverse()
    
    # antialiasing
    Santiag()
    
    # stabilize frame
    Stab(mirror=15)
    
    # edge stabilization
    QTGMC(InputType=1)
    
    # chroma noise reduction, sharpen, shift
    MergeChroma(last, TemporalDegrain().aWarpSharp(depth=10).ChromaShift(c=-2, l=-4))
    GradFun3()
    
    
    #splitscreen A/B
    #left=last
    #right=Mpeg2Source("grave.d2v", CPU2="ooooxx", Info=3).Crop(22,74,-10,-70)
    #StackHorizontal(left,right)
  20. Actually, upon checking the rest of the video again, TFM() is indeed needed

    If you look at the cabin clip, you can see what I'm saying if you apply TFM() versus without it:




  21. What I've figured out so far regarding the dark levels & contrast is adjusting the following line:

    Code:
    #originally gain_y=-20
    ColorYUV(gain_y=-50, gamma_y=50, off_y=-2)
    This helps with scenes where there is some noise showing in the blacks and where it's too bright. Here are some screenshots showing some of the scenes that had problems with the original settings - one shot with gain_y=-20 and a comparison shot with gain_y=-50 .... it's more noticeable in the actual video than in screenshots. The daytime scene also shows some of the coloring problems that may need individual scene tweaking.








  22. The video does switch between in-phase and out-of-phase fields. So some frames look progressive, some interlaced. TFM() takes care of that, aside from some residual combing from horizontal time base jitter. But there is yet another problem with the fields. Even when they are matched properly, one field is sometimes in the wrong position. Here I have cropped the frame and displayed the original frame on the left and the result of a FieldSwap() on the right. Be sure to view the images full size or zoomed.

    Image
    [Attachment 47670 - Click to enlarge]


    The fields are in the correct position on the left, and swapping them makes the picture much worse on the right. But a few frames later:

    Image
    [Attachment 47671 - Click to enlarge]


    The original frame has the fields in the wrong position, and swapping them looks much better (though still not great, something like a dup field deinterlace). This happens throughout the four clips, alternating every few frames, with a cycle of about 6 frames.
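    A quick way to reproduce that kind of side-by-side check is to stack the clip against a field-swapped copy (a sketch using the core SwapFields() filter, which exchanges the even and odd scanlines of each frame):

    Code:
    TFM()
    StackHorizontal(last, SwapFields()) # original on the left, fields swapped on the right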

    Because of this I think you'll be better off using QTGMC() instead of TFM().

    Image
    [Attachment 47672 - Click to enlarge]


    That's TFM() on the left, QTGMC(preset="fast", FPSDivisor=2, sharpness=0.6) on the right.
  23. Interesting.. that's some crazy stuff that a video ends up in such conditions


    I agree with you that QTGMC looks better
  24. Yes, that Cabin.m2v is a mess. But neither TFM nor Bob/Srestore fix it. ... I had written more but jagabo addressed the main problem with that cabin sample.

    In that Cabin sample the blacks are crushed and the whites blown out, and your ColorYUV settings keep the black levels way too low while fixing the contrast. Either you need to calibrate your monitor or use the Histogram filter to see what's going on. You shouldn't just 'eyeball' these things. I test the luma using ColorYUV(Analyze=True).Limiter(Show="Luma") as it shows illegal black values as red and illegal white values as green, making it easy to see what's going on. I used this script for the two pics:

    Code:
    MPEG2Source("Cabin.d2v")
    Rotate(-0.1)
    Crop(22,62,-10,-66)
    #ColorYUV(gain_y=-50, gamma_y=50, off_y=-2) ### turned on for the "After" picture
    ColorYUV(Analyze=True).Limiter(Show="Luma")


    I purposely didn't do any color correction, in case that's why you thought your ColorYUV settings were better.
    [Attachment 47673 - BeforeColorYUV.jpg]

    [Attachment 47674 - AfterColorYUV.jpg]
  25. Well, they aren't my settings. As I've stated I am brand new to this and all I am doing is experimenting. I have calibrated my monitor to the best of my ability (using just software)

    Anyway those are jagabo's settings in the ColorYUV() scripting, all I was doing is trying different values for the contrast... so jagabo could reply to what you've said better than I can

    personally I thought his color corrections were great compared to the original video

    And while TFM() may not fix all the problems, the video certainly looked way better with it than if not using it or QTGMC
    Last edited by autephex; 29th Dec 2018 at 21:28.
  26. And again I have to ask if you're actually running the entire script or just the individual lines of code I post... because even if I just post one line of script code, it's meant to be put into the entire thing. There are additional ColorYUV() calls for brights/darks after the line you're quoting in your A/B screenshots.

    Really kinda confusing the thread tbh....
  27. Originally Posted by autephex View Post
    Anyway those are jagabo's settings in the ColorYUV() scripting
    Note that those settings were for the earlier NTSC video. They need to be tweaked for the new PAL video. Also, I used a nominal setting for all the clips, so they are not perfect for each shot.

    Originally Posted by autephex View Post
    This helps with scenes where there is some noise showing in the blacks
    Rather than trying to hide noise by darkening the picture you would likely be better off using noise reduction filters.
  28. Yeah, I figured that some of the scenes being off is due to the fact you were trying to get a good setting to apply to all scenes. That's also why I was thinking I may end up having to process the film in different segments with different settings.

    The PAL/NTSC coloring seems mostly the same but there is some very slight difference.

    I'll try some different noise removal settings, but also I thought the dark areas needed to be a bit darker overall... may be due to the slight difference in the PAL video
  29. Here's a sample script that might help you understand what ColorYUV does:

    Code:
    ##########################################################################
    
    function GreyRamp()
    {
       black = BlankClip(color=$000000, length=256, width=1, height=256, pixel_type="RGB32")
       white = BlankClip(color=$010101, length=256, width=1, height=256, pixel_type="RGB32")
       StackHorizontal(black,white)
       StackHorizontal(last, last.RGBAdjust(rb=2, gb=2, bb=2))
       StackHorizontal(last, last.RGBAdjust(rb=4, gb=4, bb=4))
       StackHorizontal(last, last.RGBAdjust(rb=8, gb=8, bb=8))
       StackHorizontal(last, last.RGBAdjust(rb=16, gb=16, bb=16))
       StackHorizontal(last, last.RGBAdjust(rb=32, gb=32, bb=32))
       StackHorizontal(last, last.RGBAdjust(rb=64, gb=64, bb=64))
       StackHorizontal(last, last.RGBAdjust(rb=128, gb=128, bb=128))
    }
    
    ##########################################################################
    
    function _VaryColorYUV(clip v, string property, int value)
    {
        (property == "cont_y") ? ColorYUV(v, cont_y=value) : v
        (property == "cont_u") ? ColorYUV(v, cont_u=value) : last
        (property == "cont_v") ? ColorYUV(v, cont_v=value) : last
    
        (property == "gain_y") ? ColorYUV(v, gain_y=value) : last
        (property == "gain_u") ? ColorYUV(v, gain_u=value) : last
        (property == "gain_v") ? ColorYUV(v, gain_v=value) : last
    
        (property == "off_y") ? ColorYUV(v, off_y=value) : last
        (property == "off_u") ? ColorYUV(v, off_u=value) : last
        (property == "off_v") ? ColorYUV(v, off_v=value) : last
    
        (property == "gamma_y") ? ColorYUV(v, gamma_y=value) : last
        # gamma_u not supported
        # gamma_v not supported
    
        Subtitle(property+"="+string(value)) #show property and value used
    }
    
    function VaryColorYUV(clip v, int "frame_num", int "num_frames", string "property", int "start_val", int "end_val")
    {
        Trim(v, frame_num,frame_num)
        Loop(num_frames, 0, 0)
        Animate(0,num_frames, "_VaryColorYUV", last,property,start_val, last,property,end_val)
    }
    
    ##########################################################################
    
    
    GreyRamp()
    PointResize(16,height).PointResize(width,height)
    ConvertToYV12(matrix="PC.601")
    v1 = VaryColorYUV(last, 1, 100, "gain_y", -256, 256)
    v2 = VaryColorYUV(last, 1, 100, "cont_y", -256, 256)
    v3 = VaryColorYUV(last, 1, 100, "off_y", -256, 256)
    v4 = VaryColorYUV(last, 1, 100, "gamma_y", -256, 256)
    StackHorizontal(V1,v2,v3,v4)
    
    TurnRight().Histogram().TurnLeft()
    At frame 50 is the untouched video. Frame numbers lower than that show the result of negative values to ColorYUV; higher frame numbers show the result of positive values.

    Image
    [Attachment 47675 - Click to enlarge]
  30. Feeding a video clip into your sample script is indeed pretty helpful! The original with the grey ramp is too, but seeing an actual video frame being altered explains a lot.


