VideoHelp Forum

Page 2 of 4
Results 31 to 60 of 112
  1. You might find VirtualDub's levels filter easier to work with to gain an understanding of what AviSynth's Levels() and SmoothLevels() filters are doing. Enable the preview and you can see the changes live as you move the sliders.
  2. And you can use the GUI sliders in avspmod. It comes with the levels() sliders for the 5 basic parameters by default (levels has other parameters like coring, and dither in AviSynth 2.6.x) because it's a standard AviSynth filter. The preview refreshes live when you adjust the sliders. For filters that don't have sliders by default, you can create them (Edit => Insert user slider, or F12). If you want manual refresh, or prefer to use text edits, press F5 to preview. When switching tabs, the preview automatically refreshes as well
  3. Originally Posted by jagabo View Post
    Yes. It peels the two fields apart making each one a half height image and orders them sequentially.

    By the way, you can view the state of the video at any point with Return(last) -- or if you're using a named stream return using the name of that stream.
    Oh yes, that is very useful, thank you! I already played around a bit with naming streams at certain points, very useful for comparisons sometimes.

    Originally Posted by jagabo View Post
    This type of deblocking is designed to work on the type of blocks you get from MPEG encoding. Frames are broken up into 8x8 pixel blocks for compression.
    Ah right, like with JPEG for pictures. So it's only to keep the right format for accurate deblocking?

    Originally Posted by jagabo View Post
    The video has already been deblocked after Deblock_QED(). AviSynth keeps track of whether the images are frames or separated fields. Since SeparateFields() was used earlier it remembers that they aren't full frames. It wouldn't allow you to SeparateFields() again. AssumeFrameBased() tells it to consider the separated fields as full frames, allowing SeparateFields() to work again.
    Oh, that's true, hehe. So basically, we double every line to ensure accurate deblocking due to the MPEG restriction of 8x8 blocks, then we AssumeFrameBased in order to be able to separate the fields again, which we need to do in order to get rid of the excess lines created by doubling the height with PointResize? This is complicated, lol, but I think I got it.

    Originally Posted by jagabo View Post
    After SeparateFields() there are no black lines between the lines of the field. They were removed and the 540 lines packed together into a 1920x540 image.
    I see, now that you cleared up my confusion about PointResize and deblocking, that makes sense.

    Originally Posted by jagabo View Post
    out of every group of N frames keep only the frames indicated in the following list.
    That is a very good rule of thumb to more easily explain it, thanks!


    Originally Posted by jagabo View Post
    This is kinda hard to explain. The first 8 scan lines of your original video contain 8 lines:

    [...]

    Of those duplicated scan lines only the first of the duplicates is in the correct location (line) in the left image, only the second of the duplicates is in the correct location in the second image. So the SeparateFields().SelectEvery() sequence is to assure that the correct scan lines are taken from each field when reconstructing the original interlaced frame with Weave().
    OOH. Reading this first, I didn't understand a word, but of course, that makes sense. Separating the fields isn't enough, the excess lines are still there, so we use SelectEvery to discard those. That's brilliant. You're doing a great job at explaining this, I don't know if I could've done that.
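    To make the bookkeeping concrete, here is a toy Python model of that whole round trip using line labels instead of pixels (it assumes the common SelectEvery(4, 0, 3) selection; the exact numbers depend on the field order in the script):

```python
# Toy model: scan lines are labels. Original interlaced frame has the
# top field on even rows, the bottom field on odd rows.
frame = [f"T{i}" if i % 2 == 0 else f"B{i}" for i in range(8)]

top    = frame[0::2]            # SeparateFields(): ["T0","T2","T4","T6"]
bottom = frame[1::2]            #                   ["B1","B3","B5","B7"]

def point_resize_double(field):
    """PointResize to double height: every line is simply duplicated."""
    return [line for line in field for _ in (0, 1)]

top2, bottom2 = point_resize_double(top), point_resize_double(bottom)
# ...Deblock_QED() would run here on the full-height images...

# AssumeFrameBased() + SeparateFields() again: even rows, then odd rows.
fields = [top2[0::2], top2[1::2], bottom2[0::2], bottom2[1::2]]

# SelectEvery(4, 0, 3): in the real video the deblocked duplicates differ
# slightly, and only the even rows of the top copy and the odd rows of the
# bottom copy sit at their original vertical positions - so keep 0 and 3.
kept_top, kept_bottom = fields[0], fields[3]

# Weave(): interleave the two kept fields back into one frame.
woven = [line for pair in zip(kept_top, kept_bottom) for line in pair]

assert woven == frame   # the original interlaced frame is reconstructed
```

    In this label-only model the two duplicates of each line are identical, so the assert would pass for other selections too; the point of 0 and 3 only shows up with real deblocked pixels, where the duplicates no longer match.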


    Originally Posted by jagabo View Post
    I think you are doing very well.
    :3 Thanks!

    Originally Posted by jagabo View Post
    You have to be very careful when using automatic levels (brightness, contrast, saturation, etc.) filters. They are prone to brightness "pumping". Say for example you have a dark scene where all the pixels range from Y=16 to Y=126. An auto levels filter might brighten that so that Y ranges from 16 to 235. That would change a dark dingy shot to a nice bright sunny shot (which may or may not be what you want). But then someone walks into the frame wearing a bright white t-shirt at Y=235. Suddenly the auto levels filter will darken the shot to keep the t-shirt from blowing out. The background will return to its original dark dingy state.
    That makes sense, I already figured that it wouldn't be that easy. Guess I'll have to look into colours soon, then.
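    The pumping is easy to reproduce with a toy per-frame auto-stretch in Python (an invented example, not any particular filter):

```python
def auto_stretch(frame):
    """Naive auto-levels: stretch each frame's luma to the full 16-235 range."""
    lo, hi = min(frame), max(frame)
    return [round(16 + (y - lo) * (235 - 16) / (hi - lo)) for y in frame]

dark_scene = [16, 70, 126]        # dark, dingy shot (Y tops out at 126)
with_shirt = [16, 70, 126, 235]   # same shot, plus a white t-shirt at 235

print(auto_stretch(dark_scene))   # [16, 124, 235] - background pushed to 235
print(auto_stretch(with_shirt))   # [16, 70, 126, 235] - background snaps back
```

    The background pixel at 126 gets stretched to 235 in one frame and left at 126 in the next, which is exactly the brightness jump you see as pumping.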

    Originally Posted by jagabo View Post
    Color is another whole big issue! I'll write a bit about that later.
    Yeah, I noticed there's a ton of info on colours. Excited to get into that as well, but I think I'll have a harder time than with the other things.

    Originally Posted by poisondeathray View Post
    Yeah, you're doing well and have a ton of patience. My first few times, I gave up trying to absorb avisynth.
    Thank you! I have to say that this applies to you and jagabo more than to me, I'm really grateful that you take so much time to help me out! Yeah well, I dabbled in encoding for a bit, but always kept it really simple and as a result, was often unhappy with the outcome. So I figure it would be about time to do some things right and invest a bit of time and effort to the best of my ability!

    Originally Posted by poisondeathray View Post
    My opinion - "auto" anything will give subpar results compared to if you did it manually scene by scene.
    Yeah, this doesn't surprise me much, but it was worth a try.

    Originally Posted by poisondeathray View Post
    Instead of shifting the entire waveform down, another option is to preferentially adjust the top half, or apply a limit through smoothlevels (part of smoothadjust), so the overall brightness isn't reduced as much. Lmode=1 allows you to limit the change of darker pixels (so only the brighter pixels are affected).
    I see, so I can "cut the waveform in half" (for example) and only apply changes to one half, e.g. raise the darker half?

    Originally Posted by poisondeathray View Post
    Look at the documentation and learn what the parameters mean for levels and smoothlevels (e.g. input low, gamma, input high, output low, output high). Those 5 parameters are the same universally in almost all programs (including Photoshop); it's just that adjusting in RGB is different from adjusting in YUV. First try adjusting those parameters with normal levels(), then try playing with the numbers in smoothlevels with the lmode limiter
    Okay, so look at documentation for levels and smoothlevels, then play with levels, afterwards move to smoothlevels, especially with lmode? Will do.
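    If it helps, the mapping those 5 parameters describe can be sketched in a few lines of Python (a simplification: AviSynth's Levels() also has coring and, in 2.6.x, dithering, which this ignores):

```python
def levels(y, in_lo, gamma, in_hi, out_lo, out_hi):
    """Same mapping as Levels(in_lo, gamma, in_hi, out_lo, out_hi), minus
    coring/dithering: normalize to 0..1, apply gamma, rescale to output."""
    x = (y - in_lo) / (in_hi - in_lo)
    x = min(max(x, 0.0), 1.0)          # clip values outside the input range
    return round(out_lo + (x ** (1.0 / gamma)) * (out_hi - out_lo))

print(levels(128, 16, 1.0, 255, 16, 255))   # 128 - gamma 1.0 is ~identity here
print(levels(128, 16, 1.2, 255, 16, 255))   # 143 - gamma > 1 lifts midtones
```

    Input low/high pick which luma range gets stretched, output low/high pick where it lands, and gamma bends the middle without moving the endpoints.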

    Originally Posted by poisondeathray View Post
    Play with the parameters and see how they alter the final image, and the waveform in the histogram - that's how you learn. Another good way to compare results is to use avspmod. You can put different versions of scripts in different tabs and toggle between them with the number keys. So tab 1 might have certain settings, tab 2 might have different ones, tab 3, tab 4 etc... it's a very fast way to get feedback on what your scripts/settings are doing, and learn which settings do what
    That sounds like a very useful program! I will definitely download that and try to get accustomed to it ASAP; it's easy to get lost in all the changes and Interleaves when comparing which filter did what.

    Originally Posted by poisondeathray View Post
    I know you're still working on the 1st clip, especially the macroblocking, but just to mention the 2nd clip has other issues like oversaturation. Oversaturation and high levels (>235) reduce the amount of detail that can be visible in those bright and saturated regions when it's rendered to RGB for display
    Yeah, I noticed that the second clip has plenty more issues, that's why I gave it as another example to highlight the problems I try to work on. Unfortunately, I'm very bad at noticing these things. Saturation, Hue, Gamma - those are all terms that I came across more than once and I know what they mean (mostly), but I don't feel able to identify if and what is wrong with them. Are there special filters like the Histogram for the brights and darks to help me notice issues with saturation and other things? Or is it just a matter of playing around with it and learning how it should look over time?

    Originally Posted by poisondeathray View Post
    RE: Macroblocking - basically the details are non-recoverable. You would need a better source. The stronger the deblocking settings and filters applied, the smoother the image and the fewer fine details will be retained. It's really a balance and up to subjective tastes
    Yes, this has been mentioned before, I realise that everything is a trade-off and am keeping it in mind. I find that some of the blocks are quite large though and even if it means some smoothing, that is a trade that's worth it to me. Also, for the sake of learning more, it can't hurt to apply some filters I won't keep. I can't change the source unfortunately, if I could, I would, so I have to make do with what I have.

    Originally Posted by jagabo View Post
    You might find VirtualDub's levels filter easier to work with to gain an understanding of what AviSynth's Levels() and SmoothLevels() filters are doing. Enable the preview and you can see the changes live as you move the sliders.
    I didn't even know VirtualDub had such a slider feature. Sliders seem like a good way of experimenting with values quickly!

    Originally Posted by poisondeathray View Post
    And you can use the GUI sliders in avspmod. It comes with the levels() sliders for the 5 basic parameters by default (levels has other parameters like coring, and dither in AviSynth 2.6.x) because it's a standard AviSynth filter. The preview refreshes live when you adjust the sliders. For filters that don't have sliders by default, you can create them (Edit => Insert user slider, or F12). If you want manual refresh, or prefer to use text edits, press F5 to preview. When switching tabs, the preview automatically refreshes as well
    So is avspmod the preferable choice here? Does it make a difference if I try with avspmod or with VirtualDub? I definitely like the idea of using sliders to get acquainted with colours. But for now I think I'll have to call it a day, still have to read a text on the Thirty Years War. Which unfortunately is very interesting as well, so my loyalties are split here, hehe.

    Once again, I can't stress this enough, thanks so much to you two! I feel like instead of just getting into this, I made giant leaps thanks to your help, even though I still barely scratched the surface of the whole matter. It's highly interesting and I wanted to do this for quite some time.
  4. So is avspmod the preferable choice here? Does it make a difference if I try with avspmod or with VirtualDub? I definitely like the idea of using sliders to get acquainted with colours.

    But for now I think I'll have to call it a day, still have to read a text on the Thirty Years War. Which unfortunately is very interesting as well, so my loyalties are split here, hehe.
    It's more of a personal preference. Some people do everything in Notepad / Notepad++ or some text editor. These are just suggestions or options that you might try out, and you can test what's best for you and your workflow style. Nobody is going to get angry that you aren't using avspmod LOL

    And if it makes you feel any better, I have this text file of topics to go back and read over or research on. I copy links to discussions of topics that are interesting or might be valuable, etc., that I pass by, but I currently don't have time to learn about. It's like 300 pages long and growing (not all video stuff, but you get the idea). (Jagabo is a big repeat offender on that list LOL.) So don't feel you have to learn everything in 1 day


    I see, so I can "cut the waveform in half" (for example) and only apply changes to one half, e.g. raise the darker half?
    Sort of - that's the general idea, but it's hard to explain unless you play with the values. It also depends on how strong you set the effect or limiting, or the darkstr (for dark values), or brightstr (for bright values).

    Recall earlier I said there are many different ways to get similar end results - it's probably too much to absorb at this stage, but another powerful way is to remap input to output values using curves or smoothcurve. One drawback of levels() is that the changes are linear. If you've ever used Photoshop or GIMP curves, it's the same idea. The one in AviSynth doesn't have a GUI; you need to enter a string.

    And yet another more advanced way is to use masks. e.g. you might use a bright mask (derived from high Y' values) to composite layers to reduce the bright areas. Same thing with saturation - you might selectively bring down, say, red saturation of a certain hue (but not affect other colors)

    But in color work, you usually cannot adjust a narrow range without making something look out of place, especially with 8bit values. You get discrete changes that look unnatural and out of place. Also - don't just "treat" the histogram, waveform, vectorscope etc. ONLY. They are monitoring aids meant to be used in conjunction with the actual rendered image



    Are there special filters like the Histogram for the brights and darks to help me notice issues with saturation and other things?
    Some things are blatantly obvious, like the red saturation in some parts of the 2nd clip. When you reduce the red saturation, some details become visible that were previously obscured. They would be "broadcast illegal" colors, and I'm surprised they passed QC for television. The tool to measure saturation and color tones is called a vectorscope. All decent NLEs have them. There is one in VDub's plugin ColorTools. AviSynth doesn't have a traditional one (it doesn't have markings or gradations), but you can use Histogram("color") to get a rough idea

    And you said your eyes aren't that good at picking out some defects - you already used one technique to help out (enlargement), but another useful way is to use histogram("luma") - it's an enhanced view that will emphasize defects like macroblocking, banding, many others
    Last edited by poisondeathray; 22nd Jun 2014 at 13:58.
  5. Member - Memphis TN, US (joined May 2014)
    Sorry I was away from this thread while on the road and missed so many great tips. I have to agree, some of the junk can't be fixed, or if it can there won't be much video left to watch. Still, there's a lot one can do. I'm with poisondeathray, how did the broadcasters let some of this stuff out of the gate? Someone asleep at the switch, or they were smoking some really great stuff during the broadcast. Those jaggies shouldn't be there with proper interlacing, and the levels are off the charts.

    I ignored the stuff that would drive most people crazy and just tried to make things look more realistic and less annoying. My scripts were fairly simple (but mighty slow at 1080 lines!). I smoothed edges with the Santiag plugin, cleaned up some other stuff with MCTemporalDenoise on "low" (I abbreviate it MCTD). In the AfterSchool clip I let the audiences go darker. No one's watching them anyway, but to keep them more illuminated you could play with some contrast masking. Too bad the brights are blown to hell and won't come back. For those blazing reds in the 2nd sample I used FixChromaBleeding -- not a perfect filter and an old-timer, but sure comes in handy sometimes. I fed the clips to ColorFinesse to get ideas for better levels and color -- but it wouldn't be fair to prescribe a $500 filter for color work, so I did what I could with Avisynth and YV12. The encodes are MPEG4 for BluRay.

    After School script:
    Code:
    MPEG2Source("path/to/SAMPLE After School.d2v")
    ColorYUV(cont_y=-35,off_y=-10)
    SmoothLevels(10, 0.78, 255, 16, 255,chroma=200,limiter=0,tvrange=true,dither=100,protect=4)
    santiag()
    MCTemporalDenoise(settings="low",interlaced=true)
    Orange Caramel script:
    Code:
    MPEG2Source("path/to/SAMPLE Orange Caramel.d2v")
    ColorYUV(cont_y=-40,off_y=-14,cont_v=-50,off_v=-6)
    SmoothLevels(16,0.95, 255,16,245,chroma=200,limiter=0,tvrange=true,dither=100,protect=4)
    FixChromaBleeding2()
    santiag()
    MCTemporalDenoise(settings="low",interlaced=true)
    Image Attached Files
    Last edited by LMotlow; 22nd Jun 2014 at 17:58.
    - My sister Ann's brother
  6. Originally Posted by poisondeathray View Post
    It's more of a personal preference. Some people do everything in Notepad / Notepad++ or some text editor. These are just suggestions or options that you might try out, and you can test what's best for you and your workflow style. Nobody is going to get angry that you aren't using avspmod LOL
    Haha, that is a big relief! I just tried out avspmod, only very quickly, but adding new sliders is very easy and works like a charm! I also really liked the colouring of the different functions and it was really handy how it previews all the possible variables when you highlight a function. Saves time instead of having to look it up in the wiki every instance!

    Originally Posted by poisondeathray View Post
    And if it makes you feel any better, I have this text file of topics to go back and read over or research on. [...] It's like 300 pages long and growing (not all video stuff, but you get the idea).
    If anything, that scares me more of what awaits me, lol

    Originally Posted by poisondeathray View Post
    So don't feel you have to learn everything in 1 day
    I know, but I want to give it the best and want to get somewhere without stagnating for too long. Also, I want to use the help while it's on offer, just to be on the safe side.

    Originally Posted by poisondeathray View Post
    Recall earlier I said there are many different ways to get similar end results - it's probably too much to absorb at this stage, but another powerful way is to remap input to output values using curves or smoothcurve. One drawback of levels() is that the changes are linear. If you've ever used Photoshop or GIMP curves, it's the same idea. The one in AviSynth doesn't have a GUI; you need to enter a string.
    So basically curves would allow me to perform gradual changes? Don't have much experience with image editing either, sorry.

    Originally Posted by poisondeathray View Post
    And yet another more advanced way is to use masks. e.g. you might use a bright mask (derived from high Y' values) to composite layers to reduce the bright areas. Same thing with saturation - you might selectively bring down, say, red saturation of a certain hue (but not affect other colors)
    I've seen plenty of people using masks when I dug through some threads to find some answers, examples and explanations, so I figured they would be very useful, but I can't yet figure out how exactly they work, much less how I could use them. Unfortunately, I don't really use Photoshop. I know it works with Layer Masks and such, but I never knew what to do with them in that context either.

    Originally Posted by poisondeathray View Post
    But in color work, you usually cannot adjust a narrow range without making something look out of place, especially with 8bit values. You get discrete changes that look unnatural and out of place. Also - don't just "treat" the histogram, waveform, vectorscope etc. ONLY. They are monitoring aids meant to be used in conjunction with the actual rendered image
    Sure, I guess it's no different with colours than it is with everything else, almost everything is just a matter of preference and changing something for the better will often also bring negative side effects, I assume.

    Originally Posted by poisondeathray View Post
    Some things are blatantly obvious, like the red saturation in some parts of the 2nd clip. When you reduce the red saturation, some details become visible that were previously obscured.
    Yeah, it does look a bit weird. I'm not even sure if I had noticed it though, because of all the red lighting.

    Originally Posted by poisondeathray View Post
    They would be "broadcast illegal" colors, and I'm surprised they passed QC for television.
    Life is easier in the mysterious Far East. Seriously though, some things over there are really weird, you should see some of the camerawork over there... And those visual effects, oh my, I still have nightmares of those.

    Originally Posted by poisondeathray View Post
    The tool to measure saturation and color tones is called a vectorscope. All decent NLEs have them. There is one in VDub's plugin ColorTools. AviSynth doesn't have a traditional one (it doesn't have markings or gradations), but you can use Histogram("color") to get a rough idea
    I tried that histogram mode, it did show a lot of red but I'm not sure what to make of that. If there's lots of red, that doesn't necessarily mean that anything is wrong by itself, right? There could theoretically just be a lot of red in the picture, lol. I'm not trying to be snappy or anything, I'm just trying to work this out, because I have a really hard time pinning down exactly when something is wrong with the colour.

    Originally Posted by poisondeathray View Post
    And you said your eyes aren't that good at picking out some defects - you already used one technique to help out (enlargement), but another useful way is to use histogram("luma") - it's an enhanced view that will emphasize defects like macroblocking, banding, many others
    Ah yes, I tried that mode, it was quite surprising, I did not expect it to look like it did, because I didn't look it up beforehand. All the small white and black pixels make noise very visible to me, that was really handy for me! Thanks a lot!

    Originally Posted by LMotlow View Post
    Sorry I was away from this thread while on the road and missed so many great tips.
    No reason to apologise, but I'm glad you're back, every bit of help is useful and appreciated!

    Originally Posted by LMotlow View Post
    I have to agree, some of the junk can't be fixed, or if it can there won't be much video left to watch. Still, there's a lot one can do.
    I should hope so! And I'm already quite happy with the progress, I'm very positive that I'll learn a lot more.

    Originally Posted by LMotlow View Post
    I'm with poisondeathray, how did the broadcasters let some of this stuff out of the gate? Someone asleep at the switch, or they were smoking some really great stuff during the broadcast. Those jaggies shouldn't be there with proper interlacing, and the levels are off the charts.
    Tell me about it. I mean, obviously I don't know much, but even with what I know, I can tell that at least some things are very wrong.

    Originally Posted by LMotlow View Post
    I ignored the stuff that would drive most people crazy and just tried to make things look more realistic and less annoying. My scripts were fairly simple (but mighty slow at 1080 lines!). I smoothed edges with the Santiag plugin, cleaned up some other stuff with MCTemporalDenoise on "low" (I abbreviate it MCTD). In the AfterSchool clip I let the audiences go darker. No one's watching them anyway, but to keep them more illuminated you could play with some contrast masking. Too bad the brights are blown to hell and won't come back.
    Oh yes, I read about MCTemporalDenoise because I read repeatedly that it's supposedly great. I noticed that on higher presets, even when having only that in the script, I run out of memory. Here I thought I'd never run into any problems with 16 GB of RAM, hehe.

    But BTT, I couldn't use SmoothLevels. The wiki said there is now SmoothAdjust, which includes SmoothLevels + more functions. But it requires AviSynth 2.6.x and I downloaded 2.5.8. Should I get the 2.6.0 alpha? I guess so, since SmoothLevels has been mentioned repeatedly, so it seems to be quite useful. The audience being darkened is totally fine, as you said, I don't care because I don't look at them. If it makes the stage look better, I'm all for it.

    Originally Posted by LMotlow View Post
    For those blazing reds in the 2nd sample I used FixChromaBleeding -- not a perfect filter and an old-timer, but sure comes in handy sometimes. I fed the clips to ColorFinesse to get ideas for better levels and color -- but it wouldn't be fair to prescribe a $500 filter for color work, so I did what I could with Avisynth and YV12. The encodes are MPEG4 for BluRay.
    Heh, thank you, $500 is slightly beyond my budget. I definitely need to learn about Luma, Chroma, YUV and all that stuff to get a real handle on what to look for. The samples you gave definitely looked quite nice, it's always good to have some visuals to get a better handle on the result, and especially considering that your samples aren't even de-interlaced yet, they look really impressive! Thanks so much for taking all the time to do this!
  7. Originally Posted by bschneider View Post
    So basically curves would allow me to perform gradual changes?
    Curves allow you to make non-linear changes. I suggest you look at Gradation Curves (a third party filter) in VirtualDub.
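    In other words, a curve is a free-form remap of input to output values. A toy Python sketch (the control points here are invented, and real curves tools like Gradation Curves fit smooth splines rather than the straight segments used below):

```python
def curve(y, points):
    """Map luma y through a curve given as (input, output) control points,
    interpolating linearly between them (real tools use smooth splines)."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= y <= x1:
            return round(y0 + (y - x0) * (y1 - y0) / (x1 - x0))
    return y  # outside the defined range: leave untouched

# A gentle S-curve: darken shadows a little, lift highlights, pin the ends.
s_curve = [(16, 16), (80, 60), (180, 200), (235, 235)]

print(curve(80, s_curve))    # 60 - a control point maps exactly
print(curve(128, s_curve))   # 127 - midtones ride the steep middle segment
```

    Unlike levels(), where one straight line (plus gamma) governs the whole range, each segment of the curve can have its own slope, so you can lift shadows and compress highlights in a single adjustment.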

    Originally Posted by bschneider View Post
    I've seen plenty of people using masks when I dug through some threads to find some answers, examples and explanations, so I figured they would be very useful, but I can't yet figure out how exactly they work and much less how I could use them.
    The simplest form of a mask is a stencil. In the real world you might use a stencil to paint a black pattern onto a white wall. The stencil itself is a piece of cardboard with a hole cut into it. Where there's cardboard the paint won't reach the wall and it will remain white. Where there's a hole in the stencil the wall will be painted black. In computer graphics and digital video the stencil takes the form of a black and white image (no shades of grey). When you merge two images together using a stencil, black parts of the stencil indicate the pixels should be taken from one image, white parts indicate the pixels should be taken from the other image.

    In practice the stencil doesn't have to be black and white; it can be greyscale, usually called an alpha mask. Where the mask is full black or full white it acts the same as a stencil. But shades of grey in between represent weights -- how much of each source image is used in the output image. A medium grey indicates a 50:50 weighting: the output image is a simple average of the two sources, (A+B)/2. A 25 percent grey pixel in the mask indicates 25 percent comes from one source image, 75 percent from the other.

    https://forum.videohelp.com/threads/347583-Color-correction-%28or-grading-%29?p=2174838...=1#post2174838
    https://forum.videohelp.com/threads/268043-VirtualDub-Using-INTERNAL-logo-filter?p=1603...68#post1603468

    With video you can often use the video itself to build an alpha mask. Say you have a video where the bright parts of the picture are too red (pink tinge) but the dark parts of the picture are normal. If you reduce red over the entire image the bright parts will look right but the dark parts won't be red enough (shifted to cyan). In a case like this you can use the brightness of the image as an alpha mask, then overlay the original image with the color shifted image using the mask. In dark areas the mask indicates the original image should be used, in bright areas the mask indicates the color shifted image should be used. So darks retain their original color, brights get shifted toward cyan.
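    Per pixel, that weighting is plain arithmetic. A small Python sketch of the blend (made-up channel values; in AviSynth, Overlay() with a mask clip does this kind of weighted merge):

```python
def composite(a, b, mask):
    """Blend two images per pixel: mask 0 -> take a, 255 -> take b,
    values in between weight the two sources proportionally."""
    return [round((av * (255 - m) + bv * m) / 255)
            for av, bv, m in zip(a, b, mask)]

original  = [200, 200, 200]   # red channel of three pixels (too red when bright)
reduced   = [160, 160, 160]   # same pixels with the red pulled down
luma_mask = [0, 128, 255]     # dark pixel, midtone, bright pixel

print(composite(original, reduced, luma_mask))   # [200, 180, 160]
```

    The dark pixel keeps its original red, the bright pixel takes the corrected value, and the midtone lands roughly halfway - exactly the brightness-weighted correction described above.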
  8. Member - Memphis TN, US (joined May 2014)
    That's a neat explanation, jagabo. Wish I could explain things that clearly.

    Originally Posted by bschneider View Post
    I read about MCTemporalDenoise because I read repeatedly that it's supposedly great. I noticed that on higher presets, even when having only that in the script, I run out of memory. Here I thought I'd never run into any problems with 16 GB of RAM, hehe.

    But BTT, I couldn't use SmoothLevels. The wiki said there is now SmoothAdjust, which includes SmoothLevels + more functions. But it required AviSynth 2.6.x and I downloaded 2.5.8. Should I get the 2.6.0 alpha? I guess so, since SmoothLevels has been mentioned repeatedly so it seems to be quite useful.
    I sometimes run MCTD in a script by itself or with maybe one or two other, more simple plugins. Yep, it pigs out on memory. I save the output to Lagarith AVI, then feed it to another script if I need to add more plugins. That's one of many tricks I learned by browsing this site for 6 years. Also, you often want to test a short clip with it, then tweak other plugins. So running a separate script avoids running slowpoke MCTD over and over to get what you want. I do the same thing with QTGMC or IVTC -- those are often necessary first steps, so why keep running a long step over and over? I often delete the AVI intermediate but save the script, just in case.

    SmoothAdjust was designed by LaTo, who posts on doom9. He doesn't keep old versions of his stuff, and that's a headache. I have new versions, but frankly Avisynth MT's drive me crazy. Most times I still use SmoothLevels 2.62. Runs with 2.5 and 2.6 Avisynth. A copy is attached.

    AvsPMod: I hate it. Slows me down. Just me, I guess.

    I think jagabo and poisondeathray explain stuff like luma, chroma, etc., better than most people. I'll just simplify it and say that luma is brightness and chroma is color. In YUV the two are stored separately as 3 channels. Y = luma. V is mostly red, U is mostly blue. And that's simplified as well, because U and V kind of sneak into other hues. V includes some yellow and orange, U includes some cyan (blue + green). I once asked, if YUV stores red and blue, where is green? Well, in YUV you get green by subtracting red data and blue data from Y. Complicated? Yep. If you subtract all the chroma, you're left with Y which gives you a grayscale image. Some images of YUV here: http://en.wikipedia.org/wiki/YUV .
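    That "green by subtraction" is literal arithmetic. Assuming the BT.601 weights (Y' = 0.299R + 0.587G + 0.114B), a quick Python check:

```python
def luma_601(r, g, b):
    """BT.601 luma from full-range RGB."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def green_from_luma(y, r, b):
    """Recover green from Y' plus the red and blue data, by subtraction."""
    return (y - 0.299 * r - 0.114 * b) / 0.587

y = luma_601(200, 100, 50)
print(round(green_from_luma(y, 200, 50)))   # 100 - green was never "lost"
```

    The weights sum to 1.0, which is also why pure white (R=G=B=255) gives Y'=255, and why dropping the chroma leaves a sensible greyscale image.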

    RGB is different. Each pixel of an RGB image stores luma and chroma data in the same place. So the R in RGB is really just red together with its own brightness component. Same with Blue and Green.

    I've fiddled with color for a long time, started learning with Photoshop Pro and Windows 3.1, LOL !! . I use YUV and RGB histograms both. Something I learned over time from posts in videohelp is that YUV histograms show the way image data is stored in video. RGB histograms show the way it's displayed. Yep, you do have to use those graph and chart things: eyeballs can say whether something looks right or wrong, but histograms tell you why. Looks intimidating at first, and for some reason website tutorials about histograms are a little lean on video sites but are all over the place on photo sites. Try these two (I got the links from an old videohelp post):
    Part 1: http://www.cambridgeincolour.com/tutorials/histograms1.htm
    Part 2: http://www.cambridgeincolour.com/tutorials/histograms2.htm
    I found those links years ago in videohelp and they were very helpful. The tutorials are for still photo. But when it comes to luma and color the principles are the same. Video is just a stream of still photos, after all.
    Avisynth also has an RGB histogram plugin or script of some kind. I've seen jagabo refer to it.

    Originally Posted by bschneider View Post
    Heh, thank you, $500 is slightly beyond my budget
    Mine, too. Mine came with AfterEffects -- still not cheap! I use VirtualDub a lot for RGB work. VDub has some color filters similar to those in pricey NLE's:
    ColorMill uses sliders instead of wheels: http://fdump.narod.ru/rgb.htm
    Gradation Curves (Photoshop-style): http://members.chello.at/nagiller/vdub/index.html .
    RGB Levels: one with a GUI is built-in to VDub. Simplified, but it works.
    ColorTools Histogram/vectorscope: http://trevlac.us/colorCorrection/colorTools.html. Warning: doesn't work in Win7!

    I didn't use RGB for the video posts. Color balance and levels change with every scene, sometimes they don't even make sense. I just set up the same filters to cover every shot in a clip. A compromise, at best. Actually it was contrast, gamma and levels that made most of the difference. I didn't hit color itself very much. A pro colorist would be in hog heaven with those samples.

    Other members can fill you in on more details. I just started posting here but, like you, I had to take a deep breath first because I'm still learning. Try browsing the Restoration section for ideas about fixing some godawful video. Plenty of examples about noise problems and how they get fixed. I've spent a lot of time in there.
    Image Attached Files
    Last edited by LMotlow; 23rd Jun 2014 at 09:05.
    - My sister Ann's brother
  9. Originally Posted by bschneider View Post
    Oh yes, I read about MCTemporalDenoise because I read repeatedly that it's supposedly great. I noticed that on higher presets, even when having only that in the script, I run out of memory. Here I thought I'd never run into any problems with 16 GB of RAM, hehe.
    You're not really running out of memory. A 32-bit program can only access 2 to ~3 GB of memory. 64-bit AviSynth could access all 16 GB, but it's not as stable as 32-bit AviSynth and many filters are not available in 64-bit versions.
  10. Originally Posted by jagabo View Post
    Curves allow you to make non-linear changes. I suggest you look at Gradation Curves.
    I did take a look, I understand what you mean now. Simpler concept than I thought.

    Okay, I understood how masks work, your links with the illustrations were very helpful! I wonder how you would create such a mask in avisynth though, it seems complicated. Can you couple it to conditions, e.g make avisynth check brightness or hue and then, based on those conditions, give the mask orders to work its function only on the video's spots that show high brightness, for example? Do people create these masks manually, through a text editor? That sounds like a serious challenge, but also looks like a really cool feature.

    Originally Posted by LMotlow View Post

    [...]

    I do the same thing with QTGMC or IVTC -- those are often necessary first steps, so why keep running a long step over and over? I often delete the AVI intermediate but save the script, just in case.
    That does sound very useful for saving time with computing-intense filters. But doesn't exporting to AVI cause any kinds of problems? Doesn't a conversion to another format usually mean loss of quality?

    Originally Posted by LMotlow View Post
    Most times I still use SmoothLevels 2.62. Runs with 2.5 and 2.6 Avisynth. A copy is attached.
    Great! Thank you!

    Originally Posted by LMotlow View Post
    AvsPMod: I hate it. Slows me down. Just me, I guess.
    Hehe, it's good to hear multiple opinions on it. For now, I think I'll keep using it until I run into problems. The slider feature will be very handy for exploring more filters and variables, so at least for the beginning, it'll come in handy. Once I know a few more things, I can look into other options, I suppose.

    Originally Posted by LMotlow View Post
    I think jagabo and poisondeathray explain stuff like luma, chroma, etc., better than most people.
    I can say they're doing a terrific job. (And looking at the next section, so are you!)

    Originally Posted by LMotlow View Post
    [YUV explanation]
    Some images of YUV here: http://en.wikipedia.org/wiki/YUV.
    I'm sure it's more complicated in practice, but you explained it very comprehensibly. The pictures in the Wikipedia article are a big help, too. Thanks a lot, I think I got what it's about! I'll still have to look at all the more detailed stuff when it gets to terms like "planar" and "4:2:0" and such.

    Originally Posted by LMotlow View Post
    RGB is different. Each pixel of an RGB image stores luma and chroma data in the same place. So the R in RGB is really just red together with its own brightness component. Same with Blue and Green.
    Oh yes that makes sense. I stumbled over RGB quite often, it's very common to talk about (sweet lord I hated art class...lol). But I didn't know this precisely, so thank you for clarifying!

    Originally Posted by LMotlow View Post
    I've fiddled with color for a long time, started learning with Photoshop Pro and Windows 3.1, LOL !! .
    Oh my, Windows 3.1? That is crazy, haha, I feel like I'm in capable hands here, if your experience dates back THAT far!

    Originally Posted by LMotlow View Post
    I use YUV and RGB histograms both. Something I learned over time from posts in videohelp is that YUV histograms show the way image data is stored in video. RGB histograms show the way it's displayed. Yep, you do have to use those graph and chart things: eyeballs can say whether something looks right or wrong, but histograms tell you why. Looks intimidating at first
    I am not averse to using those graphs and visual aids! On the contrary, I'm very fond of having those means to help me. As I said earlier, I'm neither trained in nor used to working with colours and such, so those graphs do a great deal to help me understand what's wrong.

    Originally Posted by LMotlow View Post
    Video is just a stream of still photos, after all.
    That's true! I guess for colours, photo editing sites might be just as useful then. Thanks for linking those websites! There's a lot in there, and many graphs and examples, that's always good! I'll be sure to take a closer look at that.

    Originally Posted by LMotlow View Post
    VDub has some color filters similar to those in pricey NLE's:
    ColorMill uses sliders instead of wheels: http://fdump.narod.ru/rgb.htm
    Gradation Curves (Photoshop-style): http://members.chello.at/nagiller/vdub/index.html .
    RGB Levels: one with a GUI is built-in to VDub. Simplified, but it works.
    ColorTools Histogram/vectorscope: http://trevlac.us/colorCorrection/colorTools.html. Warning: doesn't work in Win7!
    Aww, I have Win7 on all my systems by now. So the last one probably won't be for me. I already played shortly with Gradation Curves, just to see what it does, will take a look at ColorMill, too.

    Originally Posted by LMotlow View Post
    I didn't use RGB for the video posts. Color balance and levels change with every scene, sometimes they don't even make sense. I just set up the same filters to cover every shot in a clip. A compromise, at best. Actually it was contrast, gamma and levels that made most of the difference. I didn't hit color itself very much. A pro colorist would be in hog heaven with those samples.
    So if I wanted to get more into colours in those videos, I'd have a ton of work ahead of me if I actually wanted to do it right? Well gee, aren't I a lucky one. But some flaws stretch throughout the video, don't they, like the red tint in the Orange Caramel video. So I won't get perfect colours out of it unless I put in some serious work, but I could iron out some major consistent flaws, right?

    Originally Posted by LMotlow View Post
    Other members can fill you in on more details. I just started posting here but, like you, I had to take a deep breath first because I'm still learning. Try browsing the Restoration section for ideas about fixing some godawful video. Plenty of examples about noise problems and how they get fixed. I've spent a lot of time in there.
    Oh yes, it will take a lot of patience, but once I'm interested, it's easy for me as long as I can see I'm getting somewhere. Your suggestion is to maybe take a peek at some threads where REALLY bad video is used to better recognise the changes that the filters create? That sounds like a good idea, I'll be sure to do that!

    Originally Posted by jagabo View Post
    You're not really running out of memory. A 32-bit program can only access 2 to ~3 GB of memory. 64-bit AviSynth could access all 16 GB, but it's not as stable as 32-bit AviSynth and many filters are not available in 64-bit versions.
    Oh my, I'm so used to downloading the x64 version of every program that I simply forgot I'm using the x86 version of AviSynth. Obviously, you're right, I'll effectively have a maximum of around 3 GB that way.


    EDIT: Just looked at ColorMill. Impressive and looks easy to use! Is it possible to export the slider positions to avisynth or something? I don't like how VirtualDub only has AVI as output, and it crashed for me quite often during encoding.
    Last edited by bschneider; 23rd Jun 2014 at 13:54.
  11. I don't remember if it was mentioned earlier in this thread but CSamp is a very useful tool. It allows you to read RGB values off the desktop.
  12. Member
    Join Date
    May 2014
    Location
    Memphis TN, US
    Wow, questions. I recall I had even more of my own back then, but I was too scared to post questions with all these experts around (I still am, almost).

    Originally Posted by bschneider View Post
    Originally Posted by LMotlow View Post
    [...]
    I do the same thing with QTGMC or IVTC -- those are often necessary first steps, so why keep running a long step over and over? I often delete the AVI intermediate but save the script, just in case.
    That does sound very useful for saving time with computing-intense filters. But doesn't exporting to AVI cause any kinds of problems? Doesn't a conversion to another format usually mean loss of quality?
    Most people work with lossless compressed AVI. I use the popular Lagarith. There are others, but Lagarith seems to be everywhere and has good compatibility with different systems.

    I don't encode out of VirtualDub. For SD DVD video I use HCenc or ye olde TMPGenc Plus 2.5. For h264 I use TMPGenc Mastering Works, TX264, and -- when I can remember the darn command lines -- plain X264. Yep, saving lossless AVI takes up some room. But almost all of those working files are deleted after the final output is tested. I save the avs scripts, though.

    I'm still using XP. It's a new build, but I also made a Win7 PC just for heavy HD stuff, even tho I don't work with HD that much. A lot of people out there are still using XP, even with ancient machines. I have one AMD oldie that I use just for VHS capture. Lots of people do that as well.

    Originally Posted by bschneider View Post
    So if I wanted to get more into colours in those videos, I'd have a ton of work ahead of me if I actually wanted to do it right? Well gee, aren't I a lucky one. But some flaws stretch throughout the video, don't they, like the red tint in the Orange Caramel video. So I won't get perfect colours out of it unless I put in some serious work, but I could iron out some major consistent flaws, right?
    Some things you just have to live with. With a little effort, though, a lot of that red grunge was toned down enough to make the video bearable.

    Well....yep. It's tough at first, for everybody. You don't put that much time into everything in sight, just the important stuff. Eventually you sit there scratching your head with a color problem and -- pow! -- all of a sudden it all falls together. Discard the idea that it's a matter of just adding more red or something. It's a matter of playing with those filters, trying things, learning what happens when you correct levels, fiddle with gamma, determine white balance, black balance (yep, black's a color just like the others).

    On that subject, I had to sit here today waiting for two big guys to install a new air conditioner and got to watching the After School video. Shucks, I wish they hadn't blown away the highlights! Anyway I put another version together (attached), this time with more levels and color work and some background color matching. Video should look like all the scenes were shot at the same time and place. Sometimes that's not easy, especially with changing stage light colors, or when the originator screws up the source. But I came close. Used a few tricks I learned from some people who don't seem to be around here any more.

    You might find with this mkv that decent levels and color grading make a vid look different -- almost enough to make your brain disregard some lingering artifacts. I also used FFT3DFilter in chroma-only mode to kill some rainbows and color blotches (how the heck does a broadcast video end up with that kind of noise anyway?), and GradFun2DBmod to sidestep some banding. The mkv looks sharper. But I never sharpened it. Not once. All done with VirtualDub, ColorMill, and gradation curves, with lossless Lagarith work files.

    EDIT:
    Originally Posted by jagabo View Post
    I don't remember if it was mentioned earlier in this thread but CSamp is a very useful tool. It allows you to read RGB values off the desktop.
    Yep. Good tip, jagabo. I just looked for csamp myself, but that website's gone. CSamp was posted in another thread here, but I can't find it. I attached CSamp below. Handy little tool. No installer. Just copy to your desktop. Can read pixels in whatever window is active, and while it's running it puts a little icon in the taskbar. Right-click that icon to end it. Left-click the little icon when you wanna read some pixel RGB values. Why would you want to do that? Well....that's getting ahead of things. Later. Might just want to keep the zip around until then.
    Image Attached Files
    Last edited by LMotlow; 23rd Jun 2014 at 18:14.
    - My sister Ann's brother
  13. Originally Posted by bschneider View Post
    I don't like how VirtualDub only has AVI as output, and it crashed for me quite often during encoding.
    vdub can export almost anything now, through the external encoder feature . e.g. you can use ffmpeg. There is a guide on the vdub forum



    AvsPmod has a pixel sampler similar to CSamp. Like CSamp, it will return RGB values (or hex values if you wish) for the pixel under the cursor. But the benefit of doing it in AvsPmod is that you can even get YUV values. The pixel (x,y) position within the video is also given (CSamp reports the position in the native display resolution, which isn't as useful). All the data returned can be customized on the status bar to show what you want (things like FPS, colorspace, current frame, framecount, audio characteristics, and many others).
  14. LMotlow and others have given some information about YUV earlier. But to understand color a little background may help:

    Your eyes see in RGB. They have receptors that are sensitive to red light, others that are sensitive to green light, some that are sensitive to blue light, and some that are sensitive to any color (ie, grey scale; these are the most sensitive and the most numerous). RGB is pretty easy to understand. It's an additive process. You start with black then add red, green, or blue light to make colors. Red+green = yellow, red+blue = magenta, red+green+blue = white, etc. So you can make all colors you can see (more or less) with red, green, and blue.

    This is a little different from what you probably learned in school about paints and primary colors -- a subtractive process. With paints you start with a white canvas (white light falling on the paint). The paint subtracts from that white, leaving "colors". So red paint is really minus-green and minus-blue paint. Ie, the paint absorbs green and blue, only allowing red to be reflected back to your eye. Green paint is really minus-blue and minus-red paint. When you mix red and green paint together, red, green, and blue are all absorbed, leaving no light to be reflected back to your eye: black. In practice the removal isn't perfect, so you get some murky shade of brown.
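    The additive mixes described above are simple enough to verify mechanically (a toy sketch; `add_light` is a made-up helper for illustration, not anything from AviSynth or VirtualDub):

```python
# Additive color: start from black and add light channel by channel.
def add_light(*colors):
    # Each color is an (r, g, b) triple; channels add and clip at 255.
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_light(RED, GREEN))        # (255, 255, 0)   -> yellow
print(add_light(RED, BLUE))         # (255, 0, 255)   -> magenta
print(add_light(RED, GREEN, BLUE))  # (255, 255, 255) -> white
```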

    TV was originally greyscale. When color TV was invented they needed a way to transmit a color signal that could be viewed on existing black and white TVs. They came up with the idea of sending a greyscale picture along with information that tells the color TV how to add and subtract colors from that greyscale image to produce the desired final colors. Just as it takes three components to generate a full color image with RGB, it takes three components to specify a full color image with this greyscale+color system. Generically, we'll refer to this as YUV. Y (luma) is the greyscale picture; U and V (chroma) are the colors that are added/subtracted. (There are minor variations in color systems around the world. You'll see YIQ, YCbCr, YPbPr, rec.601, rec.709, etc. But they're all based on the same idea of a greyscale picture and two chroma channels.)

    Here's an example using your second video. First the original colors:

    [Attached image: color.jpg]

    Here's Y on top, U in the middle, V on the bottom (U and V are visualized as greyscale here):

    [Attached image: yuv.jpg]

    Or maybe more instructive, here is the Y+U on top, Y+V on the bottom:

    [Attached image: UV.jpg]

    One thing to note: U and V represent colors that are added or subtracted from the greyscale image. But video is usually encoded as 8 bit unsigned integers and can only have values from 0 to 255, no negative values allowed. So U and V are normalized around 128. Values above 128 indicate colors added to the greyscale image, values below 128 indicate colors subtracted from the greyscale image. When U and V are at 128 nothing is added or subtracted from the greyscale image.
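    A tiny sketch of that 128 bias (illustrative only; many formats also restrict chroma to the 16-240 broadcast range, which this ignores):

```python
# U and V are signed color differences stored biased by +128
# so they fit in an unsigned byte (0..255).
def store_chroma(diff):
    # Clamp the signed difference into the byte range around 128.
    return max(0, min(255, int(round(diff + 128))))

print(store_chroma(0))     # 128: neutral, nothing added or subtracted
print(store_chroma(40))    # 168: color added to the greyscale image
print(store_chroma(-40))   # 88:  color subtracted from it
```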

    I'll write up something about hue and saturation next...
  15. Member
    Join Date
    Feb 2010
    Location
    canada
    Wow! This is an education!
  16. Before I go into replying to what you said, JESUS CHRIST, that sample you attached is like night and day, LMotlow! Holy... Was it very hard to do that? Because the improvement is amazing! I never even realised how bad the colours were oO It's like you said, it looks sharper for sure, and the fact that you actually didn't even sharpen it is extremely impressive. And the colour I don't even need to start on, haha, sweet lord.

    Originally Posted by jagabo View Post
    I don't remember if it was mentioned earlier in this thread but CSamp is a very useful tool. It allows you to read RGB values off the desktop.
    I'll check it out, thank you!

    Originally Posted by LMotlow View Post
    Wow, questions. I recall I had even more of my own back then, but I was too scared to post questions with all these experts around (I still am, almost).
    Heh, sorry, I know I have many questions. Because of university I don't have that much time to look into things, but I'm doing my best to research a bit. I read up on a few other pages on that photo editing website you linked earlier. Mostly basics, but well, that's pretty much what I need!

    Originally Posted by LMotlow View Post
    Most people work with lossless compressed AVI. I use the popular Lagarith. There are others, but Lagarith seems to be everywhere and has good compatibility with different systems.
    I see, that's good to know then, thanks!

    Originally Posted by LMotlow View Post
    For h264 I use TMPGenc Mastering Works, TX264, and -- when I can remember the darn command lines -- plain X264.
    So far I've stuck with MeGUI, simply because Vdub always crashed. With MeGUI, I just throw in the .avs that I wrote (or stole ) beforehand and it encodes directly to MP4. Would it be better to use something else? I almost exclusively do HD video, 720p sometimes, but 1080p in most cases. I noticed a lot of people here try to recover old VHS recordings, those memories~ hehe

    Originally Posted by LMotlow View Post
    Well....yep. It's tough at first, for everybody. You don't put that much time into everything in sight, just the important stuff. Eventually you sit there scratching your head with a color problem and -- pow! -- all of a sudden it all falls together. Discard the idea that it's a matter of just adding more red or something. It's a matter of playing with those filters, trying things, learning what happens when you correct levels, fiddle with gamma, determine white balance, black balance (yep, black's a color just like the others).
    Yeah, it all sounds very complicated. I guess when I have some more free time, I should maybe dedicate a day or two to fiddling around with sliders like all of you suggested earlier, getting used to the 3 billion variables that all those colouring filters bring with them

    Originally Posted by LMotlow View Post
    On that subject, I had to sit here today waiting for two big guys to install a new air conditioner and got to watching the After School video. Shucks, I wish they hadn't blown away the highlights!
    Me too, hehe. Hope the installation went well!

    Originally Posted by LMotlow View Post
    Anyway I put another version together (attached), this time with more levels and color work and some background color matching. Video should look like all the scenes were shot at the same time and place. Sometimes that's not easy, especially with changing stage light colors, or when the originator screws up the source. But I came close. Used a few tricks I learned from some people who don't seem to be around here any more. You might find with this mkv that decent levels and color grading make a vid look different -- almost enough to make your brain disregard some lingering artifacts. I also used FFT3DFilter in chroma-only mode to kill some rainbows and color blotches (how the heck does a broadcast video end up with that kind of noise anyway?), and GradFun2DBmod to sidestep some banding. The mkv looks sharper. But I never sharpened it. Not once. All done with VirtualDub, ColorMill, and gradation curves, with lossless Lagarith work files.
    As I said in the beginning, it's such an incredible improvement! I wouldn't have thought it was possible to get so much out of it, and I didn't even notice that many issues with the video at first... Goes to show how this is all extremely fresh territory for me. What exactly does the banding look like? I found an article on noise on that photo editing site you posted and it mentioned banding noise. Is that what you mean? Or are there other sorts of banding? (here, scroll down a bit, there are three sample pictures next to each other)

    Originally Posted by LMotlow View Post
    Yep. Good tip, jagabo. I just looked for csamp myself, but that website's gone. CSamp was posted in another thread here, but I can't find it. I attached CSamp below. Handy little tool. No installer. Just copy to your desktop. Can read pixels in whatever window is active, and while it's running it puts a little icon in the taskbar. Right-click that icon to end it. Left-click the little icon when you wanna read some pixel RGB values. Why would you want to do that? Well....that's getting ahead of things. Later. Might just want to keep the zip around until then.
    That will surely help to better imagine the colours that result out of the specific RGB values! I have it on my desktop, ready to go, I might sneak a peek today

    Originally Posted by poisondeathray View Post
    vdub can export almost anything now, through the external encoder feature . e.g. you can use ffmpeg. There is a guide on the vdub forum
    I'll have to look into that! Thanks for telling me, I googled for a while, but couldn't find anything on it, weirdly. I'll look in the vdub forum as soon as I can, thank you!

    Originally Posted by poisondeathray View Post
    AvsPmod has a pixel sampler similar to CSamp. Like CSamp, it will return RGB values (or hex values if you wish) for the pixel under the cursor. But the benefit of doing it in AvsPmod is that you can even get YUV values. The pixel (x,y) position within the video is also given (CSamp reports the position in the native display resolution, which isn't as useful). All the data returned can be customized on the status bar to show what you want (things like FPS, colorspace, current frame, framecount, audio characteristics, and many others).
    Oh so it's like a sort of debugging overlay in the corner that tells me a bunch of values? That sounds handy, I will check that out!

    Originally Posted by jagabo View Post
    [YUV, RGB explanation]
    Oh, I see! I really like that canvas analogy/explanation, that sounds really interesting! I started reading about YUV in the context of TV on the Wikipedia article, but Wikipedia has a way of complicating everything so aside from the introduction, almost everything was beyond me. What you said is a lot easier to understand.

    Originally Posted by jagabo View Post
    [Examples from Orange Caramel video]
    Oh yes, visualisations are always nice. I think I got the basic principles of YUV! Now I just need to translate it to all the variables I can use for all the various filters that adjust colour. As I said, I'll probably try to derive a lot of what happens from playing with sliders in avspmod and with the ColorMill filter that LMotlow showed me. The instant previews are really helpful in avspmod and vdub!

    Originally Posted by jagabo View Post
    One thing to note: U and V represent colors that are added or subtracted from the greyscale image. But video is usually encoded as 8 bit unsigned integers and can only have values from 0 to 255, no negative values allowed. So U and V are normalized around 128. Values above 128 indicate colors added to the greyscale image, values below 128 indicate colors subtracted from the greyscale image. When U and V are at 128 nothing is added or subtracted from the greyscale image.
    Oh yes that makes sense, thanks for noting that!

    Originally Posted by jagabo View Post
    I'll write up something about hue and saturation next...
    Wonderful! I can't wait to learn more

    Originally Posted by hizzy7 View Post
    Wow! This is an education!
    I know, right? I hope that a few other people will profit from this thread as well!
  17. There's another thread where a lot of this was covered before:

    https://forum.videohelp.com/threads/361379-VCR-Hi8-capture-tests-help-evaluate

    Unfortunately, Sanlyn deleted all his posts, many of which were useful.
  18. Member
    Join Date
    May 2014
    Location
    Memphis TN, US
    Yep, I kept tracking those posts over the years. Too bad about the ruckus. I've seen others fade away over time, especially some pro tech types who posted samples from various hardware, VCR's, tbc's, 'scopes, and all. I copied a ton of posts, saved on another drive. Got to where I had to make subfolders so I could relocate stuff. Glad to see jagabo still around. Great info. Thank you.
    - My sister Ann's brother
  19. Originally Posted by jagabo View Post
    There's another thread where a lot of this was covered before:

    https://forum.videohelp.com/threads/361379-VCR-Hi8-capture-tests-help-evaluate
    I'm currently looking through it but I'm not very far yet. Thanks for posting!

    Originally Posted by jagabo View Post
    Unfortunately, Sanlyn deleted all his posts, many of which were useful.
    Originally Posted by LMotlow View Post
    Too bad about the ruckus.
    Yeah, I kept seeing the name "sanlyn" pop up over multiple threads but the post always just contained "-30-" or something like that.


    I read up a bit about terms like subsampling and found a link to fourcc.org in a doom9 thread when I wanted to research the colour systems. The FourCC overview is very technical and a bit confusing at times, but I guess it helped me understand some more details about the system. Too much to remember many specifics though. But it seemed very complete, there were tons of YUV formats in there.
  20. Originally Posted by bschneider View Post
    Originally Posted by jagabo View Post
    There's another thread where a lot of this was covered before:

    https://forum.videohelp.com/threads/361379-VCR-Hi8-capture-tests-help-evaluate
    I'm currently looking through it but I'm not very far yet. Thanks for posting!
    The earlier posts are mostly about different VHS decks and capture cards. About halfway through there's talk of levels, colors, etc.



    Originally Posted by bschneider View Post
    I read up a bit about terms like subsampling...
    YUV video usually has the chroma channels at a lower resolution than the luma channel. Again, this started with cramming the color information into a broadcast signal back when color TV was introduced. To fit the chroma channels into the existing signal required that it be sent with a lower bandwidth, hence lower resolution. This wasn't considered a huge loss because your eyes have less color resolution than greyscale resolution.

    With digital formats you'll see a few main formats:

    4:2:2 (for every four Y samples there are two U and two V samples). A 720x480 luma channel is accompanied by 360x480 chroma channels. Many capture devices use this subsampling. This is also closest to what's in a good analog video signal.

    4:1:1 (for every four Y samples in a row there is one U and one V sample). A 720x480 luma channel is accompanied by 180x480 chroma channels. NTSC DV camcorders use this subsampling.

    4:2:0 (for every 2x2 block of four Y samples there is one U and one V sample). A 720x480 luma channel is accompanied by 360x240 chroma channels. DVD, Blu-ray, and broadcast digital TV use this subsampling.
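    Those plane sizes follow mechanically from the notation, which can be sketched as follows (the divisor table is my own shorthand for the standard J:a:b readings; 720x480 is the NTSC example used above):

```python
# Chroma plane size for common subsampling schemes, given the luma size.
SUBSAMPLING = {            # (horizontal divisor, vertical divisor)
    "4:4:4": (1, 1),
    "4:2:2": (2, 1),
    "4:1:1": (4, 1),
    "4:2:0": (2, 2),
}

def chroma_size(width, height, scheme):
    h_div, v_div = SUBSAMPLING[scheme]
    return width // h_div, height // v_div

for scheme in ("4:2:2", "4:1:1", "4:2:0"):
    print(scheme, chroma_size(720, 480, scheme))
# 4:2:2 (360, 480)
# 4:1:1 (180, 480)
# 4:2:0 (360, 240)
```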
    Last edited by jagabo; 24th Jun 2014 at 12:02.
  21. Originally Posted by jagabo View Post
    The earlier posts are mostly about different VHS decks and capture cards. About halfway through there's talk of levels, colors, etc.
    Ah, good to know!

    Originally Posted by jagabo View Post
    YUV video usually has the chroma channels at a lower resolution than the luma channel.
    Yes, I read about that and that the eye perceives greyscale differences better than colour differences too.

    Originally Posted by jagabo View Post
    Again, this started with cramming the color information into a broadcast signal back when color TV was introduced. To fit the chroma channels into the existing signal required that it be sent with a lower bandwidth, hence lower resolution.
    Oh, what I read was simply about compression and such, didn't mention its TV roots. Good to know, thanks!

    Originally Posted by jagabo View Post
    4:2:2 (for every four Y samples there are two U and two V samples). A 720x480 luma channel is accompanied by 360x480 chroma channels. Many capture devices use this subsampling.

    4:1:1 (for every four Y samples in a row there is one U and one V sample). A 720x480 luma channel is accompanied by 180x480 chroma channels. NTSC DV camcorders use this subsampling.

    4:2:0 (for every 2x2 block of four Y samples there is one U and one V sample). A 720x480 luma channel is accompanied by 360x240 chroma channels. DVD, Blu-ray, and broadcast digital TV use this subsampling.
    It was a bit confusing this time around. The German Wikipedia page actually has a nice visualisation of it (of course I don't know if it's accurate, but it seemed logical): here. So the higher the second and third number, the more accurately colours are determined? Does chroma bleeding have to do with this? As I understood it, chroma bleeding means that colours tend to... flow instead of being clear cut? Meaning that two shades of red being separated by a thin, black line might bleed over into the black line, so that line becomes invisible? Only checking every second pixel for colour horizontally sounds a bit like interlacing, but for colours, weird analogy maybe, hehe.
  22. Originally Posted by bschneider View Post
    So the higher the second and third number, the more accurately colours are determined?
    Yes. The higher the subsampling rate the sharper the colors are.

    Originally Posted by bschneider View Post
    Does chroma bleeding have to do with this?
    Yes, to some extent. Many other things can cause color bleeding.

    Some examples: https://forum.videohelp.com/threads/294144-Viewing-tests-and-sample-files?p=1792760&vie...=1#post1792760
  23. Originally Posted by jagabo View Post
    Yes, to some extent. Many other things can cause color bleeding.

    Some examples: https://forum.videohelp.com/threads/294144-Viewing-tests-and-sample-files?p=1792760&vie...=1#post1792760
    That is very nice material, very good, clear examples. And one for deringing too! Thank you
  24. Member
    Join Date
    May 2014
    Location
    Memphis TN, US
    Search PM
    Man, thanks to jagabo I'm getting a full semester of education here. Glad I got back to this thread.

    Originally Posted by bschneider View Post
    That sample you attached is like night and day, ... Was it very hard to do that?
    Not so much "hard" as very tricky. Rough going if you're new to it. Honest, a lot of this bad color stuff, you guess a lot. Lord knows how the original event looked. I looked at my mkv this morning and said, well, this is too red, that's too blue, etc. Always let it steep overnight. It's like what they say about getting married: things look different the next day.

    I made some pics about the denoising and color. The top image is the original frame 116 from After School, bottom is the mkv version. I think it's too red. I also think the gals aren't wearing pure white. Eyeing other shots, that white should have a hint of pale jade. Levels and colors change with each shot. I started with the background color, which looks different in every scene and in different parts of the stage. Overall there should be more green IMO. That ain't easy, or the gals turn yellow. I tried matching that background color in each shot -- which means, of course, you cut the clip into 4 separate files, work each one, and read pixel values while adjusting. Avisynth came first, to correct basic levels and prevent RGB conversion from losing whatever brights and darks were left from the careless broadcaster.

    Both images are reduced to 720 pixels wide. I attached full sized 1920x1080 originals. The smaller pics show the overall color change, which was YUV levels first, then color RGB tweaks. You can see how fixing black levels makes it seem sharper. The original has no depth or dimension. The original backgrounds don't match either, even considering lighting changes. I still think the fix is too red and a bit dark in the midtones. You can fix that with ColorMill by raising the midpoint. The "midpoint" is RGB 128, but you can raise it in small increments to affect the midtones with minimal damage to darks and brights.

    [Attached image: f116 - original.jpg]

    [Attached image: f116 - after.jpg]

    Below is a pic of the YUV levels histogram, frame 116 original. You can see that black levels are a little anemic. The luma stretch into the unsafe zone at the right isn't as bad as the other shots -- but remember that the darks and brights will be expanded in RGB. In RGB I get pixel readings off the costume that hit or exceed RGB 255. That can make brights look too hot. Pixel readings off the face show too much green.
    [Attached image: f116-original YUV Levels.png]

    Noise: In the attached 1920x1080's you'll see interlace effects, but notice that many edges look a bit rough and the combing seems excessive. The interlacing looks weird to me anyway (where are those jaggies coming from in other parts of the images?). Those noisy edges get more noisy when the video plays. The santiag plugin is a mild edge smoother. You can set it higher or use something strong like SangNom, but I thought those did too much damage. I don't know how well set up your monitor is, but in the big original pic look at the mild mottling and uneven texture in the background. Doesn't look too bad, but consider that it changes with every frame and, therefore, it gets noisy in motion. I used MCTemporalDenoise and added FFT3Dfilter in chroma mode to smooth that stuff and make the background and other flat areas more uniform. Those plugins also helped clean some ugly blotches in skin tones elsewhere.

    I should mention that the samples use Rec709 for HD color. Most of the filters use Rec601, so I had to jockey back and forth between the two color systems. I used the Avisynth ColorMatrix plugin for the changeover. BT.709 and BT.601 matrices are similar, but not exactly alike. Orange Caramel was easier to work with after using the right matrix for filtering.
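The difference between the two matrices is easy to see in the luma coefficients. The BT.601 and BT.709 weights below are the standard published values; the Python sketch itself is just my illustration:

```python
# Illustration: decoding with the wrong matrix shifts colors, because
# BT.601 and BT.709 weight R, G, and B differently when forming luma.
def luma(r, g, b, matrix="601"):
    if matrix == "601":
        kr, kg, kb = 0.299, 0.587, 0.114    # BT.601 coefficients
    else:
        kr, kg, kb = 0.2126, 0.7152, 0.0722  # BT.709 coefficients
    return kr * r + kg * g + kb * b

# A saturated green patch lands at a noticeably different luma:
print(round(luma(0, 255, 0, "601")))  # 150
print(round(luma(0, 255, 0, "709")))  # 182
```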

    I posted 1920x1080 PNG's separately. They take up too much room in the browser. The forum's viewer doesn't show them full size, so download them if you would, and look them over at 100% size. I also note that some forums make PNG and GIF look too dark in some browsers - as you can likely see if the attached 1920's are displayed below. For some reason JPG looks about right. The JPGs posted are 96% quality, made in Photoshop with the HVSJPEG plugin. Glad I spent that twenty on it.

    Sometimes, though, heavy color work doesn't make a huge difference. It depends on how badly the original is screwed up.

    EDIT: Oh, I forgot. You asked about banding. Here's a whole page of banding and macroblock examples from Google. Just click on some pics. https://www.google.com/search?q=color+banding&tbm=isch&imgil=bTAhObOS5_AZaM%253A%253Bh...40%3B800%3B600
    [Attached thumbnails: f116 - original 1920x1080.png, f116 - after 1920x1080.png]

    Last edited by LMotlow; 24th Jun 2014 at 14:56.
    - My sister Ann's brother
  25. A link was given earlier to the Wikipedia article about YUV. That gave all the math but a picture will make it much more understandable. From http://techpubs.sgi.com/library/dynaweb_docs/0650/SGI_Developer/books/DIVO_OG/sgi_html/apd.html



    You can see here that the RGB cube is rotated within the YUV colorspace such that it's standing on its black corner at Y=16, U=128, V=128, with its white corner at Y=235, U=128, V=128. (Note that the image is labeled as Y, Cb, Cr, I've translated that to Y, U, V since we work in YUV in AviSynth) The line of the Y axis in the image is where all the U and V values are 128. All the grey shades of the RGB cube fall on that axis, from 0,0,0 to 255,255,255.

    The YUV cube is much bigger than the RGB cube. Only YUV values that fall inside the RGB cube are "legal". It's generally stated that legal Y values are between 16 and 235, legal U and V values between 16 and 240. But not all combinations of YUV within those bounds are legal. Look at the black corner of the RGB cube where it touches Y=16. The only U and V values that fall within the RGB cube at that point are 128. The same is true at the white tip of the RGB cube. Between those extremes the extent of legal U and V values depends on the value of Y.

    One way to check your YUV image for illegal colors is to convert it to RGB (where illegal RGB values get truncated), convert it back to YUV, then compare the YUV values before and after the round trip conversion. Where the YUV values are identical the colors were legal. Where they differ the colors were illegal. This isn't quite perfect since YUV/RGB conversion with 8 bit integers isn't 100 percent lossless. But if you add a small fudge factor it's accurate enough for most purposes. I wrote a function that does this. It highlights illegal YUV pixels with a user selectable color:
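A sketch of that round-trip test for a single pixel (the conversion coefficients are the common studio-range BT.601 approximations; the fudge factor of 3 is an arbitrary choice of mine, not from jagabo's function):

```python
# Round-trip legality check: YUV -> clamped RGB -> YUV, then compare.
def clamp(x):
    return max(0, min(255, x))

def yuv_to_rgb(y, u, v):
    # Studio-range BT.601 approximation; out-of-cube colors clip here.
    r = 1.164 * (y - 16) + 1.596 * (v - 128)
    g = 1.164 * (y - 16) - 0.813 * (v - 128) - 0.391 * (u - 128)
    b = 1.164 * (y - 16) + 2.018 * (u - 128)
    return clamp(round(r)), clamp(round(g)), clamp(round(b))

def rgb_to_yuv(r, g, b):
    y = 16 + 0.257 * r + 0.504 * g + 0.098 * b
    u = 128 - 0.148 * r - 0.291 * g + 0.439 * b
    v = 128 + 0.439 * r - 0.368 * g - 0.071 * b
    return round(y), round(u), round(v)

def is_legal(y, u, v, fudge=3):
    # If clamping changed nothing (within rounding slop), the color was legal.
    y2, u2, v2 = rgb_to_yuv(*yuv_to_rgb(y, u, v))
    return abs(y - y2) <= fudge and abs(u - u2) <= fudge and abs(v - v2) <= fudge

print(is_legal(126, 128, 128))  # True  (mid gray survives the round trip)
print(is_legal(16, 240, 240))   # False (clipped badly in RGB)
```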

    https://forum.videohelp.com/threads/360935-Capturing-Correct-Chroma-and-Hue-Levels-From...=1#post2289910

    Earlier in that thread Gavino gave a more rigorous test but it takes forever to start up.

    Note that my HighlightBadRGB code assumes rec.601 color. It would have to be modified for the rec.709 colors usually used with HD video.

    This is a good place to introduce the color vectorscope. A vectorscope is a 2d plot of U vs. V values. It's a view of the YUV cube from directly overhead (or below). Histogram() has a vectorscope mode ("color").

    [Attached image: hist.jpg]

    VideoScope() also has an optional vectorscope. Here is the result of VideoScope("both", true, "U", "V", "UV"):

    [Attached image: vscope.jpg]

    VideoScope shows the vectorscope vertically flipped compared to Histogram(). Some vectorscope displays show the locations of the standard SMPTE color bars. Sony's Vegas, for example:

    http://www.sonycreativesoftware.com/using_the_vegas_pro_color_scopes

    Greys are at the middle of the vectorscope box. Colors get more saturated toward the edges.
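A minimal sketch (my own) of what the vectorscope is plotting: saturation is just the distance of a pixel's (U, V) from the neutral center at (128, 128):

```python
import math

# Saturation on a vectorscope = distance from the neutral UV center.
def saturation(u, v):
    return math.hypot(u - 128, v - 128)

print(saturation(128, 128))  # 0.0 -> neutral gray sits at the center
# A strongly colored pixel plots farther from the center:
print(saturation(90, 240) > saturation(120, 140))  # True
```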
    Last edited by jagabo; 24th Jun 2014 at 17:02.
  26. Originally Posted by LMotlow View Post
    Glad I got back to this thread.
    So am I!

    Originally Posted by LMotlow View Post
    Not so much "hard" as very tricky. Rough going if you're new to it. Honest, a lot of this bad color stuff, you guess a lot.
    I imagine... I would probably use one of the many programs/plugins that have sliders and previews and simply play around until I found something that looks more natural.

    Originally Posted by LMotlow View Post
    I looked at my mkv this morning and said, well, this is too red, that's too blue, etc. Always let it steep overnight. It's like what they say about getting married: things look different the next day.
    That's probably good advice for me, sometimes I get fixated on a problem and can't stop. I will try to keep this in mind.

    Originally Posted by LMotlow View Post
    I made some pics about the denoising and color. The top image is the original frame 116 from After School, bottom is the mkv version. I think it's too red. I also think the gals aren't wearing pure white. Eyeing other shots, that white should have have a hint of pale jade. Levels and colors change with each shot. I started with the background color, which looks different in every scene and in different parts of the stage. Overall there should be more green IMO. That ain't easy, or the gals turn yellow. I tried matching that background color in each shot -- which means, of course, you cut the clip into 4 separate files, work each one, and read pixel values while adjusting. Avisynth came first, to correct basic levels and prevent RGB conversion from losing whatever brights and darks were left from the careless broadcaster.
    Thank you for the pictures! I downloaded them and looked at them, they really highlight what impressive progress you have made! I'm not entirely sure why you think that the clothes aren't plain white, but of course I can't prove otherwise, I'd chalk that up to my lack of experience. You think it's too red, I guess I can kind of see what you mean, but not greatly. How did you read pixel values while staying in avisynth? I assume you could use what poisondeathray mentioned earlier in avspmod, switch on that overlay that gives you the values and then play with the variables? But since you didn't like avspmod, did you use CSamp then? Since ColorMill allows me to play with RGB values, is it safe to assume that it converts to RGB and causes the additional problems with the brights and darks you mentioned?

    Originally Posted by LMotlow View Post
    Both images are reduced to 720 pixels wide. I attached full sized 1920x1080 originals. The smaller pics show the overall color change, which was YUV levels first, then color RGB tweaks. You can see how fixing black levels makes it seem sharper. The original has no depth or dimension. The original backgrounds don't match either, even considering lighting changes. I still think the fix is too red and a bit dark in the midtones. You can fix that with ColorMill by raising the midpoint. The "midpoint" is RGB 128, but you can raise it in small increments to affect the midtones with minimal damage to darks and brights.
    You did YUV levels first, so you used the SmoothLevels filter in avisynth? You say you did RGB tweaks, does that mean you finally converted it to RGB? I'm a bit confused how this works together, since it comes up so often. I can definitely see it looking sharper, the effect is astounding. I can see what you mean by a bit too dark, but only slightly in my opinion, the picture is looking pretty damn good. When you say raising the midpoint, I'm not quite sure what you mean. I assume you mean in "Levels" I use the slider called "Middle"? Would make sense since we're talking about brightness and midtones, but I want to make sure I've got the right idea.

    Assuming I used ColorMills to fix the colour of my video, hypothetically. How would I go about exporting these settings ideally? (It's surprisingly easy to just take one frame, play around with the sliders carefully and come up with a better picture. It's fun to play around with it a bit :3 Obviously this will be much more work if you have to account for every single scene...)

    Originally Posted by LMotlow View Post
    Below is a pic of the YUV levels histogram, frame 116 original. You can see that black levels are a little anemic. The luma stretch into the unsafe zone at the right isn't as bad as the other shots -- but remember that the darks and brights will be expanded in RGB. In RGB I get pixel readings off the costume that hit or exceed RGB 255. That can make brights look too hot. Pixel readings off the face show too much green.
    You say that black levels are a little anemic. By that you mean the fact that on the first part of the histogram, the left side is nearly empty, while on the right side, there are pixels even beyond what would be recommended, correct? (Sorry that I'm asking about literally every sentence you write, I'm still getting used to reading these graphs)
    Yeah, I noticed that the costumes, especially the pants, have been hit pretty hard. Finally one thing even my eyes can see right away! I guess that's something I'll have to live with, but we're making so much progress on many other fronts, I'm not too broken up about that.

    Originally Posted by LMotlow View Post
    Noise: In the attached 1920x1080's you'll see interlace effects, but notice that many edges look a bit rough and the combing seems excessive. The interlacing looks weird to me anyway (where are those jaggies coming from in other parts of the images?). Those noisy edges get more noisy when the video plays. The santiag plugin is a mild edge smoother. You can set it higher or use something strong like SangNom, but I thought those did too much damage. I don't know how well set up your monitor is, but in the big original pic look at the mild mottling and uneven texture in the background. Doesn't look too bad, but consider that it changes with every frame and, therefore, it gets noisy in motion. I used MCTemporalDenoise and added FFT3Dfilter in chroma mode to smooth that stuff and make the background and other flat areas more uniform. Those plugins also helped clean some ugly blotches in skin tones elsewhere.
    The interlacing does look strong, but if I'm not mistaken, QTGMC did quite a good job. Which doesn't account for all the other problems, but it's a start. An edge smoother... Somewhat like anti-aliasing or something completely different? My monitor... it's a good monitor, but I didn't do much in the way of setup. Pretty much as it came. I can definitely see what you mean by mottling, yes, it's noticeably better in your filtered version and I imagine that stuff is really messy when in motion.

    Good to know that those plugins helped with skin blotches, because those were really bothering me! Having that fixed is a huge plus, thank you! Looking at FFT3Dfilter's documentation, I assume chroma mode means you set the "plane" variable to 3?

    Originally Posted by LMotlow View Post
    I should mention that the samples use Rec709 for HD color. Most of the filters use Rec601, so I had to jockey back and forth between the two color systems. I used the Avisynth ColorMatrix plugin for the chageover. BT.709 and BT.601 matrices are similar, but not exactly alike. Orange Caramel was easier to work with after using the right matrix for filtering.
    Sorry, my brain gives out around here, I don't know anything about those. Guess I have something new to look up tomorrow

    Originally Posted by LMotlow View Post
    Sometimes, though, heavy color work doesn't make a huge difference. It depends on how badly the original is screwed up.
    Sure, but I'm kind of glad that it coincidentally came up in the example I used, that way I get used to that aspect of editing and encoding as well.

    Originally Posted by LMotlow View Post
    EDIT: Oh, I forgot. You asked about banding. Here's a whole page of banding and macroblock examples from Google. Just click on some pics.
    Ah yes, that is how I imagined it, thanks!

    @jagabo: I just saw you posted something as well, I'll have to look at it tomorrow. It's past midnight here, I'm getting sleepy, it looks highly interesting though and I can't wait to get my hands on it :3 Thanks to all of you for your dedication and your patience!

    EDIT: Just looked at Vdub and ColorMill, noticed I could check "Show Image Formats". It says the input is RGB32 as well as the output. How is the input RGB32? Shouldn't it be YUV originally?
    Last edited by bschneider; 24th Jun 2014 at 17:51.
  27. Member
    Originally Posted by bschneider View Post
    I would probably use one of the many programs/plugins that have sliders and previews and simply play around until I found something that looks more natural.
    LOL, well it's not quite that random. You start with some common principles. Example: you know from your color theory class (or from many of jagabo's old posts, LOL) that white, black, and all shades of gray in between are each made from equal proportions of Red+Green+Blue. The RGB value of super-bright white is R-255 + G-255 + B-255. A light gray might have RGB values 192-192-192. Most people know neutral gray as "middle gray" or RGB 128-128-128. The dark gray shadows in the folds of a gray coat would be 64-64-64. Then there are practical guidelines for skin tones: skin that looks too pink usually has too much blue overcoming the red and green (i.e., yellow) in skin tones. An obvious overbalance case is the oversaturated red in OrangeCaramel.

    Knowing how something is supposed to look is where I start. If your white shirt is RGB 190-160-160, that shirt either doesn't look white (it looks reddish) or you need to adjust something to make it right. If you change the above red from 190 to 160 to match the other two channels, you'll have white. It won't be a very bright white (RGB 160 is really a light gray, but it depends on the lighting involved). If it's supposed to be a brighter white, then reducing red didn't help. Instead, raise Green and Blue to 190. RGB 190-190-190 is a fairly bright area of a fairly white shirt. But make that shirt into RGB 255-255-255 and it'll look like it's neon, which would look pretty tacky.
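A toy sketch (mine, reusing the RGB 190-160-160 shirt example) of the two options for neutralizing a color cast:

```python
# Two ways to neutralize a cast: pull the high channel down to the others,
# or raise the low channels up to it. Both yield equal R=G=B, i.e. neutral.
def neutralize_down(r, g, b):
    m = min(r, g, b)
    return (m, m, m)  # dimmer, but neutral

def neutralize_up(r, g, b):
    m = max(r, g, b)
    return (m, m, m)  # brighter, and neutral

print(neutralize_down(190, 160, 160))  # (160, 160, 160) -> light gray
print(neutralize_up(190, 160, 160))    # (190, 190, 190) -> brighter white
```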

    The idea behind getting whites and grays correct is that if you get all 3 colors in proper balance for those neutral hues, the other colors fall in place. If you don't have any whites, grays, or blacks, well.....you're kinda stuck, and will have to work on stuff like skin tones, or proper color for familiar objects like trees, leaves, specific flowers, brick homes, etc. Graphics look "real" when they remind us of the real stuff we see all the time. (Lordy, I think I just quoted from about 10 of poisondeathray's posts I learned from a while back !!).

    But if you're working on a Disney fantasy, well...anything goes.

    Originally Posted by bschneider View Post
    How did you read pixel values while staying in avisynth?
    I run Avisynth scripts in VirtualDub and used one of my pixel samplers to read right off VDub's input and output panes.

    Originally Posted by bschneider View Post
    You did YUV levels first, so you used the SmoothLevels filter in avisynth? You say you did RGB tweaks, does that mean you finally converted it to RGB?
    I used the first script I posted the other day, but I modified it later. I added a line to the very end of the script, to prep for RGB:

    Code:
    ConvertToRGB32(matrix="Rec601", interlaced=true)
    After checking the output and tweaking ColorYUV, SmoothLevels, and everything else coming from Avisynth, I told VirtualDub to save that script's output as lossless Lagarith RGB. After that, I opened the AVI directly in VirtualDub without a script, cut out the sections I wanted and made RGB corrections to each, then saved each small clip as Lagarith RGB. The 4 clips can be joined later in Avisynth.

    Originally Posted by bschneider View Post
    When you say raising the midpoint, I'm not quite sure what you mean. I assume you mean in "Levels" I use the slider called "Middle"? Would make sense since we're talking about brightness and midtones, but I want to make sure I've got the right idea.
    You're on the right track. The middle slider in standard "Levels" controls is for gamma. Usually gamma affects midtones (that's another one of those simplifications!). I referred to the "Middle Point" section at the top of ColorMill's dialog window. CM has a "Gamma" section too, but Middle Point is a bit different. It contracts or expands values over a more selective range. If you start with the standard RGB assumption for middle point at RGB 128, you can change it in small increments to higher or lower values. This can help compensate for video sources that have incorrect IRE range or that seem to have everything shifted to the left or right in a histogram -- I'd say the AfterSchool clip qualifies along those lines.

    Originally Posted by bschneider View Post
    Assuming I used ColorMills to fix the colour of my video, hypothetically. How would I go about exporting these settings
    Save VirtualDub filter settings as .vcf files. Click "File" -> "Save processing settings...", then give the .vcf a name and location. You can import saved settings by doing the reverse with "Load processing settings...". A .vcf file is plain text. You can open and read it in Notepad. Careful -- if you've loaded some filters already and then load a .vcf on top of them, guess what? The filters you already loaded will be kaput or overwritten. I found out the hard way.

    Originally Posted by bschneider View Post
    An edge smoother... Somewhat like anti-aliasing or something completely different?
    Santiag is often used for anti-alias. Among aa filters, I guess santiag is one of the least destructive, IMO. By screwy interlacing I mean to say that aliasing and sawtooth edges usually result from faulty interlacing or deinterlacing. Jaggies don't come from nowhere.

    Originally Posted by bschneider View Post
    Just looked at Vdub and ColorMill, noticed I could check "Show Image Formats". It says the input is RGB32 as well as the output. How is the input RGB32? Shouldn't it be YUV originally?
    Virtualdub filters work in RGB. Most of them, anyway. Same is true for almost every similar editor or NLE. A few of the big guys can work in YUV, and so can some of my TMPGenc encoder's filters. You can always tell Virtualdub to save or compress to other formats.
    Last edited by LMotlow; 24th Jun 2014 at 23:38.
    - My sister Ann's brother
  28. Originally Posted by jagabo View Post
    The YUV cube is much bigger than the RGB cube. Only YUV values that fall inside the RGB cube are "legal". It's generally stated that legal Y values are between 16 and 235, legal U and V values between 16 and 240. But not all combinations of YUV within those bounds are legal. Look at the black corner of the RGB cube where it touches Y=16. The only U and V values that fall within the RGB cube at that point are 128. The same is true at the white tip of the RGB cube. Between those extremes the extent of legal U and V values depends on the value of Y.
    Oh yeah, I never thought about that. What happens if a colour is determined to fall outside the cube? Will it be approximated to the nearest legal colour?

    Originally Posted by jagabo View Post
    [Checking for illegal colours]
    I see, I will bookmark that function! Thank you!

    Originally Posted by jagabo View Post
    Note that my HighlightBadRGB code assumes rec.601 color. It would have to be modified for the rec.709 colors usually used with HD video.
    I looked it up now, can't do much with the specifics but I roughly understood what it's about, I guess. LMotlow mentioned that rec.601 and rec.709 aren't identical, but not very different. So that means that conversion between the two wouldn't cause too many problems, right?

    Originally Posted by jagabo View Post
    This is a good place to introduce the color vectorscope. A vectorscope is a 2d plot of U vs. V values. It's a view of the YUV cube from directly overhead (or below). Histogram() has a vectorscope mode ("color").
    Oh yes, I remember you mentioning that early on in the thread. I took a peek at it already but couldn't do much with it yet, I just assumed it's kind of like the regular Histogram, it matches each pixel to the colour on the vectorscope.

    Originally Posted by jagabo View Post
    VideoScope() also has an optional vectorscope. Here is the result of VideoScope("both", true, "U", "V", "UV"):

    VideoScope shows the vectorscope vertically flipped compared to Histogram().
    I read that videoscope only supports YUY2 colorspace. Since my video is YV12, that means conversion, right?

    Originally Posted by jagabo View Post
    Greys are at the middle of the vectorscope box. Colors get more saturated toward the edges.
    Ooh, yeah, I see! Thank you!

    Originally Posted by LMotlow View Post
    LOL, well it's not quite that random. You start with some common principles. Example, you know from your color theory class (or from many of jagabao's old posts, LOL) that white, black, and all shades of gray in between are each made from equal proportions of Red+Green+Blue. [...] An obvious overbalance case is the oversaturated red in OrangeCaramel.
    Haha, I'm glad to hear that! Yeah, the part about grey makes sense. Got it, I'll try looking at things that seem easily identifiable first, very good idea.

    Originally Posted by LMotlow View Post
    Knowing how something is supposed to look is where I start. [...] But make that shirt into RGB 255-255-255 and it'll look like it's neon, which would look pretty tacky.
    Yeah, that means I need to know what colour something on stage should have and try to go by that. In the After School example, I'm still not quite sure why you believe that the costumes aren't supposed to be pure white, but rather have a light jade colour?

    Originally Posted by LMotlow View Post
    The idea behind getting whites and grays correct is that if you get all 3 colors in proper balance for those neutral hues, the other colors fall in place. [...]
    Okay, makes sense, I would hope so. But evidently, you're not 100% happy with the two sample pictures you posted earlier, so it can't be as easy as this :/

    Originally Posted by LMotlow View Post
    But if you're working on a Disney fantasy, well...anything goes.
    I never did that so far and I don't plan on doing that, so I guess I can count myself lucky

    Originally Posted by LMotlow View Post
    I run Avisynth scripts in VirtualDub and used one of my pixel samplers to read right off VDub's input and output panes.
    I see.

    Originally Posted by LMotlow View Post
    I used the first script I posted the other day, but I modified it later. I added a line to the very end of the script, to prep for RGB:

    Code:
    ConvertToRGB32(matrix="Rec601", interlaced=true)
    After checking the output and tweaking ColorYUV, SmoothLevels, and everything else coming from Avisynth, I told VirtualDub to save that script's output as lossless Lagarith RGB. After that, I opened the AVI directly in VirtualDub without a script, cut out the sections I wanted and made RGB corrections to each, then saved each small clip as Lagarith RGB. The 4 clips can be joined later in Avisynth.
    So basically, as long as you stayed in avisynth, before the last step, everything stayed in YUV? SmoothLevels and ColorYUV adjust YUV values? Then you converted to RGB because you knew that your VirtualDub filters worked with RGB? And then you have another plugin that allows you to export the script in a format Lagarith accepts? Or does Lagarith work with .vcf that you explained below?

    Originally Posted by LMotlow View Post
    You're on the right track. The middle slider in standard "Levels" controls is for gamma. Usually gamma affects midtones (that's another one of those simplifications!). I referred to the "Middle Point" section at the top of ColorMill's dialog window. [...]
    Ah yes, I saw that, but couldn't really understand what it would be for. Okay, thanks a lot!

    Originally Posted by LMotlow View Post
    Save VirtualDub filter settings as .vcf files. Click "File" -> "Save processing settings...", then give the .vcf a name and location. You can import saved settings by doing the reverse with "Load processing settings...". A .vcf file is plain text. You can open and read it in Notepad. Careful -- if you've loaded some filters already and then load a .vcf on top of them, guess what? The filters you already loaded will be kaput or overwritten. I found out the hard way.
    Okay, vdub gives me a .vdscript file, but I can open it with Notepad++ just as well, so I assume it's no issue. How can I use this file aside from backing up filter changes I applied in VirtualDub? How would I best go about continuing at that stage? Exporting it to another encoder like Lagarith? Or can I use these variables for avisynth as well, somehow? Oh boy, I'm sure that overwriting bit will get me once or twice... But I will try my best to keep it in mind.

    Originally Posted by LMotlow View Post
    Santiag is often used for anti-alias. Among aa filters, I guess santiag is one of the least destructive, IMO. By screwy interlacing I mean to say that aliasing and sawtooth edges usually result from faulty interlacing or deinterlacing. Jaggies don't come from nowhere.
    I see. Do you think there was a problem with the deinterlacing process? I heard mostly good things about QTGMC and I got a lot of assistance from you guys, so I suppose it's more likely that the interlacing was screwy to begin with?

    Originally Posted by LMotlow View Post
    Virtualdub filters work in RGB. Most of them, anyway. Same is true for almost every similar editor or NLE. A few of the big guys can work in YUV, and so can some of my TMPGenc encoder's filters. You can always tell Virtualdub to save or compress to other formats.
    Oh okay. I was just confused that even "Input" stated that RGB32 went in, when I opened an untouched file that should be YV12? Should I tell VirtualDub to save to YUV after working the colours? Wouldn't it cause yet another conversion? Or will encoding to RGB result in any problems? I assume the compression would be worse, i.e. the file size would be larger?
  29. Originally Posted by bschneider View Post
    Originally Posted by jagabo View Post
    The YUV cube is much bigger than the RGB cube. Only YUV values that fall inside the RGB cube are "legal". It's generally stated that legal Y values are between 16 and 235, legal U and V values between 16 and 240. But not all combinations of YUV within those bounds are legal. Look at the black corner of the RGB cube where it touches Y=16. The only U and V values that fall within the RGB cube at that point are 128. The same is true at the white tip of the RGB cube. Between those extremes the extent of legal U and V values depends on the value of Y.
    Oh yeah, I never thought about that. What happens if a colour is determined to fall outside the cube? Will it be approximated to the nearest legal colour?
    It depends on the software or device. YUV colors outside the RGB cube will result in RGB component values less than 0 or greater than 255. Most software and devices these days perform a bounds check and limit the resulting RGB values to the 0 to 255 range. Typically they work internally at higher precision with signed numbers, say 16 bit signed integers, then perform a bounds check before saving the result. For example, if red is less than 0 make it 0; if red is greater than 255 make it 255. Then do the same for green and blue.

    But I have seen software that doesn't perform that bounds check. An 8 bit unsigned value can't hold negative numbers or positive numbers greater than 255. When values exceed that range they "wrap around": 256 becomes 0, 257 becomes 1, 258 becomes 2, etc. So if a pixel was supposed to be very bright red (R=255, G=0, B=0), but the YUV to RGB conversion resulted in 257 instead of 255 for the red, it would wrap around to 1. Instead of a bright red pixel you would have a nearly black pixel. At the negative end you get the opposite: -1 wraps to 255, -2 wraps to 254, etc. So in black areas you might get bright pixels.
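    The clamp-versus-wraparound behavior described above is easy to demonstrate. A minimal Python sketch (illustrative only, not from any video library):

```python
def clamp8(v):
    """Bounds-checked conversion: limit the value to an 8-bit channel's 0-255 range."""
    return max(0, min(255, v))

def wrap8(v):
    """Unchecked conversion: the value wraps around modulo 256."""
    return v % 256

# A "very bright red" pixel whose YUV->RGB conversion overshot slightly:
print(clamp8(257))   # 255 -- stays bright red
print(wrap8(257))    # 1   -- wraps to nearly black

# An undershoot in a dark area:
print(clamp8(-2))    # 0
print(wrap8(-2))     # 254 -- a bright pixel where black should be
```

    Python's `%` already wraps negative numbers the way two's-complement hardware does, which is why `wrap8(-2)` lands on 254.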

    Originally Posted by bschneider View Post
    I looked it up now, can't do much with the specifics but I roughly understood what it's about, I guess. LMotlow mentioned that rec.601 and rec.709 aren't identical, but not very different. So that means that conversion between the two wouldn't cause too many problems, right?
    On casual viewing you probably won't notice. Or the colors will just appear to be slightly "off". Here's an example that shows rec.601 incorrectly interpreted as rec.709, and vice versa:

    https://forum.videohelp.com/threads/329866-incorrect-collor-display-in-video-playback?p...=1#post2045830

    As a general rule: standard definition video uses rec.601 YUV, high definition uses rec.709 YUV. The correct color model can be specified within the video. But if it's not you have to guess.
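    To make the rec.601 versus rec.709 difference concrete, here's a rough Python sketch that decodes the same stored YUV pixel with both sets of coefficients. The constants are the commonly quoted limited-range approximations; exact values vary slightly by source:

```python
def yuv_to_rgb(y, u, v, matrix="601"):
    """Convert one limited-range YUV pixel to 8-bit RGB with either matrix.
    Coefficients are common rec.601 / rec.709 approximations."""
    if matrix == "601":
        kr, kg_u, kg_v, kb = 1.596, 0.392, 0.813, 2.017
    else:  # rec.709
        kr, kg_u, kg_v, kb = 1.793, 0.213, 0.533, 2.112
    yy = 1.164 * (y - 16)
    r = yy + kr * (v - 128)
    g = yy - kg_u * (u - 128) - kg_v * (v - 128)
    b = yy + kb * (u - 128)
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b)

# The same stored YUV pixel shifts color when decoded with the wrong matrix:
print(yuv_to_rgb(81, 90, 240, "601"))   # (254, 0, 0) -- essentially pure red
print(yuv_to_rgb(81, 90, 240, "709"))   # (255, 24, 0) -- a visibly different red
```

    The shift is modest for most colors, which matches the "slightly off" look described above, but saturated reds and greens show it clearly.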

    Originally Posted by bschneider View Post
    Originally Posted by jagabo View Post
    vectorscope...
    Oh yes, I remember you mentioning that early on in the thread. I took a peek at it already but couldn't do much with it yet
    In truth, I hardly ever use it either. If your video includes standard SMPTE color bars you can use them as a guide to your color adjustments. That is, you can adjust the hue and saturation (or whatever you need to adjust) to get the color bars right, then hope that that correction holds for the entire video. Or at least use it as a starting point for adjusting individual shots. I'll use it later when discussing hue and saturation. If you see the vectorscope graph has data out at the very edges (U or V outside the 16-240 range) you know the picture is oversaturated -- but you probably already know that just from looking at the picture. I use the U and V waveform graphs a lot more.

    Originally Posted by bschneider View Post
    I read that videoscope only supports YUY2 colorspace. Since my video is YV12, that means conversion, right?
    Yes. You need to ConvertToYUY2(interlaced=[true or false]) before calling VideoScope().
  30. Member
    Join Date
    May 2014
    Location
    Memphis TN, US
    You might be trying to cover too much detail in too many places at once. I did that at first. I used a lot of automated one-stop-shopping apps that spit out finished videos lickety-split. But I learned nothing about the details. What made me stop and look around was problem videos. Recording something off digital cable was simple. As soon as I started capturing ugly VHS, the auto apps left me standing in the rain. An old country saying describes it as 10 miles of bad road. 10 miles doesn't sound like much, but try it on a really bad one lane dirt road in February, LOL! I got deeper into this forum and others. Yep, slowed me down at first. At least every week I learned something different from posts like jagabo's and kept notes and copies. I still don't understand a lot of it, but most of the time I can at least plug something in and check the results.

    Originally Posted by bschneider View Post
    Originally Posted by jagabo View Post
    Note that my HighlightBadRGB code assumes rec.601 color. It would have to be modified for the rec.709 colors usually used with HD video.
    I looked it up now, can't do much with the specifics but I roughly understood what it's about, I guess. LMotlow mentioned that rec.601 and rec.709 aren't identical, but not very different. So that means that conversion between the two wouldn't cause too many problems, right?
    Depends. Most of the time you could get shifts in saturation or color balance. HD has generally higher saturation and other differences. I use the ColorMatrix plugin and check it out, just to cover my tracks if nothing else. If you get involved in CMS calibration systems at a site like AVSforum you get into math and charts that will have you on industrial strength Valium in no time.

    @jagabo, that script is cool. Glad for that link, I lost track of it a while back.

    Re: colorspaces for histograms and 'scopes: Check jagabo's notes on that. In VDub you can tell gradation curves to work in a different colorspace. Switching the plugin to a different colorspace takes time, though, because gradcurves builds a new copy of the video. With a long video you could be sitting there for quite a spell.

    If you run VirtualDub in "full processing mode", it converts to RGB and outputs RGB by default if you don't specify otherwise. To avoid that, set the output compression/color to what you want and use "Video" -> "Fast recompress".

    Originally Posted by bschneider View Post
    But evidently, you're not 100% happy with the two sample pictures you posted earlier, so it can't be as easy as this
    Working with off color video is a pain. It won't be perfect whatever you do. After letting it rest overnite I saw a few things I'd change, but I let it be. Mind, you can't do it with every video that comes down the pike. Life ain't long enough. I made hundreds of recordings on a DVD recorder off cable TV, and all I did was cut out commercials with a smart-rendering editor and author a menu for it. Never had to touch color or noise.

    I get the idea of a slightly off-color costume from the first image you posted. It's too green. Unless somebody built a color coordinated dark green piano and made a similar green suit, I thought that part of the image should be a black or dark gray piano and suit, with whitish piano ivories. Get the darks close to that, then work on flesh tones. If everything looks mostly normal, the costumes have a slight jade tint. But stage lighting complicates issues. Also, look at the shadow detail on those costumes in every scene. A white costume has grayish shadows, but there's no way to get gray into those costumes (all the other midtones would turn purple). Part of the problem is the yellow-red backlights, and other lights elsewhere. Maybe the tops were white and the pants different, I don't know. If I could get skin tones that didn't make the girls look like they had liver disease, I used my best guess.

    Tips from other posts and graphics sites: don't work in bright light. Work in subdued, indirect light. Don't put a lamp in front of your monitor. Don't work in total darkness, or your eyes will think the monitor is too bright and will shut down on you. If you stare at something for too long, your brain tries to "normalize" it. Turn away for a while and come back, things will look different. 24 hours makes a big difference, too. Working with an uncalibrated monitor is like shooting yourself in the foot. That's a different Pandora's box too complicated for here and now, but you can run a cheap (free) monitor check here: http://www.lagom.nl/lcd-test/. Like many people here I use an IPS LCD display calibrated with a colorimeter kit. Not cheap. But see what the lagom link has to offer for free. Lots of people use their site.

    Noise is difficult to see in bright light. I didn't recognize that background grunge until much later in the process. Sometimes you miss noise problems altogether and see it 2 days later on your TV. Drat!

    QTGMC: I didn't see a need for it here. You can use it with various parameters, but it's slower than low-power MCTD and doesn't fix the same problems. The two plugins are different. I've seen bad video that required both. In that case, might as well drive around and do some shopping while you wait. Many standard video formats are usually interlaced or telecined, so it depends on your intended output. For BluRay (AVCHD is similar), the standards are laid out pretty clearly in this post: http://forum.doom9.org/showthread.php?t=154533.

    Originally Posted by bschneider View Post
    Okay, vdub gives me a .vdscript file, but I can open it with Notepad++ just as well, so I assume it's no issue. How can I use this file aside from backing up filter changes I applied in VirtualDub?
    You can copy settings from a .vcf to use VirtualDub filters in an Avisynth script. If you stick with script-only, be sure to convert to RGB for that filter. That RGB or other conversion isn't a plugin, but a built-in Avisynth function (http://avisynth.org.ru/docs/english/corefilters/convert.htm). How you load the filter in Avisynth differs slightly with the filter. Many of them have documentation for this, or sometimes you have to search a forum or two to see how people did it. Some VDub filters don't work right from a .vcf file. FadeFX and BorderControl come to mind.

    The requirements in Avisynth are to define the VirtualDub plugin path, then define and import the filter and give it a name of your choosing. The next step is to set up a statement that uses the filter (ConvertToRGB32 first!). For example, you would set up ColorMill as follows, using the .vcf file to get the settings. The part of the .vcf file that stores ColorMill values would look like this:

    Code:
    VirtualDub.video.filters.Add("Color Mill(2.1)");
    VirtualDub.video.filters.instance[1].Config(25700, 25700, 25700, 25700, 24932, 25700, 25700, 25700, 25700, 25700, 26468, 25700, 25700, 1124, 5);
    Load that filter in Avisynth with this statement (I used a path for XP):
    Code:
    LoadVirtualDubPlugin("C:\Program Files\VirtualDub\plugins\ColorMill.vdf", "ColorMill", 1)
    Use that filter with these lines:
    Code:
    ConvertToRGB32(matrix="Rec601", interlaced=true)
    ColorMill(25700, 25700, 25700, 25700, 24932, 25700, 25700, 25700, 25700, 25700, 26468, 25700, 25700, 1124, 5)
    ConvertToYV12(interlaced=true)  #<- if you need to get back to YV12
    It can be more trouble than it's worth. You get the right filter values by first using the filter in VirtualDub, saving the .vcf data, then putting it in the script. If you want to change or tweak, you have to do it all over again unless you can figure out what those numbers mean. Pain in the whatsit. If you do this with something like gradation curves, you have a text string of over 132 characters to format, and the numbers have to be formatted as text, not numbers. Triple pain.
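    If you do this often, the copy-paste step can be scripted. A small Python sketch (the helper name and the abbreviated sample text are hypothetical, not part of VirtualDub or Avisynth) that pulls a filter's Config() numbers out of saved .vcf text:

```python
import re

# Sample .vcf text, abbreviated from the ColorMill entry shown above
# (a real ColorMill Config line has 15 values).
vcf_text = '''VirtualDub.video.filters.Add("Color Mill(2.1)");
VirtualDub.video.filters.instance[1].Config(25700, 25700, 24932, 1124, 5);'''

def config_args(text, instance=1):
    """Return the Config() argument list for one filter instance as a string,
    ready to paste into an Avisynth filter call. Returns None if not found."""
    pattern = r"instance\[%d\]\.Config\(([^)]*)\)" % instance
    m = re.search(pattern, text)
    return m.group(1) if m else None

args = config_args(vcf_text)
print("ColorMill(%s)" % args)  # paste this line into your Avisynth script
```

    This only helps with the plain numeric filters; ones that store settings as long encoded text strings (gradation curves, for example) still need hand formatting.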

    First, I'd stick with learning how to set up a script and just run it. Color can come later, and you don't always need RGB. You have to denoise and do YUV stuff first anyway. That's a subject in itself.

    Good grief, after not posting here for many years, I seem to be in overdrive, LOL! Never thought that would happen. Better get to those earlier links from jagabo and poisondeathray for a while and learn something new.
    Last edited by LMotlow; 25th Jun 2014 at 09:56.
    - My sister Ann's brother