VideoHelp Forum
  1. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    Originally Posted by mathmax View Post
    step 1

    - first, I'm a bit surprised by the way you corrected the luma range. I already asked about that earlier in the thread, but I still don't get exactly how you chose the values gain_y, off_y and cont_y. I'd rather edit the levels, for fear of distorting the scale. Moreover, I feel that you decreased the upper limit a bit too much. But I might be wrong about that... I admit I don't know the reason why you chose this way. I uploaded samples for each method below. Please don't hesitate to examine them.
    I didn't get as far ahead as this scene in processing the clip. Your samples are, I think, frame 6211. Everyone sets levels differently. The ColorYUV I used was a compromise that would keep from crushing or blowing out values in the darkest as well as brightest shots I worked with. It's not unusual to make different settings for problem scenes that require it. I just settled on a one-size-fits-all setting in this case.

    The scene you sampled has horrible lighting. But that's par for available-light work. In the panel below, with parts of your vectorscope shots cropped, ColorYUV(gain_y=-30,off_y=-4,cont_y=-10) is on the left, smoothlevels(0, 1, 255, 0, 235, limiter=2) on the right.
    Image
    [Attachment 11179 - Click to enlarge]


    The main point of interest is the right-hand (bright) side of each 'scope. Note the two sharply cut-off vertical white lines in both samples. Abrupt cutoffs like those indicate brights being blown away. If the brights are hugging the right wall, they're either being cut off or clamped. In this case, neither method can restore the curtailed highlights, because they were blown away when the shot was made in the first place.

    But I would still lower the high contrast a bit (and I would lower gamma, too, if I'd made it as far as this shot). I'd do so because leaving the brights as-is makes those room lights look like spotlights, and Michael's facial highlights and hands look too "hot". In other words, the brights seem unnatural. It would take a lot of work to finesse this shot.

    SmoothLevels as mentioned, + MCTemporalDenoise:
    Image
    [Attachment 11174 - Click to enlarge]


    I'm not sure what you mean by "distorting the scale". There's no law that says every last inch of the luma or color range has to be heavily populated. This is a dim-to-average lighted indoor shot; it's not supposed to look like a Broadway play. If this were a night scene, you'd see very little activity at the right-hand edge of a 'scope.

    ColorYUV as used earlier, + MCTemporalDenoise:
    Image
    [Attachment 11175 - Click to enlarge]


    Regardless of the levels used, both scenes look thin and washed out. You can't lower the blacks to help add some snap to this scene, lest the back of Gibbs' jacket descend into a dark, grimy pit (the shadows are already fairly grimy anyway). What you have left to work with to make the scene look indoors-y natural are the midtones (gamma) and the brights. I think the RGB histograms reveal midtone problems that would be fussy to work with.

    Back to some of the frames that appeared in the video I actually processed:

    Using the same SmoothLevels setting you used for the other shots, here is how frame 1860 looks.
    Image
    [Attachment 11176 - Click to enlarge]


    The right-hand side shows highlights starting to slip. Facial highlights tend to look rather stark when they exceed RGB 192 or so (the facial highlights here are well over that mark and hit RGB 255 in several places, which is as high as they can go). One can always fix this later if careful, but it's a hassle. Again, the lights in this scene look like over-bright neon. The Rev. Gibbs is in really bad lighting (not his fault) and facial details are starting to burn away in the glare. Look also at your shot of a similar scene in the previous post.

    Same ColorYUV settings from the earlier processed AVI, before NeatVideo and before mpg encoding:
    Image
    [Attachment 11177 - Click to enlarge]


    Similar to SmoothLevels, including the green color cast, but brights under better control -- those that still survive from the original, anyway. Lowering luma in the midrange made saturation look a little better, but this scene still has a long way to go.

    From the MPG, after some minor tweaks with TMPGenc:
    Image
    [Attachment 11178 - Click to enlarge]


    I think you get the idea, folks: much of this levels and color business involves some technical judgment, but also a lot of personal preference -- and it's very limited by the poor quality of the source. I think it looks "better" (less like football stadium arc lights), even if I'm still not satisfied with the original camera exposure or the damage to what must have originally been a better-looking video. IMHO this scene looks over sharpened, making the skin shadows too stark and contrasty. Which illustrates the rule: you can't use the same filters, settings and procedures for every scene all the time.

    Originally Posted by mathmax View Post
    As you advised, I checked several frames carefully and I could notice that the noise is more reduced and the edges are cleaner when working on separate fields. However, there is a comb artifact, which I think might come from frame averaging in MCTemporalDenoise(). So I applied QTGMC() to get rid of this artifact and the result amazes me. Please compare the following frames by switching in separate tabs:
    You can certainly see a difference. I kept getting annoyed by the combing myself, so I fired up QTGMC. But you got out of bed earlier than I did today and beat me to it.

    Originally Posted by mathmax View Post
    The last ones really look clean.. I even wonder if it's necessary to apply NeatVideo on top of it..
    That's up to you. I don't throw NeatVideo at everything. In this case it cleaned up some blotchy noise in shadows, smoothed a lot of slender lines and facial contours, etc., and seemed worth it. But I haven't had time to evaluate my own QTGMC run yet. Could be I won't need the extra cleanup.
    Originally Posted by mathmax View Post
    I'm not sure that neatvideo contributes so much to calm down the video.. but I didn't compare in motion. I thought there might be other ways to smooth and sharpen but maybe there is a real plus in neatvideo that I don't really get now...
    With the temporal smoother, I guess you're trying to fix the image distortion when the camera moves. For example the fast zoom at frame 474 that you mentioned before. Am I right?
    I was trying to smooth the whole video. But I could get along without temp smoother and just use a mild antialias in the avs script.
    Originally Posted by mathmax View Post
    I don't have TMPGenc Plus. Is it an external tool? Could you detail your settings for fixing colors and contrast?
    I don't really understand what you mean by "ColorYUV has no effect on chroma gamma". Would be nice if I could do a quick test to realize the issue...
    The gamma_u and gamma_v parameters in ColorYUV() have no effect. Only gamma_y (luma) does anything. To oversimplify the description, gamma relates to midtones in the RGB scale. The exact center of the color and luma midtone range in RGB terms is RGB 128-128-128 (middle gray).
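    To picture what a gamma tweak does, here's a rough numeric sketch (my own illustration of the usual power-curve gamma, not anything ColorYUV computes internally): black and white stay anchored, and only the midrange moves.

```python
# Illustrative power-curve gamma on an 8-bit scale (an assumption for
# demonstration, not ColorYUV's internal math).

def apply_gamma(v, gamma):
    """Map an 8-bit value through a power curve; gamma > 1 lifts midtones."""
    return round(255 * (v / 255) ** (1 / gamma))

print(apply_gamma(0, 1.2))    # black stays 0
print(apply_gamma(128, 1.2))  # middle gray rises
print(apply_gamma(255, 1.2))  # white stays 255
```

    The endpoints being fixed is why gamma is the tool for midtone problems: it brightens or darkens faces without touching the blacks or pushing the highlights any higher.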

    I'll see if I can try ColorMill on the video and come up with the equivalent of what I did in TMPGenc.....which, actually, wasn't all that much.

    Originally Posted by mathmax View Post
    As a final step, I like to add grain with grainfactory()... it sounds a bit stupid to add grain after trying so hard to get rid of it. But there is good and bad grain.. and the good kind gives depth to the video..
    Don't underestimate the old grain technique. Often it makes a good finishing touch.
    Last edited by sanlyn; 21st Mar 2014 at 17:23.
  2. Thank you for your answer

    I'm not sure what you mean by "distorting the scale".
    well.. I meant that I don't really know what I'm doing with ColorYUV(gain_y=-30,off_y=-4,cont_y=-10). I know it decreases the contrast (cont_y), shifts the luma range (off_y) and narrows it (gain_y).. but I wouldn't come up with these values instinctively and I don't know exactly how the resulting scale is altered. The combination of three functions makes it quite difficult to actually feel which parameter I should adjust.. but that's probably my lack of experience. I was curious to know how you came up with these values (-30, -4 and -10). I feel more at ease with SmoothLevels() as it processes linearly, with the possibility to curve the scale with the gamma.

    Moreover, if you compare the histograms, the one with ColorYUV has some vertical stripes that are not present with SmoothLevels(). I don't really know the reason for that..

    But I would still lower the high contrast a bit (and I would lower gamma, too, if I'd made it as far as this shot). I'd do so because leaving the brights as-is makes those room lights look like spotlights, and Michael's facial highlights and hands look too "hot".
    Yes, you're right. Thank you very much for explaining the reasons for your settings and tweaks. But I wonder why the histograms in RGB have high intensity around 235 whereas it was around 220 in the simple histogram (for SmoothLevels). And some values in the RGB histogram even get higher than 235... I don't understand this difference between the two histograms. And could you remind me why DVD should be between 16 and 235?

    What you have left to work with to make the scene look indoors-y natural are the midtones (gamma) and the brights. I think the RGB histograms reveal midtone problems that would be fussy to work with.
    here again I don't understand.. could you tell me how they reveal midtone problems? How would you edit the gamma for each color and luma?

    IMHO this scene looks over sharpened, making the skin shadows too stark and contrasty.
    yes I agree. And MCTemporalDenoise() doesn't help with this problem. After denoising, the textures look even less natural.. almost like a cartoon sometimes. The shadings are awful.. they make the faces look like plastic. I really wonder if we could deal with this problem better.. maybe some super debanding tools?
    I processed the whole video, you can download it here.. that poor texture is the main problem I wanna fix now: http://www.mynetdomain.de/mathmax/All_In_Your_Name.mpg

    [Attachment 11198 - texture0000.jpeg - Click to enlarge]

    you can't use the same filters, settings and procedures for every scene all the time.
    well yes.. but that's always laborious to adjust the settings for each scene. Especially in avisynth...

    I was trying to smooth the whole video. But I could get along without temp smoother and just use a mild antialias in the avs script.
    well.. I think the purposes are different. Are you trying to smooth the edges with the antialiasing?

    I'll see if I can try ColorMill on the video and come up with the equivalent of what I did in TMPGenc.....which, actually, wasn't all that much.
    Thank you. I look forward to seeing how you adjust the colors and the gamma for each value.
  3. Banned
    Tried the mpg briefly, it's coming along. Looking much better. Sorry I couldn't reply last nite, I had some partial answers ready before turning in, but looks as if the forum was down for a while. Be with you a little later today. I finally got this "home office" PC to act like my video PCs (it only took 6 weeks!), but forgot after juicing up the hardware that it has only 1-GB of RAM. Avisynth was running 0.12 frames per second at one point last nite and freezing the mouse. This PC wasn't previously set up for video. Gotta drive out for some RAM this morning, will be back.
  4. Banned
    Originally Posted by mathmax View Post
    I'm not sure what you mean by "distorting the scale".
    well.. I meant that I don't really know what I do with ColorYUV(gain_y=-30,off_y=-4,cont_y=-10). I know it decreases the contrast (cont_y), shifts the luma range (off_y) and narrows it (gain_y).. but I wouldn't come up with these values instinctively and I don't know exactly how the resulting scale is altered. . .I was curious to know how you came up with these values (-30, -4 and -10). I feel more at ease with SmoothLevel() as it processes linearly with the possibility to curve the scale with the gamma.
    Both methods accomplish similar ends, but in different ways.

    I played with SmoothLevels v2 this morning. Haven't used it in a while; I had the old version on my PC. I didn't use it much in the past because it clamps at RGB 16 and 235, but I see new settings that might alleviate that. Will try it later. The main idea is to adjust levels to prevent clipping the illegal brights and darks. There's more than one way to do that. I had set up a "safe averaging" quickie with ColorYUV, to be tweaked later.
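    For anyone wondering what those three numbers actually do to the scale, here's a back-of-the-envelope sketch based on the per-parameter formulas in the Avisynth ColorYUV documentation (the plugin's exact internal order and rounding are my assumptions):

```python
# Sketch of ColorYUV(gain_y=-30, off_y=-4, cont_y=-10) applied to a luma value,
# using the documented per-parameter formulas (order and rounding assumed):
#   gain: y' = y * (gain + 256) / 256                 -- scales the whole range
#   off:  y' = y + off                                -- shifts the range
#   cont: y' = (y - 128) * (cont + 256) / 256 + 128   -- squeezes around mid-gray

def coloryuv_luma(y, gain=-30, off=-4, cont=-10):
    y = y * (gain + 256) / 256
    y = y + off
    y = (y - 128) * (cont + 256) / 256 + 128
    return round(y)

# The dark end barely moves while the bright end comes down hard:
for v in (16, 128, 235):
    print(v, "->", coloryuv_luma(v))  # 16 -> 15, 128 -> 110, 235 -> 201
```

    So under these assumed formulas the setting pulls luma 235 down to about 200 while leaving 16 almost alone -- a way of bringing the illegal brights down without crushing the darks.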

    The way many observe levels in YUV is with Avisynth's "Levels" histogram (YV12 only). The images below show frames 1162 and 5016 with that histogram. Adjust luma (top white part of the histogram) to stay inside the colored side borders. In these scenes you can see luma creep into the right-hand margin.

    Image
    [Attachment 11213 - Click to enlarge]


    Image
    [Attachment 11214 - Click to enlarge]


    Originally Posted by mathmax View Post
    Moreover, if you compare the histograms, the one with ColorYUV has some vertical strips that are not present with Smoothlevel(). I don't really know the reason for that..
    By stripes, if you mean the luma "peaks" in that scene, they would indicate mild luma banding. But I didn't get to that scene in processing, as its levels are a completely different problem from the other sequences. I would have made a separate ColorYUV setup for those shots.

    Originally Posted by mathmax View Post
    But I wonder why the histograms in RGB have high intensity around 235 whereas it was around 220 in the simple histogram (for smoothlevels). And some values in the RGB histogram even get higher than 235... I don't understand this difference between the two histograms. And could you remind me why DVD should be between 16 and 235?
    When YUV is converted to RGB (either in your PC or on your TV), brights will fall outside the legal Rec.601/Rec.709 range and will look too "hot", if not washed out. With some encoders that attempt to adhere to broadcast/DVD standards, those brights will often start to change color, usually turning cyan. The RGB histograms below show how those uncorrected YUV scenes will be translated into standard RGB matrices.
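    A quick numeric sketch shows why. Using the standard textbook Rec.601 coefficients (an illustration, not the exact code of any particular converter), video white at Y=235 already lands on RGB 255, so anything brighter has nowhere to go:

```python
# Rec.601 limited-range YUV -> full-range RGB, with out-of-range results
# clipped to 0-255 (standard textbook coefficients; illustration only).

def yuv_to_rgb(y, u=128, v=128):
    r = 1.164 * (y - 16) + 1.596 * (v - 128)
    g = 1.164 * (y - 16) - 0.391 * (u - 128) - 0.813 * (v - 128)
    b = 1.164 * (y - 16) + 2.018 * (u - 128)
    return tuple(min(255, max(0, round(c))) for c in (r, g, b))

print(yuv_to_rgb(16))   # video black -> (0, 0, 0)
print(yuv_to_rgb(235))  # video white -> (255, 255, 255)
print(yuv_to_rgb(245))  # "illegal" bright -> also (255, 255, 255): detail gone
```

    Everything above Y=235 gets flattened against the RGB ceiling, which is the climbing-the-right-wall effect in the histograms.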

    Image
    [Attachment 11215 - Click to enlarge]


    Where you see parts of the histogram climbing up the right-hand wall, those elements are being clipped on conversion to RGB. YUV and RGB are two different matrices; the former is for video data storage, the latter is for display.

    So what's the big deal about RGB 16-235? Here's a post that starts out discussing what's meant by "video white" and such terms. It does go on, but the first few comments should explain:
    http://www.dvinfo.net/forum/non-linear-editing-pc/105938-video-white-what.html#post760894

    This article covers the same material, but in terms of "IRE" standards:
    http://www.glennchan.info/articles/technical/setup/75IREsetup.html

    Originally Posted by mathmax View Post
    here again I don't understand.. could you tell me how they reveal midtone problems? How would you edit the gamma for each color and luma?
    Some of those scenes look washed out and undersaturated. You can't just make them "darker" (some of the blacks are already crushed anyway). Gamma is tricky to explain, but to most users it refers to midtones, the range between about RGB 64 and RGB 180. Most skin tones lie in that range, from shadows around RGB 64 to skin highlights around 180.

    Most people don't get into much detail about color -- why, I don't know, especially when their video's color looks pretty bad. One of the tools used most often to play with filters and RGB controls -- especially in high-end apps like the pro versions of Photoshop, Premiere, After Effects, Vegas, etc., and even VirtualDub -- is the histogram. It's not difficult to get the hang of them. Articles about using histograms for various purposes usually show up on photo sites. Remember, photogs work with RGB 0-255; get involved with Premiere or After Effects' Color Finesse and they'll get really sticky about making you use the correct RGB colorspace setup for your preview window.

    This page deals with histogram and levels, contrast, in general and illustrates what clipping looks like:
    http://www.cambridgeincolour.com/tutorials/histograms2.htm

    Page 2 of that website gets into more detail about different contrast and illumination scenarios:
    http://www.cambridgeincolour.com/tutorials/histograms1.htm

    When working with RGB, you'll generally have a histogram like trevlac's multi-purpose ColorTools that works with all RGB channels: luma, red, green, blue. You can see sample 'scopes and usage on this doom9 page, toward the lower-middle of the page: http://www.doom9.org/index.html?/capture/postprocessing_vdub.html.

    Color filters like VirtualDub's gradation curves (similar to those in Premiere Pro, After Effects Pro, etc.) or ColorMill can adjust colors in specific regions, as well as gamma, midtone bias, black or white levels, and a lot of other good stuff. A page showing some ColorMill effects is here, with the download link at the bottom of the page: http://fdump.narod.ru/rgb.htm .

    Originally Posted by mathmax View Post
    you can't use the same filters, settings and procedures for every scene all the time.
    well yes.. but that's always laborious to adjust the settings for each scene. Especially in avisynth...
    Working with individual scenes is called color grading: scenes should look as if they were shot by the same camera at the same time as other, similar shots. And sometimes you just have problem lighting, as with this video. In Avisynth I make only basic level adjustments, sometimes a little color. I find YUV really tough for detailed color work.

    I've been trying a little color work today, but the big bugaboo I'm after is one you mentioned: compression artifacts, including some really crushed colors. Back a little later -- I'll have to capture a few frames.
  5. Banned
    I spent most of the night trying to discover why mosquito noise sparkling around some highlights in some scenes has suddenly appeared. Really annoying because it's not everywhere and won't go away. It appears right after running MCTD on this rebuilt PC - it's not that it shows up after later routines like QTGMC or NeatVideo, it's there from MCTD on. It's not in my mpg posted earlier, and it's almost entirely cleaned up in yours. Bummer.

    Meanwhile, I came up with a fix for some of the false contouring and posterization you objected to. I think that's what you meant earlier when you used the term "texture". The problems come from the overload of compression artifacts in the source. I tried two changes that made visible improvements:

    a) after running QTGMC, try running gradfun2dbmod(thr=2,str=0.9,mask=false). Helps with blocking and color banding.

    b) Increase NeatVideo's temporal setting from 2 to 3. Also helped to smooth that long zoom shot a bit (I really hate that shot. Zoom lenses should be removed from all amateur cameras!).

    Still working with color and some examples. Your full version of the clip looks better every time I view it. Nice work.
  6. Originally Posted by sanlyn View Post
    I spent most of the night trying to discover why mosquito noise sparkling around some highlights in some scenes has suddenly appeared. Really annoying because it's not everywhere and won't go away. It appears right after running MCTD on this rebuilt PC
    DCT ringing can be accentuated by a sharpening filter.
  7. Banned
    True. The first (and only) filter in the first run was MCTemporalDenoise - which does some sharpening, I know. I'm saying that the earliest runs didn't show as much ringing or as many mosquitoes as later runs from the first step onward, but the first step in the processing hasn't changed (MCTD alone). I'm also thinking about trying a DCT filter first. The overall processing doesn't inject any more sharpeners, mostly smoothers and deblockers. I'll try to get two comparison images in a bit.
  8. Banned
    Shucks, another delay. The computer store didn't have the RAM I wanted, so I had to order over the 'net. Stuff like MCTD is freezing this underfed PC, so I'm moving this project back to my other PC until more memory arrives. Phooey.
  9. Originally Posted by sanlyn View Post
    but the first step in the processing hasn't changed (MCTD alone).
    But...

    Originally Posted by sanlyn View Post
    on this rebuilt PC
    Exactly the same versions of all filters?
  10. Banned
    Exactly the same. I spent 5 days checking everything, versions, etc. -- for the most part, copied plugins, docs, folder names and structure, etc., from one to the other, one element at a time. Both XP Pro. Same hardware (different motherboards and graphics cards).

    Running same number of frames on the rebuilt PC, MCTD = 0.19 frames per sec. Original PC = 1.2 frames per sec. Both Athlon 2.2 GHz CPU. Will solve that problem with new RAM. The rebuilt PC freezes up after about an hour on heavy MCTD. The old 4-GB machine just keeps hummin'.

    Working on the old machine now. Somehow I seem to have solved 50% of the problem. Source is YV12. I changed anything that says "ConvertToRGB32(matrix="Rec601", interlaced=true)" by deleting "interlaced=true". Note, each processing step leaves the clip recompressed as YV12 until the last step.

    Now working with dehalo and de-ringing plugin trials on that scene. Odd, most of the other scenes don't have many mosquitoes. Caution: the source video is really bad ! ! ! Another "learning event".
  11. Originally Posted by sanlyn View Post
    Somehow seem to have solved 50% of the problem. Source is YV12. Changed anything that says "ConvertToRGB32(matrix="Rec601", interlaced=true)" by deleting "interlaced=true".
    That will screw up the chroma if the video is interlaced.
  12. Banned
    Mediainfo and others say it's not interlaced. When I force SeparateFields, though, it shows something odd: one of the fields (I think it's the simulated "Odd") is in worse shape than the Even's. lordsmurf looked earlier and hinted at some criminal conversion/transcoding problem by the publisher. It must have started out as progressive (supposedly DV in a webcam). Every time I "fake it" as BFF interlaced, the filters work. This has to be some kind of multi-generation botch job. It "acts like" it had to have been interlaced/deinterlaced at some point. It just doesn't behave like progressive.
  13. Banned
    Originally the approach I/we used was denoise -> QTGMC -> SelectEven. I just started over in reverse with QTGMC first. Saved one clip as Even, another clip as Odd. Let's see what happens.

    EDIT: wouldn't DV from a webcam be BFF? If you view this with SeparateFields, either as BFF or TFF, there's no reverse movement using either.
  14. Originally Posted by sanlyn View Post
    IF you view this with SeparateFields either as BFF or TFF, there's no reverse movement using either.
    Then the source is progressive. Upload a sample?
  15. Thank you for the links, sanlyn.

    However, there are still a few points that I don't really understand:

    • First, I observed the RGB histograms for the frame 6211

    original
    [Attachment 11242 - or0000.jpeg - Click to enlarge]

    smoothlevels(0, 1, 255, 0, 235, limiter=2)
    [Attachment 11243 - sl0000.jpeg - Click to enlarge]

    I wonder why on the first histogram the luma is clipped to 235 whereas the colors go up to 255. This difference remains on the second histogram. The peaks are just shifted by 20 to the left. Could you explain this difference between luma and colors? Does that mean I should still reduce the luminosity until all colors are under 235?

    • You said that you would correct the midtones by editing the gammas. But I don't really see how you can see this problem from the histograms. And I have no idea how you would tweak the gammas for each color.

    • By stripes, if you mean the luma "peaks" in that scene, it would indicate mild luma banding.
      yes.. isn't that luma banding annoying? At least SmoothLevels() produces a smooth histogram..

    • I'm not used to working with Premiere or Vegas to adjust colors and levels. I always find it long and boring to write the code for adjusting each scene separately in avisynth.
    Usually I do this:
    Code:
    scene1 = clip.Trim(0,100)
    scene1 = scene1.sometweak()
    
    scene2 = clip.Trim(101,200)
    scene2 = scene2.sometweak()
    
    scene3 = clip.Trim(201,300)
    scene3 = scene3.sometweak()
    
    scene1 ++ scene2 ++ scene3
    Should I use more sophisticated tools or change my method? What do you think about autolevels()?

    • The main problem remaining is that the video looks like a cartoon after denoising. If you look at their faces, this is very clear in the following frame. It looks like the skin is chipped.

    [Attachment 11244 - texture0000.jpeg - Click to enlarge]

    I tried gradfun2dbmod(thr=2,str=0.9,mask=false) but it doesn't seem to help... I'm now wondering if our settings on MCTD() were a bit strong..
    I made a comparison video: http://www.mediafire.com/?75appynaaobr167 (on the left, the problem of the noise which we could get rid of... but the right hand side doesn't look so natural)

    • I'm not sure I understand what you mean by mosquito noise or ringing. As far as I know, the following image shows ringing artifacts, but I don't see them in our video:

    [Attachment 11245 - 178_NTSC_DVD_133_2.jpeg - Click to enlarge]

    If you refer to the grain around the edges which gets stronger on motion, I think RemoveGrain(3) does a great job, and that's what I used for processing the last video.
  16. Banned
    Originally Posted by jagabo View Post
    Originally Posted by sanlyn View Post
    IF you view this with SeparateFields either as BFF or TFF, there's no reverse movement using either.
    Then the source is progressive. Upload a sample?
    The source is m4v, 39.6MB. I have no editor for it at hand, and the OP's link to the original says the file has been removed. I have a direct-download copy of the m4v source here: http://dc402.4shared.com/download/Osc_fd2F/AllinYourName.m4v . I recall 4shared's upload monitor saying 13 minutes, but it got there in under 5 min. Just a few seconds in Lagarith AVI would be bigger than the m4v. Let me know if 4shared is OK.
  17. Banned
    Originally Posted by mathmax View Post
    However, there are still a few points that I don't really understand:

    • First, I observed the RGB histograms for the frame 6211
    original

    smoothlevels(0, 1, 255, 0, 235, limiter=2)

    I wonder why on the first histogram the luma is clipped to 235 whereas the colors are up to 255. This difference remains on the second histogram. The peaks are just shifted by 20 to the left.
    I posted a set of YUV and RGB histograms for the same scene, showing what happens when no corrections are made and YUV gets translated to RGB. I didn't use any corrections for the scene shown because I never got that far into the video during processing. But the correction shown would have brought everything down to a manageable level. I wouldn't have handled YUV in that scene the same way: I would lower red and raise blue, with less luma correction.

    Originally Posted by mathmax View Post
    Could you explain this difference between luma and colors? Does that mean I should still reduce the luminosity until all colors are under 235?
    You're thinking in terms of RGB (because that's the way we "see" things), but that's not the way image data is stored in YUV. I'll look for a web page that describes the difference, but consider this:

    Let's say you're walking down the street at night and you see a blue neon sign. When you're standing right in front of it, the blue (its color or HUE) is very bright (luminosity). It's so bright it hurts your eyes to be so close to it. Now, walk down the block a while and look at the neon again. It's still blue (HUE), but not as bright (luminosity). The farther away you get, the darker (luminosity) the neon looks, but it's still the same blue (HUE). Now, most neon signs have a certain characteristic: within limits, you can turn down the brightness, but it's still the same hue.

    Reduce luma to get everything lowered? Eventually it would, but you'd have a very dark image. You'd reduce luma and color both, if needed. SmoothLevels does a bit of that. The luma indicates overall contrast in the dark/bright range. Luma refers to the intensity of LIGHT; chroma is stored separately as an intensity of HUE. We humans don't store image information with brightness and hue separated.

    You're dealing with two color spaces that store values differently. YUV stores chroma separately from luma; RGB stores luma and chroma together. In YUV, the colors won't necessarily agree with luma in shape: a scene could have lots of very dark reds in the range of Red 30 to 128 (YUV doesn't use "RGB" numbers; those are interpreted later). In both a YUV and an RGB histogram, that darkish-to-medium red line would go only halfway across the graph, assuming no higher hues of red are in the picture.

    Originally Posted by mathmax View Post
    You said that you would correct the midtones by editing the gammas. But I don't really see how you can see this problem from the histograms. And I have no idea how you would tweak the gammas for each color.
    You can't adjust u and v gamma in YUV. Well, you could, I guess, but not with Avisynth's controls. When you adjust one color in YUV, you affect the others to some degree. In RGB you can adjust each color separately without affecting the other two. I'm working on some screen captures to show how it's done. But note, that simplified description of gamma is very general. Usually, when people talk of raising or lowering gamma, they mean raising or lowering the level of middle-range color and brightness without affecting darks and brights.
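    A small sketch of why one YUV adjustment spills into two RGB channels (textbook Rec.601 coefficients, my own illustration): V (Cr) appears in both the R and G equations, so nudging it moves both at once.

```python
# In Rec.601, V (Cr) feeds both the R and G equations, so a single chroma
# tweak in YUV changes two RGB channels (textbook coefficients, illustration).

def yuv_to_rgb(y, u, v):
    r = 1.164 * (y - 16) + 1.596 * (v - 128)
    g = 1.164 * (y - 16) - 0.391 * (u - 128) - 0.813 * (v - 128)
    b = 1.164 * (y - 16) + 2.018 * (u - 128)
    return tuple(round(c) for c in (r, g, b))

base = yuv_to_rgb(128, 128, 128)     # neutral gray
tweaked = yuv_to_rgb(128, 128, 138)  # raise V by 10
print(base)     # (130, 130, 130)
print(tweaked)  # R rises, G falls, B untouched
```

    In RGB, by contrast, each channel has its own independent control, which is why detailed color work is easier there.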

    I've been using the newer v2 of SmoothLevels since yesterday. Works better than the old one.

    Originally Posted by mathmax View Post
    By stripes, if you mean the luma "peaks" in that scene, it would indicate mild luma banding.
    yes.. isn't that luma banding annoying? At least SmoothLevels() produces a smooth histogram..
    Those peaks aren't major, and they're harder to see in the video than they look on a graph. This particular scene has lighting problems, to the extent that I wouldn't use the same ColorYUV setting for any of the scenes shot in that room. The camera's auto-exposure meter didn't do that scene any favors. A pro would have used fill lights for a scene like that. Yellow (Red + Green) is oversaturated. You could adjust some of that in YUV, but it's easier in RGB.

    If you look at the YUV histogram with no corrections, you'll see little peaks elsewhere in the video. The objection I have to SmoothLevels is that it doesn't reduce overbrightness, it clamps it. Not the same operation. Lowering high contrast means gradually bringing the high range of luma/color down to lower values; clamping means converting every overbright value to the same cutoff value. If you see a value higher than 235, you reinterpret it as 235: 254 becomes 235, 248 becomes 235, 240 becomes 235. Gradually compressing values from the midrange on up keeps the information, it just makes it "darker". If you lower "Contrast" in ColorYUV, you compress that range without discarding the information. That's a difficult concept, I know. In SmoothLevels' favor, its clamping seems to be more gradual than the usual method, but it's still clamping.
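    The difference is easy to see with numbers. This is a toy Python sketch of the two ideas (not what SmoothLevels or ColorYUV literally computes internally; the knee point and scaling here are arbitrary illustration values):

```python
def clamp(y, ceiling=235):
    # Hard clamp: everything above the ceiling becomes the ceiling.
    # 254, 248 and 240 all collapse to 235 -- their differences are gone.
    return min(y, ceiling)

def compress_highs(y, knee=128, old_top=255, new_top=235):
    # Gradual compression: values from the midrange ("knee") up are
    # rescaled so old_top lands on new_top. Differences between bright
    # values are kept, just made smaller ("darker").
    if y <= knee:
        return y
    scale = (new_top - knee) / (old_top - knee)
    return round(knee + (y - knee) * scale)

for y in (240, 248, 254):
    print(y, clamp(y), compress_highs(y))
# clamp maps all three to 235; compress_highs keeps them distinct.
```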

    That "shift" of 20 points to the left that SmoothLevels was doing: not the perfect way, but it did save some detail in Red. I posted YUV and RGB histograms of the same two uncorrected scenes to show what happens when YUV gets translated into RGB without prior adjustments.

    You asked about autogain...? Autogain tries to make everything look like an ideal broad-daylight histogram. That won't do for dimly lit rooms (look at what it did to those listening-room shots). Try it on a night scene and see what happens, or on a scene where someone moves from a bright area into a darker one.

    Originally Posted by mathmax View Post
    I'm not used to working with Premiere or Vegas to adjust colors and levels. I always find it long and boring to write the code for adjusting each scene separately in Avisynth.

    Usually I do this:
    Code:
    scene1 = clip.Trim(0,100)
    scene1 = scene1.sometweak()
    
    . . . .
    Should I use more sophisticated tools or change my method? What do you think about autolevels()?
    I don't do that much Avisynth trimming or detailed color work. YUV is often OK for video with decent color to begin with, but most restorations from VHS have horrible color problems that don't lend themselves to fancy manipulation in YUV. The RGB color controls for VirtualDub are similar to those in the "big" apps--not as ultimately sophisticated as something like Color Finesse (a $500 or so plugin) in detailed features, but similar. Some people work only in YUV. It would drive me nuts.

    I trim video in AviSynth but mostly I pick a long sequence of scenes, make a basic correction in YUV to keep things under control, then go to RGB and cut and correct in more detail in VirtualDub. Now and then you find a scene that just has to have totally separate treatment and gets patched-in later. I also take some longish sections of AVI and cut and color-correct in TMPGEnc Plus.

    Originally Posted by mathmax View Post
    The main problem remaining is that the video looks like a cartoon after denoising. If you look their faces, this is very clear on the following frame. It looks like the skin is chipped.
    I agree for some scenes, but that's not the only big problem. There's lots of posterization, etc. The scenes in that microphone room with the side view look "sharp", mainly because of the lighting. Those would have to be softened a bit, with filters and with contrast/color adjustment. One thing about Michael's facial features, and it's a toughie: he has some beard growth...mustache, chin hair, signs of a nascent goatee. That will take some handling. He did have half of a new goatee, but the bad compression obscures the fine details.

    As for the sculpted look: the lousy compression and other artifacts are causing it. Very fine detail is just plain gone. One has to be careful about improving that without softening the whole image. It takes practice.

    Originally Posted by mathmax View Post
    I tried gradfun2dbmod(thr=2,str=0.9,mask=false) but it doesn't seem to help... I'm now wondering if our settings on MCTD() were a bit strong..
    In general I don't think so, but MCTemporalDenoise can be modified. I'd cut its sharpener a bit and increase TTempSmooth's softening. The MCTD document shows how. I'll be trying that later. gradfun2dbmod did make a difference. It evened out some banding lines in shadows and cut down posterization and even a little shimmer in many shadows.

    Give me a while to have a closer look at your modified video.

    Originally Posted by mathmax View Post
    I'm not sure to understand what you mean by mosquito noise or ringing. As far as I know the following image shows ringing artifacts, but I don't see them in our video:

    If you refer to the grain around the edges which get stronger on motion, I think removgrain(3) does a great job and that's what I used for processing the last video.
    The animated image you posted does have ringing, halos, and dot crawl.

    I'm seeing it in the video where it was more subdued in earlier versions. That's what I'm trying to find out: where is it coming from? I'm getting set to fire up a few things like removegrain in a bit, but first I'm making a new AVI version of the video. Also, I'll make a few captures of working with some of the RGB controls that are popular.
    Last edited by sanlyn; 21st Mar 2014 at 17:19.
    Quote Quote  
  18. Originally Posted by sanlyn View Post
    I have a direct download copy of the m4v source here: http://dc402.4shared.com/download/Osc_fd2F/AllinYourName.m4v .
    That file is definitely progressive.
    Quote Quote  
  19. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    Yes. It stays that way on my end at final output. I got some decent cleaning with QTGMC in progressive mode, but it was hardly worth the wait. If you "force" a SeparateFields operation and run the likes of MCTD or TemporalDegrain on the Even and Odd fields separately, then reweave, you get entirely different results than from a straight run. I still don't know why. Challenges the imagination.

    Besides plenty of other damage, it just looks to me as if the clip was once interlaced, then deinterlaced using some weird system. And I can't imagine where all that grain came from. The high compression doesn't help.
    Last edited by sanlyn; 21st Mar 2014 at 17:25.
    Quote Quote  
  20. Originally Posted by sanlyn View Post
    If you "force" a SeparateFields operation and run the likes of MCTD or TemporalDegrain on Even and Odd separately, then reweave, you get entirely different results than just a straight run. I still don't know why.

    On interlaced content, when you separate fields and group the even and odd fields for processing, each subsequent field in a group represents the next moment in time but contains ALL the information for that moment (each field is essentially half a progressive frame) - which is the correct way to do it.

    But on progressive content, each frame is made up of two fields that belong to the same moment in time - so when you apply the same interlaced filtering treatment, you're processing only half of each frame separately. Motion vectors get truncated and edges are cut in half before the fields are rewoven. The denoising filters "think" each individual field contains all the information - but it doesn't. That's why you introduce combing artifacts and line artifacts.
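    The geometry behind that is easy to sketch. Here's a toy Python model (rows reduced to labels; this illustrates field parity and timing only, not any plugin's actual code):

```python
# A "frame" here is just a list of row labels. SeparateFields-style
# splitting takes the even-numbered rows as one field and the
# odd-numbered rows as the other.
def separate_fields(frame):
    return frame[0::2], frame[1::2]   # (top field, bottom field)

# Progressive frame: every row was captured at the same instant t0,
# so each field is only half of one picture, not a picture of its own.
progressive = ["row0@t0", "row1@t0", "row2@t0", "row3@t0"]
top, bottom = separate_fields(progressive)
print(top)      # half a frame, all from the same moment

# Interlaced frame: the two fields were captured at different moments,
# so each field really is a complete (half-height) snapshot of its time.
interlaced = ["row0@t0", "row1@t1", "row2@t0", "row3@t1"]
top_i, bottom_i = separate_fields(interlaced)
print(bottom_i)  # a full snapshot of the later moment t1
```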
    Quote Quote  
  21. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    I know what you're saying, and I agree. I'm back to treating it as plain vanilla progressive, and it never left the filter chain as anything but progressive. But why does SeparateFields/reweave look cleaner and respond to further work, while straight progressive comes back as unworkable mush? I understand interlaced vs. non-interlaced, no question in my mind. I'm just reporting what I see:
    https://forum.videohelp.com/threads/341917-remove-noise-and-block?p=2144101&viewfull=1#post2144101
    Last edited by sanlyn; 21st Mar 2014 at 17:25.
    Quote Quote  
  22. Originally Posted by sanlyn View Post
    I know what you're saying, and I agree. I'm back to treating it as plain vanilla progressive, and it never left the filter chain as anything but progressive. But why does SeparateFields/reweave look cleaner and respond to further work, while straight progressive comes back as unworkable mush? I understand interlaced vs. non-interlaced, no question in my mind. I'm just reporting what I see:
    https://forum.videohelp.com/threads/341917-remove-noise-and-block?p=2144101&viewfull=1#post2144101

    There appears to be a difference in luminance as well. Are you sure the only difference was separating odd/even fields?

    MCTD has an interlaced switch (interlaced=true), so there is no need to separate fields - usually grouped odd/even fields are only used on interlaced content with filters that don't have an interlaced mode

    But those demonstrate the horizontal line artifacts you get when you separate fields - when you use a denoiser that counter-sharpens, the edges of the field (not the frame) are sharpened as well, because it treats the field as a frame. You can see the aliasing and lines near MJ's back and head

    So why does it appear "cleaner"? I don't know. I think in progressive mode it does a better job of preserving detail because objects are whole (not cut in half), so things like motion vectors aren't truncated. Small details like the folds in the dark shirt and the back of the right arm are preserved. But this clip is so bad that what it classifies as detail might be noise. The noise pattern is very bizarre, even for Xvid. It's not quite random dancing grain; it lingers through frames
    Quote Quote  
  23. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    I agree. Right now it's running as progressive, with temporal degrainers (two, one heavy and one light), deblockers (likewise), and line smoothers/anti-aliasers, staying away from any sharpeners for now. It has 6 hours to go, so I'll let it finish overnight. I don't think the godawful wiggling in that zoom shot around frame 400 can ever be cleaned, but I'm making visible progress with some of the smearing.

    As for that noise I keep calling "grain". I don't know, I never saw anything like that. Will take a look in the morning.

    This is a helluva way to learn new tricks with Avisynth plugins.
    Last edited by sanlyn; 21st Mar 2014 at 17:26.
    Quote Quote  
  24. Originally Posted by sanlyn View Post

    As for that noise I keep calling "grain". I don't know, I never saw anything like that. Will take a look in the morning.

    It's a very bizarre pattern. The original "grain" was probably due to low light / sensor noise, but the Xvid compression job is horrendous. If it were more like typical "dancing" grain, it would be much easier to clean. This is smeary, lingering "grain", with macroblocking thrown in. As mentioned earlier, this isn't a straightforward Hi8 transfer - there's other stuff going on. I assume this was the original file that mathmax put up, and that he didn't re-encode it?
    Quote Quote  
    @sanlyn: Regarding the first point, I didn't formulate my question well. I was wondering why the colors peak around 255 whereas the luma peaks around 235. If the three colors have high intensity around 255, shouldn't the resulting luma be intense around the same value? And then I wondered whether all the colors should be under 235..

    You're right about the yellow being oversaturated in bright areas.. and it's like that all through the video. Is that why you want to lower red and raise blue?

    I understand why it's easier to work in RGB, since we can tweak each color individually.. but don't you lose quality with the YUV->RGB conversion? And I thought ColorMill or some such plugin lets you work on each color individually and then converts the settings into YUV terms... this approach, as far as I understood it, would give the user RGB logic while keeping the video itself in YUV format.

    You say SmoothLevels() clamps values to 235, but I think that's not the case if the limiter is set to 2. If you look at the two histograms posted at the top of this page, you'll see that they both preserve values above the vertical white lines. If the values were clamped, there shouldn't be any white above those lines, right?

    I didn't know it's possible to select scenes (ranges of frames) in VirtualDub and then apply filters or settings only to certain ranges. Is that really possible?

    I looked at gradfun2dbmod again.. but I still can't really see where it makes a difference. Could you illustrate with some screenshots?

    I can't see the ringing issue in our video.. maybe I'm confusing ringing and halos..
    Last edited by mathmax; 29th Feb 2012 at 22:31.
    Quote Quote  
  26. Originally Posted by poisondeathray View Post
    I assume this was the original file that mathmax put up, and that he didn't re-encode it ?
    Indeed, I have nothing to do with this encoding.. it's sold like that on Barry Gibbs' website..
    Quote Quote  
  27. Originally Posted by mathmax View Post
    I wonder why on the first histogram the luma is clipped to 235 whereas the colors are up to 255.
    I think this is from your color tools setup. If you set source attributes to 16-235 in the configuration, you will see it go to RGB 255

    You say SmoothLevels() clamps the value to 235 but I think that is not the case if the limiter is set to 2. If you look a the 2 histograms posted at the top of this page, you'll see that they both preserve values above the vertical white lines. If the values were clamped, there shouldn't be any white above these lines, right?
    Actually, limiter=2 clamps Y' to 16-235 and CbCr to 16-240. By default, limiter=0 (no clamping)

    What I think sanlyn was talking about is the hard white line - that's hard clipping from the camera; the signal goes beyond the latitude of the sensor

    "Clipping" is different from "clamping": clipping suggests everything is cut off beyond a certain level, while clamping suggests everything is "squished"
    Quote Quote  
  28. Originally Posted by poisondeathray View Post
    I think this is from your color tools setup. If you set source attributes to 16-235 in the configuration, you will see it go to RGB 255
    ah ok.. but why are the scales of the RGB colors not altered by this attribute?


    Originally Posted by poisondeathray View Post
    So why does it appear "cleaner" - I don't know. I think in progressive mode it's doing a better job preserving more details because objects are full (not cut in half) , so things like motion vectors aren't cut. So small details like fold in the dark shirt , back of the right arm are preserved. But this clip is so bad, that what it classifies as detail might be noise. The noise pattern is very bizzare, even for xvid. It's not completely like random dancing grain, it lingers through frames
    this is strange, but processing on separate fields really does make the video look cleaner. Here are two frames to compare:

    https://forum.videohelp.com/attachment.php?attachmentid=11169&d=1330280416
    https://forum.videohelp.com/attachment.php?attachmentid=11168&d=1330280413

    But the second one looks more like a cartoon... regarding the problem we mentioned above.
    Last edited by mathmax; 29th Feb 2012 at 22:25.
    Quote Quote  
  29. Originally Posted by mathmax View Post
    Originally Posted by poisondeathray View Post
    I think this is from your color tools setup. If you set source attributes to 16-235 in the configuration, you will see it go to RGB 255
    ah ok.. but why are the scales of the RGB colors not altered by this attribute?
    I'm not sure, and it really doesn't matter; it's just a monitoring tool and doesn't affect the actual video. I don't even know why it's there as an option. The actual RGB values are governed by how the Y'CbCr => RGB conversion was done. If you didn't specify and let VirtualDub do it, it will use the Rec.601 matrix. Either way, the tool is reading off the RGB values (not the original video) and applying the equation to get the luma, not reading the luma directly.
    Quote Quote  
  30. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    This is getting into good color stuff, especially the techy part of how YUV and RGB display color info. The video being processed has 1.3 hours to go, but it's getting late. Thanks, jagabo, for "illuminating" us (sorry for that) on more about YUV.

    To answer mathmax about why the color peaks at 255 but the luma peaks at 235: the luma peaks lower because the HUE might be, say, a very reddish red or a very bluish blue, but it's not giving off 255 bucks' worth of light. Recall the neon sign: the farther away you get, the more it loses brightness, but it's still the same blue.
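    You can check that with the luma equation itself. Assuming the Rec.601 weights (per jagabo's note that VirtualDub defaults to the Rec.601 matrix), a fully saturated pure blue pegs the blue channel at 255 but contributes very little light:

```python
# Rec.601: Y' = 0.299 R' + 0.587 G' + 0.114 B'
# (an assumption about the matrix in use; Rec.709 uses different weights)
def luma_601(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure blue: the B channel histogram would spike at 255,
# but the luma of that same pixel is way down the scale.
print(round(luma_601(0, 0, 255)))   # prints 29
```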

    To jagabo: I just downloaded SmoothAdjust/SmoothLevels yesterday, having not used it for some time. I do see that it has more options than it used to, and various ways of avoiding hard clipping. I'd like to spend more time with the new version. I didn't do a great job of explaining clipping, clamping, and manual control; your description is clearer than mine (shorter, too, as usual).

    mathmax, I'll try a couple of screen samples of what gradfun2dbmod was doing, but still images don't show motion noise very well. In shots closer to that gray jacket M. Jackson is wearing, look at the way the darker portions and folds shimmer and block up on movement. The plugin helped soften those effects.
    Last edited by sanlyn; 21st Mar 2014 at 17:26.
    Quote Quote  


