VideoHelp Forum
  1. -30-
     Last edited by sanlyn; 21st Mar 2014 at 16:23.
  2. Thank you for your answer

    I'm not sure what you mean by "distorting the scale".
    well.. I meant that I don't really know what I'm doing with ColorYUV(gain_y=-30, off_y=-4, cont_y=-10). I know it decreases the contrast (cont_y), shifts the luma range (off_y) and narrows it (gain_y).. but I wouldn't come up with those values instinctively, and I don't know exactly how the resulting scale is altered. The combination of three adjustments makes it quite difficult to feel which parameter I should change.. but that is probably my lack of experience. I was curious how you came up with these values (-30, -4 and -10). I feel more at ease with SmoothLevels(), as it processes linearly, with the possibility to curve the scale with the gamma.
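    For reference, here is a rough sketch of what those three parameters do, going by the standard AviSynth ColorYUV documentation (treat the comments as an approximation, not a guarantee of the exact internals):
    Code:
    # roughly, per the AviSynth docs:
    #   gain_y = -30  ->  multiplies luma by about (256-30)/256, pulling the brights down and narrowing the range
    #   cont_y = -10  ->  a similar multiplier, but applied around the midpoint, so it lowers contrast
    #   off_y  = -4   ->  subtracts a small constant from every luma value, shifting the whole scale down
    ColorYUV(gain_y=-30, off_y=-4, cont_y=-10)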

    Moreover, if you compare the histograms, the one with ColorYUV has some vertical stripes that are not present with SmoothLevels(). I don't really know the reason for that..

    But I would still lower the high contrast a bit (and I would lower gamma, too, if I'd made it as far as this shot). I'd do so because leaving the brights as-is makes those room lights look like spotlights, and Michael's facial highlights and hands look too "hot".
    Yes, you're right. Thank you very much for explaining the reasons for your settings and tweaks. But I wonder why the RGB histograms have high intensity around 235 whereas it was around 220 in the simple luma histogram (for SmoothLevels). And some values in the RGB histogram even go higher than 235... I don't understand this difference between the two histograms. And could you remind me why DVD should be between 16 and 235?

    What you have left to work with to make the scene look indoors-y natural are the midtones (gamma) and the brights. I think the RGB histograms reveal midtone problems that would be fussy to work with.
    Here again I don't understand.. could you tell me how they reveal midtone problems? And how would you adjust the gamma for each color and for the luma?

    IMHO this scene looks over sharpened, making the skin shadows too stark and contrasty.
    Yes, I agree. And MCTemporalDenoise() doesn't help with this problem. After denoising the textures look even less natural.. almost like a cartoon sometimes. The shading is awful.. it makes the faces look like plastic. I really wonder if we could deal with this problem better.. maybe some super debanding tools?
    I processed the whole video; you can download it here: http://www.mynetdomain.de/mathmax/All_In_Your_Name.mpg. That poor texture is the main problem I want to fix now.

    [attached image: texture0000.jpeg]

    you can't use the same filters, settings and procedures for every scene all the time.
    well yes.. but it's always laborious to adjust the settings for each scene, especially in AviSynth...

    I was trying to smooth the whole video. But I could get along without temp smoother and just use a mild antialias in the avs script.
    well.. I think the purposes are different. Are you trying to smooth the edges with the antialiasing?

    I'll see if I can try ColorMill on the video and come up with the equivalent of what I did in TMPGenc.....which, actually, wasn't all that much.
    Thank you. I look forward to seeing how you adjust the colors and the gamma for each channel.
  3. -30-
     Last edited by sanlyn; 21st Mar 2014 at 16:22.
  4. -30-
     Last edited by sanlyn; 21st Mar 2014 at 16:22.
  5. -30-
     Last edited by sanlyn; 21st Mar 2014 at 16:21.
  6. Originally Posted by sanlyn View Post
    I spent most of the night trying to discover why mosquito noise sparkling around some highlights in some scenes has suddenly appeared. Really annoying because it's not everywhere and won't go away. It appears right after running MCTD on this rebuilt PC
    DCT ringing can be accentuated by a sharpening filter.
  7. -30-
     Last edited by sanlyn; 21st Mar 2014 at 16:21.
  8. -30-
     Last edited by sanlyn; 21st Mar 2014 at 16:21.
  9. Originally Posted by sanlyn View Post
    but the first step in the processing hasn't changed (MCTD alone).
    But...

    Originally Posted by sanlyn View Post
    on this rebuilt PC
    Exactly the same versions of all filters?
  10. -30-
      Last edited by sanlyn; 21st Mar 2014 at 16:21.
  11. Originally Posted by sanlyn View Post
    Somehow seem to have solved 50% of the problem. Source is YV12. Changed anything that says "ConvertToRGB32(matrix="Rec601", interlaced=true)" by deleting "interlaced=true".
    That will screw up the chroma if the video is interlaced.
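    A minimal sketch of the two cases: the interlaced flag controls how the 4:2:0 chroma is upsampled during the conversion, so it should match what the footage actually is.
    Code:
    ConvertToRGB32(matrix="Rec601", interlaced=true)    # genuinely interlaced YV12: chroma upsampled per field
    ConvertToRGB32(matrix="Rec601", interlaced=false)   # progressive YV12: the flag can be dropped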
  12. -30-
      Last edited by sanlyn; 21st Mar 2014 at 16:20.
  13. -30-
      Last edited by sanlyn; 21st Mar 2014 at 16:20.
  14. Originally Posted by sanlyn View Post
    IF you view this with SeparateFields either as BFF or TFF, there's no reverse movement using either.
    Then the source is progressive. Upload a sample?
  15. Thank you for the links, sanlyn.

    However, there are a few points that I still don't really understand:

    • First, I looked at the RGB histograms for frame 6211:

    original
    [attached image: or0000.jpeg]

    smoothlevels(0, 1, 255, 0, 235, limiter=2)
    [attached image: sl0000.jpeg]

    I wonder why, on the first histogram, the luma is clipped at 235 whereas the colors go up to 255. This difference remains in the second histogram; the peaks are just shifted 20 to the left. Could you explain this difference between the luma and the colors? Does that mean I should keep reducing the luminosity until all the colors are under 235?

    • You said that you would correct the midtones by editing the gammas. But I don't really see how you can tell this from the histograms, and I have no idea how you would tweak the gamma for each color.

    • By stripes, if you mean the luma "peaks" in that scene, it would indicate mild luma banding.
      yes.. isn't that luma banding annoying? At least SmoothLevels() produces a smooth histogram..

    • I'm not used to working with Premiere or Vegas to adjust colors and levels. I always find it long and tedious to write the code for adjusting each scene separately in AviSynth.
    Usually I do this:
    Code:
    # split the clip into scenes by frame range, tweak each scene, then splice them back together
    scene1 = clip.Trim(0,100)
    scene1 = scene1.sometweak()       # whatever per-scene adjustment is needed
    
    scene2 = clip.Trim(101,200)
    scene2 = scene2.sometweak()
    
    scene3 = clip.Trim(201,300)
    scene3 = scene3.sometweak()
    
    scene1 ++ scene2 ++ scene3        # "++" = aligned splice
    Should I use more sophisticated tools or change my method? What do you think about autolevels()?

    • The main problem remaining is that the video looks like a cartoon after denoising. If you look at their faces, this is very clear in the following frame. It looks like the skin is chipped.

    [attached image: texture0000.jpeg]

    I tried gradfun2dbmod(thr=2,str=0.9,mask=false) but it doesn't seem to help... I'm now wondering if our MCTD() settings were a bit too strong..
    I made a comparison video: http://www.mediafire.com/?75appynaaobr167 (on the left, the noise problem, which we could get rid of... but the right-hand side doesn't look so natural)

    • I'm not sure I understand what you mean by mosquito noise or ringing. As far as I know, the following image shows ringing artifacts, but I don't see them in our video:

    [attached image: 178_NTSC_DVD_133_2.jpeg]

    If you're referring to the grain around the edges, which gets stronger with motion, I think RemoveGrain(3) does a great job; that's what I used for processing the last video.
  16. -30-
      Last edited by sanlyn; 21st Mar 2014 at 16:19.
  17. -30-
      Last edited by sanlyn; 21st Mar 2014 at 16:19.
  18. Originally Posted by sanlyn View Post
    I have a direct download copy of the m4v source here: http://dc402.4shared.com/download/Osc_fd2F/AllinYourName.m4v .
    That file is definitely progressive.
  19. -30-
      Last edited by sanlyn; 21st Mar 2014 at 16:25.
  20. Member
    Join Date: Sep 2007
    Location: Canada
    Originally Posted by sanlyn View Post
    If you "force" a SeparateFields operation and run the likes of MCTD or TemporalDegrain on Even and Odd separately, then reweave, you get entirely different results than just a straight run. I still don't know why.

    On interlaced content, when you separate fields and group the even and odd fields for processing, each subsequent field in the odd or even group represents the next moment in time but contains ALL the information for that moment (each field is essentially half a progressive frame) - which is the correct way to do it.

    But on progressive content, the two fields that make up each frame belong to the same moment in time - so when you apply the same interlaced filtering treatment, you're only processing half of each frame separately: motion vectors get truncated and edges are cut in half before the fields are rewoven. The denoising filter "thinks" that the individual field contains all the information, but it doesn't. That's the reason you introduce combing and line artifacts.
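    A minimal sketch of the even/odd grouping described above, for genuinely interlaced material (the field order and the denoiser settings are only placeholders):
    Code:
    AssumeTFF()                    # or AssumeBFF(), whichever matches the source
    SeparateFields()
    even = SelectEven().MCTemporalDenoise(settings="medium")   # all the top fields, in time order
    odd  = SelectOdd().MCTemporalDenoise(settings="medium")    # all the bottom fields, in time order
    Interleave(even, odd)
    Weave()                        # put the filtered fields back into interlaced frames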
  21. -30-
      Last edited by sanlyn; 21st Mar 2014 at 16:25.
  22. Member
    Join Date: Sep 2007
    Location: Canada
    Originally Posted by sanlyn View Post
    I know what you're saying. I agree. I'm back to treating it as plain vanilla progressive, and it never left the filter chain as anything but progressive.. But why does SeparateFields/reweave look cleaner and respond to further work, but progressive comes back as unworkable mush. I understand interlaced-vs-non-interlaced, no question in my mind. I'm just reporting what I see:
    http://forum.videohelp.com/threads/341917-remove-noise-and-block?p=2144101&viewfull=1#post2144101

    There appears to be a difference in luminance as well. Are you sure the only difference was separating odd/even fields?

    MCTD has an interlaced switch (interlaced=true), so there is no need to separate fields - usually grouped odd/even fields are only used on interlaced content, with filters that have no interlaced mode of their own.
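    With MCTD that just means something like this (the settings value here is only an example):
    Code:
    MCTemporalDenoise(settings="medium", interlaced=true)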

    But those demonstrate the horizontal line artifacts you get when you separate fields - when you use a denoiser that counter-sharpens, the edges of the field (not the frame) are sharpened as well, because it treats the field as a frame. You can see the aliasing and lines near MJ's back and head.

    So why does it appear "cleaner"? I don't know. I think in progressive mode it does a better job of preserving detail because objects are whole (not cut in half), so things like motion vectors aren't truncated. Small details like the folds in the dark shirt and the back of the right arm are preserved. But this clip is so bad that what it classifies as detail might be noise. The noise pattern is very bizarre, even for Xvid. It's not completely random dancing grain; it lingers through frames.
  23. -30-
      Last edited by sanlyn; 21st Mar 2014 at 16:26.
  24. Member
    Join Date: Sep 2007
    Location: Canada
    Originally Posted by sanlyn View Post

    As for that noise I keep calling "grain". I don't know, I never saw anything like that. Will take a look in the morning.

    It's a very bizarre pattern. The original "grain" was probably due to low light / sensor noise, but the Xvid compression job is horrendous. If it were more like typical "dancing" grain, it would be much easier to clean. This is smeary, lingering "grain" with macroblocking thrown in. As mentioned earlier, this isn't a straightforward Hi8 transfer - there's other stuff going on. I assume this was the original file that mathmax put up, and that he didn't re-encode it?
    @sanlyn: Regarding the first point, I didn't formulate my question well. I was wondering why the colors peak around 255 whereas the luma peak is around 235. If the three colors have high intensity around 255, the resulting luma should be intense around the same value, shouldn't it? And then I wondered whether all the colors should be under 235..
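    (For what it's worth, here is the arithmetic behind that expectation, assuming the histogram works from Rec.601-converted RGB:)
    Code:
    # Rec.601 luma recomputed from RGB:
    #   Y = 0.299*R + 0.587*G + 0.114*B
    # so R = G = B = 255 gives Y = 255; peaks near 255 in all three RGB channels would
    # normally imply a luma peak up there too, unless the YUV->RGB conversion itself
    # (studio-range 16-235 expanded to full-range 0-255) has already shifted the scales.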

    You're right about the yellow being oversaturated in the bright areas.. and it's like that throughout the video. Is that why you want to lower red and raise blue?

    I understand why it's easier to work in RGB, since we can tweak each color individually.. but don't you lose quality with the YUV->RGB conversion? And I thought ColorMill or a similar plugin lets you work on each color individually and then converts the settings into YUV terms... this approach, as far as I understood it, would give the user RGB logic while keeping the video itself in YUV.

    You say SmoothLevels() clamps the values to 235, but I think that is not the case if the limiter is set to 2. If you look at the 2 histograms posted at the top of this page, you'll see that they both preserve values above the vertical white lines. If the values were clamped, there shouldn't be any white above these lines, right?

    I didn't know it was possible to select scenes (ranges of frames) in VirtualDub and then apply certain filters or settings only to those ranges. Is that really possible?

    I looked at gradfun2dbmod again.. but I still can't really see where it makes a difference. Could you illustrate with some screenshots?

    I can't see the ringing issue in our video.. maybe I'm confusing ringing and halos..
    Last edited by mathmax; 29th Feb 2012 at 21:31.
  26. Originally Posted by poisondeathray View Post
    I assume this was the original file that mathmax put up, and that he didn't re-encode it ?
    Indeed, I have nothing to do with this encoding.. it's sold like that on Barry Gibbs' website..
  27. Member
    Join Date: Sep 2007
    Location: Canada
    Originally Posted by mathmax View Post
    I wonder why on the first histogram the luma is clipped to 235 whereas the colors are up to 255.
    I think this is from your color tools setup. If you set source attributes to 16-235 in the configuration, you will see it go to RGB 255

    Originally Posted by mathmax View Post
    You say SmoothLevels() clamps the values to 235 but I think that is not the case if the limiter is set to 2. If you look at the 2 histograms posted at the top of this page, you'll see that they both preserve values above the vertical white lines. If the values were clamped, there shouldn't be any white above these lines, right?
    Actually, limiter=2 clamps Y' to 16-235 and CbCr to 16-240. By default, limiter=0 (no clamping).

    What I think sanlyn was talking about is the hard white line - that's hard clipping from the camera; the signal goes beyond the latitude of the sensor.

    "clipping" is different than "clamping". Clipping suggests everything is cut off beyond a certain level. Clamping suggests everything is "squished"
  28. Originally Posted by poisondeathray View Post
    I think this is from your color tools setup. If you set source attributes to 16-235 in the configuration, you will see it go to RGB 255
    ah ok.. but why are the scales of the RGB colors not altered by this attribute?


    Originally Posted by poisondeathray View Post
    So why does it appear "cleaner"? I don't know. I think in progressive mode it does a better job of preserving detail because objects are whole (not cut in half), so things like motion vectors aren't truncated. Small details like the folds in the dark shirt and the back of the right arm are preserved. But this clip is so bad that what it classifies as detail might be noise. The noise pattern is very bizarre, even for Xvid. It's not completely random dancing grain; it lingers through frames.
    This is strange, but processing on separated fields really does make the video look cleaner. Here are two frames to compare:

    http://forum.videohelp.com/attachment.php?attachmentid=11169&d=1330280416
    http://forum.videohelp.com/attachment.php?attachmentid=11168&d=1330280413

    But the second one looks more like a cartoon... regarding the problem we mentioned above.
    Last edited by mathmax; 29th Feb 2012 at 21:25.
  29. Member
    Join Date: Sep 2007
    Location: Canada
    Originally Posted by mathmax View Post
    Originally Posted by poisondeathray View Post
    I think this is from your color tools setup. If you set source attributes to 16-235 in the configuration, you will see it go to RGB 255
    ah ok.. but why are the scales of the RGB colors not altered by this attribute?
    I'm not sure, and it really doesn't matter; it's just a monitoring tool and doesn't affect the actual video. I don't even know why it's there as an option. The actual RGB values are governed by how the Y'CbCr => RGB conversion was done. If you didn't specify and let VirtualDub do it, it will use the Rec.601 matrix. Either way, the tool reads off the RGB values (not the original video) and applies the equation to get the luma; it doesn't read the luma directly.
  30. -30-
      Last edited by sanlyn; 21st Mar 2014 at 16:26.


