VideoHelp Forum




  1. Member zanaitoryoushi
    Hi,

    I'm using Ripbot264 for encoding movies.
    I just want to know if the ARRANGEMENT of the code below is correct. I don't exactly know how to use MergeChroma, and I want to see if I used it properly.

    DirectshowSource("abcde.mp4").ConvertToRGB32 / .ConvertToYV24 (matrix="", chromaresample="spline36")
    MergeChroma(Spline36Resize(x,y, 0, 1).Sharpen(.6))
    ConvertToYV12(matrix="", chromaresample="spline36")


    I want to apply the same thing that was suggested here https://nic.dnsalias.com/showthread.php?p=1571315#post1571315



    Thank you.
  2. MergeChroma simply takes the colors from one source and applies them to another. Both videos have to have the same frame size and both must be in the same YUV format. As with most filters, if you don't supply the source video, "last" will be assumed. I.e.,

    Code:
    MergeChroma(othervideo)
    is the same as
    Code:
    MergeChroma(last, othervideo)
    is the same as
    Code:
    last = MergeChroma(last, othervideo)
    the colors from othervideo are applied to last.
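
    For example, a minimal sketch (reusing the file name from the first post; Blur(1.0) is just an arbitrary filter to make the chroma source visibly different):

    Code:
    src = DirectShowSource("abcde.mp4")   # the OP's YV12 source
    blurred = src.Blur(1.0)               # same frame size, same colorspace
    MergeChroma(src, blurred)             # luma comes from src, chroma comes from blurred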

    There are many things wrong with your script as written.
    Last edited by jagabo; 21st May 2013 at 08:55.
  3. Member zanaitoryoushi
    Oh ok, thanks. Hmm, basically, using MergeChroma when resizing is not a good idea then, unless you're encoding at the same frame size / resolution. So the script is wrong, right? Still, can you give me advice on how to rearrange the script I created into a better one? If that's even possible.

    My source is in YV12 format, and based on some comments on that thread, YV12 > YV24 > YV12 is advised to at least preserve quality close to the source material. Hmm, what if I omit the MergeChroma? Is that better?

    Sorry for all these questions, I just wanted to know since I can't test it right now. I'm at work. Thank you for the input.
  4. Banned (sanlyn)
    I'd say:

    DirectshowSource("abcde.mp4")
    Spline36Resize(target_width, target_height)

    If your source is interlaced or telecined, etc., you must correct that first. If your source has noise + invalid luma/chroma levels + other defects, that has to be fixed first as well. I couldn't read your link (NOD32 gave me an "untrusted site" message), but if aliasing is involved, that comes first, too. It's not really possible to give accurate answers because we don't know what your video looks like.
    Last edited by sanlyn; 25th Mar 2014 at 19:31.
  5. The point of that thread was to minimize losses when going into RGB and back to YV12.

    You absolutely don't want to do that unless converting to RGB is necessary for some reason (e.g. some programs or filters require RGB).
  6. Member zanaitoryoushi
    @sanlyn Thanks, yeah. That's how it looks in Ripbot by default, without any input:
    DirectShowSource("blahblah.extension").ConvertToYV12 comes first, then the resizing.

    I want to preserve the same quality as my source; that's why, instead of .ConvertToYV12 after DirectShowSource, I use ConvertToRGB32 > then after resizing I do another ConvertToYV12.

    I wanted to use ConvertToYV24 instead of RGB32 as suggested on the thread that I've seen.

    Main reason I'm doing this is I'm viewing the movie on a portable device.
    Converting to RGB32 then back to YV12 (which is also the format of my source) helps in eliminating jagged edges (for me at least), or probably that's what you call aliasing. Converting it without doing YV12 > RGB32 > YV12 shows jagged edges, and they're extremely visible on solid colors like red, etc. (Sorry, I can't show screenshots because I'm at work right now.)

    I also know that the resizer used plays a part in eliminating those pesky jagged edges. I found that using nnedi helps in this respect and achieves the same result (based on my test clips) as the YV12 > RGB32 > YV12 conversion. However, it is extremely slow, which is why I'll never use it.

    My only problem with YV12 > RGB32 > YV12 is that colors seem to be a bit washed out. Though not really very visible, it's somehow annoying since I'm aware of how those colors should look. That's why I'm looking for ways to preserve the source's quality (the colors, for that matter) as much as possible. I even tried using Tweak after the YV12 > RGB32 > YV12 conversion, but I'm finding it hard to get the colors that I want; the last settings I used were Tweak(video, sat=1.117, bright=-.6, cont=1.117), but I don't know. I can't find the right feel for it using the Tweak function.
  7. Banned (sanlyn)
    If your source is YV12, the first ConvertToYV12 statement is ignored. There are many ways to treat aliasing that don't require a colorspace conversion; it's best to avoid those conversions when possible, or to use something like the dither() plugin. Most anti-alias procedures I've seen involve resizers and other filters, but none require color conversion from YV12.

    Aliasing -- if that's really the problem -- is usually the result of improper deinterlacing.

    Brightness and contrast themselves are not the best way to improve color; gamma works better. But without a sample of your video, no one can really say.
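
    For example (just a sketch, the value is picked arbitrarily), a gamma adjustment can be done with Levels() without leaving YV12:

    Code:
    Levels(16, 1.1, 235, 16, 235, coring=false)   # gamma 1.1 lifts the midtones; black and white points stay at 16/235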
    Last edited by sanlyn; 25th Mar 2014 at 19:31.
  8. Originally Posted by zanaitoryoushi View Post
    My only problem with YV12 > RGB32 > YV12 is that colors seem to be a bit washed out.
    I suspect you just used the wrong matrix somewhere.

    Code:
    WhateverSource() # YV12 source
    ConvertToRGB32()
    ConvertToYV12()
    would not cause the picture to look washed out. It would cause slight blurring of the chroma channels. Superblacks and superbrights would be lost.
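
    If your source is HD and flagged as Rec.709, a sketch that names the matrix explicitly on both conversions (WhateverSource is just a placeholder, as above) would avoid a hue shift:

    Code:
    WhateverSource()                 # YV12 source, assumed Rec.709
    ConvertToRGB32(matrix="rec709")
    ConvertToYV12(matrix="rec709")   # same matrix in both directions, so no color shift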
    Last edited by jagabo; 21st May 2013 at 10:31.
  9. Originally Posted by zanaitoryoushi View Post
    Converting to RGB32 then back to YV12 (which is also the format of my source) helps in eliminating jagged edges (for me at least), or probably that's what you call aliasing.
    That's not really a valid reason to convert to RGB. You're losing more quality than necessary, and not preserving the quality of the source at all.

    That thread suggests methods of minimizing the loss of chroma samples (chroma resolution) when converting to RGB - but there are other losses incurred when you convert to RGB.

    These are avoidable quality losses - rounding errors, and gamut clipping, because the YUV and RGB color models don't overlap exactly. Many YUV values have no valid representation in sRGB and are lost. The YUV gamut is about 4x larger than sRGB, so as soon as you convert 8-bit YUV to 8-bit RGB you will incur banding along gradients (you need at least a 10-bit RGB model to avoid this).

    If you have aliasing, use an AA filter that works in the same color space. I suspect it's probably your device that is upsampling the chroma with nearest neighbor, causing "blocky" chroma edges. What you're effectively doing by converting back and forth is blurring the chroma, because spline36 interpolation is used. Notice that the other thread used "point" - the reason is that the goal was to preserve the original chroma samples (keep the original blocky color edges and minimize the blurring losses when going to RGB for other programs), not to use some smoothing interpolation when up/down-rezzing the chroma.
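
    If you do have to go through RGB for some other program, something along the lines of what that thread describes (my sketch, not the exact script from the link) keeps the original chroma samples by using "point" for the chroma resampling:

    Code:
    ConvertToYV24(chromaresample="point")   # duplicate the chroma samples instead of interpolating them
    ConvertToRGB32(matrix="rec709")         # ...RGB-only processing goes here...
    ConvertToYV24(matrix="rec709")
    ConvertToYV12(chromaresample="point")   # back to 4:2:0, keeping the blocky-but-original chroma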
    Last edited by poisondeathray; 21st May 2013 at 10:27.
  10. Member zanaitoryoushi
    Thank you all for your patience and for taking the time to respond.
    @jagabo Right now, yes, that's what I want, I guess. To at least match what my source looks like.
    Oh yeah, hmm, probably what you said was correct, so I'm guessing that there is no way in achieving superblacks and superbrights after the colorspace conversion. But there could be something that would get me close to it, even if it's not exactly the same as the source, right?

    @poisondeathray
    I do know that, yes; that's why I want to see if converting the colorspace like YV12 > YV24 > YV12 is better than using RGB32.
    I also did quite some research before, and none of it mentions using colorspace conversion to eliminate aliasing.
    It was just a test of mine; right now I really can't recall how I came up with the idea, but it does work. I'll show screenshots of the conversion using what Ripbot does by default versus the colorspace-conversion method, but sadly I'm still in the office. The default conversion shows jagged edges that are very visible, whereas the colorspace conversion reduces the jagged edges (they're still there, though the effect is not as strong). Doing a .ConvertToRGB32 after DirectShowSource (YV12 > RGB32 only) eliminates them; I'm not sure the jagged edges are really gone, but they aren't visible at all, to my eyes at least. However, I don't want that, since the strength of the colors doesn't look the same; they appear washed out and dull. Red turns into somewhat orange-ish; it's still red, but it leans towards orange. The encoded clip still looks great, but it's not like the source. So returning it to YV12 is better, but the colors are not as strong anymore. I just want to enhance the color to get near the quality of my source. Not exactly how my source looks, but somewhere near it.

    @sanlyn I agree; I really wish I could upload a sample clip. Thank you for the input.
    Last edited by zanaitoryoushi; 21st May 2013 at 10:42.
  11. Originally Posted by zanaitoryoushi View Post

    @poisondeathray
    I do know that, yes; that's why I want to see if converting the colorspace like YV12 > YV24 > YV12 is better than using RGB32.
    I also did quite some research, and none of it mentions using colorspace conversion to eliminate aliasing.
    It was just a test of mine; right now I really can't recall how I came up with the idea, but it does work. I'll show screenshots of the conversion using what Ripbot does by default versus the colorspace-conversion method, but sadly I'm still in the office. The default conversion shows jagged edges that are very visible, whereas the colorspace conversion reduces the jagged edges (they're still there, though the effect is not as strong). Doing a .ConvertToRGB32 after DirectShowSource (YV12 > RGB32 only) eliminates them; I'm not sure the jagged edges are really gone, but they aren't visible at all, to my eyes at least. However, I don't want that, since the strength of the colors doesn't look the same; they appear washed out and dull. Red turns into somewhat orange-ish; it's still red, but it leans towards orange. The encoded clip still looks great, but it's not like the source. So returning it to YV12 is better, but the colors are not as strong anymore. I just want to enhance the color to get near the quality of my source. Not exactly how my source looks, but somewhere near it.

    How are you viewing this? What software or hardware?

    Red => orange is probably because you are using the wrong matrix (a 709 vs 601 issue).
    If you're going HD => SD you usually need to convert to 601.

    The YV12 -> RGB -> YV12 conversion (and back to RGB for display) using spline is just blurring the chroma. The more times you do this, the blurrier it becomes - sure, it might reduce the blocky color edges, but it effectively reduces chroma resolution as well - small details will be gone.

    YV12 > YV24 > YV12 is better than using RGB32.
    Yes it is - you don't incur the other losses from taking a trip into RGB land - but it still blurs the chroma when you use something like spline interpolation. Think of it as upscaling and downscaling an image (but applied to the chroma planes). The more times you do it, the blurrier it becomes.
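
    To picture it, here's roughly what the chroma planes go through on a 1920x1080 4:2:0 clip (illustrative only - "source" is a stand-in for your clip, and UToY just pulls a chroma plane out as its own clip so you can see the size):

    Code:
    c = UToY(source)                   # the U plane of a 1920x1080 YV12 clip is 960x540
    c = c.Spline36Resize(1920, 1080)   # what ConvertToYV24 does to the chroma
    c = c.Spline36Resize(960, 540)     # what ConvertToYV12 does going back - a little softer every pass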




    How you configure your display and renderer can affect how you "see" things. How the video is converted to RGB for display (i.e. the chroma upsampling used) is important. Of course, you have no control over this on portable devices.

    e.g. MadVR
    https://www.videohelp.com/toolsimages/madvr_1196.jpg
    Last edited by poisondeathray; 21st May 2013 at 10:50.
  12. Member zanaitoryoushi
    Originally Posted by poisondeathray View Post

    How are you viewing this? What software or hardware?

    Red => orange is probably because you are using the wrong matrix (a 709 vs 601 issue).
    If you're going HD => SD you usually need to convert to 601.

    The YV12 -> RGB -> YV12 conversion (and back to RGB for display) using spline is just blurring the chroma. The more times you do this, the blurrier it becomes - sure, it might reduce the blocky color edges, but it effectively reduces chroma resolution as well - small details will be gone.
    I'm using my PSP 3000 to view the encoded clip. You are right about the matrix: when I did the YV12 > RGB32 colorspace conversion I used 709, and probably that's the cause of the color shift.

    So is there a way to adjust the chroma resolution?

    -- Hey, by the way, thank you for the input about YV24 being better than RGB.
  13. Originally Posted by zanaitoryoushi View Post
    so I'm guessing that there is no way in achieving superblacks and superbrights after the colorspace conversion.
    Sure there is. Just use the PC matrices: PC.601 and PC.709. But your source shouldn't contain any superblacks or superbrights. And superblacks should all render as the same shade of black anyway; same for superbrights. I.e., losing superblacks and superbrights generally doesn't change the way the video looks when displayed.
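
    For instance (a sketch only - WhateverSource is a placeholder, as earlier), a full-range round trip would be:

    Code:
    WhateverSource()                 # YV12 source
    ConvertToRGB32(matrix="PC.601")  # full-range: Y 0-255 maps to RGB 0-255, so nothing is clipped at 16/235
    ConvertToYV12(matrix="PC.601")   # convert back with the same full-range matrix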

    Originally Posted by zanaitoryoushi View Post
    Red turns into somewhat orange-ish
    I agree with pdr. That's probably a rec.709 vs rec.601 issue.
    https://forum.videohelp.com/threads/329866-incorrect-collor-display-in-video-playback?p...=1#post2045830

    You should post short video segments of your source and your conversions.
    Last edited by jagabo; 21st May 2013 at 10:58.
  14. Originally Posted by zanaitoryoushi View Post

    I'm using my PSP 3000 to view the encoded clip. You are right about the matrix: when I did the YV12 > RGB32 colorspace conversion I used 709, and probably that's the cause of the color shift.

    So is there a way to adjust the chroma resolution?

    I don't understand your question. You can resize the chroma planes, but when the video is subsampled 4:2:0 (your device probably can't play 4:4:4), you are limited in what you can do.


    What are your source clip dimensions, and the export dimensions intended for the PSP?

    YV12 means 4:2:0, i.e. the color information is halved in each direction, horizontal and vertical.

    e.g. a 4:2:0 1920x1080 source has only 960x540 CbCr (chroma plane) resolution, but full 1920x1080 Y' resolution.

    When you ConvertToYV24, you convert to 4:4:4, so you are scaling those 960x540 chroma planes to full resolution: Y' 1920x1080, Cb 1920x1080, Cr 1920x1080. Of course you don't gain any "real" resolution; effectively the chroma resolution is still 960x540, because that's all the source had.

    Resizing a clip to 480x272 4:2:0 gives full 480x272 Y', but only 240x136 CbCr resolution.
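
    A quick way to check this on your own clip (just a sketch, reusing the source filter from your script):

    Code:
    v = DirectShowSource("abcde.mp4")   # your 4:2:0 source
    UToY(v).Info()                      # shows the U plane as its own clip - half your frame size in each direction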
  15. If you just want to blur or sharpen the chroma channels you can do so without any colorspace conversion:

    Code:
    MergeChroma(Blur(1.0))
    or

    Code:
    MergeChroma(Sharpen(1.0))
    Those are pretty extreme but you get the idea.
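
    Something milder is more typical (the value here is arbitrary, just to show the scale):

    Code:
    MergeChroma(Blur(0.3))   # gentle chroma smoothing
    Or MergeChroma(Sharpen(0.3)) for a gentle chroma sharpen.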
  16. Member zanaitoryoushi
    Originally Posted by jagabo View Post
    Originally Posted by zanaitoryoushi View Post
    so I'm guessing that there is no way in achieving superblacks and superbrights after the colorspace conversion.
    Sure there is. Just use the PC matrices: PC.601 and PC.709. But your source shouldn't contain any superblacks or superbrights. And superblacks should all render as the same shade of black anyway; same for superbrights. I.e., losing superblacks and superbrights generally doesn't change the way the video looks when displayed.

    Originally Posted by zanaitoryoushi View Post
    Red turns into somewhat orange-ish
    I agree with pdr. That's probably a rec.709 vs rec.601 issue.
    https://forum.videohelp.com/threads/329866-incorrect-collor-display-in-video-playback?p...=1#post2045830

    You should post short video segments of your source and your conversions.
    I'd really love to post short clips from my source and the conversions. I will put them up after I'm out of the office.

    Thank you all. This has been very, very educational. I'm no expert at all and I don't really know all the terms and technicalities - the things to consider, the effects of different resizers, chroma loss, etc. - but at least I'm understanding it bit by bit, even if I can't keep up with your level.

    By the way, back to my script: if I want a YV12 > YV24 > YV12 conversion, then I guess I'll just drop that MergeChroma thing and play with the PC matrices, correct? That could be a good start for me.
  17. If you stay in YUV, you won't clip superbrights/superdarks, and there is no reason to use the PC matrices (the PC and Rec matrices are for YUV <=> RGB conversions). If you stay in YUV, you can use the ColorMatrix filter (if it was an HD source, convert the video as if it had used 601).

    I'm assuming this was an HD source and you're converting to SD.
    e.g.
    colormatrix(mode="rec.709->rec.601", clamp=0)


    I suspect the problems are with the PSP's chroma upsampling (how it converts YUV to RGB for display). If you play the same clip on the computer (e.g. configured with MadVR), do you see the same problems?
  18. Member zanaitoryoushi
    Originally Posted by poisondeathray View Post
    If you stay in YUV, you won't clip superbrights/superdarks, and there is no reason to use the PC matrices (the PC and Rec matrices are for YUV <=> RGB conversions). If you stay in YUV, you can use the ColorMatrix filter (if it was an HD source, convert the video as if it had used 601).


    I suspect the problems are with the PSP's chroma upsampling (how it converts YUV to RGB for display). If you play the same clip on the computer (e.g. configured with MadVR), do you see the same problems?
    Thank you. This is the first time I've heard about ColorMatrix; I'll try that. Hmm, I don't really configure anything, but if I play the same clip on the PC and on the PSP, the jagged edges, say, still show on the PC, just more subtly than on the PSP. Though I need to double-check that.

    WOW! This is cool. I'll probably try the code you suggested: colormatrix(mode="rec.709->rec.601", clamp=0). By the way, if it's not too much to ask, given that I am still doing this:

    DirectshowSource().ConvertToYV24()
    Spline36Resize(video, x,y,).Sharpen(.6)
    ConvertToYV12()

    where should I put the code? Should it be applied before or after the resize?
    Last edited by zanaitoryoushi; 21st May 2013 at 11:22.
  19. It won't make much of a difference where you put it.

    If all your filters come after the Spline36Resize to lower dimensions, the script will be faster to process, but technically slightly worse because there are fewer pixels to work with (you won't be able to see the difference in the end result, though).

    I suspect Sharpen(0.6) is probably oversharpening and generating halos, especially if this is from HD content - that is a very strong setting. Other sharpeners are "smarter" and reduce oversharpening by limiting it to some extent (e.g. LSFMod, LimitedSharpenFaster), but they are slower to process.
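
    e.g. something like this (a rough sketch - it assumes you have LSFMod.avsi and its dependencies such as MaskTools2 installed, and the strength value is just a starting point to experiment with):

    Code:
    Spline36Resize(480, 272)
    LSFMod(strength=80)   # limited sharpener: sharpens edges but clamps the overshoot, so fewer halos than Sharpen(0.6)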
  20. Originally Posted by zanaitoryoushi View Post
    DirectshowSource().ConvertToYV24()
    Spline36Resize(video, x,y,).Sharpen(.6)
    ConvertToYV12()
    That code makes no sense. video, x, and y are undefined.
  21. Member zanaitoryoushi
    Originally Posted by jagabo View Post
    Originally Posted by zanaitoryoushi View Post
    DirectshowSource().ConvertToYV24()
    Spline36Resize(video, x,y,).Sharpen(.6)
    ConvertToYV12()
    That code makes no sense. video, x, and y are undefined.
    Oh, hehe, sorry, my bad - I was just setting it up as an example; I should have put width and height in there instead.
  22. Consider using MergeChroma(Blur(n)) instead of ConvertToYV24().ConvertToYV12(). That will give you more control over exactly how much blurring you get.

    Also, be careful with DirectShowSource(). It isn't guaranteed to be frame accurate. That probably won't be a problem with what you're doing here, but it will be with more complex scripts that request frames out of order. Use ffvideosource() or some other source filter instead.
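
    For example (a sketch reusing the file name from the first post; ffvideosource comes with the FFMS2 plugin and builds an index file the first time it opens a source):

    Code:
    FFVideoSource("abcde.mp4")   # frame-accurate, indexed source filter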
  23. Member zanaitoryoushi
    Originally Posted by poisondeathray View Post
    It won't make much of a difference where you put it.

    If all your filters come after the Spline36Resize to lower dimensions, the script will be faster to process, but technically slightly worse because there are fewer pixels to work with (you won't be able to see the difference in the end result, though).

    I suspect Sharpen(0.6) is probably oversharpening and generating halos, especially if this is from HD content - that is a very strong setting. Other sharpeners are "smarter" and reduce oversharpening by limiting it to some extent (e.g. LSFMod, LimitedSharpenFaster), but they are slower to process.
    To be honest, I tried using LSFMod and LimitedSharpenFaster before, but I couldn't get them to run for some reason.
    Maybe I'll try them again. I tried the default settings from Scintilla's AviSynth page but wasn't able to make them work - I've probably missed a plugin or two. Since then I dropped the idea of using those sharpeners.
  24. Member zanaitoryoushi
    Originally Posted by jagabo View Post
    Consider using MergeChroma(Blur(n)) instead of ConvertToYV24().ConvertToYV12(). That will give you more control over exactly how much blurring you get.

    Also, be careful with DirectShowSource(). It isn't guaranteed to be frame accurate. That probably won't be a problem with what you're doing here, but it will be with more complex scripts that request frames out of order. Use ffvideosource() or some other source filter instead.
    I will really consider everything that was mentioned in this thread. I will do some trial and error later - I really appreciate all the expert input you guys are providing.
  25. Originally Posted by poisondeathray View Post
    I suspect Sharpen(0.6) is probably oversharpening and generating halos
    It probably doesn't matter on a PSP 3000. The picture is so small it's probably not noticeable. I'd be more concerned about buzzing edges.
  26. Member zanaitoryoushi
    Originally Posted by jagabo View Post
    Originally Posted by poisondeathray View Post
    I suspect Sharpen(0.6) is probably oversharpening and generating halos
    It probably doesn't matter on a PSP 3000. The picture is so small it's probably not noticeable. I'd be more concerned about buzzing edges.

    That is spot on. Though PDR is right if the clip is viewed on older PSP models like the PSP Phat and the 2000.
    On the 3000, though, I think you're right about the buzzing edges, but they are not noticeable when a character is in focus on the screen. I noticed that already, and I guess I'll have to live with it rather than decrease the sharpening level. My only concern right now is the colors.
  27. Instead of sharpening and then blurring the chroma with colorspace conversions, what about sharpening only the luma?

    Code:
    DirectshowSource(...)
    Spline36Resize(480,272)
    MergeChroma(Sharpen(0.6), last)
    ColorMatrix(mode="rec.709->rec.601", clamp=0)
  28. Member zanaitoryoushi
    Originally Posted by jagabo View Post
    Instead of sharpening and then blurring the chroma with colorspace conversions, what about sharpening only the luma?

    Code:
    DirectshowSource(...)
    Spline36Resize(480,272)
    MergeChroma(Sharpen(0.6), last)
    ColorMatrix(mode="rec.709->rec.601", clamp=0)
    Sorry, I was on break. Wow! Again, this is great - thanks a lot. I will definitely try this.