I'm trying to convert a video to MPEG2 for DVD. So far, HC Encoder has worked wonderfully and helped me deal with the errors I was getting when I attempted to save to MPEG2 directly from Sony Vegas.
The only problem is that I'm getting a barely noticeable colour loss in the resulting video.
Both the source and the converted video are MPEG2 with TV levels expanded. The colour that shows the biggest difference between the two pictures is red: in the converted frame it looks more pink, though I think every colour has lost a bit of contrast after conversion. Apparently only black, and maybe white, are the same in both the source and the converted video.
What could I be doing wrong?
I'm using Debugmode Frameserver to get the video from Sony Vegas to the encoder in RGB32.
Since Vegas gets the sources at their original 16-235 levels, I naturally used the Levels filter to convert anything that doesn't come from the MPEG2 source to studio levels.
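For reference, this is roughly the levels arithmetic such a filter performs. A quick Python sketch of my own; the helper names are made up for illustration, not what Vegas or the Levels filter actually calls internally:

```python
# Illustration of full-range <-> studio (TV) level mapping for 8-bit values.
# These helpers are hypothetical; they only show the arithmetic.

def full_to_studio(v):
    """Map a full-range value (0-255) into studio range (16-235)."""
    return round(16 + v * 219 / 255)

def studio_to_full(v):
    """Map a studio-range value (16-235) back to full range (0-255)."""
    return round((v - 16) * 255 / 219)

print(full_to_studio(0), full_to_studio(255))   # 16 235
print(studio_to_full(16), studio_to_full(235))  # 0 255
```

Note that because 219 levels have to represent 255, the mapping quantises: different full-range values can land on the same studio-range code.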
In Avisynth, I'm using:
AVISource <--- to load the video.
ConvertToYV12(matrix="PC.601", interlaced=true) <--- to make HC Encoder understand that the video is already 4:2:0 at TV levels (16-235), so no additional colour or brightness changes should be made.
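For what it's worth, here is a Python sketch (my own illustration, not Avisynth's internals) of what the matrix choice means for luma in a conversion like the one above: a "PC" matrix keeps the full 0-255 swing, while a "Rec" matrix additionally compresses into 16-235. The coefficients are the standard BT.601 luma weights.

```python
# Hypothetical helpers showing the PC.601 vs Rec601 difference for luma.

def luma_pc601(r, g, b):
    # PC.601: full-swing output, levels left untouched
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def luma_rec601(r, g, b):
    # Rec601: same weights, then scaled into studio range
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return round(16 + y * 219 / 255)

print(luma_pc601(255, 255, 255), luma_rec601(255, 255, 255))  # 255 235
print(luma_pc601(0, 0, 0), luma_rec601(0, 0, 0))              # 0 16
```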
These are my NON-DEFAULT settings for HC Encoder:
*BITRATE 3500
*MAXBITRATE 9800
*PROFILE best
*ASPECT 4:3
*AUTOGOP 15
*INTERLACED
*TFF
*CHROMADOWNSAMPLE 1
*MINBRFAC 0.50
*INTRAVLC 2
*MATRIX mpeg
*PRIORITY high
Any help would be appreciated.
-
Last edited by enzeru; 14th Aug 2017 at 09:46.
-
Seems like there are YUV<->RGB rounding errors, caused by outputting your video (which I assume is originally YV12) as RGB32 and then converting back to YV12.
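A toy Python model of an 8-bit RGB<=>YUV round trip (full-range BT.601 maths of my own, not what any of these tools literally run) shows where such errors come from: some values survive the trip, while neighbouring ones come back nudged.

```python
# Toy 8-bit RGB <-> YUV round trip, full-range BT.601. Illustration only.

def clamp8(x):
    return max(0, min(255, int(x + 0.5)))  # round half up, clip to 8 bits

def rgb_to_yuv(r, g, b):
    # Y from the standard luma weights; Cb/Cr scaled so B-Y and R-Y fit 8 bits
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 + (b - y) / 1.772
    cr = 128 + (r - y) / 1.402
    return clamp8(y), clamp8(cb), clamp8(cr)

def yuv_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return clamp8(r), clamp8(g), clamp8(b)

print(yuv_to_rgb(*rgb_to_yuv(100, 100, 100)))  # (100, 100, 100): survives
print(yuv_to_rgb(*rgb_to_yuv(100, 100, 101)))  # blue comes back changed
```

Each extra RGB<=>YUV hop gives the rounding another chance to shift a value, which is why minimising the number of conversions matters.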
-
Why did you take the images down?
What kind of operations are you doing in Vegas?
Why does the script use interlaced=true? This looks like it should be progressive content (you should probably IVTC before editing).
You'll get more accurate results (less rounding error) and cleaner lines with less bleeding (better chroma up/downsampling) if you control the RGB conversion in Avisynth or VapourSynth first, before editing in Vegas.
-
I added some PNG images to the video and applied a levels filter to keep them in limited range. But the video itself is not being modified (brightness, saturation, etc.; it's the same as in the source).
I'm working with interlaced content and I can't IVTC it because it has some frames that can't be recovered (I don't know how to explain it properly, but let's say some frames are at 60fps, so I only have half a field for those). Anyway, I'm actually enjoying editing in interlaced mode, so that's not a problem, to be honest.
I don't know how to import an Avisynth script into Sony Vegas, but I think this has more to do with the output suffering from Avisynth's ConvertToYV12 filter, as others pointed out. When I rendered the video in Vegas I did not get colour loss, but of course, since Vegas is buggy with non-uncompressed renders, I can't use that as a solution.
-----
By the way, I'm not familiar with that VapourSynth program. Would it work better than Avisynth, and could I use it as an input for HC Encoder?
Last edited by enzeru; 14th Aug 2017 at 10:51.
-
It is being modified, by Vegas. It works in studio RGB, not YUV. Presumably the source is YUV. That conversion is not performed ideally by Vegas (or there are better ways).
This has nothing to do with HCEnc; it's purely down to the method of RGB<=>YUV conversion.
Download the images below and compare:
1 is the source YV12 MPEG2 (converted to RGB for display)
2 is the "pure" Avisynth workflow: YV12 source => RGB => YV12, but before HCenc (converted to RGB for display)
3 is the "normal" dfs Vegas workflow: import MPEG2 into Vegas => dfs => YV12 PC levels to "undo" the studio RGB conversion, but before HCenc (converted to RGB for display)
4 is the source MPEG2 converted to RGB first: RGB import into Vegas => dfs RGB out => RGB to YV12 Rec levels via Avisynth, but before HCenc (converted to RGB for display)
The source YV12 MPEG2 "red" bar was 180,16,17, but after the "normal" dfs Vegas workflow it's 177,18,17. Pure Avisynth, or RGB converted prior to Vegas, returns the same 180,16,17. Also look at the colour bleeding exacerbated by the worse chroma up/downsampling algorithm; it's much worse in Vegas. For sources like cartoons/anime with clean colour lines and borders, such as in your screenshot, it will look more noticeable than in something like live-action content.
-
Part of the problem with editing without pulldown removal (editing while still interlaced) is that you can only cut on special frames, right on cadence boundaries. Otherwise you might break the cadence, get interlaced fades, or cause field problems. If you're editing "cuts", you have to be very careful where you cut the progressive content sections if the video is still interlaced.
Another way you could do it is to apply the "studio RGB to computer RGB" preset just before the dfs export. Then ConvertToYV12() would only use the Rec matrix instead of PC. This will be more accurate (because Vegas is undoing its own studio RGB conversion instead of doing a wrong one), but you will still have the worse chroma bleeding artifacts from the worse chroma up/down sampling.
-
I was using ConvertToYV12 since I think that's the only colour format that works for a DVD, and I read that PC.601 would leave the range untouched (now I'm not sure if this is true) and treat the video as if it were in TV levels, without additional conversion. All of this because I'm working at studio levels (which I thought were the same thing as limited range).
What can I use, then, to feed HC Encoder with TV levels while avoiding additional conversion by HC itself?
I found this: http://trevlac.us/colorCorrection/YUY2toRGB219.zip but I don't understand whether it would work, since it says RGB and not YV12.
If you tell me this is the right plugin to use in Avisynth, then I would need to change the output of Debugmode Frameserver to YUY2 instead of RGB32, am I right?
-
See post #8. You will get the closest results by applying the studio RGB to computer RGB preset just before Debugmode. Frameserve RGB, then ConvertToYV12(matrix="rec601", interlaced=true), or interlaced=false if you decide to do it the other way and remove pulldown.
The results will still be worse because of the chroma bleeding, but the colours will be closer than what you had. The "best", cleanest results come from doing it with Avisynth or with RGB import.
-
I understand what you are saying regarding the interlaced content. The funny thing is that this anime has weird cuts, and that's what I was referring to with the 60fps parts: some of the original cuts end in a half field that cannot be recovered with IVTC. In other words, it originally has a broken cadence.
So should I scrap all the limited-range editing and just output in full range, letting Avisynth do a better conversion? If you are saying it won't make anything worse and will actually improve the "rounding errors" a bit, then I think that would be the easiest solution.
-----
By "worse" you mean that they are going to be just "as bad" as what I'm already getting in "chroma bleeding"? -
Also be aware that different players and editors may round differently when converting YUV to RGB for display. One may round, say, 160.7 to 161, whereas another might truncate it to 160. The differences can add up over multiple YUV/RGB conversions.
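As a trivial Python illustration of that point:

```python
# The same intermediate value lands on different 8-bit codes depending on
# whether a converter rounds to nearest or simply truncates.
value = 160.7

print(round(value))  # 161: round to nearest
print(int(value))    # 160: truncate toward zero
```

Over several YUV<->RGB trips, these one-code differences are exactly the kind of drift that accumulates.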
-
Yes, unfortunately it's common. But do you see that if the editor had done things properly, the original would be OK? No reason to propagate bad habits or make it worse; just pay attention to where you are cutting if you keep it interlaced while editing.
-----
In theory, you can get zero additional bleeding if you use the nearest neighbour algorithm for the chroma up- and downscaling with centered chroma sample interpolation. Because you're just duplicating samples and then discarding samples, you're left with the original if it's done with powers of 2. I'll test this out later but, IIRC, there were issues with Avisynth's internal functions; you needed dither tools or VapourSynth to do it correctly.
-
Poisondeathray, you are a genius!! It really worked!! The colours are now the same.
Now the chroma bleeding is the only thing that stops the conversion from being perfect, but with my limited knowledge and disc space I don't think I'd be able to fix that.
-
But do you actually *need* vegas for anything ? If you can avoid the RGB conversion altogether it would be even better (that becomes "perfect" essentially). You're importing some PNG images right ? what else ? what other manipulations? Because you can do that in avisynth without even having to go to RGB. (The PNG images can get converted to YV12, but it's not necessary to degrade everything else)
It is possible to import an avs into Vegas as a "fake avi" using the AviSynth Virtual File System (AVFS). So there's no large intermediate file with big HDD requirements; it's a "virtual" file. But performance (seeking/scrubbing) is much slower than with an actual file like UT Video, or even Lagarith (and Lagarith is slow).
Here is the dither script with point-resized chroma (but not actually dithering; it's disabled with mode=-1). You can see the bleeding is reduced even more. The only losses are from the RGB<=>YUV conversions; there are no additional losses from bicubic interpolation resizing of the chroma planes (you're just doubling chroma samples when converting to YV24, then discarding the same ones later when converting back to YV12, instead of interpolating in between). This was done on progressive content, but in theory it should be applicable to interlaced if done on separated fields.
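A toy 1-D Python model of a single chroma row (my own illustration, not dither tools' actual code) shows why the point-resize round trip is lossless at a 2x factor:

```python
# Point / nearest-neighbour chroma scaling at a power-of-2 factor: upsampling
# duplicates every sample, downsampling keeps every second one, so a full
# up/down cycle hands back exactly the original samples.

def upsample_point(row):
    # e.g. 4:2:0 -> 4:4:4 horizontally: duplicate each chroma sample
    return [s for s in row for _ in range(2)]

def downsample_point(row):
    # 4:4:4 -> 4:2:0 horizontally: keep every second sample
    return row[::2]

chroma = [90, 54, 240, 110]
print(upsample_point(chroma))                    # [90, 90, 54, 54, 240, 240, 110, 110]
print(downsample_point(upsample_point(chroma)))  # the original values, bit for bit
```

A bicubic or bilinear kernel would instead interpolate new in-between values on the way up and blend neighbours on the way down, which is where the bleeding comes from.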
5 - is the source YV12 mpeg2, converted to RGB with point resized chroma. That would be imported into vegas etc... Then to convert it back you point resize again and convert to YV12. The image is before HCenc (converted to RGB for display), like the others.
Code:
# Original source, converting to RGB24
Dither_convert_8_to_16()
Dither_resize16(orig.width, orig.height, kernel="point", csp="YV24")
# The chroma placement is ignored when center is set to false or kernel to "point"
Dither_convert_yuv_to_rgb(matrix="601", cplace="mpeg1", output="rgb24", mode=-1, lsb_in=true)

# RGB intermediate, vegas etc...

# RGB from dfs, converting back to YV12 for HCenc
Dither_convert_rgb_to_yuv(matrix="601", cplace="mpeg1", output="YV24", mode=-1)
Dither_convert_8_to_16()
Dither_resize16(orig.width, orig.height, kernel="point", csp="YV12")
DitherPost(mode=-1)
Last edited by poisondeathray; 14th Aug 2017 at 14:59.
-
Extract latest avfs.exe from VapourSynth portable package. Then learn how to use command-line applications.
Code:
syntax: avfs [<switch> ...] <script file>
switches:
  -d  Print diagnostic info to stdout.
  -s  Serve to stdin/stdout.
-
-
(And for completeness: Gavino actually solved the proper point-resizing round trip for YUV<=>RGB a few years back with internal Avisynth functions. It gives almost identical results to the dither method; if you zoom in 4x you might see minor differences. He even gives the interlaced version. I should have remembered this one...)
Code:
http://forum.doom9.org/showthread.php?t=164737

# progressive
ConvertToYV24(chromaresample="point")
MergeChroma(PointResize(width, height, 0, 1))
ConvertToRGB32()
... # filtering in RGB32
ConvertToYV12(chromaresample="point")

# interlaced
ConvertToYV24(interlaced=true, chromaresample="point")
MergeChroma(PointResize(width, height, 0, 2))
ConvertToRGB32()
... # filtering in RGB32
ConvertToYV12(interlaced=true, chromaresample="point")
-
So, the only thing that matters now is importing an RGB source into Vegas.
Why chromaresample instead of Rec 601?
Of course I did the latter but, as far as I can tell, both methods should work if this thing ever works for me.
-
Rec601 is the matrix (you would use Rec709 for HD) . This is different than chromaresample, which specifies the resizing algorithm used on the chroma planes. By default it's bicubic. "Point" is the same thing as nearest neighbor
When you omit "matrix" argument , it's assumed Rec601 by default in avisynth