I'm a bit lost here.
I open my VOB files in VirtualDub-MPEG2 and make some changes to the movie (resize, add logo, etc.), then I save my files as uncompressed AVI, which I compress later... But when I compress the AVI files the brightness changes. The files are intended to be played back on a computer.
Are the original MPEG-2 files from the DVD YUY2 or RGB24? When I compress the AVI files to, let's say, WMV files using Media Encoder, are they converted back to YUY2 or RGB24, etc.?
What should I do to fix this? Does it mean I'll have to use AVISynth, or can I do something directly with VirtualDub?
Thanks!
I believe most MPEG files use YV12 format -- a planar YUV format with U and V at half the resolution (both dimensions) of Y:
http://www.fourcc.org/index.php?http%3A//www.fourcc.org/yuv.php
When you convert from the YUV colorspace to RGB and back there is some loss of accuracy. In my experience this is not noticeable as a change in brightness, though.
There's no way to avoid the conversions in VirtualDub, as its filters only work in the RGB colorspace. Most AVISynth filters, on the other hand, can work in YV12.
I don't know what colorspace format WMV uses internally but I wouldn't be surprised if it's YV12 -- it's a cheap way to get a 50 percent reduction in size right off the bat and the format is easy to work with. -
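As a quick illustration of that 50 percent figure, here is a minimal sketch (my own, assuming 8 bits per sample) of the per-frame byte counts:

```python
# Per-frame byte counts: RGB24 vs. YV12 (planar YUV, chroma at half
# resolution in both dimensions), for 8 bits per sample.
def rgb24_size(w, h):
    return w * h * 3                # full-resolution R, G and B samples

def yv12_size(w, h):
    y = w * h                       # full-resolution luma plane
    u = (w // 2) * (h // 2)         # chroma planes at half res both ways
    v = (w // 2) * (h // 2)
    return y + u + v                # = w * h * 1.5

w, h = 720, 480
print(rgb24_size(w, h), yv12_size(w, h))  # 1036800 518400 -- exactly half
```

So YV12 is half the size of RGB24 before any compression even starts.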
The original mpeg2 files from the DVD, is it YUY2 or RGB24?
...between 4:2:0 and 4:1:1 DV) is virtually indistinguishable; by the time the math gets to it and equalizes it, it's the same to the eye. I gave up fighting this 4:1:1 vs. 4:2:0 battle some time ago, because it's truly futile.
Second, depending on the settings you have in WMV, it will show your video in either the 0-255 or the 16-235 color range. (The 16-235 range is what's making your video lighter.) But that's normal for YUV sources like MPEG-2 being decoded. PDVD does the same thing: it shows your sources in the 16-235 range, so it looks lighter.
.
But, for some strange reason, commercial DVD movies go against the standard and use (encode to) the 0-255 range, which some people here believe is the IRE 7.5 value that the DVD players are outputting. I'm not sure how this is so, but I figure there is a glitch somewhere in the mix of things.
I'm not sure how WMV is converting your source files, because I don't have the WMV encoder and never tested it. But I would assume it's probably just a user error on your part, with settings and things. Make sure that you are not using any filters during the encoding. Then, make sure that you know what range WMV is using to display its output. If you can change it somewhere, then do so. But remember this: if WMV is *not* altering your source during encoding, then it's only a matter of finding the setting that is causing your source to display brighter.
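Since the two ranges keep coming up: the 16-235 vs. 0-255 difference is just a linear remap of levels. A minimal sketch of the studio-to-full expansion (the function name is mine; 255/219 is the usual scale factor for the 219-step studio swing, not anything a specific encoder is known to use):

```python
# Sketch: expand studio-swing luma (16-235) to full-swing (0-255).
# A player that skips this step shows washed-out blacks; applying it to
# an already full-range source crushes and clips instead.
def expand_levels(y):
    y = min(max(y, 16), 235)            # clip out-of-range ("illegal") values
    return round((y - 16) * 255 / 219)  # 219 = 235 - 16 steps of studio swing

# Black and white points land on the full-range extremes:
print(expand_levels(16), expand_levels(235))  # 0 255
```

Whether your player/encoder applies this remap (or its inverse) is exactly the kind of setting that makes the same file look lighter in one program than another.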
-vhelp 3333 -
When you convert from YUV colorspace to RGB and back there is some
loss of accuracy. In my experience this is not noticable as a change in
brightness though.
It is possible to reproduce the same output (from the conversion process of RGB -to- YUV, and back) without loss.
.
Also, if the conversion formula used on the source is the same ( *inverse* formula ), the reproduction will be the same... though somewhere inside that nasty color container that floats around in space, there may be a slight loss of a few pixel colors.
.
But, I did read that it's possible to *NOT* lose any color information, through the use of (I think it was) integer values IN/OUT. But, so far, I've only seen "signed" values, and these slow down the conversion calculations dramatically.
.
There are other methods, like, for instance, using a lookup table (I heard that it's actually quite small, though that's hard for me to believe), but I've never been able to figure out how to implement the method in code. Most of it is in the C language. I'm not sure how or what vdub (and vdubMOD) uses in its method of conversion, but I thought that it used look-up tables; I'm not sure... hence why it's fast.
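Since the look-up-table method came up: the usual trick is one small table per matrix coefficient, indexed by the 8-bit component value, so the per-pixel work becomes lookups and adds instead of multiplies. A rough sketch of the idea (full-range BT.601-style coefficients assumed; this is NOT vdub's actual code):

```python
# Sketch of the look-up-table idea for YUV -> RGB. One 256-entry table
# per matrix coefficient -- four small tables total -- turns per-pixel
# multiplies into table lookups.
RV = [round(1.402 * (v - 128)) for v in range(256)]  # V's contribution to R
GU = [round(0.344 * (u - 128)) for u in range(256)]  # U's contribution to G
GV = [round(0.714 * (v - 128)) for v in range(256)]  # V's contribution to G
BU = [round(1.772 * (u - 128)) for u in range(256)]  # U's contribution to B

def clamp(x):
    return 0 if x < 0 else 255 if x > 255 else x

def yuv_to_rgb(y, u, v):
    # Per pixel: four lookups, a few adds, three clamps -- no multiplies.
    return (clamp(y + RV[v]), clamp(y - GU[u] - GV[v]), clamp(y + BU[u]))

print(yuv_to_rgb(128, 128, 128))  # (128, 128, 128): mid-gray survives intact
```

Four tables of 256 small integers really is "quite small", which matches what you heard.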
-vhelp 3334 -
Originally Posted by vhelp
http://www.quantel.com/domisphere/infopool.nsf/HTML/dfb411?OpenDocument
4:2:0 is optimized for 2D display but the difference to the viewer is minimal. 4:2:0 results in half sampling H and half sampling V. In theory this averages the chroma resolution in H and V.
http://www.mir.com/DMG/chroma.html
4:2:2 and 4:1:1 pixel alignments are/were used during production because pixel location is more predictable during 2D and 3D spatial manipulation. Maybe this concept is obsolete.
The rest of this thread is interesting. YUV to RGB to YUV conversion can have many distortions including levels. I'll be back. -
RGB <-> YUV conversion equations:
http://www.fourcc.org/fccyvrgb.php
These equations are reversible (within the accuracy of the floating point values).
But when working with integers (as is done in VirtualDub) you will lose precision.
Firstly, not all RGB colors have valid YUV equivalents in the 0-255 range, and vice versa. If a conversion results in a value of, say, -5, you will have to clamp the value to 0. Or 261 will have to be clamped to 255. Later, when you want to convert back, you don't have a -5 or 261 to start the calculation; you have a 0 or a 255. Your reversal will not get you the same value you started with.
Secondly, since the equations will often produce non-integer results, you will be dropping the digits to the right of the decimal point (or rounding). This is a loss of precision.
Just as a simple analogy, let's say you have a conversion that is as simple as
y = x / 3
The reverse is obviously
x = y * 3
Say you start out with the value x=7. You convert using the first equation to get y=2.333... But you are saving your results as integers, so you're left with 2. Now you want to reverse the calculation: x = 2 * 3. The answer is 6, not the 7 you started with. This is analogous to what happens when you convert YUV to RGB and back. -
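The same effect can be shown with the real matrices. A minimal sketch (approximate full-range BT.601 coefficients; the exact constants vary by standard) that counts sampled colors failing to survive an 8-bit integer round trip:

```python
# Approximate BT.601 full-range RGB <-> YUV round trip in 8-bit integers.
def clamp(x):
    return max(0, min(255, round(x)))

def rgb_to_yuv(r, g, b):
    y = clamp(0.299 * r + 0.587 * g + 0.114 * b)
    u = clamp(-0.169 * r - 0.331 * g + 0.500 * b + 128)
    v = clamp(0.500 * r - 0.419 * g - 0.081 * b + 128)
    return y, u, v

def yuv_to_rgb(y, u, v):
    # Inverse matrix; rounding and clamping make it inexact in integers.
    r = clamp(y + 1.402 * (v - 128))
    g = clamp(y - 0.344 * (u - 128) - 0.714 * (v - 128))
    b = clamp(y + 1.772 * (u - 128))
    return r, g, b

# Sample the RGB cube on a coarse grid and count round-trip failures.
grid = range(0, 256, 17)
bad = sum(1 for r in grid for g in grid for b in grid
          if yuv_to_rgb(*rgb_to_yuv(r, g, b)) != (r, g, b))
print(bad, "of", 16 ** 3, "sampled colors fail to round-trip exactly")
```

Grays round-trip cleanly (they sit at U=V=128), but saturated colors like pure red land outside the valid YUV range, get clamped, and come back changed.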
Yes, I agree with all of you.
However, in my many tests in this fun area of color space, I have not found color loss... though I was using DVD sources. I believe that commercial DVDs are touched up to a degree. I could be wrong, but that would explain why there seems to be no loss... unless maybe edDV could post some Vegas graphics for us, showing that *there is* loss. (Please use a source that we all have so we can compare.)
.
In my tests, I've been using my movies "The Incredibles", "The Matrix" and "The Fifth Element". Hmm... a lot of "THE"s.
On another note..
I've been working on the color bar tool I was sculpting, and added the ability to convert color space to YUV, so I can compare the color spaces (PC -> TV) using DVD sources.
.
In one of my tests today, I found that the color range differed to a degree, and in order to match that DVD, I had to use an 8-235 range.
.
In another test, I did a PDVD copy to the clipboard and compared my conversion formula with PDVD's, and found they differed in certain color areas; again it seemed to be 8-235, but the verdict is not yet out on this one. Anyway, I'm still testing, because I do want this to be an "official" tool for others (you all) to use in your quest for knowledge or curiosity or what-have-you.
I may revise the tool to be more flexible, to import other conversion formulas. But, to be consistent with vdub, I would like to know what formula it uses, and compare it to the code I translate it to under Delphi/Pascal.
My next steps would be to create some features to produce graphs from the color bars process. That will be a challenge on its own, but I have lots of others I'm still working on.
-vhelp 3340 -
Here is what I *would* like to test ...
OK. It was reiterated that there is some color loss during the conversion (RGB -to- YUV, and back). Let's say I agree that there is. What I would like to do is discuss *that* area inside the color model that *DOES* get affected, and perform a test conversion with a tool I am working on (color bars) and, if possible, measure or pinpoint the areas that are being affected. I realize that this may be difficult, because of the math involved with calculating and throwing out certain numbers under certain thresholds (ie, say we have a -5... that would be converted (or clipped) to a 0).
Just what area is being affected by all this (ie, RED, GREEN, or BLUE)? I'm not sure I'm wording everything correctly, but nod your head if you *seem* to understand me.
-vhelp 3341 -
The source for VirtualDub is available. Just download it and look for the conversion algorithms:
http://prdownloads.sourceforge.net/virtualdub/VirtualDub-1.6.5-src.zip.bz2?download -
4:1:1, like 4:2:2, keeps the chroma pixels in line with the first luminance sample. This results in quarter H sampling but full V sampling of chroma.
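For anyone keeping score on the sampling ratios, here is a quick sketch (plane totals only; chroma siting is ignored) of how many chroma samples per plane each scheme keeps:

```python
# Chroma samples per plane for common subsampling schemes, given a luma
# resolution. (h_div, v_div) are the horizontal/vertical divisors.
SCHEMES = {
    "4:4:4": (1, 1),  # full chroma
    "4:2:2": (2, 1),  # half horizontal, full vertical
    "4:2:0": (2, 2),  # half horizontal and half vertical
    "4:1:1": (4, 1),  # quarter horizontal, full vertical
}

def chroma_samples(width, height, scheme):
    h_div, v_div = SCHEMES[scheme]
    return (width // h_div) * (height // v_div)

for name in SCHEMES:
    print(name, chroma_samples(720, 480, name))
# 4:2:0 and 4:1:1 keep the same TOTAL number of chroma samples (86400 for
# 720x480); they just distribute the loss differently between the two axes.
```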
-
Here is how I see RGB vs. YUV, from my experience ...
From a beginning (originality) point of view, when starting with a "fresh" source for capturing, it is always best to capture in RGB format, if anything, to maintain exact color information. But the problem with this approach is that it takes a whole lot of hard drive space and, on top of that, is CPU intensive (on some systems)... though it shouldn't be, given the lack of a conversion step (ie, huffy; RGB->YUV; etc). The main issue is throttle (or muscle) in the system setup, due to the *LARGE* amount of data being sent across the bus. That was never a problem with my previous system setup (ECS K7S5A) and VFW driver support, using my previous Osprey-210 capture card, Win-TV GO, and a few others I've tested over the years.
.
But most stay away from RGB for various reasons, for instance:
.
* huffy = YUY2 (or, YUV) = 420 and/or 422
* MJPEG = DCT (or, mod. mpeg) = 420 and/or 422
* DV = DCT and YUV = 411 NTSC (420 PAL)
.
and as such, these codecs allow reduced storage (at a cost); the cost is within the eye's limitations regarding luma vs. chroma.
Anyways.
Now, back to my original flavor of originality.
Let's say, for argument's sake, our source is DV. We all know that it is sampled at 411 (NTSC) and that its RGB is converted (by the chip) to YUV (after some DCT compression, which is similar to MPEG).
.
The best approach to capturing the information from your DV source (ie, cam) would be through the mechanism of FireWire. But some will use the S-Video port and capture (analog-wise) from it, to a codec of their choice... could be huffy; mjpeg; *RGB*; or whatever. (Who knows what's in the mind of the user on their quest... could be due to lack of knowledge, or whatever.) Anyways.
.
In this case, it would not be wise to capture (analog-wise) to a capture card, even through S-Video, to a codec. Why?
Because after the footage is sent through the lens and processed through the on-board chips, its RGB gets converted to YUV, and then stored onto the tape (or sent across the FireWire port via a direct stream copy).
What about analog TV?
Well, mostly it's 420, so in this case we could capture in RGB and get the maximum value. Why did I say RGB?
.
Because the source (the unit... ie, cable box; satellite box; etc.) is converting to RGB. So, from an "originality" point of view, the source is RGB, and that is what you want to capture... not capture to YUV.
.
But, because of the closeness in accuracy of certain codecs (ie, huffy, for instance), it's a near match, though not exact. And we like huffy because it's common throughout the many forums, so most will quickly recommend/suggest huffy.
.
But the truth is, it's best to capture in RGB (if your source is from cable; satellite; laserdisc) because the output that these units send is RGB.
.
If we capture this RGB (let us say from our cable box) using huffy, then we have performed a conversion right then and there. We just changed our material some. But others will argue it's only minor, if at all. True, perhaps.
.
But the wisest choice *WOULD HAVE BEEN* to capture to RGB.
.
Ok.., here's why.
.
Let's assume you are using TMPGEnc. As some of you may already know, TMPG is RGB-ready. If we send our *captured* RGB source directly into TMPG, it will *NOT* need to be converted, and would reproduce our source EXACTLY, though now in the YUV_420 form factor.
But, if you captured to huffy (which is YUY2, aka YUV), TMPG would have to convert to RGB before it can work with the source. Now we are converting yet again, a 2nd time (once for the RGB to huffy (aka YUV), and then the 2nd time when we feed it into TMPG, where it has to convert to RGB).
The bottom line here is this:
If you capture *ANY* given source, your final OUTPUT should look exactly like the INPUT... minus the final conversion to the MPEG ( YUV_420 ) form factor.
Thus ...
* INPUT = OUTPUT -- if the source is captured in equal form (ie, RGB vs. RGB)
and
* INPUT <> OUTPUT -- if the source is captured in another form (ie, RGB vs. YUV)
Here's a flow of how I am envisioning things ...
A) - GOOD:
Source[RGB] -> cap[RGB] -> final_source[YUV_420] = INPUT=OUTPUT.
B) - BAD (or, conditioned to practice):
Source[RGB] -> cap[YUV] -> final_source[YUV_420] = INPUT<>OUTPUT.
All too often, the most common method used is B)
EDITED ...
To test this *theory* out for yourselves, you could run two tests.
TEST A, through a DV cam source ...
* capture, through FireWire, some home footage (not ADVC-100 off cable)
* encode to MPEG
* then split and merge the original and the encode (DV AVI | MPEG) and view.
Test A should reveal that the encoded MPEG's color detail exactly matches the DV AVI's.
If it doesn't, then you did something wrong in your encoding setup.
(Hint: stop using convertToXXX() filters -- they're not needed.)
TEST B, through a cable or satellite source ...
* capture, through analog, your cable or satellite feed (not ADVC-100 off cable)
* do two tests: one in RGB, and the other in YUV
* encode *each* to MPEG
* then split and merge the original and the encode (analog cable/satellite cap | MPEG) and view
* view *each* captured RGB and YUV against the final MPEG
The results of each test *should* reveal differences. But the TEST A model should be exact.
-vhelp 3343 -
Vhelp,
There is no RGB in analog video. Analog video works in a YIQ (NTSC) or YUV (PAL) colorspace and carries both chrominance signals at half the resolution of the luminance. YUY2 is about as close as you can get to the analog source.
Virtually every card captures in this native YIQ/YUV colorspace with reduced chrominance resolution. Converting to RGB is done afterwards and can only result in loss of precision. Fortunately, the level of noise in your typical analog capture is such that the loss of precision sits below the noise floor. So this loss of precision is negligible.
About the only source of RGB video is computer graphics and animation, and that only if the file has remained in a digital RGB format. -
Originally Posted by vhelp
Some machines and image processing programs can "see" in RGB. RGB color space is wasted on humans. We see detail and motion in monochrome. Color resolution perception is much lower and the way the brain processes imagery, color is mostly ignored during motion. There are similar issues of audio perception that I will skip for now.
These human psychology issues were discovered early in the 20th century and were incorporated into analog recording and transmission as a compression technology. Economic concepts like "time is money" were well understood and quickly reduced to "bandwidth is money". Don't waste bandwidth on data that is irrelevant to the human receiver.
Skipping forward to analog TV broadcasting, a color TV camera sees in RGB. The RGB is immediately matrixed into the analog forms of YUV for processing. RGB was output for only two reasons. Reason one was basically for maintenance and troubleshooting. Reason two was to feed RGB to machines that could use the info (image processors). In an analog TV studio, the main "image processor" using RGB was the chroma keyer that in those days used blue screen and needed high resolution blue input.
All analog recording and transmission is based on the concept of YUV compression. This includes composite NTSC (YIQ), PAL(YUV) and component Betacam/MII (YUV).
Digital video recording and transmission started with sampled composite NTSC and PAL (at 3x and then 4x subcarrier sampling)*. CCIR-601 based component YUV recording and transmission eliminated the crosstalk artifacts of NTSC and PAL by completely separating Y from UV to optimize both video processing and human perception. All of these advantages of digital component YUV (standardized in the mid 1980s) extend directly to DV, DVD, ATSC and DVB standards.
The eye responds differently, as we said, to Y and UV. Y needs to be kept pure. Y carries the picture's detail and motion quality**. Clever tricks can be played with UV. Any conversion to RGB during this process contaminates Y with compressed and spatially compromised UV and should be avoided.
Until very recently, those folks contaminated with a computer science academic (1Kx1K RGB based frame buffer) education have totally failed to grasp these concepts and continue to view video incorrectly as RGB.
*4 times frequency subcarrier (or 4xFSC) was the sampling standard for high quality composite (NTSC and PAL) processing and recording (D2 standard). The use of the number 4 to represent full bandwidth Y continued to be used in CCIR-601 even though the relationship to subcarrier frequency no longer applied.
** UV crosstalk into Y can cause serious pixel level flicker that the eye perceives as unnatural motion.
Originally Posted by vhelp
Originally Posted by vhelp
There may be some RGB processing in the transmission path but this must be carefully managed for special purposes only. The conversion to DTV represents a complete conversion to a component YUV path. -
In my previous posts, I didn't add in some other notes relating to the output (rgb vs. yuv)... actually, I forgot to note certain things, and then they came to me after I logged off. But you guys are just too quick for me. Yeah... thanks for your quick responses, guys.
Oh well. Anyways.
I had vaguely remembered that most capture cards are YUV422 or YUV420, but that they may work in RGB and then convert back again to YUV. So there is some conversion going on, at least once or twice, inside the capture card... though I'm not totally sold on that being the case, it may very well be true. However, in my previous post above, I was on the assumption that the source is outputting an RGB signal, and that the capture card would reproduce it best if it captured in that same RGB space. But, as junkmalle pointed out, most capture cards (if not all) are outputting YUV data. That would explain a few things. Anyways.
In my previous post of:
A) - GOOD:
Source[RGB] -> cap[RGB] -> final_source[YUV_420] = INPUT=OUTPUT.
B) - BAD (or, conditioned to practice):
Source[RGB] -> cap[YUV] -> final_source[YUV_420] = INPUT<>OUTPUT.
the following should now be revised to:
A) - GOOD:
Source[YUV] -> cap[YUV] -> final_source[YUV_420] = INPUT=OUTPUT.
Where the source[YUV] is the output ports of your laserdisc/cable/satellite box, and cap[YUV] is the input/output ports of your capture card.
In one respect, one should know what a given device (ie, cable box) is outputting... RGB or YUV. This missing (and important) piece should be known, for a better and more accurate chain of events.
.
Bear in mind, then, that a given capture card/device may take YUV and convert it to RGB, and then back to YUV. So there are some conversions going on in the device.
.
Also, that would mean that users of TMPG (like me) could, in theory, lose some color information due to the conversion from YUV to RGB for TMPG to work with. (TMPG works in RGB space.)
.
I'm not sure how CCE works, though I read that, in addition to reading RGB, it also *accepts* YUY2 (yuv) sources, and that is the preferred method for most users who use AVISynth scripts. (But what throws off quality for most users of AVISynth scripts are scripts that include additional colorspace conversion filters (ie, ConvertToYUY2(); ConvertToRGB(); etc. etc.).)
But what I think CCE does is convert YUV to RGB. Partly because the MPEG compression algorithm requires RGB matrices to work with things like DCT and RLE, and then the last step is to convert to YUV for the final MPEG compression format to take place, if I remember correctly.
From my reading of MPEG and how it is all put together, there is an order in which things take place, assuming RGB space:
.
* Perform DCT compression
* Perform RLE compression
* Convert to YUV
* Sample to 420
* And finally, compress to MPEG.
.
There are other attributes I left out, but this is the basic meat of
the MPEG process/structure, assuming MPEG_YUV_4:2:0 for instance.
.
Most sources, like satellite and digital cable, are MPEG_YUV_4:2:0 streams. That would mean the best method of reproduction from such a source would be to capture it as YUV, in 420 sampling, work on or edit it in YUV, and finally compress to MPEG. And, looking at all this again, one can't help but notice this is all lossy.
What I would also like to embark on is a journey into the battle of:
422 vs. 420 vs. 411
and see if there is really a noticeable (or measurable) difference, particularly in the 420 vs. 411 area. We can leave the 422 battle out of the loop, because very few of us (if any) have access to daily 422 sources and are able to record to it, and then capture it, without loss. That is just plain unlikely for the majority of us frequenting here. So a 420 vs. 411 test is in order. Is there any really noticeable difference?
That is the question... for another test near you.
But ...
I will try and find the time to run tests (at least for myself) on rgb vs. yuv and look for any *measurable* quality differences between the two. The source scene will vary the results (due to color shift, etc.), but that's the beast of research.
-vhelp 3345 -
vhelp, you should talk less and read more. Seriously. All you do is spread information which is not correct, as people are trying to explain to you.
I had vaguely remembered that most capture cards are YUV422 or YUV420, but that they may work in RGB and then convert back again to YUV. So there is some conversion going on, at least once or twice, inside the capture card... though I'm not totally sold on that being the case, it may very well be true.
If you disagree, I suggest you post some evidence for your assertion.
But what I think CCE does is convert YUV to RGB. Partly because the MPEG compression algorithm requires RGB matrices to work with things like DCT and RLE.
422 vs. 420 vs. 411
and see if there is really a noticeable (or measurable) difference, particularly in the 420 vs. 411 area. We can leave the 422 battle out of the loop, because very few of us (if any) have access to daily 422 sources and are able to record to it, and then capture it, without loss.