OK, time to ask for outside input. I think this is as good as I can get with my VHS capture-restore-upload workflow.
http://youtu.be/NpU83gGQMGY
This is a capture of a first-generation 1985 recording to VHS in SP mode, post-processed to get the white balance as close as I could, set the white and black levels, and de-noise. The lighting was somewhat amber and there was a hot spot in the center that caused the camera to gently burn the highlights to white. It was rendered to Sony AVC (h.264) for upload. This is the first upload I have done using full RGB levels (0-255). All my previous uploads have used studio RGB, which the last time I checked seemed correct for YouTube. But after a couple of recent trial uploads, it looks like they now use 0-255, at least for MP4 files.
So what have I missed? I know the highlights are clipped. Since that is on the original recording, there isn't much I can do. The white balance is close and I can't get any closer without unpleasant artifacts (and is close to how I remember the lighting back then). This was a single tube camera and I don't think the three color channels were completely linear. It has been gently sharpened and de-noised with NeatVideo. It is also slightly cropped because I needed to rotate it slightly. I could have manipulated the audio so his patter is at the same level as the music but I didn't have the energy. Anything else stand out?
Paul
-
It's hard to tell a lot from a YouTube video. You should make a better sample available for analysis. Levels look about right. Maybe needs a little gamma adjustment? A bit on the pink side? Noise reduction reasonable. Could use better deinterlacing. Good sound for a VHS camcorder. Great performance!
-
Not sure about gamma. He is wearing a tan suit and the lighting is kinda full-on flat. I tend to avoid increasing the gamma because it increases other artifacts. I can take it up a notch and see how this one looks.
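(If I ever get AviSynth working, I gather a gentle gamma lift would be a one-liner something like this... just a sketch, with a placeholder filename and gamma value:)
    AviSource("Jethro-1 no fx.avi")              # placeholder clip name
    Levels(16, 1.1, 235, 16, 235, coring=false)  # gamma above 1.0 lifts the midtones; black/white points stay at 16/235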
Yes, it is on the pink side... an artifact (I think) of trying to get a better white balance from an original shot under yellowish lighting (i.e., colored lights). If I go for full white balance correction, I get other artifacts.
Better de-interlacing..... yup. I tried to get my head around deinterlacing in AVISynth and gave up. But I'll have to revisit it at some point. This was deinterlaced in Vegas Pro. What caught your eye re: deinterlacing? Just the overall slight softness?
The sound was either a single Radio Shack PZM on the stage floor or a Nakamichi shotgun mic. Not 100% sure which this used. I had to use a multi-pole hum filter to remove a very mild hum, which means it could have been the shotgun (which I used to mount on the overhead lighting fixture).
Thanks for the critique. I'll post when I get another good capture that doesn't have the highlights burned. Most do not.
EDIT: I re-read your post.... yes, I can upload a portion of the original MP4 to my web site.... stand by.
Paul -
OK, I've rendered the first 30 seconds to two AVI files, one with and one without FX.
http://www.pgoelz.com/stuff/Jethro-1%20with%20fx.avi
http://www.pgoelz.com/stuff/Jethro-1%20no%20fx.avi
Any other observations?
Paul -
The darks looked crushed. I downloaded the FLV video and looked at the levels. There are darks below IRE 0 (Y=16) and brights above IRE 100 (Y=235). That causes darks and lights to get crushed when displayed. Digital video should always range from Y=16-235 (except intermediate working files when necessary). I don't know if youtube was the cause of the blown out lights and darks.
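If you need to pull a full-range file back into legal range before uploading, a minimal AviSynth sketch (the filename is hypothetical):
    AviSource("jethro.avi")        # hypothetical source clip
    ColorYUV(levels="PC->TV")      # compresses Y=0-255 into Y=16-235
    Limiter(16, 235, 16, 240)      # clamps anything still out of range (luma, then chroma)
ColorYUV scales the whole range so shadow detail survives; Limiter alone would just clip it away.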
Try QTGMC() when you get a chance. It's very slow but it's much better than Vegas.
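Something like this, assuming QTGMC and its plugin dependencies are installed (filename hypothetical):
    AviSource("capture.avi")       # DV from an ADVC box is bottom field first
    AssumeBFF()
    QTGMC(preset="Slower")         # double-rate deinterlace: 29.97i in, 59.94p out
    SelectEven()                   # optional: drop back to 29.97p for upload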
Mostly the buzzing near-horizontal edges. Like the white border at the top edge of the mandolin body.
Wouldn't a PZM on the floor pick up a lot of foot stomping noise?
I'll download them later and take a look. -
I'm really sick of this level stuff, and especially with YouTube since they don't seem to publish any guidelines. You would think they would.... levels are important! I have conducted tests and previously they seemed to indicate that the YouTube processing required black=16 but white could be hotter than 235. Most of my uploads have been 16-235. But recently I did a test render and upload at 0-255 and it looked fine. I could see some noise in the darkest portion so I took that to mean that my zero was their zero. Hence this upload at 0-255. My uploaded file is fine, but I have not downloaded the FLV to look at what they did with it. How certain are you that the "standard" (if there is such a thing on the internet) is 16-235? I'd like to get it right and then move on.
That is the one I was attempting to use when I was trying to get my head around AVISynth. I'll have to give it another go. However, if it is slow, the combination of that and NeatVideo may make the process so slow that I only use it on the most important stuff. On my dual-core 3GHz, a NeatVideo render takes about 10X the running time. As my captures move into more recent recordings, they tend to get a bit better (more recordings at SP and better lighting) and NeatVideo is not always required. I'm also not sure about what to archive if I go that route.... the original AVI capture or the deinterlaced version.
Yes, but they weren't stomping. All my mic techniques were a compromise back then. I was a one man band and I needed to keep things simple. I could not pay much attention to the audio after I set it up and got the camera rolling so I tended to set one mic, plug it into the recorder and let it rip. The PZM was good for picking up a larger group where I could not get back far enough for the shotgun. The shotgun was generally best in that room because I could put it up on the light in the center of the room and cover the whole stage without too much room ambience. But I used both and I'm not sure which that was.
Paul -
Interesting that you pointed out the interlacing issues on horizontal lines. I looked and yes it is there. And I never even noticed it! I am so used to CRT based NTSC video that it looked normal to me and I looked right past it.
Paul -
If you are sick of levels problems you might wanna learn how to work with avisynth scripts here
For encoding I suggest you use apps like HC-enc (MPEG-2 encoding) and MediaCoder (x264); both support AviSynth scripts. -
Audio needs work, too.
I'd use Audacity to remove the noise on this one. -
I know how to manipulate levels. I use Vegas Pro and it has video scopes and comprehensive level controls. That is not my problem. What I am sick of is not knowing what the correct levels are supposed to be for YouTube encodes/uploads. I looked everywhere I could think of when I started this adventure trying to find out what was recommended for YouTube and came up totally empty. So I resorted to experimenting with encoding, uploading and observing test patterns. What I observed a couple months ago seemed to indicate that 16 was "black" where YouTube was concerned. White was a bit less clear. I did the same thing just recently and it seemed to indicate that 0 was black. Thinking that maybe they had changed their processing, and in the absence of any guidelines from YouTube (or anyone else), I tried the Jethro upload at 0-255. I have now gone back to using 16-235..... is that generally believed to be "correct" for YouTube input? Does the codec matter? I am currently using the "Sony AVC" preset in Vegas. My experimentation seems to indicate that codec produces output levels identical to input levels, so YouTube sees the same levels I encoded at.
As for the audio, I would agree that it could stand some gentle de-noising. It was an analog VHS recording, after all. I have access to some decent Waves filters (like Waves X-Noise) that learn the noise floor and noise characteristics and then process it out but I almost always find that the audio de-noising process introduces other, even more undesirable artifacts so I always end up omitting it. What kind of noise processing are you referring to in Audacity? I have it but I have not looked at the processing filters in Audacity.
The recorder that was used for these videos includes switchable Dolby but I don't remember if I recorded with it on back then. The captures don't sound to me like they were Dolby encoded. Anyone know if there is a VST Dolby plugin available? Or maybe I can synthesize one from some EQ and a compressor?
I don't mean to sound argumentative here..... I really am looking for input. But I have thought about much of this in advance of asking for input. Sometimes you get too close to a thing....
Paul
-
It depends on what kind of matrix you use, typically BT.601 (SD) or BT.709 (HD), which must be set up in the encoder settings.
What I know about Vegas is that it converts your YUV video to RGB24 and probably (not sure about that) uses a BT.601 matrix. If your video is BT.709, that can be a problem. That's why I think you should convert your video to RGB24 with the matrix of your choice PRIOR to importing it into Vegas. You can use an AviSynth script for that; I did it recently. You'll need the utility avs2avi.
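The script itself can be as simple as this (a sketch; the filename is a placeholder, and you pick the matrix that matches your source):
    AviSource("capture.avi")            # your capture
    ConvertToRGB24(matrix="Rec601")     # use matrix="Rec709" if the source is HD
Then point avs2avi at the script to write out the RGB AVI that you import into Vegas.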
I upload many videos to YouTube and I can tell you with confidence that it is "matrix aware". For example, if you encode to x264/MP4 with MediaCoder and specify the BT.709 matrix in the encoder settings you'll have RGB 0 darks, and with BT.601 it'll be brighter (normally RGB 16). -
Good grief, it's kinda like peeling an onion! There seem to be more ways to go wrong the harder I try.
What the heck is a matrix? That is a new one on me. Never mind.... took a break and looked it up. For starters it sounds like it might explain why my raw captures (as monitored by WinDV during capture and also as displayed in Vegas) are very slightly greenish compared to the monitor view. But I have no control over that during capture. I correct it in Vegas during editing. So far, I haven't seen any artifacts from the correction. Not sure what happens for rendering since I don't see any mention of matrix anywhere in Vegas.
I am capturing camera originals from mid 1980s vintage VHS using a Panasonic AG-1970 running into an ADVC55. Once captured, I deinterlace and edit them in Vegas Pro and render to "Sony AVC" progressive for YouTube upload. I see no mention of a matrix setting anywhere in Vegas. I understand that Vegas converts the RGB edited video back to YUV during render but it doesn't mention what matrix it is using.
I have AVISynth installed and I'll have another go soon but I may end up deciding that the Vegas workflow is "good enough" for now. I am archiving the untouched raw captures (as well as keeping the VHS originals) so if I get better at this I can always go back and re-render selected important videos. Especially if someone wants a DVD of something instead of watching it on YouTube in stunning 480p Flash.
Paul -
The difference between rec.601 (aka bt.601) and rec.709 (aka bt.709) isn't in the levels, it's in the colors. There is a slight shift in colors if you use the wrong matrix when converting RGB to YUV or vice versa. I posted some samples a while back. I'll see if I can find the link. They both should have luma values between 16 and 235. When converted to RGB for display the Y=16-235 range is contrast enhanced to RGB=0-255. Conversely, when converting RGB to YUV the RGB=0-255 range is compressed to Y=16-235. In general, SD should use rec.601, HD rec.709, but with codecs or containers that can flag the matrix either one can be used. Most Windows editors and players use these matrices. If youtube handles the levels of the two of these differently they are handling the video wrong.
In addition to those matrices you have the PC.601 and PC.709 matrices. With those the luma range is not contrast enhanced when converting YUV to RGB, or contrast reduced when converting RGB to YUV. I.e., Y=0-255 converts to RGB=0-255, and vice versa.
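In AviSynth terms the difference looks like this (a sketch; filename hypothetical):
    clip = AviSource("capture.avi")
    rec = clip.ConvertToRGB24(matrix="Rec601")   # Y=16-235 expanded to RGB=0-255
    pc  = clip.ConvertToRGB24(matrix="PC.601")   # Y=0-255 mapped straight across to RGB=0-255
    StackHorizontal(rec, pc)                     # view the two conversions side by side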
Here's the post that shows what happens if you use the wrong rec matrix:
https://forum.videohelp.com/threads/329866-incorrect-collor-display-in-video-playback?p...=1#post2045830
As you can see, there's a shift in the reds and greens. If there were grayscale bars in the images you would see that there is no difference in the luma.
This thread has some discussion of matrices and Youtube:
https://forum.videohelp.com/threads/330410-Youtube-changed-my-colors!-What-to-do
I know that Vegas uses the PC matrices for some types of video. And Quicktime is totally screwed up (and should be banned from your computer, avoided at all costs).
And some more stuff:
Be careful how you compare videos. The same video being played in two players at the same time may look different, because only one player will be using the graphics card's video overlay feature (sending YUV to the graphics card) while the other is using the Desktop (converting YUV to RGB in software and writing RGB to the Desktop). Video overlay has its own proc amp settings and they may not be set correctly.
The situation is further complicated with the choice of video renderers (the final component in a filter graph that puts the video on the screen) and the complexity of the newer renderers (VMR7, VMR9, EVR). The graphic drivers' implementation of these is full of bugs as far as levels are concerned.
I believe flash player in a browser always uses the CPU to convert from YUV to RGB and uses the rec.709 matrix.
-
I've seen plenty of DVD's with black levels below RGB16, but never below RGB8 without looking pretty awful. I never see anything above RGB235 unless it's an obviously careless job. However, when I say RGB8 I'm talking about dark stuff that has no detail at all below RGB16, and that includes black borders and frames. You might be able to see detail below RGB16 on your PC, but it just looks grimy on a TV.
You also have to keep in mind that the way images display in VirtualDub is not the way they're encoded (16-235), it's the way they're displayed in sRGB. I've recently been thinking about doing all this level and color correction on a small 22" or 26" HDTV instead of a PC monitor (and who can afford a $5000 computer LCD display that shows the full NTSC color space? Not I, for one). Unfortunately small tv's have few or limited display correction controls except for a couple of LG sets with bad motion control and other artifacts, not to mention poor 4:3 upscaling. Still working on that idea.
I do a lot of color correcting in the TMPGenc 2.5 encoder. Their image controls have a setting that lets you see when your levels exceed RGB 16-235 and allows you to set low and high limits for those levels and observe if you're crushing anything or blowing out highlights.
As for VHS -- well, most VCR's crush darks and blow out highlights anyway. VHS puts nothing but noise into low-level values, it's impossible to get a clean, dark black out of most tapes without a lot of manipulation with a proc amp and software filtering.
Keep it up. Your work looks pretty good to me.
-
VirtualDub uses the usual rec.601 matrix when converting between YUV and RGB. Y=16-235 is converted to RGB=0-255. Conversely, RGB 0-255 is converted to Y=16-235.
The display panes may show sRGB if you use the "Use DirectX for display panes" option. In that case it may depend on your graphics card's video proc amp settings. -
Thanks for the additional words of wisdom. I really need them! I started this adventure figuring I knew enough to proceed.... after all, I've been around video in one way or another all my adult life. Little did I suspect!
I have no trouble at all with levels provided I know what they should be. I use Sony Vegas Pro and it has level and vectorscope functionality. I am actually getting pretty good at using the vectorscope to get into the ballpark just playing a normal scene. The color correction portion of Vegas lets you manipulate the darks, mids and whites separately and that is helpful when trying to color correct an old VHS recording from a camera that was not white balanced on a white card.
I have no idea if my monitor and PC are set up correctly. That is the next thing to look at. Examining photographs and normal PC screens it looks OK to me. Color temperature looks OK to me and I set the monitor controls to center and use the Nvidia controls to set white and black levels. Can I generate a test pattern in Vegas, make sure the levels are set correctly (like 16-235) and then use that to fine tune my monitor?
Using the Enosoft software proc amp, I can change the gain in YUV space and that does take care of my green tint after ADVC55 capture. But I prefer not to do that because I have other reasons for disliking the Enosoft proc amp. I use the Vegas color correction in RGB space and it seems to be OK. Any gross proc amp adjustment is now done in my Prime Image Freeze II prior to capture. I just need to fine tune after capture.
As for crushed whites and blacks, it looks to me as though YouTube does far more damage to the blacks than I ever could. If you look at my video "Volo Bogtrotters", take a look at the PA amp on stage right. The file I uploaded clearly shows the knobs on the front. The YouTube version has no detail in that area at all, even though it is not full black. That was encoded at 16-235 and it looks like those levels survived into YouTube but I have not downloaded the flash to find out for sure.
Without careful study and experimentation, how the heck does any casual video uploader get this right? I consider myself much more than a casual user and I'm not there yet. Not by a long shot. The more I learn the more it looks like many of my successes have been partly dumb luck.
I do find it interesting that my very first uploads from my old two tube Hitachi camera look better in some areas than my later uploads from a single tube RCA. Perhaps because the Hitachi had a tube dedicated solely to monochrome? It has terrible smearing and lots of noise and terrible tape dropouts but the images look smoother to me with better depth of detail. And those tapes are from 1981. They were captured straight off the AG-1970. I should try to re-capture them using the Prime Image proc amp and see if they can be further improved.
Paul -
The best way to tune a monitor is to use a color probe and calibration software. The EyeOne Display 2 and Xrite software is the package most people use for video and photos. It's $200 at Amazon. For a little less you get the same software with the EyeOne Display LT -- actually the same probe, but fewer choices for setup in the software. The EyeOne has a long-time reputation for accuracy bettered only by $500-plus hardware. Many go even cheaper with the Spyder line, but there are many complaints about poor accuracy -- still, it's better than using your eyeballs.
In a pinch you can try the test patterns at the lagom site http://www.lagom.nl/lcd-test/ . Still, the EyeOne is hard to beat unless you have a bundle to spend on LaCie or other pro stuff. Try this review of the EyeOne package at http://www.tftcentral.co.uk/reviews/eye_one_display2.htm. It's an old review, but the product has changed little. They discuss others, too, including Spyders. It's a dandy site at tftcentral.co.uk; you can learn much by browsing reviews.
If you get a color probe (can't live without mine), you can use it for your tv with some free software recommended at http://www.curtpalme.com/forum/viewtopic.php?t=10457 . What you find on this very thorough web page also applies to PC monitors, tho most PC monitors don't have anything like the CMS image controls on some tv's. But it's still an eyeful about what's involved in "calibration".
If you'd made less of an effort, your "luck" would have been worse.
-
Note that you'll only be testing RGB on the Desktop using those test patterns. After adjusting your monitor with those patterns test your video proc amp settings with the DV AVI file in this post:
https://forum.videohelp.com/threads/326496-file-in-Virtualdub-has-strange-colors-when-o...=1#post2022085
Viewing it on the computer should look like the first image in that post, not the second. That's only for adjusting levels, of course. -
Hah, beat you to it! I already found the http://www.lagom.nl/lcd-test/ site. And thus began the confusion. As I see it, I have three interacting adjustment opportunities, not counting any player settings. I have monitor settings, video card (Nvidia) settings and OS color management settings. I get the best results using the Nvidia control panel, but I am not sure if the OS color management settings interact. Could this POSSIBLY be more complicated? How in the world am I supposed to get everything set "correctly"! Note that using the test patterns above and the Nvidia control panel (and ignoring whatever the OS might be doing behind my back) I can get both monitors looking nearly identical and the test patterns are correct. So I think I'm there. I just don't know if the OS is messing with things. And it doesn't help that the Nvidia control panel doesn't always apply my settings on boot.
Paul
-
Assuming you're using a digital connection from the computer to the monitor: when using RGB test patterns you should set all your graphics card's settings "flat". That will assure the RGB values from the test patterns reach the monitor intact. Then use the monitor's adjustments to adjust the picture. If you can't quite get the picture right with the monitor's controls you can consider using the Desktop color controls to fine tune the output.
-
It's nearly impossible to thoroughly adjust a computer or tv display by eyeball. Test patterns often suffice for many basic adjustments, and you can get closer to ideal gamma using them. Balancing gray-scale and gamma curve response from dark to bright is not possible by eye, for several reasons. Most people know that a color is "off", but more often than not they're unable to say why. You might say that a gray looks "too blue". Is it because there's too much blue or because there's not enough Red and Green? While both of those answers sound "alike", in color use they describe two different problems. You can correct that medium gray, but then you find that darker grays are too red because of the correction. So you correct for the darker grays, but that correction causes lighter grays to turn Green.
Besides the eyes' limitations, the default display behavior of most monitors and graphics cards works against you. The average PC monitor has an RGB response curve that places most of the color intensity in the midrange. Red is generally depressed in the darks, green or blue is more intense in the brights, etc., and then another monitor might display a completely different curve. Even if you had a graph that mapped those response curves, graphics cards and monitor controls are too limited to allow precise corrections. Many users think they can use Adobe Photoshop's "Adobe Gamma" feature to set their monitors. That feature works to a certain extent, but only for midtones -- the utility is designed for CRT's, not LCD's.
If a display's output were equally powerful or intense with all 3 colors from darks, midtones, and brights, a graph of the RGB response curve would show 3 horizontally flat lines, one atop the other, from the left side of the graph to the right-hand side. But it might surprise you to learn that even if you had the built-in image controls to achieve that goal, adjusting brightness/contrast relationships will change it. Many monitors literally "run out" of some colors in certain luminance ranges; others will increase color output at the dark end (usually blue) to give the impression that the monitor has really "dark" blacks at bright display levels -- in fact the color down there isn't black, it's blue, which we humans perceive as being a "darker" color than the other two. Many monitors rearrange their color response to maintain a specified gamma curve over specified brightness ranges. And it's well known that most monitors and tv's are designed to look "good" in bright greenish showroom lighting, not in homes. Most monitors out of the box will have an RGB response curve that has one of two shapes: a clearly-shaped bell curve (most common), or a clearly-shaped bowl curve, both shapes having severely splayed colors at each end.
I seem to recall you stated that you lowered your brightness and a couple of color values by one-third or something like that. That's likely a decent correction for many monitors, but it's limited. Calibration software can read exact values of many hues from darkest to lightest and set values via your monitor settings (on pro monitors) or graphics card LUT to maintain as flat an RGB curve as possible and an appropriate gamma curve for your preferred brightness settings. They can tell your graphics card to map Red values of 15 to a value of 21 when displayed, to match the behavior of other colors at that response range and to maintain a specified gamma over that range. The monitor and/or graphics card settings that users typically see aren't capable of making such distinctions.
Another distinction that color probes and associated software can make has to do with y-luma and saturation levels. Let's go back to that gray square that looked too blue. It can look that way because (a) at that color point, there's not enough red and green, so raise red and green at that point to match blue, or (b) there's a blue "hump" above the average response curve at that color point, so lower blue within that range of colors, or (c) the saturation or "brightness" (luma) level of blue or the other two colors is not in balance at that color range. A few Samsung or LG TV's have control of specific saturation levels for specific colors, but consumer-grade PC monitors have no such controls. A basic "Saturation" or "Color" control affects all colors at once, not just one; you can use it to de-saturate blue, but you'll de-saturate red and green as well -- the overall relationship of the 3 saturation levels won't change.
Some makers of HDTV displays provide extensive image controls (Samsung, LG, Pioneer), others provide a more basic setup for RGB Gain and Cut (Sony, Toshiba, Panasonic), others have more simple RGB adjustments (Vizio), and others have no RGB adjustments at all. But even with an extensive color management system (CMS) in a tv, you need some kind of graphics display utility to let you see how your display actually responds to adjustments. Such a program would be the free HCFR, or a more expensive $1500 package from Calman. They both accomplish the same thing.
I'll be the first to admit that without basic measurement tools, many people with good visual sensitivity and a degree of patience can accomplish wonders adjusting a monitor or a video, using their eyeballs and their persistence. I did it for several years. But with quality standards being gradually lowered to maintain certain price levels and to satisfy consumer demand for odd-looking, in-your-face video effects (not to mention cheaper or non-existent quality control), I finally coughed up some cash.
We're not talking about $50K to $100K-plus for pro gear. I'd love to have some of that. But even if I had the cash, I don't have the years of know-how required to use that stuff.
-
The main monitor is a Dell connected via HDMI. I tried turning off the Nvidia controls for it (they call it "other applications control color settings") and that worked except that I lose the bottom two grey steps. The monitor contrast affects whether the whites burn out, but neither the monitor brightness nor contrast affects the blacks. I'm guessing the "brightness" merely changes the backlight? Now that I have Nvidia out of the picture I'll look at the OS color management settings. Not sure what color profile I have loaded.
The secondary monitor is a Magnavox LCD TV with VGA input. It has always looked terrible without adjusting the Nvidia controls.... WAY too much contrast and its brightness and contrast do not affect anything other than the overall brightness. I suspect there is something wrong in the OS but I haven't found the problem yet. It looks fine if I use Nvidia to set it up. Maybe that is normal?
Thanks for bearing with me here.
Paul -
Good test, jagabo.
My laptop: top chart, all distinct except 2 on left-- bottom chart, all look alike (not unusual for a laptop).
LG IPS226v (LED edge lighting): top chart, all distinct -- bottom chart, 3, 4, 5 distinct, 1 + 2 look like 3
HP RZ22W (CCFL backlight, but same front panel as LG): all distinct
I adamantly maintain that CCFL beats LED in many respects. The LG and the HP desktop monitors use the same LG e-IPS panel. I have calibrated both for a gamma of 1.8, used by many photo people.
Also note: if you use a Firefox browser, Firefox hooks into a computer's ICC profile if you activate one. Some versions of Internet Explorer don't seem to "hook" so well, but I didn't use IE to view the panels you posted above.
You can use those lagom panels in various programs (AviSynth, for one) to make RGB videos. Try them in your media players, or look at the same still-photo patches in your graphics program (Photoshop, PaintShopPro, etc.). You'll be shocked at the results. An oddity: videos I made with patches from lagom displayed quite well in VirtualDub, albeit not perfectly. I was surprised.
-
Note the dark shades in my previous post are hard to see because I trimmed away the black background. The bright white background of the web page, right next to the dark patches, makes it hard to see them. But in the original context:
http://www.lagom.nl/lcd-test/contrast.php
http://www.lagom.nl/lcd-test/black.php
I can easily differentiate all the grays in the first image. In the second I can pretty easily see all the way down to 3 from my normal seating position. If I move my head up by about six inches I can see 1 and 2 too. This is on an inexpensive CCFL LCD monitor, about a year old. As you know, the contrast ratio can vary on LCD monitors depending on the viewing angle.
Be careful about preparing test videos from image files. If you don't prepare them properly you will screw up the levels.
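For example, to turn a saved lagom patch into a correctly leveled test clip (a sketch; the filename is just an example):
    ImageSource("lagom_black.png", start=0, end=299, fps=29.97)   # ~10 seconds of the still
    ConvertToYV12(matrix="Rec601")   # compresses RGB=0-255 to Y=16-235; PC.601 here would leave Y at 0-255 and throw the levels off on playback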
I recommend CSamp as a useful tool for reading RGB values off the screen. -
Again, thanks for the lengthy explanation. Yes, I can get very close by eye and in many cases I can identify the problem. My current screensaver in fact is a decent rudimentary monitor test because it has lots of subtle midtones and more subtle detail down to black. After calibrating with test patterns I use it to compare both monitors and see how identical they are. The Dell (the HDMI monitor) always looks a little flatter and more "photographic". The Magnavox always looks more vivid... kinda like a Fuji photograph. I can get it to match the Dell by using the display adapter controls and manipulating gamma and contrast but they end up near one end (especially gamma) and it seems like I am fighting something like the wrong calibration in the OS. I have had unpredictable results trying to load a calibration for this monitor... sometimes it seems to change the display and sometimes it does not. Until I can control it reliably, I'm not sure a color probe will do me much good. The Dell does seem to obey the OS calibration settings and there is one for that monitor.
Can I assume that the "correct" way to calibrate a monitor is to turn off the graphics card controls and load an appropriate calibration curve in the "color management" section of the properties for the graphics card? I am doing it that way with the Dell, and it seems to give results that I expect. However, the Magnavox seems to mostly ignore the calibration even though the graphics card properties clearly identifies it and allows me to load a calibration.
Gotta experiment the next couple days and see if I can get there.
Paul -
It's quite common for TVs to overdo the contrast and saturation. Go through all the TV's setup options and turn off all automatic picture adjustments -- like auto contrast, auto color, etc. Those features do not make the picture more accurate. They are intended to "pump up" the picture because that's what sells on the showroom floor.
-
Unfortunately, there are no controls for those features in the monitor setup. Just brightness, contrast and color temperature. But you got me thinking.... maybe the controls for TV usage affect it as a monitor. Have to check this evening.
The extreme contrast is not subtle and is way more than I would ever expect from a hyped up midrange. I think I'm losing the bottom 20% of the scale.
Probably isn't related, but this monitor gets the position wrong about 95% of the time on a cold boot. The screen is displaced to the left (consistently) by about 50%. Interestingly, it is OK on a resume from hibernate. But I don't think this is related to the extremely contrasty picture.
FWIW, as an analog NTSC video monitor it is very good and accurate.
Paul -
Paul: jagabo's comments on matrices took the words outta my mouth (he's always doing that). I tried to scroll back to see which Dell you're using, couldn't find it. Getting a PC monitor and a TV to match exactly is seldom possible, but you can get fairly close. Unless you have an 8-bit or 10-bit extended gamut Dell monitor (likely you don't), the Dell is a 6-bit or 8-bit display that works with sRGB, whose color gamut is not the same as NTSC (usually, sRGB is about 70% of the NTSC gamut). The number of colors is the same (16.7 million or some such). It's complicated, but nobody seems to worry about it. Also, the gamma responses of PC monitors and TV monitors differ, with a TV usually looking brighter than a PC.
The reasons for these differences would take a month of reading; anyway, the math involved is too atrocious for mere mortals to tolerate. If you're getting a halfway decent match between the two, you're doing well. Since the TV has fewer image adjustments, you just do what you can with it. However, with the right tools you would be able to get a far more precise calibration with your Dell. A few graphics cards can use two profiles (mine can't).
You can't use a video card overlay and an ICC profile at the same time -- at least, I've never been able to (my ATI cards won't let me). Even if you could, you're making things verrry complicated. A formal ICC profile will be more accurate anyway. Display software like XRite/Display-2 entails 2 sets of steps. The first set is to manually fix brightness and contrast to the desired levels, which the software can measure exactly. Just about every PC monitor has this adjustment, but don't assume that they all work the same way. Usually, PC monitors come with an ideal contrast already set (but don't assume that, either). The next step is to set the desired color temp. Some PC monitors have a color temp setting only, others have color temp and RGB. XRite gives you a display that shows what happens when you adjust RGB manually: you have to match Red, Green and Blue lines in the middle of a window (this is trickier than you'd think, but you just wanna get as close as you can).
Those first steps can get you pretty close to ideal. The XRite stuff then automatically does some tweaking of RGB/gamma/saturation for 5 minutes, and that's where your ICC profile comes in. The software sets Windows to load the profile at bootup (you can change the default profile if you want, but why would you?). If you don't have something like XRite/Spyder/LaCie or any of those guys, then basically what you've been doing with the Dell is the first few manual steps that calibration software packages use.
Why your Dell has to reposition so often is a mystery. But I've known some quirky monitors that do it all the time. Annoying.
Figure this: if you manually set the Dell, you're not fooling with the kind of additional graphics card LUT values that calibration software sends to the card. What you're watching is your graphic card's default behavior, tempered on one side by your Dell settings and on the other by your TV's settings. If you did load a software-generated ICC, your card would use the LUT values from the ICC profile: it would use those values for both monitors (unless you buy a multi-profile card and spend lotsa $$$ for multi-profile calibration packages. Save your money, and eat better).
For a primary reference I'd use the machine that gives more precise image control. That would be the Dell. If your Magnavox has halfway decent color, you probably spend most of the time trying to match brightness/contrast levels. Indeed, that's a problem between PC's and TV's: in this regard, they don't behave in exactly the same way. With probes, I've matched PC and TV with the same light output and gamma -- but they still look "different".