Lord Smurf, I downloaded your VD & Filters from your site; lately I've been working on old film from the 1960s. From doing this for the last few months, I've found that you can't just use one filter; it usually takes 3 to 5 to improve the color. However, each screen, LCD or otherwise, has a different picture tone. In other words, depending on one's TV, PC screen or whatever, the colors could look a tad different. With the MSU filter, I found that sometimes it doesn't fully work and it puts a ton of color shifting into the picture. On black screens, the MSU filter destroys them and makes them multi-colored.
On these '60s films it seems that many of them have high levels of red that dominate the picture, and by testing different filters you can get much better color.
The biggest problem with these old films is that they have already been converted to digital by someone else. Many of them have scratches, lines, or black or white frame damage with random spots. Because of how the conversion was done, some of the spots can run onto the tail end and front end of adjacent frames. To be honest, that is the hardest part to solve. The color takes a while, but so far the results have been good.
Last edited by Deter; 16th Jul 2011 at 10:36.
PowerDVD has halfway usable image adjustments, but other players (VLC, MediaPlayer) are useless in that respect.
Most people have no idea what monitor calibration is, how it's done, or why. The reason one calibrates a monitor in the first place is so that what you see on-screen conforms to a common standard. If you can adjust a monitor at or near well-defined standards, you can fairly well predict how your images will display on other equipment. The fact that most people have poor visual discrimination, don't know one color from another, and prefer to use their monitor or TV in what is known as in-your-face "torch mode" is their own business.
Most of the calibration and image adjustment info you find on the 'net has to do with still photography, but the principles are the same for video. Sites like Dry Creek Photo are helpful sources of information. Here's a page that deals briefly with monitor calibration:
On that page you'll see a suggestion about using Adobe Gamma as a calibration aid. Don't take that advice: Adobe Gamma doesn't work with LCDs, and its results are not very accurate.
That page leads to other pages, one of which is this site that offers more detail and some monitor test patches: http://www.normankoren.com/makingfineprints1A.html . While test patches can get you started (it's better than nothing), this thread goes into more detail about why test patches don't work that well: https://forum.videohelp.com/threads/335402-VHS-capture-critique?p=2083260&viewfull=1#post2083260 .
Most users adjust monitors with hardware/software packages designed for that purpose. A test and calibration article for a popular monitor used by many video and photo enthusiasts (Dell U2211H) is here:
http://www.tftcentral.co.uk/reviews/dell_u2211h.htm. While this review used an expensive LaCie pro calibration package, something that's more popular and affordable is reviewed here: http://www.tftcentral.co.uk/reviews/eye_one_display2.htm .
A longer article dealing with the principles behind TV, projector, and monitor adjustment (the basic principles apply to all displays) is here: http://www.curtpalme.com/forum/viewtopic.php?t=10457 . This page is most informative, but caution: the samples they display of scenes and people aren't that great. Apparently, someone prepared those images with a poorly calibrated monitor -- an example of why to calibrate in the first place.
Images display more accurately in software that hooks into your monitor's calibration profiles. Most media players don't do that. Some Windows applications that look better on calibrated monitors are the Windows desktop, the Firefox web browser, and VirtualDub's preview screens and most of its filtering dialogs.
Last edited by sanlyn; 20th Mar 2014 at 18:31.
The stuff that I have been working on is not VHS recordings; it's old film transferred to digital. On these specific films, the red levels are way too high. Knowing what grass and the tone of people's faces are supposed to look like is how I adjust the color. It is not easy tweaking color because one thing affects another. This is an example of the specs used on an old film from 1964. On this film I decided to make special frame adjustments, pulling out and re-painting much of the frame damage with random spots. All the video edits and frame restoration are done before any color tweaking.
Thanks for the heads up on picture adjustment; I set all my own calibrations on most of the media gear I have. With the laptops, I never really needed to tweak the overall color.
ColorMill has 3-stage RGB, gamma, level, and other controls for darks, mids, and brights. A properly designed RGB filter should affect only one color at a time; if you adjust Blue, it should have no effect on Red and Green. Most people start color correction by first adjusting brightness and contrast. These two elements strongly affect the way the eye perceives color and saturation levels. So does wallpaper or a colored desktop; set your desktop to a plain gray, around RGB 32 to 64.
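To make the channel-independence point concrete, here is a minimal Python sketch (pure illustration, not ColorMill's actual code) of a 3-stage adjustment where each RGB channel gets its own gains for darks, mids, and brights:

```python
# Illustrative 3-stage RGB adjustment (not ColorMill's actual code):
# each channel is corrected independently in the darks, mids, and
# brights. Values are 8-bit (0-255).

def band(v):
    """Classify a value as darks / mids / brights."""
    if v < 85:
        return "darks"
    if v < 170:
        return "mids"
    return "brights"

def adjust_channel(v, gains):
    """Apply the gain for this value's band, clamped to 0-255."""
    return max(0, min(255, round(v * gains[band(v)])))

def adjust_pixel(r, g, b, r_gains=None, g_gains=None, b_gains=None):
    """Adjust R, G, B independently; a channel with no gains is untouched."""
    ident = {"darks": 1.0, "mids": 1.0, "brights": 1.0}
    return (adjust_channel(r, r_gains or ident),
            adjust_channel(g, g_gains or ident),
            adjust_channel(b, b_gains or ident))

# Pulling blue down in the darks leaves red and green alone:
print(adjust_pixel(40, 120, 60, b_gains={"darks": 0.8, "mids": 1.0, "brights": 1.0}))
```

Adjusting the blue darks leaves red and green untouched, which is exactly the behavior a properly designed RGB filter should have.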
It sometimes helps to view a grayscale-only version of the image (use VDub's grayscale filter) to initially adjust brightness and contrast. This does require that you jockey back and forth a few times between grayscale and color views, since many color adjustments affect brightness levels, and vice versa. Normally the only RGB controls I use are ColorMill and the gradation curves. Too many color filters can get in the way sometimes, though I've often loaded the brightness-contrast and HSI filters as you show.
After setting basic contrast and brightness, the easiest way to start working on colors is to note how blacks, grays, and whites are displayed, followed by skin tones. If you get those looking right (assuming you're not dealing with a strongly colored light source such as orange floodlights or blue neon), most other hues will start falling in place. Rather than start by correcting green grass, if there are clouds in the image you might try getting clouds to look white to begin with (the shadows of clouds are usually gray, with very little blue). Direct-lighted grass is just above the illumination level of midtones, or about the same level as 65% gray. Under normal lighting, skin midtones rarely go above RGB 160 to 180 Red, and bright skin highlights look a bit odd when higher than RGB 220 -- but much depends on lighting conditions, etc., and whether you're correcting male or female skin. And even then, there are wide variations in real life.
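The "get the neutrals right first" idea can be sketched in Python. This is an illustration only, not a formula from this thread: given the average RGB of a patch that should be gray (a cloud shadow, say), compute per-channel gains that make it neutral at its own luma level.

```python
# Sketch of neutral-reference balancing: gains computed from a patch
# that should be gray, then applied to other pixels. The patch values
# below are invented for the example.

def neutralize_gains(patch_rgb):
    """Per-channel gains that map the sampled patch to equal R=G=B."""
    target = sum(patch_rgb) / 3.0
    return tuple(target / c if c else 1.0 for c in patch_rgb)

def apply_gains(rgb, gains):
    """Apply channel gains, clamped to 8-bit range."""
    return tuple(max(0, min(255, round(c * g))) for c, g in zip(rgb, gains))

# A "gray" patch reading reddish at (150, 128, 110):
gains = neutralize_gains((150, 128, 110))
print(apply_gains((150, 128, 110), gains))   # the patch itself becomes neutral
```

Once the neutrals sit at R=G=B, skin tones and the rest usually need much smaller corrections.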
Color temperature for film, NTSC, PAL, HDTV, etc. is standardized at 6500K for luma levels between 110 and 140. Out of the box, most consumer monitors have a brightness level of at least 230 (usually higher), at least one oversaturated color (usually Red), an elevated blue at the dark end and/or crushed blacks and shadows (to make blacks look "darker"), and a color temp that ranges anywhere from 5500K to over 9000K. The only one of those elements that can be adequately corrected with a monitor's image controls is brightness. Fortunately, today's monitors usually have an optimal contrast setting (white levels), but gamma is inconsistent across the spectrum, usually too low in the darks and too high in the midrange. Out of the box, most monitor color is concentrated in the midrange between IRE 30 and 70 (to make them look brighter in showrooms).
Correcting color might be easier if you add a popular little tool to your desktop that you can use to measure actual RGB pixel values with your mouse pointer. No installer required; CSamp (for "Color Sampler") is a tiny stand-alone applet that sits on your desktop. Free. Very handy.
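On noisy captures, a single-pixel reading can be misleading, so it often helps to average a small patch around the sample point. A Python sketch of that idea (CSamp itself reads single screen pixels; the nested-list frame layout here is an assumption for illustration):

```python
# Average the RGB values of a small square patch, assuming the frame is
# a nested list of rows of (R, G, B) tuples. Steadier than a one-pixel
# reading on noisy video.

def sample_patch(frame, x, y, radius=2):
    """Average RGB over a (2*radius+1)^2 patch centred on (x, y)."""
    rs = gs = bs = n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            r, g, b = frame[y + dy][x + dx]
            rs, gs, bs, n = rs + r, gs + g, bs + b, n + 1
    return (round(rs / n), round(gs / n), round(bs / n))

frame = [[(100, 100, 100)] * 5 for _ in range(5)]
frame[2][2] = (180, 100, 100)          # one hot noise pixel
print(sample_patch(frame, 2, 2))       # the spike is averaged out
```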
Last edited by sanlyn; 20th Mar 2014 at 18:32.
Blender's nodes can be keyframed, which can be used to vary the strength of the various filters over time. But if you've got lots of clips to do, the process needs to be as automated as possible.
How many of your videos have this blue tint? Of the tapes that have this problem, are they consistently bad - or do some only have a slight blue tint?
What about this workflow:
- create an AviSynth script that replicates what the Blender nodes do, and batch convert all the videos with this problem
- import into your NLE, where you can do more minor colour corrections + add titles, adjust audio, etc
- encode to some high quality video format (high bitrate h264) for archiving
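The batch step above could be driven by a small script that, for each capture, writes a one-line AviSynth wrapper around a shared correction script and builds the archive encode command. Everything here is a hypothetical sketch: the fix_blue.avsi name and FixBlueCast() function are invented placeholders, and actually running the ffmpeg command needs an AviSynth-aware build.

```python
# Hypothetical batch driver: generate an .avs wrapper per source file
# plus a high-bitrate H.264 encode command for archiving. Names of the
# shared script and its function are placeholders, not real files.

import os

def make_job(source, out_dir="archive"):
    base = os.path.splitext(os.path.basename(source))[0]
    avs_path = os.path.join(out_dir, base + ".avs")
    avs_text = ('AviSource("{0}")\n'
                'Import("fix_blue.avsi")\n'   # shared correction script (hypothetical)
                'FixBlueCast()\n').format(source)
    encode_cmd = ["ffmpeg", "-i", avs_path, "-c:v", "libx264", "-crf", "14",
                  "-c:a", "copy", os.path.join(out_dir, base + ".mkv")]
    return avs_path, avs_text, encode_cmd

path, script, cmd = make_job("tape01.avi")
print(path)
print(" ".join(cmd))
```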
You might notice that I've left out 'noise removal/reduction', which IMO is one of the most destructive things that can be done to a video. I don't know how good VDUB is at noise removal, but I'd prefer to digitally archive these types of video without any noise removal, and only use it at the last stage (when burning DVDs/Blurays/etc. for people to watch).
There's always a chance that a more sophisticated noise reduction system might be invented at some point in the future which might preserve more fine detail and avoid the image looking plastic or artificial. Hanging onto less processed versions of the videos would keep the door open to things being redone at a later date.
From what I've read, commercial apps make use of lookup tables and other clever techniques to speed up image processing (as well as using hardware acceleration). Blender doesn't do this, AFAIK.
However, the speed limitations are being worked on:
*although, disappointingly, the accompanying video has been sped up. I'd much rather have seen a real-time video, irrespective of the spec of the system it's running on.
While "most color films from the 60's & 70's weren't too red" may have been true when they were first produced, the aging of film PARTICULARLY FROM THAT ERA often results in "color shift", where the print loses first its cyan/blue dye layer, then its yellow/green layer, then finally its magenta/red layer. Since these losses proceed at different rates, it's common for a film to "get more red" the older it gets.
Since it is now 35-50 years later for many of those pictures, the red is now becoming quite apparent. I'm pretty sure THAT is what Deter was referring to, whether it was clear or not.
This is a main focus in "Film Preservation", and is one reason why major film directors who are also film buffs want to digitize and make Technicolor safety copies of important old films (maybe even unimportant ones). Technicolor, because the stock is actually stored as safety black & white film, lasts MUCH longer.
Those color shifts are very hard to fix once too much of the dye has been lost. There's no point of reference (especially since the dyes don't always fade linearly), so it's almost like colorizing a black & white film by guessing at the color that should have been there.
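A toy Python illustration of why a badly faded layer can't simply be gained back: if a dye layer has faded so far that a channel's full range is squeezed into 0-20 in the 8-bit scan, amplifying it recovers only about 21 distinct levels out of the original 256. The 20/255 "remaining" factor is an invented example, not a measured fade rate.

```python
# Toy model: a channel whose 0-255 range has collapsed to 0-20 in the
# 8-bit digital scan. Gaining it back up recovers the overall level but
# not the lost gradations; the quantization is permanent.

def fade_channel(v, remaining=20 / 255):
    """What a heavily faded value looks like in an 8-bit scan."""
    return round(v * remaining)

def gain_back(v, remaining=20 / 255):
    """Attempt to restore the original level with a simple gain."""
    return min(255, round(v / remaining))

levels = sorted({gain_back(fade_channel(v)) for v in range(256)})
print(len(levels))   # 21 distinct values out of an original 256
```

This is why the nodal/secondary approaches discussed here can only approximate what faded; they can't reconstruct detail the scan never captured.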
Nodal correction processes probably will do the best job at these kinds of problems, just because they can approximate (and therefore reverse) many of the possibly complex interactions involved in the color getting the way it currently is.
Lots of good input in this post...
I did a lot of texture work and color blending on other projects, and also worked on a few massive photo restoration projects, so I kind of learned how to work with colors. Unless you are a highly paid pro, the goal should be to improve on the source. Many of the films I was working on were recorded from TV and for personal use. However, the quality of the films (color/picture/scratches/sound) was not acceptable. So the goal was to create better archive footage than the source. It is kind of like you are the only one that has the digitally restored version, and the owners of the actual footage have a lesser-quality film. The sad thing is, with these VD filters and AviSynth scripting you can basically do this for no cost and get better results than the stock footage, which is kind of cool. You don't have to empty your checkbook or spend a lot of cash on something like Da Vinci.
Have done some color restore work on films just using a TBC with color correctors....
Sometimes with color correcting you need to plan for more than one step: you make your color adjustment knowing you are going to make adjustments to your adjustments.
One problem I have looked into is different color patterns from scene to scene. If you really want to fix something like this, you correct each scene's problem and match the results against each other: cut the film into segments and blend everything to your source template. It is kind of a pain...
Last edited by Deter; 17th Jul 2011 at 22:37.
.... ok .... I'm just going to take my crayons and my coloring book and go home.
If I touch up the lines that have been colored over with a Sharpie, would that be chroma noise reduction?
Anyway, I'm humbled by everyone's knowledge. Can we get back to helping out the dumb-dumb here? Look, I understand monitor calibration and how awesome CS4 is. In fact I went to art school, before computers could do any of this stuff. Back in my day we did everything with Pantone color chips, and I had to walk to school uphill both ways. I'm not gonna lie, I got mediocre marks in color theory. I have typical male vision when it comes to sorting out delicate color tones. I do know that the house I grew up in and the people living there were not blue.
My goal is to do better than what I have on my footage, not cinema-perfect color. Nice enough for the family to enjoy on family video night is the aim. Is there any middle ground here? I'm not restoring historic footage of the JFK family.
I get some good results in VMS with color correction but nothing like intracube got. Any tips using the tools I already have, AVISynth, VDUB, VMS 10? Heck I even have Power Director, though I despise it deeply.
You can get it similar to what intracube did using AviSynth + VDub. But it's not "keyframeable", and secondary CC is much more difficult to do.
While intracube did a good job on that shot, I still have doubts - as mentioned earlier , a known reference blue object would make everyone feel better
converttoyv12(interlaced=true)
chubbyrain2
assumetff()
qtgmc(preset="faster", sharpness=0.5)
removedirtmc(50,false)
coloryuv(gain_u=-50, gain_v=5)
#histogram("levels")
Then in VDub I used gradation curves in YUV mode, then ColorMill and gradation curves in RGB mode. I attached the VDub .vcf processing settings in the zip file below.
This might be why you are having difficulty in other programs, most of which do a normal-range RGB conversion (you lose parts of the blue channel when it's converted to RGB, because Cb has out-of-range values). So the idea is to "legalize" Cb before converting into RGB. Some other programs that are better at color correction can access YCbCr data before the RGB conversion, and have filters that work in Y'CbCr, RGB, HSV, CMYK, LAB, etc...
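A Python sketch of the "legalize Cb first" point, using the Rec.601 studio-range YCbCr-to-RGB matrix: two pixels whose Cb values differ but are both out of the legal 16-240 range end up with the same clipped blue after conversion, while pulling the Cb excursion toward neutral first keeps them distinct. The 0.5 scale factor is illustrative, not ColorYUV's exact arithmetic.

```python
# Rec.601 studio-range YCbCr -> RGB, clamped to 8-bit. Out-of-range Cb
# drives the blue channel past 255, where clipping destroys the
# difference between pixels; scaling Cb toward 128 first preserves it.

def ycbcr_to_rgb(y, cb, cr):
    """Rec.601 studio-range conversion, clamped to 8-bit RGB."""
    r = 1.164 * (y - 16) + 1.596 * (cr - 128)
    g = 1.164 * (y - 16) - 0.392 * (cb - 128) - 0.813 * (cr - 128)
    b = 1.164 * (y - 16) + 2.017 * (cb - 128)
    return tuple(max(0, min(255, round(v))) for v in (r, g, b))

def scale_cb(cb, k=0.5):
    """Pull the Cb excursion around neutral (128) toward legal range."""
    return round(128 + (cb - 128) * k)

p1 = ycbcr_to_rgb(100, 245, 128)
p2 = ycbcr_to_rgb(100, 255, 128)
print(p1[2], p2[2])    # both blues clip to 255: the difference is gone

q1 = ycbcr_to_rgb(100, scale_cb(245), 128)
q2 = ycbcr_to_rgb(100, scale_cb(255), 128)
print(q1[2], q2[2])    # distinct blues survive the conversion
```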
Theoretically you could use smoothcurve() and smoothcustom() and do everything in avisynth in Y'CbCr , by remapping cb values, but it's very difficult to do, as there is no visual aid - you're working blind
Last edited by poisondeathray; 18th Jul 2011 at 16:01.
I didn't get any great results, just a little better than the sample you provided. I used ColorYUV (off_u and off_v) in AviSynth to tweak the color a bit, then ColorMill, gradation curves, and LAB in VirtualDub.
I don't see anyone mentioning LAB (a VDub filter), but try it; you can find it at http://www.infognition.com/VirtualDubFilters/detailed.html or directly at http://www.mysif.ru/FiltrDub.htm
These filters are not enough to fix it; it's not just the "blue issue", there are other problems too.
I have some samples of my own, and they have a similar "magenta problem" rather than the blue one. I have tried a lot of color tweaking/correcting, but you can't expect a miracle, as you said; just a little improvement.
Edit: thanks poisondeathray, I like it
Nice, PDR, I'm going to mess with those settings this evening. That's the kind of thing I'm looking to do. It's visually acceptable versus the blue haze of doom, without getting really technical. What you say about color space makes sense. It seems VDUB gets brickwalled on some of my color problems and I can't pull it back.
I have been running RemoveDirt before the deinterlace. I don't know if it will make a difference. I have been separating the fields and running RemoveDirt on each field individually, then weave, then QTGMC. Also, I have never used chubbyrain2.
It's more of a curiosity thing for me , but when you are fiddling around later, if you can find a section with a known blue object, that would be nice
Maybe blue jeans, blue car, etc..
Why are you deinterlacing the video? This is usually a last resort, and mainly if the file has some kind of motion or field damage problems.
RemoveDirt is also only used as a last resort; it is for oxide drop-outs or other pixel problems. If you have no drop-outs or white/black streaks in the picture, you don't need it.
Kind of like what poisondeathray did with the re-color job; I think it is really good....
Saturated blues are, of course, greatly reduced. I decided to let a bit of blue through (as can be seen on the folds of the clothes) - but it's a compromise. Hopefully, the areas that are slightly blue (but shouldn't be) aren't too objectionable, and the areas that should be blue are just blue enough. But it's very difficult to balance.
Initially, I thought there might be a way to distinguish between the areas where the blue should/shouldn't be and filter accordingly. But, from looking at the channels, I don't think that's possible.
converttoyv12(interlaced=true)
chubbyrain2
assumetff()
qtgmc(preset="faster", sharpness=0.5)
removedirtmc(50,false)
coloryuv(gain_u=-50, gain_v=5)
#histogram("levels")
Oh, one other thing I noticed; The colours (reds, at least) are vertically shifted down relative to the Y channel by about 6px - noticeable at the top and bottom of the t-shirt. The image I uploaded in post #16 has the Y channel shifted relative to CrCb to compensate. The image in this post doesn't.
If he wants to shift CbCr relative to Y, the equivalent avisynth function would be chromashift() or chromashiftsp() . e.g. chromashiftsp(y=6) would shift chroma up 6px. The "SP" variant allows for subpixel gradations, so 5.5 would be 5.5 pixels.
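For anyone curious what a whole-pixel chroma shift does under the hood, here is a Python sketch (an illustration, not ChromaShift's source), treating a chroma plane as a list of rows and moving it up by n while Y stays put; the "SP" subpixel variant would interpolate between adjacent rows instead of moving whole ones.

```python
# Shift a chroma plane up by n rows relative to luma, duplicating the
# bottom edge row to pad the gap. Planes are lists of rows here.

def shift_plane_up(plane, n):
    """Move rows up by n, repeating the last row to fill the gap."""
    return plane[n:] + [plane[-1]] * n

y_plane  = [[10], [20], [30], [40]]     # luma stays where it is
cb_plane = [[1], [2], [3], [4]]
shifted = shift_plane_up(cb_plane, 2)
print(shifted)                          # rows 3 and 4, then edge padding
```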
Shifting the chroma might be ok for that 1 shot, but it doesn't look right in other parts of the clip
This is where vdub and avisynth fail. As the exposure changes, the static corrections aren't optimal for other parts in the clip. Looking at the full clip - I probably desaturated blue too much. I just "eyeballed" it to get a rough match to your blender results on that static shot. I doubt he will be able to leave just 1 setting and let it go for the actual video, or he will have to live with some "intermediate" result
"Shifting the chroma might be ok for that 1 shot, but it doesn't look right in other parts of the clip" spot on PDR
This is why I abandoned trying to fix chroma shifting on this old VHS footage. In one scene it is 6 pixels down then in the next it's 4px left. It may be some kind of tape flutter from one of the many transfers. Too bad VDUB does not employ temporal "envelope" style filters.
I'm sure I can find a clip with a blue object. I'll try to play with this tonight. I was extremely busy drinking beer last night and didn't visit video restoration world. I made a DVD of clips of me and my friends drinking beer then took it to my friends house so that me and my friends could drink beer while watching video of other times we drank beer..... ya I'm a dork.
Where do I find chubbyrain2?
manono, thanks chubbyrain2 is working well for killing rainbows and other chroma problems in my footage.
PDR, I think your theory about colorspace is correct. The YCbCr video was, I don't know how to say this in video speak, clipping or brickwalling the B in the RGB colorspace. I actually ended up with coloryuv(gain_u=-40, gain_v=5), -50 killed all the blue. Then in VMS I ran a color corrector and a secondary color corrector with masking to get rid of some stubborn magenta patches.
I'm still tweaking it. I'm trying to get it to look as natural as possible.
OK here is what I have going on in AVISynth:
SeparateFields()
f1 = SelectEven().RemoveDirtMC(50,false) #.FFT3DFilter(sigma=12,plane=3,bt=2)
f2 = SelectOdd().RemoveDirtMC(50,false) #.FFT3DFilter(sigma=2.0,plane=0,bt=-1)
Interleave(f1, f2).Weave() # recombine the cleaned fields before deinterlacing
QTGMC(Preset="Slower", sharpness=0.25)
This takes care of most of my really bad stuff. Then I do a Neat Video pass and some additional chroma noise reduction in VDUB. Then finally, and this is the only way I can figure out how to do it, I remove the evil purple/magenta using a secondary color filter in VMS. Maybe some additional color tweaking in VMS.
I'm not ready for public display yet, maybe tomorrow. Is there anything I can do to enhance that AVISYNTH script?
OK, I've moved on for now. I became a mad scientist with secondary color correctors. Then I remembered that I am using an uncalibrated monitor, so I eased up on some of my tweaking. This particular footage is in very bad condition; I got to a "good enough" point, then moved on to new clips.
I didn't realize how good I got with the secondary color correctors until I worked on some less degraded footage. It appears that I am now able to correct a wide range of color problems on my VHS footage. The secondary color corrector in VMS is the icing on the cake for my VHS restorations. Chaining 3 or 4 of them together, I can eliminate obnoxious color haze, purple hot spots, and over-exposed yellow glare to my heart's content.
Though I'm still not sure where the best place is to put the deinterlace in my script. It seems that most people place it at the top. Running RemoveDirt on the individual fields seems to make an improvement over running it on progressive frames.
Only about 200 more hours of processing left, then I get to move on to the VHS-C footage.
IMO you should put whatever deinterlace filter you use at the end of your script because, for instance, QTGMC slightly changes the levels; that's what the histogram tells me in AviSynth, at least.
Can you post a sample of your restored video? I'd be curious to see the end result, thanks.
Looks something like this. Far from perfect, but way better than it was.