And you can use the GUI sliders in avspmod. It comes by default with the levels() sliders for the 5 basic parameters (levels has other parameters like coring, and dither in avisynth 2.6.x) because it's a standard avisynth filter. The preview automatically refreshes live when you adjust the sliders. For filters that don't have sliders by default, you can create sliders (edit => insert user slider, or F12). If you want manual refresh, or prefer to use text edits, push F5 to preview. When switching tabs, the preview automatically refreshes as well.
Once again, I can't stress this enough, thanks so much to you two! I feel like instead of just getting into this, I made giant leaps thanks to your help, even though I still barely scratched the surface of the whole matter. It's highly interesting and I wanted to do this for quite some time.
So is avspmod the preferable choice here? Does it make a difference if I try with avspmod or with VirtualDub? I definitely like the idea of using sliders to get acquainted with colours.
But for now I think I'll have to call it a day, still have to read a text on the Thirty Years War. Which unfortunately is very interesting as well, so my loyalties are split here, hehe.
And if it makes you feel any better, I have this text file of topics to go back and read over or research on. I copy links to discussions of topics that are interesting or might be valuable, etc., that I pass by but currently don't have time to learn about. It's like 300 pages long and growing (not all video stuff, but you get the idea). (Jagabo is a big repeat offender on that list, LOL.) So don't feel you have to learn everything in 1 day.
I see, so I can "cut the waveform in half" (for example) and only apply changes to one half, e.g. raise the darker half?
Recall earlier I said there are many different ways to get similar end results - It's probably too much to absorb at this stage, but another powerful way is to remap input to output values using curves or smoothcurve. One drawback of levels() is the changes are linear. If you've ever used Photoshop or GIMP curves, it's the same idea. The one in avisynth doesn't have a GUI; you need to enter a string.
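To make the curves idea concrete, here is a minimal numeric sketch in Python, not AviSynth syntax (SmoothCurve's actual string format is described in its own documentation; the control points here are made up for illustration):

```python
def apply_curve(value, points):
    """Remap an 8-bit value through a piecewise-linear curve.

    points: list of (input, output) control points, sorted by input.
    Unlike a single Levels() ramp, which is linear over the whole
    range, a curve can lift the shadows while pinning the midtones
    and highlights exactly where they were.
    """
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            # linear interpolation between the two surrounding points
            return round(y0 + (value - x0) * (y1 - y0) / (x1 - x0))
    return value  # outside the defined range: pass through unchanged

# A curve that raises the darks but leaves black, mids, and white alone:
curve = [(0, 0), (64, 90), (128, 128), (255, 255)]
```

So input 64 comes out at 90 (darks lifted), while everything from 128 up is untouched, which is exactly the kind of selective change Levels() alone can't do.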
And yet another more advanced way is to use masks. e.g. you might use a bright mask (derived from high Y' values) to composite layers to reduce the bright areas. Same thing with saturation - you might selectively bring down, say, red saturation of a certain hue (without affecting other colors).
But in color work, you usually cannot adjust a narrow range without making something look out of place, especially with 8bit values. You get discrete changes that look unnatural and out of place. Also - don't just "treat" the histogram, waveform, vectorscope, etc. ONLY. They are monitoring aids meant to be used in conjunction with the actual rendered image.
Are there special filters like the Histogram for the brights and darks to help me notice issues with saturation and other things?
And you said your eyes aren't that good at picking out some defects - you already used one technique to help out (enlargement), but another useful way is to use histogram("luma") - it's an enhanced view that will emphasize defects like macroblocking, banding, many others
Last edited by poisondeathray; 22nd Jun 2014 at 13:58.
Sorry I was away from this thread while on the road and missed so many great tips. I have to agree, some of the junk can't be fixed, or if it can there won't be much video left to watch. Still, there's a lot one can do. I'm with poisondeathray, how did the broadcasters let some of this stuff out of the gate? Someone asleep at the switch, or they were smoking some really great stuff during the broadcast. Those jaggies shouldn't be there with proper interlacing, and the levels are off the charts.
I ignored the stuff that would drive most people crazy and just tried to make things look more realistic and less annoying. My scripts were fairly simple (but mighty slow at 1080 lines!). I smoothed edges with the Santiag plugin, cleaned up some other stuff with MCTemporalDenoise on "low" (I abbreviate it MCTD). In the AfterSchool clip I let the audiences go darker. No one's watching them anyway, but to keep them more illuminated you could play with some contrast masking. Too bad the brights are blown to hell and won't come back. For those blazing reds in the 2nd sample I used FixChromaBleeding -- not a perfect filter and an old-timer, but sure comes in handy sometimes. I fed the clips to ColorFinesse to get ideas for better levels and color -- but it wouldn't be fair to prescribe a $500 filter for color work, so I did what I could with Avisynth and YV12. The encodes are MPEG4 for BluRay.
After School script:
MPEG2Source(path to video "SAMPLE After School.d2v")
ColorYUV(cont_y=-35,off_y=-10)
SmoothLevels(10, 0.78, 255, 16, 255,chroma=200,limiter=0,tvrange=true,dither=100,protect=4)
santiag()
MCTemporalDenoise(settings="low",interlaced=true)
Orange Caramel script:
MPEG2Source(path to "SAMPLE Orange Caramel.d2v")
ColorYUV(cont_y=-40,off_y=-14,cont_v=-50,off_v=-6)
SmoothLevels(16,0.95, 255,16,245,chroma=200,limiter=0,tvrange=true,dither=100,protect=4)
FixChromaBleeding2()
santiag()
MCTemporalDenoise(settings="low",interlaced=true)
Last edited by LMotlow; 22nd Jun 2014 at 17:58.- My sister Ann's brother
Photoshop, I know that works with Layer Masks and such, but I never knew what to do with it in that context either.
But BTT, I couldn't use SmoothLevels. The wiki said there is now SmoothAdjust, which includes SmoothLevels + more functions. But it required AviSynth 2.6.x and I downloaded 2.5.8. Should I get the 2.6.0 alpha? I guess so, since SmoothLevels has been mentioned repeatedly so it seems to be quite useful. The audience being darkened is totally fine, as you said, I don't care because I don't look at them. If it makes the stage look better, I'm all for it.
With video you can often use the video itself to build an alpha mask. Say you have a video where the bright parts of the picture are too red (pink tinge) but the dark parts of the picture are normal. If you reduce red over the entire image the bright parts will look right but the dark parts won't be red enough (shifted to cyan). In a case like this you can use the brightness of the image as an alpha mask, then overlay the original image with the color shifted image using the mask. In dark areas the mask indicates the original image should be used, in bright areas the mask indicates the color shifted image should be used. So darks retain their original color, brights get shifted toward cyan.
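Numerically, that mask-based overlay is just a per-pixel weighted blend. A Python sketch of the idea (the pixel values and the "red reduction" are made up for illustration; in AviSynth the compositing itself would be done by an overlay filter):

```python
def blend(original, corrected, mask):
    """Composite two pixel values using an 8-bit alpha mask.

    mask = 0   -> keep the original pixel (dark areas)
    mask = 255 -> use the color-corrected pixel (bright areas)
    Using the image's own luma as the mask means the correction
    fades in smoothly as the picture gets brighter.
    """
    return round((original * (255 - mask) + corrected * mask) / 255)

# Bright pixel (luma 230): takes mostly the red-reduced value.
# Dark pixel (luma 20): keeps nearly all of its original red.
bright = blend(200, 170, 230)   # original red 200, corrected red 170
dark = blend(80, 50, 20)
```

The smooth falloff is the point: a hard threshold would leave a visible seam where "dark" switches to "bright", while a luma-weighted blend transitions gradually.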
That's a neat explanation, jagabo. Wish I could explain things that clearly.
I save intermediate work as Lagarith AVI, then feed it to another script if I need to add more plugins. That's one of many tricks I learned by browsing this site for 6 years. Also, you often want to test a short clip with it, then tweak other plugins. So running a separate script avoids running slowpoke MCTD over and over to get what you want. I do the same thing with QTGMC or IVTC -- those are often necessary first steps, so why keep running a long step over and over? I often delete the AVI intermediate but save the script, just in case.
SmoothAdjust was designed by LaTo, who posts on doom9. He doesn't keep old versions of his stuff, and that's a headache. I have new versions, but frankly Avisynth MT's drive me crazy. Most times I still use SmoothLevels 2.62. Runs with 2.5 and 2.6 Avisynth. A copy is attached.
AvsPMod: I hate it. Slows me down. Just me, I guess.
I think jagabo and poisondeathray explain stuff like luma, chroma, etc., better than most people. I'll just simplify it and say that luma is brightness and chroma is color. In YUV the two are stored separately as 3 channels. Y = luma. V is mostly red, U is mostly blue. And that's simplified as well, because U and V kind of sneak into other hues. V includes some yellow and orange, U includes some cyan (blue + green). I once asked, if YUV stores red and blue, where is green? Well, in YUV you get green by subtracting red data and blue data from Y. Complicated? Yep. If you subtract all the chroma, you're left with Y which gives you a grayscale image. Some images of YUV here: http://en.wikipedia.org/wiki/YUV .
RGB is different. Each pixel of an RGB image stores luma and chroma data in the same place. So the R in RGB is really just red together with its own brightness component. Same with Blue and Green.
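LMotlow's "where is green?" point can be checked with the rec.601 luma equation. A quick Python illustration (full-range values, ignoring the fixed-point scaling real decoders use):

```python
# Rec.601 luma: Y is a weighted sum of R, G, B
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

# Given Y plus the red and blue data, green is recoverable
# by subtraction -- which is why YUV never stores it directly:
def green_from(y, r, b):
    return (y - 0.299 * r - 0.114 * b) / 0.587
```

Run one pixel through both directions and the green channel comes back exactly, confirming that the third channel is redundant once you have Y, R, and B.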
I've fiddled with color for a long time, started learning with Photoshop Pro and Windows 3.1, LOL!! I use YUV and RGB histograms both. Something I learned over time from posts in videohelp is that YUV histograms show the way image data is stored in video. RGB histograms show the way it's displayed. Yep, you do have to use those graph and chart things: eyeballs can say whether something looks right or wrong, but histograms tell you why. Looks intimidating at first, and for some reason website tutorials about histograms are a little lean on video sites but are all over the place on photo sites. Try these two (I got the links from an old videohelp post):
Part 1: http://www.cambridgeincolour.com/tutorials/histograms1.htm
Part 2: http://www.cambridgeincolour.com/tutorials/histograms2.htm
I found those links years ago in videohelp and they were very helpful. The tutorials are for still photo. But when it comes to luma and color the principles are the same. Video is just a stream of still photos, after all.
Avisynth also has an RGB histogram plugin or script of some kind. I've seen jagabo refer to it.
I use VirtualDub a lot for RGB work. VDub has some color filters similar to those in pricey NLE's:
ColorMill uses sliders instead of wheels: http://fdump.narod.ru/rgb.htm
Gradation Curves (Photoshop-style): http://members.chello.at/nagiller/vdub/index.html
RGB Levels: one with a GUI is built-in to VDub. Simplified, but it works.
ColorTools Histogram/vectorscope: http://trevlac.us/colorCorrection/colorTools.html. Warning: doesn't work in Win7!
I didn't use RGB for the video posts. Color balance and levels change with every scene, sometimes they don't even make sense. I just set up the same filters to cover every shot in a clip. A compromise, at best. Actually it was contrast, gamma and levels that made most of the difference. I didn't hit color itself very much. A pro colorist would be in hog heaven with those samples.
Other members can fill you in on more details. I just started posting here but, like you, I had to take a deep breath first because I'm still learning. Try browsing the Restoration section for ideas about fixing some godawful video. Plenty of examples about noise problems and how they get fixed. I've spent a lot of time in there.
Last edited by LMotlow; 23rd Jun 2014 at 09:05.- My sister Ann's brother
Okay, I understood how masks work, your links with the illustrations were very helpful! I wonder how you would create such a mask in avisynth though, it seems complicated. Can you couple it to conditions, e.g. make avisynth check brightness or hue and then, based on those conditions, have the mask work its function only on the spots of the video that show high brightness, for example? Do people create these masks manually, through a text editor? That sounds like a serious challenge, but also looks like a really cool feature.
EDIT: Just looked at ColorMill. Impressive and looks easy to use! Is it possible to export the slider positions to avisynth or something? I don't like how VirtualDub only has AVI as output, and it crashed for me quite often during encoding.
Last edited by bschneider; 23rd Jun 2014 at 13:54.
I don't remember if it was mentioned earlier in this thread but CSamp is a very useful tool. It allows you to read RGB values off the desktop.
Wow, questions. I recall I had even more of my own back then, but I was too scared to post questions with all these experts around (I still am, almost).
Lagarith. There are others, but Lagarith seems to be everywhere and has good compatibility with different systems.
I don't encode out of VirtualDub. For SD DVD video I use HCenc or ye olde TMPGenc Plus 2.5. For h264 I use TMPGenc Mastering Works, TX264, and -- when I can remember the darn command lines -- plain X264. Yep, saving lossless AVI takes up some room. But almost all of those working files are deleted after the final output is tested. I save avs scripts, though.
I'm still using XP. It's a new build, but I also made a Win7 PC just for heavy HD stuff, even tho I don't work with HD that much. A lot of people out there are still using XP, even with ancient machines. I have one AMD oldie that I use just for VHS capture. Lots of people do that, as well.
Well....yep. It's tough at first, for everybody. You don't put that much time into everything in sight, just the important stuff. Eventually you sit there scratching your head with a color problem and -- pow! -- all of a sudden it all falls together. Discard the idea that it's a matter of just adding more red or something. It's a matter of playing with those filters, trying things, learning what happens when you correct levels, fiddle with gamma, determine white balance, black balance (yep, black's a color just like the others).

On that subject, I had to sit here today waiting for two big guys to install a new air conditioner and got to watching the After School video. Shucks, I wish they hadn't blown away the highlights! Anyway I put another version together (attached), this time with more levels and color work and some background color matching. Video should look like all the scenes were shot at the same time and place. Sometimes that's not easy, especially with changing stage light colors, or when the originator screws up the source. But I came close. Used a few tricks I learned from some people who don't seem to be around here any more.

You might find with this mkv that decent levels and color grading make a vid look different -- almost enough to make your brain disregard some lingering artifacts. I also used FFT3DFilter in chroma-only mode to kill some rainbows and color blotches (how the heck does a broadcast video end up with that kind of noise anyway?), and GradFun2DBmod to sidestep some banding. The mkv looks sharper. But I never sharpened it. Not once. All done with VirtualDub, ColorMill, and gradation curves, with lossless Lagarith work files.
Last edited by LMotlow; 23rd Jun 2014 at 18:14.- My sister Ann's brother
vdub can export almost anything now, through the external encoder feature. e.g. you can use ffmpeg. There is a guide on the vdub forum.
Avspmod has a pixel sampler similar to CSamp. Like CSamp, it will return RGB values, or HEX values if you wish, with the cursor/mouse over a pixel. But the benefit of doing it in avspmod is you can even get YUV values. Also the pixel (x,y) position for that video is given (CSamp does it for the native display resolution, which isn't as useful). All the data returned can be customized on the status bar to show what you want (things like FPS, colorspace, current frame, framecount, audio characteristics, many others).
LMotlow and others have given some information about YUV earlier. But to understand color a little background may help:
Your eyes see in RGB. They have receptors that are sensitive to red light, others that are sensitive to green light, some that are sensitive to blue light, and some that are sensitive to any color (ie, grey scale; these are the most sensitive and the most numerous). RGB is pretty easy to understand. It's an additive process. You start with black then add red, green, or blue light to make colors. Red+green = yellow, red+blue = magenta, red+green+blue = white, etc. So you can make all colors you can see (more or less) with red, green, and blue.
This is a little different from what you probably learned in school about paints and primary colors -- a subtractive process. With paints you start with a white canvas (white light falling on the paint). The paint subtracts from that white, leaving "colors". So red paint is really minus-green and minus-blue paint. Ie, the paint absorbs green and blue, only allowing red to be reflected back to your eye. Green paint is really minus-blue and minus-red paint. When you mix red and green paint together, red, green, and blue are all absorbed, leaving no light to be reflected back to your eye: black. In practice, the removal isn't perfect, so you get some murky shade of brown.
TV was originally greyscale. When color TV was invented they needed a way to transmit a color signal that could be viewed on existing black and white TVs. They came up with the idea of sending a greyscale picture along with information that tells the color TV how to add and subtract colors from that greyscale image to produce the desired final colors. Just as it takes three components to generate a full color image with RGB, it takes three components to specify a full color image with this greyscale+color system. Generically, we'll refer to this as YUV. Y (luma) is the greyscale picture, U and V (chroma) are the colors that are added/subtracted. (There are minor variations in color systems around the world. You'll see YIQ, YCbCr, YPbPr, rec.601, rec.709, etc. But they're all based on the same idea of a greyscale picture and two chroma channels.)
Here's an example using your second video. First the original colors:
Here's Y on top, U in the middle, V on the bottom (U and V are visualized as greyscale here):
Or maybe more instructive, here is the Y+U on top, Y+V on the bottom:
One thing to note: U and V represent colors that are added or subtracted from the greyscale image. But video is usually encoded as 8 bit unsigned integers and can only have values from 0 to 255, no negative values allowed. So U and V are normalized around 128. Values above 128 indicate colors added to the greyscale image, values below 128 indicate colors subtracted from the greyscale image. When U and V are at 128 nothing is added or subtracted from the greyscale image.
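The normalization around 128 is easy to verify with the conversion equations. A Python sketch using full-range rec.601 formulas (real video uses studio-range scaling on top of this, omitted here for clarity):

```python
def yuv_to_rgb(y, u, v):
    """Full-range rec.601 YUV -> RGB.

    U and V only contribute through (u - 128) and (v - 128):
    at exactly 128 nothing is added or subtracted, and you get
    a neutral grey at brightness Y.
    """
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    return r, g, b
```

With U = V = 128 every pixel comes out grey; push V above 128 and red is added, pull it below and red is subtracted, exactly as described above.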
I'll write up something about hue and saturation next...
Wow! This is an education!
Before I go into replying to what you said, JESUS CHRIST, that sample you attached is like night and day, LMotlow! Holy... Was it very hard to do that? Because the improvement is amazing! I never even realised how bad the colours were oO It's like you said, it looks sharper for sure, and the fact that you actually didn't even sharpen it is extremely impressive. And the colour I don't even need to start on, haha, sweet lord.
I use MeGUI, simply because Vdub always crashed. With MeGUI, I just throw in the .avs that I wrote (or stole) beforehand and it encodes directly to MP4. Would it be better to use something else? I almost exclusively do HD video, 720p sometimes, but 1080p in most cases. I noticed a lot of people here try to recover old VHS recordings, those memories~ hehe
here, scroll down a bit; there are three sample pictures next to each other.
There's another thread where a lot of this was covered before:
Unfortunately, Sanlyn deleted all his posts, many of which were useful.
Yep, I kept tracking those posts over the years. Too bad about the ruckus. I've seen others fade away over time, especially some pro tech types who posted samples from various hardware, VCR's, tbc's, 'scopes, and all. I copied a ton of posts, saved on another drive. Got to where I had to make subfolders so I could relocate stuff. Glad to see jagabo still around. Great info. Thank you.- My sister Ann's brother
I read up a bit about terms like subsampling and found a link to fourcc.org in a doom9 thread when I wanted to research the colour systems. The FourCC overview is very technical and a bit confusing at times, but I guess it helped me understand some more details about the system. Too much to remember many specifics though. But it seemed very complete, there were tons of YUV formats in there.
With digital formats you'll see a few main formats:
4:2:2 (for every four Y samples there are two U and two V samples). A 720x480 luma channel is accompanied by 360x480 chroma channels. Many capture devices use this subsampling. This is also closest to what's in a good analog video signal.
4:1:1 (for every four Y samples there is one U and one V sample). A 720x480 luma channel is accompanied by 180x480 chroma channels. NTSC DV camcorders use this subsampling.
4:2:0 (for every four Y samples there is one U and one V sample). A 720x480 luma channel is accompanied by 360x240 chroma channels. DVD, Blu-ray, and broadcast digital TV use this subsampling.
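The chroma plane sizes above follow directly from the sampling factors; a small Python check using the same 720x480 frame as the examples:

```python
def chroma_size(width, height, scheme):
    """Chroma plane dimensions for common subsampling schemes.

    4:2:2 halves chroma horizontally; 4:1:1 quarters it
    horizontally; 4:2:0 halves it both horizontally and
    vertically. Luma is always stored at full resolution.
    """
    factors = {
        "4:2:2": (2, 1),   # (horizontal divisor, vertical divisor)
        "4:1:1": (4, 1),
        "4:2:0": (2, 2),
    }
    fw, fh = factors[scheme]
    return width // fw, height // fh
```

Note that 4:1:1 and 4:2:0 store the same total amount of chroma data; they just trade horizontal resolution against vertical resolution.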
Last edited by jagabo; 24th Jun 2014 at 12:02.
here. So the higher the second and third number, the more accurately colours are determined? Does chroma bleeding have to do with this? As I understood it, chroma bleeding means that colours tend to... flow instead of being clear cut? Meaning that two shades of red being separated by a thin, black line might bleed over into the black line, so that line becomes invisible? Only checking every second pixel for colour horizontally sounds a bit like interlacing, but for colours, weird analogy maybe, hehe.
Man, thanks to jagabo I'm getting a full semester of education here. Glad I got back to this thread.
I made some pics about the denoising and color. The top image is the original frame 116 from After School, bottom is the mkv version. I think it's too red. I also think the gals aren't wearing pure white. Eyeing other shots, that white should have a hint of pale jade. Levels and colors change with each shot. I started with the background color, which looks different in every scene and in different parts of the stage. Overall there should be more green IMO. That ain't easy, or the gals turn yellow. I tried matching that background color in each shot -- which means, of course, you cut the clip into 4 separate files, work each one, and read pixel values while adjusting. Avisynth came first, to correct basic levels and prevent RGB conversion from losing whatever brights and darks were left from the careless broadcaster.
Both images are reduced to 720 pixels wide. I attached full sized 1920x1080 originals. The smaller pics show the overall color change, which was YUV levels first, then color RGB tweaks. You can see how fixing black levels makes it seem sharper. The original has no depth or dimension. The original backgrounds don't match either, even considering lighting changes. I still think the fix is too red and a bit dark in the midtones. You can fix that with ColorMill by raising the midpoint. The "midpoint" is RGB 128, but you can raise it in small increments to affect the midtones with minimal damage to darks and brights.
Below is a pic of the YUV levels histogram, frame 116 original. You can see that black levels are a little anemic. The luma stretch into the unsafe zone at the right isn't as bad as the other shots -- but remember that the darks and brights will be expanded in RGB. In RGB I get pixel readings off the costume that hit or exceed RGB 255. That can make brights look too hot. Pixel readings off the face show too much green.
Noise: In the attached 1920x1080's you'll see interlace effects, but notice that many edges look a bit rough and the combing seems excessive. The interlacing looks weird to me anyway (where are those jaggies coming from in other parts of the images?). Those noisy edges get more noisy when the video plays. The santiag plugin is a mild edge smoother. You can set it higher or use something strong like SangNom, but I thought those did too much damage. I don't know how well set up your monitor is, but in the big original pic look at the mild mottling and uneven texture in the background. Doesn't look too bad, but consider that it changes with every frame and, therefore, it gets noisy in motion. I used MCTemporalDenoise and added FFT3Dfilter in chroma mode to smooth that stuff and make the background and other flat areas more uniform. Those plugins also helped clean some ugly blotches in skin tones elsewhere.
I should mention that the samples use Rec709 for HD color. Most of the filters use Rec601, so I had to jockey back and forth between the two color systems. I used the Avisynth ColorMatrix plugin for the changeover. BT.709 and BT.601 matrices are similar, but not exactly alike. Orange Caramel was easier to work with after using the right matrix for filtering.
I posted 1920x1080 PNG's separately. They take up too much room in the browser. The forum's viewer doesn't show them full size, so download them if you would, and look them over at 100% size. I also note that some forums make PNG and GIF look too dark in some browsers - as you can likely see if the attached 1920's are displayed below. For some reason JPG looks about right. The JPGs posted are 96% quality, made in Photoshop with the HVSJPEG plugin. Glad I spent that twenty on it.
Sometimes, though, heavy color work doesn't make a huge difference. It depends on how badly the original is screwed up.
EDIT: Oh, I forgot. You asked about banding. Here's a whole page of banding and macroblock examples from Google. Just click on some pics. https://www.google.com/search?q=color+banding&tbm=isch&imgil=bTAhObOS5_AZaM%253A%253Bh...40%3B800%3B600
Last edited by LMotlow; 24th Jun 2014 at 14:56.- My sister Ann's brother
A link was given earlier to the Wikipedia article about YUV. That gave all the math but a picture will make it much more understandable. From http://techpubs.sgi.com/library/dynaweb_docs/0650/SGI_Developer/books/DIVO_OG/sgi_html/apd.html
You can see here that the RGB cube is rotated within the YUV colorspace such that it's standing on its black corner at Y=16, U=128, V=128, with its white corner at Y=235, U=128, V=128. (Note that the image is labeled as Y, Cb, Cr, I've translated that to Y, U, V since we work in YUV in AviSynth) The line of the Y axis in the image is where all the U and V values are 128. All the grey shades of the RGB cube fall on that axis, from 0,0,0 to 255,255,255.
The YUV cube is much bigger than the RGB cube. Only YUV values that fall inside the RGB cube are "legal". It's generally stated that legal Y values are between 16 and 235, legal U and V values between 16 and 240. But not all combinations of YUV within those bounds are legal. Look at the black corner of the RGB cube where it touches Y=16. The only U and V values that fall within the RGB cube at that point are 128. The same is true at the white tip of the RGB cube. Between those extremes the extent of legal U and V values depends on the value of Y.
One way to check your YUV image for illegal colors is to convert it to RGB (where illegal RGB values get truncated), convert it back to YUV, then compare the YUV values before and after the round trip conversion. Where the YUV values are identical the colors were legal. Where they differ the colors were illegal. This isn't quite perfect since YUV/RGB conversion with 8 bit integers isn't 100 percent lossless. But if you add a small fudge factor it's accurate enough for most purposes. I wrote a function that does this. It highlights illegal YUV pixels with a user selectable color:
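The round-trip idea can be sketched per pixel in Python. This is only an illustration of the technique, not jagabo's HighlightBadRGB function (studio-range rec.601 constants; `fudge` absorbs rounding loss, as described above):

```python
def clamp(x):
    """Truncate to the displayable RGB range, as an RGB conversion would."""
    return max(0.0, min(255.0, x))

def yuv_is_legal(y, u, v, fudge=2.0):
    """Check a YUV triple by converting to RGB (clipping) and back.

    If the round trip changes the values by more than a small
    tolerance, the original color fell outside the RGB cube.
    """
    # studio-range rec.601 YUV -> RGB, with clipping
    r = clamp(1.164 * (y - 16) + 1.596 * (v - 128))
    g = clamp(1.164 * (y - 16) - 0.813 * (v - 128) - 0.391 * (u - 128))
    b = clamp(1.164 * (y - 16) + 2.018 * (u - 128))
    # RGB -> studio-range rec.601 YUV
    y2 = 16 + (0.257 * r + 0.504 * g + 0.098 * b)
    u2 = 128 + (-0.148 * r - 0.291 * g + 0.439 * b)
    v2 = 128 + (0.439 * r - 0.368 * g - 0.071 * b)
    return max(abs(y - y2), abs(u - u2), abs(v - v2)) <= fudge
```

For example, Y=16 with U=V=16 fails the check (at black, the only legal chroma is 128), while Y=128, U=V=128 survives the round trip unchanged.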
Earlier in that thread Gavino gave a more rigorous test but it takes forever to start up.
Note that my HighlightBadRGB code assumes rec.601 color. It would have to be modified for the rec.709 colors usually used with HD video.
This is a good place to introduce the color vectorscope. A vectorscope is a 2D plot of U vs. V values. It's a view of the YUV cube from directly overhead (or below). Histogram() has a vectorscope mode ("color").
VideoScope() also has an optional vectorscope. Here is the result of VideoScope("both", true, "U", "V", "UV"):
VideoScope shows the vectorscope vertically flipped compared to Histogram(). Some vectorscope displays show the locations of the standard SMPTE color bars. Sony's Vegas, for example:
Greys are at the middle of the vectorscope box. Colors get more saturated toward the edges.
Last edited by jagabo; 24th Jun 2014 at 17:02.
avisynth? I assume you could use what poisondeathray mentioned earlier in avspmod, switch on that overlay that gives you the values and then play with the variables? But since you didn't like avspmod, did you use CSamp then? Since ColorMill allows me to play with RGB values, is it safe to assume that it converts to RGB and causes the additional problems with the brights and darks you mentioned?
Assuming I used ColorMills to fix the colour of my video, hypothetically. How would I go about exporting these settings ideally? (It's surprisingly easy to just take one frame, play around with the sliders carefully and come up with a better picture. It's fun to play around with it a bit :3 Obviously this will be much more work if you have to account for every single scene...)
Yeah, I noticed that the costumes, especially the pants, have been hit pretty hard. Finally one thing even my eyes can see right away I guess that's something I'll have to live with, but we're making so much progress on many other fronts, I'm not too broken up about that.
Good to know that those plugins helped with skin blotches, because those were really bothering me! Having that fixed is a huge plus, thank you! Looking at FFT3Dfilter's documentation, I assume chroma mode means you set the "plane" variable to 3?
@jagabo: I just saw you posted something as well, I'll have to look at it tomorrow. It's past midnight here, I'm getting sleepy, it looks highly interesting though and I can't wait to get my hands on it :3 Thanks to all of you for your dedication and your patience!
EDIT: Just looked at Vdub and ColorMill, noticed I could check "Show Image Formats". It says the input is RGB32 as well as the output. How is the input RGB32? Shouldn't it be YUV originally?
Last edited by bschneider; 24th Jun 2014 at 17:51.
Knowing how something is supposed to look is where I start. If your white shirt is RGB 190-160-160, that shirt either doesn't look white (it looks reddish) or you need to adjust something to make it right. If you change the above red from 190 to 160 and match the other two guys, you'll have white. It won't be a very bright white (RGB 160 is really a light gray, but it depends on the lighting involved). If it's supposed to be a brighter white, then reducing red didn't help. Instead, raise Green and Blue to 190. RGB 190-190-190 is a fairly bright area of a fairly white shirt. But make that shirt into RGB 255-255-255 and it'll look like it's neon, which would look pretty tacky.
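LMotlow's shirt example amounts to per-channel gain. A Python sketch of the arithmetic (the RGB 190-160-160 shirt value comes from his example; the helper name and the rest are illustrative):

```python
def neutralize(pixel, reference, target):
    """Scale each channel so a known-neutral reference patch hits target.

    pixel, reference: (r, g, b) tuples; target: the grey level the
    reference patch should end up at. Applying the same per-channel
    gains to the whole frame is what balances the other colors too.
    """
    gains = [target / c for c in reference]
    return tuple(min(255, round(p * g)) for p, g in zip(pixel, gains))

shirt = (190, 160, 160)          # reddish "white" shirt
# Raise green and blue to meet red at a brighter neutral white:
balanced = neutralize(shirt, shirt, 190)   # -> (190, 190, 190)
```

Using target 160 instead would match the reduce-red option from the text: a correct but dimmer light grey rather than a bright white.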
The idea behind getting whites and grays correct is that if you get all 3 colors in proper balance for those neutral hues, the other colors fall in place. If you don't have any whites, grays, or blacks, well.....you're kinda stuck, and will have to work on stuff like skin tones, or proper color for familiar objects like trees, leaves, specific flowers, brick homes, etc. Graphics look "real" when they remind us of the real stuff we see all the time. (Lordy, I think I just quoted from about 10 of poisondeathray's posts I learned from a while back !!).
But if you're working on a Disney fantasy, well...anything goes.
I used VirtualDub and one of my pixel samplers to read right off VDub's input and output panes.
TMPGenc encoder's filters. You can always tell Virtualdub to save or compress to other formats.
Last edited by LMotlow; 24th Jun 2014 at 23:38.- My sister Ann's brother
vdub gives me a .vdscript file, but I can open it with Notepad++ just as well, so I assume it's no issue. How can I use this file aside from backing up filter changes I applied in VirtualDub? How would I best go about continuing at that stage? Exporting it to another encoder like Lagarith? Or can I use these variables for avisynth as well, somehow? Oh boy, I'm sure that overwriting bit will get me once or twice... But I will try my best to keep it in mind.
As a general rule: standard definition video uses rec.601 YUV, high definition uses rec.709 YUV. The correct color model can be specified within the video. But if it's not you have to guess.
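The practical difference between the two standards is in the luma weights, which is why decoding with the wrong matrix shifts colors. A quick Python comparison (full-range coefficients from the two specs):

```python
def luma601(r, g, b):
    """rec.601 (SD) luma weights."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma709(r, g, b):
    """rec.709 (HD) luma weights: green counts for more, red and blue less."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# The same RGB pixel lands on different Y values under each matrix:
y_sd = luma601(200, 100, 50)
y_hd = luma709(200, 100, 50)
```

Greys are unaffected (the weights sum to 1 in both), but saturated colors shift, which is why a wrong-matrix decode looks subtly off in reds and greens rather than uniformly wrong.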
You might be trying to cover too much detail in too many places at once. I did that at first. I used a lot of automated one-stop-shopping apps that spit out finished videos lickety-split. But I learned nothing about the details. What made me stop and look around was problem videos. Recording something off digital cable was simple. As soon as I started capturing ugly VHS, the auto apps left me standing in the rain. An old country saying describes it as 10 miles of bad road. 10 miles doesn't sound like much, but try it on a really bad one lane dirt road in February, LOL! I got deeper into this forum and others. Yep, slowed me down at first. At least every week I learned something different from posts like jagabo's and kept notes and copies. I still don't understand a lot of it, but most of the time I can at least plug something in and check the results.
@jagabo, that script is cool. Glad for that link, I lost track of it a while back.
Re: colorspaces for histograms and 'scopes: Check jagabo's notes on that. In VDub you can tell gradation curves to work in different colorspaces. Switching the plugin to a different colorspace takes time, though, because gradcurves builds a new copy of the video. With a long video you could be sitting there for quite a spell.
If running VirtualDub in "full processing mode", it converts to RGB and outputs RGB by default unless you specify otherwise. To avoid that, set the output compression/color to what you want and use "Video" -> "Fast recompress".
I get the idea of a slightly off-color costume from the first image you posted. It's too green. Unless somebody built a color coordinated dark green piano and made a similar green suit, I thought that part of the image should be a black or dark gray piano and suit, with whitish piano ivories. Get the darks close to that, then work on flesh tones. If everything looks mostly normal, the costumes have a slight jade tint. But stage lighting complicates issues. Also, look at the shadow detail on those costumes in every scene. A white costume has grayish shadows, but there's no way to get gray into those costumes (all the other midtones would turn purple). Part of the problem is the yellow-red backlights, and other lights elsewhere. Maybe the tops were white and the pants different, I don't know. If I could get skin tones that didn't make the girls look like they had liver disease, I used my best guess.
Tips from other posts and graphics sites: don't work in bright light. Work in subdued, indirect light. Don't put a lamp in front of your monitor. Don't work in total darkness, or your eyes will think the monitor is too bright and will shut down on you. If you stare at something for too long, your brain tries to "normalize" it. Turn away for a while and come back, things will look different. 24 hours makes a big difference, too. Working with an uncalibrated monitor is like shooting yourself in the foot. That's a different Pandora's box too complicated for here and now, but you can run a cheap (free) monitor check here: http://www.lagom.nl/lcd-test/. Like many people here I use an IPS LCD display calibrated with a colorimeter kit. Not cheap. But see what the lagom link has to offer for free. Lots of people use their site.
Noise is difficult to see in bright light. I didn't recognize that background grunge until much later in the process. Sometimes you miss noise problems altogether and see it 2 days later on your TV. Drat!
QTGMC: I didn't see a need for it here. You can use it with various parameters, but it's slower than low-power MCTD and doesn't fix the same problems. The two plugins are different. I've seen bad video that required both; in that case, you might as well drive around and do some shopping while you wait. Many standard video formats are interlaced or telecined, so it depends on your intended output. For BluRay (AVCHD is similar), the standards are laid out pretty clearly in this post: http://forum.doom9.org/showthread.php?t=154533.
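For reference, a hedged sketch of what a QTGMC deinterlacing step looks like, assuming QTGMC and its dependencies are installed (the filename and field order are assumptions, so check your source):

```avisynth
AviSource("capture.avi")    # interlaced source; filename is hypothetical
AssumeTFF()                 # set the field order to match your capture
QTGMC(Preset="Medium")      # slower presets clean up more but take longer
```

The preset trades speed for quality; that's where the "drive around and do some shopping" comment comes from.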
Using a VDub filter in an Avisynth script: if you stick with script-only, be sure to convert to RGB for that filter. That RGB or other conversion isn't a plugin, but a built-in Avisynth function (http://avisynth.org.ru/docs/english/corefilters/convert.htm). How you load the filter in Avisynth differs slightly with the filter. Many of them have documentation for this, or sometimes you have to search a forum or two to see how people did it. Some VDub filters don't work right in a .vcf file. FadeFX and BorderControl come to mind.
The requirements in Avisynth are to define the VirtualDub plugin path, then define and import the filter and give it a name of your choosing. The next step is to set up a statement that uses the filter (ConvertToRGB32 first!). For example, you would set up ColorMill as follows, using the .vcf file to get the settings. The part of the .vcf file that stores ColorMill values looks like this:
VirtualDub.video.filters.Add("Color Mill(2.1)");
VirtualDub.video.filters.instance.Config(25700, 25700, 25700, 25700, 24932, 25700, 25700, 25700, 25700, 25700, 26468, 25700, 25700, 1124, 5);
LoadVirtualDubPlugin("C:\Program Files\VirtualDub\plugins\ColorMill.vdf", "ColorMill", 1)
ConvertToRGB32(matrix="Rec601", interlaced=true)
ColorMill(25700, 25700, 25700, 25700, 24932, 25700, 25700, 25700, 25700, 25700, 26468, 25700, 25700, 1124, 5)
ConvertToYV12(interlaced=true) # <- if you need to get back to YV12
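Put together, the whole thing is a short script. A sketch assuming the default VirtualDub plugin path and a hypothetical source filename:

```avisynth
# Import the VDub filter and give it an Avisynth name of your choosing.
LoadVirtualDubPlugin("C:\Program Files\VirtualDub\plugins\ColorMill.vdf", "ColorMill", 1)

AviSource("capture.avi")                          # YUV source (filename assumed)
ConvertToRGB32(matrix="Rec601", interlaced=true)  # VDub filters want RGB
ColorMill(25700, 25700, 25700, 25700, 24932, 25700, 25700, 25700, 25700, 25700, 26468, 25700, 25700, 1124, 5)
ConvertToYV12(interlaced=true)                    # back to YV12 for encoding
```

The numbers are just the Config() values copied straight out of the .vcf file, in the same order.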
First, I'd stick with learning how to set up a script and just run it. Color can come later, and you don't always need RGB. You have to denoise and do YUV stuff first anyway. That's a subject in itself.
Good grief, after not posting here for many years, I seem to be in overdrive, LOL! Never thought that would happen. Better get to those earlier links from jagabo and poisondeathray for a while and learn something new.
Last edited by LMotlow; 25th Jun 2014 at 09:56.- My sister Ann's brother