OK, no problem. I'll post the simple script anyway in case someone searches for a "blue frame detection script", since it works OK in synthetic tests. If you need more help with this, just shout and someone should help you out.
To run it, open the script in VirtualDub and use File -> Run video analysis pass; the text file will list the frame numbers detected as "blue" (that is, with an average U value greater than 150).
Code:
AviSource()
WriteFileIf(last, "bluedetection.txt", "AverageChromaU()>150", "current_frame")
-
No. Very common with damaged VHS, especially in leader tape.
Reluctantly, I've attached a file containing 4 scripts. Even if you understand them, it would take a week for you to assemble the support files. And note that this script is for cleaning up a very dirty leader tape that you won't keep anyway. It is inappropriate for "real" video and would be rather destructive. It would also take another week to go into detail about it. You might be using some parts for regular captures, but not as presented here. You can't use the same scripts, plugins and settings for everything. There are 4 scripts in the attachment, two of them purposely designed to run separately because plugins in one would be incompatible with some plugins in the other. Also, I'm involved in one of my wife's frankenstein redecorating schemes for at least a couple of weeks so my computers will be on and off for a while.
I looked for every occurrence. Sometimes you have to do it that way, sometimes most of the glitches won't matter, and sometimes some of them can't be repaired without destroying other elements. Poisondeathray's suggestions could come in handy later.
Avoid overthinking it. Get yourself some captures and address problems one at a time. Before long you'll be rattling off scripts lickety-split.
Last edited by sanlyn; 19th Mar 2014 at 03:39.
-
OK. Here's another capture (saved as YUY2 this time, and confirmed using the script you gave me the other day).
This is stock footage from the beginning of a wedding tape. So ... not really a leader ... but definitely a copy of something.
Problem 1: I see combing around the letters in Harvey's.
Problem 2: I see noise in shots of Lake Tahoe.
Which should I go after first? -
I don't see combing. With interlaced or telecined video you'll see combing in most editors because they don't deinterlace or remove telecine. It doesn't appear in deinterlacing players or on TV. The motion "disturbance" you do see is strong aliasing and motion shimmer. The stock footage plays as interlaced from your VCR, but some of the stock original appears to have been incorrectly deinterlaced using field decimation (fields within frames are separated, then one of the fields in each frame is deleted and the resulting field expanded to full-frame). It also seems that if this stock footage was recorded VCR-to-VCR, one of the VCRs had interlace problems. It's not possible to reconstruct the original frames, although one might relieve some of the noise.
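For anyone unfamiliar with the field-decimation process described above, here's a toy sketch in Python (plain lists standing in for image scanlines; the function name is mine, not from any real tool): it splits a frame into its two fields, throws one away, and line-doubles the survivor. Half the vertical detail is gone for good, which is where the aliasing comes from.

```python
def field_decimate(frame):
    """Simulate the 'incorrect deinterlace' described above: keep only one
    field (every other scanline), discard the other, then crudely line-double
    the survivor back to full height. Half the vertical detail is lost."""
    kept_field = frame[0::2]      # even scanlines = one field
    # frame[1::2] -- the other field -- is simply thrown away
    doubled = []
    for line in kept_field:
        doubled.append(line)
        doubled.append(line)      # naive line-doubling back to full frame height
    return doubled

frame = ["line0", "line1", "line2", "line3"]
print(field_decimate(frame))   # -> ['line0', 'line0', 'line2', 'line2']
```

Every odd scanline is replaced by a copy of its neighbor, so no filter applied afterward can recover what was there.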
The video has strong magenta and green rainbows; the magenta is most evident in the last sequence. Between frames 64 and 289 there are about 20 pixels of horizontal green flashing or "blinking" across the top of the image, along with horizontal magenta rainbows in the blue sky with some green rainbows mixed in. Bright cloud detail has been stripped and discolored due to clipping.
In the zoom-out that begins on frame 290, AGC changes brightness levels drastically during the zoom until darks are badly crushed in the final frames. Go to the zoom-out sequence starting on frame 430 and observe what happens in the lower-left corner during the zoom.
It's difficult to tell whether your tbc is helping in this capture. The stock footage was apparently copied from another VCR that had line timing errors. Unfortunately, a line tbc can't see timing errors made by another VCR. Those errors are a permanent part of the duped tape. A line tbc can help correct only your player's current errors, not errors that were played by another VCR in a different duping operation. You'll also note that the image in the capture is warped toward the left along the top borders -- the top part of the side borders leans left. Consequently, your capture contains the wiggling verticals and horizontal ripples and warping, along with considerable motion noise and dot crawl, that was recorded into your source tape.
Some of these defects can be repaired, some partially repaired; a few of them will have to be tolerated. I'll be back later with some tips about filters. You should target the crushed darks first, then rainbows, then the aliasing, then motion noise. You will need Avisynth for those targets.
-
1) Two comments below the quote.
1a) This is an original VHS from 1993, so I'm not sure if your suggestion of "incorrectly deinterlaced using field decimation" was possible then.
1b) I have not re-recorded this video from VCR to VCR. Very likely that the wedding photographer may have. To me, this is the most original I have.
2) I am confused. Are you saying I should take my line-level TBC out and recapture? (I did say earlier in the thread that most of my videos were commercial originals or copies of those. This one is an exception. And FWIW: I made digitizations of copies of this tape many years ago because I got better results than from this original. I'm re-doing it now just because putting in the line-level TBC made it viewable).
3) I am good to go with AVIsynth, scripting, and VirtualDub. You said " You should target the crushed darks first ..." Are you saying I should run histogram and then levels?
4) I don't mind going one step at a time with the filters ... but I'd like to know why I would go in this order: crushed darks first, then rainbows, then the aliasing, then motion noise.
5) I am assuming that (while I learn) it is a good idea to separate my processing of the stock video from the actual wedding footage (I recognize that as I get up to speed I may want to do the whole video non-linearly, but in one script). Am I right? -
(a) Denoisers really don't like crushed darks and clipping. They get denoised into smooth mush if you correct levels later. (b) Usually you will have to deinterlace/reinterlace to try to fix aliasing. The best of the deinterlacers (QTGMC) actually deinterlaces and denoises somewhat at the same time. Because deinterlacing involves resizing even and odd fields into full frames, why resize noise? QTGMC does deinterlace, denoise, and some motion-compensated smoothing in one step anyway. Basically, however, QTGMC is not primarily a denoiser, but a lot of people use it that way.
Sometimes you can eyeball it. But why make it hard on yourself? Use histograms for precision.
I don't know what's up with the tape. It kinda looked like a dupe to me, but sometimes it's hard to tell.
With interlaced or telecined video, combing generally shows only when there's motion. Each interlaced frame represents two instants in time. If nothing moves, both fields should look pretty much alike. If you want a better example of combing, look at this odd frame 63 at the scene change. I don't think I've ever seen a consumer camera make scene changes look like this. Combing is most obvious in the lower right:
[Attachment 23495 - Click to enlarge]
The Harvey's shot shouldn't show combing because neither the camera nor the building are moving. Only the clouds move, but verrry sloooowly, not changing much between frames.
[Attachment 23496 - Click to enlarge]
Well...let me revise that. In the Harveys shot things are moving...a little. Line timing errors are making objects warp, change shape. When line timing errors affect diagonals, the aliasing looks worse. So if you look at the HARVEY letters real close, you'll see some slight edge combing because, in fact, timing errors are making everything move slightly. Also, the clouds keep changing shape. The video below is the HARVEYS shot slowed down to 2 frames per second. VLC player just balked at this 2fps clip (I hate that player!). MPC-BE and Windows Media Player took it in stride. Note (a) how the borders wiggle, (b) how the letters wiggle, (c) how the clouds keep morphing, and (d) how the right-hand shadow edge of the building wiggles as well:
HARVEYS_2FPS.mkv
If you don't think this video needs a line tbc, take a look at the zoom-out scene and watch what those lines and angles are doing. I also noted that this shot has thinner borders; they are nearly half the thickness of borders in the other shots. Timing errors are rather obvious here. You can also see the horizontal magenta rainbows flashing on and off. What I don't get is why the borders in that shot look more stable:
Hotel_2FPS.mkv
I threw a few plugins at your intro sample and finished it off with some low-power NeatVideo. I didn't address a few issues (halos, levels, some leftover rainbows). Anti-alias routines didn't help that specific damage so much, but I've seen plenty of videos with that problem involving a number of consumer cameras. But things do flow more smoothly overall. See if you think it makes a difference. I can get into detail later tonight or tomorrow.
Wedding_Intro_Rework.mkv
-
This is grand ... yes, I think this makes a difference, and I'd like you to get into more detail. I need to figure out how to do some of this by working through an example.
Could you either attach a script, or detail which plugin you used for which problem ("crushed darks, then rainbows, then the aliasing, then motion noise")? I'd like to break that down and do my own before and afters with each plugin.
I am still not sure what a "crushed" dark is. When I look at the histogram it is shifted towards the dark end with a strong peak there. Is that what you mean?
FWIW: I don't think that with my level of expertise, and small number of tapes I'd like to re-do, that NeatVideo is a good purchase at this point. -
I didn't have time for some details (like really checking the levels, start to finish, but some crushing was easy to spot). I'll have to rig up an example later and clean up my messy script to make it usable. Will be able to get that tomorrow during a break from this redecorating hassle -- and get my big PC back up.
There are other plugins, but for crappy tape NeatVideo is tough to beat as a finishing touch. But I'll use some others.
-
OK. Take your time.
Messy scripts are OK with me. I code a lot: all my scripts start out messy, so I'm used to cleaning them up and commenting everything.
I have seen your strong recommendation of NeatVideo on other threads. Is it also useful for 8 mm and Hi8 camcorder tapes? I have quite a lot of those, that were newer and digitized quite well back in the day ... I wasn't really intending on revisiting them in this round of processing, but if I can make them look even better it might be worth the $$ for NeatVideo. -
Working up some demos now; hope it doesn't take too long (many interruptions here). I'm not the only user who recommends NeatVideo. It's popular worldwide and has plugins for VirtualDub, Adobe Premiere Pro/AfterEffects, Vegas Pro, etc., and has been around a long time, so it's not small-time goods. The Home and Pro editions have the same features and interface, although the Home edition handles only up to 1280x720. If you want unlimited frame size you need the Pro edition -- but, really, I hate and despise working with 1920x1080. Others might differ, but if you can see the difference with properly processed video on anything up to a 60" TV, you're either darn good or your face is touching the screen.
You don't have to make up your mind about it anytime soon. There's a trial edition. But some cautions:
Don't use NeatVideo for everything. There are times when it's unnecessary and/or inappropriate. People buy it and try to use it as their only filter -- a big mistake. Another caveat: never use its default settings. They are far too powerful and will destroy your videos when used in that manner. Like many other big-time, complex filters such as MCTemporalDenoise, TemporalDeGrain, QTGMC, etc., that have dozens of parameters and noise "targets", they can creep along for hours (or days) when used incorrectly and make jello of your images. I used NeatVideo as the very last step on the wedding intro demo I posted, at very low power. It processed that clip at 20fps on my i5 PC, which is pretty fast for a heavy-duty plugin. The "prep" steps that preceded it ran less than 6 fps.
I'll be working up a demo today using your intro.avi, which is a pretty terrible piece of video. But that sort of material makes for a decent learning tool.
-
Yes. Clipped brights will bang against the edges at the right-hand side. Avisynth's histograms usually deal with the luma component (brightness), less with chroma. But colors can be out of spec as well, such as when the video is rather dark and one or more colors is over saturated.
Okay, here's a demo and some scripts on how to use a couple of image filters in Avisynth. Understand beforehand, if you will, that you can make a video look however you want. There's no bible. The objective here is to demonstrate what the filters can do. The rest is your choice. You own the videos.
I'm working with the last of the three camera shots in your wedding intro. It's the worst of the shots.
YUV histograms look at the way video is stored. RGB histograms look at the way it displays. Your PC monitor and TV don't display YUV; they all display an RGB colorspace. We humans don't see in YUV. We see Red Green Blue (RGB). The two colorspaces behave differently.

To simplify: a YUV space stores a somewhat compacted version of the extremes of light and dark. When converted to RGB, YUV values at the dark end are expanded; the same thing happens at the bright end. In apps like VirtualDub, YUV darks and brights are expanded to RGB 0 to 255. If the stored YUV values are already 0 to 255, you can see what happens to the darkest and brightest values -- they get expanded beyond the range that RGB can display. Some darkish gray details turn to solid black and get "cut off" below a certain point, so dark detail is lost in black muck. We say that those values are crushed. Often, they're crushed to the point where the original details can't be retrieved; all you get is distorted values (noise). You can make that stuff brighter, but it just becomes lighter gray blobs with no detail. The same thing happens at the bright end; detail gets lost, often even changing hue; we call that clipping. Photogs often use the same words (crushed and/or clipped) to mean the same thing. Some folks (yours truly) prefer to say that darks get crushed and brights get clipped. But both words describe the same problem at different ends of the spectrum.
Many players, especially VCR's, "enhance" contrast to the point of killing darks and blowing away brights. To some people this looks cool on TV, but to an experienced eye it's ugly and kinda silly, not to mention "inaccurate". Of course with filters you can make your video look any way you want. During capture, the idea of adjusting levels at the outset is to help the incoming video adhere to a proper 16-240 range. Save a lot of headaches later. This would also apply to intended PC-only display at RGB 0-255.
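To put rough numbers on that expansion (a sketch of the standard studio-to-full-range luma formula, not VirtualDub's exact internals): Y 16-235 is stretched to RGB 0-255, so every stored value at or below 16 lands on 0, and whatever detail lived down there is indistinguishable afterward.

```python
def expand_luma(y):
    """Studio-range luma (16-235) stretched to full-range 0-255, clamped.
    Everything at or below Y=16 collapses to 0 -- the 'crushed' detail."""
    v = round((y - 16) * 255 / 219)
    return max(0, min(255, v))

# Three different dark details at or below Y=16 all become the same black:
print([expand_luma(y) for y in (5, 10, 16)])     # -> [0, 0, 0]
# In-range values keep their separation:
print([expand_luma(y) for y in (17, 100, 235)])  # -> [1, 98, 255]
```

Raising brightness after this point just lifts those identical zeros together, which is why crushed detail turns into flat gray blobs rather than recovered picture.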
The image below is frame 514 of the intro clip, interlaced and unprocessed. The black borders and head switching noise at the bottom of the frame have been cropped to keep those blacks from affecting histograms. The crop functions have a few rules, depending on colorspace and whether the video is interlaced or not. Details on the Crop() function are here: http://avisynth.nl/index.php/Crop. The same info is in Avisynth's online help (go to "Programs" -> "Avisynth 2.5", expand the program listing and click "Avisynth Documentation". Most users have a shortcut to that subfolder on their desktop). The website info is often more up to date, but most of the doc doesn't change.
[Attachment 23514 - Click to enlarge]
To crop the borders off this image I used the following:
Code:
AviSource("Drive:\path\to\video\wedding_intro_2.avi")
Crop(10,0,-6,-8)
The image below contains 3 histograms based on the above image. On the left is the Avisynth "Levels" histogram, the middle is Avisynth's "Classic" mode (waveform), and the right is VirtualDub's ColorTools RGB graph. They all reveal the same problem. Each histogram has a left border (darks) and right border (brights). You'd want to keep data out of those borders to avoid crushing and clipping, although RGB can extend down to RGB 0 if you don't think it looks too "dank" down there. In all three histograms, data overflows into the left borders. You can see crushing in the image itself: the darks have no detail; they're inky black. In the RGB histogram you can see brightness (the white band) and all three colors bleed into the left and climb up the left wall. In all cases, RGB data that peaks against the left side and starts wall-climbing is cut off (crushed). The RGB histogram also reveals that blue is both crushed and clipped. All the bands are low-lying, indicating either a dark image or oversaturation; in this case, we have both. Blue is so dim you can hardly see it in the RGB histogram; even the background scenery is blue. In the image you can also see chroma shift and bleed, and bright halos along building edges.
[Attachment 23516 - Click to enlarge]
My first step is to use ColorYUV to improve things. The parameters and subfunctions of ColorYUV are here http://avisynth.org.ru/docs/english/corefilters/coloryuv.htm and in Avisynth's local docs. I started experimenting with ColorYUV by reducing contrast ("cont_y=-20"), which "shrinks" the histogram toward the middle of the graph, making darks look brighter. That would still look a little "dim" on the left-hand side, so I used luma offset ("off_y") to nudge the data a little to the right. Then I greatly reduced blue contrast (saturation) with "cont_u=-120".
Code:
ColorYUV(cont_y=-20, off_y=6)
ColorYUV(cont_u=-120)
[Attachment 23518 - Click to enlarge]
Now we can see that there really is some dark detail "down there", and we have the YUV histogram with the 16-240 limit (below). The RGB histogram says there's still some darkness to deal with, but we're getting there. In truth, RGB does leave you with a little leeway at either end. But, still, not quite where I'd like it.
[Attachment 23519 - Click to enlarge]
Continued, next post. . . .
-
There's another handy Avisynth image tool (SmoothAdjust.dll) that will take us farther. The plugin has a function called SmoothLevels(). Here, I've added SmoothLevels to the code, using the plugin's input/output default values but adjusting a few of its other defaults to induce some dithering to smooth out the histogram spikes a little:
Code:
ColorYUV(cont_y=-20, off_y=6)
ColorYUV(cont_u=-120)
SmoothLevels(0, 1.0, 255, 16, 235, chroma=200, limiter=0, tvrange=true, dither=100, protect=6)
[Attachment 23520 - Click to enlarge]
Now we have even more detail, but the image looks washed out and the first 4 parameters of SmoothLevels (which are defaults) leave black levels too high. Below, the histograms show us what's happening: RGB looks okay, but there's not much going on at the bright end, and we still have some leeway remaining below the RGB 16 border.
[Attachment 23521 - Click to enlarge]
To fix this a bit, we know we can adjust contrast and nudge values left and right with ColorYUV, and SmoothAdjust has some tweaks available. The leading parameters of the SmoothLevels statement adjust the dark and bright ends. The first numbers shown in the above SmoothLevels statement work in the following order:
SmoothLevels(dark input, gamma, bright input, dark output, bright output....)
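For intuition only, those five values behave like a generic levels transfer. Here's a rough Python model of that mapping (my own sketch of the textbook formula, not SmoothLevels' actual internals, which add smoothing and dithering on top):

```python
def levels(y, dark_in, gamma, bright_in, dark_out, bright_out):
    """Generic levels transfer mirroring the 5 leading SmoothLevels arguments:
    normalize against the input range, apply gamma, rescale to the output range."""
    x = (y - dark_in) / (bright_in - dark_in)
    x = max(0.0, min(1.0, x))        # clamp before the gamma curve
    x = x ** (1.0 / gamma)           # gamma > 1 lifts midtones, < 1 darkens them
    return round(dark_out + x * (bright_out - dark_out))

# SmoothLevels(10, 0.90, 255, 16, 235): input 10..255 mapped into 16..235,
# with gamma 0.90 pulling the midtones down slightly.
print(levels(10, 10, 0.90, 255, 16, 235))    # -> 16
print(levels(255, 10, 0.90, 255, 16, 235))   # -> 235
```

So raising dark input crushes more shadow into the output floor, raising dark output lifts the floor itself, and gamma bends everything in between without touching the endpoints.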
By fiddling with ColorYUV and SmoothLevels, and looking at the effects in the histogram and just by looking at the image, I finally came up with this arbitrary code. One can always tweak to one's heart's content, and overall balance and levels are ultimately a matter of preference. You nudge offset up or down, raise and lower gamma, play with the dark input and dark output, and so on, until you get what you want. Adjusting values, observing the results, and getting a feel for which values control different aspects of the image is really the only way to learn to use these filters. It's a little iffy in the beginning, but think about this: if you were using simple image controls in an NLE, you'd have the same problem figuring it out, but you would be working with far more limitations. One of the very last steps was to increase impact by raising luma's contrast and offset with ColorYUV, which meant readjusting SmoothLevels one more time. This is what I came up with:
Code:
ChromaShift(C=-4)
ColorYUV(cont_y=55, off_y=40) #,gamma_y=-10)
ColorYUV(cont_u=-150)
SmoothLevels(10, 0.90, 255, 16, 235, chroma=200, limiter=0, tvrange=true, dither=100, protect=6)
SmoothTweak(hue1=-5, saturation=1.2)
[Attachment 23522 - Click to enlarge]
Below, the histograms look a little different now. YUV is well within the 16-240 limit. The RGB histogram shows you how YUV darks and brights get expanded in RGB. Poor old blue is hopeless; the camera's autowhite and AGC have blown it apart, but it does look a little better and you do have that 16-240 leeway. The other bands are in decent shape; there's a slight bleed against the left but, really, there's no detail there anyway. For overall effect, you often have to compromise with a faulty source.
[Attachment 23523 - Click to enlarge]
D f514 YUV RGB.png
Actually the Blue overshoot can be controlled better and the midtones spiffed up just a bit with another filter, but you'll need RGB to use it (Gradation Curves plugin for VirtualDub). Won't get into that now. It allows you to very precisely control specific areas of an image. I used it to cut off blue at RGB 8 and RGB 240. Here is frame 514 with the borders restored:
[Attachment 23524 - Click to enlarge]
The clip has not been denoised yet. Question: Will you have to go through this much trouble with "normal" video? Not likely most of the time, but this one is in really bad shape. Besides other problems, the building is shot from the shaded side and the sky behind it is very bright -- it's a contrast range that's too wide for most video. Or, you can just stop at the first step in the previous post and be done with it.
Now...after all that, there is another problem (always!). These filters won't ride at all well with the other camera shots. The other shots look completely different, almost as if they were from another video. Also, the fade to black at the end will be too green (but that can be fixed, too).
-
Thank you again. There is much to process here. I will have a fuller reply in a bit.
I understand that the videos are mine, as is the final product. These tutorials are a big expense for you, but they are exactly what I need.* Getting the software, scripting, and being willing to play with it until I'm happy are all skills I already have. Recognizing what is wrong, even knowing what keyword to use to look it up, and the borderlines of filters' capabilities is where I'm weak.
I do intend on eventually knowing enough to do this on my own. Having said that ... none of my videos will be "normal". All of my normal ones were burned to DVD many years ago: probably too compressed to be helped at this point. The only ones I have around are the tapes that gave me big enough problems to keep around for hardware and software advances.
* Part of my profession is programming statistical runs for economic and financial data. I have given hundreds of tutorials like this thread. My fingers are crossed that I've paid forward enough to get help from the people on this forum when I need it. -
Some things seem complicated the first time around. After you've played for a while, you look back and wonder what the fuss was all about.
Long ago I found two good tutorials on what histograms tell you about images. You'd think video sites would cover simple essentials like histograms (which also come in the form of waveform monitors, etc. -- same info, different pictures). The two brief illustrated lessons deal with still photo and digital cameras, but the main point is: the principles and operations are the same for graphics of all kinds.
http://www.cambridgeincolour.com/tutorials/histograms1.htm
http://www.cambridgeincolour.com/tutorials/histograms2.htm
-
Questions:
1) Does this mean that the output settings of the various levels filters should generally be set the same if the target for output is a TV as they are when the target is a PC?
2a) So, when doing histograms, I should always crop off areas on the perimeter that I'm not going to keep at the end of processing, right?
2b) Should I save at this cropped size, or should I go back to an uncropped video once I'm happy with the levels (and recrop when I'm all done)?
3a) I get the height. Not getting the width. Doesn't this say, take 10 off the left, and then take 6 off both sides? That would get me down to 458. Yet, when I run this it shows 472. Where is my math wrong?
3b) Also, is the -6 offset taken in addition to the 10 on the left (i.e., 16), or does it do whichever is larger (i.e., 10).
================================================================
4a) I have seen you use the 235 value before. Shouldn't it be (more like) 240?
4b) (I understand dithering from other fields). In this case is dithering actually changing the pixels, or just the way they are summarized in the histogram? If it's the former, is that change permanent?
5) Isn't Chromashift() superfluous here? Couldn't you have just done the same thing with different arguments on the other functions?
Everything else is clear to me. -
Separate question: given what you've said about there being really like 3 different views in this clip alone, should I be thinking (long-term) that in all my videos if the view/perspective of the camera shifts, then I should plan on processing that new portion differently than previous ones?
-
Many people do process sections separately. But it depends on how "different" they are. If a section has small defects or no severe problems, you could use it as-is or perhaps shelve it for a later day and just live with it.
-
This refers to setting levels during capture. The two controls used are brightness (controls mostly black-level) and contrast (controls mostly brights). If your incoming signal is smashing against one or both sides of a capture histogram, it indicates that detail is being crushed or clipped during capture. Those details can't be recovered later.
Most video is targeted for broadcast standard 16-240, whether it's for PC or TV display. Think about it: when you buy a retail DVD, it looks as nice on your TV as it does on your computer. If ultimately you want the full 0-255 range, it would look OK on a PC, but not on many TV's. Even at full 0-255, you see clipped out-of-spec color all the time on the web. 16-240 does have a little leeway at either end, but 0-255 does not. YV12 standard video using standard colorspaces is expanded at the dark and bright ends. If the video is already 0-255, you have no leeway.
In this case the cropping is done to keep black borders and flashes of bright light in the head switching noise from affecting your analysis. Once you have the settings you want, you can disable the crop and histogram code lines.
For ultimate output, head switching noise and uneven borders are cropped and replaced with black borders. Also, color correction changes border colors as well. Restore the original frame size by replacing borders. Don't resize to eliminate borders; resizing alters the aspect ratio and proportions of the original image. And even the best resizers are imperfect.
The syntax for Crop() is Crop(left,top,right,bottom). Right side and bottom are given usually as negative numbers (because movement inward from the right and upward from the bottom involve negative x and y coordinate values).
The code removes 10 pixels from the left and 6 from the right, = 16 pixels off the total width. 8 pixels are cropped off the bottom. After making the settings desired, the crop statement and the histogram statement were disabled, and the video was processed at its original frame size.
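To spell out that arithmetic, a quick sketch (`cropped_size` is my own helper, mirroring Avisynth's Crop(left, top, right, bottom) sign convention; the 488x480 input size is assumed purely for illustration):

```python
def cropped_size(width, height, left, top, right, bottom):
    """Avisynth-style Crop(left, top, right, bottom): negative right/bottom
    values trim inward from those edges. -6 removes 6 columns from the right
    only, not 6 from each side."""
    return (width - left + right, height - top + bottom)

# Crop(10, 0, -6, -8) on an assumed 488x480 frame:
print(cropped_size(488, 480, 10, 0, -6, -8))   # -> (472, 472)
```

The -6 applies once, to the right edge only, which is why the result is width minus 16 rather than width minus 22.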
You adjust the bright output depending on the situation. I could just as well have done the reverse, meaning to set bright input = 235 and bright output = 255 -- which would expand the bright end slightly, rather than rein it in. You can use SmoothLevels parameters in a variety of ways to tweak the extremes. There are other SmoothAdjust functions that get even more complex.
Dithering is used to smooth areas where stretching or compressing the original luma and chroma values results in a loss of smooth gradation between changing colors. For instance, a blue sky or a white wall, and even skin shadows and contours, are never exactly one color over their entire area; there is almost always some level of smooth hue change or gradation. Dithering smooths out those areas by interpolating and "mixing" values that make for smoother gradations and help to prevent banding artifacts. It's not a complete solution (there are specific anti-block and anti-banding filters available that are more efficient), but it helps. The changes are permanent.
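A toy illustration of the banding problem that dithering attacks (pure Python, my own sketch; real filters use far smarter noise than uniform random): stretching a narrow range leaves big gaps between the reachable output values, so a flat patch lands on exactly one output level and adjacent areas jump between levels in visible bands. A little sub-step noise added before rounding spreads the patch over neighboring levels so the steps average out.

```python
import random

def stretch(y, lo, hi, dither=0.0):
    """Expand values in [lo, hi] to 0-255; optional +/- dither noise (in input
    units) is added before rounding so pixels straddle the quantization steps."""
    v = y + random.uniform(-dither, dither)
    v = (v - lo) * 255 / (hi - lo)
    return max(0, min(255, round(v)))

random.seed(1)
# An 8-bit range of 100..120 stretched to 0..255 has steps ~12.75 apart.
patch = [110] * 1000                                      # one flat stored value
hard = {stretch(y, 100, 120) for y in patch}              # a single output level
soft = {stretch(y, 100, 120, dither=1.0) for y in patch}  # several mixed levels
print(len(hard), len(soft))
```

Viewed from a distance, the mixed levels in the dithered patch blend back into a smooth gradient instead of hard-edged bands.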
The other functions don't address chroma shift. Chroma shift is displacement of chroma data. Look at the earlier photos in the demo: red and blue are shifted to the right about 4 pixels. ChromaShift() doesn't "change" pixel colors; it literally shifts the location of pixel data. In this case the shift wasn't perfect -- in YUV, you can shift pixel location only by even numbers. This code shifts chroma left by 4 pixels and overlays the shift onto the original image. I should also have shifted chroma upward by 2 pixels, which would be "L=-2", but I forgot to do it (!).
-
I have a workflow question. Suppose my AVIsynth script looks like this:
Code:
AviSource("p:\\wedding_intro_2.avi")
ConvertToYV12(interlaced=true)
Crop(10,0,-6,-8)
Histogram(mode="levels")
Yet VirtualDub has input and output panes.
Is there a way to write my script so that I can see the before and after in the input and output panes within VirtualDub? Or am I supposed to take a screen shot? -
Something is amiss with your concept of VirtualDub. VirtualDub only takes what Avisynth or any other frameserver hands to it. VDub isn't executing any line in that script -- Avisynth is doing all the work. VirtualDub just shows the results.
If you add some code to that script, such as adjusting contrast or some other change, you'll see the histogram change when you re-run the script. If you want to change the script and see what happens to the same frame you are currently viewing, but without going all the way back to the start of the run, use "File" -> "Reopen video file" (or hit F2). If you are running a long script that is slow in executing, it might take a few seconds for that same frame to refresh in VDub's window. If you spend all day hitting F2 dozens of times, eventually VDub will run out of cache memory and might freeze or crash.
To see a before and after, you need to invent two names in your script, one for "before" and one for "after", and the two versions have to be the same frame size, etc.
Code:
AviSource(vidpath + "test 140205a.avi")
ConvertToYV12(interlaced=true)
Crop(10,0,-6,-8)
b1 = last
ColorYUV(cont_y=-25)   # <- reduce contrast
SmoothLevels(16, 0.90, 255, 16, 235)
a1 = last
# ---- add histograms to each version ----
b1 = b1.Histogram(mode="levels")
a1 = a1.Histogram(mode="levels")
# StackVertical(b1, a1)    # <- stack the two versions one above the other
StackHorizontal(b1, a1)    # <- stack the two versions side by side
return last
-
Thanks. This makes a lot of sense.
I think I'm ready to move on to the next steps ... -
I am also thinking that I should be proceeding by having an original uncompressed video, and that while I build my scripts I may save a new copy of that in any format I like to view my progress.
But, as I build my script, I should be starting each time with that original, adding a few more lines of code, and then generating another new copy.
Then, when I'm satisfied that I've done everything in my script that I need to, I should delete all those in progress copies. Then rerun the full script one last time, starting from the original, and at the end outputting a final (presumably compressed) copy that I'd distribute.
Does this workflow sound correct? -
Avisynth is always over my head, and I get diZZzzzzy. sanlyn did a good job.
just a bit funny...
Does this ... sound correct? -
The other submitted examples looked pretty good as well, all I did extra was some color work and fixing borders. Avisynth isn't all that difficult, but at this point it's all entirely new for you. I don't think any of the work that was submitted so far was especially exotic. They use very common routines and filters. But that kind of processing would come later. First, you need a capture that helps preserve decent luma and chroma ranges, and especially avoids disturbances like dot crawl and herringbone. Cleaning those problems is a very lossy process.