If no macroblocks are found, that means no compression (i.e. not MPEG-2), and I'll admit that would be a mystery to me (how this is achieved).
Perhaps there is a condition in the drivers that states "if output = mts use mpeg2, else use raw yuv".
*** DIGITIZING VHS / ANALOG VIDEOS SINCE 2001 *** GEAR: JVC HR-S7700MS, TOSHIBA V733EF AND MORE -
-
Why is this so hard to believe? My Hauppauge PVR-250, for example, has two chips, an analog capture chip that produces uncompressed YUV video, and an MPEG encoder chip that takes raw video from the first chip and outputs an MPEG stream. When Hauppauge's WinTV is in preview mode the raw output from the first chip is displayed. When WinTV is capturing it switches on the second chip and captures the MPEG stream. But there's no reason the raw video stream can't be captured -- it's just that WinTV doesn't support it. For example, I can build a capture graph with GraphStudio to capture the raw YUV video.
The PVR-150 is an update to the PVR-250 design with both chips integrated into one piece of silicon. But it still has the separate functionality (raw video, MPEG). The PVR-500 is basically two PVR-150 chips on a single board. -
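To make the two-path design concrete, here is a tiny sketch (hypothetical names, not real DirectShow filter or driver identifiers) of the choice described above: the application either taps the analog chip's raw YUV output, or routes it through the on-board MPEG encoder.

```python
# Hypothetical model of a two-chip capture card. Stage names are
# illustrative only; they are not actual driver or filter identifiers.

def build_capture_path(capture_mpeg):
    """Return the ordered processing stages for one capture session."""
    stages = ["analog front end -> raw YUV"]
    if capture_mpeg:
        # Hardware cards feed the raw stream into the on-board encoder,
        # so the PC receives an already-compressed MPEG stream.
        stages.append("on-board MPEG-2 encoder -> MPEG stream")
    return stages

print(build_capture_path(True))   # WinTV-style MPEG capture
print(build_capture_path(False))  # GraphStudio-style raw YUV capture
```

The point is simply that the encoder is an optional stage: nothing forces the raw stream through it.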
Okay, so it's two chips with two possible outputs. Now that makes sense; case closed, then.
I confirm there are no macroblocks (I used Histogram(mode="luma"), as always). -
With proper bitrates, you can have MPEG-2 video with no visible macroblock borders.
Remember that "macroblocks" are part of the encoding method, not noise.
Don't confuse yourself.
Want my help? Ask here! (not via PM!)
FAQs: Best Blank Discs • Best TBCs • Best VCRs for capture • Restore VHS -
Last edited by jagabo; 17th Apr 2012 at 19:44.
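As a footnote to the point above that macroblocks are part of the encoding method: the macroblock grid is fixed by the frame size, not by quality. A quick illustrative check in Python:

```python
# MPEG-2 partitions every frame into 16x16-pixel macroblocks regardless
# of bitrate; only how coarsely each block is quantized decides whether
# the block borders become visible.

def macroblock_grid(width, height, block=16):
    # Ceiling division: partial edge blocks are padded to full blocks.
    cols = -(-width // block)
    rows = -(-height // block)
    return cols, rows

cols, rows = macroblock_grid(720, 576)  # full-resolution PAL frame
print(cols, rows, cols * rows)          # 45 36 1620
```

So a well-encoded PAL frame still contains 1620 macroblocks; you just can't see their edges.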
-
Okay, so it's two chips with two possible outputs. Now that makes sense; case closed, then.
There were two major players making those decoder chips (think of them like a CPU or GPU) during those years: Conexant and Philips (NXP). Conexant was the first of the two to introduce a 10-bit ADC (analog-to-digital converter), very popular in the CX23880/3, versus Philips' very popular 9-bit ADC in the SAA713x and newer, plus a 10-bit one in the SAA716x. Another strong player here is ATI (with a 12-bit ADC), and in the past Matrox (I know many will say ATI, but look at their market share for TV-card chips and you will see what I am talking about: less than 7% globally). All these companies sell their processors to the card makers (Hauppauge, Asus, Compro, Sapphire, Pinnacle, KWorld and others), the same way ATI and Nvidia sell their GPUs to board makers.
The thing is that this decoder does all the input and output work: IF demodulation, video signal processing and decoding, audio/video interfaces and much more. It handles the signal standard (PAL/NTSC), the resolution (720x576 or higher for HD), audio decoding and so on.
Now the fun part: some variants of these decoders (processors) have programmable "ports" on the system bus for MPEG encoders/decoders and/or TV tuners, which can be made by the same company as the main decoder or by a third party. Some TV-card manufacturers use this and integrate another chip on the card that provides the MPEG decoding/encoding capability, interacts with the main processor/decoder, and uses the port to communicate and do the job in real time. Because this chip is on the card, as I said before, it is very fast at encoding to MPEG-2 or other formats (newer chips can even do H.264), and that's why CPU usage is so small during encoding: the PC's CPU is not used for encoding at all.
So for a "hardware based" TV card, it is only software that controls the usage of the decoders, including the MPEG one (turning them on or off). As jagabo said, if you "tell" the card that you are using only the main processor, the software ignores the second chip (the MPEG encoder/decoder) and uses only the main one, the same as it would if there were no second chip, like the "software" based cards.
That is why there is no MPEG-2 compression in this case: the second chip is not used at all, only the main one, and you get exactly the same result as with any software-based card.
I am not a hardware or chip-programming expert, so I can't say exactly how this happens, but there are tons of web pages that explain both decoders and their operation in detail. -
Unless I'm missing something, these captures are too dark (capture 1 very destructively so), and have had their luma range digitally scaled and digitally clipped before saving to AVI.
btw, as part of the restoration, you need to move the chroma up by a few lines.
Cheers,
David. -
The luma is OK, it's the colors that are crushed. Playing with colors during capture is rather klutzy, any chroma filters you use would affect colorspace and slow the capture - causing more problems than it cures. That aspect can be fixed in YUV to a certain degree and looks better after some twiddling. This is what I used for the dinner scene:
Code:
ColorYUV(cont_y=+18)
ColorYUV(off_v=-1, off_u=2, gain_v=-5, gain_u=9)
SmoothLevels(0, 1.2, 255, 15, 240, tvrange=false)
But it's too late for that now. I'm running some scripts that try to use some negative masking. You can do that in Photoshop or AfterEffects, not so easy in Avisynth, and I'm still learning this. Will have something later today. I'm still struggling with that herringbone garbage, setting filters to smooth it out without making the whole video look denuded.
Another problem is the out-of-focus effect, especially on motion. The video is interlaced. If you SeparateFields() or bob the video, you'll see that every second field is softer and has more distorted detail and edge problems than the other field (I don't remember whether the "bad" field is even or odd). I'm running QTGMC with various settings to see if a deinterlace/reinterlace pass will interpolate cleaner fields; to a degree it's looking better so far. I have to be careful about using sharpeners to retain detail: sharpeners make the herringbone worse and tougher to deal with. The herringbone is also cyclical, about 4 to 6 frames per cycle.
The angular waves (according to asesinato) aren't on other tapes, just this one. It might possibly have come from the camera, which had some bad sensors that produced motion trails on very bright objects. I've seen this herringbone junk before; it's not quite like the fine-mesh crosshatching you get with FM noise. Rather it's usually caused by improper tape storage. A tape stored on or near high-flux magnetic devices (TV picture tubes, subwoofers, etc.) has its magnetic layer affected in this way. Heat damage can also cause it. I've heard some people say that carrying tapes thru airport xrays causes this as well, but I've seen no published proof of this. Simply cleaning the tape won't fix it. I just finished a long project that had similar wave patterns in the first few minutes of damaged tape. It's next to impossible to get rid of it. Avisynth has no filters that address it directly. You can clean some of it with FFT3D, but at strong settings there won't be much video to watch afterwards. So far the only filter that makes it tolerable is NeatVideo's temporal settings.
Still running some scripts today. Will post some results later. -
The luma is not OK. It's crushed at Y=16. Although, it might be mostly the diagonal line noise that was below Y=16.
-
diagonal line noise
-
Right, it's not OK in the original (neither are some of the high spots). Some darks are crushed by the camera's auto-exposure. Not much one can do about that, but in YUV I retrieved a little detail. Still twiddling with that. A big headache is the mixed lighting and that bright window. I find it impossible to correct entirely for the people in the room, so I guess it'll just have to be somewhat warmish inside to keep people from turning blue. You can't just throw more blue into it, it doesn't look right.
I'm playing with an auto white-point filter in Photoshop. The histogram settings can be transferred to VirtualDub's gradation curve. It's working but needs lots of tweaks. That daylight scene thru the window will just have to go cyan out there. If the auto features of the camera had been disabled in the first place, that's the way that outdoor scene should look. Will try to get up some images to show what's going on. -
The analogue crushed white is nowhere near digital 235. Can't be fixed, but hasn't been broken further by the digital capture.
The digital crushed black has been generated in the capture. There are loads of pixels at 16. They should have been lower, relatively (they are in the source - they're being clipped) - or more correctly, the whole thing should have been higher.
Using converttoyv12().histogram(mode="levels") you can see spikes in the luma histogram, implying the luma range has been scaled after capture in 8-bit domain. I don't think converttoyv12 will touch the luma of a YUV source (could be wrong) - so I think there's some inadvertent luma processing in this "lossless" capture. It doesn't matter at all in practice with such a noisy file, but I doubt the OP intended to include this extra stage somewhere. AFAICT they don't know they're doing it.
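For what it's worth, that spike/gap pattern is easy to reproduce. A minimal Python sketch (the gain values here are made up, not the OP's actual settings): rescaling 8-bit luma cannot map 256 codes one-to-one onto a stretched or squeezed range, so the histogram develops empty bins (stretch) or doubled bins (squeeze).

```python
from collections import Counter

def rescale_8bit(values, gain, offset=16):
    # Simple 8-bit levels adjustment with rounding and clipping, as a
    # capture chain might apply it after digitization.
    return [max(0, min(255, round(offset + (v - offset) * gain)))
            for v in values]

codes = list(range(16, 236))                   # every legal luma code once

stretched = Counter(rescale_8bit(codes, 1.1))  # gain > 1: gaps appear
gaps = [y for y in range(16, 236) if y not in stretched]

squeezed = Counter(rescale_8bit(codes, 0.9))   # gain < 1: spikes appear
spikes = [y for y, n in squeezed.items() if n > 1]

print(len(gaps) > 0, len(spikes) > 0)  # both comb effects are present
```

That comb pattern in Histogram(mode="levels") is exactly what gives away a post-capture 8-bit rescale.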
You can't colour correct everything in this image. The clipped whites are actually white. Any accurate correction for the indoor image will make them not white. And even if they weren't clipped, you can't colour correct a full scene which mixes artificial/indoor lighting and natural/outdoor lighting in one hit - not even if you shot it yesterday in 1080p60!
You could isolate the whites, and correct everything else. Then the outdoor not-whites would be wrong, but the indoor images could be subjectively a little better. There's so little saturation and blue that it's not worth doing too much IMO, but I'm sure you'll work your magic sanlyn and surprise us all!
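A toy illustration of that isolate-the-whites approach (the threshold and the "correction" are invented for the example, not a real grading recipe): mask near-clipped luma and only touch the rest.

```python
WHITE_THR = 230  # treat near-clipped luma as "white" and leave it alone

def warmth_correction(y, u, v):
    # Hypothetical cast correction: nudge chroma a little toward neutral.
    return y, u + 2, v - 3

def correct_except_whites(pixels):
    # Apply the correction everywhere except the masked whites.
    return [(y, u, v) if y >= WHITE_THR else warmth_correction(y, u, v)
            for (y, u, v) in pixels]

frame = [(240, 128, 128),   # clipped white: preserved as-is
         (90, 120, 140)]    # warm midtone: corrected
print(correct_except_whites(frame))  # [(240, 128, 128), (90, 122, 137)]
```

The same idea in Avisynth would use a luma mask to merge corrected and uncorrected clips; this just shows the selection logic.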
Cheers,
David. -
I don't know; from what I've been reading about digital cameras/photos, the more sophisticated (expensive) gear can capture in RAW format. Illegal values aren't clipped, they're stored. But you need special software to retrieve out-of-bounds info. Average consumer gear will clip. IMO it looks clipped at the source. Not unusual; I see it in flash-lit Christmas snapshots of Aunt Bea and the babies all the time. But how many users think about that? Aunt Bea in the photo looks like Mister Hyde in drag and a fright wig, but if her face can be recognized it looks "great". My PC monitor chokes every time I get one of those shots in the email.
I'm discovering that the less you fiddle with the color in this part of the video, the better. Artificial light looks warm anyway. People look different in daylight than in artificial light. Thank the saints the dining area didn't have neon anywhere. -
Well...now that the levels are getting under control and the color is kind of on its way (I finally get the point: that tablecloth is not supposed to be white, but the flowers are). Now I'm seeing edge artifacts and over-filtering at the same time. And this isn't even into NeatVideo yet! Time to learn how to use the dither and UnFilter plugins. Back to Step 1.
Last edited by sanlyn; 18th Apr 2012 at 18:40.
-
It's a trivial matter for the OP to recapture and adjust the capture device's proc amp to see if he can fix the black level.
-
I think so. These captures are the best ones yet, but those interior shots are really dark and dank. Shouldn't the capture software (is it virtualdub?) have access to the card's image controls? Or GraphEdit should, but I've never used it.
The man on the right, with his back to us: that's a very dark brown solid-backed chair, and one next to it is dark but more reddish. At least the dark colors are showing up, but no detail.
It's amazing that every new run of my plugins is a matter of "tweaking down" the values, not making them stronger. I don't know why the faces are so out of focus, though. Maybe the camera wasn't even focused correctly (?). A few faces are just smears. Yet I've stayed away from sharpeners as much as possible because of those diagonals. Adding some grain might help here. Back to work.
Last edited by sanlyn; 18th Apr 2012 at 18:18.
-
To address the pattern noise, you can try Neural Net and/or Fan Filter
http://avisynth.org/vcmohan/NeuralNet/NeuralNet.html
http://avisynth.org/vcmohan/FanFilter/FanFilter.html
Another "brute force" method would be to apply a directional blur (or rotate the video until the "lines" are vertical and use a horizontal blur), e.g. using the "destripe" function by mp4guy:
You have to play with the values, especially thr, and sometimes using iterations with varying rad and offset works better, especially for fixed patterns that move.
Code:
AVISource()
ConvertToYV12(interlaced=true)
AssumeTFF()
SeparateFields()
AddBorders(256,256,256,256)
Rotate(65)
DeStripe(rad=2, offset=2, thr=10)
Rotate(-65)
Crop(256,256,-256,-256,true)
LimitedSharpenFaster(strength=75)
Weave()

# thr is strength, rad is "how big are the (whatevers)", offset is "how far apart are they".
# rad goes from 1 to 5, offset from 1 to 4, thr from 1 to bignumber.
function DeStripe(Clip C, int "rad", int "offset", int "thr")
{
rad    = Default(rad, 2)
offset = Default(offset, 0)
thr_   = Default(thr, 256)

# Directional (horizontal) blur whose tap pattern depends on rad and offset.
Blurred = Rad == 1 ? C.Mt_Convolution(Horizontal=" 1 1 1 ", vertical=" 1 ", u=1, v=1) : C
Blurred = Rad == 2 ? offset == 0 ? C.Mt_Convolution(Horizontal=" 1 1 1 1 1 ", vertical=" 1 ", u=1, v=1) \
                                 : C.Mt_Convolution(Horizontal=" 1 0 1 0 1 ", vertical=" 1 ", u=1, v=1) : Blurred
Blurred = Rad == 3 ? offset == 0 ? C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 ", vertical=" 1 ", u=1, v=1) \
                   : offset == 1 ? C.Mt_Convolution(Horizontal=" 1 1 0 1 0 1 1 ", vertical=" 1 ", u=1, v=1) \
                                 : C.Mt_Convolution(Horizontal=" 1 0 0 1 0 0 1 ", vertical=" 1 ", u=1, v=1) : Blurred
Blurred = Rad == 4 ? offset == 0 ? C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 1 1 ", vertical=" 1 ", u=1, v=1) \
                   : offset == 1 ? C.Mt_Convolution(Horizontal=" 1 1 1 0 1 0 1 1 1 ", vertical=" 1 ", u=1, v=1) \
                   : offset == 2 ? C.Mt_Convolution(Horizontal=" 1 1 0 0 1 0 0 1 1 ", vertical=" 1 ", u=1, v=1) \
                                 : C.Mt_Convolution(Horizontal=" 1 0 0 0 1 0 0 0 1 ", vertical=" 1 ", u=1, v=1) : Blurred
Blurred = Rad == 5 ? offset == 0 ? C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 1 1 1 1 ", vertical=" 1 ", u=1, v=1) \
                   : offset == 1 ? C.Mt_Convolution(Horizontal=" 1 1 1 1 0 1 0 1 1 1 1 ", vertical=" 1 ", u=1, v=1) \
                   : offset == 2 ? C.Mt_Convolution(Horizontal=" 1 1 1 0 0 1 0 0 1 1 1 ", vertical=" 1 ", u=1, v=1) \
                   : offset == 3 ? C.Mt_Convolution(Horizontal=" 1 1 0 0 0 1 0 0 0 1 1 ", vertical=" 1 ", u=1, v=1) \
                                 : C.Mt_Convolution(Horizontal=" 1 0 0 0 0 1 0 0 0 0 1 ", vertical=" 1 ", u=1, v=1) : Blurred

Diff = Mt_Makediff(C, Blurred)
THR  = string(thr_)
# The same limiting expression is used for every rad/offset combination.
EXPR = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +"

MedianDiff = Rad == 1 ? MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 1 0 -1 0 ", expr=EXPR, u=1, v=1) : Diff
MedianDiff = Rad == 2 ? offset == 0 ? MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 1 0 -1 0 2 0 -2 0 ", expr=EXPR, u=1, v=1) \
                                    : MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 2 0 -2 0 ", expr=EXPR, u=1, v=1) : MedianDiff
MedianDiff = Rad == 3 ? offset == 0 ? MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 1 0 -1 0 2 0 -2 0 3 0 -3 0 ", expr=EXPR, u=1, v=1) \
                      : offset == 1 ? MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 2 0 -2 0 3 0 -3 0 ", expr=EXPR, u=1, v=1) \
                                    : MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 3 0 -3 0 ", expr=EXPR, u=1, v=1) : MedianDiff
MedianDiff = Rad == 4 ? offset == 0 ? MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 1 0 -1 0 2 0 -2 0 3 0 -3 0 4 0 -4 0 ", expr=EXPR, u=1, v=1) \
                      : offset == 1 ? MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 2 0 -2 0 3 0 -3 0 4 0 -4 0 ", expr=EXPR, u=1, v=1) \
                      : offset == 2 ? MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 3 0 -3 0 4 0 -4 0 ", expr=EXPR, u=1, v=1) \
                                    : MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 4 0 -4 0 ", expr=EXPR, u=1, v=1) : MedianDiff
MedianDiff = Rad == 5 ? offset == 0 ? MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 1 0 -1 0 2 0 -2 0 3 0 -3 0 4 0 -4 0 5 0 -5 0 ", expr=EXPR, u=1, v=1) \
                      : offset == 1 ? MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 2 0 -2 0 3 0 -3 0 4 0 -4 0 5 0 -5 0 ", expr=EXPR, u=1, v=1) \
                      : offset == 2 ? MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 3 0 -3 0 4 0 -4 0 5 0 -5 0 ", expr=EXPR, u=1, v=1) \
                      : offset == 3 ? MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 4 0 -4 0 5 0 -5 0 ", expr=EXPR, u=1, v=1) \
                                    : MT_Luts(Diff, Diff, mode="med", pixels=" 0 0 5 0 -5 0 ", expr=EXPR, u=1, v=1) : MedianDiff

ReconstructedMedian = mt_makediff(Diff, MedianDiff)
Mt_AddDiff(Blurred, ReconstructedMedian)
Return(Mergechroma(Last, C, 1))
}
-
Yes VirtualDub should give access to the proc amp controls. But when accessing them through VirtualDub you don't get real time feedback (ie, the capture display is frozen). But you can use GraphEdit or GraphStudio to access the capture filter and see the results in realtime in VirtualDub. You can enable the Histogram display in VirtualDub to see the YUV levels while previewing.
-
But when accessing them through VirtualDub you don't get real time feedback (ie, the capture display is frozen)
I still stand by the view that this noise pattern comes from a badly set up S-Video/composite SCART output rather than from the source tape. -
poisondeathray: you do have ways of keeping me up for days studying these ideas. I did try FanFilter a while back and just recently. Ok, didn't work then, but I might not have set it up correctly (no surprise). Will give it another try. But the example scripts given in the doc show parameters that don't occur in the table of named arguments. Anyway, it turned the test video blue.
Working with Neural Networks filters will likely be incomprehensible (I have a long history of D's in math). Will give destripe a try, too. I spent hours searching for the likes of Destripe, but it never showed up. Thanks for those, pdr. Again.
ED: FanFilter: Never mind. I just took a quick glance, and shazam! figured it out. Why couldn't I get it the first 15 times I read it? Go figure. -
Hey guys... sorry for being gone, but I broke my right arm on Tuesday, so that's why.
I have access to the proc amp settings, but what is it you want me to change? Brightness? -
You need to display a luma histogram and adjust brightness and contrast until everything falls within range 16-235. Either for the whole tape, or scene by scene.
With this much noise, I'd set it for the whole tape, and twiddle scene-by-scene in software later if needs be. As long as you don't clip anything (i.e. you keep the blackest black and whitest white in-range 16-235) you'll be fine.
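In the same spirit, here is a small illustrative helper (invented for this post, not part of any capture tool) for checking a test capture's luma samples against that rule. Pileups sitting exactly on 16 or 235 are the telltale sign of clipping, so the check requires the extremes to stay strictly inside the range:

```python
def levels_ok(luma_samples, lo=16, hi=235):
    # Clipping shows up as values piled exactly on the limits, so require
    # the darkest and brightest samples to sit strictly inside 16-235.
    return min(luma_samples) > lo and max(luma_samples) < hi

print(levels_ok([20, 100, 230]))  # True: safely inside range
print(levels_ok([16, 16, 200]))   # False: blacks crushed against 16
```

In practice you'd read the same information off VirtualDub's histogram display while previewing.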
Cheers,
David.