VideoHelp Forum




Page 4 of 8 (Results 91 to 120 of 240)
  1. Originally Posted by DB83 View Post
    I am more than happy to take your word for it. I do not have 20/20 vision so cannot make the appropriate distinction.
    Use a screen magnifier, or use an editor and zoom in. The macroblocking in the OP's TS clips is obvious. There is none in his AVI clips.
  2. If no macroblocks are found, that means no compression (not MPEG-2), and I'll admit that would be a mystery to me (how this is achieved).
    Perhaps there is a condition in the drivers that states "if output = mts use mpeg2, else use raw yuv".
    *** DIGITIZING VHS / ANALOG VIDEOS SINCE 2001**** GEAR: JVC HR-S7700MS, TOSHIBA V733EF AND MORE
  3. Member DB83's Avatar
    Join Date
    Jul 2007
    Location
    United Kingdom
    Search Comp PM
    Originally Posted by jagabo View Post
    Originally Posted by DB83 View Post
    I am more than happy to take your word for it. I do not have 20/20 vision so cannot make the appropriate distinction.
    Use a screen magnifier, or use an editor and zoom in. The macroblocking in the OP's TS clips is obvious. There is none in his AVI clips.
    Well, I think I know what macroblocking is. I deleted all the clips but, just for a fit of giggles, downloaded the second .ts clip from #26 in this topic - not the original .ts clips.

    Ran it at full screen, frame by frame, and I'll be darned if I can see any.
  4. Originally Posted by themaster1 View Post
    If no macroblocks are found, that means no compression (not MPEG-2), and I'll admit that would be a mystery to me (how this is achieved).
    Why is this so hard to believe? My Hauppauge PVR-250, for example, has two chips, an analog capture chip that produces uncompressed YUV video, and an MPEG encoder chip that takes raw video from the first chip and outputs an MPEG stream. When Hauppauge's WinTV is in preview mode the raw output from the first chip is displayed. When WinTV is capturing it switches on the second chip and captures the MPEG stream. But there's no reason the raw video stream can't be captured -- it's just that WinTV doesn't support it. For example, I can build a capture graph with GraphStudio to capture the raw YUV video.

    The PVR-150 is an update to the PVR-250 design with both chips integrated into one piece of silicon. But it still has the separate functionality (raw video, MPEG). The PVR-500 is basically two PVR-150 chips on a single board.
  5. Okay, so it's two chips and two possible outputs. Now that makes sense; case closed then.

    I confirm there are no macroblocks (I used Histogram(mode="luma"), as always).
  6. Video Restorer lordsmurf's Avatar
    Join Date
    Jun 2003
    Location
    dFAQ.us/lordsmurf
    Search Comp PM
    With proper bitrates, you can have MPEG-2 video with no visible macroblock borders.

    Remember that "macroblocks" are part of the encoding method, not noise.

    Don't confuse yourself.
    Want my help? Ask here! (not via PM!)
    FAQs: Best Blank Discs | Best TBCs | Best VCRs for capture | Restore VHS
  7. Originally Posted by lordsmurf View Post
    With proper bitrates, you can have MPEG-2 video with no visible macroblock borders.
    Not at any bitrate a PVR-150, 250, or 500 can output.

    In any case, it doesn't really matter for the OP. With his capture device he's getting far better caps as lossless AVI than he can get as MPEG files.
    Last edited by jagabo; 17th Apr 2012 at 19:44.
  8. Oops, wrong thread.
  9. Okay, so it's two chips and two possible outputs. Now that makes sense; case closed then.
    A little light on the matter (I see a lot of you guys are confused). All TV cards are built around one central "processor", or decoder.
    There are two major players making those decoders (think of them like a CPU or GPU): Conexant and Philips (NXP). Conexant was the first of the two to introduce a 10-bit ADC (analog-to-digital converter), very popular in the CX23880/3, versus Philips' 9-bit ADC in the very popular SAA713x and newer chips, plus their 10-bit SAA716x. Another strong player here is ATI (with a 12-bit ADC), and in the past Matrox (I know many will point to ATI, but look at their global market share for TV-card chips and you will see what I mean: less than 7%). All these companies sell their processors to the card makers (Hauppauge, Asus, Compro, Sapphire, Pinnacle, KWorld and others), the same way ATI and Nvidia sell their GPUs to board makers.
    The thing is that this decoder does all the input and output work: IF demodulation, video signal processing and decoding, audio/video interfaces and much more - the signal standard (PAL/NTSC), the resolution (720x576 or higher for HD), audio decoding and so on.

    Now the fun part: some variants of these decoders have programmable "ports" on the system bus for MPEG encoders/decoders and/or TV tuners, which can be made by the same company as the main decoder or by a third party. Some TV-card manufacturers use this and integrate another chip on the card that provides the MPEG encoding/decoding, communicating with the main processor through that port and doing the job in real time. Because this chip is on the card, as I said before, it encodes to MPEG-2 (newer chips can even do H.264) very fast, and that is why CPU usage during encoding is so small: the PC's CPU is not used for encoding at all.
    So on a "hardware based" TV card, it is only software that controls which decoders are used, including the MPEG one (switching them on or off). As jagabo said, if you "tell" the card you are using only the main processor, the software ignores the second chip (the MPEG encoder/decoder) and uses only the main one, exactly as it would if there were no second chip, like on "software based" cards.

    That is why there is no MPEG-2 compression in this case: the second chip is not used at all, only the main one, and you get exactly the same result as with any software-based card.

    I am not a hardware or chip-programming expert, so I can't say exactly how this happens, but there are tons of web pages that explain both decoders and how they work in detail.
  10. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    Search Comp PM
    Originally Posted by Asesinato View Post
    Now here is 3 clips of the capture of the movie, from various points in it...

    So what can we do now to make it better?
    Unless I'm missing something, these captures are too dark (capture 1 very destructively so), and have had their luma range digitally scaled and digitally clipped before saving to AVI.

    btw, as part of the restoration, you need to move the chroma up by a few lines.

    Cheers,
    David.
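    A footnote on the chroma-shift suggestion: it can be sketched with AviSynth core filters alone. This is only a rough illustration, not a tested fix; the source name and the one-line shift amount are placeholders to tune by eye (on interlaced material you would SeparateFields() first, and a dedicated plugin such as ChromaShift can do the same job if installed):

    Code:
    # Sketch: move chroma up by one chroma line (two picture lines in YV12).
    AviSource("capture.avi")                        # placeholder source
    ConvertToYV12()
    U = UToY()                                      # U plane as a luma clip
    V = VToY()                                      # V plane as a luma clip
    U2 = U.Crop(0, 1, 0, 0).AddBorders(0, 0, 0, 1)  # drop top line, pad bottom
    V2 = V.Crop(0, 1, 0, 0).AddBorders(0, 0, 0, 1)
    YToUV(U2, V2, last)                             # shifted chroma, original luma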
  11. Originally Posted by 2Bdecided View Post
    Originally Posted by Asesinato View Post
    Now here is 3 clips of the capture of the movie, from various points in it...

    So what can we do now to make it better?
    Unless I'm missing something, these captures are too dark (capture 1 very destructively so), and have had their luma range digitally scaled and digitally clipped before saving to AVI.
    Yes:
    [Attachment: levels.jpg]
  12. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    The luma is OK; it's the colors that are crushed. Playing with colors during capture is rather klutzy; any chroma filters you use would affect the colorspace and slow the capture, causing more problems than they cure. That aspect can be fixed in YUV to a certain degree and looks better after some twiddling. This is what I used for the dinner scene:

    Code:
    ColorYUV(cont_y=+18)                              # raise luma contrast
    ColorYUV(off_v=-1,off_u=2,gain_v=-5,gain_u=9)     # small chroma offset/gain tweaks
    SmoothLevels(0, 1.2, 255, 15, 240, tvrange=false) # gamma 1.2, output mapped to 15-240
    It's not a 100% fix for the darkest areas, and it still needed adjustment in RGB later. Look at the way the indoor scenes were photographed. The camera's auto exposure is on, which is bad enough for an indoor scene, but to make matters worse the camera is aimed directly at a bright window in the dinner scene. The scene outside that window is daylight, bright enough to darken the whole image. Then there are indoor lights that further make the autogain shut down the darkest areas to accommodate those lights. Overall gamma, then, is depressed. On top of that, the tape damage affects the dark end, and lens flare from the lights affects midtone detail and highlights. Autogain and auto exposure try to make everything look like an "average" luma and color range. The only way to prevent all this would have been to bring some small flood-fill lights along and set exposure manually, exposing for the people in the room instead of allowing the lights and bright window to throw everything off.

    But it's too late for that now. I'm running some scripts that try to use some negative masking. You can do that in Photoshop or After Effects, but it's not so easy in Avisynth, and I'm still learning this. Will have something later today. I'm still struggling with that herringbone garbage, setting filters to smooth it out without making the whole video look denuded.

    Another problem is the out-of-focus effect, especially on motion. The video is interlaced. If you SeparateFields() or bob the video, you'll see that every second field is softer and has more distorted detail and edge problems than the other field (I don't remember whether the "bad" field is even or odd). I'm running QTGMC with various settings to see if deinterlace/reinterlace will interpolate cleaner fields; to a degree it's looking better so far. I have to be careful about using sharpeners to retain detail. Sharpeners make the herringbone worse and tougher to deal with. The herringbone is also cyclical - about 4 to 6 frames in each cycle.

    The angular waves (according to Asesinato) aren't on other tapes, just this one. It might possibly have come from the camera, which had some bad sensors that produced motion trails on very bright objects. I've seen this herringbone junk before; it's not quite like the fine-mesh crosshatching you get with FM noise. Rather, it's usually caused by improper tape storage. A tape stored on or near high-flux magnetic devices (TV picture tubes, subwoofers, etc.) has its magnetic layer affected in this way. Heat damage can also cause it. I've heard some people say that carrying tapes through airport X-rays causes this as well, but I've seen no published proof of that. Simply cleaning the tape won't fix it. I just finished a long project that had similar wave patterns in the first few minutes of damaged tape. It's next to impossible to get rid of it. Avisynth has no filters that address it directly. You can clean some of it with FFT3D, but at strong settings there won't be much video to watch afterwards. So far the only filter that makes it tolerable is NeatVideo's temporal settings.
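    For reference, a starting point for the FFT3D pass mentioned above might look like the script below. The settings are illustrative guesses, not tested values for this clip; raising sigma fades the herringbone but also the detail:

    Code:
    # Hedged sketch: mild frequency-domain denoise with FFT3DFilter.
    AviSource("capture.avi")                # placeholder source
    ConvertToYV12(interlaced=true)
    AssumeTFF()
    SeparateFields()                        # treat fields separately on interlaced video
    FFT3DFilter(sigma=2.0, bt=3, plane=0)   # luma only; sigma and bt are starting guesses
    Weave()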

    Still running some scripts today. Will post some results later.
  13. The luma is not OK. It's crushed at Y=16. Although it might be mostly the diagonal line noise that was below Y=16.
  14. diagonal line noise
    As I see it, the noise is in the equipment or the cabling, not on the original footage. He should really do something about that first, then think about the rest.
  15. Originally Posted by mammo1789 View Post
    diagonal line noise
    As I see it, the noise is in the equipment or the cabling, not on the original footage. He should really do something about that first, then think about the rest.
    He claims other tapes don't show the same noise.
  16. Member DB83's Avatar
    Join Date
    Jul 2007
    Location
    United Kingdom
    Search Comp PM
    Originally Posted by jagabo View Post
    Originally Posted by mammo1789 View Post
    diagonal line noise
    As I see it, the noise is in the equipment or the cabling, not on the original footage. He should really do something about that first, then think about the rest.
    He claims other tapes don't show the same noise.
    More likely a poor transfer from the original camera to domestic VHS.
  17. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    Originally Posted by jagabo View Post
    The luma is not OK. It's crushed at Y=16. Although it might be mostly the diagonal line noise that was below Y=16.
    Right, it's not OK in the original (neither are some of the high spots). Some darks are crushed by the camera's auto-exposure. There's not much one can do about that, but in YUV I retrieved a little detail. Still twiddling with that. A big headache is the mixed lighting and that bright window. I find it impossible to correct entirely for the people in the room, so I guess it'll just have to stay somewhat warmish inside to keep people from turning blue. You can't just throw more blue into it; it doesn't look right.

    I'm playing with an auto white-point filter in Photoshop. The histogram settings can be transferred to VirtualDub's gradation curve. It's working but needs lots of tweaks. That daylight scene through the window will just have to go cyan out there. If the camera's auto features had been disabled in the first place, that's the way that outdoor scene would look. Will try to put up some images to show what's going on.
  18. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    Search Comp PM
    The analogue crushed white is nowhere near digital 235. Can't be fixed, but hasn't been broken further by the digital capture.

    The digital crushed black has been generated in the capture. There are loads of pixels at 16. They should have been lower, relatively (they are in the source - they're being clipped) - or more correctly, the whole thing should have been higher.

    Using ConvertToYV12().Histogram(mode="levels") you can see spikes in the luma histogram, implying the luma range has been scaled after capture in the 8-bit domain. I don't think ConvertToYV12 will touch the luma of a YUV source (I could be wrong), so I think there's some inadvertent luma processing in this "lossless" capture. It doesn't matter at all in practice with such a noisy file, but I doubt the OP intended to include this extra stage somewhere. AFAICT they don't know they're doing it.
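    Spelled out as a script (the source filter is a placeholder), the check is:

    Code:
    # Evenly spaced spikes or gaps in the luma histogram suggest the
    # 8-bit luma range was rescaled somewhere after capture.
    AviSource("capture.avi")    # placeholder source
    ConvertToYV12()
    Histogram(mode="levels")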

    You can't colour correct everything in this image. The clipped whites are actually white. Any accurate correction for the indoor image will make them not white. And even if they weren't clipped, you can't colour correct a full scene which mixes artificial/indoor lighting and natural/outdoor lighting in one hit - not even if you shot it yesterday in 1080p60!

    You could isolate the whites and correct everything else. Then the outdoor not-whites would be wrong, but the indoor images could be subjectively a little better. There's so little saturation and blue that it's not worth doing too much IMO, but I'm sure you'll work your magic, sanlyn, and surprise us all!

    Cheers,
    David.
  19. Originally Posted by sanlyn View Post
    Originally Posted by jagabo View Post
    The luma is not ok. It's crushed at Y=16. Although, it might be mostly the diagonal line noise that's was below Y=16.
    Right, it's not OK in the original (neither are some of the high spots). Some darks are crushed by the camera's auto-exposure.
    That's not necessarily the case. He should recapture and see if he can get some of those darker blacks back.
  20. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    I don't know; from what I've been reading about digital cameras/photos, the more sophisticated (expensive) gear can capture in RAW format. Illegal values aren't clipped, they're stored. But you need special software to retrieve the out-of-bounds info. Average consumer gear will clip. IMO it looks clipped at the source. Not unusual; I see it in flash-lit Christmas snapshots of Aunt Bea and the babies all the time. But how many users think about that? Aunt Bea in the photo looks like Mister Hyde in drag and a fright wig, but if her face can be recognized it looks "great". My PC monitor chokes every time I get one of those shots in the email.

    I'm discovering that the less you fiddle with the color in this part of the video, the better. Artificial light looks warm anyway. People look different in daylight than in artificial light. Thank the saints the dining area didn't have neon anywhere.
  21. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    Well... now that levels are getting under control and color is kind of on its way (I finally get the point: that tablecloth is not supposed to be white, but the flowers are), I'm seeing edge artifacts and over-filtering at the same time. And this isn't even into NeatVideo yet! Time to learn how to use the dither and UnFilter plugins. Back to Step 1.

    [Attachment: On the way.png]
    Last edited by sanlyn; 18th Apr 2012 at 18:40.
  22. It's a trivial matter for the OP to recapture and adjust the capture device's proc amp to see if he can fix the black level.
  23. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    I think so. These captures are the best ones yet, but those interior shots are really dark and dank. Shouldn't the capture software (is it VirtualDub?) have access to the card's image controls? Or GraphEdit should, but I've never used it.

    The man on the right, with his back to us: that's a very dark brown solid-backed chair, and the one next to it is dark but more reddish. At least the dark colors are showing up, but with no detail.

    It's amazing that every new run of my plugins is a matter of "tweaking down" the values, not making them stronger. I don't know why the faces are so out of focus, though. Maybe the camera wasn't even focused correctly(?). A few faces are just smears. Yet I've stayed away from sharpeners as much as possible because of those diagonals. Adding some grain might help here. Back to work.
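    On the grain idea: the usual tool is the AddGrain plugin. This is only a sketch, with a guessed strength:

    Code:
    # A touch of grain can mask over-smoothed, plasticky areas.
    AviSource("filtered.avi")   # placeholder for the filtered clip
    AddGrainC(var=1.5)          # var is a guess; raise or lower to taste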
    Last edited by sanlyn; 18th Apr 2012 at 18:18.
  24. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    The same image as above, but with levels raised only below RGB 48. You can clearly see the effects of crushed darks, especially the colorless blotches in the lower left.

    [Attachment: On the way3.png]
  25. Originally Posted by sanlyn View Post
    I just finished a long project that had similar wave patterns in the first few minutes of damaged tape. It's next to impossible to get rid of it. Avisynth has no filters that address it directly. You can clean some of it with FFT3D, but at strong settings there won't be much video to watch afterwards. So far the only filter that makes it tolerable is NeatVideo's temporal settings.
    To address the pattern noise, you can try Neural Net and/or Fan Filter
    http://avisynth.org/vcmohan/NeuralNet/NeuralNet.html
    http://avisynth.org/vcmohan/FanFilter/FanFilter.html


    Another "brute force" method would be to apply a directional blur (or rotate the video until the "lines" are vertical and use a horizontal blur), e.g. using the "destripe" function by mp4guy:

    You have to play with the values, especially thr, and sometimes using iterations with varying rad and offset works better, especially for fixed patterns that move.


    Code:
    AVISource()
    ConvertToYV12(interlaced=true)
    AssumeTFF()
    SeparateFields()
    
    AddBorders(256,256,256,256)
    Rotate(65)
    
    DeStripe(rad=2, offset=2, thr=10)
    Rotate(-65)
    Crop(256,256,-256,-256,true)
    LimitedSharpenFaster(strength=75)
    Weave()
    
    
    #thr is strength, rad is "how big are the (whatevers)" offset is "how far apart are they" rad goes from 1 to 5, offset from 1 to 4, thr from 1 to bignumber
    
    
    function DeStripe(Clip C, int "rad", int "offset", int "thr")
    {
    
        rad = Default(rad, 2)
        offset = Default(offset, 0)
        thr_ = Default(thr, 256)
    
    
        Blurred = Rad == 1 ? C.Mt_Convolution(Horizontal=" 1 1 1 ", vertical = " 1 ", u=1, v=1) : C
        Blurred = Rad == 2 ? offset == 0 ? C.Mt_Convolution(Horizontal=" 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) : C.Mt_Convolution(Horizontal=" 1 0 1 0 1 ", vertical = " 1 ", u=1, v=1) : Blurred
        Blurred = Rad == 3 ? offset == 0 ?  C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) : offset == 1 ?  C.Mt_Convolution(Horizontal=" 1 1 0 1 0 1 1 ", vertical = " 1 ", u=1, v=1) : C.Mt_Convolution(Horizontal=" 1 0 0 1 0 0 1 ", vertical = " 1 ", u=1, v=1) : Blurred
        Blurred = Rad == 4 ? offset == 0 ?  C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) :  offset == 1 ? C.Mt_Convolution(Horizontal=" 1 1 1 0 1 0 1 1 1 ", vertical = " 1 ", u=1, v=1) :  offset == 2 ? C.Mt_Convolution(Horizontal=" 1 1 0 0 1 0 0 1 1 ", vertical = " 1 ", u=1, v=1) : C.Mt_Convolution(Horizontal=" 1 0 0 0 1 0 0 0 1 ", vertical = " 1 ", u=1, v=1) : Blurred
        Blurred = Rad == 5 ? offset == 0 ?  C.Mt_Convolution(Horizontal=" 1 1 1 1 1 1 1 1 1 1 1 ", vertical = " 1 ", u=1, v=1) :  offset == 1 ?  C.Mt_Convolution(Horizontal=" 1 1 1 1 0 1 0 1 1 1 1 ", vertical = " 1 ", u=1, v=1) :  offset == 2 ?  C.Mt_Convolution(Horizontal=" 1 1 1 0 0 1 0 0 1 1 1 ", vertical = " 1 ", u=1, v=1) :  offset == 3 ?  C.Mt_Convolution(Horizontal=" 1 1 0 0 0 1 0 0 0 1 1 ", vertical = " 1 ", u=1, v=1) : C.Mt_Convolution(Horizontal=" 1 0 0 0 0 1 0 0 0 0 1 ", vertical = " 1 ", u=1, v=1) : Blurred
            Diff = Mt_Makediff(C, Blurred)
    
        THR=string(thr_)
        MedianDiff =  Rad == 1 ? MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 1 0 -1 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : Diff
        MedianDiff =  Rad == 2 ? offset == 0 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 1 0 -1 0 2 0 -2 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 2 0 -2 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MedianDiff
        MedianDiff =  Rad == 3 ? offset == 0 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 1 0 -1 0 2 0 -2 0 3 0 -3 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : offset == 1 ? MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 2 0 -2 0 3 0 -3 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 3 0 -3 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MedianDiff
        MedianDiff =  Rad == 4 ? offset == 0 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 1 0 -1 0 2 0 -2 0 3 0 -3 0 4 0 -4 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : offset == 1 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 2 0 -2 0 3 0 -3 0 4 0 -4 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : offset == 2 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 3 0 -3 0 4 0 -4 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 4 0 -4 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MedianDiff
        MedianDiff =  Rad == 5 ? offset == 0 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 1 0 -1 0 2 0 -2 0 3 0 -3 0 4 0 -4 0 5 0 -5 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : offset == 1 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 2 0 -2 0 3 0 -3 0 4 0 -4 0 5 0 -5 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : offset == 2 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 3 0 -3 0 4 0 -4 0 5 0 -5 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : offset == 3 ?  MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 4 0 -4 0 5 0 -5 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MT_Luts(Diff, Diff, mode="med", pixels = " 0 0 5 0 -5 0 " ,  expr = " X Y - X Y - X Y - abs 1 + * X Y - abs 1 + "+THR+" 1 >= "+THR+" 0.5 ^ "+THR+" ? + / - 128 +", u=1,v=1) : MedianDiff
            ReconstructedMedian = mt_makediff(Diff, MedianDiff)
                Mt_AddDiff(Blurred, ReconstructedMedian)
    
    Return(Mergechroma(Last, C, 1))
    }
    [Attachments: 60 original.png, 60 destripe.png]
  26. Originally Posted by sanlyn View Post
    Shouldn't the capture software (is it virtualdub?) have access to the card's image controls? Or GraphEdit should, but I've never used it.
    Yes, VirtualDub should give access to the proc amp controls. But when accessing them through VirtualDub you don't get real-time feedback (i.e., the capture display is frozen). However, you can use GraphEdit or GraphStudio to access the capture filter and see the results in real time in VirtualDub. You can enable the Histogram display in VirtualDub to see the YUV levels while previewing.
  27. But when accessing them through VirtualDub you don't get real time feedback (ie, the capture display is frozen)
    That is not entirely true; I suppose it depends on the card and driver. I can use the controls in real time, even while a capture is in progress in VDub, with my AVerMedia TV card; but with the Pinnacle 110 Pro TV card they were greyed out (in Windows 7 x64 at least) and I couldn't slide them in real time (on Windows XP).

    I still stand by the view that these noise patterns come from an S-Video/composite SCART output that isn't set up correctly, rather than from the source tape.
  28. Banned
    Join Date
    Oct 2004
    Location
    New York, US
    Search Comp PM
    poisondeathray: you do have ways of keeping me up for days studying these ideas. I did try FanFilter a while back and again just recently. OK, it didn't work then, but I might not have set it up correctly (no surprise). I'll give it another try. But the example scripts in the doc show parameters that don't occur in the table of named arguments. Anyway, it turned the test video blue.

    Working with neural-network filters will likely be incomprehensible (I have a long history of D's in math). I'll give DeStripe a try, too. I spent hours searching for the likes of DeStripe, but it never showed up. Thanks for those, pdr. Again.

    ED: FanFilter: never mind. I just took a quick glance and, shazam, figured it out. Why couldn't I get it the first 15 times I read it? Go figure.
  29. Member
    Join Date
    Dec 2011
    Location
    Denmark
    Search PM
    Hey guys... sorry for being gone, but I broke my right arm on Tuesday, so that's why.

    I have access to change the proc amp settings, but what is it you want me to change? Brightness?
  30. Member 2Bdecided's Avatar
    Join Date
    Nov 2007
    Location
    United Kingdom
    Search Comp PM
    You need to display a luma histogram and adjust brightness and contrast until everything falls within the 16-235 range. Either for the whole tape, or scene by scene.

    With this much noise, I'd set it for the whole tape and twiddle scene by scene in software later if need be. As long as you don't clip anything (i.e. you keep the blackest black and whitest white within 16-235) you'll be fine.
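    One way to confirm nothing is clipping, beyond eyeballing the histogram, is ColorYUV's analyze mode, which prints per-frame minima and maxima (the source name here is a placeholder):

    Code:
    # ColorYUV(analyze=true) overlays per-frame min/max/average for Y, U and V.
    # If the Y minimum sits pinned at 16 through dark scenes, blacks are
    # probably being clipped at capture.
    AviSource("capture.avi")    # placeholder source
    ColorYUV(analyze=true)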

    Cheers,
    David.


