VideoHelp Forum
  1. Member | Join Date: Jul 2016 | Location: USA
    Originally Posted by vaporeon800 View Post
    The TBC has the entire signal including horizontal sync pulses, while the capture device only gives you active video.
    Ah, that makes sense. Thanks

    Back to the CED capture - I'm looking at other TBCs now. Perhaps a DataVideo TBC-1000 would do a good job without introducing the posterization the ES15 does.

    I've gone back and tested the MX-1 some more, but it does a poor job of horizontal alignment and there are some color issues not present in raw caps or in caps with the ES15 in the chain.
  2. Originally Posted by CED View Post
    What software was used to generate the spectral waveforms?
    I generated stills in an image editor and converted them to video with AviSynth. I think I included DVD-compatible MPEG-2 samples in that thread.
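
    For reference, a minimal sketch of that kind of still-to-video conversion in AviSynth (the file name and length here are placeholders, not the actual files from that thread):

    Code:
    # Hold one still for 10 seconds at the NTSC frame rate
    ImageSource("testpattern.png", start=0, end=299, fps=29.97)
    ConvertToYV12(matrix="Rec601")  # RGB still -> YV12 for DVD-compatible MPEG-2 encoding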

    Originally Posted by CED View Post
    What I've been able to put together is that time base errors exist because the mechanical systems reading the analog signal stored on a CED/VHS/Laserdisc/etc are not perfectly timed, so the NTSC signal they generate might be played back too fast (lines are too short) or too slow (lines are too long) and possibly vary over time causing the visual errors we see.
    Yes.

    Originally Posted by CED View Post
    So, the thing I've noticed is that pretty much all the TBCs I've come across are referred to as digital time base correctors. I assume this means these devices are digitally sampling the analog signal into a line buffer or full frame buffer and then resampling the signal so that it's either stretched or compressed to the proper standard time interval, and then generating a near-perfect NTSC signal from the resampled digital data.
    Yes.

    Originally Posted by CED View Post
    Now everything I've read says the TBC needs to be done before the signal is put into a capture device. Given the above, I don't understand why this is.
    The capture card / software is essentially just sampling the analog signal and storing it in a file. Why is it not possible to process this file with software in the same manner a dedicated TBC would? What information is lost when the capture card samples the NTSC signal that makes this not possible, but yet is not lost when a TBC samples it?
    In theory, a capture device could perform the line TBC function. Some capture chips even have the ability to do so. But the drivers rarely support the function. I've only ever heard of one device that supported the feature (don't know how well it worked, just saw someone mention it was an option in the software).

    The line length is determined by the distance between sync pulses. That is really the only feature of an analog signal you can count on for this. And those are gone after a video has been captured and saved. Some people have attempted to write filters that use the black borders of the image to estimate the line-length adjustments, but that doesn't work very well. The rise time between the black border and the picture isn't consistent, sometimes there are no black borders, sometimes you can't tell the difference between the black border and black picture content, etc.
  3. Member | Join Date: Jul 2016 | Location: USA
    Originally Posted by jagabo View Post
    I generated stills in an image editor and converted them to video with AviSynth. I think I included DVD-compatible MPEG-2 samples in that thread.

    Oh! Were the waveforms at the bottom and on the right part of the video that was played, or were they generated from an analysis of the captured video? I was curious what generated the waveforms. I've seen something similar in DaVinci Resolve (the RGB Parade is one). Just wondering if there was some other software out there to analyze the frame.

    Originally Posted by jagabo View Post
    The line length is determined by the distance between sync pulses. That is really the only feature of an analog signal you can count on for this. And those are gone after a video has been captured and saved. Some people have attempted to write filters that use the black borders of the image to estimate the line-length adjustments, but that doesn't work very well. The rise time between the black border and the picture isn't consistent, sometimes there are no black borders, sometimes you can't tell the difference between the black border and black picture content, etc.
    I was having a similar thought (attempting to line up the video starting at the first colored pixel), but if there are bits of the signal missing it would be difficult to determine how much each individual line had been stretched or compressed.

    I'm currently looking for a DataVideo TBC-1000 as a replacement for the ES15. Do you know if it performs horizontal stabilization well?
  4. Originally Posted by CED View Post
    Oh! Were the waveforms at the bottom and on the right part of the video that was played, or were they generated from an analysis of the captured video?
    Those were added with the VideoScope filter in AviSynth.
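
    For reference, a minimal sketch of adding those overlays yourself (this assumes the external VideoScope plugin; the file names are placeholders and the parameters used for those samples may have differed):

    Code:
    LoadPlugin("VideoScope.dll")   # external plugin, adjust the path/name as needed
    AviSource("capture.avi")       # placeholder source
    VideoScope("both")             # waveform graphs along the bottom and the right edge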
  5. Member | Join Date: Jul 2016 | Location: USA
    Thanks for that. I guess the DataVideo TBC-1000 isn't what I'm looking for at all.

    Are there any alternatives to the Panasonic ES15 for line TBC? I would even consider DVD recorders that only do TBC when recording to disc, even if they don't pass the corrected signal through.
  6. Brad (formerly 'vaporeon800') | Join Date: Apr 2001 | Location: Vancouver, Canada
    Davideck mentioned the Philips DVDR3575H, which I bought and use. The step-down 3475 apparently functions the same way, and I believe there are many other Funai-manufactured recorders that do as well. As mentioned in the link, a proc amp is required with these recorders when the input signal is overly bright. Presumably a professionally mastered CED wouldn't exceed safe levels, but that may be a gamble.

    The Toshiba I tried is the D-KR4 and I don't like the side effects I saw on test patterns. Also Davideck said the Toshiba TBC is weaker than the Philips, and Sanlyn said the Toshiba is weaker than Panasonic.

    The DVDR3575H and D-KR4 both have only 2D Y/C separation.
  7. Member | Join Date: Jul 2016 | Location: USA
    Adding one more capture to my VHS test here: https://forum.videohelp.com/threads/379545-Need-some-help-with-capturing-NTSC-video-TBC...=1#post2455196

    I received a JVC HR-S9900U VCR today. This is the same tape as before, captured over S-Video to the ATI 600. Digital TBC/NR was enabled on the S9900U, R3 disabled.
    [Attached: capture sample]
  8. Member | Join Date: Jul 2016 | Location: USA
    Quick AviSynth question: What's a good way to visually determine contrast changes in AviSynth? Right now I'm using the Tweak() function and reloading the video file in VirtualDub each time I change the script. Is there a better way to do this? For color balance I found ColorBalance(), which mirrors the way GIMP handles color adjustments, so I can paste a screenshot of the video into GIMP, edit the color balance, and then copy those numbers into the AviSynth script. I haven't found anything that does the same for contrast.

    Is there a better solution for this? Or is reloading the avs for every change the best method currently?
  9. You can use Animate() to animate Tweak(). An example:

    Code:
    function TweakContrast(clip c, float contrast)
    {
       Tweak(c, cont=contrast).Subtitle("contrast="+string(contrast))
    }
    
    WhateverSource("filename.ext")
    Animate(0,100, "TweakContrast", last,1.0, last,2.0)
    That will increase the contrast from 1.0 to 2.0 over the first 100 frames. Or to view a particular frame, say frame 1000:

    Code:
    function TweakContrast(clip c, float contrast)
    {
       Tweak(c, cont=contrast).Subtitle("contrast="+string(contrast))
    }
    
    WhateverSource("filename.ext")
    Trim(1000,0) # remove first 1000 frames
    Loop(100,0,0) # repeat frame 0, 100 times
    Animate(0,100, "TweakContrast", last,1.0, last,2.0)
  10. Brad (formerly 'vaporeon800') | Join Date: Apr 2001 | Location: Vancouver, Canada
    AvsPmod lets you use sliders to change values and auto-updates its display once you let go of the mouse button.
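
    If I remember AvsPmod's user-slider syntax correctly, you can also embed the slider definition directly in the script so AvsPmod builds the control for you; treat this as a sketch and double-check the exact form against the AvsPmod documentation:

    Code:
    AviSource("capture.avi")   # placeholder source
    # AvsPmod should turn the bracketed expression into a "contrast" slider (0.5 to 2.0, default 1.0)
    Tweak(cont=[<"contrast", 0.5, 2.0, 1.0>], coring=false)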
  11. Member | Join Date: Jul 2016 | Location: USA
    Originally Posted by vaporeon800 View Post
    AvsPmod lets you use sliders to change values and auto-updates its display once you let go of the mouse button.
    YES! This is exactly what I need. Thank you.
  12. Member | Join Date: Jul 2016 | Location: USA
    Originally Posted by jagabo View Post
    You can use Animate() to animate Tweak(). An example:

    Code:
    function TweakContrast(clip c, float contrast)
    {
       Tweak(c, cont=contrast).Subtitle("contrast="+string(contrast))
    }
    
    WhateverSource("filename.ext")
    Animate(0,100, "TweakContrast", last,1.0, last,2.0)
    That will increase the contrast from 1.0 to 2.0 over the first 100 frames. Or to view a particular frame, say frame 1000:

    Code:
    function TweakContrast(clip c, float contrast)
    {
       Tweak(c, cont=contrast).Subtitle("contrast="+string(contrast))
    }
    
    WhateverSource("filename.ext")
    Trim(1000,0) # remove first 1000 frames
    Loop(100,0,0) # repeat frame 0, 100 times
    Animate(0,100, "TweakContrast", last,1.0, last,2.0)
    Not quite what I was aiming for (I want a graphical way to visualize these changes without reloading the script into VirtualDub every time I edit it), but this answers another question I was going to ask: how do you apply corrections only to specific frames? Looks like Animate is the way to do that.
  13. Originally Posted by CED View Post
    Not quite what I was aiming for (I want a graphical way to visualize these changes without reloading the script into VirtualDub every time I edit it)...
    I don't understand. You make the change in the script, save it (File->Save, or just Ctrl+S), then hit F2 in VDub. You get an instant update of the script, remaining on the same frame.
    Originally Posted by CED View Post
    How do you apply corrections only to specific frames? Looks like Animate is the way to do that.
    Do you understand what his script is doing? It doesn't make the full change over a range of frames but begins with the minimum change and slowly brings the change to full strength over the range specified.

    You make a change for a range of frames by either using Trim statements or, better, using ReplaceFramesSimple. Here's an example:

    A=YLevelsS(0,1.5,255,0,255).Tweak(Bright=-10,Cont=1.1,Coring=False)
    ReplaceFramesSimple(Last,A,Mappings="[17053 18481] [22360 22498] [22645 23362] [27395 28259] ")

    That uses a combination of YLevels and Tweak in different ranges as specified in the brackets. It's very useful when you want to filter different places differently. That particular combination requires the external AviSynth filters RemapFrames and YLevels.

    Particularly when using levels/gamma/brightness/contrast filters such as those, I find:

    ColorYUV(Analyze=True).Limiter(Show="Luma")


    to be very useful, as is the Histogram filter.
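
    For example, something like this tacked onto the end of the script while judging levels, then removed for the final encode (the source and Tweak values here are placeholders):

    Code:
    AviSource("capture.avi")                   # placeholder source
    Tweak(bright=-10, cont=1.1, coring=false)  # the adjustment being evaluated
    ConvertToYV12()                            # Histogram's levels mode wants planar YUV
    ColorYUV(analyze=true)                     # prints min/max/average luma and chroma on the frame
    Limiter(show="luma")                       # highlights out-of-range luma
    Histogram("levels")                        # Y/U/V level histograms drawn on the frame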
    Last edited by manono; 28th Aug 2016 at 15:15.
  14. Member | Join Date: Jul 2016 | Location: USA
    Originally Posted by manono View Post
    I don't understand. You make the change in the script, save it (File->Save, or just CTRL S), followed by hitting F2 in VDub. You get an instant update of the script, remaining on the same frame.
    That's quite a bit less productive than a tool that lets you apply a slider and automatically updates/reloads the video every time you move it. Swapping out to a text editor, manually editing values, saving, then reloading for fine changes has been rather... slow.

    Originally Posted by manono View Post
    Do you understand what his script is doing? It doesn't make the full change over a range of frames but begins with the minimum change and slowly brings the change to full strength over the range specified.
    Yes, I understand his script. If you give Animate() the same parameters for the initial and final values, it applies the same effect across the entire range of frames, which is what I want to do. The documentation for Animate() also references ApplyRange(), which calls Animate with the same initial and final values, specifically to achieve that.

    http://avisynth.org.ru/docs/english/corefilters/animate.htm

    Originally Posted by manono View Post
    You make a change for a range of frames by either using Trim statements or, better, using ReplaceFramesSimple. Here's an example:

    A=YLevelsS(0,1.5,255,0,255).Tweak(Bright=-10,Cont=1.1,Coring=False)
    ReplaceFramesSimple(Last,A,Mappings="[17053 18481] [22360 22498] [22645 23362] [27395 28259] ")

    That uses a combination of YLevels and Tweak in different ranges as specified in the brackets. It's very useful when you want to filter different places differently. That particular combination requires the external AviSynth filters RemapFrames and YLevels.
    For example:

    Code:
    Animate(17053, 18481, "YLevelsS", last, 0, 1.5, 255, 0, 255, last, 0, 1.5, 255, 0, 255)
    or

    Code:
    ApplyRange(17053, 18481, "YLevelsS", 0, 1.5, 255, 0, 255)
    Your method seems to achieve the same thing as Animate/ApplyRange does, but it is handy for applying the same change to multiple ranges at once. Thanks for the tip.

    Originally Posted by manono View Post
    Particularly when using levels/gamma/brightness/contrast filters such as those I find:

    ColorYUV(Analyze=True).Limiter(Show="Luma")


    to be very useful, as is the Histogram filter.
    Also useful to know! Thanks again
  15. Originally Posted by CED View Post
    If you give Animate() the same parameters for the initial and final values, it applies the same effect across the entire range of frames, which is what I want to do.
    That wasn't my point. I was suggesting you use it to apply a range of values over a range of frames to find the value that you want. You scrub through that section of the video looking for the right value. Once you've determined what value you want, you remove the Animate() and just apply the Tweak to the video. I use contrast in the example, but you can use it for any variable in Tweak, or any numeric variable in any filter.
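
    In other words, once the Subtitle() overlay shows a value you like, the final script is just the tweak by itself, something like (the value here is only an example):

    Code:
    WhateverSource("filename.ext")
    Tweak(cont=1.35)   # the value read off the on-screen subtitle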
    Last edited by jagabo; 28th Aug 2016 at 20:46.
  16. Member | Join Date: Jul 2016 | Location: USA
    Originally Posted by jagabo View Post
    Originally Posted by CED View Post
    If you give Animate() the same parameters for the initial and final values, it applies the same effect across the entire range of frames, which is what I want to do.
    That wasn't my point. I was suggesting you use it to apply a range of values over a range of frames to find the value that you want. You scrub through that section of the video looking for the right value. Once you've determined what value you want you remove the Animate() and just apply the tweak to the video. I use contrast in the example but you can use it for any variable in tweak. Or any numeric variable in any filter.
    Ah, that is very clever; I did indeed miss it. Do you just look at the frame count for the frame you think looks best and use that as a percentage of the change?

    So, for example, use Animate to vary the contrast from 1.0 to 1.1 over 1000 frames, then view the output and note that the contrast at frame 631 looks best. That's 63.1% along the transition, so (0.631 * 0.1) + 1.0 = 1.0631 as the contrast setting?
  17. Originally Posted by CED View Post
    Do you just look at the frame count for the frame you think looks best and use that as a percentage of the change?
    The called function prints the value on the frame, hence the ".Subtitle("contrast="+string(contrast))".
  18. Member | Join Date: Jul 2016 | Location: USA
    Originally Posted by jagabo View Post
    Originally Posted by CED View Post
    Do you just look at the frame count for the frame you think looks best and use that as a percentage of the change?
    The called function prints the value on the frame, hence the ".Subtitle("contrast="+string(contrast))".
    I really should have read that script more closely. I saw what Animate was doing, then saw the AvsPmod post and didn't read through it carefully. Makes sense now; thanks for the patience in explaining it.
  19. Member | Join Date: Jul 2016 | Location: USA
    This is an example of what I wanted to use a graphical editor for. The color balance is 9-axis and is quite cumbersome to do by typing in values and then tweaking them in a text editor. The Animate() method wouldn't work well for this either, since there are 9 variables to balance against one another. The sliders make this quite easy to work with, though:

    [Attachment 38363]

    I'm not sure if there's a better method available to do this type of adjustment.
  20. vhelp | Join Date: Mar 2001 | Location: New York
    Originally Posted by CED View Post
    Thanks for that. I guess the DataVideo TBC-1000 isn't what I'm looking for at all.

    Are there any alternatives to the Panasonic ES15 for line TBC? I would even consider DVD recorders that only do TBC when recording to disc, even if they don't pass the corrected signal through.

    You can try the Toshiba DR-430 DVD recorder in this link --> my TBC test pattern result is on that page. It has good line sync like the ES10/ES15.
  21. Originally Posted by CED View Post
    This is an example of what I wanted to use a graphical editor for. The color balance is 9-axis and is quite cumbersome to do by typing in values and then tweaking them in a text editor. The Animate() method wouldn't work well for this either, since there are 9 variables to balance against one another. The sliders make this quite easy to work with, though:

    [Attachment 38363]

    I'm not sure if there's a better method available to do this type of adjustment.
    I am jumping in without having read the entire thread. So this response may be way off topic. My apologies if so.

    Absolutely there is a better method for doing this sort of work. They are called full-fledged NLEs or, more precisely, color grading programs. They have color wheels that allow you to adjust not only the master offset but shadows, midtones, and highlights as well. That is called primary grading. They go even further with secondary grading by correcting, for example, skin tones using qualifiers, tracking, and on and on. They are typically paired with an external calibrated broadcast monitor or TV, so you can trust the colors and actually see what you are doing (versus a small window in a GUI subjected to who knows what resizing algorithm and off color conversion).

    One thing to keep in mind when you are doing any sort of color work is that computer monitors are RGB. Video is Y'CbCr. So any time you use a computer monitor for display, the video must be converted to RGB. This conversion process is full of pitfalls and can't be trusted. IOW, the red you see on a computer monitor doesn't look the same on a TV, no matter how many hoops you might jump through.
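
    As a small illustration of those pitfalls within AviSynth itself, the RGB you get depends on which conversion matrix is requested, and the wrong one visibly shifts the colors (the source name here is a placeholder):

    Code:
    AviSource("capture.avi")              # placeholder SD YUV capture
    a = ConvertToRGB32(matrix="Rec601")   # the usual matrix for SD material
    b = ConvertToRGB32(matrix="Rec709")   # wrong matrix for SD: hues and saturation shift
    StackHorizontal(a, b)                 # compare side by side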
  22. Originally Posted by CED View Post
    I'm pretty sure it's the DMR-ES15 that's causing the issue now. Unfortunately it seems systemic, as I picked up a duplicate ES15 as a backup. The same posterization shows up from that unit.
    There may be a fix for this. It's a bit counterintuitive, but...

    https://forum.videohelp.com/threads/380285-Where-did-I-go-wrong-What-am-I-missing?p=246...74#post2460874

    And the next post.
    Last edited by jagabo; 26th Sep 2016 at 08:20.
  23. Member | Join Date: Jul 2016 | Location: USA
    Originally Posted by jagabo View Post
    Originally Posted by CED View Post
    I'm pretty sure it's the DMR-ES15 that's causing the issue now. Unfortunately it seems systemic, as I picked up a duplicate ES15 as a backup. The same posterization shows up from that unit.
    There may be a fix for this. It's a bit counterintuitive, but...

    https://forum.videohelp.com/threads/380285-Where-did-I-go-wrong-What-am-I-missing?p=246...74#post2460874

    And the next post.
    Thanks for the pointer! I also noticed the Lighter-Darker setting appeared to have less banding in the areas where I was looking for banding (darker areas) caused by the Darker-Lighter setting. I think the banding may actually be reversed, though - lighter areas may show banding as opposed to darker ones. It's less noticeable, but I think the ES15 unit still does something to the image in either setting.
  24. Member | Join Date: Jul 2016 | Location: USA
    So I've been back at work archiving some videos. I've been testing out some different optimizations that seem to have good cleanup potential. However, in comparing some of the denoised output to the AviSynth output, I noticed there seems to be significant loss of detail and overall brightening from TFM() and TDecimate(). I was a little surprised by this, but it's particularly noticeable with clouds.

    This is the full AviSynth script applied to the capture:

    Code:
    Mpeg2Source("title00_track1_eng.d2v", CPU2="ooooxx", Info=3)
    
    TFM() # field matching -- remove comb artifacts
    TDecimate()  # remove the duplicate frame in every 5 left after field matching, restoring 23.976 fps film
    Screen caps are attached. Is that behavior expected from TFM?

    [Attachment 40225]
    [Attachment 40226]
    That crushing of brights is not expected with TFM().TDecimate(). Can you upload a short sample of the source? You can use DGIndex to mark and demux a short segment (no re-encoding). Then upload the m2v file.
  26. Originally Posted by CED View Post
    Is that behavior expected from TFM?
    No, and whatever is causing it, it's not the fault of TFM. Did you have two instances of the player open at the same time to compare the differences and take the pictures?
  27. Originally Posted by manono View Post
    If you leave off the CPU2="ooooxx", Info=3, do you see the same thing?
    That might cause the slight loss of detail but shouldn't cause the crushed brights.
  28. Originally Posted by jagabo View Post
    That might cause the slight loss of detail but shouldn't cause the crushed brights.
    Damn, you caught that before I removed it. Yes, I realized the same thing, but the question stands. My guess at the moment is that it's the two players, one using the overlay and the other not, that are causing it.
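
    One way to take the players out of the equation is to compare the two decodes inside a single script, for example (a sketch using the same d2v as above):

    Code:
    a = Mpeg2Source("title00_track1_eng.d2v")                  # no postprocessing
    b = Mpeg2Source("title00_track1_eng.d2v", CPU2="ooooxx")   # with the deblocking flags
    Interleave(a, b)   # step frame by frame in VirtualDub; the brights should not change
    # or: Subtract(a, b).Levels(112, 1.0, 144, 0, 255)         # amplify any differences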
  29. Member | Join Date: Jul 2016 | Location: USA
    Originally Posted by manono View Post
    Originally Posted by jagabo View Post
    That might cause the slight loss of detail but shouldn't cause the crushed brights.
    Damn, you caught that before I removed it. Yes, I realized the same thing, but the question stands. My guess at the moment is that it's the two players, one using the overlay and the other not, that are causing it.
    Yes, both windows were open at the same time, but I don't believe that to be the issue. I have a different version I'm working on (CED as opposed to VHS). It doesn't exhibit that issue with both open, though the colors are darker from this source:

    [Attachment 40227]

    Also, the avs viewed in VirtualDub (other apps closed) with the CPU2="ooooxx", Info=3 removed appears the same:

    [Attachment 40228]


