Ah, that makes sense. Thanks
Back to the CED capture - I'm looking at other TBCs now. Perhaps a DataVideo TBC-1000 would do a good job without introducing the posterization the ES15 does.
I've gone back and tested the MX-1 some more, but it does a poor job of horizontal alignment, and there are some color issues not present in raw captures or in captures with the ES15 in the chain.
-
I generated stills in an image editor and converted them to video with AviSynth. I think I included DVD compatible MPEG 2 samples in that thread.
Yes.
Yes.
In theory, a capture device could perform the line TBC function. Some capture chips even have the ability to do so. But the drivers rarely support the function. I've only ever heard of one device that supported the feature (don't know how well it worked, just saw someone mention it was an option in the software).
The line length is determined by the distance between sync pulses. That is really the only feature of an analog signal you can count on for this, and those pulses are gone after a video has been captured and saved. Some people have attempted to write filters that use the black borders of an image to estimate the line length adjustments, but that doesn't work very well: the rise time between the black border and the picture isn't consistent, sometimes there are no black borders, sometimes you can't tell the difference between the black border and black picture content, etc. -
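The black-border estimation idea described above can be sketched in a few lines. This is a rough illustration, not real restoration code: the threshold, line widths, and function names are made up for the example, and it shows exactly the failure mode mentioned (no detectable border means no estimate).

```python
BLACK_THRESHOLD = 32  # luma level treated as "border black" (an assumption)

def border_extent(line):
    """Return (left, right) indices of the first/last pixel above threshold."""
    left = next((i for i, v in enumerate(line) if v > BLACK_THRESHOLD), None)
    right = next((i for i in range(len(line) - 1, -1, -1)
                  if line[i] > BLACK_THRESHOLD), None)
    return left, right

def estimate_stretch(line, reference_width):
    """Estimate a per-line stretch factor from the active picture width.

    Returns None when no border can be found -- the case where borders are
    absent or indistinguishable from black picture content.
    """
    left, right = border_extent(line)
    if left is None or right is None or right <= left:
        return None
    active_width = right - left + 1
    return active_width / reference_width

# A line whose picture spans 100 pixels against a 90-pixel reference:
line = [0] * 20 + [120] * 100 + [0] * 20
print(estimate_stretch(line, 90))  # ~1.111, i.e. the line reads as stretched
```

Even in this idealized form the estimate depends entirely on a clean border edge, which is why the approach is unreliable on real captures.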
Originally Posted by jagabo:
I generated stills in an image editor and converted them to video with AviSynth. I think I included DVD compatible MPEG 2 samples in that thread.
Oh! Were the waveforms at the bottom and on the right part of the video being played, or an analysis of the capture of the video? I was curious what generated the waveforms. I've seen something similar in DaVinci Resolve (the RGB Parade is one). Just wondering if there is other software out there to analyze the frame.
I was having a similar thought (attempting to line up the video starting at the first colored pixel), but if bits of the signal are missing it would be difficult to determine how much each individual line had been stretched or compressed.
I'm currently looking for a DataVideo TBC-1000 as a replacement for the ES15. Do you know if it performs horizontal stabilization well? -
Last edited by Brad; 13th Aug 2016 at 19:08.
-
Davideck mentioned the Philips DVDR3575H, which I bought and use. The step-down 3475 apparently functions the same way, and I believe there are many other Funai-manufactured recorders that do as well. As mentioned in the link, a proc amp is required with these recorders when the input signal is overly-bright. Presumably a professionally-mastered CED wouldn't exceed safe levels, but that may be a gamble.
The Toshiba I tried is the D-KR4 and I don't like the side effects I saw on test patterns. Also Davideck said the Toshiba TBC is weaker than the Philips, and Sanlyn said the Toshiba is weaker than Panasonic.
The DVDR3575H and D-KR4 both have only 2D Y/C separation. -
Adding one more capture to my VHS test here: https://forum.videohelp.com/threads/379545-Need-some-help-with-capturing-NTSC-video-TBC...=1#post2455196
I received a JVC HR-S9900U VCR today. This is the same tape as before, over S-Video output to the ATI 600. Digital TBC/NR was enabled on the S9900U; R3 was disabled. -
Quick AviSynth question: what's a good way to visually determine contrast changes in AviSynth? Right now I'm using the Tweak() function and reloading the video file in VirtualDub each time I change the script. Is there a better way to do this? For color balance I found ColorBalance(), which mirrors the way GIMP handles color adjustments, so I can paste a screenshot of the video into GIMP, edit the color balance, and then copy those numbers into the AviSynth script. I haven't found anything that does the same for contrast.
Is there a better solution for this, or is reloading the avs for every change the best method currently? -
You can use Animate() to animate Tweak(). An example:
Code:
function TweakContrast(clip c, float contrast)
{
    Tweak(c, cont=contrast).Subtitle("contrast="+string(contrast))
}
WhateverSource("filename.ext")
Animate(0,100, "TweakContrast", last,1.0, last,2.0)
Code:
function TweakContrast(clip c, float contrast)
{
    Tweak(c, cont=contrast).Subtitle("contrast="+string(contrast))
}
WhateverSource("filename.ext")
Trim(1000,0) # remove first 1000 frames
Loop(100,0,0) # repeat frame 0, 100 times
Animate(0,100, "TweakContrast", last,1.0, last,2.0)
-
Not quite what I was aiming for (I want a graphical way to visualize these changes without reloading the script into VirtualDub every time I edit it), but this answers another question I was going to ask: how do you apply corrections only to specific frames? Looks like Animate() is the way to do that.
-
I don't understand. You make the change in the script, save it (File -> Save, or just Ctrl+S), then hit F2 in VDub. You get an instant update of the script, remaining on the same frame.
How do you apply corrections only to specific frames? Looks like Animate is the way to do that
You make a change for a range of frames by either using Trim statements or, better, using ReplaceFramesSimple. Here's an example:
A=YLevelsS(0,1.5,255,0,255).Tweak(Bright=-10,Cont=1.1,Coring=False)
ReplaceFramesSimple(Last,A,Mappings="[17053 18481] [22360 22498] [22645 23362] [27395 28259] ")
That uses a combination of YLevels and Tweak in different ranges as specified in the brackets. It's very useful when you want to filter different places differently. That particular combination requires the external AviSynth filters RemapFrames and YLevels.
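Since the Mappings argument is just a string of bracketed frame ranges, it can be convenient to generate it when you have many ranges. The helper below is hypothetical (it is not part of RemapFrames, just a string builder that matches the format shown above):

```python
def mappings(ranges):
    """Build a RemapFrames-style Mappings string from (start, end) frame pairs."""
    return " ".join("[%d %d]" % (start, end) for start, end in ranges)

# Reproduces the first two ranges from the example above:
print(mappings([(17053, 18481), (22360, 22498)]))
# [17053 18481] [22360 22498]
```

The output string can be pasted directly into the ReplaceFramesSimple call.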
Particularly when using levels/gamma/brightness/contrast filters such as those, I find:
ColorYUV(Analyze=True).Limiter(Show="Luma")
to be very useful, as is the Histogram filter.
Last edited by manono; 28th Aug 2016 at 15:15.
-
That's quite a bit less productive than a tool that lets you drag a slider and automatically updates/reloads the video every time you move it. Swapping out to a text editor, manually editing values, saving, then reloading for fine changes has been rather slow.
Yes, I understand his script. If you give Animate() the same parameters for the initial and final values, it applies the same effect across the entire range of frames, which is what I want to do. The documentation for Animate() also references ApplyRange(), which calls Animate with the same initial and final values, specifically to achieve that.
http://avisynth.org.ru/docs/english/corefilters/animate.htm
For example:
Code:Animate(17053, 18481, "YLevelsS", 0, 1.5, 255, 0, 255, 0, 1.5, 255, 0, 255)
Code:ApplyRange(17053, 18481, "YLevelsS", 0, 1.5, 255, 0, 255)
Also useful to know! Thanks again -
That wasn't my point. I was suggesting you use it to apply a range of values over a range of frames to find the value that you want. You scrub through that section of the video looking for the right value. Once you've determined what value you want, you remove the Animate() and just apply the tweak to the video. I used contrast in the example, but you can use it for any variable in Tweak(), or any numeric variable in any filter.
Last edited by jagabo; 28th Aug 2016 at 20:46.
-
Ah, that is very clever; I did indeed miss it. Do you just look at the frame number of the frame you think looks best and use that as a percentage of the change?
So, for example, using Animate to vary contrast from 1.0 to 1.1 over 1000 frames, then viewing the video output and noting a preference for the contrast at frame 631: that's 63.1% along the transition, so (0.631 * 0.1) + 1.0 = 1.0631 as the contrast setting? -
-
This is an example of what I wanted to use a graphical editor for. The color balance is 9-axis and is quite cumbersome to do by typing in values and then tweaking them in a text editor. The Animate() method wouldn't work well for this either since there are 9 variables to check all the other variables against. The sliders make this quite easy to work with though:
[Attachment 38363 - Click to enlarge]
I'm not sure if there's a better method available to do this type of adjustment. -
You can try the Toshiba DR-430 DVD recorder in this link --> My TBC test pattern result is on that page. It has good line sync like the ES10/ES15. -
I am jumping in without having read the entire thread. So this response may be way off topic. My apologies if so.
Absolutely there is a better method for doing this sort of work. They are called full-fledged NLEs or, more precisely, color grading programs. They have color wheels that allow you to adjust not only the master offset but shadows, midtones, and highlights as well. That is called primary grading. They go even further with secondary grading by correcting, for example, skin tones using qualifiers, tracking, and so on. They are typically paired with an external calibrated broadcast monitor or TV, so you can trust the colors and actually see what you are doing (versus a small window in a GUI subjected to who knows what resizing algorithm and off-color conversion).
One thing to keep in mind when you are doing any sort of color work is that computer monitors are RGB. Video is Y'CbCr. So any time you use a computer monitor for display, the video must be converted to RGB. This conversion process is full of pitfalls and can't be trusted. IOW, the red you see on a computer monitor doesn't look the same on a TV, no matter how many hoops you might jump through. -
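To illustrate one of those pitfalls numerically: using the full-range BT.601 matrix (just one of several possible conversions, chosen here as an example), a perfectly legal Y'CbCr triple can land outside the RGB cube, at which point something has to clip and information is lost. A minimal sketch:

```python
def ycbcr_to_rgb_bt601(y, cb, cr):
    """Full-range BT.601 Y'CbCr -> R'G'B' conversion, no clipping applied."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b

def clip8(v):
    """Clamp a channel to the displayable 0..255 range."""
    return max(0.0, min(255.0, v))

# A legal Y'CbCr triple (dark, strongly blue) that falls outside the RGB cube:
r, g, b = ycbcr_to_rgb_bt601(50, 220, 50)
print(r, g, b)                        # red comes out negative
print([clip8(v) for v in (r, g, b)])  # clipping here discards information
```

And that is before range (16-235 vs 0-255) and matrix (601 vs 709) choices enter the picture, each of which changes what you see on the computer monitor.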
There may be a fix for this. It's a bit counter intuitive but...
https://forum.videohelp.com/threads/380285-Where-did-I-go-wrong-What-am-I-missing?p=246...74#post2460874
And the next post.
Last edited by jagabo; 26th Sep 2016 at 08:20.
-
Thanks for the pointer! I also noticed the Lighter-Darker setting appeared to have less banding in the darker areas where I was looking, compared to the Darker-Lighter setting. I think the banding may actually be reversed, though: lighter areas may show banding instead of darker ones. It's less noticeable, but I think the ES15 unit still does something to the image in either setting.
-
So I've been back at work archiving some videos. I've been testing out some different optimizations that seem to have good cleanup potential. However, in comparing some of the denoising to the AviSynth output, I noticed there seems to be some significant loss of detail and overall brightening from TFM() and TDecimate(). I was a little surprised by this; it's particularly noticeable with clouds.
This is the full AviSynth script applied to the capture:
Code:
Mpeg2Source("title00_track1_eng.d2v", CPU2="ooooxx", Info=3)
TFM()       # field matching -- remove comb artifacts
TDecimate() # remove the one extra frame of every 5 left by TFM, restores 24 fps film
[Attachment 40225 - Click to enlarge]
[Attachment 40226 - Click to enlarge] -
-
Yes, both windows were open at the same time, but I don't believe that to be the issue. I have a different version I'm working on (CED as opposed to VHS). It doesn't exhibit that issue with both open, though the colors are darker from this source:
[Attachment 40227 - Click to enlarge]
Also, looking at the avs in VirtualDub (other apps closed) with the CPU2="ooooxx", Info=3 arguments removed, it appears the same:
[Attachment 40228 - Click to enlarge]