VideoHelp Forum
1. I could possibly meet up with a seller of a CCD-TRV65 to test it out before making the $100 purchase, but only on the condition that my old tapes were actually recorded in stereo. I currently have a CCD-TRV318, which outputs mono, and my father doesn't know whether the earlier tapes, which were taped with another camcorder, were recorded in stereo. He doesn't remember if that first camcorder was stereo or mono.

So is there a way for me to find out, on the spot, whether the old tapes I put into the TRV65 I'd be testing are playing in stereo or just plain ol' mono?

    Other than really good ears next to the speakers, is there another way? Like hooking up headphones to it? Something that tells you on the screen if the tape is stereo or not?
  2. redwudz (Super Moderator)
    I'd try the headphones and trust your ears.

    Other common methods of checking whether the audio is discrete stereo would normally need a computer and an audio editor like Audacity to inspect the waveforms from each channel.

    MediaInfo on a computer may tell you there are two channels, but that doesn't necessarily mean stereo; it could be dual mono.
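    If you do check on a computer, another quick test is to subtract one channel from the other: if the difference is essentially silence, both channels carry the same mono signal. A minimal AviSynth sketch, assuming you've already captured a short sample to a (hypothetical) file:

    Code:
    # Hypothetical filename; point it at a short sample capture.
    A = AviSource("sample_capture.avi")
    L = GetLeftChannel(A)
    R = GetRightChannel(A)
    diff = MixAudio(L, R, 1.0, -1.0)   # left minus right
    AudioDub(A, diff)                  # listen: near-silence means dual mono, not true stereo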
  3. Cornucopia
    A channel correlation meter (like a Lissajous display) could tell you whether the two channels carry identical or opposite info (mono), totally uncorrelated info (noise, maybe binaural, maybe dissimilar programs like 2 languages), or partly similar info (probably stereo, but it could be binaural, or dual program with some commonality, or mono with phasing problems, etc.). The readout could be visual or a percentage.

    Beyond that, there's no way to tell without just listening.

    Scott
  4. JVRaines
    If you have an iPhone, you can use the TwistedWave app to record a sample and visually inspect the left and right channel waveforms.
  5. aedipuss
    8mm and Hi8 had at least two different stereo systems: some older Hi8 models had AFM stereo and some newer ones PCM stereo. On top of that, older 8mm cams only had AFM mono.
  6. Cornucopia
    Originally Posted by JVRaines View Post
    If you have an iPhone, you can use the TwistedWave app to record a sample and visually inspect the left and right channel waveforms.
    On a macro level and on a micro level (but for differing reasons), simple visual inspection of the waveform would not provide meaningful info beyond the bluntly obvious (like no signal on a channel, or one channel VERY delayed).

    For example, a simple global/static phase shift would make 2 waveforms seem very different visually, but they would look near identical (just rotated) on a correlation meter.

    Scott
  7. Member
    Originally Posted by Cornucopia View Post
    A channel correlation meter (like a Lissajous display) could tell you [...]
    Acoustica Basic Edition 6.0 has one. Nice audio editor, by the way; it's not in the tools section here, but it happily replaced Audacity on my Windows system a few weeks ago.
  8. Thanks for the suggestions guys, but I ended up buying the CCD-TRV65 regardless; since I had to drive an hour for it, I might as well. I brought 2 tapes with me, tested it out via the headphone and RCA audio jacks, and all went well. It's on the recommended list for Hi8 camcorders, so that's a plus, and ultimately it's better than the one I have since it's stereo. $90 was a good deal too. The only thing "broken" on it is that the selector wheel moves around on its own when you enter the menu settings; the good thing is that it's manageable and all the settings were already where they should be.

    Now when I capture the tapes, if some were originally recorded in stereo I'll get that. If some were originally recorded in mono, I'll get that x2 like how I would have had to alter it in post. So it's a win-win.

    The only question I have now, for people who know Hi8 cameras and capturing from them: TBC will obviously be on, but should I also keep DNR on when capturing?
  9. Originally Posted by CZbwoi View Post
    If some were originally recorded in mono, I'll get that x2 like how I would have had to alter it in post.
    A simple Y adapter for the audio would have worked just fine and you wouldn't need to do anything in post.
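    (For anyone who does end up with mono on only one channel, the "fix it in post" route is basically a one-liner in AviSynth. A rough sketch, assuming the audio came in on the left channel and a hypothetical capture filename:)

    Code:
    AviSource("capture.avi")                    # hypothetical capture file
    mono = GetLeftChannel()                     # the channel that actually has the audio
    AudioDub(last, MergeChannels(mono, mono))   # same mono signal on both left and right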

    Originally Posted by CZbwoi View Post
    The only question I have now, for people who know Hi8 cameras and capturing from them: TBC will obviously be on, but should I also keep DNR on when capturing?
    Of course, you should try both ways yourself. But generally, if you plan on doing your own filtering then you don't want to use the player's DNR. You can do much better in software if you invest the time and effort learning how. But if you just want to capture and be done with it, use the DNR.
  10. A simple Y adapter for the audio would have worked just fine and you wouldn't need to do anything in post.
    Oh yeah, I know; that was my next step if I couldn't get my hands on a stereo camcorder, but now that I have one it does all the work for me.

    But generally, if you plan on doing your own filtering then you don't want to use the player's DNR. You can do much better in software if you invest the time and effort learning how. But if you just want to capture and be done with it, use the DNR.
    Is it a bad idea to mix and match the camcorder's DNR with whatever denoising you do in AviSynth? Or would it be like a tag-team effort and only help out?

    If I turn off the DNR, will QTGMC do all of that work for me in the end if I simply insert "QTGMC()" on a line like normal? I know that it has a denoising feature.
  11. Originally Posted by CZbwoi View Post
    Is it a bad idea to mix and match the camcorder's DNR with whatever denoising you do in AviSynth? Or would it be like a tag-team effort and only help out?
    The camcorder will likely remove all the small low contrast detail and there's no getting it back. And if it's a 3DNR filter it may create smeary artifacts that can't be eliminated.
  12. Originally Posted by jagabo View Post
    Originally Posted by CZbwoi View Post
    Is it a bad idea to mix and match the camcorder's DNR with whatever denoising you do in AviSynth? Or would it be like a tag-team effort and only help out?
    The camcorder will likely remove all the small low contrast detail and there's no getting it back. And if it's a 3DNR filter it may create smeary artifacts that can't be eliminated.
    Alright, so turn it off in the camcorder and fix it myself by inserting QTGMC()?
  13. QTGMC() isn't primarily a noise reducer. Though it has an optional noise reduction function.

    Code:
    QTGMC(EZDenoise=2.0) # bob deinterlace and apply moderate noise reduction
    Adjust the amount of noise reduction to suit. Smaller values give less NR, larger values more.

    There are a lot of other dedicated noise reducers like dfttest(), TemporalDegrain(), etc.
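    For example, here's a rough sketch of deinterlacing first and then running a dedicated denoiser as a separate step (this assumes dfttest and its dependencies are installed; the sigma value is just a placeholder to tune by eye):

    Code:
    QTGMC()               # bob deinterlace only, defaults, no extra noise reduction
    dfttest(sigma=4.0)    # separate denoising pass afterwards; adjust sigma to taste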
  14. QTGMC() isn't primarily a noise reducer. Though it has an optional noise reduction function.
    Ah, so if I simply insert QTGMC() it won't do any denoising unless I add another line that says QTGMC(EZDenoise=2.0) which only does the denoising bit?

    So in turn I'd have 2 separate lines that look like this:

    Code:
    QTGMC()
    QTGMC(EZDenoise=2.0)


    # bob deinterlace and apply moderate noise reduction
    What do you mean by this? Is the above usage of QTGMC not considered bob deinterlacing, so I would have to do something like this?

    Code:
    Yadif(Mode=1)
    QTGMC(EZDenoise=2.0)
    QTGMC()

    And do you recommend I do one of these two, whichever is the right way, every time I process captured files, since I'll turn my DNR off now? Like make it the standard for capturing tapes.
  15. LMotlow
    Originally Posted by CZbwoi View Post
    QTGMC() isn't primarily a noise reducer. Though it has an optional noise reduction function.
    Ah, so if I simply insert QTGMC() it won't do any denoising unless I add another line that says QTGMC(EZDenoise=2.0) which only does the denoising bit?
    That's not what jagabo wrote. He wrote:
    Originally Posted by jagabo View Post
    QTGMC() isn't primarily a noise reducer. Though it has an optional noise reduction function.
    Again, it's not primarily a denoiser. But it does some cleanup when it separates interlaced fields and interpolates them into full frames. How much cleanup it does depends on the "Preset" parameter. By default, QTGMC's preset is "Slower". The default preset doesn't have to be stated: typing "QTGMC()" is the same thing as typing "QTGMC(Preset="Slower")".

    The faster the preset, the less cleanup it does. This is mostly correcting things like shimmer, some resizing artifacts, etc., but not primarily degraining or noise reduction.

    You can set umpteen parameters in the call to QTGMC to increase noise reduction and adjust other behavior. You can also use a lossless mode that retains noise, often used to prevent an over-filtered look and useful for keeping the look of film. QTGMC has a whole bunch of variable parameters, including those that change the primary field separation and motion compensation tools. If you take a look at QTGMC's script you'll see that it's a scripted function that's actually declared this way:

    Code:
    function QTGMC( clip Input, string "Preset", int "TR0", int "TR1", int "TR2", int "Rep0", int "Rep1", int "Rep2", string "EdiMode", bool "RepChroma", \
                        int "NNSize", int "NNeurons", int "EdiQual", int "EdiMaxD", string "ChromaEdi", int "EdiThreads", clip "EdiExt", float "Sharpness", \
                        int "SMode", int "SLMode", int "SLRad", int "SOvs", float "SVThin", int "Sbb", int "SrchClipPP", int "SubPel", int "SubPelInterp", \
                        int "BlockSize", int "Overlap", int "Search", int "SearchParam", int "PelSearch", bool "ChromaMotion", bool "TrueMotion", int "Lambda", \
                        int "LSAD", int "PNew", int "PLevel", bool "GlobalMotion", int "DCT", int "ThSAD1", int "ThSAD2", int "ThSCD1", int "ThSCD2", \
                        int "SourceMatch", string "MatchPreset", string "MatchEdi", string "MatchPreset2", string "MatchEdi2", int "MatchTR2", \
                        float "MatchEnhance", int "Lossless", int "NoiseProcess", float "EZDenoise", float "EZKeepGrain", string "NoisePreset", string "Denoiser", \
                        int "DftThreads", bool "DenoiseMC", int "NoiseTR", float "Sigma", bool "ChromaNoise", val "ShowNoise", float "GrainRestore", \
                        float "NoiseRestore", string "NoiseDeint", bool "StabilizeNoise", int "InputType", float "ProgSADMask", int "FPSDivisor", \
                        int "ShutterBlur", float "ShutterAngleSrc", float "ShutterAngleOut", int "SBlurLimit", bool "Border", bool "Precise", string "Tuning", \
                        bool "ShowSettings", string "GlobalNames", string "PrevGlobals", int "ForceTR", \
                        val "BT", val "DetailRestore", val "MotionBlur", val "MBlurLimit", val "NoiseBypass" )
    For an explanation of all this mumbo jumbo, look at the html that ships with QTGMC. In case you haven't seen it, it's in QTGMC's downloaded and unzipped folder, in the "instructions" subfolder; the document's name is QTGMC-3.32.html. Download link: http://www.mediafire.com/download/su7l5jtcobabksk/QTGMC-3.32.zip. That's probably more than you were asking for, but you'll get deeper into that multi-function plugin sooner or later when you see the way others use it. Most readers don't comprehend most of it, but it'll give you a pretty good idea how much QTGMC can accomplish. Stick with defaults for the time being, which is what you've been doing.
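    Just to put a couple of the parameters mentioned above into concrete form, a hedged sketch (pick one line or the other, not both, since each call deinterlaces; the values are only placeholders to experiment with):

    Code:
    # QTGMC(Preset="Faster")                # faster preset = less cleanup (alternative; use one call only)
    QTGMC(Preset="Slow", EZKeepGrain=1.0)   # slower preset, retain some grain to avoid an over-filtered look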

    Originally Posted by CZbwoi View Post
    So in turn I'd have 2 separate lines that look like this:

    Code:
    QTGMC()
    QTGMC(EZDenoise=2.0)
    No, that deinterlaces twice. If you start with 29.97 fps you'll end up with 120fps, duplicate frames, and a lot of damage.

    Originally Posted by CZbwoi View Post
    # bob deinterlace and apply moderate noise reduction
    What do you mean by this? Is the above usage of QTGMC not considered bob deinterlacing, so I would have to do something like this?

    Code:
    Yadif(Mode=1)
    QTGMC(EZDenoise=2.0)
    QTGMC()
    No way, that's deinterlaced three times.

    One variation (stronger denoising, using the default "Slower" preset):
    Code:
    AssumeTFF().QTGMC(EZDenoise=8)
    That's the same thing as typing "AssumeTFF().QTGMC(Preset="Slower", EZDenoise=8)".

    Another (faster operation and moderate denoising with dfttest):
    Code:
    AssumeTFF().QTGMC(preset="faster", EZDenoise=4, Denoiser="dfttest")
    Like jagabo said, you don't always use QTGMC for a denoiser. It depends on what you're trying to clean up.
  16. Thank you for that, it all makes sense now. The only thing that looks weird is that you put AssumeTFF() on the same line in your last 2 examples; is this normal and does it work? I always see people placing AssumeTFF() on its own separate line. Does it mean the same thing as if it were on its own line?
  17. LMotlow
    Both mean the same. But note that the Assume function and the QTGMC function are two separate operations, so they're separated by a period. If you ever get into a really long script, it saves some space. But 2 lines is probably easier to read.

    But look at this, which deinterlaces and then reinterlaces:

    Code:
    AssumeTFF()
    QTGMC()
    ....some filters...
    ....more filters....
    SeparateFields()
    SelectEvery(4,0,3)
    Weave()
    You could also type:
    Code:
    AssumeTFF().QTGMC()
    ....some filters...
    ....more filters....
    SeparateFields().SelectEvery(4,0,3).Weave()
    Depends on your druthers.
  18. Alrighty, thank you, you're the best.
  19. Originally Posted by CZbwoi View Post
    The only thing that looks weird is that you put AssumeTFF() on the same line in your last 2 examples; is this normal and does it work? I always see people placing AssumeTFF() on its own separate line. Does it mean the same thing as if it were on its own line?
    In this case it's the same, but in other cases it can be slightly different. The first thing you should know: when you don't specify a stream by name in AviSynth it uses the name "last". And most filters (except source filters) take in an input stream and produce an output stream. So a script like:

    Code:
    DirectShowSource("filename.ext")
    AssumeTFF()
    QTGMC()
    really means:

    Code:
    last = DirectShowSource("filename.ext")
    last = AssumeTFF(last)
    last = QTGMC(last)
    The "." character pipes the stream directly from one filter (the one on the left) to the next (the one on the right), rather than taking input from last or another named stream.

    Code:
    DirectShowSource("filename.ext")
    AssumeTFF().QTGMC()
    So here, the output of AssumeTFF is piped directly to QTGMC and QTGMC's output becomes the new (implied) last. The final result is the same as the first script. But there are times when they are not exactly equivalent:

    Code:
    DirectShowSource("filename.ext")
    v1 = AssumeTFF().QTGMC()
    v2 = QTGMC()
    StackHorizontal(v1,v2)
    That uses two instances of QTGMC to make two copies of the video and stacks them side by side. You might think that means v1 and v2 are exactly the same. But AviSynth usually assumes videos are BFF, not TFF. Since the output of AssumeTFF() was sent directly to the first instance of QTGMC, and the output of that operation was assigned the name v1, the second instance of QTGMC gets last as its input -- which is still assumed to be BFF. One of the two side by side videos will have very jerky motion. More explicitly:

    Code:
    last = DirectShowSource("filename.ext") # last assumed BFF by default
    v1 = AssumeTFF(last).QTGMC()
    # at this point "last" is still the output of DirectShowSource, assumed to be BFF
    v2 = QTGMC(last)
    last = StackHorizontal(v1,v2)
    And be careful, if you don't use the . between filters on a line the effect is the same as putting the two filters on different lines:

    Code:
    DirectShowSource("filename.ext")
    v1 = AssumeTFF().QTGMC()
    v2 = AssumeTFF() QTGMC() # note the missing . between the two filters
    StackHorizontal(v1,v2)
    v1 and v2 are not the same because that script is equivalent to:

    Code:
    DirectShowSource("filename.ext")
    v1 = AssumeTFF().QTGMC()
    v2 = AssumeTFF()
    QTGMC()
    StackHorizontal(v1,v2)
    v2 is assumed to be TFF but is never used again. The second QTGMC is going to take last as its input and last is still assumed BFF.
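    To actually get two matching TFF versions there, you'd just restore the "." (or assign the TFF clip to a name first and feed it to both calls):

    Code:
    DirectShowSource("filename.ext")
    v1 = AssumeTFF().QTGMC()
    v2 = AssumeTFF().QTGMC()   # with the "." restored, v2 now gets the TFF stream like v1
    StackHorizontal(v1,v2)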
  20. Ah, so if I ever want a side-by-side comparison of 2 different versions, that should come in handy. Thanks for the code examples and the explanations, they'll come in use.


    Since this topic is on Hi8 and 8mm tapes, do you know a lot about them? Is there an exact aspect ratio they're all definitively in? I know VHS tapes are 4:3, but I can't tell with these other tapes. The Hi8 camcorder screens I have don't look 4:3; they look slightly wider. I know they're getting captured at 720x480, and as I look at the capture screen it does look right, but at the same time it doesn't, because of the conditioning I got from looking at VHS-C tape captures (which are supposed to be 4:3 from what I can tell). I'm Googling it to no avail; I'll make a separate topic if you don't definitively know the answer, in hopes that someone does.
  21. Another way to compare two videos is to interleave the frames -- i.e., frame 1 of video A, frame 1 of video B, frame 2 of video A, frame 2 of video B... In AviSynth:

    Code:
    Interleave(A, B) # rather than StackHorizontal(A, B)
    Then use an editor like VirtualDub where you can flip back and forth between frames with the left and right arrow keys. If you use a screen magnifier (like Windows' built-in Magnifier) you can even see tiny differences between the two videos.
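    To keep track of which version is which while stepping through frames, a small variation burns a label into each copy first (Subtitle is just for the label; the filter settings here are placeholders):

    Code:
    AssumeTFF()                  # set the correct field order for your source first
    A = QTGMC()                  # e.g. plain deinterlace
    B = QTGMC(EZDenoise=2.0)     # e.g. deinterlace plus moderate noise reduction
    Interleave(A.Subtitle("plain"), B.Subtitle("denoised"))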

    There were essentially no widescreen TVs in the USA before HDTV was introduced. I believe all consumer analog tape formats in the USA were 4:3 DAR. You can verify the DAR for yourself if you can find something in the video of known aspect ratio: a car tire viewed directly from the side, a window pane that you know should be square, things like that. Try to find such things near the center of the frame because there can often be distortions at the edges of the frame.
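    If you want to eyeball it, one quick check is to resize the 720x480 capture to square pixels at 4:3 and see whether round things look round (a sketch for previewing only; 640x480 assumes exactly 4:3 DAR and ignores the small horizontal padding in a 720-wide capture):

    Code:
    Spline36Resize(640, 480)   # rough square-pixel 4:3 preview of a 720x480 capture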


