VideoHelp Forum




  1. I am able to capture raw analog NTSC video with my PicoScope 2204A, albeit at reduced horizontal resolution. For this I needed to use streaming mode (not triggered block-capture mode), with the sample rate set to 6.25 MHz (a 3.125 MHz Nyquist limit), even though the 2204A isn't advertised as being able to go over 1 MHz in streaming mode (that is supposed to require a more expensive model of scope). I asked on their forum whether it was possible to push it beyond its stated specs, and was told that the absolute maximum streaming sample rate it could do was 6.25 MHz, though it would likely drop data because its buffers aren't large enough. I have discovered, though, that I can use this sample rate easily without dropping any buffers. Unfortunately, the official PicoScope software doesn't support a streaming sample rate over 1 MHz for most of their scopes (not even most of the scopes that officially support higher streaming rates, and certainly not the 2204A), but I used Visual Basic along with the PicoScope SDK (which lets you use whatever the maximum sample rate is for your scope) to write a program that runs my 2204A at its maximum streaming rate of 6.25 MHz. This let me do all kinds of cool experiments.
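
    (To put the streaming rate in perspective — this is just arithmetic, not the SDK code itself — 6.25 MHz works out to one sample every 160 ns, and with each sample stored as 2 bytes that's a sustained 12.5 MB/s the capture program has to keep up with:)

        # Back-of-the-envelope numbers for streaming the 2204A at 6.25 MHz.
        # Sanity check only; the actual capture goes through the PicoScope SDK.
        sample_rate_hz   = 6.25e6                    # streaming sample rate
        sample_interval  = 1.0 / sample_rate_hz      # 160 ns per sample
        bytes_per_sample = 2                         # 8-bit data padded to 2 bytes
        data_rate        = sample_rate_hz * bytes_per_sample   # 12.5 MB/s sustained

        print(f"sample interval: {sample_interval * 1e9:.0f} ns")
        print(f"data rate:       {data_rate / 1e6:.1f} MB/s")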

    Now, if you are familiar with the NTSC specification, the chroma subcarrier is at 3.579545 MHz, which is unfortunately just over the 3.125 MHz Nyquist limit at my scope's maximum streaming sample rate. Fortunately, there's no analog lowpass filter between the BNC connector on the PicoScope and its A/D converter, so the chroma carrier isn't lost, just aliased down below its normal frequency. Since the carrier is preserved this way, all I need to do to demodulate the chroma is use the lower (aliased) frequency it was shifted down to when writing the QAM demodulator in my own NTSC decoder software.
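
    (Concretely, the subcarrier folds around the Nyquist frequency and lands at 6.25 MHz - 3.579545 MHz = 2.670455 MHz. A tiny sketch of that calculation; the note about the reversed phase sense is something any decoder working at the alias has to deal with:)

        # Where the 3.579545 MHz NTSC chroma subcarrier lands when sampled at 6.25 MHz.
        fs = 6.25e6                      # streaming sample rate
        fc = 3.579545e6                  # NTSC color subcarrier
        nyquist = fs / 2.0               # 3.125 MHz

        alias = fs - fc if fc > nyquist else fc
        print(f"alias frequency: {alias / 1e6:.6f} MHz")   # 2.670455 MHz
        # Because the carrier folds back across Nyquist, the alias is spectrally
        # inverted: its phase appears to rotate the opposite way, which the QAM
        # demodulator has to take into account when assigning the color axes.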

    There are three steps in this process:

    The first step is to use the PicoScope capture software I wrote to actually capture 10 seconds of raw NTSC composite video. And yes, after recording 10 seconds at this sample rate with 2 bytes per sample, the file is about 120 MB. That's quite large for such a short duration, but at least I got a usable file. Note that even though each sample takes 2 bytes, the data is just padded to 2 bytes and doesn't have an actual bit depth of more than 8 bits per sample (this is done so that code written against the PicoScope SDK can use the same processing path for both low and high bit-depth data from scopes that genuinely support higher bit depths). Make sure the scope is set to the +/-1V range (which gives you a 2V p-p range between -1V and +1V); I recommend this over the +/-0.5V range, which is exactly 1V p-p, because it's entirely possible for a nominally 1V p-p signal to drift above 0.5V or below -0.5V, and if it goes out of range you lose those samples.
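
    (Just to illustrate the file format rather than my actual capture program: assuming the dump is little-endian signed 16-bit samples and that positive full scale corresponds to +1V on the +/-1V range — both assumptions you'd need to check against your own capture code — a few lines of NumPy will read it back as volts. The file name is a placeholder.)

        # Illustrative only: read a raw 2-byte-per-sample capture back as volts.
        # Assumes little-endian signed 16-bit samples with +32767 = +1 V on the
        # +/-1 V range; adjust to match how your capture program writes the file.
        import numpy as np

        FS = 6.25e6                       # capture sample rate
        FULL_SCALE_COUNTS = 32767.0       # assumed ADC full-scale count
        VOLT_RANGE = 1.0                  # +/-1 V input range

        raw = np.fromfile("capture.bin", dtype="<i2")          # ~125 MB for 10 s
        volts = raw.astype(np.float32) * (VOLT_RANGE / FULL_SCALE_COUNTS)
        print(len(volts), "samples =", len(volts) / FS, "seconds")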

    The second step is to fix the DC offset, since the output of most analog video sources is AC coupled, or is DC coupled but with the wrong DC offset. In any case, blanking level should sit at 0 volts and sync level at -0.235V (maximum chroma excursion should be 130 IRE and sync level -40 IRE, a 170 IRE range; mapping that range to 1 volt means -40 IRE should be -0.235V when the DC offset is correct). Unfortunately the raw signal doesn't always fit these exact specs, so pre-processing needs to be done on the captured signal. I did my pre-processing in the audio editing software GoldWave. The first pre-processing step is to make a copy of the waveform and lowpass filter it with a simple (1,2,1)/4 kernel in the time domain (no sharp FFT filters, because those give you ripples in the time domain, which you really do want to avoid), using GoldWave's equation editor, which lets you write your own custom signal-processing equations. The next step is to use a custom piece-wise equation to find the DC offset of the sync tips. Then subtract that offset from your original raw signal so that all the sync tips line up at 0, and subtract a further 0.235 to bring the blanking level down to 0 and the sync level down to -0.235. Then divide the whole signal by 0.235 and multiply by 40 to scale it to the IRE range. Finally, save this as a 32-bit floating-point raw file.
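
    (For anyone who would rather script this than do it in GoldWave, here is a rough NumPy equivalent of those steps. The sync-level estimate below just takes a very low percentile of the lowpassed copy rather than my piece-wise equation, and the file names are placeholders, so treat it as a sketch of the idea rather than my actual processing.)

        # Rough equivalent of the GoldWave pre-processing:
        # lowpass a copy -> estimate the sync-tip level -> re-reference so that
        # blanking = 0 V and sync = -0.235 V -> convert volts to IRE (using the
        # 170 IRE = 1 V mapping from this post) -> save as raw 32-bit float.
        import numpy as np

        volts = np.fromfile("capture_volts.f32", dtype=np.float32)

        # (1,2,1)/4 lowpass in the time domain (no sharp FFT filter, no ringing)
        smoothed = np.convolve(volts, np.array([1.0, 2.0, 1.0]) / 4.0, mode="same")

        # Estimate the sync-tip level.  I used a piece-wise equation in GoldWave;
        # a very low percentile of the smoothed copy is a crude stand-in for that.
        sync_level = np.percentile(smoothed, 0.5)

        # Line the sync tips up at 0, then shift so blanking = 0 V, sync = -0.235 V
        fixed = volts - sync_level - 0.235

        # 0.235 V corresponds to 40 IRE under the 170 IRE = 1 V assumption
        ire = fixed / 0.235 * 40.0

        ire.astype(np.float32).tofile("capture_ire.f32")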

    The third step is to use my own NTSC decoder software. It expects the signal to exactly fit the official NTSC specification for signal levels, with units of IRE rather than volts, which is why the pre-processing step above is needed (in the future I hope to build that pre-processing into the decoder so it's no longer a separate step). The decoder locks onto the sync pulses, and does so in a way that still lets it display those pulses (rather than consuming them for sync only and never showing them). It uses the first equalizing pulse of each field (not the actual V-sync pulse) as V-sync, so that the first displayed line of output actually contains that field's first equalizing pulses (these, not the V-sync pulses, are the first thing in a video field). It then decodes the color signal from the chroma subcarrier and converts the resulting YPbPr data to RGB for display. It also scales the full 170 IRE range into 0 to 255 so that everything, including sync pulses, can be seen: sync is black in the output, blanking is dark gray, and white is white. Because the brightness range is scaled this way, with blanking at dark gray (and black level slightly lighter), the decoded colors also look a bit washed out, but that can't be helped if you want to display the full range of possible values in a composite video signal. Since computers can't handle half-lines, the 263-line field is kept as-is and the 262-line field is padded to 263 lines by copying the line above, so the output video field has a fixed height of 263 lines. The fields are then written as 32-bit RGB pixels to a raw video file. The output resolution is 396x263, and interestingly this makes the active picture portion of the field approximately 320x240, so its aspect ratio actually looks correct. In fact, I didn't even need to combine fields into frames; I was able to use the fields themselves as low-resolution video frames (basically 240p60 video instead of 480i60 or 480p30).
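
    (The decoder itself is too long to post, but the core chroma trick — demodulating against the aliased subcarrier instead of 3.579545 MHz — looks roughly like the sketch below. It leaves out color-burst phase locking, proper luma/chroma separation, and the exact YPbPr matrix, and the filter length is a made-up placeholder, so it's an illustration of the idea rather than my decoder's actual code.)

        # Sketch of QAM chroma demodulation at the aliased subcarrier frequency.
        # `line_ire` is one scanline of the pre-processed signal (IRE units)
        # sampled at 6.25 MHz.  Burst locking, real filters and the color matrix
        # are omitted; this only shows demodulating at the alias frequency.
        import numpy as np

        FS = 6.25e6
        F_ALIAS = FS - 3.579545e6        # 2.670455 MHz, where the chroma landed

        def demod_chroma(line_ire, phase0=0.0):
            """Return (rough_luma, chroma_a, chroma_b) for one scanline."""
            n = np.arange(len(line_ire))
            w = 2.0 * np.pi * F_ALIAS * n / FS + phase0
            # Quadrature products against the aliased carrier.  The alias is a
            # mirrored image of the original subcarrier, so its phase rotates the
            # opposite way; phase0 (normally taken from the color burst) decides
            # which product ends up on which color-difference axis.
            a_mix = line_ire * np.cos(w)
            b_mix = line_ire * np.sin(w)
            # Crude moving-average lowpass to keep only the baseband chroma
            # (placeholder length, not a tuned filter).
            k = np.ones(13) / 13.0
            chroma_a = 2.0 * np.convolve(a_mix, k, mode="same")
            chroma_b = 2.0 * np.convolve(b_mix, k, mode="same")
            # A real decoder separates luma properly (notch or comb filter);
            # this is just a placeholder so the function returns something usable.
            rough_luma = np.convolve(line_ire, k, mode="same")
            return rough_luma, chroma_a, chroma_b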

    FFPLAY, the video player that comes with FFMPEG, supports playing raw video. You just need command line switches to tell it the specifications (image size, frame rate, and so on — the stuff that would normally be in the header of a video file). And yes, FFPLAY does show the video as expected, so I know my processing steps worked. I then used FFMPEG to load the raw video file and write a compressed MP4 video file (H.264 codec). I used quite high quality settings to make sure there wouldn't be visible compression artifacts that could obscure visual analysis of the various parts of the video field on playback. Fortunately the output file, even with these high quality settings, took just under 25 MB of hard drive space, small enough to attach to this post, which indeed I've done.
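
    (For reference, the command lines look something like the ones below. The pixel format, frame rate and file names are assumptions — the pixel format has to match whatever byte order my decoder writes, NTSC fields arrive at roughly 59.94 per second, and 4:4:4 is used for the MP4 because the odd 263-line height can't be vertically chroma-subsampled without padding.)

        ffplay -f rawvideo -pixel_format bgra -video_size 396x263 -framerate 59.94 fields.raw

        ffmpeg -f rawvideo -pixel_format bgra -video_size 396x263 -framerate 59.94 -i fields.raw \
               -c:v libx264 -crf 16 -pix_fmt yuv444p fields.mp4
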
    [Attachment: MP4 video file]
  2. It looks like the MP4 video doesn't embed in a playable way, so I'm posting a screenshot of one frame from its playback in FFPLAY (this is a single field, not a full frame, of the actual NTSC signal).
    [Attachment: screenshot of one field played in FFPLAY]
    Last edited by Videogamer555; 29th May 2022 at 02:43.
  3. Upon further research, it appears that my calculation for mapping the IRE range to the volts range is wrong, so I will need to revise it before my next capture. It seems an IRE range of -40 to 100 (the range of a black-and-white signal with no chroma, i.e. 140 IRE, not 170 IRE) is what is supposed to map to the 1V p-p range, so the maximum possible chroma excursion of 130 IRE actually would go outside of the 1V p-p range.
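
    (Working the corrected numbers through: with 140 IRE spanning 1V, one IRE is 1/140 ≈ 7.14mV, so sync at -40 IRE sits near -0.286V, reference white at +100 IRE near +0.714V, and a 130 IRE chroma peak near +0.929V — beyond the nominal 1V p-p span but still inside the scope's +/-1V capture range. The conversion in my second step then becomes a multiply by 140, with the sync-to-blanking offset at 40/140 ≈ 0.286V instead of 0.235V.)

        # Corrected volts <-> IRE mapping: 140 IRE (-40 to +100) spans the 1 V p-p.
        volts_per_ire = 1.0 / 140.0                 # ~7.14 mV per IRE
        sync_tip_v    = -40 * volts_per_ire         # ~ -0.286 V
        white_v       = 100 * volts_per_ire         # ~ +0.714 V
        chroma_peak_v = 130 * volts_per_ire         # ~ +0.929 V: outside 1 V p-p,
                                                    #   but inside the +/-1 V range
        print(sync_tip_v, white_v, chroma_peak_v)
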
  4. Video doesn't have to embed