VideoHelp Forum
  1. So I'm looking at this timing diagram here (see attachment) for the NTSC fields and am trying to figure out this half-line thing. In particular, when an analog CRT TV starts scanning, does it actually start with image signal from the first field or from the second field? That is, is the first displayed line from field 1 or field 2?

    Also, I know that the electron beam scans in a way that's slightly diagonal, so that it scans off the bottom of the screen at about the time it reaches the horizontal center of the picture. Likewise, the beam comes back onto the top of the screen at the horizontal center of the picture. Since it comes onto the top of the screen at the center (not the left side), it would make sense for the top of the screen to contain an extra half line of the top part of the image. However, in the timing diagram for the NTSC signal, the extra half line appears at the top of field 2, not field 1. That makes no sense. Why would there be an extra half line at the top of the second field? Unless, of course, the TV actually starts by scanning field 2, and NOT field 1.

    Can you explain this to me here?
    [Attachment: NTSC fields.png (ID 49669)]

  2. The sequence of pre-equalization and equalization pulses encodes the field order and the chroma burst sequence; the half-line shift is also introduced there.
  3. aedipuss (aBigMeanie) | Join Date: Oct 2005 | Location: 666th portal
    Not of great interest anymore. Analog scanning started with line 1 and proceeded line by line, but there are "overscan" lines at both the bottom and top that weren't displayed on "normal" TVs.
    --
    "a lot of people are better dead" - prisoner KSC2-303
  4. Member | Join Date: Jul 2007 | Location: United States
    I don't know if this answers the OP's question, but this image confirms what I recall reading decades ago. Line 1 is the first full scan line and line 263 is the other half of line 262, as stated, starting in the middle of the top of the screen.

    [Attachment 49672]


    Source: http://www.ni.com/en-us/innovations/white-papers/06/anatomy-of-a-camera.html

    Okay...I'm out, that's the limit of what I know about scanlines.
  5. Member | Join Date: Jul 2007 | Location: United States
    Okay, back in.

    I may be missing something here (not the first time and it won't be the last), but what do you mean by 'image'? 262 1/2 lines comprise the field, but some number of them, while visible on a monitor set to overscan or when captured by certain hardware, carry no visible 'image' information [they are normally unseen and, as aedipuss stated, used for "housekeeping" purposes].

    *Jumps back out*
    Last edited by lingyi; 28th Jul 2019 at 21:51. Reason: Clarity, additional info
  6. aedipuss (aBigMeanie) | Join Date: Oct 2005 | Location: 666th portal
    Most people never truly understood the old interlaced TV standard. 262 1/2 lines are half of the NTSC picture, and another 262 1/2 lines are the other half. Those "frames" were divided into "fields", which were broadcast as all the odd-numbered lines and then all the even-numbered lines, each field scanned from the top of the screen to the bottom. The TVs of that time had phosphors that retained their image just long enough to build a complete picture from the odd/even presentation.

    The top and bottom few lines were not presented on the TV; they were used for other things by design.
    --
    "a lot of people are better dead" - prisoner KSC2-303
  7. I see no point in contrasting VBI lines with regular video lines here; the OP is asking about details that live in the VBI lines, and those lines can't be used to carry anything other than timing information.
    And the half line is simply an interval whose period is half a line; it is used to introduce the interlace line offset. The main principle is quite simple: all of those pulses "pump" a capacitor that is used as an integrator and memory. The capacitor is fed with pulses, every pulse raising its voltage a bit, and a voltage comparator is connected to the capacitor; if the voltage passes a particular threshold, the comparator output signals this to the receiver.
    The sequence of pulses and their lengths is designed so that it signals not only the beginning of each field and the field order, but also the chroma subcarrier burst sequence. It is easier to understand how this works if some real hardware synchronization circuitry is analysed.
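    A rough numerical sketch of the integrator-plus-comparator idea described above (a toy model, not any real chip; the pulse widths are the nominal NTSC values, while the time constant and threshold are assumptions):
    Code:
# Toy model of an analog vertical sync separator: a leaky RC integrator fed by
# composite sync, plus a comparator. Ordinary horizontal sync pulses keep the
# capacitor voltage low; the broad vertical pulses charge it past the threshold.

DT, LINE = 0.1e-6, 63.5e-6      # simulation step and NTSC line period (seconds)
TAU, THRESH = 20e-6, 0.5        # assumed integrator time constant and threshold

def sync_active(t):
    """Return 1.0 while composite sync sits at sync-tip level, 0.0 otherwise."""
    line_no = int(t // LINE) + 1
    if line_no <= 3 or 7 <= line_no <= 9:       # narrow equalizing pulses, half-line rate
        return 1.0 if (t % (LINE / 2)) < 2.3e-6 else 0.0
    if 4 <= line_no <= 6:                       # broad (serrated) vertical pulses
        return 1.0 if (t % (LINE / 2)) < 27.1e-6 else 0.0
    return 1.0 if (t % LINE) < 4.7e-6 else 0.0  # ordinary horizontal sync

v, fired, t = 0.0, None, 0.0
while t < 12 * LINE:
    v += (sync_active(t) - v) * DT / TAU        # the pulses "pump" the capacitor
    if fired is None and v > THRESH:            # comparator trips: vertical sync detected
        fired = t
    t += DT

print("threshold crossed at %.1f us, during the broad-pulse lines" % (fired * 1e6))
    The narrow equalizing pulses run at twice the line rate so that this integrator behaves the same whether the field begins at the start of a line or halfway through one.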
  8. How does modern NTSC handle these weird "half line" things? Modern computers tend to work best with integers. In particular, how do modern video cards that have NTSC output handle this?
  9. Originally Posted by Videogamer555 View Post
    How does modern NTSC handle these weird "half line" things? Modern computers tend to work best with integers. In particular, how do modern video cards that have NTSC output handle this?
    Depends on the hardware. Older hardware generated video fields with different numbers of lines (262/263), the so-called short and long fields; modern hardware simply implements proper video timing based on RS-170A/ITU-R BT.470. From a hardware perspective, creating the proper sequence is very simple (a state machine plus a LUT) - check the 74ACT715 datasheet. A modern digital sync separator can be implemented, for example, this way: https://www.xilinx.com/support/documentation/application_notes/xapp1308-hsync-video.pdf
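    To make the "state machine + LUT" point concrete, here is a hypothetical sketch (my own, not the 74ACT715's actual internals): a digital generator just counts half-line periods, 1050 per frame, and looks each one up in a table, so no special "half line" case ever appears.
    Code:
# Counting in half-line periods (1050 per 525-line frame) removes any special
# handling of half lines: field 2 simply begins on an odd count, i.e. mid-line.

HALF_LINES_PER_FIELD = 525        # 262.5 lines = 525 half-line periods

def classify(h):
    """Return (field, pulse type) for half-line period h of the frame, 0-based."""
    n = h % (2 * HALF_LINES_PER_FIELD)
    field = 1 if n < HALF_LINES_PER_FIELD else 2
    p = n % HALF_LINES_PER_FIELD  # position within the field, in half lines
    if p < 6:    kind = "equalizing"   # lines 1-3 of the field
    elif p < 12: kind = "broad"        # lines 4-6, the serrated vertical pulses
    elif p < 18: kind = "equalizing"   # lines 7-9
    elif p < 40: kind = "blanked"      # the rest of the vertical interval
    else:        kind = "active"
    return field, kind

for h in (0, 6, 40, 524, 525, 531):
    print(h, classify(h))
# Field 1 starts at half line 0, field 2 at half line 525 - an odd number, which
# is exactly the half-line shift the analog waveform shows.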
  10. Ok, so looking at the number of video lines between vertical blanking intervals, it looks like there are 242.5 per field, for a total of 485 per frame. The VBI itself contains 9 lines of equalization and sync pulses, followed by 20 lines for field 1 (or 19.5 lines for field 2) that are blank (like normal video lines, except that they contain only blanking-level signal after each HSync: no image signal, not even black-level signal).

    That 485-line figure is interesting, because I've read that there are supposed to be 486 lines of video after each VBI, and 485 is not 486. Also, if you ignore the line that gets cut in half, you have 484 full lines of video; again, 484 is not 486. I'm not sure where people got the idea that there are 486 lines of video in an NTSC frame.
  11. Cornucopia (Member) | Join Date: Oct 2001 | Location: Deep in the Heart of Texas
    I'm not sure where you get the idea that there are 485! You aren't really thinking this through as a technician would - to a computer, there is no such thing as half a line.
    Of course computers have to capture/convert a WHOLE line's worth of signal in order to end up with whole-number (i.e. full-length) lines. So professional capture cards, which want to get every element of visible image they can, include the blanking preceding the first visible half line and the blanking following the last visible half line. Don't count those in your calculations and you'll see it comes out right.

    Scott
  12. It is correct that the active picture of an analog NTSC signal is 485 scan lines. Computers normally deal with rectangular arrays so the half lines at the top and bottom are captured as full scan lines. So capture devices that capture the full active picture capture 486 scan lines. With such caps you can see that the top and bottom scan lines are only half full. This is where 485 vs. 486 scan line confusion originates.
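    A quick arithmetic check of the counts described above (my own figures, taking the commonly quoted 20 blanked lines per field):
    Code:
total_lines       = 525
blanked_per_field = 20
active_lines      = total_lines - 2 * blanked_per_field
print(active_lines)               # 485, i.e. 484 full lines plus 2 half lines

# A capture device storing a rectangular array has to round each half line up
# to a full row, so the two half lines become two rows of their own:
captured_rows = (active_lines - 1) + 2
print(captured_rows)              # 486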
  13. Originally Posted by jagabo View Post
    It is correct that the active picture of an analog NTSC signal is 485 scan lines. Computers normally deal with rectangular arrays so the half lines at the top and bottom are captured as full scan lines. So capture devices that capture the full active picture capture 486 scan lines. With such caps you can see that the top and bottom scan lines are only half full. This is where 485 vs. 486 scan line confusion originates.

    There are a couple of things it could do that might be better. More advanced capture hardware might do this instead: since the second half of that one scan line actually contains the last portion of image data from the first field (not the first portion of image data from the second field, even though that half line sits at the top of the second field in the signal), an advanced video capture device might grab the half line at the start of the active image area of field 2 and move it to the end of field 1, creating a proper image with the maximum truly valid height of 485 lines. No strange line count like 486.

    Alternatively, less advanced hardware could ignore the 2 lines of signal that contain only half lines of image data and drop them from the image. This would create an image with 484 lines of image data. Again, no strange line count like 486.


    However, in both cases a properly working video capture card produces an image with more than 480 lines of image data. The funny thing is, most video capture devices (or at least all of the ones I've ever used) capture only 480 lines. Why 480? Why not capture as many valid image lines as possible? Is it cheaper to make a video capture card that only captures 480 lines? Or is it because a very common 4:3 digital image size is 640x480, so 480 lines of image data corresponds to this common image size?

    And on a related topic, why do most video capture cards capture at a resolution of 720x480 rather than 640x480? 640 is a standard digital image width, while 720 is not. I mean, the source is an analog signal with no concept of pixels; depending on the sample rate of your capture card, a video line could be 100 pixels wide or 10 million pixels wide. Also, there's no official standard that says how many pixels wide a digitized copy of an NTSC video frame should be. For some reason, though, all of the cheap video capture cards I have bought on eBay use a resolution of 720x480. Are all of these eBay devices using the same internal components (even if they have different-shaped cases with different brand names on them) because they all come from Chinese manufacturers that all use the same, probably very cheap, components?
    Last edited by Videogamer555; 30th Jul 2019 at 04:13.
  14. Originally Posted by Videogamer555 View Post
    Ok, so looking at the number of video lines between vertical blanking intervals, it looks like there are 242.5 per field, for a total of 485 per frame. The VBI itself contains 9 lines of equalization and sync pulses, followed by 20 lines for field 1 (or 19.5 lines for field 2) that are blank (like normal video lines, except that they contain only blanking-level signal after each HSync: no image signal, not even black-level signal).

    That 485-line figure is interesting, because I've read that there are supposed to be 486 lines of video after each VBI, and 485 is not 486. Also, if you ignore the line that gets cut in half, you have 484 full lines of video; again, 484 is not 486. I'm not sure where people got the idea that there are 486 lines of video in an NTSC frame.
    First of all, the half line exists not as a half line but as a group of 9 lines of pulses with particular widths and intervals. These are 9 line periods of roughly 64 us each, filled with rectangular pulses with particular time characteristics. So forget about half video lines (yes, they exist, but they are simply normal video lines interrupted, or overlaid, by the signal from those 9 lines).

    Overall there are 2 groups of 20 VBI lines (9 of those 20 carry the pulses and have no colour subcarrier burst, so they are not real video lines), so in total there are 40 VBI lines. The math is simple: 525 - 40 = 485 theoretical video lines. 486 is used to simplify calculation and avoid the fraction. This is acceptable, since one VBI line is not filled with VBI data and may carry active video.

    480 lines is an outcome of digitisation (technology progress): firstly, it is divisible by 16 (the MPEG-2 interlaced macroblock size), and secondly, the extra lines are not visible anyway on a common consumer setup.

    Modern video compression forced mod-8/mod-16 video formats, thus 720x480 (the real analogue active video, sampled at 13.5 MHz, can be at most about 716 pixels wide).

    And to be precise: digital video processing has been in use since at least the first half of the 70s, and 486 lines applies to professional digital video equipment made before ITU-R BT.601/ITU-R BT.656 was accepted as the de facto worldwide digital video standard.

    Unless you have access to very old recordings (most likely from a professional studio source, before the MPEG-2 era), 486 lines is not your problem.

    Don't shoot the messenger - China has nothing to do with the 480-line limitation. If you are searching for video capture more flexible than BT.601/BT.656 timing, you need something different, for example an ancient solution based on the Bt848, where video acquisition can be extended beyond 480 lines (in fact you can acquire over 500 lines if you need to, in raw VBI mode).
    Btw, 640 pixels and 480 lines are standard only in the PC world. In the TV world you may have 352x480 or even 240x480 - bandwidth is all that defines analogue video.

    Measure the active area of your TV with a ruler (a CRT, if you have one) - that is the real display aspect ratio. Pixels may have very weird aspect ratios (also variable over time due to unavoidable timing errors), significantly different from the 1:1 you expect.

    The sources for all of the above math are:
    https://www.itu.int/rec/R-REC-BT.1700/en
    https://www.itu.int/rec/R-REC-BT.601/en
    https://www.itu.int/rec/R-REC-BT.656/en

    Sorry for not providing links to the US standards - they are not available for free...
    If you need non-standard video capture capabilities then perhaps https://en.wikipedia.org/wiki/DScaler may be helpful - it should not be a problem to build a dedicated PC with an old PCI motherboard and an old Bt848/878 card. Or better, go for something like https://www.analogdiscovery.com/ (14 bit, 30 MHz bandwidth): you can process your video fully in the digital domain (and also perform perfect TBC and ideal colour demodulation) - probably with results better than any commercially available video capture frontend.
    Last edited by pandy; 30th Jul 2019 at 15:54.
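    For what it's worth, a sketch of where the "about 716" and the 720 of BT.601 come from (my own arithmetic; the exact active-line duration depends on which blanking figure you read from the spec, so treat the second number as approximate):
    Code:
fs          = 13.5e6          # BT.601 luma sampling rate
line_period = 1 / 15734.264   # NTSC line period, about 63.56 us
active_line = 52.9e-6         # assumed analog active-line duration, ~52.7-53.1 us

print(round(fs * line_period))   # 858 samples across the whole line
print(round(fs * active_line))   # ~714 samples of actual picture; BT.601 stores 720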
  15. Cornucopia (Member) | Join Date: Oct 2001 | Location: Deep in the Heart of Texas
    Again, you are not considering this from the perspective of a technician (especially at the time that these things were codified).

    Analog video has been around for over half a century. It has been set in stone. It existed before digital video. Digital video was meant to emulate analog video, WITHIN the confines of the bandwidth and technology existing at the time.
    Analog video (as with anything analog) is not discrete/packetized, but continuous - there is no "raster" in analog (unless you're talking about CRT masks), just a series of lines (or more truthfully, one long cyclical line). To "digitize" something, you must sample it at regular points in time (audio, video) and space (video, photo).

    In order to MAXIMIZE the detail of an image, your sample is as highly/densely/finely-gridded as possible (given those constraints).

    The sampling of video started with a number of differing, competing methods before standardizing on the CCIR (ITU) 601 digital video standard, which has the benefit of being usable for BOTH PAL and NTSC video systems. A width of 704 or 720 samples (depending on whether you follow the ITU or MPEG variations) is the common maximum per line when 13.5MHz is the overall sampling rate.
    Yes, this makes the pixels be considered "non-square".

    480 is just a common-denominator, consumer version of 486. End consumers don't normally need those first/last few lines (or half lines), or VBI lines like captioning or timecode, because those elements are incorporated elsewhere in a digital signal as metadata or auxiliary data instead of video.
    480 is also MOD2, MOD4, MOD8, and MOD16 compatible, making it very friendly for older codecs that required this in their block-based algorithms. 486 is ONLY MOD2 compatible. Again, you have to remember the capability of the tech at the time it was worked out.

    Scott
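    The MOD point is easy to check with nothing more than arithmetic:
    Code:
for lines in (480, 486):
    print(lines, [m for m in (2, 4, 8, 16) if lines % m == 0])
# 480 -> [2, 4, 8, 16]   (480 = 30 * 16, so it tiles exactly into 16x16 macroblocks)
# 486 -> [2]             (block-based codecs would need cropping or padding)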
  16. Originally Posted by Cornucopia View Post
    Again, you are not considering this from the perspective of a technician (especially at the time that these things were codified).

    Analog video has been around for over half a century. It has been set in stone. It existed before digital video. Digital video was meant to emulate analog video, WITHIN the confines of the bandwidth and technology existing at the time.
    Analog video (as with anything analog) is not discrete/packetized, but continuous - there is no "raster" in analog (unless you're talking about CRT masks), just a series of lines (or more truthfully, one long cyclical line). To "digitize" something, you must sample it at regular points in time (audio, video) and space (video, photo).

    In order to MAXIMIZE the detail of an image, your sample is as highly/densely/finely-gridded as possible (given those constraints).

    The sampling of video started with a number of differing, competing methods before standardizing on the CCIR (ITU) 601 digital video standard, which has the benefit of being usable for BOTH PAL and NTSC video systems. A width of 704 or 720 samples (depending on whether you follow the ITU or MPEG variations) is the common maximum per line when 13.5MHz is the overall sampling rate.
    Yes, this makes the pixels be considered "non-square".

    480 is just a common-denominator, consumer version of 486. End consumers don't normally need those first/last few lines (or half lines), or VBI lines like captioning or timecode, because those elements are incorporated elsewhere in a digital signal as metadata or auxiliary data instead of video.
    480 is also MOD2, MOD4, MOD8, and MOD16 compatible, making it very friendly for older codecs that required this in their block-based algorithms. 486 is ONLY MOD2 compatible. Again, you have to remember the capability of the tech at the time it was worked out.

    Scott
    What's up with the 13.5 million smp/sec standard for digitizing? Why that specific sample rate? Assuming you are capturing both audio and video from an NTSC TV broadcast, you need no more than 12 million smp/sec. That gives you 6 MHz of bandwidth, which corresponds to the 6 MHz bandwidth of the NTSC signal (only downconverted to baseband, instead of the original transmitted frequency in the VHF or UHF TV bands). This lets you capture the complete NTSC TV broadcast signal.

    However, if you are talking about only the video signal, you can ignore the audio carrier and the lower sideband of the video carrier, and you are dealing with even less bandwidth. The demodulated NTSC video signal (which is also equivalent to the output of the composite video port on your camcorder or video card that supports composite video) has even less bandwidth: specifically about 4.1 MHz, meaning that you only need to sample at a rate of 8.2 million smp/sec. Or, if your ADC only samples at whole-MHz rates, that would be 9 million smp/sec, or the closest standard sample rate, which is 10 million smp/sec.

    At no point is a sample rate of 13.5 million smp/sec required. I'm not sure where that number comes from.
  17. Originally Posted by Videogamer555 View Post
    What's up with the 13.5 million smp/sec standard for digitizing? Why that specific sample rate? Assuming you are capturing both audio and video from an NTSC TV broadcast, you need no more than 12 million smp/sec. That gives you 6 MHz of bandwidth, which corresponds to the 6 MHz bandwidth of the NTSC signal (only downconverted to baseband, instead of the original transmitted frequency in the VHF or UHF TV bands). This lets you capture the complete NTSC TV broadcast signal.

    However, if you are talking about only the video signal, you can ignore the audio carrier and the lower sideband of the video carrier, and you are dealing with even less bandwidth. The demodulated NTSC video signal (which is also equivalent to the output of the composite video port on your camcorder or video card that supports composite video) has even less bandwidth: specifically about 4.1 MHz, meaning that you only need to sample at a rate of 8.2 million smp/sec. Or, if your ADC only samples at whole-MHz rates, that would be 9 million smp/sec, or the closest standard sample rate, which is 10 million smp/sec.

    At no point is a sample rate of 13.5 million smp/sec required. I'm not sure where that number comes from.
    The more important question is: why NOT 13.5 MHz as the sampling clock? For sampling component video this frequency is fine, and the NTSC Y bandwidth is not limited at the top, so it can even be above 8 MHz according to SMPTE 170M... If you are really curious to find the explanation for 13.5 MHz, read the attached papers. Enjoy.
    [Attachments: trev_304-rec601_wood.pdf, trev_304-rec601_rainger.pdf]
  18. Member | Join Date: Jul 2007 | Location: United States
    Just curious: how, if at all, is this related to your other thread about getting an RF demodulator?

    Edit: If you're curious about how video games use the extra lines in the VBI [and how it's handled by CRTs], there's lots of info from the programmers of the various systems' games.
    Last edited by lingyi; 1st Aug 2019 at 17:46.
  19. Originally Posted by pandy View Post
    Originally Posted by Videogamer555 View Post
    What's up with the 13.5 million smp/sec standard for digitizing? Why that specific sample rate? Assuming you are capturing both audio and video from an NTSC TV broadcast, you need no more than 12 million smp/sec. That gives you 6 MHz of bandwidth, which corresponds to the 6 MHz bandwidth of the NTSC signal (only downconverted to baseband, instead of the original transmitted frequency in the VHF or UHF TV bands). This lets you capture the complete NTSC TV broadcast signal.

    However, if you are talking about only the video signal, you can ignore the audio carrier and the lower sideband of the video carrier, and you are dealing with even less bandwidth. The demodulated NTSC video signal (which is also equivalent to the output of the composite video port on your camcorder or video card that supports composite video) has even less bandwidth: specifically about 4.1 MHz, meaning that you only need to sample at a rate of 8.2 million smp/sec. Or, if your ADC only samples at whole-MHz rates, that would be 9 million smp/sec, or the closest standard sample rate, which is 10 million smp/sec.

    At no point is a sample rate of 13.5 million smp/sec required. I'm not sure where that number comes from.
    The more important question is: why NOT 13.5 MHz as the sampling clock? For sampling component video this frequency is fine, and the NTSC Y bandwidth is not limited at the top, so it can even be above 8 MHz according to SMPTE 170M... If you are really curious to find the explanation for 13.5 MHz, read the attached papers. Enjoy.

    TV channel bandwidth over 6 MHz is not allowed by the FCC for over-the-air TV transmission. So most signals you will deal with (at least if they are broadcast-legal) can be handled with only a 12 MSmp/sec sampling rate, and that gives you the full video and audio signal.

    And the higher the sample rate, the more expensive the equipment. So a sample rate just BARELY high enough to meet your requirements is what you should expect to buy, at least if you don't want to break your bank account.
  20. Cornucopia (Member) | Join Date: Oct 2001 | Location: Deep in the Heart of Texas
    601 wasn't created to specifically sample BROADCAST signals!

    99% of pro A->D capture (what 601 was originally created for) uses master tapes played back over the best channel available (usually component). This has a likelihood of higher available bandwidth than your assumption accounts for.

    It seems you are still not paying attention to the available tech of those times, nor thinking as an engineer would.

    Scott
  21. dellsam34 (Capturing Memories) | Join Date: Jan 2016 | Location: Member Since 2005, Re-joined in 2016
    Originally Posted by Videogamer555 View Post
    However, in both cases a properly working video capture card produces an image with more than 480 lines of image data. The funny thing is, most video capture devices (or at least all of the ones I've ever used) capture only 480 lines. Why 480? Why not capture as many valid image lines as possible? Is it cheaper to make a video capture card that only captures 480 lines? Or is it because a very common 4:3 digital image size is 640x480, so 480 lines of image data corresponds to this common image size?
    My capture device, a BrightEye75, captures at 720x486 for NTSC. As pandy mentioned above, that's the standard for professional capture devices and cards. I had to trim off those 6 lines, though, when encoding to a lossy video format.
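    The trim itself is nothing exotic; a sketch of one common approach (the 4-top/2-bottom split is just one convention, and workflows differ):
    Code:
def trim_486_to_480(frame_rows):
    """frame_rows: 486 captured scan lines; return the 480 kept for encoding."""
    assert len(frame_rows) == 486
    return frame_rows[4:484]      # drop 4 lines from the top, 2 from the bottom

print(len(trim_486_to_480([bytes(720)] * 486)))   # 480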
  22. Originally Posted by Videogamer555 View Post
    Originally Posted by pandy View Post
    For sampling component video this frequency is fine, and the NTSC Y bandwidth is not limited at the top, so it can even be above 8 MHz according to SMPTE 170M...

    TV channel bandwidth over 6 MHz is not allowed by the FCC for over-the-air TV transmission. So most signals you will deal with (at least if they are broadcast-legal) can be handled with only a 12 MSmp/sec sampling rate, and that gives you the full video and audio signal.

    And the higher the sample rate, the more expensive the equipment. So a sample rate just BARELY high enough to meet your requirements is what you should expect to buy, at least if you don't want to break your bank account.
    Once again - for component video, 13.5 MHz is more than fine. You seem to be focused only on the consumer side; don't forget that a broadcast studio's signal processing chain is far more complicated than a simple consumer decoder.
    Please read the provided papers - there is a good explanation of why 13.5 MHz, why 720 pixels, why 576/480 lines, etc.
    IMHO it is rather late to reinvent the wheel called BT.601/656.
    Btw, you will not find any real system where the sample rate matches the absolute minimum required (the Nyquist rate); a slightly higher sample rate is always used.
    Secondly, don't forget that in Europe the Y bandwidth can be over 6 MHz for the broadcast RF signal (8 MHz UHF channels, audio carrier at 6.5 MHz) - a single worldwide standard MUST fit all targets.
    Last edited by pandy; 4th Aug 2019 at 07:19.
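    One commonly cited reason for 13.5 MHz, as a quick check rather than a substitute for the papers pandy points to: it is a multiple of 2.25 MHz, and 2.25 MHz is the lowest frequency that is an exact multiple of both the 525/59.94 and 625/50 line rates, so both systems get a whole number of samples per line.
    Code:
ntsc_line_rate = 4.5e6 / 286     # 15734.265... Hz (525-line / 59.94 Hz systems)
pal_line_rate  = 15625.0         # Hz (625-line / 50 Hz systems)

print(round(13.5e6 / ntsc_line_rate, 6))   # 858.0 samples per line
print(round(13.5e6 / pal_line_rate, 6))    # 864.0 samples per line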
  23. Cornucopia (Member) | Join Date: Oct 2001 | Location: Deep in the Heart of Texas
    Exactly. A sample rate matching precisely the minimum of 2x the bandwidth would require an infinitely steep brickwall filter, which is NEVER a good thing.

    Scott
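    A toy illustration of that point (my own example, with 4.2 MHz standing in for the top of the video band): sampled at exactly twice its frequency, a tone can land entirely on its own zero crossings, and the sampling phase that decides this is not under your control.
    Code:
import math

f = 4.2e6                                  # tone at the top of the band
for fs in (8.4e6, 13.5e6):                 # exactly 2x, then a rate with headroom
    peak = max(abs(math.sin(2 * math.pi * f * k / fs)) for k in range(64))
    print(fs, round(peak, 6))
# 8.4e6  -> 0.0   the tone vanishes for this particular sampling phase
# 13.5e6 -> ~1.0  comfortably captured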
  24. Originally Posted by Cornucopia View Post
    Exactly. A sample rate matching precisely the minimum of 2x the bandwidth would require an infinitely steep brickwall filter, which is NEVER a good thing.

    Scott
    An infinitely steep brickwall filter would also be infinitely long, i.e. it can't be realized in practice (of course this is only partly true, since in real life the stopband attenuation is finite and so is the filter, but either way it is easier to oversample the signal and allow a relatively gentle transition band).
  25. Cornucopia (Member) | Join Date: Oct 2001 | Location: Deep in the Heart of Texas
    Agreed. That's why a minimal SR isn't a best practice, and raising it IS (to either a somewhat higher SR with a gentler native rolloff, or an oversampled rate with an easy rolloff) - contrary to what the OP was thinking he/she wanted.

    Scott
  26. Originally Posted by Cornucopia View Post
    Agreed. That's why a minimal SR isn't a best practice, and raising it IS (to either a somewhat higher SR with a gentler native rolloff, or an oversampled rate with an easy rolloff) - contrary to what the OP was thinking he/she wanted.

    Scott
    Ditto!

    IMHO it is too late anyway (by around 30 years) to redefine the sampling rate for SD component video - I like "academic" discussions, but this one is a really abstract one...

    Btw, the digital oscilloscope golden rule is "at least 5 samples per sine period", i.e. a sampling rate of at least 2.5x fmax; and the second golden rule is that you never have enough samples - more samples is always* better.
    *Always, that is, if you didn't screw something up totally and you can afford to process those samples. Nyquist-Shannon-Kotelnikov defines the absolute minimum, i.e. the boundary criterion - if you know what you are doing, you can undersample your signal. (Btw, I disagree with Wikipedia here - undersampling is not the same thing as bandpass sampling; bandpass sampling is the special case of undersampling where the wanted signal still fulfils the Nyquist-Shannon-Kotelnikov criterion, i.e. is not really undersampled.)
    Last edited by pandy; 6th Aug 2019 at 13:01.
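    Since undersampling came up, the folding arithmetic itself fits in one line (a trivial helper of my own, nothing standardized):
    Code:
def alias(f, fs):
    """Apparent frequency, in Hz, of a tone at f when sampled at fs."""
    return abs(f - fs * round(f / fs))

print(alias(4.43e6, 6.0e6))    # 1570000.0 -> a 4.43 MHz tone folds down to 1.57 MHz
print(alias(4.43e6, 13.5e6))   # 4430000.0 -> below fs/2, so it stays put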


