Here is a loopy idea.
Has anyone tried to emulate an NTSC encoder in software, or is anyone familiar with an effort to do so?
The next question is, why? I dunno, just something to do. Here in the U.S., there are still low-power TV stations broadcasting in analog NTSC.
It's easy to convert RGB into luminance using NTSC coefficients, but to be a true emulation you would need to synthesize a subcarrier.
There have been workflows around for about 20 years.
They might take VHS, if they still accept it,
or NTSC DV tape; you can use an NLE like Magix Vegas.
They might take an appropriate MPEG file instead; you'd need to ask for specs, but if you encode it with Magix Vegas there should be no problem, and such files can easily be tested remotely.
Before exporting in Vegas (or another NLE), just apply a broadcast-colors effect (7.5 IRE setup, or custom luma/chroma/composite limits), or even clamp to 16-235 RGB (Vegas can apply this), with audio balanced to a desired -6 dB or so at 48000 Hz.
NTSC is as good as obsolete. I can't imagine Vegas or any NLE program supporting it because the output would be basically useless; it wouldn't be compatible with anything. The few low-power TV stations still broadcasting analog NTSC probably use hardware encoders, just like the old days.
There are people in this world who restore steam locomotives and vacuum tube radios as hobby projects, and they enjoy it. You wouldn't run a railroad with steam locomotives in revenue service in this day and age, though.
What do VHS tapes have to do with the topic at hand?
Digital files? We didn't have digital files in 1970. It's 2020 now. In 2020 NTSC is a dinosaur.
What the heck are you talking about? What does "NTSC encoder" even mean?
You want to emulate something from the 1960s, before the digital era, with those analog rules? I just told you: you take a video and throw a bunch of filters on it to simulate NTSC. Am I in the Twilight Zone or something? In 1960 you recorded in analog to tape and broadcast it as such, using specific hardware with a bunch of knobs turning left and right to make a signal out of it, and a specific CRT at the receiving end, also equipped with a bunch of knobs. What part of it do you wish to simulate? Just the one part at the end, receiving the signal? How? It was analog.
I got the NTSC encoder working except for interlace.
I had thought about synthesizing a subcarrier but the math is way over my head. Much simpler to use the equations from Poynton.
I found the following formula to convert YIQ to a composite NTSC signal. It is from SMPTE 170M-2004.
Anybody want to take a crack at converting this back to YIQ? The math is way over my head. This is a "just for fun" project so don't knock yourself out.
// Composite signal per SMPTE 170M (7.5 IRE setup); Y, I, Q in percent units,
// fsc = subcarrier frequency, t = time in seconds, 33 degrees converted to radians
double N = 7.5 + 0.925 * Y
         + 0.925 * Q * Sin(2 * PI * fsc * t + 33 * PI / 180)
         + 0.925 * I * Cos(2 * PI * fsc * t + 33 * PI / 180)
// RGB to YIQ (FCC / SMPTE 170M coefficients)
double Y = R * 0.299 + G * 0.587 + B * 0.114
double I = (0.5959 * R) - (0.2746 * G) - (0.3213 * B)
double Q = (0.2115 * R) - (0.5227 * G) + (0.3112 * B)
// YIQ to RGB
double R = Y + 0.955986 * I + 0.620825 * Q
double G = Y - 0.272013 * I - 0.647204 * Q
double B = Y - 1.106740 * I + 1.704230 * Q
Last edited by chris319; 26th Jul 2020 at 21:24.
If you want to synthesize NTSC (which has absolutely nothing to do with VHS), yes, you can do it in software, but the result is a digital stream anyway. You still have to convert it back to analog using a fast DAC, so it becomes rather pointless when there are cheap hardware solutions costing only a few dollars that can directly convert RGB or YUV. You would be emulating an IC like the AD720 series in software. If you really want to try it, the first place to search is for VHDL models of the AD72x series, as these will have most of the math in them already.
Hi Brian -
I don't know where you got the notion of VHS tapes or hardware. This is strictly a software emulation at a cost of 0. I already have it encoding Y, I and Q. That was the easy part.
Theoretically I am generating the composite signal "N"; I just don't know how to decode it back to YIQ.
Most software outputs NTSC as 720x480, while the NTSC broadcast standard is Nx525, where N is the number of luma samples across each scan line of the raster, including all the sync intervals. So if you intend to broadcast it, it has to be converted to analog first, which defeats the purpose of having it in digital in the first place when you could feed the pure digital video to a hardware NTSC encoder and do both the DAC step and NTSC standardization in one go.
I recall one of the members here was able to capture the full VHS frame at 780x525 or something like that. I don't know how he did it; try to find that sample and see if you can reverse it. Though VHS and a live broadcast feed are two different things.
No VHS, no hardware, no broadcast. Just a software emulation.
Is this an academic exercise?
Otherwise, what's the purpose? Capturing a CVBS or RF input using a generic scientific sampler/A-to-D has its uses in bringing in difficult-to-otherwise-capture material (due to corruption, deterioration, etc.) and performing complex DSP that goes beyond what is feasible in the analog or hardware realms. Ultimately, those captures would hopefully be further decoded into normal (i.e., standards-based) digital video files. (BTW, those methods are still in a state not quite ready to rival normal capture workflows.)
But to go the other direction WHILE emulating would require either an additional round trip of decoding prior to normal (digital, or digital-to-analog) output, or being output through a similar generic scientific D-to-A device (comparable, but in the reverse direction; theoretically possible, but I've never seen one). Plus, storage of the emulated version would have much higher requirements.
And to emulate for some other reason?
Last edited by Cornucopia; 3rd Aug 2020 at 21:58.
Not academic, just for fun.
NTSC is analog; how are you going to emulate an analog signal with digital? It just doesn't make any sense.
Yes, that's my point. You are starting from an NTSC digital file already, so what do you want to do beyond that? Basically what you are saying is, "I have a digital audio file and I want to convert it to tape in the digital domain." If you can explain that to me, then I might be able to understand the video part.
I'm starting with a .bmp file and encoding it to NTSC, emulating the subcarrier in the digital domain.
Do you mean UHF/VHF carriers or DBS satellite carriers? After it's stripped from those carriers, it's just a plain NTSC signal like the file you are starting from.
First, a software, real-time video encoder (and more) runs on a cheap ESP32: https://github.com/rossumur/espflix (source available, so most of the work is already done).
Secondly, use any HDL (Verilog/VHDL) simulator with one of the few available color video encoders (for example https://opencores.org/projects/fbas_encoder ) and record the output bitstream to a file.
Thirdly, use the GNU Radio framework. A PAL encoder is described here, and NTSC should be slightly simpler (no subcarrier phase switch): https://hackaday.io/project/14904-analog-tv-broadcast-of-the-new-age . An inexpensive ($150) SDR such as https://www.analog.com/en/design-center/evaluation-hardware-and-software/evaluation-bo...lm-pluto.html# will give you an NTSC RF signal; if you are interested in NTSC baseband (aka CVBS), then https://www.analog.com/en/design-center/evaluation-hardware-and-software/evaluation-bo...ml#eb-overview should work for you.
Just baseband. Forget about UHF and VHF RF carriers.
No hardware. Just software emulation. The key word is emulation, not hardware implementation.
Also, just one frame, no motion.
It was easy to emulate Y, I and Q signals using formulae found in one of Poynton's books.
SMPTE 170M-2004 shows how to combine Y, I and Q into a composite signal, as explained in post #8 in this thread. I don't know how to decode this signal back to baseband Y, I and Q.
Last edited by chris319; 19th Aug 2020 at 17:48.
Well, here is an NTSC frame from the VHS decode project, and the second image marks the pure NTSC frame according to the D1 standard. All in digital, no hardware; mission accomplished. No chroma signal, though.
The approach may differ and various tricks are used to create real live video, but the basic principle stays unaltered.
Poynton's books have some errors (you will not create a working HW composite video encoder based on his books).
Apologies to Charles Poynton: yes, YIQ is very easy to create, but color composite is more complex; it is a classic example of vector calculation.
Use older references, perhaps, with a very nice description of the software way of CVBS generation; some PDFs are attached.
IMHO the easiest way is to use the GNU Radio framework; I'm almost sure I saw an analog NTSC encoder somewhere.
BTW, the 704/720-pixel formats use 13.5 MHz sampling. It is way easier to create a digital (software) encoder for so-called 4Fsc, i.e. a sampling frequency of 4 times the chroma subcarrier; this leads to video with 768 pixels, not 704/720.
Last edited by pandy; 21st Aug 2020 at 14:50.
The numbers in Poynton's book are for encoding and decoding RGB <--> YIQ and they agree with some other source - I'll have to search to tell you which.
In U.S. NTSC, the subcarrier frequency is constant at 3.579545 MHz; 4x that is 14.31818 MHz. The subcarrier is phase modulated and its frequency is constant.
I don't know why you brought up a HW encoder. Really, all I want to do is decode the composite signal back to YIQ. See post #8. The math is way over my head.
Last edited by pandy; 21st Aug 2020 at 14:51.
You asked for a software solution, and I provided references to working software solutions for a composite video encoder.
Are you saying a software decoder is not feasible?
Your thread title says encoder.
OK, I have a rough draft of my NTSC encoder and decoder working. It combines two subcarriers (I and Q) in quadrature, decodes them back to I and Q, and then to lovely RGB pictures. I had to employ some cheats, though, including some trig, which I have never studied.
The image is 753 x 483. Given a subcarrier frequency of 3.58 MHz, this gives 52.59 uSec visible and 10.97 uSec of H blanking, all within tolerance. Pixel aspect ratio is 1.001, close enough.