I am working on a project to capture video and audio data from a free-to-air antenna using a Hauppauge HVR 2250.
I need to capture 2-minute chunks of video data in real time, and then the chunks of video data
need to be imported into a relational database for processing reports with closed captions.
Does anybody have ideas on how to accomplish this project? A GraphEdit graph (even a thumbnail picture of one) would help, plus any programming advice.
Building a graph:
I tried the instructions in the articles above last night and built a graph using "Microsoft Network Provider" as the first filter in the chain. ("Microsoft ATSC Network Provider" did not connect.) However, I have been unable to get tuning to work to test the graph, so there is not much reason to post it. Maybe the articles will be enough to get you started.
Thank you - if you come up with a graph let me know.
The graph above both displays the output from one of the HVR 2250's tuners and captures it to a file. A dump filter rather than the generic File Writer filter is needed to capture the file. The ArcSoft File Dump filter arrived with one of the ArcSoft programs installed on my PC, but there are probably others that will work. You will need to use GraphEdit rather than GraphStudio to have access to the tuning settings tabs in the Microsoft Network Provider filter and possibly some other filter properties.
Pick a strong local channel. Use TVFool.com or RabbitEars.Info to find the physical channel number associated with it. For example, channel 2.1 (major channel = 2, minor channel = 1) might actually be broadcast on physical channel 27. Right click on the Microsoft Network Provider filter and click the "ATSC Tune Request Tab". Enter the physical channel, major channel, and minor channel in the appropriate boxes, click "Submit", and then click "OK" to close the window. Starting the graph will automatically fill the carrier frequency box.
Hint: Right-clicking on an output pin on the MPEG2 Demultiplexer filter and then selecting "Render Pin" can find the right filter or filters for you to complete that part of the graph.
Last edited by usually_quiet; 26th Aug 2014 at 11:41. Reason: Up too late last night. Some of what I wrote made no sense.
The other question: we need to be able to create 2-minute chunks of video from the free-to-air antenna. From some of the research I have done, it appears
that creating the chunks of video data requires a way to count the frames and then cut at a certain point. For example, I found that the resolution
apparently determines the frame rate, e.g. at 1280x720 the frame rate can be 29.97 frames per second. So a 2-minute chunk would be 120 seconds x 29.97 = 3596.4 frames.
You would then cut at that frame. What I have heard is that ffmpeg can help in counting the frames. I also gather you can of course vary the resolution.
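The chunk arithmetic can be sketched like this; a minimal example of mine, assuming a constant frame rate (the exact NTSC-family rates are fractional, 30000/1001 and 60000/1001, so exact fractions avoid drift over a 24/7 capture):

```python
# Frames in a fixed-length chunk at a constant frame rate.
from fractions import Fraction

def frames_per_chunk(chunk_seconds, fps):
    """Return the exact frame count as a Fraction; round as needed."""
    return Fraction(chunk_seconds) * fps

fps_29_97 = Fraction(30000, 1001)   # "29.97" frames per second
fps_59_94 = Fraction(60000, 1001)   # "59.94" frames per second

two_min_29_97 = frames_per_chunk(120, fps_29_97)  # ~3596.4 frames
two_min_59_94 = frames_per_chunk(120, fps_59_94)  # ~7192.8 frames

print(float(two_min_29_97), float(two_min_59_94))
```

Over hours of capture the 0.0036-frame difference between 29.97 and 30000/1001 per chunk adds up, which is why the exact fraction matters for cutting on frame boundaries.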
The video, audio, and closed caption data is then imported into a relational database, where it is knitted together and reports can then be generated. We also
need to be able to vary the resolution and possibly the frame rate to generate the best quality we can. One of the problems we are
trying to prevent: the videos that are generated skip every 2 minutes when they come out at 1280x720. I understand that is caused by the
buffer during the knitting process. We found that at a lower resolution, say 620x480, the skip or jump does not happen.
The problem is that 620x480 is too low.
I believe that ffmpeg can be used both as a frame counter and to adjust the resolution and frame rate. We might also want to create chunks from 1 minute up to say 10 minutes. Am I correct, or is there another way to do this?
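For the chunking itself, ffmpeg's segment muxer can cut a capture into fixed-length pieces without re-encoding. A sketch of mine that just builds the command line (paths and the output pattern are illustrative; verify the flags against your ffmpeg version):

```python
# Build an ffmpeg command that splits a capture into fixed-length chunks
# using the segment muxer, with stream copy (no re-encode), which also
# preserves the MPEG-2 user data carrying the captions.
def segment_command(src, out_pattern, chunk_seconds):
    return [
        "ffmpeg", "-i", src,
        "-c", "copy",                       # copy streams, no re-encode
        "-f", "segment",                    # segment muxer
        "-segment_time", str(chunk_seconds),
        "-reset_timestamps", "1",           # each chunk starts at t=0
        out_pattern,
    ]

cmd = segment_command("capture.ts", "chunk_%04d.ts", 120)
print(" ".join(cmd))
```

For 1-minute or 10-minute chunks you would only change the `chunk_seconds` argument.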
Keep in mind the project has us capture Television Stations 24/7.
Thanks for your help with this.
You need to learn more about the ATSC specification. 1280x720 is 720p60 in N. America, which actually has 59.94 frames per second. 1920x1080 is 1080i60 in N. America, which actually has 29.97 frames per second or 59.94 fields per second where each field becomes a frame when properly deinterlaced. I have personally seen 480i60 (29.97 frames per second or 59.94 fields per second) broadcast using a few different resolutions (704x480, 640x480 and 528x480) but not 620x480. As far as I have been able to find out, 620x480 is not allowed by the specification, nor is 720x480.
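The formats above, restated as a quick lookup table (frame rates as the exact NTSC-family fractions; the resolution list is just the ones I have personally seen broadcast):

```python
from fractions import Fraction

# ATSC broadcast formats seen over the air in N. America.
# (width, height) -> (scan type, frames per second)
ATSC_FORMATS = {
    (1920, 1080): ("interlaced",  Fraction(30000, 1001)),  # 1080i60
    (1280, 720):  ("progressive", Fraction(60000, 1001)),  # 720p60
    (704, 480):   ("interlaced",  Fraction(30000, 1001)),  # 480i60
    (640, 480):   ("interlaced",  Fraction(30000, 1001)),  # 480i60
    (528, 480):   ("interlaced",  Fraction(30000, 1001)),  # 480i60
}
```

Note that 620x480 does not appear; as far as I can tell it is not allowed by the specification.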
You do know that the closed captions are stored in the MPEG-2 video stream's GOP user data, right? If you capture the stream as my graph does, then the closed captions are also stored in that file. They have to be decoded to text to be read, but CCExtractor can do that. (Yes, I know you want real-time processing to extract the CCs, but the original video will have a backup of the same data.)
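A sketch of how CCExtractor could be wired in per chunk (I have not tested this; the plain `-o` output flag is the usual CCExtractor usage, but check your build, and the paths are just examples):

```python
# Run CCExtractor on each captured chunk to turn the EIA-608 data
# embedded in the MPEG-2 stream into an SRT subtitle file.
import subprocess

def ccextractor_command(chunk_path, srt_path):
    return ["ccextractor", chunk_path, "-o", srt_path]

def extract_captions(chunk_path, srt_path):
    """Run CCExtractor on one chunk; True if it exited cleanly."""
    result = subprocess.run(ccextractor_command(chunk_path, srt_path))
    return result.returncode == 0

print(" ".join(ccextractor_command("chunk_0001.ts", "chunk_0001.srt")))
```

The SRT output keeps per-caption timestamps, which is what you would want when lining the text up with the chunks in the database.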
I hardly ever use ffmpeg directly so I don't know if it can count frames, but would not be surprised if it can.
I'm only going to help you with the graphs, as I do build them once in a while. I have retired from programming.
Last edited by usually_quiet; 26th Aug 2014 at 11:02.
Thanks for the help with this. I have been forwarding what you are telling me to Henry, the developer. He is taking the graphs and other graphs we have and running them against the Hauppauge HVR 2250. Henry has researched Media Foundation and found that it does not have TV tuner support, so we have had to go back to DirectShow. The first part of the project is to develop a beta model in the US using closed captions, then expand out into teletext; we understand there is a difference. There is another model, the Hauppauge HVR 2200, which is made for teletext. The HVR 2250 is for the US market.
Our concern is to provide high-quality, high-resolution videos that do not skip as I have seen. The skipping is caused by buffering the data; Henry believes that Media Foundation would have prevented this problem by recalculating the frames and adapting for it. But again, Media Foundation is just for video cameras and
not for TV tuners. Media Foundation does run very well on Windows 7, Windows 8, and the coming Windows 9.
Any graphs you want to send are appreciated. You can also communicate with me via my email address if you want: email@example.com
The goal is to create a key word subject search to pull up ad hoc and email alert reports.
Just so you know, some DVB countries do not currently use teletext at all. They only use bitmap-based subtitles.
I had no idea; I thought closed captions were used in the US alone and the rest of the world used teletext.
That will be something to consider as we go further. We do have to start, of course, with the beta
model and just capture the closed captions, video, and audio. There are other issues related to
this project, such as capturing the ticker, that is, the video text images generated by
a character generator on the television screen. I have seen this done. We are also looking at
capturing the voice using speech-to-text.
Wikipedia has a list of the countries that have converted or plan to convert to ATSC. The USA, Canada, and Mexico use EIA-608 and EIA-708 closed captions for ATSC, but EIA-608 is the only type of ATSC closed captions with good software support at present. I don't know what Korea or the rest of the countries using (or planning to use) ATSC do about closed captioning.
Based on what members here say, Australia still uses Teletext for DVB-T, but the UK discontinued Teletext and only uses DVB-T's bitmap-based subtitles now. Other than that, I don't know which other DVB-T countries still use Teletext and which have discontinued it in favor of bitmap-based subtitles alone, but I doubt that the UK is the only country that no longer uses Teletext for DVB-T.
Then there is ISDB-T, used in most of S. America, parts of Central America, parts of Africa, Japan, and some other Asian nations. I have never investigated what ISDB-T countries use for subtitles.
Last edited by usually_quiet; 26th Aug 2014 at 15:50.
That is interesting - I have sent that on to Henry for future reference.
We need to first, of course, work on capturing the data the way we need it from the US market, then
work on the rest. What I noticed here is that working with an
ATI TV capture board might be easier for capturing closed caption text than the HVR 2250.
Our concern is that we know for a fact the HVR 2250 has an on-board CPU. Do you know if the ATI TV Wonder 650 has an on-board CPU similar to the HVR 2250?
The reason I ask is that I believe this takes the processing load off of the CPU on the computer's motherboard.
What we are working on is capturing and parsing the television signal in real time from a free-to-air antenna, captured in 2-minute parses or chunks of data, plus we need to capture the closed caption text. We are presently working with the Hauppauge HVR 2250. We are also looking for GraphEdit graphs and any ideas on how to accomplish what we need. If, for example, we cannot accomplish this using WinTV 7 and the HVR 2250 and there is a better board for it, I have no objection to changing boards as long as the board has some kind of on-board CPU similar to the HVR 2250 - which, at this point, we still prefer.
Let me explain: the goal is to capture the video, audio, and closed caption data from a free-to-air antenna and off of cable TV. The data needs to be chunked into 2-minute chunks, then knitted together in a relational database to generate ad hoc and email alert reports.
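A minimal sketch of the relational side, using SQLite for illustration (the table and column names are my own invention, not from any existing schema; the station and caption text are made up):

```python
# Store each 2-minute chunk plus its caption lines, then run the
# kind of keyword search the ad hoc reports would need.
import sqlite3

def make_schema(conn):
    conn.executescript("""
        CREATE TABLE chunk (
            id INTEGER PRIMARY KEY,
            station TEXT NOT NULL,          -- e.g. "2.1"
            start_utc TEXT NOT NULL,        -- ISO-8601 chunk start time
            path TEXT NOT NULL              -- file path of the chunk
        );
        CREATE TABLE caption (
            id INTEGER PRIMARY KEY,
            chunk_id INTEGER REFERENCES chunk(id),
            offset_ms INTEGER,              -- offset within the chunk
            text TEXT NOT NULL
        );
    """)

conn = sqlite3.connect(":memory:")
make_schema(conn)
conn.execute("INSERT INTO chunk (station, start_utc, path) VALUES (?,?,?)",
             ("2.1", "2014-08-26T12:00:00Z", "chunk_0001.ts"))
conn.execute("INSERT INTO caption (chunk_id, offset_ms, text) VALUES (?,?,?)",
             (1, 1500, "weather alert for Denver"))

# Keyword search: which chunks mention "Denver"?
rows = conn.execute(
    "SELECT c.path FROM chunk c JOIN caption t ON t.chunk_id = c.id "
    "WHERE t.text LIKE ?", ("%Denver%",)).fetchall()
print(rows)
```

A real deployment would use a server database and a full-text index instead of `LIKE`, but the join from caption keyword back to chunk file is the core of the report generation.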
We have done some research into Media Foundation, which we found does not have TV tuner support, so we have turned our attention back to DirectShow. DirectShow has some problems doing the frame calculations and knitting the frames back together; through the buffering process, at a high-resolution MP4 like 1280x720, this can potentially create a skip in the data. From our research, Media Foundation would be far better at accomplishing that part.
We have also done some research into ffmpeg, which I believe has a frame counter, plus I know that with ffmpeg you can modify the resolution.
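Both jobs can be done from the ffmpeg family of tools; here is a sketch of mine that builds the two command lines (ffprobe's `-count_frames` decodes the stream to get a true frame count; the `scale` filter rescales on re-encode; flags should be verified against your installed version):

```python
# Count decoded frames in a chunk with ffprobe.
def count_frames_command(src):
    return ["ffprobe", "-v", "error", "-count_frames",
            "-select_streams", "v:0",
            "-show_entries", "stream=nb_read_frames",
            "-of", "csv=p=0", src]

# Re-encode a chunk at a new resolution with the scale filter.
def rescale_command(src, dst, width, height):
    return ["ffmpeg", "-i", src,
            "-vf", "scale={}:{}".format(width, height),
            dst]

print(" ".join(count_frames_command("chunk_0001.ts")))
print(" ".join(rescale_command("chunk_0001.ts", "chunk_0001_720p.mp4", 1280, 720)))
```

Note that `-count_frames` is slow because it fully decodes the video, so it is better suited to spot checks than to the 24/7 pipeline itself.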
The idea is to create a beta model for testing in Denver, Colorado, where we are located, with the closed caption text.
Again, any ideas on GraphEdit graphs and code for accomplishing this project would be really helpful.
rshapiro, in good time I'm sure Henry will get it figured out. Too bad I can't get hold of the one dev that could help you all out.
Last edited by SHS; 26th Aug 2014 at 23:16.
The ATI card would come with a few different filters. I did use one of them to dump raw CC data (not text!) to a file. I posted the graph here at VideoHelp. However, the TV Wonder 650 has been out of production for about 5 years, so it isn't something I'd recommend at this point.
Last edited by usually_quiet; 27th Aug 2014 at 00:04.
Nearly all the consumer software and freeware available at present can only display EIA-608 captions or convert EIA-608 captions to subtitles. EIA-708 support is still a work in progress.
When video from an ATSC channel containing EIA-608 captions is decoded, converted to analog, and delivered via composite, S-Video, or a modulated NTSC analog signal over coax, the EIA-608 captions are converted to Line 21 captions in the VBI, which a US TV can decode and display.
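For anyone decoding raw EIA-608 caption bytes themselves rather than using CCExtractor: each byte carries an odd-parity bit in bit 7, which must be checked and stripped before the character codes make sense. A minimal sketch:

```python
# EIA-608 bytes use odd parity in the high bit.
def parity_ok(b):
    """True if the byte has odd parity (an odd number of set bits)."""
    return bin(b).count("1") % 2 == 1

def strip_parity(b):
    """Drop the parity bit to recover the 7-bit caption code."""
    return b & 0x7F

raw = 0xC1                     # 'A' (0x41) with the parity bit set
assert parity_ok(raw)
print(chr(strip_parity(raw)))  # prints "A"
```

A byte that fails the parity check is normally treated as a transmission error and dropped rather than displayed.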
Some along the US border will still be around as well, because there's nothing the FCC can do about it.
Last edited by SHS; 27th Aug 2014 at 01:13.