VideoHelp Forum




Results 1 to 20 of 20
  1. Member
    Join Date
    Nov 2006
    Location
    United States
    For years I've captured VHS and laserdiscs using DV mode. I bought a Shuttle for the option of capturing either in HD or SD lossless. But it looks like the only frame rate option for SD captures is 59.94 fps.

    Is this Fields or Frames per second?

    Can't be 59.94 frames, right? The original video source is only 29.97. I thought the goal was to take the telecined 29.97 and cram it back to the original 23.976 frames per second via IVTC?

    I checked Blackmagic's higher-end PCIe card, the Intensity Pro 4K, and the SD mode on that is also 59.94 fps.

    Is there a reason for Double NTSC that I'm missing?
  2. Member
    Join Date
    Aug 2010
    Location
    San Francisco, California
    What application program is showing you that?
  3. Originally Posted by clashradio View Post
    Can't be 59.97 frames right?
    It could be if it (smart) bobs on the fly.
    Last edited by jagabo; 23rd Jan 2018 at 21:39.
  4. Capturing Memories dellsam34's Avatar
    Join Date
    Jan 2016
    Location
    Member Since 2005, Re-joined in 2016
    It could mean 59.94 fields per second. Since each frame has two interlaced fields, divide that by 2 to get the frame rate.
  5. Member
    Join Date
    Nov 2006
    Location
    United States
    JVRaines: for capturing I use Blackmagic's own Media Express. I can't get Vegas to work for capturing via the HDV mode (I never had a problem capturing in DV mode). Or at least I can't get Vegas to capture from the Shuttle, which would be via HDV, not DV.

    When I drop the captured file into Vegas, the Properties say 59.94 frames per second.
  6. Member
    Join Date
    Aug 2006
    Location
    United States
    Originally Posted by clashradio View Post
    JVRaines: for capturing I use Blackmagic's own Media Express. I can't get Vegas work for capturing via the HDV mode (I never had a problem capturing in DV mode). Or at least I can't get Vegas to capture from the Shuttle, which would be via HDV, not DV.

    When I drop the captured file into Vegas, the Properties say 59.97 frames per second.
    Last time you posted about capturing with the Shuttle, you were going to capture 480p, created from a 480i source using a video processor. When 480i is properly deinterlaced to 480p, that should produce progressive video with 59.94 frames per second. So, are you capturing 480i (SD), or capturing the 480p output from the processor?

    The output from HDV devices is very different from the Shuttle's output, so it is understandable that HDV mode doesn't work for the Shuttle. HDV sends an MPEG-2 transport stream over a FireWire (IEEE 1394) connection, while the Shuttle streams uncompressed video and audio over a USB 3.0 connection.

    [Edit]I found a thread in the Vegas forum in which someone replied that the Intensity Shuttle doesn't work with Vegas versions beyond Vegas 9. https://www.vegascreativesoftware.info/us/forum/video-capture-device--100832/
    Last edited by usually_quiet; 24th Jan 2018 at 12:56.
    Ignore list: hello_hello, tried, TechLord, Snoopy329
  7. Member
    Join Date
    Apr 2003
    Location
    United States
    I fired up my Shuttle and captured via S-Video using the NTSC setting, and MediaInfo reports 29.97 fps, as does Sony Vegas 12. I'm using an older driver (10.8). I installed the newest driver and, like the OP said, it only allowed capturing SD at 59.94 fps.
  8. Member
    Join Date
    Aug 2010
    Location
    San Francisco, California
    The new driver must be forcing bob deinterlacing. That's pretty lame support for legacy video. Another black mark for Blackmagic.
  9. Member
    Join Date
    Nov 2006
    Location
    United States
    Yes [usually quiet] I’m still experimenting with laserdisc captures. I currently have a Lumagen HDQ (video processor) soon to upgrade to a Lumagen 2144. But I’m still very green with the HDQ and I’m sure there will be a learning curve with the 2144.

    I remember you saying when working with SD material to capture in SD lossless & to match the capture settings with the same as the source (which would be 480i), as opposed to capturing in HD. I finally got the Shuttle to recognize the output of the HDQ (HDQ dvi-out to Shuttle hdmi-in). The Shuttle automatically detects the input source, which shows 720x486 @59.94fps Progressive (when using the HDQ). Like I said I haven’t fully learned how to use the HDQ so I assume I can change the desired output.

    Thank you for the Vegas link.

    I’m confused when you say that deinterlacing interlaced to progressive changes the frame rate to 59.94. I thought the whole point of IVTC was going from film-to-video back to the frame rate of film (23.976). Now I have double NTSC. So I then take 59.94 and cram it down to 23.976?
    If using the 2144 I might capture in HD 1080p @ 23.976, as this thing automatically has IVTC, upscaling, and a 3D comb filter; it's basically the best legacy-to-HD processor. That's one reason why I was so convinced about using the HDMI input on the Shuttle (or possibly a PCI board with HDMI input).

    [Brainiac] My version of Desktop Video is 10.9.7. You noticed problems with that version and were able to find 10.8?
  10. Member
    Join Date
    Apr 2003
    Location
    United States
    I would not call it a problem with 10.9.7; it just did not allow an SD setting with 29.97 fps, while 10.8 did. It can be downloaded from the Blackmagic site under support.
  11. Originally Posted by clashradio View Post
    I’m confused when you say when you deinterlace Interlaced to Progressive, it changes the frame rate to 59?
    A plain deinterlace turns one interlaced frame into one progressive frame. A bob deinterlace turns each field of the interlaced frame into a frame -- so two output frames for every input frame. The latter is best for true interlaced material where every field is from a different point in time -- typically live sports, news, camcorder video, some reality TV shows, etc.
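    A quick sanity check on the rates involved (a minimal sketch; 30000/1001 is the exact NTSC frame rate):

    ```python
    # Frame counting for the two deinterlace modes described above:
    # a plain (single-rate) deinterlace keeps one output frame per input
    # frame, while a bob (double-rate) deinterlace makes a frame from
    # every field, doubling the rate.

    interlaced_fps = 30000 / 1001            # exact NTSC rate, ~29.97
    plain_fps = interlaced_fps               # one frame per frame
    bob_fps = interlaced_fps * 2             # one frame per field

    print(round(plain_fps, 2))               # -> 29.97
    print(round(bob_fps, 2))                 # -> 59.94
    ```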

    Originally Posted by clashradio View Post
    I thought the whole point IVTC was going from film-to-video-back to the frame rate of film (23.97). Now I have Double NTSC.
    You only want to IVTC film-based material. The best way to do that is to field match (join complementary fields to restore the original film frames), then remove duplicates. For example, with the usual 3:2 pulldown pattern, field matching leaves you with one duplicate frame in every group of 5 frames. You want to remove that duplicate frame, turning 29.97 frames per second into 23.976 frames per second.
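    The arithmetic behind that decimation step, as a sketch using the exact NTSC fractions:

    ```python
    # After field matching, every group of 5 frames contains one duplicate.
    # Dropping it keeps 4 of every 5 frames: 29.97 fps becomes 23.976 fps.

    ntsc_fps = 30000 / 1001                  # exact NTSC frame rate
    film_fps = ntsc_fps * 4 / 5              # remove 1 duplicate in 5
    print(round(film_fps, 3))                # -> 23.976
    ```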

    Originally Posted by clashradio View Post
    So I then take 59.94 and cram it down to 23.97?
    You usually want to field match. But if you have a telecined source that has been bobbed, you have 6 duplicate frames (aside from bob/noise artifacts) in every group of 10 frames. You want to remove those 6 frames, leaving all the original 23.976 fps film frames.
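    A toy illustration of that, with numbers standing in for film frames over one telecine cycle of a bobbed stream. The duplicate-removal logic is a sketch only; real decimators compare actual frame content rather than labels:

    ```python
    # A bobbed telecined stream repeats each film frame for 2 or 3 output
    # frames (10 bobbed frames per 4 film frames). Removing the 6 duplicates
    # in each group of 10 restores the original 23.976 fps film frames.

    def drop_duplicates(frames):
        out = []
        for f in frames:
            if not out or out[-1] != f:      # keep only the first of a run
                out.append(f)
        return out

    bobbed = [1, 1, 2, 2, 2, 3, 3, 4, 4, 4]  # one telecine cycle, bobbed
    print(drop_duplicates(bobbed))           # -> [1, 2, 3, 4]
    print(round(60000 / 1001 * 4 / 10, 3))   # -> 23.976
    ```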

    Originally Posted by clashradio View Post
    If using the 2144 I might capture in HD 1080P @23.97 as this thing automatically has IVTC, upscaling, 3d comb filter, basically the best legacy to HD processor.
    Probably not as good as can be done in software with AviSynth.
  12. Member
    Join Date
    Nov 2006
    Location
    United States
    The bulk of my laserdiscs are movies (film-based, 23.976 fps). So I'm guessing capturing at 29.97 is ideal, vs. 59.94?

    Field match as in capture Interlaced? or Progressive? or are you talking about UFF/LFF?
  13. You want to capture at 29.97i.

    Field matching is simply the pairing of fields to restore the original film frames. You start with a field of one video frame and pair it with either the field before it or the field after it, whichever gives the least comb artifacts. With normal pulldown, one of those fields completes the original film frame. This is performed on each video frame, so 29.97i becomes 29.97p. Then you remove the duplicates to get 23.976p.

    Of course, all this discussion assumes you are willing to put in the time learning how to do it, and doing it. If not, just let your hardware do it for you.
  14. Member
    Join Date
    Nov 2006
    Location
    United States
    The two fields combined give you one frame? Is this why it's mandatory to have a good 3D comb filter -- for fewer comb artifacts?

    I'd like to learn how to do it. I'm still having a hard time grasping what fields are vs. frames. I guess I'm stuck on film; one frame is one frame... period.

    Is it possible for hardware to do IVTC, and can it also be done via AviSynth?
  15. Capturing Memories dellsam34's Avatar
    Join Date
    Jan 2016
    Location
    Member Since 2005, Re-joined in 2016
    Originally Posted by clashradio View Post
    The two fields combined gives you one frame? Is this why it's mandatory to have a good 3d comb filter? for less comb artifacts?

    I'd like to learn how to do it. Still having a hard time grasping what fields are vs. frames. I guess I'm stuck on film; one frame is one frame...period.

    Is it possible for hardware to do IVTC, and to do it via AviSynth?
    That's how video started out back in the day. There are odd and even sets of interlaced scan lines, or fields; an odd field and an even field combined make up one complete frame.
    https://en.wikipedia.org/wiki/Interlaced_video
  16. Member
    Join Date
    Aug 2006
    Location
    United States
    Originally Posted by clashradio View Post
    The two fields combined gives you one frame? Is this why it's mandatory to have a good 3d comb filter? for less comb artifacts?
    No. Comb filters don't reduce combing artifacts. 2D and 3D comb filters are mainly used to reduce dot crawl artifacts caused by crosstalk between the chrominance and luminance components of composite video signals. If you want to know why it is called a comb filter, read the Wikipedia entry for comb filter.
    To understand what fields are, you first need to understand how CRT displays work. The picture is drawn by an electron beam that scans across the screen from left to right, top to bottom. The beam starts at the top left of the screen (as the viewer sees it) and scans to the right -- this is called a scanline. It then moves down and scans another line, repeating until it scans the last line at the bottom of the screen.

    Interlaced TV (and all analog broadcasts, VHS, etc.) adds a tweak to this. Rather than scanning each line in order (0,1,2,3,4...479) they scan the even numbered scanlines in one pass (lines 0,2,4,6...478, called the top field) then go back near the top and scan the remaining lines (1,3,5,7...479, called the bottom field). Not only do the TVs do this, everything along the analog chain does it. The camera, recording devices, everything. So interlaced analog video is a series of fields (at 1/60 second intervals) not frames. It takes two fields to cover the entire face of the TV*. The terminology for fields varies. Some programs use top/bottom, others use upper/lower, even/odd, A/B, etc.


    When film is shown on interlaced TV it is necessary to convert the 24 fps film frames to 60 fields per second. This is done by showing one film frame for the duration of 3 fields, then the next film frame for the duration of 2 fields, then continuously alternating between those two durations. So, on average, each film frame is displayed for 2.5 fields (note that 24 x 2.5 = 60). This is called 3:2 (or 2:3, same thing) pulldown.
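    The pattern can be sketched in a few lines of Python. The labels like '1t' are illustrations only: a film-frame number plus t/b for top/bottom field, matching the diagram further down:

    ```python
    # Sketch of 2:3 pulldown: each film frame is spread over alternating
    # runs of 2 and 3 fields, so 4 film frames -> 10 fields
    # (24 fps x 2.5 fields per frame = 60 fields per second).

    def pulldown_32(film_frames):
        """Return the field sequence produced by 2:3 pulldown, with the
        top/bottom parity alternating as the pattern advances."""
        fields = []
        parity = 0  # 0 = next field is a top field, 1 = bottom field
        for i, frame in enumerate(film_frames):
            run = 2 if i % 2 == 0 else 3      # alternating 2- and 3-field runs
            for j in range(run):
                kind = 'tb'[(parity + j) % 2]  # alternate t/b within the run
                fields.append(f"{frame}{kind}")
            parity = (parity + run) % 2
        return fields

    print(pulldown_32([1, 2, 3, 4]))
    # -> ['1t', '1b', '2t', '2b', '2t', '3b', '3t', '4b', '4t', '4b']
    ```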

    When interlaced video is captured, pairs of fields are woven together into frames before being stored. Remember, it takes two fields to create an entire frame, so 59.94 fields per second is captured as 29.97 frames per second. In parts of the picture where there is no motion the two fields look like a complete picture. But if anything moves in the 1/60 second interval between fields, you see comb artifacts. The term "comb" is used because the artifacts literally look like the tines of a comb. You can see an example here:

    https://forum.videohelp.com/threads/358015-From-AVI-to-DVD-Upper-or-Lower-Field-First-...e2#post2259925

    Note, this has nothing to do with the 2d/3d comb filters found in capture devices. That refers to using a comb like filter to sift the chroma from the luma of a composite video signal. More on that later...

    So you have 24 fps film telecined to 60 fields per second, then captured and stored as 30 video frames per second. Inverse telecining is the restoration of the 24 film frames from the 30 video frames. Let's look at four original film frames. We'll call them 1, 2, 3, and 4. We'll call the fields t or b to indicate top and bottom:

    Code:
    film frames:                   1 2 3 4
    telecined with 2:3 pulldown:   1t 1b 2t 2b 2t 3b 3t 4b 4t 4b
    fields paired as frames:       1t+1b 2t+2b 2t+3b 3t+4b 4t+4b
    Notice that 1t+1b 2t+2b and 4t+4b already have fields paired correctly, that is, they contain a top and bottom field from the same film frame. But 2t+3b and 3t+4b are paired incorrectly. You want to take 3b from the first pair and 3t from the second pair, recombine them into 3b+3t, and discard the other two fields.

    In practice, a program doesn't know beforehand which video frames are already paired correctly. It just starts with one field of each frame and looks to see whether the field before it or the field after it gives the fewest comb artifacts. So with our digital video:

    Code:
    fields paired as frames:       1t+1b 2t+2b 2t+3b 3t+4b 4t+4b
    We start with the first field of the first frame, 1t. The field after it is 1b. If this is the start of the video there is no field before it; if this is in the middle of a longer video the field before it is 0b. 1b is more likely to give no comb artifacts when paired with 1t, so we pair them, restoring the original film frame 1. We move on to the next video frame, selecting 2t. We compare pairings with the field before, 1b+2t, and after, 2t+2b. The second pair matches and we now have film frame 2. Next we compare the combination of 2b+2t and 2t+3b. The former gives us the original film frame 2 again. Next is 3b+3t and 3t+4b. We select the first and now have film frame 3. Finally we compare 4b+4t and 4t+4b. Either one will do and we now have film frame 4. So we have 1, 2, 2, 3, 4. We look through the sequence and decide the second and third frames are both film frame 2, so we discard one, leaving 1, 2, 3, 4.
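    That walkthrough can be sketched as a toy field matcher. Here labeled strings like '2t' stand in for real fields, and "least comb artifacts" is simulated by checking whether two fields carry the same film-frame number -- an assumption for illustration only:

    ```python
    # Toy field matching + decimation over the woven frames from the
    # diagram above. A real matcher compares pixel data, not labels.

    def field_match(video_frames):
        """For each captured frame (top, bottom), output the film frame
        restored by pairing the top field with whichever bottom field
        (its own, or the previous frame's) belongs to the same film frame."""
        matched = []
        prev_bottom = None
        for top, bottom in video_frames:
            if top[0] == bottom[0]:
                matched.append(top[0])          # own fields already match
            elif prev_bottom is not None and prev_bottom[0] == top[0]:
                matched.append(prev_bottom[0])  # pair with previous bottom
            else:
                matched.append(top[0])          # no clean match; keep as-is
            prev_bottom = bottom
        return matched

    def decimate(frames):
        """Drop the consecutive duplicates left over after field matching."""
        out = []
        for f in frames:
            if not out or out[-1] != f:
                out.append(f)
        return out

    captured = [('1t','1b'), ('2t','2b'), ('2t','3b'), ('3t','4b'), ('4t','4b')]
    print(field_match(captured))             # -> ['1', '2', '2', '3', '4']
    print(decimate(field_match(captured)))   # -> ['1', '2', '3', '4']
    ```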

    Regarding comb filters in capture devices: these are used to separate the chroma and luma from a composite video signal. Take a look at this page:

    https://www.eetimes.com/document.asp?doc_id=1272387

    In the first image you'll see a graph of the waveform of a greyscale scanline where height on the graph represents brightness of the picture across the scanline:

    [Attachment: grey.png]

    In the second image you'll see how color (chroma: hue and saturation) is added in a composite signal. It takes the form of a secondary waveform that's added to the greyscale waveform:

    [Attachment: color.png]

    The saturation of the color is indicated by the amplitude of that secondary waveform. The hue is represented by the phase of that waveform relative to the phase of the colorburst signal near the start of the line (you don't see that portion of the scanline when watching TV; it would be off to the left of the screen).

    The comb filter in a capture device is used to separate the chroma and the luma into separate signals again. A 2d comb filter looks only at a single frame. But this leaves dot crawl artifacts at the edges of highly saturated colors. A 3d comb filter looks at earlier and/or later frames, leaving fewer dot crawl artifacts. But 3d comb filters only work well on stationary parts of the picture. They will leave dot crawl artifacts on moving parts of the picture.
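    The separation itself boils down to sums and differences of scanlines, because the NTSC chroma subcarrier inverts phase from one line to the next. A toy numeric sketch, idealized so both lines carry identical luma -- exactly the stationary/flat case where a line comb works, and why 3D combs look across frames instead:

    ```python
    # Toy 2D (line) comb filter on an idealized NTSC-like signal where
    # the chroma subcarrier inverts phase on each successive scanline.
    # All values are made-up sample brightnesses for illustration.

    luma_line = [50, 60, 70, 60, 50]         # hypothetical luma samples
    chroma    = [10, -10, 10, -10, 10]       # subcarrier riding on the luma

    line_a = [y + c for y, c in zip(luma_line, chroma)]  # composite line n
    line_b = [y - c for y, c in zip(luma_line, chroma)]  # line n+1: chroma inverted

    luma_out   = [(a + b) / 2 for a, b in zip(line_a, line_b)]  # sum cancels chroma
    chroma_out = [(a - b) / 2 for a, b in zip(line_a, line_b)]  # difference cancels luma

    print(luma_out)     # -> [50.0, 60.0, 70.0, 60.0, 50.0]
    print(chroma_out)   # -> [10.0, -10.0, 10.0, -10.0, 10.0]
    ```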

    dot crawl artifacts: https://en.wikipedia.org/wiki/Dot_crawl



    * Actually, on a CRT you only see a few scanlines at a time (they fade away very quickly). But persistence of vision makes you perceive it, more or less, as a constant picture. This youtube video, taken with a very high speed camera, shows the picture being drawn on a CRT display: https://www.youtube.com/watch?v=3BJU2drrtCM
  18. Member
    Join Date
    Nov 2006
    Location
    United States
    Thank you jagabo, and usually quiet. I'll have to review this post a few times to absorb all the info.

    In the meantime: going from interlaced to progressive adds fields, correct? 480i @ 29.97 frames per second converted to 480p = 59.94 frames per second, correct? Confusing, as the goal of film-to-video is to get back to 23.976 frames per second. So capturing a 480i source at 480p, 720p, or even 1080p produces double the original frame rate.
    Quote Quote  
    Each field is a half-picture. With a double frame rate deinterlace each field becomes a full picture/frame -- by somehow filling in the missing lines. That can be done in many ways: simply duplicating each existing line, spatially interpolating between the line above and the line below, using data from the other field where there is no motion, analyzing motion in the fields before and after and smartly filling in the missing lines, etc. With a single frame rate deinterlace only one field of each frame is retained and the missing lines are filled in in a similar fashion.
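    A minimal line-doubling bob in Python, assuming a frame stored as a list of scanline labels. This is the crudest fill-in method mentioned above; real deinterlacers interpolate or use motion analysis instead of plain duplication:

    ```python
    # Minimal bob deinterlace sketch: split each stored frame into its two
    # fields, then rebuild each field into a full frame by duplicating
    # lines. Two output frames per input frame -> double the frame rate.

    def bob(frame):
        top    = frame[0::2]                # even-numbered lines (top field)
        bottom = frame[1::2]                # odd-numbered lines (bottom field)
        def doubled(field):
            out = []
            for line in field:
                out.extend([line, line])    # fill missing lines by duplication
            return out
        return doubled(top), doubled(bottom)

    frame = ['T0', 'B0', 'T1', 'B1']        # 4-line frame, two woven fields
    f1, f2 = bob(frame)
    print(f1)   # -> ['T0', 'T0', 'T1', 'T1']
    print(f2)   # -> ['B0', 'B0', 'B1', 'B1']
    ```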
  20. This youtube video shows the scope view and the normal TV view of some test patterns. With some of them it's easy to understand the brightness of the scanlines vs. the height of the graph.

    https://www.youtube.com/watch?v=DdJyF3OHadY

    Here is one of the simpler patterns, just some vertical lines of different brightnesses. I've inset a portion of the picture at the bottom of the scope view so you can see how the two correspond. The brighter the picture on the TV view, the higher the line on the scope.

    [Attachment: scope.jpg]

    On the slightly bluish bar you can see the chroma subcarrier added to the line on the scope. It's not high enough resolution for you to see the sinusoidal waveform but you can see that the line looks thick there. There are some places later in the video where he zooms in on the chroma carrier.


