The data position of VBR video/audio (e.g. DVD-Video) cannot be calculated the way it can for CBR/ABR, where you just multiply the stream size by the ratio of the selected position to the total playback time.
But with variable bitrate, it is less simple.
Example: how do I know which data position or LBA the 1526th second in a VBR movie (possibly with interleaved VBR audio) has?
Where exactly, at what megabyte position on the disc, is that position/frame stored?
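For the CBR/ABR case described above, the arithmetic really is that simple. A minimal sketch (the function name and example numbers are my own):

```python
def cbr_byte_position(stream_size_bytes, target_s, duration_s):
    """With constant bitrate, byte offset scales linearly with playback time."""
    return int(stream_size_bytes * (target_s / duration_s))

# Hypothetical example: a 4.7 GB stream, 2 hours long, seeking to the 1526th second.
pos = cbr_byte_position(4_700_000_000, 1526, 7200)
```

With VBR this linear relation no longer holds, which is exactly the question.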
-
Different containers (e.g. AVI, MKV, MP4) have different ways to handle this problem. Usually the muxers create some sort of index that allows fast seeking (like the table of contents in a book): basically a table of video (key)frames with their timecodes and the corresponding file positions. Because of video/audio interleaving, the audio will be at about the same position.
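A toy version of such an index lookup (the table contents are invented): the player finds the last indexed keyframe at or before the requested time and reads from that file position.

```python
import bisect

# Hypothetical muxer index: (timecode_seconds, byte_offset) per keyframe.
keyframe_index = [(0.0, 0), (2.0, 180_000), (4.5, 410_000), (6.0, 650_000)]

def seek_entry(index, target_s):
    """Return the last keyframe entry at or before target_s; decode forward from there."""
    times = [t for t, _ in index]
    i = bisect.bisect_right(times, target_s) - 1
    return index[max(i, 0)]
```

So a seek to 5.0 s lands on the keyframe at 4.5 s, byte offset 410000, and the player decodes forward from there.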
-
Timecode.
With MPEG-PS/VOB/TS there are presentation time stamps (PTS) in the stream that you can read, and that is what apps/devices use to find a particular time. NOT the LBA/sector.
You have it backwards: the devices don't care what LBA/sector one needs to go to to reach the 1000th second. They just go through to the 1000th second's timestamp (probably skip-reading the timecodes along the way), and whatever LBA/sector they land on, that's where they are.
Note: due to the nature of B-frame ordering, you may need to pay attention to decoding timestamps (DTS) also, as the existence of B-frames would clearly throw off any calculation of accumulated LBA/sector distance.
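That timestamp-driven seek can be sketched like this (the packet list is invented; a real demuxer yields PES packets, not all of which carry a PTS):

```python
def seek_by_pts(packets, target_pts):
    """Scan packets in stream order and stop at the first PTS reaching the target.
    Whatever byte position that packet has is where we land - no LBA arithmetic."""
    for byte_pos, pts in packets:
        if pts >= target_pts:
            return byte_pos
    return None

# (byte_position, pts_seconds) pairs as a demuxer might report them:
packets = [(0, 0.00), (2048, 0.04), (4096, 0.08), (6144, 0.12)]
```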
Scott -
I have my own video playback app.
It would start at the same point but hardly ever show the same fraction of a second,
because indexers, wifi, security and other apps are busting in...
I primarily want to see the segment, or something very close to it...
My solution was to have the app start at a point very close, multiple times:
start at 1000 seconds in, play for 1/30 second and pause;
start at 1000.01 seconds in, play for 1/30 second and pause;
again and again, maybe 10 times. You will see what is at that point in the video. -
Those other things should have nothing to do with seek accuracy. Sounds like your app isn't designed properly. Time to get a different one - there are plenty out there that work as intended.
:roll eyes:
Scott -
ffmpeg's ffprobe.exe can write a .CSV file (easily opened in any spreadsheet program) with a record of presentation timestamps (the time a frame is displayed) along with packet positions (the position of a frame's data in the video stream - I think). I made a batch file and wrote it up here, but the heart of it is very simple:
Code:
@echo entry,media_type,key_frame,pkt_pts,pkt_pts_time,pkt_dts,pkt_dts_time,best_effort_timestamp,^
best_effort_timestamp_time,pkt_duration,pkt_duration_time,pkt_pos,pkt_size,^
width,height,pix_fmt,sample_aspect_ratio,pict_type,coded_picture_number,^
display_picture_number,interlaced_frame,top_field_first,repeat_pict > "ffprobe.csv"
ver > nul
ffprobe.exe -select_streams v -print_format csv -show_entries frame "my_file.xyz" >> "ffprobe.csv"
(beware, the columns ffprobe puts out may have changed since 2014 when I last used it)
(EDIT: you need to understand Scott's post #4 to even begin to understand this data)
Last edited by raffriff42; 21st Jan 2018 at 17:19.
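For what it's worth, that CSV can then be turned into a time-to-byte-position table. A sketch, assuming the column layout from the batch file's header (the two sample rows are made up):

```python
import csv, io

# Made-up rows, truncated to the first 13 columns of the header above.
sample = """entry,media_type,key_frame,pkt_pts,pkt_pts_time,pkt_dts,pkt_dts_time,best_effort_timestamp,best_effort_timestamp_time,pkt_duration,pkt_duration_time,pkt_pos,pkt_size
frame,video,1,0,0.000000,0,0.000000,0,0.000000,3600,0.040000,564,21004
frame,video,0,3600,0.040000,3600,0.040000,3600,0.040000,3600,0.040000,21568,4011
"""

def keyframe_positions(csv_text):
    """Map each keyframe's presentation time (s) to its byte position in the file."""
    return {float(r["pkt_pts_time"]): int(r["pkt_pos"])
            for r in csv.DictReader(io.StringIO(csv_text)) if r["key_frame"] == "1"}
```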
-
Great stuff raffriff42! The output might have changed a little (ffprobe 2015) but it's all there if, as you and Scott mentioned, you understand the data.
[Attachment 44467] -
-
If you are having that kind of trouble, you are doing it wrong or using the wrong app or both.
And I think my history here is clear testimony that I DO know of what I speak. Look in the mirror, dude.
Scott -
Try and show off objects that are faint, extremely fast, or slow slow.
You need repetition, slow motion and fast forward like my app has. Nope, you can't touch this method of timeline playback.
Show me the segments of video where the Sky Creatures show their stuff..
https://www.youtube.com/watch?v=URT8GhC1ITA
http://www.telusplanet.net/public/stonedan/demo85_pict7.mpg
Last edited by SpectateSwamp; 24th Jan 2018 at 10:49. Reason: add a youtube video link
-
What you see is far more important than any precision time code. PERIOD
The feature (not completely tested out) can show me 4 or 5 segments of the video, again and again. You see things that are repeated... You just do.
Another use I had for timecode playback is to eliminate the jiggle at the beginning and end of a video when you are zoomed way out.
At 1/8 mile drag races I get over 200 clips, situated down by the finish line...
I have a remote. Without one, I can still eliminate that problem by starting 1/2 second into each clip and playing 1 second less than the total length...
Jiggles are very annoying.
Last edited by SpectateSwamp; 24th Jan 2018 at 10:28. Reason: add sample video from the drag races
-
Simply decode video before and after the interesting area; there is no other way to do a frame-accurate seek. If you add closed/open GOPs and frame relations/dependencies to this, there is no way around decoding frames into a buffer and navigating within that buffer. That's why special codecs for studio production exist: they use only intra frames, so every frame is an independent instance. If your recording is in that kind of format, there is no problem with accurate navigation over your timeline.
You can also intentionally create a smaller preview video with intra-coded frames, to navigate quickly and read frame positions from the preview. -
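A toy model of that decode-to-buffer idea (frame types only; the actual decoder is a stand-in): to display a given frame exactly, everything from the last I-frame up to that frame must be decoded.

```python
def frames_to_decode(frame_types, target):
    """Indices that must be decoded into the buffer to display frame `target`."""
    start = max(i for i in range(target + 1) if frame_types[i] == "I")
    return list(range(start, target + 1))

# Two short GOPs; with an intra-only studio codec every entry would be "I".
gop = ["I", "B", "B", "P", "B", "B", "P", "I", "B", "P"]
```

With intra-only material the list always collapses to the target frame itself, which is why those codecs make frame-accurate navigation trivial.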
I keep my video clips short, usually under 3 minutes.
When shooting political forum video, I will use mpg2cut2 to break them up. That is about the only editing I need to do. EVER.
I'd like to see what some of these video apps can do with these Sky Creatures.
When you use timeline playback you have way less video clip clutter everywhere. You lose things / video that way.
Backup is simpler: keep a copy of the mpg and a copy of the original video format. 2 or more copies is a good backup. -
You can't pinpoint the exact data location of a visual image when it's in a GOP. The group contains the data, not the exact frame. That's why you have to index it before it can be edited (editing being frame-exact access). At least that's always been my understanding, and I'm pretty sure I'm correct here.
VBR vs. CBR doesn't even matter here.
You said DVD-Video, meaning MPEG-2, so about 15-frame GOPs with 2-3 P and 9-12 B frames. Note that GOP length can vary based on scene-change smart encoding. -
It's a bit more complicated.
For example, in an MKV container with cues (an index), I want to jump to the frame at 01:30:00. The nearest keyframe is listed in the cues as being at 01:29:55, so I jump there and then decode forward to 01:30:00.
But: if I am in AviSynth, I don't want to jump to just "01:30:00"; I want to jump to e.g. frame #50000. The MKV container doesn't hold this information in any index, and since the video is variable framerate (by design), I cannot reliably calculate fps * time. So e.g. FFVideoSource has to parse every single frame of the file first so that I can jump to frame #50000.
Some containers are even worse: transport streams don't even have an integrated index in the first place. -
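That frame-number problem can be illustrated: with variable frame durations there is no closed-form frame = fps * time, so something has to accumulate per-frame timestamps once (which is what the full-file parse does); after that, frame-to-time is a table lookup. The duration values below are invented:

```python
import itertools

# Variable per-frame durations in seconds (made-up values).
durations = [0.040, 0.040, 0.033, 0.033, 0.040, 0.033]

# One full pass builds frame-number -> start-time; seeking is then a lookup.
start_times = [0.0] + list(itertools.accumulate(durations))[:-1]

def time_of_frame(n):
    return start_times[n]
```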
https://www.youtube.com/watch?v=Kb8cnBJL7fU
The absolute data displayed is what is important.
There are 8 or 9 more segments where the sky creature zips by..
I'll do the same for them... -
@raffriff42... In post #7, I noticed the 'display_picture_number' values are not in sequence although the times are increasing at a regular rate. Any ideas why they are shown this way?
Thanks -
That's because a long-GOP codec writes the frames out of order - I'm not really up on the details.
-
@raffriff42 ... Thanks. With that I was able to find the following, which explains the order. Unfortunately, when you seek to a particular frame, it is (I believe...) by display sequence. The CSV output shown earlier makes this hard to parse with the correct display timing. At least it's understandable now. Thanks for the valuable info.
Display order: I(1) B(2) B(3) P(4) B(5) B(6) P(7)
Transmit order: I(1) P(4) B(2) B(3) P(7) B(5) B(6)
The short answer for why this is so: Due to the bidirectional nature of B frame prediction the decoder must first process the previous and next reference frames. For example, to decode B(2) the decoder must first have I(1) and P(4).
The easy way (for most sequences) to reorder frames from transmission to display is to look at the temporal_reference in the picture header which gives you the location of the frame in display order.
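That reordering rule can be sketched directly from the example above: sorting the transmit-order frames by temporal_reference recovers display order.

```python
# (temporal_reference, frame_type) in transmit order, per the example above.
transmit = [(1, "I"), (4, "P"), (2, "B"), (3, "B"), (7, "P"), (5, "B"), (6, "B")]

def to_display_order(frames):
    """temporal_reference gives each frame's slot within display order."""
    return sorted(frames, key=lambda f: f[0])
```

In practice a decoder does this incrementally with a small reorder buffer rather than sorting the whole stream, but the principle is the same.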