I am downloading TV shows from a streaming service using ffmpeg, while doing on-the-fly encoding and geometry fixing so the final video is 480p in mp4 format.
This is being done on a Linux Ubuntu server on a schedule.
Then I need to process the videos, so I have created an application in which I can play the video and set cut points with one-second accuracy. These points are then used to cut the source video into a number of temporary video files, which are finally rejoined into the end-result video, but without the sections between the cuts.
Both the cut-up and join operations are done by ffmpeg calls.
The fastest way I have found to do this is:
1) Extract cuts using this for every cut in the source:
Here <outputfile> is a numbered temporary mp4 file.
Code:
ffmpeg -hide_banner -i <inputfile> -ss <starttime> -t <duration> -c:v copy -c:a copy <outputfile>
2) Paste the temp files together:
- Create a text file "filelist" containing the names of the video cuts just created.
Then:
Code:
ffmpeg -hide_banner -f concat -i <filelist> -c copy <resultfile>
3) Delete the temporary cut files and the filelist file.
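The three steps above can be sketched as one small script. This is only an illustration, not code from the thread: file names and cut times are placeholders, and the ffmpeg/rm invocations are echoed via a `run` wrapper so the sequence is visible without executing anything.

```shell
#!/bin/sh
# Sketch of the cut / build-filelist / concat / clean-up procedure.
# Commands are echoed only; drop the "run" wrapper to actually execute them.
set -e
run() { echo "$@"; }

INPUT="source.mp4"     # placeholder input name
RESULT="result.mp4"    # placeholder output name

: > filelist.txt
i=0
# One line per cut: "starttime duration" (placeholder values)
while read -r start dur; do
    i=$((i + 1))
    run ffmpeg -hide_banner -i "$INPUT" -ss "$start" -t "$dur" \
        -c:v copy -c:a copy "cut$i.mp4"
    echo "file 'cut$i.mp4'" >> filelist.txt
done <<EOF
00:01:00 120
00:10:00 300
EOF

run ffmpeg -hide_banner -f concat -i filelist.txt -c copy "$RESULT"
run rm -f cut1.mp4 cut2.mp4 filelist.txt
```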
The resulting video is created in 3-5 seconds when the source file is 1 hour long.
PROBLEM:
When I play the resulting video it contains extra material at the splices from the parts I had cut out, so the transition from one cut to the next is not smooth.
I have also tried a different approach using ffmpeg to do it in one single command like this:
When I do this conversion no temp files are created, but ffmpeg is re-encoding the video, which takes a long time.
Code:
ffmpeg -hide_banner \
  -ss <starttime1> -t <duration1> -i <inputfile> \
  -ss <starttime2> -t <duration2> -i <inputfile> \
  -ss <starttime3> -t <duration3> -i <inputfile> \
  -ss <starttime4> -t <duration4> -i <inputfile> \
  -ss <starttime5> -t <duration5> -i <inputfile> \
  -lavfi concat=n=<numcuts>:v=1:a=1 -an <resultfile>
Typically the execution time for this is about 4-7 minutes, versus roughly 5 seconds for the same video with the copy method shown before.
BUT: The video is now seamless at the split points!
QUESTIONS:
1) Is there some 3rd way to do the job, which will not re-encode the whole video (remember all the cuts are from a single homogeneous video file)?
2) Is there some way to specify the input file in the second case as a single argument instead of having to repeat it over and over? Sort of a global input file argument used by all the cut commands. After all it is a single file...
-
Without re-encoding you can only join/split videos encoded with common modern codecs at specific points, the so-called "key frames", where the information about the current frame is complete and does not depend on previous frames (that's how the encoding works).
So the answer is no: there is no way to make frame-precise cuts without re-encoding at least some parts of the video (the parts between each pair of adjacent cuts, starting and ending at keyframes). -
Is it possible to find the locations of these "key frame" borders in some way, so the requested cuts can be set to happen at the proper points in time?
If the cuts start and end on full frames, then a concat might succeed using "copy"? -
see if this will work - https://ottverse.com/trim-cut-video-using-start-endtime-reencoding-ffmpeg/
-
When you encode something you can tell the encoder to put a keyframe every <n> frames, but for a frame-precise cut that value would have to be 1 (all frames would be keyframes), and that would mean no compression at all (or very little).
You can lower that value and so be somewhat more precise with the cuts (but never 100% precise, unless you use a lossless codec with neither P nor B frames = very big file size). But the lower you set that <n>, the larger the resulting encoded video file will be (as little compression as possible is the only thing that gives better "precision" at the cuts).
Last edited by krykmoon; 3rd Jul 2022 at 00:07.
-
You can force 1s keyframes when encoding, then losslessly cut on keyframes. The cuts will be abrupt but the output file should play smoothly if you get everything right.
Avidemux can be used as an alternative to ffmpeg to cut out the unwanted parts.
https://forum.videohelp.com/threads/400142-any-software-that-would-split-the-video-in-...t-as-one/page3 -
Questions:
1) What is the normal size of keyframe if one does not supply a setting?
2) How does setting a keyframe interval affect video file size? For example if the default is 5 and I want to change it to 1...
3) When I download using the command shown below, what will be the keyframe and is it possible to set it to something other than the source's?
Code:
ffmpeg -hide_banner -referer <webpage url> -i <m3u8 stream url> -vf scale=w=-4:h=480 -c:v libx264 -preset fast -crf 26 -c:a aac -t 3840 output.mp4
4) If a video is downloaded with the command above including a 1s keyframe, will cuts made on 1 s timing borders be OK to paste without re-encoding?
My cut engine operates on whole seconds only.. -
1) The default in x264 is a variable keyframe interval (on scene change, or at most every 250 frames).
2) More keyframes increase file size.
3) Keyframes are a libx264 encoder parameter; they don't have to match the source. To force keyframes you have to set GOP-size parameters, specified as the min and max number of frames in a GOP, etc.
4) Lossless cuts are always made on keyframes. The first frame of a video will be a keyframe. -
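Point 3 can be sketched concretely. With libx264, a fixed GOP is usually requested by setting keyint and min-keyint to the same value and disabling scene-cut keyframes; the helper below (illustrative, not from the thread) just assembles that -x264-params string from a frame rate and an interval in seconds:

```shell
# Assemble an -x264-params string that forces a keyframe every N seconds:
# keyint = min-keyint = fps * seconds; scenecut=0 disables extra keyframes
# inserted on scene changes.
gop_params() {
    fps="$1"
    secs="$2"
    keyint=$((fps * secs))
    echo "keyint=${keyint}:min-keyint=${keyint}:scenecut=0"
}

gop_params 25 1    # → keyint=25:min-keyint=25:scenecut=0
```

It would then be used roughly as `ffmpeg -i in.mp4 -c:v libx264 -x264-params "$(gop_params 25 1)" ... out.mp4`.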
You probably mean having more keyframes will increase file size?
I just wonder by how much? The data will be compressed too, right?
And if the cut times land on keyframes, can I keep using my copy command:
Code:
ffmpeg -hide_banner -i <inputfile> -ss <starttime> -t <duration> -c:v copy -c:a copy <outputfile>
I.e. so the extracted part will start with a keyframe and run until just before the next keyframe...
Or maybe the last part is not necessary, since when pasting the cuts together into the final video the following section will start with a keyframe? -
Yes, but most of the compression is obtained by removing part of the information on most frames and then letting the decoding algorithm "figure it out" from similarities among adjacent frames.
Keyframes (technically I-frames, intra frames) are where ALL the information about the current frame is present and the algorithm has nothing to figure out.
P-frames and B-frames, instead, are frames missing part of their own information, and that missing information is rebuilt by the algorithm from the following and (with B-frames) the previous frames.
That's why you can have a 100% frame-precise cut only on keyframes, and why having more keyframes increases the file size. -
@krykmoon, you have it reversed regarding p frames - they retrieve info from previous frames to finish building their picture, not the following frames.
But yes, B frames are bi-directional. And that bidirectionality, along with the use of open GOPs, can further complicate things: what if the following I frame that the B frames need is part of the next GOP, and that happens to be the one you want to cut on? The following clip would play OK, but the end of the previous clip might have corrupted final B frames, which could be pretty lengthy depending on the cadence/pattern.
This is why I have been suggesting a newer, smarter form of "smart rendering", where only the few frames surrounding and affected by a cut get rerendered/re-encoded.
Scott -
Smart-rendering is a frequently desired feature for video cutters/editors but is likely beyond the scope of ffmpeg.
x264 uses closed-gop by default.
Smart-rendering might be possible manually using ffmpeg, as long as you can encode with the exact same parameters as the source video. -
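A manual version of that idea might look like the outline below (very much a sketch: times, file names, and encoder settings are placeholders, and in practice matching the source's encoding parameters exactly is the hard part). Re-encode only the stretch from the desired start to the next keyframe, stream-copy from that keyframe onward, then concat the two pieces; the commands are echoed rather than executed:

```shell
#!/bin/sh
# Manual "smart rendering" outline for one cut: re-encode only the head
# (desired start -> next keyframe), stream-copy the rest, then join.
# START/NEXT_KEY and encoder settings are placeholders; the re-encode
# must match the source's codec parameters for the join to play cleanly.
set -e
run() { echo "$@"; }     # echo only; drop the wrapper to really run

START=65.0               # desired (non-keyframe) cut start, seconds
NEXT_KEY=66.0            # next keyframe at/after START (e.g. from ffprobe)

# Output-side seek (-ss after -i) for frame accuracy; slow but exact,
# and the segment is tiny anyway.
run ffmpeg -i input.mp4 -ss "$START" -to "$NEXT_KEY" \
    -c:v libx264 -crf 26 -c:a aac head.mp4
# NEXT_KEY is a keyframe, so input-side seek + stream copy is clean here.
run ffmpeg -ss "$NEXT_KEY" -i input.mp4 -c copy tail.mp4

printf "file 'head.mp4'\nfile 'tail.mp4'\n" > filelist.txt
run ffmpeg -f concat -i filelist.txt -c copy joined.mp4
```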
Smart-rendering allows cuts starting on any arbitrary frame (vs starting on an existing keyframe for lossless cutting). It requires re-encoding the required segment (and recombining it).
There is no automated smart part with ffmpeg: you need to provide all the parameters. -
Smart Rendering (whether normal S.R. or what I was mentioning) is beyond the current scope of ffmpeg, but not necessarily beyond some future feature.
Smart-rendering requires the ability to view & manage encoding parameters beyond GOP boundaries, or to vary them based on the specifics of a frame's position in the GOP cadence or timeline (some of this is under the jurisdiction of the encoder, but not this kind of overview). So far, encoders alone can only do this when given a listing of how to encode based on frame number/timecode, and that usually requires a GUI-based listing builder, either in a GUI front-end to an encoder or built into an NLE. That wouldn't necessarily have to be the case if there were programmatic, script-based logic that could do it, even in a command-line environment. And the logic for deciding whether or not to re-render a frame/GOP is fairly straightforward.
Scott
Last edited by Cornucopia; 5th Jul 2022 at 15:58.
-
IIRC, VideoReDo has an API, part of which I have used very minimally from VBS, and it even has a user-developed app ?VAP? in its forums.
VideoReDo can re-encode just around cuts specified at non-keyframes; that's its main thing.
It may not do what you want, but hey what do you expect from free advice -
https://www.videohelp.com/software/LosslessCut
It's free and works great. You can cut whatever you want without re-encoding; I think it uses ffmpeg -
LosslessCut cuts on keyframes only. It's like Avidemux: keyframe only, no smart rendering.
-
From version 3.44.0 they introduced a "smart cut" feature that should allow precise cuts, but I think it is in a very early stage (It's listed as "experimental") and I never tried it.
https://github.com/mifi/lossless-cut/releases/tag/v3.44.0 -
Capture with keyframes every second using -g <framespersecond> and -x264opts no-scenecut. Then, after you visually find the cut point, use the nearest second as the cut time.
-
Thanks for your input!
So my download script running on Ubuntu Server 20.04 uses ffmpeg like this:
CMD="ffmpeg -hide_banner ${MODE} -i \"${M3U8URL}\" -vf scale=w=-4:h=480 -c:v libx264 -preset fast -crf 26 -c:a aac -t ${CAPTURETIME} ${TARGETFILE}"
where:
Code:
read -r VIDEOURL M3U8URL < $URLFILE  # Read variables from a file
MODE="-referer \"${VIDEOURL}\""
CAPTURETIME="$1"
TARGETFILE="$2"
Re-encoding is done "on-the-fly" when downloading the stream.
But I have no control over the frames or scenes here...
So where in this command line should I put the arguments you propose?
Like this?:
Code:
framespersecond="25"
options="no-scenecut"
CMD="ffmpeg -hide_banner ${MODE} -i \"${M3U8URL}\" -vf scale=w=-4:h=480 -c:v libx264 -g $framespersecond -x264opts $options -preset fast -crf 26 -c:a aac -t ${CAPTURETIME} ${TARGETFILE}"
My video editor works by having buttons to click when I want to put a cut start and end and then the current position in the playing video is read back and stored as whole seconds from the start.
I read back the current position from the video player (a VLC API component) so it should be whatever the player uses.
Thanks for any clarification on usage.
EDIT some hours later:
I found a seven-year-old Stack Overflow post that explained how the arguments should be used...
They belong with this section of my command:
Code:
-c:v libx264 -x264-params keyint=120:scenecut=0
Unfortunately this did not work well, the splice points are still bad.
So right now the only way I have found that works is pretty slow, because it re-encodes the whole video.
Last edited by BosseB; 8th Jul 2022 at 16:50. Reason: Reporting test result
-
So I have bitten the bullet and created a script that will be run in a screen window to process the downloaded video with a list of extraction pieces to concatenate.
This uses the ffmpeg command that I showed in my start post on this thread and has some other book-keeping tasks as well.
I run this in a screen window so I can continue editing the next video while the conversion is being done.
This only slows me down by about one file conversion time since while the screen process runs I can prepare the next and start it in another screen window.
Of course, if I am quicker getting the cut points for the next video than the conversion time then there will be a cumulative delay before the videos are all converted.
But it does not affect me (the human) so much, just delays the publishing time a bit. -
Just my 2 cents worth... It is possible to cut at any "P" frame, usually every 4 frames, so the end would be approximately where you want it. You still need to start at a keyframe ("I" frame), but the locations of each can be found using ffmpeg and ffprobe. FFprobe is no speed demon, so it takes a while for a large file. A script (automated program) I use provides this:
ALL FRAMES (FFmpeg script altered to only obtain certain sequences):
1, 15090, 0.167667, I (side_data)
0, 18090, 0.201000, B
0, 21090, 0.234333, B
0, 24090, 0.267667, B
0, 27090, 0.301000, P
0, 30090, 0.334333, B
0, 33090, 0.367667, B
0, 36090, 0.401000, B
0, 39090, 0.434333, P
KEY FRAMES
1, 15090, 0.167667, I (side_data)
1, 285090, 3.167667, I
1, 7305090, 81.167667, I
1, 7575090, 84.167667, I
1, 7845090, 87.167667, I
1, 8115090, 90.167667, I
Adding FFMS2 (FFmpegSource2) to Avisynth allows viewing all the pertinent info (frame number, frame type, frame PTS time) as the video plays:
[Attachment 65909]
If anyone would like more info on the scripts contained in my program, let me know.
[Attachment 65910]
So I am back again with the most basic question:
Given the following:
- 1 hour video downloaded from the net using ffmpeg and re-encoded on the fly to set screen size etc.
- Using my video editor to find the start/end times in seconds for the sections I want to extract
- Use ffmpeg to extract each such section:
Code:
ffmpeg -hide_banner -ss 00:01:00 -i input.mp4 -to 00:02:00 -c copy output.mp4
where:
-ss = start time of the clip as hh:mm:ss or sssss
-i = name of input file
-to = end position of the clip (a timestamp, not a duration; if missing, clip to end of file)
-c = codec handling; "copy" means stream copy without re-encoding
The final argument is the name of the output file.
So since the start/duration times are in whole seconds (that is what my editor produces), the cut points mostly do not fall on frame borders, right?
QUESTION:
Is there a command (ffmpeg or ffprobe), which can be used to find the closest frame border before or on the given cut start point?
If that is possible, then even if the frame-border time is a floating-point number I could use it in the -ss argument of the cut command to ensure that the extracted section starts on a frame border, and presumably the pasting together of the sections into a single video would then be cleaner at the borders?
Or is this not possible to do in a single ffmpeg/ffprobe command that can be scripted? -
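One possible approach (a sketch; in this thread's context the borders that matter for -c copy are keyframes): list keyframe timestamps with ffprobe, then pick the last one at or before the requested cut second. The ffprobe invocation is shown in a comment, and for illustration the selection runs against hard-coded sample timestamps; `nearest_keyframe` is an illustrative name, not an existing command:

```shell
#!/bin/sh
# Listing keyframe timestamps (decodes keyframes only, so reasonably fast):
#   ffprobe -v error -select_streams v:0 -skip_frame nokey \
#           -show_entries frame=pts_time -of csv=p=0 input.mp4
# (on older ffprobe versions the field may be pkt_pts_time instead)

# Pick the last keyframe time at or before a requested cut time (seconds).
# $1 = requested cut time; keyframe times arrive one per line on stdin.
nearest_keyframe() {
    awk -v t="$1" '($1 + 0) <= (t + 0) { best = $1 } END { print best }'
}

# Sample timestamps standing in for real ffprobe output:
KEYFRAMES="0.000000
3.000000
6.000000
9.500000"

printf '%s\n' "$KEYFRAMES" | nearest_keyframe 7   # → 6.000000
```

The printed value can then go straight into `-ss` of the copy-cut command. If the requested time precedes the first keyframe, the function prints an empty line; a real script should handle that case.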
Thanks, this looks promising!
I currently have two different approaches to concatenating the video cuts:
1- Full re-encode as described above, executed inside a script as a single ffmpeg command line containing the cut specification list, including file names. It looks like this:
Code:
ffmpeg -hide_banner \
  -ss <starttime1> -t <duration1> -i <inputfile> \
  -ss <starttime2> -t <duration2> -i <inputfile> \
  -ss <starttime3> -t <duration3> -i <inputfile> \
  -ss <starttime4> -t <duration4> -i <inputfile> \
  -ss <starttime5> -t <duration5> -i <inputfile> \
  -lavfi concat=n=<numcuts>:v=1:a=1 -an <resultfile>
2- Lossless cut-and-concat using stream copy, as described in my first post.
The latter is the quick one but suffers from artifacts at the paste points.
The first works fine quality-wise but is slow.
What I would like to do is to use the suggested function ffnearest() to create the accurate cut points for each section and run these commands in a similar fashion as a single ffmpeg command.
But when I look at the ffmpeg documentation for concat this mode is not described...
Can I somehow use the same construct for the copy concat too?
Do I just replace the line
Code:
-lavfi concat=n=<numcuts>:v=1:a=1 -an <resultfile>
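One thing worth knowing here (hedged, since it does not by itself give frame-precise results): the concat demuxer, unlike the concat filter, accepts per-file inpoint/outpoint directives, so a single filelist can reference the same input several times. With -c copy the demuxer still snaps to keyframes, so the points should be keyframe-aligned. A sketch that generates such a filelist (times and names are placeholders):

```shell
#!/bin/sh
# Build a concat-demuxer filelist that references one input several times
# using inpoint/outpoint directives (in seconds). With -c copy the demuxer
# still snaps to keyframes, so the points should be keyframe-aligned.
set -e
INPUT="input.mp4"

make_filelist() {
    # arguments: segments given as "inpoint:outpoint" in seconds
    for pair in "$@"; do
        inp=${pair%%:*}
        outp=${pair##*:}
        printf "file '%s'\ninpoint %s\noutpoint %s\n" "$INPUT" "$inp" "$outp"
    done
}

make_filelist "60:180" "600:900" > filelist.txt
# Then: ffmpeg -hide_banner -f concat -i filelist.txt -c copy result.mp4
```

Fed to the concat demuxer with -c copy, this replaces the five-input filter-concat construct with a single stream-copy pass over one filelist.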