Hi.
Here's my problem: the person being recorded needs to see himself while recording.
So it's the issue of monitoring in real time. I found that using the tee pseudo-muxer seems to be the simplest and most straightforward approach. The FFmpeg documentation says:
"The tee muxer can be used to write the same data to several outputs, such as files or streams. It can be used, for example, to stream a video over a network and save it to disk at the same time."
Perfect, except in my case it's not over a network but just to the screen.
So it gives us something like:
Code:
ffmpeg -f v4l2 -i /dev/video0 -framerate 30 -video_size 1280x720 -c:v utvideo -f tee "out.mvk|[f=nut]-" | ffplay -
But it says:
Code:
Output #0, tee, to 'out.mvk|-':
Output file #0 does not contain any stream
pipe:: Invalid data found when processing input
   nan    :  0.000 fd=   0 aq=    0KB vq=    0KB sq=    0B f=0/0

If I need to use the -map option, I don't know how I should do it.
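From what I can make of the tee docs, nothing gets auto-selected with -f tee, so I'd have to map the stream explicitly with -map and give each slave output its own format. Is it something like this? Untested guess on my part, and I'm assuming my "out.mvk" was meant to be "out.mkv":
Code:
# untested guess: map the video stream and give each tee slave an explicit format
# (framerate/size placed before -i so they apply to the v4l2 input)
ffmpeg -f v4l2 -framerate 30 -video_size 1280x720 -i /dev/video0 \
       -c:v utvideo -map 0:v \
       -f tee "[f=matroska]out.mkv|[f=nut]pipe:1" | ffplay -
(pipe:1 should be stdout, so ffplay would pick up the NUT stream from the shell pipe.)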
Thanks for your help.
-
For a live stream, see here - https://www.oodlestechnologies.com/blogs/how-to-save-live-video-on-local-disk-using-ffmpeg/
-
"In this blog I am going to explain how to take a live video streaming URL as input, transcode it, and record it to a local disk using FFmpeg."
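The gist of it (this is not the blog's exact command; the URL and codec choices below are just placeholders) is simply to take the stream URL as input and write a transcoded copy to a local file:
Code:
# placeholder URL; re-encode to H.264/AAC while saving to disk
ffmpeg -i "https://example.com/live/playlist.m3u8" -c:v libx264 -c:a aac recording.mp4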
-
Try this here - https://trac.ffmpeg.org/wiki/Capture/Desktop
I assume you mean a webcam, correct? - https://trac.ffmpeg.org/wiki/Capture/Webcam -
I have this for downloading and playing a live m3u8 stream. It should work for a camera too.
Code:
ffmpeg -y -i "http://live.prd.go.th:1935/live/ch1_L.sdp/chunklist_w1249643471.m3u8" -c:v libx264 -af "volume=15dB" output.mp4 -c copy -f mpegts - | ffplay -x 1280 -y 720 -
-
Code:
ffmpeg -y -f v4l2 -framerate 30 -video_size 1280x720 -input_format mjpeg -i /dev/video2 -c:v utvideo -vf hflip essai.nut -c copy -f nut - | ffplay -
Except the time lag is terrible. It's supposed to be real time.
And whatever -f format I choose for the pipe stream, I always get:
Code:
av_interleaved_write_frame(): Broken pipe
Error writing trailer of pipe:: Broken pipe
frame=  263 fps= 26 q=-0.0 Lq=-1.0 size=  165895kB time=00:00:08.78 bitrate=154646.4kbits/s speed=0.879x
video:205515kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!
I want this, and nothing else will do as regards real time:
camera -> filter -> | -> file
                    | -> screen !
If you can tweak my command to remove the time lag, then fine.
Otherwise I'll need the tee muxer.
-
I tried all manner of tee, including with command substitution. It never works out.
ffplay -f v4l2 -framerate 30 -video_size 1280x720 -input_format mjpeg -vf hflip -i /dev/video2 | tee -a essai; ffmpeg -y -i essai -f utvideo essai.nut
"essai" isn't recognized. apparently sending frames in real time to append to a file doesn't fit with the age old unix f***in' syntax.
Is it possible to stack up all the data in the essai file, so that the input ffmpeg takes is exactly the same as if coming straight from the webcam ?
piping from ffmpeg to ffplay is way to slow.
It has to work with something like:
ffmpeg -nostdin -f v4l2 -i /dev/video2 -framerate 30 -video_size 1280x720 -c:v utvideo -f tee -map 0:v "[f=nut]essai.nut|[f=data](ffplay -)"
Can THAT work? How?
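Or, since the tee slave outputs appear to be files/URLs rather than commands to launch (so "[f=data](ffplay -)" would presumably just be taken as a strange filename), would I have to go through a named pipe instead? Untested sketch; "preview.nut" is just a name I made up:
Code:
# untested: ffplay reads a FIFO while the tee muxer writes both the file and the FIFO
mkfifo preview.nut
ffplay preview.nut &
ffmpeg -nostdin -y -f v4l2 -framerate 30 -video_size 1280x720 -i /dev/video2 \
       -c:v utvideo -map 0:v \
       -f tee "[f=nut]essai.nut|[f=nut]preview.nut"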
-
"stacking up" data is not going to work, especially if it is sliced up. compressed video formats like MP4 are made up of different kinds of frame types that must follow in the correct order. I frames contain a whole image and others are differences that need to have that particular frame before them.
your command line (which i see in text, not in code) has "| tee -a" which in Linux will run the system "tee" command which knows nothing about any file format. it works only on raw byte streams. it can be useful in some cases, but not for collecting multiple parts of just about any data format, especially compressed formats like video or audio.
you are creating a "mishmash". ffmpeg and its other tools may be able to find video in there. but at the boundaries you are creating, it will be a mess. you would be better to transcode that out to raw uncompressed frames of equal size. but that will be huge data and transcoding back to compressed will take many times the play duration for typical general purpose CPUs (BTDT). -
I don't know if it will help but sending raw video to ffplay may result in less delay. I only have a webcam to test with. It has an inherent delay so I can't tell how much delay is from the webcam and how much from the processing/piping.
Code:
ffmpeg -y -f dshow -r 30 -i video="USB Video":audio="Digital Audio Interface (USB Digital Audio)" ^
       -c:v h264_qsv -c:a pcm_s16le output.mkv ^
       -c:v rawvideo -c:a pcm_s16le -f nut - | "G:\Program Files\ffmpeg64\bin\ffplay.exe" -
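I can't test v4l2 here, but the Linux equivalent of the same idea would presumably look something like this (the device path and hflip filter are carried over from your command, and the -fflags nobuffer flag is just a guess at reducing ffplay's buffering; untested):
Code:
# untested Linux sketch of the same idea: utvideo to the file, rawvideo over the pipe to ffplay
# -fflags nobuffer is only an attempt to cut ffplay's input buffering
ffmpeg -y -f v4l2 -framerate 30 -video_size 1280x720 -input_format mjpeg -i /dev/video2 \
       -c:v utvideo -vf hflip essai.nut \
       -c:v rawvideo -f nut - | ffplay -fflags nobuffer -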