Here's my problem: the person being recorded needs to see himself while recording is in progress.
So it's an issue of monitoring in real time. I found that using the tee pseudo-muxer seems to be the simplest and most straightforward approach.
Perfect, except in my case it's not over a network but just to the screen. From the documentation: "The tee muxer can be used to write the same data to several outputs, such as files or streams. It can be used, for example, to stream a video over a network and save it to disk at the same time."
So it gives us something like:
ffmpeg -f v4l2 -i /dev/video0 -framerate 30 -video_size 1280x720 -c:v utvideo -f tee "out.mvk|[f=nut]-" | ffplay -
But it says:

Code:
Output #0, tee, to 'out.mvk|-':
Output file #0 does not contain any stream
pipe:: Invalid data found when processing input
nan : 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0

If I need to use the -map option, I don't know how I should do it.
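For what it's worth, here is a sketch of how -map is usually combined with the tee muxer (untested here; it assumes the same device as above, that ".mkv" was intended rather than ".mvk", and note that input options like -framerate must come before -i):

```shell
# Sketch only: input options moved before -i, -map 0:v added for tee,
# and the pipe branch given an explicit container (nut).
# /dev/video0 and the encoder are taken from the original command.
ffmpeg -f v4l2 -framerate 30 -video_size 1280x720 -i /dev/video0 \
  -c:v utvideo -map 0:v -f tee "out.mkv|[f=nut]pipe:1" | ffplay -
```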
Thanks for your help.
Yes webcam. Thanks I'll look at it tomorrow.
I have this for downloading and playing a live m3u8 stream. It should work for a camera too.
ffmpeg -y -i "http://live.prd.go.th:1935/live/ch1_L.sdp/chunklist_w1249643471.m3u8" -c:v libx264 -af "volume=15dB" output.mp4 -c copy -f mpegts - | ffplay -x 1280 -y 720 -
ffmpeg -y -f v4l2 -framerate 30 -video_size 1280x720 -input_format mjpeg -i /dev/video2 -c:v utvideo -vf hflip essai.nut -c copy -f nut - | ffplay -
Except the time lag is terrible. It's supposed to be real time.
And whatever -f format I choose for the pipe stream, I always get:

Code:
av_interleaved_write_frame(): Broken pipe
Error writing trailer of pipe:: Broken pipe
frame=  263 fps= 26 q=-0.0 Lq=-1.0 size=  165895kB time=00:00:08.78 bitrate=154646.4kbits/s speed=0.879x
video:205515kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!
I want this, and nothing else will do as regards real time:
camera -> filter -> | -> file
                    | -> screen
If you can tweak my command to remove the time lag, then fine.
Otherwise I'll need the tee muxer.
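If the lag comes mostly from ffplay's input buffering rather than the pipe itself, one common tweak is to shrink the player's probing and buffering. A sketch based on the command above (untested on this hardware):

```shell
# Same pipeline as before, with reduced probing/buffering on the player side:
# -fflags nobuffer disables input buffering, -flags low_delay forces
# low-delay decoding, and -probesize 32 keeps stream probing minimal.
ffmpeg -y -f v4l2 -framerate 30 -video_size 1280x720 -input_format mjpeg \
  -i /dev/video2 -c:v utvideo -vf hflip essai.nut -c copy -f nut - \
  | ffplay -fflags nobuffer -flags low_delay -probesize 32 -
```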
Last edited by Sentinel166; 28th Sep 2021 at 04:41.
I tried all manner of tee, including with command substitution. Never works out.
ffplay -f v4l2 -framerate 30 -video_size 1280x720 -input_format mjpeg -vf hflip -i /dev/video2 | tee -a essai; ffmpeg -y -i essai -f utvideo essai.nut
"essai" isn't recognized. apparently sending frames in real time to append to a file doesn't fit with the age old unix f***in' syntax.
Is it possible to stack up all the data in the essai file, so that the input ffmpeg takes is exactly the same as if it were coming straight from the webcam?
Piping from ffmpeg to ffplay is way too slow.
It has to work with something like:
ffmpeg -nostdin -f v4l2 -i /dev/video2 -framerate 30 -video_size 1280x720 -c:v utvideo -f tee -map 0:v "[f=nut]essai.nut|[f=data](ffplay -)"
Can THAT work ? How ?
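For the record, the tee muxer cannot launch a player by itself; "[f=data](ffplay -)" is not valid tee syntax. The usual pattern is to make one tee branch write to stdout and pipe that to ffplay. A sketch assuming the same device and options as the command above (untested here):

```shell
# Sketch: one tee branch writes the file, the other goes to stdout
# ("pipe:1") wrapped in the nut container so ffplay can read it.
ffmpeg -nostdin -f v4l2 -framerate 30 -video_size 1280x720 -i /dev/video2 \
  -c:v utvideo -map 0:v -f tee "[f=nut]essai.nut|[f=nut]pipe:1" | ffplay -
```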
Last edited by Sentinel166; 28th Sep 2021 at 06:07.
"stacking up" data is not going to work, especially if it is sliced up. compressed video formats like MP4 are made up of different kinds of frame types that must follow in the correct order. I frames contain a whole image and others are differences that need to have that particular frame before them.
Your command line (which I see in text, not in code) has "| tee -a", which in Linux runs the system tee command. That command knows nothing about any file format; it works only on raw byte streams. It can be useful in some cases, but not for collecting multiple parts of just about any data format, especially compressed formats like video or audio.
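A minimal illustration of what the system tee actually does (it duplicates bytes verbatim, nothing more):

```shell
# tee(1) copies its stdin to a file while passing the same bytes to stdout;
# it has no notion of frames, containers, or appending coherent media.
printf 'raw bytes' | tee copy.bin
cat copy.bin
```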
You are creating a "mishmash". ffmpeg and its tools may be able to find video in there, but at the boundaries you are creating, it will be a mess. You would be better off transcoding that out to raw uncompressed frames of equal size, but that is huge data, and transcoding back to compressed will take many times the play duration on typical general-purpose CPUs (BTDT).