VideoHelp Forum

  1. Hi guys

    I can't find command line examples for ffmpeg on how to convert progressive video to interlaced.

    The video was recorded as AVCHD 1920x1080 50i, but after converting to UT Video ffmpeg shows 25 fps. Is there a way I can tell ffmpeg to interlace to 50 fps?
    Last edited by oduodui; 16th Mar 2018 at 18:16.
  2. 50i is 25 interlaced frames per second. So there may be nothing wrong with the conversion. What's the command line you used?
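    If you want to confirm that from the file itself, ffprobe can print the flagged field order (a sketch; it assumes the source filename from your post):

    Code:
    ```shell
    # 'tt'/'bb' in field_order means interlaced (top/bottom field first);
    # 'progressive' means no field flags. r_frame_rate is frames, not fields.
    ffprobe -v error -select_streams v:0 \
        -show_entries stream=r_frame_rate,field_order \
        -of default=noprint_wrappers=1 00066.mts
    ```
    An interlaced 50i camera file should report 25/1 and tt here.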
  3. ffmpeg -i 00066.mts -vcodec utvideo -pix_fmt yuv420p -acodec pcm_s16le 66.avi
  4. ffprobe version N-82143-gbf14393 Copyright (c) 2007-2016 the FFmpeg developers
    built with gcc 5.4.0 (GCC)
    configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-libebur128 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
    libavutil      55.35.100 / 55.35.100
    libavcodec     57.65.100 / 57.65.100
    libavformat    57.57.100 / 57.57.100
    libavdevice    57.2.100 / 57.2.100
    libavfilter     6.66.100 /  6.66.100
    libswscale      4.3.100 /  4.3.100
    libswresample   2.4.100 /  2.4.100
    libpostproc    54.2.100 / 54.2.100
    Input #0, mpegts, from 'C:\Users\Elizabeth\Desktop\10March2018\00066.MTS':
      Duration: 00:17:13.22, start: 1.040000, bitrate: 16449 kb/s
      Program 1
        Stream #0:0[0x1011]: Video: h264 (High) (HDMV / 0x564D4448), yuv420p(top first), 1920x1080 [SAR 1:1 DAR 16:9], 25 fps, 50 tbr, 90k tbn, 50 tbc
        Stream #0:1[0x1100]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 256 kb/s
        Stream #0:2[0x1200]: Subtitle: hdmv_pgs_subtitle ([144][0][0][0] / 0x0090), 1920x1080
  5. Note how ffprobe shows your video is 25 fps. 25 fps interlaced video used to be called 25i. Marketing started calling it 50i because bigger numbers sound better -- but it's just a name change.

    The command line you used works properly. I verified it here with an interlaced MPG file I use for testing. But since UT Video doesn't indicate whether the 4:2:0 video is progressive or interlaced (the chroma is stored differently for each), you may have problems with blended chroma later (if the software treats the interlaced chroma as progressive chroma).

    If you want a 50p file use a deinterlacer like yadif:

    Code:
    ffmpeg -i 00066.mts -vf yadif=1 -vcodec utvideo -pix_fmt yuv420p -acodec pcm_s16le 66.avi
    Yadif (like any deinterlacer) will create some artifacts, though.
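    One way to check what you actually got is ffmpeg's idet (interlace detection) filter - a sketch, assuming the output name from post #3:

    Code:
    ```shell
    # Decode the AVI through the interlace-detection filter, discard the frames,
    # and keep only idet's summary lines (TFF/BFF/progressive frame counts).
    ffmpeg -i 66.avi -vf idet -an -f null - 2>&1 | grep -i "detection"
    ```
    If the TFF count dominates, the fields survived the UT Video conversion intact.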
    Last edited by jagabo; 17th Mar 2018 at 10:12.
  6. Thanks for the replies

    I want to interlace the mts frames. I think the camera records the frames as 50 fields per second interlaced and then stores them as 25 progressive frames. "Top field first" means that when the frames are stored, the first and second fields of the 50 fps interlaced capture are stored as one frame, with the field order flagged as top field first so the camera can separate them again for playback.

    Ffmpeg just sees it as progressive when it transcodes, but it still sees the field order.

    What I want is to tell ffmpeg to take every single progressive frame, split it into its separate interlaced fields, and then transcode them as two separate fields with TFF.

    So when I run ffprobe the result will be 50i, TFF, with duplicated fields.

    Is this possible?

    The problem is that if ffmpeg transcodes at 25 fps progressive, it shows two interlaced fields at the same time, which makes the video jaggedy and blurry because the fields were recorded at different times. The video quality suffers, and that defeats the purpose of recording at 50 fps.

    Is this possible with ffmpeg?
  7. Not sure if I understand your problem correctly, but it may be that the issue is related not to the encoding itself but to incorrectly signalled info about whether your material is interlaced (perhaps due to a codec limitation).
    Of course you can use the tinterlace filter (tinterlace=mode=4 or tinterlace=mode=5) within ffmpeg to convert a progressive source to an interlaced one, however I'm not fully convinced this is what you are searching for.

    Forgot to add that tinterlace performs re-lacing - to separate fields you need to use separatefields, preceded (as mentioned in the help) by the setfield filter to manually set the proper field dominance. This is more or less the same way it is usually done, for example, in Avisynth. Ffmpeg seems to be missing (my impression, perhaps wrong) one filter present in Avisynth (AssumeFieldBased), so you may consider using Avisynth together with ffmpeg.

    After a while - perhaps instead of re-lacing you should tell the encoder that your source is interlaced - for example, try adding these options to your ffmpeg line without doing any filtering (spatial or time domain).

    For TFF
    Code:
     -top 1 -flags:v +ilme+ildct
    For BFF
    Code:
     -top 0 -flags:v +ilme+ildct
    This tells the encoder that the content is interlaced and should be encoded as interlaced.
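    For the separatefields route mentioned above, the filter chain would look something like this (an untested sketch against this particular source; the output name is made up):

    Code:
    ```shell
    # setfield=tff forces top-field-first dominance, then separatefields splits
    # each 25 fps frame into two half-height (1920x540) field pictures at 50 fps.
    ffmpeg -i 00066.mts -vf "setfield=tff,separatefields" \
        -vcodec utvideo -pix_fmt yuv420p -acodec pcm_s16le 66_fields.avi
    ```
    Note the result is field pictures, not full frames - you would need weave (or an interlace-aware tool) to reassemble them later.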
    Last edited by pandy; 22nd Mar 2018 at 04:50. Reason: additional info
  8. Pandy, could you give me a complete command line example for -top 0/1?
  9. Originally Posted by oduodui View Post
    Pandy, could you give me a complete command line example for -top 0/1?
    NP - provide your ffmpeg command
  10. Ffmpeg -i 00066.mts ............filter

    -vcodec utvideo -pix_fmt yuv420p -acodec pcm_s16le 66.avi
  11. If your source is TFF then:
    Code:
    ffmpeg -hide_banner -loglevel 32 -stats -color_range 2 -y -i 00066.mts -c:v utvideo -top 1 -flags:v +ilme+ildct -color_range 2 -pix_fmt yuv420p -c:a pcm_s16le 66_tff.avi
    otherwise (if source is BFF):
    Code:
    ffmpeg -hide_banner -loglevel 32 -stats -color_range 2 -y -i 00066.mts -c:v utvideo -top 0 -flags:v +ilme+ildct -color_range 2 -pix_fmt yuv420p -c:a pcm_s16le 66_bff.avi
    Please test and give us feedback.
  12. Originally Posted by oduodui View Post
    I want to interlace the mts frames. I think the camera records the frames as 50 fields per second interlaced and then stores them as 25 progressive frames. "Top field first" means that when the frames are stored, the first and second fields of the 50 fps interlaced capture are stored as one frame, with the field order flagged as top field first so the camera can separate them again for playback.
    The MTS from camera is already interlaced. 25 frames per second. Or 50 fields per second.

    When it's played back on camera or TV it gets deinterlaced to 50 frames per second




    The problem is that if ffmpeg transcodes at 25 fps progressive, it shows two interlaced fields at the same time, which makes the video jaggedy and blurry because the fields were recorded at different times. The video quality suffers, and that defeats the purpose of recording at 50 fps.
    No

    It just means your other program isn't handling UT Video correctly. You might have to manually activate a deinterlacer, or interpret the file as interlaced in a video editor.

    The original file is flagged interlaced, and most programs will handle it correctly, including editors and media players. Most will deinterlace automatically.


    What is the background information? Why are you doing this ? What is this for ? What other applications are you planning to use ?
    There might be a more suitable intermediate format which signals interlace automatically.
  13. The AVI container doesn't really support interlaced video. It's up to the codecs used to maintain that information and pass it along to the later editor/encoder/player. UT Video Codec does not do that. As I said in post #5, your command line in post #3 already produces interlaced YV12 in the AVI file. The problems are in the downstream handling -- the editor doesn't realize the video is interlaced. It's up to you to tell it that.
    Last edited by jagabo; 22nd Mar 2018 at 17:09.
  14. I guess I missed the bit about fields and frames. I thought fields were just another name for frames. Do all codecs and/or containers store 50i recordings as 25i?

    I never knew interlaced was not supported by AVI, or that some containers had that issue. Are MKV or MOV better at that?

    I am using UT Video because it transcodes so very fast, both in ffmpeg and Blender. It also supports yuv420p, which is what the camcorder records, so there is no loss of quality from changing to yuv422 or rgb24. The intermediate codecs I have tried (ffmpeg ones) like ProRes (-vprofile 2 or 3) definitely lose quality after transcoding out of .mts. FFV1 and Huffyuv retain a lot of the quality (the ffmpeg ones, using rgb24), but the file sizes are enormous, and when I go over about 30 minutes of FFV1 video on the timeline in Blender, the whole thing freezes up. I then use proxies, but for some reason the proxies get stuck at 100 percent and never finish building, so I can't use them for FFV1 videos on the timeline. The quality of 5-minute FFV1 videos pulled into the timeline is exceptional, but when I go long I need proxies, but alas.

    So I looked for lossless intraframe codecs that use yuv420p (the same colour scheme as the camera), because transcoding to something like yuv422 or rgb24 only slows things down. ProRes quality sucks, and rgb24, awesome as it is, is huge and freezes things up. Blender is the only free NLE I can use for multicam editing that uses proxies; everything else isn't free. It also works on Linux and Apple Mac, so I don't lose the skills I have learned if I move to a different operating system. I have accepted the quality loss because I can't get the proxies to finish building. Maybe I should leave it for a whole day and see what happens. When I put the .mts video into Blender it does pick it up as 50i, but I can't use that, as Blender doesn't support smart rendering, and I need an intraframe codec (with uncompressed audio) so that I can cut frame-accurately with no audio/video sync issues.

    Here is a thread where I explained my workflow from a few years ago. I now understand what everybody was talking about:

    https://forum.videohelp.com/threads/367981-What-is-the-best-source-format-for-recording-video

    At that stage I was just trying to find a codec that allowed me to cut frame-accurately without losing sync between audio and video for multicam edits, because to use them you needed an NLE that supported smart rendering, which Blender does not.

    I try to stick to ffmpeg and Blender since both are free and work on Windows, Linux and Apple Mac. So what I learn, I keep.

    Blender also freezes up or creates a black video after it passes 100,000 frames. Hence, if I transcode only 80,000 frames at a time and output to FFV1 again, there should be no loss of quality. After doing this for all the video I can then just use ffmpeg to concat the pieces together, and then use ffmpeg to transcode to DivX FMP4 ASP, which plays on anything except mobile devices. (I can transcode with libx264 -vprofile baseline 3.0, which will play, but the file sizes are huge; or use HandBrake, but it is very slow and again the file size is not that small.)

    I don't use DVD or Blu-ray for delivery storage but memory sticks. The USB memory stick must just be formatted to the correct format for the TV or DVD player (NTFS, FAT32 or exFAT) and the files copied onto it.

    The reason I want to use ffmpeg to transcode the mts files to a lossless intraframe codec is that I can then transcode the video even if it is long, over an hour, which Blender won't do.

    PC specs

    Windows 64-bit Professional
    Core i3
    16 GB RAM
    1.8 TB HDD
    The graphics card I can't remember, but it is not the onboard graphics. It's late and I am not at work where the PC is.
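    For the chunk-and-rejoin step described above, ffmpeg's concat demuxer can splice lossless segments back together without re-encoding (a sketch; the segment names are made up):

    Code:
    ```shell
    # parts.txt lists the FFV1 segments in playback order
    printf "file 'part1.mkv'\nfile 'part2.mkv'\n" > parts.txt
    # -c copy remuxes the pieces as-is, so no quality is lost in the join
    ffmpeg -f concat -safe 0 -i parts.txt -c copy joined.mkv
    ```
    Since all the segments come from the same encode settings, stream copy should splice cleanly at the segment boundaries.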
  15. Originally Posted by jagabo View Post
    The AVI container doesn't really support interlaced video. It's up to the codecs used to maintain that information and pass it along to the later editor/encoder/player. UT Video Codec does not do that.
    http://umezawa.dyndns.info/archive/utvideo/utvideo-6.0.0-readme.en.html

    Version history
    Version 6.0.0

    New features

    common: Add support for interlace video.
  16. Originally Posted by pandy View Post
    Originally Posted by jagabo View Post
    The AVI container doesn't really support interlaced video. It's up to the codecs used to maintain that information and pass it along to the later editor/encoder/player. UT Video Codec does not do that.
    http://umezawa.dyndns.info/archive/utvideo/utvideo-6.0.0-readme.en.html

    Version history
    Version 6.0.0

    New features

    common: Add support for interlace video.
    Unfortunately this information is not conveyed to other programs, like video editors and NLEs - unlike, say, DV-AVI.


    Originally Posted by oduodui View Post
    I guess I missed the bit about fields and frames. I thought fields were just another name for frames. Does all codecs and or containers store 50i recorded as 25i?
    jagabo already explained earlier: "50i" and "25i" are actually the same thing. Different naming conventions. Both mean 50 fields per second, or 25 frames per second interlaced.

    100 fields per second interlaced (i.e. 50 interlaced frames per second) is not a common format. But 50p is common.



    I just hate blender... uggh . A huge pet peeve of mine. The GUI just makes me want to puke. Both the 3d and video sequence editor. Someone else will have to help you with that part. But it just might be it can't handle it properly. You need to have an interlaced timeline or a way to tell the program it's stored in fields. Blender probably doesn't. It's meant for progressive formats.

    To be fair, in other NLEs UT Video in YUV isn't handled properly either (it's treated as RGB). Nor is the interlaced flag acknowledged (which the ffmpeg variant doesn't even have; only the VFW version has that ability). In those programs you have to interpret it manually as interlaced.
  17. Thanks for the command line examples. Will report back asap.
  18. Originally Posted by oduodui View Post
    Never knew interlaced was not supported by avi or that some containers had that issue. Is mkv or or mov better at that?
    MOV is more formally supported by professional video editors, including interlace, but not by open source ones. MKV is never supported by pro NLEs, except open source ones.


    What kind of project or "editing" are you doing ? Is this a massive multicam edit , timing, sync, effects etc....


    Blender is the only free NLE that I can use for multicam editing that uses proxies
    What about shotcut ? I think it has a proxy workflow . But I guess no true multicam... ?
    Last edited by poisondeathray; 22nd Mar 2018 at 20:32.
  19. Yes, Blender is difficult to learn. I almost gave up. You can click on deinterlace in the side panel, but it slows everything down.

    I am unable to quote on my telephone.

    Thanks for pointing out that Blender is made primarily for progressive video. That explains a lot.

    Thing is, if I can just get the damn proxies to work I can do everything in FFV1, which means virtually no loss in quality after transcoding. That's a big thing for me. Then after I have stabilised I can take the same FFV1 clip, with no loss in quality, and pull it into the timeline.

    The other thing is the metastrips. One of the biggest problems I had with Pinnacle was keeping all the clips aligned so that they synchronised, so that camera A would cover for camera B if there was a mistake. After all your clips are aligned, mistakes creep in without you knowing about it. Just one of the clips jumping out of synchronisation messes everything up. It could also be due to cutting GOPs in the wrong place, etc.
    With Blender, once you have synchronised, you can combine them all as a single metastrip and not worry about losing sync.

    But yes, Blender is hard to learn and not so easy on the eye.
  20. Yes, it is multicam, about 2 hours.

    As far as I know Shotcut doesn't support proxies, but it's been a while since I used it and upgraded to the latest version. I also can't get it to import image sequences. It's a nice program, free, and it also works on Linux.
  21. Originally Posted by oduodui View Post
    Thanks for pointing out about blender be made primarily for progressive video. That explains a lot .
    It might be able to, I'm just saying the vast majority of blender users will not be using interlaced.

    Yes blender is difficult to learn. I almost gave up .
    It's super duper powerful, especially in the 3d department with 3rd party scripts and plugins. Large user base. I'm aware of what it can do, but the GUI just sucks. It really does. I re-visit it every 6 months or so, but more for the 3d side. It's so counterintuitive compared to how other programs do things. I suppose if you start with it, and stick with it, it wouldn't be so bad... but coming from other programs then going to Blender... it's just fighting the GUI to get work done.

    You can click on deinterlace in the side panel, but it slows everything down.

    A video editor should automatically deinterlace for the preview ONLY. It should keep internal knowledge of the intact fields (it's still interlaced internally; just the preview is deinterlaced). This means you have to have some way of telling the settings that the timeline is interlaced, not progressive, and which assets are interlaced (and which are not).

    When you loaded the native MTS file, did it look combed? Or was the preview "clean"? How did it compare to the UT Video version?

    Thing is, if I can just get the damn proxies to work I can do everything in FFV1, which means virtually no loss in quality after transcoding. That's a big thing for me. Then after I have stabilised I can take the same FFV1 clip, with no loss in quality, and pull it into the timeline.
    Are you sure ? On the shorter tests were you able to export a 25i (or 50i) format properly ? Just a short test, import/ export on a short clip - are the fields intact ?

    Does blender have to generate the proxies, or can you externally generate proxies then re-link them in blender?
    Last edited by poisondeathray; 22nd Mar 2018 at 21:17.
  22. I just took a quick look; it's not handling the interlace properly, upsampling incorrectly as progressive instead of interlaced on a native interlaced MTS sample.

    In case that was only the preview: the export has the same upsampling errors, converted to RGB (FFV1 supports YUV, but Blender gives you no option to use it; I suspect Blender cannot work in, or pass through, YUV - the VSE works in RGB only). (You can export as TFF in field mode, but the individual fields show the errors.)

    Another clue is it doesn't even have any "i" presets; they are all "p", like "HDTV 1080p". You can create your own "i" presets, but you're still stuck with the chroma errors. There is no way to override the upsampling to use interlace that I could find.
  23. Ok, it gives the following error:

    Unable to find a suitable output format for 'color_range'
    color_range: Invalid argument
  24. Originally Posted by oduodui View Post
    Ok, it gives the following error:

    Unable to find a suitable output format for 'color_range'
    color_range: Invalid argument
    When you face such an error you have two options: either remove this option, or (IMHO better, and required by the ffmpeg developers) start using a proper (latest) ffmpeg version.
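    If upgrading isn't convenient, the first option just means dropping the rejected options from the line in post #11 (untested sketch):

    Code:
    ```shell
    # Same TFF UT Video command, minus the -color_range options the old build rejects
    ffmpeg -y -i 00066.mts -c:v utvideo -top 1 -flags:v +ilme+ildct \
        -pix_fmt yuv420p -c:a pcm_s16le 66_tff.avi
    ```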
  25. Originally Posted by poisondeathray View Post
    Unfortunately this information is not conveyed to other programs. Like video editors, NLEs - Unlike , say DV-AVI
    What a bizarre idea... - are you sure that Blender is incapable of working with interlaced content, given that it is used for TV broadcast production? IMHO even old POV-Ray supports interlace properly, and it is hard to understand a lack of support for interlaced content in such a huge beast as Blender.

    btw, I will only mention that using the container for interlace/progressive signalling is also quite bizarre - interlace must be supported by the codec at the elementary stream level, not by the container.
  26. Ok, it gives the following error:

    Unable to find a suitable output format for 'color_range'
    color_range: Invalid argument
  27. Originally Posted by oduodui View Post
    Ok, it gives the following error:

    Unable to find a suitable output format for 'color_range'
    color_range: Invalid argument
    Works for me (the only difference is the source file - it was 720x576i25, no audio, a capture coded with FFV1):
    Code:
    ffmpeg started on 2018-03-23 at 11:48:23
    Report written to "ffmpeg-20180323-114823.log"
    Command line:
    ffmpeg -report -hide_banner -loglevel 32 -stats -color_range 2 -y -i capture7_ffv1.mkv -c:v utvideo -top 1 -flags:v +ilme+ildct -color_range 2 -pix_fmt yuv420p -c:a pcm_s16le 66_tff.avi
    Splitting the commandline.
    Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
    Reading option '-hide_banner' ... matched as option 'hide_banner' (do not show program banner) with argument '1'.
    Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument '32'.
    Reading option '-stats' ... matched as option 'stats' (print progress report during encoding) with argument '1'.
    Reading option '-color_range' ... matched as AVOption 'color_range' with argument '2'.
    Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
    Reading option '-i' ... matched as input url with argument 'capture7_ffv1.mkv'.
    Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'utvideo'.
    Reading option '-top' ... matched as option 'top' (top=1/bottom=0/auto=-1 field first) with argument '1'.
    Reading option '-flags:v' ... matched as AVOption 'flags:v' with argument '+ilme+ildct'.
    Reading option '-color_range' ... matched as AVOption 'color_range' with argument '2'.
    Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) with argument 'yuv420p'.
    Reading option '-c:a' ... matched as option 'c' (codec name) with argument 'pcm_s16le'.
    Reading option '66_tff.avi' ... matched as output url.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option report (generate a report) with argument 1.
    Applying option hide_banner (do not show program banner) with argument 1.
    Applying option loglevel (set logging level) with argument 32.
    Applying option stats (print progress report during encoding) with argument 1.
    Applying option y (overwrite output files) with argument 1.
    Successfully parsed a group of options.
    Parsing a group of options: input url capture7_ffv1.mkv.
    Successfully parsed a group of options.
    Opening an input file: capture7_ffv1.mkv.
    [NULL @ 00000000004dcb80] Opening 'capture7_ffv1.mkv' for reading
    [file @ 00000000004dd280] Setting default whitelist 'file,crypto'
    [matroska,webm @ 00000000004dcb80] Format matroska,webm probed with size=2048 and score=100
    st:0 removing common factor 1000000 from timebase
    [matroska,webm @ 00000000004dcb80] Before avformat_find_stream_info() pos: 909 bytes read:32768 seeks:0 nb_streams:1
    [matroska,webm @ 00000000004dcb80] parser not found for codec ffv1, packets or times may be invalid.
    [matroska,webm @ 00000000004dcb80] parser not found for codec ffv1, packets or times may be invalid.
    [matroska,webm @ 00000000004dcb80] All info found
    [matroska,webm @ 00000000004dcb80] After avformat_find_stream_info() pos: 122694 bytes read:122694 seeks:0 frames:1
    Input #0, matroska,webm, from 'capture7_ffv1.mkv':
      Metadata:
        ENCODER         : Lavf58.3.100
      Duration: 00:00:06.28, start: 0.000000, bitrate: 15020 kb/s
        Stream #0:0, 1, 1/1000: Video: ffv1 (FFV1 / 0x31564646), yuv422p(pc), 720x576, 25 fps, 25 tbr, 1k tbn, 1k tbc (default)
        Metadata:
          ENCODER         : Lavc58.9.100 ffv1
          DURATION        : 00:00:06.280000000
    Successfully opened the file.
    Parsing a group of options: output url 66_tff.avi.
    Applying option c:v (codec name) with argument utvideo.
    Applying option top (top=1/bottom=0/auto=-1 field first) with argument 1.
    Applying option pix_fmt (set pixel format) with argument yuv420p.
    Applying option c:a (codec name) with argument pcm_s16le.
    Successfully parsed a group of options.
    Opening an output file: 66_tff.avi.
    [file @ 00000000004de640] Setting default whitelist 'file,crypto'
    Successfully opened the file.
    detected 2 logical cores
    Stream mapping:
      Stream #0:0 -> #0:0 (ffv1 (native) -> utvideo (native))
    Press [q] to stop, [?] for help
    cur_dts is invalid (this is harmless if it occurs once at the start per stream)
    cur_dts is invalid (this is harmless if it occurs once at the start per stream)
    [graph 0 input from stream 0:0 @ 0000000003c4df80] Setting 'video_size' to value '720x576'
    [graph 0 input from stream 0:0 @ 0000000003c4df80] Setting 'pix_fmt' to value '4'
    [graph 0 input from stream 0:0 @ 0000000003c4df80] Setting 'time_base' to value '1/1000'
    [graph 0 input from stream 0:0 @ 0000000003c4df80] Setting 'pixel_aspect' to value '0/1'
    [graph 0 input from stream 0:0 @ 0000000003c4df80] Setting 'sws_param' to value 'flags=2'
    [graph 0 input from stream 0:0 @ 0000000003c4df80] Setting 'frame_rate' to value '25/1'
    [graph 0 input from stream 0:0 @ 0000000003c4df80] w:720 h:576 pixfmt:yuv422p tb:1/1000 fr:25/1 sar:0/1 sws_param:flags=2
    [format @ 0000000003c8ac80] Setting 'pix_fmts' to value 'yuv420p'
    [auto_scaler_0 @ 0000000003c8b0c0] Setting 'flags' to value 'bicubic'
    [auto_scaler_0 @ 0000000003c8b0c0] w:iw h:ih flags:'bicubic' interl:0
    [format @ 0000000003c8ac80] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_null_0' and the filter 'format'
    [AVFilterGraph @ 0000000003c4dc80] query_formats: 4 queried, 2 merged, 1 already done, 0 delayed
    [auto_scaler_0 @ 0000000003c8b0c0] w:720 h:576 fmt:yuv422p sar:0/1 -> w:720 h:576 fmt:yuv420p sar:0/1 flags:0x4
    [avi @ 0000000002e205c0] reserve_index_space:0 master_index_max_size:256
    [avi @ 0000000002e205c0] duration_est:36000.000, filesize_est:0.9GiB, master_index_max_size:256
    Output #0, avi, to '66_tff.avi':
      Metadata:
        ISFT            : Lavf58.3.100
        Stream #0:0, 0, 1/25: Video: utvideo (ULY0 / 0x30594C55), yuv420p(pc), 720x576, q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc (default)
        Metadata:
          DURATION        : 00:00:06.280000000
          encoder         : Lavc58.9.100 utvideo
    Clipping frame in rate conversion by 0.000008
    cur_dts is invalid (this is harmless if it occurs once at the start per stream)
    frame=   26 fps=0.0 q=-0.0 size=    3590kB time=00:00:01.00 bitrate=29405.7kbits/s speed=1.99x    
    frame=   53 fps= 53 q=-0.0 size=    7174kB time=00:00:02.04 bitrate=28806.8kbits/s speed=2.03x    
    frame=   83 fps= 55 q=-0.0 size=   11014kB time=00:00:03.24 bitrate=27846.6kbits/s speed=2.15x    
    frame=  112 fps= 56 q=-0.0 size=   14854kB time=00:00:04.40 bitrate=27654.6kbits/s speed=2.19x    
    frame=  142 fps= 57 q=-0.0 size=   18694kB time=00:00:05.60 bitrate=27346.0kbits/s speed=2.23x    
    [out_0_0 @ 0000000003c8ab80] EOF on sink link out_0_0:default.
    No more output streams to write to, finishing.
    frame=  157 fps= 56 q=-0.0 Lsize=   21139kB time=00:00:06.28 bitrate=27574.7kbits/s speed=2.24x    
    video:21130kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.043768%
    Input file #0 (capture7_ffv1.mkv):
      Input stream #0:0 (video): 157 packets read (11788302 bytes); 157 frames decoded; 
      Total: 157 packets (11788302 bytes) demuxed
    Output file #0 (66_tff.avi):
      Output stream #0:0 (video): 157 frames encoded; 157 packets muxed (21636692 bytes); 
      Total: 157 packets (21636692 bytes) muxed
    157 frames successfully decoded, 0 decoding errors
    [AVIOContext @ 00000000004de780] Statistics: 8 seeks, 90 writeouts
    [AVIOContext @ 00000000004e5480] Statistics: 11790765 bytes read, 0 seeks
    Please provide your command line and, better, the same detailed report (add the -report option to your ffmpeg command line, i.e. ffmpeg -report ...).
  28. Sorry, my mistake - I typed it wrong.

    The report file is 18 MB; my cell phone doesn't allow me to upload it (BB9720).

    The name of the file is ffmpeg-20180223-135624.
  29. Originally Posted by pandy View Post

    What a bizarre idea... - are you sure that Blender is incapable to work with interlaced content in case where it is used for TV broadcast production? IMHO even old POVray support properly interlace and it is hard to understand lack of support for interlaced content on such huge beast like Blender.
    I only took a quick look, but it seems that way . There might be some hidden switches or "god mode" button I couldn't find. I tested with a native camcorder file with proper PAFF.

    You can "deinterlace" and that will solve some of the problems, but that's not really suitable for broadcast or interlace workflow. If you're going that way, you might as well use a better deinterlace workflow like QTGMC first. But editing will be worse performance wise for multiple 50p layers than 25i.

    It doesn't surprise me, because the VSE is really an after thought. It's meant for re-arranging the progressive sequences from the 3D renders. Almost nobody uses interlace with blender.

    It works in RGB internally. Which also makes sense. Because it's primarily for CG and compositing.

    I'll take another look, but the CUE (chroma upsampling errors) and lack of interlace presets already tell me Blender is not geared for proper interlace handling.

    btw i will only mention that using container for interlace/progressive signalization is also quite bizarre - interlace must be supported by codec at elementary stream level not by container.
    Yes, I'm just saying UT Video codec doesn't convey this information. Or at least other programs do not pick it up or distinguish it.
    Last edited by poisondeathray; 23rd Mar 2018 at 10:06.
  30. Originally Posted by poisondeathray View Post
    I only took a quick look, but it seems that way . There might be some hidden switches or "god mode" button I couldn't find. I tested with a native camcorder file with proper PAFF.
    For sure Blender supports interlaced rendering (I found info that the 'Fields' option needs to be activated (F10 key)); however, this is the render part - I'm not sure about the NLE part, but I would assume that if it can generate interlaced output then it must also be able to perform NLE on such content.

    Originally Posted by poisondeathray View Post
    Yes, I'm just saying UT Video codec doesn't convey this information. Or at least other programs do not pick it up or distinguish it.
    At this point I'm seriously confused, as the UT Video developer claims that interlaced content is supported from rev 6, and it seems it is now rev 19.1...