VideoHelp Forum
Results 1 to 18 of 18
  1. Member
    Join Date: Dec 2017
    Location: United Kingdom
    I'm having some trouble faithfully preserving interlaced fields to match the order and scan type of the original analog recording. My content is pure analog interlaced video with no telecine involved; it's derived from an NTSC source. However, when I come to encode the content using ffmpeg as an FFV1/MKV file, the interlaced fields are stored as interleaved fields, which, as you may be aware, encodes static scenes as progressive and motion as interlaced, and I encounter some motion blurring. Is it possible to encode as separate fields as opposed to interleaved fields? What is the best codec in ffmpeg for doing so, in order to achieve 10-bit 4:2:2 lossless video for archiving with true interlacing?
  2. Not sure if I understand your question correctly but x265 seems to store interlaced footage as separated fields:
    --interlace <false|tff|bff>
    0. progressive pictures (default)
    1. top field first
    2. bottom field first
    HEVC encodes interlaced content as fields. Fields must be provided to the encoder in the correct temporal order. The source dimensions must be field dimensions and the FPS must be in units of fields per second. The decoder must re-combine the fields in their correct orientation for display.
    https://x265.readthedocs.io/en/master/cli.html#input-output-file-options
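    To make the "field dimensions / fields per second" point above concrete, here's a tiny worked example (the 720x480 frame size is just an assumed NTSC-style capture geometry):

    ```shell
    # For separated fields, the picture height halves and the rate doubles.
    # Assumed NTSC-style frame: 720x480 at 30000/1001 fps.
    FRAME_W=720; FRAME_H=480
    FPS_NUM=30000; FPS_DEN=1001
    FIELD_H=$((FRAME_H / 2))      # 240 scanlines per field
    FIELD_NUM=$((FPS_NUM * 2))    # 60000/1001 fields per second
    echo "${FRAME_W}x${FIELD_H} @ ${FIELD_NUM}/${FPS_DEN} fields/s"
    ```

    So x265 would need to be fed 720x240 pictures at 60000/1001 per second for this kind of source.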
  3. Member
    Thank you - I understand x265 is lossy. I was looking for a lossless codec that does this and is supported in ffmpeg, or VirtualDub for that matter.
  4. x265 can be lossless with the lossless switch enabled. But its "interlace" mode is not very compatible, and separate fields are not very compatible either.

    Why would there be motion blurring? FFV1 is a lossless codec - the output is bit-for-bit identical to the input.

    Perhaps it's a decoding/playback mishandling issue? A static scene is just 2 fields displayed at the same time; you get full progressive resolution when the fields are weaved. If your playback/decoding method deinterlaces, it degrades the scene.

    If you want to encode separate fields, you can use -vf separatefields on the command line. But most hardware and software are not designed to handle separate fields as input.

    https://ffmpeg.org/ffmpeg-filters.html#separatefields-1
    Last edited by poisondeathray; 29th Aug 2021 at 10:22.
  5. IMHO x264 in lossless mode seems to be a standardized, future-proof lossless option with interlace support. Of course there are plenty of lossless codecs, but most if not all of them suffer from the same issue: they are not standardized, and their future depends entirely on the author's/owner's will.
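    For what it's worth, a minimal sketch of lossless interlaced x264 through ffmpeg might look like this (filenames are placeholders, and this assumes an ffmpeg build with 10-bit libx264; not a tested recipe):

    ```shell
    # Sketch only: lossless (-qp 0) x264 with interlaced coding tools
    # (+ildct +ilme) and TFF signalling; input/output names are assumptions.
    ffmpeg -i input.mkv \
      -c:v libx264 -qp 0 -pix_fmt yuv422p10le \
      -flags +ildct+ilme -x264opts tff=1 \
      -c:a pcm_s16le output.mkv
    ```

    Note that lossless 10-bit 4:2:2 H.264 lands in a high profile that many players handle poorly, so check compatibility before committing an archive to it.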
  6. Member
    Can someone tell me what command I would use in ffmpeg if I wanted to encode FFV1 with separate field interlacing and 10 bit 422 video, PCM 16 bit audio?
    Field order is Top Field First and from NTSC analog source
    Last edited by cjdavis83; 30th Aug 2021 at 11:13.
  7. Originally Posted by cjdavis83 View Post
    Can someone tell me what command I would use in ffmpeg if I wanted to encode FFV1 with separate field interlacing and 10 bit 422 video, PCM 16 bit audio?
    Field order is Top Field First and from NTSC analog source

    What is your input file? Or how are you doing the analog-to-digital conversion? Is it 10-bit 4:2:2? Otherwise you might have to use slightly different scaling flags.

    Separate fields means each field is encoded separately, so for a TFF stream: top field, bottom field, top, bottom, etc., as subsequent pictures that are 1/2 height (because they only have 1/2 the scanlines).

    i.e. the even scanlines as one field encoded as a separate picture, the odd scanlines as the next picture, even, odd...

    Code:
    ffmpeg -i input.ext -vf setfield=tff,separatefields,format=yuv422p10le -c:v ffv1 -c:a pcm_s16le -aspect 4/3 output.mkv

    FFV1 has other options you may want to use for archival purposes, like -slicecrc (per-slice CRCs for error detection) and intra-only coding instead of a long GOP (worse compression, but each frame is encoded independently).
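    Putting those options together with the command above (filenames are placeholders), an archival-leaning variant might look like this sketch:

    ```shell
    # Sketch: FFV1 version 3 (-level 3), per-slice CRCs (-slicecrc 1) and
    # intra-only coding (-g 1); input/output names are assumptions.
    ffmpeg -i input.ext \
      -vf setfield=tff,separatefields,format=yuv422p10le \
      -c:v ffv1 -level 3 -slicecrc 1 -g 1 \
      -c:a pcm_s16le -aspect 4/3 output.mkv
    ```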
  8. Member
    Thank you - do I not need to specify the NTSC framerate? It is from the Domesday86 project; it is being encoded from a .tbc file.
  9. Member
    Thank you @poisondeathray

    Here is the code that I used previously;

    ffmpeg -i - -f s16le -ar 44100 -ac 2 -i "/mnt/Summer-1.efm.pcm" -vcodec ffv1 -level 3 -framerate ntsc -pix_fmt yuv422p10 -vf hue=h=-20:s=1.15 -aspect 4:3 -acodec wavpack /mnt/Summer-1.mkv

    so how would I implement in the code:

    ffmpeg -i input.ext -vf setfield=tff,separatefields,format=yuv422p10le -c:v ffv1 -c:a pcm_s16le -aspect 4/3 output.mkv
    Last edited by cjdavis83; 30th Aug 2021 at 11:37.
  10. I don't know what the Domesday86 project is, or what the .tbc input is, or what format it is. If I have time I'll look it up.

    It looks like you're piping in something with -i -

    You don't have to specify a framerate if the input pipe conveys that info. But I don't know what you're sending. It doesn't hurt to specify -r if it's the same; but if it's different than what is sent, you can get either dropped or duplicated frames.

    If it's a rawvideo pipe such as -f rawvideo, then you have to specify framerate, resolution, pixel format of the pipe

    If it's not 4:2:2 already in your old command line, and you use -pix_fmt yuv422p10le, any chroma scaling is going to be done in a progressive manner. If it's sent as interleaved fields (like 99.99% of interlaced content is), you're going to get chroma scaling artifacts.
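    If you do keep interleaved fields, a sketch of forcing interlaced-aware chroma scaling instead (filenames are placeholders) would be:

    ```shell
    # Sketch: tag the stream TFF and let scale treat it as interlaced
    # (interl=1) so chroma is resampled per-field, not progressively.
    ffmpeg -i input.ext \
      -vf setfield=tff,scale=interl=1,format=yuv422p10le \
      -c:v ffv1 output.mkv
    ```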
  11. Member
    This is it:

    https://fileinfo.com/extension/tbc

    Yeah, I do not want interleaved fields, as I have noticed motion trailing and chroma artifacts too; the ffmpeg encodes defaulted to interleaved fields, but I do NOT want to use this. So, basically, are you saying I can just add that code anywhere to modify the original command I posted? My source is pure analog NTSC video - not film or telecined. It is natively 29.97 fps.
  12. Originally Posted by cjdavis83 View Post
    This is it:

    https://fileinfo.com/extension/tbc

    Yeah, I do not want interleaved fields as I have noticed motion trailing and chroma artifacts too as the ffmpeg encodes were defaulted to interleaved fields but I do NOT want to use this. So, basically are you saying I can just add the code in anywhere to modify the original that i posted? My source is pure analog, NTSC video shot - not film or telecined. It is natively 29.97fps.
    I don't know what to do, because I don't know what you have exactly...

    You're assuming that interleaved fields are the cause; but it might just be that you're scaling the input signal progressively - that will cause blended chroma artifacts if the input signal was not 4:2:2. There is no issue with interleaved fields if it's done properly.

    For example, they can use ld-chroma-decoder to produce a raw RGB bitstream from a TBC file.
    If it's a raw RGB bitstream, then you must perform a correct RGB-to-YUV conversion. 4:2:2 is subsampled YCbCr, so that can cause those artifacts if not done properly. How is the bitstream stored? Is it interleaved fields?
  13. Member
    Originally Posted by poisondeathray View Post
    If it's a raw RGB bitstream, then you must perform a correct RGB to YUV conversion. 4:2:2 is subsampled YCbCr so that can cause those artifacts if not done properly. How is the bitstream stored? Is it interleaved fields?

    It is coming from a raw video source yes
  14. Not sure - there are way too many questions and unknowns here for me to answer.


    If it's a raw RGB 8-bit video pipe into ffmpeg, you should have something like this for the video pipe. A raw pipe conveys no characteristics such as pixel format, framerate, or dimensions, so you have to specify those:

    ffmpeg -f rawvideo -s (width)x(height) -pix_fmt rgb24 -r 30000/1001 -i -

    But there are different types of "RGB" configurations, e.g. gbrp (planar), bgr24, 16-bit RGB, etc... which one is it sending?



    Assuming your "old" command line works, just try adding -vf setfield=tff,separatefields,format=yuv422p10le . In theory, because the fields are separated, you can scale progressively without errors

    If you still get problems and artifacts, first check your playback/viewing method. If the playback method converts yuv422p10le to RGB for display incorrectly, you also get artifacts.

    If you're sure it's correct, then instead use -vf setfield=tff,separatefields,scale=interl=1,format=yuv422p10le
  15. Originally Posted by cjdavis83 View Post
    This is it:

    https://fileinfo.com/extension/tbc

    Yeah, I do not want interleaved fields as I have noticed motion trailing and chroma artifacts too as the ffmpeg encodes were defaulted to interleaved fields but I do NOT want to use this. So, basically are you saying I can just add the code in anywhere to modify the original that i posted? My source is pure analog, NTSC video shot - not film or telecined. It is natively 29.97fps.
    The tbc extension is for RAW sampled data - there is no video there AFAIK, but instead an FM-modulated video RF signal with relatively limited bandwidth (a few MHz). Use the same approach as others dealing with tbc-like files - one of the lossless audio encoders should work.
    Explanation:
    Video has a 2-D structure, whereas a sampled RF signal is 1-D - most video encoders are designed to deal with 2-D data (or even 3-D, where the third dimension is time, i.e. a successive sequence of 2-D pictures).
    To get video you need to demodulate the tbc files, and as the overall demodulation process is still experimental, it may be sane to keep the RAW sampled RF baseband, i.e. the tbc file, compressed losslessly. Btw, IMHO the current way of RF signal sampling is still rather crude, so your tbc files are probably still a suboptimal source from an HQ perspective.
  16. Member
    So, using the commands I've been given here this is the video output I got.

    I'm very confused though 😕 😐 🤔

    https://drive.google.com/file/d/1VWTHnpnFsoZVgUToG9XcwCZyqJgKSVdJ/view?usp=sharing

    Should it not be interlaced when separate fields is selected?
    Image attached: Screenshot_20210831-230815_MediaInfo.jpg

  17. Originally Posted by cjdavis83 View Post

    Should it not be interlaced when separate fields is selected?
    No, it should be progressive, because you have separated fields. Each field is an individual 1/2-height picture. Separated fields just means displaying each individual field sequentially (each group of scan lines individually: even, odd, even, odd...). Interlaced scan is used when fields are interleaved.
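    One way to sanity-check this (the output filename is assumed) is to inspect the stream with ffprobe - a separated-fields encode should report half-height frames, a doubled frame rate, and progressive field order:

    ```shell
    # Sketch: inspect geometry, rate and field order of the encoded file.
    ffprobe -v error -select_streams v:0 \
      -show_entries stream=width,height,r_frame_rate,field_order \
      -of default=noprint_wrappers=1 output.mkv
    ```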
  18. According to the "Digitizing Video for Long-Term Preservation" guide:

    Master file: video: uncompressed 10-bit YUV 4:2:2; audio: 24-bit PCM; wrapped in MKV or MOV
    Mezzanine: video: DV50; audio: 16-bit PCM; wrapped in MOV
    Access file: de-interlaced MPEG-2 at 7 Mbps, or Windows Media at 700 kbps / 16-bit AAC audio


