VideoHelp Forum
  1. I have thousands of hours of footage recorded in the early days of HDV on cameras such as the Sony HVR-V1U & HVR-Z5U (usually with the HVR-MRC1 solid-state recorder installed), and I'm interested in converting the best of it to deinterlaced footage. I have been following Andrew Swan's tutorials on this subject (https://macilatthefront.blogspot.com/2018/10/64-bit-avisynth-and-qtgmc-update-and.html) and I'm really impressed with the results.

    As a complete noob to script-based video editing (I'm very familiar with Premiere & Resolve GUI editing), I'm looking for some (a lot?) of help on two topics:

    1) Andrew only works with ProRes on the output side (which does not fit my workflow), so all his examples are geared to that. I'd like to output the video back to either something very close to what the camera produced initially, or perhaps some flavor of x264 (recommendations? My computer does not struggle at all with editing x264 files in Premiere, etc.). I'm having a lot of trouble coming up with an ffmpeg output command line that would get me where I need to be - see the MediaInfo output from an original camera file below.

    2) Since in some ways we are doubling the frame rate with this process, would it be appropriate to increase the bit rate as well for the saved footage? I know that most cameras increase bitrate as the frame rate increases, so this seems rational. Also, is there more data to save since the original is only 1440x1080 non-square pixels, and after the resize to fix the aspect ratio there are 1920x1080 square pixels? Again, this seems rational, but neither answer is self-evident, or at least not to me.
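
    My rough back-of-the-envelope math (please check it): the source carries 1440 x 1080 x 29.97 ≈ 46.6 million pixels per second, while the deinterlaced and resized output is 1920 x 1080 x 59.94 ≈ 124.3 million pixels per second, roughly 2.7x the raw pixel rate. On the other hand, the interpolated frames and the upscale don't add any detail that wasn't in the camera file, so I assume the bitrate doesn't need to scale by the full 2.7x to hold similar quality - but that's exactly the kind of thing I'm hoping someone can confirm.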

    My AviSynth+ script so far (mostly from Andrew's tutorial; the computer is a 16-core AMD Threadripper):

    SetFilterMTMode("QTGMC", 2)                              # run QTGMC in MT mode 2
    FFmpegSource2("D:\Videos\sample_194456.M2T", atrack=1)   # load video plus audio track 1
    ConvertToYV12()
    AssumeTFF()                                              # HDV is top field first
    QTGMC(Preset="slower", EdiThreads=8)                     # double-rate deinterlace to 59.94p
    BilinearResize(1920,1080)                                # 1440x1080 anamorphic -> 1920x1080 square pixels
    Prefetch(threads=30)

    Batch file to run it:

    ffmpeg64 -i "Test-deint-1.avs" -c:v <---------- what should the command be to get something like the camera original, and should the bit rate go up for 60 fps?
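
    The closest I've come to an answer is something like the line below, where -c:v mpeg2video is meant to stay close to the camera original and the 35M bitrate is a pure guess on my part (which is really the heart of question 2):

    ffmpeg64 -i "Test-deint-1.avs" -c:v mpeg2video -b:v 35M -c:a mp2 -b:a 384k "Test-deint-1-60p.m2t"

    I have no idea if those are sensible numbers, or whether I'd be better off in x264 territory entirely.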

    Here's the mediainfo from an original camera file:

    General
    ID : 255 (0xFF)
    Complete name : D:\Videos\sample_194456.M2T
    Format : MPEG-TS
    Commercial name : HDV 1080i
    File size : 162 MiB
    Duration : 52 s 419 ms
    Start time : UTC 2012-08-05 19:44:56
    End time : UTC 2012-08-05 19:45:48
    Overall bit rate mode : Variable
    Overall bit rate : 25.9 Mb/s
    Maximum Overall bit rate : 33.0 Mb/s
    Encoded date : UTC 2012-08-05 19:44:56

    Video
    ID : 2064 (0x810)
    Menu ID : 100 (0x64)
    Format : MPEG Video
    Commercial name : HDV 1080i
    Format version : Version 2
    Format profile : Main@High 1440
    Format settings : CustomMatrix / BVOP
    Format settings, BVOP : Yes
    Format settings, Matrix : Custom
    Format settings, GOP : M=3, N=15
    Format settings, picture structure : Frame
    Codec ID : 2
    Duration : 52 s 52 ms
    Bit rate mode : Constant
    Bit rate : 24.3 Mb/s
    Maximum bit rate : 25.0 Mb/s
    Width : 1 440 pixels
    Height : 1 080 pixels
    Display aspect ratio : 16:9
    Frame rate : 29.970 (30000/1001) FPS
    Standard : Component
    Color space : YUV
    Chroma subsampling : 4:2:0
    Bit depth : 8 bits
    Scan type : Interlaced
    Scan order : Top Field First
    Compression mode : Lossy
    Bits/(Pixel*Frame) : 0.521
    Stream size : 151 MiB (93%)
    Color primaries : BT.709
    Transfer characteristics : BT.709
    Matrix coefficients : BT.709

    Audio
    ID : 2068 (0x814)
    Menu ID : 100 (0x64)
    Format : MPEG Audio
    Format version : Version 1
    Format profile : Layer 2
    Codec ID : 3
    Duration : 51 s 768 ms
    Bit rate mode : Constant
    Bit rate : 384 kb/s
    Channel(s) : 2 channels
    Sampling rate : 48.0 kHz
    Frame rate : 41.667 FPS (1152 SPF)
    Compression mode : Lossy
    Delay relative to video : 13 ms
    Stream size : 2.37 MiB (1%)

    Menu
    ID : 129 (0x81)
    Menu ID : 100 (0x64)
    List : 2064 (0x810) (MPEG Video) / 2068 (0x814) (MPEG Audio) / 2069 (0x815) () / 2065 (0x811) ()
  2. Please clarify something: what is the purpose of this conversion?

    If you are preparing material to be edited in Premiere, leave the HDV alone and use it as-is.
    If you are making files to watch or share or upload, that's a different matter.

    H.264 is far more efficient than MPEG-2 (HDV) for encoding, so you can get away with a lower bitrate than the original and still maintain high quality. Indeed, the 25 Mb/s rate of HDV is overkill for a 1080 H.264 video intended for non-professional distribution.
    Last edited by smrpix; 13th Feb 2020 at 07:00.
  3. Several reasons...

    I'm interested in playing around with Topaz's Gigapixel AI for Windows beta (https://videoai.topazlabs.com/beta). This beta only works with deinterlaced footage, and it's quite picky about the format. I made a test file using Andrew's ProRes example, and the Topaz software totally failed on it (it claimed to be processing for a considerable time using only 4 of 16 cores, then produced a tiny, unplayable file). I'm sorry if this turns into a TL;DR, but it's complex. As I said, I'm quite familiar with using GUI editors, and even with working with interlaced footage in them.

    My interest in Gigapixel stems from the fact that the cameras mentioned above had a nifty, very early high-frame-rate mode that recorded 6 seconds at 240i to an internal buffer, then wrote that buffer out to a file that most people would call slow-motion footage these days. Unfortunately, since this was very early in the HD era, the resolution of those files was low (Sony never published a number, but probably something like 480x360 or less, non-square pixels in a 16:9 aspect ratio), upscaled (poorly) by the camera to 1440x1080i. This entire process was automatic and non-configurable, so please don't ask why it was done that way... I don't know; only some (probably retired) Sony engineer in Japan could possibly answer that. I'd love to see if Gigapixel can do anything with those slow-mo clips in terms of improving them, even just a tiny bit.

    Most of the footage I'm working with involves sports and quick movement. Premiere's handling of HDV interlaced footage has always been a little underwhelming, in my opinion. So I ran some tests on a sample ProRes file I created using Andrew's tutorial, and I was pleased to see that the edited footage can be exported at 60p (59.94p) and looks noticeably better than just ingesting the native 29.97i footage and working with that. An added bonus is that the deinterlaced file responds well to Optical Flow time remapping (slow motion), and a 50-60% slowdown still looks great, without excessive artifacting. The same cannot be said when working with the native camera 29.97i footage. However, I'd still like the option of plugging some of these files into Gigapixel AI, and that can't happen if they are ProRes.

    So those are my reasons for wanting to create some deinterlaced masters, either in MPEG-2 similar to the camera master or in x264/H.264. Thinking further, perhaps x264/H.264 would be more appropriate if I need to increase the bit rate to accommodate more pixels (1440 -> 1920) and more frames per second (29.97i -> 59.94p) yet still maintain quality similar to the original?
  4. Thank you for explaining.

    In this situation, using ProRes as an intermediate actually seems like a pretty good idea. I'm not sure why your software wasn't reading it. (Topaz Labs call for a pretty hefty video card; I'm sure you've read the specs.)

    You might want to try using a lossless codec like Lagarith or Ut Video as an intermediary. (I've mostly stuck with QuickTime Animation, DNxHD or ProRes, so I can't speak well to the others - some here swear by them.)

    H.264 and its variants require more computing power to decode, so a larger intermediate file may work more quickly.
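
    I haven't used Ut Video myself, so treat this as a sketch rather than a tested recipe, but I believe ffmpeg can write a lossless Ut Video AVI straight from your AviSynth script with something along these lines:
    Code:
    ffmpeg64 -i "Test-deint-1.avs" -c:v utvideo -c:a pcm_s16le "Test-deint-1-lossless.avi"
    Expect the file to be huge compared to the HDV original, but it is cheap to decode in an NLE (whether the Topaz beta will accept it is another question).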
  5. Brad (formerly 'vaporeon800'):
    Wouldn't you actually want to "descale" the badly-upsized footage after deinterlacing, rather than upsizing it even more prior to AI?

    That's the way anime encoders get sharper upscales than official releases. Not sure how good this link is as an explanation; it's just one of the top Google results: https://iamscum.wordpress.com/_test1/_test2/resizing/
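
    If you go down that road, a minimal VapourSynth sketch would look something like the below. It assumes the ffms2 and descale plugins are installed, the file name is a placeholder, and the 960x540 target is purely a guess - you'd normally probe for the real native resolution (e.g. with getnative) first:
    Code:
    import vapoursynth as vs
    from vapoursynth import core

    # load the already-deinterlaced clip (path and source filter are placeholders)
    clip = core.ffms2.Source(r'D:/Videos/slowmo-deinterlaced.mp4')

    # descale is happiest with 32-bit float; work on the luma plane only here
    gray = core.std.ShufflePlanes(clip, planes=0, colorfamily=vs.GRAY)
    gray = core.resize.Point(gray, format=vs.GRAYS)

    # reverse the camera's (assumed) bilinear upscale back to a guessed native size
    native = core.descale.Debilinear(gray, 960, 540)
    native.set_output()
    Whether the in-camera upscale really was bilinear is another guess; if the result looks ringy or soft, try a different kernel or target size.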
  6. aedipuss:
    don't know who or what "andrew" is, but i'd dismiss it as trash. capture all the HDV in its native format, then edit as-is to the final format. it ain't rocket science.
  7. Member (Victoria, AU):
    Originally Posted by aedipuss
    don't know who or what "andrew" is, but i'd dismiss it as trash. capture all the HDV in its native format, then edit as-is to the final format. it ain't rocket science.
    Why put down something if you don't even know about it? I would never have got as far as I have with deinterlacing without "Andrew's" blog and YouTube videos.

    Some of us want to deinterlace source videos to make them look as good as they can for viewing on progressive displays, simple as that.

    Or perhaps you were trying to convey something else and I have misinterpreted your post?

    Cheers.
  8. aedipuss:
    work with the source as much as possible. you either have 60i 1440x1080 or 30p 1440x1080 HDV, for the most part, unless a 720p60 jvc cam was used. the 60i converts nicely to 30p, and the HDV 30p already displays as 1920x1080p30 in square pixels once the 1.33 pixel aspect ratio is applied. i gave up on premiere pro long ago and have used vegas pro since about v6.
  9. ConvertToYV12()
    AssumeTFF()
    Some options:
    Your video is already YUV 4:2:0, so there's no need for that ConvertToYV12().

    While making ProRes you don't need to resize at all, which saves resources - just flag the video in your editor as anamorphic (1.333 pixel aspect ratio). I think Premiere can do the resize just fine; the clip is already progressive (hopefully 60p, not 30p) coming out of that AviSynth script, and Premiere shouldn't make it look any worse than AviSynth would using BilinearResize(), which only resizes horizontally anyway.

    I use HDV a lot too, from old tapes, but with VapourSynth, where QTGMC plus transcoding the script to ProRes keeps the CPU at 100% with all 4 threads busy - so that might be an option for the QTGMC deinterlacing as well:
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    from havsfunc import QTGMC

    # d2v index of the captured HDV .M2T (made with D2V Witch / DGIndex)
    d2v_file = r'G:/HDV TAPE 68/tape 68-2013_11_16-21_24_07.d2v'
    clip = core.d2v.Source(d2v_file)
    clip = core.std.SetFrameProp(clip, prop="_FieldBased", intval=2)  # mark as top field first
    clip = core.resize.Point(clip, format=vs.YUV420P8, matrix_s="709")
    clip = QTGMC(clip, Preset='Fast', TFF=True)  # double-rate deinterlace
    clip.set_output()
    Encoding it to ProRes:
    Code:
    vspipe.exe --y4m script.vpy - | ffmpeg.exe -f yuv4mpegpipe -i - -i "G:/HDV TAPE 68/tape 68-2013_11_16-21_24_07.M2T" -map 0:v -map 1:1 -c:v prores -c:a pcm_s24le -y "out.mov"
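    And if you want x264 instead of ProRes for the Topaz experiments, I think only the encoder part of that pipe needs to change, something like the line below (the CRF value is just a starting point, not tested against your footage):
    Code:
    vspipe.exe --y4m script.vpy - | ffmpeg.exe -f yuv4mpegpipe -i - -i "G:/HDV TAPE 68/tape 68-2013_11_16-21_24_07.M2T" -map 0:v -map 1:1 -c:v libx264 -preset slow -crf 16 -c:a aac -b:a 192k -y "out.mp4"
    CRF mode also sidesteps the bitrate question a bit: you pick a quality level and the encoder spends whatever bitrate the 59.94p material needs.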


