VideoHelp Forum
  1. Hi everyone,

    As a continuation of the previous discussion we had, I dived into AviSynth and QTGMC. To sum up the other thread for people who weren't involved in it: I'm already done with capturing. I have around 200 video files (PAL 720x576, 25 fps, lossless HuffYUV, with the overscan masked). Around 150 are family footage taken with the same camcorder, and the other 50 videos vary from TV shows to cartoons.

    I plan to keep the family footage in lossless interlaced form for archiving, so I will own two copies: lossless, and lossy post-QTGMC. The other 50 videos are not that important, and for those I plan to keep only the lossy QTGMC file.

    I have AviSynth+ with all the 32-bit plugins needed for QTGMC, and I'm also using AvsPmod. I followed the excellent blog post by Andrew Swan; however, I'm left with a couple of questions. Andrew uses FFmpegSource2 to load the video file, but I assume that's because of the format his file is in. I just use AviSource for HuffYUV. Here's the script:

    Code:
    SetFilterMTMODE("QTGMC", 2)
    AviSource("D:\Copy.avi")
    AssumeTFF()
    QTGMC(Preset="Slower")
    Prefetch(2)
    I'm using AssumeTFF() because otherwise the video movement was wobbly (which I guess also means the default, when not stated, is BFF). It's an old PC with 4 cores, so I used Prefetch(2). Here are my questions:

    1. Andrew states that ConvertToYV12() is needed by QTGMC. Is that still the case? I was able to run the script without it, there was no mention of ConvertToYV12() in the QTGMC documentation, and I couldn't tell a difference in the preview window when using it.

    2. Andrew resizes his video using BilinearResize(720,540) to fix the aspect ratio caused by the pixel aspect ratio (his original size was 720x480). Doesn't that hurt the final quality because everything gets stretched a bit? Or is it very minimal and worth having the proper aspect ratio? I'm assuming 720x540 is the proper aspect ratio for NTSC. What would be the proper one for PAL?

    3. Based on reading I've done, I know cropping is normally bad. I mask all the overscan with black bars using VirtualDub, retaining the original resolution. I don't mask more than 20 pixels on each side (X) and 24 top/bottom (Y); I assume that's the limit because that's what was actually hidden by old CRT tubes. However, Andrew crops the black bars and resizes back to the original resolution with Spline64Resize(), which also does some sharpening in the process. Is that a common workflow when saving files meant for viewing? Or does that resizing usually hurt quality a bit too much, and it's better just to watch with the black bars around?

    4. Andrew uses FFmpeg to save the videos. However, it was suggested before to use x264/x265, as you get better quality per bitrate. I assume most devices can read x264. Is the difference in quality worth using x265, or not really? Also, lordsmurf suggested using Hybrid (I'm assuming it works with .avs scripts; I haven't tried it yet). I normally just set a constant quality of RF 22 and leave everything at defaults. But I wonder whether I should toggle other options, and whether the RF number should perhaps be smaller (as I can only do this once for the cartoons, for example, since I won't be keeping their lossless files).

    5. It was mentioned in the previous thread that going for 50 fps (on PAL) is not always best; sometimes the video will end up looking funky and it's better to leave it at 25 fps (using SelectEven()). But I wonder if I can make assumptions for videos taken from the same SOURCE (same camcorder). If a single video from the camcorder (assuming my father didn't fiddle with the default camera settings) uses TFF, looks better at 50 fps, and has a chroma offset of -4 down that gets corrected in AviSynth - can I apply the same settings to ALL the videos taken from the same source? If so, maybe instead of using Hybrid I will write a quick command-line script that uses the x264 CLI and applies the same settings to all 150 videos.

    6. Is there anything else I should add to the base QTGMC script above to make a good baseline for all the videos?

    Thanks again everyone!
    Last edited by Okiba; 29th Sep 2020 at 05:53.
  2. lordsmurf: I rarely use Slower. It blurs.
  3. Originally Posted by Okiba
    1. Andrew states that ConvertToYV12() is needed by QTGMC. Is that still the case?
    QTGMC() used to work only with YV12. Newer versions work with YV12, YUY2, YV24, and maybe some others. And since the video is interlaced at that point, he should have used ConvertToYV12(interlaced=true). Doing that conversion incorrectly blurs the colors of the two fields together, which manifests as color ghosting in fast-moving shots.
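    For reference, a minimal sketch of that conversion in context (the file name is a placeholder, not Andrew's exact script):

    Code:
    AviSource("D:\Copy.avi")          # interlaced YUY2 capture
    ConvertToYV12(interlaced=true)    # converts each field separately so the two fields' chroma isn't blended
    AssumeTFF()
    QTGMC(Preset="Slower")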

    Originally Posted by Okiba
    2. Andrew resizes his video using BilinearResize(720,540) to fix the aspect ratio caused by the pixel aspect ratio (his original size was 720x480). Doesn't that hurt the final quality because everything gets stretched a bit? Or is it very minimal and worth having the proper aspect ratio?
    If you want your video to be displayed with the proper aspect ratio you need to resize to a frame size that matches the DAR of the video, or keep the original frame size and encode with SAR/DAR flags (so the player or TV resizes for you while playing). Note that nobody watches video on a 4:3 CRT anymore. Pretty much everything you watch will be upscaled to an HD display, typically 1280x720, 1920x1080, or 3840x2160 (and 4:3 material will typically be pillarboxed within those frame sizes). Every resize has the potential to introduce artifacts so it's best to resize as few times as possible. Using AR flags allows you to keep the original frame size and have the player/TV upscale to the final display size with a single resize. Unfortunately, some players/TVs will ignore the AR flags in MP4 or MKV files and display the video with the wrong aspect ratio. So it's safest to resize to a frame size that matches the aspect ratio of your source.

    Originally Posted by Okiba
    I'm assuming 720x540 is the proper aspect ratio for NTSC.
    Any 4:3 frame size is appropriate for 4:3 DAR video. 320x240, 640x480, 720x540, 960x720, 1440x1080, just to mention a few.

    Originally Posted by Okiba
    What would be the proper one for PAL?
    Any 4:3 frame size. More common for PAL are 384x288 and 768x576.


    Originally Posted by Okiba
    3. Based on reading I've done, I know cropping is normally bad.
    Cropping isn't necessarily bad. It depends on what you're doing and whether you do it correctly. For example, PAL DVD requires a 720x576 (or 704x576) frame at 25 fps, so you can't just crop away the black borders of a 720x576 source. You would have to follow up by adding borders back (which is fine, essentially the same as masking) or by resizing back to 720x576 (possibly introducing artifacts and distortions). But if you're not producing DVDs there may be no reason to restore the frame to 720x576.

    Originally Posted by Okiba
    I mask all the overscan with black bars using VirtualDub, retaining the original resolution.
    Since you're using AviSynth you can do it there. Use Crop() to crop then AddBorders() to restore the frame size with perfect black (or whatever color you want) borders.
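    A minimal sketch of that (the crop values are hypothetical - use whatever your overscan needs, keeping them even):

    Code:
    Crop(16, 8, -16, -8)        # cut the dirty overscan edges: left, top, -right, -bottom
    AddBorders(16, 8, 16, 8)    # restore the original 720x576 with clean black borders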

    Originally Posted by Okiba
    I don't mask more than 20 pixels on each side (X) and 24 top/bottom (Y); I assume that's the limit because that's what was actually hidden by old CRT tubes.
    The typical CRT hid much more than that. More like 5 percent at each edge. And the amount varied with the temperature, how long the TV has been on, etc.

    Originally Posted by Okiba
    However, Andrew crops the black bars and resizes back to the original resolution with Spline64Resize(), which also does some sharpening in the process. Is that a common workflow when saving files meant for viewing? Or does that resizing usually hurt quality a bit too much, and it's better just to watch with the black bars around?
    Again, it depends on what your final output is.

    Originally Posted by Okiba
    I assume most devices can read x264.
    Most modern devices, yes. Note that x264 is a particular encoder; the standard is h.264 (AKA AVC).

    Originally Posted by Okiba
    Is the difference in quality worth using x265?
    The most modern devices support h.265. The goal of h.265 was to produce the same quality as h.264 at half the bitrate. I don't think they've come anywhere near that goal, especially with SD video (a lot of the advances apply more to larger frames).

    Originally Posted by Okiba
    Also, lordsmurf suggested using Hybrid (I'm assuming it works with .avs scripts; I haven't tried it yet). I normally just set a constant quality of RF 22 and leave everything at defaults. But I wonder whether I should toggle other options, and whether the RF number should perhaps be smaller (as I can only do this once for the cartoons, for example, since I won't be keeping their lossless files).
    For the most part you should stick with the presets and tunings. For example preset "slow", tune "animation". Go with the slowest preset you can stand. I usually use CRF 18 and preset slow for SD material.

    Originally Posted by Okiba
    5. It was mentioned in the previous thread that going for 50 fps (on PAL) is not always best; sometimes the video will end up looking funky and it's better to leave it at 25 fps (using SelectEven()). But I wonder if I can make assumptions for videos taken from the same SOURCE (same camcorder). If a single video from the camcorder (assuming my father didn't fiddle with the default camera settings) uses TFF, looks better at 50 fps, and has a chroma offset of -4 down that gets corrected in AviSynth - can I apply the same settings to ALL the videos taken from the same source? If so, maybe instead of using Hybrid I will write a quick command-line script that uses the x264 CLI and applies the same settings to all 150 videos.
    For handheld camcorder video 50 fps with QTGMC will almost always look better. Again, your target format may limit your choices. 50 fps progressive isn't supported by DVD. You would probably want to reinterlace back to 25i. You can assume the basic properties of the video are the same when shot with the same analog camcorder. But always check.
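    If you do need 25i again (for DVD, say), the usual recipe after QTGMC's 50p output is a sketch like this (assuming a TFF target):

    Code:
    QTGMC(Preset="Slower")    # 50 fps progressive
    AssumeTFF()
    SeparateFields()          # 100 fields per second
    SelectEvery(4, 0, 3)      # keep the first and last field of each group of four
    Weave()                   # back to 25 fps interlaced, top field first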

    Note that field order is critical. An interlaced frame packs two images into one frame. One image is in all the even-numbered scanlines (0,2,4...), the other in all the odd-numbered scanlines (1,3,5...); these are called fields. The two images are displayed separately and sequentially (at 50 fields per second) on an interlaced PAL TV. The field order determines which of those two fields is displayed first. If you use the wrong field order the two fields will be displayed in the wrong temporal order. You will get a two-steps-forward-one-step-back jerky motion.
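    If you're ever unsure of the field order, one quick test is to view the fields individually and watch the motion - with the wrong assumption it steps back and forth:

    Code:
    AssumeTFF()         # try AssumeBFF() as well and compare
    SeparateFields()    # each field becomes a half-height frame; play it and watch the motion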
  4. Why are you doing this massive amount of work? Have you looked at 2-3 minutes of video that you've run through QTGMC and compared it to your original capture? Have you done a quick back-of-the-envelope calculation on how much time this will take for 200 videos? It has to be massive, even if each video is "only" 20-30 minutes.

    I think your time would be much better spent editing, adding titles (so later generations have some idea who they're looking at), doing gamma and color corrections, etc.

    Your time, your choice, but I'd sure do the comparison, and if the "after" doesn't knock my socks off compared to "before," I'd forget about it.
  5. Phew. I had to catch up on a lot of information with Google before replying.

    I rarely use Slower. It blurs.
    Do you mean the QTGMC preset, or x264? And what's your preferred option?

    QTGMC() used to work only with YV12
    Great, that's what I was missing. All the captures are YUY2, so I should be fine without any conversion.

    But if you're not producing DVDs there may be no reason to restore the frame to 720x576.
    I don't plan on producing DVDs. I plan to watch the content on a PC monitor, and sometimes stream to the living room widescreen TV using Kodi. A non-perfect 4:1 DAR video will hurt the proportions, won't it?

    So it's safest to resize to a frame size that matches the aspect ratio of your source.
    So I'm a bit confused here. When I use VirtualDub for capturing and set the standard to PAL, the output video is 720x576. That's an aspect ratio of 5:4. Does that mean the source (camcorder) aspect ratio is 5:4? Or is it actually 4:1 and there's some capturing trick going on?
    And let's say it's indeed 4:1 - why should I resize it to 768x576 and not 1920x1440? (both are 4:1)

    The typical CRT hid much more than that. More like 5 percent at each edge. And the amount varied with the temperature, how long the TV has been on, etc.
    Ermm, then I wonder why the guide set a limit of max 40 pixels on X and 24 on Y...

    For the most part you should stick with the presets and tunings
    So this should work?
    x264.exe qtgmc.avs --crf 18 --preset slow --output "results.mp4"
    If it's animation, I will add "--tune animation".

    You will get a two-steps-forward-one-step-back jerky motion
    Yes. That happened to me the first time I tried to figure out TFF/BFF.

    You can assume the basic properties of the video are the same when shot with the same analog camcorder. But always check.
    In that case, I will start by writing a quick shell script that randomly picks 10 videos and uses a generic AVS script that runs QTGMC at 50 fps, TFF, with the chroma offset fixed (C=-1, L=4) - and verify all 10 are fine. If so, I will use the same settings for the rest of the videos. Worst case, if I figure out later that something went funky in a couple of videos, I'm keeping the lossless format for the family footage anyway.
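    In case it helps future readers, a minimal Windows batch sketch of that idea might look like this (the paths and settings are hypothetical - adjust to your own setup):

    Code:
    @echo off
    rem For each capture, write a small per-file .avs and feed it to x264.
    for %%F in ("E:\captures\*.avi") do call :encode "%%~fF"
    goto :eof

    :encode
    > "%~dpn1.avs"  echo SetFilterMTMode("QTGMC", 2)
    >> "%~dpn1.avs" echo AviSource("%~1")
    >> "%~dpn1.avs" echo AssumeTFF()
    >> "%~dpn1.avs" echo QTGMC(Preset="Slower")
    >> "%~dpn1.avs" echo Prefetch(2)
    x264.exe "%~dpn1.avs" --crf 18 --preset slow --output "%~dpn1.mp4"
    goto :eof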

    Why are you doing this massive amount of work? Have you looked at 2-3 minutes of video that you've run through QTGMC and compared it to your original capture? Have you done a quick back-of-the-envelope calculation on how much time this will take for 200 videos? It has to be massive, even if each video is "only" 20-30 minutes.
    We talked about it in the original thread. Probably should have mentioned it here.
    Yes I have. To my eyes at least, there's quite a big improvement using QTGMC, mainly because it seems to do much more than just deinterlacing (I even peeked into the QTGMC.avs file to see what exactly happens in the background). The original time investment was figuring out how to capture properly and how to store the captured footage for archiving; I didn't plan to dive into post-capture processing. However, I had a couple of problematic videos - time-base issues with my capture setup - and a fix was suggested for those (about 10). That's where I learned about QTGMC and AviSynth. There was some initial setup and research time, and I'm still learning here now. But I can pretty easily write a small shell script that converts the 200 video files automatically based on the generic QTGMC script mentioned above. It's a dedicated machine, so I don't mind leaving it on and letting it do the encoding. Unless I'm missing some huge time sink I'm not aware of.

    I think your time would be much better spent editing, adding titles (so later generations have some idea who they're looking at), doing gamma and color corrections, etc.
    Because each video had 3-4 events on it, I had to split each capture into multiple files, so I did a bit of editing during that process. Adding titles is a good idea - I named the files based on the events, but a title would be cool indeed. Gamma and color correction is something I should indeed do at some point, but doing that for 200 videos sounds like too much for me. Now that all the tapes are captured and archived losslessly, and the family can enjoy the not-perfect encoded files, that's a good stopping point for me. When I have more time, I will learn how to adjust things like color and gamma, restore bad sections, etc. (but that's something I can do any day in the future - I wanted to get the capturing out of the way, as the setup is hard to come by and the tapes are getting old).

    Thanks!
  6. Originally Posted by Okiba
    Originally Posted by johnmeyer
    Why are you doing this massive amount of work? Have you looked at 2-3 minutes of video that you've run through QTGMC and compared it to your original capture? Have you done a quick back-of-the-envelope calculation on how much time this will take for 200 videos? It has to be massive, even if each video is "only" 20-30 minutes.
    We talked about it in the original thread. Probably should have mentioned it here.
    Yes I have. To my eyes at least, there's quite a big improvement using QTGMC, mainly because it seems to do much more than just deinterlacing (I even peeked into the QTGMC.avs file to see what exactly happens in the background). Thanks!
    I had forgotten about the original thread, and didn't realize that you had indeed done a before/after comparison. I apologize for wasting your time with my post. Since you can see a significant difference, your work is almost certainly worth doing.

    Yes, QTGMC also does denoising, and it is quite good at that. The deinterlacing is pretty much as good as it gets, so you don't degrade the video too much (deinterlacing always degrades the video).

    I think there might be much faster denoisers than QTGMC which, for VHS PAL captures, might do an equal or better job, but since you have QTGMC almost sorted out, you will probably be best served by sticking with that.
  7. Originally Posted by Okiba
    But if you're not producing DVDs there may be no reason to restore the frame to 720x576.
    I don't plan on producing DVDs. I plan to watch the content on a PC monitor, and sometimes stream to the living room widescreen TV using Kodi. A non-perfect 4:1 DAR video will hurt the proportions, won't it?
    Where does this "4:1" come from? You used it several times. I guess you mean 4:3. More below...

    Originally Posted by Okiba
    So it's safest to resize to a frame size that matches the aspect ratio of your source.
    So I'm a bit confused here. When I use VirtualDub for capturing and set the standard to PAL, the output video is 720x576. That's an aspect ratio of 5:4. Does that mean the source (camcorder) aspect ratio is 5:4? Or is it actually 4:1 and there's some capturing trick going on?
    And let's say it's indeed 4:1 - why should I resize it to 768x576 and not 1920x1440? (both are 4:1)
    Analog video doesn't have pixels; it's a continuous waveform. The way it's drawn on the face of an analog CRT produces a 4:3 aspect ratio picture (under optimal conditions). The ITU specifies that PAL video is captured as 704x576 or 720x576. I'm going to simplify here: the 704x576 frame has the 4:3 image (it's really 702.something, but 704 is generally considered close enough); the 720x576 frame has a little extra at the left and right in case the source or cap is slightly off center. The general equation that relates the display aspect ratio to the frame dimensions is:

    Code:
    DAR = FAR * SAR
    
    DAR = Display aspect ratio, the final shape of the picture that's viewed
    FAR = Frame Aspect Ratio (frame_width:frame_height)
    SAR = Sampling aspect ratio -- the "distance" between samples horizontally and vertically
    SAR is sometimes called PAR or Pixel Aspect Ratio because you can think of it as the width:height of individual pixels. If individual pixels are perfectly square (1:1) then DAR = FAR. If individual pixels are wider than they are tall then DAR > FAR. If pixels are taller than they are wide DAR < FAR. So for a 704x576 ITU cap you get:


    Code:
    DAR = FAR * SAR
    4:3 = 704:576 * SAR
    4/3 = 704/576 * SAR
    (4 * 576) / (3 *704) = SAR
    2304 / 2112 = SAR
    divide both values by 192
    12/11 = SAR
    12:11 = SAR
    I would crop whatever you need to get rid of the junk at the edges of the frame, then encode with a 12:11 SAR ("--sar 12:11" on the x264 command line). Alternatively, resize your 720x576 cap to 786x576, crop away whatever you don't want, and encode as square pixel (--sar 1:1).
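    As a concrete sketch of the first option (the crop amounts here are hypothetical; keep them even):

    Code:
    AviSource("D:\Copy.avi")    # hypothetical capture
    AssumeTFF()
    QTGMC(Preset="Slower")
    Crop(8, 4, -8, -4)          # trim the junk; the frame stays anamorphic
    # then encode with: x264.exe script.avs --crf 18 --preset slow --sar 12:11 --output out.mp4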

    Originally Posted by Okiba
    The typical CRT hid much more than that. More like 5 percent at each edge. And the amount varied with the temperature, how long the TV has been on, etc.
    Ermm, then I wonder why the guide set a limit of max 40 pixels on X and 24 on Y...
    The tutorials have many errors, approximations, and simplifications. I think he used those values to help you keep close to the correct aspect ratio when resizing to 720x540 at the end.

    Originally Posted by Okiba
    For the most part you should stick with the presets and tunings
    So this should work?
    x264.exe qtgmc.avs --crf 18 --preset slow --output "results.mp4"
    If it's animation, I will add "--tune animation".
    I'm not sure you should even use the animation tuning for VHS caps of cartoons. That tuning works well for sharp, low-noise cartoons -- which isn't the case with VHS tapes. It will depend on how much you sharpen and how much noise reduction you use. Give it a try and compare for yourself.
  8. You might get some ideas for using QTGMC on PAL VHS captures in this thread from doom9.org:

    Restoring old VHS video by Avisynth
  9. I apologize for wasting your time with my post
    No worries! You didn't waste anything. I'm enjoying this research and discussion process.

    Where does this "4:1" come from?
    Oops! My mistake! I was learning about common aspect ratios just before writing and probably swapped 4:3 for 4:1 in my head by accident!

    I would crop whatever you need to get rid of the junk at the edges of the frame, then encode with a 12:11 SAR ("--sar 12:11" on the x264 command line). Alternatively, resize your 720x576 cap to 786x576, crop away whatever you don't want, and encode as square pixel (--sar 1:1).
    Thank you. It's good to know the math behind it. Sadly, the lossless 720x576 archived videos are already masked (I didn't save the non-masked output), so I assume cropping here means removing the black masking borders. The --sar option you mentioned is the flag you mentioned earlier? So some devices won't respect that flag? I plan to use VLC for playing the videos on the PC, and to stream them with Kodi to the big living room TV. Hopefully both can handle the flag.

    It seems like the first option is better (encoding with the 12:11 SAR flag), as this means only a single resize happens and I don't need to resize in my AviSynth flow. Unless the AviSynth resize can actually improve the total quality using the right method.

    Give it a try and compare for yourself.
    Yep. I plan to try everything we talked about here - use the flag, see if VLC can handle it, etc. Thank you!

    You might get some ideas for using QTGMC on PAL VHS captures in this thread from doom9.org:
    Interesting! Thanks! But there's so much in the background I don't understand here (RemoveDirt(), Levels(), LimitedSharpenFaster(), AddGrainC() and much more). I think it's better if I bite it off in small chunks. With time I will figure out the things that bother me and find the specific method to fix each one.

    EDIT:
    I cropped one of the videos and created two x264 files: one with --sar 12:11, and one resized with LanczosResize(786,576) and the --sar 1:1 flag. I took a screenshot of each with VLC and uploaded them both here so you can have a look if you wish. First of all, there seems to be a single-pixel difference. Nothing major. To my eyes, the resized one seems to have a bit more noise compared to the softer 12:11 no-resize video.

    By the way, I noticed x264.exe only handles video (well, that makes sense). I assume that means I have to use ffmpeg. Here's the final x264.exe command I plan on using:
    x264.exe qtgmc.avs --crf 18 --preset slow --sar 12:11 --output "results.mp4"
    If someone can quickly extract the proper ffmpeg command out of it, that would be cool. If not, I'll figure it out by reading ffmpeg's command-line options.
    [Attached images: 1211.png, resize11.png]

    Last edited by Okiba; 30th Sep 2020 at 07:06.
  10. Originally Posted by Okiba
    The --sar option you mentioned is the flag you mentioned earlier?
    Yes.

    Originally Posted by Okiba
    So some devices won't respect that flag?
    Yes, some devices will ignore the SAR flags and play the video at the frame aspect ratio.

    Originally Posted by Okiba
    I plan to use VLC for playing the videos on the PC, and to stream them with Kodi to the big living room TV.
    VLC does respect the SAR. Kodi might depend on the particular device.

    Originally Posted by Okiba
    It seems like the first option is better (encoding with the 12:11 SAR flag), as this means only a single resize happens and I don't need to resize in my AviSynth flow.
    Yes, potentially.

    Originally Posted by Okiba
    Unless the AviSynth resize can actually improve the total quality using the right method.
    There's the question. There are some upscaling filters in AviSynth (nnedi3 for example) that work much better than the upscalers built into most players/TVs. So upscaling to 1440x1080 with nnedi3 might look better than letting the TV do the upscaling. But upscalers work best with video that's sharp to begin with -- VHS is not sharp. The best upscalers manage to retain sharp edges without creating oversharpening halos or aliasing artifacts.
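    A minimal sketch of that kind of upscale (it assumes the nnedi3 plugin is loaded and a square-pixel 768x576 source; the final resize settles the doubled frame on the exact 4:3 target):

    Code:
    nnedi3_rpow2(2, cshift="Spline36Resize")    # edge-directed 2x upscale, 768x576 -> 1536x1152
    Spline36Resize(1440, 1080)                  # down to the exact 4:3 target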

    Originally Posted by Okiba
    I cropped one of the videos and created two x264 files: one with --sar 12:11, and one resized with LanczosResize(786,576) and the --sar 1:1 flag. I took a screenshot of each with VLC and uploaded them both here so you can have a look if you wish. First of all, there seems to be a single-pixel difference. Nothing major. To my eyes, the resized one seems to have a bit more noise compared to the softer 12:11 no-resize video.
    Most players use something like a BicubicResize() to scale video -- that slightly sharpens the picture. LanczosResize() is even sharper. Sharpening increases noise as well as edges -- hence the increased noise.

    Originally Posted by Okiba
    By the way, I noticed x264.exe only handles video (well, that makes sense). I assume that means I have to use ffmpeg. Here's the final x264.exe command I plan on using:
    x264.exe qtgmc.avs --crf 18 --preset slow --sar 12:11 --output "results.mp4"
    If someone can quickly extract the proper ffmpeg command out of it, that would be cool. If not, I'll figure it out by reading ffmpeg's command-line options.
    Something like:
    Code:
    ffmpeg -i "qtgmc.avs" -c:v libx264 -preset slow -crf 18 -vf setsar=12/11 -c:a aac "results.mp4"
  11. Yes, some devices will ignore the SAR flags and play the video at the frame aspect ratio.
    Is there a way to check if this flag is set? VLC doesn't show this information.

    Kodi might depend on the particular device.
    It's an RPi with Kodi on it. I guess I will find out soon by checking.

    So upscaling to 1440x1080 with nnedi3 might look better than letting the TV do the upscaling.
    But that's sort of a losing battle, isn't it? 10 years ago 720p was the standard, then 1080p, today it's 4K. So I can scale it myself to 1440x1080, but at some point a newer, better standard will pop up and those videos will be upscaled by the TV anyway? Unless of course you re-create the encoded videos each generation.

    ffmpeg -i "qtgmc.avs" -c:v libx264 -preset slow -crf 18 -vf setsar=12/11 -c:a aac "results.mp4"
    It seems to be more complex than that. I'm using Windows, and my ffmpeg build doesn't come with an AAC encoder. You have to either compile it yourself or use an older ffmpeg that came bundled with it before they split up. I'll install Hybrid; if it's an ffmpeg wrapper, perhaps I can use the ffmpeg executable that comes with it, which may already include AAC.
  12. Originally Posted by Okiba
    Yes, some devices will ignore the SAR flags and play the video at the frame aspect ratio.
    Is there a way to check if this flag is set? VLC doesn't show this information.
    You can see by the shape of the video when it's played. But I don't think VLC lets you see the SAR or DAR values numerically. You can use MediaInfo for that.

    Originally Posted by Okiba
    Kodi might depend on the particular device.
    It's an RPi with Kodi on it. I guess I will find out soon by checking.
    Kodi on the RPi will display the video correctly. I have one too.

    Originally Posted by Okiba
    So upscaling to 1440x1080 with nnedi3 might look better than letting the TV do the upscaling.
    But that's sort of a losing battle, isn't it? 10 years ago 720p was the standard, then 1080p, today it's 4K. So I can scale it myself to 1440x1080, but at some point a newer, better standard will pop up and those videos will be upscaled by the TV anyway? Unless of course you re-create the encoded videos each generation.
    Yes.

    Originally Posted by Okiba
    ffmpeg -i "qtgmc.avs" -c:v libx264 -preset slow -crf 18 -vf setsar=12/11 -c:a aac "results.mp4"
    It seems to be more complex than that. I'm using Windows, and my ffmpeg build doesn't come with an AAC encoder. You have to either compile it yourself or use an older ffmpeg that came bundled with it before they split up. I'll install Hybrid; if it's an ffmpeg wrapper, perhaps I can use the ffmpeg executable that comes with it, which may already include AAC.
    You probably just have a "licensing safe" build that doesn't include it. I recommend you get a build that has it. Or use another encoder. You can get a list of codecs included with your current build with "ffmpeg -codecs".
  13. You can use MediaInfo for that.
    Oh, nifty tool.

    I recommend you get a build that has it.
    Well, I found a pre-compiled Windows binary that has fdk_aac (which is supposed to be better than the built-in aac encoder). However, the binary was 64-bit, and that didn't play nice with AviSynth, because apparently some plugins, like ChromaShift(), only have 32-bit builds. I ended up using an older 32-bit FFmpeg binary without fdk_aac. I assume that because the audio quality of video tapes is quite low already, there wouldn't be much difference between fdk and the built-in aac (if any).

    I *think* I have everything I need to keep pushing the project forward. I'm attaching the final example for review (I also attached the original lossless video). I fixed everything you guys mentioned except the TBC issues, which sadly I can't address with the current setup (so what was fixed: the aspect ratio, the 4-pixel chroma offset, and the deinterlacing). Feel free to review the final results.

    I'm summing it up for myself (to make sure I didn't miss anything) and for future readers who might find it useful:

    - The 'generic' camcorder QTGMC script looks like so:

    Code:
    SetFilterMTMODE("QTGMC", 2)
    AviSource("E:\loseless_hufyuv_file.avi")
    AssumeTFF()
    QTGMC(Preset="Slower", EdiThreads=3)
    crop(20, 6, -20, -6)
    chromaShift(L=-4)
    Prefetch(3)
    - I'm sticking with Slower for now, until lordsmurf can expand on his comment about Slower.
    - The masking bars are cropped.
    - To avoid multiple resizes, we let the TV/monitor do the resizing, but add the SAR as an option to the encoder (12:11 for a PAL video that starts as 720x576 but gets cropped).
    - The ffmpeg command is the following:

    Code:
    ffmpeg -i "qtgmc.avs" -c:v libx264 -preset slow -crf 18 -vf setsar=12/11 -c:a aac "results.mp4"
    Sound good as a decent baseline?
    Image Attached Files
  14. If you want to switch to 64-bit AviSynth+, there's a replacement for ChromaShift(): ChromaShiftSP(). It uses X and Y instead of C and L, and the values have the opposite sign (negated). It supports odd as well as even shifts, and even non-integer values. So ChromaShiftSP(Y=4) is equivalent to ChromaShift(L=-4).

    And it's even possible to use 32-bit filters within 64-bit AviSynth with MP_Pipeline(). It's a bit awkward and slower than all-native 64-bit.

    Code:
    MP_Pipeline("""
    
    ### platform: win64
    AviSource("E:\loseless_hufyuv_file.avi")
    AssumeTFF()
    QTGMC(Preset="Slower", EdiThreads=3)
    crop(20, 6, -20, -6)
    prefetch(3)
    ### ###
    
    ### platform: win32
    LoadPlugin("c:\program files (x86)\AviSynth+\plugins+\chromashift.dll")
    ChromaShift(L=-4)
    ### ###
    
    """)
    That script will run on both 32-bit and 64-bit AviSynth+.

    That particular video could use some level/color adjustments. Maybe something like Tweak(cont=1.2, bright=-40, sat=1.2).
  15. there's a replacement for ChromaShift(): ChromaShiftSP().
    Ohhhh, nice! I didn't know that. I will use it!

    That script will run on both 32-bit and 64-bit AviSynth+.
    That's a pretty good trick to remember. Thanks!

    That particular video could use some level/color adjustments. Maybe something like Tweak(cont=1.2, bright=-40, sat=1.2).
    Yes, I didn't touch level/color adjustments at all. During capture, I made sure to play with brightness/contrast so that information wouldn't get clipped, but nothing beyond that. I try to avoid tweaking levels/colors because:

    a. I don't have the eye yet to know what's good and what's not.
    b. I'm not sure my monitor is calibrated or even remotely correct.

    I will apply what you suggested to that specific video and see if I can tell what you were trying to do there. The 4-pixels-down and 1-left ChromaShift, for example, was suggested by lordsmurf. I wasn't even aware of it until I learned about it, and now I keep seeing it all the time.
    4 pixels seems OK to me, but I couldn't tell a difference when moving just one pixel. Hopefully it's correct now (as I'm going to apply it to all the videos from this specific camcorder setup).

    Thank you for the professional help, jagabo. I appreciate it!
  16. I thought 3 pixels was a little better than 4. You might also try sharpening the chroma.

    Code:
    MergeChroma(last, Spline36Resize(width/2, height).aWarpSharp(20).Sharpen(1.0).nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=width, fheight=height))
  17. I thought 3 pixels was a little better than 4
    Ermm. It's hard to tell, really, at least for my untrained eye. Unless I'm looking at the wrong spot - what I do is check the red roof, as it's flat.

    You might also try sharpening the chroma.
    Is there a 64-bit alternative to aWarpSharp?

    It's a bit awkward and slower than all-native 64-bit.
    Will just the 32-bit section run slower, or does using the MP_Pipeline scope mean that even the 64-bit part runs slower?

    Also, it seems MergeChroma needs to run on YV12 (and my lossless video is YUY2). I added ConvertToYV12() just so I can share the results (which include MergeChroma, 3 pixels down instead of 4, and the color/level correction). When I have some free time I will read about the difference between YV12 and YUY2 (and why my HuffYUV files are YUY2, and whether it's OK to change them to YV12).

    Code:
    SetFilterMTMODE("QTGMC", 2)
    AviSource("E:\test\SwissRaw.avi")
    ConvertToYV12()
    AssumeTFF()
    QTGMC(Preset="Slower", EdiThreads=3)
    crop(20, 6, -20, -6)
    ChromaShiftSP(Y=3)
    MergeChroma(last, Spline36Resize(width/2, height).aWarpSharp(20).Sharpen(1.0).nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=width, fheight=height))
    Tweak(cont=1.2, bright=-40, sat=1.2)
    Prefetch(3)
    I find the color correction a bit harsh (reds are a bit too strong on my monitor - probably my monitor), and brightness/contrast depends on the video. Is saturation a global value I can apply to all videos, or is it too scene-specific? The chroma sharpening is great - it's clearly visible on the flapping flag.
    Image Attached Files
    Last edited by Okiba; 1st Oct 2020 at 10:50.
  18. I think I added the wrong result file. Uploading it again just in case.
    Image Attached Files
  19. Originally Posted by Okiba
    Is there a 64-bit alternative to aWarpSharp?
    aWarpSharp2

    http://avisynth.nl/index.php/AWarpSharp2




    Will just the 32-bit section run slower, or does using the MP_Pipeline scope mean that even the 64-bit part runs slower?
    What do you need that is 32-bit?


    Also, it seems MergeChroma needs to run on YV12 (and my lossless video is YUY2). I added ConvertToYV12() just so I can share the results (which include MergeChroma, 3 pixels down instead of 4, and the color/level correction). When I have some free time I will read about the difference between YV12 and YUY2 (and why my HuffYUV files are YUY2, and whether it's OK to change them to YV12).
    MergeChroma requires planar input. YUY2 (8-bit 4:2:2) in its planar equivalent would be YV16.

    If you use ConvertToYV12 before a deinterlacer to convert 4:2:2 to 4:2:0, it has to use interlaced=true, otherwise you will get chroma artifacts: ConvertToYV12(interlaced=true)

    But since you're using a filter later that requires planar input, just use ConvertToYV16(interlaced=true) instead and you can keep 4:2:2.

    Is saturation a global value I can apply to all videos, or is it too scene-specific?
    Yes, but you can apply different filters or different settings to different sections by using Trim().
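    A minimal sketch of that idea (the frame numbers and settings are hypothetical):

    Code:
    a = Trim(0, 999).Tweak(sat=1.2)                  # first section
    b = Trim(1000, 0).Tweak(bright=-20, sat=1.1)     # rest of the clip (0 = through to the end)
    a ++ b                                           # splice the sections back together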
    Last edited by poisondeathray; 1st Oct 2020 at 11:22.
  20. That's pretty impressive, Okiba.
  21. YV12 has the luma channel at 720x576 but the chroma channels at 360x288. YUY2 has the luma at 720x576 but the chroma at 360x576. Internally, YV12 is stored as 3 planes (YYYY... UUUU... VVVV...). YUY2 is stored interleaved (YUYVYUYV...)
  22. aWarpSharp2
    Oh dang. I should have just googled it. Sorry.

    What do you need that is 32-bit?
    Now, with aWarpSharp2, nothing. But I still wonder.

    MergeChroma requires planar input. YUY2 (8-bit 4:2:2) in its planar equivalent would be YV16.
    YV12 has the luma channel at 720x576 but the chroma channels at 360x288. YUY2 has the luma at 720x576 but the chroma at 360x576. Internally, YV12 is stored as 3 planes (YYYY... UUUU... VVVV...). YUY2 is stored interleaved (YUYVYUYV...)
    So I'm probably making it simpler than it is, but the difference is in HOW the information is stored - the information itself is identical (4:2:2)? So some plugins expect a specific image format, and I can move between image formats. But quality-wise, YUY2 and YV16 will be the same?

    interlaced=true
    Of course. Stupid me. We already talked about it early on; I completely forgot.
    Does it make a difference if I deinterlace and use ConvertToYV16() afterwards, or if I first ConvertToYV16(interlaced=true) and then deinterlace?

    Yes, but you can apply different filters or different settings to different sections by using Trim().
    I probably chose the wrong words. I'm trying to create a generic script that brings all the videos from the same camcorder to an even level, so I only apply things that benefit all of them (like QTGMC, fixing the chroma offset, or applying the chroma sharpening). While contrast and levels can't be applied globally because they are very scene-oriented, I wondered whether saturation is something global I can apply to all videos - because if the saturation is wrong in the camera's basic settings, it will be wrong in all videos?

    That's pretty impressive, Okiba.
    Harmm?

    EDIT:
    Sample uploaded.
    Image Attached Files
    Last edited by Okiba; 1st Oct 2020 at 12:38.
  23. Originally Posted by Okiba
    What do you need that is 32-bit?
    Now, with aWarpSharp2, nothing. But I still wonder.
    In general, there is additional overhead with MP_Pipeline, so you'd expect it to be slower most of the time than running natively x64 with Prefetch(x).

    But there are some cases where MP_Pipeline's threading model makes some operations faster. It uses a different threading model than the global Prefetch.


    So I'm probably making it simpler than it is, but the difference is in HOW the information is stored - the information itself is identical (4:2:2)? So some plugins expect a specific image format, and I can move between image formats. But quality-wise, YUY2 and YV16 will be the same?
    Yes, it's how the uncompressed data is stored. There are many arrangements; for example, UYVY is another 8-bit 4:2:2 layout for uncompressed video.

    Some programs handle the different types of 8-bit 4:2:2 differently. For example, most Windows NLEs do not handle YUY2 or YV16 as YUV; they get converted to RGB.

    But in AviSynth, YUY2 and YV16 are losslessly interconvertible. All types of 8-bit 4:2:2 are treated in AviSynth as either YUY2 or YV16 (the conversion is sometimes done in the source filter).



    Does it make a difference if I deinterlace and use ConvertToYV16() afterwards, or if I first ConvertToYV16(interlaced=true) and then deinterlace?
    It shouldn't, quality-wise.

    But in general, most filters run faster with their planar counterparts, so converting to YV16 earlier rather than later should be faster.

    Yes, but you can apply different filters or different settings to different sections by using Trim().
    I probably chose the wrong words. I'm trying to create a generic script that brings all the videos from the same camcorder to an even level, so I only apply things that benefit all of them (like QTGMC, fixing the chroma offset, or applying the chroma sharpening). While contrast and levels can't be applied globally because they are very scene-oriented, I wondered whether saturation is something global I can apply to all videos - because if the saturation is wrong in the camera's basic settings, it will be wrong in all videos?
    No, because apparent saturation is also partially dependent on levels. Different scenes, different exposures, and different camera settings might require different settings for everything.
  24. Thank you for answering the questions, poisondeathray. In that case, here's the updated generic script:

    Code:
    SetFilterMTMODE("QTGMC", 2)
    AviSource("E:\test.avi")
    ConvertToYV16(interlaced=true) 
    AssumeTFF()
    QTGMC(Preset="Slower", EdiThreads=3)
    crop(20, 6, -20, -6)
    MergeChroma(last, Spline36Resize(width/2, height).aWarpSharp2(20).Sharpen(1.0).nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=width, fheight=height))
    ChromaShiftSP(Y=3)
    Prefetch(3)
    Just to test the theory of "one AviSynth script to rule them all", I applied the "generic" filter to two more scenes: one in Italy, where it's not full daylight but not dark either, and there are close objects; the other from Epcot during the night shows. Attaching. What do you think?
    One thing I noticed with the camera during the night scene is that in some shots it seems to lack focus at the side areas (while the center is focused). But that's a subject for another time :P
    Image Attached Files
  25. Italy and Epcot both have elevated black levels, giving that "washed out" look.

    You can examine the waveform with Histogram() in AviSynth, or use a waveform monitor in other programs.
  26. Originally Posted by poisondeathray
    Italy and Epcot both have elevated black levels, giving that "washed out" look.
    One could perhaps add to the script
    Code:
    levels(12,1.0,255,0,255,coring=false)
    Last edited by Sharc; 2nd Oct 2020 at 08:43. Reason: coring added
  27. You can examine the waveform with Histogram() in AviSynth, or use a waveform monitor in other programs.
    I was only playing with the brightness/contrast values before capturing: I would adjust brightness/contrast so blacks/whites wouldn't clip, but that's it. I'm assuming that blindly setting brightness/contrast just to stay within the proper limits can produce strange white/black levels, but I was OK with that because I can fix it post-capture.

    I tried the Histogram() command, and it looks a bit complex compared to the VirtualDub histogram or the histograms I know from taking pictures with my camera. Does tweaking brightness/blacks follow a set of rules I can apply just from looking at the histogram, or is it a matter of having the experience to "know" what's good and what's not?

    levels(12,1.0,255,0,255,coring=false)
    I can see the difference pretty clearly, and without it the clip does indeed look "foggy". However, Levels() seems much more complex than Tweak(). Do they do the same thing?
  28. All three of your samples have about the same black and white levels. The blacks are way too high, the whites a little low. You'd be better off capturing closer to the right levels because you would make better use of the limited range of luma values. Your deepest blacks are around Y=45 and brightest whites around 220. The valid range for limited range rec.601 is from 16 to 235. So you only have about 176 different Y values, where you could have 220.

    A waveform monitor is much more useful than a histogram. Note that AviSynth's Histogram() defaults to a waveform monitor, not a histogram. A waveform monitor is basically a graph of all the Y values across the width or height of the frame. Here's an explanation I wrote up long ago:

    https://forum.videohelp.com/threads/340804-colorspace-conversation-elaboration#post2121568

    In AviSynth I like to use TurnRight().Histogram().TurnLeft() to get a horizontal waveform (this is what one would see on an oscilloscope).

    [Attachment 55172: waveform monitor example]
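    A minimal sketch for checking a capture this way (the source path is hypothetical):

    Code:
    AviSource("E:\test.avi")
    ConvertToYV12(interlaced=true)        # Histogram() wants YUV; 4:2:0 also turns cleanly
    TurnRight().Histogram().TurnLeft()    # horizontal waveform; legal luma sits between 16 and 235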
  29. All three of your samples have about the same black and white levels.
    That makes sense. I captured around 50-60 tapes. Some events were filmed on very bright days; some were taken in the dark. Each tape had multiple events on it, so it was very time-consuming to check every event and adjust brightness/contrast. So what I did was pick a VERY dark scene and a very LIGHT scene, and use those levels as a template for all the tapes (as all the tapes were taken with the same camcorder and the same default settings).

    The blacks are way too high, the whites a little low
    Well, the way I tweaked brightness/contrast was pretty basic. I played the blackest scene I could find, watched the clipping indicators (clipped areas are marked in red in VirtualDub), and used the brightness setting to move everything to the right until I hit 16, then stopped there. Then, leaving the brightness setting where it was, I did the same for a very bright scene (lowering contrast until I was at 235). So I can see how that is not very precise, because all I was aiming for was not to lose detail (so I could fix it in post-processing).

    So I assume what's happening here is that while this scene is dark, it's not the darkest I tested, so the blacks start at 45. The same thing happens with the whites. It seems, however, that nothing clips (which was the intention). Now that I understand the graph better - how does it help me solve this? Is it possible to get a graph based on the WHOLE video rather than a single frame, and then move the lowest black (let's say it's 30) to 16 and the lowest white (let's say it's 220) to 235 - "stretching" the histogram?

    I was checking Sharc's settings:

    levels(12,1.0,255,0,255,coring=false)
    I can indeed see that the blacks are lower on the graph. It's hard to tell about the whites - they're still pretty far from the top.

    Thanks!
    Last edited by Okiba; 2nd Oct 2020 at 16:02.
  30. The Levels() filter exists because it's easy to adjust levels with it. Look at the image in post #28: the blacks are at Y=45 and the whites at Y=220. To fix that so blacks are at 16 and whites at 235, you use Levels(45, 1.0, 220, 16, 235). Gamma is the linearity of that adjustment: 1.0 is linear, values less than 1.0 reduce dark detail, and values over 1.0 bring out dark detail. Here's an example that animates the gamma value:

    Code:
    ######################################################
    
    function GreyRamp()
    {
       BlankClip(color=$000000, width=1, height=256, pixel_type="RGB32")
       StackHorizontal(last, last.RGBAdjust(rb=1, gb=1, bb=1))
       StackHorizontal(last, last.RGBAdjust(rb=2, gb=2, bb=2))
       StackHorizontal(last, last.RGBAdjust(rb=4, gb=4, bb=4))
       StackHorizontal(last, last.RGBAdjust(rb=8, gb=8, bb=8))
       StackHorizontal(last, last.RGBAdjust(rb=16, gb=16, bb=16))
       StackHorizontal(last, last.RGBAdjust(rb=32, gb=32, bb=32))
       StackHorizontal(last, last.RGBAdjust(rb=64, gb=64, bb=64))
       StackHorizontal(last, last.RGBAdjust(rb=128, gb=128, bb=128))
    }
    
    ######################################################
    
    function levels_gamma(clip v, float gamma)
    {
        Levels(v, 16, gamma, 235, 16, 235, coring=false)
        Subtitle("gamma="+String(gamma))
    }
    
    ######################################################
    
    GreyRamp()
    ConvertToYUY2()
    Animate(last, 0, 256, "levels_gamma", 0.5, 2.0)
    TurnRight().Histogram().TurnLeft()


