VideoHelp Forum
  1. Member (Join Date: Mar 2015, Location: Europe)
    For some months I have been transferring and capturing a relatively large collection of miniDV, Video8, VHS and Super8 film (the film transfer was outsourced). The project is not yet complete, and for some of the already-captured material I'm still trying to get better results and to recover some problematic tapes, but it's time to start thinking about how to address the second phase: cleaning and restoration.

    Of all the formats, the most challenging for me has probably been the Video8 tapes, due to player constraints, tape age (20-26 years old), original camcorder quality, and extensive playing over the years.

    As these tapes have more problems than the miniDV and even the VHS ones, I will focus on them; I think I can probably extrapolate the results to VHS and miniDV, even though VHS, for instance, has some specificities of its own.

    Selected capture workflow (PAL tapes):

    D8 camcorder (Sony DCR-TRV238) or Hi8/Video8 deck (Sony EV-S9000E) -> S-Video -> Panasonic DVD recorder (DMR-EH65, used as ADC and TBC) -> HDMI 576i -> splitter -> StarTech USB3HDCAP -> PC via USB 3.0 -> VirtualDub -> HuffYUV

    I intend to make PC file versions, and for that I need to clean, restore and deinterlace; I will probably convert to H.264. Later I also intend to make some DVDs, but for now I will stick with the PC versions.

    The most common problems that I can detect are as follows:

    Noise: general noise, plus grain in low-light scenes
    Shaking: due to poor camcorder handling (aggressive panning and zooming during the original shooting)
    Exposure: over- and under-exposure when light conditions change
    Color problems: white balance, brightness, contrast and saturation
    Borders: head-switching noise, solid black capture borders, and colored edges (right or top, caused by the players)
    Blur/smear: the image needs crispening, if possible
    Interlacing: the PC versions need deinterlacing

    Other problems are likely present as well, such as jitter, chroma noise, chroma shift, etc.

    I have thought of two different approaches to cleaning and restoration:
    1. Use applications like VideoCleaner (a GUI on top of AvsPmod, AviSynth and VirtualDub), Film9, the old vReveal, or similar apps.
    2. Use VirtualDub and AviSynth directly, individually or both if necessary.
    Even though VideoCleaner uses AviSynth filters similar to the ones used directly, I have not been able to get very good results with it, and I have not yet tried Film9. So far my best results come from a hybrid approach based on VirtualDub (deshake) and AviSynth (all the rest).

    Based on examples I found on the VideoHelp and digitalFAQ forums, I put together a small AviSynth script to try to address the problems detected. I am not an AviSynth expert, so for now this is basically copy/paste with minor adaptations.

    I have done the process in 3 steps:
    1. Deshake in VirtualDub, output HuffYUV (YUV). Borders (or no borders) are quite tricky; I don't know for sure whether I have handled the problem well, and I also don't know whether I should resize here or in step 2, at the end of the AviSynth script.
    2. Clean/restore using an AviSynth script in VirtualDub, output HuffYUV (YUV).
    3. Transcode to H.264 in VirtualDub.

    I have applied the script to entire files. I could probably do better by treating the more problematic clips separately (for instance low-light and over-exposed scenes) and adapting the script, but that would dramatically increase post-processing time, so I will have to compromise somewhat. The AviSynth script is quite slow, about 2 fps even on a fast PC, but computing time is not a real problem for me.

    I wanted to keep color space conversions to a minimum to avoid losses, but with this 3-step approach I have had to do some, and I'm having trouble optimizing the situation. I'm also not sure the filter order in the AviSynth script is correct, and the fine tuning is probably still far from what is possible. From what I have read, deshaking should be done before any crispening or temporal smoothing/cleaning, and denoising and clean-up must be done before sharpening or resizing. So, as I said before, I'm not sure whether I should resize immediately after deshaking in VirtualDub or only at the end of the AviSynth script, as it could affect final quality.
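    To make the intended order concrete, here is a rough sketch of what I mean for step 2 (not my tuned script; the input filename is just the hypothetical deshaken intermediate from step 1, and the parameter values are placeholders):

    Code:
    AviSource("E:\Video8\k15_deshaken.avi")  # HuffYUV (YUY2) output of the VirtualDub deshake pass
    AssumeTFF()                              # field order assumed; checked per capture
    QTGMC(Preset="Slower")                   # deinterlace + temporal cleaning first
    ConvertToYV12()                          # progressive by now, so no interlaced flag
    Cnr2("xoo", 4, 2, 64)                    # chroma noise reduction
    UnsharpHQ(THRESHOLD=20, SHARPSTR=2.0)    # sharpen only after denoising
    Crop(8, 4, -8, -12)                      # example crop of borders/head-switching noise
    LanczosResize(720, 576)                  # resize last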

    Audio: I am thinking of demuxing and using iZotope RX for audio cleaning, with included modules like De-clip, De-click, De-crackle, Hum Removal, Denoise (Spectral, Dialogue), Leveler, etc. However, I'm not sure which types of cleaning these audio tracks need most; probably hum removal and denoising. It would be inappropriate to use all the modules: I could risk degrading the audio more than improving it.

    Sorry for the long text. I'm looking forward to your feedback and expert advice on my restoration approach, to help fine-tune the sequence, script and parameters; I feel I'm still far from optimal and I'm unsure about some options and parameters. With some help from the forum I'm sure I can cut my learning curve dramatically and get strong results quickly.

    Code:
    AviSource("E:\Video8\k15_tape.avi")
    ComplementParity()
    QTGMC( Preset="Slower",SourceMatch=3,Lossless=2,MatchEnhance=0.75, NoiseProcess=1, NoiseRestore=0.7, Sigma=1.5 )
    autolevels()
    Cnr2("xoo",4,2,64)
    ConvertToYV12(interlaced=true)
    MergeChroma(aWarpSharp(depth=10), aWarpSharp(depth=20)) #  white balance
    Overlay(last, ColorYUV(off_y=-8, off_u=9, off_v=-2), 0, 0, GreyScale(last).ColorYUV(cont_y=30))
    UnsharpHQ(THRESHOLD=20, SHARPSTR=4.0, SMOOTH=0.5, SHOW=false)
    #crop(2, 16, -22, -36)
    #LanczosResize(720, 576)
    [Sample files attached]
  2. You are doing a lot of good things.

    A few thoughts, in no particular order:

    Don't get carried away trying to do too much. Concentrate only on the problems which really stick out like a sore thumb.

    At the risk of sounding like a broken record (I've posted this in other threads), don't deinterlace. You permanently throw away half of all your temporal information, i.e., you are degrading your video. If you are eventually making DVDs, you definitely do not need, nor do you want to deinterlace. Even for playback on your PC, most media players can do an adequate deinterlacing job, especially for consumer analog video. You'll save yourself a lot of time if you eliminate this step.

    Once you deinterlace, all that temporal quality is gone -- forever. I always find it strange that people spend hours, days, or weeks, doing all sorts of restoration, but then blithely throw away half (not 10%, 20%, but 50%) of their video's quality.

    I have iZotope RX3 Advanced and it can do miracles on certain things. However, it does not have any sort of "band extrapolator" to add brightness to old linear-mode (6kHz or less bandwidth) VHS audio tracks. Too bad because this is my number one complaint about the audio on old videotapes. Having said that, iZotope can definitely help reduce hiss. It can also help reduce camera noise if you are unfortunate enough to have tapes from a camcorder that had the mic mounted in such a way as to record the motor noise.

    Here is my videotape workflow:

    1. Capture.
    2. Put video on NLE timeline (I use Vegas Pro).
    3. Cut out bad stuff.
    4. Color correct and adjust exposure. You really want to do this in your NLE because it has to be done for each scene, and you really need to have interactive feedback while making changes.
    5. Send audio segments that need help out to iZotope, correct, and then replace original audio segments with the new audio.
    6. Stabilize using Deshaker or Mercalli (Mercalli is much better, but it costs money). I wrote Vegas & VirtualDub scripts to automate the process of using Deshaker to stabilize individual events within Vegas (i.e., "batch stabilization").
    7. Frameserve from the NLE into AVISynth script that does basic denoising, including chroma noise reduction.

    I alter my denoising script for every tape I do, and sometimes for individual scenes. This is a requirement because the noise you get after gaining a really dark scene has quite a different character from garden-variety VHS, 8mm, or Beta analog noise. Noise on Digital8 and DV is different from those, and generally doesn't need chroma noise reduction.

    I bring the denoised video back into Vegas, add titles and chapters, and then render out to DVD.

    Because SD consumer video is not that great, DVD quality is more than adequate. I like the durability and longevity of DVD, and compared to BD, memory card, or hard drive, I think it is far more likely that you'll be able to play it in 30-40 years. I say this because the CD was introduced in 1982, and more than three decades later it can still be played on any "round shiny object" player in the world. The DVD turns 20 next year, and it is still the mainstay for physical SD video distribution.

    My final piece of advice: if you find yourself squinting at the monitor, doing an A/B between the original and your improved version, and can barely see a difference, don't bother to do all the work. Restoration for this sort of material is about making big improvements that dramatically enhance the viewing experience. I've seen too many people get carried away with really difficult workflows, which makes it hard to finish the project. The old "silk purse out of a sow's ear" is an apt metaphor.

    Getting to the finish line should be your number one objective.

    Just my opinion ...
  3. Member
    Thank you, johnmeyer, your opinions are most welcome, as I'm still searching for a strategy for my restoration project.

    Regarding deinterlacing, I thought that if both fields were retained, without discarding or blending, and the output was 50 fps, the losses would be minimized. Obviously I will keep the master captures, and for the DVD versions I never intended to deinterlace, so I will retain all the information for whatever I may need in the future. So the question, I think, is whether deinterlacing is better handled by algorithms like QTGMC or by the ones included in PC players.

    I have tried deinterlacing with PC player applications like VLC, MPC-HC, etc.; however my impression is that none of these applications has a deinterlacing algorithm as efficient as QTGMC, or at least the visual results seem better to me with QTGMC.

    For the PC versions I was planning a very limited edit, cutting only the bad parts directly in VirtualDub; I was not planning to add titles and chapters for now. To avoid color-correcting and adjusting exposure for each scene, I was trying to find AviSynth plugins for automatic adjustments: autolevels, automatic white balance, and automatic brightness and contrast correction at least. Very problematic scenes could eventually be treated individually.

    I would very much like to know which are considered the best AviSynth plugins for basic denoising, chroma noise reduction, and sharpening/crispening, and for the automatic adjustments mentioned in the last paragraph, to compare with the ones I have used or with typical restoration scripts for standard Video8 tapes.

    For the DVD versions I intend to take a more professional approach later, once I have stronger knowledge of the techniques and applications involved, with titles, chapters, transitions, all the rest, and individual scene adjustments where needed; for this I was thinking of using an NLE (Sony Movie Studio 13 or Adobe Premiere Pro CS6). I still have to decide which MPEG-2 encoder to use: I have TMPGEnc Plus, and I think an old version of MainConcept 2.0 or 2.1. I have to check whether these are still considered good encoders or whether it would be better to buy something new.

    I understand that using an NLE and treating each scene individually is the more professional approach and will lead to better results, and I would love to do it now. But considering my current know-how (I'm not a professional in this area; I'm learning as I go), the size of my collection (between 350 and 400 files to restore), and the need for results reasonably soon, I have to compromise, and I thought a more automated treatment could be the solution. That said, I have an old Pinnacle Studio 15 which I used in the past, and before starting the capture project I bought Sony Movie Studio 13 and Adobe Premiere Pro CS6, but I still have to re-learn these applications to use them effectively. If I can quickly get up to speed with Premiere Pro or Movie Studio I will try your approach; in the meantime I'm still interested in optimizing my current one, as I feel I can get relatively good results sooner. I hope I'm not wrong.
  4. A few quick responses:

    Regarding deinterlacing, I thought that if both fields were retained, without discarding or blending, and the output was 50 fps, the losses would be minimized.
    Unfortunately, that's not how deinterlacing works. Your video has 50 events every second. People get really confused about interlacing because they freeze a complete frame and see both fields at the same time, and since the two fields are from different moments in time, they see "teeth" around places where there is strong horizontal motion, and they think something is wrong. However, when the video is in motion, your persistence of vision, which is what allows you to see motion, does not notice that the odd fields are being updated at a different moment in time than the even fields, and the net effect is something that has 50 temporal events per second.

    When you deinterlace, you end up doing it one of two ways: the first yields 25 fps progressive and the other yields 50 fps progressive. If you do the deinterlacing that yields 25 fps, you end up with something that no longer has any of the fluidity of video, and instead looks more like film, complete with judder on horizontal pans. In addition, you end up with errors in the estimation of where the field from the other moment in time must be spatially shifted to match the current field. The 50p result has its own set of problems having to do with motion estimation.
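    To make the two paths concrete with the QTGMC filter discussed elsewhere in this thread (a rough sketch; the preset and field order are placeholders):

    Code:
    AviSource("capture.avi")               # hypothetical PAL capture, 50 fields per second
    AssumeTFF()
    QTGMC(Preset="Slower")                 # double rate: 50p, temporal cadence preserved
    # QTGMC(Preset="Slower", FPSDivisor=2) # single rate: 25p, film-like judder on pans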

    however my impression is that none of these applications has a deinterlacing algorithm as efficient as QTGMC, or at least the visual results seem better to me with QTGMC.
    I agree completely. QTGMC is just about the best deinterlacer you can find, and it does lots of other things as well. So, if you are going to deinterlace, it is the one to use. But, good as it is, the basic "laws of physics" apply, and your video will still be degraded.

    To avoid color-correcting and adjusting exposure for each scene, I was trying to find AviSynth plugins for automatic adjustments: autolevels, automatic white balance, and automatic brightness and contrast correction at least.
    I wish there were such a thing as autolevels and auto white balance. I've tried dozens of AviSynth scripts, Sony Vegas FX, and various commercial plugins. None of them does a decent job, and most of them actually fail quite miserably. But try them and see for yourself; maybe you'll find something I haven't tried. The reason I don't think you'll be happy, however, is that most color and gamma correction has to deal with major errors, and there is not enough color or exposure information to get back to something that looks normal. For color correction, most "autowhite" tools are really designed to help you match different cameras, or match different scenes taken at slightly different times of day. For the major problems you usually have with consumer camcorders, such as balancing for indoor light and then exposing outdoors, the automatic tools fail completely. Or, how do you balance for fluorescent lighting? Again, the automatic tools fail.

    I would very much like to know which are considered the best AviSynth plugins for basic denoising, chroma noise reduction, and sharpening/crispening
    I have posted my starting-point VHS restoration script many times. It is basically just an adaptation of the MDegrain2 denoising given in the MVTools2 documentation. I also use an old VirtualDub plugin, CNR2, that I run inside AviSynth. Everyone's tastes and needs are different, but for VHS, S-VHS, 8mm, and Hi8, I find that denoising is about the only "automatic" operation that really improves the result (well, deshaking can do wonders in some cases). Sharpening is very problematic, and tends to create some pretty nasty artifacts without really bringing out any useful detail. By contrast, in the film restoration scripts we created over at doom9.org, a little sharpening can sometimes work miracles. Take a look at one of my early posts in the first of two long threads; scroll down until you see the porch railings, and note how much detail I was able to recover:

    The power of Avisynth: restoring old 8mm films.

    I still have to decide which MPEG-2 encoder to use
    Any of the ones you mention will work fine.
    Last edited by johnmeyer; 17th Apr 2016 at 09:29. Reason: Changed 60p to 50p
    Originally Posted by johnmeyer:

    When you deinterlace, you end up doing it one of two ways: the first yields 25 fps progressive and the other yields 60 fps progressive. If you do the deinterlacing that yields 25 fps, you end up with something that no longer has any of the fluidity of video, and instead looks more like film, complete with judder on horizontal pans. In addition, you end up with errors in the estimation of where the field from the other moment in time must be spatially shifted to match the current field. The 60p result has its own set of problems having to do with motion estimation.
    Am I misreading what you're trying to say?

    From a 25 fps (50 fields/s) source, bob deinterlacing with any method produces 50p, not 60p. So you don't lose any temporal information or interpolate any additional temporal information, and you don't need optical flow techniques. The simplest bob deinterlace is Bob(), a bicubic resize of each field without spatial or temporal interpolation, and with no correction of the even/odd field shift either. Each field essentially becomes a frame, so 50 fields become 50 frames.
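    In AviSynth terms (a sketch; Yadif's mode=1 is its double-rate mode):

    Code:
    AviSource("capture.avi")  # hypothetical 25i source, 50 fields per second
    AssumeTFF()               # field order assumed for the example
    Bob()                     # simple bob: each field resized to a full frame -> 50p
    # Yadif(mode=1)           # a "smarter" bob: compensates the even/odd offset, also 50p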
  6. Thanks for the correction. The 60p was a typo. I have corrected that. I live in NTSC land, so my brain is hard-wired for 60.

    As for Bob deinterlacing, or any other method which produces 60p for NTSC or 50p for PAL, yes, the temporal cadence is retained, but at the cost of degrading the original spatial integrity of the video. It is called "bob" deinterlacing because, with a simple bob algorithm, the resulting video will, in some scenes, appear to "bob" up and down.
  7. Yes, "smarter" bob algorithms will correct for the even/odd field offset. Yadif is probably the most commonly used one for software

    Most HDTV sets do not use any spatial or temporal interpolation; it's very similar to simple bob, in that a field is simply resized without any other processing. This predisposes to deinterlacing artifacts: "marching ants", buzzing lines, etc. So I would argue that "original spatial integrity" isn't necessarily a good thing when you're viewing, because you're not viewing the original fields, you're viewing frames. More expensive sets use motion-adaptive, spatial and temporal interpolation to fill in the missing lines, plus antialiasing - almost as good as advanced software deinterlacers. Deinterlacing on the fly on those setups is almost as good as something like QTGMC. So the equipment you have might factor into the choices you make.

    The OP needs a PC version; that's the reason for his thread.

    My opinion: I would rather watch a higher-quality deinterlaced version than some lower-quality deinterlaced-on-the-fly version. Another option is to watch with QTGMC on the fly, if your setup is fast enough. But not everyone (grandma, etc.) will know about AviSynth or how to set it up in a software player; if you "bake in" a QTGMC-processed version, it's easier to watch.

    Pros and cons to anything you do. In general, stabilization and temporal filtering will work better on a double-rate deinterlaced version (50p in his case) than when applied to separate even/odd fields which are then interleaved. If you filter the even and odd fields separately, the filtering never accounts for the shifts between them, so you predispose the result to temporal fluctuations and fluttering between frames. But some specific cases might require separate treatment of even vs. odd, as sketched below.
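    To illustrate the field-separated approach I'm contrasting (a sketch; FluxSmoothT is just a stand-in for any temporal filter):

    Code:
    AviSource("capture.avi")   # hypothetical interlaced source
    AssumeTFF()
    SeparateFields()
    even = SelectEven(last).FluxSmoothT(7)  # even fields filtered in isolation
    odd  = SelectOdd(last).FluxSmoothT(7)   # odd fields never "see" the even ones
    Interleave(even, odd)
    Weave()                                 # reassemble; the even/odd shift was never accounted for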

    I think everyone will agree that one should archive the original, in case some better algorithm develops down the road, or if you need to do other manipulations, etc. So in that respect, original integrity isn't degraded.
    Originally Posted by poisondeathray:
    I think everyone will agree that one should archive the original, in case some better algorithm develops down the road, or if you need to do other manipulations, etc. So in that respect, original integrity isn't degraded.
    ... or if future sets can actually (once again) display interlaced video natively. I still don't understand why this is not possible. Every pixel is addressed, so addressing an individual line of pixels in an LCD display should be doable. However, it must be more difficult than that, or some manufacturer would offer such a set. Still, based on my technical understanding, I think such a display is possible, and if one is ever offered, then having your original interlaced material would be a good thing.
  9. Member
    In my first post I tried to identify the most frequent problems present in my Video8 captures, of which the sample in post 1 and the one attached here are examples.

    However, as my know-how in this area is still limited, it would have been a miracle if I had correctly identified all the important problems that need restoration and the best methods for them. So I very much welcome your comments and advice to improve and optimize the project and the AviSynth script, which is an attempt to restore these captures without using a full NLE for now.

    Summarizing what I have done so far, and what seem to be my current problems:

    Deshaking: done separately in VirtualDub. I think the results are acceptable, even if the deshaking parameters are probably not fully optimized. Related to this, I have serious doubts about whether I should resize after cropping; it would probably be better to add borders instead, in VDub or in the AviSynth script. However, borders and even resizing are always tricky: in my case, in the YUY2 colorspace, they must be mod-2, but other problems (ringing, etc.) could justify other border dimensions, and I don't know whether any of these situations apply or how to choose correctly to avoid them. Deshaking also poses a problem with logos and dates, which seem to wander all over the screen; I still have to work out how to address this (delogo plugins?)

    Deinterlacing: I'm very satisfied with the results obtained so far with QTGMC; I think they are solid. I tried to optimize the parameters a little further and I think I got a minor but visible improvement:
    QTGMC(Preset="Very Slow", SourceMatch=3, Sharpness=0.5, TR2=2, Lossless=2, MatchEnhance=0.75, NoiseProcess=1, NoiseRestore=0.7, Sigma=1.5)

    Exposure adjustment: this is my major pitfall. The only solution I have found is autolevels(); I did see an improvement, but a very, very limited one, and I think this is clearly the area with my worst results (see the added sample). To fix these scenes I will probably have to address them individually, and even then I have not yet decided which plugin or filter would be best for the adjustment.
    autolevels()

    Chroma noise reduction: even if I don't see a very significant improvement with CNR2, I think it is probably worth keeping.
    Cnr2("xoo",4,2,64)

    Noise: somewhat addressed by QTGMC; perhaps it should be tuned further for this, or complemented with a separate denoising plugin. I do have Neat Video, for instance, but I don't think the type and amount of noise justify its use.

    Auto white balance: it seems to work, the results look satisfactory, and of the several solutions I tried this one seems to work best so far.
    MergeChroma(aWarpSharp(depth=10), aWarpSharp(depth=20)) # white balance
    Overlay(last, ColorYUV(off_y=-8, off_u=9, off_v=-2), 0, 0, GreyScale(last).ColorYUV(cont_y=30))

    Crispening/sharpening (UnsharpHQ): a difficult area, given the nature of Video8 (lack of resolution). So far this plugin seems to provide a small benefit; perhaps it needs further tuning.
    UnsharpHQ(THRESHOLD=20, SHARPSTR=4.0, SMOOTH=0.5, SHOW=false)

    I don't know for sure whether I have other problems that should be addressed, for example aliasing (namely after deinterlacing), chroma shift (I think I may have a small problem here, but I'm unsure), the need for jitter stabilization, and possibly others.

    Color space conversions
    Knowing how important it is to keep color space conversions to a minimum, I have thought of a new approach, but I'm unsure whether it brings any improvement.

    At the end of the deshaking step I can save directly to YV12, uncompressed, as I think HuffYUV doesn't support it (it's an intermediate format, so space is not a problem, and since I use an SSD, neither is throughput). As all the AviSynth plugins work in YV12, I would not have to make any further color space conversion, and at the end I could also compress directly to H.264, without a third step.

    I tried this at the same time as the QTGMC tweak and got a minimal but, I think, visible improvement. I don't know whether it comes from this modification, from the QTGMC optimization, or both, but probably it is from QTGMC. However, I'm only avoiding the YUY2-to-YV12 conversion, and I'm committing to 4:2:0 chroma subsampling earlier, so I'm unsure what the overall balance is. The script does run almost twice as fast, though. The new pipeline is listed below, followed by a sketch of the single-script version.

    k7 raw capture: YUY2 4:2:2 (HuffYUV)
    VDub null transform: RGB24
    VDub Deshaker: RGB32
    VDub resize (Lanczos3): RGB32
    VDub save: YV12 4:2:0 (uncompressed)
    AviSynth QTGMC: YV12
    AviSynth autolevels(): YV12
    AviSynth CNR2: YV12
    AviSynth aWarpSharp: YV12
    AviSynth UnsharpHQ: YV12
    VDub save: H.264 4:2:0
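    In script form, the idea is that after VirtualDub saves YV12 everything stays in one colorspace (a sketch; the intermediate filename is illustrative and the parameters are the ones from my script above):

    Code:
    AviSource("E:\Video8\k7_deshaken_yv12.avi")  # uncompressed YV12 saved from VirtualDub
    AssumeTFF()
    QTGMC(Preset="Very Slow")                    # YV12 in, YV12 out: no further conversions
    autolevels()
    Cnr2("xoo", 4, 2, 64)
    MergeChroma(aWarpSharp(depth=10), aWarpSharp(depth=20))
    UnsharpHQ(THRESHOLD=20, SHARPSTR=4.0)
    # encode straight to H.264 from this script - the separate third step disappears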


    Looking forward to your comments. Sorry again for the long text.
    [Sample files attached]
    Originally Posted by johnmeyer:
    Originally Posted by poisondeathray:
    I think everyone will agree that one should archive the original, in case some better algorithm develops down the road, or if you need to do other manipulations, etc. So in that respect, original integrity isn't degraded.
    ... or if future sets can actually (once again) display interlaced video natively. I still don't understand why this is not possible.
    Interlaced CRT displays scanned the face of the tube with an electron beam spot that was close to two lines in diameter, so there weren't visible black lines between the lines of a field. 1/60 of a second later the second field was drawn, offset by one scan line. That's not so different from what a simple software bob does, and it results in the same bobbing picture with flickering horizontal edges. Why would anyone want to go back to that? Smart bobbers can do much better in still parts of the picture and deliver the same or better quality in moving parts. QTGMC() does a much better job of bob deinterlacing than any TV or DVD/Blu-ray player, so it makes sense to use it now if your player can handle the frame rate (Blu-ray disc, for example, doesn't support 1080p60). Of course, for film-based material with pulldown you get better quality with an IVTC than with a smart bob.

    In the future, smart bobbers may get better than QTGMC. Using QTGMC now will lock its deinterlacing quality into your final video. So it makes sense to keep the original interlaced video around if you think you might want better deinterlacing in the future.
  11. A CRT actually did continuous scanning, so every phosphor particle represented a different moment in time. The phosphor was formulated to have a persistence of approximately 1/30 second, so the image would be retained until the next time the electron beam painted the same spot on the tube. However, since the phosphor did get dimmer while waiting for the next zap, this resulted in a perceptible flicker. In addition, some of the previous image was left behind.

    So, I'll agree that I have no desire to go back to that particular technology.

    However, interlacing itself, if it could be done line-by-line on an LCD display, could actually work extremely well.

    I totally agree that QTGMC is a great deinterlacer, although I am not sure I agree that it is better than any TV. Better than some, for sure, but maybe not all. Four years ago I splurged and got a really good TV (one that has the "soap opera" settings for film, which I never, ever use), and I think its deinterlacer, which is based on the same motion estimation algorithms used for the soap opera effect, is pretty darn good.

    I will say that in four years of watching mostly sports on this TV (lots of motion!), I've never once seen even a glimmer of artifacting.
    Originally Posted by johnmeyer:
    The phosphor was formulated to have a persistence of approximately 1/30 second, so the image would be retained until the next time the electron beam painted the same spot on the tube.
    Of the TVs I tested, the persistence was closer to 1/60 second. You can easily see this by creating a video that has one field black and the other field white. The flicker on an interlaced CRT will blow you out of the room. And you will not see a pronounced black line between successive scan lines of the white field, just a little dimming between lines.

    Originally Posted by johnmeyer:
    However, interlacing itself, if it could be done line-by-line on an LCD display, could actually work extremely well.
    Please explain how.
    Originally Posted by jagabo:
    Please explain how.
    I don't understand the question. Are you asking me to explain how interlacing works? I assume not. Are you asking me to explain how interlacing uses half the bandwidth of progressive video that has the same temporal "feel" (i.e., 60p vs. 60i), and thus makes it possible to have both 1920x1080 resolution and 60 temporal events per second in a bandwidth that is constrained by economics and government standards (the FCC in this country)?

    I was trained as an electrical engineer and when I design things, those designs always involve "engineering tradeoffs." Interlacing was invented as part of the initial television design work back in the 1930s and 1940s as a way to "have your cake and eat it too." It worked really well, and I don't ever remember reading stories about how horrible it was. Quite the opposite: it was always heralded as a brilliant piece of design.

    When the HD era was about to dawn, the original standards being proposed were analog, not digital, and interlacing was once again very much a part of the design mix because OTA was still a huge part of the equation. Then, when digital technology kept improving by leaps and bounds (remember when your new computer was always 2-3 times faster than your old one), the international standards committee recognized that HD could be a digital product. However, the basic laws of physics that governed the original analog TV work fifty years earlier were still in play as design work shifted to creating a digital standard, and the bandwidth available per channel, even in cable TV systems, dictated that to keep the 60 temporal events (or 50 in Europe) per second that gives video its special "feel," interlacing still needed to be part of the equation.

    So, 1080i was baked into the standards.

    Now here we are in 2016 and people think the decision to include interlacing in the HD TV standards back in the early 1990s was short-sighted, and that interlacing is this horrible thing. Both thoughts are wrong. The decision to include 1080i was both a brilliant and necessary decision, one which is still required today. In fact, without it, things would be worse. What do I mean by this? Well, if you go to any forum which discusses satellite and cable TV delivery (or OTA for that matter) most quality discussions are about the horrendous amount of compression being used on TV signals in order to squeeze them into the channel bandwidth available. In the case of OTA those channels are constricted by government fiat, while in the case of cable and satellite, the operators could choose to offer fewer channels and instead provide better quality on each channel, but economics and market demand continue to drive them towards offering more channels.

    Once again, interlacing is the only technology that lets them (and us) "have our cake and eat it too."

    Finally, we now have 4K. TV sets supporting this new standard have been widely available for more than half a decade. And, to the point at hand, the 4K standard does not offer any interlaced standard. Well, good you say: it's about time we got rid of that.

    Perhaps, but consider the following:

    What's the biggest issue with 4K? Finding a way to deliver it! My neighbors first showed me their new 4K TV over six years ago. As of today, they still have no way to receive any content, either via our local cable or OTA. Even DirecTV has only an extremely limited offering.

    So why, after over half a decade, are we in this situation? It's that same word again: bandwidth. It is a limited resource, and has not been expanded at all. We still have the same spectrum allocation for OTA, and the satellite and cable operators still have the exact same infrastructure as before. They have two options: give up 2-4 channels for each 4K (or 8K) channel they want to provide or ...

    ... use interlacing.

    Back to what you may have been asking, namely how different 1080i would look from 1080p if it could be displayed on an LCD that natively supported interlacing, I don't know because I haven't seen it. However, I still have four CRT TVs that I watch, albeit casually (i.e., while I'm doing something else), and I never once think about interlacing when I watch them, and instead only notice that they aren't as sharp because, of course, they are SD. I therefore suspect that 1080i displayed on a display capable of separately addressing odd and even lines would look extremely good, although perhaps the lack of persistence might introduce artifacts during fast motion that were masked on a CRT display. It is an interesting question, and I've never seen any educated discussion, based on some sort of testing, that would provide an answer.

    So, I understand the purpose of interlacing; I understand the problems it was designed to solve; and as a result I understand that it is a brilliant solution to the "engineering tradeoff" issue related to limited bandwidth and that it still might have a place in the future world of ultra HD (and beyond).
    Originally Posted by johnmeyer:
    Back to what you may have been asking, namely how different 1080i would look from 1080p if it could be displayed on an LCD that natively supported interlacing, I don't know because I haven't seen it.
    That's what I was asking. So you have no explanation of how it would work, and have never seen it, yet you claim it "could actually work extremely well."
  15. Member
    I feel that the main subject of this thread is being hijacked. Regardless of the importance of the alternative topic under discussion, if it's not about the main subject of this thread it should be carried on in a new thread. I doubt that anybody reading this thread still remembers what the original subject was.
  16. I already gave my explanation, namely that I've watched it for 60+ years and it works well, so no further explanation is needed. The only issue is whether, without the persistence of the phosphor, other artifacts might show up.
  17. I've watched for nearly 60 years too and I wouldn't want to go back to the flickery mess.
  18. Member
    If both of you could redirect your fantastic energy into giving me some advice for my project, that would be awesome.
    @FLP437 - it's a lot to read over and go through, so be patient.

    A few things to start with:

    1) It looks like you're using the YUY2 variant of QTGMC, and once you've run QTGMC the video is already progressive, so you shouldn't have ConvertToYV12(interlaced=true) after it - unless the script has already been changed from post #1.

    2) If you're using Deshaker, it only works in RGB; that's unavoidable. One trip back and forth isn't too bad, especially for this type of content, and I wouldn't be too concerned about those losses if it's done correctly. But if you haven't corrected the levels in YUV beforehand, the RGB conversion done in VDub will clip levels Y>235 and Y<16. Your original capture has levels in that "illegal" range, so you need to either adjust during capture, adjust before Deshaker in YUV, or do a different RGB conversion than the one VDub does (full range) - but that last one will stretch the contrast.
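    A minimal sketch of the "adjust in YUV first" route (the generic squeeze is shown; measure your own capture before picking values):

    Code:
    AviSource("E:\Video8\k15_tape.avi")  # YUY2 capture with luma outside 16-235
    ColorYUV(levels="PC->TV")            # squeeze 0-255 luma into 16-235 while still in YUV
    ConvertToRGB32(matrix="Rec601")      # now the standard-range conversion has nothing to clip
    # hand this to VirtualDub / Deshaker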

    3) Some people might consider deinterlacing first (since you've already decided to QTGMC it somewhere in this project), because Deshaker tends to work slightly better with progressive content. On the other hand, stabilizing first will make every temporal filter afterwards work better, including QTGMC. In some cases one order is better than the other; you'd have to try some mini tests.




    OT:
    I semi-regularly go to a high-end AV store to check displays. I'm confident that there are none under $20,000 that do a better job overall than QTGMC. There might be some custom hardware panel that some MIT tech made for himself, but there is nothing available to the general public that does a better job; if you see one, PM me. Also, QTGMC isn't "perfect" - it can make mistakes, and there are examples and situations where its motion estimation fails regardless of settings; even didee pointed this out back with TGMC. But overall it's still probably the best.
  20. Member
    Thank you so much, poisondeathray, and my apologies to johnmeyer and jagabo.
    Deshaking also poses a problem with logos and dates, which seem to wander all over the screen; I still have to work out how to address this (delogo plugins?)
    This one can be very bad. But delogo approaches can be bad too. Is it possible to toggle the time/date overlays off on some of the captures? That would be "best".

    Presumably only some of the footage from some of the sources has it, and some of it might not have it continuously? I didn't see any logo/date on the first sample. Can you provide more info on the type/distribution?





    For the exposure and levels, I agree 100% with john. It's almost impossible to tackle that in AviSynth unless it's a very simple scene; otherwise it's going to take 100x longer to do a good job there. You need realtime feedback (you can get semi-realtime feedback in AvsPmod with the sliders if you set them up), but more importantly you also need the ability to keyframe the adjustments over time, as in an NLE or compositor. That's a real pain in AviSynth if you want to do a good job. "Auto-anything" usually doesn't do that great, in AviSynth or in other tools. But you said you tried autolevels() - be aware there are several versions, and also try autoadjust() by lato. AutoAdjust has many settings, so make sure to play around with them and look at the documentation. I guess it depends on what your expectations are.
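    Something like this, going from memory of AutoAdjust's documentation (parameter names and values are from memory - double-check them against the docs):

    Code:
    AviSource("E:\Video8\k15_tape.avi")
    ConvertToYV12()
    AutoAdjust(auto_gain=true, auto_balance=true, temporal_radius=20)  # smoothed auto levels + auto white balance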
  22. You're adding a blue shift to the brights with:

    Code:
    Overlay(last, ColorYUV(off_y=-8, off_u=9, off_v=-2), 0, 0, GreyScale(last).ColorYUV(cont_y=30))
    Are the walls supposed to be bluish?
  23. Member
    Thank you again for the advice.

    Latest script version:

    Code:
    AviSource("E:\Video8\k15_tape.avi")
    ComplementParity()
    QTGMC( Preset="Very Slow",SourceMatch=3,Sharpness=0.5,TR2=2,Lossless=2,MatchEnhance=0.75, NoiseProcess=1, NoiseRestore=0.7, Sigma=1.5 )
    autolevels()
    Cnr2("xoo",4,2,64)
    MergeChroma(aWarpSharp(depth=10), aWarpSharp(depth=20)) #  white balance
    Overlay(last, ColorYUV(off_y=-8, off_u=9, off_v=-2), 0, 0, GreyScale(last).ColorYUV(cont_y=30))
    UnsharpHQ(THRESHOLD=20, SHARPSTR=4.0, SMOOTH=0.5, SHOW=false)
    The version of QTGMC I used was v3.33, which I think supports both YUY2 and YV12.

    Yes, I had not adjusted levels before the first RGB conversion; I forgot about this problem. I will try to adjust before Deshaker in YUV (the HDMI captures get the full range, and I have already captured almost everything). What would be the best method?

    I'm unaware of the possibility of toggling time/date overlays on some of the captures; how can this be done?

    Well, I do agree with johnmeyer and you as far as exposure and levels are concerned; I was only hoping to get results faster this way, even if not as good. I was thinking of using the NLE only for the final DVD versions, but if the PC versions turn out too defective I will perhaps have no other choice.

    I will try some mini tests to see what works better regarding deshaking and stabilizing.
  24. Member
    You're adding a blue shift to the brights with:

    Code:
    Overlay(last, ColorYUV(off_y=-8, off_u=9, off_v=-2), 0, 0, GreyScale(last).ColorYUV(cont_y=30))
    Are the walls supposed to be bluish?
    Well, probably not. I copy/pasted it from another script (perhaps there was a reason for it in the original) and I didn't notice. I will have to adapt it, thanks.
  25. Sorry for taking this OT. My fault for continuing to respond. I made my point and should have moved on.

    Your restoration looks very good. It's better than anything I have been doing, so I'm going to steal some of your code.

    As for the exposure flicker, you might try Deflicker. I just tried it on your clip, along with setting dct=1 in MDegrain2, and got pretty good results. I was going to upload my results until I looked at yours: even though my deflickering worked pretty well, your sharpening is much better.

    You do have a pretty noticeable color shift that makes the orange-red roof look more red. I'm not sure what caused that.
  26. Member
    @johnmeyer, it's OK, no major problem. I didn't intend to be rude with my comments in any way; if it looked that way, again my apologies, but I felt the need to break up the dialog and refocus on the subject.

    Thank you for the compliment on the restoration quality, but unfortunately none of it is my own work; I have only copy/pasted parts of other people's scripts. My only merit, if any, may have been identifying some plugins and script fragments that seem to work reasonably well.

    I have tried to make some small additional improvements based on the advice received:
    • To avoid levels crushing when converting from YUY2 to RGB (as almost all VirtualDub filters work in RGB), I used the Levels filter to squeeze luma from 9-254 to 16-235. However, Levels in VirtualDub is probably not the correct way to do this, as the filter itself works in RGB32, so values could be crushed before they are squeezed. If this is not the correct method, perhaps I could use an AviSynth script just to squeeze the luma values? As I capture analog not directly but already in digital form (HDMI 576i), I have been told not to adjust the proc amp and to use the default values instead; without adjusting, I get the full range, 1-254.
    • I still have to try Deflicker for when light conditions change suddenly; it could bring an additional benefit. I will do that as soon as possible.
    • There is some sort of ghosting or color bleeding, I don't know which for sure (see the detail image), that somehow prevents the image from looking sharper. In case it is indeed chroma shift, I included an additional line to try to correct it. It seems to have improved a little, but I don't know whether that is the reason; probably not.
    • I tried resizing only at the end of the AviSynth script (avoiding doing it after Deshaker in VirtualDub), but it didn't seem to help, so I kept resizing after deshaking in VirtualDub.
    • I tweaked the VirtualDub Deshaker filter a little more, but I think it will be difficult to see any further visual improvement. I have not tried to correct all the panning and zooming, as the movements are too aggressive; I only tried to partially reduce the shaking, which I think gives better results.

    • If I split out the under- and over-exposed scenes, VirtualDub's Levels is enough to achieve a minimally acceptable result. I think I will have to do that and join the clips afterwards, if I don't want to use an NLE for now.
    • Regarding the timestamp and logo problem with Deshaker: they are not present continuously. I will provide an example. However, I don't know how to toggle time/date overlays on the captures; could that be possible only for miniDV? I will try the mask option in Deshaker and the delogo plugin as soon as I have a little more time.
    • The auto white balance I included is part of an original multi-script from lordsmurf. However, I tried it without the second line, as jagabo pointed out the blue shift in the brights, and I don't dislike the final color.

    Code:
    ### YV12 color corrections
    # MergeChroma(aWarpSharp(depth=10), aWarpSharp(depth=20)) # better white balance than ColorYUV(autowhite), use with overlay line below
    # Overlay(last, ColorYUV(off_y=-8, off_u=9, off_v=-2), 0, 0, GreyScale(last).ColorYUV(cont_y=30)) # use with mergechroma line above

    poisondeathray, thanks for the several pieces of advice; I have not yet had time to test them all. I will try them in the coming days.
    [Attached: detail.jpg, plus sample files]
    Originally Posted by FLP437:
    To avoid levels crushing when converting from YUY2 to RGB (as almost all VirtualDub filters work in RGB), I used the Levels filter to squeeze luma from 9-254 to 16-235. However, Levels in VirtualDub is probably not the correct way to do this, as the filter itself works in RGB32, so values...
    You are right. VirtualDub's Levels filter works in RGB, so the superblacks and superwhites have already been crushed before the Levels filter sees them. Its Brightness/Contrast filter can work in YUV, though; setting Brightness to about 6 and Contrast to 87 will get you close. AviSynth has several methods of converting levels, for example the built-in ColorYUV(levels="PC->TV"). But for your sample I find that ColorYUV(gain_y=-20) is better: it brings the brights down to Y~=235 while only slightly darkening the darks. The only darks below Y=16 are oversharpening halos, and you don't really care if those get crushed.
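    In script form, the two AviSynth options (using the numbers above):

    Code:
    AviSource("E:\Video8\k15_tape.avi")
    # Option 1: compress the whole 0-255 range into 16-235 (shifts the mids too)
    # ColorYUV(levels="PC->TV")
    # Option 2, better for your sample: scale the brights down to Y~=235,
    # darkening the darks only slightly
    ColorYUV(gain_y=-20)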

    Originally Posted by FLP437:
    There is some sort of ghosting or color bleeding, I don't know which for sure (see the detail image)
    The image shows two problems: oversharpening halos and color bleeding.

    Consumer videotape formats are all low-resolution horizontally, so most players include a sharpener. Those usually cause overshoot: on a transition from dark to bright there is a band of extra dark on the dark side and extra bright on the bright side. The oversharpening isn't too bad on your video, but it's best to avoid it by turning off the sharpening filter in the playback deck, or switching to a player that doesn't do it. If neither of those is an option, you can reduce the halos (at the cost of losing a little detail and sharpness) with a dehalo filter like DeHalo_alpha(). Below is a 4x greyscale enlargement (to eliminate the color smear), before and after DeHalo_alpha(rx=4, ry=1):

    [Attachment 36703]


    Consumer tape formats also have very low color resolution, only about 40 lines across the width of the frame. That is the cause of the color smear.


    Originally Posted by FLP437:
    The auto white balance I included is part of an original multi-script from lordsmurf. However, I tried it without the second line, as jagabo pointed out the blue shift in the brights, and I don't dislike the final color.
    The first of those two lines

    Code:
    MergeChroma(aWarpSharp(depth=10), aWarpSharp(depth=20))
    has nothing to do with white balance. It sharpens the luma a little, and the chroma more. Since the chroma is so low-resolution horizontally, I would use something more like:

    Code:
    MergeChroma(Spline36Resize(width/4, height).aWarpSharp(depth=10).Spline36Resize(width,height))
    That keeps the luma intact but sharpens the chroma significantly. You might need a ChromaShift() after the operation to better align the sharpened chroma with the luma.

    The second line, Overlay(...), applies a color shift to the brights but not the darks, and the shift is inappropriate for this video - unless those white walls are supposed to be blue. In my opinion the white balance of the original clip is pretty close, and it doesn't need any adjustments.
    Have you tried deshaking in AviSynth with DePan instead of Deshaker in VirtualDub? After QTGMC(), try something like this:

    Code:
    Crop(10,2,-10,-10) # get rid of black borders, add them back later if you want
    maxstabH=50 
    maxstabV=40
    mdata=DePanEstimate(last,trust=1.0,dxmax=maxstabH,dymax=maxstabV)
    DePanStabilize(last,data=mdata,dxmax=maxstabH,dymax=maxstabV,method=1,mirror=15,cutoff=1.0,damping=1.0,prev=0,next=0,blur=0)
  29. Member
    Thank you, jagabo. I feel almost overwhelmed with so much advice and so many options; I will take some days to digest all the information received.

    Regarding luminance crushing in the YUY2-to-RGB conversion in VirtualDub, I searched and found two standard approaches just before seeing your post:

    Code:
    AviSource("C:\name.avi")
    ColorYUV(levels="PC->TV")
    ConvertToRGB() # if necessary
    or

    Code:
    AviSource("C:\name.avi")
    ConvertToRGB(matrix="PC.601")
    I used the second one and imported directly into VirtualDub in RGB, but I didn't see any visible impact on the final result; probably only scenes with superblacks or superwhites (over- or under-exposed) would show any difference. I don't think the first approach would behave any differently. I will try your proposal of ColorYUV(gain_y=-20) and see whether I can spot any visual benefit.

    Regarding the oversharpening halos and color bleeding: I tried ChromaShift, and with a C value of about -4 the color bleeding is substantially reduced. However, I don't know how to select the luma value; I tried several values but didn't see any difference.
    I tried DeHalo_alpha(); it works fine, but picture detail suffers a little. I was also thinking of trying Exorcist to see whether it does anything, though perhaps it is not applicable to this problem.

    With the Digital8 captures I don't have control over sharpening, but with the Hi8/Video8 deck I do; I have usually left sharpness at its default value, and it can be increased or decreased. Do you think that decreasing sharpness to the minimum will solve the oversharpening halo? The problem is that there is usually a toll on picture detail. I will do some mini tests, but I don't want to compromise detail too much; I have to find a balance, which is probably near the default value.

    Regarding DePan: I have never used it. Do you think I could get better results than with Deshaker? If so, I will try it.

    I have tried your code:

    Code:
    MergeChroma(Spline36Resize(width/4, height).aWarpSharp(depth=10).Spline36Resize(width,height))
    But I got a slightly worse result. I have to repeat it with the ChromaShift line included and see what happens. However, perhaps you are right that I don't even need a color correction, as the captures are not that bad (I think the capture workflow worked quite well).

    I also tried Camcorder Color Denoise in VirtualDub instead of CNR2 in AviSynth, as I had read it was better, but I didn't get better results.

    I still want to try whether anti-aliasing after deinterlacing has any positive effect. However, I think I'm approaching the limits of Video8, and any further improvements are probably very small, if any.
    Your video doesn't really have any superblacks of significance, but it does have superwhites. A lot of your superwhites are totally blown out, so bringing them down doesn't really make any difference; it's only the superwhites between Y=235 and Y=255 where detail can be recovered, and that can be hard to spot.

    DeHalo_alpha() has some controls that help keep more detail: BrightStr and DarkStr control how much bright halos are darkened and how much dark halos are brightened, while LowSens and HighSens determine how pronounced the halos have to be before they're dehaloed - something like that.
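    For example (values are just a starting point to show the knobs, not tuned for your clip):

    Code:
    # Mostly tame the bright halos, go easier on dark edges, and require fairly
    # pronounced halos before acting
    DeHalo_alpha(rx=4.0, ry=1.0, brightstr=0.8, darkstr=0.3, lowsens=60, highsens=80)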

    ChromaShift only shifts the chroma around; it doesn't sharpen it. For example, with ChromaShift(c=100) you'll see all the colors move 100 pixels to the right. The "l" value in ChromaShift() isn't luma; it's the number of lines to shift the chroma up or down, so ChromaShift(l=100) will move the chroma 100 pixels down.

    With your Hi8/Video8 deck, try the sharpness all the way down or in the middle; either of those could be the neutral setting. The detail that the sharpness filter adds is not real, and you can do better in AviSynth, i.e., you can sharpen without creating halos.
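    For instance, LSFmod is a commonly used AviSynth sharpener with built-in overshoot limiting, so it adds far less halo than a deck's edge enhancer (a sketch; the strength is a placeholder to tune by eye):

    Code:
    LSFmod(strength=100, defaults="slow")  # limited sharpening: clamps the overshoot that creates halos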

    DePan isn't necessarily better than Deshaker; in fact it has one big shortcoming - it doesn't fix rotation. But it may be good enough for you, and it would eliminate the need for VirtualDub.

    Looking again, I did notice that the blacks in your video were a little bluish at times, so maybe it could use a little white balancing in the darks.

    CCD in VirtualDub does work better for the purple/green splotches you often see in captures, but your video doesn't have much of that.