VideoHelp Forum
  1. AI actually improves on traditional upscaling in two broad ways.

    1. It uses the photo plus data from the photos it was trained on to predict what should be there, drawing in new in-between pixels to improve image quality.
    https://www.pixelmator.com/blog/2019/12/17/all-about-the-new-ml-super-resolution-featu...ixelmator-pro/
    AI is also applied across multiple frames to recreate image detail that can't be recovered from a single frame alone.
    https://venturebeat.com/2020/03/06/researchers-state-of-the-art-ai-upscales-and-enhances-videos/amp/
    (Notice the text: it would be impossible to create such sharp text from one fuzzy frame.)


    2. AI is also pushing into territory where previously only humans could fix things: actually patching destroyed areas.
    https://www.google.com/amp/s/www.sciencealert.com/nvidia-neural-network-editing-progra...inpainting/amp
    No amount of upscaling or traditional processing could fix damage like that; in the past, only manual painting by a skilled artist could.

    Coupled with an enormous online image database, partially blocked objects of any kind (cars, buildings, people, etc.) can be replaced with scene-appropriate substitutes.
    https://m.youtube.com/watch?v=gg0F5JjKmhA
    Brad (formerly 'vaporeon800')
    Join Date: Apr 2001
    Location: Vancouver, Canada
    Here is a Hi8 comparison I made years ago: DV conversion vs lossless capture - comparison (Hi8) [WARNING: auto-load!]

    Theoretically I could post a Video8 comparison that would better represent the type of difference you can expect, but I have too many other projects on my To-Do List. If you search this forum, there are some sub-optimal comparisons out there specifically for NTSC Video8 (not all variables controlled). The user FLP347 also posted a ton of comparisons of different methods for PAL Video8, but PAL DV uses 4:2:0 chroma subsampling rather than NTSC DV's 4:1:1, so his DV-compressed Video8 comparisons in particular are relevant for compression artifacts but not for color.


    Did you have the camera's (line) TBC turned On when you captured this sample? The left border looks awfully jittery.

    For some reason, the luma has 120Hz flicker. This Avisynth script probably won't mean anything to you, but for other readers:
    Code:
    SeparateFields()                     # 480i frames -> 240-line fields @ 59.94
    AssumeFrameBased().SeparateFields()  # treat fields as frames, split again -> 120 lines @ 119.88
    If we look at the fields, we see varying brightness from scanline to scanline. If we separate the 240@60 fields again, down to 120@120, the lines in the image disappear and we see the varying brightness as flicker.


    I would not use the Shader as suggested by babygdav to expand the levels. He misunderstands Sony's 7.5 IRE error. Their DV encoding uses ~16-255, not 16-235. If you expand at both ends, you will lose a lot of bright details. The proper fix is to lower the black level.
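    To illustrate, here is a minimal AviSynth sketch of that fix (the file name and exact offset are assumptions; tune by eye against a waveform):
    Code:
    AviSource("capture.avi")  # hypothetical DV capture with the black level sitting too high
    # uniform luma shift downward: fixes the black level without clipping
    # highlights, unlike an expansion at both ends
    ColorYUV(off_y=-16)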
    Last edited by Brad; 19th Apr 2020 at 13:11. Reason: 120Hz flicker
    My YouTube channel with little clips: vhs-decode, comparing TBC, etc.
  3. https://forum.videohelp.com/threads/376945-Conventional-analog-capture-workflow-vs-alt...ods-comparison

    https://forum.videohelp.com/threads/376945-Conventional-analog-capture-workflow-vs-alt...e3#post2463190

    https://forum.videohelp.com/threads/376945-Conventional-analog-capture-workflow-vs-alt...e2#post2436046

    https://forum.videohelp.com/threads/376945-Conventional-analog-capture-workflow-vs-alt...e2#post2461554


    Beyond that, good luck.
    The rabbit hole can get deep as you try to figure out which components you can afford and how to use them to achieve the best analog capture results.

    You can use the capture comparisons to see which components/methods have worked better for others.

    Some are obvious: a TBC for the input signal, and a lossless codec for the capture prior to any post-processing (if desired).

    The rest, you can glean from that thread to see what has worked well for others.

    ....

    And no, I haven't found a guide yet that tests and shows that A, B, and C are the top three choices in the world for digitizing analog video with the very best output.

    You'll generally read feedback that a device is either crap, OK, or great; you can then look at reviews and captures to confirm before purchasing and trying it.

    There are tons of past posts covering each step (e.g., which codec to capture to and why) in these forums.
    Last edited by babygdav; 19th Apr 2020 at 13:27.
  4. Originally Posted by Brad View Post
    Did you have the camera's (line) TBC turned On when you captured this sample? The left border looks awfully jittery.
    I checked the camera, and both TBC and DNR are on. However, I had saved numerous versions while searching for quality, so I'm not 100% sure how the posted file was output from the camera. I will start taking better notes!... Thanks for the input!
  5. Originally Posted by Brad View Post
    I would not use the Shader as suggested by babygdav to expand the levels. He misunderstands Sony's 7.5 IRE error. Their DV encoding uses ~16-255, not 16-235. If you expand at both ends, you will lose a lot of bright details. The proper fix is to lower the black level.
    babygdav actually recommended SHADER (16-255 to 0-255). However, I only found the 16-235 option in MPC-HC. I had assumed it was a typo, but I guess not.
  6. Originally Posted by babygdav View Post
    Beyond that, good luck.
    The rabbit hole can be big trying to figure out all the components you can afford and how to use them to achieve the best analog capture results.
    ... I'm headed down that rabbit hole!!!!!
  7. Originally Posted by GrouseHiker View Post
    Originally Posted by Brad View Post
    I would not use the Shader as suggested by babygdav to expand the levels. He misunderstands Sony's 7.5 IRE error. Their DV encoding uses ~16-255, not 16-235. If you expand at both ends, you will lose a lot of bright details. The proper fix is to lower the black level.
    babygdav actually recommended SHADER (16-255 to 0-255). However, I only found the 16-235 option in MPC-HC. I had assumed his was a typo, but I guess not.
    Sorry. My typo.
    16-235

    Everything NTSC analog uses that range (in general).
    It's just one of those ancient hacks the engineers applied to NTSC (like 29.97 fps) that drives everyone in today's digital world nuts, but it made things work cheaply instead of forcing everyone to toss out all the existing video equipment and TVs.

    https://forum.videohelp.com/threads/333661-YCbCr-16-235-to-RGB-0-255-and-PAL-NTSC-diff...-%287-5-IRE%29

    https://www.adamwilt.com/DV-FAQ-tech.html

    ......

    Remember that 0-255 is merely the digital 8-bit range available to express how bright a pixel is.

    When converting analog to digital, the designers of such hardware knew PAL/SECAM analog video signals had higher and lower signal peaks than NTSC.

    They set the max, 255, at the overall PAL/SECAM/NTSC peak, which is 133 IRE for PAL. The NTSC peak of 120 IRE is lower, so it sits at 235, not 255, in the digital world. The lower end was mapped similarly.
    Last edited by babygdav; 19th Apr 2020 at 16:46.
  8. Originally Posted by GrouseHiker View Post
    Originally Posted by babygdav View Post
    Beyond that, good luck.
    The rabbit hole can be big trying to figure out all the components you can afford and how to use them to achieve the best analog capture results.
    ... I'm headed down that rabbit hole!!!!!
    Good luck.
    Amazon shipping sucks right now due to coronavirus delays.

    Companies like bandh.com are good alternatives to try, with decent return policies for equipment that doesn't work out for you. Craigslist sometimes has used video gear for sale from people who are finished with it, too.

    There are lots of videos here and on YouTube showing what captures from various equipment look like (made by people who have spent time getting the best they can out of that equipment).

    Then you just need to think about how "perfect" a capture you're after. E.g., in post-processing, if you choose to deinterlace, do you spend a week tweaking settings and doing A/B comparisons to find the best result you can, or just go with what others have posted and accept that?

    There are lots of posts on that, too, which can save you the headache of figuring out what others have found most effective to look at first.

    E.g. https://m.youtube.com/watch?v=ZRJqxIbi5UA
    From 2:45 on. I'm not saying this is the "best possible" way on the planet, but it's a way to determine what differences you notice among the capture methods. Ask yourself whether you even notice the difference and whether it matters given your needs and source, then pick hardware and a setup based on that to start.

    Honestly, I know some people for whom SD vs. 1080 vs. 4K isn't something they notice or care about, even after you point out how to tell the difference. I.e., some see the pixels; others see the content.

    ...

    Some things, like a TBC, generally only improve the video no matter what.
    https://m.youtube.com/watch?v=japFCgCVd00

    Even if the video is stable, a TBC can help prevent dropped frames during capture.
    https://m.youtube.com/watch?v=SFpkNUoq-ac

    ...

    https://m.youtube.com/watch?v=ZxDqTxPj78M

    Notice how some poor capture devices can introduce their own problems.

    ....

    If your budget happens to be effectively unlimited, it's easier to outsource to a commercial Hollywood video restoration house that handles TV series and the like.

    E.g.
    https://www.illuminatehollywood.com/services/r-and-d-lab/

    They can do a whole lot more, faster, simply because they've got experienced archivists on staff, tons of optimized equipment, and well-established workflows.

    Not just a great capture: color correction, noise and grain removal, image repair, etc.

    https://m.youtube.com/watch?v=eqFwo4_sanU
    https://cinnafilm.com/abq-firm-takes-award-winning-film-tech-to-the-cloud/
    https://m.youtube.com/watch?v=oq4wE_XZjYk

    (Aside: of course, the thing that sucks about these tools is that Hollywood often tries to pass off a 2K master in a 4K Blu-ray release. You have to search blu-ray.com to verify a disc is actually a true 4K release.)
    Last edited by babygdav; 19th Apr 2020 at 17:45.
    Brad (formerly 'vaporeon800')
    Join Date: Apr 2001
    Location: Vancouver, Canada
    Originally Posted by babygdav View Post
    Remember that 0-255 is merely the digital 8-bit range available to express how bright a pixel is.

    When converting analog to digital, the designers of such hardware knew PAL/SECAM analog video signals had higher and lower signal peaks than NTSC.

    They set the max, 255, at the overall PAL/SECAM/NTSC peak, which is 133 IRE for PAL. The NTSC peak of 120 IRE is lower, so it sits at 235, not 255, in the digital world. The lower end was mapped similarly.
    There are too many half-truths here to try to correct. But here is some real information:

    Studio 16 = PC 0 = black = 7.5 IRE NTSC (North America) = 0 IRE all other systems
    Studio 235 = PC 255 = white = 100 IRE all systems
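    (As a formula, that mapping is PC = (Studio - 16) x 255 / 219: studio 16 maps to PC 0, and studio 235 maps to PC 255.)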

    Many North American devices, especially DV devices, incorrectly use Japanese 0 IRE, so even though the specs say USA = 7.5, this needs to be verified for every source. My own full-size VHS recordings (RCA '80s camcorder purchased in Canada) are 0 IRE.

    Above 235 is headroom. I don't think I've seen any DV recording that clamps at 235 instead of utilizing as much headroom as possible to retain highlights before completely clipping. All of my consumer analog camcorder recordings, and most (all?) of the samples I've seen posted, do the same thing but in analog space (>100 IRE is recorded to tape for super-bright parts of the image).

    Converting directly from these >100 IRE recordings to PC (0-255) display without levels adjustments will destroy super-whites that one could see direct from tape playback if they compensated their monitor accordingly.
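    If you want to check what a given capture actually contains, here is one rough AviSynth approach (the file name is an assumption; YPlaneMin/YPlaneMax are runtime functions):
    Code:
    AviSource("capture.avi")   # hypothetical capture to inspect
    # stamp each frame with its luma extremes: max > 235 means super-whites
    # are present; min < 16 means blacker-than-black
    ScriptClip("""Subtitle("Ymin=" + String(YPlaneMin()) + "  Ymax=" + String(YPlaneMax()))""")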
    My YouTube channel with little clips: vhs-decode, comparing TBC, etc.
  10. True. There's far more than simply 0 or 16 = true black.
    But it's sufficient for starting to understand the basics, since most people aren't going to hook their video players up to a waveform monitor, proc amp, etc. to get everything "broadcast / spec legal" prior to digitization.

    Yes, exceeding spec was common in the analog days; that's analog for you. It goes all the way back to the start, when even audio VU meters had 0 dB and then + values beyond it.

    Luckily, digital recordings on modern cameras have completely eliminated that issue: 0 = black, 255 = white, and there are no super-blacks or super-whites at all to trouble the consumer user.

    Anyway, the OP will need to figure out what basic capture workflow and equipment to start with, test, and go from there.
    Cornucopia
    Join Date: Oct 2001
    Location: Deep in the Heart of Texas
    No, that is NOT true, @babygdav.

    Digital cams - pro and consumer - almost always record in YUV space using 16=black and 235=white with both overshoot and undershoot. How much over/under depends on the cam, the user, the settings, and the scene.

    Where are you coming up with this stuff??

    Scott
    I'm not talking about older cameras.

    Let's say you use a modern consumer camera: Panasonic GH5, Sony A7 III, etc.

    At most you have control over the basics (knee, log format, etc.), but the captured video goes straight into an H.264/H.265 MP4 file (or, with certain cameras and/or external recorders, ProRes, raw, etc.).

    You're not recording into a format that has overshoot. Blow out the highlights (past 255) and the detail is gone; nothing is recoverable.

    In older cameras there was an analog path between the sensor and the digitizer, but with the latest designs, like Sony's memory-integrated sensors, that's gone too. Light hits the sensor, the signal is converted immediately and dumped straight into on-sensor memory. Whatever analog overshoot used to be present in the old days is simply gone in these cameras.
    https://semiaccurate.com/2017/02/09/sony-stacks-memory-sensors-fight-rolling-shutter/


    ...

    Of course, a professional camera/camcorder has all sorts of gain, IRE setup, etc. controls beyond what a consumer-level camera offers, but these consumer cameras use the full 0-255 range in the MP4 files they output.
    Last edited by babygdav; 20th Apr 2020 at 02:40.
  13. Originally Posted by Brad View Post
    ... Sony's 7.5 IRE error. Their DV encoding uses ~16-255, not 16-235. If you expand at both ends, you will lose a lot of bright details. The proper fix is to lower the black level.
    I'm probably not yet worthy of jumping into this level of discussion, but... in the interest of learning:

    So... if the video output/display range (terminology?) is expanded too much, the bright whites are clipped? ...e.g., if the original video contains information in the 16-255 range, expanding the display range so that 235 is extrapolated to 255 would clip all information originally encoded in the 235 to 255 range?

    I noticed this morning MPC-HC has another display setting: "Render Settings">"Output Range">"0-255" or "16-235." I assume this will have the same effect as the "Shaders" option.

    Also, does "lowering the black level" mean only expanding the low end? ...extrapolating the 16 to 0?

    Is there any way to determine the actual content range of a video other than an expensive waveform scope? ...e.g., software?
    I just ordered this capture card... arriving Friday. I decided to take the plunge, good or bad. B&H Photo was about the same delivery time as Amazon.

    http://magewell.com/products/pro-capture-hdmi
    Brad (formerly 'vaporeon800')
    Join Date: Apr 2001
    Location: Vancouver, Canada
    Did you buy it based on FLP347's threads?

    Originally Posted by GrouseHiker View Post
    I noticed this morning MPC-HC has another display setting: "Render Settings">"Output Range">"0-255" or "16-235." I assume this will have the same effect as the "Shaders" option.
    They do, but the settings stack upon one another. The typical setting is for the renderer to expand 16-235 -> 0-255. Adding the Shader as suggested earlier then expands a second time.

    The luma values in the Sony passthrough DV are black ~32 and brightest superwhites ~245-255. The renderer normally expands this: now black ~16 RGB, white 255, everything that was 235-254 is flattened to pure 255. Adding the Shader into this chain expands this again: now black is finally ~0 RGB, but everything that was originally ~216+ is clipped.
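    You can roughly simulate the stacking in AviSynth (assumed file name; each Levels call mimics one 16-235 -> 0-255 expansion):
    Code:
    AviSource("capture.avi")                    # hypothetical DV capture, black ~32
    Levels(16, 1.0, 235, 0, 255, coring=false)  # the renderer's expansion
    Levels(16, 1.0, 235, 0, 255, coring=false)  # the Shader on top: original ~216+ clips to 255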

    Originally Posted by GrouseHiker View Post
    if the original video contains information in the 16-255 range, expanding the display range so that 235 is extrapolated to 255 would clip all information originally encoded in the 235 to 255 range?
    Yes, exactly. Sometimes this doesn't matter; the super-whites may be unimportant non-details. Software waveform monitors & histograms and your eyes/judgement are the tools to use to determine this.

    does "lowering the black level" mean only expanding the low end? ...extrapolating the 16 to 0?
    In this case I would shift all pixel values down by (let's say) 15. In addition to correcting the black level, the brightest superwhite is then 240 instead of 255. I might also further scale the video down using a gain adjustment, so that it never hits above 235. But it depends on scene content.
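    In AviSynth terms, that could look something like the following sketch (the exact numbers depend on your capture):
    Code:
    AviSource("capture.avi")  # hypothetical Sony passthrough DV capture
    ColorYUV(off_y=-15)       # shift all luma down: black ~32 -> ~17, superwhite 255 -> 240
    # optional gain reduction (multiplier is (256 + gain_y) / 256, so -16 = ~0.94x)
    # to keep scene peaks at or below 235
    ColorYUV(gain_y=-16)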

    Is there any way to determine the actual content range of a video other than an expensive waveform scope? ...e.g., software?
    Avisynth is free and what you'll want to have installed to do typical restoration anyway. Its mislabelled Histogram() filter is a waveform monitor.

    A hardware waveform monitor tells you what's actually coming from the player. A software waveform monitor tells you what levels you actually captured. You shouldn't need the hardware.
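    For example (a minimal sketch with an assumed file name):
    Code:
    AviSource("capture.avi")  # hypothetical capture to inspect
    # "classic" mode draws a per-scanline luma plot on the right edge,
    # with the 16-235 studio range marked so out-of-range values stand out
    Histogram(mode="classic")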
    My YouTube channel with little clips: vhs-decode, comparing TBC, etc.
    Originally Posted by GrouseHiker View Post
    Is there any way to determine the actual content range of a video other than an expensive waveform scope? ...e.g., software?
    Software can check the digitized signal after capture.

    https://m.youtube.com/watch?v=fX3ZUKS8coQ
    https://www.blackmagicdesign.com/products/davinciresolve

    Pro-grade Blackmagic DaVinci Resolve is free and can do all of that with its built-in scopes.

    ....

    You can also use other free software mentioned here and elsewhere; there are tons of similar tools that do the same job, whether all-in-one or as individual utilities.

    That said, an all-in-one video editor like Resolve can do a LOT more than color correction and grading: audio cleanup, video editing, titling, graphics, effects, and final output to a slew of video formats, all in one program.

    So if you're doing more than a basic adjustment of levels and color before archiving, it's worth looking into as your primary tool.
    ....

    Pulling 16 down to 0 for NTSC SD video captures, for viewing on a PC or modern HDTV, will correct the grayness / lack of contrast you see.

    Don't do this with HDTV / HD sources, because there the 16 = black convention was eliminated; 0 = black on HDTV broadcasts, HDTV monitors, Blu-ray discs... in general.
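    In AviSynth, that pull could look like this (assumed file name; low end only, top end left alone):
    Code:
    AviSource("capture.avi")  # hypothetical NTSC SD capture
    # map luma 16 -> 0 while keeping 255 at 255; coring=false avoids
    # pre-clamping the input to 16-235
    Levels(16, 1.0, 255, 0, 255, coring=false)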

    As for whites, it depends on how high the peaks were. You can use a scope to verify clipping as you pull the whites up, too.

    Often, such adjustments are done by eye.
    That's called color correction and grading.

    E.g., if you've got a little sun in your video, you can blow it out more than you would a wedding dress, because you expect the sun to be BRIGHT and there isn't much detail to worry about. A wedding dress has detail in the fabric, so you don't blow it out; sometimes you even set it well below maximum, depending on the mood.

    You can always import a test target to compare by eye when adjusting your videos to suit your taste.
    http://www.gballard.net/photoshop/pdi_download/
    The PhotoDisc test target is just one of many you can use.

    I.e., you know the target's faces and colors are generally well exposed, so comparing your video adjustments against it lets you gauge how good your video looks WITHOUT opening another rabbit hole: color calibration of your monitor, which requires more hardware, color-accurate lighting, neutral room wall color, etc.
    If you desire super-accurate color grading and correction, though, you'll eventually have to go there...
    Last edited by babygdav; 20th Apr 2020 at 14:07.
  17. Originally Posted by Brad View Post

    For some reason, the luma has 120Hz flicker. This Avisynth script probably won't mean anything to you, but for other readers:
    Code:
    SeparateFields()
     AssumeFrameBased().SeparateFields()
    If we look at the fields, we see varying brightness from scanline to scanline. If we separate the 240@60 fields again, down to 120@120, the lines in the image disappear and we see the varying brightness as flicker.
    I am further down the rabbit hole and have captured via S-Video using the Magewell Pro Capture HDMI. I am noticing a flicker and wanted to see whether the "varying brightness" your Avisynth script revealed may be causing the same issue in this file.

    Would you recommend ffmpeg or VirtualDub as the easiest approach for running the Avisynth script? I have been able to run one ffmpeg filter (idet), so I have at least minimally started using ffmpeg...

    Update: I got this Avisynth script to run in VirtualDub:
    Code:
    avisource("test1.avi")
    SeparateFields()
     AssumeFrameBased().SeparateFields()
    The video was squeezed vertically and the flicker was more pronounced.
    Last edited by GrouseHiker; 28th Apr 2020 at 11:29.
  18. Attached is the sample I was testing... What I interpret as flicker is most obvious in the wood trim in the background.
    Image Attached Files
  19. ...I hope I'm doing this right...
    When I run Brad's script (one pass) to create a 60 fps video, I'm seeing accurate movement (the hand) when displayed 4:3, but the stationary objects are jumping up and down. Any ideas on this issue?

    Update: After further study, the vertical jumping appears to be a remnant of Avisynth's SeparateFields, maybe caused by the different vertical position of the noise at the bottom of the frames. I don't see it in the original video.
    Image Attached Files
    Last edited by GrouseHiker; 28th Apr 2020 at 15:47.
    dellsam34 (Capturing Memories)
    Join Date: Jan 2016
    Location: Member since 2005, re-joined in 2016
    Something is wrong with your captures. In the attached screenshot I see some weird thick white horizontal lines separated by very thin black lines; that's not how resolution lines should look. They should look like the ones shown on the leg of the chair.

    Image Attached Thumbnails: res.jpg
  21. Thanks,

    I had noticed those lines but didn't know enough to realize they were weird. That one was a DV capture via FireWire. The one I just posted ("xmas 4-2-2...") is an S-Video capture through the Magewell card; I didn't notice lines in that recent upload. Both were from the same camera (same 8mm tape).

    Brad had noticed the 120 Hz flicker in that DV capture you're looking at, but I guess those horizontal lines are a much higher frequency than 120 Hz.

    I was wondering if maybe the camera's TBC or DNR might be producing these issues. I'll experiment with that when I get back home.


    MOVED this weird lines issue to https://forum.videohelp.com/threads/396961-Weird-Lines-in-8mm-Captures
    Last edited by GrouseHiker; 29th Apr 2020 at 08:01.


