VideoHelp Forum
  1. Hi,

    Last year I saw "Research at NVIDIA: AI Can Now Fix Your Grainy Photos by Only Looking at Grainy Photos" https://www.youtube.com/watch?v=pp7HdI0-MIo and was impressed! Have not found anything like this.
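
    (For anyone curious, the technique behind that video is NVIDIA's "Noise2Noise" paper: a denoiser is trained on pairs of independently noisy shots of the same scene, with no clean target images at all. A minimal sketch of the idea in PyTorch follows; the tiny network and the synthetic noise are my own placeholders, not NVIDIA's actual code.)

    Code:
    # Minimal Noise2Noise training sketch (PyTorch).
    # Key idea: the training target is ANOTHER noisy copy of the same image;
    # with zero-mean noise, the L2-optimal prediction is still the clean image.
    import torch
    import torch.nn as nn

    denoiser = nn.Sequential(                # toy stand-in for the paper's U-Net
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1),
    )
    opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(1000):
        clean = torch.rand(8, 3, 64, 64)                   # stand-in for real photos
        noisy_in = clean + 0.1 * torch.randn_like(clean)   # noisy input
        noisy_tgt = clean + 0.1 * torch.randn_like(clean)  # independent noisy target
        loss = loss_fn(denoiser(noisy_in), noisy_tgt)      # never sees `clean`
        opt.zero_grad()
        loss.backward()
        opt.step()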

    Appreciate any comments!

    Thanks

    Ken
  2. Don't know of any publicly available software that improves video or images using the new 'AI' stuff from NVIDIA. Yes, there are a bunch of presentations, but as you noticed, there isn't any publicly available software as far as I know.
    -> your best bet is probably to ask in the NVIDIA forums...

    Cu Selur
  3. For AI image processing in general, most of the research is still done on still images. There are commercial products starting to come out; look at Topaz Labs: AI Clear, AI Gigapixel, AI Sharpen, etc.

    https://topazlabs.com/ai-clear/

    But when still-image AI algorithms are used on video, there tends to be flicker and a lack of temporal consistency, with artifacts. Some exceptions are the retiming AI research projects (because they are designed for video), but there are no usable commercial applications yet.
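
    If you want to quantify that flicker yourself, one crude check is to chart the mean absolute difference between consecutive frames of the filtered clip; a per-frame AI filter with no temporal consistency usually makes that curve far noisier than the source's. A rough OpenCV sketch (the filename is just a placeholder):

    Code:
    import cv2
    import numpy as np

    # Crude flicker check: mean absolute difference between consecutive frames.
    cap = cv2.VideoCapture("filtered.mp4")   # your per-frame-filtered clip
    ok, prev = cap.read()
    diffs = []
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        diffs.append(cv2.absdiff(frame, prev).mean())
        prev = frame
    cap.release()
    print("mean frame-to-frame difference:", np.mean(diffs))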

    "AI" is the new catch phrase . Like "laser" was in the 80's and 90's . People are going to include it just to sell stuff
  4. csnyfan
    Not from Nvidia, but there's a really good pure AI deinterlacer. Their paper makes for an interesting read even for laypeople.
    Here's a python implementation suitable for use on videos:

    https://github.com/HENDRIX-ZT2/Deep-Video-Deinterlacing
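
    I haven't used that repo, so the following is not its actual API; it's just a schematic, in PyTorch, of the approach the paper describes: a small CNN takes one interlaced frame and predicts two progressive frames, one per temporal field. The layer sizes here are made up for illustration.

    Code:
    import torch
    import torch.nn as nn

    # Schematic of the paper's idea (not the repo's code): one interlaced
    # frame in, two progressive frames out (one per temporal field).
    class DeinterlaceNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            )
            self.field_a = nn.Conv2d(64, 3, 3, padding=1)  # frame for the top field
            self.field_b = nn.Conv2d(64, 3, 3, padding=1)  # frame for the bottom field

        def forward(self, interlaced):                     # (N, 3, H, W)
            f = self.features(interlaced)
            return self.field_a(f), self.field_b(f)

    net = DeinterlaceNet()
    out_a, out_b = net(torch.rand(1, 3, 480, 720))
    print(out_a.shape, out_b.shape)   # both torch.Size([1, 3, 480, 720])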
  5. lordsmurf
    Originally Posted by poisondeathray
    "AI" is the new catchphrase, like "laser" was in the '80s and '90s. People are going to include it just to sell stuff.
    Hmmm .... "laser AI" ... <heads to trademark office>.
  6. Hi Selur,

    "...best bet is probably to ask in the NVIDIA forums" Thanks, did some quic checking and found NVIDIA Communities https://www.nvidia.com/object/nvidia-communities.html Then checked available software and found under "NVIDIA Deep Learning SDK" https://developer.nvidia.com/deep-learning-software

    Deep Learning for Video Analytics (DeepStream SDK): High-level C++ API and runtime for GPU-accelerated transcoding and deep learning inference

    Optical Flow for Video Inference (Optical Flow SDK): Set of high-level APIs that expose the latest hardware capability of Turing GPUs dedicated for computing the optical flow of pixels between images. Also useful for calculating stereo disparity and depth estimation.
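
    That SDK is exposed through NVIDIA's own C APIs; for anyone who just wants to experiment with what a per-pixel optical flow field looks like, OpenCV's Farneback implementation (software, not the NVIDIA SDK) produces the same kind of output:

    Code:
    import cv2

    # Dense optical flow between two consecutive frames, Farneback method.
    prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
    flow = cv2.calcOpticalFlowFarneback(
        prev, curr, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    print(flow.shape)   # (H, W, 2): per-pixel (dx, dy) motion vectors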

    Have no idea which forum would be the best place to ask this question.

    Appreciate any suggestions.

    Thanks

    Ken
  7. My suggestion: don't get hung up on a particular technology and instead ask for ideas on how to reduce the grain in photos. There are lots of ways to do that particular function.

    By far the best product I've used or seen is Adobe Lightroom. A professional photographer, now my son-in-law, turned me on to this four years ago. He told me to shoot in RAW rather than save to JPEG, and then import the RAW photos into Lightroom. From there, do the photo fix up.

    I've now graded over 10,000 photos in Lightroom and I can tell you from that extensive experience that it can do absolutely amazing things, including reducing grain.

    Yes, Lightroom is about as far from a free program as you can get, but if you are really serious about removing grain, it is the best solution I know of today that you can actually use, as opposed to papers or demos at SIGGRAPH that sound or look good but are nothing more than technology demos.
  8. Hi poisondeathray,

    "...when still image AI algorithms are used in video situations, there tends to be flicker and lack of temporal consistency with artifacts . Some exceptions are the retiming AI research (because they are designed for video, but no usable commercial applications yet)"

    Agree, I have done a lot of searching and have not found any either.

    Please keep me in mind when you do find a video app.

    Thanks

    Ken
  9. Hi csnyfan,

    Thanks for the AI deinterlacer tip!

    Was able to find their paper "Real-time Deep Video Deinterlacing" https://arxiv.org/pdf/1708.00187.pdf

    Have you used it on any videos? If so, what hardware did you use? Also, have you posted any before/after results?

    Ken
  10. Hi johnmeyer,

    "...I've now graded over 10,000 photos in Lightroom and I can tell you from that extensive experience that it can do absolutely amazing things, including reducing grain."

    Thanks for the tip!

    Very interesting! Can it be used on videos? If so, what version of Lightroom and what hardware are you using? Also, have you posted any before/after results?

    Ken
  11. Originally Posted by KenithOlson
    Hi johnmeyer,

    "...I've now graded over 10,000 photos in Lightroom and I can tell you from that extensive experience that it can do absolutely amazing things, including reducing grain."

    Thanks for the tip!

    Very interesting! Can it be used on videos? If so, what version of Lightroom and what hardware are you using? Also, have you posted any before/after results?

    Ken
    Lightroom does have limited video restoration capability, and of course you can always change the video into a series of stills and then batch process those, something I have done many times. I got my version of Lightroom Classic three years ago and haven't upgraded. I run on Windows 7 on a ten-year-old Intel i7 based computer. It is snappy.
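
    The round trip itself is just two ffmpeg calls around whatever batch processing you do — something like this (sketched via Python's subprocess; the filenames are placeholders, and the frame rate must match your own source):

    Code:
    import os
    import subprocess

    # Video -> numbered stills; batch-process the PNGs (Lightroom, etc.);
    # then stills -> video again. Assumes ffmpeg is on your PATH and the
    # source is 29.97 fps -- adjust -framerate to match your footage.
    os.makedirs("frames", exist_ok=True)
    subprocess.run(["ffmpeg", "-i", "input.mp4", "frames/%06d.png"], check=True)

    # ... denoise/grade the PNGs in frames/ with your tool of choice ...

    subprocess.run([
        "ffmpeg", "-framerate", "30000/1001", "-i", "frames/%06d.png",
        "-c:v", "libx264", "-crf", "18", "-pix_fmt", "yuv420p", "output.mp4",
    ], check=True)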

    As for posting my results, I never felt the need to show how it grades photos or video because there are hundreds, if not thousands, of such posts all around the Internet. I don't have any secret recipe or workflow that gives me any better results than the next guy.

    [edit]Here's a link to one of dozens of tutorials; it was just the first one that came up in a Google search. Go to the 2:45 mark, which is where he shows how it works and what sort of results you get.

    https://helpx.adobe.com/lightroom-cc/how-to/reduce-noise-in-photos.html
  12. johnmeyer,

    Thanks for the link! Watched the video and agree about the noise reduction.

    "I run on Windows 7 on a ten-year-old Intel i7 based computer. It is snappy."

    Question: What MP size photos are you denoising?

    Ken
  13. Hi

    To date I have not found any publicly available software that improves video or images using the new 'AI' from NVIDIA, but while searching with the keywords "photo super resolution software" I did find "Yandex Introduces DeepHD Super-Resolution Technology" https://www.youtube.com/watch?v=8MTs7g01UY8 I could notice a difference, so I checked "Yandex DeepHD Technology Revolutionizing Photo & Video Quality for Yandex Services" https://yandex.com/company/blog/yandex-deephd-technology-revolutionizing-photo-video-q...andex-services

    The most recent post I found was "One example of these impressive AI advances in 2018 was the new neural-network based DeepHD super-resolution technology for upscaling photos and videos. In building DeepHD, the Yandex computer vision team created the first video super-resolution technology using GANs that offers users better quality content in real time. As more and more users shift to their laptops or phones for content streaming, in 2018, DeepHD helped users across Yandex’s streaming services rediscover their favorite classics in stunning quality and watch current programs with a significantly clearer picture." https://yandex.com/company/blog/parakhin-2018

    Further searching found "We're in UltraHD Morty! How to watch any movie in 4K" by sadfun February 27, 2019 https://habr.com/en/post/441918/ "You’ve probably heard about Yandex’s DeepHD technology they once used to improve the quality of old Soviet cartoons. Unfortunately, it’s not public yet, and we, regular programmers, don’t have the dedication to write our own solution. But I personally really wanted to watch Rick and Morty on my 2880x1880 Retina display. And I was deeply disappointed, as even 1080p video (the highest available for this series) looks really blurry on a Retina display!.....I decided I want to see Rick and Morty in 4K, even though I can’t write neural networks. And, amazingly, I found a solution. You don’t even need to write any code: all you need is around 100GB of free space and a bit of patience. The result is a sharp 4K image that looks better than any interpolation."

    Would be interested in others' comments.

    Sample video at https://yadi.sk/d/3TvUrOPQXS_Wrg

    Ken
  14. Originally Posted by KenithOlson
    Question: What MP size photos are you denoising?
    Not that it matters, but I denoise photos from many cameras. The maximum is 3400x5600.
  15. johnmeyer,

    Thanks again for the info!

    From "We're in UltraHD Morty! How to watch any movie in 4K" sadfun mentions "Adobe actually trained a proper neural network that can “complete” an image when you upscale it within the application!...I left my laptop to process 31 000 frames through the night with a simple instruction: “upscale 2x and save”. The next morning everything was ready. I had another folder full of images, but now in 4K and taking up 82GB of disk space."

    He also gives a comparison of the original and five software options (with the image zoomed in to around 800%).
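
    His workflow is essentially this loop over a folder of extracted frames. The Pillow Lanczos resize below is only the plain-interpolation baseline the article compares against; the article itself swaps in an AI upscaler at that step:

    Code:
    import os
    from PIL import Image

    # Upscale every extracted frame 2x into a second folder, then reassemble.
    # Image.resize with LANCZOS is just the interpolation baseline; substitute
    # your AI upscaler of choice for this call.
    os.makedirs("frames_2x", exist_ok=True)
    for name in sorted(os.listdir("frames")):
        img = Image.open(os.path.join("frames", name))
        big = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)
        big.save(os.path.join("frames_2x", name))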

    Ken
  16. I'm not sure upscaling is worth all the additional storage space. The only thing upscaling can do, and only if you use really good software, is to reduce jaggies on diagonal lines. It will not improve or add to detail. My recommendation would be to keep the originals and then, if you really need to have a high-res version of a few photos, run the upscaling, on just those photos, at that time.

    If you want to actually improve your photos, grade them using Adobe Lightroom. Depending on how the photo was created, you can sometimes make massive improvements. I just graded 6,000 photo scans my cousin sent me that came from prints and negatives he found in his aging parents' apartment. The difference in that effort is night-and-day. By contrast, up-resing will buy you almost nothing, and will consume vastly more disk space.

    [edit]
    Here is a before/after comparison showing some of the things you can do with Lightroom.

    Before
    [Attachment 48642]

    After
    [Attachment 48643]
  17. johnmeyer,

    "If you want to actually improve your photos, grade them using Adobe Lightroom. Depending on how the photo was created, you can sometimes make massive improvements....By contrast, up-resing will buy you almost nothing, and will consume vastly more disk space."

    Agree with both your comments!

    Recently Topaz Gigapixel AI was recommended. Went to the website https://topazlabs.com/gigapixel-ai/ and the samples looked good, so I decided to check reviews. Found a Topaz AI Gigapixel review http://www.northlight-images.co.uk/topaz-ai-gigapixel-review/ Seems like there are a number of variables that influence the final result. Was impressed with the enlargement improvement on a photo from an Olympus C1400XL (a 1.2 MP camera): "One of my first observations was that a lot of JPEG compression artifacts had gone in the upscaled version. Not too bad for a 1.2MP camera" so cleaning up these problems may be a significant factor.

    Ken
  18. Everyone,

    Rod Lawton in his January 18, 2019 Topaz Labs AI Gigapixel 2.1.1 Review http://lifeafterphotoshop.com/topaz-labs-ai-gigapixel-2-1-1-review/ "AI Gigapixel’s blow-ups are sometimes jaw-dropping, sometimes just dodgy. It’s not magic, it’s a clever type of detail substitution we haven’t seen before. If you must use this kind of software, then this is perhaps the most convincing on most kinds of subject, and that’s probably all you can say about it...It works well with natural, random, textures where it’s easy to accept the software’s best guess at what the details probably looked like. It’s a lot less effective on human faces and on man-made subjects where its tiny squirly polygons can start to look quite artificial. Sharp, but artificial."

    Since he tested v2.1.1 and the latest release is v4.0.2, I am wondering how much improvement has occurred. Googled but found no reviews of v4.0.2, then checked YouTube and only found "A First Look at Topaz's Gigapixel AI 4" https://www.youtube.com/watch?v=smRbKg6gqq4

    Ken
  19. Originally Posted by csnyfan
    Not from Nvidia, but there's a really good pure AI deinterlacer. Their paper makes for an interesting read even for laypeople.
    Here's a python implementation suitable for use on videos:

    https://github.com/HENDRIX-ZT2/Deep-Video-Deinterlacing
    Has anybody tried this AI deinterlacing? I'm wondering, is it better than QTGMC?
  20. https://videoai.topazlabs.com/

    Anybody tried it?

    They have a version for video now and it looks very promising.
  21. I would love to try it. Does anyone have a mirror link to Topaz' Video AI Beta? Their page is nonfunctional.
  22. Page works fine here...
  23. Originally Posted by mammo1789
    https://videoai.topazlabs.com/

    Anybody tried it?

    They have a version for video now and it looks very promising.
    Yes, and it's amazing when you get a good source to work with. Not good as in high definition, but untouched and clean; then it really shines. So a clean SD file will work a lot better than an SD file that has been heavily compressed and fiddled with using filters and edge enhancers. It doesn't work well with interlaced files, as when you upscale, it also upscales the comb effect.

    You need a fast NVIDIA GPU though, as doing an hour of footage can take 12 hours or more with an entry-level RTX graphics card, and that's just enhancing SD footage by 200%. Getting it to full HD or 4K will take a lot longer.
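
    If you do have interlaced sources, deinterlace before upscaling so the upscaler never sees the combing. QTGMC is the usual quality pick, but as a quick stand-in, ffmpeg's yadif bob deinterlacer works (sketched via Python's subprocess; the filenames are placeholders):

    Code:
    import subprocess

    # Deinterlace BEFORE upscaling so the upscaler never sees comb artifacts.
    # yadif=1 is ffmpeg's bob mode: one progressive frame per field, so the
    # output keeps full motion at double the frame rate.
    subprocess.run([
        "ffmpeg", "-i", "interlaced.mpg",
        "-vf", "yadif=1",
        "-c:v", "libx264", "-crf", "18", "progressive.mp4",
    ], check=True)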