http://neuron2.net/hotspot/hotspot.html
So I just fixed a home-made video, shot with a modern camera, that had smudges around the perimeter, concentrated in the corners -- probably from the owner wiping the lens clean with a napkin/rag and pushing all the crap into the corners.
It isn't really visible until deshaking. I fixed it with several applications of the hotspot filter for VDub (link above), but that was inconvenient to work with and didn't fix it perfectly because I had to create manual masks. Still, it was good enough.
What do you gurus use?
In severe cases (like the scenes in The Blair Witch Project shot in the dark with a flashlight-mounted camera) I would blur all the frames with a high radius and then subtract the blur from the normal frames, but I hate doing this because it flattens the intensity and makes everything look unnatural. Correcting the luma afterwards fixes it for one thing (human skin) and then screws it up for another (rivers, skies, grass).
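For illustration, a minimal AviSynth sketch of that blur-then-subtract approach (the file name and blur strength are placeholders; Blur() is capped at 1.58, so it's stacked to fake a large radius):
Code:
src  = AviSource("home_video.avi")             # placeholder source
flat = src.Blur(1.58).Blur(1.58).Blur(1.58)    # stacked Blur() calls stand in for a high-radius blur
Subtract(src, flat)                            # normal minus blurred, centred on grey -
                                               # this is the step that flattens the intensity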
Is it only darker, like a vignette?
But you mentioned "smudges" - so is it blurry as well as darker?
What are the temporal characteristics? If it's "crap" pushed to the corners, was it something on the lens but stationary (so only the camera movement causes it to "move")? Or did wiping or something else occur and the characteristics change over time? Or is it something else entirely?
Why isn't it visible pre-deshaking?
Oh man, when I read my own post I knew how I would respond if the roles were reversed. I would be far less nice and would ask for screenshots right away before I'd help.
I admire how you try to communicate as clearly as possible while being careful not to be a dick about asking for photos.
It's not really blurry, just darker, possibly with a slight contrast change. It's stationary.
Screenshots, one original, one deshaked:
I'd probably use an overlay with a feathered mask - i.e. composite a brightened version over the corners. It's the same idea as hotspot (I think; I've never used that vdub filter).
Why are the dark corners moved in after deshaking? Did you have some edge compensation setting like mirror edges, something like that? If you use zoom for compensation, it might even get rid of the vignette!
Problem is, the entire perimeter is slightly darker, not just the corners, but the corners are the most visible. I couldn't manually fix the non-corner sides, but they aren't that important, so meh.
Same reason a shaky video with a logo, once deshaked, becomes a video with a dancing logo. My video has dancing corner smudges.
EDIT: What I wanted to ask was: is there a tool that can average all the frames in the video into one, so stationary anomalies can be properly and precisely isolated?
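For what it's worth, one way to approximate that kind of long-run average in AviSynth is TemporalSoften with its thresholds maxed out - a rough sketch only (the file name, radius and frame number are made up, and TemporalSoften only averages a window around each frame rather than the literal whole video):
Code:
src    = AviSource("home_video.avi")               # placeholder source
avg    = src.TemporalSoften(100, 255, 255, 255, 2) # blend up to ~200 neighbouring frames into each frame
defect = avg.Trim(500, -1)                         # pick one heavily averaged frame as a "defect map"
defect.Loop(src.FrameCount)                        # hold that frame for the length of the clip
Whether that isolates the smudges cleanly depends on how varied the footage is, which is essentially the objection raised later in the thread.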
Huh? That's a few too many subtracts.
Subtract doesn't really work for this type of thing, and you would have to manually fix it per scene using that method.
I would use a feathered luma mask with a brightened version of that video. This way you should only have to do it once instead of doing it scene by scene.
lol, I dunno how to overlay with avisynth so I use Subtract() to do mah bidding. This is how I add chroma from one video to another:
Code:
equalized = ImageSource("C:\tealgrey1.png")
original  = ImageSource("C:\teal.png")
b = original.Greyscale()          # grey copy of the colour image
chroma = Subtract(b, original)    # per-channel difference between grey and colour, centred on grey
Subtract(chroma, equalized)       # subtract that difference from the equalized image...
Invert()                          # ...and invert, which lands on roughly equalized + (original - grey)
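Incidentally - and this is just a sketch of an alternative, not what was actually used here - the same luma/chroma swap can be done in one step with MergeChroma, once both stills are in a YUV format (YV12 needs even image dimensions):
Code:
equalized = ImageSource("C:\tealgrey1.png").ConvertToYV12
original  = ImageSource("C:\teal.png").ConvertToYV12
MergeChroma(equalized, original)   # luma stays from the equalized still, chroma comes from the original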
Same thing; you make the mask to cover it. If it doesn't move, you should only have to do it once.
This type of approach doesn't work very well - unless you aren't describing what you had in mind very clearly, or it's for a special type of static clip (like a tripod shot with no motion).
I figured a photographic average of the entire video would isolate the smudges because they are always in the same place, so if a composite average were achieved, the upper corners and the perimeter would end up slightly darker than the rest of the frame. Am I wrong?
So does it work in this case?
How do you do feathered luma masks on avisynth?
You have the original video on the bottom, the brightened version on top, and the mask in the middle. You only want to affect those regions at the edges, so you make your mask accordingly (it's much easier since it's a static defect). Think of the overlay as "covering up" the affected areas with a brightened version of the video. The mask will be black in the center, so the center is unaffected - only the edges get brightened.
o = original video
b = bright video
m = mask
overlay(o, b, mask=m)
There are a few more tricks, like opacity and blending modes, and many manipulations you can do on the "bright" video to make it blend better.
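Spelled out as a full script, that recipe might look something like this - a sketch only: the file names, the Tweak values and the extra blur for feathering are assumptions, and the mask PNG is assumed to be the same size as the video, painted white over the dark edges and black in the middle:
Code:
o = AviSource("home_video.avi").ConvertToYV12      # original video (placeholder file name)
b = o.Tweak(bright=15, cont=1.05)                  # brightened copy; tune until the edges blend in
m = ImageSource("edge_mask.png", start=0, end=0)   # hand-painted mask: white over the dark edges, black centre
m = m.Blur(1.58).Blur(1.58).Loop(o.FrameCount)     # soften the mask a bit more, then hold it for every frame
Overlay(o, b, mask=m)                              # the brightening only shows through where the mask is bright
Overlay's opacity= and mode= parameters are where the opacity and blending-mode tricks mentioned above come in.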
The overlay approach works (mostly) across different scenes, colors, lighting (unless there is clipping), different objects, zoom, movement, etc. (as long as the defect is static and the mask "fits") - it's a more general-purpose approach.
That looks like vignetting from the lens in wide position, or from a wide-angle conversion lens. Smudges would not be perfectly uniform like that.
Yes, but I didn't spend a lot of time correcting the corners in Photoshop, so it wasn't perfect. Also, the darkened corners are not 100% static - they get a bit worse later on in the video, so there's no single perfect mask. The three masks I created in Hotspot are a good balance, but making them was annoying.
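Since the defect drifts over time, one way to stay in AviSynth rather than keyframing in an NLE is to split the clip into ranges and give each range its own mask - again only a sketch, with made-up frame numbers, file names and brightness values:
Code:
function FixEdges(clip c, string maskfile, float strength) {
    m = ImageSource(maskfile, start=0, end=0).Loop(c.FrameCount)  # one mask image, held for this whole range
    b = c.Tweak(bright=strength)                                  # brightened copy; strength tuned per mask
    return Overlay(c, b, mask=m)
}

src   = AviSource("home_video.avi").ConvertToYV12
part1 = src.Trim(0,     4999).FixEdges("mask_early.png", 10)
part2 = src.Trim(5000,  9999).FixEdges("mask_mid.png",   15)
part3 = src.Trim(10000, 0    ).FixEdges("mask_late.png", 20)
part1 ++ part2 ++ part3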
That makes it easier - it's basically 1 scene, and it sounds like a background replacement (with the exception of the 1 zoom scene)?
Then it really depends on how you take the average (over what time period) and over what areas.
Sometimes, in bored moments, I've wondered what a composite average of an entire movie would look like, but this is the first time I have an actual use for such a technique. I don't think the areas would matter, because the smudges are almost completely static.
When the lens moves toward telephoto (zooms in), it goes away, right?
The difference is that with the overlay you're sampling from the same frame (to apply to each and every frame).
The reason averaging doesn't work well: if you average over, say, even 3 frames, the difference in brightness will make it not match up. The larger the time horizon for the average, the less likely it is to match up.
With a static background scene you can even sample from a few pixels over (e.g. crop and resize the width), so that adjacent pixels from the same frame cover up the defect, and then composite and blend it with a mask (see the sketch after this post).
It's easier to do in something like After Effects, because you can draw the mask, get feedback right away, and re-adjust. (You don't even need a luma matte; you can do the mask and feathering directly.)
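A rough AviSynth version of that crop-and-resize trick, assuming the defect sits along the left edge (the 4-pixel offset and the mask file are invented):
Code:
src     = AviSource("static_shot.avi").ConvertToYV12                           # placeholder source
shifted = src.Crop(4, 0, 0, 0).BilinearResize(src.Width, src.Height)           # drop 4 px on the left and stretch back, so neighbouring pixels slide over the defect
m       = ImageSource("defect_mask.png", start=0, end=0).Loop(src.FrameCount)  # white over the defect, black elsewhere
Overlay(src, shifted, mask=m)                                                  # the shifted pixels only show through over the defect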
So what PeedeeArr is saying - to keyframe the mask as needed - should work, but you need an NLE to see what you're doing.
Make a radial ambient occlusion and eyeball it every few seconds.