Hello
After removing a logo, the processed part looks blurry and too smooth compared to the rest of the video.
I would like to capture the noise/grain from a regular part of the video and replicate it over the part where my logo has been removed. Is that possible?
Thank you in advance for any advice
max
-
I suppose you could try filtering the noise out of another section of the frame, then subtracting the filtered frames from the non-filtered frames (leaving only the noise), and then adding that noise to the area with the logo removed.
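That subtract-then-re-add idea can be sketched numerically. This is a minimal Python sketch with made-up 8-bit luma values (not AviSynth code); it just shows the arithmetic of isolating grain as a per-pixel difference and re-applying it:

```python
def clamp(v):
    """Clamp a value to the valid 8-bit pixel range."""
    return max(0, min(255, v))

# Toy 8-bit luma samples: a noisy flat area and a denoised copy of it.
noisy    = [120, 118, 123, 117, 121]
denoised = [120, 120, 120, 120, 120]

# Isolate the grain as a signed per-pixel offset.
grain = [n - d for n, d in zip(noisy, denoised)]   # [0, -2, 3, -3, 1]

# Re-apply that grain to the overly smooth, delogoed area.
smooth_patch = [140, 140, 140, 140, 140]
regrained = [clamp(p + g) for p, g in zip(smooth_patch, grain)]
print(regrained)  # [140, 138, 143, 137, 141]
```

The catch, which the rest of the thread works through, is that an 8-bit pipeline can't store the signed differences directly, so the filters add an offset.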
-
What is the pattern?
You might be able to "wing" it with some filters like GrainFactory.
Another approach: if you have a section with just black (noise only), you can make a difference mask and apply it to a mask of the replaced, delogoed area.
EDIT: too slow, this is just a variation on what jagabo suggested
You can sample grain patterns and emulate them with expensive compositing tools, e.g. Nuke; AE also has some grain-matching filters. I'm not sure, but Blender might be able to do this as well. -
Thank you for your answers
I can use GrainFactory, but I would prefer to replicate the existing noise of the video.
It seems that you both suggested the same thing at the same time. But technically, I don't know how to extract the noise and apply it to the delogoed area. I took a black section with noise only, applied MDegrain2(), and used Subtract() on the original section and the filtered clip. I got a grey clip with noise, which I added to the delogoed area with Overlay(mode="add"). But the result is that grey is added on top of my logo.
What about Blender? Do I have to pay for it?
EDIT: maybe some frames would help you advise me.
Last edited by mathmax; 9th Oct 2011 at 11:56.
-
I would upsample the resolution first to get more addressable pixels.
Then make PNG cutouts of your three mask elements and take them into your paint program. Take a few representative samples of grain from the video and replicate them in each mask, like DNA analysis, where they grow enough samples to make a bigger specimen.
Then composite from the back.
Last edited by budwzr; 10th Oct 2011 at 10:43.
-
I read that again and again but I don't understand everything. Do you want to put grain into the mask used to remove the logo? I would have to make a bunch of masks, else the noise will be static...
Why not use jagabo's or poisondeathray's suggestion?
In any case, my problem remains the same. I don't know how to add the noise to the part where the logo has been removed. I face this problem:
I took a black section with noise only, applied MDegrain2(), and used Subtract() on the original section and the filtered clip. I got a grey clip with noise, which I added to the delogoed area with Overlay(mode="add"). But the result is that grey is added on top of my logo. -
You said you wanted to replicate the existing noise in the video. Isn't the problem area where the white text was? Before you delogo-ed it?
I guess it's over my head then. To me it just looks like a crappy video, I can't discern what's good or bad. Good luck with it. -
Yes, exactly.
No... after. The parts where the logo has been removed are too smooth. I want to add noise to those parts after removing the logo.
Well... could you just tell me how to add the noise to an existing video? As I described in the preceding message, I can get a gray clip with noise, but I don't know how to add only the noise to the clip. Overlay(mode="add") will just stack the gray clip over my video... -
You probably need to prepare it with an alpha channel (RGB32), so that all that remains is noise. E.g. if you have pure black with noise, and you key out pure black, then all you are left with is the noise + alpha (transparency); the black is gone. Not sure how you would do this in AviSynth; maybe something like ColorKeyMask() and ShowAlpha() on an RGB32 clip with Overlay().
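The keying idea, reduced to its bare logic in plain Python (made-up RGB tuples, not a real keyer): this version keys exact black only, whereas a real keyer typically takes a tolerance, which would have to be kept tight here so it doesn't also eat the near-black grain.

```python
# Key out pure black: pixels that are exactly (0, 0, 0) become fully
# transparent; everything else (the grain) stays fully opaque.
def key_black(rgb):
    r, g, b = rgb
    alpha = 0 if (r, g, b) == (0, 0, 0) else 255
    return (r, g, b, alpha)

# A few toy pixels from a "black with noise" section.
frame = [(0, 0, 0), (3, 2, 5), (0, 0, 0), (1, 0, 2)]
keyed = [key_black(p) for p in frame]
print([p[3] for p in keyed])  # [0, 255, 0, 255]
```

Compositing the keyed result over the delogoed area then adds only the opaque grain pixels.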
Yes, blender is free, but I don't know if it has grain sampling tools like the other compositing tools. It definitely has keying tools
But looking at your screenshots, I don't think adding noise is going to help very much, especially on the top and bottom parts. It will probably make it look worse, and more pronounced. Adding grain to composites is usually done so that crisp, clean CG material can be matched to grain stocks (as if it had been part of the shot on film). But your delogoed area isn't very clean or crisp to begin with; it's smeared with streaks.
Or you could denoise the whole thing, then regrain. -
I think the mask you made is 90% of the problem. It's not even close. Why don't you refine that mask instead of fixing the result of it? The overlap is getting interpolated to grey.
Confucius Say: If you have to add noise to a video to make it look better, something smells in Denmark. -
The problem is Subtract(). If it simply subtracted one image from the other some of the results would be negative. Since you can't represent any values below zero with unsigned 8 bit integers all the negative values would be lost (become zero). So what subtract does is:
i' = (i1 - i2)/2 + 128
That way the full range of -255 to +255 is represented in the new image. But that leaves you with a medium gray image with some noise. If you add that to your base image you will get medium gray added to everything. You have to then subtract that medium gray back out. The problem is, simply adding that gray noise image to your existing image will cause any pixels over 128 to max out at 255 -- i.e., severe clipping of everything over 128. I'll have to think about how to address this... -
No - what it actually does is i' = (i1 - i2) + 126.
There is no divide by 2, so any difference outside of the range -126 to +129 is clamped to that range.
A base offset of 126 is used for YUV luma, representing the (approximate) midpoint of the [16, 235] range.
(For YUV chroma and for RGB, 128 is used.)
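The corrected behaviour, and the clipping problem described above, can be checked with a small Python sketch (toy pixel values; the 126 offset is the YUV-luma case):

```python
def clamp(v):
    return max(0, min(255, v))

# Avisynth's Subtract for YUV luma: the difference plus a 126 offset, clamped.
def subtract(i1, i2):
    return clamp(i1 - i2 + 126)

src, smooth = 20, 16            # a noisy pixel and its denoised value
noise = subtract(src, smooth)
print(noise)                    # 130 (126 would mean "no difference")

# Naively adding the result to a base image shifts everything up by ~126
# and clips bright pixels:
base = 200
print(clamp(base + noise))        # 255 -- clipped and far too bright

# Subtracting the offset back out leaves just the grain:
print(clamp(base + noise - 126))  # 204
```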
-
It's a lot of work, considering the logo removal filter has left artefacts which would still be visible.
Don't forget that you can't just overlay static grain - it would have to be animated to look right. That would mean you'd need at least 3 different grain overlays, then cycle between them.
Also, grain is less noticeable on moving video compared with looking at a still frame. The original grain + any you add will become less apparent on normal playback while the logo removal artefacts will stay (or be even more noticeable).
There's a standard blend mode called Hard Light in Photoshop/GIMP which does what you want. Blender has a filter called 'Linear Light'.
A quick search shows that avisynth's overlay() supports this mode. -
I don't know about Photoshop/GIMP, but in Avisynth, Overlay(mode="HardLight") uses the formula
out = base + (overlay-128)*2
The factor of 2 (and the use of 128 rather than 126) means it can't be used in conjunction with Subtract without some other adjustments.
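For comparison, here is a Python sketch of the two modes as commonly defined (an assumption: these are the usual Photoshop-style formulas, and as this thread notes, actual implementations differ between programs and AviSynth versions):

```python
def clamp(v):
    return max(0, min(255, round(v)))

# A common Hard Light formulation (blend layer b over base a).
def hard_light(a, b):
    if b < 128:
        return clamp(2 * a * b / 255)
    return clamp(255 - 2 * (255 - a) * (255 - b) / 255)

# Linear Light: the base plus twice the signed offset around mid-gray.
def linear_light(a, b):
    return clamp(a + 2 * b - 255)

# A mid-gray blend pixel (no grain) leaves the base nearly untouched...
print(hard_light(100, 128), linear_light(100, 128))   # 101 101
# ...while grain above mid-gray pushes the base up, Linear Light twice as hard:
print(hard_light(100, 140), linear_light(100, 140))   # 115 125
```

The relevance for grain work: a flat mid-gray (128) overlay is roughly neutral, and Linear Light applies the stored difference at double amplitude, which is the factor of 2 mentioned above.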
I repeat my advice to use mt_makediff and mt_adddiff.
The calculation and adding back of noise is the archetypal application for these functions. -
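A minimal Python sketch of why these two functions fit the task (per-pixel arithmetic only; the real masktools functions operate on whole planes, and this assumes the differences themselves stay in range):

```python
def clamp(v):
    return max(0, min(255, v))

# Per-pixel behaviour of the masktools diff functions.
def makediff(a, b):   # like mt_makediff(a, b)
    return clamp(a - b + 128)

def adddiff(a, b):    # like mt_adddiff(a, b)
    return clamp(a + b - 128)

noisy    = [120, 118, 123, 117]
denoised = [120, 120, 120, 120]

# Store the grain as a 128-centred difference clip...
grain = [makediff(n, d) for n, d in zip(noisy, denoised)]
print(grain)              # [128, 126, 131, 125]

# ...then add it back: the 128 offsets cancel, so there is no gray shift
# and no clipping as long as the differences fit in range.
restored = [adddiff(d, g) for d, g in zip(denoised, grain)]
print(restored == noisy)  # True
```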
Thank you all for your answers and explanations
I followed your advice Gavino.. don't know if I applied it correctly though
here is the code I wrote:
Code:
source = directshowsource("DVD6-Title1.mpeg").Trim(10020, 10120).crop(0,80, -0, 120).ConvertToYV12()
super = source.MSuper(pel=2, sharp=1)
backward_vec2 = MAnalyse(super, isb = true, delta = 2, overlap=8, blksize=16)
backward_vec1 = MAnalyse(super, isb = true, delta = 1, overlap=8, blksize=16)
forward_vec1 = MAnalyse(super, isb = false, delta = 1, overlap=8, blksize=16)
forward_vec2 = MAnalyse(super, isb = false, delta = 2, overlap=8, blksize=16)
smooth = source.MDegrain2(super, backward_vec1, forward_vec1, backward_vec2, forward_vec2, thSAD=5000)
noise = mt_makediff(source, smooth)
stackvertical(noise, noise, noise, noise)
clip = avisource("delogo_1.avi").ConvertToYV12()
mymask = imagesource("logo3.1.bmp")
Overlay(clip, mt_adddiff(clip, noise), mask=mymask)
Last edited by mathmax; 11th Oct 2011 at 08:39.
-
Did you ever de-interlace to 60p interpolated? Lots of scan lines are visible. Use Lagarith.
-
The processed areas lack chroma.
You need to add chroma="process" to the mt_xxx calls (default sets chroma to garbage).
Not sure if that explains everything though - odd that you have vertical lines in the bottom section.
BTW you can remove (or comment out) the StackVertical call - it was to display the noise and is not used in the current script. -
A quick attempt with Overlay(mode="hardlight") (thanks to intracube):
src=ImageSource("org.jpeg")
mask=ImageSource("mask.jpg").Blur(1.0).Blur(1.0)
cln=ImageSource("cln.jpeg")
noise=Subtract(src,Blur(src,1.0).Blur(1.0).Blur(1.0).Blur(1.0)).Crop(128,200,128,128)
noise=StackHorizontal(noise,noise,noise,noise,noise,noise)
noise=StackVertical(noise,noise,noise,noise)
noise=Crop(noise,0,0,720,480).RgbAdjust(rb=4,gb=4,bb=4)
Overlay(cln,noise,0,0,mask,mode="hardlight")
This would work better if you have a section of all black frames from which to build a noise video. Overlay is causing an overall color shift. -
Also, you shouldn't use DirectShowSource (it's not necessarily frame accurate) with MVTools. It could be mixing up frames, causing distortions and artifacts.
-
Thank you all
No... I didn't. In fact I could add the noise to each field separately... would that be better?
Oh... thank you for the last note. StackVertical() was there because I took a section 120 px high (a section without the logo, in a black part of the original clip: crop(0,80, -0, 120)). Then I needed to make a full-height clip, so I stacked it 4 times (to get back to 480 px), but I forgot to store the result in a variable, so I used the 120 px clip in mt_adddiff(). That's why the script didn't work. I also added chroma="process". Now everything works fine.
Jagabo, what you did is great too. I'll have to try both methods and choose the one I prefer. Thank you for all your suggestions.
Indeed, I changed that to mpeg2source() after demuxing. You noticed that I also have a problem with MDegrain2 (cf. my thread on Doom9)... but that's in the next step, where I use AviSource(). -
It's always better to get the source into the best condition possible first. The scan lines add to the masking problems. Pull your mask from a de-interlaced upsampled still that has as much black as possible.
Upsampling dramatically improves your ability to address details. It's the details that have the devil in them, didn't you know? -
Well... I don't want to deinterlace the final video. I prefer to leave it interlaced for watching on TV. So of course I could process each field separately... but I don't think it's necessary to upscale the fields to full frame size. Also, I don't need to be that accurate with the details; I just want to add some noise in order to unify the look of the video.
Maybe I don't fully understand the issue you mention... in particular, I don't see why upsampling improves the ability to address details. -
-
That's right - it's a bug that's fixed in Avisynth 2.6. See this post:
https://forum.videohelp.com/threads/337854-How-to-remove-huge-but-transparent-watermark...24#post2100724 -
That formula does exactly what Blender's 'linear light' does. 'Hard light' in GIMP produces quite a different effect. I'm not clued up on the exact differences between them - need to do some research, but GIMP's hard light was what I was suggesting. Blender doesn't seem to have hard light at all.
Good point on the 16-235 vs 0-255 midpoint issue. I usually work in 0-255 land for colour correction.
Blender's node compositor can deal with values outside the 0-255 range. It's a nice feature, but can also cause confusion for people like me who like to do things by eye. Adding two images together when one image has blacker than black negative values will of course subtract/darken parts of the image.
"Internally, the Compositor uses float buffers only (4 x 32 bits)" link -
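In a float pipeline like that, the offset bookkeeping disappears entirely, since grain can be stored as signed values around zero. A Python sketch of the same round trip in float space (toy values, not Blender code):

```python
# Toy float-space values in [0.0, 1.0], as a float compositor would hold them.
noisy    = [0.47, 0.46, 0.49, 0.45]
denoised = [0.47, 0.47, 0.47, 0.47]

# Grain is stored as signed values around 0.0 -- no 126/128 offset needed.
grain = [n - d for n, d in zip(noisy, denoised)]

smooth_patch = [0.55, 0.55, 0.55, 0.55]
regrained = [p + g for p, g in zip(smooth_patch, grain)]

# Clamping happens only once, on final output.
final = [min(1.0, max(0.0, v)) for v in regrained]
print(all(0.0 <= v <= 1.0 for v in final))  # True
```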