VideoHelp Forum




  1. Hello,
    I have a video, and in the video there are objects that do not move, like a channel logo.

    I need a script that paints everything that moves black,
    and everything that does not move white.

    The "objects" that do not move are the subtitles burned into the video,
    so in theory this would let me separate the subtitles from everything else.

    I know this technique is not perfect, but it's good enough.
    If you have a better technique, that is also fine.
    But note - the goal is:
    * Anything that is not a subtitle gets deleted (painted black).
    * Only the subtitles are kept, and they get painted white.


    Thanks to anyone who can help!

    EDIT:
    sample of the video:
    https://www.dropbox.com/s/u4xljzkymbg80z6/sample.avi
    Last edited by gil900; 19th Nov 2013 at 09:20.
  2. Subtract() a static frame from the other frames (or one frame from the previous frame; motion tools has such a function). Anything that is not 128 (within some small threshold -- the result of Subtract is A - B + 128) can be painted over with a black Overlay().
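    For example, something along these lines (an untested sketch; masktools2 is assumed for the threshold step, and WhateverSource() is just a placeholder for the real source filter):

    Code:
    src    = WhateverSource()
    prev   = src.Trim(0,-1) + src.Trim(0, FrameCount(src) - 2)  # 1-frame-delayed copy of src
    diff   = Subtract(src, prev)                                # A - B + 128
    moving = diff.mt_lut("x 128 - abs 10 > 255 0 ?")            # |A - B| above ~10 -> 255 (moving)
    Overlay(src, BlankClip(src), mask=moving)                   # paint the moving areas black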

    But I suspect the problem is more complex than you describe. You should provide a sample video.
    Last edited by jagabo; 18th Nov 2013 at 21:17.
  3. IMHO impossible.

    AviSynth deals with frames (or parts of frames). It would require a filter to map each pixel, compare it with the same pixel in the next frame (or one a little later), and if it differs, assume movement and write over it. Can you imagine how long that process would take?

    The best that could be achieved is to define a mask for the area where the subtitles appear and blank out the rest of the frame. But there will still be detail between the text characters.
  4. Yes, there are approaches in AviSynth, but the potential problems are compression artifacts, jittery footage (if it's not perfectly stable, parts of the subs will be removed), and a 1-frame delay (you compare against the previous frame, so your mask will be off by one).

    You might be able to combine a few approaches for an improved composite mask. Post a sample clip.

    e.g. using mvtools:

    Code:
    orig=WhateverSource()
    
    orig
    vectors = MSuper().MAnalyse(isb = false)
    MMask(vectors, kind=0, ml=1)
    m=last
    
    blankclip(orig)
    b=last
    
    blankclip(orig, color=$FFFFFF)
    w=last
    
    #overlay(orig, b, mask=m) #to view over footage
    overlay(w, b, mask=m) #white on black mask
  5. Yes, there's no guarantee that the picture behind the subs will always be moving. And motion-detection algorithms look for pixels that change from frame to frame, so they only detect motion at edges: the edges of a solid block will be seen as moving, but the middle won't.
  6. In your other post, you had primarily "white" subs. So you could combine a luma mask (define it by an upper range, like Y' > 150) with a motion mask to refine it. You still have the n-1 problem, but that could be dealt with by offsetting the mask by a frame when using it.

    You can use masktools operators to combine masks

    e.g.
    mt_logic(first_mask, second_mask, mode="and")

    Masktools has mt_motion, but it's primarily for delicate motion detection - probably too fine for what you're trying to do here.
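    A rough, untested sketch of the combination (masktools2 assumed; the threshold is only a guess, and "orig" and "m" are reused from the mvtools script above):

    Code:
    luma_mask   = orig.mt_binarize(threshold=150)         # white where Y' > 150
    # crude 1-frame offset for the n-1 problem: drop frame 0, repeat the last frame
    motion_mask = m.Trim(1,0) + m.Trim(FrameCount(m)-1, -1)
    mt_logic(luma_mask, motion_mask, mode="and")          # white only where both masks agree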
  7. Oops, PDR beat me to it. Here is a concrete example:

    You can try summing motion masks over several frames. In this video, for example, the camera and background are still and the dog is moving up in the frame. A static subtitle was overlaid onto the original video:

    Original frame:
    [Attachment: org.jpg]

    Single mt_motion():
    [Attachment: single.jpg]

    Eight frames of mt_motion added together:
    [Attachment: add8.jpg]

    To add 8 motion masks together I used:

    Code:
    Overlay(SelectEven(), SelectOdd(), mode="add")
    Overlay(SelectEven(), SelectOdd(), mode="add")
    Overlay(SelectEven(), SelectOdd(), mode="add")
    Alternatively, you could use:

    Code:
    mt_logic(SelectEven(), SelectOdd(), mode="or")
    mt_logic(SelectEven(), SelectOdd(), mode="or")
    mt_logic(SelectEven(), SelectOdd(), mode="or")
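    Putting it together, a rough untested sketch of the whole chain (WhateverSource() is a placeholder; each pass pairs up even/odd frames and halves the frame count, so three passes sum 8 consecutive masks into each output frame):

    Code:
    WhateverSource()
    mt_motion()                                      # one motion mask per frame (masktools2)
    Overlay(SelectEven(), SelectOdd(), mode="add")   # 2 masks per output frame
    Overlay(SelectEven(), SelectOdd(), mode="add")   # 4 masks per output frame
    Overlay(SelectEven(), SelectOdd(), mode="add")   # 8 masks per output frame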
    Last edited by jagabo; 19th Nov 2013 at 08:57.
  8. I see that there is a big discussion.
    That's good. I'll try the scripts given here.

    Meanwhile, here is a small sample from my recordings:
    https://www.dropbox.com/s/u4xljzkymbg80z6/sample.avi


    The ultimate goal is:
    to create a good first input for an OCR algorithm.
    Last edited by gil900; 19th Nov 2013 at 15:34.
  9. I just thought of an ingenious and innovative technique!

    But this technique requires an additional input (besides the recording).
    The other input is the same episode without subtitles!
    It would have to be downloaded from the internet.


    input1 = the recording
    input2 = the same video without subtitles (can be downloaded from the internet)


    Next,
    input1 has to be aligned with input2 so that each frame in input1 matches the corresponding frame in input2.

    Next,
    an algorithm compares each pixel of frame X in input1 with the same pixel of frame X in input2.
    The pixels that do not match are:
    the subtitles and maybe the channel logo.

    Each pixel that is not similar enough (the pixels can't be 100% equal) is painted white, and each pixel that is similar enough is painted black.

    The result will be:
    only the subtitles (and maybe the channel logo) are kept.
    I would love it if someone would write a script that performs this technique!
  10. There's nothing new here. Such techniques have all been tried before. The devil is in the details. Rarely does it work as well as it seems it will in theory.
  11. As a programmer, I strongly agree that "the devil is in the details".

    But why do you think it will not work?
    If it does not work even though it is logically possible, then the problem is in the algorithm that compares input1 to input2.


    Logically it is possible.
    It is possible as long as there are no problems with the recording.
  12. It's very difficult to get the brightness, contrast, and colors of two videos to match well. Getting the two frames to align can be difficult (and often there are non linear distortions so you can't just adjust the frame size, rotate, crop, and add borders -- like time base errors, film bounce, jello-effect, different sharpness, different noise). Even if you manage all of the above there are cases where the subtitles are the same brightness/color as the underlying image.

    So yes, in theory you can simply subtract one image from another and there are your subtitles. In practice you get a multitude of problems.
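    (In AviSynth terms the theory is just something like this untested sketch, with placeholder clip names and sources, and with the two clips already aligned and resized to match:)

    Code:
    rec   = WhateverSource("recording.avi")        # copy with burned-in subs
    clean = WhateverSource("clean_episode.mp4")    # copy without subs
    Subtract(rec, clean)                           # flat grey except where the subs (and logo) differ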

    [Attachment: subtract.jpg]
  13. The reason this ingenious method of yours hasn't been perfected already is because by the time you find two nearly matching videos, rectify them, do your difference mattes, clean up your extracted images, do your OCR, turn your OCR'd titles into proper subtitles, adjust and check your timings -- you could have just typed the damned things up.
  14. Originally Posted by smrpix View Post
    The reason this ingenious method of yours hasn't been perfected already is because by the time you find two nearly matching videos, rectify them, do your difference mattes, clean up your extracted images, do your OCR, turn your OCR'd titles into proper subtitles, adjust and check your timings -- you could have just typed the damned things up.
    I was going to mention that but I thought he'd figure it out for himself.
  15. Originally Posted by jagabo View Post
    Getting the two frames to align can be difficult
    Originally Posted by smrpix View Post
    The reason this ingenious method of yours hasn't been perfected already is because by the time you find two nearly matching videos, rectify them, do your difference mattes
    Matching the two videos is the simplest part of the process for me.
    I learned a very convenient technique for that from you in the past:
    https://forum.videohelp.com/threads/354573-I-want-to-compare-two-cuptures-and-see-Which...-have-a-change

    Once you have done this a few times, it does not take long, especially if the source is not a tape.

    Originally Posted by jagabo View Post
    ...and colors of two videos to match well. Getting the two frames to align can be difficult (and often there are non linear distortions so you can't just adjust the frame size, rotate, crop, and add borders -- like time base errors, film bounce, jello-effect, different sharpness, different noise).
    You're missing an important point in the technique.
    Like I said, the hard part is the algorithm that compares input1 to input2.
    The idea is that once the frames are aligned, the pixels do not need to be 100% equal (see the sketch below):
    * pixels that match by more than 70% are painted black
    * pixels that match by less than 30% are painted white
    * all other pixels (between 30% and 70%) are painted gray
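    An untested masktools sketch of that three-level rule, assuming "X% equal" means the absolute pixel difference measured against the full 0-255 range (so more than 70% equal is a difference below ~77, and less than 30% equal is a difference above ~178):

    Code:
    # rec and clean are the two aligned greyscale clips (placeholder names)
    mt_lutxy(rec, clean, "x y - abs 77 < 0 x y - abs 178 > 255 128 ? ?")
    # |diff| < 77   -> 0   (black: pixels match well)
    # |diff| > 178  -> 255 (white: pixels clearly differ)
    # otherwise     -> 128 (gray)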

    ...like time base errors, film bounce, jello-effect, different sharpness, different noise
    time base errors:
    Like I said, you can fix those easily once you have done it a few times.

    jello-effect + different sharpness:
    About the jello-effect: if the source is digital rather than a tape, this problem will most likely not appear, and even if it does, the pixels will most likely still match by more than 70% because the jello-effect will not be strong enough.
    About the different sharpness: in this case too, the pixels will most likely still match by more than 70%.
    In the worst case, you can sharpen the input that is not sharp enough, to make the pixels as similar as possible.


    This is the full description of the technique.


    All of this is based on hypotheses.
    To be sure, it needs to be tried.

    EDIT:
    I added a few things
    Last edited by gil900; 19th Nov 2013 at 20:09.
  16. Go at it. If you're a programmer you should be able to figure out basic AviSynth stuff for this.
  17. I'm not good enough to do this part;
    it's too complicated.

    I just gave an idea.
    I'd be happy if someone would try to program it!
  18. This is what I have managed so far:
    [Attachment: sz7jba.png]

    and this is the script:

    Code:
    LoadCPlugin("C:\Program Files (x86)\AviSynth 2.5\plugins\ffms2.dll")

    Rec_Orgin = DirectShowSource("Bear Grylls - escape from hell - EP3.wmv")
    Down_Orgin = FFmpegSource2("bear.grylls.escape.from.hell.s01e03.hdtv.x264-c4tv.mp4")

    # Matching "Rec" Resolution to "Down":
    Rec = BilinearResize(Rec_Orgin, 720, 404)
    # Removing Colors to prevent problems...:
    Rec = Grayscale(Rec)

    #Lowers the sharpness in "Down" to match the sharpness level in "Rec":
    Down = BilinearResize(Down_Orgin, 400,242)
    Down = BilinearResize(Down, 720, 404)
    # Removing Colors to prevent problems...:
    Down = Grayscale(Down)

    # Matching "Rec" time to "Down" time:
    part1 = Trim(Rec,310, 0)
    Rec = part1

    Test1_RecDown = overlay(Rec,Down,opacity=0.5)
    Subtract_RecDown = Subtract(Rec,Down).Trim(0, 1055)

    return Subtract_RecDown
    I tried to implement this technique.
    It does not work very well.

    I attached an archive containing the attempt + the videos:
    https://www.dropbox.com/s/vqhvhtpn8jrr0bg/attempt1.7z

    What could improve the result?
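    (One thing that might help, untested and assuming masktools2 and YV12 clips: binarize the difference instead of returning it raw, e.g.)

    Code:
    Mask_RecDown = mt_lut(Subtract_RecDown, "x 128 - abs 25 > 255 0 ?")  # ~25 is a guessed threshold
    return Mask_RecDown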


    EDIT:
    Never mind.
    I gave up. I thought of a simple way to use the subtitles without having to decode them...
    Last edited by gil900; 20th Nov 2013 at 03:43.