VideoHelp Forum

  1. Member (Join Date: Sep 2021; Location: Valencia)
    Hello,
    I am very new to AviSynth and have no idea how to use it. I have installed AviSynth 3.7 for Windows 10, along with the latest version of FFmpeg.
    I have a question: suppose I have a short football match video in MP4 format (Soccer1.mp4).
    Soccer1.mp4 is 1 minute (60 seconds) long. In the first 20 seconds a player scores a goal; from second 20 to second 40 there is a replay of those first 20 seconds, and from second 40 to the end there is another replay of the same 20 seconds.
    In short, the 60-second MP4 contains two replays of the first 20 seconds.
    I want to know whether I can use AviSynth to find from what time to what time (say, from 20.00 s to 40.00 s) the video is a replay (a duplicate clip) of an earlier part (the first 20 seconds).
    Please advise!
  2. I read this post yesterday which suggests a script to detect two identical frames in a row. What you're requesting here seems much more complicated, I'd be curious to see if someone comes up with a working solution.
  3. It's possible to find any duplicates of a single frame within the entire video:

    Code:
    v = LSmashVideoSource("video.mp4")
    i = v.Trim(10,10) # the frame to search for
    diff = AbsDiff(v, i)
    
    WriteFileIf(diff, "match.txt", "AverageLuma<3.0", "current_frame", append = false)
    
    function AbsDiff(clip v1, clip v2)
    {
        Subtract(v1, v2)
        Overlay(last.ColorYUV(off_y=-126), last.Invert().ColorYUV(off_y=-130), mode="add")
    }
    That will search through the video looking for a duplicate of frame 10 and output the matching frame number(s) in a file called match.txt. But it's a long way from there to detecting all duplicate segments within a file. You would have to find all duplicates of frame 0, then of frame 1, then of frame 2, etc. For starters, you'd have to repeat the search 360,000 times for a 2-hour 50 fps video, then go through all those results building a map of matching segments. It would probably be much faster to do this by hand in an NLE. So AviSynth isn't the correct tool for this.
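    For what it's worth, the brute-force search described above could be collapsed into a single pass if the replays really are exact pixel duplicates: hash every frame once, then merge consecutive matches into segments. A minimal Python sketch of that idea (a hypothetical helper, not an AviSynth replacement; real use would hash decoded frame bytes, e.g. with hashlib):

    Code:
    ```python
    def find_duplicate_segments(frame_hashes):
        """One-pass duplicate-segment finder. frame_hashes holds one hash value
        per frame (anything hashable, e.g. digests of decoded frame bytes).
        Returns (original_start, copy_start, length) tuples. Assumes exact
        duplicates; different camera angles will not match, as noted above."""
        first_seen = {}          # hash -> first frame it appeared at
        pairs = []               # (original_frame, duplicate_frame)
        for n, h in enumerate(frame_hashes):
            if h in first_seen:
                pairs.append((first_seen[h], n))
            else:
                first_seen[h] = n
        segments = []            # merge runs where both indices advance together
        for orig, dup in pairs:
            if segments and segments[-1][0] + segments[-1][2] == orig \
                        and segments[-1][1] + segments[-1][2] == dup:
                segments[-1][2] += 1
            else:
                segments.append([orig, dup, 1])
        return [tuple(s) for s in segments]
    ```
    On the 60-second example from post #1 (20 s of play plus two replays of it) this would report two segments, both copying from frame 0 onward.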

    And instant replays are often different camera angles. There's no hope of finding those with an automated process.
  4. Member
    Hello:
    Thanks for your code, I will try to see if I can run your script.
    However, I have noticed a technical detail: in almost all of my MP4 videos (around 100), there is a short period (0.4 to 0.5 seconds, about 10 to 13 frames) in which a small image of a shirt grows bigger and bigger from frame to frame. I call these transition scenes. When the replay begins, a short (0.4 s) transition scene appears; when the replay ends, a longer (0.5 s) transition scene appears, and once that finishes, the replay is over.
    However, finding those frames is not easy, as the transition never occupies the entire frame, although in some frames it covers more than 90% of it.
    If I can detect those transition scenes, then the time range of the replay can be detected.
    Any suggestions on this technical detail?
    Thanks,
  5. If those animations are identical and at a fixed location within the frame they can be very easy to locate. The script I gave earlier was originally something I used to find a JPEG image within a video:

    Code:
    v = LSmashVideoSource("video.mp4")
    i = ImageSource("image.jpg", fps=v.framerate).ConvertToYV12().Trim(0,v.framecount)
    diff = AbsDiff(v, i)
    
    WriteFileIf(diff, "match.txt", "AverageLuma<5.0", "current_frame", append = false)
    
    function AbsDiff(clip v1, clip v2)
    {
        Subtract(v1, v2)
        Overlay(last.ColorYUV(off_y=-126), last.Invert().ColorYUV(off_y=-130), mode="add")
    }
    Image.jpg is the image that was searched for here. Do you have some video samples you can upload? Or a link to somewhere one can be downloaded? Youtube for instance.
  6. Member
    Hello:
    I will try to upload the images for the long time (half a second) transition scene.
    [10 image attachments (60768-60777): consecutive frames of the long transition scene]

    I don't know how to upload a video from my PC yet, but I will try!
    If you can write some code to detect this transition scene and show the timestamp of its first frame, that would be enough!
    Thanks,
  7. Here are some examples using your video:

    Using a selected frame (458) to match:
    Code:
    function AbsDiff(clip v1, clip v2)
    {
        Subtract(v1, v2)
        Overlay(last.ColorYUV(off_y=-126), last.Invert().ColorYUV(off_y=-130), mode="add")
    }
    
    
    v = LSmashVideoSource("transit1.mp4")
    i = Trim(v, 458,458) # use frame 458 as the match frame
    diff = AbsDiff(v, i)
    
    WriteFileIf(diff, "match.txt", "AverageLuma<4.0", "current_frame", append = false)

    Using a PNG image as a reference (frame 458 was previously selected and saved as a PNG image):
    Code:
    function AbsDiff(clip v1, clip v2)
    {
        Subtract(v1, v2)
        Overlay(last.ColorYUV(off_y=-126), last.Invert().ColorYUV(off_y=-130), mode="add")
    }
    
    
    v = LSmashVideoSource("transit1.mp4")
    i = ImageSource("image.png", fps=v.framerate).Trim(0,-1).ConvertToYV12()
    diff = AbsDiff(v, i)
    
    WriteFileIf(diff, "match.txt", "AverageLuma<4.0", "current_frame", append = false)

    Using only a crop of frame 458 (for example, that "SOFT 4 GAME" logo may not appear on all the matching frames):
    Code:
    function AbsDiff(clip v1, clip v2)
    {
        Subtract(v1, v2)
        Overlay(last.ColorYUV(off_y=-126), last.Invert().ColorYUV(off_y=-130), mode="add")
    }
    
    
    v = LSmashVideoSource("transit1.mp4")
    i = Trim(v, 458,458)
    diff  = AbsDiff(v.Crop(270,100,232,232), i.Crop(270,100,232,232)) # crop to the central portion of the frames
    
    WriteFileIf(diff, "match.txt", "AverageLuma<4.0", "current_frame", append = false)
    In all cases I loaded the script into VirtualDub then ran File -> Run Video Analysis Pass to scan through the video. Once that completes match.txt contains the frame numbers of all the matches, 458 and 974 in this case. Note that the transition starts about 12 frames before the matching frame, and ends about 6 frames after. In practice you may have to raise the AverageLuma threshold a bit because of compression/capture artifacts. Or lower it a bit to reduce the number of false positives.

    It might even be possible to automate the removal (if all false detections can be avoided) but I'd have to think about that...
  8. Member
    Hello:

    Thanks for your code. I am trying to use it now, but I can't seem to get an AviSynth script up and running.
    I have installed AviSynth versions 2.6.0 and 2.6.1 alpha on Windows 10. It was only a few DLLs, but version 2.6.0 is 5 years old. (https://www.videohelp.com/software/Avisynth)
    I also installed VirtualDub 1.10.4 (x86) for Windows 10. (http://sourceforge.net/projects/virtualdub/files/virtualdub-win/1.10.4.35491/VirtualDu...4.zip/download)
    However, that version, Build 35491/release (2013.10.27), was built 8 years ago, before Windows 10 was released.
    Anyway, I have done some tests. I found that I can play MP4 videos using VirtualDub with some plugins without any issue. (https://sourceforge.net/projects/virtualdubffmpeginputplugin/files/V2000/VirtualdubFFM..._mirror=jztkft)
    However, when I try to run an AviSynth script, even the simplest one, Version(), I have a big issue:

    D:\Videos\Test>vdub /s Version.avs
    VirtualDub CLI Video Processor Version 1.10.4 (build 35491/release) for 80x86
    Copyright (C) Avery Lee 1998-2009. Licensed under GNU General Public License

    Error during script execution at line 1, column 8: Variable 'Version' not found

    Version<!>

    D:\Videos\Test>type Version.avs
    Version

    D:\Videos\Test>notepad Version.avs

    D:\Videos\Test>vdub /s Version.avs
    VirtualDub CLI Video Processor Version 1.10.4 (build 35491/release) for 80x86
    Copyright (C) Avery Lee 1998-2009. Licensed under GNU General Public License

    Error during script execution at line 1, column 8: Variable 'Version' not found

    Version<!>()

    D:\Videos\Test>type Version.avs
    Version()

    Am I missing anything?
    I don't know whether I need to install some C++ runtime library; since I am using Visual Studio 2019 for software development I have installed a lot of C++ libraries already, but I can't install very old versions.
    By the way, can AviSynth scripts work in another environment? (I think VirtualDub is too old, and even AviSynth is too old.)
    By the way, how did you get the frame numbers #458 and #974? By extracting each frame from the transit1.mp4 file?
    Thanks,
  9. version script in Avisynth should be:
    Code:
    version()
    lower case.
  10. Old versions of VirtualDub and AviSynth work in Win10, but I recommend you uninstall them and install the 64-bit version of AviSynthPlus and the 64-bit version of VirtualDub2, both of which are still in development.

    You'll also need the 64 bit version of LSMASHSource:

    http://avisynth.nl/index.php/LSMASHSource
    https://github.com/HomeOfAviSynthPlusEvolution/L-SMASH-Works/releases/
    https://github.com/HomeOfAviSynthPlusEvolution/L-SMASH-Works/releases/download/2021081...ks-20210811.7z

    Download the 7z file, extract LSMASHSource.dll from the x64 folder, and put it in AviSynth's plugins64+ folder. That's usually "C:\Program Files (x86)\AviSynth+\plugins64+".

    Then you'll be ready to open the AVS scripts with VirtualDub2's (VirtualDub64.exe) File -> Open Video File. Or drag/drop AVS scripts onto VirtualDub64.exe. I put a shortcut to VirtualDub64.exe in Windows' SendTo folder so I can right click on an AVS script and select Send To -> VirtualDub64.

    Case doesn't matter in AviSynth. VERSION(), version(), Version() are all the same. Note that vdub.exe and vdub64.exe are the command line versions of VirtualDub. Use the GUI version instead. You may have a 64 bit vs. 32 bit problem. 64 bit editors/encoders need the 64 bit version of AviSynth and 64 bit AviSynth plugs. 32 bit editors need 32 bit AviSynth and plugins.
    Last edited by jagabo; 18th Sep 2021 at 07:46.
  11. Member
    Hello:
    Thank you very much for your detailed instructions. Now I can run your code:
    D:\Videos\Test>type FindTransitEnd.avs
    function AbsDiff(clip v1, clip v2)
    {
    Subtract(v1, v2)
    Overlay(last.ColorYUV(off_y=-126), last.Invert().ColorYUV(off_y=-130), mode="add")
    }


    v = LSmashVideoSource("transit1.mp4")
    i = Trim(v, 458,458) # use frame 458 as the match frame
    diff = AbsDiff(v, i)

    WriteFileIf(diff, "match.txt", "AverageLuma<4.0", "current_frame", append = false)
    D:\Videos\Test>type match.txt
    458

    D:\Videos\Test>
    However, there is something I don't quite understand:
    Why did you pick frame 458 in the first place?
    For example, if I have some similar videos, I can cut each of them to 1 minute in duration, but I have no idea where a replay will start or end. I am more concerned about the ending frame; in this case, I need frame #974, right?
    So how is the frame picked in your code? Searching frame by frame with my eyes is nearly impossible here, as I have thousands of such clips.
    Please advise!
    By the way, is there any corresponding C# environment to run your code? I have much more experience with C# than with any other language.
    Thanks,
  12. Originally Posted by zydjohn View Post
    D:\Videos\Test>type match.txt
    458
    Did you run the script over the entire video with File -> Run Video Analysis Pass? If you did, it should have picked up frame 974 too.

    Originally Posted by zydjohn View Post
    However, there are something I don't quite understand:
    Why you pickup 458 at first place?
    I first examined the video looking for a frame that was easily identifiable. In that frame the logo covers the entire frame, the background video is entirely obscured. In nearby frames the background can be seen around and through the logo. That background will be different every time the logo is displayed -- and that would cause the matching to fail. So earlier and later frames aren't suitable.

    AbsDiff() returns an image that is the absolute value of clip1 - clip2 at each pixel of each frame (only the luma channel is used) -- you can see the resulting image in VirtualDub. When there is a perfect match the result is a perfectly black frame. WriteFileIf() is used to print the frame number when the result of AbsDiff() is a black frame (with some allowance for small variations from compression artifacts, noise, etc.).
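    The absolute-difference measurement can be sketched in a few lines of numpy (an illustration of the idea, not the AviSynth internals):

    Code:
    ```python
    import numpy as np

    def avg_abs_luma_diff(frame_a, frame_b):
        """Mean absolute difference of two luma planes (2-D uint8 arrays):
        0.0 for a perfect match, which is what the AverageLuma<4.0 test
        approximates (with headroom for noise and compression artifacts)."""
        return float(np.mean(np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))))
    ```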

    You could use the PNG image script on all videos that use that same transition logo (find the logo in the first one, save it, then use that image for all those videos). For videos that use a different transition logo you would have to find a suitable identifiable frame.
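    To apply that across many videos, the per-video scripts could be generated programmatically. A hypothetical Python sketch (the per-video match-file naming and the 5.0 threshold are my assumptions, not something from this thread):

    Code:
    ```python
    # Hypothetical generator for per-video AviSynth search scripts, all reusing
    # one reference PNG. Threshold and output file names are illustrative.
    AVS_TEMPLATE = '''function AbsDiff(clip v1, clip v2)
    {{
        Subtract(v1, v2)
        Overlay(last.ColorYUV(off_y=-126), last.Invert().ColorYUV(off_y=-130), mode="add")
    }}

    v = LSmashVideoSource("{video}")
    i = ImageSource("{image}", fps=v.framerate).Trim(0,-1).ConvertToYV12()
    diff = AbsDiff(v, i)
    WriteFileIf(diff, "{video}.match.txt", "AverageLuma<5.0", "current_frame", append = false)
    '''

    def build_search_script(video, image):
        """Return the AVS script text for one video/reference-image pair."""
        return AVS_TEMPLATE.format(video=video, image=image)
    ```
    Each generated script could then be written next to its video and run in a loop.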

    Originally Posted by zydjohn View Post
    By the way, are there any C# corresponding environment to run your code, as I have a lot of experience with C# now than any other language in developement.
    No, you can't run AviSynth scripts in C#. AviSynth scripts only work in AviSynth.
  13. Member
    Hello:
    If I understand you correctly, I can pick frames #458 and #974 as base frames, compare their central area against other videos, and find the matching frames, right?
    The two frames are quite similar (I extracted all the frames, exactly 1000 in total, and picked frames #458 and #974):
    Image
    [Attachment 60790 - Click to enlarge]

    Image
    [Attachment 60791 - Click to enlarge]

    If there are exactly 2 matches found, then the first one is the start of the replay and the second one is the end of the replay, right? Like the matches in the text file:
    D:\Videos\Test>type match.txt
    458
    974

    By the way, it is not very easy to always drag and drop the script onto VirtualDub64.exe, so I added it to PATH and ran it with the /s parameter, like this,
    but I got an error: D:\Videos\Test>VirtualDub64 /s FindTransitEnd.avs
    See the picture.
    Image
    [Attachment 60792 - Click to enlarge]


    Can I run the script from the command line? That would be much easier, as I could launch it from a C# program; dragging and dropping the script onto VirtualDub64.exe is awkward.
    Thanks again for your kind help!
  14. Originally Posted by zydjohn View Post
    Hello:
    If I understand you correctly, I can pick frames #458 and #974 as base frames, compare their central area against other videos, and find the matching frames, right?
    No. You need to find one frame of the transition effect that is the same every time the transition appears (and doesn't appear anywhere else). You then use that frame to find all occurrences of the transition. Frame 453 is not suitable for this because it contains a lot of the background video:

    Image
    [Attachment 60794 - Click to enlarge]


    The next time that frame of the transition appears is at frame 969:

    Image
    [Attachment 60795 - Click to enlarge]


    The transition effect is the same but the background is different. So a simple frame match would not see the two frames as identical.

    If a different transition effect is used at the end of the replay you would need to add a second WriteFileIf() using a frame from the end effect for comparison.

    Originally Posted by zydjohn View Post
    If there are exactly 2 matches found, then the first one is the start of the replay and second one is the end of the replay, right?
    Like the matches in the text file:
    D:\Videos\Test>type match.txt
    458
    974
    Yes (assuming the video didn't start in the middle of a replay). Then the next pair is the start/end of the next replay, and so on.
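    The pairing rule just described is mechanical; a tiny Python sketch of it (assumes an even number of matches and no mid-replay start, as stated above):

    Code:
    ```python
    def pair_replays(match_frames):
        """Group the matched transition frames from match.txt into consecutive
        (replay_start, replay_end) pairs; an odd trailing frame is dropped."""
        it = iter(match_frames)
        return list(zip(it, it))
    ```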

    Originally Posted by zydjohn View Post
    By the way, it is not very easy to always drag and drop the script onto VirtualDub64.exe, so I added it to PATH and ran it with the /s parameter, like this,
    but I got an error: D:\Videos\Test>VirtualDub64 /s FindTransitEnd.avs
    See the picture.
    Image
    [Attachment 60792 - Click to enlarge]
    Leave out the /s. /s is for VirtualDub's own scripting language, not AVS scripts. AVS scripts are A/V files as far as VirtualDub is concerned.
  15. Member
    Hi,
    Thanks for your advice.
    I want to run some tests to see whether I can use only part of frame #458 or #974 as the base image to match against other MP4 videos.
    Please let me know how I can save the central part of frame #458 as an image and use it to find a match in another MP4, such as transit2.mp4.
    By the way, I still can't figure out how to run VirtualDub64.exe from the command line with my script. If not the /s option, which option should I use?
    Thanks,
  16. Originally Posted by zydjohn View Post
    I still can't figure out how to run VirtualDub64.exe from the command line with my script. If not the /s option, which option should I use?
    Just the name of the avisynth script.
    Code:
    Virtualdub64 filename.avs
    Originally Posted by zydjohn View Post
    I want to do some tests to see if I can use only part of the frame#458, or #974 as base image to find match with other mp4 videos.
    Let me know how I can save the central part of frame#458 as an image, and use it to compare with other mp4 videos, like transit2.mp4 to find a match.
    The third script in post #7 shows how to use a portion of one frame within the video as a reference.

    To use an image I would open the video in VirtualDub and navigate to a suitable frame. Add the crop filter (Video -> Filters... -> Add -> Crop...). Use the crop dialog to mark the part of the frame you want to keep, and note the X and Y coordinates of the crop for later use (keep all cropping coordinates even numbers with YV12 video). At the main VirtualDub window select Video -> Copy Output Frame To Clipboard. Then paste from the clipboard as a new image in an image editor and save the image. The AVS script will look like:

    Code:
    function AbsDiff(clip v1, clip v2)
    {
        Subtract(v1, v2)
        Overlay(last.ColorYUV(off_y=-126), last.Invert().ColorYUV(off_y=-130), mode="add")
    }
    
    
    v = LSmashVideoSource("transit1.mp4")
    i = ImageSource("image.png", fps=v.framerate).Trim(0,-1).ConvertToYV12()
    diff  = AbsDiff(v.Crop(XPOS, YPOS, i.width, i.height), i) # crop the video frame to match the reference image
    
    WriteFileIf(diff, "match.txt", "AverageLuma<4.0", "current_frame", append = false)
    Overlay(v, last, x=XPOS, y=YPOS)
    Replace XPOS and YPOS with the coordinates you recorded earlier. The last line isn't necessary but it lets you see the whole source frame with diff overlaid.

    Note that different videos using the same transition may have different colors, brightness, position, frame size, etc. So it may not be so easy.
    Last edited by jagabo; 18th Sep 2021 at 13:06.
  17. Member
    Hello:

    I want to do more tests, so I have done the following:
    1) D:\Videos\Test>ffmpeg -i 458.png -vf "crop=232:232:270:100" 458_Center.png
    I cropped frame #458 to its center part, with width=232px, height=232px, starting at X1=270 and Y1=100, and saved it as "458_Center.png". I can use 458_Center.png as the base image and try to find a match in the video transit1.mp4.

    2) I created a new AVS script:
    D:\Videos\Test>type FindEnd2.avs
    function AbsDiff(clip v1, clip v2)
    {
    Subtract(v1, v2)
    Overlay(last.ColorYUV(off_y=-126), last.Invert().ColorYUV(off_y=-130), mode="add")
    }
    v = LSmashVideoSource("transit1.mp4")
    i = ImageSource("458_Center.png", fps=v.framerate).Trim(0,-1).ConvertToYV12()
    diff = AbsDiff(v.Crop(270, 100, i.width, i.height), i)
    WriteFileIf(diff, "match.txt", "AverageLuma<4.0", "current_frame", append = false)
    Overlay(v, last, x=270, y=100)

    3) When I run my script:
    D:\Videos\Test>VirtualDub64 FindEnd2.avs
    I noticed a difference: when my script runs, transit1.mp4 is shown normally and 458_Center.png appears inverted, whereas your code showed transit1.mp4 itself inverted.
    However, after running my code I can't see any match:
    D:\Videos\Test>dir match.txt
    Volume in drive D is SATABackup
    Volume Serial Number is 4425-E62C

    Directory of D:\Videos\Test

    18/09/2021 21:55 0 match.txt
    1 File(s) 0 bytes
    0 Dir(s) 717,527,707,648 bytes free

    D:\Videos\Test>

    Is there anything wrong with my code?
    By the way, I want to know whether I can print the discovered frame's time position (for example, frame #458 is at about 18.32 seconds from the start of the video), or print both the frame number and the time position, since I need the time position for further processing with FFmpeg.

    Finally, I can't figure out how to run the script from the command line, e.g. with vdub64.exe.
    I used vdub64.exe to run your code, but match.txt came out empty, just as with my own code.
    When I use VirtualDub64 to run the script I have to watch the video again; once the script is finished I would like to run it many times, and I don't want to watch the videos at all. As long as it gives me the correct time positions, that is enough.
    Please advise!
    Thanks,
  18. Member
    Thanks for your advice.
    I will spend some time understanding your code and preparing the necessary videos to run some tests.
    I will let you know later if I have any new findings or issues.
    Thanks,
  19. I duplicated your procedure (1) for creating 458_Center.png and ran your script (2) with transit1.mp4. It worked as expected.

    Image
    [Attachment 60796 - Click to enlarge]


    Code:
    function AbsDiff(clip v1, clip v2)
    {
        Subtract(v1, v2)
        Overlay(last.ColorYUV(off_y=-126), last.Invert().ColorYUV(off_y=-130), mode="add")
    }
    
    v = LSmashVideoSource("transit1.mp4")
    i = ImageSource("458_Center.png", fps=v.framerate).Trim(0,-1).ConvertToYV12()
    diff = AbsDiff(v.Crop(270, 100, i.width, i.height), i)
    WriteFileIf(diff, "match.txt", "AverageLuma<4.0", "current_frame", append = false)
    Overlay(v, last, x=270, y=100)
    If a different video is slightly lighter or darker, shifted left or right, etc. you will need to raise the AverageLuma threshold. For example:

    Code:
    WriteFileIf(diff, "match.txt", "AverageLuma<8.0", "current_frame", append = false)
    Eight might not be the right value. Too high and you'll get false positives; too low and you'll miss some of the transitions. If you still can't get good output, upload a clip from the other video(s).

    You can "run" a script with ffmpeg:

    Code:
    ffmpeg -i FindEnd2.avs -c copy -f null -
    Last edited by jagabo; 18th Sep 2021 at 19:53.
  20. Using VapourSynth it looks like this:
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    import numpy as np
    import cv2
    
    SOURCE_PATH = r'transit1.mp4'
    TEMPLATE    = cv2.imread(r'jagabo_template_crop.png',0)
    FOUND_FRAMES = []
    
    def process_img(n, f):
        vsFrame = f.copy()
        npArray = np.dstack([np.asarray(vsFrame.get_read_array(i)) for i in [2,1,0]]) #opencv uses bgr not rgb
        img_gray = cv2.cvtColor(npArray, cv2.COLOR_BGR2GRAY)
        res = cv2.matchTemplate(img_gray,TEMPLATE,cv2.TM_CCOEFF_NORMED)
        threshold = 0.8
        loc = np.where( res >= threshold)
        if loc[0].size > 0:
            FOUND_FRAMES.append(n)  
        return vsFrame
    
    clip = core.lsmas.LibavSMASHSource(SOURCE_PATH)
    clip = core.resize.Point(clip, format=vs.RGB24, matrix_in_s = '709') #make it ready for opencv images
    clip_modify = core.std.ModifyFrame(clip,clip,process_img)
    for frame in clip_modify.frames():
        pass                         #to just request a frame
    print(FOUND_FRAMES)
    result:
    Code:
    [458, 459, 974, 975]
    >>>
    Last edited by _Al_; 18th Sep 2021 at 23:33.
  21. Member
    Hello:
    Thanks for your code and advice.
    Now I have made another MP4 video, transit2.mp4, but when I tried your code on it, it was not working. So I extracted all frames from the video and picked frame #979, which shows the biggest shirt in the image. Then I cropped frame #979:
    D:\Videos\Test>ffmpeg -i B0979.png -vf "crop=232:232:270:100" B0979_Center.png
    I can't see any difference between B0979_Center.png and 458_Center.png.
    You can take a look.
    Then I run the following code, now it is working:
    D:\Videos\Test>type FindEnd3.avs
    function AbsDiff(clip v1, clip v2)
    {
    Subtract(v1, v2)
    Overlay(last.ColorYUV(off_y=-126), last.Invert().ColorYUV(off_y=-130), mode="add")
    }
    v = LSmashVideoSource("transit2.mp4")
    i = ImageSource("B0979_Center.png", fps=v.framerate).Trim(0,-1).ConvertToYV12()
    diff = AbsDiff(v.Crop(270, 100, i.width, i.height), i)
    WriteFileIf(diff, "match.txt", "AverageLuma<10.0", "current_frame", append = false)
    Overlay(v, last, x=270, y=100)
    D:\Videos\Test>ffmpeg -i FindEnd3.avs -c copy -f null -
    D:\Videos\Test>type match.txt
    978
    1758
    D:\Videos\Test>

    I think one general solution might be: try both B0979_Center.png and 458_Center.png with a large AverageLuma threshold, like 20 or 30; then pick the frames whose AverageLuma is lowest and take the first two. What do you think?
    If you think that is OK, how can I find the lowest AverageLuma values and pick the top two of them?
    By the way, if I set the first frame as starting time 0 seconds, how can I show each matched frame's time position in seconds in the "match.txt" file?
    Image
    [Attachment 60798 - Click to enlarge]
  22. Originally Posted by zydjohn View Post
    I can't see any difference between B0979_Center.png and 458_Center.png
    You don't see the difference in brightness and location of the two logos? Like I said before, you need to have nearly exact matches for this technique to work.

    You can get time in seconds by dividing the frame number by the frame rate:

    Code:
    WriteFileIf(diff, "match.txt", "AverageLuma<4.0", "current_frame/framerate", append = false)
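    The same arithmetic in Python (the 25 fps figure is inferred from frame 458 landing at about 18.32 s in this sample; other videos may differ):

    Code:
    ```python
    def frame_to_seconds(frame, fps=25.0):
        """Timestamp of a frame number, assuming a constant frame rate."""
        return frame / fps
    ```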
  23. This sort of thing is a really nice example of what you can do using Python. jagabo is a magician, he gets the maximum out of AviSynth, but I would draw attention to Python and the opencv module, which has a function specifically for this: matchTemplate(). A template is an image just like jagabo posted, some crop that you search for within a larger image. The code below searches for that crop template in each video frame and prints the frame number if it is found:
    Code:
    import numpy as np   #pip install numpy
    import cv2           #pip install opencv-python
    
    def process_img(img_rgb, template, frame_number):
        img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
        res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED)
        threshold = 0.8
        loc = np.where( res >= threshold)
        if loc[0].size > 0:
            print(frame_number)
    
    vidcap = cv2.VideoCapture(r'transit1.mp4')
    template = cv2.imread(r'jagabo_template_crop.png',0)
    frame_number = 0
    while True:
      success,image = vidcap.read()
      if not success: break
      process_img(image, template, frame_number)
      frame_number += 1
    After I ran that it printed frames:
    458
    459
    974
    975
    so there are some frames that are duplicated. This script could be adjusted to return a timecode or other things. The video could also be fed from a VapourSynth clip (the script would be longer) instead of letting opencv read the frames; the tweaking is endless.
    The way matchTemplate() works, it slides the template through the image (an RGB array representation) looking for a match. The module is a C library, so it is fast; it found those frames in a couple of seconds in your sample. I even deliberately darkened the template crop image and it still found it, since it works on a grayscale image.

    Actually, those frames are not duplicates; the image is just found scaled closely enough, or with colors close enough.
    Last edited by _Al_; 18th Sep 2021 at 22:30.
  24. Member
    Originally Posted by _Al_ View Post
    This sort of things are really nice examples to do using Python. jagabo is a magician, he gets from avisynth a maximum, but I would bring an attention to python and using opencv module that has specifically a function for that: matchTemplate() . Template is an image just like jagabo posted, some sort of crop that you search within a larger image. Code below searches for that crop template in a video frame and if found it prints frame number:
    Code:
    import numpy as np   #pip install numpy
    import cv2           #pip install opencv-python
    
    def process_img(img_rgb, template, frame_number):
        img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
        res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED)
        threshold = 0.8
        loc = np.where( res >= threshold)
        if loc[0].size > 0:
            print(frame_number)
    
    vidcap = cv2.VideoCapture(r'transit1.mp4')
    template = cv2.imread(r'jagabo_template_crop.png',0)
    frame_number = 0
    while True:
      success,image = vidcap.read()
      if not success: break
      process_img(image, template, frame_number)
      frame_number += 1
    After I ran that it printed frames:
    458
    459
    974
    975
    so there are some frames that duplicate. This script could be adjusted to return a timecode or other things. Also video could be fed from vapoursynth clip (script would be longer), instead of letting opencv read frames, tweaking is endless.
    The way matchTemplate() works, it slides that template thru an image (rgb array representation) and it is looking for a match. That module is some C library, so it is fast, I found those frames in couple of seconds out of your sample. I even deliberately darkened that template crop image and it still found it, it works with gray image.

    Actually, those frames are not exact duplicates; the image is just found because it is scaled closely enough or the colors are close enough.
    Hello:
    I have done the same testing with python:
    C:\Python\MatchTemplatePy>python MatchTemplatePy.py
    457
    973

    C:\MatchTemplatePy>type MatchTemplatePy.py
    Code:
    import numpy as np   #pip install numpy
    import cv2           #pip install opencv-python
    
    def process_img(img_rgb, template, frame_number):
        img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
        res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED)
        threshold = 0.8
        loc = np.where( res >= threshold)
        if loc[0].size > 0:
            print(frame_number)
    
    vidcap = cv2.VideoCapture(r'transit1.mp4')
    template = cv2.imread(r'458_Center.png',0)
    frame_number = 0
    while True:
        success,image = vidcap.read()
        if not success: break
        process_img(image, template, frame_number)
        frame_number += 1

    I can see the output is a little different. If '458_Center.png' is the same as 'jagabo_template_crop.png', then we should get the same output, right?
    But I want to understand one step further: from your Python code, I want to know how you compare the base image (458_Center.png) with the video frame. I think the line res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED)
    will compare the base image against the whole area of the video frame (768*432 pixels). Since the base image covers only a (232*232 pixel) area, the comparison will take place multiple times, right?
    I think if you compare only the central area of the video frame, starting at point (270, 100), and compare just that one (232*232 pixel) area, it will be much faster, right?
    By the way, if Python works, then I could possibly try some NuGet package and convert the Python code to C#.
    Thanks for your code!
    Last edited by zydjohn; 19th Sep 2021 at 11:09. Reason: Add more idea
    Quote Quote  
  25. Originally Posted by zydjohn View Post
    I can see the output is a little different. If '458_Center.png' is the same as 'jagabo_template_crop.png', then we should get the same output, right?
    But I want to understand one step further: from your Python code, I want to know how you compare the base image (458_Center.png) with the video frame. I think the line res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED)
    will compare the base image against the whole area of the video frame (768*432 pixels). Since the base image covers only a (232*232 pixel) area, the comparison will take place multiple times, right?
    I think if you compare only the central area of the video frame, starting at point (270, 100), and compare just that one (232*232 pixel) area, it will be much faster, right?
    By the way, if Python works, then I could possibly try some NuGet package and convert the Python code to C#.
    Thanks for your code!
    That function slides the template across the image and returns the matches as rectangle positions (top-left and bottom-right coordinates). That could be an array of rectangles; you can try:
    print(loc)
    As long as at least one rectangle is found, it reports the frame as found.

    Perhaps yes, if the search area is smaller it might be faster, but these operations are lightning fast as it is, since they work on whole arrays.

    Why you found only one frame (and one frame earlier) while I got two frames (and one frame later) is perhaps due to version differences. I use the older opencv 4.4.0, but perhaps you downloaded the latest (something like 4.5.x). I found that the latest version (not sure which version introduced it) has a bug: it returns a slightly shifted coordinate within the image, which could give readings shifted by one pixel. So maybe that has something to do with it, and matchTemplate() reports the frame one early? Not sure. Or the function itself could have been updated.
    Back then I uninstalled that opencv:
    pip uninstall opencv-python
    and installed older one, not sure exactly, it was something like:
    pip install opencv-python==4.4.0

    Not sure about C#, but you can use C++; opencv in Python is just a convenience wrapper. You might find C++ examples directly.
    https://www.ccoderun.ca/programming/doxygen/opencv/index.html
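    As an aside on the fixed-position idea discussed above: if the crop always sits at the same coordinates, the sliding search of matchTemplate() is not needed at all; you can slice out that one region and compare it directly with numpy. A minimal sketch, assuming BGR uint8 frames as returned by cv2.VideoCapture; the (270, 100) start point and 232*232 size are taken from this thread, and the threshold of 10.0 is only a guess that would need tuning:

```python
import numpy as np

def region_matches(frame, template, x=270, y=100, threshold=10.0):
    """Compare a fixed region of the frame against the template.

    frame:    HxWx3 uint8 BGR array (as returned by cv2.VideoCapture.read())
    template: hxwx3 uint8 BGR array cut from a reference frame
    Returns True when the mean absolute pixel difference is below threshold.
    """
    h, w = template.shape[:2]
    roi = frame[y:y + h, x:x + w]
    # int16 avoids uint8 wrap-around when subtracting
    diff = np.abs(roi.astype(np.int16) - template.astype(np.int16))
    return bool(diff.mean() < threshold)

# synthetic demo: a 768x432 frame whose (270, 100) region equals the template
frame = np.zeros((432, 768, 3), dtype=np.uint8)
template = np.full((232, 232, 3), 200, dtype=np.uint8)
frame[100:332, 270:502] = template
print(region_matches(frame, template))      # True: region is identical
print(region_matches(frame * 0, template))  # False: region is all black
```

    Because only one fixed region is tested per frame, this avoids the full sliding correlation, though it tolerates no positional drift at all, unlike matchTemplate().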
    Quote Quote  
    To use an image I would open the video in VirtualDub and navigate to a suitable frame. Add the crop filter (Video -> Filters... -> Add -> Crop...). Use the crop dialog to mark the part of the frame you want to keep, and note the X and Y coordinates of the crop for later use (keep all cropping coordinates even numbers with YV12 video). At the main VirtualDub window select Video -> Copy Output Frame To Clipboard. Then paste from the clipboard as a new image in an image editor and save the image.
    Or use AvsPmod and take a PNG screenshot directly. (Why can't VirtualDub2 do screenshots itself? Unless I'm missing something?)

    @zydjohn
    For clarity's sake, you could put everything that is "code" or program output between these tags: [ CODE ] [ /CODE ] (remove the spaces before/after the square brackets). (A dedicated "code" button is available, but only in "advanced" mode.)
    Quote Quote  
  27. To view found images if everything is ok:
    Code:
    import numpy as np
    import cv2
    
    def process_img(img_rgb, template, frame_number):
        img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
        res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED)
        threshold = 0.8
        loc = np.where( res >= threshold)
        if loc[0].size > 0:
            h, w = template.shape
            print(frame_number)
            for pt in zip(loc[1],loc[0]):
                #all found rectangles, it depends on threshold, if close to 1, less rectangles
                img_rgb = cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0,0,255), 2)
            img_rgb = cv2.putText(img_rgb, f'frame={frame_number}', (50,50), cv2.FONT_HERSHEY_SIMPLEX, 1,(0, 0, 255),2,cv2.LINE_AA)      
            img_found.append(img_rgb) 
    
    vidcap = cv2.VideoCapture(r'transit1.mp4')
    template = cv2.imread(r'template.png',0)
    img_found = []
    frame_number = 0
    while True:
      success,image = vidcap.read()
      if not success: break
      process_img(image, template, frame_number)
      frame_number += 1
      
    for img in img_found:
        cv2.imshow('found images', img)
        cv2.waitKey(0)  #press any key for the next found image
    cv2.destroyAllWindows()
    Quote Quote  
  28. Here's an alternate AviSynth script that works for both test clips:

    Code:
    v = LSmashVideoSource("transit1.mp4")+LSmashVideoSource("transit2.mp4")
    testclip = VtoY(v)
    #return(ScriptClip(testclip, "Subtitle(String(AverageLuma))"))
    WriteFileIf(testclip, "match.txt", "(AverageLuma()>176.5) && (AverageLuma()<176.9)", "current_frame", append = false)
    Output:
    Code:
    458
    974
    1978
    2758
    This script detects frames that are nearly all red. The four detected frames had average V values from 176.58 to 176.87. You may need to loosen the range for other clips. Enable the commented-out line to see the average V values. Limiting the test to the left ~1/3 of the frame may make it work even better (you'll have to adjust the range).
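    For anyone wanting to reproduce this test outside AviSynth: after VtoY() the script's AverageLuma() is really the mean of the V (Cr) plane, which can be approximated from an RGB frame with the BT.601 full-range formula. A numpy sketch (cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb) computes the same plane; note that the 176.5 to 176.9 window above was measured on the limited-range V plane AviSynth reads directly, so the exact numbers would need re-checking against full-range values):

```python
import numpy as np

def mean_cr(frame_bgr):
    """Mean of the Cr plane (AviSynth's 'V'), BT.601 full-range formula."""
    b = frame_bgr[..., 0].astype(np.float64)
    g = frame_bgr[..., 1].astype(np.float64)
    r = frame_bgr[..., 2].astype(np.float64)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128.0
    return float(np.clip(cr, 0, 255).mean())

# a strongly red frame scores far above the neutral value of 128
red = np.zeros((432, 768, 3), dtype=np.uint8)
red[..., 2] = 200                        # channel 2 is R in BGR order
grey = np.full((432, 768, 3), 128, dtype=np.uint8)
print(mean_cr(red) > 200)                # True: red pushes Cr up
print(abs(mean_cr(grey) - 128.0) < 1)    # True: neutral frame sits at 128
```

    Thresholding this value per frame, just like the WriteFileIf() line in the script, would flag the same near-all-red transition frames.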
    Quote Quote  
  29. Member
    Join Date
    Sep 2021
    Location
    Valencia
    Search Comp PM
    Hello:
    I am interested to know what you mean by: "The four detected frames had average V values from 176.58 to 176.87".
    For both images used in the comparison:
    Image
    [Attachment 60803 - Click to enlarge]

    Image
    [Attachment 60804 - Click to enlarge]

    Can you use these images to provide a detailed explanation?
    I think your solution seems better, or at least a more general solution to my question!
    Thanks,
    Quote Quote  
  30. Member
    Join Date
    Sep 2021
    Location
    Valencia
    Search Comp PM
    Hello:

    I think your idea seems good.
    If I understand it correctly, your script will read the transit MP4 file, calculate the AverageLuma() value for each frame, then check whether the value is within the specified range (176.5 to 176.9), and if so, print the frame number, right?
    I would like to know how to do a basic job in Python: how can I calculate the AverageLuma() value for a single image?
    I can only write the following Python code:

    Code:
    import numpy as np   #pip install numpy
    import cv2           #pip install opencv-python
    
    # use a raw string so "\0" in the path is not treated as an escape sequence
    image1 = cv2.imread(r"D:\Videos\Test\0458_Center.PNG")
    avg_color_per_row = np.average(image1, axis=0)
    avg_color = np.average(avg_color_per_row, axis=0)
    print(avg_color)

    But this only gives the average of each BGR color channel, not the AverageLuma() value.
    Please advise how I can do this with Python (opencv and/or numpy)
    Thanks,
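    A possible answer to the question above, sketched with plain numpy: AviSynth's AverageLuma() is the mean of the Y plane, and from an RGB image you can approximate it with the BT.601 weights (cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY).mean() computes essentially the same thing). Note this is full-range luma; the Y plane of a decoded limited-range video would read slightly differently:

```python
import numpy as np

def average_luma(img_bgr):
    """Mean luma using BT.601 weights, approximating AviSynth's AverageLuma().

    img_bgr: HxWx3 uint8 array in BGR channel order (cv2.imread's default).
    """
    b = img_bgr[..., 0].astype(np.float64)
    g = img_bgr[..., 1].astype(np.float64)
    r = img_bgr[..., 2].astype(np.float64)
    return float((0.299 * r + 0.587 * g + 0.114 * b).mean())

# quick sanity checks on synthetic images
white = np.full((10, 10, 3), 255, dtype=np.uint8)
black = np.zeros((10, 10, 3), dtype=np.uint8)
print(round(average_luma(white)))   # 255
print(average_luma(black))          # 0.0
```

    The same function applied to each frame from cv2.VideoCapture, with a threshold check, would mimic the WriteFileIf() approach from the AviSynth script earlier in the thread.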
    Quote Quote