VideoHelp Forum

Page 4 of 6 (Results 91 to 120 of 172)
  1. To print only one frame per transition:
    Code:
    first_found_frame = -11   # more than 10 frames before frame 0, so the first match always prints
    
    def process_img(img_rgb, frame_number):
        # needs `import numpy as np`, `import cv2`, and the `crops` list of
        # (x1, y1, x2, y2) rectangles defined earlier in the thread
        lower = np.array([10,125,0])     # HSV lower bound (OpenCV hue range is 0-179)
        upper = np.array([40,255,255])
        for c in crops:
            r = cv2.cvtColor(img_rgb[c[1]:c[3],c[0]:c[2]], cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(r, lower, upper)
            if np.average(mask) > 140:   # this patch is mostly inside the color range
                continue
            else:
                return                   # one patch failed: not a transition frame
        global first_found_frame
        if abs(frame_number-first_found_frame) >10:
            first_found_frame = frame_number
            print(frame_number)
    Code:
    1095
    1547
    >>>
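The "one frame per transition" logic above can be pulled out into a small helper; a sketch (the class name is mine, not from the thread):

```python
# Sketch of the debounce logic above: report a frame only if it is more than
# `gap` frames away from the last reported one, so each transition prints once.
class TransitionDebouncer:
    def __init__(self, gap=10):
        self.gap = gap
        self.last = -(gap + 1)  # ensures the very first hit is always reported

    def hit(self, frame_number):
        if abs(frame_number - self.last) > self.gap:
            self.last = frame_number
            return True
        return False
```

With gap=10, hits on consecutive frames 1095, 1096, ..., 1105 report only 1095, matching the output above.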
    Last edited by _Al_; 25th Sep 2021 at 12:22.
  2. Member (Join Date: Sep 2021; Location: Valencia)
    Hello:
    I want to know whether you used MS Paint to locate the coordinates, and whether you used red squares as I indicated in my picture; the coordinates I found in my picture seem different from those in your code.
    Can you show me a picture, like mine, of the crop areas you used in your code?
    By the way, do you have a utility for this kind of job to show here?
    Thanks,
  3. The rectangles were roughly in the same area as you posted. I use VapourSynth previewers, which you do not use, but whatever works.
    I will post a utility; let me put it together.
  4. Similarly in AviSynth:

    Code:
    LWLibavVideoSource("transit_L1.mp4", cache=false, prefer_hw=2) 
    ConvertToYV24()
    src = last
    
    x1 = 320
    y1 = 38
    x2 = 310
    y2 = 378
    
    rect1 = Crop(x1,y1,144,8).MaskHS(StartHue=183, EndHue=203, minSat=45, maxSat=65)
    rect2 = Crop(x2,y2,144,8).MaskHS(StartHue=183, EndHue=203, MinSat=45, MaxSat=65)
    testclip = Overlay(rect1, rect2, mode="multiply")
    
    wf = WriteFileIf(testclip, "match.txt", "(AverageLuma()>128)", "current_frame", append = false)
    
    # the following is just so you can see the test patches and testclip (top left corner)
    Overlay(src, rect1, x=x1, y=y1)
    Overlay(last, rect2, x=x2, y=y2)
    Overlay(last, wf, x=0, y=0)
    I used wide hue and saturation ranges but small rectangular areas that only match on a single frame while the transition is in motion. Of course, the wide hue/sat ranges make the test more prone to false positives, and the small areas make it more prone to false negatives. But it found a single frame of both transitions in the test clip.

    Code:
    1103
    1553
  5. Member
    Hello:
    Here is yet another transit MP4 file to test: transit_L2.mp4.
    I think this one is a little different from transit_L1.mp4, as there is no rain in the video.
    Let me know whether you can run your code without any change or you have to change something.
    Thanks,
    Image Attached Files
  6. transit_L2.mp4 result from running my last posted code:
    Code:
    911
    1230
    >>>
    but update -11 and 10 to -101 and 100, since there are surely not going to be two transitions within 4 seconds of each other (4 s × 25 fps = 100 frames):
    Code:
    first_found_frame = -101
    def process_img(img_rgb, ...
    .
    .
        global first_found_frame
        if frame_number-first_found_frame >100:
            first_found_frame = frame_number
            print(frame_number)
    It works with the previous transit_L1.mp4 as well.
    Last edited by _Al_; 25th Sep 2021 at 17:28.
  7. Transit_L2.mp4 works with my script but returns an extra frame because the video has lots of duplicate frames, and one of the duplicates happens to be a match frame. The script is also not optimal for L2 because the alignment of the logo is different. A frame where the logo is static (no match):

    Image
    [Attachment 60948 - Click to enlarge]


    On the left is L1. You can see that the test patches are just outside the green logo. On the right is L2. One of the test patches is within the green portion of the logo and is a match; overall it is not a match because the other patch is negative.

    On a frame with a positive match:

    Image
    [Attachment 60949 - Click to enlarge]


    You can see that both patches of L1 are just within the logo, as the test was designed to detect. With L2 the logo is in a different position, so one of the patches is partially outside the logo and the other is deeper within it. It still matches, but it's not optimal. And even if the two patches were in optimal positions, there would still be two matches when there are two identical frames (as the second time the logo appears in L2).
  8. I guess my choice was luckier: the rectangles fell inside the logo in both transitions. To explain why it worked, here is an example for the first rectangle; I'm not sure about the second one, but I'm guessing something similar happened:

    Actually, it is the principle of selecting only the first found frame of the whole transition that results in a single frame being selected.
    Sorry, that first sentence could be misleading. Yes, the area fell inside, but setting wider ranges, at a position shared with other frames, causes more frames to be returned from that transition. The code then ignores the other found frames within a particular transition.
    [Attachment 60950: transit_L1.mp4-first rectangle.png]
    [Attachment 60951: transit_L2.mp4-first rectangle.png]

    Last edited by _Al_; 25th Sep 2021 at 19:35.
  9. Just looking at those PNGs, the hue almost falls into green territory, between yellow and green; see this picture:
    https://answers.opencv.org/upfiles/15186768416857101.png
    So this would easily work as well, selecting hue from 30 to 40:
    Code:
    lower = np.array([30,125,0])
    upper = np.array([40,255,255])
    When I was guessing the hue, I'd have sworn it was a nice yellow, but it is almost green, something between yellow and green.
    The original hue selection, from 10 to 40, was working as well; just saying how our eyes can sometimes trick us.
    But yes, it depends on the background, so it was safe to select the whole yellow hue range.
    Last edited by _Al_; 25th Sep 2021 at 19:27.
  10. Originally Posted by _Al_ View Post
    the principle of selecting only the first found frame of the whole transition gives the result of selecting one frame only.
    Yes, of course. It's a little harder to do in AviSynth though (as in, "I don't know how to do it off the top of my head").
  11. Member
    Hello:
    Good to hear your explanation. Can you come up with one general utility that covers all the above topics?
    I have a few other transit videos; I can test whether your code covers all the variations.
    Thanks,
  12. Originally Posted by zydjohn View Post
    Can you come up with one general utility which can cover all the above topics?
    You can convert each of the test sequences to a function then call each of them. I did that with three of the earlier scripts:

    Code:
    FILENAME = "transit1.mp4" 
    
    src = LWLibavVideoSource(FILENAME, cache=false, prefer_hw=2) 
    
    Overlay(src, transit_i2(src))
    Overlay(last, transit_L1(src))
    Overlay(last, transit1(src))
    
    
    
    ##########################################################################
    
    function transit1(clip c)
    {
        WriteFileIf(c, "match.txt", "(AverageChromaV()>176.5) && (AverageChromaV()<176.9)", "current_frame", append = false)
    }
    
    
    function transit_I2(clip c)
    {
        c
    
        x1 = 206
        y1 = 74
        x2 = 582
        y2 = 324
    
        rect1 = Crop(x1,y1,16,16).MaskHS(StartHue=290, EndHue=305, minSat=45, maxSat=55)
        rect2 = Crop(x2,y2,16,16).MaskHS(StartHue=30, EndHue=40, MinSat=40, MaxSat=50)
        testclip = Overlay(rect1, rect2, mode="multiply")
    
        WriteFileIf(testclip, "match.txt", "(AverageLuma()>128)", "current_frame", append = false)
    }
    
    
    function transit_L1(clip c)
    {
        c
    
        x1 = 320
        y1 = 38
        x2 = 310
        y2 = 378
    
        rect1 = Crop(x1,y1,144,8).MaskHS(StartHue=183, EndHue=200, minSat=45, maxSat=65)
        rect2 = Crop(x2,y2,144,8).MaskHS(StartHue=183, EndHue=203, MinSat=45, MaxSat=65)
        testclip = Overlay(rect1, rect2, mode="multiply")
    
        WriteFileIf(testclip, "match.txt", "(AverageLuma()>128)", "current_frame", append = false)
    }
    The Overlay() calls are used because AviSynth won't call the functions if their output isn't used. I tested this with transit1.mp4, transit_i2.mp4, and transit_L1.mp4. All three delivered the expected output in match.txt.

    You can use a batch file to build and run the script automatically:

    Code:
    echo FILENAME = "%1" >__script.avs
    type _part2.txt >>__script.avs
    ffmpeg -i __script.avs -c copy -f null -
    _part2.txt is the above script minus the first line. Drag/drop files onto the batch file.
    Last edited by jagabo; 26th Sep 2021 at 19:26.
  13. Originally Posted by _Al_ View Post
    just looking at those png's, Hue is almost falling into green territory, between yellow and green, looking into this picture:
    https://answers.opencv.org/upfiles/15186768416857101.png
    I don't understand that image. A full circle is 360 degrees, not 180 degrees. Does opencv use their own definition of hue?
  14. In OpenCV the hue range is [0,179]; the saturation and value ranges are [0,255].
    Hue is halved so that all three channels fit into one byte; I guess they do it that way for computational convenience.
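So converting a hue read off a standard 0-360° color wheel into OpenCV's H channel is just a division by two; a sketch (the function name is mine):

```python
# OpenCV stores hue as degrees/2 so it fits in one byte (range [0, 179]).
def hue_deg_to_opencv(hue_degrees):
    return int(round(hue_degrees / 2.0)) % 180

# e.g. yellow at 60 degrees becomes 30, green at 120 degrees becomes 60
```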
  15. Originally Posted by _Al_ View Post
    In opencv Hue range is [0,179], Saturation range is [0,255] and Value range is [0,255]
    so those values fit 255 range, they just use it like that I guess because of computing, so it fits into one byte
    Ah, thanks. So it's hue/2.

    Here's the relationship between UV and HS in AviSynth:

    Image
    [Attachment 60980 - Click to enlarge]


    And an animation using MaskHS to convert specified ranges to black:

    Image
    [Attachment 60982 - Click to enlarge]
    Image Attached Files
  16. Member
    Hello:

    I am going to use the Python utility to detect video replay parts.
    But I have a few issues:
    First, I use the following Python code to play one video. It kind of works, but there is no audio.

    Code:
    import numpy as np
    import cv2
    
    cap = cv2.VideoCapture('D:/Videos/Backgrounds/TransitMP4s/Transit_C2.mp4')
    
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:   # stop at end of file instead of passing None to imshow
            break
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    
    cap.release()
    cv2.destroyAllWindows()
    Second, I use the following Python code to split one video: cut the first 1000 frames and save them to a file. The split runs rather fast, but the resulting video has no audio, even though the source has audio inside.
    Let me know what is wrong.

    Code:
    import cv2
    
    file = "D:/Videos/Backgrounds/TransitMP4s/Transit_C2.mp4"
    parts = [(1, 1000)]
    
    cap = cv2.VideoCapture(file)
    ret, frame = cap.read()
    h, w, _ = frame.shape
    
    fourcc = cv2.VideoWriter_fourcc(*"XVID")
    # note: 20.0 is a hard-coded frame rate; use cap.get(cv2.CAP_PROP_FPS) to match the source
    writers = [cv2.VideoWriter(f"part{start}-{end}.mp4", fourcc, 20.0, (w, h)) for start, end in parts]
    
    f = 0
    while ret:
        f += 1
        for i, part in enumerate(parts):
            start, end = part
            if start <= f <= end:
                writers[i].write(frame)
        ret, frame = cap.read()
    
    for writer in writers:
        writer.release()
    
    cap.release()
    Note 1: the Transit_C2.mp4 file was uploaded to the thread in Post #85.
    Note 2: I can use FFMPEG to split the video, and the result plays well.
    The command is like this:
    D:\Videos\Backgrounds\TransitMP4s>ffmpeg -i transit_C2.mp4 -ss 00 -to 01:40 output.mp4

    By the way, I am studying your code; however, it seems I can't run it, as I don't understand what the variable 'c' is for.

    Code:
    (the same AviSynth script quoted in full from the post above)
    Please advise,
    Thanks,
  17. Originally Posted by zydjohn View Post
    The split works rather fast, but the result video has no audio.
    You probably need to import the audio separately, then join the audio and video together. I don't use VapourSynth, but in AviSynth it's something like:

    Code:
    vid = LWLibavVideoSource("filename.mp4") # get the video
    aud = LWLibavAudioSource("filename.mp4") # get the audio
    both = AudioDub(vid, aud) # join them together
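In the Python/OpenCV workflow, one option is to mux the original audio back into the silent split with ffmpeg afterwards. A sketch (the function is mine; it only builds the command list, to be run with subprocess.run):

```python
# Builds an ffmpeg command that combines the silent video written by
# cv2.VideoWriter with the matching audio span from the original file,
# stream-copying both (no re-encode). Run it with subprocess.run(cmd, check=True).
def mux_audio_cmd(video_only, original, start_s, end_s, out_path):
    return [
        "ffmpeg", "-y",
        "-i", video_only,                  # video-only file from OpenCV
        "-ss", str(start_s), "-to", str(end_s),
        "-i", original,                    # original file, for its audio track
        "-map", "0:v:0", "-map", "1:a:0",  # video from input 0, audio from input 1
        "-c", "copy",
        out_path,
    ]
```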

    Originally Posted by zydjohn View Post
    By the way, I am studying your code, however, it seems I canít run the code, as I donít understand what the variable Ďcí is for.
    Most filters take an input stream and produce an output stream. AviSynth uses named streams, but when you don't specify a name it assumes the name "last". So code like:

    Code:
    LWlibavVideoSource("filename.mp4")
    Crop(8,0,-8,-0)
    really means:

    Code:
    last = LWlibavVideoSource("filename.mp4")
    last = Crop(last,8,0,-8,-0)
    In my script a sequence like:

    Code:
    function transit_I2(clip c) # c is the input clip
    {
        c
    
        x1 = 206
        y1 = 74
        x2 = 582
        y2 = 324
    
        rect1 = Crop(x1,y1,16,16)
        # etc.
    }
    means:

    Code:
    function transit_I2(clip c) # c is the input clip
    {
        last = c
    
        x1 = 206
        y1 = 74
        x2 = 582
        y2 = 324
    
        rect1 = Crop(last,x1,y1,16,16)
        # etc.
    }
    The original scripts assumed last in many places, and I just cut/pasted lines from them, so it was easier to provide a "last" from c at the start of the function.
  18. Member
    Hello:

    Thanks for your code. However, I am now using Visual Studio 2019 (Version 16.11.3) to make a Python project, and I changed your code so it looks like this:

    Code:
    FILENAME = "transit1.mp4" 
    
    src = LWLibavVideoSource(FILENAME, cache=false, prefer_hw=2) 
    
    Overlay(src, transit_i2(src))
    Overlay(last, transit_L1(src))
    Overlay(last, transit1(src))
    
    ##########################################################################
    
    crops = [
        
    [306,   #x1
    64,     #y1
    306+48, #x2
    64+40]  #y2
    ,
    [382,
    326,
    382+58,
    326+42]
    ]
    
    def process_img(img_rgb, frame_number):
        lower = np.array([10,125,0])
        upper = np.array([40,255,255])
        for c in crops:
            r = cv2.cvtColor(img_rgb[c[1]:c[3],c[0]:c[2]], cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(r, lower, upper)
            if np.average(mask) > 140:
                continue
            else:
                return
        print(frame_number)
    
    function transit1(clip c)
    {
        WriteFileIf(c, "match.txt", "(AverageChromaV()>176.5) && (AverageChromaV()<176.9)", "current_frame", append = false)
    }
    
    
    function transit_I2(clip c)
    {
        last = c
        x1 = 206
        y1 = 74
        x2 = 582
        y2 = 324
    
        rect1 = Crop(x1,y1,16,16).MaskHS(StartHue=290, EndHue=305, minSat=45, maxSat=55)
        rect2 = Crop(x2,y2,16,16).MaskHS(StartHue=30, EndHue=40, MinSat=40, MaxSat=50)
        testclip = Overlay(rect1, rect2, mode="multiply")
    
        WriteFileIf(testclip, "match.txt", "(AverageLuma()>128)", "current_frame", append = false)
    }
    
    
    function transit_L1(clip c)
    {
        last = c
    
        x1 = 320
        y1 = 38
        x2 = 310
        y2 = 378
    
        rect1 = Crop(x1,y1,144,8).MaskHS(StartHue=183, EndHue=200, minSat=45, maxSat=65)
        rect2 = Crop(x2,y2,144,8).MaskHS(StartHue=183, EndHue=203, MinSat=45, MaxSat=65)
        testclip = Overlay(rect1, rect2, mode="multiply")
    
        WriteFileIf(testclip, "match.txt", "(AverageLuma()>128)", "current_frame", append = false)
    }
    But I got more than 50 compiler errors, basically: unexpected token 'transit1', unexpected token 'c', unexpected token ')', unexpected token 'transit2'.
    You can see the picture.

    By the way, if Python can't split the video, that is OK; I have another way to settle this.
    Now, I have C# code which knows when goals were scored; it may have a rounding error of less than 10 seconds.
    Let's say my C# code knows that in this video there are 2 goals, scored at the 10th and 20th minutes respectively.
    Now I want Python to seek from the 10th minute to see if it can detect the 2 replay scenes, as in Transit1.mp4, where the Python code prints the frame numbers. If the second replay scene is detected, show the frame number, or better, the total time in seconds. For example, if a player scored a goal at exactly 10:00, but the replay starts at 10:20 and ends at 10:30, then it is better for the Python code to output the end of the second replay, here 10:30, half a minute after the goal. Then I can pick up the 10:30 time for FFMPEG to split the video into different clips.
    Hope you understand my logic; let me know how I can change the Python code to show the time, not the frame number.
    Thanks,
    [Attachment 61031: PythonManyTransitNOK.png]
  19. You cannot load AviSynth-language code into a Python script; you cannot mix two languages in general. AviSynth cannot be compiled as-is anyway. VapourSynth can, because it is Python: you can freeze a VapourSynth script, like any Python script, into an EXE file.

    Do not edit your video using OpenCV; that is what VapourSynth or AviSynth is for. You then encode the output of those scripts using ffmpeg or another tool. Use OpenCV for the calculations that return the frames for trimming.
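That division of labor can be sketched as follows: the detection side (the hue-mask code earlier in the thread) yields frame numbers, and ffmpeg does the cutting with stream copy. Function name and output file pattern here are illustrative:

```python
# Turn a list of detected transition frames into ffmpeg stream-copy cut
# commands, one clip per pair of consecutive transitions. Run each command
# with subprocess.run(cmd, check=True).
def cut_commands(transition_frames, fps, src):
    cmds = []
    for i, (a, b) in enumerate(zip(transition_frames, transition_frames[1:])):
        cmds.append([
            "ffmpeg", "-y", "-i", src,
            "-ss", f"{a / fps:.3f}", "-to", f"{b / fps:.3f}",
            "-c", "copy", f"clip{i}.mp4",
        ])
    return cmds
```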
    Last edited by _Al_; 28th Sep 2021 at 18:23.
  20. Turn time into frames and work with frames. If your C# cannot return a frame number, only a time, convert it.

    So the Python input is seconds or a frame. If seconds, get the corresponding frame:
    frame = seconds * fps (frames per second)
    In OpenCV:
    Code:
    vidcap = cv2.VideoCapture('Transit1.mp4')
    fps = vidcap.get(cv2.CAP_PROP_FPS)
    start_frame = int(in_seconds * fps)
    end_frame = int(out_seconds * fps)
    vidcap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)  # start reading from that frame
    .
    .
    .
    if frame_number == end_frame:
        break
    # fill in the transition-hunting code from before
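Going the other way, from a detected frame number back to a time (as asked above), is the same formula inverted; a sketch (the function name is mine; fps would come from vidcap.get(cv2.CAP_PROP_FPS)):

```python
# Convert a frame number back into an mm:ss.mmm timestamp, e.g. for ffmpeg cutting.
def frame_to_timestamp(frame_number, fps):
    total = frame_number / fps
    minutes = int(total // 60)
    seconds = total - minutes * 60
    return f"{minutes:02d}:{seconds:06.3f}"
```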
  21. Member
    Hello:

    I spent quite some time finding a library for C#, EmguCV: https://github.com/emgucv

    I can get the average HSV values for each frame of an MP4 video using C# code like this:

    Code:
    using Emgu.CV;
    using Emgu.CV.Structure;
    using Emgu.CV.CvEnum;
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.IO;
    using System.Threading.Tasks;
    using System.Windows.Forms;
    
    namespace EmguCVGetHUEMP4Form
    {
        public partial class Form1 : Form
        {
            public const string Transit_B1_MP4_File = @"D:\Videos\Backgrounds\TransitMP4s\transit_B1.mp4";
            public const string Transit_B1_HSV_File = @"D:\Videos\Backgrounds\transit_B1_HSV.txt";
    
            public static List<string> File1_HSV_Values = new();
            private VideoCapture videoCapture;
            private Mat _frame;
            double FPS;
            double totalFrames;
            int currentFrameNo;
    
            public Form1()
            {
                InitializeComponent();
            }
    
            private void button1_Click(object sender, EventArgs e)
            {
                File1_HSV_Values.Clear();
                videoCapture = new VideoCapture(Transit_B1_MP4_File);
                totalFrames = videoCapture.Get(CapProp.FrameCount);
                for(int i = 0; i < (int)totalFrames; i++)
                {
                    Mat image = videoCapture.QueryFrame();
                    using Mat hsv = new();
                    CvInvoke.CvtColor(image, hsv, ColorConversion.Bgr2Hsv);
                    if (hsv != null)
                    {
                        Mat[] channels = hsv.Split();
                        RangeF H = channels[0].GetValueRange();
                        RangeF S = channels[1].GetValueRange();
                        RangeF V = channels[2].GetValueRange();
                        MCvScalar mean = CvInvoke.Mean(hsv);
                        string add_hsv1 =
                            string.Format("# {0} Avg-H {1} Avg-S {2} Avg-V {3}", i, mean.V0, mean.V1, mean.V2);
                        File1_HSV_Values.Add(add_hsv1);
                    }
                }
                videoCapture.Stop();
                videoCapture.Dispose();
                string total_hsv = string.Join("\n", File1_HSV_Values);
                File.WriteAllText(Transit_B1_HSV_File, total_hsv);
                Environment.Exit(0);
            }
        }
    }
    However, when I want to use the HSV values to find the replay transit frames in Transit_B1.mp4, it is difficult to know which frames to pick. As I can see in the HSV file, the average H, S, V values of the matching frames are rather similar to those of other, non-matching frames.

    # 1114 Avg-H 86.0346649546682 Avg-S 77.16026777102623 Avg-V 89.85077281057099
    # 1115 Avg-H 86.35139672550154 Avg-S 74.7205192660108 Avg-V 91.62889117959105
    # 1116 Avg-H 86.38428035783178 Avg-S 74.67535023630401 Avg-V 91.62787241994599

    # 2323 Avg-H 85.72932641300154 Avg-S 77.2218605324074 Avg-V 89.84375301408178
    # 2324 Avg-H 86.5927071277006 Avg-S 70.41023160204475 Avg-V 94.54423466435185
    # 2325 Avg-H 87.06406430844908 Avg-S 65.36885127314814 Avg-V 98.05888913001543

    In my frames extracted from Transit_B1.mp4, the matching frames are 1115 and 2324.
    Please advise on other conditions I can use to find the matching frames.
    Crop some area, perhaps? But which area should I pick?
    Thanks,

    Image
    [Attachment 61099 - Click to enlarge]

    Image
    [Attachment 61100 - Click to enlarge]
  22. Member
    Forgot to upload Transit_B1.mp4.
    Image Attached Files
  23. Really? You can't figure out a crop that differentiates that frame from others?

    Code:
    LWLibavVideoSource("transit_B1.mp4", cache=false, prefer_hw=2) 
    ConvertToYV24()
    src = last
    
    x1 = 8 # frame 1114 and 2323
    y1 = 88
    w1 = 180
    h1 = 330
    
    x2 = 580
    y2 = 88
    w2 = 180
    h2 = 330
    
    rect1 = Crop(x1,y1,w1,h1).MaskHS(StartHue=285, EndHue=295, minSat=5, maxSat=10)
    rect2 = Crop(x2,y2,w2,h2).MaskHS(StartHue=285, EndHue=295, MinSat=5, MaxSat=10)
    testclip = Overlay(rect1, rect2, mode="multiply")
    
    wf = WriteFileIf(testclip, "match.txt", "(AverageLuma()>220)", "current_frame", append = false)
    
    Overlay(src, rect1, x=x1, y=y1)
    Overlay(last, rect2, x=x2, y=y2)
    Overlay(last, wf, x=300, y=y2)
    You can probably get away with just the right one.
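The same two-patch test in the thread's Python/OpenCV form would be: crop each rectangle, build an in-range HSV mask, and require both masks to be mostly white. A sketch using a pure-NumPy stand-in for cv2.inRange (the function name is mine; the 140 threshold is from the earlier Python posts):

```python
import numpy as np

# Pure-NumPy equivalent of cv2.inRange + np.average as used earlier in the
# thread: a patch "matches" when enough of its pixels fall inside the HSV box.
def patch_matches(hsv_patch, lower, upper, threshold=140):
    inside = np.all((hsv_patch >= lower) & (hsv_patch <= upper), axis=-1)
    mask = inside.astype(np.uint8) * 255   # same 0/255 mask cv2.inRange returns
    return float(np.average(mask)) > threshold
```

A transition frame is then one where patch_matches() is True for both cropped rectangles.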
  24. Member
    Originally Posted by jagabo View Post
    Really? You can't figure out a crop that differentiates that frame from others?

    Code: (the same script as in the post above)
    You can probably get away with just the right one.
    Hello:
    Thanks for your code. I just want to know how you arrived at the parameters: MaskHS(StartHue=285, EndHue=295, minSat=5, maxSat=10).
    Thanks,
  25. This time I just eyeballed it using the image in post #105 then narrowed the range until it covered just the areas in question. I also gave you a method using an Animate() script in post #80.

    Looking at the numbers in post #111, I think the software you are using uses the opposite rotation for hue.
  26. OK, I uploaded a utility that will help you tremendously: it gets all the OpenCV crops and range values for you, so you can just copy/paste them into your Python script.

    download: Comparator.7z

    Unzip it into a directory; it contains a bunch of directories and loose files. It is a portable setup, nothing is installed. Use the Comparator.exe you find in there after unzipping; you can put a shortcut to it on your desktop. The utility is self-contained: you do not need VapourSynth, OpenCV, or Python installed. (Of course you will have Python and OpenCV installed for other purposes, since you are actually using Python code to come up with those frame numbers based on values obtained with Comparator.)

    You can run it as-is, without any file; it will load color bars so you can investigate how it works. You can play video, find frames, and navigate in it.
    You can even load several files simultaneously, even VapourSynth scripts! Use the scroll wheel on your mouse to zoom in or out for convenience.

    Or just drop your video file on it. I will explain the workflow with your transit_L1.mp4 file:

    1. Drop transit_L1.mp4 on Comparator.exe, or use the command line: Comparator.exe transit_L1.mp4
    (use correct paths if working from the command line)

    2. Set it up as in the first image. Find the first clear transition graphic (I found frame 1096), then on that frame use your mouse to select a crop approximately as the image shows.
    Copy that OpenCV cropping data from the GUI (select it in the GUI and press Ctrl+C) and paste it in as the first rectangle in your Python code. Then look at the average values in the GUI, this time under opencv_HSV, for that selection, and use them in the Python code for lower and upper. If it says the H average is 38.9, then use something like 30 for the lower hue and 45 for the upper. The saturation average is 198.5, so I chose 125 to 255. HSV values are (hue, saturation, value).

    The Python editor where you paste those values from the GUI is whatever you have on your PC to edit Python code; that is also the app you will put on screen. I used IDLE, but you can use anything else: IDLEX and others.

    3. On the same frame, select the other rectangle: again use the mouse to make another selection, then paste those crop coordinates from the GUI and come up with lower and upper values based on the averages in the GUI.
    In our example the lower and upper values are the same for both selections, but a different color transition will give you something else; for that transparent soccer ball in your previous videos, we used two different colors, blue and purple.

    4. Run the Python code; it will return:
    Code:
    1095
    1543
    >>>
    The code used, as seen in the images:
    Code:
    import numpy as np
    import cv2
    
    first_found_frame = -11
    def process_img(img_rgb, frame_number):
        r1 = cv2.cvtColor(img_rgb[74:94, 324:366], cv2.COLOR_BGR2HSV)
        lower = np.array([30,125,0])
        upper = np.array([45,255,255])
        mask1 = cv2.inRange(r1, lower, upper)
        if np.average(mask1) > 140:
            r2 = cv2.cvtColor(img_rgb[304:332, 460:492], cv2.COLOR_BGR2HSV)
            lower = np.array([30,125,0])
            upper = np.array([45,255,255])
            mask2 = cv2.inRange(r2, lower, upper)
            if np.average(mask2) > 140:
    
                global first_found_frame
                if abs(frame_number-first_found_frame) >10:
                    first_found_frame = frame_number
                    print(frame_number)
                
    vidcap = cv2.VideoCapture(r'D:\downloads\transit_L1.mp4')
    frame_number = 0
    while True:
        success,image = vidcap.read()
        if not success: break
        process_img(image, frame_number)
        frame_number += 1
    vidcap.release()
    
    ##https://answers.opencv.org/upfiles/15186768416857101.png
    The utility is quite elaborate, so ask whatever you need to know about working with it; I built it on top of a VapourSynth previewer.

    Also, OpenCV always loads video as BT.601, and its RGB-to-YUV conversion is full range to full range; the values you read are consistent with that, so calculations run later are correct.
    If using it for other, non-OpenCV purposes, uncheck opencv_YUV and opencv_HSV to speed things up, because there are lots of arrays to handle on top of the processing.

    Also, I forgot to default AVI files to load with the ffms2 source plugin; I left them using AviSource, which needs a codec in the Windows system. Later I will change it to ffms2 so those codecs are not needed. For now, for example, a DV AVI needs a DV codec on the PC, etc. On the other hand, AviSource can load frameserver AVIs, like those from Sony Vegas or Premiere; I see no reason why that would not work. But perhaps that is not your problem or case.
    [Attachment 61104: 1.png]
    [Attachment 61105: 2.png]
    Last edited by _Al_; 3rd Oct 2021 at 13:27.
  27. Member
    Hello:

    Thank you so much for such intelligent work, but I need some time to learn how to use it.
    I copied Transit_L1.mp4 to the same folder as Comparator.exe and ran this command:
    C:\SoccerVideos\Python\Comparator>Comparator.exe transit_L1.mp4

    I visited frame #1096 and cropped a small area; look at the picture.
    How can I read the coordinates for this crop area?
    Is it: corner (332, 72) with a size of (72, 30)?
    That does not seem to match your Python code.
    Let me know if I understand correctly,
    or, if you can see the picture clearly enough, tell me what you think the coordinates of my crop are.
    Thanks,
    Image
    [Attachment 61106 - Click to enlarge]
  28. Your coordinates are off screen for me; grab the right edge of your GUI and stretch it to the right. The readings are on the right side, in the "Live_Crop" line. It is something like [72:102, 3... and I cannot read the rest. Of course your values will be different from mine, because you selected your own rectangle!

    Also:
    the player does not always keep the aspect ratio; that is not important for this kind of work.

    Pressing "R" sets pixels 1:1 (correct aspect ratio). The app also remembers the last window sizes and positions when closing; it even remembers the last frame shown, the last clip index selected (if comparing several videos), etc.

    The app uses hotkeys: R to reset the image to 1:1, Space Bar to start and stop playback, and many more.

    If the player is still small, grab a corner and increase its size as much as you like. The "R" key always sets it to 1:1 with the correct aspect ratio.

    Also, if you grab the corner of a selection while holding SHIFT, the selection keeps the same aspect ratio as video.width/video.height.
    Last edited by _Al_; 3rd Oct 2021 at 13:40.
  29. You can always zoom into the video (use the mouse scroll wheel to zoom in or out) and then make your selection.
    You can also fine-tune a selection by:
    - grabbing the whole selection (clicking inside it) and moving it as a whole
    - grabbing a corner and stretching it
    - grabbing a single edge and moving it
    [Attachment 61107: crop.png]
  30. Member
    Hello:
    I tried again to make the image bigger; let me know if you can see it now.
    Let me know the coordinates of my crop.
    Thanks,
    Image
    [Attachment 61108 - Click to enlarge]