VideoHelp Forum
  1. Originally Posted by zydjohn View Post
Do you mean these shades of gray?
No, there are some slight colors in that image.

I mean the image below. Drop it onto Comparator and look at the YUV values (U and V are always 128 for a shade of gray), or look at the RGB values (all three channels are equal if it is a shade of gray; that is the RGB sign of no color). A shade of gray carries no color information whatsoever. You can check the HSV values in that GUI as well.

What it means for you:
If you must select a rectangle for a transition where you cannot see any color, only some sort of gray (anything from white to black), then use a YUV rectangle selection, not HSV. So far you have avoided this by selecting other, colored rectangles, but if you have to, use YUV.
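To do the same check in code rather than in the GUI, here is a minimal sketch; the file name and rectangle coordinates are just placeholders. For a gray area the printed U and V averages sit near 128 and the R, G, B averages are nearly equal.
Code:
import cv2
import numpy as np

frame = cv2.imread("frame.png")                      # placeholder file; OpenCV loads images as BGR
rect = frame[178:232, 298:324]                       # placeholder rectangle: rows first, then columns

rect_yuv = cv2.cvtColor(rect, cv2.COLOR_BGR2YUV)
y, u, v = [float(np.average(rect_yuv[:, :, i])) for i in range(3)]
b, g, r = [float(np.average(rect[:, :, i])) for i in range(3)]

print(f"YUV averages: {y:.1f}, {u:.1f}, {v:.1f}")    # gray -> U and V near 128
print(f"RGB averages: {r:.1f}, {g:.1f}, {b:.1f}")    # gray -> all three nearly equal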
    Last edited by _Al_; 6th Oct 2021 at 13:12.
  2. Beware that simply using luma isn't very selective. Many different colors have the same luma value. For example, in this image:

[Attachment 61167]


    all 65536 pixels have the same luma value, 126. So you'll also want to make sure U and V are ~128 if looking for greys.

* Note that the image had the same luma value everywhere before the conversion to RGB needed to post it. If you convert the image back to YUV you will find that many of the luma values are no longer 126. That's because many of the original YUV values were out of gamut: they would have required RGB values below 0 or above 255, so they were clamped to the 0 to 255 range in the posted image. Converting back to YUV therefore gives inaccurate values.
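If you want that as a programmatic test rather than an eyeball check, a minimal sketch could look like this (looks_grey is a hypothetical helper, not part of any code posted in this thread; the 190 threshold is the same roughly 75% coverage rule used elsewhere here):
Code:
import cv2
import numpy as np

def looks_grey(region_bgr, tol=6):
    """True if most pixels have U and V close to 128, i.e. essentially no chroma."""
    yuv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2YUV)
    lower = np.array([0,   128 - tol, 128 - tol], dtype=np.uint8)   # any luma, near-neutral chroma
    upper = np.array([255, 128 + tol, 128 + tol], dtype=np.uint8)
    mask = cv2.inRange(yuv, lower, upper)
    return np.average(mask) > 190        # about 75% of the pixels must fall inside the range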
    Last edited by jagabo; 6th Oct 2021 at 13:44. Reason: added note at the end
3. Hello:
For those shades of gray (U and V always 128), I should use YUV, not HSV, right?
But what condition should I use in Python?
Like this:
if np.average(mask1) > 240:
By the way, will mask1 be computed the same way, like this?
mask1 = cv2.inRange(r1, lower, upper)
But how do I define (lower, upper) in YUV? It should be different from HSV, right?
Please give some Python code samples.
I have almost finished converting the code to C#, but I need some more time for testing.
If necessary, I can change some of the code to use YUV.
    Thanks,
4. If you want to select shades of gray only, you do this (I already posted it before, just copy/pasting it):
    Code:
#green values are taken from GUI (they are YUV averages, not HSV)
if np.average(mask1) > 190:
    r2 = cv2.cvtColor(img_rgb[178:232, 298:324], cv2.COLOR_BGR2YUV)  # selection of whitish area
    lower = np.array([200, 115, 115])  # GUI shows 241.6, 128.0, 130.7 as the YUV average
    upper = np.array([255, 140, 140])
    mask2 = cv2.inRange(r2, lower, upper)
Also, to actually see that jump in Hue, copy/paste the script below, name it something.py or something.vpy (only the extension matters), then drop that VapourSynth script onto Comparator.exe.

It should work even if you do not have VapourSynth (or Python) installed.

Navigate to frame zero and check the YUV values, then move the slider. Or use Shift+"<" and Shift+">" to step back and forth between frame 0 and frame 1, or bookmark frame zero together with some other frame and switch between them with "F1" and "F2". As color starts to be introduced, watch how the "U" value goes up only slightly, whereas the Hue value jumps rapidly from 0 to some blue hue value; you cannot catch it in any decent range at all. That is why you avoid the Hue method when the rectangle is gray: a disturbance from encoding or editing can introduce a slight color tint, and in Hue space you are suddenly way off, as opposed to only a slight change in YUV space.
    Code:
import vapoursynth as vs
from vapoursynth import core

# build a 256-step horizontal grayscale gradient in RGB, 128 frames long
gradient = core.std.BlankClip(width=1, height=360, color=(0, 0, 0), format=vs.RGB24, length=128)
for c in range(255):
    add = core.std.BlankClip(gradient, width=1, height=360, color=(c, c, c))
    gradient = core.std.StackHorizontal([gradient, add])
    del add
gradient = gradient.resize.Point(width=720, height=360, format=vs.YUV420P8, matrix_s='709')
# add the frame number to the U plane, so a slight color tint creeps in as you step through frames
color_gradient = gradient.std.FrameEval(lambda n: gradient.std.Expr(['', f'x {n} +', '']))
color_gradient.set_output(0)
    Last edited by _Al_; 6th Oct 2021 at 14:25.
5. To confuse things even more: as long as you have a GUI and do not need to look at an image and intuitively pick a color the way a human would, that is, as long as you have software that returns averages for some solid area, you can always select by YUV values rather than Hue.

jagabo: does always selecting in YUV have downsides the way Hue does (gray ranges, the red area wrapping around 0) when selecting ranges for some sort of solid color area?
6. Hello:
What you are telling me is: if, when using HSV, the np.average(mask) value is low, then try YUV instead, right?
So, up to now, the only case for YUV is Transit_C1/Transit_C2/Transit_C3.
For the others I can use HSV, right?
  7. Originally Posted by _Al_ View Post
Does always selecting in YUV have downsides the way Hue does (gray ranges, the red area wrapping around 0) when selecting ranges for some sort of solid color area?
    For solid color areas I think YUV will work better. HS will work better for some gradients.
  8. Originally Posted by zydjohn View Post
    Hello:
What you are telling me is: if, when using HSV, the np.average(mask) value is low, then try YUV instead, right?
No, it is when the Hue average value is close to zero.
The np.average(mask) value has nothing to do with it. The mask holds 255 for pixels that fit the range and 0 for pixels that do not. I'd just use np.average(mask) > 190 all the time, for whatever space, YUV or HSV. It could probably be much higher; I really don't know. That number represents what share of off-range pixels you tolerate in the rectangle: 190 means 190/255 = 0.75, so about 75% of the pixels in the rectangle need to fit your range. If all of the rectangle's pixels are in range, the average is 255, and 255/255 = 1, a 100% hit. If half of them are in range, the average is 128, and 128/255 = 0.5 (50%).
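The same arithmetic written out as a fraction; this is just a sketch, and the dummy mask below only stands in for what cv2.inRange returns (the 54x26 shape matches the whitish rectangle from the earlier snippet):
Code:
import numpy as np

# stand-in for a real mask from cv2.inRange: 255 where a pixel fits the range, 0 where it does not
mask1 = np.zeros((54, 26), dtype=np.uint8)
mask1[:41, :] = 255                      # pretend about 76% of the rectangle matched

coverage = np.average(mask1) / 255.0     # fraction of in-range pixels
print(f"{coverage:.0%} of the rectangle is in range")
if coverage > 0.75:                      # equivalent to np.average(mask1) > 190, since 190/255 is about 0.75
    print("rectangle accepted")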

Originally Posted by zydjohn View Post
So, up to now, the only case for YUV is Transit_C1/Transit_C2/Transit_C3.
For the others I can use HSV, right?
No. As long as you do not select a white rectangle, you can use Hue or YUV. You are fine selecting those two golden rectangles for C1, C2, C3; leave them. If you were selecting a white rectangle, which you did not, you would need to use a YUV check for that rectangle.
9. Hello:
I have a new transit mp4 file, Transit_R1.mp4. I want to use Comparator.exe to find cropped areas for detecting the transit scenes.
What I did is crop two areas: one is the blue letter S (the middle letter of USA), and the other is a small piece of the red stripe under the USA letters. See the picture.
However, when I run my Python code, I find that besides the correct begin/end of the transit scenes, some frames around #850 also satisfy the HSV mask-average conditions.
Please look at Transit_R1.mp4 and advise which areas I should crop to avoid such conflicts.
Thanks,
(This is the last of my transitions so far!)
[Attachment 61189]
  10. So you don't see something in that frame that's unlikely to appear in other shots? How would you detect it?
11. Hello:

I did quite a lot of testing with many video clips. Most of the Python code works, but I have an issue with one big type A video. I call it Big_A1.mp4; it is the same as Transit_1.mp4 from my first question in this thread.
I used the following Python code to detect transit scenes in this video file:

    Code:
import numpy as np
import cv2

first_found_frame = -11

def process_img(img_rgb, frame_number):
    img_hsv = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2HSV)     # note: this converts to HSV, not YUV
    averageV = np.average(img_hsv[:,:,2])                  # average V (brightness) of the whole frame
    if 194.85 < averageV < 196:                            # intended range check around 195
        global first_found_frame
        if abs(frame_number - first_found_frame) > 100:
            first_found_frame = frame_number
            print(frame_number)

vidcap = cv2.VideoCapture(r'Big_A1.mp4')
frame_number = 0
while True:
    success, image = vidcap.read()
    if not success: break
    process_img(image, frame_number)
    frame_number += 1
vidcap.release()
I ran this and got the following results:
C:\SoccerVideos\Comparator>python DetectA2CropFrame.py

C:\SoccerVideos\Comparator>
The code didn't find any type A transit scenes, but I can see at least six of them with my own eyes.
Can you tell me what is wrong with the code?
By the way, I also tried cropping one or two areas and using the same approach as in the other types' Python code, like this:
mask1 = cv2.inRange(r1, lower, upper)
and then checking whether np.average(mask1) meets some condition to decide whether the underlying image is a transit scene. But I found that is not easy here: for most of those solid red areas, the np.average(mask1) value is rather low, near 0.
Please advise on how to fix this issue.
Thanks,
  12. I see average V values around 170, not 195.
  13. Originally Posted by jagabo View Post
    I see average V values around 170, not 195.
OpenCV loads frames as RGB (converted from YUV; I could not make it capture YUV 4:4:4 directly), and its RGB-to-YUV conversion is full range to full range, so OpenCV may report somewhat different values.

zydjohn, I have been talking about trying to catch red on this page for a while: if the hue is near the wrap-around at zero, you cannot do it with a single condition, so just use YUV. Use the red rectangle and some white rectangle.
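For anyone wondering what the wrap-around means in practice, here is a minimal sketch; the patch, the hue limits and the YUV bounds are placeholder values, not taken from your clip. In HSV, red needs two inRange calls combined, while in YUV a single range around the measured average is enough.
Code:
import cv2
import numpy as np

# a dummy solid dark-red patch stands in for the cropped rectangle (img_rgb[y1:y2, x1:x2] in the real code)
region_bgr = np.zeros((30, 30, 3), dtype=np.uint8)
region_bgr[:] = (40, 30, 180)                                       # BGR

# Hue route: OpenCV stores H as 0-179 and red straddles the wrap at 0, so two ranges are OR-ed together
hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
mask_lo = cv2.inRange(hsv, np.array([0, 80, 80]), np.array([10, 255, 255]))
mask_hi = cv2.inRange(hsv, np.array([170, 80, 80]), np.array([179, 255, 255]))
mask_red_hsv = cv2.bitwise_or(mask_lo, mask_hi)

# YUV route: one range around the measured average does the job
yuv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2YUV)
mask_red_yuv = cv2.inRange(yuv, np.array([40, 90, 160]), np.array([120, 120, 220]))

print(np.average(mask_red_hsv), np.average(mask_red_yuv))           # both 255.0 for this patch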
14. Hello:

I came up with the following Python code; it works with Big_A1.mp4:

    Code:
import numpy as np
import cv2

first_found_frame = -11

def process_img(img_rgb, frame_number):
    img_hsv = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2HSV)
    averageV1 = np.average(img_hsv[:,:,2])          # average HSV brightness of the whole frame
    if averageV1 >= 123.0:
        img_yuv = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2YUV)
        averageV2 = np.average(img_yuv[:,:,2])      # average YUV V (red chroma) of the whole frame
        if averageV2 > 180:
            global first_found_frame
            if abs(frame_number - first_found_frame) > 100:
                first_found_frame = frame_number
                print(frame_number)

vidcap = cv2.VideoCapture(r'Big_A1.mp4')
frame_number = 0
while True:
    success, image = vidcap.read()
    if not success: break
    process_img(image, frame_number)
    frame_number += 1
vidcap.release()
    When I run it:
    C:\SoccerVideos\Comparator>python DetectA2CropFrame.py
    1683
    2675
    8740
    10068
    10880
    11257

I checked Big_A1.mp4 with Comparator.exe and the results look OK. Even though the found frames are not at the center of the transit scenes but rather near their beginning, I think that is fine for detecting them. I will do more testing to verify.

However, I found yet another difficult transit video clip: Transit_S1.mp4.
I want to use Comparator.exe to find key frames with some unique property, but the only thing I can find is that the central area is very bright. I have no idea how to detect that with YUV or HSV.
[Attachment 61238]

    Please advise!
    Thanks,
15. For the center: average luma. For the cyan area near the center: UV or HS.
16. Come on, man, it should be routine by now: still using the latest code we agreed on, selecting two rectangles, with the rule that if the area is white (any shade of gray, black included) or red, you always check in YUV. I guess you could use YUV all the time.
Transit_S1.mp4 (the attached images show Big_A1.mp4 in the code text; do not pay any attention to that).
    Code:
    import numpy as np
    import cv2
    #green values are taken from GUI
    first_found_frame = -21
    def process_img(img_rgb, frame_number):
    r1 = cv2.cvtColor(img_rgb[214:242, 368:398], cv2.COLOR_BGR2YUV)   # bright whitish area near the center
    lower = np.array([250,122,122]) # GUI average: 254.9, 127.9, 127.9
    upper = np.array([255,130,130])
    mask1 = cv2.inRange(r1, lower, upper)
    if np.average(mask1) > 190:
        r2 = cv2.cvtColor(img_rgb[288:304, 350:430], cv2.COLOR_BGR2YUV)   # cyan area near the center
        lower = np.array([155, 150, 65]) # GUI average: 161.0, 157.1, 83.0
        upper = np.array([166, 162, 90])
        mask2 = cv2.inRange(r2, lower, upper)
            if np.average(mask2) > 190:
                global first_found_frame
                if abs(frame_number-first_found_frame) >20:
                    first_found_frame = frame_number
                    print(frame_number)
    
    
                    
    vidcap = cv2.VideoCapture(r'Transit_S1.mp4')
    frame_number = 0
    while True:
        success,image = vidcap.read()
        if not success: break
        process_img(image, frame_number)
        frame_number += 1
    vidcap.release()
    Code:
    1426
    2187
[Attached thumbnails: S1_1.png (Attachment 61242), S1_2.png (Attachment 61243)]

    Last edited by _Al_; 11th Oct 2021 at 10:17.
17. And for that previous Big_A1.mp4:

these should take you only a minute to come up with
[Attached thumbnails: 1.png (Attachment 61244), 2.png (Attachment 61245)]

18. Hello:
I know I should use YUV for red/white, but I don't have a clear idea of how to pick the lower/upper limits.
Are there any rules for choosing such YUV lower/upper limits for red/white colors?
    Thanks,
19. Take the average values; your lower bound would be the average minus 10 and your upper bound the average plus 10, with a minimum of 0 and a maximum of 255.

Or make it tighter, a bit less than 10. I don't know; I just pick a number in that ballpark, not really overthinking it. It just works, because similar frames get rejected when they are close to an already-found first frame.

You can speed up your code if that first rectangle is really small; the second one can be bigger, especially if your code is looking for one type of transition. I'm not sure exactly what you do.
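As a minimal sketch of that rule of thumb (bounds_from_average is just a hypothetical helper name):
Code:
import numpy as np

def bounds_from_average(avg, tol=10):
    """Turn a GUI-reported average (Y, U, V) into inRange bounds, clamped to 0..255."""
    avg = np.array(avg, dtype=np.int32)
    lower = np.clip(avg - tol, 0, 255).astype(np.uint8)
    upper = np.clip(avg + tol, 0, 255).astype(np.uint8)
    return lower, upper

# e.g. the whitish rectangle measured earlier at roughly 254.9, 127.9, 127.9:
lower, upper = bounds_from_average([255, 128, 128])   # -> [245 118 118], [255 138 138]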
  20. Can't this tool be modified to show min/max YUV and/or HSV?

  21. I added those min/max readings.

Besides VapourSynth scripts (VapourSynth does not have to be installed; both API3 and the new API4 are supported), it should load Avisynth scripts as well. I tested with Avisynth+ installed, but it should load a script even if Avisynth is not installed; in that case the script needs to load all of its plugins itself. The package includes AviSynth.dll (for Avisynth+) and devIL.dll (not sure if that one is needed). Anyway, it is better to have Avisynth installed with plugins auto-loading as usual.

Some bugs were fixed.
Bookmarks can now save frames and the current crops as well, so the GUI remembers crop selections via those bookmarks.

Computing those readings adds a lot of overhead to the GUI, so if the OpenCV YUV or OpenCV HSV readings are not needed, uncheck them on the left; the GUI will then skip those values and the preview is faster.

    Comparator.7z
Sorry for the size (60 MB); OpenCV is packaged as a whole and quite heavy, and I am not able to include only the parts I need.
    Last edited by _Al_; 11th Oct 2021 at 23:55.
22. Hello:
    Thank you very much for your tools.
Yesterday, after testing more than 30 long video clips, I found one short clip containing a replay transition of type A, like Transit_A1, but its YUV reading is something like 235.0, 128.5, 119.0.
Its V value, 119, is just one point below the lower limit of [220, 122, 120] (119 vs. 120), so the Python code didn't catch the transition. I then changed the lower limit to something like
[220, 122, 100], and the code works.
From my experience, it is better to give a slightly wider range for the lower/upper limits.


