VideoHelp Forum

  1. Originally Posted by chris319 View Post
    Here is the output of your Python demo:

    >>>
    Traceback (most recent call last):
    File "C:\Archive\Python\test2.py", line 1, in <module>
    from Vapoursynth import core
    ImportError: No module named 'Vapoursynth'
    >>>
    It's case sensitive

    Code:
    from vapoursynth import core
  2. Sorry, I just typed it without checking.
    Code:
    rgb = core.resize.Bicubic(clip, matrix_in_s='709', format=vs.RGB24, range_in_s='limited')
    Also fix the range argument, though in this case you do not have to specify range at all; I just put it there to show it.

    Those '_s' suffixes at the end mean the argument value is a string;
    the matrix, transfer and color primaries string values you can get from here.

    Without the '_s' it is an integer, from the specs, around page 405.

    '_in' means the argument describes the input; without it, it describes the output.
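    To illustrate that string/integer duality, here is a small hypothetical lookup table; the strings are the ones the '_s' argument variants accept, and the integers are the ITU-T H.273 codes that the plain variants take. The table itself is only an illustration, not part of vapoursynth's API:

    ```python
    # Hypothetical mapping illustrating the string/integer duality described above:
    # matrix_in_s takes the string form, plain matrix_in takes the H.273 integer code.
    MATRIX_CODES = {
        "709": 1,       # BT.709
        "470bg": 5,     # BT.601 (625-line systems)
        "170m": 6,      # BT.601 (525-line systems, SMPTE 170M)
        "2020ncl": 9,   # BT.2020 non-constant luminance
    }

    def matrix_to_int(name: str) -> int:
        """Translate a matrix string to its integer code."""
        return MATRIX_CODES[name]

    print(matrix_to_int("709"))  # BT.709 -> 1
    ```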
  3. sorry, I just typed it, without checking
    Not cool to post buggy code and send a first-timer down the wrong path like that.

    When I post code, unless I'm demonstrating a problem, it's been tested and works, guaranteed.

    Code:
    >>> 
    Traceback (most recent call last):
      File "C:\Archive\Python\test2.py", line 1, in <module>
        from vapoursynth import core
    ImportError: No module named 'vapoursynth'
    >>>
    So far we're off to a flying start with Python.

    LOL easy, eh?
  4. There are always problems; you should know that. Read the manual on how to install everything. http://www.vapoursynth.com/doc/gettingstarted.html Take it slow.
  5. If we're going to wind up with ffmpeg doing the actual video manipulation, why are we even bothering with Python and Vapoursynth? Why not just write an ffmpeg script?

    You have to have patience in dealing with ffmpeg's quirks (to put it politely) and there is a lot of back-and-forth to get it just right, but it's doable and will handle the audio automatically.

    Again, I am trying to conform to a standard designed for YRGB (EBU r 103) but a YUV deliverable is expected. Not easy.
  6. Originally Posted by chris319 View Post
    If we're going to wind up with ffmpeg doing the actual video manipulation, why are we even bothering with Python and Vapoursynth? Why not just write an ffmpeg script?
    I thought this topic was about creating a GUI for graphics and manipulations? So the values in the GUI (sliders, dials, whatever) would correspond to (be entered into) the vapoursynth code and then be piped to ffmpeg or something else.



    You can use whatever you want to do the actual video manipulation, and in any permutation. E.g. you can use vapoursynth to perform the manipulations and ffmpeg to write the output. Or use something else to write and skip ffmpeg (the binary) entirely, or do parts in some and parts in others.

    Some of the filters actually share the same code with avisynth. Many are ported from avisynth, because many of the filters were "born" there first.

    There are ways to get things into other programs, even ones like Premiere, Resolve, etc., through virtual files and avfs. Basically it enables you to link scripts to other software without encoding large intermediate files. It's like piping between programs that usually don't accept pipes.

    You have to have patience in dealing with ffmpeg's quirks (to put it politely) and there is a lot of back-and-forth to get it just right, but it's doable and will handle the audio automatically.
    All software has quirks. Avisynth, vapoursynth and ffmpeg have major quirks. High-end professional software has them too. Broadcast software has major issues and "gotchas" that users must know about, and workarounds that you must use. I don't know of any software that "just works" perfectly 100% in all situations without any quirks, or that doesn't have some "gotcha!" scenarios. Every workflow has compromises; it's all about finding one that works for your situation and meets your needs with, hopefully, the least amount of pain.
  7. Once again, Vapoursynth, avisynth and Python use ffmpeg to manipulate the video, do they not?
  8. Member Budman1 (joined Jul 2012, Northwest Illinois, USA)
    Visual Studio -> Visual Basic, C, C++, C#. Link to Linux? Compiles to EXE.

    Gui Creation:
    I can't arrange or design a shower curtain to look nice, so the layout sucks, I know. But it's a GUI, it calls any external program you want to use without actually compiling the external utility, and it was created with Visual Basic.

    [Attachment 47398]
  9. My original idea is crashing and burning.

    On my big computer at home it works OK except for that pesky flicker that makes the output video unusable.

    I'm on the road now and my little tablet is having trouble setting up pipes and thus cannot read pixels out of a video file. Both the tablet and my home computer run Windows 10 so you would think if it runs on one it would run on the other, but no such luck.

    It would have been nice to be able to adjust video levels interactively, and that part works well on my home computer until it comes time to export. The interactive part is nice and responsive considering the large amount of computation it has to do per frame. But then when exporting the video we pick up that flicker, which I can't get rid of despite trying all sorts of fixes and workarounds.

    It might be possible to use the interactive portion to set values to plug into a generic ffmpeg script. As I am manipulating pixel values directly in my program, one wonders if those numbers would give the same results when plugged into an ffmpeg script.

    The scope part of my program works well. It reports the maximum and minimum values for Y, R, G and B so the user can tell if his video is in compliance. It also hard clips at the r 103 limits. Yes, I know there might be recoverable detail above 235, a lot of which is ringing artifacts. The user can either reduce the gain to bring this detail under 236 or use the camera iris or control the subject matter better.
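    The scope-and-clip logic described above can be sketched in a few lines of plain Python. This is only an illustration, assuming 8-bit code values and the commonly cited R 103 preferred-range limits of 5 and 246 (-5% / +105% of the 16..235 nominal range); adjust if your limits differ:

    ```python
    # Minimal sketch of the min/max scope plus hard clip described above.
    # Assumes 8-bit code values and the EBU R 103 preferred-range limits
    # of 5..246; these constants are an assumption, not the post's exact code.
    R103_MIN, R103_MAX = 5, 246

    def scope(plane):
        """Report the minimum and maximum code value in one plane (Y, R, G or B)."""
        return min(plane), max(plane)

    def hard_clip(plane, lo=R103_MIN, hi=R103_MAX):
        """Clamp every sample into the legal range."""
        return [min(max(v, lo), hi) for v in plane]

    r_plane = [2, 16, 128, 235, 250]      # toy red-channel samples
    print(scope(r_plane))                  # (2, 250) -> out of compliance
    print(scope(hard_clip(r_plane)))       # (5, 246) -> clipped to the limits
    ```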
  10. Originally Posted by Budman1 View Post
    Visual Studio -> Visual Basic, C, C++, C#. Link to Linux? Compiles to EXE.

    Gui Creation:
    I can't arrange or design a shower curtain to look nice, so the layout sucks, I know. But it's a GUI, it calls any external program you want to use without actually compiling the external utility, and it was created with Visual Basic.

    [Attachment 47398]
    Can your Visual Basic program open pipes and communicate with ffmpeg that way, hopefully without flicker?

    Here again is how Ted Burke does it in C under Linux:

    https://batchloaf.wordpress.com/2017/02/12/a-simple-way-to-read-and-write-audio-and-vi...-part-2-video/
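    For anyone curious, the pipe pattern in that article looks roughly like this in Python. A stand-in child process is used here instead of ffmpeg so the sketch is self-contained and runnable anywhere, but the frame-sized read loop is the same either way:

    ```python
    # Sketch of the read-raw-frames-over-a-pipe pattern from the linked article,
    # with a stand-in child process instead of ffmpeg. With real ffmpeg the
    # command would be something like: ffmpeg -i in.mp4 -f rawvideo -pix_fmt rgb24 -
    import subprocess
    import sys

    WIDTH, HEIGHT = 4, 2
    FRAME_SIZE = WIDTH * HEIGHT * 3            # rgb24: 3 bytes per pixel

    # Stand-in "decoder": a child process that writes two raw frames to stdout.
    child = subprocess.Popen(
        [sys.executable, "-c",
         "import sys; sys.stdout.buffer.write(bytes(range(24)) * 2)"],
        stdout=subprocess.PIPE)

    frames = []
    while True:
        frame = child.stdout.read(FRAME_SIZE)  # one full frame per read
        if len(frame) < FRAME_SIZE:            # EOF (or trailing partial frame)
            break
        frames.append(frame)
    child.wait()
    print(len(frames))                          # 2 frames received
    ```

    Writing frames back out works the same way in reverse: open a second process with `stdin=subprocess.PIPE` and write one frame-sized buffer at a time.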
  11. Originally Posted by chris319 View Post
    Once again, Vapoursynth, avisynth and Python use ffmpeg to manipulate the video, do they not?
    Not necessarily.

    If you are using vapoursynth filters or avisynth filters, those manipulations are done in vapoursynth or avisynth respectively. That altered data is then sent somewhere through a pipe: to ffmpeg.exe if you want, or to something else like x264.exe. You need some way to write out the data, usually by encoding it into a physical video file.

    Some filters use the ffmpeg libraries. E.g. some input filters can use ffmpeg to decode the video to uncompressed data. But ffmpeg.exe doesn't have to be used.
  12. Originally Posted by chris319 View Post
    The scope part of my program works well. It reports the maximum and minimum values for Y, R, G and B so the user can tell if his video is in compliance. It also hard clips at the r 103 limits. Yes, I know there might be recoverable detail above 235, a lot of which is ringing artifacts. The user can either reduce the gain to bring this detail under 236 or use the camera iris or control the subject matter better.

    But you're checking this on decoded YUV, converted to RGB as an intermediate step, right? Not on the final submission?

    An additional problem that you probably don't want to hear about: when you convert to your final YUV 4:2:2 submission format, you will generate non-r103-friendly values on certain types of material, even though your scopes look OK, your min/max values are OK, and you "think" it's compliant. Certain test patterns. Text. Graphics. Broadcast overlays. The subsampling step creates the problems, and the result also depends on what upsampling algorithm the validation method uses to check RGB.
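    The subsampling pitfall can be shown numerically with a toy example, assuming BT.709 constants and normalized 0..1 RGB (an illustration only, not anyone's actual pipeline). Two individually legal pixels produce an illegal value once they share one averaged chroma sample:

    ```python
    # Toy demonstration: 4:2:2 chroma sharing can push a legal pixel out of range.
    # Uses BT.709 luma coefficients and normalized 0..1 RGB.
    def rgb_to_ycbcr(r, g, b):
        y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        return y, (b - y) / 1.8556, (r - y) / 1.5748

    def ycbcr_to_rgb(y, cb, cr):
        r = y + 1.5748 * cr
        b = y + 1.8556 * cb
        g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
        return r, g, b

    # Two horizontally adjacent, individually legal pixels: white next to pure red.
    p1 = rgb_to_ycbcr(1.0, 1.0, 1.0)
    p2 = rgb_to_ycbcr(1.0, 0.0, 0.0)

    # 4:2:2 subsampling: the pair shares one averaged Cb/Cr sample.
    cb = (p1[1] + p2[1]) / 2
    cr = (p1[2] + p2[2]) / 2

    # Reconstruct the white pixel with its own luma but the shared chroma:
    r, g, b = ycbcr_to_rgb(p1[0], cb, cr)
    print(round(r, 3))   # ~1.394, well above the legal 1.0 maximum
    ```

    Real resamplers use longer kernels than a box average, which can overshoot even more; this is why the check has to happen after the subsampling step, not before.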
  13. The final 4:2:2 submission has to be checked. It has to be decoded to RGB to check for r 103 compliance. That's part of the user's workflow. No further processing is done after the final check.
  14. Originally Posted by poisondeathray View Post
    Not necessarily.

    If you are using vapoursynth filters or avisynth filters, those manipulations are done in vapoursynth or avisynth respectively. That altered data is then sent somewhere through a pipe: to ffmpeg.exe if you want, or to something else like x264.exe. You need some way to write out the data, usually by encoding it into a physical video file.

    Some filters use the ffmpeg libraries. E.g. some input filters can use ffmpeg to decode the video to uncompressed data. But ffmpeg.exe doesn't have to be used.
    I think what you are trying to say is that they use ffmpeg to read and write the video data, same as I and Ted Burke are doing.

    Why are people pushing vapoursynth and Python on me? What is the big benefit of using them instead of ffmpeg and its filters or my own code? Cripes, I can't even get a functional example in "LOL easy" Python. All they're doing is muddying the waters.
  15. Originally Posted by chris319 View Post
    I think what you are trying to say is that they use ffmpeg to read and write the video data, same as I and Ted Burke are doing.
    No. You can use ffmpeg if you want to. But it's not necessary to use ffmpeg for anything when using avisynth or vapoursynth.

    avisynth/vapoursynth is just used as a framework for intermediate steps. You can use it for the video manipulations. Since vapoursynth is Python based, it lends itself to many possibilities for your graphics and GUI. The existing GUIs are somewhat lacking, but maybe you can contribute and help develop one.

    Why are people pushing vapoursynth and Python on me? What is the big benefit of using them instead of ffmpeg and its filters or my own code? Cripes, I can't even get a functional example in "LOL easy" Python. All they're doing is muddying the waters.

    The benefit is more flexibility and more filters than ffmpeg or ffmpeg's filters offer. (E.g. ffmpeg doesn't even have a simple levels filter, for some reason.) You can do anything ffmpeg can, plus a lot more. It's easier to filter sections and edit (you can apply different settings over different ranges of the video).

    You were looking for something that works, without flicker. I don't know why you were only getting some frames processed, but these are workflows proven to work. Even ffmpeg alone can sometimes have issues. In avs/vpy you can index source videos, which provides a higher level of frame accuracy, especially for seeking. With long-GOP videos (I, B, P frames, not intra-frame encoding), seeking can cause frame mismatches and wrong frames out of order. That might be partially contributing to your flicker issue.

    If vapoursynth doesn't meet your needs, just carry on with what you're doing and try to debug it. There has to be a reason why it's not working


    The final 4:2:2 submission has to be checked. It has to be decoded to RGB to check for r 103 compliance. That's part of the user's workflow. No further processing is done after the final check.
    That workflow where you convert to RGB, do filtering in RGB, look at scopes, hard clip, then export YUV 4:2:2 is potentially problematic. After you clip in RGB, the conversion and subsampling step to YUV can produce some non-compliant values. You're usually allowed some leeway, but I'm just pointing out that the subsampling step is the big culprit. Some broadcast software filters check for this while the filter is applied. A way you might do that in a GUI or ffmpeg is to pipe out to another instance or filter after the subsampling (but before the actual encoding). That way it's closer to the actual final submission format, instead of having to encode a physical file and go back and forth a zillion times.
    Last edited by poisondeathray; 7th Dec 2018 at 10:41.
  16. So, ruling out APIs and sticking with: load video, set sliders, press run to launch a subprocess, get a raw image, put it on screen. But you need a filter for that, or a custom-made ffmpeg filter, like -filter_complex on the command line with your math. Is it doable? Ask direct questions like that on the ffmpeg forums or stackoverflow.com, reveal your math, and you might get lucky like that guy.

    What about a different setup, for example using OpenCV to fetch and fix values, with the math done in your own language, handling the pixel values yourself? That is, if you insist that Vapoursynth is not for you and too difficult. Is there an OpenCV module and code to handle OpenCV in PureBasic? You do not have to use OpenCV as a GUI, just for RGB pixel handling.

    I found it here; it has 54 pages, so perhaps it is doable.
    Last edited by _Al_; 7th Dec 2018 at 14:39.
  17. I downloaded and installed vapoursynth and Python 3.7.1. This code:

    Code:
    from vapoursynth import core
    import vapoursynth as vs

    file = r'C:\Archive\Python\C0058.mp4'
    clip = core.lsmas.LWLibavSource(file)

    # check what color space the file is, if you do not know
    space = ''
    if clip.format.color_family == vs.ColorFamily.YUV:
        space = 'YUV'  # there is also RGB, GRAY, YCOCG and COMPAT

    value = 10
    Y_expr = 'x {value} -'.format(value=value)
    U_expr = ''
    V_expr = ''
    clip = core.std.Expr(clip, expr=[Y_expr, U_expr, V_expr])
    
    # clip is YUV, so you need to get it on screen as RGB 8bit, assuming input is BT709:
    if space == 'YUV':
        rgb = core.resize.Bicubic(clip, matrix_in_s='709', format=vs.RGB24, range_in_s='limited')
    
    # to get a frame from your rgb, for example the first frame (0, Python indexes from 0),
    # but it could be any frame:
    rgbFrame = rgb.get_frame(0)
    
    # now you have an RGB24 image that most GUIs can put on screen; you just need to know
    # what array layout the GUI expects, so it is GUI specific from here
    
    # to get it out through vspipe.exe (encoding, piping it somewhere) you specify the output:
    clip.set_output()
    ... results in:

    ==================== RESTART: C:\Archive\Python\test2.py ====================
    Traceback (most recent call last):
      File "C:\Archive\Python\test2.py", line 1, in <module>
        from vapoursynth import core
    ImportError: cannot import name 'core' from 'vapoursynth' (unknown location)
    >>>
    Yeah, LOL easy. Writing an ffmpeg script is a snap by comparison if all we want to do is legalize the video.

    More filters and gewgaws are all well and fine, but if vapoursynth et al. are such hot shit then why don't they have built-in legalizers already?

    That workflow where you convert to RGB, do filtering in RGB and look at scopes, hard clip, then export YUV 4:2:2 is potentially problematic.
    You've made this point more than once and keep talking in circles, restating the problem without offering anything constructive, i.e. a solution to this conundrum. Yes, I realize things are going to change going from RGB to 4:2:2, which is why I said the final copy has to be checked and modified again if not within spec.
  18. You don't have to use it if you don't want to.

    Fair warning: expect many more error messages and hair-pulling while debugging to get it to work. It's seriously painful at first. It's the same when someone first uses anything, ffmpeg included. I had to revisit avisynth a few times when first learning it. It wasn't a pleasant experience. Vapoursynth becomes slightly easier if you know avisynth. Since you have some programming background, it shouldn't be difficult to get it working. Then there is the hassle of collecting plugins, DLLs and scripts, which is not fun if you're new to it. But in the end it's worth the hassle x10: very powerful video manipulation stuff that you can't do with other programs.

    If you still want to pursue this, did you follow the getting-started instructions and install the correct vapoursynth version and matching Python version?

    Did you try typing this in the Python command line, as per the instructions? What messages did it print? Was it the same 'unknown location'?
    http://www.vapoursynth.com/doc/installation.html

    Code:
    from vapoursynth import core
    print(core.version())


    More filters and gewgaws are all well and fine, but if vapoursynth et al. are such hot shit then why don't they have built-in legalizers already?
    They do have built-in limiter functions and min/max functions (but they're not exactly the same as a professional broadcast legalizer filter, which does other "smart" things).

    And obviously, there are many other types of AV manipulations besides "legalization" .



    Good luck with the flicker and GUI , I hope it works out
    Last edited by poisondeathray; 7th Dec 2018 at 16:41.
  19. You did not install Python and Vapoursynth properly, and you don't even say what you did: portable version or installer? That is a very important step that determines whether Python works at all, or whether it is just vapoursynth. You said nothing, just copy/pasted the error, plus some complaining: give, give me. What do you want to hear? Are you serious? You were not even held up by my script at all, because you did not have the setup right in the first place, yet you are giving lectures about the correct approach in discussions?
  20. Originally Posted by poisondeathray View Post
    That workflow where you convert to RGB, do filtering in RGB, look at scopes, hard clip, then export YUV 4:2:2 is potentially problematic. After you clip in RGB, the conversion and subsampling step to YUV can produce some non-compliant values. You're usually allowed some leeway, but I'm just pointing out that the subsampling step is the big culprit. Some broadcast software filters check for this while the filter is applied. A way you might do that in a GUI or ffmpeg is to pipe out to another instance or filter after the subsampling (but before the actual encoding). That way it's closer to the actual final submission format, instead of having to encode a physical file and go back and forth a zillion times.
    That problem has finally come home to roost. I just processed some footage and it is r 103 compliant in RGB24/4:4:4, but when transcoding to the distribution format, 4:2:2, it is no longer compliant. Any ideas?

    I am doing one pass to one file at 4:2:0 -> RGB, then a second pass at RGB -> 4:2:2.


