VideoHelp Forum

  1. What is the easy way to create a GUI and render graphics in C or C++ on Linux? Or is there even an easy way?

    I've tried OpenGL and it was too weird. SDL was weird but not as much.

    There are also PureBasic and Lazarus. They are not C or C++ but what the heck?
  2. SDL2, GLFW (OpenGL) and maybe FreeGLUT are the main rendering APIs I can think of.
    Creating a GUI should be easy with, for example, Qt, which also has OpenGL bindings. (There are also projects out there which help with using SDL2 and Qt together, but I've never tried those.)
  3. Member (Nova Scotia, Canada; joined Mar 2011)
    C/C++ is not easy, period. It's not a high level language, more like a high level assembler. Anyone who says it's easy is a web poser. If you're new to programming you may be better off with something simpler like python or perl.
  4. Originally Posted by Hoser Rob View Post
    C/C++ is not easy, period. It's not a high level language, more like a high level assembler. Anyone who says it's easy is a web poser. If you're new to programming you may be better off with something simpler like python or perl.
    What a pile of nonsense, particularly the bold part.
    By the way, adding "period" at the end of a statement does not magically make it true. It's something feeble-minded fools do because they think it will enforce their ridiculous claims.
    Last edited by Groucho; 3rd Dec 2018 at 09:22.
  5. Originally Posted by Groucho View Post
    Originally Posted by Hoser Rob View Post
    C/C++ is not easy, period. It's not a high level language, more like a high level assembler. Anyone who says it's easy is a web poser. If you're new to programming you may be better off with something simpler like python or perl.
    What a pile of nonsense, particularly the bold part.
    By the way, adding "period" at the end of a statement does not magically make it true. It's something feeble-minded fools do because they think it will enforce their ridiculous claims.
    Brave statement, Groucho... Plain C lacks many things that are common in high-level languages, and from a programmer's perspective it looks more like machine language than a high-level language... Don't be so tense, Groucho.

    @OP - BASIC compilers exist, and if you are more familiar with BASIC then give BASIC a chance. Or perhaps you should use something tailored for tasks like yours (https://gstreamer.freedesktop.org/ ?).
  6. Originally Posted by pandy View Post
    Originally Posted by Groucho View Post
    Originally Posted by Hoser Rob View Post
    C/C++ is not easy, period. It's not a high level language, more like a high level assembler. Anyone who says it's easy is a web poser. If you're new to programming you may be better off with something simpler like python or perl.
    What a pile of nonsense, particularly the bold part.
    By the way, adding "period" at the end of a statement does not magically make it true. It's something feeble-minded fools do because they think it will enforce their ridiculous claims.
    Plain C lacks of many things common in high level language and from programmer perspective it looks more like machine language than high level language...
    I was mainly referring to C++ which is clearly a high level language. The fact that C looks like machine language to some people is just a silly argument.

    Originally Posted by pandy View Post
    Don't be so tense Groucho.
    He, he, I'll do my best. It just pushes my buttons when someone without a frickin' clue spreads FUD.
  7. Originally Posted by Groucho View Post
    I was mainly referring to C++ which is clearly a high level language. The fact that C looks like machine language to some people is just a silly argument.
    Once again, C++/C# is a different story; C has plenty of things in common with ASM (the operators are practically the same as in inline asm).
    It was designed to be compiled using a relatively straightforward compiler, to provide low-level access to memory, to provide language constructs that map efficiently to machine instructions, and to require minimal run-time support.
    https://en.wikipedia.org/wiki/C_%28programming_language%29

    Originally Posted by Groucho View Post
    He, he, I'll do my best. It just pushes my buttons when someone without a frickin' clue spreads FUD.
    Personally I find ASM easier than C (on condition of understanding the CPU and system architecture), but perhaps this is the outcome of my youth programming experience, where BASIC and ASM were the only two practical approaches. I have tried so many times to learn something modern but never succeeded.
  8. Originally Posted by Groucho View Post
    ...
    I was mainly referring to C++ which is clearly a high level language. The fact that C looks like machine language to some people is just a silly argument.
    Source code can be high level, but in actual development you cannot choose not to deal with low-level crap like linking errors, incompatible options, runtimes...
  9. I need a compiled, not an interpreted, language.
  10. Member (San Francisco, California; joined Aug 2010)
    Why must it be compiled? To processor code? To intermediate code?
  11. Originally Posted by JVRaines View Post
    Why must it be compiled? To processor code? To intermediate code?
    Execution speed.
    Last edited by chris319; 3rd Dec 2018 at 09:48.
  12. Member (San Francisco, California; joined Aug 2010)
    There's .NET Core with System.Drawing.Common, which runs on Linux. I have no idea how fast it is.
    https://www.hanselman.com/blog/HowDoYouUseSystemDrawingInNETCore.aspx
  13. Like I wrote before, Qt is a good framework when working with C++ (or Java or Python).
    Depending on what you want to do, there are any number of additional libraries that could be used.
    -> Assuming you know your way around C++, this is the way I would go.

    In the end there are libraries for all major languages which allow basic graphics operations; depending on what you want to write yourself, you will probably have to write OpenCL/OpenGL or assembler code either way, in case it needs to be fast.

    If you are not an experienced programmer, using Python, at least partially together with Vapoursynth and libav, seems to make the most sense, since it has the lowest learning curve and the existing base operations should all be assembler-optimized. (Vapoursynth doesn't have to use libav/ffmpeg at all.)
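    To make the Qt suggestion concrete, here is a minimal sketch of a window with a slider in Python (assuming PyQt5; the "gain" control is only a placeholder for whatever value you would feed to the real processing):
    Code:
    import sys
    from PyQt5.QtWidgets import QApplication, QWidget, QLabel, QSlider, QVBoxLayout
    from PyQt5.QtCore import Qt

    class LevelsWindow(QWidget):
        def __init__(self):
            super().__init__()
            self.setWindowTitle('Levels demo')
            self.label = QLabel('Gain: 100')
            self.slider = QSlider(Qt.Horizontal)
            self.slider.setRange(0, 200)
            self.slider.setValue(100)
            # the slider value is what you would hand to the real filter chain
            self.slider.valueChanged.connect(lambda v: self.label.setText('Gain: %d' % v))
            layout = QVBoxLayout(self)
            layout.addWidget(self.label)
            layout.addWidget(self.slider)

    app = QApplication(sys.argv)
    win = LevelsWindow()
    win.show()
    sys.exit(app.exec_())
    The equivalent C++ Qt code is structurally the same; the Python binding just saves typing.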
  14. I just tested the same HEVC clip with the same source plugin, using VapourSynth Editor, which is written in C++ as well, and it gave me just about 2 fps more when previewing it (meaning one YUV to RGB conversion was involved). Its author uses Qt for the GUI. The Python code or VapourSynth script (it's the same thing) serves as a sort of wrapper, but does not handle the heavy image processing. I would not try to reinvent the wheel by computing RGB values manually and, every time a new format turns up, looking for formulas yet again. To manipulate YUV or RGB values, Expr (expressions) can be handy, and lut as well. Just look into some of the .py files that consist of filtering, or that were ported from Avisynth, like havsfunc.py from HolyWu, etc.

    NOTE: to preview it on screen, for example using Qt, there is another step involved, like creating the pixmap that Qt uses to put it on screen. I am not sure what it is internally, what kind of array, but that is just to get a picture.
    Last edited by _Al_; 4th Dec 2018 at 16:03.
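    On the pixmap question above: one common way is to pull one frame from an RGB24 clip, interleave the planes with numpy, and wrap the bytes in a QImage. A rough sketch (assuming PyQt5 and numpy; the source filter and file name are placeholders, and plane access differs between VapourSynth versions, hence the comment):
    Code:
    import sys
    import numpy as np
    import vapoursynth as vs
    from vapoursynth import core
    from PyQt5.QtWidgets import QApplication, QLabel
    from PyQt5.QtGui import QImage, QPixmap

    clip = core.ffms2.Source('input.mp4')                 # any source filter will do
    rgb = core.resize.Bicubic(clip, format=vs.RGB24, matrix_in_s='709')

    f = rgb.get_frame(0)
    # newer API: np.asarray(f[p]); older API: np.asarray(f.get_read_array(p))
    planes = [np.asarray(f[p]) for p in range(3)]         # separate R, G, B planes
    data = np.dstack(planes).tobytes()                    # interleaved RGB888

    app = QApplication(sys.argv)
    qimg = QImage(data, rgb.width, rgb.height, rgb.width * 3, QImage.Format_RGB888)
    label = QLabel()
    label.setPixmap(QPixmap.fromImage(qimg.copy()))       # copy() so 'data' can be freed
    label.show()
    sys.exit(app.exec_())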
  15. Originally Posted by chris319 View Post
    Here is the output of your Python demo:

    >>>
    Traceback (most recent call last):
    File "C:\Archive\Python\test2.py", line 1, in <module>
    from Vapoursynth import core
    ImportError: No module named 'Vapoursynth'
    >>>
    It's case-sensitive:

    Code:
    from vapoursynth import core
  16. Sorry, I just typed it without checking:
    Code:
    rgb = core.resize.Bicubic(clip, matrix_in_s='709', format=vs.RGB24, range_in_s='limited')
    This also fixes the range, but in that case you do not have to specify the range; I just put it there so you can see it.

    The '_s' at the end means that the argument value is a string; the matrix, transfer and color primaries string values you can get from here.

    Without the '_s' it is an integer, from the specs (around page 405).

    '_in' is for the input; without it, it refers to the output.
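    Putting those pieces together, a sketch in both directions, with the '_in' arguments describing the input and the plain arguments describing the output (the source filter and file name are placeholders):
    Code:
    import vapoursynth as vs
    from vapoursynth import core

    clip = core.ffms2.Source('input.mp4')      # e.g. YUV 4:2:0, limited range

    # YUV -> RGB: the _in_s arguments describe what the *input* is
    rgb = core.resize.Bicubic(clip, format=vs.RGB24,
                              matrix_in_s='709', range_in_s='limited')

    # RGB -> YUV 4:2:2: now matrix/range describe the *output* you want
    yuv422 = core.resize.Bicubic(rgb, format=vs.YUV422P8,
                                 matrix_s='709', range_s='limited')
    The integer forms work too, e.g. matrix_in=1 is the same as matrix_in_s='709'.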
  17. Sorry, I just typed it without checking
    Not cool to post buggy code and send a first-timer down the wrong path like that.

    When I post code, unless I'm demonstrating a problem, it's been tested and works, guaranteed.

    Code:
    >>> 
    Traceback (most recent call last):
      File "C:\Archive\Python\test2.py", line 1, in <module>
        from vapoursynth import core
    ImportError: No module named 'vapoursynth'
    >>>
    So far we're off to a flying start with Python.

    LOL easy, eh?
  18. There are always problems, you should know that. Read the manual on how to install everything: http://www.vapoursynth.com/doc/gettingstarted.html Take it slow.
  19. If we're going to wind up with ffmpeg doing the actual video manipulation, why are we even bothering with Python and Vapoursynth? Why not just write an ffmpeg script?

    You have to have patience in dealing with ffmpeg's quirks (to put it politely) and there is a lot of back-and-forth to get it just right, but it's doable and will handle the audio automatically.

    Again, I am trying to conform to a standard designed for YRGB (EBU r 103) but a YUV deliverable is expected. Not easy.
  20. Originally Posted by chris319 View Post
    If we're going to wind up with ffmpeg doing the actual video manipulation, why are we even bothering with Python and Vapoursynth? Why not just write an ffmpeg script?
    I thought this topic was about creating some GUI for graphics and manipulations? So the values in the GUI (sliders, dials, whatever) would correspond to (be entered into) the vapoursynth code and then be piped to ffmpeg or something else.



    You can use whatever you want to do the actual video manipulation, and in any permutation. E.g. you can use vapoursynth to perform the manipulations and ffmpeg to write the output. Or even something else to write and skip ffmpeg (the binary) entirely, or do parts in some and parts in others.

    Some of the filters actually share the same code with avisynth. Many are ported from avisynth, because many of the filters were "born" there first.

    There are ways to get things into other programs, even ones like Premiere, Resolve etc., through virtual files and avfs. Basically it enables you to link scripts to other software without encoding large intermediate files. It's like piping between programs that usually don't accept pipes.

    You have to have patience in dealing with ffmpeg's quirks (to put it politely) and there is a lot of back-and-forth to get it just right, but it's doable and will handle the audio automatically.
    All software has quirks. Avisynth, vapoursynth and ffmpeg have major quirks. High-end professional software has them too. Broadcast software has major issues and "gotchas" that users must know about, and workarounds that they must use. I don't know any software that "just works" perfectly 100% in all situations without any quirks, or that doesn't have some "gotcha!" scenarios. Every workflow has compromises - it's all about finding one that works for your situation and meets your needs with, hopefully, the least amount of pain.
  21. My original idea is crashing and burning.

    On my big computer at home it works OK except for that pesky flicker that makes the output video unusable.

    I'm on the road now and my little tablet is having trouble setting up pipes and thus cannot read pixels out of a video file. Both the tablet and my home computer run Windows 10 so you would think if it runs on one it would run on the other, but no such luck.

    It would have been nice to be able to adjust video levels interactively, and that part works well on my home computer until it comes time to export it. The interactive part is nice and responsive considering the large amount of computation it has to do per frame. But then we pick up that flicker when exporting the video that I can't get rid of, despite trying all sorts of fixes and workarounds.

    It might be possible to use the interactive portion to set values to plug into a generic ffmpeg script. As I am manipulating pixel values directly in my program, one wonders if those numbers would give the same results when plugged into an ffmpeg script.

    The scope part of my program works well. It reports the maximum and minimum values for Y, R, G and B so the user can tell if his video is in compliance. It also hard clips at the r 103 limits. Yes, I know there might be recoverable detail above 235, a lot of which is ringing artifacts. The user can either reduce the gain to bring this detail under 236 or use the camera iris or control the subject matter better.
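    For comparison only, not chris319's program: the same min/max-and-clip idea expressed with vapoursynth building blocks (the source is a placeholder, and the 16/235 numbers just stand in for whatever limits your r 103 check uses):
    Code:
    import vapoursynth as vs
    from vapoursynth import core

    clip = core.ffms2.Source('input.mp4')                   # placeholder source

    # per-frame min/max land in the frame properties, where a GUI can read them
    stats = core.std.PlaneStats(clip, plane=0)              # repeat for planes 1 and 2
    f = stats.get_frame(0)
    print(f.props['PlaneStatsMin'], f.props['PlaneStatsMax'])

    # hard clip into the allowed range (give per-plane expressions if the
    # limits differ, e.g. luma vs chroma)
    clipped = core.std.Expr(clip, ['x 16 max 235 min'] * 3)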
  22. Originally Posted by chris319 View Post
    The scope part of my program works well. It reports the maximum and minimum values for Y, R, G and B so the user can tell if his video is in compliance. It also hard clips at the r 103 limits. Yes, I know there might be recoverable detail above 235, a lot of which is ringing artifacts. The user can either reduce the gain to bring this detail under 236 or use the camera iris or control the subject matter better.

    But you're checking this on the decoded YUV, converted to RGB as an intermediate step, right? Not on the final submission?

    An additional problem that you probably don't want to hear about: when you convert to your final YUV 4:2:2 submission format, you will generate non-r103-friendly values on certain types of material, even though your scopes look OK, your min/max values are OK, and you "think" it's compliant. Certain test patterns. Text. Graphics. Broadcast overlays. The subsampling step creates problems, and it also depends on what upsampling algorithm the validation method uses to check the RGB.
  23. Once again, Vapoursynth, avisynth and Python use ffmpeg to manipulate the video, do they not?
  24. Originally Posted by chris319 View Post
    Once again, Vapoursynth, avisynth and Python use ffmpeg to manipulate the video, do they not?
    Not necessarily.

    If you are using vapoursynth filters or avisynth filters, those manipulations are done in vapoursynth or avisynth respectively. That altered data is then sent somewhere through a pipe: if you want, to ffmpeg.exe, or to something else like x264.exe. You need some way to write out the data, (usually) encoding it into a physical video.

    Some filters use the ffmpeg libraries, e.g. some input filters can use them to decode the video to uncompressed data. But ffmpeg.exe doesn't have to be used.
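    As a sketch of that pipe step: a script can filter in vapoursynth and stream y4m straight into ffmpeg (or x264) through a pipe, which is what vspipe does on the command line. The file names and encoder settings are placeholders:
    Code:
    import subprocess
    import vapoursynth as vs
    from vapoursynth import core

    clip = core.ffms2.Source('input.mp4')
    clip = core.resize.Bicubic(clip, format=vs.YUV422P8)    # whatever filtering you need

    enc = subprocess.Popen(['ffmpeg', '-y', '-i', 'pipe:', '-c:v', 'libx264', 'out.mkv'],
                           stdin=subprocess.PIPE)
    clip.output(enc.stdin, y4m=True)                        # same data vspipe --y4m would emit
    enc.stdin.close()
    enc.wait()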
  25. Originally Posted by poisondeathray View Post
    Not necessarily.

    If you are using vapoursynth filters or avisynth filters, those manipulations are done in vapoursynth or avisynth respectively. That altered data is then sent somewhere through a pipe: if you want, to ffmpeg.exe, or to something else like x264.exe. You need some way to write out the data, (usually) encoding it into a physical video.

    Some filters use the ffmpeg libraries, e.g. some input filters can use them to decode the video to uncompressed data. But ffmpeg.exe doesn't have to be used.
    I think what you are trying to say is that they use ffmpeg to read and write the video data, the same as Ted Burke and I are doing.

    Why are people pushing vapoursynth and Python on me? What is the big benefit of using them instead of ffmpeg and its filters or my own code? Cripes, I can't even get a functional example in "LOL easy" Python. All they're doing is muddying the waters.
  26. Originally Posted by chris319 View Post
    I think what you are trying to say is that they use ffmpeg to read and write the video data, same as I and Ted Burke are doing.
    No. You can use ffmpeg if you want to. But it's not necessary to use ffmpeg for anything when using avisynth or vapoursynth.

    avisynth/vapoursynth is just used as a framework for intermediate steps. You can use it for the video manipulations. Since vapoursynth is Python-based, it lends itself to many possibilities for your graphics and GUI. The existing GUIs are somewhat lacking, but maybe you can contribute and help develop one.

    Why are people pushing vapoursynth and Python on me? What is the big benefit of using them instead of ffmpeg and its filters or my own code? Cripes, I can't even get a functional example in "LOL easy" Python. All they're doing is muddying the waters.

    The benefit is more flexibility and more filters available than ffmpeg or ffmpeg filters (e.g. ffmpeg doesn't even have a simple levels filter, for some reason). You can do anything ffmpeg can, plus a lot more. It's easier to filter sections and edit (you can specify different settings over certain ranges of video).
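    The "different settings over certain ranges" part is just slicing in a script; a tiny sketch (the source and the Levels value are placeholders):
    Code:
    from vapoursynth import core

    clip = core.ffms2.Source('input.mp4')           # placeholder source
    bright = core.std.Levels(clip, gamma=1.2)       # made-up correction
    # apply it only to frames 1000-1999; clips slice and splice like Python lists
    out = clip[:1000] + bright[1000:2000] + clip[2000:]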

    You were looking for something that works, without flicker. I don't know why you were only getting some frames processed, but these are workflows proven to work. Even ffmpeg alone can sometimes have issues. In avs/vpy you can index source videos, which provides a higher level of frame accuracy, especially for seeking. With long-GOP videos (I, B, P frames, not intra-frame encoding), seeking can cause frame mismatches and wrong frames, out of order. That might be partially contributing to your flicker issue.

    If vapoursynth doesn't meet your needs, just carry on with what you're doing and try to debug it. There has to be a reason why it's not working.


    The final 4:2:2 submission has to be checked. It has to be decoded to RGB to check for r 103 compliance. That's part of the user's workflow. No further processing is done after the final check.
    That workflow where you convert to RGB, do the filtering in RGB and look at scopes, hard-clip, then export YUV 4:2:2 is potentially problematic. After you clip in RGB, the conversion and subsampling step to YUV can produce some non-compliant values. You're usually allowed some leeway, but I'm just pointing out that it's the subsampling step that is the big culprit. Some broadcast software filters check for this while the filter is applied. A way you might do that in a GUI or with ffmpeg is to pipe out to another instance or filter after the subsampling (but before the actual encoding). That way it's closer to the actual final submission format, instead of having to actually encode a physical file and go back and forth a zillion times.
    Last edited by poisondeathray; 7th Dec 2018 at 09:41.
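    One way to sketch that check in a vapoursynth script: convert the graded RGB to the 4:2:2 delivery format, convert it back to RGB the way a validator would, and look at the extremes before anything is encoded (the file name, source filter and matrix are placeholders):
    Code:
    import vapoursynth as vs
    from vapoursynth import core

    clip = core.ffms2.Source('graded.mov')                  # placeholder, assuming a YUV source
    rgb = core.resize.Bicubic(clip, format=vs.RGB24, matrix_in_s='709')   # your RGB filtering here

    yuv422 = core.resize.Bicubic(rgb, format=vs.YUV422P8, matrix_s='709')      # delivery format
    check = core.resize.Bicubic(yuv422, format=vs.RGB24, matrix_in_s='709')    # validator's view

    stats = core.std.PlaneStats(check, plane=0)             # repeat for planes 1 and 2
    f = stats.get_frame(0)
    print(f.props['PlaneStatsMin'], f.props['PlaneStatsMax'])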
  27. Originally Posted by poisondeathray View Post
    That workflow where you convert to RGB, do the filtering in RGB and look at scopes, hard-clip, then export YUV 4:2:2 is potentially problematic. After you clip in RGB, the conversion and subsampling step to YUV can produce some non-compliant values. You're usually allowed some leeway, but I'm just pointing out that it's the subsampling step that is the big culprit. Some broadcast software filters check for this while the filter is applied. A way you might do that in a GUI or with ffmpeg is to pipe out to another instance or filter after the subsampling (but before the actual encoding). That way it's closer to the actual final submission format, instead of having to actually encode a physical file and go back and forth a zillion times.
    That problem has finally come home to roost. I just processed some footage and it is r 103 compliant in RGB24/4:4:4, but when transcoding to the distribution format, 4:2:2, it is no longer compliant. Any ideas?

    I am doing one pass to one file at 4:2:0 -> RGB, then a second pass at RGB -> 4:2:2.
  28. Budman1 (Northwest Illinois, USA; joined Jul 2012)
    Visual Studio -> Visual Basic, C, C++, C#. Link to Linux? Compiles to EXE.

    GUI creation:
    I can't arrange or design a shower curtain to look nice, so the layout sucks, I know, but it's a GUI, it calls any external program you want to use without actually compiling the external utility, and it was created with Visual Basic.

    [Attachment 47398]
  29. Originally Posted by Budman1 View Post
    Visual Studio -> Visual Basic, C, C++, C#. Link to Linux? Compiles to EXE.

    GUI creation:
    I can't arrange or design a shower curtain to look nice, so the layout sucks, I know, but it's a GUI, it calls any external program you want to use without actually compiling the external utility, and it was created with Visual Basic.

    [Attachment 47398]
    Can your Visual Basic program open pipes and communicate with ffmpeg that way, hopefully without flicker?

    Here again is how Ted Burke does it in C under Linux:

    https://batchloaf.wordpress.com/2017/02/12/a-simple-way-to-read-and-write-audio-and-vi...-part-2-video/
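    For reference, the same read-process-write pipe pattern sketched in Python rather than C (the 640x480 size, the 25 fps rate and the file names are placeholders; a real program would probe them first):
    Code:
    import subprocess
    import numpy as np

    W, H = 640, 480
    dec = subprocess.Popen(['ffmpeg', '-i', 'in.mp4', '-f', 'rawvideo',
                            '-pix_fmt', 'rgb24', 'pipe:'], stdout=subprocess.PIPE)
    enc = subprocess.Popen(['ffmpeg', '-y', '-f', 'rawvideo', '-pix_fmt', 'rgb24',
                            '-s', '%dx%d' % (W, H), '-r', '25', '-i', 'pipe:',
                            '-c:v', 'libx264', 'out.mkv'], stdin=subprocess.PIPE)

    frame_size = W * H * 3
    while True:
        raw = dec.stdout.read(frame_size)       # one raw RGB frame from the decoder
        if len(raw) < frame_size:
            break
        frame = np.frombuffer(raw, np.uint8).reshape(H, W, 3)
        frame = np.clip(frame, 5, 246)          # stand-in for the real pixel processing
        enc.stdin.write(frame.tobytes())        # hand the frame to the encoder

    enc.stdin.close()
    enc.wait()
    dec.wait()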
  30. The final 4:2:2 submission has to be checked. It has to be decoded to RGB to check for r 103 compliance. That's part of the user's workflow. No further processing is done after the final check.


