VideoHelp Forum

  1. A common format is 3 bytes per pixel (red, green, blue).
  2. Anonymous543
    Guest
    If there's an image or video in the YUV color space,

    then while playing it in a player like VLC,
    does VLC convert YUV to RGB first, before playing?
  3. Originally Posted by kirito View Post
    How to calculate the file size of a raw image?
    What data does a raw image contain?

    I want a simple file size calculation example.
    It depends on the color format and bit depth. 8 bit RGB is 8 bits per primary, i.e., 24 bits (3 bytes) per pixel (32 bits per pixel if an alpha channel is included). 8 bit YUV 4:2:2 is 16 bits per pixel. 8 bit YUV 4:2:0 is 12 bits per pixel.

    It gets trickier with higher bit depths. 10 bit RGB is 30 bits of information per pixel, but it may be stored as 16 bits per primary (48 bits total). The bits can be left justified or right justified.

    Then there are stride issues. The frame may be padded to align the data on byte, word, long, double boundaries.
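    To make the arithmetic above concrete, here is a quick sketch in Python (the 1920x1080 resolution is just an example; the bits-per-pixel values come straight from the formats listed above):

    ```python
    # Raw frame size = width * height * bits-per-pixel / 8.
    def raw_frame_bytes(width, height, bits_per_pixel):
        return width * height * bits_per_pixel // 8

    w, h = 1920, 1080
    print(raw_frame_bytes(w, h, 24))  # 8 bit RGB       -> 6220800 bytes
    print(raw_frame_bytes(w, h, 16))  # 8 bit YUV 4:2:2 -> 4147200 bytes
    print(raw_frame_bytes(w, h, 12))  # 8 bit YUV 4:2:0 -> 3110400 bytes
    ```

    Multiply by the frame count to get the size of a raw video file (ignoring any stride padding).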

    Originally Posted by kirito View Post
    If there's an image or video in the YUV color space,

    then while playing it in a player like VLC,
    does VLC convert YUV to RGB first, before playing?
    It depends on the setup. It can be done by VLC, the graphics card, or the monitor.
    Last edited by jagabo; 24th May 2022 at 07:25.
  4. Originally Posted by jagabo View Post
    Then there are stride issues. The frame may be padded to align the data on byte, word, long, double boundaries.
    Which you don't really need to consider for "file storage", only for "memory usage". At least I have never seen any "file" that stored a stride.
  5. Anonymous543
    Guest
    I want a free book lol, or any freely available website about multimedia, to understand more about multimedia and its encoding/decoding
    Last edited by Anonymous543; 24th May 2022 at 07:56.
  6. Originally Posted by emcodem View Post
    Originally Posted by jagabo View Post
    Then there are stride issues. The frame may be padded to align the data on byte, word, long, double boundaries.
    Which you don't really need to consider for "file storage", only for "memory usage". At least I have never seen any "file" that stored a stride.
    https://docs.microsoft.com/en-us/windows/win32/api/wingdi/ns-wingdi-bitmapinfoheader

    For uncompressed RGB formats, the minimum stride is always the image width in bytes, rounded up to the nearest DWORD. You can use the following formula to calculate the stride:

    stride = ((((biWidth * biBitCount) + 31) & ~31) >> 3)
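    The same formula, transcribed into Python to play with (the widths are example values):

    ```python
    # The DWORD-aligned stride formula from the page above: round the row
    # size in bits up to a multiple of 32, then convert to bytes.
    def bmp_stride(width, bit_count):
        return ((width * bit_count + 31) & ~31) >> 3

    print(bmp_stride(16, 24))   # 48  (16*3 bytes, already DWORD aligned)
    print(bmp_stride(101, 24))  # 304 (101*3 = 303 bytes, padded to 304)
    ```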
  7. Nice, I wasn't aware that BMP worked like that.
    I guess it's because I never used the BMP format to store raw video before. Lucky me that I did not need to dig too deep into DirectShow; I guess they use it a lot there.

    Originally Posted by kirito View Post
    I want a free book lol, or any freely available website about multimedia, to understand more about multimedia and its encoding/decoding
    Almost all the needed knowledge is freely available; there is just a lot to know about it. It might be best to start with something less complex, e.g. still image or audio compression?


    Originally Posted by kirito View Post
    because I wanna make a small but high quality 2D animation on my own.. and I don't know which codecs will suit animating the frames I draw
    If you mean you want to draw images from code, I'd suggest using some higher level APIs to draw, e.g. drawing libraries such as the HTML canvas if you're doing it in a browser.

    Originally Posted by kirito View Post
    I want a basic, simple codec that is easy to understand.
    Are you sure about that? Usually one doesn't need to understand a lot about the codec; just find some example configuration/profile that fits your need and feed the codec with your images. It might be best if you first think about where/how the final product should be presented. E.g. do you want your final image sequence to be viewed in a browser on all kinds of different devices and OSes? If so, your codec choices are very limited.
    Last edited by emcodem; 24th May 2022 at 10:49.
  8. Just a short example: as programmers we can easily generate video; all we need to do is set the RGB values of the pixels at the places we want to be this or that color. The simplest approach is to write one byte per channel (three bytes per pixel), which generates a so-called RGB24 raw image.
    The batch script below generates one frame of 16x16 pixels, each pixel having 3 bytes which carry the RGB values.
    Note that in batch language we can only write ASCII characters rather than arbitrary raw bytes to a file, so the actual raw values for each pixel are the ASCII codes (in hex) of the letters 'R', 'G' and 'B':
    R=52
    G=47
    B=42

    The resulting color will be brown, and all pixels will have the same color to start with; we write 256 pixels.


    Code:
    @echo off
    setlocal
    
    set PIXELBUFFER=RGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGBRGB
    if exist c:\temp\1.rgb24 del c:\temp\1.rgb24
    :loop
    >>c:\temp\1.rgb24 (
        echo|set /p=%PIXELBUFFER%
    )
    set /a written=%written%+1
    if %written% lss 1 goto loop
    Opening the generated file in a hex editor will reveal that we wrote the byte sequence 52 47 42 repeatedly.

    Now we feed some encoder with the generated data and encode it to something you can play in a media player. As the file is just a bunch of bytes without any container, we need to tell the decoder how to interpret the input data. Since I wrote 256 pixels above, I tell the decoder to assume a 16x16 resolution. And of course we also need to tell the decoder which pixel format we have, in our case simple rgb24.

    Code:
    ffmpeg -f rawvideo -pix_fmt rgb24 -s 16x16 -i c:\temp\1.rgb24 c:\temp\out.mp4

    OK, this way we generated a video with a single frame. If you want more frames, all you have to do is write more pixel data into the 1.rgb24 file.
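    The same idea outside of batch: a short Python sketch that writes the identical bytes, for as many frames as you like (the file name and frame count here are arbitrary):

    ```python
    # Write FRAMES raw RGB24 frames of 16x16 brown pixels (R=0x52, G=0x47,
    # B=0x42), the same byte sequence the batch file above produces.
    WIDTH, HEIGHT, FRAMES = 16, 16, 25
    pixel = bytes([0x52, 0x47, 0x42])     # one brown pixel
    frame = pixel * (WIDTH * HEIGHT)      # one full frame, 768 bytes

    with open("1.rgb24", "wb") as f:
        for _ in range(FRAMES):
            f.write(frame)
    # then: ffmpeg -f rawvideo -pix_fmt rgb24 -s 16x16 -r 25 -i 1.rgb24 out.mp4
    ```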
  9. For some pixel art, run-length encoding (a very simple compression algorithm) can work moderately well.

    https://en.wikipedia.org/wiki/Run-length_encoding
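    To get a feel for how simple RLE really is, here is a toy byte-oriented encoder/decoder in Python (the (count, value) pair layout is just one possible scheme for illustration, not any particular codec's format):

    ```python
    # Each run of identical bytes is stored as (count, value), count <= 255.
    def rle_encode(data: bytes) -> bytes:
        out = bytearray()
        i = 0
        while i < len(data):
            run = 1
            while i + run < len(data) and data[i + run] == data[i] and run < 255:
                run += 1
            out += bytes([run, data[i]])
            i += run
        return bytes(out)

    def rle_decode(data: bytes) -> bytes:
        out = bytearray()
        for i in range(0, len(data), 2):
            out += bytes([data[i + 1]]) * data[i]
        return bytes(out)

    flat = bytes([200]) * 1000              # e.g. a flat row of pixels
    assert rle_decode(rle_encode(flat)) == flat
    print(len(rle_encode(flat)))            # 8 bytes instead of 1000
    ```

    Flat areas shrink enormously; noisy images can actually grow, which is why RLE suits pixel art.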
  10. Anonymous543
    Guest
    Originally Posted by jagabo View Post
    For some pixel art, run-length encoding (a very simple compression algorithm) can work moderately well.

    https://en.wikipedia.org/wiki/Run-length_encoding
    Is it possible to encode videos with run-length encoding?
    If yes, then how?
  11. Compress rather than encode: yes. You can for example .zip your video data, or use x264/x265 in lossless mode.
  12. Originally Posted by kirito View Post
    Originally Posted by jagabo View Post
    For some pixel art, run-length encoding (a very simple compression algorithm) can work moderately well.

    https://en.wikipedia.org/wiki/Run-length_encoding
    Is it possible to encode videos with run-length encoding?
    If yes, then how?
    I thought you were interested in learning how compression works? Write your own encoder/decoder.

    ffmpeg has read and write support for QuickTime RLE: qtrle

    Code:
    ffmpeg -i rgb.avi -c:v qtrle output.mov
  13. Anonymous543
    Guest
    I thought you were interested in learning how compression works? Write your own encoder/decoder.
    Yes, that's why I am researching everything and trying to understand things, including raw images, how they are made from the RGB color space and its bit depth, etc.
    After that I guess encoding comes into play for compression, like RGB => YUV, saving some bytes.


    But I am still confused about how I can turn an image sequence into a video with the audio in sync.
    The images should not lose quality when they are turned into a video.

    It's like I want to make a program that packs an image set + audio into an archive like zip, with some metadata, e.g. how many frames per second to play.
    And also a player program that reads that archive and its metadata for playback.
  14. Originally Posted by kirito View Post
    But I am still confused about how I can turn an image sequence into a video with the audio in sync.
    The images should not lose quality when they are turned into a video.

    It's like I want to make a program that packs an image set + audio into an archive like zip, with some metadata, e.g. how many frames per second to play.
    And also a player program that reads that archive and its metadata for playback.
    But it will be larger than using temporal RGB compression. Image sequences only use spatial compression, not temporal compression. Using both spatial and temporal will yield the highest lossless compression ratio.

    e.g. let's say you start with those 289 YUV-to-RGB-converted PNG images from the earlier post

    Using FFV1 you save about 1/3 of the bandwidth on that example. The more similar the images, the more you save; the more different the images, the less you save. But temporal compression should always yield smaller results than the sum of the PNG images (there might be some edge cases, like exact duplicate frames, where Lagarith "null frames" compress better).

    289 image PNG sequence: 535MB
    zip archive compression: 535MB
    libx264rgb: 472MB
    ffv1: 355MB
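    The spatial-vs-temporal difference can be illustrated with plain zlib as a toy stand-in for a real codec (the synthetic "frames" below are made up purely for the demonstration):

    ```python
    import zlib

    # Two "frames" that differ in only a few bytes. Compressing the second
    # frame on its own is spatial-only; compressing its XOR difference
    # against the first frame is a crude form of temporal prediction.
    frame1 = bytes((i * 7 + (i >> 3)) % 256 for i in range(16384))

    changed = bytearray(frame1)
    for i in range(0, len(changed), 2048):   # flip a handful of bytes
        changed[i] ^= 0xFF
    frame2 = bytes(changed)

    diff = bytes(a ^ b for a, b in zip(frame1, frame2))  # mostly zeros

    print(len(zlib.compress(frame2)))  # the frame on its own
    print(len(zlib.compress(diff)))    # the difference: far smaller
    ```

    Real codecs predict each frame from its neighbours in much smarter ways, but the payoff comes from the same place: the residual is easier to compress than the frame.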
    Last edited by poisondeathray; 25th May 2022 at 10:32.
  15. Member Cornucopia
    Yes, and as an extreme example, if you were to have an image that never changed throughout, your 289 png images would probably be >289x the size of a Lagarith video with null frames. Of course, if that were the case, you would probably be thinking, "why would I need 289 png images, then", and you would be right.

    But in the real world, you WILL have changes when you animate images (kind of the point).

    Good luck with coming up with an encoder/compressor and decoder/decompressor on your own though - they are complex enough that it took teams of (video & engineering & programming) highly educated and talented people years to come up with the ones we use right now.

    You asked about syncing: to do it properly requires a master clock and timestamps for equivalent packets of data in the video and audio streams, so they can be matched for playout.
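    A minimal sketch of that timestamp bookkeeping (the frame rate, sample rate and chunk size are example values, not anything a particular container mandates):

    ```python
    # Every video frame and audio chunk gets a presentation timestamp (PTS)
    # in seconds against one shared clock; a player presents whichever
    # packet is due next, so the streams stay matched.
    FPS = 25
    SAMPLE_RATE = 48000
    SAMPLES_PER_CHUNK = 1024

    def video_pts(frame_index):
        return frame_index / FPS

    def audio_pts(chunk_index):
        return chunk_index * SAMPLES_PER_CHUNK / SAMPLE_RATE

    # Frame 50 and audio chunk 94 are nearly simultaneous:
    print(video_pts(50))   # 2.0 s
    print(audio_pts(94))   # ~2.005 s
    ```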


    Scott
    Last edited by Cornucopia; 25th May 2022 at 10:50.
  16. Anonymous543
    Guest
    Can anyone explain BMP images?
    In which programming language are they written, and how do they work?

    Can I draw pixel images with programming languages?
  17. Originally Posted by kirito View Post
    Can anyone explain BMP images?
    In which programming language are they written, and how do they work?

    Can I draw pixel images with programming languages?
    BMP is a particular organization of pixel data: a header that describes the data, followed by the data itself.

    https://docs.microsoft.com/en-us/windows/win32/gdi/bitmaps
    https://docs.microsoft.com/en-us/windows/win32/api/wingdi/ns-wingdi-bitmapinfo
    https://docs.microsoft.com/en-us/windows/win32/api/wingdi/ns-wingdi-bitmapinfoheader

    BMP images can be created in almost any language. Using one with support for Windows GDI and DirectShow, and low level byte/bit manipulation, may help (C variants, for example). Create a 2D array of pixels, create the associated header (structures that describe the data), fill them out, and save them.
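    As a sketch of those steps, here is a minimal 24-bit BMP writer in pure Python using only the struct module (the field layout follows the BITMAPFILEHEADER/BITMAPINFOHEADER pages linked above; the file name and image are arbitrary examples):

    ```python
    import struct

    # Minimal 24-bit BMP: 14-byte file header, 40-byte BITMAPINFOHEADER,
    # then bottom-up rows of BGR pixels, each row padded to a DWORD boundary.
    def write_bmp(path, width, height, rgb_rows):
        stride = (width * 3 + 3) & ~3
        image_size = stride * height
        with open(path, "wb") as f:
            # BITMAPFILEHEADER: magic, file size, reserved, pixel data offset
            f.write(struct.pack("<2sIHHI", b"BM", 54 + image_size, 0, 0, 54))
            # BITMAPINFOHEADER: size, w, h, planes, bpp, compression,
            # image size, x/y pixels-per-meter, colors used/important
            f.write(struct.pack("<IiiHHIIiiII",
                                40, width, height, 1, 24, 0,
                                image_size, 2835, 2835, 0, 0))
            for row in reversed(rgb_rows):        # rows are stored bottom-up
                line = bytearray()
                for r, g, b in row:
                    line += bytes([b, g, r])      # pixels are BGR on disk
                line += b"\x00" * (stride - len(line))
                f.write(line)

    # a 4x2 image: top row red, bottom row blue
    rows = [[(255, 0, 0)] * 4, [(0, 0, 255)] * 4]
    write_bmp("test.bmp", 4, 2, rows)
    ```

    The resulting file opens in any image viewer, which makes BMP a handy first format for playing with pixels from code.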
  18. Python or C, or perhaps other languages (not sure), can use the OpenCV module. You have raw RGB in an array and you manipulate the image using very fast algorithms or pre-built utilities. Even Python is very fast when manipulating an image, because the pixels are not addressed one by one by Python but by those algorithms in C using numpy.
    https://www.geeksforgeeks.org/opencv-python-tutorial/
    OpenCV also loads an image (into a numpy array) and saves a new image (jpg, png), so you do not need to get technical about headers etc.
    Last edited by _Al_; 1st Jun 2022 at 08:56.