VideoHelp Forum




  1. Is there an optimized conversion formula available for YUV to RGB and RGB to YUV?
    Is there a formula that uses shift operations instead of floating-point multiplications?
  2. The usual trick is fixed-point arithmetic, something like this:

    Instead of Y = F * X with a fractional coefficient F, you pre-multiply F by 256 so it becomes an integer, do the multiplication in integer arithmetic, and then shift right by 8 bits (divide by 256) to undo the scaling:

    Y = (256 * F * X) >> 8

    If you use 32-bit integers you can scale the coefficients by 65536 instead; then you shift the product right by 16 bits, or simply take the high word, to get your final 8-bit result.

    Your input data needs to be in the range 0 to 255 (which it usually is), and the fractional coefficients need to be converted to scaled integers as described above.

    The next optimization is to use SIMD instructions (MMX/SSE/SSE2, etc.) to perform the calculations in parallel.
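
    As a rough C sketch of that scheme, assuming the common BT.601 "studio range" equations with the coefficients pre-multiplied by 256 (298 ≈ 1.164·256, 409 ≈ 1.596·256, and so on):

        #include <stdint.h>

        /* Illustrative helper: clamp an intermediate result into 0..255. */
        static uint8_t clamp8(int v)
        {
            return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
        }

        /* Fixed-point BT.601 YCbCr -> RGB: coefficients pre-scaled by 256,
           +128 added for rounding, >>8 undoes the scaling. */
        static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                               uint8_t *r, uint8_t *g, uint8_t *b)
        {
            int c = (int)y - 16;
            int d = (int)u - 128;
            int e = (int)v - 128;

            *r = clamp8((298 * c           + 409 * e + 128) >> 8);
            *g = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8);
            *b = clamp8((298 * c + 516 * d           + 128) >> 8);
        }
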
    John Miller
  3. http://fourcc.org/fccyvrgb.php

    You can use scaled integers instead of floating point. For example, multiply the fractional coefficients by 256 (<<8) before the calculation, then divide the result by 256 (>>8) afterwards. For rounding you can add 128 before dividing by 256.

    Is this for some embedded CPU without an FPU? On current x86 processors you may find that using floating-point SIMD is faster.
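
    For illustration, the scaling and rounding can be wrapped in a couple of macros (FIX and SCALE_MUL are made-up names, and 256 is the assumed scale factor):

        /* FIX and SCALE_MUL are hypothetical helpers, not from any library. */
        #define FIX(f)           ((int)((f) * 256.0 + 0.5))   /* coefficient -> scaled integer */
        #define SCALE_MUL(c, x)  (((c) * (x) + 128) >> 8)     /* multiply, round, unscale      */

        /* Example: SCALE_MUL(FIX(0.587), 200) = (150 * 200 + 128) >> 8 = 117,
           versus 0.587 * 200 = 117.4 in floating point. */
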
  4. When I use the equations from fourcc and other help files I get a converted image, but the picture is blurred. Is there any other factor of the input file that this depends on?
  5. How are you getting your YUV video? Are the chroma channels subsampled as YUY2 or YV12 or similar? Maybe you aren't treating the expansion to 4:4:4 correctly? I would do the conversion first in floating point (for simplicity) and worry about speeding it up after the code is working properly.
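
    If the source is YV12 (4:2:0), each chroma sample covers a 2x2 block of luma samples, so the chroma planes have to be brought up to 4:4:4 before applying the per-pixel formula. A rough sketch of the simplest (nearest-neighbour) expansion of one chroma plane, with made-up function and parameter names:

        #include <stdint.h>

        /* Illustrative helper: nearest-neighbour expansion of one 4:2:0 chroma
           plane to 4:4:4. src is (width/2) x (height/2) with stride src_stride;
           dst is width x height. Even dimensions are assumed, and a real
           converter would interpolate instead of simply replicating. */
        static void chroma_420_to_444(const uint8_t *src, int src_stride,
                                      uint8_t *dst, int dst_stride,
                                      int width, int height)
        {
            for (int y = 0; y < height; y++) {
                const uint8_t *s = src + (y / 2) * src_stride;
                uint8_t *d = dst + y * dst_stride;
                for (int x = 0; x < width; x++)
                    d[x] = s[x / 2];   /* each chroma sample covers a 2x2 block */
            }
        }
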
  6. Can someone please explain the pixel arrangement when we convert from one format to another?
    Is there any relation between picture clarity and the pixel arrangement?
    E.g. for RGB 888, is the entire R plane stored for the frame, then G and B, or are R, G and B stored for pixel 1, then pixel 2, and so on until the end of the frame?
  7. There's no difference in sharpness just because of the storage order of pixels or sub-pixels. There is a loss of sharpness in the color channels when using subsampled YUV (eg, YUY2 or YV12). But order of storage makes no difference.
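
    For illustration, the two arrangements asked about store exactly the same samples and only differ in how a pixel is addressed (the function names and stride parameter here are made up):

        #include <stdint.h>

        /* Packed (interleaved) RGB 888: R, G, B for pixel 0, then pixel 1, ... */
        static uint8_t packed_red(const uint8_t *buf, int stride, int x, int y)
        {
            return buf[y * stride + x * 3 + 0];   /* +1 for G, +2 for B */
        }

        /* Planar RGB: the whole R plane, then the whole G plane, then B. */
        static uint8_t planar_red(const uint8_t *r_plane, int stride, int x, int y)
        {
            return r_plane[y * stride + x];
        }
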
  8. Can someone tell me which RGB formats are supported by a usual 15 inch monitor?
    - Using the equations described on www.FourCC.org, the required color effect is not obtained. What should I do to get a correct frame?
    - What is meant by an alpha channel in RGB? How is this different from the other formats?
  9. There are many open source video programs. Why don't you download the source code of one of them and compare its conversion algorithm to what you're doing?

    An alpha channel usually is 8 bits of additional data added to a 24 bit color image (making it 32 bits per pixel). The extra 8 bits are usually used to indicate transparency.

    http://www.hendronix.com/?p=7
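
    As a sketch of that 32 bits-per-pixel layout (this particular A-R-G-B bit order is only one common convention, not the only one):

        #include <stdint.h>

        /* One 32-bit ARGB pixel: 8 bits of alpha (transparency) on top of 24-bit RGB. */
        static uint32_t pack_argb(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
        {
            return ((uint32_t)a << 24) | ((uint32_t)r << 16) |
                   ((uint32_t)g <<  8) |  (uint32_t)b;
        }
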
  10. How do we manipulate the back buffer for display?
    How do we manage reads and writes to this back buffer in an optimized manner?
    Please help me resolve these doubts.
  11. Originally Posted by Dave1024
    How do we manipulate the back buffer for display?
    How do we manage reads and writes to this back buffer in an optimized manner?
    You read and write to the back buffer exactly the same way as the front buffer. When you are done updating the back buffer, you call the system function to swap the buffers.

    In Windows, with DirectDraw, you create two surfaces and then call IDirectDrawSurface::Flip() to swap them for display.
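
    Conceptually, leaving the particular API aside, a flip is nothing more than exchanging which buffer is the "front" and which is the "back". A minimal sketch with made-up names:

        #include <stdint.h>

        /* Illustrative struct, tied to no particular graphics API:
           the renderer always writes into 'back', the display reads 'front'. */
        typedef struct {
            uint8_t *front;   /* buffer currently being displayed  */
            uint8_t *back;    /* buffer currently being drawn into */
        } double_buffer;

        static void db_flip(double_buffer *db)
        {
            uint8_t *tmp = db->front;
            db->front = db->back;
            db->back  = tmp;
            /* With a real API such as DirectDraw, the driver performs the swap,
               usually synchronized with the display's vertical refresh. */
        }
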
  12. DirectShow and DirectX are different packages now. Can you please mention the DirectX API support for a back buffer implementation? Is there any other factor involved in getting smooth output without delay?
  13. What is the minimum number of buffers needed to implement flipping?
    How do you load a buffer to the display device?
  14. Originally Posted by Dave1024
    What is the minimum number of buffers needed to implement flipping?
    2. Use 3 for smoother results. One being displayed, one fully updated and waiting to be displayed, and one being updated.

    Originally Posted by Dave1024
    How do you load a buffer to the display device?
    http://msdn.microsoft.com/en-us/library/aa139765.aspx
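
    A simplified sketch of how the three roles rotate (it assumes the frame being drawn is finished before each rotation; the struct and field names are made up):

        /* Illustrative only: buffer[display] is on screen, buffer[ready] is
           complete and waiting, buffer[draw] is being rendered into. */
        typedef struct {
            int display;
            int ready;
            int draw;
        } triple_buffer;

        static void tb_rotate(triple_buffer *tb)
        {
            int freed = tb->display;   /* the buffer leaving the screen            */
            tb->display = tb->ready;   /* show the frame that is already complete  */
            tb->ready   = tb->draw;    /* the frame just finished waits its turn   */
            tb->draw    = freed;       /* render the next frame into the freed one */
        }
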
  15. What is the difference between a Texture and a Surface?
    Is the back buffer implemented with both a texture and a surface?
    What is the importance of locking and unlocking a surface?
    Do thread synchronization primitives like mutexes, semaphores and critical sections have any importance for video data rendering (are they used to synchronize reading, converting and rendering the video data)?


