VideoHelp Forum
  1. Magix editor: can't import raw YV12. What am I missing?
    (It used to work fine on a Win XP system with an older version.)
  2. Try using ffdshow and enabling the raw video YV12 decoder.
  3. Hi, thanks.
    Already did that: installed ffdshow, set "Raw video" to "all supported" (that should include YV12, right?), and also disabled the "don't use ffdshow in..." and "use ffdshow only in..." boxes; still doesn't work.

    [Attachment: ffdshow & Magix Video Deluxe 2016.png]

    The purpose is to load into the NLE virtual files created by Avisynth Virtual File System from source files pre-filtered with Avisynth (as explained in this other thread), and those files appear as "YV12". But I did try to copy one such virtual file to another location so as to load it as a physical file, to no avail. It worked fine when I did it with MVD 17 (from 2010) on my Windows XP partition, but I had a framerate issue when exporting (as explained in that other thread), which prompted me to try the newer version on the newer system. At least the framerate issue seems to be solved, but I can't load the whole project until I've solved this YV12 issue (well, it works if I add "ConvertToRGB", but I'd prefer not to...).
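    As a rough illustration, the kind of script served through AVFS looks like this (the file name and filter values are hypothetical, not taken from the thread):

```avisynth
# Pre-filtering script mounted as a virtual file via AVFS.
# The clip stays in YV12, which is what the NLE refuses;
# appending ConvertToRGB32() is the workaround mentioned above,
# at the cost of a colorspace conversion.
DirectShowSource("source_clip.mp4")
Levels(0, 1.2, 255, 0, 255)  # illustrative exposure/levels correction
# ConvertToRGB32()           # uncomment if the NLE refuses YV12
```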
    Last edited by abolibibelot; 28th Aug 2016 at 21:50.
  4. Was your older software x86? You might have had the "Helix YUV codec" installed, which supports YV12 through its 32-bit interface, but I think there might be a newer one that supports x64 now.

    For YUV 4:2:0, usually the only FourCC configuration that works universally is "IYUV", and this cannot be sent through AVFS. "IYUV" is Intel YUV and is supported natively by Windows. YV12 isn't well supported in retail programs.

    Your NLE most likely converts it (and lossless YUV codecs) to RGB anyway. The exception is usually "IYUV": most Windows NLEs actually treat that as YUV.
    Was your older software x86? You might have had the "Helix YUV codec" installed, which supports YV12 through its 32-bit interface, but I think there might be a newer one that supports x64 now.
    If it worked on Windows XP (32-bit), it was obviously x86 / 32-bit. As for the newer version I'm not sure, and I don't know how to verify it. It installed Video Deluxe (the actual NLE software) in the "Programmes" directory, but a component called "Speed3_burnR" (probably a CD/DVD burning module) was installed in the "Program Files (x86)" directory, if that's any clue.
    On the XP partition I have a thing called “Satsuki Decoder Pack” installed, which includes Media Player Classic and a supposedly carefully selected assortment of filters. It contains two files whose description says “Helix...” something; not sure if that's the one you mean.

    [Attachment: Satsuki Decoder Pack - Helix DNA.png]
    (Satsuki folder in XP partition.)

    So, is ffdshow supposed to work in 32-bit only, or in 64-bit as well?


    For YUV 4:2:0, usually the only FourCC configuration that works universally is "IYUV", and this cannot be sent through AVFS. "IYUV" is Intel YUV and is supported natively by Windows. YV12 isn't well supported in retail programs.
    I still have only a very rudimentary knowledge of color spaces and the various standards pertaining to them... I don't know exactly what YUV / IYUV / YV12 are. All I know is that it worked and no longer does.

    Your NLE most likely converts it (and lossless YUV codecs) to RGB anyway. The exception is usually "IYUV": most Windows NLEs actually treat that as YUV.
    But won't it be heavier to process in RGB (i.e. require more memory, especially), even though those are virtual files? I need to load 4 virtual files (about 35 minutes in 1280x720) before even opening that project. It worked on the XP install, and I could make a full export (albeit out of sync), but it was pushing the system to its limit (a 2009 machine with 4 GB of total memory; I'm even surprised it worked). I was planning on building a new computer based on an Intel i7 before resuming work on that movie much more comfortably, but I can barely afford it right now, and the project has already been on hold for a long time.
    Last edited by abolibibelot; 28th Aug 2016 at 22:46.
  6. Originally Posted by abolibibelot View Post
    Hi, thanks.
    Already did that: installed ffdshow, set "Raw video" to "all supported" (that should include YV12, right?)
    Yes, that's the right idea. But... Did you set it for both VFW and DirectShow? Some programs use one, some the other.
  7. Originally Posted by abolibibelot View Post
    But won't it be heavier to process (i.e. require more memory especially) in RGB, even though those are virtual files ?
    Yes, it should be, but it partly depends on how the software handles the handoff from AVFS. It might actually be the same if it converts internally to RGB32. There is another thing: RGB32, despite taking more memory for a "dummy" alpha channel, sometimes works faster than RGB24 for some programmatic reason (memory alignment or something, not sure). Either way, it's simple enough to do a quick test; or, if you have a lot going on in the AVS script, it might actually be smarter to use a physical intermediate.
  8. Member Cornucopia
    And it's not like it loads the whole file into memory at once during frameserving; it serves frames one at a time. So there's not a huge burden using RGB32 or RGB24 vs YV12.

    Scott
  9. Yes, that's the right idea. But... Did you set it for both VFW and DirectShow? Some programs use one, some the other.
    I set it both in "VFW configuration" and in "Video decoder configuration" (that's the same as DirectShow, right? Because I don't see a section called "DirectShow").

    [Attachment: ffdshow - VFW + video decoder.png]

    Yes, it should be, but it partly depends on how the software handles the handoff from AVFS. It might actually be the same if it converts internally to RGB32. There is another thing: RGB32, despite taking more memory for a "dummy" alpha channel, sometimes works faster than RGB24 for some programmatic reason (memory alignment or something, not sure). Either way, it's simple enough to do a quick test; or, if you have a lot going on in the AVS script, it might actually be smarter to use a physical intermediate.
    Use a physical intermediate: that's precisely what I wanted to avoid when searching for a way to achieve this... ;^p If it works with AVFS -- and it did, enough to export the whole thing (1h20min) in one go with no hiccup -- it allows changing the parameters in the AVS script and quickly reloading the virtual files to see the effect immediately, instead of going through yet another lengthy conversion to a huge lossless intermediate.
    Sure, with a denoising or deinterlacing filter it would have been impracticable. But those AVS scripts are only intended for exposure correction (as explained in the other thread I mentioned, which you may have seen in the meantime), so it's not too heavy in that respect (the virtual files can almost be read in real time within the NLE; it's a bit jerky but doesn't grind to a halt).
  10. And it's not like it loads the whole file into memory at once during frameserving; it serves frames one at a time. So there's not a huge burden using RGB32 or RGB24 vs YV12.
    No, of course not (the total for those 4 files would be 64 GB!), but the process corresponding to the longest of those files (about 26 min) occupies more than 500 MB once the virtual file is loaded.
    I'll try converting to RGB24 or RGB32 if I don't find a solution to import in YV12; but still, that one shouldn't be too hard to solve.

    There is another thing: RGB32, despite taking more memory for a "dummy" alpha channel, sometimes works faster than RGB24 for some programmatic reason (memory alignment or something, not sure).
    Do you have a reference for that statement? And how is that "dummy" alpha channel generally treated by editors?
    I read this about RGB / RGBA with a dummy 4th channel in MagicYUV's options:

    [Attachment: MagicYUV RGBA.png]
  11. Originally Posted by abolibibelot View Post

    Do you have a reference for that statement? And how is that "dummy" alpha channel generally treated by editors?


    It's just an observation with some programs; it depends on the specific program. But the underlying reason has to do with alignment to machine word boundaries.

    http://avisynth.nl/index.php/RGB32
    Using the RGB32 video format provides in modern processors faster access to video data because the data is aligned to machine's word boundaries. For this reason many applications use it instead of RGB24 even when there is no transparency mask information in the fourth (A) byte, since in general the improved processing speed outweighs the memory overhead introduced by the unused A byte per pixel.
    I forget exactly where, or which programs, but some benchmarks/speed tests showed it to be around 20% faster than RGB24, despite being "larger".

    It's handled fine by most programs. It's a "dummy" alpha, so no actual data. Some programs might pop up a dialog asking how you want to handle it (e.g. an option to ignore it). If you had a real alpha channel (transparency information) with RGB32, it would also be handled properly in most NLEs, because they use it when compositing / mixing layers.
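    The alignment point can be made concrete with the row-stride rule used by Windows DIBs (each row padded up to a 4-byte boundary). This is a worked illustration of the general mechanism, not a claim about any particular NLE:

```latex
% Row stride, padded up to a multiple of 4 bytes:
%   stride(w, b) = 4 * ceil(w * b / 4)
% For a 1279-pixel-wide frame:
%   RGB24: 1279 * 3 = 3837 -> padded to 3840 (3 wasted bytes per row)
%   RGB32: 1279 * 4 = 5116 -> already a multiple of 4, no padding
\mathrm{stride}(w, b) = 4 \left\lceil \frac{w\,b}{4} \right\rceil
```

    With RGB32, every pixel and every row starts on a 4-byte boundary, which suits 32-bit loads; RGB24 rows and pixels often straddle word boundaries.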
  12. OK, solved that one -- it works with the 64-bit build of ffdshow:
    https://sourceforge.net/projects/ffdshow-tryout/files/Official%20releases/64-bit/
    Previously I had chosen the one called “the latest version” without thinking twice about it:
    Looking for the latest version? Download ffdshow_rev4532_20140717_clsid.exe (4.8 MB)
    Now, would you (“Cornucopia”, “poisondeathray”, “jagabo”) have some clues or good advice about my other issues (explained at length in the two related threads)?
    1) I used both 29.97 fps and 25 fps source files for the movie I'm about to finish, but in distinct parts (not mixed together: roughly the first two thirds from the 29.97 fps source, and the last third from the 25 fps source). The stabilization function (which I badly need for many 29.97 fps sequences; the camera's O.I.S. was seemingly defective, with vertical jerkiness all over the place) works well only if the project frame rate and export frame rate match that of the source (at least that was the case in MVD v.17, and it's such a lengthy process when applied to large sections that I'd prefer not to have to start it all over again), so I have to export the whole movie at 29.97 fps (which seems to work well with the newer version of MVD, as opposed to v.17, which generated extra frames and caused a desynchronization).
    Those 25 fps sequences feature people talking in a room, mostly shot handheld (so moderately jerky, with a moderate amount of motion). Would you let the NLE convert the 25 fps part to 29.97 (most likely by duplicating about one frame in every six), or use an Avisynth plugin to interpolate the extra frames, as “manolo” suggested? Wouldn't that affect the picture quality, counterbalancing the possibly improved smoothness of playback? Or would it be possible / compliant (especially for standalone devices like a Blu-ray player with MP4/MKV compatibility) to encode the two parts separately at their native frame rate, then stitch them together as MP4 or MKV? (Probably not, since the frame rate is specified at the file header level, and there can be only one.)
    I can provide a sample if required.
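    The "about one frame in every six" estimate checks out arithmetically (a back-of-the-envelope calculation, not from the thread):

```latex
% 25 fps -> 29.97 fps by duplication:
%   extra frames per second: 30000/1001 - 25 ~= 4.97
%   source frames per duplicate: 25 / 4.97 ~= 5.03
% So roughly every 5th source frame is repeated,
% i.e. about one duplicate in every 6 output frames.
\frac{30000}{1001} - 25 \approx 4.97, \qquad \frac{25}{4.97} \approx 5.03
```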

    2) Regarding the exposure issue, is there anything more I could try to improve that footage before the final rendering / encoding stage, with other plugins or a better set of parameters for the ones I used ?
    I can also provide a sample if required.
    Last edited by abolibibelot; 29th Aug 2016 at 02:37.
  13. You can try using DepanStabilize() to reduce the camera shake. After that try ChangeFPS() (duplicates frames), ConvertFPS() (blends frames), MFlowFPS() (motion interpolation), or InterFrame() (another motion interpolation) to convert the frame rate.
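    A sketch of what those options look like in a script (the source file name is hypothetical; InterFrame is a third-party wrapper that also requires the SVP plugins, and DepanStabilize comes from the DePan/DePanEstimate plugins):

```avisynth
# 25 fps -> 29.97 fps conversion, pick ONE of the alternatives:
src = AviSource("clip_25fps.avi")
# shake reduction would come first, e.g.:
# src = DepanStabilize(src, data=DepanEstimate(src))
# ChangeFPS(src, 30000, 1001)    # duplicates ~1 frame in 6, no blending
# ConvertFPS(src, 30000, 1001)   # blends adjacent frames
InterFrame(src, NewNum=30000, NewDen=1001)  # motion-interpolated frames
```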
  14. You can try using DepanStabilize() to reduce the camera shake. After that try ChangeFPS() (duplicates frames), ConvertFPS() (blends frames), MFlowFPS() (motion interpolation), or InterFrame() (another motion interpolation) to convert the frame rate.
    The question would be: is it worth the trouble? ChangeFPS will probably have the same effect as the internal conversion by the NLE software, while the others, blending or interpolating new frames (or all frames?), might impact the quality of an already problematic source.

    As for DepanStabilize, I'll try it, but the stabilizing function included in MVD is satisfactory and probably more practical (actually there are two in the newer version: the Magix one, and another called ProDAD Mercalli, which seems more advanced yet hasn't given me good results in a few tests). Does this Avisynth function process the footage every time it is called, or does it somehow store the stabilization parameters the first time so as to re-use them afterwards? (If it's the former, it's going to be way too slow compared with the MVD function, which does store the parameters, subsequently allowing relatively smooth playback of the stabilized footage.)

    Just to make it clear: the camera shake issue concerns the 29.97 fps footage (the 25 fps files are fine in that regard, with just “regular” handheld camera shake, which I don't want to correct; the O.I.S. did a good job in that case), while the exposure issue (and potential frame rate conversion issue) concerns the 25 fps footage (the 29.97 fps files are well exposed). I'm already using Avisynth to pre-filter 4 of the 25 fps files (correcting the exposure / levels), which are loaded into the NLE through AVFS, and that's about the maximum I can do that way with this computer (otherwise I'd have to resort to huge lossless intermediates, which would make the whole process even more tedious). Further Avisynth filtering would have to take place at the encoding stage (on the exported lossless intermediate), after all the editing operations, and thus with much less visibility / interactivity regarding the final result (at least with my rather limited experience of Avisynth; maybe some AVS scripting wizards can do whatever they want with just a text editor and have a clear vision of the effect of each command in relation to the others, but I'm still far from that level!).


