Magix editor: can't import raw YV12, what am I missing? (It used to work fine on a Win XP system with an older version.)
-
Hi, thanks.
Already did that: installed ffdshow, set "Raw video" to "all supported" (which should include YV12, right?), and also unchecked the "don't use ffdshow in..." and "use ffdshow only in..." boxes; it still doesn't work.
The purpose is to load into the NLE virtual files created by Avisynth Virtual File System from source files pre-filtered with Avisynth (as explained in this other thread), and those files appear as "YV12". But I did try to copy one such virtual file to another location so as to load it as a physical file, to no avail. It worked fine when I did it with MVD 17 (from 2010) on my Windows XP partition, but I had a framerate issue when exporting (as explained in that other thread), which prompted me to try the newer version on the newer system. At least the framerate issue seems to be solved, but I can't load the whole project until I have solved that YV12 issue (well, it works if I add "ConvertToRGB", but I'd prefer not to...).
Last edited by abolibibelot; 28th Aug 2016 at 21:50.
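For reference, the "ConvertToRGB" fallback mentioned above would look something like this in the script served through AVFS; this is only a sketch, with a placeholder source file name, and the matrix should match the footage:

```avisynth
# Hypothetical AVFS script; "capture.avi" is a placeholder source.
AviSource("capture.avi")         # decodes to YV12
# ... exposure / levels filtering would go here ...
ConvertToRGB32(matrix="Rec601")  # fallback so the NLE accepts the stream
```

The conversion costs some CPU and memory per frame, but it sidesteps the YV12 import problem entirely.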
-
Was your older software x86? You might have had the "Helix YUV codec" installed, which supports YV12 through the 32-bit interface; I think there might be a newer build that supports x64 now.
For YUV 4:2:0, usually the only FourCC configuration that works universally is "IYUV", and this cannot be sent through avfs. "IYUV" is Intel YUV and is supported natively by Windows. YV12 isn't well supported in retail programs.
Your NLE most likely converts it (and lossless YUV codecs) to RGB anyway. The exception is usually "IYUV": most Windows NLEs actually treat that as YUV.
-
Was your older software x86? You might have had the "Helix YUV codec" installed, which supports YV12 through the 32-bit interface; I think there might be a newer build that supports x64 now.
On the XP partition I have a thing called “Satsuki Decoder Pack” installed, which includes Media Player Classic and a supposedly carefully selected assortment of filters. It contains two files for which the description says “Helix...” something, not sure if that's the one you mean.
(Satsuki folder in XP partition.)
So, is ffdshow supposed to work in 32-bit only, or 64-bit as well?
For YUV 4:2:0, usually the only FourCC configuration that works universally is "IYUV", and this cannot be sent through avfs. "IYUV" is Intel YUV and is supported natively by Windows. YV12 isn't well supported in retail programs.
Your NLE most likely converts it (and lossless YUV codecs) to RGB anyway. The exception is usually "IYUV": most Windows NLEs actually treat that as YUV.
Last edited by abolibibelot; 28th Aug 2016 at 22:46.
-
-
Yes, it should be, but it partially depends on how the software handles the handoff from avfs. It might actually be the same if it converts internally to RGB32. There is another thing: RGB32, despite taking more memory for a "dummy" alpha channel, sometimes works faster than RGB24 because of some programmatic reason (memory alignment or something, not sure). Either way, it's simple enough to do a quick test; or, if you have lots going on in the avs script, it might actually be smarter to use a physical intermediate.
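A quick A/B test along those lines could be as simple as swapping one line in the served script (the source file name is a placeholder):

```avisynth
# Hypothetical comparison script: uncomment one conversion at a time
# and compare scrubbing / playback speed in the NLE.
src = AviSource("capture.avi")
ConvertToRGB24(src)    # 3 bytes per pixel, smaller frames
#ConvertToRGB32(src)   # 4 bytes per pixel, rows aligned to word boundaries
```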
-
And it's not like it loads the whole file into memory at once during frameserving; it serves frames one at a time. So there's no huge burden using RGB32 or RGB24 vs. YV12.
Scott
-
Yes, that's the right idea. But... Did you set it for both VFW and DirectShow? Some programs use one, some the other.
Yes, it should be, but it partially depends on how the software handles the handoff from avfs. It might actually be the same if it converts internally to RGB32. There is another thing: RGB32, despite taking more memory for a "dummy" alpha channel, sometimes works faster than RGB24 because of some programmatic reason (memory alignment or something, not sure). Either way, it's simple enough to do a quick test; or, if you have lots going on in the avs script, it might actually be smarter to use a physical intermediate.
Sure, with a denoising or deinterlacing filter it would have been impracticable. But those AVS scripts are only intended for exposure correction (as explained in the other thread I mentioned, which you may have seen in the meantime), so it's not too heavy in that respect (the virtual files can almost be read in real time within the NLE; it's a bit jerky but it doesn't grind to a halt).
-
And it's not like it loads the whole file into memory at once during frameserving; it serves frames one at a time. So there's no huge burden using RGB32 or RGB24 vs. YV12.
I'll try converting to RGB24 or RGB32 if I don't find a solution to import in YV12; but still, that one should not be too hard to solve.
There is another thing - RGB32 despite taking more memory for a "dummy" alpha channel, sometimes works faster than RGB24 because of some programmatic reason (memory alignment or something, not sure).
I read this about RGB / RGBA with a dummy 4th channel in MagicYUV's options:
-
It's just an observation with some programs; it depends on the specific program. But the underlying reason has to do with alignment to machine word boundaries.
http://avisynth.nl/index.php/RGB32
Using the RGB32 video format provides in modern processors faster access to video data because the data is aligned to machine's word boundaries. For this reason many applications use it instead of RGB24 even when there is no transparency mask information in the fourth (A) byte, since in general the improved processing speed outweighs the memory overhead introduced by the unused A byte per pixel.
It's handled fine by most programs. It's a "dummy" alpha, so no actual data. Some programs might pop up a dialog asking you how to handle it (e.g. it might give you an option to ignore it). If you had a real alpha channel (transparency information) with RGB32, then it would also be handled fine in most NLEs, because they use it when compositing / mixing layers.
-
OK, solved that one -- it works with ffdshow 64-bit:
https://sourceforge.net/projects/ffdshow-tryout/files/Official%20releases/64-bit/
Previously I had chosen the one called "the latest version" without thinking twice about it:
Looking for the latest version? Download ffdshow_rev4532_20140717_clsid.exe (4.8 MB)
1) I used both 29.97FPS and 25FPS source files for the movie I'm about to finish, but in distinct parts (not mixed together: roughly the first two thirds from the 29.97FPS source, and the last third from the 25FPS source). The stabilization function (which I badly need for many 29.97FPS sequences; the camera's O.I.S. was seemingly defective, vertical jerkiness all over the place) works well only if the project framerate and export framerate match that of the source (at least that was the case in MVD v.17, and it's such a lengthy process when applied to large sections that I'd prefer not to have to start it all over again), so I have to export the whole movie at 29.97FPS (which seems to work well with the newer version of MVD, as opposed to v.17, which generated extra frames, causing a desynchronization). Those 25FPS sequences feature people talking in a room, mostly shot handheld (so moderately jerky, with a moderate amount of motion). Would you let the NLE convert the 25FPS part to 29.97 (most likely by duplicating about one frame every six), or use an Avisynth plugin to interpolate the extra frames, as "manolo" suggested? Wouldn't that affect the picture quality, counterbalancing the possibly improved smoothness of playback? Or would it be possible / compliant (esp. for standalone devices like a Blu-ray player with MP4/MKV compatibility) to encode the two parts separately at their native framerates, then stitch them together as MP4 or MKV? (Probably not, since the framerate is specified at the file header level, and there can be only one.)
I can provide a sample if required.
2) Regarding the exposure issue, is there anything more I could try to improve that footage before the final rendering / encoding stage, with other plugins or a better set of parameters for the ones I used ?
I can also provide a sample if required.
Last edited by abolibibelot; 29th Aug 2016 at 02:37.
-
You can try using DepanStabilize() to reduce the camera shake. After that try ChangeFPS() (duplicates frames), ConvertFPS() (blends frames), MFlowFPS() (motion interpolation), or InterFrame() (another motion interpolation) to convert the frame rate.
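As a sketch of that suggestion (the plugin paths and source file name are assumptions; DePan, DePanEstimate, and the interpolation plugins must be installed separately):

```avisynth
# Hypothetical script combining stabilization and framerate conversion.
LoadPlugin("C:\Plugins\DePanEstimate.dll")  # placeholder paths
LoadPlugin("C:\Plugins\DePan.dll")
src  = AviSource("talking_25fps.avi")
stab = DePanStabilize(src, data=DePanEstimate(src))  # reduce camera shake
ChangeFPS(stab, 30000, 1001)    # 25 -> 29.97 by duplicating frames
#ConvertFPS(stab, 30000, 1001)  # alternative: blend adjacent frames
#InterFrame(stab, NewNum=30000, NewDen=1001)  # alternative: motion interpolation
```

Regarding the re-processing question below: I believe DePanEstimate recomputes the motion data on every run, but it can write its results to a log file (and DePanStabilize can read them back via an input log), so the estimation pass need only be done once.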
-
You can try using DepanStabilize() to reduce the camera shake. After that try ChangeFPS() (duplicates frames), ConvertFPS() (blends frames), MFlowFPS() (motion interpolation), or InterFrame() (another motion interpolation) to convert the frame rate.
As for DepanStabilize, I'll try, but the stabilizing function included in MVD is satisfactory and probably more practical (actually there are two in the newer version: the Magix one and another called proDAD Mercalli, which seems more advanced, yet I haven't obtained good results with it in a few tests). Does this Avisynth function process the footage every time it is called, or does it somehow store the stabilizing parameters the first time so as to re-use them afterwards? (If it's the former, it's going to be way too slow compared with the MVD function, which does store the parameters, subsequently allowing relatively smooth playback of the stabilized footage.)
Just to make it clear: the camera shake issue concerns the 29.97FPS footage (the 25FPS files are fine in that regard, just "regular" handheld camera shake, which I don't want to correct; the O.I.S. did a good job in that case); the exposure issue (and potential framerate conversion issue) concerns the 25FPS footage (the 29.97FPS files are well exposed). I'm already using Avisynth to pre-filter 4 of the 25FPS files (correcting the exposure / levels), which are loaded into the NLE through AVFS, and that's about the maximum I can do that way with this computer (otherwise I'd have to resort to huge lossless intermediates, which would make the whole process even more tedious). Further Avisynth filtering would have to take place at the encoding stage (on the exported lossless intermediate), after all the editing operations, thus with much less visibility / interactivity regarding the final result (at least with my rather limited experience of Avisynth; maybe some AVS scripting wizards can do whatever they want with just a text editor and have a clear vision of the effect of each command in relation to the others, but I'm still far from that level!).