This is driving me nuts... I made a movie from M2TS source videos in Magix Video Deluxe 2016 (I've already spent way too much time fiddling with every detail and countless unexpected issues, and I want to finish this ASAP). I had to pre-filter some sections with Avisynth, but when I imported them into the editor, the colors were slightly off; no matter what I tried, it didn't come out right. If I understand this correctly, it would seem that something is wrong with the colorspace conversion matrix at some point.
I made a bunch of tests using a basic AVS script with no treatment:
– importing the files as virtual files with Avisynth Virtual File System
– importing AVI files created with VirtualDub in "fast recompress" mode, using MagicYUV, Lagarith, UT Video (with various parameters for each), or uncompressed YUV...
...to no avail.
What am I missing?
Wait... it seems to produce a visually indistinguishable preview if I add a ConvertToRGB(matrix="Rec709") line to the Avisynth script, then export it with Lagarith in RGB. How come the editor doesn't use the same conversion method, even though the M2TS and AVI files are supposed to have the exact same content? Or is it something else? Is this the only way I can make this right?
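For reference, the end of the test script that finally matches looks something like this (the filename is just an example):
Code:
FFVideoSource("source.m2ts", threads=1)
# ...filtering would go here...
ConvertToRGB(matrix="Rec709") # force the HD matrix before the editor ever touches the file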
I've read this thread, where it's said that Sony Vegas treats different types of source files differently – could it be the same issue here?
Another thing that's bugging me: if I import an M2TS file in MVD and export it right away as lossless AVI with no treatment whatsoever, then compare the source and the output with this script:
Code:
A = FFVideoSource("source", threads=1).Subtitle("original").Histogram("levels")
B = AVISource("output").ConvertToYV12(matrix="Rec709").Subtitle("export").Histogram("levels")
Interleave(A, B)
...I can see that it looks the same, but the histogram gets clamped to 16-235 with the AVI file, even though the levels of the source file extend below 16 and beyond 235. Shouldn't it keep the levels as they are, unless instructed otherwise? Again, referring to the thread mentioned above, it's said that in Vegas "native camera files get studio RGB treatment" – but, if I understand this correctly, that is not the case here, right? Or am I completely confused about these notions?
(By the way, I found no mention at all of YUV, RGB, luma or chroma in the integrated help, which is rather strange for a relatively advanced video editor...)
Thanks in advance.
Yes, it's the same issue.
Your editor is converting the YUV intermediate to RGB with Rec601. That's also the reason for the clipping – it doesn't treat those AVI codecs as YUV, it treats them as RGB and uses a standard-range Rec601 conversion.
If you "undo" the conversion in the AVI screenshot by going back to YUV with 601, then back to RGB with 709, you get the same colors as the native camera files. Combined with the observation that if you convert to RGB using 709 and use a RGB intermediate, that it looks correct confirms that is what is going on.
One format that usually gets passed through as YUV in Windows editors is UYVY (a specific uncompressed 8-bit 4:2:2 configuration). But the file sizes are huge because it's uncompressed.
For the export clamping – did you try another format? For example, does it occur on, say, an H.264/AVC MP4 export? Maybe there are other conversions going on behind the scenes.
This is not as bad as "clipping", which would mean the data is lost. The "clamping" just squishes the data into legal range – not as bad, because you could expand the range back afterwards if you wanted to. But it would be nice if they gave the user control over that.
Forget about "studio RGB". That only applies to vegas . No other program uses it -
Your editor is converting the YUV intermediate to RGB with Rec601. That's also the reason for the clipping – it doesn't treat those AVI codecs as YUV, it treats them as RGB and uses a standard-range Rec601 conversion.
If you "undo" the conversion in the AVI screenshot by going back to YUV with 601, then back to RGB with 709, you get the same colors as the native camera files. Combined with the observation that if you convert to RGB using 709 and use a RGB intermediate, that it looks correct confirms that is what is going on.
One format that usually gets passed through as YUV in Windows editors is UYVY (a specific uncompressed 8-bit 4:2:2 configuration). But the file sizes are huge because it's uncompressed.
So with this UYVY format, if it does work as intended without altering the colors, will it preserve the quality better than converting to RGB through Avisynth, or not at all? (This footage is precious and already in bad shape – the screenshots are from the well-exposed part; the rest has a severe backlight issue which I painstakingly corrected with AutoAdjust and HDRAGC – so I'd like to avoid any further loss.) Since the editor converts to RGB anyway, I guess the difference is going to be negligible... But a double conversion RGB => YUV => RGB is lossy, right?
I also tried converting to RGB with PC.709 (instead of Rec709): the picture got a washed-out aspect, as if the contrast had been significantly reduced.
For the export clamping – did you try another format? For example, does it occur on, say, an H.264/AVC MP4 export? Maybe there are other conversions going on behind the scenes.
I'm going to make the final export in lossless AVI (before compressing with x264): what format should I choose to get it right, with the best possible quality, no shift in the color balance, and all-around compliance? Apparently MagicYUV only exports in YUV formats; Lagarith and UT Video can export in RGB, which would seem to be the better choice here (since I have no control over the RGB to YUV process, there might be other SNAFUs down the line!...), and then I'd convert to YV12 with the Rec709 matrix at the final transcoding stage – does that sound right?
I'm still pondering whether I'm going to export in 1280x720 or 1024x576 (the latter would be easier on older computers, and it may hide some defects in the source footage, but the former would retain more detail for still pictures) – should that matter when choosing the RGB to YUV conversion matrix? I.e., should I choose Rec601 for 1024x576, as it's not technically a so-called “HD” resolution and may be wrongly identified as such by software or hardware players if Rec709 is used?
This is not as bad as "clipping", which would mean the data is lost. The "clamping" just squishes the data into legal range – not as bad, because you could expand the range back afterwards if you wanted to. But it would be nice if they gave the user control over that.
Isn't some data lost in the process in case of "clamping", i.e. won't there be banding if the range is expanded back? And should YV12 video be exported in the limited 16-235 luma range, to be compliant with standalone players / TVs and whatnot? Or are current devices able to interpret a 0-255 range? I've read repeatedly that the luma range for video should be 16-235, with pure black being 16 and pure white being 235 – but then what is the purpose of the values outside that range?
Forget about "studio RGB". That only applies to vegas . No other program uses itLast edited by abolibibelot; 23rd Dec 2017 at 02:35.
Also: how can I export to UYVY with VirtualDub? (Or with anything that can read an Avisynth script and export to lossless AVI – there isn't much else as far as I know, but maybe ffmpeg can do that right off the bat.)
Another issue: at first I tried to import Avisynth scripts directly into MVD through Avisynth Virtual File System, but (apart from the color shift) I got weird issues, like non-reproducible sudden variations of luminosity, as if some frames were not treated and were served as-is; then, if I closed and re-opened MVD, those particular frames could be fine but others were affected in the same way. The exported AVI files seem fine in that regard (I haven't checked thoroughly yet, though).
The Avisynth script contains:
Code:
FFVideoSource("20131224_145353.m2ts", threads=1)
Autoadjust(auto_gain=true, high_quality=true, gain_mode=0, input_tv=false)
HDRAGC(coef_gain=1.75, max_gain=6.00, coef_sat=1.00, max_sat=1.40, corrector=1.00, reducer=0.3, mode=2, black_clip=1.00, avg_lum=128, shadows=false)
SMDegrain(thSAD=200)
Can the fact that I don't have a dedicated graphics card – and thus rely on the CPU's integrated graphics chip – have any bearing on this? And even if it does not, would there be a significant advantage in purchasing an entry-level discrete video card for video editing tasks? (I don't play video games and don't do any 3D-related tasks, so I thought I could get away with not having one, since even video editing is supposed to rely mainly on the CPU rather than the GPU.)
It works with other Windows editors, such as Vegas, Premiere Pro and a few others. But it's upsampling the chroma if you're starting with 4:2:0. Technically that's not lossless, unless you just duplicate/discard chroma samples with a nearest neighbor algorithm. This can be done in VapourSynth or Avisynth, but you have no control over the other conversions in your editor (YUV 4:2:2 to RGB is upsampling again, and there is loss from YUV to RGB as a general rule).
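A sketch of the duplicate-samples approach, assuming Avisynth 2.6, where the ConvertToXXX functions take a chromaresample parameter (the filename is a placeholder):
Code:
FFVideoSource("source.m2ts", threads=1)
ConvertToYUY2(chromaresample="point") # 4:2:0 -> 4:2:2 by duplicating chroma samples instead of interpolating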
Will it preserve the quality better than converting to RGB through Avisynth, or not at all? (This footage is precious and already in bad shape – the screenshots are from the well-exposed part; the rest has a severe backlight issue which I painstakingly corrected with AutoAdjust and HDRAGC – so I'd like to avoid any further loss.) Since the editor converts to RGB anyway, I guess the difference is going to be negligible... But a double conversion RGB => YUV => RGB is lossy, right?
I also tried converting to RGB with PC.709 (instead of Rec709): the picture got a washed-out aspect, as if the contrast had been significantly reduced.
If you use PC.709, then yes – that is a full-range conversion. You have to convert back with full range to make it "normal" when going back to YUV. Some people choose to work this way, but it's difficult to do proper color work when the end goal is normal range.
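I.e., something like this (a sketch only):
Code:
ConvertToRGB(matrix="PC.709")  # full range: YUV 0-255 <-> RGB 0-255, so legal-range video looks washed out
# ... RGB work in the editor ...
ConvertToYV12(matrix="PC.709") # go back with the matching full-range matrix to look "normal" again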
I wouldn't over-concern yourself with "lossless" or not. The major avoidable losses, where you can "see" problems, are from clipping. Other things like lossless codecs are probably overkill.
For the export clamping – did you try another format? For example, does it occur on, say, an H.264/AVC MP4 export? Maybe there are other conversions going on behind the scenes.
Yes, it's the same if I export in MP4, for instance.
I'm going to make the final export in lossless AVI (before compressing with x264): what format should I choose to get it right, with the best possible quality, no shift in the color balance, and all-around compliance? Apparently MagicYUV only exports in YUV formats; Lagarith and UT Video can export in RGB, which would seem to be the better choice here (since I have no control over the RGB to YUV process, there might be other SNAFUs down the line!...), and then I'd convert to YV12 with the Rec709 matrix at the final transcoding stage – does that sound right?
I'm still pondering whether I'm going to export in 1280x720 or 1024x576 (the latter would be easier on older computers, and it may hide some defects in the source footage, but the former would retain more detail for still pictures) – should that matter when choosing the RGB to YUV conversion matrix? I.e., should I choose Rec601 for 1024x576, as it's not technically a so-called “HD” resolution and may be wrongly identified as such by software or hardware players if Rec709 is used?
How can I know for sure whether it's clipped or clamped? How do other NLE softwares behave in that regard?
Isn't some data lost in the process in case of "clamping", i.e. won't there be banding if the range is expanded back? And should YV12 video be exported in the limited 16-235 luma range, to be compliant with standalone players / TVs and whatnot? Or are current devices able to interpret a 0-255 range? I've read repeatedly that the luma range for video should be 16-235, with pure black being 16 and pure white being 235 – but then what is the purpose of the values outside that range?
Clamping is "squished" . Look at the big spikes at the ends. The data is still there, just squished.
The problem is that at 8-bit precision, that clamping is very difficult to recover. You can't "tease out" details from 0-255 "slots" when they're squished into 16-235. So effectively it's still "bad" – just not as "bad". It depends on the program: some keep the internal calculations in float precision while you are still in the program, and you have access to all the values regardless of RGB or YUV.
The purpose of values outside of 16 and 235 is undershoot and overshoot. You're supposed to keep most of everything within those values for Y.
Forget about "studio RGB". That only applies to vegas . No other program uses it
You can think of studio RGB as similar to a "PC matrix". The coefficients are slightly different (resulting in slightly different colors), but the levels are the same: YUV 0-255 gets mapped to RGB 0-255 and vice versa. It's not being used in MEP – you can tell from the info you've provided.
Also: how can I export to UYVY with VirtualDub? (Or with anything that can read an Avisynth script and export to lossless AVI – there isn't much else as far as I know, but maybe ffmpeg can do that right off the bat.)
Code:
ffmpeg -i INPUT.avs -pix_fmt uyvy422 -c:v rawvideo -an -vtag "UYVY" OUTPUT_UYVY.avi
Another issue: at first I tried to import Avisynth scripts directly into MVD through Avisynth Virtual File System, but (apart from the color shift) I got weird issues, like non-reproducible sudden variations of luminosity, as if some frames were not treated and were served as-is; then, if I closed and re-opened MVD, those particular frames could be fine but others were affected in the same way. The exported AVI files seem fine in that regard (I haven't checked thoroughly yet, though).
Can the fact that I don't have a dedicated graphics card – and thus rely on the CPU's integrated graphics chip – have any bearing on this? And even if it does not, would there be a significant advantage in purchasing an entry-level discrete video card for video editing tasks? (I don't play video games and don't do any 3D-related tasks, so I thought I could get away with not having one, since even video editing is supposed to rely mainly on the CPU rather than the GPU.)
But a discrete card will make a difference on various tasks. It depends on which tasks and which software. Many operations in video editors are GPU-accelerated these days, especially resizing and filtering. It can speed up the workflow.
"lossless" YUV codecs are generally not "lossless" in many software, because they are often treated as RGB as you see here
But there are other intermediates you could use.
AVC in 8-bit or 10-bit 4:2:2 is fairly good because it's usually treated as YUV in most editors. Very configurable. At low quantizers you can get 99.999% mathematically lossless – even much higher quality than ProRes 4444 XQ (which is common for high-end professional masters). (Some editors support the lossless x264 variant, but when tested, the decoding isn't 100% truly lossless.) You can configure anywhere in between. You can configure I-frame only or short GOPs, and choose faster decoding (--tune fastdecode) or higher compression if storage/filesize is a consideration. But even the fastest-decoding, I-frame variant is about 4x slower than, say, Cineform or ProRes. It will still be faster, with better seek performance, than AVFS (lots of overhead – essentially you're double frameserving), but the biggest negative is seek performance, even with fast decoding options enabled. Cineform or ProRes are much more editing-friendly; but at their highest quality levels, they are not as high as AVC's highest quality levels. For typical home video / consumer editing, they are still overkill IMO.
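For example, an I-frame-only, near-lossless x264 intermediate could look something like this (the settings are one reasonable choice, not a recipe – adjust to taste):
Code:
ffmpeg -i INPUT.avs -c:v libx264 -crf 1 -g 1 -tune fastdecode -pix_fmt yuv422p -an INTERMEDIATE.mp4
Here -crf 1 is a near-lossless quantizer, -g 1 forces I-frame only for better seeking, and yuv422p keeps it 8-bit 4:2:2.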
Yes, VDub doesn't need an external codec anymore for UYVY. In VDFM you can set the decode format, or under "uncompressed" there is a "pixel format" button where you can select UYVY.
To clarify, Vegas always works in RGB – it's just that "lossless" YUV codecs get computer RGB treatment instead of studio RGB treatment. This produces clipping for superbrights/superdarks (not recoverable). But UYVY and native camera formats get "studio RGB treatment", so you can fix them.
Premiere has a YUV-capable timeline and YUV filters, so it is possible to work in YUV completely. Most native camera formats, some types of AVC and HEVC, UYVY and v210 get YUV treatment. UYVY and v210 are extra special there, because they get complete passthrough: input = output.
A long time ago I tested Pinnacle and Ulead – I think MEP was on that list too. UYVY was treated as YUV there as well. It's the other uncompressed FourCCs, such as YUY2, 2Vuy, etc. (there are dozens of them), that got RGB treatment instead of uncompressed YUV treatment. On Macs the preferred FourCC is 2Vuy for 8-bit 4:2:2; on Windows it's UYVY for 8-bit 4:2:2; v210 covers 10-bit 4:2:2 on both platforms. Some Windows editors were able to use IYUV as 8-bit 4:2:0 (as opposed to the more common YV12). Old versions of Premiere were able to pass that through, but newer versions cannot.
Nick Hope has written extensively about color issues involved when frameserving from Vegas, through AVISynth, into either VirtualDub, MeGUI, or Handbrake. Here is one of several tutorials he has written:
How to Render High Quality Video for YouTube and Vimeo from Sony Vegas Pro
It was written several years ago, but he updated it earlier this year.
You are pretty far into this, so by now you have probably experimented with lots of settings. Remember that you can serve out of Vegas using either RGB24 or YUY2; you can convert to other colorspaces within AVISynth, and each of those conversions can be modified using the Rec (matrix) colorspace modifier. You also have issues related to the usual 0, 16, 235, 255 levels conversions, and these can slightly affect the colors.
It works with other Windows editors, such as Vegas, Premiere Pro and a few others. But it's upsampling the chroma if you're starting with 4:2:0. Technically that's not lossless, unless you just duplicate/discard chroma samples with a nearest neighbor algorithm. This can be done in VapourSynth or Avisynth, but you have no control over the other conversions in your editor (YUV 4:2:2 to RGB is upsampling again, and there is loss from YUV to RGB as a general rule).
If you convert to RGB and work in RGB in the editor, make sure you "fix" the levels before converting to RGB and importing into the editor. Adjust the YUV levels in Avisynth first. Y levels <16 and >235 will get clipped by a standard-range (Rec709) conversion. Chroma clipping occurs <16 and >240.
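Something like this, for example (the Levels values are just a starting point – check a histogram rather than trusting them blindly):
Code:
FFVideoSource("source.m2ts", threads=1)
Levels(0, 1.0, 255, 16, 235, coring=false) # compress full range into legal range first...
ConvertToRGB(matrix="Rec709")              # ...so the standard-range conversion has nothing to clip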
Anyway, the native, straight-from-camera files, also included in this movie, will get clamped by the editor with no pre-treatment at all, so I guess it's a moot point... I won't convert all the files to lossless intermediates now just to fix that issue, which doesn't seem to be visually noticeable. (Or is it?)
I wouldn't over-concern yourself with "lossless" or not. The major avoidable losses, where you can "see" problems, are from clipping. Other things like lossless codecs are probably overkill.
If anything, lossless encoders are way faster (even though they produce much bigger files) than lossy ones, so if storage space is not an issue they're actually more convenient. In what way would they be overkill, other than size?
MagicYUV can use RGB too (or maybe that's only in the non-free version). That workflow sounds right if you plan to work in RGB (i.e. Rec709 for YUV<=>RGB conversions – but don't forget to fix things in YUV before converting).
I don't “plan” to work in RGB, it just so happens that the editor does (as most of the equivalent software does, from what I could gather).
Why not decide at the end? Export one "master" format, then you can encode the various end formats from it. If you need SD formats you can use Rec601 and downscale. If you need HD you can keep it 1280x720 and Rec709.
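I.e., one master, then per-delivery conversions along these lines (filenames/sizes are assumptions):
Code:
AVISource("master_rgb.avi").ConvertToYV12(matrix="Rec709") # HD delivery at 1280x720
# or, for an SD derivative instead:
# AVISource("master_rgb.avi").Spline36Resize(1024, 576).ConvertToYV12(matrix="Rec601")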
So you confirm that for anything below 1280x720, Rec601 has to be used?
The problem is that at 8-bit precision, that clamping is very difficult to recover. You can't "tease out" details from 0-255 "slots" when they're squished into 16-235. So effectively it's still "bad" – just not as "bad". It depends on the program: some keep the internal calculations in float precision while you are still in the program, and you have access to all the values regardless of RGB or YUV.
The purpose of values outside of 16 and 235 is undershoot and overshoot. You're supposed to keep most of everything within those values for Y.
But they didn't change MEP, did they?
Not sure, but it's probably related to "auto" adjust. Auto-anything for color work is prone to those fluctuations. I suspect the AVI would have the same problems – look more closely. I doubt it's related to AVFS.
AVC in 8-bit or 10-bit 4:2:2 is fairly good because it's usually treated as YUV in most editors. Very configurable. At low quantizers you can get 99.999% mathematically lossless – even much higher quality than ProRes 4444 XQ (which is common for high-end professional masters). (Some editors support the lossless x264 variant, but when tested, the decoding isn't 100% truly lossless.)
It will still be faster, with better seek performance, than AVFS (lots of overhead – essentially you're double frameserving) [...]
But for typical home video / consumer editing, they are still overkill IMO.
But a discrete card will make a difference on various tasks. It depends on which tasks and which software. Many operations in video editors are GPU-accelerated these days, especially resizing and filtering. It can speed up the workflow.
So it's actually “more lossy” than converting directly to RGB, if the editor imports everything as RGB anyway, right?
It's not more lossy in terms of compression, and if you nearest-neighbor up/down, that is a lossless operation.
The point of using it is that it bypasses the RGB conversion in Premiere, and gets studio RGB treatment in Vegas. Recall: when you used Lagarith in YUV etc., that caused problems because your program converted to RGB and also used the wrong matrix. IIRC, this UYVY approach also worked in MEP, at least in an old version maybe 5 years ago – i.e. it should "act" like the native M2TS file, instead of getting the YUV-codec mistreatment. But if you're sure the program only works in RGB anyway, it's probably a moot point. (In Vegas it's important, because you get studio RGB treatment instead of computer RGB.)
As I said, I used AutoAdjust for those scripts. Judging from the histogram, it brings the blacks “to the right side” and the whites “to the left side”, but leaves some black below 16 and some white above 235. I made tests with Levels, AutoLevels and SmoothLevels (setting output_low to 16 and output_high to 235, or using a “PC to TV” preset): those three bring the levels completely within the “legal” range, but I find the result a bit more visually pleasing with AutoAdjust. Should I reconsider, based on that clamping issue? Do you have experience with those other plugins, and is one of them considered “best” for this purpose?
Anyway, the native, straight-from-camera files, also included in this movie, will get clamped by the editor with no pre-treatment at all, so I guess it's a moot point... I won't convert all the files to lossless intermediates now just to fix that issue, which doesn't seem to be visually noticeable. (Or is it?)
It sounds like you've played around enough to know what you like, so just pick what you think looks best and is within legal limits for the final thing. There is more than one way to do things.
What do you call a “lossless codec” here? Do you mean you would use a lossy intermediate conversion in a case like this? Or that trying to get a perfectly lossless colorspace conversion is overkill? Oh, I think you answered that below, with the mention of Cineform, ProRes and so forth.
If anything, lossless encoders are way faster (even though they produce much bigger files) than lossy ones, so if storage space is not an issue they're actually more convenient. In what way would they be overkill, other than size?
Some lossless codecs might be faster for encoding, but it depends on which ones you're talking about and with which settings – there are pros and cons to all of them. Another consideration is I/O throughput vs. bandwidth, which can become a bottleneck at higher resolutions when using mathematically lossless codecs (maybe not so much for 720p). Lossless and near-lossless codecs are typically judged on encoding speed, decoding speed, compression ratio, compatibility and handling (e.g. colorspace handling in the host application; cross-platform support – does it work on Linux? Mac?), redundancy (e.g. a secondary level of CRC checking?), stability, and price.
I've added the ConvertToRGB line at the end, after the filtering, so normally only one colorspace conversion takes place, right?
I don't “plan” to work in RGB, it just so happens that the editor does (as most of the equivalent software does, from what I could gather).
How do you know it works in RGB only? Did you do some tests, or did someone confirm it somewhere?
So you confirm that for anything below 1280x720, Rec601 has to be used?
But wider-than-16:9 material might be less than 720 pixels high and still use Rec709. E.g. 1280x544 would be ~2.35:1 AR, but should still use Rec709 by convention.
So the default ffmpeg executable is not? I thought about using it for the final encode, instead of MeGUI, as I read here (oh, it was also you! :^p – very active here, apparently) that it has the advantage of doing it in one step. But can it also use external AAC encoders like QAAC (supposedly of better quality than the default AAC encoders)?
I once tried to transcode a file with x264 at CRF=1 (if that's what you mean), because the Xvid source was causing issues in the editor (presumably because of DirectShow), but it indeed caused glitches of its own.
What do you mean by double frameserving?
But for typical home video / consumer editing, they are still overkill IMO.
Do you mean Cineform / ProRes, or any kind of lossless format? If this is a priceless, one-of-a-kind family moment, I suppose you could convince me to use a lossless workflow (but that would mean not using MEP if it works in RGB only, and assuming you didn't need RGB for other manipulations).
Are there particular models recommended for video applications? (Either current and inexpensive models, or originally higher-end cards now considered old for video games, which would still be relevant for such purposes and could be found used at a bargain price.)
For example, Vegas is heavily biased towards OpenCL, which favors AMD cards, whereas Premiere is biased towards Nvidia and CUDA performance. If you were using GPU encoding, you'd want a card appropriate for that: NVEnc works only with Nvidia, QSVEnc only with Intel. Some filters might be CUDA-based, others OpenCL-based. CUDA is specific to Nvidia, but Nvidia cards can do OpenCL as well. You might find better info and benchmarks on the MEP forum.
I dislike auto-anything. I prefer manual adjustments. Just my opinion.
AVS Autoadjust + HDRAGC, then MVD treatment including Gamma HDR (5)
AVS Smoothlevels “pc2tv”, then MVD treatment including Gamma HDR (20)
=> Here Magix's Gamma HDR used alone seems to preserve the colors and overall “crispness” of the picture better than in conjunction with HDRAGC.
AVS Autoadjust + HDRAGC, then MVD treatment including Gamma HDR (10)
AVS Smoothlevels “pc2tv”, then MVD treatment including Gamma HDR (20)
=> But here Gamma HDR is not able to recover the shadows satisfactorily without generating an unnatural look.
The native footage looks like this, just for laughs (there are segments that are even worse than this):
Actually it's a long-overdue project that I resumed recently, for which I had already requested some help here more than a year ago, but I'd had little new insight on that particular aspect (how to make that awfully crushed/blown footage look halfway decent?). Then I wanted to include still pictures (of my deceased grandmother), but the ones I got at the time were low resolution and low quality, barely usable, which made the whole thing quite depressing (since then I have received actual photographs which I could scan myself). And then I had to move to a new apartment, and then I had many new issues to deal with... plus I got the impression that nobody cared about it anymore... Yet I have to finish it, no matter what.
It sounds like you've played around enough to know what you like, so just pick what you think looks best and is within legal limits for the final thing. There is more than one way to do things.
My point is there should be a good reason to use a lossless codec. Given that most editors don't even treat them as lossless (colorspace conversions), will there be a difference in the end result? For some source material, definitely. Certainly things like illegal levels and clipping are important to address – but in terms of lossless compression, I think it's overkill for most home consumer projects. Just my opinion.
In this particular case, you'd say it would have been more important to systematically correct the levels (on pretty much all the source videos, apparently) than to try to keep the footage “mathematically lossless” through the intermediate conversions? If those levels are illegal, why were they recorded as such by the camera?
Some lossless codecs might be faster for encoding, but it depends on which ones you're talking about and with which settings – there are pros and cons to all of them. Another consideration is I/O throughput vs. bandwidth, which can become a bottleneck at higher resolutions when using mathematically lossless codecs (maybe not so much for 720p). Lossless and near-lossless codecs are typically judged on encoding speed, decoding speed, compression ratio, compatibility and handling (e.g. colorspace handling in the host application; cross-platform support – does it work on Linux? Mac?), redundancy (e.g. a secondary level of CRC checking?), stability, and price.
And so, which codec do you generally use for intermediate conversions?
There would be source YUV => RGB in Avisynth; then, assuming the editor works in RGB, filters in RGB, etc., and an export in RGB – so far that's only one conversion. But usually you have one more conversion back to YUV for the end distribution format, usually 4:2:0.
How do you know it works in RGB only? Did you do some tests, or did someone confirm it somewhere?
But was that long GOP or I-frame? It's actually very stable in Premiere (and Vegas) in I-frame configuration. Long GOP can cause problems, especially if you used the default setting (a keyframe interval of 250).
Lossless definitely, but for most projects Cineform/ProRes too. But I guess it depends on your expectations and what you are doing exactly. It's just my personal opinion – you're allowed to have your own opinion too. If this is a priceless, one-of-a-kind family moment, I suppose you could convince me to use a lossless workflow (but that would mean not using MEP if it works in RGB only, and assuming you didn't need RGB for other manipulations).
So far this is the only editing software I'm sufficiently familiar with to work on a complex project with enough confidence and efficiency. I know it has some quirks and bugs, but it also has strong points (from what I've read, the image stabilizer in Vegas is a poor performer). When I was looking for a good video editing software – intuitive enough to start with, and with enough potential to try more advanced things later on – I read quite a few reviews, and it seemed like a solid choice. But apparently most non-linear editing software works in RGB internally, at least the midrange ones.
No, that's too vague a question – it's going to depend on specific details, which applications and which workflows.
For example, Vegas is heavily biased towards OpenCL, which favors AMD cards, whereas Premiere is biased towards Nvidia and CUDA performance. If you were using GPU encoding, you'd want a card appropriate for that: NVEnc works only with Nvidia, QSVEnc only with Intel. Some filters might be CUDA-based, others OpenCL-based. CUDA is specific to Nvidia, but Nvidia cards can do OpenCL as well. You might find better info and benchmarks on the MEP forum.
Well, just looking at those recent screenshots: clearly some are better, some are clearly worse, right? You're definitely making improvements, not making it worse, by anybody's book – so that's a "win".
If you want to be objective, you can use scopes (waveform, histogram, vectorscope, etc.) along with these manipulations to help guide you – not sure if MVD has them.
Some shots will be difficult to adjust, but that's the nature of home video. There's also only so much you can do with home video and 8-bit compression, so you have to manage your expectations as well. You can get more advanced with secondary color correction, masks, power windows/tracking in Resolve – it depends how far you want to go and how much time you want to spend learning. Resolve has a free version now, and it's very powerful for color work; it has been the gold standard in Hollywood for many years. But it can be challenging and time-consuming to match scenes for consistency and balance, especially when lighting and sets are not controlled, as in the majority of home video. Professional colorists absolutely hate home video, because the source footage is so difficult to adjust compared to what they are used to (camera raw footage, high bit depths, professionally shot and lit, or CG). With raw footage you have 10x more freedom and control; you're not limited by crappy compression, noise or bad lighting.
My point is that there should be a good reason not to use a lossless codec... :^p The only inconvenience is that it requires a lot of available storage space – otherwise, how does it make things more difficult?
The other inconvenience is editing speed. Editing codecs are many times faster/smoother for editing. When you do large projects at higher resolutions (UHD), it becomes a pain to edit with lossless codecs. But you can use low-res / low-quality proxy workflows, then swap.
And if you're ending up using RGB in the editor anyway, then the "lossless codecs are not treated as YUV" argument doesn't matter for you either.
In this particular case, you'd say it would have been more important to systematically correct the levels (on pretty much all the source videos, apparently) than to try to keep the footage “mathematically lossless” through the intermediate conversions? If those levels are illegal, why were they recorded as such by the camera?
You will not be able to tell the difference in the end result between a lossless intermediate and one that used something like ProRes or Cineform on this type of footage. That's why those editing codecs are called "near lossless". But using a lossless workflow is "best practice" here, and I agree: for priceless family footage, I would probably do the same. (But that would usually mean an actual YUV workflow, unless you needed some RGB filters.)
But you will notice things like clipped brights from a bad conversion – that's more damage than the difference between a lossless codec and a near-lossless codec.
RE: "illegal" levels recorded by the camera
Ideally you want usable illegal levels for acquisition – ideally you want much more than that. Only the consumer END delivery format (i.e. after editing and processing) is 8-bit 16-235 (although 10-bit is becoming more common). Ideally you want more information, oversampling, higher bit depths, more dynamic range than you can use or see. It's better to have more data than less data. It's all downhill from the sensor: the processing, and eventually the degradation to the consumer recording format, throws away a lot of the data. More data gives you flexibility to do manipulations in post.
The majority of 8-bit consumer cameras actually record 16-255 to the media, and the majority of those have "usable" data in the 235-255 range, so you have to "rescue" those overbrights. Actually, 10-bit video is becoming more mainstream now, even on consumer devices; there you have 64-940 "slots" for the Y' legal range (16-235 scaled by 4). That's a hell of a lot more accuracy and range of expression – you have a lot more "shades" to express different details.
Most cameras have a sensor that captures more information, which eventually gets debayered to RGB at higher bit depths; it's the recording format that is subsampled to 4:2:0 and reduced to 8-bit, with data thrown away by crappy compression. But that's the answer: you're only looking at the consumer recording format on the media. Upstream there was actually a lot more data at a higher bit depth – it's the consumer recording format that dictates this degradation. Many people "hack" their camera (depending on model and firmware) to get at the better data higher upstream (less compression, maybe better chroma subsampling, sometimes higher bit depths – and some models are even able to get the actual raw sensor data).
However, I was surprised to find out that exporting the same sequence with MVD's internal AVC encoder was actually way faster than with either of those. Could that be related to what you said about I/O throughput and bandwidth? Or is this encoder really sloppy, with ultra-fast settings?
After all, there has to be a reason why it's recommended to do the final encode with x264 rather than with the AVC encoder provided by NLE software – or do you consider that overkill too?
But for the lossless vs. near-lossless debate – you're not going to be able to tell the difference on the final result, even on single frames zoomed in. Sure, if you use a low-quality, low-bitrate intermediate you're going to suffer... but nobody is going to advise that.
And so, which codec do you generally use for intermediate conversions?
I tend to use Premiere for video editing, so that means a YUV workflow and YUV filters. "Lossless YUV codecs" are not lossless there, so if assets are YUV, I try to preserve YUV.
I'm not sure, but I've read repeatedly that nearly all NLE software imports everything in RGB colorspace.
But some assets are not, for certain types of projects. For example, in smart-rendered projects even Vegas can pass through things like DV-AVI and XDCAM. Some "lower-cost" editors like PowerDirector can too, with some AVCHD and MPEG-2 formats. Smart render means complete passthrough for uninterrupted GOP segments, so on a cuts-only project only the GOPs that are cut into (only those frames in the GOP) are re-encoded. It's like "direct stream copy" in VDub: only sections that are cut or have filters need to be re-encoded. For I-frame formats like DV, that would be individual frames; for long-GOP formats, it would be only the affected GOP (~15-24 frames around a cut site). But if you apply an RGB filter to the entire video, then the entire video needs to be re-encoded. Premiere has YUV filters, so those sections might never incur an RGB conversion.
But unless I'm mistaken, VirtualDub can't export to AVC natively, so wouldn't it be more of a hassle in a case like this?
But for you, with an RGB workflow, yes, there is little benefit. You might get Rec709 treatment instead of 601 for the "lossless" YUV codec, but if you're using a lossless RGB intermediate anyway, it doesn't matter.
The benefit is for those people who can use YUV workflows (e.g. Premiere), since AVC in 8-bit or 10-bit 4:2:0 or 4:2:2 will be treated as YUV; or where the editor treats "lossless" YUV codecs differently (e.g. Vegas, as computer RGB). And for both, when you use low quantizers or CRF values, the quality is several orders higher than even Cineform or ProRes. To put things into perspective, ProRes HQ 422 is the de facto standard for high-quality professional workflows – it's what retail Blu-rays are typically encoded from.
Well, just looking at those recent screenshots: clearly some are better, some are clearly worse, right? You're definitely making improvements, not making it worse, by anybody's book – so that's a "win".
If you want to be objective, you can use scopes (waveform, histogram, vectorscope, etc.) along with these manipulations to help guide you – not sure if MVD has them.
Some shots will be difficult to adjust, but that's the nature of home video. There's also only so much you can do with home video and 8-bit compression, so you have to manage your expectations as well.
You said you didn't like automatic filters, but how would you have proceeded to correct that kind of footage, without help from something like HDRAGC?
You can get more advanced with secondary color correction, masks, power windows/tracking in Resolve – it depends how far you want to go and how much time you want to spend learning. Resolve has a free version now, and it's very powerful for color work; it has been the gold standard in Hollywood for many years. But it can be challenging and time-consuming to match scenes for consistency and balance, especially when lighting and sets are not controlled, as in the majority of home video. Professional colorists absolutely hate home video, because the source footage is so difficult to adjust compared to what they are used to (camera raw footage, high bit depths, professionally shot and lit, or CG). With raw footage you have 10x more freedom and control; you're not limited by crappy compression, noise or bad lighting.
I tried to fiddle with the luma curve mainly: it's relatively easy to get a dramatic improvement very quickly with the help of the histogram (that part at least is way more intuitive and practical than adjusting contrast or gamma values without knowing exactly what they do and how those adjustments interact with each other). Beyond that, indeed, it would require a lot of time just to get acquainted with the notions you mentioned, and I've spent way too much on this already.
And there are some frustrating aspects: AVI export formats are limited (only Cineform and uncompressed are available – it can't use the lossless encoders installed on the system – although there's a tremendously rich choice of uncompressed 10-bit+ formats, of no use here); exporting to MP4 / H.264 gives an error with no explanation (“Recording failed with error: failed to encode video frame”); and apparently the only available framerates are 23.976, 24, and “30 (3:2)”, while my footage is 25fps and the finished movie will be 29.970 (because the first half was shot at that rate – I was advised to export at 59.94 for smoother playback of the 25fps part, but I don't have that option in MVD).
I still tried to export the longest of those videos (26m54s) to uncompressed 8-bit, after a quick-and-dirty treatment based on a single frame which I thought would be representative of the worst-looking parts. I got a 70GB file (MediaInfo recognizes it as “HDYC”), with muted audio (don't know why – the audio is silent within the timeline too), but a correct 25fps framerate (when choosing “Individual clips” rather than “Single clip” the framerate option disappears, so it probably keeps the native value). I also tried exporting in MOV / MPEG4 Video with the “Best” quality setting: it works, but the output has a constant bitrate around 5000kbps, less than the native files (~8000kbps), so it's probably not enough to produce a visually transparent output.
Then, checking the whole video, some parts are better looking than their HDRAGC counterparts – better colors, more natural contrast it would seem – and others are worse; but that's possibly because I left the “Data levels” option on “Auto”, and apparently the export was made in full range (for the MOV export I chose “Video” and it looks better). I think that I'm going to export those roughly treated files, and mix them with the Avisynth-treated ones, by checking them both section by section and keeping what looks best.
But that YUV 4:2:2 format is not recognized by MVD – oh well, one more SNAFU... How can I export a clip treated by Resolve into a format recognized by MVD? Or do I have to convert it yet again to RGB? In this particular case, is the YUV 4:2:0 to YUV 4:2:2 conversion lossless, or does it add yet another lossy layer? Also, there's quite conspicuous banding on the exported footage – is that normal after such a treatment, and can it be corrected / mitigated somehow? (Levels and SmoothLevels in Avisynth have “dithering” options to counteract banding.)
native frame
Resolve luma curve treatment
Avisynth treatment (Autoadjust + HDRAGC)
Resolve treatment, just the preview panel
Avisynth treatment, preview panel cut and resized to the same size as the Resolve one (couldn't get the full size fully displayed within the software)
The treatments are very different. The Resolve one has more vivid colors but otherwise feels dull and unnatural – faces are kind of waxy, reflections lack vibrancy, the walls lack shadows; I can't quite put my finger on what goes wrong technically; maybe the curve correction was too heavy-handed. The Avisynth one has more contrast, more highlights in the window area but also more details; the colors are less vivid but probably more realistic. In fact, it may not be worth the trouble to use the Resolve treatment as it is...
My previous comment said "typical home video". You know – the trip to the mall, my random cat video, "oh, look how blue the sky is today"... the sorts of videos that pollute YouTube and various sites.
The other inconvenience is editing speed. Editing codecs are many times faster/smoother for editing. When you do large projects at higher resolutions (UHD), it becomes a pain to edit with lossless codecs. But you can use low-res / low-quality proxy workflows, then swap.
In your case, since you are working in RGB, it's important not to clip anything right off the bat. That's my main point. Where/how you do the manipulations isn't that important – the end result is.
After the HDRAGC treatment, some values are outside the 16-235 range on some frames – should I correct the levels a second time after that filter? (I tried, and it looked worse, but I used the full range: I thought black had to be 0 to look truly black.)
But using a lossless workflow is "best practice" here, and I agree: for priceless family footage, I would probably do the same. (But that would usually mean an actual YUV workflow, unless you needed some RGB filters.)
Not sure – I wouldn't have expected that result either. But it's likely using a licensed MainConcept AVC encoder (99.9% of AVC exports from Windows NLEs do).
But some assets are not, for certain types of projects. For example, in smart-rendered projects even Vegas can pass through things like DV-AVI and XDCAM. Some "lower-cost" editors like PowerDirector can too, with some AVCHD and MPEG-2 formats. Smart render means complete passthrough for uninterrupted GOP segments, so on a cuts-only project only the GOPs that are cut into (only those frames in the GOP) are re-encoded. It's like "direct stream copy" in VDub: only sections that are cut or have filters need to be re-encoded. For I-frame formats like DV, that would be individual frames; for long-GOP formats, it would be only the affected GOP (~15-24 frames around a cut site). But if you apply an RGB filter to the entire video, then the entire video needs to be re-encoded. Premiere has YUV filters, so those sections might never incur an RGB conversion.
I'm not sure if this could be related to another issue I had recently: I wanted to export a very simple edit made from one MP4 file, with only a few cross-fades and no treatment beyond that, using the “Smart copy” option (normally it should have worked as you said, re-encoding only the cross-faded segments), but the option was disabled / greyed out. Someone complained there about the same issue with a higher-end Magix product.
In a batch file, yes, but technically not "one step", because the muxing has to occur after the video and audio encoding. They are actually sequential steps, but everything can be automated in a batch file, so that to the end user it appears to be a "single step".
But for you, with an RGB workflow, yes, there is little benefit. You might get Rec709 treatment instead of 601 for the "lossless" YUV codec, but if you're using a lossless RGB intermediate anyway, it doesn't matter.
What would be the preferred (lossless) format for exporting / rendering the finished movie, to ensure that nothing goes wrong with regard to colorspace conversion? Should I again export in RGB and then convert to YUV with Avisynth and matrix="Rec709"?
Thank you again for the time and patience it takes to provide such thorough replies.
For HDRAGC, you can use the shift parameter, or adjust beforehand with Levels (some settings, coring=false).
Is it normal for footage with valid 16-235 levels to look comparatively washed out, or am I doing something wrong?
If you want to be objective, you can use scopes (waveform, histogram, vectorscope, etc.) along with these manipulations to help guide you – not sure if MVD has them.
You said you didn't like automatic filters, but how would you have proceeded to correct that kind of footage, without help from something like HDRAGC?
For Resolve – there are a gazillion tutorials for Resolve on YouTube, for different experience levels, including some complete series for beginners.
There are different variations on the workflow: some use physical files like ProRes, others use linking through XML/AAF/EDL. I doubt your editor will be able to use any of the latter methods.
How can I export a clip treated by Resolve into a format recognized by MVD?
I don't have MVD, so I don't know. v210 (10-bit 4:2:2) should work, as should Cineform.
I think that I'm going to export those roughly treated files, and mix them with the Avisynth-treated ones, by checking them both section by section and keeping what looks best.
Certain things people will agree on 100%, such as "don't clip overbrights" etc., but for other things you can ask 10 people and get 10 different answers.
In this particular case, is the YUV 4:2:0 to YUV 4:2:2 conversion lossless, or does it add yet another lossy layer?
Also, there's quite conspicuous banding on the exported footage – is that normal after such a treatment, and can it be corrected / mitigated somehow? (Levels and SmoothLevels in Avisynth have “dithering” options to counteract banding.)
The other inconvenience is editing speed. Editing codecs are many times faster/smoother for editing. When you do large projects at higher resolutions (UHD), it becomes a pain to edit with lossless codecs. But you can use low-res / low-quality proxy workflows, then swap.
So, what is the preferred way of properly setting the levels to the valid range, in Avisynth or whatever else?
After the HDRAGC treatment, some values are outside the 16-235 range on some frames – should I correct the levels a second time after that filter? (I tried, and it looked worse, but I used the full range: I thought black had to be 0 to look truly black.)
For HDRAGC, use the shift parameter, or Levels beforehand.
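For example (a sketch only – the Levels values are a starting point, and the final Limiter hard-clips anything still out of range, so use it only for small residual overshoots):
Code:
Levels(0, 1.0, 255, 16, 235, coring=false) # pull the levels legal before HDRAGC
HDRAGC(coef_gain=1.75, max_gain=6.00, mode=2)
Limiter(min_luma=16, max_luma=235, min_chroma=16, max_chroma=240) # clamp residual over/undershoot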
Well, I needed an editing software, and you mentioned Premiere as the only prominent one that can deal with YUV directly. But isn't Premiere “overkill” for that kind of stuff?
But you're not going to see massive deterioration from a single RGB conversion on this type of asset, assuming you've adjusted for clipping from the conversion beforehand.
What do you call “assets” here?
I'm not sure if this could be related to another issue I had recently: I wanted to export a very simple edit made from one MP4 file, with only a few cross-fades and no treatment beyond that, using the “Smart copy” option (normally it should have worked as you said, re-encoding only the cross-faded segments), but the option was disabled / greyed out. Someone complained there about the same issue with a higher-end Magix product.
As for smart render for long-GOP formats – yes, it's known to be "buggy", depending on many factors, and not just in Magix.
In a batch file, yes, but technically not "one step", because the muxing has to occur after the video and audio encoding. They are actually sequential steps, but everything can be automated in a batch file, so that to the end user it appears to be a "single step".
But in a batch file, the temp files can be processed (or deleted) automatically by the script. The user doesn't have to "see" anything – basically just double-click.
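A minimal sketch of such a batch, assuming ffmpeg, qaac and MP4Box are on the PATH (all paths and settings here are placeholders):
Code:
:: encode video and audio separately from the same script, then mux and clean up
ffmpeg -i input.avs -an -c:v libx264 -crf 18 video.264
ffmpeg -i input.avs -vn -c:a pcm_s16le audio.wav
qaac audio.wav -o audio.m4a
MP4Box -add video.264 -add audio.m4a -new output.mp4
del video.264 audio.wav audio.m4a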
What would be the preferred (lossless) format for exporting / rendering the finished movie, to ensure that nothing goes wrong with regard to colorspace conversion? Should I again export in RGB and then convert to YUV with Avisynth and matrix="Rec709"?
If you include things like known color bars, test patterns, etc., in the same format as the original assets, it can help diagnose/identify potential problems in the workflow too.