VideoHelp Forum

  1. I'm going to capture a Hi8 tape with the huffyuv codec in AmarecTV.

    Does Adobe Premiere Pro have problems or quality losses when editing video files captured with the huffyuv (x32) codec?

    (I'm using Windows 10.)

    Which lossless codec is best for editing in Adobe Premiere Pro?
    (I've heard of MagicYUV, Ut Video, and Lagarith besides huffyuv.)
  2. The problem is that most "lossless" YUV codecs get converted to RGB rather than treated as YUV (so they are not "lossless" in PP; levels can clip during the limited-range YUV to full-range RGB conversion), and aspect ratio info isn't conveyed automatically (AR is easy to fix by group-selecting and interpreting the files).

    For 8-bit 4:2:2, uncompressed UYVY is treated as lossless YUV, and for 10-bit 4:2:2 uncompressed, v210 is, in all versions of PP. You need lots of fast HDD or SSD space for uncompressed, but SD isn't too bad compared to HD/UHD.

    Also, some PP point releases treat lossless x264 as YUV (truly lossless).
  3. Originally Posted by poisondeathray View Post
    I googled and found that 10-bit doesn't mean anything for Hi8,
    and, as you said, x264 is only treated as lossless in some releases,

    so should I use uncompressed UYVY 4:2:2 8-bit?

    I don't know if my understanding is correct.

    The first workflow I thought of is as follows:

    lossless capture (huffyuv or similar) -> edit in PP (in lossless or ProRes 422 HQ) -> export to H.264

    So in total, one or two encoding passes.

    I wonder if this workflow is right.

    What workflow do people usually follow when capturing tapes and editing in Adobe PP?
  4. Why do you need "lossless"?

    Is something like ProRes HQ, or "near lossless", "good enough" for what you are doing?


    People typically use uncompressed UYVY in Adobe if they need true lossless at 8-bit.

    If you import huffyuv or those others, you will get RGB treatment and clipping (if your capture has Y values beyond 16-235, or CbCr values beyond 16-240). Hi8 isn't going to be great quality, and if you make sure to adjust the capture to legal levels, it can work ok.

    Otherwise, near-lossless intermediates like Cineform, ProRes, and DNxHR/DNxHD are typically used. They take up less space and editing performance is very fast (fast decoding/scrubbing).
  5. Originally Posted by poisondeathray View Post
    I'm making a music video.

    As far as I know, if you encode several times, the video gets worse.

    I also understand that one more lossy encode happens when uploading to YouTube.

    I've tried not to make it worse, because Hi8 is already lo-fi video.

    However, if the quality loss from 2-3 encoding passes is negligible, that's fine.

    So it's not worth editing with uncompressed UYVY?

    If so, in conclusion:

    huffyuv (capture) -> ProRes 422 HQ (edit) -> H.264 (upload)

    Is this workflow correct?
  6. The biggest quality loss, by far, will be YouTube - because it re-encodes at a low bitrate and not in an ideal fashion (distributed, in many segments).

    If your end goal is YouTube, it's probably not worth using lossless. Near lossless such as ProRes HQ is more than good enough; you won't be able to tell the difference on YouTube.

    If your setup can capture to ProRes directly, you can save some time.
  7. Originally Posted by poisondeathray View Post
    Thanks!!!
  8. Originally Posted by poisondeathray View Post

    Apart from Adobe, it seems impossible to encode to the ProRes codec on Windows.
    (I've heard ffmpeg can do it, but that it isn't an official encoder, so there may be problems when playing or editing.)

    So I think I need to edit with Cineform, DNxHD, or huffyuv,
    but I have a few more questions.

    Let's assume the editing is done with huffyuv video.

    What exactly does it mean to go beyond the Y value or the CbCr values?
    Isn't that YUV's color space?
    I wonder how an image captured as YUV can exceed the YUV color space.

    Also, what kinds of problems occur when the range is changed to 0-255 during the RGB treatment?

    And if level clipping occurs, what does it result in?

    There is no difference between playing the huffyuv capture in Windows Media Player and playing it in Adobe PP (none that my eyes can see).


    The huffyuv codec has two settings:

    Always suggest RGB format for output.
    Enable RGBA (RGB with Alpha) compression.

    Are these two options related to what you said?



    About the aspect ratio information: I created a sequence from the captured clip in Adobe PP, and when I checked the sequence settings, it seems to be 4:3.
    (group selecting and interpreting files <- does this mean interpreting footage in the project clip settings?)



    Does my workflow capture at the legal levels you are talking about?

    Here is my workflow:
    Hi8 tape -> D8 camcorder (TBC+DNR) -> S-Video -> I-O DATA GV-USB2 -> AmarecTV huffyuv codec

    mini DV tape -> mini DV camcorder -> IEEE 1394 cable -> PCIe capture card -> WinDV
  9. Originally Posted by jsj2251 View Post

    Apart from Adobe, it seems impossible to encode to the ProRes codec on Windows.
    (I've heard ffmpeg can do it, but that it isn't an official encoder, so there may be problems when playing or editing.)
    https://forum.videohelp.com/threads/405645-editing-in-NLE-after-encoding-Prores-422-HQ...-the-Window-10

    If you don't want to take the chance: other than Adobe, I think newer versions of Vegas also produce certified streams (EDIT: actually I don't see them on the list). There are also other, less common Windows professional programs like Scratch, Nucoda, Phoenix, and a few others.

    So I think I need to edit with Cineform, DNxHD, or huffyuv,
    FFmpeg ProRes should work ok in 2022; many people use it every day.




    Let's assume the editing is done with huffyuv video.

    What exactly does it mean to go beyond the Y value or the CbCr values?
    Isn't that YUV's color space?
    I wonder how an image captured as YUV can exceed the YUV color space.
    Yes, "YUV" and "YCbCr" are often used interchangeably (YUV is easier to write), but technically YCbCr is the correct name for the digitized values:
    Y =~ Y
    Cb =~ U
    Cr =~ V

    Also, what kinds of problems occur when the range is changed to 0-255 during the RGB treatment?

    And if level clipping occurs, what does it result in?

    There is no difference between playing the huffyuv capture in Windows Media Player and playing it in Adobe PP (none that my eyes can see).
    The clipping is the most visible problem. Let's say you have an outdoor shot. Most consumer cameras don't use an ND filter or control exposure properly, so you will have overbrights. Often you can "rescue" the overbrights in YUV - e.g. instead of clouds that are blown out to pure "white", you can bring back some detail such as grey. That's not possible with huffyuv in programs that treat huffyuv as RGB. What you "see" in the RGB preview is not necessarily all the data. Only Y 16-235 gets "mapped" to RGB 0-255, both for the preview and for the conversion used on the timeline (because huffyuv gets converted to RGB, not treated as YUV). The rest of the data gets discarded (so it's not "lossless"). If you carefully control your capture settings so that everything is brought within Y 16-235, then it's less of a problem, but it's still not lossless - you incur an 8-bit YUV to RGB conversion. That's less of an issue with low-quality consumer tape input, but if you're choosing "lossless", you might as well do it properly. Otherwise choose near lossless and there will be a lot less hassle.

    Another potential issue is incorrect chroma upsampling if the program does a YUV to RGB conversion. Instead of upsampling in an interlace-aware fashion, it's done progressively, leading to "notching" color artifacts.
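    As a rough sketch of the order-of-operations point above (hypothetical sample values; the legalizing gain and the 16 -> 0 / 235 -> 255 RGB stretch are the standard limited-to-full-range formulas):

```python
import numpy as np

# Hypothetical 8-bit luma samples from a capture: legal values plus
# "overbrights" (Y > 235), as on a blown-out outdoor shot.
y = np.array([16, 120, 235, 240, 250, 255], dtype=float)

# Path 1: file treated as YUV -- pull the overbrights back into legal range
# *before* any RGB conversion (here: compress 16-255 into 16-235).
y_rescued = np.clip(np.round(16 + (y - 16) * 219 / 239), 0, 255)

# Path 2: file mishandled as RGB -- limited-range Y is stretched to full-range
# RGB first (16 -> 0, 235 -> 255), clipping everything above 235.
rgb = np.clip(np.round((y - 16) * 255 / 219), 0, 255)

print(y_rescued)  # 240, 250, 255 survive as three distinct shades
print(rgb)        # 240, 250, 255 all collapse to 255 -- the detail is gone
```

    The YUV path keeps the overbright detail adjustable because the gain is applied before clipping can happen; once the RGB stretch has run, the values above 235 are identical and no filter can separate them again.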



    The HuffYUV codec has two settings:

    Always suggest RGB format for output.
    Enable RGBA (RGB with Alpha) compression.

    Are these two functions related to what you said?
    There are several huffyuv variants, but usually huffyuv needs to be set to YUV. Lagarith is different: if the chain is set up correctly, then even if you select RGB, it encodes in the actual color space of the input data (so if YUV data is sent, it automatically encodes as YUV).

    Alpha channel is for transparency, not what you are doing.


    About the aspect ratio information: I created a sequence from the captured clip in Adobe PP, and when I checked the sequence settings, it seems to be 4:3.
    (group selecting and interpreting files <- does this mean interpreting footage in the project clip settings?)
    Yes: group select => right click => Modify => Interpret Footage. You can change the characteristics of the clips if Adobe "reads" them incorrectly.


    Does my workflow capture at the legal levels you are talking about?

    Here is my workflow:
    Hi8 tape -> D8 camcorder (TBC+DNR) -> S-Video -> I-O DATA GV-USB2 -> AmarecTV huffyuv codec

    mini DV tape -> mini DV camcorder -> IEEE 1394 cable -> PCIe capture card -> WinDV
    I don't know about the 1st one; you'd have to check the levels in a waveform monitor in PP. If, on an outdoor shot, you get a hard line at 100 IRE (or you can set it to the digital scale 16-235, and a hard line at 235), it's clipping, and hopefully you can adjust your capture settings or use another format.

    DV should be ok because it's a digital copy, if your card works in Win10.
  10. [Attachment 64657]
    [Attachment 64658]
    Originally Posted by poisondeathray View Post
    Wow... thank you for your detailed answer.

    So, finally, putting your answers together:

    When a lossless codec like huffyuv or Lagarith is edited within Adobe PP, it inevitably results in loss.
    Even though codecs treated as "truly lossless" exist within Adobe PP,
    if the final goal is uploading to YouTube, it's a good choice to use an intermediate codec such as ProRes without pursuing lossless, right?

    Does the waveform monitor you mentioned mean the Lumetri scopes?

    (or you can set it to digital scale 16-235, and a hard line at 235) <- Does this mean the parade type should be set to YUV instead of RGB?
    If Adobe PP processes RGB, shouldn't I set it to RGB and check that?
    Is it because, when YUV gets the RGB treatment, values other than 16-235 are blanked?

    Lumetri scopes settings:

    preset : RGB waveform
    parade type : RGB
    waveform type : RGB
    color space : Auto (should I set it to Rec.709?)
    brightness : standard


    Footage profiles:

    DV / avi / 4:3 / 720x480 / PAR 0.9091 / 59.94i / WinDV (mini DV)
    huffyuv / avi / 4:3 / 720x480 / PAR 0.9091 / 59.94p / using yadif x2 in AmarecTV (Hi8)

    I will attach the image files of the Lumetri scopes that I checked with the settings above.
    It's an outdoor shoot in the morning, filmed in a very slightly shaded spot.

    Is that the hard line you mentioned?
    It is observed continuously in areas where the light is stronger than the surroundings.

    And if that's level clipping, how should I adjust the capture settings?
  11. Originally Posted by jsj2251 View Post

    When a lossless codec like huffyuv or Lagarith is edited within Adobe PP, it inevitably results in loss.
    Even though codecs treated as "truly lossless" exist within Adobe PP,
    if the final goal is uploading to YouTube, it's a good choice to use an intermediate codec such as ProRes without pursuing lossless, right?
    It just means YT is the source of the largest quality loss. It doesn't matter if you use lossless; it's not going to make much of a difference. Even ProRes will probably be overkill. The limiting factor is YT.


    Does the waveform monitor you mentioned mean lumetri scopes?
    Yes, Lumetri scopes. You should watch some tutorials on YT; there are many explaining what each scope is used for and what information each displays.


    (or you can set it to digital scale 16-235, and a hard line at 235) <- Does this mean the parade type should be set to YUV instead of RGB?
    If Adobe PP processes RGB, shouldn't I set it to RGB and check that?
    Is it because, when YUV gets the RGB treatment, values other than 16-235 are blanked?

    Lumetri scopes settings:

    preset : RGB waveform
    parade type : RGB
    waveform type : RGB
    color space : Auto (should I set it to Rec.709?)
    brightness : standard

    In general, a YUV asset should use the YUV scopes and RGB should use RGB. IRE are "analog" units, but you can think of it as everything being converted to a 0-100 scale, so it can be useful for some people in that regard - you don't have to worry about RGB, YUV, etc...

    If "clamp signal" is unchecked and you have the YC waveform set to 8-bit, it will display 16-235 digital units, which corresponds to 0-100 IRE on the left. That is legal range. Values above 100 IRE or 235 are called "overbrights". This happens in consumer footage all the time, and often there is usable data there, such as bright clouds in an outdoor shot. If it gets converted to RGB, it gets clipped at 235, so you cannot "rescue" the "overbrights" with any filters. Rarely do limited-range consumer cameras record usable data in the 0-15 range.

    Ideally you don't want to work in RGB at all if your sources are YUV. Almost all consumer video is YUV. The biggest problem is that limited-range YUV is converted to full-range RGB - that's the standard method of conversion to RGB (and preview) for most programs. So you lose 0-15 and 236-255 if the timeline is actually converting to RGB (not just for preview purposes). Most consumer cameras shooting in limited range do not have usable data in 0-15, but many have usable data in 236-255. That is why you want to avoid using huffyuv in Premiere, unless you ensure that you've legalized the range to 16-235 upstream, at capture, beforehand. That reduces most of the easily visible loss (it's still not lossless in Premiere, regardless).

    709 is usually for "HD", 601 for "SD"
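    As a sketch of the digital-code-to-IRE relationship mentioned above (assuming the standard linear mapping where Y=16 sits at 0 IRE and Y=235 at 100 IRE):

```python
def y_to_ire(y):
    """Map an 8-bit limited-range luma code to IRE (Y=16 -> 0 IRE, Y=235 -> 100 IRE)."""
    return (y - 16) * 100.0 / 219.0

print(y_to_ire(16))   # 0.0
print(y_to_ire(235))  # 100.0
print(y_to_ire(255))  # ~109: "overbrights" sit above 100 IRE
```

    This is why a hard line at 100 IRE and a hard line at Y 235 are the same thing on the two scale settings.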


    I will attach the image files of the Lumetri scopes that I checked with the settings above.
    It's an outdoor shoot in the morning, filmed in a very slightly shaded spot.

    Is that the hard line you mentioned?
    It is observed continuously in areas where the light is stronger than the surroundings.

    And if that's level clipping, how should I adjust the capture settings?

    In your example, yes, that's a hard line, but it's an RGB waveform, so it doesn't demonstrate the difference in Y (though it's likely completely blown out anyway, judging from the shot). I'm referring to Y 236 to 255. Premiere's RGB conversion of huffyuv will clip the Y range 236-255. By the time Premiere has converted to RGB, the data is already lost, so you don't see much difference in the preview image or waveform.

    To illustrate, this was a PAL Hi8 => DV example from another thread, converted to huffyuv 4:2:2 for comparison. The waveform shows data present in the DV video that is not shown in the preview, but discarded in the huffyuv conversion. In this shot, you could improve the cloud detail with filters later. But if you had used huffyuv, it would have discarded the values 236-255.
    [Attachment 64662: dv.jpg]
    [Attachment 64663: dv_to_huffyuv.jpg]
  12. Originally Posted by poisondeathray View Post
    Ideally you don't want to work in RGB at all if you sources are YUV.
    But I have to use PP/AE (or Final Cut).

    Almost all consumer video is YUV.
    The biggest problem is that limited-range YUV is converted to full-range RGB - that's the standard method of conversion to RGB (and preview) for most programs.
    So you lose 0-15 and 236-255 if the timeline is actually converting to RGB (not just for preview purposes).
    Q. RGB uses the range 0 to 255.
    Why do 0-15 and 236-255 disappear?


    Most consumer cameras shooting in limited range do not have usable data from 0-15.
    But many have usable data from 235-255.
    Q. I know the legal luma range of YUV is 16-235.
    If I use the YUV color space, why is data in the 236-255 range recorded at all?
    And why can't data in the 0-15 range be recorded?
    And are both the Y and UV (C) values recorded the same way (236-255)?

    Q. I'm using a Sony CCD-SC7 and a Samsung VM-A990.
    Both of these cameras are over 20 years old.
    Do they match the "most consumer cameras" you're talking about?


    That is why you want to avoid using huffyuv in Premiere,
    Q. If it's the same YUV, shouldn't the loss occur in the YUV->RGB process even with codecs other than huffyuv (other lossless and lossy codecs too)?

    Q. Then, in the DV image file you attached,
    when you look at the Lumetri scopes window in PP,
    why are data values in the 236-255 range recorded?
    (In the video file that I captured with the DV-AVI codec from mini DV,
    when viewed in the Lumetri scopes window in PP, the 236-255 range is blank.
    I can't see the data. DV-AVI 59.94i and H.264 MP4 59.94p are both the same. The same goes for Hi8's huffyuv.)

    Q. If I capture with DV or huffyuv, encode to ProRes, and load that into PP, is the 236-255 data preserved?

    Q. The codecs that are treated as "really lossless" within PP -
    what difference makes them count as "real lossless"?

    unless you ensure that you've legalized the range to 16-235 upstream, at capture, beforehand.
    That reduces most of the easily visible loss (it's still not lossless in Premiere, regardless)
    Q. How do I set the capture range upstream to 16-235? (In WinDV and AmarecTV, I can't find such a settings window.)
    And even if I set it up like that, wouldn't the data in the 0-15 and 236-255 ranges still not be recorded?

    Thank you for your answer.
  13. Originally Posted by jsj2251 View Post
    Q. RGB uses the range 0 to 255.
    Why do 0-15 and 236-255 disappear?
    8-bit limited-range YUV is 16<Y<235, which leaves some footroom/headroom (0...15 and 236...255) for accommodating "imperfections" like temporary over/undershoot due to filter ringing, for example. Cameras are also sometimes not very strict about staying within the limited range. Y (luma) data can therefore still be present in the 0...15 and 236...255 footroom/headroom ranges (as these are valid 8-bit numbers), sometimes called "ultra blacks" and "super whites" or similar.
    When converting limited-range YUV to full-range RGB, YUV(16,128,128) is mapped to RGB(0,0,0), and YUV(235,128,128) is mapped to RGB(255,255,255). Accordingly, Y (0...15) would be mapped or "stretched" to illegal negative RGB values, and Y (236...255) would be mapped to illegal RGB>255 (impossible for 8-bit). In practice these values will be clipped at RGB 0 and 255, hence all Y in the ranges (0...15) and (236...255) will be irreversibly forced (=clipped) to RGB 0 and 255 respectively and can't be recovered, hence they are lost.

    As long as one stays in the YUV color space, the Y (0...15) and Y (236...255) values are still available and can be processed normally as 8-bit numbers without clipping.

    Added:
    More precisely, take a look at the attached figure.
    The outer cube represents all possible 8-bit YUV (aka Y,Cb,Cr) triplets. The inner block is the RGB color cube in YUV space and represents all valid 8-bit RGB combinations. Black (BK), for example, is YUV(16,128,128) and White (W) is YUV(235,128,128). In RGB, the BK corner is RGB(0,0,0) and the W corner is RGB(255,255,255).
    When converting YUV to RGB, only those YUV triplets which are WITHIN the inner block are valid and will be correctly displayed on an RGB monitor (PC, TV). Hence only a fraction of the possible YUV triplets represent valid RGB values.
    Maybe this answers some of your other questions as well.
    More info here (and from other posts in this forum by much more knowledgeable members ...... just trying to find .......)
    https://www.intel.com/content/www/us/en/develop/documentation/ipp-dev-reference/top/vo...en?language=en
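    That "only a fraction" claim can be checked numerically. This is a rough sketch using the common approximate BT.601 limited-range conversion coefficients on a coarse grid (not an exact count):

```python
# Count, on a coarse grid, how many 8-bit YCbCr triplets convert to legal RGB
# using the common BT.601 limited-range conversion (approximate coefficients).
valid = total = 0
for y in range(0, 256, 4):
    for cb in range(0, 256, 4):
        for cr in range(0, 256, 4):
            r = 1.164 * (y - 16) + 1.596 * (cr - 128)
            g = 1.164 * (y - 16) - 0.813 * (cr - 128) - 0.391 * (cb - 128)
            b = 1.164 * (y - 16) + 2.018 * (cb - 128)
            total += 1
            valid += (0 <= r <= 255) and (0 <= g <= 255) and (0 <= b <= 255)

frac = valid / total
print(frac)  # roughly 0.15-0.17: most YCbCr triplets have no legal RGB equivalent
```

    In other words, the inner RGB block occupies only about a sixth of the outer YCbCr cube in the figure.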
    [Attachment 64696: Screenshot 2022-05-06 172025.png]

    Last edited by Sharc; 6th May 2022 at 11:35.
  14. Originally Posted by jsj2251 View Post
    Originally Posted by poisondeathray View Post
    Ideally you don't want to work in RGB at all if you sources are YUV.
    But I have to use PP/AE (or Final Cut).
    PP and FCP have YUV-capable timelines, if the assets are treated as YUV. The point is that PP (and many editors) treat YUV lossless codecs like huffyuv as RGB, and that is the problem.

    AE works in RGB only, but it can handle YUV as float RGB if the asset is decoded as YUV first (more on that below).

    Most consumer cameras shooting in limited range do not have usable data from 0-15.
    But many have usable data from 235-255.
    Q. I know the legal luma range of YUV is 16-235.
    If I use the YUV color space, why is data in the 236-255 range recorded at all?
    And why can't data in the 0-15 range be recorded?
    And are both the Y and UV (C) values recorded the same way (236-255)?
    8bit YUV has 0-255 code values, but the "legal" range is Y 16-235, UV 16-240

    Values >235 are recorded because the camera's raw sensor video is debayered and then processed at higher bit depths. The upstream digital signal processing (DSP) is usually 12-14 bits, even in cheap consumer cameras. It eventually gets down-converted for internal recording at 8-bit YUV. (If the camera engineers wanted to, they could limit the recording to clip at 235, but the majority of consumer cameras don't.)

    The black level is almost always Y=16 for older 8-bit consumer cameras. Almost all DV cameras record 16-255 with usable data in the 236-255 range - it's just the way it is. Many other consumer cameras recording to limited-range YUV record 16-255 too - MPEG2, HDV, AVCHD, etc... very common. All those native camera formats get treated as YUV in PP.



    Q. I'm using a Sony CCD-SC7 and a Samsung VM-A990.
    Both of these cameras are over 20 years old.
    Do they match the "most consumer cameras" you're talking about?
    I think the Samsung miniDV should; I'm not sure about the Sony Hi8.

    That is why you want to avoid using huffyuv in Premiere,
    Q. If it's the same YUV, shouldn't the loss occur in the YUV->RGB process even with codecs other than huffyuv (other lossless and lossy codecs too)?
    Other "lossless" YUV codecs get "mishandled" in PP too. If you use other programs like AviSynth, ffmpeg, or Shotcut, they are handled properly as YUV.

    But UYVY (uncompressed 8-bit 4:2:2 YUV, with a specific FourCC configuration) and other near-lossless codecs like ProRes, etc... do not get mishandled and converted to RGB. They are treated as YUV. Most types of AVC/H.264 and MPEG2 also get the YUV treatment.

    If you use an RGB filter, the loss can occur too. That is essentially what is happening in the RGB preview in my 2nd screenshot: there is data available that is not represented in the RGB preview. But the important concept is the order of operations. The YUV data of a YUV stream that is treated as YUV is accessible before you apply an RGB transform or filter, so you can make adjustments and recover that data. However, if a YUV codec is mishandled as RGB to begin with, you cannot access that data.

    In pure RGB programs like AE, you can use "32-bit float" mode and access all the data too - but only if the stream was handled as YUV in the first place. So a format like ProRes or DV would be ok in AE: they are decoded as YUV first, then converted to 32-bit float RGB internally. This is very high precision - even negative RGB values are allowed - and lossless RGB if done properly. But huffyuv in YUV mode would not be handled ok, because it's decoded as 8-bit RGB right off the bat; that first step is where all the damage is done.

    Other programs like Vegas use a different mapping for RGB - studio-range RGB. YUV 0-255 gets converted to RGB 0-255, so nothing gets clipped. The black-to-white level for "studio range RGB" is RGB 16-235, instead of the computer RGB (0-255) mapping that Adobe and 99% of other programs use.
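    A small sketch contrasting the two mappings (hypothetical luma samples; "computer" uses the usual 16-235 -> 0-255 stretch, "studio" is the identity mapping described above):

```python
import numpy as np

# Hypothetical 8-bit luma samples, including footroom (5) and headroom (250).
y = np.array([5, 16, 128, 235, 250], dtype=float)

# "Computer RGB" mapping (Adobe-style): Y 16-235 is stretched to RGB 0-255,
# so footroom/headroom values are clipped away.
computer = np.clip(np.round((y - 16) * 255 / 219), 0, 255)

# "Studio RGB" mapping (Vegas-style): Y 0-255 maps straight onto RGB 0-255,
# so nothing is clipped (black sits at RGB 16, white at RGB 235).
studio = y.copy()

print(computer)  # 5 and 250 get clipped to 0 and 255
print(studio)    # every original code value survives
```

    The trade-off is that studio-range RGB looks washed out on a normal computer display until levels are expanded, but no data is destroyed on the way in.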

    (In the video file that I captured with the DV-AVI codec from mini DV,
    when viewed in the Lumetri scopes window in PP, the 236-255 range is blank.
    I can't see the data. DV-AVI 59.94i and H.264 MP4 59.94p are both the same. The same goes for Hi8's huffyuv.)
    What do you mean by "blanked"? No values in the waveform, or in the preview? The RGB preview is only "looking" at Y 16-235, so you cannot "see" Y 236-255 in the preview (it looks the same in both my screenshots).

    Are you looking at the correct scope? (YC waveform). Native DV should definitely have Y 236-255 in PP. Huffyuv should definitely be clipped to Y 235, as in the screenshot I posted above. It could also be that your shot is blown out and has no data. These old cameras do not have much latitude - very few stops (they cannot record a range of bright and dark details at once)
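    If you want a second opinion outside PP on whether a capture really contains super-white, here's a toy Python sketch (it assumes you've dumped a raw 8-bit Y plane to a file, e.g. with ffmpeg's gray rawvideo output; the filename is hypothetical):

    ```python
    # Fraction of luma samples above Y=235 ("super-white") in a raw 8-bit plane.

    def superwhite_fraction(y_plane: bytes) -> float:
        hot = sum(1 for v in y_plane if v > 235)
        return hot / len(y_plane)

    # Toy data standing in for open("frame.gray", "rb").read()
    plane = bytes([16, 128, 235, 240, 250, 255])
    print(superwhite_fraction(plane))  # 0.5 -> half the samples exceed 235
    ```

    A blown-out shot can legitimately read 0 here; a slightly overexposed native DV frame usually won't.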

    For h264 - if your conversion to 59.94p was not done properly, your software might have caused an RGB conversion, which would have the same effect of clipping Y to 235. e.g. many vdub filters work in RGB.

    For MiniDV tape - WinDV will transfer a digital copy. DV is handled properly in almost all editors.


    Q. If I capture with DV or huffyuv, encode to prores, and bring it back into PP, is the 236-255 data preserved?
    If you capture a DV camera properly, it's just a direct transfer of DV. There is no reason to use anything other than DV. If you were doing some other processing (e.g. maybe avisynth deinterlacing to 59.94), then prores would preserve that data. But if your filters work in RGB, they will clip the data too.

    If the Hi8 was captured as DV, there would be no reason to convert to anything else, since DV is treated properly in almost all editors. DV is already lossy; if you convert afterwards to prores, you lose more quality and increase the filesize. If you convert to huffyuv, you increase the filesize and it gets mishandled in PP. If you capture directly to huffyuv, it will get mishandled in PP too.


    Q. The codecs that are treated as "really lossless" within PP -
    what difference makes them "really lossless"?
    Really lossless means exactly that - no loss. If you measure "quality" with a metric against the original, the PSNR would be infinity

    For a DV camera, DV is truly lossless, because it's a digital copy and handled properly in PP

    For YUV, there aren't any lossless YUV codecs that get treated as YUV in PP, except uncompressed 8bit 4:2:2 as UYVY, or 10bit 4:2:2 as v210
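    As a toy illustration of "PSNR would be infinity" (1-D samples standing in for a frame; not a real quality tool):

    ```python
    import math

    def psnr(a, b, peak=255):
        # PSNR = 10*log10(peak^2 / MSE); identical inputs give MSE 0 -> infinity.
        mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
        if mse == 0:
            return math.inf  # truly lossless
        return 10 * math.log10(peak ** 2 / mse)

    frame = [16, 128, 235, 240]
    print(psnr(frame, frame))                          # inf: lossless copy
    print(psnr(frame, [min(v, 235) for v in frame]))   # finite: Y-clipped copy
    ```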



    unless you ensure that you've legalized the capture's range upstream beforehand to 16-235.
    That reduces most of the easily visible loss (it's still not lossless in Premiere, regardless)
    Q. How do I set the capture's upstream range to 16-235? (In WinDV and AmarecTV, the settings window is not visible.)
    And even if I set it up like that, isn't the data in the 0-15 and 236-255 sections simply not recorded?

    WinDV via firewire produces a direct transfer of DV, so the levels are a digital copy of the original video. DV works ok in PP and all NLEs. Check other DV videos with the YC waveform; they should have that 236-255 data

    Not sure about AmarecTV
    Quote Quote  
  15. Originally Posted by poisondeathray View Post
    Or other programs like vegas use a different mapping for RGB - studio range RGB. So YUV 0-255 gets converted to RGB 0-255 - so nothing gets clipped . The black to white level for "limited range RGB" is RGB 16-235, instead of computer RGB or RGB 0-255 like Adobe and 99% of other programs use
    Is this an intermediate mapping for editing only, meaning limited range YUV footage needs, after editing in vegas, a final full range RGB conversion? If left at RGB 16-235 and exported as YUV 16-235, the picture would look washed-out on a PC or TV, no?
    I am using Shotcut, which works in YUV on YUV footage, with the exception of the RGB filters (?), as I understood it.
    Last edited by Sharc; 6th May 2022 at 17:00.
    Quote Quote  
  16. Originally Posted by Sharc View Post
    Originally Posted by poisondeathray View Post
    Or other programs like vegas use a different mapping for RGB - studio range RGB. So YUV 0-255 gets converted to RGB 0-255 - so nothing gets clipped . The black to white level for "limited range RGB" is RGB 16-235, instead of computer RGB or RGB 0-255 like Adobe and 99% of other programs use
    Is this an intermediate mapping for editing only, meaning limited range YUV footage needs, after editing in vegas, a final full range RGB conversion? If left at RGB 16-235 and exported as YUV 16-235, the picture would look washed-out on a PC or TV, no?
    I am using Shotcut, which works in YUV on YUV footage, with the exception of the RGB filters (?), as I understood it.
    Vegas works in RGB only.

    It has presets to map studio range RGB to computer RGB and vice versa if you need to. It just works out of the box for native camera video (by default, studio RGB is used in older versions of vegas - almost all native camera formats get the studio RGB treatment, analogous to how native camera formats get the YUV treatment in PP); vegas automatically uses the reverse equation when exporting a YUV format, so if you started with a YUV src: 16-235 => vegas RGB 16-235 => export YUV 16-235. Moderate vegas users know about this and how different file types get treated differently in vegas. The confusion for most people is when you mix files that get different treatment, e.g. "lossless yuv codecs" like huffyuv get the computer range RGB treatment instead of the studio range RGB treatment - so they can clip in vegas too. One lossless codec, magicyuv, has a flag that can force vegas to treat it as studio RGB.

    But newer versions of vegas (18+) have the ability to work in either - computer RGB or studio RGB

    Yes, shotcut works in YUV, and if you use an RGB filter, a standard limited range YUV to full range RGB conversion is performed
    Last edited by poisondeathray; 6th May 2022 at 18:55.
    Quote Quote  
  17. ^^^^
    Thank you for the clarifications.
    Quote Quote  
  18. Originally Posted by poisondeathray View Post


    PP and FCP have a YUV capable timeline, if the assets are treated as YUV. The point is PP (and many editors) treat YUV lossless codecs like Huffyuv as RGB, and that is a problem

    But the important concept is the order of operations.
    The YUV data for a YUV stream that is treated as YUV is accessible before you apply a RGB transform or filter.
    So you can make adjustments and recover that data.
    However, if a YUV codec is mishandled as RGB to begin with, you cannot access that data.
    So, after this process, can huffyuv be handled as YUV normally in PP?
    And how do you do it?

    I attached YC waveform images of the DV and HuffYUV captures.

    For HuffYUV, only data values in 16-235 are observed.

    In the case of DV, did the values of 0-15 and 236-255 come out properly?
    I'm not sure it's right.
    [Attachment 64930 - Click to enlarge]
    [Attachment 64931 - Click to enlarge]


    I'm sorry for the late response.
    Something urgent came up all of a sudden.
    Quote Quote  
  19. Originally Posted by jsj2251 View Post
    So, after this process, can huffyuv handle YUV normally in PP?
    No

    Huffyuv gets clipped. DV works well and is treated as YUV. See post 11

    And how do you do it?
    For huffyuv, you don't use it in PP; you use something else. Or, if you must use huffyuv, ensure that before encoding to huffyuv you've adjusted the range to Y 16-235
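    The range adjustment itself is just a linear rescale; here's a toy Python sketch of the math (in practice you'd use a levels-type filter in avisynth or vdub before encoding to huffyuv):

    ```python
    def legalize(y: int) -> int:
        # Scale full-range Y 0-255 into legal Y 16-235. Scaling (not clipping)
        # compresses super-white into range instead of discarding it.
        return round(16 + y * 219 / 255)

    print(legalize(0), legalize(255))  # 16 235
    print(legalize(240))               # super-white pulled below 236
    ```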


    I attached YC waveform images of the DV and HuffYUV captures.

    For HuffYUV, only data values in 16-235 are observed.

    In the case of DV, did the values of 0-15 and 236-255 come out properly?
    I'm not sure it's right

    Your screenshots do not necessarily indicate anything; you need to choose a shot from DV that is slightly overexposed and compare the same shot as huffyuv. A completely overexposed (completely blown out) shot might not contain extra data, but slightly overexposed shots almost always do for DV

    If your video is already in the range Y 16-235, then PP's treatment of huffyuv does not lose that extra data, because you didn't have it to begin with
    Quote Quote  
  20. Originally Posted by poisondeathray View Post



    Your screenshots do not necessarily indicate anything; you need to choose a shot from DV that is slightly overexposed and compare the same shot as huffyuv. A completely overexposed (completely blown out) shot might not contain extra data, but slightly overexposed shots almost always do for DV
    Among the DV videos that I have,
    in the overexposed section, the Y value is confirmed to rise very slightly above 235.
    (For HuffYUV, the Y value does not exceed 235 in any section.)

    However, a hard line is also observed at 235 at the same time,

    but I can't see that hard line in the screenshot you posted.

    Is this because of what you call 'slightly overexposed' and 'completely overexposed'?

    I'll have to do the outdoor shoot again to be sure,

    but if the DV image is also clipped between 236 and 255,
    what is the cause?
    [Attachment 64985 - Click to enlarge]
    Quote Quote  
  21. Originally Posted by jsj2251 View Post
    In the overexposed section, the Y value is confirmed to rise very slightly above 235.
    (For HuffYUV, the Y value does not exceed 235 in any section.)

    However, a hard line is also observed at 235 at the same time.

    If the DV image is also clipped between 236 and 255,
    what is the cause?

    If that's the native DV video straight from the camera - not sure - it could be the camera or the camera settings. Also check other shots.

    Judging from the waveform, those are not usable highlights as recorded

    Most DV cameras record 16-255 with usable 236-255, but maybe your model did not


    Is this because of what you call 'slightly overexposed' and 'completely overexposed'?
    No; it should be a hard line at 255 (or 109 IRE), not 235.
    Quote Quote  