VideoHelp Forum




  1. I realize that there has been extensive discussion about the IRE problem in digitizing NTSC analogue sources through a digital camcorder (that is, analogue black is captured as dark gray by the camcorder, and the whole picture is correspondingly “washed out”).
    But I don’t recall seeing any specific advice on how to solve it by digital processing.
    The advanced posters all seem to have equipment for which this is not a problem, or solve it through some sort of expensive analogue video processor prior to digitization.
    My crude solution is that when I encode the AVI to MPEG in Procoder 2, I use the brightness filter and set it to –20.
    Does anyone else have a better solution, using either Vegas or Procoder filter settings?
    Thanks.
  2. You can do it with our Enosoft DV Processor - and in real time. For example, you can adjust while capturing or while sending the video back to a DV device. Or, of course, on an existing DV AVI file - in which case you can do it faster than real time.

    Simply use the built-in Proc Amp and subtract the required amount from the luma (luma offset = -48).
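
    This is not Enosoft's code, but a minimal Python sketch of what a proc-amp luma offset does when applied to decoded 8-bit Y samples (the -48 value above is used purely as an example, and the 0-255 clamping range is an assumption; the actual DV Processor works on the compressed stream, as explained later in this thread):

```python
# Sketch of a proc-amp-style luma offset on decoded 8-bit Y samples.
# Illustrative only -- this operates on decompressed pixels, not on
# the DV stream itself.

def apply_luma_offset(y_samples, offset):
    """Add a constant offset to every luma sample, clamping to 0..255."""
    return [max(0, min(255, y + offset)) for y in y_samples]

# The -48 offset quoted above is just an example value.
frame_y = [48, 120, 200, 255]
print(apply_luma_offset(frame_y, -48))  # [0, 72, 152, 207]
```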

    Using our software will guarantee the least generation loss and will keep information such as timecode, recording date, etc. intact (most software destroys it).
    John Miller
  3. Johnny,

    A friend of mine just tried Enosoft for this problem and was impressed by the results. Could you explain technically how the Enosoft process differs from simply reducing the brightness?
  4. Certainly.

    Mathematically, it is doing the same thing - but it's how we do it that's key.

    Usually, video signals are separated into two pieces - one that is the black-and-white part and another that relates to the color. The black-and-white part is called luminance or luma and has the symbol Y.

    Brightness relates to an offset (a fixed amount) added to the Y signal. When you adjust the brightness on a TV, it just adds/subtracts a constant voltage. When you do it in software like Vegas, it adds/subtracts a constant number. The same number is used for every pixel in the frame.

    DV uses a compression technique that involves converting the frame as we see it (in X-Y co-ordinates - the "spatial domain") to something known as the cosine or frequency domain. Specifically, a group of 8 x 8 pixels is converted (using the discrete cosine transform - or DCT) to another 8 x 8 group of numbers. This new group is organized in a way that relates to the detail in the image. The first number happens to be the average value of the original 64 pixels and the remaining numbers define the finer detail.

    Since the first number is the average value, if you add or subtract from it, you effectively add/subtract a constant value to each of the original pixels. If the 8 x 8 pixels are from the Y signal, it's the same as changing the brightness.
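
    To make that claim concrete, here is a minimal pure-Python sketch (my own illustration, not DV's actual coefficient scaling or quantization): with an orthonormal 8 x 8 DCT, the DC coefficient equals 8 times the block average, so adding a constant to that one number shifts every pixel by the same amount after the inverse transform.

```python
# Demonstrates the property described above: for an 8x8 block, the
# first ("DC") DCT coefficient is proportional to the average of the
# 64 pixels, so changing it alone changes the brightness of the block.
import math

N = 8

def dct_1d(x):
    """Orthonormal DCT-II of a length-8 sequence."""
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def idct_1d(X):
    """Orthonormal inverse DCT (DCT-III)."""
    out = []
    for n in range(N):
        s = math.sqrt(1 / N) * X[0]
        s += sum(math.sqrt(2 / N) * X[k] *
                 math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                 for k in range(1, N))
        out.append(s)
    return out

def transform_2d(block, f):
    """Apply a 1D transform along rows, then columns (separable 2D)."""
    rows = [f(row) for row in block]
    cols = [f([rows[r][c] for r in range(N)]) for c in range(N)]
    return [[cols[c][r] for c in range(N)] for r in range(N)]

# An arbitrary 8x8 luma block with values 0..63 (average = 31.5).
block = [[r * 8 + c for c in range(N)] for r in range(N)]
coeffs = transform_2d(block, dct_1d)

avg = sum(sum(row) for row in block) / 64
# With orthonormal scaling, DC = 8 * average of the 64 pixels.
print(round(coeffs[0][0], 6), round(8 * avg, 6))

# "Brightness" change: touch only the DC coefficient, then invert.
coeffs[0][0] += 8 * (-20)   # equivalent to subtracting 20 per pixel
shifted = transform_2d(coeffs, idct_1d)
print(round(shifted[0][0] - block[0][0]))  # -20
```

    (Real DV uses its own coefficient scaling and quantization on top of the DCT, so the factor of 8 here is specific to the orthonormal convention; the principle of one coefficient carrying the block average is the same.)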

    When you get the DV signal from the camcorder, it is encoded in such a way that these average values (or "DC coefficients") are easily retrieved without having to do any part of the decompression. That saves a huge amount of CPU time compared to the traditional way of decompressing and converting the image to the familiar X-Y (spatial) layout. Furthermore, without having to touch the remaining information (the "AC coefficients" - the detail of the image), there can be no generation loss!

    Not only do you get it done very fast, you also don't need to decompress/recompress.

    For each 8 x 8 group of pixels in the video, you just change one number.

    Conventionally, you have to perform thousands of calculations just to get to the point where you can change the brightness. Then you have to reverse the process.
    John Miller


