VideoHelp Forum
Page 3 of 4
Results 61 to 90 of 109
  1. Have you tried using the right edge too? Find the first and last pixels of the picture then shift and stretch the line.

    The vertical shifts are caused by dropped fields during the capture (possibly during the recording on VHS too). For example, an interlaced analog video consists of alternating top and bottom fields:

    T B T B T B T B...

    normally captured by a TFF capture device:

    TB TB TB TB... (two letters together indicate the fields packed into a frame)

    But when a field is missed:

    TB (T dropped) BT BT BT...

    I think the way to fix this is to reverse the order in Interleave() -- use Interleave(bot,top) instead of Interleave(top,bot) in the earlier script. That means you have to identify where the errors occur and work on the video in sections. I don't know of any automated method of doing this.
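    The repacking after a dropped field can be sketched in plain Python (a toy model of the pairing logic, not AviSynth):

```python
# Toy model of field-to-frame packing around a dropped field.
# Fields alternate Top (T) and Bottom (B); a TFF capture packs them in pairs.

def pack_frames(fields):
    """Pack a flat field sequence into frames of two fields each."""
    return [fields[i] + fields[i + 1] for i in range(0, len(fields) - 1, 2)]

fields = list("TBTBTBTB")
print(pack_frames(fields))            # normal: ['TB', 'TB', 'TB', 'TB']

dropped = fields[:2] + fields[3:]     # the T of the second frame is dropped
print(pack_frames(dropped))           # after the drop: ['TB', 'BT', 'BT']
```

    From the dropped field onward, every frame pairs a bottom field with the following top field, which is why switching to Interleave(bot,top) for that section restores the correct pairing.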

    There are lots of lossless codecs: HuffYUV, Lagarith, MSU Lossless, h.264 lossless mode (YV12 only). HuffYUV is very fast, Lagarith is slower (but multithreaded so it improves with multicore CPUs), MSU even slower. In general slower equates to better compression -- though the returns diminish quite quickly. These days I usually use Lagarith for temporary files.
  2. Have you tried using the right edge too? Find the first and last pixels of the picture then shift and stretch the line.

    Ok, I fixed the problem this way. Thank you very much.

    I finally used MSU Lossless, but once encoded I cannot play the output file with VLC. What player should I use?

    Do you know what happens to the last lines at the bottom of the video? I've seen this problem many times on videos that come from VHS, but I don't know the original cause. Is there a way to fix these bad lines?
  3. VLC only uses its own built-in codecs, and it doesn't include MSU Lossless or Lagarith. It does have a HuffYUV decoder. If you want to view the former files you have to use a player that can access system-installed codecs (VFW or DirectShow). I would use MPC-HC, maybe KMPlayer.

    VHS always has head switching noise at the bottom of the frame. You have to crop or mask if you want to get rid of it. (Note the same noise is sent to the TV but TVs overscan the frame so you don't see it.)
  4. Thank you.

    Sometimes my program doesn't succeed in detecting a shifted field. I would like to move them "manually" with AviSynth.
    For example:

    From frame 10 to 20 of the bottom field, I would like to take the whole picture except the first line and paste it one pixel higher (position 0,0). What would be the script to do this for a single frame or a range of frames?

    I also tried a plugin called "MSU Field Shift Fixer" (http://compression.ru/video/old_film_recover/field_shift_en.html) but it doesn't work (I used the script from the page). And still the same problem: I don't know how to apply a filter to only some frames (not the whole video)...
    Last edited by mathmax; 6th Apr 2010 at 22:56.
  5. 2Bdecided (Member, United Kingdom, joined Nov 2007)
    ApplyRange

    http://avisynth.org/mediawiki/Animate

    Cheers,
    David.
  6. Use the Trim() command to isolate groups of frames:

    src=AviSource("filename.avi")
    v1=Trim(src,0,9)
    v2=Trim(src,10,19).Crop(0,1,-0,-0).AddBorders(0,0,0,1) # drop the top line, pad the bottom: shifts these frames up one scanline
    v3=Trim(src,20,0)
    return(v1+v2+v3)
    Note that you can't crop one scanline of YV12 so the video must be YUY2 or RGB.

    Or you can use ApplyRange():

    function ShiftUp(clip src)
    {
    Crop(src,0,1,-0,-0).AddBorders(0,0,0,1)
    }
    AVISource("filename.avi")
    ApplyRange(10,19,"ShiftUp")
    ApplyRange() only accepts one filter so I created ShiftUp() to do the Crop() and AddBorders().

    Did any of the earlier clips show the problem? I didn't notice.
    Last edited by jagabo; 7th Apr 2010 at 07:48.
  7. ok thank you.
    But this will move the whole image 1px higher. And I would like to move only one field...
  8. Like this?
    function ShiftUp(clip last)
    {
    AssumeTFF()
    SeparateFields()
    top=SelectEven()
    bot=SelectOdd().Crop(0,1,-0,-0).AddBorders(0,0,0,1)
    Interleave(top,bot)
    Weave()
    }
  9. Thank you very much. That's perfect

    Just one thing: when I write

    Code:
    Interleave(top, bot)
    AssumeTFF()
    The bottom and top field are inverted. But I corrected this by writing:

    Code:
    Interleave(bot, top)
    AssumeTFF()
  10. Yeah, I wasn't exactly sure about the field order issues. Looks like you sorted it out...
  11. jagabo,

    I still have a problem. When I drop the original video onto VirtualDub and export the images as BMP, they are overexposed compared with the video... is it a color format problem?

    Here I've uploaded a very short snippet on:
    http://www.mediafire.com/?mcyydy0mmmu

    Could you download and try to drop it on VirtualDub and compare the two windows?
    Last edited by mathmax; 17th Apr 2010 at 17:40.
  12. Sorry to insist, but I really need to understand this problem. I need to process my video, but if I do it now I'm afraid I'll have to do it again later because of these burned colors...
    Could you have a quick look at the snippet I uploaded?

    Thank you.
  13. It's normal for computers to contrast-enhance video for display or for conversion to digital images. Video is stored with a luma range from 0 to 255, but properly produced video is not supposed to have any luma values below 16 or above 235. That is, total black is defined as luma=16, full white as luma=235. Since computer monitors use the full range from 0 to 255, most programs stretch luma 16-235 out to RGB 0-255. The sample you provided has luma values below 16 and above 235, so those areas are "crushed" by the expansion. Here's a frame from your sample without (top) and with (bottom) the contrast enhancement:

    [Image: cont.jpg]

    You should adjust the levels of your video before giving it to VirtualDub. Get the blacks at 16, the whites at 235. You can use AviSynth's Histogram() or VideoScope() to view the levels.
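    A minimal sketch of that 16-235 → 0-255 stretch in Python (assuming 8-bit luma; the YUV→RGB conversion players also perform is omitted):

```python
def expand_luma(y):
    """Map studio-range luma (16-235) to full range (0-255), clamping out-of-range values."""
    v = round((y - 16) * 255 / 219)
    return max(0, min(255, v))

print(expand_luma(16))    # 0: the black point maps to full-range black
print(expand_luma(235))   # 255: the white point maps to full-range white
print(expand_luma(8))     # 0: below-black detail is crushed
print(expand_luma(245))   # 255: above-white detail is crushed
```

    Anything the tape recorded below 16 or above 235 collapses to 0 or 255, which is the "crushing" visible in the bottom frame.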
  14. Thank you for this explanation

    I used Histogram() and I can see a diagram, and the "curves" are indeed reaching the brown strip (>235). Now the question is: if I export the images without adjusting the levels, will the files be damaged? I mean, is this problem only a display issue, or are the colors effectively crushed in the files in a way that cannot be recovered afterwards?

    In the second case I have to adjust the levels... I looked at some methods to do this and found this article:
    http://avisynth.org/mediawiki/Luminance_levels
    Which of the proposed methods would you use?
  15. OK, I don't understand why, but the problem seems to have solved itself... seems like it was only a hardware issue.

    Just out of curiosity, to adjust the levels, do you mean writing this?
    Code:
    mpeg2source("myvideo.d2v")
    Levels(0, 0, 255, 16, 235)
  16. Reducing levels at the top end from 255 to 235 is what you want to do. I would check other parts of the video to see how far up or down the low end should go. I wouldn't blindly raise it from 0 to 16. In my experience the black level is too high on most VHS recordings and needs to be pulled down rather than up.

    You can also use ColorYUV() and Tweak() to adjust levels.
  17. Hi

    I don't understand how the Levels() function works.

    In the documentation (http://avisynth.org/mediawiki/Levels), it's written:
    Code:
    # does nothing on a [16,235] clip, but it clamps (or rounds) a [0,255] clip to [16,235]:
    Levels(0, 1, 255, 0, 255)
    I don't understand why it should raise values to 16 and lower them to 235... the output interval is [0, 255]!

    And another question: why does this code not work? It says AverageLuma(): invalid arguments.
    Code:
    AviSource("myvideo.avi")
    ConvertToYV12()
    AverageLuma()
    Last edited by mathmax; 24th Apr 2010 at 20:18.
  18. In your earlier post you said "Levels(0, 0, 255, 16, 235)". (The zero in the second argument should have been a 1, AviSynth won't even accept a zero there.) That command will cause the range of 0-255 in the source to be squished to 16-235.

    An image with the full levels range from 0 to 255:
    [Image: 0-255.png]
    Notice the 17 horizontal lines in the graph, all evenly spaced.

    After Levels(0, 1, 255, 16, 235, coring=false): (edit: the command originally read without coring=false)
    [Image: 16-235.png]
    Note how the range from lowest to highest is reduced (they now only range from 16 to 235) but the 17 lines in the graph are still evenly spaced.

    After Levels(0, 1, 255, 0, 255):
    [Image: cored.png]
    This doesn't affect the levels except that any pixels below 16 become 16, and any pixels above 235 become 235. So the darkest bar is a little brighter and the brightest two bars are a little darker. The rest of the bars are unchanged. This is called "coring" or "clamping" of the levels. Video signals are not supposed to have pixels with luma (the Y in YUV) below 16 or above 235. If you had added "coring=false" to the command the image would have been completely unchanged.

    Levels(InputLow, Gamma, InputHigh, OutputLow, OutputHigh) means stretch or squish the levels from the range (InputLow to InputHigh), to the range (OutputLow to OutputHigh). Gamma controls the linearity of the stretch.

    After Levels(0, 0.5, 255, 0, 255, coring=false):
    [Image: gamma0.5.png]
    Notice how the full range is still 0-255, but the darks are squished and the brights are stretched out.

    After Levels(0, 1.5, 255, 0, 255, coring=false):
    [Image: gamma1.5.png]
    This is the opposite. Darks are stretched out but brights are squished. This can be used to bring out shadow detail.
    Last edited by jagabo; 25th Apr 2010 at 10:59.
  19. OK, thank you, it's clearer now. In fact this is part of a new script I'm trying to write:

    In order to better detect the edge and correct the time base errors, I would like to crop the clip to a width of 30px. This is approximately twice the width of the left black strip. On one hand the black has a lot of noise and is often not really dark; on the other hand the first pixels of a scan line don't have the same luminosity from one line to another. I would like to unify the luma to more accurately detect the beginning of a line.

    [Image: 03535.jpg == (stretched) ==> 035352.jpg]


    That means I would like, for each line, to set the average luma of the first 10px to black and the last 10px to white. I hope I'm clear enough. Here is the script I tried to write:

    Code:
    ImageSource("image.bmp")
    
    ConvertToYV12()
    crop = Crop(10,0,30,1) #get a single line of 30px width
    
    cropLeft = crop.Crop(0,0,10,1) #get the first 10px of this line
    black = AverageLuma() #get the average luma of these pixels
    
    cropRight = crop.Crop(20,0,10,1) #get the last 10px of this line
    white = AverageLuma() #get the average luma of these pixels
    
    newcrop = Levels(crop, black, 1, white, 0, 255) #adjust the level to make the left real black and right real white.
    Overlay(crop, newcrop, 0, 1) #to compare the original line and the adjusted line
    I have two problems:

    - AverageLuma() requires that the clip is in YV12. But once I convert to YV12, I cannot crop a single line because the height must be a multiple of 2 in this format. Is there another function to get the luma of a line of pixels?
    - AverageLuma() throws an "invalid arguments" exception.

    Could you help me to fix these problems?
    Last edited by mathmax; 24th Apr 2010 at 22:43.
  20. Member (Oregon, USA, joined Sep 2005)
    This has been an interesting read. I doubt I can add anything to the restoration discussion. In fact, I know I can't, and I won't distract you from what you are doing. I just came across this topic.
    Going back to the beginning, I doubt getting the original VHS tape would help you much. Although there are time-base errors from VHS, it appears to me that the original problem was not from the VHS or its however many generations of dubs. Those can be problems, but it appears that the VHS was originally a copy from some other source. Based on the content of the video, it appears to me that the original "shifted" problem is because it is a dub from an original that was on 2" Quad tape, used in that era.
    That system used four heads on a rotating head assembly, one every 90 degrees. The VTRs had to correct for the head timing as well as the line timing. There were two adjustments we made manually for each tape we played: one was the skew, the other was the scalloping. The skew was to line up each head vertically; otherwise the edge of each head's playback would be a diagonal line. The scalloping adjustment would correct for crooked lines, where the edge would look like ( or ). So we had to first adjust for it to be straight, and then vertical. Normally it would look pretty good. However, sometimes with older and dubbed tapes, the horizontal shift between heads would still show an error, as it does on the original pictures you posted.

    Sometimes there would be color differences between the heads also. One adjustment we made was to equalize each head separately to correct that error, usually needed only when replacing the head assembly, which we did often due to wear, when a head could no longer penetrate the tape enough. On the RCA TR-70s we would shoot for 1000 hours on the heads, but usually it was much less; 500 sounds more typical, as I remember. It has been a long time since we used Quads.

    Your original picture looks like errors from a bad quad copy.
    Now back to your current solution attempts.
  21. Member (Spain, joined Jul 2009)
    You can use PointResize to duplicate one original scanline into two YV12 scanlines.
    AverageLuma only makes sense in the context of a single frame, so it can only be used inside a run-time filter like ScriptClip. Try something like this:
    Code:
    ImageSource("image.bmp")
    PointResize(width, height*2)
    ConvertToYV12()
    crop = Crop(10,0,30,2) #get a (duplicated) single line of 30px width
    cropLeft = crop.Crop(0,0,10,2) #get the first 10px of this line
    cropRight = crop.Crop(20,0,10,2) #get the last 10px of this line
    
    newcrop = ScriptClip(crop, "
       black = int(AverageLuma(cropLeft)) #get the average luma of the left pixels
       white = int(AverageLuma(cropRight)) #get the average luma of the right pixels
    
       Levels(black, 1, white, 0, 255) #adjust the level to make the left real black and right real white.
    ")
    StackVertical(crop, newcrop) #to compare the original line and the adjusted line
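    (For reference, the per-line stretch this performs can be modeled in plain Python on a list of 8-bit luma values; a toy model, not part of the AviSynth script:)

```python
def normalize_row(row, edge=10):
    """Stretch one luma row so the mean of its first `edge` pixels maps to 0
    and the mean of its last `edge` pixels maps to 255 (assumes the right
    edge is brighter than the left), clamping results to 8 bits."""
    black = sum(row[:edge]) / edge
    white = sum(row[-edge:]) / edge
    scale = 255 / (white - black)
    return [max(0, min(255, round((y - black) * scale))) for y in row]

# noisy dark left edge, a ramp, then a bright right edge
row = ([20, 22, 18, 20, 20, 21, 19, 20, 20, 20]
       + [60, 120, 180]
       + [230, 231, 229, 230, 230, 230, 231, 229, 230, 230])
print(normalize_row(row))  # left edge lands near 0, right edge near 255
```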
  22. Member (Spain, joined Jul 2009)
    Originally Posted by jagabo View Post
    after Levels(0, 1, 255, 16, 235):

    Note how the range from lowest to highest is reduced (they now only range from 16 to 235) but the 17 lines in the graph are still evenly spaced.
    Strange. According to http://avisynth.org/mediawiki/Levels, this very example should result in luma values 0 and 16 both being converted to about 30, and I would also expect values beyond 235 to be reduced to about 217.

    "the input is scaled from [16,235] to [0,255], the conversion [0,255]->[16,235] takes place (accordingly to the formula), and the output is scaled back from [0,255] to [16,235]: (for example: the luma values in [0,16] are all converted to 30)"
  23. Originally Posted by Gavino View Post
    Originally Posted by jagabo View Post
    after Levels(0, 1, 255, 16, 235):

    Note how the range from lowest to highest is reduced (they now only range from 16 to 235) but the 17 lines in the graph are still evenly spaced.
    Strange. According to http://avisynth.org/mediawiki/Levels, this very example should result in luma values 0 and 16 both being converted to about 30, and I would also expect values beyond 235 to be reduced to about 217.

    "the input is scaled from [16,235] to [0,255], the conversion [0,255]->[16,235] takes place (accordingly to the formula), and the output is scaled back from [0,255] to [16,235]: (for example: the luma values in [0,16] are all converted to 30)"
    Whoever wrote that appears to be confused and mixing RGB and YUV conversions/levels. The samples I provided were made with AviSynth's Levels() command, so they show exactly what happens to the video.

    Maybe the description was written with a very old version of AviSynth where the levels worked in RGB?
    Last edited by jagabo; 25th Apr 2010 at 07:36.
  24. Originally Posted by Gavino View Post
    You can use PointResize to duplicate one original scanline into two YV12 scanlines.
    AverageLuma only makes sense in the context of a single frame, so it can only be used inside a run-time filter like ScriptClip. Try something like this:
    Code:
    ImageSource("image.bmp")
    PointResize(width, height*2)
    ConvertToYV12()
    crop = Crop(10,0,30,2) #get a (duplicated) single line of 30px width
    cropLeft = crop.Crop(0,0,10,2) #get the first 10px of this line
    cropRight = crop.Crop(20,0,10,2) #get the last 10px of this line
    
    newcrop = ScriptClip(crop, "
       black = int(AverageLuma(cropLeft)) #get the average luma of the left pixels
       white = int(AverageLuma(cropRight)) #get the average luma of the right pixels
    
       Levels(black, 1, white, 0, 255) #adjust the level to make the left real black and right real white.
    ")
    StackVertical(crop, newcrop) #to compare the original line and the adjusted line

    Thank you very much. It works on a single line, but now I want to apply this script to all lines of the image. I wrote this:

    Code:
    ImageSource("G:\MJ\TBC\WBSS\images\top\00010.bmp")
    
    PointResize(width, height * 2)
    ConvertToYV12()
    
    newclip = ProcessLine(last, 1)
    return newclip.PointResize(width, height / 2)
    
    function ProcessLine(c, int n)
    {
        crop = c.Crop(4,n*2,28,2)
        cropLeft = crop.Crop(0,0,8,2)
        cropRight = crop.Crop(20,0,8,2)
        newcrop = ScriptClip(crop, "
               black = int(AverageLuma(cropLeft)) #get the average luma of the left pixels
               white = int(AverageLuma(cropRight)) #get the average luma of the right pixels
    
               Levels(black, 1, white, 0, 255) #adjust the level to make the left real black and right real white.
        ")    
        
        c = Overlay(c, newcrop, 30, n*2)
        
        return (n < 239) ?  ProcessLine(c, n + 1) : c
    }
    But it doesn't work. I get this:

    [Image: adjustedge.jpg]

    I put the original and the corrected edge side by side.. but there is no difference.
  25. Member (Spain, joined Jul 2009)
    Originally Posted by mathmax View Post
    but it doesn't work
    I think you should have:
    Code:
    newclip = ProcessLine(last, 0)
    but that doesn't really explain why the remaining lines are wrong.
  26. Yeah, I just corrected that. But any idea why the lines are still not adjusted?
  27. Member (Spain, joined Jul 2009)
    Ah, got it. It's the notorious problem with ScriptClip and variable binding - function local variables do not exist when the run-time script is evaluated, so you need to replace cropLeft and cropRight with their direct values inside the ScriptClip call.
    Code:
    newcrop = ScriptClip(crop, "
      black = int(AverageLuma(Crop(0,0,8,2))) #get the average luma of the left pixels
      white = int(AverageLuma(Crop(20,0,8,2))) #get the average luma of the right pixels
    
      Levels(black, 1, white, 0, 255) #adjust the level to make the left real black and right real white.
        ")
    (Even moving the assignments to cropLeft/Right inside the ScriptClip probably wouldn't work, as each recursive call would share the same run-time variables. If you're interested, my GRunT plugin provides improved versions of ScriptClip et al. which address the variable-binding problem.)
  28. Thank you so much
  29. I think the edge detection will be better with this adjusted edge.

    Look at this picture:
    [Image: adjustedge2.jpg]

    But there are still some lines that remain black from beginning to end... any idea how to fix this?
  30. Member (Spain, joined Jul 2009)
    Originally Posted by jagabo View Post
    Whoever wrote that appears to be confused and mixing RGB and YUV conversions/levels. The samples I provided were using AviSynth's level() command so they show exactly what happens to the video.
    Maybe the description was written with a very old version of AviSynth where the levels worked in RGB?
    I've had a look at the source code of the Levels filter and it's consistent with the statement in the documentation.
    Was your source clip definitely YUV? Your results for Levels(0, 1, 255, 16, 235) are what you would expect for an RGB input, or where coring=false was specified. (coring is ignored for RGB)
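    Sketching the documented formula in Python bears this out (my reading of the wiki's description; a hypothetical helper, not the filter's actual code):

```python
def avs_levels(y, in_low, gamma, in_high, out_low, out_high, coring=True):
    """Approximate AviSynth Levels() on a single luma value, per the documented formula."""
    if coring:
        y = (y - 16) * 255 / 219           # input scaled from [16,235] to [0,255]
    t = (y - in_low) / (in_high - in_low)  # normalize to the input range
    t = min(max(t, 0.0), 1.0)
    v = t ** (1.0 / gamma) * (out_high - out_low) + out_low
    if coring:
        v = v * 219 / 255 + 16             # output scaled back from [0,255] to [16,235]
    return round(min(max(v, 0), 255))

print(avs_levels(0, 0, 1, 255, 16, 235))                # 30, as the wiki says
print(avs_levels(0, 0, 1, 255, 16, 235, coring=False))  # 16, matching the screenshots
```

    With coring=True, input luma 0 and 16 both come out at 30; with coring=False (or an RGB clip, where coring is ignored) you get the plain 0-255 → 16-235 squeeze shown in the posted graphs.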


