VideoHelp Forum
  1. Member (Join Date: Jan 2007, Location: United States)
    I've been using Lagarith for a few years now, and while it does indeed get the job done for captures, I have started to grow frustrated with its limitations. One, no support outside Windows. Two, even within Windows, there isn't much support outside of VirtualDub. If I have a folder full of Lagarith-encoded files, there's not much I can do to convert them outside of VirtualDub. WinFF won't recognize them. Handbrake won't recognize them. Frankly, most tools won't touch them. So far I've only had luck with Windows DVD Maker (blech), VirtualDub and AVS2DVD.

    I'm not sure whether I have the CPU horsepower to capture to x264 lossless. Maybe not. But as a general rule, would this be a better format for archival/storage of footage if space isn't the issue? I'd have much better cross-platform support, and presumably any program that decodes x264-encoded files (which is just about everything) should be able to handle it. I haven't experimented with the format enough to know if there is a downside. I don't see it being used or recommended much.
  2. Any tool that can accept AviSynth (.avs) scripts can accept Lagarith through AviSynth frameserving; for example, ffmpeg (the engine behind WinFF) can.
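
    To make that concrete, here is a minimal sketch (the file names are just examples, and the ffmpeg step assumes a build that can open AviSynth scripts):

    Code:
    # lagarith_wrap.avs -- one-line wrapper so AviSynth-aware tools can open the capture
    AviSource("c:\captures\vhs-capture.avi")
    Code:
    ffmpeg -i lagarith_wrap.avs -vcodec huffyuv -acodec copy converted.avi
    The Huffyuv output here is just an example; any format ffmpeg can write would work the same way.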

    Lagarith isn't very good for capturing because its encoding latency is high, which can lead to dropped frames on some systems. Huffyuv and the UT Video codec are much faster and better candidates for capture, but compress less. Lagarith is better for archival or storage because of its better compression.

    What kind of sources? x264 is YV12 only (no RGB mode)

    While many decoders and programs can work with H.264, not all of them are capable of decoding x264's lossless mode.
  3. guns1inger, Always Watching (Join Date: Apr 2004, Location: Miskatonic U)
    Any encoder that supports AviSynth should be able to see them if you wrap them in a simple loading script. One line is all it would need.

    If that is a single-core CPU, then I suspect x264, even lossless, is going to be a big ask. You would also have to test just how much space it was going to require, as there can be quite a big difference in output size between lossless encoders.
    Read my blog here.
  4. Member (Join Date: Jan 2007, Location: United States)
    Thanks for the replies. The capture PC is an i7, and by turning up the threading it is more or less able to encode x264 lossless in real time during capture, although with pretty high (> 75%) CPU utilization. But that's a good note to keep in mind about x264 being YV12-only. It does seem that modern PCs are able to handle more sophisticated encoders. (Granted, my previous P4/2.8GHz would not have managed it.) So I'm investigating whether to switch to something more universal going forward.

    And thank you for the reference to frameserving... I tried it, and WinFF/ffmpeg works great. It still doesn't help me on my Mac, but at least it's a start.

    I am a little concerned about one thing with Lagarith. I encode a file to Lagarith, then open it up in VirtualDub and re-encode it to Lagarith (same options) to a different file, then compare the two files using AviSynth (script below), and I see some subtle but visible differences, even unamplified. A second-generation re-encode shows differences as well, but only when amplified. I've toggled "Prevent upsampling when decoding" to no effect. The mode is YUY2. This does not happen if I originally captured in RGB, or if I convert to RGB and then do a second-generation re-encode in RGB. Is VirtualDub internally doing some colorspace conversion?

    Any ideas? I took the compare script from a forum post here, I believe.


    # An AVS comparison script which also shows the differences between the two videos:
    v1 = AviSource("c:\test\vhs-01-1980p-part01.avi")    # original
    v2 = AviSource("c:\test\vhs-01-1980p-part01a.avi")   # re-encoded copy
    sub = v1.Subtract(v2)                                # identical frames come out as flat grey
    substrong = sub.Levels(112, 1, 144, 0, 255)          # stretch the range around grey to amplify small differences
    return StackVertical( \
        StackHorizontal(v1.Subtitle("original"), v2.Subtitle("encoded")), \
        StackHorizontal(sub.Subtitle("Difference"), substrong.Subtitle("Difference amplified")))
    Last edited by sphinx99; 21st Mar 2010 at 18:59.
  5. Member (Join Date: Jan 2007, Location: United States)
    Ah, never mind my earlier post. It looks like VirtualDub's "Autoselect decompression format" under Color Depth was choosing something other than YUY2. When I set it to YUY2 explicitly, the differences disappear.
  6. If you use Video => Fast recompress and do all your filtering in AviSynth, there will be no colorspace conversion. Otherwise, VirtualDub will decompress to RGB by default.

    Is this a VHS capture? I'm not sure if x264's lossless mode works with interlaced content.
  7. Member (Join Date: Jan 2007, Location: United States)
    Some of it is VHS. A variety of sources, really.

    I agree re: interlaced content. The option is there in x264vfw, but it claims to be unimplemented. I guess that eliminates x264 as an option.
  8. Just to add: if this is for archival purposes, FFV1 compresses YV12 sources even better than Lagarith in YV12 mode (about 3-5% better).

    But I don't know if it has YUY2 or interlaced modes

    You can access it through ffdshow and ffmpeg, even Avidemux. So it might be accessible on the Mac (ffmpeg and Avidemux are pretty much cross-platform, including Linux). Avidemux has FFVHUFF and FFV1, so you might want to experiment with those.
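
    As a rough sketch, an FFV1 archive encode with ffmpeg could look like this (the file names are only placeholders, and the audio stream is just copied through):

    Code:
    ffmpeg -i capture.avi -vcodec ffv1 -acodec copy archive.avi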
  9. 2Bdecided, Member (Join Date: Nov 2007, Location: United Kingdom)
    The problem with all these things for "archiving" is whether you'll be able to play them back easily, or at all, when you need that "archive".

    It makes me want to archive in something a little more standard, e.g. high-bitrate lossy such as M-JPEG, MPEG-2, x264, MPEG-1 or maybe even DV for clean PAL sources. It's not lossless, but most of those options can be close enough for most practical purposes, and there's no point having a lossless archive that you can't play!

    HuffYUV would probably be fine if it supported YV12. I know there are variants (ffmpeg?) that do, but that's maybe getting even more risky in terms of future compatibility, IMO. You could program your own lossless YV12 <-> YUY2 conversion if you were paranoid.

    Cheers,
    David.
  10. Originally Posted by 2Bdecided View Post
    Problem with all these things for "archiving" is whether you'll be able to play it back easily, or at all, when you need that "archive".

    Makes me want to archive in something a little more standard - e.g. high bitrate lossy such as M-JPEG, MPEG-2, x264, MPEG-1 or maybe even DV for clean PAL sources. It's not lossless - but most of those options can be close enough for most practical purposes, and there's no point having a lossless archive that you can't play!

    Good point, but you can definitely play back the UT Video codec or Huffyuv in real time on a decent computer (even HD material like 720p60, and it sounds like he is capturing SD stuff).


    I know there are variants (ffmpeg?) that do - but that's maybe getting even more risky in terms of future compatibility IMO.

    IMO, ffmpeg is the safest route in terms of future compatibility. It is open source, has many contributors, is cross-platform, and is probably the most used tool in one form or another, so there's no worry that someday you'll have to "pay" a fee or run into compatibility issues. Take DNxHD as an example: now that it has been opened up by Avid as the free-to-use VC-3 standard, and a free encoder has been developed with BBC backing, there is little risk of it going obsolete even if Avid closes shop. (It's not lossless, but very close; I'm just using it as an example.)


    You could programme your own lossless YV12 <> YUY2 conversion if you were paranoid.
    Lossless YV12<=>YUY2 conversion? I recall hearing about an AviSynth beta that might do this. Do you know how that's coming along, or how you could "program" it yourself?
  11. Lossless YUY2 to YV12 is impossible: I have two numbers and their average is 37. What are my two numbers?

    What you can do is make it so that multiple generations of YUY2/YV12 conversions don't accrue more and more errors with each generation. You duplicate chroma samples going from YV12 to YUY2, then subsample or average chroma samples going from YUY2 to YV12. I believe AviSynth already does this.

    For example, here are two chroma samples:

    YUY2: 57 98
    average for conversion to YV12: (57 + 98) / 2 = 77 (loss of information)
    duplicate for conversion to YUY2: 77 77
    average for conversion to YV12: (77 + 77) / 2 = 77 (no further loss)
    duplicate for conversion to YUY2: 77 77
    average for conversion to YV12: (77 + 77) / 2 = 77 (no further loss)
    etc.
    Last edited by jagabo; 23rd Mar 2010 at 10:00.
  12. Member (Join Date: Jan 2007, Location: United States)
    I agree that ffmpeg deserves consideration and may well be the safest route.

    Given its many possible output options, what do you all think would be the best output format for archival purposes (translation: most flexible vis-a-vis progressive/interlaced and color spaces, most stable and well-implemented)? As others mentioned, I have tried but been unable to get an interlaced x264 output that works (near as I can tell) using the current releases.
  13. Originally Posted by poisondeathray View Post
    What kind of sources? x264 is YV12 only (no RGB mode)
    That shouldn't be a problem: split the RGB into 3 pictures and stack them together side by side (the same applies to interlaced material). As long as you reverse the process when decoding, everything is fine (and it's quite cheap in terms of CPU power, too).
  14. Originally Posted by pandy View Post
    Originally Posted by poisondeathray View Post
    What kind of sources? x264 is YV12 only (no RGB mode)
    That shouldn't be a problem: split the RGB into 3 pictures and stack them together side by side (the same applies to interlaced material). As long as you reverse the process when decoding, everything is fine (and it's quite cheap in terms of CPU power, too).
    What? Are you saying YV12 is "fine" for CGI renders, VFX, and high-end compositing? I strongly disagree.


    As others mentioned I have tried but been unable to get an interlaced x264 output that works (near as I can tell) using the current releases.
    Well, I'm still not sure whether x264's lossless mode works with interlaced content, but part of the problem might be that you are using the VfW version.
  15. Originally Posted by pandy View Post
    Originally Posted by poisondeathray View Post
    What kind of sources? x264 is YV12 only (no RGB mode)
    That shouldn't be a problem: split the RGB into 3 pictures and stack them together side by side
    Except there's no software that really does this (aside from AviSynth scripts), it will take 3+ times as long to encode, the encoded file will be significantly larger, and it will take 3+ times longer to decode later on.
  16. Originally Posted by jagabo View Post
    Except there's no software that really does this (aside from AviSynth scripts), it will take 3+ times as long to encode, the encoded file will be significantly larger, and it will take 3+ times longer to decode later on.
    I made a very fast test: it was only 33% slower and the file is 63% smaller (3.1 MB for my idea vs 8.4 MB for the original RGB lossless). Of course those figures aren't very meaningful because of the nature of the source (an AviSynth colour gradient), but I think it shows that this is a bit different from a simple linear multiplication by 3, as you assumed.

    The AVS script for my idea is:

    Code:
    fp = 25.0                     # frame rate
    tm = 30.0                     # length in seconds
    ln = Round(fp*tm)             # length in frames
    
    # test source: ColorYUV(showyuv=true) draws a UV colour gradient
    BlankClip(length=ln, width=4, height=4, fps=fp, pixel_type="YUY2").KillAudio().ColorYUV(showyuv=true)
    ConvertToRGB(matrix="rec601", interlaced=false)
    
    # pull each RGB channel out as a grey (luma-only) YV12 clip
    Blue  = ShowBlue("YV12")
    Green = ShowGreen("YV12")
    Red   = ShowRed("YV12")
    
    # stack the three planes side by side into one triple-width frame
    StackHorizontal(Red, Green, Blue)
    For direct RGB (x264 converts RGB to YV12, so lossless RGB is impossible):
    Code:
    fp = 25.0
    tm = 30.0
    ln = Round(fp*tm)
    
    # same test source as above, but left as plain RGB for comparison
    BlankClip(length=ln, width=4, height=4, fps=fp, pixel_type="YUY2").KillAudio().ColorYUV(showyuv=true)
    ConvertToRGB(matrix="rec601", interlaced=false)
    
    #Blue  = ShowBlue("YV12")
    #Green = ShowGreen("YV12")
    #Red   = ShowRed("YV12")
    
    #StackHorizontal(Red, Green, Blue)
    And yes, such quick manipulations are done in AviSynth, but it should not be difficult to implement them in other environments given their simplicity.

    The x264 command line (--crf 0 selects lossless mode) looks like this:
    Code:
    x264 --crf 0 --output %1_test.264 %1.avs
  17. 2Bdecided, Member (Join Date: Nov 2007, Location: United Kingdom)
    Originally Posted by jagabo View Post
    Lossless YUY2 to YV12 is impossible: I have two numbers, the average is 37. What are my two numbers?
    Ah yes, I didn't mean starting with a YUY2 source.

    I meant, to store YV12 in the YUY2-only versions of HuffYUV, you can losslessly convert the YV12 to YUY2.

    Then you can convert that YUY2 back to the original YV12, hence the round trip is lossless.

    Obviously you can't convert normal YUY2 losslessly to YV12.

    Cheers,
    David.
  18. If the YUY2 source is equal in bandwidth to the YV12 source, then the conversion from YUY2 to YV12 can be made losslessly.
  19. BlankClip? Try it with some real video. I ran a quick test with a DV source and it took about 3.5 times longer and the file was 2.5 times bigger. In fact, the separated-RGB lossless h.264 encoding was larger than a Lagarith RGB encoding and took over 6 times longer to compress.
    Last edited by jagabo; 24th Mar 2010 at 12:26.
  20. Originally Posted by pandy View Post
    If YUY2 source will be equal to bandwidth of the YV12 source then conversion from YUY2 to YV12 can be made losslessly.
    Huh? YUY2 has 4 chroma samples for every four pixels. YV12 has only 2 chroma samples for every four pixels. So, by definition, they do not have the same bandwidth.
  21. Originally Posted by jagabo View Post
    BlankClip? Try it with some real video. I ran a quick test with a DV source and it took about 3.5 times longer and the file was 2.5 times bigger. In fact, the separated RGB lossess h.264 encoding was larger than a Lagarith RGB encoding and took over 6 times longer to compress.
    First of all, this is not BlankClip.

    I made a few quick tests, and the results are:

    RGB source: the trailer for District 9 (d9-clip-arrive_h1080p.mov, 1080p, h.264-compressed), converted to RGB, resized with BicubicResize to 1/4 of the original size (i.e. 480x264), saved as uncompressed RGB AVI with no audio; size 271.9 MiB.

    - plain ConvertToYV12 conversion in AviSynth (since x264 has no lossless RGB support): compression speed 43.01 fps, size 36.3 MiB (an 86.65% reduction vs the source)

    - RGB stacked horizontally (AVS script as before): compression speed 16.67 fps (61.24% slower than --crf 0 for YV12), size 85.9 MiB (a 68.41% reduction vs the source)

    - RGB upsampled by 2 in both H and V with PointResize in AviSynth (so 4:2:0 subsampling loses nothing relative to 4:4:4), then converted to YV12 (a lossless conversion if YCbCr is replaced by YCoCg): compression speed 11.91 fps (72.31% slower than --crf 0 for YV12), size 100.6 MiB (a 63% reduction vs the uncompressed RGB source)

    There is still some room for improvement in speed, and maybe in compression as well.
  22. Originally Posted by jagabo View Post
    So, by definition, they do not have the same bandwidth.
    They can have the same bandwidth: resize the YUY2 to twice its height first, then convert to YV12; that makes the YUY2<>YV12 conversion lossless.
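
    In AviSynth the idea is roughly this (the file name is just an example, and it assumes ConvertToYV12 averages each pair of now-identical neighbouring chroma rows):

    Code:
    AviSource("yuy2_capture.avi")   # YUY2 source
    PointResize(width, height*2)    # duplicate every line, doubling the height
    ConvertToYV12()                 # the doubled frame's 4:2:0 chroma matches the original 4:2:2 resolution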

    Currently x264 has limited support for lossless (YV12 only), so if someone needs to use h.264 for storing video, it can be done with tricks that preserve the data at a cost in size and time; it is much easier (OK, sometimes easier) to do tricks than to develop a codec.
  23. Member (Join Date: Jul 2009, Location: Spain)
    Originally Posted by jagabo View Post
    What you can do is make it so that multiple generations of YUY2/YV12 conversions don't accrue more and more errors with each generation. You duplicate chroma samples going from YV12 to YUY2, then subsample or average chroma samples going from YUY2 to YV12. I believe AviSynth already does this.
    Actually, it doesn't - in going from YV12 to YUY2, the added chroma samples are interpolated from the YV12 values. (See http://avisynth.org/mediawiki/Sampling#Upsampling)
  24. Originally Posted by poisondeathray View Post
    What? Are you saying YV12 is "fine" for CGI renders, VFX and high end compositing ? I strongly disagree
    So this is NOT really YV12, but 3 separate R, G, B planes coded as planar greyscale YV12 (no chroma information is involved; only the luma plane is used).

    So in fact it could be coded as a triple-width Y8 (greyscale) frame (I'm not sure about Y8 support in x264).
  25. Originally Posted by Gavino View Post
    Originally Posted by jagabo View Post
    What you can do is make it so that multiple generations of YUY2/YV12 conversions don't accrue more and more errors with each generation. You duplicate chroma samples going from YV12 to YUY2, then subsample or average chroma samples going from YUY2 to YV12. I believe AviSynth already does this.
    Actually, it doesn't - in going from YV12 to YUY2, the added chroma samples are interpolated from the YV12 values. (See http://avisynth.org/mediawiki/Sampling#Upsampling)
    You are right. I was remembering incorrectly. It is ffdshow that duplicates samples.
  26. Originally Posted by pandy View Post
    Originally Posted by poisondeathray View Post
    What? Are you saying YV12 is "fine" for CGI renders, VFX and high end compositing ? I strongly disagree
    So this is NOT YV12 but 3 separate R,G,B planes coded as planar grayscale YV12 (no chroma planes are involved - only luma plane is used).

    So in fact this can be coded as 3xH size Y8 format (not sure about support for Y8 with x264)
    Yes, I see what you're doing

    But I fail to see any advantages to doing it this way, besides an academic exercise
  27. Originally Posted by poisondeathray View Post
    Yes, I see what you're doing

    But I fail to see any advantages to doing it this way, besides an academic exercise
    h.264 is an industry standard and Lagarith is not, and the operations on the video are easy to reverse even in cheap hardware (memory transfers, address changes). I also made a test with Lagarith on the same content: encoding was much faster (approx. 83.3 fps, about 80% faster than my idea) and the final size is about the same (84.8 MiB vs 271.9 MiB, a 68.81% reduction, only 0.4% better than my idea). But encoding is usually done only once, and there is no Lagarith implementation in hardware or for CPUs other than x86 (AFAIK).
  28. Member (Join Date: Jan 2007, Location: United States)
    Is a VHS capture with Lagarith TFF or BFF?
  29. Originally Posted by pandy View Post
    Originally Posted by poisondeathray View Post
    Yes, I see what you're doing

    But I fail to see any advantages to doing it this way, besides an academic exercise
    h.264 is an industry standard and Lagarith is not, and the operations on the video are easy to reverse even in cheap hardware (memory transfers, address changes). I also made a test with Lagarith on the same content: encoding was much faster (approx. 83.3 fps, about 80% faster than my idea) and the final size is about the same (84.8 MiB vs 271.9 MiB, a 68.81% reduction, only 0.4% better than my idea). But encoding is usually done only once, and there is no Lagarith implementation in hardware or for CPUs other than x86 (AFAIK).

    I like what you're doing, and development is a great idea. But can any hardware even decode lossless x264 streams? Last I checked, even many software decoders had issues decoding it, let alone chips that are limited to a certain profile or to DXVA. Right now I'm not aware of any chips that can encode lossless x264 either... (other than the CPU)


    Is a VHS capture with Lagarith TFF or BFF?
    Field order is determined by the source. If it's a TFF source, then that's what the Lagarith capture encodes. I don't know much about VHS, or whether it's always a certain field order. Standard DV is usually BFF; most HD sources are TFF.

    And if you don't know, you can always determine it with AviSynth by separating the fields, e.g. with the little script below the link.
    http://neuron2.net/faq.html#analysis
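
    Something like this sketch works (the file name is just an example): assume one field order, separate the fields, and step through a scene with motion; if the motion jumps back and forth instead of moving smoothly forward, the assumption was wrong.

    Code:
    AviSource("capture.avi")
    AssumeTFF()          # guess top field first; swap in AssumeBFF() to test the other order
    SeparateFields()     # step through the fields and watch the direction of motion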
    Last edited by poisondeathray; 25th Mar 2010 at 22:17.
  30. Originally Posted by sphinx99 View Post
    Is a VHS capture with Lagarith TFF or BFF?
    In what sense TFF or BFF? AFAIR, VHS records one full FRAME (both fields) on tape with the help of a head drum that revolves 25 times per second (for 25 fps systems).

    "Because VHS is an analog system, VHS tapes represent video as a continuous stream of waves, in a manner similar to analog TV broadcasts. The waveform per scan-line can reach about 160 waves at max, and contains 525 scanlines from the top to the bottom of the screen in NTSC (480 visible). PAL variants have 625 scanlines (576 visible). In modern-day digital terminology, VHS is roughly equivalent to 333x480 pixels."

    http://en.wikipedia.org/wiki/VHS