VideoHelp Forum

  1. Originally Posted by raymondwestwater View Post
    I will point out that some of the Derf videos suffer from the same problem, especially the controlled burn video.
    I see HDV compression artifacts in that one, but not the artifacts I attempted to describe in the ABC video. I looked at the original AVI source rather than Xiph's Y4M, but the container shouldn't matter.

    All of the Derf videos have been rendered and are available for review at www <dot> zpeg <dot> com <slash> netflix <dot> shtml. These videos are mastered at Netflix rates. Perhaps I will add links to these files into the demo page...
    Thanks for the link. Unfortunately the large number of embedded videos caused Chrome to hang. I'll have to try again once I close some of my millions of tabs.
  3. Originally Posted by raymondwestwater View Post
    jagabo, lordsmurf, of course the idea is similar in principle to noise reduction, except that the "noise" to be removed is based on a human visual model operating in the decorrelated 3-D transform domain. So the removal is far more targeted and effective than is possible with filtering, which cannot provide the same level of discrete access to the individual basis vectors.
    I'm getting visually similar results and better SSIM values with a little noise reduction and the same encoding settings.

    But there are some problems with your code: It is crushing darks below ~Y=19, the rest of the picture is getting a unit darker, and the U and V channels are dropping by one or two units, causing a greenish cast.
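The SSIM comparison above can be illustrated with a hypothetical sketch - not the actual script used in this thread - of a single-window "global" SSIM. Real SSIM implementations compute the statistic over local windows, but even this simplified version shows why a uniform level shift like the U/V drop described above costs score. The synthetic frames and names here are illustrative only.

```python
import numpy as np

def global_ssim(a, b, data_range=255.0):
    """Single-window SSIM over whole frames (a simplification; the
    standard algorithm averages SSIM over small local windows)."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

rng = np.random.default_rng(0)
frame = rng.integers(16, 236, size=(64, 64)).astype(np.float64)  # synthetic luma plane

print(global_ssim(frame, frame))        # 1.0 for identical frames
print(global_ssim(frame, frame - 2.0))  # below 1.0: a uniform -2 level drop costs SSIM
```

Note that the structure/contrast terms are unaffected by a pure level shift; only the luminance term drops, which is why a small shift still scores close to, but below, 1.0.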
  4. Member
    Join Date
    Sep 2016
    Location
    United States
    jagabo, thanks for that assessment. I wish I still had your eyes! I will look into this issue.

    I have made the 6 Mb/s and 4 Mb/s renderings available through the tables on the demo page. Vaperon, this may be an easier way for you to get content.

    Thank you all for your participation!
  5. Originally Posted by raymondwestwater View Post
    jagabo, thanks for that assessment. I wish I still had your eyes! I will look into this issue.
    I first noticed the slight change of color when viewing the videos. Then I checked with the VideoScope filter in AviSynth. With two videos interleaved you can A/B switch individual frames with an editor and it's easy to see the problem.

    Image
    [Attachment 38813 - Click to enlarge]


    Ignore the posterization and poor image quality from conversion to GIF. But note the way the U and V channels bounce up and down (for the U channel on the top right "up" is to the right, "down" is to the left). That's alternating between a frame of the source and the same frame from the optimized encoding.

    Code:
    v0 = LWLibavVideoSource("sintel2.mkv")
    v1 = LWLibavVideoSource("sintel2 37.raw.fullrate.mkv") # optimized video remuxed into MKV
    
    Interleave(v0, v1)
    
    ConvertToYUY2() # VideoScope requires YUY2
    VideoScope("both", true, "U", "V", "UV") # show U and V channels
    Last edited by jagabo; 4th Oct 2016 at 16:14.
  6. Attention, folks: this guy is a liar and a charlatan and we shouldn't be giving him any more attention. In this thread he has claimed that this is new technology that works by finding redundancies in video that commonly used encoders, such as x264 and x265, are incapable of finding. He also claims that this "pre-processor" of his is not a filter, and that it is fully compatible with every encoder out there. At first I wondered how that could be, as it would require multiple built-in encoders to provide the functionality he advertises.

    As you will recall, he offered to share the binary with me if I would contact him via the contact page on his web site, which I did. He would not and did not provide the binary as he promised, but he did download the sample file I linked to and provide the processed file for comparison purposes. I will let you guys see for yourselves what he is doing:

    http://www.zpeg.com/test/

    As you can see, he took the 1.8 GB source file and converted it to raw YUV, processed it using his filter (which he won't call a filter) to produce a second raw YUV file, then encoded that with x264.

    This program, which by his own test provided less than a 3% compression benefit, requires the end user to create two nearly 50 GB raw YUV files for a single 1.8 GB source, which means that a typical Blu-ray would need over 1 TB of hard drive space to process. Even if we granted his claim of a 20-30% savings in bit rate, the need for massive intermediate files makes his program useless.

    No wonder he hasn't tried to sell it to Google or Intel or Netflix - they would laugh him right out of the presentation. That's why he's trying so hard to convince a bunch of hobbyists on a forum.

    Nice try there buddy.
  7. Originally Posted by sophisticles View Post
    As you can see he took the 1.8gb source file and converted it to raw YUV, then processed it using his filter which he won't call a filter which resulted in a second raw YUV, then he encoded it to x264... the need for massive intermediate files makes his program useless.
    The reasons for this are obvious. This is a proof-of-concept program, not a production-level program. It's too much hassle to write a program that deals with many different containers, codecs, etc. So he's using off-the-shelf software to convert to raw YUV, which is easily handled by his software (everyone else does the same when prototyping algorithms). Then he encodes his transformed YUV video with whatever encoder you specify. If someone like Google were to buy his program, they would just build the algorithms into their software.
  8. Member
    Liar and charlatan here! As I posted, the file did not do well, so I guess I failed that test... But as to methodology, jagabo, thank you for clarifying the procedures that we use. Anyway, from the tests that people have run, we have seen good results for medium-to-high quality video (say 4 Mb/s and up), and poor results for very low quality video (some were as low as a couple hundred Kb/s).
  9. Member
    jagabo, we have some preliminary conclusions from analyzing the Derf files and our process.
    1. As to grey levels, we do indeed clip the lower range to 16 and the upper to 240. These numbers are specified in the YCbCr standard, but we see them violated routinely by the Derf data set, where ranges as wide as [0..255] occur. So we're in a bit of a quandary here, but we may just try widening the range and see what happens!
    2. The color spaces are initially "mangled" by a default FFMPEG color space conversion that takes place when we create the YUV files from the y4m content. So we can try a -c:v copy sort of thing if the color spaces match, or we can accept y4m content in the file-based version of our pre-processor.
    Just touching base to let you know we are taking your comments seriously and are working the issues.
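For reference, limited ("studio") range clamping can be sketched as below. Note that the nominal 8-bit upper bound in BT.601/709 is 235 for luma and 240 for chroma - the 240 figure in the post applies to the chroma planes - while full-range [0..255] content, as seen in some Derf clips, also occurs legitimately in practice. A minimal numpy sketch:

```python
import numpy as np

def clamp_limited_range(y, cb, cr):
    """Clamp 8-bit YCbCr planes to nominal limited-range bounds
    (Y: 16..235, Cb/Cr: 16..240 per BT.601/709)."""
    return np.clip(y, 16, 235), np.clip(cb, 16, 240), np.clip(cr, 16, 240)

# Full-range sample values get pulled into the nominal ranges:
y = np.array([0, 16, 128, 235, 255], dtype=np.uint8)
c = np.array([0, 16, 128, 240, 255], dtype=np.uint8)
y2, cb2, cr2 = clamp_limited_range(y, c, c)
print(y2.tolist())   # [16, 16, 128, 235, 235]: luma 0 clamps up to 16, 255 down to 235
print(cb2.tolist())  # [16, 16, 128, 240, 240]: chroma 255 clamps down to 240
```

Whether to clamp at all is the open question raised above; clamping full-range sources destroys real detail in the sub-16 and super-white regions.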
  10. Member
    Some more breaking news - we have rendered the Xiph Netflix sequences, and have gotten an average 37% file size reduction at equal QP. Better news than the results I got with sophisticles' data set (but you can't win them all!).
  11. Originally Posted by raymondwestwater View Post
    Liar and charlatan here! As I posted, the file did not do well, so I guess I failed that test... But as to methodology, jagabo, thank you for clarifying the procedures that we use. Anyway, from the tests that people have run, we have seen good results for medium-to-high quality video (say 4 Mb/s and up), and poor results for very low quality video (some were as low as a couple hundred Kb/s).
    What makes you a liar and a charlatan is not the results you achieved; it's that you intentionally tried to mislead us. Here are some of the statements you have made in this thread:

    This technology adds another 20% to MPEG-1, MPEG-2, AVC, HEVC, VP9, (your codec here)
    FACT: It does no such thing; furthermore, the above statement is nonsensical.

    and uses no filtering techniques whatsoever.
    FACT:
    jagabo, lordsmurf, of course the idea is similar in principle to noise reduction, except that the "noise" to be removed is based on a human visual model operating in the decorrelated 3-D transform domain. So the removal is far more targeted and effective than is possible with filtering, which cannot provide the same level of discrete access to the individual basis vectors.
    So first you claim that it's not a filter, then you admit that it's "similar" to a filter, you add a bunch of technical-sounding hogwash, throw in an implication that you either have a Ph.D. or are working on one, then proceed to say that I "got you" when I provide a sample that has little to no noise to be removed and thus can't benefit from a noise filter in the first place.

    But the crème de la crème is when you make this absurd statement:

    The point is that we are able to find and remove redundancies that motion-estimation compressors (H.264, H.265, VP9) inherently cannot. This is due to the way that they extract redundancy, which is to find a motion compensation vector then record the difference or error term.
    Followed by an offer to supply me with a binary for testing, but then backing out citing a need to "sanity check".

    Perhaps you should have considered sanity checking the claims you were going to make before you started this thread.
  12. Originally Posted by raymondwestwater View Post
    Some more breaking news - we have rendered the Xiph Netflix sequences, and have gotten an average 37% file size reduction at equal QP. Better news than the results I got with sophisticles' data set (but you can't win them all!).
    Give it up, man; you have made a mockery of yourself with your silly, bombastic claims.

    And what the hell is QP, anyway?

    Are you willing to supply this forum with a binary for testing or not?
  13. Member
    sophisticles, I am more than willing to work with you. I processed the one file you gave me and got unimpressive results. I discussed those results and their implications openly.
    I have modified the demo site to accept files up to 4 GB in size from web links. Should that not be adequate, I can give you access to a server that enables CLI usage of the pre-processor.
    But the process is not for the newbie, of course, and you do make me nervous when you ask a question like "What is QP?" It is the standard unit of quantization for x264 and x265, used to measure quality (in terms of error), and it has a logarithmic relationship to expected bandwidth. It's not a perfect predictor, but it averages out pretty well. A decent explanation can be found here: http://slhck.info/articles/crf. I do have a PhD, in video compression. If I speak over your head, it is neither to evade nor to obscure; it's just my style. This technology is well documented on my site and in my publications, which are easy to find if you are interested.
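One way to see the logarithmic QP-to-bandwidth relationship mentioned above: in x264/x265 the quantizer step size roughly doubles for every +6 QP, so a common rule of thumb (an approximation, not a guarantee - content matters) is that bitrate halves for each 6-QP increase. A sketch:

```python
def predicted_bitrate_ratio(qp_from, qp_to):
    """Rule-of-thumb bitrate(qp_to) / bitrate(qp_from): the quantizer
    step doubles every +6 QP, so rate roughly halves per +6 QP."""
    return 2 ** ((qp_from - qp_to) / 6.0)

print(predicted_bitrate_ratio(22, 28))  # 0.5: six QP higher, roughly half the bitrate
```

By this rule, a drop of a few QP at the same bitrate translates into a rough percentage bandwidth saving (e.g. 2^(3/6) is about a 1.41x bitrate factor for 3 QP), which is the kind of QP-to-bandwidth conversion the posts in this thread argue about.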
  14. Originally Posted by raymondwestwater View Post
    Liar and charlatan here!
    Well... don't take some folks too seriously - they abuse language and are constantly in a strange rush, and sadly I must say you are not the first...

    Honestly, current video processing algorithms make almost no use of a human visual system model, so I believe there is a large area where new pre- and post-processing could improve video compression - whether your technique is of that kind, I have no idea. Since I can't test it in a comfortable way, I'll step away from the thread, but I strongly support anyone trying to do something useful, so thumbs up and good luck - and keep a healthy distance.
  15. Member
    pandy, thank you for your encouragement and kind words. You are welcome to test on the demo site at zpeg <dot> com. If you email me from the contact page, I will respond.
  16. Originally Posted by raymondwestwater View Post
    1. As to grey levels, we do indeed clip the lower range to 16, and the upper to 240.
    You may be implicitly doing that somewhere in your code, but the samples I sent you for processing came back with blacks mostly clipped at Y=20, with a few overshoots below that. I.e., big black areas of Y=16 came back at Y=20. From a video encoded by your web site (job 37, if you want to verify it yourself):

    unprocessed:
    Image
    [Attachment 38864 - Click to enlarge]


    processed:
    Image
    [Attachment 38865 - Click to enlarge]


    Those were created by VideoScope() in AviSynth. On the bottom is a waveform graph of the luma channel. On the top right is a vertical waveform graph of the U channel (the V channel has the same -1 drop). On the bottom right is the UV vector plot.

    Originally Posted by raymondwestwater View Post
    2. The color spaces are initially "mangled" by a default FFMPEG color space conversion that takes place when we create the YUV files from the y4m content.
    It doesn't inspire much confidence when you don't notice or check for things like this.
  17. Member
    jagabo, I understand that these stumbles we are making, and will continue to make, are not confidence-inspiring. I think the right way to look at this is that we are asking experts such as yourself to help us understand our mistakes in a semi-friendly atmosphere. To that end I will point out that the addition of ffmpeg to our processing chain is recent, and was done to support automated processing of user-provided content.
    As I said, I am responding to your considered comments to share the current state of our analysis of the issues you have identified. Considering the effort you have put into this, it is the very least I can do. Now, I have identified a reason why Y-plane clipping takes place at 16 - but you tell me it's 20. I have not yet gotten to the bottom of that difference, but I have explained 80% of the issue! The additional 4 may come from the calculated optimal quantization, but I just can't say at this point in the work. I will update as we progress in our analysis.
  18. DECEASED
    Join Date
    Jun 2009
    Location
    Heaven
    Originally Posted by sophisticles View Post
    .......

    Are you willing to supply this forum with a binary for testing or not?
    Nope, he won't do that.

    And this is the reason why I don't trust him either.
    Last edited by El Heggunte; 9th Oct 2016 at 17:56. Reason: clarity
  19. Member
    El Heggunte, those who have a plan to use the binary may have access to it on a server I have stood up. Just email me and have a reasoned dialog, and I will give you a login... I don't want to waste your time or mine.
  20. Originally Posted by raymondwestwater View Post
    sophisticles, I am more than willing to work with you. I processed the one file you gave me and got unimpressive results. I discussed those results and their implications openly.
    I have modified the demo site to accept files up to 4 GB in size from web links. Should that not be adequate, I can give you access to a server that enables CLI usage of the pre-processor.
    But the process is not for the newbie, of course, and you do make me nervous when you ask a question like "What is QP?" It is the standard unit of quantization for x264 and x265, used to measure quality (in terms of error), and it has a logarithmic relationship to expected bandwidth. It's not a perfect predictor, but it averages out pretty well. A decent explanation can be found here: http://slhck.info/articles/crf. I do have a PhD, in video compression. If I speak over your head, it is neither to evade nor to obscure; it's just my style. This technology is well documented on my site and in my publications, which are easy to find if you are interested.
    You have a Ph.D. in video compression? Really? From what university? And where can I find this thesis of yours?

    If I make you nervous when I ask "what is QP?" then that makes us even because it makes me nervous when you reply with a link that explains how CRF works.

    I want you, using your Ph.D. trained brain, to explain to me, what you think QP is and why the values generated by x264 and x265 support your claims.

    Oh, and feel free to "speak over my head", I have a ladder in my garage that I can stand on if need be.

    LOL @ this guy. He claims to have a Ph.D. in video compression, but instead of working for Google, Microsoft, Intel, or any of the other big players in video compression, and/or selling his software to them, he admits he's working on it in his garage and spends his time on this forum trying to convince us of its viability.

    Just too funny!
  21. Originally Posted by raymondwestwater View Post
    2. The color spaces are initially "mangled" by a default FFMPEG color space conversion that takes place when we create the YUV files from the y4m content.
    I took a look at the raw YUV files you provided to sophisticles. His video starts out with all-black frames, so it's easy to test your claim with partial downloads. The unprocessed YUV file shows blacks at Y=16 and both chroma channels at 128, exactly where they should be. Your processed video shows blacks at Y=20 and the chroma channels at 126 (except for a few rows/columns at the edges of the frame). So it's your processing causing the problems, not the ffmpeg conversion to YUV.
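The kind of check described above can be sketched as follows. The filename and dimensions below are placeholders, not values from the thread, and only the first frame is read, so a partial download of a 46 GB file suffices:

```python
import numpy as np

def inspect_yuv420_frame(path, width, height):
    """Read the first frame of a raw 8-bit YUV 4:2:0 file and return the
    most common value in each plane. An all-black frame in correct limited
    range should report Y=16, U=128, V=128."""
    y_size = width * height
    c_size = y_size // 4  # each 4:2:0 chroma plane is a quarter of the luma plane
    with open(path, "rb") as f:
        raw = f.read(y_size + 2 * c_size)
    y = np.frombuffer(raw[:y_size], dtype=np.uint8)
    u = np.frombuffer(raw[y_size:y_size + c_size], dtype=np.uint8)
    v = np.frombuffer(raw[y_size + c_size:], dtype=np.uint8)
    return (int(np.bincount(y).argmax()),
            int(np.bincount(u).argmax()),
            int(np.bincount(v).argmax()))

# Placeholder usage - name and dimensions are illustrative only:
# print(inspect_yuv420_frame("processed.yuv", 1920, 1080))
```

A result like (20, 126, 126) on the processed file versus (16, 128, 128) on the raw file would confirm the level and chroma shifts reported above.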
  22. Member
    sophisticles, sarcastic humor as a weapon of discourse is unfortunately all too effective, but if you would actually like to hear what is said with an open mind, all I can do is try. CRF compression is based upon the QP quality measure, which I had thought was well presented at the link I referenced. But such information is widely available and should be well known to the experienced practitioner. Since QP is monotonically related to the error injected into a frame, the argument goes: reducing the pre-processed bandwidth to the point where the compressor uses the same QP value gives you the bandwidth at which the savings result in exactly the same error contributed by the compressor.
    As to me personally, you can find out all you would like to know with a quick Google search after inserting a space in my handle. My thesis has been published, as disclosed above.
  23. Member
    jagabo, I am relaying the interim results reported to me by my associate, who owns this effort. He reports that an unexpected ffmpeg conversion introduced the change. We are now removing ffmpeg from the equation and will look further. And the YUV files I provided had already been processed through this change, so that's not evidence (I think, if I understand you correctly). In any event, we will be delving into this until we get to the bottom of it. If you would like to be more hands-on, we would welcome your help! (Email me...)
  24. Originally Posted by raymondwestwater View Post
    He reports that an unexpected ffmpeg conversion introduced the change. We are now removing ffmpeg from the equation and will look further.
    Code:
    ffmpeg -color_range 2 -i "" -color_range 2 -vf "scale=iw:ih:sws_flags=neighbor:sws_dither=0:in_range=1:out_range=1"
    to disable the color range change; an additional option can be used to force a particular colorspace
  25. Originally Posted by raymondwestwater View Post
    And the yuv files I provided have already been processed though this change, so that's not evidence (I think, if I understand you correctly).
    One of the 46 GB YUV files is labeled "raw", the other "processed". As I understand it, "raw" was the original mp4 file from sophisticles simply decompressed to YUV with ffmpeg. That file has a correct black level and neutral chroma channels (just look at a hex dump; it's obvious). I thought that you then ran "raw" through your program to produce "processed", which has bad levels and colors, as I pointed out earlier. If this is the case, your program screwed up, not ffmpeg.

    I'm referring to the files linked to in post #36.
    https://forum.videohelp.com/threads/380675-Announcing-ZPEG-Demonstration-Site-Works-wit...=1#post2461631
    http://www.zpeg.com/test/
    Last edited by jagabo; 10th Oct 2016 at 12:13.
  26. Member
    Join Date
    Jan 2014
    Location
    Kazakhstan
    Originally Posted by jagabo View Post
    Originally Posted by vaporeon800 View Post
    The only one that is downloadable on the demo page, unless I'm missing something. "ABC clip" = 720p60 red carpet footage with ABC bug in corner.
    I downloaded this video and didn't see such artifacts. It has some milder artifacts though:

    Image
    [Attachment 38791 - Click to enlarge]

    http://www.zpeg.com/videos/leftraw.mp4
    The difference in costume colors (light vs. dark) immediately catches my eye.
  27. Member
    Originally Posted by raymondwestwater View Post
    - Compressing the pre-processed file until it reached the same mean square error
    Hi.
    Now this gets interesting. Could you give more detail?
  28. Member
    Gravitator, thank you for taking a look at the site. The process I use to justify my compression-advantage numbers is "compress until equal average QP". What that means is:
    1. Compress the source video to, say, 6 Mb/s. Take the average QP.
    2. Preprocess the video. Compress the video from the pre-processed source and take the average QP.
    3. Algorithm 1 - The QP after preprocessing is always lower (indicating higher quality) than the QP of the raw compression. Calculate the expected bandwidth advantage from the difference in QP.
    4. Algorithm 2 - Repeatedly compress the preprocessed file at different bandwidths until the average QP equals the raw QP.
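Step 4 above can be sketched as a bisection on bitrate. This is a hypothetical harness, not the author's tool: average_qp stands in for running an encode at a given bitrate and parsing the average QP from the encoder's log, and is only assumed to decrease as bitrate increases.

```python
import math

def match_bitrate(average_qp, target_qp, lo=100, hi=20000):
    """Bisect on bitrate (kb/s) until the encode's average QP reaches
    target_qp; average_qp(bitrate) is assumed to be decreasing."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if average_qp(mid) > target_qp:   # quantization too coarse: raise the bitrate
            lo = mid
        else:
            hi = mid
    return hi

# Demo with a toy model in place of real encoder runs: QP falls by 6 per
# bitrate doubling, mirroring the logarithmic relationship discussed above.
def model(kbps):
    return 50.0 - 6.0 * math.log2(kbps / 100.0)

print(match_bitrate(model, 26.0))  # 1600: the bitrate where the model hits QP 26
```

In a real harness each average_qp call is one full encode, so the logarithmic search keeping the number of trial encodes to a dozen or so matters.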
    Latest news: We think we have fixed the color shift and black level issues, and will make the fix available on the demo page shortly.

    Thanks!
    Raymond
  29. It's too bad that one of the mods deleted my post about you, Raymond, but hopefully they will let this one stand, as I intend to make it very mild.

    Attention folks, this guy, as I already pointed out, is a scammer. He invited me to google his name if I wanted to learn about his educational background. I did as he invited me to do; see for yourselves:

    https://www.linkedin.com/in/raymond-westwater-64743912

    He claims he has been with ZPEG.com for 30 years. I trust I don't need to point out the following:

    The first web site was home.cern in 1991, though there were a few .com domain names registered as far back as the mid-'80s; as near as I can tell, ZPEG.com was not one of them.

    More importantly, he presented the ZPEG technology as new in this thread and made some outlandish claims about it working with HEVC, AVC, VP9, et al. Even ignoring the fact that, according to his bio, he has been working on ZPEG for 30 years - when 30 years ago AVC/HEVC/VP9 not only didn't exist but weren't even a gleam in a developer's eye - his technology failed to perform as he claimed when I gave him the chance to prove his claims.

    He promised to provide a binary for testing and he reneged; the test file I provided he claimed was somehow rigged, saying that I "got him".

    But I am a very reasonable guy and willing to admit when I'm wrong so I'm going to give this "Raymond" fellow a chance to redeem himself.

    Raymond, buddy, I am going to give you a layup, a real softball challenge, you should be able to hit this one out of the park with ease.

    Download the y4m version of Tears of Steel from here:

    http://media.xiph.org/tearsofsteel/tearsofsteel-4k.y4m.xz

    It's a massive 60+ GB download compressed with .xz; decompressed, it's about 180 GB. The file has never been lossily compressed: they took the raw footage, shot in OpenEXR half-float format, imported it into Blender, edited it and added special effects, and exported it in y4m format.

    Since it's never been previously compressed, it should have ample "redundancies" for your software to find and eliminate without any quality loss, as you claim your software is capable of.

    Take the original and the version you process with your software; the processed version should be at least 30% smaller while still being raw y4m.

    Then take each version and encode them using x264+medium+crf 18, again one would expect to see the version where you used the processed source to be 30% smaller.

    How fair is that? I'm giving you an easy opportunity to show me up, do you take the challenge?
  30. Not that there's anything wrong with a challenge, but here it's weakened by a case of sophisticles calling the kettle black, and by an apparent lack of a time frame for challenge completion.
    https://forum.videohelp.com/threads/369438-Is-x264-the-best?p=2370543&viewfull=1#post2370543
    https://forum.videohelp.com/threads/370119-Which-of-these-videos-do-you-prefer?p=237370...=1#post2373708
    Last edited by hello_hello; 17th Nov 2016 at 13:57.


