Thanks for the link. Unfortunately the large number of embedded videos caused Chrome to hang. I'll have to try again once I close some of my millions of tabs.

All of the Derf videos have been rendered and are available for review at www <dot> zpeg <dot> com <slash> netflix <dot> shtml. These videos are mastered at Netflix rates. Perhaps I will add links to these files into the demo page...
But there are some problems with your code: It is crushing darks below ~Y=19, the rest of the picture is getting a unit darker, and the U and V channels are dropping by one or two units, causing a greenish cast.
jagabo, thanks for that assessment. I wish I still had your eyes! I will look into this issue.
I have made the 6 Mb/s and 4 Mb/s renderings available through the tables on the demo page. Vaperon, this may be an easier way for you to get content.
Thank you all for your participation!
AviSynth. With two videos interleaved you can A/B switch individual frames with an editor and it's easy to see the problem.
[Attachment 38813 - Click to enlarge]
Ignore the posterization and poor image quality from conversion to GIF. But note the way the U and V channels bounce up and down (for the U channel on the top right "up" is to the right, "down" is to the left). That's alternating between a frame of the source and the same frame from the optimized encoding.
v0 = LWLibavVideoSource("sintel2.mkv")
v1 = LWLibavVideoSource("sintel2 37.raw.fullrate.mkv") # optimized video remuxed into MKV
Interleave(v0, v1)
ConvertToYUY2() # VideoScope requires YUY2
VideoScope("both", true, "U", "V", "UV") # show U and V channels
Last edited by jagabo; 4th Oct 2016 at 16:14.
Attention folks, this guy is a liar and a charlatan and we shouldn't be giving him any more attention. In this thread he has claimed that this is new technology that works by finding redundancies in video that commonly used encoders, such as x264 and x265, are incapable of finding. He also claims that this "pre-processor" of his is not a filter, and that it is fully compatible with every encoder out there. At first I was wondering how that could be, as it would require that it contain multiple built-in encoders to provide the functionality he advertises.
As you will recall, he offered to share the binary with me if I would contact him via the contact page on his web site, which I did. He would not and did not provide the binary as he promised, but he did download the sample file I linked to and he did provide the processed file for comparison purposes. I will let you guys see for yourselves what he is doing:
As you can see, he took the 1.8 GB source file and converted it to raw YUV, then processed it using this filter he won't call a filter, which resulted in a second raw YUV file, then he encoded that to x264.

This program, which by his own test provided less than a 3% compression benefit, requires the end user to create two nearly 50 GB raw YUV files for a single 1.8 GB source, which means that a typical Blu-ray would need over 1 TB of hard drive space to process. Even if we granted his claim of a 20-30% savings in bit rate, the need for massive intermediate files makes his program useless.
No wonder he hasn't tried to sell it to Google or Intel or Netflix; they would laugh him right out of the presentation. That's why he's trying so hard to convince a bunch of hobbyists on a forum.
Nice try there buddy.
Liar and charlatan here! As I had posted, the file did not do well, so I guess I failed that test... But as to methodology, jagabo, thank you for clarifying the procedures that we use. Anyway, from the tests that people have run, we have seen good results for medium-to-high quality video (say, 4 Mb/s and up), and poor results for very low quality video (some were as low as a couple hundred Kb/s).
jagabo, we have some preliminary conclusions from analyzing the Derf files and our process.
1. As to grey levels, we do indeed clip the lower range to 16, and the upper to 240. These numbers are specified in the YCbCr standard, but we see they are violated routinely by the Derf data set, where ranges as wide as [0..255] occur. So we're in a bit of a quandary here, but we may just try widening the range and see what happens!
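For reference, the clipping behavior being described can be sketched in a few lines of NumPy. This is only an illustration of the standard studio-range limits (the BT.601/BT.709 standards actually reserve [16, 235] for luma and [16, 240] for chroma), not ZPEG's actual code; the plane names are placeholders.

```python
import numpy as np

# Studio ("limited") range per BT.601/BT.709:
Y_MIN, Y_MAX = 16, 235   # luma
C_MIN, C_MAX = 16, 240   # chroma (Cb/Cr)

def clamp_studio_range(y, cb, cr):
    """Clamp YCbCr planes to studio range; full-range sources
    that use the whole [0..255] interval get crushed at the ends."""
    return (np.clip(y, Y_MIN, Y_MAX),
            np.clip(cb, C_MIN, C_MAX),
            np.clip(cr, C_MIN, C_MAX))

# A full-range luma ramp loses its darkest and brightest codes:
ramp = np.arange(256, dtype=np.uint8)
y, cb, cr = clamp_studio_range(ramp, ramp, ramp)
print(int(y.min()), int(y.max()))  # 16 235
```

On a full-range source like much of the Derf set, this kind of clamp is exactly what would crush the darks that jagabo observed.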
2. The color spaces are initially "mangled" by a default FFMPEG color space conversion that takes place when we create the YUV files from the y4m content. So we can try a -c:v copy sort of thing if the color spaces match, or we can accept y4m content in the file-based version of our pre-processor.
Just touching base to let you know we are taking your comments seriously and are working the issues.
Some more breaking news - we have rendered the Xiph Netflix sequences, and have gotten an average 37% file size reduction at equal QP. Better news than the results I got with sophisticles' data set (but you can't win them all!).
This technology adds another 20% to MPEG-1, MPEG-2, AVC, HEVC, VP9, (your codec here) and uses no filtering techniques whatsoever.

jagabo, lordsmurf, of course the idea is similar in principle to noise reduction, except that the "noise" to be removed is based on a human visual model operating in the decorrelated 3-D transform domain. So the removal is far more targeted and effective than is possible with filtering, which cannot provide the same level of discrete access to the individual basis vectors.
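As an illustration only (not ZPEG's actual algorithm), here is a minimal NumPy sketch of what "discrete access to the individual basis vectors" of a decorrelated 3-D transform could look like: take a 3-D DCT of a spatio-temporal block and zero the coefficients a visual model deems invisible. A flat threshold stands in for the human visual model here.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis as an n x n matrix.
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def transform3d(block, m, inverse=False):
    # Apply the 1-D transform separably along all three axes.
    t = m.T if inverse else m
    for axis in range(3):
        block = np.moveaxis(np.tensordot(t, block, axes=(1, axis)), 0, axis)
    return block

def suppress(block, threshold):
    # Zero every transform coefficient below the threshold, then
    # invert. Assumes a cubic (n x n x n) spatio-temporal block.
    m = dct_matrix(block.shape[0])
    c = transform3d(block.astype(float), m)
    c[np.abs(c) < threshold] = 0.0
    return transform3d(c, m, inverse=True)
```

With a zero threshold the round trip is lossless (the transform is orthonormal); raising the threshold discards low-energy basis vectors individually, which is the kind of per-coefficient control the post contrasts with spatial filtering.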
But the crème de la crème is when you make this absurd statement:
The point is that we are able to find and remove redundancies that motion-estimation compressors (H.264, H.265, VP9) inherently cannot. This is due to the way that they extract redundancy, which is to find a motion compensation vector then record the difference or error term.
Perhaps you should have considered sanity checking the claims you were going to make before you started this thread.
sophisticles, I am more than willing to work with you. I processed the one file you gave me, and got unimpressive results. I discussed the results and their implications openly.
I have modified the demo site to accept files up to 4 GB in size from web links. Should that not be adequate, I can give you access to a server that enables CLI usage of the pre-processor.
But the process is not for the newbie, of course, and you do make me nervous when you ask a question like "What is QP?" It is the standard unit of quantization for x264 and x265 that is used to measure quality (in terms of error) and has a logarithmic relationship to expected bandwidth. It's not a perfect predictor, but it does average out pretty well. A decent explanation is to be found here: http://slhck.info/articles/crf. I do have a PhD, in video compression. If I speak over your head, it is neither to evade nor to obscure; it's just my style. This technology has been well-documented on my site and in my publications, which are easy to find if you are interested.
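To make the "logarithmic relationship to expected bandwidth" concrete: a common x264/x265 rule of thumb (a heuristic, not a claim from this thread) is that the quantizer step size doubles every 6 QP, so bitrate roughly halves for every 6-QP increase.

```python
def bitrate_ratio(delta_qp):
    """Approximate bitrate multiplier when average QP rises by
    delta_qp, using the 'halve per +6 QP' rule of thumb."""
    return 2.0 ** (-delta_qp / 6.0)

# If pre-processing lets the encoder sit 2 QP lower at the same
# bitrate, equalizing QP implies roughly this bandwidth saving:
saving = 1.0 - bitrate_ratio(2)
print(round(saving, 3))  # 0.206, i.e. about a 21% saving
```

This is only an average-behavior heuristic; actual savings depend on the content and the encoder's rate control.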
Honestly, current video processing algorithms make almost no use of a human vision system model, so I believe there is a large area where new pre- and post-processing could improve video compression - if your technique is of that kind. I have no clue, and since I can't test it in a comfortable way I'll step away from the thread, but anyway I strongly support all people trying to do something useful, so thumbs up and good luck - have a healthy distance.
pandy, thank-you for your encouragement and kind words. You are welcome to test on the demo site at zpeg <dot> com. If you email me from the contact page, I will respond.
[Attachment 38864 - Click to enlarge]
[Attachment 38865 - Click to enlarge]
Those were created by VideoScope() in AviSynth. On the bottom is a waveform graph of the luma channel. On the top right is a vertical waveform graph of the U channel (the V channel has the same -1 drop). On the bottom right is the UV vector plot.
jagabo, I understand that these stumbles we are making, and will continue to make, are not confidence-inspiring. I think the right way to look at this is we are asking experts such as yourself to help us understand our mistakes in a semi-friendly atmosphere. And to that end I will point out that the addition of ffmpeg to our processing chain is recent, and was done to support automated processing of user-provided content.
As I pointed out, I am responding to your considered comments to share with you the current progress in our analysis of the issues you have identified. Considering the effort you have put into this, it is the very least I can do. Now, I have identified a reason why Y plane clipping takes place at 16 - but you tell me it's at 20. I have not yet gotten to the bottom of that difference, but I have explained 80% of the issue! The additional 4 units may be the calculated optimal quantization, but I just can't say at this point in the work. I will update as we progress in our analysis.
Last edited by El Heggunte; 9th Oct 2016 at 17:56. Reason: clarity
El Heggunte, for those who have a plan to use the binary, they may have access to it on a server I have stood up. Just email me and have a reasoned dialog, and I will give you a login... I don't want to waste your time or mine.
If I make you nervous when I ask "what is QP?" then that makes us even because it makes me nervous when you reply with a link that explains how CRF works.
I want you, using your Ph.D.-trained brain, to explain to me what you think QP is and why the values generated by x264 and x265 support your claims.
Oh, and feel free to "speak over my head", I have a ladder in my garage that I can stand on if need be.
LOL @ this guy. He claims to have a Ph.D. in video compression, but instead of working for Google, Microsoft, Intel or any of the other big players in video compression, or selling his software to them, he admits he's working on it in his garage and spends his time on this forum trying to convince us of its viability.
Just too funny!
ffmpeg conversion to YUV.
sophisticles, sarcastic humor as a weapon of discourse is unfortunately all too effective, but if you would actually like to hear what is spoken with an open mind, all I can do is try. CRF compression is based upon the QP quality measure, which I had thought was well-presented at the link I referenced. But such information is widely available and should be well-known to the experienced practitioner. Since QP is monotonically related to the error injected into a frame, the argument goes: reducing the pre-processed bandwidth to the point that the compressor uses the same QP value gives you the bandwidth at which the savings result in exactly the same error contributed by the compressor.
As to me personally, you can find out all you would like to know by a quick google search and inserting a space in my handle. My thesis has been published, as disclosed above.
jagabo, I am relaying the interim results reported to me by my associate, who owns this effort. He reports that an unexpected ffmpeg conversion introduced the change. We are now removing ffmpeg from the equation and will look further. And the yuv files I provided have already been processed through this change, so that's not evidence (I think, if I understand you correctly). In any event, we will be delving into this until we get to the bottom. If you would like to be more hands-on, we would welcome your help! (email me...)
ffmpeg. That file has a correct black level and black chroma channels (just look at a hex dump, it's obvious). I thought that you then ran "raw" through your program to produce "processed" which has bad levels and colors, as I pointed out earlier. If this is the case, your program screwed up, not ffmpeg.
I'm referring to the files linked to in post #36.
Last edited by jagabo; 10th Oct 2016 at 12:13.
Gravitator, thank you for taking a look at the site. The process I use to justify my compression advantage numbers is "compress until equal average QP". What that means is:
1. Compress the source video to, say, 6 Mb/s. Take the average QP.
2. Pre-process the video. Compress the video from the pre-processed source and take the average QP.
3. Algorithm 1 - The QP after preprocessing is always lower (higher quality) than the QP of the raw compression. Calculate the expected bandwidth advantage from the difference in QP.
4. Algorithm 2 - Repeatedly compress the pre-processed file at different bandwidths until the average QP is equal to the raw QP.
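Step 4 above is essentially a one-dimensional search over bitrate. A minimal sketch, where `average_qp` is a hypothetical callback standing in for "encode at this bitrate and parse the average QP from the encoder log":

```python
import math

def match_bitrate_to_qp(average_qp, target_qp, lo, hi, tol=0.05):
    """Bisect on bitrate (kb/s) until the encode of the pre-processed
    source reaches the raw encode's average QP. average_qp(bitrate)
    must be monotonic: QP falls as bitrate rises."""
    while hi - lo > 1.0:            # stop within 1 kb/s
        mid = (lo + hi) / 2.0
        qp = average_qp(mid)
        if abs(qp - target_qp) <= tol:
            return mid
        if qp > target_qp:          # too coarse: give it more bits
            lo = mid
        else:                       # too fine: take bits away
            hi = mid
    return (lo + hi) / 2.0

# Toy model encoder: QP drops ~6 per doubling of bitrate.
model = lambda kbps: 51.0 - 6.0 * math.log2(kbps / 100.0)
result = match_bitrate_to_qp(model, 30.0, 100.0, 10000.0)
print(round(result))
```

In practice each `average_qp` call is a full encode, so the bisection cost is a handful of encoder runs per clip.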
Latest news: We think we have fixed the color shift and black level issues, and will make the fix available on the demo page shortly.
It's too bad that one of the mods deleted my post about you Raymond, but hopefully they will let this one stand, as I intend to make it very mild.
Attention folks, this guy, as I already pointed out, is a scammer. He invited me to google his name if I wanted to learn about his educational background. I did as he invited me to do; see for yourselves:
He claims he has been with ZPEG.com for 30 years. I trust I don't need to point out the following:
The first web site was home.cern in 1991, though there were a few .com domain names registered as far back as the mid-'80s; as near as I can tell, ZPEG.com was not one of them.
More importantly, he presented the ZPEG technology as new in this thread and made some outlandish claims about it working with HEVC, AVC, VP9 et al. Even setting aside the fact that, according to his bio, he has been working on ZPEG for 30 years, while 30 years ago AVC/HEVC/VP9 not only didn't exist but weren't even a gleam in a developer's eye, his technology failed to perform as he claimed when I gave him the chance to prove his claims.
He promised to provide a binary for testing and he reneged; the test file I provided, he claimed, was somehow rigged, saying that I "got him".
But I am a very reasonable guy and willing to admit when I'm wrong so I'm going to give this "Raymond" fellow a chance to redeem himself.
Raymond, buddy, I am going to give you a layup, a real softball challenge, you should be able to hit this one out of the park with ease.
Download the y4m version of Tears of Steel from here:
It's a massive 60+ GB download that is compressed using .xz compression; upon inflation it's about 180 GB. The file has never been lossily compressed: they took the raw footage, shot in OpenEXR half-float format, imported it into Blender, edited it and added special effects, and exported it in y4m format.
Since it's never been previously compressed, it should have ample "redundancies" for your software to find and eliminate without any quality loss, as you claim your software is capable of.
Take the original and the one you process with your software; the processed version should be at least 30% smaller while still being raw y4m.
Then take each version and encode them using x264+medium+crf 18; again, one would expect the version encoded from the processed source to be 30% smaller.
How fair is that? I'm giving you an easy opportunity to show me up, do you take the challenge?
Not that there's anything wrong with a challenge, but here it's weakened by a case of sophisticles calling the kettle black, or an apparent lack of a time frame for challenge completion.
Last edited by hello_hello; 17th Nov 2016 at 13:57.