We have the following competitors this year:
# XviD (MPEG-4 ASP reference codec)
# MainConcept H.264
# Intel H.264
# x264
# AMD H.264
# Artemis H.264

Please find the results here:
http://www.compression.ru/video/codec_comparison/mpeg-4_avc_h264_2007_en.html
Direct link to PDF:
http://www.compression.ru/video/codec_comparison/pdf/msu_mpeg_4_avc_h264_codec_compari...n_2007_eng.pdf


Also see the FAQ draft for this comparison:

----------------------------------------------------
Q. What was the main goal of the comparison?

A. The main idea of this annual comparison is the official involvement
of commercial companies, and thus an increasing number of
participants. We have already involved Intel, AMD, ATI, Apple
(partially) and smaller companies: Fraunhofer, Elecard, Ateme, VSS,
MainConcept, Sorenson, Artemis, etc.

Q. Why official? You can get a codec from a developer's site or (if
unavailable) from P2P networks and compare them successfully!

A. This is not true. For example, a key feature of our comparisons is
testing in several application areas (videoconferencing, SD,
HDTV). Different areas require different codec settings. The
simplest strategy, using default settings, definitely works, but it
is just as definitely non-optimal (in terms of the quality-bitrate
balance). So we received tuned settings from the developers to test
each codec with higher accuracy. Also, every year we change the test
sequences in an unpredictable way to prevent developers from tuning
to specific sequences.

Another problem is speed. The default settings of one codec can make
it 10 times slower than another one, and then all your quality
measurements will not be so meaningful. Of course, we could select
suitable parameters with comparable speeds ourselves, but then
anybody could say "you selected the wrong parameters for our codec,
so you got wrong results"; that is why we do not use this approach
in the annual H.264 comparison. Our current solution is not
sophisticated: we simply asked the companies for the settings they
consider optimal for the predefined application area and the
predefined time limits. The first step of qualifying a codec for
the comparison is time measurement: we report the time measured on
our system back to the developers with comments like "you have 50%
overtime, please make your preset faster" or "your time reserve is
still 40%, you can make your preset slower, but with better
quality/rate results".
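
To make the acceptance arithmetic concrete, here is a minimal sketch
of that feedback step (the function name and the exact formula are my
assumptions; in practice this is manual correspondence, and only the
percentages come from the examples above):

    def speed_feedback(measured_s, limit_s):
        """Express a measured encoding time as overtime or reserve
        relative to the time budget for the application area."""
        ratio = measured_s / limit_s
        if ratio > 1.0:
            return ("you have %.0f%% overtime, please make your "
                    "preset faster" % (100 * (ratio - 1)))
        return ("your time reserve is still %.0f%%, you can make "
                "your preset slower, but with better quality/rate "
                "results" % (100 * (1 - ratio)))

    # 150 s measured against a 100 s budget -> "50% overtime".
    print(speed_feedback(150.0, 100.0))
    # 60 s measured against a 100 s budget -> "40% time reserve".
    print(speed_feedback(60.0, 100.0))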

Another advantage of official contacts: we typically receive the
newest internal versions, which are not planned for public release
any time soon (up to half a year later, in the case of one of our
participants).


Q. Fine. But why is there such a long delay between the start of the
comparison and the publication?

A. Several reasons:

1. As we said above, we have a slow speed-acceptance procedure that
requires correspondence with the companies.

2. Next, when possible, we try to give companies a chance to fix
clear bugs. We receive fresh research versions, so some bugs are
possible (this was especially true for the second annual
comparison). It can be a very simple problem, like an "unlucky
build" with a quick fix. So if we have a time reserve, we stop the
measurements, report the bug and give the company a chance to fix
it. Of course, we wait only a very short time (2-3 days) with a
clear final deadline, but this procedure also adds a delay.

3. We also have a measurement-approval procedure. After all
measurements are done and internally checked, we provide the
measured results for a company's codec, together with the test
sequences, to the corresponding company, so every company is able
to check our measurements with its own codec. We have published our
measurement tool (MSU Video Quality Metric) and provide our results
in its format, so all results can be verified (a rough sketch of
such a verification follows this list).

4. The next step is report preparation (short and long versions,
plus versions for the private comparison). A report can be up to
200 pages, and we prepare several reports, so this step is also not
so fast.

5. The final step is report verification and review. We provide the
report draft to all participants for verification. They report
defects and, commonly, good suggestions. If we think a suggestion
is very important, we can add an appendix to the comparison with
additional measurements or clarifications. This also takes time.
The last step is an independent review by very good independent
experts in H.264 codecs.
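
As a rough illustration of the verification in step 3, here is a
minimal sketch that recomputes per-frame luma PSNR between a source
sequence and a decoded stream (the file names and frame size are
placeholders, and this does not reproduce the MSU Video Quality
Metric tool's actual file format):

    import numpy as np

    def psnr(ref, dec):
        """Standard PSNR in dB between two 8-bit planes."""
        mse = np.mean((ref.astype(np.float64) - dec.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

    def luma_frames(path, width, height):
        """Yield Y planes from a raw YUV 4:2:0 (I420) file."""
        frame_bytes = width * height * 3 // 2  # Y plane + subsampled U, V
        with open(path, "rb") as f:
            while True:
                frame = f.read(frame_bytes)
                if len(frame) < frame_bytes:
                    break
                y = np.frombuffer(frame[:width * height], dtype=np.uint8)
                yield y.reshape(height, width)

    # Placeholder file names: a company would use our published test
    # sequence and its own decoded output at the same resolution.
    pairs = zip(luma_frames("source.yuv", 720, 576),
                luma_frames("decoded.yuv", 720, 576))
    for i, (ref, dec) in enumerate(pairs):
        print("frame %d: %.2f dB" % (i, psnr(ref, dec)))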

Q. OK. The situation with time is clear. Is it possible for a company
to have some of its results removed if it asks for this?

A. We now avoid such situations by offering so-called private
participation.

The motivation is pretty simple: at the start, without knowing its
real position against the competitors, a developer wants to
participate in the public comparison. But after receiving the
results, if it is not first place or thereabouts, the developer
does not want to see these results published. So if we allowed
removal, there would be only one codec left in the comparison.
Maybe two (somebody and x264). So we now forbid such removal, but
allow private participation.

Some companies did not want to participate because of not-so-good
current results that they knew about beforehand. In private
participation, a company pays us and receives its own personal
report. That company's codec is not included in the public
comparison at all, so even when a company pays us, we prevent any
conflict of interest. A company can see where it stands and do
further work on improving its codec, without its current results
being published and well known (the annual comparisons are very
popular: we now have more than 200,000 total downloads).

Q. And the main question: which codec is the best?

A. We have perhaps the most information about the best codecs in the
world, and we really do not know. First, please specify the usage
area, the speed and other requirements, etc., etc., etc. We can say
which codec is better for some common usage scenarios.

Another of our objectives is to help codec developers make their
codecs better and better. Commonly, developers achieve some really
good results and believe that this is the absolute limit, but we
help them see new limits.

---------------------------------------------------------

And again: any updates, fixes and suggestions are welcome!