I am in the market for a consumer level camcorder. I have been researching various models and technologies for weeks. I have gone through spec sheets on all of the current models of all of the major players (and some of the minors).
I am a very big fan of canon digital cameras. I have owned many and they always produce superior quality images. Naturally I gravitated to canon camcorders assuming the quality would be in the same range. From everything I read, top image quality is between canon and sony. good enough.
I am obviously looking at an HD camcorder. It is not a fad - got it.
The avchd format used by (it appears) all of the new model hd camcorders leads me to part of my question here. My understanding is that avchd is akin to mpeg4 compression. In the digital world, all compression is bad as it relates to an algorithm used to throw data away to make the file smaller. Just like having the ability to pull "raw" images off my canon digital camera, if only there was a way to capture "raw" video for editing from a camcorder, that would be the golden cow right there, right? not going to happen at a consumer level - got it.
I have read several opinions stating that HDV to tape actually produces superior quality results. The basis of this argument is that the HDV format is akin to mpeg2 compression.
so part 1 of my question is: is mpeg2 better because it compresses the original data less than mpeg4? Or is mpeg4 so much better a codec that the loss experienced in the compression does not produce any worse results than the older mpeg2 format?
What got me on this track was finding the JVC GZ-HD5. To look at the specs of this thing, you would think they really had this figured out. All the ratings I have read unfortunately tell a different story.
The avchd spec claims to have a top bit rate of 24mbps. There is only one brand on the market that meets those specs: canon. And from what I read, they are performing a little voodoo to get to that spec. The obvious benefit of recording at the max possible bit rate is a (technically) more detailed image. Like I always say about digital cameras with multiple resolution settings.. why would you ever think that the picture you are taking is not worth capturing at the max resolution? If you don’t need that detail, throw it away later when you are editing. You can’t make up the data later (without interpolation).
Looking at the JVC, it claims to record 1080/60p at 30mbps max (26mbps average). Holy crap, there are tons of prosumer cameras that can’t meet those specs. What apparently lets this camera perform this video miracle is its recording format, mpeg2-ts. If you look at the newer JVC models which have switched over to the avchd (almost a) standard, their cameras drop down to 1080/60i at 17mbps like all the rest.
So part 2 of my long winded question is: is avchd really better? I can apparently do anything under the sun with the video in editing. There are tons of apps/plugins that will let me convert to 60p or 30p or 24p and apply any number of effects to the video. What they can’t do is create more data in the stream (without interpolation). And as you know, interpolation does not create detail, it takes the cruddy detail you have and makes it bigger.
I would love to hear other opinions on this. I do not have experience with these devices. I am only going by the info I have been able to research. Who knows, just like science, half of what I think I know may be wrong..
Thanks
Originally Posted by zieske
With either MPEG2 or 4, you can choose a compression/quality trade off. So what you see on screen can be the same.
As you can get a PNG image identical to a BMP in 1% of the size, because it's a smarter format.
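A quick way to see that for yourself, if you're curious — a rough sketch assuming Pillow is installed, with "photo.jpg" standing in for any image you have handy:

```python
# Rough sketch: same pixels, two lossless containers, very different file sizes.
# Assumes Pillow is installed and "photo.jpg" is any image you have lying around.
import os
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
img.save("test.bmp")                  # uncompressed bitmap
img.save("test.png", optimize=True)   # lossless, but a smarter format

for name in ("test.bmp", "test.png"):
    print(name, os.path.getsize(name) // 1024, "KB")
# Decoded back, both files give you the exact same pixels --
# the only difference is how cleverly the bits were packed.
```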
A more efficient encoding probably requires more intensive computation to encode or decode, but with Moore's Law this is not a problem. -
MPEG-4 AVC is (potentially) dramatically more efficient than MPEG-2, but realising that advantage in real-time on a consumer device isn't possible (yet).
So forget about the specs, and how it does it, and just look at the actual picture quality each particular camera can deliver.
Cheers,
David.
P.S. If a camera recorded 1920x1080p60 at 26Mbps, then all other things being equal, you'd get about as many compression artefacts as you'd get recording 1920x1080i60 at about 18Mbps, or 1440x1080i60 at about 14Mbps. In comparison, HDV records 1440x1080i60 at 25Mbps. So, all other things being equal, 1920x1080p60 26Mbps MPEG-2 will have more visible artefacts than 1440x1080i60 25Mbps HDV (albeit on a higher resolution, progressive image). Are "all other things equal"? Probably not. For one thing, HDV is CBR, while it sounds like this other camera is using VBR. More importantly, there is lens-, sensor-, and processing technology involved.
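To put very rough numbers on that P.S. — a back-of-the-envelope sketch only, ignoring the interlaced-vs-progressive coding-efficiency point, so it won't exactly reproduce the equivalences above, but it makes the HDV comparison obvious:

```python
# Back-of-the-envelope bits-per-pixel for the modes in the P.S. above.
# Pure pixel-rate arithmetic: ignores that interlaced video codes less efficiently.

def bits_per_pixel(width, height, rate, mbps, interlaced):
    pixels_per_sec = width * height * rate
    if interlaced:
        pixels_per_sec /= 2      # 60 fields/s carry half the pixels of 60 frames/s
    return mbps * 1_000_000 / pixels_per_sec

modes = [
    ("1920x1080p60 @ 26 Mbps", 1920, 1080, 60, 26, False),
    ("1920x1080i60 @ 18 Mbps", 1920, 1080, 60, 18, True),
    ("1440x1080i60 @ 14 Mbps", 1440, 1080, 60, 14, True),
    ("1440x1080i60 @ 25 Mbps (HDV)", 1440, 1080, 60, 25, True),
]
for label, w, h, r, mbps, il in modes:
    print(f"{label}: {bits_per_pixel(w, h, r, mbps, il):.2f} bits/pixel")
```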
2bdecided,
Yes, I agree with the image quality statement. The issue with the JVC I quoted is that all reviews say the image is not that great. What is the point of having this level of technology if you put a $10 ccd on the camera?
What really interests me is the info in your PS. can you expand on what you are saying there? Mathematically, I can understand your point about the probability of catching more artifacts if you start with more data, but mentally my reaction to that is: yeah but I have so much more data that I can afford to throw away during editing. My mind is still in the digital still world. If I start with a huge pic and make it smaller, it looks really good. If I start with a small pic and try to make it bigger, I end up with a really big crappy pic. -
Both formats are poor acquisition formats but good distribution formats. So, which has the most appropriate/acceptable tools available to you (the definition of which depends on exactly what 'you' means)? A great distribution format may be a lousy choice for editing.
John Miller -
any editing of video requires decompression of the stream. I am not sure of the relevance as it is just as easy to find wintel based utilities to edit m2t based files as it is to edit avchd files. everything has its drawbacks and the editing process itself will likely cause more damage than the original compression did.
I have no intention of trying to edit on the fly or on camera. I am also not a pro (obviously) so it isn't like I am going to drop 20G on a hardware/software editing solution.
I am just trying to think through the technology logically to come up with pros and cons to steer me in a direction. As far as I can tell, none of the manufacturers have "the" solution in a consumer level device. So without that, I am trying to decide what will best serve me for the next x years of use.
I hope I didn't totally misunderstand your comment. -
John made the most important practical point: AVCHD takes more computing power to decode, so it's painful to edit.
You can't edit lossless files.
You can edit near-lossless files, but near-lossless editing codecs cost money. If you intend to use one, then for editing there's no difference between HDV and AVCHD.
You can edit HDV. You can even do it losslessly with some editors (i.e. parts that you don't change go through the edit as bit-perfect copies). With any other output format, the input files can be HDV.
You can edit AVCHD, not losslessly yet (AFAIK), but here's the problem: if the input files are AVCHD, and you do a cross fade or picture-in-picture etc, then when you preview that section, your PC has to decode two AVCHD streams at once, and process them. You need a very powerful PC.
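For what it's worth, one common workaround (a sketch only — it assumes ffmpeg is installed, and the filenames are placeholders) is to transcode the AVCHD files to an intermediate that's cheap to decode before editing; the downside is exactly the one described above, enormous files:

```python
# Sketch: convert an AVCHD clip to a free lossless intermediate that is much
# lighter to decode than AVC. Assumes ffmpeg is installed; filenames are placeholders.
# Expect the output to be many times the size of the original .mts file.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "clip.mts",         # AVCHD file straight off the camcorder
    "-c:v", "huffyuv",        # lossless video, simple to decode while editing
    "-c:a", "pcm_s16le",      # uncompressed audio
    "clip_edit.avi",
], check=True)
```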
As for the question you asked me: in the still picture world, the equivalent would be choosing between a 1Mpixel image at 100kB JPEG or a 2Mpixel image at 100kB JPEG. The latter has more pixels, but is much lower quality, because 100kB isn't enough for a decent 1Mpixel JPEG image, never mind a 2Mpixel one!
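In the same back-of-the-envelope spirit, the 100kB budget works out like this:

```python
# 100 kB spread over 1 Mpixel vs 2 Mpixel (rough -- real JPEG behaviour is more complex).
budget_bits = 100 * 1024 * 8
for megapixels in (1, 2):
    print(f"{megapixels} Mpixel JPEG at 100 kB: "
          f"{budget_bits / (megapixels * 1_000_000):.2f} bits/pixel")
# Same file size, half the bits per pixel -- the extra pixels come at the cost of quality.
```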
Cheers,
David. -
uhhhg.. you are making my brain hurt.
If you have two different compression techniques whereby one can compress a 1 megapixel image to 100k and the second can only compress it to 200k, when the image is uncompressed, it is still the same image (you know, depending on how good the compression algorithm is.. blah, blah).
If on the other hand you use the same compression to turn a 1 megapixel file into 100k and a 2 megapixel file into 100k then something additional was thrown away from the 2m file, like color depth.
If (all things being equal) you capture two hd 1920x1080 streams, one at 17mbps and the other at 26mbps, then the 26mbps file should be bigger.
so just as an aside, I found a review that tells me more about the camera operation than jvc will. while the camera captures at 60p, it has a dedicated asic to downconvert to 60i before it writes the file. you can get the 60p image if you hook up the camera using hdmi and it will upconvert on the way out. so to get the file using usb, you get the 60i data. funnier still is that the combination of the 3 ccds in this camera only add up to 1.6m. so they are doing interpolation (sorry, pixel-shifting) to get to 1920 x 1080 resolution.
ok, so I started in my first post saying that the consensus was this camera was no good. but let's follow through with your point.. Sony says it captures full hd 60i at 17m avchd using a 4mp ccd at a rate of 8.25GB/hour on its new xr series. Canon says it captures full hd 60i at 24m avchd using a 6mp ccd at a rate of 10.5GB/hour on its new hfs series. whereas the jvc is capturing interpolated full hd 60p at 26m mpeg2-ts using a 1.5mp ccd at a rate of 12GB/hour.
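Those GB/hour figures drop straight out of the bitrates, by the way — a quick sanity check (small differences from the quoted numbers come from audio, container overhead, and average vs. peak bitrate):

```python
# Quick sanity check: GB per hour of recording from the video bitrate alone.
# Differences from the quoted figures: audio, container overhead, avg vs. peak bitrate.

def gb_per_hour(mbps):
    return mbps * 1_000_000 * 3600 / 8 / 1_000_000_000   # bits/s -> GB per hour

for camera, mbps in [("Sony XR, 17 Mbps avchd", 17),
                     ("Canon HF S, 24 Mbps avchd", 24),
                     ("JVC, 26 Mbps mpeg2-ts", 26)]:
    print(f"{camera}: ~{gb_per_hour(mbps):.1f} GB/hour")
```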
Thanks for making me think through that. it all makes a little more sense. math is your friend.
As to your other point, I have seen the system requirements/recommendations for editing avchd (dual quad core with 8gb and a fast hard drive). I don't have that yet which was one of the other reasons I thought the older technology stuff might be a little more convenient.
thanks for all the comments. -
Originally Posted by zieske
Which means loss of high frequency details, ringing, and blockiness.
As you say, the sensor numbers alone don't look great for that camcorder!
Cheers,
David. -
Originally Posted by zieske
Like MPEG-2, AVC wasn't really intended as an editing format, but it is currently offering the best quality video at the lowest bitrate.. at least versus anything not still in a lab somewhere.
The AVCHD format itself is a descendant of Sony's professional XDCAM, as well as the Blu-Ray specification.
Originally Posted by zieske
What you don't like is visible compression... can you see compression artifacts? How do they compare, one CODEC to another? The big win of AVC is that, when well encoded, it's 2-3x the "coding efficiency" of MPEG-2. In other words, I can get the same quality with 1/2 to 1/3 the bitrate using AVC vs. MPEG-2. This isn't even open to debate... compare the fiasco of the early Blu-Ray discs, still using MPEG-2, with the early HD-DVD discs, all based on Microsoft's VC-1 CODEC (VC-1's coding efficiency is between AVC and MPEG-2, but closer to AVC).
The deal with camcorder AVC is simple: AVC is still a new technology, MPEG-2 is old, and thus, mature. Your camcorder isn't delivering the best possible AVC compression, it's delivering whatever the on-board video DSP can manage to deliver in realtime. In older AVCHD camcorders, and perhaps still in off-brands, video suffered due to the algorithms employed, not AVC itself. By most accounts, recent AVC camcorders with higher bitrates rival HDV. Going to 24Mb/s, they're pretty close to HDV's 25Mb/s anyway (though most of the time, the AVC camcorder is doing a full 1920x1080 vs. HDV's 1440x1080). My experience is that it can actually be better, but there are also conditions that show up more artifacts with AVC (though I'll admit most of my experiences are with a somewhat older AVC camcorder, compared to two HDVs I also use).
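If you ever want to see that coding-efficiency gap for yourself, here's a rough sketch (filenames are placeholders, and it assumes an ffmpeg build with libx264 — i.e. a slow offline software encoder, which is exactly the luxury a camcorder's realtime DSP doesn't have):

```python
# Sketch: encode the same source as MPEG-2 at 25 Mbps and as AVC/H.264 at roughly
# half that, then compare the two by eye. Assumes ffmpeg with libx264 is installed;
# "master.avi" is a placeholder for whatever high-quality source you have.
import subprocess

src = "master.avi"
subprocess.run(["ffmpeg", "-i", src, "-an", "-c:v", "mpeg2video", "-b:v", "25M",
                "out_mpeg2.ts"], check=True)
subprocess.run(["ffmpeg", "-i", src, "-an", "-c:v", "libx264", "-preset", "slow",
                "-b:v", "12M", "out_avc.mp4"], check=True)
# Given the time a slow offline x264 encode takes, the AVC file at half the bitrate
# should hold up at least as well as the MPEG-2 one.
```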
Originally Posted by zieske
The real answer is that AVCHD is only just recently starting to compare in quality, on camcorders, to HDV.. that doesn't always mean the same; it means sometimes better, sometimes worse. As the encoding technology improves, AVC should be expected to increasingly best HDV, and at increasingly lower bitrates. You can see this in action on Blu-Ray discs.. but I know, when I encode HDV for Blu-Ray, I spend 3-4 hours per hour of video, or more.. on a Q6600 Quad core processor. You're not getting that kind of power on a camcorder yet.. AVC is pretty complex to do well. But eventually, sure.
Originally Posted by zieske
Lowering bitrate isn't a matter of lowering pixel rate.. it's a matter of increasing the low pass filtering of the video. All DCT (Discrete Cosine Transform) CODECs, including MPEG-2 and AVC, work their magic along similar lines. They run the transform on specific blocks of video (in MPEG-2 it's usually 16x16, in AVC it's more flexible), which transforms spatial data to frequency data. Then they run a low-pass filter, chopping out the high frequency data, and then a lossless compression, like Huffman encoding. The key here is the LPF... as you do more filtering, you get smaller data out of the lossless routines, but you also eliminate the high frequency data... sharpness. There's no way to get that back.
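A toy version of that pipeline, if it helps — just a sketch using SciPy's DCT on one made-up 8x8 block (real encoders quantize the coefficients rather than zeroing them outright, and the block sizes vary as described above):

```python
# Toy version of the DCT step described above: transform an 8x8 block, drop the
# high-frequency coefficients, transform back, and see what survives.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

x = np.arange(8, dtype=float)
smooth = np.tile(x * 16, (8, 1))                            # gentle horizontal gradient
detail = np.random.randint(-20, 20, (8, 8)).astype(float)   # fine "texture"

for name, block in (("smooth block", smooth), ("detailed block", smooth + detail)):
    coeffs = dct2(block)
    mask = np.zeros_like(coeffs)
    mask[:4, :4] = 1                 # keep only the lowest 4x4 frequencies
    err = np.abs(block - idct2(coeffs * mask)).max()
    print(f"{name}: max error after dropping high frequencies = {err:.1f}")
# The smooth block comes back nearly untouched; the fine texture does not.
```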
-Dave