OK, well, I usually convert videos using MediaCoder or Any Video Converter Professional. Everything was fine until recently: when I encoded videos in Xvid from a lower resolution (e.g. 640x272) to a higher one (e.g. 800x340), the result looked like a stretched version of the original video. Basically the quality isn't being preserved, and the file size is staying the same.
Any ideas? I kept the bitrate and everything else the same.
-
You resized it so it was stretched to the specified size.
What's the problem? -
Why are you doing this? Any re-encode will degrade the quality.
Donadagohvi (Cherokee for "Until we meet again") -
Hmm, well, when I used to do this it seemed to increase the file size by quite a bit and keep the same quality, just rebuilt into a higher-resolution video.
-
Enlarging a small frame to a larger size will not increase the quality. You will enlarge the defects in the source which will make them more visible. Then encoding with the same bitrate and a lossy codec will degrade the image and add more defects.
File size = bitrate * running time.
If you want a bigger file use a higher bitrate. If you want a smaller file use a lower bitrate. The lower the bitrate, the lower the quality. Larger frames require more bitrate. -
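Putting that formula into numbers (a quick sketch; the bitrates and running time here are just illustrative, and audio is ignored):

```python
# Quick sanity check of "file size = bitrate * running time".
# The bitrates and running time below are just illustrative.

def estimate_size_mb(bitrate_kbps, minutes):
    """Approximate video-stream size in megabytes."""
    total_bits = bitrate_kbps * 1000 * minutes * 60
    return total_bits / 8 / 1_000_000

print(estimate_size_mb(1000, 20))  # ~150 MB for a 20-minute clip at 1000 kbps
print(estimate_size_mb(2000, 20))  # ~300 MB -- double the bitrate, double the size
```

Note that the frame size never enters the formula: resizing alone does not change how big the file is, only the bitrate and running time do.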
I'm not talking about increasing the quality.
What I mean is that when I took a low-resolution Xvid and encoded it to a higher-resolution Xvid, shouldn't it keep around the same quality while increasing the file size (by quite a bit), because it should be a bigger file, right?
Instead I'm merely getting a stretched-looking video (like you would get by dragging the corners of the video) with almost the same file size. -
How else are you expecting it to increase the frame size without "stretching the video"? It can only work with what you have given it. If what you give it is smaller than what you ask of it, it will stretch it.
This is nothing you wouldn't normally do in a player by selecting fullscreen or a zoom option. Only you are adding yet another encode to it, yet again adding more artifacts, as has already been said. -
The quality can only get worse by reencoding. But to get a larger file (to keep the quality from degrading too much) use a higher bitrate.
-
As was explained earlier, the universal rule for digital video is :
Filesize = Bitrate X Running Time
You have two videos.
Video A is 1280 x 720, runs for 20 minutes, and has a bitrate of 1000 kbps
Video B is 640 x 360, runs for 20 minutes, and has a bitrate of 1000 kbps
Both have been compressed using 2-pass Xvid compression.
The file size for both videos is identical.
Now add to this story the fact that both videos were taken from the same source - for this example, a DVD Video. So the content is the same.
Video A will, in most cases, look far worse than video B because it has to spread the same bitrate over a much larger area. Added to this, video A has had the resolution enlarged to meet 720p resolution. In so doing, new pixels have had to be created where they weren't before, leading to a softer, blurrier image, and enhancing any compression issues that were in the original source.
Video B, on the other hand, has been reduced in size, and will in most cases still look reasonably sharp.
Video A would look even worse if enlarged from Video B, as it would have been created from an already compressed source with an even lower resolution.
Your thinking is flawed from the outset. Unless you are using some form of super-resolution process to enlarge the image, and are willing to use a much higher bitrate, you will be unable to maintain the same quality. Even with a super-resolution upscaling process, you cannot create details that were not there, and you will expose flaws in your source material.
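For what it's worth, the "stretching" you're seeing is essentially all a resize can do: interpolate new pixels from the old ones. A minimal sketch of the idea with Pillow (the filename is a placeholder, and the sizes match the example from the first post):

```python
# Upscaling only interpolates new pixels from existing ones; it cannot invent
# real detail. Requires Pillow; "small.jpg" is a placeholder filename.
from PIL import Image

src = Image.open("small.jpg")                  # e.g. a 640x272 source frame
big = src.resize((800, 340), Image.BILINEAR)   # every new pixel is a blend of old ones
big.save("stretched.jpg")                      # same picture, just softer and larger
```

An encoder's resizer does the same thing before compressing, which is why the result looks like the original dragged out to a bigger window.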
-
Alright, that makes sense. So it's better to keep the original unless I have the source.
Would it still be better to keep the original, or to try to upscale it with a bigger bitrate?
Then what about if I make it smaller? Would I lose a lot of quality too? -
The way I see it, the only reason to UPSCALE at all is probably when one needs to edit with mixed resolutions (then upscale using that "super-resolution" algorithm to get the SD material to fit better with the HD material). Otherwise, don't upscale in software. Don't DOWNSCALE either, unless you absolutely have to in order to fit a medium/bandwidth requirement.
Let hardware (settop DVD players, TV displays) do the upscaling automatically.
You usually know your final output--the display device. You want your source material to either match or start off higher-rez (and STAY that way) than the output. If you make something smaller to the point of it being lower rez than the display, it's gonna have to upscale again anyway (and lose quality in the process).
Scott -
There's at least one other time when it makes sense to upscale: If you have a hardware player (portable device, or Divx/DVD player) that upscales poorly you do better in software. Just be sure to use sufficient bitrate (or use constant quality encoding with a high quality setting) to keep from adding a lot more artifacts. Most modern players do a decent job of upscaling so this would be rare now.
-
Originally Posted by guns1inger
In other words, if you have a monitor with a resolution of 1280 x 720, would Video A play better than B (in terms of quality, always)? It would make sense to convert the video at the higher resolution (at the cost of degraded quality for a constant bitrate, as you highlight) rather than let the player enlarge it. -
Originally Posted by ee98vvt
-
It's a balancing act. By using a smaller frame size you trade resolution to get less macroblocking. Where the optimum is will depend on the particular video, the bitrate, your player, your TV, how far away you sit, and your tolerance for the different defects.
-
Thanks for the responses, people!
Let me re-phrase that.
Scenario:
You have a monitor with W:H resolution.
You need to convert a DVD movie to a compressed video (i.e. avi, mp4 etc.) of a fixed size S
You know that this size allows for a certain bitrate, which gives you good results up to a resolution of (0.5*W):(0.5*H).
Question:
Which is it better to encode at:
A. W:H
B. (0.5*W):(0.5*H)
I think both A and B degrade quality; but is it better to compromise quality during the encoding or during playback?! -
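One rough way to compare the two options before encoding is the bits-per-pixel heuristic; a sketch with made-up numbers (the bitrate, frame sizes, frame rate, and the roughly-0.2-bpp rule of thumb for Xvid are assumptions, not hard limits):

```python
# Bits-per-pixel: how much bitrate each pixel gets per frame.
def bits_per_pixel(bitrate_kbps, width, height, fps=25.0):
    return bitrate_kbps * 1000 / (width * height * fps)

budget_kbps = 1500  # whatever bitrate the fixed file size S allows

print(bits_per_pixel(budget_kbps, 1280, 720))  # option A (W x H):          ~0.065 bpp
print(bits_per_pixel(budget_kbps, 640, 360))   # option B (half W, half H): ~0.26 bpp
# For Xvid, something very roughly around 0.2 bpp and up tends to avoid heavy
# macroblocking; far below that, the smaller frame usually looks better.
```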
They sacrifice quality in different ways. In most cases I think you'll find the lower resolution video looks better.
-
Question, is there any way to determine the original resolution of an upscaled video without access to the source?
-
No. At least not in general. With certain types of test patterns you might be able to.
-
If the original resolution of the 480p video is thought to be 240p, can it be downscaled to 240p, then upscaled again and compared? The SSIM value should remain at 1.0000 if no details were lost, correct?
-
Originally Posted by Xpenguin17
My understanding is that the scaling algorithm involves rounding of values, so with rounding errors it's not lossless and definitely not reversible. I'm sure jagabo can explain it better. Also, there would be compression errors if you're not using lossless methods. -
"if no details were lost" -- there's the rub. Resizing generally involves loss of detail.
Take four numbers:
0, 100, 0, 100
Two peaks, two valleys, all equal sized. Now convert that into 5 numbers and retain two peaks and two valleys of equal size. Having problems? -
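To see the problem with those four numbers, here is the same resize done with plain linear interpolation in numpy:

```python
import numpy as np

src = np.array([0, 100, 0, 100], dtype=float)   # two peaks, two valleys
x_new = np.linspace(0, 3, 5)                    # 5 sample positions over the same span
resized = np.interp(x_new, np.arange(4), src)
print(resized)                                  # [  0.  75.  50.  25. 100.] -- the pattern is gone
```

The alternating peaks and valleys simply cannot be represented at the new sampling positions, so the "detail" is averaged away.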
I hear ya. Also, don't forget that compression artifacts in the upscaled video, like blocking, will be erased when scaling down, and SSIM will interpret this as a loss of detail. But is there a certain threshold where it should stop being ignored? E.g. 0.99700?
-
Originally Posted by Xpenguin17
I don't see a point in assigning an arbitrary value, or how to even derive a value that would be considered a "threshold".
I think the trends are more important and not necessarily absolute values. How are you measuring SSIM? If you are using various metric measurement tools or even x264, most of them do it slightly differently and are only approximations. (They don't use the full method as outlined in scientific journals.) -
Originally Posted by poisondeathray
P.S. How does VQM compare to SSIM? -
Transparency.
I use MSU, and they claim their implementation adheres to the standard.
https://forum.videohelp.com/topic371339.html
ssim.zip
SSIM (precise)
x264 0.9841
mcr 0.9854
Now there is clearly a mistake in this measurement tool. The mcr encode has smoothed away most of the grain compared to the original and the x264 encode, and yet its SSIM value is higher?? This is supposed to be an objective tool. I can post hundreds of other examples where it clearly fails in the accuracy department. It's usually better to use your eyes...
How does VQM compare to SSIM? -
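For anyone who wants to run this kind of check themselves, here is a minimal sketch using scikit-image's SSIM implementation. Like the tools mentioned above, it is only one of several slightly different implementations, so the absolute numbers won't match MSU or x264 exactly; the filenames are placeholders:

```python
from skimage.io import imread
from skimage.metrics import structural_similarity

ref = imread("source_frame.png", as_gray=True)   # frame grabbed from the original
enc = imread("encoded_frame.png", as_gray=True)  # the same frame from the encode

score = structural_similarity(ref, enc, data_range=1.0)
print(f"SSIM: {score:.4f}")                      # 1.0000 only if the frames are identical
```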
Originally Posted by poisondeathray -
Originally Posted by Xpenguin17
And how realistic is denoising everything?
What do you think grain is? It's essentially noise. To the algorithm or computer doing the calculations, there is no way to distinguish whether a dot is supposed to be noise (created by compression artifacts) or grain (present in the original). But here the tool is clearly wrong... and by a lot (and I can show you many more examples). Which encode do you think more closely resembles the original? It should be pretty obvious here, as I chose a pretty clear-cut example.
The other reason these metrics fail is the type of weighting used. Human perception usually rates sharpness and clarity as "higher quality" than smoothness and lack of detail, whereas the metric has difficulty distinguishing "types" of noise. It tends to "penalize" sharper pictures more than lack of detail, which is usually the opposite of the human perception of quality. Also, humans weight certain parts of the frame more heavily than the measurement tool does (e.g. you usually focus on faces and people rather than background stuff, because that's usually considered more important). But this is where the subjective and objective worlds collide, and there is lots of room for error.
I'm not saying SSIM / PSNR measures are useless, it's just that in the grand scheme of measuring "quality" they are problematic and not very useful when used just by themselves. They are more useful when looking at trends as a rough estimate, in combination with other methods. -
I agree that noise shouldn't be an issue. What's noise in one image is detail in another (a fuzzy sweater, a stucco wall, etc).
I think it would be an interesting experiment to take an image, upscale it, then downsize it to a range of different resolutions, and upsize it again -- then measure the differences between the final image and the original upscaled image. Graph PSNR or whatever metric you use against the downsized size. Will there be a peak at the original image's size? Is the peak consistent across different types of images? How sharp is the peak?
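A sketch of that experiment with Pillow and numpy, using PSNR as the metric; the filename and the range of candidate heights are assumptions:

```python
import numpy as np
from PIL import Image

frame = Image.open("upscaled_frame.png").convert("L")   # one frame from the upscaled video
W, H = frame.size
ref = np.asarray(frame, dtype=float)

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

for h in range(120, H + 1, 24):                         # candidate "original" heights
    w = max(1, round(W * h / H))
    small = frame.resize((w, h), Image.BILINEAR)        # downsize...
    back = small.resize((W, H), Image.BILINEAR)         # ...and upsize again
    print(h, round(psnr(ref, np.asarray(back, dtype=float)), 2))

# A peak (or a knee) in PSNR as h rises may hint at the resolution the video was
# upscaled from, but compression noise can easily blur that peak.
```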