VideoHelp Forum

  1. Member (United States, joined Jun 2009)
    OK, well, I usually convert videos using MediaCoder or Any Video Converter Professional. Everything was fine until recently: when I encode videos in Xvid from a lower resolution (e.g. 640x272) to a higher one (e.g. 800x340), the result looks like a stretched version of the original video. Basically the quality isn't being preserved, and the file size is staying the same.

    Any ideas? I kept the bitrate and everything else the same.
  2. Member AlanHK (Hong Kong, joined Apr 2006)
    You resized it so it was stretched to the specified size.
    What's the problem?
  3. Man of Steel freebird73717 (Smallville, USA, joined Dec 2003)
    Why are you doing this? Any re-encode will degrade the quality.
    Donadagohvi (Cherokee for "Until we meet again")
  4. Member (United States, joined Jun 2009)
    Hmm, well, when I used to do this it seemed to increase the file size by quite a bit and reproduce the same quality, rebuilt into a higher-resolution video.
  5. Enlarging a small frame to a larger size will not increase the quality. You will enlarge the defects in the source, which will make them more visible. Then encoding at the same bitrate with a lossy codec will degrade the image and add more defects.

    File size = bitrate * running time.

    If you want a bigger file use a higher bitrate. If you want a smaller file use a lower bitrate. The lower the bitrate, the lower the quality. Larger frames require more bitrate.
  6. Member (United States, joined Jun 2009)
    I'm not talking about increasing the quality.
    What I mean is: when I take a low-resolution Xvid and encode it to a higher-resolution Xvid, shouldn't it keep around the same quality while increasing the file size (by quite a bit), since it should be a bigger file?

    Instead, the video merely looks stretched (as if you dragged out the corners of the video), and the file size is almost the same.
  7. Member fitch.j (United Kingdom, joined May 2009)
    How else are you expecting it to increase the frame size without "stretching the video"? It can only work with what you have given it. If what you give it is smaller than what you ask of it, it will stretch it.

    This is nothing you wouldn't normally do in a player by selecting fullscreen or a zoom option. Only you are adding yet another encode, again adding more artifacts, as has already been said.
  8. The quality can only get worse by reencoding. But to get a larger file (to keep the quality from degrading too much) use a higher bitrate.
  9. Always Watching guns1inger (Miskatonic U, joined Apr 2004)
    As was explained earlier, the universal rule for digital video is:

    Filesize = Bitrate X Running Time

    You have two videos.

    Video A is 1280 x 720, runs for 20 minutes, and has a bitrate of 1000 kbps
    Video B is 640 x 360, runs for 20 minutes, and has a bitrate of 1000 kbps

    Both have been compressed using 2-pass Xvid compression.

    The file size for both videos is identical.
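
    To make that concrete, here is a rough back-of-the-envelope check of the formula (a sketch in Python, using the numbers from this example):

    Code:
    # File size = bitrate x running time, regardless of frame size
    bitrate_kbps = 1000               # same for Video A and Video B
    running_time_s = 20 * 60          # 20 minutes
    size_kbit = bitrate_kbps * running_time_s    # 1,200,000 kbit
    size_mb = size_kbit / 8 / 1000               # kbit -> kB -> MB (audio ignored)
    print(size_mb)                               # 150.0 -- identical for both videos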

    Now add to this story the fact that both videos were taken from the same source - for this example, a DVD Video. So the content is the same.

    Video A will, in most cases, look far worse than video B because it has to spread the same bitrate over a much larger area. Added to this, video A has had its resolution enlarged to meet 720p. In so doing, new pixels have had to be created where none existed before, leading to a softer, blurrier image and enhancing any compression issues that were in the original source.

    Video B, on the other hand, has been reduced in size, and will in most cases still look reasonably sharp.

    Video A would look even worse if enlarged from Video B, as it would have been created from an already compressed source with an even lower resolution.

    Your thinking is flawed from the outset. Unless you are using some form of super-resolution process to enlarge the image, and are willing to use a much higher bitrate, you will be unable to maintain the same quality. Even with a super-resolution upscaling process, you cannot create details that were not there, and you will expose flaws in your source material.
    Read my blog here.
  10. Member (United States, joined Jun 2009)
    Alright, that makes sense. So it's better to keep the original unless I have the source.
    Would it still be better to keep the original, or to try to upscale it with a bigger bitrate?
    And what if I make it smaller? Would I lose a lot of quality too?
  11. Member Cornucopia (Deep in the Heart of Texas, joined Oct 2001)
    The way I see it, the only reason to UPSCALE at all is probably when one needs to edit with mixed resolutions (then upscale, using that "super-resolution" algorithm, to get the SD material to fit better with the HD material). Otherwise, don't upscale in software. Don't DOWNSCALE either, unless you absolutely have to in order to fit a medium/bandwidth requirement.
    Let hardware (settop DVD players, TV displays) do the upscaling automatically.

    You usually know your final output--the display device. You want your source material to either match or start off higher-rez (and STAY that way) than the output. If you make something smaller to the point of it being lower rez than the display, it's gonna have to upscale again anyway (and lose quality in the process).

    Scott
  12. There's at least one other time when it makes sense to upscale: if you have a hardware player (a portable device, or a DivX/DVD player) that upscales poorly, you can do better in software. Just be sure to use sufficient bitrate (or use constant-quality encoding with a high quality setting) to keep from adding a lot more artifacts. Most modern players do a decent job of upscaling, so this would be rare now.
  13. Member (United States, joined Jun 2009)
    OK, now I understand. Thanks a lot.
  14. Member (London, UK, joined Sep 2009)
    Originally Posted by guns1inger
    ...

    Video A is 1280 x 720, runs for 20 minutes, and has a bitrate of 1000 kbps
    Video B is 640 x 360, runs for 20 minutes, and has a bitrate of 1000 kbps

    ...

    Video A will, in most cases, look far worse than video B because it has to spread the same bitrate over a much larger area.

    ...

    Video A would look even worse if enlarged from Video B, as it would have been created from an already compressed source with an even lower resolution.

    ...
    Does your later statement above also imply that if you were to play video B on a screen with a native resolution of 1280 x 720, the enlargement that the player must perform would degrade the quality too? (Let's not worry here about different players and their methods of enlarging the video, i.e. smart, simple stretch, etc.)

    In other words, on a monitor with a resolution of 1280 x 720, would video A play better than video B (in terms of quality)? It would then make sense to convert the video at the higher resolution (at the cost of degraded quality for a constant bitrate, as you highlight) rather than let the player enlarge it.
  15. Originally Posted by ee98vvt
    Does your later statement above also imply that if you were to play video B on a screen with a native resolution of 1280 x 720, the enlargement that the player must perform would degrade the quality too? (Let's not worry here about different players and their methods of enlarging the video, i.e. smart, simple stretch, etc.)
    Doesn't matter in the least. Video A has 4 times as many pixels as Video B and needs 4 times the bitrate to keep the same quality, not even taking into account that it's been reencoded from Video B. Instead of having 4 times the bitrate as Video B, it has exactly the same bitrate. By any measure, even compared to Video B scaled to 1280x720, Video A will look like garbage compared to Video B.
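
    A quick check of that 4x figure (my own arithmetic, not from the post):

    Code:
    pixels_a = 1280 * 720     # 921,600 pixels per frame
    pixels_b = 640 * 360      # 230,400 pixels per frame
    print(pixels_a // pixels_b)   # 4 -> Video A needs roughly 4x the bitrate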
  16. It's a balancing act. By using a smaller frame size you trade resolution to get less macroblocking. Where the optimum is will depend on the particular video, the bitrate, your player, your TV, how far away you sit, and your tolerance for the different defects.
  17. Member (London, UK, joined Sep 2009)
    Thanks for the responses, people!

    Let me rephrase that.

    Scenario:

    You have a monitor with W:H resolution.
    You need to convert a DVD movie to a compressed video (i.e. avi, mp4, etc.) of a fixed size S.
    You know that this size allows for a certain bitrate, which gives you good results up to a resolution of (0.5*W):(0.5*H).

    Question:

    Which is it better to encode at:

    A. W:H
    B. (0.5*W):(0.5*H)

    I think both A and B degrade quality; but is it better to compromise quality during the encoding or during playback?
  18. They sacrifice quality in different ways. In most cases I think you'll find the lower resolution video looks better.
  19. Banned (United States, joined Jul 2009)
    Question: is there any way to determine the original resolution of an upscaled video without access to the source?
  20. No. At least not in general. With certain types of test patterns you might be able to.
  21. Banned (United States, joined Jul 2009)
    If the original resolution of a 480p video is thought to be 240p, can it be downscaled to 240p, then upscaled again and compared? The SSIM value should remain at 1.0000 if no details were lost, correct?
  22. Originally Posted by Xpenguin17
    If the original resolution of a 480p video is thought to be 240p, can it be downscaled to 240p, then upscaled again and compared? The SSIM value should remain at 1.0000 if no details were lost, correct?
    I don't think so.

    My understanding is that scaling algorithms involve rounding of values, so with rounding errors it's not lossless, and definitely not reversible. I'm sure jagabo can explain it better. Also, there would be compression errors if you're not using lossless methods.
  23. "if no details were lost" -- there's the rub. Resizing generally involves loss of detail.

    Take four numbers:

    0, 100, 0, 100

    Two peaks, two valleys, all of equal size. Now convert that into 5 numbers and retain two peaks and two valleys of equal size. Having problems?
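
    Here is a small sketch of the problem in Python, using plain linear interpolation (real resizers use fancier kernels, but they run into the same limit):

    Code:
    import numpy as np

    src = np.array([0, 100, 0, 100], dtype=float)   # two peaks, two valleys
    # Resample the 4 values onto 5 positions with linear interpolation
    positions = np.linspace(0, 3, 5)                # [0, 0.75, 1.5, 2.25, 3]
    out = np.interp(positions, np.arange(4), src)
    print(out)   # [0. 75. 50. 25. 100.] -- the equal peaks and valleys are gone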
  24. Banned (United States, joined Jul 2009)
    I hear ya. Also, don't forget that compression artifacts in the upscaled video, like blocking, will be erased when scaling down, and SSIM will interpret that as a loss of detail. But is there a certain threshold where it should stop being ignored? E.g. 0.99700?
  25. Originally Posted by Xpenguin17
    But is there a certain threshold where it should stop being ignored? E.g. 0.99700?
    I'm not sure what you're getting at here. In what context?

    I don't see the point in assigning an arbitrary value, or how you would even derive a value that would be considered a "threshold".

    I think the trends are more important, not necessarily the absolute values. How are you measuring SSIM? The various metric measurement tools, and even x264, all do it slightly differently and are only approximations. (They don't use the full method as outlined in the scientific journals.)
  26. Banned (United States, joined Jul 2009)
    Originally Posted by poisondeathray
    Originally Posted by Xpenguin17
    But is there a certain threshold where it should stop being ignored? E.g. 0.99700?
    I'm not sure what you're getting at here. In what context?
    Transparency.

    Originally Posted by poisondeathray
    I don't see the point in assigning an arbitrary value, or how you would even derive a value that would be considered a "threshold".

    I think the trends are more important, not necessarily the absolute values. How are you measuring SSIM? The various metric measurement tools, and even x264, all do it slightly differently and are only approximations. (They don't use the full method as outlined in the scientific journals.)
    I use MSU, and they claim their implementation adheres to the standard.

    P.S. How does VQM compare to SSIM?
  27. Transparency.
    Do you mean completely transparent, as in truly lossless, or "visually lossless"? If the latter, there is subjectivity involved, and you cannot apply an objective measure to a subjective model of "quality" with any degree of accuracy.

    I use MSU, and they claim their implementation adheres to the standard.
    Perhaps, but it often yields incorrect or poor readings. I wouldn't rely on SSIM or PSNR alone, because they are problematic, not very reliable, and not even very consistent between measuring tools! The trends might be useful (if you are consistent in your testing methodology), but even that is debatable. At best, they provide a tiny bit of information on the big picture. I've posted illustrative examples before, and I'll post another one. Below is a zip file of 3 bmp screenshots (original, x264, mcr) taken from the vote trailer in this comparison thread. You can download the full sources there:
    https://forum.videohelp.com/topic371339.html

    ssim.zip

    SSIM (precise)
    x264 0.9841
    mcr 0.9854

    Now there is clearly a mistake in this measurement. The mcr encode has smoothed away most of the grain compared to the original and the x264 encode, and yet its SSIM value is higher?? This is supposed to be an objective tool. I can post hundreds of other examples where it clearly fails in the accuracy department. It's usually better to use your eyes...
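
    If anyone wants to run this kind of measurement themselves, here is one way to compute SSIM between two frames with Python and scikit-image (just a sketch; as noted, different tools implement SSIM somewhat differently and will give somewhat different numbers):

    Code:
    from skimage.io import imread
    from skimage.color import rgb2gray
    from skimage.metrics import structural_similarity

    # File names are placeholders: an original frame and an encoded frame
    orig = rgb2gray(imread("original.bmp"))   # grayscale floats in [0, 1]
    enc = rgb2gray(imread("encode.bmp"))

    score = structural_similarity(orig, enc, data_range=1.0)
    print("SSIM: %.4f" % score)   # 1.0 means identical frames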


    How does VQM compare to SSIM?
    I don't know anything about VQM; maybe you can ask at the doom9 forums.
  28. Banned (United States, joined Jul 2009)
    Originally Posted by poisondeathray
    Now there is clearly a mistake in this measurement. The mcr encode has smoothed away most of the grain compared to the original and the x264 encode, and yet its SSIM value is higher?? This is supposed to be an objective tool. I can post hundreds of other examples where it clearly fails in the accuracy department.
    That sample contained noise. If you removed it and ran the tests again, I think the metric would be consistent. It always has been when I've needed it.

    Originally Posted by poisondeathray
    It's usually better to use your eyes...
    My eyes are unreliable. :/
  29. Originally Posted by Xpenguin17
    That sample contained noise. If you removed it and ran the tests again, I think the metric would be consistent. It always has been when I've needed it.
    But denoising it would cause deviation from the original. Isn't this tool supposed to measure closeness in "quality" to the original? Isn't that diametrically opposed to striving for "transparency"? And how realistic is denoising everything?

    What do you think grain is? It's essentially noise. The algorithm or computer doing the calculations can't distinguish whether a dot is noise (created by compression artifacts) or grain (present in the original). But here the tool is clearly wrong... and by a lot (and I can show you many more examples). Which encode do you think more closely resembles the original? It should be pretty obvious here, as I chose a pretty clear-cut example.

    The other reason these metrics fail is the type of weighting used. Human perception usually rates sharpness and clarity as "higher quality" than smoothness and lack of detail, while the metric has difficulty distinguishing "types" of noise. It tends to penalize sharper pictures more than ones lacking detail, which is usually the opposite of the human model of perceived quality. Also, humans weight certain parts of the frame more heavily than the measurement tool does (e.g. you usually focus on faces and people rather than background stuff, because that's usually considered more important). This is where the subjective and objective worlds collide, and there is lots of room for error.

    I'm not saying SSIM/PSNR measures are useless; it's just that in the grand scheme of measuring "quality" they are problematic and not very useful by themselves. They are more useful for looking at trends as a rough estimate, in combination with other methods.
  30. I agree that noise shouldn't be an issue. What's noise in one image is detail in another (a fuzzy sweater, a stucco wall, etc).

    I think it would be an interesting experiment to take an image, upscale it, then downsize it to a range of different resolutions and upsize it again -- then measure the differences between the final image and the original upscaled image. Graph PSNR (or whatever metric you use) against the downsized size. Will there be a peak at the original image's size? Is the peak consistent across different types of images? How sharp is the peak?
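
    A rough sketch of that experiment in Python with Pillow and numpy (the file name is a placeholder, and PSNR is used as the metric):

    Code:
    import numpy as np
    from PIL import Image

    def psnr(a, b):
        # Peak signal-to-noise ratio for 8-bit images
        mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(255 ** 2 / mse)

    ref = Image.open("suspected_upscale.png").convert("RGB")
    w, h = ref.size
    ref_px = np.asarray(ref)

    # Downsize to a range of widths, upsize back, compare to the reference
    for new_w in range(w // 4, w + 1, max(1, w // 16)):
        new_h = max(1, round(h * new_w / w))
        small = ref.resize((new_w, new_h), Image.LANCZOS)
        back = small.resize((w, h), Image.LANCZOS)
        print(new_w, round(psnr(ref_px, np.asarray(back)), 2))
    # A peak in PSNR at some width below w may hint at the original resolution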


