VideoHelp Forum
  1. Originally Posted by Mephesto View Post
    I have all my lossless and high-bitrate lossy versions stored safely on a large disposable drive should I need them to recompress to the latest format or another for a portable device. What's wrong with listening to compressed music? Even if there are audible artifacts, some people like that. Teens in a test conducted in 2008 preferred the "sizzly" 128kbps MP3 to lossless. MP3 compression in particular alleviates some annoying things like clipping.
    Nothing wrong with listening to compressed music; I just don't see much advantage in compressing it to the point where it sounds different when it can be compressed transparently at pretty reasonable bitrates.
    Even if that study happened to be accurate, and I have my doubts (it was never officially published), you've pretty much proven yourself with this thread that one of the goals of compression is to reproduce the original as accurately as possible.

    Originally Posted by Mephesto View Post
    Don't assume the file is contiguous. HDDs are really bad with randomly fragmented files. In the worst case scenario a stereo 44.1 kHz WAV file fragmented into the smallest 4KB chunks all across the HDD needs about 176 KB retrieved in real time per second. My state-of-the-art HDD reads random 4KB chunks at about 400 KB/s so assuming the HDD is focusing exclusively on this WAV and nothing else, it can barely play it in real-time. Meanwhile, the disk thrashing that went on for 5 minutes to listen to the song has stressed the HDD out and lowered its lifespan.
    I'd be interested to know how you did the math, but speaking of studies..... check out the section under "Over work = early death?"
    https://storagemojo.com/2007/02/19/googles-disk-failure-experience/

    Originally Posted by Mephesto View Post
    This is why SSDs are replacing HDDs but most affordable ones are 128-256GB. Now we're back to the "space is precious" argument. If you only had one song or a couple songs, 50MB wouldn't be a problem. But when you have albums and discographies then this easily goes beyond a maintainable collection. Maintainable meaning easy to backup and retrieve without hassle. 1TB is a f***ing hassle because it takes hours.
    I still agree. Compression is a good thing. My point was simply that going for the lowest possible bitrates doesn't seem to provide that much benefit in terms of file size if it means a trade-off with quality.
    I have 33.8GB worth of MP3s on my hard drive, but out of a total of something like 1800GB of hard drive space it doesn't seem to put much of a dent in it.

    Originally Posted by Mephesto View Post
    The other point is a reason of principle. I'm an avid connoisseur of the art of data compression. I love testing the limits and seeing how far I can compress a file without it sounding/looking any different. There will come a time when songs and videos will be the size of MIDIs and flash videos in full lossless quality, their size determining their actual non-technical quality.
    We all got different hobbies though.
    That's fair enough..... I have friends who think video conversion as a hobby is nuts. Each to their own.
    I guess someone's got to live on the edge when it comes to compression..... I just prefer not to be one of the "early adopters" myself.

    Originally Posted by Mephesto View Post
    This is weird. If the two files are supposed to be indiscernible then the only option for the participants is to guess. There is no possible way to guess 37.5% of the time, 5% of the time or 95% of the time. It should be close to 50%.

    This test undermines its own credibility and proves hallucinogen-abusing audiophiles right.
    I thought the same thing, but maybe given that women made up less than 10% of the participants it's just some sort of statistical anomaly. Either that, or it shows some women could distinguish between 24-bit and 16-bit, but when choosing which one they thought was 24-bit they were actually showing they preferred the 16-bit version by picking it instead.
  2. Member (Kazakhstan)
    I just want to say thanks to the developers of these codecs. They spend their efforts on the vagaries of audiophiles, terribly intrusive people!
  3. @hello_hello

    I'd be interested to know how you did the math, but speaking of studies..... check out the section under "Over work = early death?"
    https://storagemojo.com/2007/02/19/googles-disk-failure-experience/
    A lossless stereo song at 44.1 kHz comes to 176,400 bytes a second. My HDD's official benchmarks boast 300 IOPS, which means it can read 300 randomly-distributed 4KB chunks per second, or about 1.2 MB/s random read speed. But when I did my own test on this drive with CrystalDiskMark, it came in between 300 and 700 KB/s. Results weren't too consistent, but sure as hell below 1 MB/s.
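    Spelled out, the arithmetic looks like this (the drive numbers are from my own benchmarks above, so treat them as assumptions, not universal figures):
    Code:
    # Worst case: a WAV shredded into 4KB fragments scattered across the disk.
    bytes_per_sec = 44100 * 2 * 2     # 44.1 kHz x stereo x 16-bit = 176,400 B/s
    chunk = 4 * 1024                  # smallest NTFS cluster / fragment size

    advertised = 300 * chunk          # 300 IOPS x 4KB = ~1.2 MB/s random read
    measured = 400 * 1024             # what my own benchmark runs actually showed

    print(bytes_per_sec / chunk)      # ~43 random reads needed per second of audio
    print(measured / bytes_per_sec)   # ~2.3x real-time at the measured rate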

    That article is interesting. It says extremely idle drives are more likely to fail and consistently busy drives have only somewhat shorter lifespans. I wonder why idle drives have higher failure rates; the article doesn't explain.

    It does also say that low temperature drives are more likely to die in the short term while hot drives die in the long run. This makes sense because the colder the temperature the harder it is for the drive to spin.

    I have 33.8GB worth of MP3s on my hard drive
    lol, I bet you don't listen to 90% of that. MP3s don't make up a huge chunk of all the data I have. In fact, I have 110 GB of original non-media data that built up over the last 15 years. I wish it was less, because then it would be as convenient as a tiny USB stick. I would shit bricks if my music collection alone made up 50% of my irreplaceable personal stuff.

    I guess someone's got to live on the edge when it comes to compression..... I just prefer not to be one of the "early adopters" myself.
    I'm actually more of an early majority. I prefer to stay in blissful ignorance and procrastinate as time flies by, so the tech matures enough to fall into my lap by the time I start thinking about it again. Too much anticipating, obsessing and fantasizing, followed by disillusioned disappointment because the technology is taking way too long to honor the hype it instilled when it was first announced, is not really healthy, especially when you're not an engineer and have nothing to do with the development of the technology. In other words, these "early adopters" you speak of are geeks who lead unsatisfying lives.

    CELT (Opus) first came out sometime in early 2011. I didn't hear of it until January 2013, when I first tried it and didn't like it, but I did recognize its potential for the music it was good with: the kind that's complex, entropic and a huge problem for AAC. So I used Opus for music AAC sucks with, and AAC for music Opus sucks with. Fair trade.

    Then a few months later the Opus 1.1 beta was released, and 1.1 stable a few months after that, which introduced improved surround-sound encoding. If I had adopted Opus when it first came out in 2011, I would've been really disillusioned with the slow progress: waiting 2 years for minor fixes and almost 3 years for yet another unimportant feature I mostly don't use.

    I thought the same thing, but maybe given that women made up less than 10% of the participants it's just some sort of statistical anomaly. Either that, or it shows some women could distinguish between 24-bit and 16-bit, but when choosing which one they thought was 24-bit they were actually showing they preferred the 16-bit version by picking it instead.
    If they preferred the 16-bit version then that means they could tell the difference. ABX tests are not about proving quality; they're about proving whether you can tell two things apart.
    The 37.5% number would only make sense if they had 3 versions of the audio instead of just 16-bit and 24-bit, or if the participants only made a few guesses on a few pieces of audio. They need to honor the 95% confidence requirement before publishing.
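    For the record, here's roughly how you'd sanity-check a score like 37.5% (a sketch; the trial counts are made up, since the test never published how many guesses were made):
    Code:
    from math import comb

    def p_at_most(k, n, p=0.5):
        """Chance of getting k or fewer right out of n trials by pure guessing."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

    # Hypothetical trial counts -- the published summary doesn't give them.
    for n in (8, 40, 200):
        k = round(0.375 * n)          # a 37.5% success rate
        print(f"{k}/{n} right: probability by chance = {p_at_most(k, n):.4f}")
    With 8 trials, 3/8 right is unremarkable guessing; with 200 trials, 75/200 would be so far below chance that it would mean they really could tell the files apart.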

    @Ethrelred
    I do defragment, but my drive seems to become fragmented easily. Despite buying faster drives I've actually noticed them getting slower as the years go by. Maybe it's because CPU and memory stopped being bottlenecks and HDDs are now the new bottleneck.

    @Gravitator
    I had to look up "vagary", and I never knew there was a word whose definition describes the audiophile community more accurately. I never thought there would be a time when someone who doesn't speak a goddamn word of English would be enriching my vocabulary. Do you speak any Russian?
  4. Member (Kazakhstan)
    Originally Posted by Mephesto View Post
    Do you speak any Russian?
    Right.
    - Do you know about the plan for the further development of Opus? Such as the "Ghost" project:
    http://people.xiph.org/~xiphmont/demo/ghost/demo.html
  5. Thanks for reminding me about Ghost. I did read about it before but forgot. They could make a precursor to Ghost right now by extending the frame size of Opus, which is currently only 20 ms. If they allowed something larger, like the 100 ms used by codecs like MP3 and AAC, it could handle tonal content much better.

    But they're planning to go beyond that. With Ghost they're trying to find an intelligent way to cleanly separate the tonal and broadband parts of the audio so each can be encoded separately with an algorithm specialized for it. CELT is excellent for broadband, and a specialized parametric codec can be written to encode the extracted tonal audio at super-low bitrates with virtually no loss; the two are put back together on decoding.
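    Just to illustrate the idea (my own crude stand-in, not Ghost's actual algorithm): tones draw horizontal ridges in a spectrogram while noise smears vertically, so even a couple of median filters can pull them apart:
    Code:
    import numpy as np
    from scipy.ndimage import median_filter
    from scipy.signal import istft, stft

    def split_tonal_broadband(x, sr, nperseg=2048):
        # STFT magnitude: tones = horizontal ridges, noise/transients = vertical smears
        _, _, S = stft(x, sr, nperseg=nperseg)
        mag = np.abs(S)
        tonal = median_filter(mag, size=(1, 17))   # median along time keeps tones
        broad = median_filter(mag, size=(17, 1))   # median along frequency keeps noise
        mask = tonal > broad                       # hard mask; soft masks sound better
        _, x_tonal = istft(S * mask, sr, nperseg=nperseg)
        _, x_broad = istft(S * ~mask, sr, nperseg=nperseg)
        return x_tonal, x_broad                    # feed each to its own codec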

    This is one step closer to parametric, AI encoding. The next step will give us songs the size of MIDIs, plus all kinds of useful applications, like being able to edit a song and its lyrics into whatever you want the way you can with a MIDI program.

    Now that you've heard the optimistic news, it's time for some pessimism: when this new Ghost format comes out, it will only improve on AAC by about 8-16 kb/s. So 40 kb/s Ghost will be equal to about 48 kb/s AAC, 56 as good as 64, etc.
    When it comes to human-generated content, it's just way too complex to properly encode at such low bitrates, no matter what technique is used. The popular audio and video codecs you see today, like MP3 and H.264, are based on 1950s mathematics. We've reached the limit. The next step is super-lossy codecs that will fail miserably by outdated metrics but will produce artifacts that aren't annoying. How about compressing an interview to 2KB where all the voices sound like the same person, but in high quality? How about a photo compressed to 1KB where entropic textures and details are replaced with high-quality generated detail that might change a license plate number but will look really good?
    That is the future of lossy coding.
  6. Member (Kazakhstan)
    Originally Posted by Mephesto View Post
    That is the future of lossy coding.
    Ghost will likely get a push alongside the future video encoder Daala (exclusively?). Their lull must mean something... For me it would be quite enough if it could sound at 80 kb/s the way Opus does at ~96 kb/s, or at 64 kb/s like 80 kb/s.
  7. Member (Kazakhstan)
    Within a month Xiph should release test tools - https://wiki.xiph.org/DaalaRoadmap
    - they demonstrate a super-modern technology, the "lapped transform".
  8. Here's something in the Ghost article that I think is important.
    The current Ghost design has an advantage over naive spectral band replication techniques; because the harmonic data is removed from the subbanded signal, folding/replication in Ghost does not need to worry about artifacts resulting from displaced/corrupted harmonic structures.
    This is precisely what made SBR so impractical in HE-AAC. For linear, predictable sounds in the upper shelf it was great, and this was the key to making a codec as good as MP3 at half the bitrate: junk the upper half of the frequency range and use information from the lower half to predict and replicate it. The problem is that if the upper shelf had unpredictable things in it, like high notes and jingles, they got all distorted by SBR and sounded really annoying, like a bad samplerate conversion job. So most parts sound fine, but some sound really bad, and fixing that 10% of the soundtrack would require twice the bitrate, so SBR wouldn't be used. A pointless feature it turned out to be.
    I wished there were a way to separate the harmonics from the noise somehow, so only the noise and drums get SBR'd and the harmonics aren't harmed. Finally this gets addressed.
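    To show what I mean about naive replication, here's a toy sketch (mine, not the actual HE-AAC SBR algorithm):
    Code:
    import numpy as np

    def naive_sbr(frame):
        """Toy SBR: junk the top half of the spectrum and rebuild it by copying
        the kept band upward.  Noise-like content survives this fine; genuine
        harmonics up there come back as displaced copies of the low band."""
        spec = np.fft.rfft(frame)
        n = len(spec)
        cut = (n + 1) // 2                        # keep only the lower bins
        kept, junked = spec[:cut], spec[cut:]
        patch = kept[cut - len(junked):]          # the kept band, shifted up
        # crude envelope match (real SBR transmits the envelope as side info)
        scale = (np.abs(junked).mean() + 1e-12) / (np.abs(patch).mean() + 1e-12)
        return np.fft.irfft(np.concatenate([kept, patch * scale]), len(frame))
    Feed a song through it and everything above the cutoff comes back as a pitched-up copy of what's below it: fine for hiss and cymbals, all wrong for high notes and jingles.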

    Within a month Xiph should release test tools - https://wiki.xiph.org/DaalaRoadmap
    - they demonstrate a super-modern technology, the "lapped transform".
    I gotta confess I have difficulty getting what these lapped transforms are. The brainy text in that link is beyond my grasp. I'll play around with their test tools when they release them though.
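    From what I can gather though, the classic example of a lapped transform is the MDCT that MP3 and AAC already use: blocks overlap by 50%, and the time-domain aliasing cancels when you overlap-add, so you get block-based coding without blocky edges. A minimal numpy sketch of the textbook version (Daala apparently builds its lapping from pre/post filters instead, so this is the concept, not their implementation):
    Code:
    import numpy as np

    N = 8                                              # N coefficients per 2N-sample block
    n, k = np.arange(2 * N), np.arange(N)
    win = np.sin(np.pi / (2 * N) * (n + 0.5))          # Princen-Bradley window
    basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (k[None, :] + 0.5))

    def mdct(block):                                   # 2N samples -> N coefficients
        return (block * win) @ basis

    def imdct(coeffs):                                 # N coefficients -> 2N aliased samples
        return (2.0 / N) * win * (basis @ coeffs)

    x = np.random.randn(4 * N)
    y = np.zeros_like(x)
    for i in (0, N, 2 * N):                            # 50%-overlapping blocks
        y[i:i + 2 * N] += imdct(mdct(x[i:i + 2 * N]))  # aliasing cancels in the overlap
    print(np.allclose(x[N:3 * N], y[N:3 * N]))         # True: the overlapped middle is exact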
  9. Originally Posted by Mephesto View Post
    That really doesn't sound right.
    5 minutes of stereo, 44.1k, 16 bit, PCM audio should be somewhere around the 50MB mark. Your hard drive struggles to read a 50MB file in under 5 minutes?
    Don't assume the file is contiguous. HDDs are really bad with randomly fragmented files. In the worst case scenario a stereo 44.1 kHz WAV file fragmented into the smallest 4KB chunks all across the HDD needs about 176 KB retrieved in real time per second. My state-of-the-art HDD reads random 4KB chunks at about 400 KB/s so assuming the HDD is focusing exclusively on this WAV and nothing else, it can barely play it in real-time. Meanwhile, the disk thrashing that went on for 5 minutes to listen to the song has stressed the HDD out and lowered its lifespan.
    I know it's a couple of weeks later, but it's kind of stuck in my head. Not enough to do the math myself until today, when I happened to be reading the CrystalDiskMark specs of some hard drives and it popped into my head again.

    CrystalDiskMark reports something like 0.6MB/s for a USB3 hard drive, and a little more for a SATA 3TB drive, but every time I do the math, even assuming 0.5MB/s of random 4KB access, I don't know how you work that out to be 176KB per second in "real time". Doesn't 4KB "random access" already include the seek time plus the time it takes to read the actual data, arriving at being able to read around 0.5MB of random 4KB chunks of data per second?
    I'm pretty sure it does, which means even if the whole file was split into 4KB random chunks, the drive would take two or three seconds to read an entire 5 minute, 50MB uncompressed wave file.

    That makes more sense. I just copied a 150MB file from one drive to another, and the Windows copy window thingy barely appeared on the monitor before the job was done.
  10. I'm pretty sure it does, which means even if the whole file was split into 4KB random chunks, the drive would take two or three seconds to read an entire 5 minute, 50MB uncompressed wave file.
    50 ÷ 0.5 = 100 seconds, not 2-3.
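    The full calculation, using your own 0.5 MB/s figure:
    Code:
    size_mb = 50       # ~5 minutes of 16-bit 44.1 kHz stereo PCM
    rate_mb_s = 0.5    # measured 4KB random-read throughput
    read_s = size_mb / rate_mb_s
    print(read_s, (5 * 60) / read_s)   # 100 seconds: slow, but still ~3x real-time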

    The last time I remember having a severely fragmented WAV, double-clicking it to play made my system stall for 20 seconds while the player and all its dependencies, DLLs, and whatever else it has to load off the disk were being loaded. Then the song played fine in real-time, but it was slowing the hell out of my disk and I couldn't really do anything else while it was running. Pathetic. I remember a time when playing MP3s took 20% CPU, so you couldn't play games while listening to music or your FPS would drop. Now I can't play my game because the disk thrashing interferes with loading the new levels. Good god almighty...

    By the way, I just backed up my 300GB partition to another one on the disk and guess how long it took? Exactly 24 hours...
    I did use a partitioning program I particularly hated though, and I'm sure others are programmed to be a lot more efficient, but it proves my point about how ******* inconvenient having shitloads of data really is, and why infinite terabytes of space mean jack shit when the HDD could die tomorrow and backups are way too much of a goddamn hassle, let alone doing them regularly.
    I think I came pretty close to a worst-case scenario with the backup I just did inside a VM with a crappy partitioning program, but you must always assume the worst case. Misleading system specs sell, but they rarely deliver.
  11. Originally Posted by Mephesto View Post
    I'm pretty sure it does, which means even if the whole file was split into 4KB random chunks, the drive would take two or three seconds to read an entire 5 minute, 50MB uncompressed wave file.
    50 ÷ 0.5 = 100 seconds, not 2-3.
    Obviously. What was I thinking??

    Originally Posted by Mephesto View Post
    By the way, I just backed up my 300GB partition to another one on the disk and guess how long it took? Exactly 24 hours...
    Slow, but I could believe it. Especially if the disk was fragmented. Having to keep moving the heads around to both read and write can really slow down a single drive.
    The only solution is multiple drives. At least until SSDs become affordable at large capacities. I have four hard drives in this PC. They're pretty old now (320GB and 500GB) but running them in pairs as two RAID-0 volumes really speeds things up. My other PC has four 1TB WD Black drives. It adds a bit to the cost of building a new PC, but it's worth every cent. If copying a large file from one partition to another on a single drive would take (for example) 4 minutes, then partition to partition on a RAID-0 volume with two drives should take two minutes, and the same file copied from one RAID-0 volume to another should take about a minute. Realistically though, once you've got one pair of drives doing nothing but reading and another doing nothing but writing, it wouldn't be unreasonable to expect to finish the same copying job in closer to 30 seconds, assuming the hard drives aren't too full. About the only time I do feel I'm being slowed down by hard drive speed is if I try to do something whilst copying a large file from one RAID-0 volume to another. Other than that I can usually multi-task away without hard drive speed being much of a factor.
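    The rough arithmetic I'm basing those numbers on (idealised scaling assumptions; real drives won't be this tidy):
    Code:
    # Assumes RAID-0 striping doubles throughput, and that reads and writes
    # on the same volume have to share the drive heads.
    single = 4.0                      # minutes: partition to partition, one drive
    raid_same = single / 2            # two striped drives, same volume
    raid_to_raid = raid_same / 2      # one volume only reads, the other only writes
    print(raid_same, raid_to_raid)    # 2.0 and 1.0 minutes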

    I have Windows and programs installed on an 80GB partition. I put My Documents and all my personal files on a different partition so I'm only imaging Windows and programs. As I run XP I'm only using about 10GB of the 80GB partition, but when imaging it, writing the image file to the second RAID-0 volume takes three or four minutes. Restoring the image takes about two. I could never go back to single hard drives.

    I agree, backing up large drives is slow. I have eight external 2TB drives which are pretty full, but only four of them contain unique files as the other four contain the backup copies. One day a drive will die so I have a backup of each, and I don't burn stuff to discs as backups any more. Now that's slow.

    PS I rarely defrag the traditional way any more (using a defrag program). The imaging program effectively defrags when it restores an image, and for the other partitions I try to keep enough free space so I can copy the entire contents of one partition to the other RAID-0 volume. To defrag, I copy the files, delete everything on the original partition, then copy them back. It tends to be much faster than actual defragging and it doesn't work the drives as hard.
    Last edited by hello_hello; 5th Mar 2014 at 09:18.
  12. Member (Kazakhstan)
    On Russian sites HEVC has stalled, and many now expect Daala (although xiph.org is suspiciously silent).


