VideoHelp Forum
  1. Member
     Join Date: Oct 2006 | Location: Australia
    Hey

    I was just wondering, is there anything wrong with running multiple video encoding tasks at once?

    For example, in one session:
    -running multiple instances of DGIndex at once to create d2v files from MPEG-2 sources
    or
    -encoding with, say, Gordian Knot using XviD and with MeGUI using x264
    or
    -encoding with GK/MeGUI to XviD/x264 and using CCE to encode XviD/x264 to MPEG-2 (DVD)

    Is it possible to get screwed-up video, glitches, or any other sort of problems in any of the above scenarios?

    Thanks
  2. Mod Neophyte redwudz
     Join Date: Sep 2002 | Location: USA
    There shouldn't be any problem running multiple tasks; they will all just run a little slower. Tasks that demand real-time results, like capturing, may drop frames when too many other things are running in the background, and burning could have problems because of disc access. But encoding and similar tasks should have no ill effects other than slowing down.
  3. Member gadgetguy
     Join Date: Feb 2002 | Location: West Mitten, USA
    My experience with doing two encodes at the same time was that it actually took longer than if I encoded one after the other. But both encodes turned out fine (although terribly fragmented).
    "Shut up Wesley!" -- Captain Jean-Luc Picard
    Buy My Books
  4. Member
     Join Date: Oct 2006 | Location: Australia
    Terribly fragmented? Doesn't fragmented mean broken up into little fragments?

    :S
  5. Member gadgetguy
     Join Date: Feb 2002 | Location: West Mitten, USA
    Originally Posted by spanky123
    Terribly fragmented? Doesn't fragmented mean broken up into little fragments?

    :S
    Yes. The encoded files are fragmented across the hard drive, with as many as 100+ fragments.
    "Shut up Wesley!" -- Captain Jean-Luc Picard
    Buy My Books
  6. Banned
     Join Date: Dec 2005 | Location: Canada
    If you put too much stress on the HDD (reading and writing too many items at the same time; I'm not talking about the CPU) you may eventually end up with bad blocks and premature HDD failure.
    ...happened to me, so go ahead... As was said above, it's not going to be faster, and it's inviting trouble. Your encoding has to be set to the lowest priority so the PC can interrupt it when it needs to; if it can't interrupt, things happen.
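    For what it's worth, here is a minimal sketch of one way to do the low-priority launch, assuming Python 3.7+ and a hypothetical x264 command line (substitute your actual encoder and file names); the priority class is Windows-only, with a nice-value fallback for POSIX:

    import os
    import subprocess
    import sys

    def run_low_priority(cmd):
        """Launch an encode so the OS can preempt it whenever it needs to."""
        if sys.platform == "win32":
            # Windows priority classes are exposed by subprocess in Python 3.7+.
            return subprocess.Popen(
                cmd, creationflags=subprocess.BELOW_NORMAL_PRIORITY_CLASS)
        # On POSIX there are no priority classes; raise the child's nice
        # value instead (higher nice = lower priority).
        return subprocess.Popen(cmd, preexec_fn=lambda: os.nice(19))

    # Hypothetical encoder invocation -- replace with your real job.
    proc = run_low_priority(["x264", "-o", "out.264", "input.avs"])
    proc.wait()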
  7. Member
     Join Date: Oct 2006 | Location: Australia
    So I probably shouldn't do it?

    Last night, for the first time, I ran DGIndex on 3 different videos at the same time to make d2v files and demux the audio. Then I encoded them to x264 with MeGUI.
    For the first time in a long time, the filesizes were quite a bit off from what I had specified. The only thing I did differently this time was running multiple instances of DGIndex simultaneously.

    Is it possible that this could have affected the output filesize?
    Seems kinda strange to me
  8. Banned
     Join Date: Dec 2005 | Location: Canada
    It shouldn't affect the encoding outcome; it just makes the HDD work harder, especially when fragmented. I lost an HDD (not really; it was a warranty replacement) while encoding too many things (and watching video) at the same time. Twenty minutes at most was enough to produce over 100 bad blocks, some of them irreparable (I decided not to low-level format, which could have remapped the drive, and just returned it).
    It's just not worth the time you have to invest fixing things afterwards. This happened a month ago, so beware. Task priority is the key. If your PC has room to react you may be OK; if it cannot interrupt and unload memory, things go wrong.
  9. Member edDV
     Join Date: Mar 2004 | Location: Northern California, USA
    I do it all the time, but I usually write to different drives to avoid the fragmentation issue. NTFS handles fragmentation well, but multi-encode thrashes the poor hard drive. The drive may keep up with it, but it makes you fear the thing will explode, or at least expire.

    Dual core allows capture to proceed while encoding if set up properly. Best thing since sliced bread.

    Again, capture ideally should go to a separate drive from the one the encoder reads and writes.
  10. If you have a dual-core system but only a single-threaded (or poorly multithreaded) encoder, you can nearly double your throughput by doing two encodes at the same time.
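    A minimal sketch of that idea, assuming Python and two hypothetical single-threaded encode jobs (swap in your real encoder and file names):

    import subprocess

    # Hypothetical job list -- one command line per encode.
    jobs = [
        ["x264", "-o", "clip1.264", "clip1.avs"],
        ["x264", "-o", "clip2.264", "clip2.avs"],
    ]

    # Launch both encodes at once; with single-threaded encoders the
    # OS scheduler runs each process on its own core.
    procs = [subprocess.Popen(cmd) for cmd in jobs]

    # Wait for both to finish before authoring or muxing.
    for p in procs:
        p.wait()

    As the posts above suggest, pointing each job's output at a different physical drive avoids most of the fragmentation and head thrashing.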
  11. Member gadgetguy
      Join Date: Feb 2002 | Location: West Mitten, USA
    As I said, the files were fine, but instead of saving time (my goal) it actually took longer. At the time I did not have the luxury of enough drives. The source files were on one drive and the destinations were on another, but the constant reads and writes between the drives caused a lot of fragmentation, and when I tried to author I had delays that made it kind of frustrating. I defragged and it worked fine after that, but that just added more time to the process. I can't say it caused my hard drives to die prematurely, as I'm still using the destination drive. The source drive has since been replaced, but it gave me a good six years of service so I can't complain.
    "Shut up Wesley!" -- Captain Jean-Luc Picard
    Buy My Books
  12. Member edDV
      Join Date: Mar 2004 | Location: Northern California, USA
    The slowest response on a drive is seek time as the heads mechanically move across the drive.
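    Rough numbers make the point; assuming typical figures for a 7200 rpm drive of this era (about 12 ms average seek plus roughly 4 ms rotational latency, and about 60 MB/s sequential transfer):

    # Back-of-the-envelope cost of one seek, using assumed typical figures.
    SEEK_S = 0.012 + 0.004   # average seek + half a rotation at 7200 rpm
    SEQ_MBPS = 60            # sustained sequential transfer rate

    # Data that could have been streamed in the time one seek takes:
    lost_mb = SEEK_S * SEQ_MBPS
    print(f"Each seek costs about {lost_mb:.1f} MB of lost throughput")  # ~1 MB

    So two encodes forcing the heads to ping-pong between files can throw away a large fraction of the drive's bandwidth on seeks alone.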
  13. Member
      Join Date: Oct 2006 | Location: Australia
    Originally Posted by edDV
    The slowest response on a drive is seek time as the heads mechanically move across the drive.
    Is that kinda like how the laser on a CD/DVD drive moves across to read different parts of the disc?
    Would defragging the HDD help at all here?

    Originally Posted by spanky123
    For the first time in a long time, the filesizes were quite a bit off from what I had specified. The only thing I did differently this time was running multiple instances of DGIndex simultaneously.

    Is it possible, that this could have affected the output filesize ?
    Seems kinda strange to me
    I should add (I forgot) that I had just run an update for MeGUI (it updated 'core' and 'x264') before this. I think that must be the cause of my off filesizes, because I just tested another one now and the filesize was way off again. I don't think running multiple instances of DGIndex had anything to do with it (as y'all have pointed out).
  14. Originally Posted by spanky123
    Originally Posted by edDV
    The slowest response on a drive is seek time as the heads mechanically move across the drive.
    Is that kinda like how the laser on a CD/DVD drive moves across to read different parts of the disc?
    Yes.

    Originally Posted by spanky123
    Would defragging the HDD help at all here?
    Yes. But when two files are being written at the same time they may still end up fragmented.
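    One way to soften that, sketched below under the assumption that your tool tolerates writing into a preallocated file (not all encoders do): creating each output at full size up front gives the filesystem a chance to reserve contiguous runs instead of interleaving the two growing files.

    def preallocate(path, size_bytes, chunk=1 << 20):
        """Write zeros out to size_bytes so the filesystem actually
        allocates clusters (seeking past the end can leave a sparse
        file on some filesystems)."""
        zeros = b"\0" * chunk
        with open(path, "wb") as f:
            remaining = size_bytes
            while remaining > 0:
                f.write(zeros[:min(chunk, remaining)])
                remaining -= chunk

    # Hypothetical sizes: reserve ~700 MB for each of two parallel outputs.
    for name in ("encode1.avi", "encode2.avi"):
        preallocate(name, 700 * 1024 * 1024)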
  15. Member edDV
      Join Date: Mar 2004 | Location: Northern California, USA
    Originally Posted by jagabo

    Originally Posted by spanky123
    Would defragging the HDD help at all here?
    Yes. But when two files are being written at the same time they may still end up fragmented.
    On a fragmented disk the heads may have to travel a long way to read a file's fragments or to find free space to write new sectors. If the disk is nearly full, the empty sectors can be widely scattered, causing the disk to thrash and slow down.
  16. Banned
      Join Date: Dec 2005 | Location: Canada
    One thing to remember is that the HDD is still the slowest item in your PC (besides the CD/DVD drive) at servicing CPU requests.
    While the RAM and CPU can easily interrupt internal processes, the HDD is taken hostage by them and has to work through a slew of requests until it has finished the chunk it was given. If it is halted by brute force with repeated requests from the CPU (another application banging on it for disk access), it will eventually corrupt either the page file or your data. I use a dual-CPU system, and although that gives a lot of room for internal processing, most of the data being processed either comes from or ends up on an HDD, which is an obvious bottleneck for a PC. Unlike RAM and the CPU, an HDD has mechanical parts that limit what it can process and how quickly. While RAM is purged on the spot, the HDD needs time to complete its task. An HDD rarely fails in hardware without some early signs, and most disk corruption is due to logical rather than physical errors. Unfortunately both are frustrating and difficult to recover from, although not to the same degree. A physical failure is obviously a candidate for professional recovery services, while the chances of salvaging data from logical errors are usually better for a home user. Although I personally don't subscribe to weekly defrags, defragging once a month and monitoring the fragmentation level may prevent a disaster.
    Encoding is one of the most intense tests of PC hardware, so running encodes sequentially is safer than trying to cram everything in at once. I was pushing it, and after doing a very in-depth sector analysis I could hardly believe how much damage it had done. It looked like a repeated head crash, even in areas that contained no data at all. That HDD was a new addition (Sept. 2006), a 320 GB Seagate. One thing to say, though: I love Seagate's 5-year warranty.
  17. Member
      Join Date: Apr 2004 | Location: Connecticut, USA
    Hi.
    So, it's not rocket science. When two processes are doing intensive disk I/O you will definitely get fragmentation, and you will put extra wear and tear on the hard drive heads and mechanism. If your drive uses a voice-coil mechanism you will have good performance; however, most consumer drives use a stepper motor to move the heads, and that will eventually wear out.
    Ideally, you want a clean, defragged, optimized HD with contiguous free space toward the end of the drive.
    And you want to run one encode at a time.

    Here's what I do:
    I have 3 hard drives. One is meant for final output, i.e. the encoded MPEG video. That drive is always defragged and optimized.
    I use CCE Basic, create a batch of files to encode, and run them overnight.
    With this setup I know I won't have any problems.
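    A minimal sketch of that kind of overnight queue, assuming Python and hypothetical job command lines (CCE's own batch list does this natively; substitute whatever your encoder accepts); each encode finishes before the next starts, so only one process ever writes to the output drive:

    import subprocess

    # Hypothetical queue -- one command line per source file.
    queue = [
        ["encoder.exe", "job1.ecl"],
        ["encoder.exe", "job2.ecl"],
    ]

    for cmd in queue:
        # check=True aborts the queue if an encode fails overnight.
        subprocess.run(cmd, check=True)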

    JB
  18. I must strongly disagree with some of what has been stated here.

    A heavy usage pattern will certainly shorten a drive's lifetime. However, that lifetime is measured in tens of thousands of hours.

    A heavy usage pattern will not CAUSE a defect in a drive's hardware. It will REVEAL one that already exists. Head crashes specifically are a failure of the hardware. Usage does not create this failure. There is no possible pattern of software usage that should cause a head crash in a non-defective drive. Power issues or extreme vibration would be the only cases where a defective drive is not the cause.

    Example: you drive a car at 80 mph for one hour and the engine explodes. Did the driving cause the failure? You could argue that it did; however, any engine that is functioning properly with no inherent defects should be able to do this with no problem at all. An engine that fails in these circumstances was defective in some way.

    Defragging: it does reduce wear and tear, but the defrag itself represents a fair amount of usage. For most folks, once a month or even once every three months is fine. Some need it every day. If the analysis says you are 95% or better unfragmented, wait longer next time.
  19. Banned
      Join Date: Dec 2005 | Location: Canada
    Keep in mind that even new HDDs come with a "known errors" table stored in their firmware (as well as a table of reserve sectors in case bad ones occur), so ALL of them are imperfect straight from the factory.
    A head crash is physical damage. I said it looked like one (in magnitude); I never said it actually happened.
    If you say "usage does not create this failure," then what does? Sonar-ocular emissions?
    Your very eloquent car example only confirms a suspicion I've had for some time: things are not perfect even if they sometimes appear so. Otherwise the engine would run forever... Just a thought.

    PS.
    Originally Posted by Nelson37
    A heavy usage pattern will certainly shorten a drive's lifetime. However, that lifetime is measured in tens of thousands of hours.
    This is utopian, but it feels good to know nevertheless.
  20. Going Mad TheFamilyMan
      Join Date: Jan 2004 | Location: south SF bay area, CA USA
    Short answer: Running multiple encodes simultaneously will not affect the results of the encodes. But it can create disk access bottlenecks (and possibly excessive OS/CPU overhead, depending on the CPU used) that slow the overall progress compared to running them successively. I've noticed this phenomenon big-time when running encodes, so I usually do them successively.
    Usually long gone and forgotten
  21. Member gadgetguy
      Join Date: Feb 2002 | Location: West Mitten, USA
    I'm with Nelson37. All things being equal, just because a drive is doing a ton of read/writes for an extended period does not mean the drive will be damaged. It will produce more heat and if your drive isn't properly vented, that can cause damage, but a drive will not be damaged simply by doing what it was designed to do.

    I think TheFamilyMan summed it up nicely.
    "Shut up Wesley!" -- Captain Jean-Luc Picard
    Buy My Books
  22. Member edDV
      Join Date: Mar 2004 | Location: Northern California, USA
    Originally Posted by gadgetguy
    I'm with Nelson37. All things being equal, just because a drive is doing a ton of read/writes for an extended period does not mean the drive will be damaged. It will produce more heat and if your drive isn't properly vented, that can cause damage, but a drive will not be damaged simply by doing what it was designed to do.

    I think TheFamilyMan summed it up nicely.
    I thought the issue was speed and fragmentation. Spreading data across the drive slows things down and increases wear, or at least increases the noise the thing puts out.
  23. Member gadgetguy
      Join Date: Feb 2002 | Location: West Mitten, USA
    Originally Posted by edDV
    Originally Posted by gadgetguy
    I'm with Nelson37. All things being equal, just because a drive is doing a ton of read/writes for an extended period does not mean the drive will be damaged. It will produce more heat and if your drive isn't properly vented, that can cause damage, but a drive will not be damaged simply by doing what it was designed to do.

    I think TheFamilyMan summed it up nicely.
    I thought the issue was speed and fragmentation. Spreading data across the drive slows things down and increases wear, or at least increases the noise the thing puts out.
    It is. I was referring to
    Originally Posted by Nelson37
    I must strongly disagree with some of what has been stated here...
    A heavy usage pattern will not CAUSE a defect in a drive's hardware. It will REVEAL one that already exists. Head crashes specifically are a failure of the hardware. Usage does not create this failure. There is no possible pattern of software usage that should cause a head crash in a non-defective drive. Power issues or extreme vibration would be the only cases where a defective drive is not the cause.
    "Shut up Wesley!" -- Captain Jean-Luc Picard
    Buy My Books
  24. Member edDV
      Join Date: Mar 2004 | Location: Northern California, USA
    ahh. OK.

    As multi-core goes 2x to 4x to 8x and single-drive sizes keep increasing beyond a terabyte, this is an interesting (in the Chinese-curse sense) trend.
  25. MTBF, that is, mean time between failures, is at least 50,000 hours for even the cheapest drives. This is not "utopian"; this is established baseline fact.

    As far as what causes the actual failure, what difference would that make to you? Simple wear and tear over tens of thousands of hours, sunspots, molecular discontinuities measured at the Angstrom level, none of these are under user control. Clean power, good cooling, minimal vibration, these are the only things the user can effectively control.

    Usage pattern, continued over a period of months of thrashing, might begin to have an effect, but a few hours is largely irrelevant. I have tested drives with butterfly seeks for days with no detectable ill effects, and the same drives have continued to run perfectly for years. The one or two that failed these tests were by definition defective before the tests were run.
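    To make the "mean" in MTBF concrete, here is a rough back-of-the-envelope calculation, assuming the usual constant-failure-rate reading of the spec (a simplification the next posts pick apart):

    MTBF_HOURS = 50_000        # figure quoted above for a cheap drive
    HOURS_PER_YEAR = 24 * 365  # drive powered on around the clock

    # Constant failure rate: annualized failure rate ~ hours per year / MTBF.
    afr = HOURS_PER_YEAR / MTBF_HOURS
    print(f"Expected annual failure rate: {afr:.1%}")  # ~17.5%

    In other words, a 50,000-hour MTBF never promised any individual drive 50,000 hours of life; it says that across a large fleet running 24/7 you would expect roughly one drive in six to fail each year.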
  26. Originally Posted by Nelson37
    MTBF - that is mean time between failure - is at least 50,000 hours for even the cheapest drives. This is not "utopian", this is established baseline fact.
    It's very much a utopian concept for hard drives:

    http://www.storagereview.com/guide2000/ref/hdd/perf/qual/specMTBF.html

    I agree that encoding two videos simultaneously isn't an issue, though. Unless you're converting uncompressed video to uncompressed video 24 hours a day for months on end.
  27. What some would call "utopian" certainly appears to me to be valid statistical analysis, based on a large sample set. The specific manner by which this does or does not apply to an individual drive is something a lot of people just don't get a handle on. Understanding the "MEAN" in MTBF is the first step in this process.

    The author mentions reduced warranty periods as an indicator of reliability. No mention is made of competitive pricing, which creates pressure to reduce costs; shortening the warranty period is an effective way to do that. The lower pricing is something we consumers have demanded by voting with our wallets, so in effect we have lowered the warranty periods ourselves. We are not willing to pay more for a drive with a longer warranty period.

    Someone will undoubtedly post to state their individual purchases conflict with this, illustrating the lack of understanding of the large sample set mentioned in paragraph one.
  28. Originally Posted by Nelson37
    What some would call "utopian" certainly appears to me to be valid statistical analysis, based on a large sample set.
    Originally Posted by StorageReview
    MTBF figure is intended to be used in conjunction with the useful service life of the drive (drives must be replaced with a new one once the service life has expired)...

    theoretical MTBF... these MTBF figures are estimates based on a theoretical model of reality, and thus are limited by the constraints of that model. There are typically assumptions made for the MTBF figure to be valid: the drive must be properly installed, it must be operating within allowable environmental limits, and so on. Theoretical MTBF figures also cannot typically account for "random" or unusual conditions such as a temporary quality problem during manufacturing a particular lot of a specific type of drive...

    After a particular model of drive has been in the market for a while, say a year, the actual failures of the drive can be analyzed and a calculation made to determine the drive's operational MTBF.... operational MTBF is rarely discussed as a reliability specification because most manufacturers don't provide it as a specification...
    Translation: if you buy thousands of drives with no component or manufacturing defects, use them only within the stated environmental and duty limits (most consumer drives aren't spec'd to run full tilt 24 hours a day), replace the drives with identical (equally perfect) drives every few years (when the service life expires), and the marketing department didn't simply pull the MTBF number out of its ass, you would expect to match the theoretical MTBF.
  29. Member
      Join Date: Oct 2006 | Location: Australia
    Lol. Good translation