VideoHelp Forum




  1. This may get long. Let me start by saying I use my computer only for video work: VHS caps, TV caps, conversions to DVD, burning, etc. Nothing more. I even disabled all my COM ports and onboard LAN. I read an article in the newest Computer Shopper magazine where they went into a number of stores, presented a scenario to the salespeople, and graded them on it. The scenario included wanting to capture and process footage from a DV camcorder. Some of the "correct" answers for this particular issue were to pass on a Celeron processor in favor of a P4, and on integrated graphics in favor of a dedicated card. First, is this true, and why? I've often wondered whether a P4 1.7 GHz chip would totally outperform my current Celeron 1.7 GHz chip. I found this at Tom's Hardware:
    http://www6.tomshardware.com/cpu/20020903/p4_celeron-08.html#mpeg4_encoding_flask_501

    Now, before you laugh off Tom's Hardware, it's just a simple comparison of an MPEG-4 encode on each chip, and it's pretty much a dead heat. As for the video card, someone at work argued that since a dedicated card would "render" faster than my integrated graphics (845GE), my encodes would be faster. Render what? I don't see anything on screen while it's encoding. I have noticed that every 100 MHz of overclock on my Celeron is worth about 4 seconds off every minute with TMPGEnc. It's very linear up to about 300-400 MHz over stock (until my computer crashes). So should I believe that a P4 would make my encodes and VirtualDub/AviSynth editing go faster because of its bigger cache? And that a dedicated video card would improve speed as well? I can pick up a cheap GeForce FX 64MB card...
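
    To put rough numbers on that overclocking observation, here's a quick back-of-the-envelope sketch in Python. The 4-seconds-per-100-MHz saving is the figure from my own runs above, but the baseline seconds-per-source-minute value is just an assumed placeholder, so this only models the trend, not an actual benchmark.

Code:
# Rough linear model of "every 100 MHz of overclock saves ~4 s per source minute".
# BASELINE_SEC_PER_MIN is an assumed placeholder, not a measurement.

BASELINE_CLOCK_MHZ = 1700      # stock 1.7 GHz Celeron
BASELINE_SEC_PER_MIN = 60.0    # assumed: encode seconds per minute of source at stock clock
SAVING_PER_100MHZ = 4.0        # observed: seconds saved per source minute per 100 MHz

def estimated_sec_per_min(clock_mhz):
    """Estimated encode seconds per source minute at the given CPU clock."""
    overclock = clock_mhz - BASELINE_CLOCK_MHZ
    return BASELINE_SEC_PER_MIN - SAVING_PER_100MHZ * (overclock / 100.0)

for mhz in (1700, 1800, 1900, 2000, 2100):
    per_min = estimated_sec_per_min(mhz)
    hours_for_movie = per_min * 120 / 3600.0   # a 2-hour (120-minute) source
    print("%d MHz: %.0f s per source minute, ~%.2f h for a 2-hour movie" % (mhz, per_min, hours_for_movie))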
  2. Banned
    Join Date: Oct 2003
    Location: Americas
    Encoding speed goes hand in hand with your CPU's (and only your CPU's) number-crunching ability, mainly its FPU (floating-point) performance. The new, improved Celerons compare much better against the main processor line than the old ones did. Server CPUs are the kings, like the Xeon and Athlon MP (Opteron, etc.); these are best suited for encoding as they can easily handle the load. It has to do with the internal architecture plus the benefit of the large cache. Cache size alone is not indicative of encoding performance; it has to go along with processing power (more cache does not mean a faster CPU). Rendering, on the other hand, is done by a dedicated video processor and does not affect encoding in any way.
  3. Member sacajaweeda
    Join Date: Sep 2003
    Location: Would I lie?
    You could always switch to a faster encoder. TMPGEnc does a good job but it encodes like old people ****.
    "There is nothing in the world more helpless and irresponsible and depraved than a man in the depths of an ether binge, and I knew we'd get into that rotten stuff pretty soon." -- Raoul Duke
  4. You have several different questions there. If you are capping VHS and TV, then I assume you will not be using DV for those. For analog caps, the built-in video has no capture capability and is therefore useless.

    The capture card, which is not necessarily the video card, can be highly variable. AVI or DV caps need large, fast hard drives, preferably at least two on separate channels. Real-time (ATI) encodes need a reasonably fast processor; hardware encoders do not.

    A fast processor, plenty of fast RAM, and fast dual hard drives will speed up encoding, so get the best bang for the buck in all areas and consider overclocking possibilities. The video card is absolutely meaningless for encoding, unless it has very poor drivers or some kind of weird config utility running constantly. Rendering for video games is a completely different matter; I would advise against trying for high performance in both areas on the same machine.

    You have to balance needs with available cash. Dual hard drives give a lesser benefit for encoding but a major benefit for capture, and doing both bumps them up in priority. Going to the fastest processor your current mobo supports may or may not be worth it, and a new board and processor paired with a single, old, slow hard drive may be expensively disappointing.
  5. Member
    Join Date: Apr 2004
    Location: The bottom of the planet
    Originally Posted by sacajaweeda
    TMPGEnc does a good job but it encodes like old people ****.
    Thanks for that lovely mental image. :P

    All kidding aside, the faster the processor, the faster the encode. It's that simple.
    "It's getting to the point now when I'm with you, I no longer want to have something stuck in my eye..."
  6. I'd like to chime in here. I had integrated video on my motherboard, 768MB of RAM, and an Athlon XP 2000+ CPU. Encoding a half-hour sitcom to an XVCD with TMPGEnc took around 4-1/2 hours. Then I decided to get a dedicated video card (nVidia MX440 with 64MB DDR). To my astonishment, my encoding now takes 2-3/4 hours! Nothing else has changed, and the results are pretty consistent; I've tried this with many encodes since. I'm guessing that with the integrated video, the video processing was stealing cycles from the CPU, slowing down the encodes. With the dedicated card, the video card renders the video itself without having to steal cycles from the CPU, streamlining the process. So for me, the biggest bang for the buck in increasing encoding speed was getting a dedicated video card instead of relying on the integrated video on the motherboard.

    Just my 2 cents.
  7. Originally Posted by micmel2
    Then I decided to get a dedicated video card (nVidia MX440 with 64MB DDR). To my astonishment, my encoding now takes 2-3/4 hours! Nothing else has changed, and the results are pretty consistent; I've tried this with many encodes since. I'm guessing that with the integrated video, the video processing was stealing cycles from the CPU, slowing down the encodes. With the dedicated card, the video card renders the video itself without having to steal cycles from the CPU, streamlining the process.
    Now there's a post I didn't want to see. Along with a theory that may not be too far off the mark. Maybe I'll pick up another NVidia card and try it. I got one cheap a few months ago...
  8. Banned
    Join Date: Oct 2003
    Location: Americas
    Originally Posted by micmel2
    I'd like to chime in here. I had integrated video on my motherboard, 768MB of RAM, and an Athlon XP 2000+ CPU. ....
    ...
    Just my 2 cents.
    That may be the case; onboard video was never a preferred solution. Still, your speed gain is unusually high, and I'd say it's only possible if the picture was being rendered (previewed) at the same time. I wonder whether turning that option off (disabling the preview in TMPGEnc) would yield the same result. If not, then it could have been a corrupted import filter that you replaced when you installed the new video card's driver. That seems like the most likely scenario to me.
  9. Member
    Join Date: Jan 2004
    Location: Finland
    Integrated video generally doesn't steal CPU time; it steals memory cycles, since the memory is shared. The only exception to this that I know of is one of the SiS chipsets with integrated graphics but dedicated graphics memory.

    Adding another memory module is likely to give a (small) performance boost if there is currently only one, since you would then have dual-channel memory (rough peak-bandwidth numbers are sketched below). Of course, the chipset has to support dual channel, the modules have to be the same size, and so on. To simplify, 2 x 256MB performs slightly better than a single 512MB module, especially with integrated graphics.

    Integrated graphics do require CPU cycles when you are playing something, since integrated solutions generally don't have programmable shaders and the like; they need the CPU for work that would normally be done by the graphics processor on a discrete card.
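
    If you want a rough feel for why the second channel matters when graphics memory is shared, here is a small peak-bandwidth sketch in Python. It assumes a standard 64-bit DDR module and only gives theoretical peaks; real-world throughput is lower, so treat the numbers as illustration only.

Code:
# Theoretical peak memory bandwidth, single vs. dual channel.
# Assumes a standard 64-bit DDR module; real throughput is lower than peak.

BUS_WIDTH_BITS = 64        # one DDR channel
TRANSFERS_PER_CLOCK = 2    # "double data rate"

def peak_bandwidth_mb_s(memory_clock_mhz, channels):
    """Theoretical peak bandwidth in MB/s for the given clock and channel count."""
    bytes_per_transfer = BUS_WIDTH_BITS / 8.0
    return memory_clock_mhz * TRANSFERS_PER_CLOCK * bytes_per_transfer * channels

clock_mhz = 166.0  # DDR333 base clock, used here only as an example
print("Single channel: ~%.0f MB/s" % peak_bandwidth_mb_s(clock_mhz, 1))
print("Dual channel:   ~%.0f MB/s" % peak_bandwidth_mb_s(clock_mhz, 2))
# With integrated graphics sharing this bus, the extra dual-channel headroom
# reduces contention with the encoder; the CPU itself is no faster.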
  10. Well, curiosity got the better of me on this one. I dug out the nVidia GeForce MX440 (64MB) and ran a few simple tests. I don't have the full results with me, so I'm going from memory. The first set of tests was with my integrated graphics. I used a 3-minute AVI clip at 704x480 with HuffYUV compression.

    1. A conversion with CCE, one-pass CBR (7000 kbps), writing the elementary video stream to a separate drive freshly formatted with a 4096-byte block size

    2. A "Save as AVI" operation in VirtualDub using HuffYUV compression in fast recompress mode

    3. A "Save as AVI" operation in VirtualDub using HuffYUV compression in full processing mode

    I then repeated the exact same tests with the nVidia card installed. By the way, if you have integrated graphics, DON'T disable the onboard video memory setting until you change the display adapter setting to "external AGP" or the like. That will save you digging through the manual for the "clear CMOS" jumper.

    I saved an average of only 5 seconds across the tests listed. I know that's 5 seconds per 3 minutes, which works out to about 200 seconds for a 2-hour movie, but it's not enough to make me plunk down another $50 for a video card. I kind of wish I'd had the same result as the other poster who saved a bunch of time. Anyone have a 1.7 GHz P4 they'd like to give me so I can test the Celeron/P4 theory? :P
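
    For anyone who wants to repeat this kind of before/after comparison, the timing harness below (Python) shows the general idea. The encoder command lines are placeholders only, not the actual CCE or VirtualDub invocations; substitute whatever encode job you really run.

Code:
# Simple before/after timing harness for an A/B hardware test.
# The commands below are placeholders -- substitute the encode job you actually run.
import subprocess
import time

def time_command(cmd):
    """Run a command and return its wall-clock time in seconds."""
    start = time.time()
    subprocess.call(cmd)
    return time.time() - start

CLIP_MINUTES = 3.0  # the 3-minute HuffYUV test clip

before_s = time_command(["encoder.exe", "job_integrated_graphics.cfg"])  # placeholder command
after_s = time_command(["encoder.exe", "job_dedicated_card.cfg"])        # placeholder command

saved_per_min = (before_s - after_s) / CLIP_MINUTES
print("~%.1f s saved per source minute, ~%.0f s over a 2-hour movie" % (saved_per_min, saved_per_min * 120))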
  11. I wonder if any video card would do for you. They still make some 32MB video cards; most are SiS-based.
  12. Banned
    Join Date: Oct 2003
    Location: Americas
    That's more like it and is fully in line with common knowledge.


