VideoHelp Forum




Results 1 to 14 of 14
  1. I'm using TMPGEnc to encode to SVCD/VCD. I noticed that my CPU is running at 100%, so I tried to figure out how I could speed up the encoding (without overclocking). Then I came across distributed computing (using multiple PCs for one task over a local area network or the internet). Does anybody know how I can do this, or have any tips on websites where this is explained?

    Tnx
  2. AFAIK TMPGEnc were going to bring out a server version, which, as you describe, is a way of distributing the encoding task around a network of computers. But I think they may have canned that??
    Maybe they believe that with the general increase in CPU speeds (3.0 GHz and up) it's no longer really necessary?
    Corned beef is now made to a higher standard than at any time in history.
    The electronic components of the power part adopted a lot of Rubycons.
  3. Member | Join Date: Feb 2001 | Location: Macondo, Puerto Rico
    I would love to see that working!

    Can you imagine TMPGEnc running on five 3.0 GHz PCs? It would be really, really fast!
  4. I would like the front end, flexibility and controls of TMPGEnc and the encoding speed of CCE. I want faster than real time. I want 5.0 GHz hyperthreaded. I also want a lottery win.
    Corned beef is now made to a higher standard than at any time in history.
    The electronic components of the power part adopted a lot of Rubycons.
  5. Member DJRumpy | Join Date: Sep 2002 | Location: Dallas, Texas
    CCE already encodes faster than realtime. Ya gotta love it... 8)
    Impossible to see the future is. The Dark Side clouds everything...
  6. I can never get it to work ...
    Corned beef is now made to a higher standard than at any time in history.
    The electronic components of the power part adopted a lot of Rubycons.
  7. But it can always work faster. Theoretically speaking, my hard disk can write at almost 50 MB/sec, my network speed is 100 Mbit, and my housemates and I have a 100 Mbit switch (not a fancy 3Com, but still 100 Mbit). That equals 12.5 MB/sec (100/8 = 12.5). Say it runs at about 80% capacity; that would mean 10 MB/sec. When maxing out my CPU with TMPGEnc, my PC spits out 300 MB per 50 minutes, so 0.1 MB/sec!! Can you see the potential power in distributed computing? It means that over the network, 100 equal PCs (Athlon 1400/256 DDR) could help me encode (10 MB / 0.1 MB = 100)... and my hard disk would still have time to scratch its ass.
    Try to beat that with CCE's "faster than real-time encoding".
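    A quick back-of-the-envelope check of those figures, as a minimal Python sketch (the numbers come straight from the post above; the 80% network efficiency is the poster's own guess):
Code:
# Back-of-the-envelope check of the throughput claim in the post above.
# All figures are taken from the post; nothing here is measured.

disk_write  = 50.0             # MB/s the hard disk can write (listed for completeness; not the bottleneck)
net_raw     = 100 / 8          # 100 Mbit switch -> 12.5 MB/s
net_usable  = net_raw * 0.8    # assume ~80% of that in practice -> 10 MB/s
encoder_out = 300 / (50 * 60)  # 300 MB of MPEG in 50 minutes -> 0.1 MB/s

print(f"usable network bandwidth: {net_usable:.1f} MB/s")
print(f"one encoder's output rate: {encoder_out:.1f} MB/s")
print(f"encoders the network could carry: {net_usable / encoder_out:.0f}")
    Note that this only counts the finished MPEG coming back over the wire, not the source video that would first have to go out to each helper machine.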
  8. Member DJRumpy | Join Date: Sep 2002 | Location: Dallas, Texas
    Numbers like this always look good on paper. You also need to take into account the computing overhead of managing the entire operation, and then there's the timing issue. This isn't a huge task with many different aspects to process; it's very linear, requiring a small set of instructions on one frame at a time. Something like this wouldn't seem to scale well, since each handoff to another PC would give it very little to do. The timing would also be critical: since the encoding has to be done in a linear (serial) fashion, the process would have to wait while each PC performing a task completed its part.

    The tasks assigned to PCs that do use this sort of system are more parallel in nature, so timing isn't as much of an issue. It would be kind of like lining up 20 people to say a 20-word sentence, each person speaking only their own word. You can guess that it would be much faster to just say the sentence yourself rather than letting each person blurt out their word, hopefully without stepping on anyone else. This type of task just isn't suited to timesharing.

    It's a nice idea in theory, but I wouldn't hold your breath. The relatively slow communication over a network to another PC is inherently too slow for this type of work, whereas a dual-processor system has no such problem, since communication and timing are much less of an issue.
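    A rough way to see that point about handoff overhead, as an illustrative sketch (the per-frame timings below are made up for illustration, not measurements of TMPGEnc or any real network):
Code:
# Illustrative model: farming encoding out frame-by-frame to N machines,
# where every frame still pays a fixed handoff cost over the network.
# The timings are invented; only the shape of the curve matters.

def speedup(n_machines, encode_per_frame=0.5, handoff_per_frame=0.05, frames=90000):
    """Local encoding time divided by distributed time in this simple model."""
    local = frames * encode_per_frame                       # one machine, no network
    distributed = frames * (encode_per_frame / n_machines   # the encoding is shared...
                            + handoff_per_frame)            # ...but the handoff never is
    return local / distributed

for n in (1, 2, 5, 10, 100):
    print(f"{n:>3} machines -> {speedup(n):.1f}x")
    With those made-up numbers the speedup flattens out below 10x no matter how many machines join in, because every frame still pays the handoff cost.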
    Impossible to see the future is. The Dark Side clouds everything...
  9. Hmmm, so what you're saying is I shouldn't bother trying this, unless I were some diehard Java programmer, which I am not. It would be much faster to cut the file into pieces, send them to the other PCs, encode on each PC, then send everything back and merge it together... Damn, I wish I knew Java...
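    For what it's worth, the cut/farm-out/merge idea doesn't need Java. Here is a minimal Python sketch under big assumptions: the worker host names, the pre-split segment files, and encode_segment.py are all hypothetical stand-ins, and it assumes the machines are reachable over SSH.
Code:
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical worker machines reachable over SSH.
WORKERS = ["pc-bedroom", "pc-attic", "pc-livingroom"]

def encode_on(host, segment):
    # Copy a segment over, run the (hypothetical) encoder remotely, fetch the result.
    subprocess.run(["scp", segment, f"{host}:/tmp/{segment}"], check=True)
    subprocess.run(["ssh", host, f"python /tmp/encode_segment.py /tmp/{segment}"], check=True)
    output = segment + ".mpg"   # assumes encode_segment.py writes <input>.mpg
    subprocess.run(["scp", f"{host}:/tmp/{output}", output], check=True)
    return output

# Segments produced beforehand by whatever splitter you use (names are made up).
segments = ["movie.part0.avi", "movie.part1.avi", "movie.part2.avi"]

# One thread per worker; each blocks on its remote encode.
with ThreadPoolExecutor(max_workers=len(WORKERS)) as pool:
    results = list(pool.map(encode_on, WORKERS, segments))

print("Encoded pieces ready to merge:", results)
    The real catch is joining the encoded pieces back together cleanly afterwards; the sketch leaves that part to you.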
  10. Member DJRumpy | Join Date: Sep 2002 | Location: Dallas, Texas
    Doing that would make your task parallel in nature, which does make sense.
    Impossible to see the future is. The Dark Side clouds everything...
  11. Member | Join Date: Jun 2002 | Location: MO, US
    It would probably also help with really heavy filtering, since that's something else that can run in parallel (to some extent anyway; it would have to be pipelined, which brings up some basic timing issues).

    Even if you did know Java, it wouldn't help you much here, aside from making it easier to learn another language. Java in its current incarnation is a little too inefficient; you could easily end up without enough memory or CPU left for the actual encoding program.
  12. Member flaninacupboard | Join Date: Aug 2001 | Location: Northants, England
    A chain of machines would be interesting: machine one would rip a DVD and run VFAPI (or something similar) on the file, machine two would decode the MPEG-2 and run filters such as clip, resize, noise reduction and IVTC, and machine three would encode the prepared frames.
    Actually, while we're on the subject, I was curious: is the "fake" .avi created by VFAPI only usable on the local machine, or can that .avi be used over a network? If so, I would love to do it. I have two networked machines, one of which I use as a capture box and the other as an encoder. It'd be lovely to set my capture machine up with an NR filter for my encoder; it'd save me bags of time!!
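    A minimal sketch of how such a chain could be glued together with watch folders on a network share, just to make the idea concrete. The share paths and apply_filters.py are hypothetical placeholders, and nothing here answers whether VFAPI's fake .avi actually works across the network:
Code:
import shutil
import subprocess
import time
from pathlib import Path

# Hypothetical shared folders: the previous machine in the chain drops files
# into INBOX; this machine drops its results into OUTBOX for the next one.
INBOX = Path(r"\\capture-pc\share\ripped")
OUTBOX = Path(r"\\encoder-pc\share\to_encode")

def run_stage():
    """Middle machine of the chain: filter whatever shows up, then pass it on."""
    while True:
        for job in sorted(INBOX.glob("*.avi")):
            filtered = INBOX / (job.stem + ".filtered.avi")
            # Stand-in for this machine's real work (decode + resize/NR/IVTC).
            subprocess.run(["python", "apply_filters.py", str(job), str(filtered)], check=True)
            shutil.move(str(filtered), str(OUTBOX / filtered.name))
            job.unlink()      # done with this source file
        time.sleep(10)        # poll the inbox every few seconds

if __name__ == "__main__":
    run_stage()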
  13. Originally Posted by flaninacupboard
    is the "fake" .avi created by VFAPI only usable on the local machine, or can that .avi be used over a network?
    This is an excellent question... anyone know the answer?
  14. Member SaSi | Join Date: Jan 2003 | Location: Hellas
    Reading through the posts, a funny idea just came to mind.

    TMPGEnc can receive frames from a frameserver and encode them to an MPEG stream, right?

    We do (logically) split our video into chapters, no?

    What if we gave the chapter-split information, before encoding, to a program that scanned the AVI file in N parallel reads and sent the frames to N client programs on N separate machines, each acting as a frameserver for a local instance of TMPGEnc?

    The result would be that each machine encodes different chapters of the same movie. We want I-frames at chapter breaks anyway, so having separate streams to concatenate would not be bad.

    This process does not have to include audio. Audio can be handled (and is handled better) separately and encoding audio is 100 times faster than video.

    This approach scales well as long as there are enough chapters to go around.

    Network bandwidth should not be saturated, as only one machine would be sending and the others receiving. Even then, teaming dual 100 Mbit Ethernet cards in full-duplex mode should give a 200 Mbit channel for sending data (more than enough, I guess).

    Dispatching one frame per machine in round-robin fashion would balance the load (assuming each peripheral encoding machine has an equally fast CPU).

    This whole thing might be a nice project using several 1U P4 machines. I think VirtualDub could be the best candidate workhorse to modify to do just that.
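    As a rough sketch of the coordinator's job (chapter frame ranges, host names and output naming are all made up, and this is not a real TMPGEnc frameserver protocol), something like this would hand whole chapters to the encoding boxes in round-robin order:
Code:
from itertools import cycle

# Hypothetical encoding machines and chapter boundaries (start_frame, end_frame).
machines = ["encoder1", "encoder2", "encoder3"]
chapters = [(0, 1800), (1800, 4200), (4200, 7650), (7650, 9900)]

# Round-robin assignment: chapter i goes to machine i mod N.
assignments = list(zip(chapters, cycle(machines)))
for (start, end), host in assignments:
    print(f"{host}: encode frames {start}-{end} -> chapter_{start:07d}.m2v")

# Each machine would then frameserve just its range into its own TMPGEnc
# instance, and the coordinator would concatenate the chapter_*.m2v files
# in chapter order once they all come back.
    And since, as noted above, chapter breaks should start on I-frames anyway, concatenating the pieces back in order is the least painful part of the scheme.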

    Any ideas? Am I stupid?
    The more I learn, the more I come to realize how little it is I know.


