VideoHelp Forum




  1. Intel's new Sandy Bridge chips debuted yesterday, some even at reasonable prices.
    One of their main selling points seems to be the built-in hardware dedicated to video transcoding.
    With promised speed-ups of 200-300% now (and probably more in the bag), is this family of chips set to blow AMD out of the water?
    It offers more bang per watt than anything before, and is generally faster with better built-in graphics.
    Is this the promised Holy Grail for all media mavens,
    or can they sell it to Joe Sixpack as well as Gary Geek?

    What are your thoughts?


  2. I'll be interested to know what all the Gary Geeks on this site think (i.e. all the people here smarter than me).
  3. If it's built in ... throw it ... As for those percentages, it ain't going to happen, according to recent benchmark scores.
    Last edited by Bjs; 5th Jan 2011 at 11:31.
  4. It's just another step in the progression of CPU/GPU design. AMD has similar CPU/GPU designs coming. There is one huge shortcoming right now: the "Quick Sync" functions (GPU-based encoding) are only available if the graphics chip is enabled, i.e., you have to be using the onboard graphics to use Quick Sync GPU-based encoding. And, as usual, the GPU-based encoding isn't the highest quality. Anandtech has the best review I've seen:

    http://www.anandtech.com/show/4083/the-sandy-bridge-review-intel-core-i7-2600k-i5-2500...-2100-tested/9

    But there was something severely wrong with their CUDA (NVIDIA) encoder. I've been playing around with MediaCoder on a GTX 460 and haven't seen anything as bad as their sample images.
    Last edited by jagabo; 5th Jan 2011 at 11:50.
  5. Anything that's "all-in-one" I have no interest in. This is going to be a consumer thing. Get an HP desktop from Best Buy with this and they'll promote it till the end of days as the greatest. Good for average use, but as it stands, what would you rather do: game and encode with a built-in GPU, or have a decent video card do it? I prefer the video card a thousand times to one.
  6. Originally Posted by Moontrash View Post
    what would you rather do: game and encode with a built-in GPU, or have a decent video card do it? I prefer the video card a thousand times to one
    The Sandy Bridge GPU is encoding several times faster than a GTX 460.
  7. Sandy Bridge, in my opinion, is as significant a step forward as protected memory mode, DDR, SSE, or 32-bit computing were when they were first introduced.

    The Quick Sync engine, in fixed-function mode, offers image quality almost on par with software-only encoding, at speeds that smoke CUDA-based encoders. Considering that the engine is also fully programmable, I think it spells the end of GPU-powered encoders before they were ever able to fully realize their potential.

    When you mate that with the improved IPC, the new 256-bit AVX SIMD capabilities, the improved caches, and the fact that Intel already has an SDK available for download that allows programmers to start making use of the new abilities, the performance potential of Sandy Bridge is staggering.
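    To give a rough idea of what 256-bit AVX buys you, here is a minimal sketch of my own (not Intel sample code) that adds two float arrays eight elements per instruction; where SSE handles four floats at a time, AVX handles eight. Build with something like gcc -mavx -O2:

    /* Hypothetical illustration: add two float arrays 8 elements at a time
       using 256-bit AVX registers. */
    #include <immintrin.h>
    #include <stdio.h>

    /* assumes n is a multiple of 8 to keep the sketch short */
    static void add_avx( const float *a, const float *b, float *c, int n )
    {
        for ( int i = 0; i < n; i += 8 ) {
            __m256 va = _mm256_loadu_ps( a + i );                /* load 8 floats from a */
            __m256 vb = _mm256_loadu_ps( b + i );                /* load 8 floats from b */
            _mm256_storeu_ps( c + i, _mm256_add_ps( va, vb ) );  /* c[i..i+7] = a + b */
        }
    }

    int main( void )
    {
        float a[16], b[16], c[16];
        for ( int i = 0; i < 16; ++i ) { a[i] = (float)i; b[i] = 2.0f * i; }
        add_avx( a, b, c, 16 );
        printf( "%f %f\n", c[0], c[15] );                        /* expect 0.0 and 45.0 */
        return 0;
    }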

    My guess: SB will be the CPU that leads to Nvidia filing for bankruptcy within 18 months and AMD abandoning the discrete consumer market to concentrate exclusively on APUs (accelerated processing units).
  8. I don't think anybody is saying the built-in (SNB) graphics are cutting edge, merely 2x (or 3x) as good as they were previously, thus shrinking the market for low-end discrete cards. As for the built-in transcoding, yes, I like the look of it, but it all crucially depends on software support. I would hate to see open-source software fail to unlock its potential. Maybe wait and see how it stacks up against Bulldozer, what software makes use of it, and what the prices come down to.
    If they produced a version that fit in a Socket 775 and bundled it with either Call of Duty or Windows 7, I might come out of the starting blocks running, but for now it seems new CPU = new motherboard + new memory, and I'm not holding my breath.

    It is true that video transcoding and gaming are probably the two things that tax most people's CPUs the most, and this covers both arenas while still keeping power consumption low. Maybe they see the CPU as becoming less relevant as everything drifts off into the cloud and off the desktop onto pads and phones.
    AMD and Nvidia will still be here in two years, IMO.
  9. Originally Posted by RabidDog View Post
    I don't think anybody is saying the built-in (SNB) graphics are cutting edge, merely 2x (or 3x) as good as they were previously, thus shrinking the market for low-end discrete cards.
    Intel has gone from being four generations behind AMD/NVIDIA graphics to only two generations behind. And that makes them good enough for casual gamers now.

    Regarding the inability to use Quick Sync if the graphics device isn't actively running, maybe someone will write a null graphics driver that will enable the graphics device without actually having it connected to a monitor or used by the desktop.

    Originally Posted by RabidDog View Post
    Maybe they see the CPU as becoming less relevant
    Of course Intel and AMD would happily have gone on increasing clock speeds if they could. I don't know if you remember, but at the introduction of the P4, Intel was talking about how the design would scale to 10-20 GHz. As we all know, they hit a wall with power consumption at 4 GHz. So the only choice has been to go wider rather than faster. Since few desktop applications scale well beyond 2 or 4 cores, they now have another problem: what to do with all the die space* that's becoming available as fab dimensions continue to shrink. Hence the movement of everything else onto the "CPU".

    * There's a practical limit to how small you can make a CPU because you need space for the physical connections to the outside world. So you can't just make zillions of tiny little CPUs on each wafer.
  10. Originally Posted by jagabo View Post
    Regarding the inability to use Quick Sync if the graphics device isn't actively running, maybe someone will write a null graphics driver that will enable the graphics device without actually having it connected to a monitor or used by the desktop.
    A solution already exists:

    http://www.anandtech.com/show/4113/lucid-enables-quick-sync-with-discrete-graphics-on-sandy-bridge

    In all honesty, unless you are one of these morons who spends $500 on a graphics card to play the latest FPS clone on a 20" monitor, all while sitting bolt upright in an office chair, I don't see any need for a discrete graphics card when using an SB CPU.

    Maybe I'm getting older, but I would much rather play a boxing, racing, third-person or sports game on a 40+" HDTV, at 1080p, with an Xbox 360 or PS3, from a nice comfortable reclining chair, than spend the same amount or more building a high-end gaming PC to play the latest glorified Quake 3 rip-off.

    But that's just me.

    As a side note, as I was reading the reviews I couldn't help but think that with all the execution units on the new SBs, they would be perfect for a console, where a programmer would be free to code a game that makes full use of the AVX and SSE registers, as well as the programmable portion of Quick Sync, the ALUs, and the built-in graphics chip. Well, evidently at least one of the Valve developers had the same idea:

    http://kotaku.com/5726127/half+life-devs-say-new-processor-will-bring-console+like-experience-to-pcs
  11. Originally Posted by RabidDog View Post
    AMD and Nvidia will still be here in two years, IMO.
    Of course AMD will be around; I just don't think they'll still be making discrete graphics cards for the desktop. Workstation cards are another matter.

    As for Nvidia, I think they already see the writing on the wall; they seem to be betting on ARM-based CPUs:

    http://www.anandtech.com/show/4099/nvidias-project-denver-nv-designed-high-performance-arm-core

    Considering that SB's graphics capabilities are pretty decent with just 12 EUs (execution units, roughly analogous to CUDA cores or stream processors) in the HD 3000 graphics chip, no dedicated RAM, and a clock of 850 MHz, and that by the end of the year we'll have the SB refresh with most likely better graphics, inside of 18 months I expect that we'll see integrated graphics on par with current high-end Nvidia/AMD cards, and probably AVX extended to integer math.

    At that point I can't see anyone spending a dime on a discrete GPU, kind of like how onboard audio has gotten good enough that almost no one uses a discrete sound card anymore.
  12. I expect that we'll see integrated graphics on par with current high-end Nvidia/AMD cards
    A hundred bucks says you're wrong on this; Intel wants to squeeze as much profit as possible from each generation (= slow improvements) and can't afford to kill off Nvidia.
  13. Member wulf109
    Overclocking is a thing of the past with Sandy Bridge, except for K-series CPUs. The standard CPUs have almost no overclocking headroom (only a very small percentage is possible). Note that P-series (P67) motherboards do not enable Quick Sync; only boards with the integrated graphics active (H67) do.
  14. Originally Posted by RabidDog View Post
    I expect that we'll see integrated graphics on par with current high-end Nvidia/AMD cards
    A hundred bucks says you're wrong on this; Intel wants to squeeze as much profit as possible from each generation (= slow improvements) and can't afford to kill off Nvidia.
    In what way would Intel suffer if they killed off Nvidia? If anything, Intel's business would benefit. Just look at what Intel has done: starting with Nehalem, Intel yanked Nvidia's x86 chipset license so they couldn't make i7-compatible chipsets. Nvidia has tried suing and has looked into the southbridge market, all to no avail.

    Now Intel introduces technology that effectively renders software-based encoding a thing of the past: an encoding engine that encodes 15 Mb/s 1080p content at 100 fps, 4 Mb/s 720p at 200 fps, and 1.5 Mb/s 480p at 265 fps, and that is capable of encoding and decoding VC-1, MPEG-2, and H.264.

    Intel has basically decided to remove one of the two killer apps (gaming being the other) from the equation that drives end users to upgrade CPUs. I'd say that's all the proof needed that Intel is trying to kill Nvidia.
  15. Originally Posted by deadrats View Post
    Now Intel introduces technology that effectively renders software-based encoding a thing of the past: an encoding engine that encodes 15 Mb/s 1080p content at 100 fps, 4 Mb/s 720p at 200 fps, and 1.5 Mb/s 480p at 265 fps, and that is capable of encoding and decoding VC-1, MPEG-2, and H.264.
    I'm sure Nvidia and ATI thought the same thing with CUDA and Stream. Unfortunately, with their GPU-based encoders the quality turned out to be awful, because it transpired that GPUs aren't that good at performing the operations required for video encoding, no matter how fast they operate.
  16. Originally Posted by mh2360 View Post
    I'm sure Nvidia and ATI thought the same thing with CUDA and Stream. Unfortunately, with their GPU-based encoders the quality turned out to be awful, because it transpired that GPUs aren't that good at performing the operations required for video encoding, no matter how fast they operate.
    There are so many things wrong with what you said that it's difficult to know where to begin.

    1) AMD, who as you know bought ATI, has done everything they can to ensure that GPU-powered encoding would be a stillborn technology. ATI, as far back as the 9700 Pro AIW, had hardware MPEG-2 encoding built into their graphics cards, and I remember testing Avivo encoding back in the 9600 Pro days (and it was quite good). As soon as AMD bought ATI's assets, they put GPGPU technology on the back burner so it wouldn't cannibalize their CPU business.

    2) CUDA-based encoders do not have "awful" quality. The reference CUDA H.264 encoder has sub-par B-frames at low bit rates; once you stop trying to bitrate-starve encodes by using 4 Mb/s for 720p and above, the quality is practically indistinguishable at Blu-ray-standard bit rates. For lower-bit-rate encoding, try MainConcept's CUDA-powered encoder or the Elemental encoder included with Adobe Premiere (or even Sony's AVC encoder).

    3) The biggest thing holding back adoption of CUDA-powered encoders is that GPUs are difficult to program for general-purpose computing, not that they "aren't any good at performing the operations required for video encoding" (what a silly thing to think, let alone say).

    That's neither here nor there. Intel is being much smarter about Quick Sync than Nvidia was with CUDA: whereas CUDA has been around for three years and we barely have a handful of apps that use the technology, Intel contacted all the major players to ensure that its technology would be available in all the major apps as soon as SB hit retail; an Intel developer even offered to help the x264 devs modify x264 so that it can benefit from Quick Sync. There's even a plug-in already available for Adobe Premiere Pro:

    http://software.intel.com/en-us/blogs/2011/01/07/intel-quick-sync-video-encoder-plug-i...umer-products/

    If you download the Intel Media SDK and look through the documentation and code samples, you will see that using the built-in fixed-function capabilities is almost trivial for a good programmer, and even the fully programmable capabilities are easy to use, especially compared to CUDA.
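    To give a rough feel for it, here is a bare-bones sketch of my own (not an Intel sample; the exact field setup varies by SDK version, and surface allocation plus the actual frame loop are elided) that opens a hardware session and initializes an H.264 encoder through the Media SDK's plain C API:

    /* Minimal, hypothetical Quick Sync setup via the Intel Media SDK C API. */
    #include <mfxvideo.h>
    #include <string.h>
    #include <stdio.h>

    int main( void )
    {
        mfxSession session;
        if ( MFXInit( MFX_IMPL_HARDWARE_ANY, NULL, &session ) != MFX_ERR_NONE ) {
            fprintf( stderr, "no Quick Sync capable device/driver found\n" );
            return 1;
        }

        mfxVideoParam par;
        memset( &par, 0, sizeof( par ) );
        par.mfx.CodecId                 = MFX_CODEC_AVC;      /* H.264 */
        par.mfx.RateControlMethod       = MFX_RATECONTROL_VBR;
        par.mfx.TargetKbps              = 8000;
        par.mfx.FrameInfo.FourCC        = MFX_FOURCC_NV12;
        par.mfx.FrameInfo.ChromaFormat  = MFX_CHROMAFORMAT_YUV420;
        par.mfx.FrameInfo.PicStruct     = MFX_PICSTRUCT_PROGRESSIVE;
        par.mfx.FrameInfo.Width         = 1920;
        par.mfx.FrameInfo.Height        = 1088;               /* 16-aligned */
        par.mfx.FrameInfo.CropW         = 1920;
        par.mfx.FrameInfo.CropH         = 1080;
        par.mfx.FrameInfo.FrameRateExtN = 30;
        par.mfx.FrameInfo.FrameRateExtD = 1;
        par.IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY;

        mfxStatus sts = MFXVideoENCODE_Init( session, &par );
        printf( "encoder init returned %d\n", (int)sts );
        /* ...allocate mfxFrameSurface1 buffers, then loop over frames with
           MFXVideoENCODE_EncodeFrameAsync() + MFXVideoCORE_SyncOperation()... */

        MFXVideoENCODE_Close( session );
        MFXClose( session );
        return 0;
    }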

    Eighteen months and it's curtains for Nvidia; bookmark this thread.
  17. As a CPU, Sandy Bridge looks impressive enough, but as the primary GPU where games are concerned, it seems to fare no better than an Nvidia GeForce 210 or an ATI HD 5450, and games are what sell GPUs, not video encoding.

    I'm sure Nvidia and ATI aren't losing any sleep over something which in most cases performs no better than their budget-range hardware.

    I'll still be here in 18 months as I'm sure will Nvidia and ATI.

    http://www.techpowerup.com/reviews/Intel/Core_i5_2500K_GPU/6.html
    Last edited by mh2360; 9th Jan 2011 at 03:29.
  18. Originally Posted by mh2360 View Post
    As a CPU, Sandy Bridge looks impressive enough, but as the primary GPU where games are concerned, it seems to fare no better than an Nvidia GeForce 210 or an ATI HD 5450, and games are what sell GPUs, not video encoding.

    I'm sure Nvidia and ATI aren't losing any sleep over something which in most cases performs no better than their budget-range hardware.

    http://www.techpowerup.com/reviews/Intel/Core_i5_2500K_GPU/6.html
    I'll grant you that the integrated GPU in SB lags behind discrete graphics cards, but each GPU core on SB only has 6 execution units (the benchmarks floating around are for the dual-GPU parts). That means that with 12 execution units (and no dedicated frame buffer) Intel can match the performance of a GeForce 210 discrete graphics card with 16 cores and 512 MB to 1 GB of dedicated GDDR.

    What do you think will happen with the product refresh at the end of this year, or the new CPU next year? Don't you think by then Intel will have 24 or 36 execution units?

    The truth is that while PC gaming may still be popular, and console gaming most likely will never truly kill it off, most people don't buy the high-end $300-$500+ video cards; they buy the $200 video cards. And that's all Intel has to offer: performance comparable to a mid-range discrete graphics solution. If in one year you can buy a CPU with an integrated GPU that offers gaming performance comparable to a GTX 460 or even a GTS 450, would you still spend money on a discrete graphics card?
  19. Next year low end graphics cards will have twice the performance too.
    Last edited by jagabo; 9th Jan 2011 at 06:32.
  20. Present high-end video cards expend over a billion transistors on the GPU. There is simply no way Intel will devote that amount of die space to graphics. They will (at some point) upgrade the graphics in SB to better than low end and approach mid-range; then they will stop. There will be no justification for them to do any better (GPU-wise): top-end GPU cards are about 1% of the total discrete-card market, and discrete cards are only used in about 10% of computer systems.
    Intel would suffer a regulatory backlash if they were seen to (quickly) kill off Nvidia, or AMD.
    Desktop CPUs are becoming irrelevant; phones are where the growth and future markets lie. Nvidia is doing OK in mobile devices.
  21. Originally Posted by deadrats View Post
    Intel already has an SDK available for download that allows programmers to start making use of the new abilities
    How does that compare to Nvidia's offering?
    http://developer.nvidia.com/object/cuda_3_2_downloads.html
  22. Let's not forget that Nvidia and ATI also supply GPUs to the PS3, Xbox 360 and Wii, and will no doubt provide for the next generation of games consoles.

    Originally Posted by intracube View Post
    Originally Posted by deadrats View Post
    Intel already has an SDK available for download that allows programmers to start making use of the new abilities
    How does that compare to Nvidia's offering?
    http://developer.nvidia.com/object/cuda_3_2_downloads.html
    AMD have made the "Stream" SDK available as well.

    http://developer.amd.com/gpu/atistreamsdk/pages/default.aspx
  23. Originally Posted by intracube View Post
    Originally Posted by deadrats View Post
    Intel already has an SDK available for download that allows programmers to start making use of the new abilities
    How does that compare to Nvidia's offering?
    http://developer.nvidia.com/object/cuda_3_2_downloads.html
    CUDA is a pain in the ass to program with. It has C-like syntax, but you have to code the GPU portion separately, compile it with nvcc, and then call that part from within your main program; you need to allocate memory in the frame buffer, copy data back and forth, and synchronize the GPU threads with the CPU threads. It's just a pain and a half.

    OpenCL is in some ways better, but for an experienced Windows programmer, using DX9 and DX10 will prove more efficient.

    That's what makes Intel's implementation so much better: you can access the fixed-function features as you would any other function within a C/C++ program, and you can also use them from within a DX application, so coding is much simpler.
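    To illustrate the kind of boilerplate I mean, here is a minimal sketch of my own (standard CUDA runtime API, not taken from any encoder): even a trivial kernel needs a device allocation, a host-to-device copy, an explicit launch, a sync, and a copy back before you can look at the result. Compile with nvcc:

    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void scale( float *data, float factor, int n )
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if ( i < n )
            data[i] *= factor;                                     // each thread scales one element
    }

    int main( void )
    {
        const int n = 1024;
        float host[1024];
        for ( int i = 0; i < n; ++i ) host[i] = (float)i;

        float *dev = NULL;
        cudaMalloc( (void**)&dev, n * sizeof(float) );             // allocate in the card's memory
        cudaMemcpy( dev, host, n * sizeof(float), cudaMemcpyHostToDevice );   // host -> device

        scale<<< (n + 255) / 256, 256 >>>( dev, 2.0f, n );         // launch a grid of 256-thread blocks
        cudaDeviceSynchronize();                                   // wait for the GPU to finish

        cudaMemcpy( host, dev, n * sizeof(float), cudaMemcpyDeviceToHost );   // device -> host
        cudaFree( dev );

        printf( "%f\n", host[10] );                                // expect 20.0
        return 0;
    }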
    Last edited by deadrats; 9th Jan 2011 at 15:39.
  24. For the record, I retract my prediction of Nvidia's demise: Intel has agreed to pay Nvidia $1.5 billion over the next five years in licensing fees:

    http://www.tomshardware.com/news/geforce-technology-patent-Licensing-agreement,11971.html
    Last edited by deadrats; 10th Jan 2011 at 19:43.
  25. Originally Posted by deadrats View Post
    Intel has agreed to pay Nvidia $1.5 billion over the next five years in licensing fees
    Interesting development.
  26. Originally Posted by deadrats View Post
    3) The biggest thing holding back adoption of CUDA-powered encoders is that GPUs are difficult to program for general-purpose computing, not that they "aren't any good at performing the operations required for video encoding" (what a silly thing to think, let alone say).
    You are clearly unfamiliar with CUDA programming. Whoever said that is exactly right. Many video codecs have been designed specifically for the CPU and therefore are not very "parallelizable". If an algorithm is not easily parallelizable then it WILL run slower on the GPU.
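    A back-of-the-envelope way to see the limit (plain Amdahl's law; my own illustration, nothing codec-specific): if only a fraction p of the work can be spread across workers, the speedup is capped at 1 / ((1 - p) + p/N) no matter how many GPU threads N you throw at it, and since individual GPU threads are much slower than a CPU core and you also pay for host-device transfers, a mostly serial algorithm can easily come out behind.

    /* Amdahl's law sketch: maximum speedup vs. parallel fraction p for N workers.
       Illustration only -- real encoders also pay transfer and launch overhead. */
    #include <stdio.h>

    static double amdahl( double p, double n )
    {
        return 1.0 / ( (1.0 - p) + p / n );
    }

    int main( void )
    {
        const double n = 480.0;                      /* e.g. hundreds of GPU threads */
        const double p[] = { 0.50, 0.90, 0.99 };
        for ( int i = 0; i < 3; ++i )
            printf( "parallel fraction %.2f -> max speedup with %g workers: %.1fx\n",
                    p[i], n, amdahl( p[i], n ) );
        /* prints roughly 2.0x, 9.8x and 82.9x */
        return 0;
    }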
  27. Originally Posted by jgreer View Post
    You are clearly unfamiliar with CUDA programming. Whoever said that is exactly right. Many video codecs have been designed specifically for the CPU and therefore are not very "parallelizable". If an algorithm is not easily parallelizable then it WILL run slower on the GPU.
    I'll hit you with the same example I hit everyone with who makes this idiotic claim: how do you account for the performance advantage GPUs hold in DNA sequencing and data encryption/decryption?

    The GT200 GPU is composed of double-pumped ALUs (they run at twice the core clock rate); they smoke in linear tasks.

    Here's a code snippet, coded for both CPU and GPU execution. If you have the programming chops, feel free to compile it, run it, and analyse the execution times for yourself. If you don't know enough about programming to compile the code and test it yourself, have a nice big cup of STFU and don't try to participate in a conversation you clearly lack the technical know-how to comprehend:


    // CPU code
    #include <stdlib.h>

    void add_matrix( float* a, float* b, float* c, int N ) {
        int index;
        for ( int i = 0; i < N; ++i )
            for ( int j = 0; j < N; ++j ) {
                index = i + j*N;
                c[index] = a[index] + b[index];
            }
    }

    int main() {
        const int N = 1024;
        float *a = (float*)malloc( N*N*sizeof(float) );   // allocate (and fill) the matrices before timing
        float *b = (float*)malloc( N*N*sizeof(float) );
        float *c = (float*)malloc( N*N*sizeof(float) );
        add_matrix( a, b, c, N );
        free( a ); free( b ); free( c );
    }

    // GPU code (compile with nvcc)
    #include <cuda_runtime.h>

    __global__ void add_matrix( float* a, float* b, float* c, int N ) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        int j = blockIdx.y * blockDim.y + threadIdx.y;
        int index = i + j*N;
        if ( i < N && j < N )
            c[index] = a[index] + b[index];
    }

    int main() {
        const int N = 1024, blocksize = 16;
        float *a, *b, *c;                                 // device buffers
        cudaMalloc( (void**)&a, N*N*sizeof(float) );      // fill from the host with cudaMemcpy before timing
        cudaMalloc( (void**)&b, N*N*sizeof(float) );
        cudaMalloc( (void**)&c, N*N*sizeof(float) );
        dim3 dimBlock( blocksize, blocksize );
        dim3 dimGrid( N/dimBlock.x, N/dimBlock.y );
        add_matrix<<<dimGrid, dimBlock>>>( a, b, c, N );
        cudaDeviceSynchronize();                          // wait, so a timer around the launch is meaningful
        cudaFree( a ); cudaFree( b ); cudaFree( c );
    }
  28. Matrix addition is an example of something that is embarrassingly parallel. You've shown me nothing. To recap: I never claimed GPUs don't have performance advantages over CPUs for extremely parallelizable algorithms. My claim was that GPUs perform worse than a CPU when an algorithm is not very parallelizable. The fact that you posted this example tells me that you are missing the point.
  29. Originally Posted by jgreer View Post
    Matrix addition is an example of something that is embarrassingly parallel. You've shown me nothing. To recap: I never claimed GPUs don't have performance advantages over CPUs for extremely parallelizable algorithms. My claim was that GPUs perform worse than a CPU when an algorithm is not very parallelizable. The fact that you posted this example tells me that you are missing the point.
    Oh, I got your point, and it's wrong. Video encoding is not "inherently linear": you're dealing with thousands of frames, millions of pixels, and countless calculations (care to guess how many SAD calculations are made during a typical encode?). It's not rocket science to split the encoding task up by GOP sequences, assign a thread to process each, and then write the results back to the target file.

    x264 has a hard-coded limit of 128 threads; clearly the developers are able to think of ways to parallelize the encoding process. GPUs, as I've said in the past, are more akin to RISC processors in their architecture than to CISC; all modern codecs have been written by programmers who cut their teeth on the x86 architecture, and they aren't used to thinking about things in more RISC-esque terms.

    The code examples I gave you hardly qualify as "embarrassingly parallel"; if you have the programming background, put in a timer function and check the execution time of each.

    And again, data encryption/decryption is a very linear task, and GPUs shine in these applications, as they do in BLAST, so clearly the argument that GPUs are only good for massively parallel tasks is incorrect.

    But perhaps the best example I can think of is the latest version of TMPGEnc. I just finished numerous encoding tests for a review I was putting together, and the CUDA encoder smokes the x264 encoder (even on its fastest settings), with superior visual quality and only about 20% GPU utilization. Considering this is a GTS 250 with 128 cores, 20% is only about 25 cores being used, and since this is the GT200 chip, that means it's less than one warp doing the work.

    It hardly requires "embarrassingly parallel" programming to beat a software encoder.
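    To make the SAD point concrete, here is a rough sketch of my own (not taken from any encoder) of how the sum of absolute differences for one 16x16 block maps onto a GPU, with one thread per candidate motion vector; a real motion-estimation kernel is far more elaborate, but the arithmetic is exactly the kind of independent, data-parallel work a GPU eats up. Launch it with something like block_sad<<<(num_candidates + 255)/256, 256>>>(...):

    // Each thread computes the SAD of one 16x16 block of the current frame against
    // one candidate position in the reference frame. Frames are 8-bit luma with
    // 'stride' bytes per row; the caller guarantees candidates stay inside the frame.
    __global__ void block_sad( const unsigned char *cur, const unsigned char *ref,
                               int stride, int block_x, int block_y,
                               const int2 *candidates, int num_candidates,
                               unsigned int *sad_out )
    {
        int c = blockIdx.x * blockDim.x + threadIdx.x;        // one candidate MV per thread
        if ( c >= num_candidates )
            return;

        int rx = block_x + candidates[c].x;                   // top-left of the reference block
        int ry = block_y + candidates[c].y;

        unsigned int sad = 0;
        for ( int y = 0; y < 16; ++y )
            for ( int x = 0; x < 16; ++x ) {
                int a = cur[ (block_y + y) * stride + (block_x + x) ];
                int b = ref[ (ry + y) * stride + (rx + x) ];
                sad += (unsigned int)( a > b ? a - b : b - a );   // absolute pixel difference
            }
        sad_out[c] = sad;                                     // the host (or another kernel) picks the minimum
    }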
  30. Originally Posted by deadrats View Post
    split the encoding task up by GOP sequences, assign a thread to process each
    You can't do that. The working set becomes too large: a single 1080p frame in 4:2:0 is about 3 MB, so even a short GOP per thread means tens of megabytes per thread, far beyond any CPU cache. You'll start cache thrashing and all the gains from parallelism will go down the drain.


