Unquestionably, much of the buzz at SC'10 in New Orleans was about the performance HPC users can attain with GPUs in hybrid computing architectures, much as Blue Gene was the buzz over the past few years. Now, GPUs from AMD/ATI and Nvidia are taking hybrid compute offload out of the realm of proprietary architectures and putting it in the hands of almost every workstation user in the world.
When used correctly (for applications that fit within the physical constraints of the GPU), the performance gains can be staggering. Conservative estimates for currently ported solvers show speedups of 4-5x, with many reaching 7-9x over CPU-only performance. With numbers like that, the HPC community takes notice!
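For readers new to the model, those speedups come from moving data-parallel inner loops onto the GPU. Below is a minimal, hypothetical CUDA sketch of that kind of offload, a SAXPY-style kernel; the names and sizes (saxpy, n, a, x, y) are illustrative only and not drawn from any of the ported solvers mentioned above.

// A minimal, hypothetical sketch of GPU offload in CUDA.
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *x = 0, *y = 0;
    cudaMalloc((void **)&x, bytes);             // allocate device memory
    cudaMalloc((void **)&y, bytes);
    // ... host-to-device copies with cudaMemcpy omitted for brevity ...
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);  // offload the loop to the GPU
    cudaDeviceSynchronize();                    // wait for the kernel to finish
    cudaFree(x);
    cudaFree(y);
    return 0;
}

The win only materializes when the working set fits in GPU memory and there is enough arithmetic per byte moved over PCIe to hide the copy cost, which is exactly the "physical constraints" caveat above.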
The surprise in the GPU market is just how one-sided the supplier landscape seems to be. At SC'10, processor giant Intel only very quietly started talking about its accelerator, codenamed "Knights Ferry" (see http://www.intel.com/pressroom/archive/releases/20100531comp.htm). The technology was being demoed in the Intel booth, but with clear implications that it won't be generally available for quite some time.
Meanwhile, Nvidia is becoming a juggernaut. Its much-bemoaned CUDA platform is being upgraded quickly, with new releases that take the criticisms to heart and address them one by one. By sometime next year, CUDA 4.0 should bring major enhancements to the compilers and debuggers ISVs rely on, helping them produce better, higher-performing code for Nvidia's crop of new GPGPUs.
Alas, OpenCL seems to be playing step-sister to CUDA's Cinderella. No major ISVs have announced, much less released, applications ported to leverage the AMD/ATI FireStream series. What's on the horizon for AMD? The Fusion architecture looks terribly compelling, but the question is: will Nvidia already own the GPGPU market by the time a real alternative is available?