Last, but definitely not least in my four-part cloud use case blog series is the high-performance computing (HPC) cloud. This type of private cloud use case allows an organization to automatically change compute node personalities based on workload requirements, enabling resource harvesting within the datacenter and cloud bursting to external resources or the public cloud. This resource maximization results in near-100 percent utilization of the infrastructure with a cost-effective means to scale capacity to meet dynamic business demand.
A perfect example of an HPC cloud in use is CERN (European Organization for Nuclear Research), one of the world’s largest and most respected centers for scientific research. CERN depends on computing power to ensure that 17,000+ scientists and researchers in 270 research centers in 48 countries can collaborate on a global scale to solve the mysteries of matter and the universe. To accelerate their research, CERN requires a cost-effective shared computing infrastructure that can support any combination of RHEL, KVM, Xen, and Hyper-V for a widely diverse set of scientific applications on x86 servers. Previously, their clusters could not flex or be re-provisioned automatically, creating idle resources as users waited for their environments to become available.
With Platform ISF, CERN is able to eliminate silos by re-provisioning thousands of nodes and VMs based on application workload requirements. Platform ISF provides the self-service capability that allows scientists to directly choose their application environments and manage their own workloads, increasing user efficiency and reducing IT management costs.
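To make the idea concrete, here is a minimal, hypothetical sketch of the kind of workload-driven re-provisioning a resource manager like Platform ISF automates: idle nodes are reassigned ("re-personalized") to whichever application environment has the deepest queue of pending work. The function names and data shapes are illustrative assumptions, not Platform ISF's actual API.

```python
def rebalance(nodes, pending):
    """Reassign idle nodes to the busiest application environments.

    nodes:   {node_id: env_name or None if idle}
    pending: {env_name: number of queued jobs}
    Returns a new node-to-environment assignment (inputs are not mutated).
    """
    assignment = dict(nodes)
    queue = dict(pending)
    idle = [n for n, env in nodes.items() if env is None]
    for node in idle:
        # Give the next idle node to the environment with the most queued work.
        busiest = max(queue, key=queue.get)
        if queue[busiest] == 0:
            break  # nothing left queued; remaining nodes stay idle
        assignment[node] = busiest
        queue[busiest] -= 1  # assume one node absorbs one queued job
    return assignment
```

For example, with one KVM node busy, two nodes idle, and two KVM jobs queued, both idle nodes would be flipped into the KVM environment rather than sitting unused. Real systems layer policies (priorities, reservations, bursting thresholds) on top of this basic matching loop.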
As a scientific research organization, CERN keeps a close watch on expenses, and with Platform ISF it is delivering more services within a fixed budget, with performance doubled on many applications. Dr. Tony Cass, Group Leader, Fabric Infrastructure, CERN, told me that "if we can move 150 machines [from a total of 200] out of this environment by improving utilization, we can either save some significant power and cooling costs, or we can redeploy the machines to the batch cluster without increasing the hardware budget." This type of resource maximization is what is needed to get the most return on investment from an HPC cloud deployment.
This concludes my four-part blog series that outlines all four private cloud use cases for companies evaluating internal shared infrastructures (should you want to review some of my previous posts, just click on the links below):
1. Infrastructure Cloud
2. Application Cloud
3. Test/Dev Cloud