Putting the Cloud in HPC

It has been an interesting week for analyst commentary and coverage evaluating the advances and adoption of cloud within HPC environments.

Three different reports on cloud adoption crossed my desk in the last week or so, and I wanted to provide some context and thoughts on what they mean to Platform and our customers. I have also provided links to the material should you be interested in learning more (Note: full access to the reports is restricted to subscribers of the various market research firms).

IDC’s Top 10 HPC Predictions

  • The IDC HPC team held an excellent webinar last week where they outlined their predictions for HPC in 2010. One of the key items on the list is the adoption of cloud within HPC. The webinar noted that most of the action is in private cloud environments, with HPC-specific solutions emerging. This mirrors what we see in our customer base. It is interesting to see how some of the largest compute infrastructures used for research, design and analytics are embracing the notion of private cloud to make their HPC environments more dynamic and elastic.

IDC Link on “Cloud Computing comes to HPC”
  • Steve Conway from IDC wrote a quick summary of the activity he is seeing in the market. Front and center is the CERN cloud use case being built around Platform ISF and Platform ISF Adaptive Cluster, enabling CERN to evolve its existing HPC environment into a dynamic cloud environment. It is another good indication that the market is starting to embrace the cloud and take advantage of these concepts in HPC environments.

451 Group User Deployment Report on CERN
  • William Fellows of The 451 Group also just published a very detailed summary of the CERN use case that describes it as a “private cloud for scientists at scale.” His report shows some of the benefits that a large compute environment like CERN can achieve by evolving its HPC environment to the cloud. From our perspective, this has been a very exciting and interesting deployment, since CERN is leading the industry with its advanced HPC cloud infrastructure, especially considering its scale of tens of thousands of servers.

The common theme across all these reports is the real adoption of cloud infrastructures and, more specifically, the use cases supporting private cloud within HPC environments. As these examples show, users can receive real benefit from a more dynamic and flexible HPC environment, enabling them to maximize their resources for their most demanding and business-critical research, design and analytical applications.

Top 10 Predictions: #1 Right? CA & 3tera

CA's acquisition of 3tera speaks to the inevitable consolidation that I predicted (I know, a pretty easy prediction) in my November blog post.

Congratulations to both companies.

Platform’s Favourite blogs… What about yours?

We thought we’d share a quick list of some of the industry blogs the Platform Computing team reads and considers to be the most insightful and informative in the blogosphere. We surveyed the staff and here are our collective favorites in our markets. In the spirit of sparking conversation, we encourage your recommendations on others we should check out, as well. In no particular order, here they are:

1) InfoWorld’s Cloud Computing Blog, Dave Linthicum
Dave is undeniably one of the world’s foremost authorities on Cloud and SOA. His InfoWorld blog offers insightful and pithy commentary on the advances and pitfalls of cloud computing, along with news and market trends.
http://infoworld.com/blogs/dlinthicum

2) Forrester Blog for Infrastructure and Operations Professionals, James Staten
As a principal analyst, James tackles a range of topics and trends relevant to infrastructure and operations professionals, including cloud computing infrastructures, cloud services, virtualization, and M&A activity in the cloud and virtualization marketplace.
http://blogs.forrester.com/it_infrastructure/

3) BriefingsDirect, Dana Gardner
Dana is an independent analyst and forward-thinking blogger and podcaster who writes about software productivity trends and new IT business growth opportunities. Dana is president and principal analyst at Interarbor Solutions and also a founding member and a weekly contributor to the Gillmor Gang podcast.
http://blogs.zdnet.com/Gardner or http://www.briefingsdirect.com/

4) Gartner Blog Network, Thomas Bittman
As a vice president and distinguished analyst at Gartner, Thomas coined the term “real-time infrastructure” and is an expert on cloud computing, virtualization and infrastructure evolution. His blog focuses exclusively on cloud computing, the future of infrastructure, virtualization and the market players.
http://blogs.gartner.com/thomas_bittman/

5) GigaOm
GigaOm founder Om Malik, along with his colleague bloggers Stacey Higginbotham and Derrick Harris, keeps close track of the news, trends, policy and events happening in Silicon Valley and the greater tech industry.
http://gigaom.com/

6) Information Week’s Plug Into The Cloud
This aggregate site pulls relevant cloud news and opinions from some of the IT industry’s best-known bloggers and journalists, including Charles Babcock and John Foley.
http://www.informationweek.com/cloud-computing/

7) Javelin Strategy & Research Blog
Various analysts including Mary Monahan, James Van Dyke, and Bruce Cundiff at Javelin cover banking technology and financial services.
http://www.javelinstrategy.com/category/blog/

8) CNET – The Wisdom of Clouds, James Urquhart
James’ 20 years of experience in distributed systems development and deployment have provided him with a specialization in SOA, cloud computing, and virtualization. He’s currently market manager for the Data Center 3.0 strategy at Cisco Systems and a CNET Blog Network author. The advice and opinions expressed on all things cloud in his blog are strictly his own.
http://news.cnet.com/wisdom-of-clouds/

9) ZDNet – Virtually Speaking, Dan Kusnetzky
Dan jointly pens Virtually Speaking with Paula Rooney, an IT industry journalist and blogger with more than 15 years of experience. Dan is currently vice president of research operations at The 451 Group and brings the wisdom of his nearly 40 years in the IT industry to his posts.
http://blogs.zdnet.com/virtualization/?tag=trunk;content

10) Thinking Out Cloud, Geva Perry
Geva describes himself as “a software executive, cloud computing pundit and advisor to companies in enterprise software and cloud computing” and provides great technical insight and guidance on the cloud.
http://gevaperry.typepad.com/

11) Canadian Business - On investing, markets and personal finance, Larry MacDonald
As a former economist, Larry blogs about economic trends, stock market news and investment topics relevant to Canada. He’s authored several business books, including corporate biographies of Nortel and Bombardier, and manages his own portfolio.
http://blog.canadianbusiness.com/category/larry-macdonald/

12) Cloudscaling, Randy Bias
Randy is the founder and chief technologist of Cloudscaling, a boutique cloud strategy consulting firm that advises Fortune 500 companies on their internal cloud initiatives. He has driven innovations in infrastructure, IT, operations, and 24×7 service delivery since 1990.
http://cloudscaling.com/blog

13) CNET - The Pervasive Datacenter, Gordon Haff
Gordon Haff is a principal IT advisor at market research firm Illuminata, focusing on computing infrastructure issues: enterprise servers, datacenter interconnects, operating systems, server blades and appliances, virtualization and the evolution of computing architecture. His Pervasive Datacenter blog discusses what’s hot and what’s not in enterprise computing.
http://news.cnet.com/pervasive-datacenter/

14) Gartner Blog Network, Kristin Moyer
Kristin Moyer is a research director for Gartner in Industry Advisory Services/Banking and Investment Services, blogging on a variety of relevant issues and trends for banking and investment services organizations.
http://blogs.gartner.com/kristin_moyer/

15) Virtualization.com
Created as an aggregate resource site on virtualization, the site also covers cloud computing and related technologies, featuring news, guest blog posts, interviews, white papers, people, jobs, photos, exclusive videos and reports about partnerships, M&A, events and rumors from the industry. Virtualization expert Tarry Singh is one of the guest editors.
http://www.virtualization.com

16) OnStrategies Perspectives, Tony Baer
Currently an analyst with Ovum, Tony has been following the IT industry for nearly two decades. His blog focuses on macro-level trends and events influencing innovation in the areas of SOA and Web Services, BPM, BI, Cloud Computing, SaaS, and software development.
http://www.onstrategies.com/blog/

Path to Cloud for UK Gov't IT 2010 Conference

I had the honor of presenting on the "Path to Cloud" twice at the UK Government IT 2010 conference held in London on January 28th. My session centered on how government sectors are looking at cloud computing to gain flexibility and agility in order to save time and money, but to do so in a secure and reliable way. Each time I presented, quite a number of other sessions (six in total, I believe) were running simultaneously, but this particular topic was nonetheless standing room only both times.



This indicates that there is, indeed, a huge amount of interest in what cloud computing is and how to achieve it. To an even larger degree, the interest was about saving money and operating more efficiently and flexibly while maintaining reliability, availability, security, compliance and audit capabilities.

The key message I delivered was that the cloud is not just a technology-driven solution but also very much a business-driven one. The business drivers include implementing large-scale strategies, understanding the operational and organizational psychology within organizations, and getting all of the stakeholders to the table.

For instance, to really become efficient, the cloud must be implemented across as large a set of resources as possible, meaning that multiple departments of the government must be willing to come together and use the same facility. Governments considering cloud infrastructures must also make sure that the data center facilities people are fully participating and can host this type of system; it will probably be quite a bit denser in terms of heat, power and weight than what they traditionally manage. It is also necessary to get the application and database people into the room to understand the implications of this new methodology and how they should design and architect their solutions to make best use of it. For example, do they handle all H/A capabilities in the application space (a la Google, with fail in place), or do they do things more traditionally, requiring dual PSUs, N+1 or better fans, multiple servers for H/A, and so on?
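
To make the application-level option concrete, here is a minimal Python sketch of the "fail in place" pattern: the application reschedules work on another node when one dies, rather than depending on redundant hardware inside any single machine. The node names and failure model are invented purely for illustration.

```python
import random

class NodeFailure(Exception):
    """Raised when the node running a task dies mid-flight."""

def run_on(node, task):
    # Stand-in for real remote execution; fails randomly to
    # simulate commodity hardware dying in place.
    if random.random() < 0.2:
        raise NodeFailure(node)
    return f"{task} completed on {node}"

def run_with_failover(task, nodes):
    """Application-level H/A: on failure, reschedule the task on
    the next node and leave the dead one in place for later repair."""
    for node in nodes:
        try:
            return run_on(node, task)
        except NodeFailure as failure:
            print(f"{failure} failed; rescheduling {task}")
    raise RuntimeError(f"all nodes exhausted for {task}")

print(run_with_failover("render-job-42", ["node-a", "node-b", "node-c"]))
```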

And possibly the most important aspect: how do you make the systems, network, storage, hypervisors, OSes and so on all seem completely replaceable without disruption? This is where having a tool such as Platform Computing's ISF (Infrastructure Sharing Facility) can be so powerful. If you can change any of the pieces, including adopting new-generation VM capabilities (or, more realistically, running a heterogeneous setup consisting of multiple types), or change the application itself (via a front-end submission facility that does all of the right things for the user, regardless of the underlying application), then you begin to truly exploit the capabilities of a virtualized data center.
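
As a rough illustration of that replaceability, the sketch below puts the underlying pieces behind a common interface so any of them can be swapped without disturbing the layer users see. The class and backend names are hypothetical and do not reflect Platform ISF's actual API.

```python
from abc import ABC, abstractmethod

class ComputeBackend(ABC):
    """Anything that can host a workload: bare metal, one of several
    hypervisor types, or a next-generation VM platform. Swapping a
    backend should not disturb the user-facing layer above it."""

    @abstractmethod
    def start(self, image: str) -> str:
        """Boot the given image and return an instance identifier."""

    @abstractmethod
    def stop(self, instance_id: str) -> None:
        """Tear the instance down."""

class XenBackend(ComputeBackend):
    def start(self, image):
        return f"xen:{image}"
    def stop(self, instance_id):
        pass  # destroy the domain

class BareMetalBackend(ComputeBackend):
    def start(self, image):
        return f"metal:{image}"
    def stop(self, instance_id):
        pass  # power-cycle and return the node to the pool

def submit(job_image: str, backend: ComputeBackend) -> str:
    # The front-end submission facility does the right thing for the
    # user regardless of what runs underneath.
    return backend.start(job_image)

print(submit("cfd-solver-image", XenBackend()))
```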

-Phil Morris
CTO, HPC BU
Platform Computing

Red Bull Racing Heading for #1

Last week I had the opportunity to join a scrum of reporters on a tour of Red Bull Racing’s design facility outside of London, England. For our non-racing audience, Red Bull Racing is one of about a dozen Formula 1 racing teams that compete globally. Last year the Red Bull Racing team came in 2nd overall, and this year they appear to be the team to beat. Their facility in England employs about 600 designers, engineers and other staff dedicated to building really fast cars. This year they will design, test and build three new Formula 1 cars to compete in the 2010 season.

Platform Computing is one of Red Bull Racing’s Innovation Partners, and together our two organizations are developing ways to get more out of the computing environment Red Bull Racing uses to design its cars. The governing body of Formula 1 racing, the FIA, has put restrictions on how much compute horsepower teams can use, so getting the most out of their computing grid gives Red Bull Racing a huge competitive advantage.

One of the biggest uses of Red Bull’s grid is to run software simulations of fluid dynamics tests (called Computational Fluid Dynamics, or CFD). Traditionally these tests would be performed in wind tunnels with scale models of the cars; moving them onto compute grids dramatically reduces the cost of testing and vastly increases the number of designs and conditions that can be tested. This is the same technology that many of our other customers, including GM and Audi, use to design their cars to make them more streamlined and consequently more energy efficient.

In the case of Red Bull Racing, CFD is used for very complex simulations. Besides reducing the profiles of their Formula 1 cars, they are also trying to make the cars hug the ground more tightly for better traction. This is done by designing the car like an upside-down wing. Oh, and did I mention that companies like Airbus and Boeing use CFD software (also run on Platform grids) to design jets? Same deal, only with the “lift” working in reverse.
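
For readers who want the physics made concrete, both cases reduce to the standard aerodynamic lift equation, F = ½ρv²AC_L, with the sign of the coefficient flipped for an inverted wing. Here is a small illustrative Python calculation; the coefficient and geometry values are invented for the example, not Red Bull's actual figures.

```python
# Downforce from an inverted wing, using the standard lift equation:
#   F = 0.5 * rho * v**2 * A * C_L
# An aircraft wing has a positive C_L (lift); an F1 wing is mounted
# upside-down, so its effective C_L is negative (downforce).

AIR_DENSITY = 1.225  # kg/m^3 at sea level

def aero_force(speed_ms, area_m2, lift_coefficient):
    """Aerodynamic force in newtons; negative means downforce."""
    return 0.5 * AIR_DENSITY * speed_ms ** 2 * area_m2 * lift_coefficient

# Hypothetical numbers, purely for illustration:
wing_area = 1.0        # m^2
inverted_cl = -2.5     # inverted wing -> negative coefficient
speed = 250 / 3.6      # 250 km/h converted to m/s

downforce = -aero_force(speed, wing_area, inverted_cl)
print(f"Downforce at 250 km/h: {downforce:.0f} N")
```
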
I must admit I’m more of an NFL fan, but this year I will be watching Formula 1 racing and cheering for the Red Bull Racing – Platform Computing team, naturally.



Stephen Mounsey of Scientific Computing World admires the surreal design of one of Red Bull Racing’s famous Formula 1 cars.
(Photo Credit: Tom Zsolt)

Cloud Evolution or Revolution

I recently participated in an intriguing podcast discussion with a flagship Platform customer, Tony Cass at CERN, IDC analyst Stephen Conway, and renowned media pundit and blogger Dana Gardner, entitled CERN’s Evolution to Cloud Computing Portending a Revolution in Extreme IT Productivity.

As the European Organization for Nuclear Research, CERN has a hefty job: solving extreme IT problems while conducting intense scientific research that explores the origins of the universe. As one of the key contributors to the creation of the World Wide Web, the organization is, unsurprisingly, at the cloud computing frontier, having evolved high-performance computing (HPC) from clusters, to grid, and now to cloud. Dana said it best in our conversation: “In many ways CERN is quite possibly the New York of cloud computing. If cloud can make it there, it can probably make it anywhere.” That is mostly because CERN deals with unbelievably large datasets, massive throughput requirements, a global workforce, finite budgets, and an emphasis on standards and openness. We’ve been working with CERN since 1997, when the organization deployed its Platform LSF grid infrastructure. Today, it is piloting the world’s largest cloud computing environment for scientific collaboration using Platform’s private cloud management and HPC cloud-enabling software solutions, Platform ISF and Platform ISF Adaptive Cluster.

The conversation was fascinating, and I want to highlight one important part of the podcast discussion here: the evolutionary technology trend from clusters to grids to clouds and the revolutionary effect the technology is having on IT productivity and system management.

The transformation of historically static clusters and grids into highly dynamic cloud resources is what is forcing CIOs and their IT departments to rethink their IT architectures and, most importantly, the management layer. Cloud technology alone is nothing revolutionary; it’s the associated remodeling of the architecture that rings a revolutionary bell.

At Platform we see the interaction between distributed, shared computing infrastructures and new technologies such as virtualization as requiring management. Just as clusters and grids required workload scheduling and management, so do the expansive farms of virtual and physical servers in the cloud require management to share resources efficiently and make those resources dynamic. It is with the help of a technology-agnostic management layer that heterogeneous environments are united across a wide range of hardware, operating systems and virtual machines to create a highly scalable, on-demand cloud infrastructure with self-service and provisioning. And it is this self-service in the remodeled IT architecture that will drive the IT productivity revolution, making IT a truly competitive service.
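
As a back-of-the-envelope illustration of that dynamic sharing, here is a toy Python sketch in which groups lease servers from one common pool instead of each owning a static, siloed cluster. It is purely conceptual and not modeled on any Platform product interface.

```python
from collections import deque

class SharedPool:
    """Toy model of dynamic resource sharing: groups draw servers
    from one common pool on demand and return them when done."""

    def __init__(self, servers):
        self.free = deque(servers)
        self.leases = {}  # server -> owning group

    def acquire(self, group):
        if not self.free:
            # A real manager would queue the request or preempt
            # a lower-priority workload here.
            raise RuntimeError("pool exhausted")
        server = self.free.popleft()
        self.leases[server] = group
        return server

    def release(self, server):
        self.leases.pop(server, None)
        self.free.append(server)

pool = SharedPool([f"node-{i}" for i in range(4)])
borrowed = pool.acquire("physics")   # physics borrows a node...
pool.release(borrowed)               # ...and returns it for others to use
```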

I encourage you to listen to the whole podcast on CERN’s Evolution to Cloud Computing Portending a Revolution in Extreme IT Productivity. It provides great insight into how CERN is managing its own architectural and productivity revolution along with great industry insight from IDC and additional end user case studies.

Thank you Tony and Steve for participating and sharing your valuable insight and experiences! I look forward to the comments and conversations that will result from our discussion.