Survey Reveals FS Organizations Making Private Clouds A Priority

Yesterday we announced the findings of our 2nd Annual Financial Services Cloud Computing survey, and it has produced some revealing insights into the mindset of IT professionals. As last year, we interviewed 35 senior financial services IT executives, and the headline result was that 83% are making private clouds their first priority in moving to a cloud model. This is very encouraging, and it echoes what we have heard in conversations with many organizations in different industries around the world.


However, while our study showed that adoption of cloud may have been delayed as a result of the global recession, importantly it has not been dismissed entirely. In fact, our findings indicate that in the last 12 months more organizations have incorporated cloud into their long-term planning, and many are planning to increase investments in grid and HPC in preparation.


One of the highlights of our study was the transformation in knowledge and understanding of cloud that has taken place over the last year. In our 2008 study, we found the biggest hurdles to be cloud's infancy and lack of definition. Twelve months on, the barriers have shifted to more practical issues such as security concerns and network reliability. On these points we would argue that the security protocols that exist in many infrastructures today will apply equally to the cloud, so security should not be a major barrier; as for network availability, that can be addressed through good business continuity planning.


Full adoption of cloud still looks some way off, but banks are preparing themselves through investments in virtualization, grid and HPC. The next stage will be the deployment of cloud management software. When we see an increase in the take-up of this technology, we will know that private clouds have moved from vision to reality.


David Warm

CTO Financial Services

Platform Computing

Clusters, Grids, Clouds … Whatever!

No, I’m not trying to be heretical. The point is that I find it more useful to get beyond the buzz to the real details and lessons behind cloud, utility, grid, or whatever we call the latest computing model.


A few high performance computing (HPC)-style scientific applications are already being discussed for public cloud infrastructure. The European Space Agency's Gaia project, for example, will attempt to map 1% of the galaxy, with its backend map/reduce functionality residing in a cloud; and the US DOE is funding Argonne National Lab's Magellan project to run general scientific computing problems as a test case for HPC in the cloud.


Some cloud concepts are worthwhile in a private HPC setting, especially if your setting is multiple grids, or one big grid divided into many logical grids via queues or workload management. Elasticity (the ability to grow and shrink as needs dictate), self-service provisioning, tracking and billing for chargeback, and the flexibility/agility to handle multiple application requirements (OS and patch levels, reliability/availability, amount of CPU or memory, data locality awareness, etc.) can all improve data center utilization and responsiveness.
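As a rough illustration of the elasticity and chargeback ideas above, here is a minimal Python sketch. Everything in it (the `ElasticPool` class, its policy, the project name) is invented for the example and is not any vendor's API: the pool grows when jobs back up, shrinks when the queue drains, and meters node-hours per project so costs can be charged back.

```python
class ElasticPool:
    """Toy model of an elastic compute pool with per-project metering."""

    def __init__(self, min_nodes=2, max_nodes=16):
        self.min_nodes = min_nodes
        self.max_nodes = max_nodes
        self.nodes = min_nodes
        self.usage = {}  # project name -> accumulated node-hours

    def resize(self, queued_jobs):
        """Grow when jobs are waiting, shrink when the pool is idle."""
        if queued_jobs > self.nodes and self.nodes < self.max_nodes:
            self.nodes = min(self.max_nodes, queued_jobs)
        elif queued_jobs == 0 and self.nodes > self.min_nodes:
            self.nodes = self.min_nodes
        return self.nodes

    def record(self, project, node_hours):
        """Meter usage so costs can be charged back to each project."""
        self.usage[project] = self.usage.get(project, 0.0) + node_hours


pool = ElasticPool()
pool.resize(10)                      # backlog of 10 jobs: pool grows to 10 nodes
pool.record("risk-analytics", 5.0)   # 10 nodes x 0.5 hours of work
pool.resize(0)                       # queue drains: pool shrinks back to minimum
```

A real resize policy would of course account for provisioning latency and job runtimes, but the grow/shrink/meter loop is the essence of the elasticity and chargeback features described above.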


Most HPC grids or clusters already have some of these features, such as self-service and metering, but flexibility and elasticity have not been realizable goals until recently.


Today the primary driver for cloud computing is to maximize the utilization of resources. Typically that is a goal for workload management (WLM) as well, but too often the HPC landscape is carved into silos (either grid-based or queue-based), which can mean that overall utilization sits in the 30-50% range much of the time. Even when it reaches 60-70%, there is quite a bit of backlogged demand that cannot be scheduled, or systems running at low loads while consuming expensive power and cooling, so there is definite room for improvement.
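A back-of-the-envelope calculation shows why silos hurt: even when one silo is saturated and backlogged, its queued jobs cannot use idle nodes sitting in another silo. The numbers below are purely illustrative, not taken from any survey:

```python
# Two silos: one overloaded with a backlog, one mostly idle.
silos = [
    {"name": "eda-grid",  "nodes": 100, "busy": 95, "queued": 40},
    {"name": "risk-grid", "nodes": 100, "busy": 20, "queued": 0},
]

total_nodes = sum(s["nodes"] for s in silos)
total_busy = sum(s["busy"] for s in silos)

# Siloed: queued eda-grid jobs cannot touch risk-grid's idle nodes.
siloed_util = total_busy / total_nodes            # 115/200 = 57.5%

# Pooled: queued work from any silo can fill idle nodes anywhere.
total_queued = sum(s["queued"] for s in silos)
pooled_busy = min(total_nodes, total_busy + total_queued)
pooled_util = pooled_busy / total_nodes           # 155/200 = 77.5%
```

Pooling the two grids lifts utilization by 20 points in this toy case without buying a single node, which is exactly the kind of improvement the text above is pointing at.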


Some of the primary causes of lower-than-ideal utilization within many current grid implementations include:

1. The budgeting process: typically done project by project, with each project buying new equipment for its own grid, or a slice of the existing grid with dedicated queues.
2. Service level requirements: for example, immediate access to critical resources without pre-emption, which means certain machines are left idle much of the time to allow for those jobs.
3. A fixed, rather than dynamically changeable, OS on each node, meaning that applications requiring a different stack cannot reuse the same equipment.


There are also a number of potential roadblocks to getting many HPC applications into the cloud. Let's first examine one of the core assumptions whenever the term cloud comes up: virtualization. Indeed, many out there state that it is a foundational, required stepping stone to cloud computing. I would argue instead that the concepts generally embodied in virtualization technology (agility, flexibility, and non-direct ownership of resources) are the real foundation, rather than any one tool or type of tool. These concepts underpin two of the biggest features of cloud computing: flexibility and elasticity. It is certainly possible to achieve these goals using physical systems instead of virtual machines, just not as easily.

But that raises the question: why don't people just put their HPC applications into VMs and be done with it? The primary reasons center on performance. Most of these applications were created to scale out across hundreds or even thousands of systems in order to achieve one primary goal: get the highest possible performance at the lowest possible price. Quite often that also means very specialized networks (such as InfiniBand (IB) using RDMA instead of TCP/IP), specialized global file systems (such as Lustre), and specialized memory mapping and cache locality, all of which are somewhat or completely disrupted in a VM environment.
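The trade-off above can be condensed into a simple placement rule: jobs that depend on RDMA interconnects, a parallel file system, or careful NUMA pinning stay on bare metal, and everything else can tolerate a VM. This is an illustrative sketch only; the job fields and the decision criteria are invented for the example, not part of any scheduler's real API.

```python
def placement(job):
    """Route a job to bare metal or a VM based on its performance needs.

    Virtualization disrupts RDMA interconnects (e.g. InfiniBand),
    parallel file systems such as Lustre, and careful memory/cache
    locality, so jobs relying on them are kept on physical nodes.
    """
    needs_bare_metal = (
        job.get("interconnect") == "ib-rdma"
        or job.get("filesystem") == "lustre"
        or job.get("numa_pinning", False)
    )
    return "physical" if needs_bare_metal else "vm"


mpi_job = {"interconnect": "ib-rdma", "filesystem": "lustre"}
batch_job = {"interconnect": "tcp"}
```

Here `placement(mpi_job)` yields `"physical"` while the loosely coupled `batch_job` lands on a VM; a production scheduler would weigh many more attributes, but the shape of the decision is the same.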


Several companies are addressing the problem. One example: Platform Computing has recently announced a new capability called HPC Adaptive Clusters, which takes these concepts and applies them equally to physical machines and VMs. The physical instances are multi-boot capable, allowing a smart workload scheduler to dynamically change the landscape as needed, and by policy, to handle various job types, whether they need a flavor of Linux, Windows or whatever (thus our tagline, "Clusters, Grids, Clouds, Whatever"). Additionally, as technology advances, such as Intel's new Nehalem processors with tools and APIs for power capping and socket-level (eventually individual-core) control, these physical boxes can be set up appropriately for the application load while saving power and cooling costs whenever possible.
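As a sketch of the multi-boot idea (the function and data shapes below are hypothetical, not the HPC Adaptive Clusters API): a policy-driven scheduler compares the OS flavors that pending jobs need against what the nodes currently run, and plans re-imaging actions for surplus nodes.

```python
def rebalance(nodes, demand):
    """Plan re-imaging of surplus nodes to match pending demand per OS.

    nodes:  dict node name -> OS image it currently runs
    demand: dict OS image -> number of nodes needed for pending jobs
    Returns a list of (node, new_os) reprovisioning actions.
    """
    actions = []
    have = {}
    for os_name in nodes.values():
        have[os_name] = have.get(os_name, 0) + 1

    # OS flavors that need more nodes than currently exist.
    deficits = {os_name: count - have.get(os_name, 0)
                for os_name, count in demand.items()
                if count > have.get(os_name, 0)}

    for node, os_name in nodes.items():
        if not deficits:
            break
        # A node is surplus if its OS has more nodes than are demanded.
        if have[os_name] > demand.get(os_name, 0):
            target = next(iter(deficits))
            actions.append((node, target))
            have[os_name] -= 1
            deficits[target] -= 1
            if deficits[target] == 0:
                del deficits[target]
    return actions


cluster = {"n1": "linux", "n2": "linux", "n3": "linux", "n4": "windows"}
needed = {"linux": 2, "windows": 2}
plan = rebalance(cluster, needed)   # -> [("n1", "windows")]
```

With three Linux nodes but demand for only two, one node is re-imaged to Windows. A real implementation would also fold in the power-capping policies mentioned above, powering down nodes that no pending demand can use.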


Platform has been a leader in WLM for over a decade, and now it is adding the ability to efficiently combine resources, with dynamic control and distribution, as well as ever smarter workload management. Thus, HPC Adaptive Clusters. Check it out.


Phil Morris


Top 10 Predictions for 2010

As I reflect on my past 6 months at Platform, I thought I would take the risk and let history be the judge. Here goes:

1. Cloud startups consolidate almost as quickly as banks in 2010
2. The Open Cloud Manifesto never gets ratified by the market
3. Customers use private cloud as a leverage point to architect for management control and vendor choice
4. Cloud OS is elusive. Vendor camps emerge and vendor wars escalate.
5. Cloud polarizes the market. Go big or go small become service provider and vendor mantras.
6. A major VM player merges with a major DCA player.
7. Consolidation, consolidation, consolidation is the driving force for customers as they move to cloud computing.
8. Public clouds reduce corporate IT jobs and spend. CIOs lead the charge. Private clouds become THE strategic decision for enterprise IT.
9. Technical, Web2.0, Corporate, Test/Dev and BI clouds emerge and develop independently within the enterprise.
10. Clusters, Grids, Clouds, Whatever. Marketers waste 50% of their time equally on defining or redefining the next evolution of distributed, no I mean utility, no I mean commodity, no I mean elastic computing…….

OK, so most of these were no-brainers (not all, though). I'll try to pin myself down further, but only when pushed.

Platform ISF goes live

Today we announced general availability of Platform ISF and the ability to download the product for free evaluations. Putting this release out demonstrates our commitment to develop and support a broadly applicable horizontal product.

The path to private clouds will not be without bumps, but it's important to see, find and meet those challenges in a wide variety of real-world environments rather than only with select customers in more controlled settings. As organizations architect their future, trials and proof points are the way forward. I encourage the growing Platform community to work together as we make open, solid and smart private cloud computing a reality.

Computerworld Takes on Private Clouds

Some of you may have seen Computerworld’s story last week, “IT shops rally around private clouds.” SAS Institute, an early Platform ISF user, is also highlighted in the story for its innovative use of private clouds.

The article provides one of the most thorough assessments of how organizations are actually using private clouds I’ve seen, and hits on some of the key themes we have been talking about here at Platform since our launch of Platform ISF into beta back in June. With this in mind, I wanted to briefly recap the article’s themes, which I believe are fundamental in understanding the private cloud and are also key issues to keep in mind during implementations:

1. Private clouds address real business needs. Hall references research by Gartner, which concluded that by 2012, IT shops will spend more than half of their cloud investment on private clouds. These numbers reflect real needs within organizations to own and manage their compute resources internally.

2. Vendor lock-in must be avoided. In the story, Hall poignantly notes that, “For the most part, CIOs abhor vendor lock-in. Reliance on a single vendor can be costly and can keep a company from making necessary infrastructure changes.” This is something we recognized early here at Platform, which is why we introduced a heterogeneous cloud management solution to the marketplace with Platform ISF. I believe the concern over vendor lock-in will only continue to grow as more companies experiment with private clouds.

3. Tools to manage private clouds are critical. One of our CEO Songnian Zhou's favorite phrases is "clouds are built, not bought." Building a cloud requires tools to efficiently manage all the various components, which makes management crucial to the success of private cloud implementations.