
The Road to Private Cloud Doesn’t Need to Be Bumpy!

Industry analyst Bill Claybrook recently wrote a lengthy article discussing the “bumpy ride” many companies may face on the road to private cloud implementation. Ironically, the article coincided with the release of Platform’s ISF Starter Pack, which is designed specifically to make that road less bumpy!

The good news, according to Claybrook, is that private clouds are the focus of many IT managers these days, as evidenced by Gartner, which reports that 75 percent of IT managers plan to pursue and invest more in private clouds than in public clouds through 2012.

Claybrook cited a number of challenges that private clouds can pose for companies, which include:
  • Budget – According to Claybrook, private clouds can be expensive.
  • Public Cloud Integration – Claybrook says companies should build private clouds that can be easily moved to a hybrid public model if necessary.
  • Scale – Private clouds often don’t have the same capacity to scale as public ones, says Claybrook.
  • Reconfiguration – Claybrook claims many organizations will need to tear down their infrastructure on the road to private cloud.
  • Legacy hardware – Similarly, Claybrook recommends not repurposing old servers that require manual configuration for private clouds.
  • Technology obsolescence – Claybrook advises that once implemented, private cloud stacks be kept up-to-date with the latest upgrades.
  • Fear of change – Finally, IT teams will need to learn new ways of doing things, Claybrook says. This should be turned into a growth opportunity for your IT staff.
At Platform, we’re fond of saying that private clouds are indeed “built not bought.” But we have to respectfully disagree with Claybrook that the road will be bumpy for most. That’s part of why you go through the “build” process in the first place. Private clouds are not necessarily meant to be an immediate replacement for all IT operations within an organization. Taking an iterative approach to private cloud and “building” it out from department to department as necessary is the best way to implement a private cloud—and not all IT operations may need to be part of the cloud, either.

Making private cloud implementation easier, at low cost and low risk, is exactly why we came up with the Platform ISF Starter Pack. For $4,995, the ISF Starter Pack allows companies to quickly and easily set up a private cloud while avoiding many of Claybrook’s stated pitfalls:

  • Budget – At $4,995, the ISF Starter Pack is an affordable option. Many private cloud options on the market can cost upwards of $50K.
  • Public Cloud Integration – The Starter Pack includes the same functionality as Platform ISF, including the ability to extend the initial deployment to public clouds like Amazon EC2.
  • Scale – Platform’s history in workload and resource management ensures a highly scalable private cloud solution.
  • Reconfiguration – Platform works with what you have in place because it’s implemented as a management layer – there’s no need to rip and replace your entire infrastructure. Most organizations can’t afford this anyway.
  • Legacy hardware – Lather, rinse, repeat the “Reconfiguration” entry...
  • Technology obsolescence – With Platform ISF you get a single product focused on private cloud management that forms the core of your private cloud architecture. This lets you evolve your cloud at your own pace (not another technology supplier’s pace), ensures you retain control, and gives you a partner that will protect you from technology obsolescence.
  • Fear of change – The Starter Pack allows organizations that want to try private clouds to test them at minimal risk without a huge upfront investment. You can ease into the cloud on your own terms.
If you want to test private cloud without changing your entire infrastructure from the get-go, try the ISF Starter Pack—it can help make the road to private cloud less bumpy.

Cloud Use Case Series, Part 4: HPC Cloud

Last, but definitely not least in my four-part cloud use case blog series is the high-performance computing (HPC) cloud. This type of private cloud use case allows an organization to automatically change compute node personalities based on workload requirements, enabling resource harvesting within the datacenter and cloud bursting to external resources or the public cloud. This resource maximization results in near-100 percent utilization of the infrastructure with a cost-effective means to scale capacity to meet dynamic business demand.
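
To make the “node personality” idea more concrete, here is a minimal sketch of the pattern under some assumptions of my own: a hypothetical scheduler loop that repurposes idle nodes toward the busiest queue and reports any shortfall for bursting. The names, thresholds and burst quota are invented for illustration and are not Platform ISF code.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        personality: str  # which cluster image/role the node currently serves

    def rebalance(nodes, pending_jobs, burst_quota=4):
        """Repurpose idle nodes toward the busiest queue; report any shortfall for bursting."""
        demand = {cluster: len(jobs) for cluster, jobs in pending_jobs.items()}
        busiest = max(demand, key=demand.get)
        idle = [n for n in nodes if n.personality == "idle"]
        for node in idle[:demand[busiest]]:
            node.personality = busiest      # re-provision the node for the busy cluster
        shortfall = max(demand[busiest] - len(idle), 0)
        return min(shortfall, burst_quota)  # external (public cloud) nodes to request

    nodes = [Node(f"node{i:02d}", "idle") for i in range(3)]
    to_burst = rebalance(nodes, {"chemistry": ["j1", "j2", "j3", "j4", "j5"], "physics": []})
    print([(n.name, n.personality) for n in nodes], "burst:", to_burst)

In practice the “re-provision” step would be a bare-metal or VM provisioning call rather than a field assignment, but the harvest-then-burst decision flow is the same.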

A perfect example of an HPC cloud in use is CERN (European Organization for Nuclear Research), one of the world’s largest and most respected centers for scientific research. CERN depends on computing power to ensure that 17,000+ scientists and researchers in 270 research centers in 48 countries can collaborate on a global scale to solve the mysteries of matter and the universe. To accelerate their research, CERN requires a cost-effective shared computing infrastructure that can support any combination of RHEL, KVM, Xen, and Hyper-V for a widely diverse set of scientific applications on x86 servers. Previously, their clusters could not flex or be re-provisioned automatically, creating idle resources as users waited for their environments to become available.

With Platform ISF, CERN is able to eliminate silos by re-provisioning thousands of nodes and VMs based on application workload requirements. Platform ISF provides the self-service capability that allows scientists to directly choose their application environments and manage their own workloads, increasing user efficiency and reducing IT management costs.

As a scientific research organization, CERN keeps a close watch on expenses, and with Platform ISF they are delivering more services within a fixed budget, with performance doubled on many applications. Dr. Tony Cass, Group Leader, Fabric Infrastructure, CERN, told me that “if we can move 150 machines [from a total of 200] out of this environment by improving utilization, we can either save some significant power and cooling costs, or we can redeploy the machines to the batch cluster without increasing the hardware budget.” This type of resource maximization is what is needed to get the most return on investment from an HPC cloud deployment.

This concludes my four-part blog series that outlines all four private cloud use cases for companies evaluating internal shared infrastructures (should you want to review some of my previous posts, just click on the links below):
1. Infrastructure Cloud
2. Application Cloud
3. Test/Dev Cloud

Cloud Use Case Series, Part 3: Test/Dev Cloud

The third installment of my four-part cloud use case blog series is the test and development cloud. This type of private cloud deployment provides a self-service test and development infrastructure where resources are provisioned automatically and within minutes according to project timelines for test/dev teams within an organization. This results in better utilization of existing servers, lower costs, increased developer productivity, and a proving ground for a production application cloud.

A classic example of a test/dev cloud is a leading global financial services firm that has multiple software development groups distributed across the world. The bank’s main challenges stemmed from regional groups building siloed “build and development” environments that each manage 30 or more applications. This caused slow software build processes and wasted configuration time required to set up application environments, as well as duplication of effort across teams.

Thanks to Platform ISF, the bank was able to consolidate their application development environments into a shared cloud solution, which dynamically creates complete application environments for development teams based on self-service requests and resource reservations aligned with project schedules. This has enabled a shared environment for application development across global teams, using fewer resources at lower cost, while increasing developer productivity.

Stay tuned for part 4 in my cloud use case series – HPC cloud - next week!

Will the CIO allow the frog to be boiled?

What do you get when you combine a grassroots solution portfolio and adoption strategy with enterprise pricing schemes? Answer: A very attractive short-term value proposition that is sure to put many customers on the path to vCloud.

What’s the problem? Cost and vendor lock-in.

Cloud computing is the third major wave in the history of distributed computing and offers the promise of open systems enabling customer leverage through commodity IT components. However, incumbent vendors are fighting desperately to create integrated proprietary stacks that protect their core products and licensing models.

This battle for leverage between big vendors and customers comes down to a basic architectural decision: Do the CIO and senior architect team have an open systems vision for private cloud that they are willing to pursue in spite of bundled offerings from vendors?

Platform Computing is betting that a large portion of the market will architect their private cloud platforms with heterogeneity and commodity as key principles.


Anyone who doubts that the frog is getting warm or that the intent for customer lock-in is real needs only to read Microsoft’s ad decrying vendor lock-in.

VMworld: A Strategy Straight out of Redmond?

Is the market doomed to a big orange, blue, or red cloud? Even if you won’t be attending VMworld this week in San Francisco, you can bet virtualization and clouds will be dominating the tech media and blogs throughout the course of the week.

We expect that VMware will announce plans to move up the stack and attempt to secure VM management customers into their evolving cloud platform. Platform won’t be attending VMworld this year, and here’s why.

As we see it, there are four cloud computing camps currently forming:

  1. VM camp – led by VMware and other OS/VM vendors (Microsoft, Red Hat and Citrix) who believe clouds should be based on a homogeneous OS/VM
  2. Enterprise Management camp—led by the Big 4 enterprise management behemoths who are offering cloud portals tied to their existing management tools and acquiring components to keep their customers as they upgrade to cloud
  3. Cloudy Server camp—these are the cloud-in-a-box systems vendors who suggest that building private clouds is as simple as a plug-and-play solution (It’s not! And it’s more than a commitment to a single hardware or software vendor)
  4. Open Platform camp—promoted and built by customers who are intent on maintaining vendor leverage and avoiding lock-in. Platform Computing and other independents play here by working across the stack with an explicit heterogeneous strategy

Don’t get us wrong—each of these approaches is a legitimate path. However, when it comes to private clouds for medium and large enterprises, the risks of complexity, high cost and vendor lock-in become clear for the first three approaches. It’s here that the VM and Cloudy Server camps are better viewed as components and pools of standardized building blocks than the über-cloud (hardware + OS + VM + middleware + management, all from one vendor!). Likewise, the Enterprise Management camp starts to look as complicated and expensive as the thing it was supposed to replace (itself!). We are told that we still need all their existing management tools plus some new functions and glue to build a cloud – just pay them more!

That’s not to say that the Open Platform approach is a panacea. But over time, it does deliver on a core promise of cloud: reducing vendor lock-in while commoditizing IT components.

The reason Platform won’t be at VMworld this year is that we want to make the statement that “clouds have no colors” and there is an alternative approach to either going big or legacy.

Integrated stacks are good and provide real value. That’s the good news.

The bad news is that when vendors control those stacks, they are both sticky and expensive. When customers own and control their stacks via heterogeneous open systems, they face integration issues, but in return, they get vendor leverage, great prices and commodity scale-out control. That’s the lesson that we’ve learned over the past 18 years and the core reason for our customers’ success.

We’re betting our business again on the open systems approach with cloud. A cloud management layer, such as Platform’s ISF product, allows companies to have private cloud in their own way and get value from an integrated, heterogeneous stack while also benefitting from the other promises of the cloud, such as automated provisioning, self-service, chargeback capabilities and workload management for a wide variety of applications across both VM and physical servers. And with solutions such as our Platform ISF Starter Pack, companies can evaluate how a private cloud can work for them at low risk, low cost ($4,995) and in a few days rather than months.

What color cloud do you want for your organization?

Cloud Use Case Series, Part 2: Application Cloud

Next up in my four-part cloud use case blog series is application cloud. With this approach, N-tier applications are migrated to a cloud-friendly, stateless architecture to enable automatic increases and decreases in resource allocations based on workload and resource policies.

As I mentioned in my previous post, while application clouds actually provide the most comprehensive benefits of all the four types of use cases, some early adopters have chosen to start with smaller-scope initiatives in order to acclimate their IT and business processes and application architectures to the new model of shared computing that private cloud provides. However, the long-term objective of most of these organizations is to eventually evolve to application cloud to get the most value from their cloud investment.

Many applications—particularly Java environments—are deployed across multiple datacenters, where each application uses its own dedicated set of resources that are static and provisioned with excess capacity in order to meet peak demand periods. These applications typically run on expensive infrastructure, operating systems and middleware (e.g., UNIX SMP, UNIX OS, WebSphere/WebLogic) and are built using inflexible architectures typically associated with these tools. As a result, enterprises see only about 10-15 percent utilization of the underlying infrastructure, and incur operating costs for equipment that isn’t being used (admin time, facilities, power, hardware and software maintenance).

The challenges organizations face before migrating to an application cloud stem from competing goals: CIOs and architecture teams want to reduce costs by 75 percent or more by changing from UNIX SMP servers to x86 Linux servers and application middleware, using low-cost VM alternatives such as Xen or KVM to make applications more portable and flexible. At the same time, the operations team wants to become more responsive to business needs and reduce application provisioning time by 90 percent or more, while increasing resource utilization from 10-15 percent to 50-60 percent or higher. This would reduce IT operational costs by 25 percent or more - all while escaping vendor lock-in and regaining control of their applications and infrastructure.
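
A quick back-of-the-envelope calculation shows why the utilization targets alone imply savings of roughly that order. The numbers below are illustrative only, and the calculation ignores the price difference between UNIX SMP and x86 hardware:

    # Illustrative only: the same delivered workload, served at higher utilization,
    # needs far fewer servers.
    workload = 100 * 0.12             # "useful work" delivered by 100 servers at 12% utilization
    servers_needed = workload / 0.55  # servers required if utilization rises to 55%
    print(round(servers_needed))      # ~22 servers instead of 100, roughly an 80 percent reduction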

To solve these challenges, applications are migrated to a cloud-friendly, stateless architecture to enable mobility of application components. Applications are automatically deployed across a shared pool of resources using private cloud management software, such as Platform ISF, which monitors application workloads and resource usage and dynamically expands or contracts capacity according to workload and resource policies. This enables guaranteed SLAs and ensures the most efficient use of the shared infrastructure, where all activities are metered to enable application-level chargeback, so business units only pay for what they use.
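
As a rough sketch of how such workload and resource policies might drive expansion and contraction, consider the toy scaling rule below. The tiers, thresholds and limits are hypothetical assumptions of mine, not the Platform ISF implementation:

    # Toy elasticity rule: grow or shrink an application tier within policy bounds.
    POLICIES = {
        "web-tier": {"min": 2, "max": 10, "scale_up_at": 0.75, "scale_down_at": 0.30},
        "app-tier": {"min": 2, "max": 8,  "scale_up_at": 0.70, "scale_down_at": 0.25},
    }

    def desired_instances(tier, current, load):
        """Return the instance count the policy asks for, given current load (0.0-1.0)."""
        p = POLICIES[tier]
        if load > p["scale_up_at"] and current < p["max"]:
            return current + 1   # expand capacity toward the SLA
        if load < p["scale_down_at"] and current > p["min"]:
            return current - 1   # release capacity back to the shared pool
        return current

    print(desired_instances("web-tier", current=3, load=0.82))  # -> 4
    print(desired_instances("app-tier", current=5, load=0.10))  # -> 4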

One of our customers, a leading global bank, develops and manages hundreds of Java applications on thousands of proprietary UNIX SMP servers. These application environments are deployed in silos, resulting in low resource utilization and high operating costs, with time-consuming, manual provisioning required for deployment and modification of infrastructure.

The bank migrated applications to a shared, stateless architecture managed by Platform ISF to enable automatic expansion and contraction of capacity according to application workload and resource policies on Intel-based Linux servers.

This has enabled the bank to reduce capital costs by 75 percent and operating costs by 25 percent, with resource utilization increased to 50-60 percent. The customer can now deploy application environments within minutes rather than months, and application teams can modify and scale the infrastructure of their deployed applications without IT involvement.

Stay tuned for part 3 in my cloud use case series – test/dev cloud - next week!

Getting Started with Private Cloud: Platform or Toolkit?

A new vendor coalition was announced this week, bringing together software from three vendors and an integrator to enable private cloud. You may be wondering what our take is on this announcement… Well, I can sum it up in one word: Difficult.

As James Staten of Forrester wrote in You’re not a cloud yet – get on the path today, private cloud will take quite some time for many organizations to deploy, and most firms are just getting started with private cloud research. As you get started in the world of private cloud, do you really want to spend weeks and months cobbling together multiple toolkits just to see how it works in your environment? Or would you rather have an all-in-one platform that lets you get started quickly, with a very small investment?

Platform engineered ISF to tackle all three toolkit approaches in a single software product--enabling self-service, application lifecycle automation and cloud elasticity. And with the Platform ISF Starter Pack we announced yesterday, you can get a sandbox environment up and running in just 30 minutes. For $4,995 you get software, best practices advice and help to set up private cloud – that includes a 1-year software license, training, cloud-builder consultation, and integration advice for your internal tools.

Based on our experience with private cloud users over the last year, they are looking for a single solution they can get up and running quickly to help crystallize the requirements for their private cloud solutions. No messing around with multiple providers and disparate tools. No lengthy development and deployment projects just to get started. And when you’re ready to move forward with a private cloud deployment, Platform’s enterprise-class expertise and support will be with you every step of the way.

Staten recommends users begin investing now in cloud starter packs to better understand where they need to be down the road. With one all-in-one software product that provides everything you need for private cloud management, parting with less than $5,000 for tangible experience may be the best decision you make all year.

Cloud Use Case Series, Part 1: Infrastructure Cloud

In my many conversations with Platform ISF customers, I’ve noticed that early adopters of private cloud technologies tend to fall into four types of use case categories:

1. Infrastructure Cloud
2. Application Cloud
3. Test/Dev Cloud
4. HPC Cloud

It’s worth mentioning now that although application clouds actually provide the most comprehensive benefits of all the four types of use cases, some early adopters have chosen to start with smaller-scope initiatives in order to acclimate their IT, business processes and application architectures to the new model of shared computing that private cloud provides. However, the long-term objective of most of these organizations is to evolve to application cloud to get the most value from cloud computing.

In an effort to educate our customers and readers on the benefits of private cloud, I plan on posting a four-part blog series that outlines each of these four private cloud use cases for companies evaluating internal shared infrastructures.

First up is the infrastructure cloud. This approach involves a ready-to-use infrastructure that enables deployment of application environments or simple virtual machine (VM) images based on application requirements, and business units are charged for what they use. This results in rapid infrastructure deployment and modification, reduced capital and operational expenses, higher utilization and better cost controls for users.
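
To illustrate the “charged for what they use” side of this model, here is a toy chargeback calculation. The rates, metrics and usage records are invented for the example and are not taken from any product:

    # Hypothetical internal rates and usage records for a pay-per-use showback/chargeback report.
    RATES = {"vm_hour": 0.08, "gb_storage_day": 0.02}

    usage = [
        {"business_unit": "marketing", "vm_hours": 1200, "gb_storage_days": 500},
        {"business_unit": "research",  "vm_hours": 6400, "gb_storage_days": 2100},
    ]

    def monthly_charge(record):
        return (record["vm_hours"] * RATES["vm_hour"]
                + record["gb_storage_days"] * RATES["gb_storage_day"])

    for record in usage:
        print(record["business_unit"], f"${monthly_charge(record):,.2f}")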

One of our early adopters of infrastructure cloud is Fetch Technologies, a SaaS software company that enables organizations to extract, aggregate and use real-time information from websites. With over 200 virtual servers in production and test/dev using VMware, Fetch had to provision resources manually to increase SaaS capacity. Furthermore, it was taking several hours/days to change configurations on behalf of their customers.

With Platform ISF, Fetch provisions groups of servers at a time automatically, thereby dramatically reducing labor costs. Rick Parker, IT Manager at Fetch, recently told me that “if we had continued to do manual provisioning we would not be able to scale fast enough to keep up with customer demand with our current staff. Platform ISF lets us scale up without adding staff, saving us cost and helping us to better meet customer expectations.”

Thanks to the multi-hypervisor support in Platform ISF, Rick and his team only need to learn how to use one interface, which significantly reduces administrator training. Platform ISF’s self-service user portal allows Rick to simplify server management tasks so that users can make the changes they need at any time. And Platform ISF will allow the company to leverage public cloud resources to supplement the resources available in their private cloud.

Stay tuned for part 2 in my cloud use case series – application cloud - next week!

Platform customer featured on ZDNet’s Briefings Direct with Dana Gardner

A few weeks ago we posted a blog entry about Platform customer Dr. Marcos Athanasoulis’ participation in a panel discussion on cloud computing at The Open Group’s Architecture Practitioner’s Conference in Boston.

Panel moderator, blogger and independent analyst, Dana Gardner has now posted a podcast of the panel online at his ZDNet blog, “Briefings Direct,” titled “Harvard Medical School use of cloud computing provides harbinger for new IT business value, Open Group panel finds.”

As the title implies, Harvard Medical School (HMS), Athanasoulis and their private cloud were featured prominently in the podcast. Institutions like HMS are paving the way for how private clouds should ideally be implemented within organizations.

As Gardner points out, HMS is a harbinger for the new model of IT use that is the promise of cloud computing. HMS is unique because their cloud requires user participation to actually function. Due to internal policy at the school, the researchers that Athanasoulis works with are not required to use the IT services provided by the school’s IT department. In effect, they’re allowed to “grow their own” IT—so the fact that researchers who are used to doing their own thing are adopting the private cloud that Athanasoulis and his department have implemented (which uses Platform’s LSF and ISF products) is really a testament to the service model that private clouds can offer within organizations.

I won’t spoil the content by giving away too much of it here because I’d really suggest you listen to the panel of cloud experts featured at the conference, but Athanasoulis also offers some great advice for those considering private cloud implementations that he calls the Four P’s:

  • Pilot – Begin small within the organization, using pilot groups to kick-start your private cloud
  • Participation – Get buy-in from everyone you need it from to succeed
  • Produce – Get results from what you’ve implemented. If you don’t, it won’t succeed.
  • Promotion – Promote the service. Be an advocate and evangelist for it.

You can listen to the podcast here.

The Glue that Binds: Cloud Management Software and the 7 Key Components of Private Clouds – Part 2

As previously promised, this is a continuation of my earlier blog post on the seven main requirements for successful private cloud deployments, and will outline exactly how Platform’s private cloud management software supports all seven key components. For those of you who aren’t yet familiar with Platform’s ISF product, it’s our private cloud management software, which we introduced to the market in June 2009.

As organizations evaluate how to evolve their internal infrastructures to a private cloud, I want to clearly delineate how Platform ISF can facilitate this evolution:
  1. Heterogeneous systems support – Adapters within Platform ISF integrate distributed and heterogeneous IT resources to form a shared system. All major industry-standard hardware, operating systems (including Linux and Windows) and VM hypervisors (including VMware ESX, Citrix XenServer, Microsoft Hyper-V, and Red Hat KVM) are supported. Adapters are also available for provisioning tools (IBM xCAT, Symantec Altiris, and Platform Cluster Manager) to set up application environments on demand.
  2. Integration with management tools – Platform ISF integrates with many third-party tools for various systems management tasks out-of-the-box, including directory services for user and account management, security, monitoring and alerting.
  3. Configurable resource allocation policies – Once a pool of shared resources is formed, a set of site-specific sharing policies is configured in the allocation engine to ensure that applications receive the required resources. These policies also make certain that the organization’s resource sharing priorities are applied, and that the quota constraints applicable to business groups sharing the cloud are reinforced. The allocation engine matches IT resource supplies to their demands based on resource-aware and application-aware policies. This private cloud “brain” is critical for IT agility.
  4. Integration with workload managers, middleware and applications – Platform ISF provides interfaces to users and applications as well as supporting the lifecycle of cloud service management. Templates can be configured for simple and complex N-tier business applications to automate their lifecycle management. Platform ISF can start all the components of an N-tier application, add or remove resources, and handle monitoring and failure recovery. It also supports middleware such as J2EE, SOA, CEP and BPM, and workload schedulers such as AutoSys, Platform LSF and Symphony.
  5. Support IT and business processes – A self-service portal enables users to request and obtain physical servers and VMs in minutes instead of days or weeks. Platform ISF has a set of APIs that can be called by applications, middleware and workload managers to request and return resources without human intervention (a generic sketch of this request-and-return pattern follows after this list). The service offerings can be structured as complete application environments (e.g., application packages, CPU, memory, storage and networking); as bare metal servers with an operating system installed; or as virtual machines. SLAs can be associated with each service offering.
  6. Extensible to external resources – Platform ISF integrates with many service provider environments (e.g., Amazon Web Services via Amazon Virtual Private Cloud), enabling centralized access, management, tracking and billing of external services.
  7. Enterprise, not workgroup, solution – Built on a technology foundation found in large scale production environments, Platform ISF is scalable to hundreds of thousands of cores under management which enables IT to start small and feel confident that their cloud will grow as more services are added over time.
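
As referenced in item 5 above, here is a generic sketch of what a programmatic request-and-return interaction with a cloud manager could look like. The endpoint, payload and fields are hypothetical and do not describe the actual Platform ISF API; they only show the shape of the pattern.

    import json
    import urllib.request

    BASE_URL = "http://cloud-manager.example.internal/api"  # placeholder address

    def request_resources(service_offering, count):
        """Ask the cloud manager for `count` instances of a catalog offering."""
        payload = json.dumps({"offering": service_offering, "count": count}).encode()
        req = urllib.request.Request(f"{BASE_URL}/allocations", data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:   # returns an allocation id
            return json.load(resp)["allocation_id"]

    def return_resources(allocation_id):
        """Hand the allocation back to the shared pool when the work is done."""
        req = urllib.request.Request(f"{BASE_URL}/allocations/{allocation_id}",
                                     method="DELETE")
        urllib.request.urlopen(req)
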
Lastly, beyond the seven key components of enterprise cloud deployments, Platform ISF also collects all resource usage data and provides reports and billing information. Alternatively, the cloud administrator may choose to feed the usage data into site-specific reporting and charge-back tools.

Below I’ve included our depiction of the private cloud management stack and the location of Platform ISF with its various capabilities.

Are Japanese Banks Early Adopters of Private Cloud?

The adoption trends for cloud computing seem to be upside-down and inside-out compared to traditional technology adoption. Instead of a handful of enterprise firms across industries (such as financial services and telecommunications) taking the lead, it’s the smaller fish that are giving cloud a go. Although this “small fish” adoption trend is taking hold in the public cloud environment, it’s certainly not the case for private cloud. In private cloud, big banks, life sciences, media and government organizations are leading the charge in North America, with Europe and Asia-Pacific only slightly behind.


But where does this leave Japan? Not known for its early adoption of technology, Japanese firms (especially the banks) like to play wait-and-see before investing in new options for their IT strategies. Take, for example, a contingent from a Japanese banking customer that I helped host in New York last month at the SIFMA Financial Services Technology Expo. You might think they were meeting with several North American banks and vendors to get an idea of where they’d need to be in 5 years, right?


Not so fast. In actuality, some of the earliest private cloud pilots are being run in Japan. For example, one of Platform’s system integrator partners in Japan has built an in-house “training course cloud” for their consultants and customers. Students can request the required infrastructure on demand for course work requiring many servers, operating systems, etc., facilitating training for more people and eliminating the costs associated with repeatedly setting up training environments by hand.


Platform’s CEO Songnian Zhou discussed this case along with several other real-life success stories at the Global ICT Summit in Tokyo in June. The keynote panel entitled, “New Trends in Cloud Computing Technology,” paired Platform Computing with other industry leaders such as Amazon and Microsoft. To hear more about these case stories, check out the full post-event interview with Dr. Zhou.


Back to the Japanese customer contingent in New York… what did they find? Public cloud is not really an option for any of these financial firms, even for workload-driven cloud bursting. The major institutions the Japanese visited all had the same things to say: the data security and intellectual property issues would never make it past Compliance.


So what are these international firms doing, Japanese included? They’re building an internal private cloud first, where infrastructure operations are streamlined to lower costs and deliver better services. And the next step will not be to cloudburst, but to harvest other resources internally that would otherwise sit idle. Platform already has customers doing this in production--in one case, VDI servers are being used overnight to improve accuracy and timeliness of value-at-risk models for regulatory reporting.


While private cloud (and cloud computing in general) is still in its infancy, more and more pilots are going in every day all over the world. For the rest of the world, watch out. The membership of the early adopter club is expanding, and major Japanese firms are getting aggressive. Being a tech fast-mover is more important than ever to build competitive advantage.

The Glue that Binds: Cloud Management Software and the 7 Key Components of Private Clouds – Part 1

While the IT industry, analysts and media continue to do a pretty decent job of outlining, defining and documenting cloud computing and successful early deployments, I thought I would contribute to the overall conversation by discussing some of the key elements that we at Platform Computing have identified as necessary for private clouds, based on our own conversations within the industry and with our customers. This will be a two-part blog series that provides some recommendations for companies evaluating internal shared infrastructures by discussing seven key requirements for private clouds and underlying cloud management software.

As we see it, here are the seven key components of a private cloud environment:

  1. Heterogeneous systems support – The private cloud needs to support an organization’s heterogeneous infrastructure, as well as resources from external providers. This includes server, storage and networking hardware, operating systems, hypervisors, storage systems, and file systems.

  2. Integration with management tools – Enterprises use a variety of IT management tools for security, provisioning, systems management, directory, reporting, billing, data management, regulation, and compliance. Cloud computing does not replace these tools. Instead, properly designed private cloud management software easily integrates with existing tools and invokes them as needed during cloud operations.

  3. Configurable resource allocation policies – The cloud must be workload-aware as well as resource-aware. This means that the cloud management software can determine the most efficient placement of application workloads. The cloud management software guarantees resource reservations to its customers based on well-defined policies. And, when demand peaks, the software is able to arbitrate resources based on the business priorities of various parts of the cloud workload to cost-effectively meet SLAs (a toy illustration of this kind of arbitration follows after this list).

  4. Integration with workload managers, middleware and applications – Clouds exist to run applications. In addition to a self-service portal for users to request virtual or physical machines, private cloud management software provides flexible API’s to enable easy integration with the enterprise’s essential workload managers, middleware, and applications.

  5. Support IT and business processes – Clouds provide support for various IT and business processes and allow IT to automate many of its operations. In fact, cloud management enables the definition and ongoing modifications of many IT management processes that had been performed manually.

  6. Extensible to external resources – In addition to providing more flexible services with internal resources, the cloud should enable managed access to external resources that are hosted by service providers. This enables more flexible capacity planning where additional resources can be used and paid for only when needed, while centrally controlling access and metering of these services.

  7. Enterprise, not workgroup, solution – An organization usually consists of multiple departments and locations, often distributed internationally. A flexible cloud scales to meet their diverse needs in real time. While cloud computing may be adopted initially within an individual line of business or location, it enables the integration of IT across the enterprise by reconfiguring rather than replacing the private cloud management software. Therefore, a private cloud can be an enterprise-wide IT services delivery system that provides transparent and consistent access to global resources.
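
As referenced in item 3 above, here is a toy illustration of priority-based arbitration when peak demand exceeds the available pool. The workloads, priorities and numbers are invented for the example and are not a statement about any particular product:

    def arbitrate(requests, available):
        """requests: list of (workload, priority, servers_wanted); lower priority value wins."""
        grants = {}
        for workload, _, wanted in sorted(requests, key=lambda r: r[1]):
            granted = min(wanted, available)  # grant what we can in priority order
            grants[workload] = granted
            available -= granted
        return grants

    peak_requests = [("risk-reporting", 1, 40), ("regression-tests", 3, 30), ("web-analytics", 2, 20)]
    print(arbitrate(peak_requests, available=50))
    # {'risk-reporting': 40, 'web-analytics': 10, 'regression-tests': 0}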


It’s very important for companies to remember that just like other mission-critical enterprise business systems and services, a private cloud is built by the IT organization, not bought from a vendor. Private cloud management software is the key to enabling IT to configure its data center resources, integrating its management tools, and supporting its applications and business processes. You could consider it “the glue” that binds together enterprise data center operations as organizations move into the era of cloud computing.

Stay tuned for the second part of this blog series that will showcase how Platform ISF meets all seven requirements of private cloud management discussed in this blog.


    Cloud Computing is Getting Less Cloudy

    Well, at least for me. I just attended the 2nd Annual Cloud Computing World Forum in London this week and after dozens of presentations, many conversations at the stand, and sifting through all the conference material, I believe my understanding of cloud is ‘less cloudy.’ Why? Because cloud computing is not just one thing that you can point to; it’s many use cases and I think I heard almost all of them here, from cloud bursting to SaaS services. But one thing that became clear is that cloud can be almost anything so long as ‘shared resources’ form its underpinnings.

    Other than shared resources, there was another thing about cloud computing that also became clear to me at the conference - all cloud implementations tend to have three key capabilities. The first is the capability to manage resources. This means the management tools implementing cloud must be able to recognize all types of resources – in other words, heterogeneous resources. Cloud management tools must be able to manage these heterogeneous resources through provisioning mechanisms, whether physical or virtual. The second is the capability to manage the software stack, or at least part of the software stack. Monitoring utilization data from physical resources is common, but monitoring utilization data in the software stack is less common, and certainly not generally inclusive of the entire software stack (IaaS, PaaS, and SaaS) under a single tool. But even if there were a single management tool that could monitor the entire software stack and had a dashboard that could clearly communicate the state of each tier within the stack, manual changes to the infrastructure to improve performance and efficiency would still be necessary. So this brings up a third capability that is needed: the ability to manage changes in the infrastructure through a set of policies. That is, to automatically make application and platform resource adjustments at any level in the software stack in order to improve utilization and efficiency.
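
    One way to picture that third capability is as a small set of declarative policies that map observed conditions anywhere in the stack to automatic adjustments. The sketch below is a toy illustration under that assumption; the metric names, thresholds and actions are invented and do not represent any particular product:

        # Toy policy engine: map monitoring samples from any tier of the stack to actions.
        POLICIES = [
            {"metric": "app_response_ms", "above": 400, "action": "add application-tier instance"},
            {"metric": "host_cpu_util",   "below": 0.15, "action": "reclaim host into shared pool"},
        ]

        def evaluate(sample):
            """Return the actions triggered by one monitoring sample."""
            actions = []
            for p in POLICIES:
                value = sample.get(p["metric"])
                if value is None:
                    continue
                if ("above" in p and value > p["above"]) or ("below" in p and value < p["below"]):
                    actions.append(p["action"])
            return actions

        print(evaluate({"app_response_ms": 520, "host_cpu_util": 0.62}))
        # ['add application-tier instance']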

    Well, at least this is what I learned. When I talk about cloud I will be thinking about all three capabilities in management solutions. It doesn’t matter which use case you have or how big or little it is; if you can include all three of these basic cloud capabilities, you will have a more efficient cloud. Of course, the ‘got to be open’ and ‘work with key partners’ practices are always important too. But for cloud, I’m judging the best solution on these principles.

    Multiple global datacenters => one virtual datacenter? I think not…

    I was just reading Nick Heath’s article Private Cloud: Turning corporate datacentres into a global virtual machine posted on silicon.com about the potential of private cloud turning global datacenters into a massive, global VM, and it made me recall a somewhat-famous quote from Sun Microsystems on cloud from a couple of years back – to paraphrase: All enterprise IT will be run out of facilities hosted by a handful of service providers within the next decade. (I don’t remember the exact timeframe, but I do recall chuckling out loud when I read it).

    Now this particular “global VM” article by Nick Heath wasn’t worthy of a chuckle, but--seriously? Who in their right mind would be entertaining a near-term strategy to string together all of their datacenter assets into a single shared resource pool, where apps and workloads could cross the pond(s) and back on a whim? Don’t get me wrong – this is a wonderful vision, with incredible economies of scale and efficiency behind it – but cloud computing is in its infancy and it still has significant political, data security, and financial reporting issues that need to be resolved before a monumental project such as this could be entertained.


    When it comes to cloud computing, organizations of all sizes need to start small, either within a specific group (e.g., developers and testers of a BU) or class of apps (e.g., stateless Java apps performing simple request/response). From there, they can get the kinks out, work the politics, figure out the chargeback models, etc. You could certainly implement this approach across multiple datacenters/geographies to support these teams/businesses now (some of our early adopters are doing just that), but not for the sake of having an über-global cloud.

    You don’t need some science-fiction, futuristic cloud model to realize the economics of private cloud – a small bite will do.

    It’s all about CHOICE

    Everyone in the tech industry has an opinion about cloud computing these days. More often than not, those opinions come not from the IT people in the trenches who are looking at implementing private clouds, but from industry pundits or vendors that are trying to tell them what they want.


    That’s why it was refreshing to see Steven Burke’s article this week where he interviewed actual IT people attending EMC World in Boston about what they are looking for when it comes to private clouds.


    In a word, they want CHOICE.


    Both IT execs interviewed in the article made it clear that what they need from cloud providers is to be able to offer their users a service—and they’re determined not to be tied to one provider for those services. What they want is a solution that will perform with what they already have and make what they have perform better. As one exec said, “I am not worried about what the chassis is or what the processors are.”


    What else do they need? To prove ROI. If they’re choosing your service, you’re going to need to prove that you can deliver—or they may look elsewhere. Again, choice.


    They also understand that moving to a service model for IT is going to take time—they know that building a private cloud and transitioning users to a service model is going to take years and what they need won’t come in a box or a plug-and-play solution. As we’re fond of saying around here, “Clouds are built, not bought.”


    IT is not going to be in the hands of vendors forever... what choices will you provide your users?


    Fetch Technologies Leading the Way with Private Cloud

    This week, Platform Computing announced a new customer for ISF, our private cloud management software. Fetch Technologies, an artificial intelligence-based data extraction company, is an excellent example of what cloud computing can do for your business. As a company with a small IT staff, a large-scale SaaS offering and clients that rely on their mission-critical data, Fetch needed to find a cost-effective way to scale and meet the increasing compute power requirements needed for their complex data extraction. Fetch’s story, as told in this excellent article from Nicole Hemsoth at HPC in the Cloud, also exemplifies a trend we are seeing with many of our customers and prospects: the desire to start with private cloud and eventually leverage public cloud resources. Nicole and Rick Parker from Fetch sum it up nicely here:


    “It is useful for firms to have the ability to leverage the public cloud as needed. In a discussion about private clouds in the model that Fetch is utilizing, ‘private cloud monitoring of resources and capacity planning are very critical. We need to know when we need to add more resources and how long it will take to add them. For example, we need to add more CPU and memory—that could take us 2 weeks to do. We monitor like crazy; we have over 200 monitors.’ However, as Parker did note, having the capability to scale to EC2—even if that never happens—is one of the attractive features of a cloud offering like the one they chose from Platform.”


    To learn more, take a look at Nicole’s article. Rick and his team are true innovators on the cloud front, and are definitely ones to keep an eye on as they evolve their cloud computing model.

    Wag the Dog: Private Clouds are Key to Regaining Supply Chain Control

    We’ve been talking a lot about control issues here at Platform. Not in the Janet Jackson, Type-A sense of the word, but rather as it relates to private cloud computing and who’s in control of IT resources at most organizations. The most common association as it relates to cloud is the perceived delegation of control to business users. I say "perceived" because with private clouds IT will still own the service catalog to ensure compliance, budget adherence, etc. Digging a bit deeper, however, you find the Fortune 500 using private clouds from an entirely different control angle: vendor leverage.

    Now, as a vendor providing a private cloud solution, you might find it a bit odd, perhaps even blasphemous, for us to talk about vendor leverage. My perspective comes not from observing the cloud vendor war going on in the industry right now, however, but rather from speaking with our customers and prospects that are tired of the big-vendor lock-in they have inherited from architectures of years gone by, and see a private cloud as an opportunity to shift some things around and get the ball back in their court.

    Interested? I recently wrote a piece for SandHill.com on the topic: Wag the Dog: Private Clouds are Key to Regaining Supply Chain Control. Take a look and let me know what you think.

    Webinar Today: “Beyond Virtualization: Redefining Services with Private Cloud”

    Today we’re hosting a Webinar to discuss how private clouds can provide a more responsive and cost effective IT service delivery model. We’re lucky to have Rick Parker, IT Manager, Fetch Technologies and Joe Szalkiewicz, VP, The Pinnacle Group, on board to share their experiences in their organizations’ private cloud implementations.

    As many of you may know, Fetch Technologies is a software company that works with organizations to extract, aggregate and use real-time information from different websites. Fetch will share how moving to a private cloud allowed them to become more responsive to their customer demands without additional costs. The Pinnacle Group, a systems integrator that offers IT hardware, software and networking solutions and services, will talk about their experiences implementing private clouds for their customers and why companies are looking to private clouds to help streamline their operations.

    Please join us on Tuesday, March 16 at 11:00 a.m. EST for this in-depth look at how companies can utilize existing IT assets more effectively and efficiently with a private cloud computing model.

    To register for the event, please visit: http://www.platform.com/go/2010/beyondvirtualization/index.asp

    EMC's Chuck Hollis: Good Blog Post

    Chuck Hollis at EMC posted an interesting blog last week; see Clouds Need To Be Better Than The Environments They Replace. Chuck is right, but there is one overriding point about private clouds that is missing, namely that one aspect of “better” is often the ability to take control away from the vendors. This often is THE key value proposition for senior IT architects in the context of private clouds.

    Chuck makes the case for better quite well, including:
    · Usual progression of value propositions that drive new technology/model adoption that cloud is following: Cool --> Cheaper --> Better. Agreed. Maybe add Accessible
    · Cost-to-Serve. Time-to-Serve. Agreed--this is a great way to think about this because service is at the forefront. Maybe also add Ease-to-Serve.
    · Private Cloud defined as being under control of IT. Agreed, and agreed on the other elements of the definition. However, the issue is how to get there. This gets to my control point.

    Cloud (including private cloud) management and middleware are strategic control decisions for enterprises. One option is to extend VM management, provisioning and hardware resource management platforms. This should and will inevitably happen. However, there is another decision that needs to be made about whether to architect a “layer of independence” into the overall stack and system. Enterprises that want more control over their system and more leverage over their hardware and software vendors are likely to view independent and/or open source management and middleware solutions as a way to make their overall business model or investment plan “better” by squeezing more (performance-wise and financially) from their best-of-breed proprietary solutions.

    This is what it’s really about. The users taking control over IT to get better service, and IT taking control over its supply chain to either get better service or deliver better service. The point is that “better,” when it comes to private clouds, means not just a better alternative to today’s architectures/solutions/capabilities, it can mean a better alternative to their current position in the supply chain.

    Beyond Virtualization

    Wednesday we had an excellent discussion with Cheryl Doninger, R&D Director at SAS and William Fellows, Principal Analyst at the 451 Group about the benefits of getting beyond virtualization.

    Cheryl presented 4 use cases on where private cloud management can improve the end-user experience while reducing costs. The cases included: test/dev, IT operations, SAS customer deployments and SAS on-demand hosting.

    Wil discussed why organizations are looking beyond VM management in considering private clouds and gave several financial services customer examples as he characterized the state of the market.

    I discussed how Platform ISF is being used for enterprise and HPC applications respectively through two case examples.

    You can view the presentations and discussion at: http://www.platform.com/eforums/eforum.asp?1-1KP4BZ