Are Japanese Banks Early Adopters of Private Cloud?

The adoption trends for cloud computing seem to be upside-down and inside-out compared to traditional technology adoption. Instead of a handful of enterprise firms across industries (such as financial services and telecommunications) taking the lead, it’s the smaller fish that are giving cloud a go. Although this “small fish” adoption trend is taking hold in the public cloud environment, it’s certainly not the case for private cloud. In private cloud, big banks, life sciences, media and government organizations are leading the charge in North America, with Europe and Asia-Pacific only slightly behind.


But where does this leave Japan? The country is not known for early adoption of technology, and Japanese firms (especially the banks) like to play wait-and-see before investing in new options for their IT strategies. Take, for example, a contingent from a Japanese banking customer that I helped host in New York last month at the SIFMA Financial Services Technology Expo. You might think they were meeting with several North American banks and vendors to get an idea of where they’d need to be in five years, right?


Not so fast. In actuality, some of the earliest private cloud pilots are being run in Japan. For example, one of Platform’s system integrator partners in Japan has built an in-house “training course cloud” for its consultants and customers. Students can request the required infrastructure on demand for coursework that needs many servers, operating systems and so on, which lets more people be trained and eliminates the cost of repeatedly setting up training environments by hand.
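
To make the idea concrete, here is a minimal sketch, in Python, of how such an on-demand training cloud could hand out and reclaim machines. Every class, name and number below is a hypothetical illustration of the workflow, not Platform’s (or the partner’s) actual interface.

# Minimal sketch of the "training course cloud" idea described above.
# All class and method names are hypothetical illustrations, not Platform APIs.
from dataclasses import dataclass, field

@dataclass
class CourseRequest:
    course: str          # e.g. "Grid Admin 101"
    servers: int         # machines the coursework needs
    os_image: str        # e.g. "rhel5-training"

@dataclass
class TrainingCloud:
    pool: list = field(default_factory=lambda: [f"node{i:02d}" for i in range(20)])
    leases: dict = field(default_factory=dict)

    def provision(self, req: CourseRequest) -> list:
        """Hand out machines on demand instead of building them manually."""
        if len(self.pool) < req.servers:
            raise RuntimeError("not enough idle capacity for this course")
        nodes = [self.pool.pop() for _ in range(req.servers)]
        # In a real private cloud, each node would now be imaged with req.os_image.
        self.leases[req.course] = nodes
        return nodes

    def release(self, course: str) -> None:
        """Return machines to the shared pool when the course ends."""
        self.pool.extend(self.leases.pop(course, []))

cloud = TrainingCloud()
print(cloud.provision(CourseRequest("Grid Admin 101", servers=4, os_image="rhel5-training")))
cloud.release("Grid Admin 101")

The point is the workflow rather than the code: requests are satisfied from a shared pool in minutes instead of being rebuilt by hand for every class.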


Platform’s CEO Songnian Zhou discussed this case along with several other real-life success stories at the Global ICT Summit in Tokyo in June. The keynote panel, entitled “New Trends in Cloud Computing Technology,” paired Platform Computing with other industry leaders such as Amazon and Microsoft. To hear more about these case studies, check out the full post-event interview with Dr. Zhou.


Back to the Japanese customer contingent in New York… what did they find? Public cloud is not really an option for any of these financial firms, even for workload-driven cloud bursting. The major institutions the Japanese visited all had the same thing to say: the data security and intellectual property issues would never make it past Compliance.


So what are these international firms doing, Japanese included? They’re building an internal private cloud first, where infrastructure operations are streamlined to lower costs and deliver better services. The next step will not be to cloudburst, but to harvest other resources internally that would otherwise sit idle. Platform already has customers doing this in production: in one case, VDI servers are being used overnight to improve the accuracy and timeliness of value-at-risk models for regulatory reporting.
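
As a rough illustration of what that overnight harvesting could look like, here is a minimal sketch in Python, assuming a fixed night window and a simple CPU-idleness check. The host names, window and threshold are hypothetical and are not the configuration of any Platform product.

# Minimal sketch of overnight resource harvesting, assuming a nightly window
# and a simple idleness check. Names and thresholds are hypothetical.
from datetime import datetime, time

NIGHT_START, NIGHT_END = time(20, 0), time(6, 0)   # 8pm - 6am local time

def in_night_window(now: datetime) -> bool:
    t = now.time()
    return t >= NIGHT_START or t < NIGHT_END

def hosts_to_harvest(vdi_hosts: dict, now: datetime, idle_threshold: float = 0.10) -> list:
    """Return VDI hosts that can be lent to the risk-analytics grid tonight.

    vdi_hosts maps hostname -> current CPU utilisation (0.0-1.0).
    """
    if not in_night_window(now):
        return []                      # business hours: VDI keeps its hardware
    return [h for h, util in vdi_hosts.items() if util < idle_threshold]

# Example: three desktops idle after hours, one still busy with a long session.
sample = {"vdi-01": 0.02, "vdi-02": 0.05, "vdi-03": 0.40, "vdi-04": 0.01}
print(hosts_to_harvest(sample, datetime(2010, 7, 20, 23, 30)))
# -> ['vdi-01', 'vdi-02', 'vdi-04'] could be repurposed for VaR batch runs.

A production system would of course need to handle lingering desktop sessions gracefully and return the hosts before the next business day, but the shape of the policy is the same.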


While private cloud (and cloud computing in general) is still in its infancy, more and more pilots are going live every day all over the world. The rest of the world should watch out: the membership of the early adopter club is expanding, and major Japanese firms are getting aggressive. Being a tech fast-mover is more important than ever for building competitive advantage.

Cloud Killed Grid… Not if You Ask Our Customers

It has been a week or two since Dave Rosenberg at the Register wrote an article that sounded the death knell for grid computing. In the article, Dave points to the new Amazon EC2 Cluster Compute Instances service as the basis for this conclusion.

It has taken a while to sit down and respond, but I think it is important to provide an alternative perspective, based on our customer activity and feedback, that addresses some of the points in Dave’s article.

We don’t believe cloud is killing grid; we see it as an evolution of the options for distributed computing infrastructure: clusters, grids, clouds, or whatever comes next. It doesn’t matter what you call the environment. What matters to customers are key things like:
  • Application performance
  • Application support
  • Total Cost of Ownership
  • Ease of management
  • Time to deploy/flexibility
  • Scalability

As Dave states, there was a lot of hype and activity in the academic community about grid. Our perspective includes a very strong Global 2000 enterprise customer base, so it is important to understand how those organizations have adopted and continue to adopt grid (and clusters, and clouds), and how this new offering might add value to (not kill) their grid environments.

  1. Grid has been widely adopted in enterprise environments for computationally intense workloads. The companies (across many vertical markets) that have deployed grids use them as mission-critical infrastructure to support key business processes (design, engineering, analysis, etc.), and they have been able to achieve cost-effective scale for these environments based on the adoption of commodity infrastructure. They are able to operate these grids at cents per hour while driving resource utilization above 80%, 24x7. Our customers’ demand for resources is ever increasing, so they look to whatever distributed computing infrastructure they can tap into for additional capacity.

  2. The new offering from Amazon, which provides higher-performance compute instances in the cloud, now starts to enable customers to consider leveraging the cloud for some of their grid applications. The decision a customer makes to use a cluster in the Amazon cloud or to operate their own environment will be driven by the considerations listed above.

  3. The cloud offering from Amazon is a good start, but it still requires customers to configure and manage their environments for their applications. This is where companies with expertise in grid and HPC, like Platform, can help customers make the best use of this new offering. Individual customers are going to decide how to leverage the cloud offering to extend and add value to their mission-critical grid environments, and like most decisions on outsourcing, the answer will be “when it makes sense economically.” We are likely to see a hybrid model for many years to come as customers make the choice that is right for them and their business.

So Amazon’s offerings, rather than killing grid, will make grids more dynamic and make it more apparent that the evolution of clusters, grids and clouds (and the relationships and learnings among them) is a natural one.

The Cloud in Practice: Harvard Medical joins Industry Experts on Cloud Panel at The Open Group Boston Conference


This week Platform customer Dr. Marcos Athanasoulis from Harvard Medical School spoke on a panel at The Open Group’s Architecture Practitioner’s Conference at the Hyatt Harborside Hotel in Boston. On the panel, entitled “Taking the Business Decision to Use Cloud Computing,” Dr. Athanasoulis was joined by a number of industry experts on cloud, including Mark Skilton, Global Director, Applications Outsourcing, Capgemini; Pam Isom, Senior Certified Executive IT Architect, IBM; and Henry Peyret, Principal Analyst, Forrester Research. The panel was moderated by Dana Gardner, Principal Analyst, Interarbor Solutions, who is also a prominent blogger for ZDNet.

In their discussion, the panel covered a wide range of topics, from the cloud themes that in their opinion will survive the hype cycle, to a deep dive on how Harvard Medical School (HMS) has implemented their own private cloud. According to Skilton, utility computing, SaaS and application stores will be the primary drivers of cloud. One overarching theme during the panel was the need to find some sort of common ground for IT and the business when it comes to cloud. Isom pointed out that cloud cannot be done in a vacuum, so providers and stakeholders will need to come together. Agreeing, Peyret pointed out that not only do IT and business need to be aligned, but they must actually be in sync at all times for cloud to work.

Providing the practical perspective, Athanasoulis talked about how HMS has found common ground from which to function. HMS is a unique use case for cloud because school policy dictates that IT cannot mandate central IT services. Because the department must work with the school’s many researchers to provide services that meet their research objectives, Athanasoulis refers to HMS as the “land of a thousand CIOs.” Over the past few years, Athanasoulis has implemented a private cloud for HMS’ researchers, using Platform LSF and Platform ISF to support it. According to Athanasoulis, the real value of the cloud at HMS is the ability to handle projects that require a lot of adaptation and to absorb the “burstiness” of massive research compute cycles.

Athanasoulis has been able to find common ground within HMS by working closely with the school’s deans as well as with the researchers themselves. To implement the internal cloud, his department first went to the school’s senior business leaders and made the case for the upfront investment needed to demonstrate the value derived from using the cloud. The implementation was iterative: Athanasoulis first urged early-adopter colleagues to try the cloud, then used their word of mouth to encourage others to use it. Use throughout the individual research departments has been growing steadily ever since.

According to the other panelists, this type of iterative adoption approach to the cloud, beginning small and then repeating the process throughout the organization as needed, is the best way to get buy-in throughout an enterprise. They also foresee that most clouds within organizations will eventually take the form of providing a “business services catalog” for users to pick and choose their IT apps from. In this scenario, the IT department will evolve to be the internal promoter and broker of those services. Finally, the panel provided the following recommendations for IT departments considering the cloud:

· Evangelize and act as a service provider
· Contractualize services as a business catalog
· Look at cloud as a risk mitigator
· Use best practices
· Try it, then practice what you preach
· Pilot, Participate, Produce results, Promote the services

The Glue that Binds: Cloud Management Software and the 7 Key Components of Private Clouds – Part 1

The IT industry, analysts and media continue to do a pretty decent job of outlining, defining and documenting cloud computing and successful early deployments. I thought I would contribute to the overall conversation by discussing some of the key elements that we at Platform Computing have identified as necessary for private clouds, drawing on our own conversations within the industry and with our customers. This will be a two-part blog series that provides recommendations for companies evaluating internal shared infrastructures by discussing seven key requirements for private clouds and the underlying cloud management software.

As we see it, here are the seven key components of a private cloud environment:

  1. Heterogeneous systems support – The private cloud needs to support an organization’s heterogeneous infrastructure, as well as resources from external providers. This includes server, storage and networking hardware, operating systems, hypervisors, storage systems, and file systems.

  2. Integration with management tools – Enterprises use a variety of IT management tools for security, provisioning, systems management, directory, reporting, billing, data management, regulation, and compliance. Cloud computing does not replace these tools. Instead, properly designed private cloud management software easily integrates with existing tools and invokes them as needed during cloud operations.

  3. Configurable resource allocation policies – The cloud must be workload-aware as well as resource-aware. This means that the cloud management software can determine the most efficient placement of application workloads. The cloud management software guarantees resource reservations to its customers based on well-defined policies, and when demand peaks, it is able to arbitrate resources based on the business priorities of the various parts of the cloud workload to cost-effectively meet SLAs (a minimal sketch of such a policy follows this list).

  4. Integration with workload managers, middleware and applications – Clouds exist to run applications. In addition to a self-service portal for users to request virtual or physical machines, private cloud management software provides flexible APIs to enable easy integration with the enterprise’s essential workload managers, middleware, and applications.

  5. Support for IT and business processes – Clouds provide support for various IT and business processes and allow IT to automate many of its operations. In fact, cloud management enables the definition and ongoing modification of many IT management processes that had been performed manually.

  6. Extensible to external resources – In addition to providing more flexible services with internal resources, the cloud should enable managed access to external resources that are hosted by service providers. This enables more flexible capacity planning where additional resources can be used and paid for only when needed, while centrally controlling access and metering of these services.

  7. Enterprise, not workgroup, solution – An organization usually consists of multiple departments and locations, often distributed internationally. A flexible cloud scales to meet their diverse needs in real time. While cloud computing may be adopted initially within an individual line of business or location, it enables the integration of IT across the enterprise by reconfiguring rather than replacing the private cloud management software. Therefore, a private cloud can be an enterprise-wide IT services delivery system that provides transparent and consistent access to global resources.
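
Coming back to item 3 above, here is a minimal sketch in Python of what a reservation-plus-priority allocation policy could look like when demand exceeds capacity. The consumer names, numbers and policy format are hypothetical, chosen only to illustrate the idea; they are not the configuration syntax of any Platform product.

# A minimal sketch of the kind of allocation policy described in item 3:
# guaranteed reservations plus priority-based arbitration when demand peaks.
# Department names, numbers and the policy format are hypothetical.
RESERVATIONS = {"risk": 40, "research": 20, "test": 10}      # guaranteed cores
PRIORITIES   = {"risk": 3,  "research": 2,  "test": 1}       # higher wins

def allocate(demand: dict, capacity: int) -> dict:
    """Split `capacity` cores across consumers in `demand` (consumer -> cores wanted)."""
    # 1. Honour guaranteed reservations first.
    alloc = {c: min(demand.get(c, 0), RESERVATIONS.get(c, 0)) for c in demand}
    spare = capacity - sum(alloc.values())
    # 2. Hand out the remainder in priority order until it runs out.
    for c in sorted(demand, key=lambda c: PRIORITIES.get(c, 0), reverse=True):
        extra = min(demand[c] - alloc[c], spare)
        alloc[c] += extra
        spare -= extra
    return alloc

# At peak, risk keeps its reservation and absorbs most of the spare capacity.
print(allocate({"risk": 80, "research": 40, "test": 30}, capacity=100))

In this example run, the high-priority risk workload keeps its guaranteed 40 cores and takes the remaining spare capacity, while the lower-priority workloads fall back to their reservations.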


It’s very important for companies to remember that, just like other mission-critical enterprise business systems and services, a private cloud is built by the IT organization, not bought from a vendor. Private cloud management software is the key to enabling IT to configure its data center resources, integrate its management tools, and support its applications and business processes. You could consider it “the glue” that binds together enterprise data center operations as organizations move into the era of cloud computing.

Stay tuned for the second part of this blog series that will showcase how Platform ISF meets all seven requirements of private cloud management discussed in this blog.


The Emperor's New Clothes

I picked up today's FT on my way to work. On page 11, an article titled "Bank traders rush to launch spin-offs before rule change" reported that many traders, fearing tougher prop trading regulations, are leaving banks to start hedge funds. The article's accompanying illustration showed over 250 new hedge funds in Q1. "In spite of one of the worst second-quarter for the industry on record", these funds are attracting investments.

So I asked myself, "What has regulation wrought?" Over the past few weeks, supporters and detractors of the proposed Dodd-Frank bill have argued about the reform's effectiveness. In particular, detractors have singled out the exceptions that effectively delay compliance until 2012 or later. So will reforms really level the playing field for all investors, or is it simply an illusory change? Reforms have unintended consequences, and perhaps the reinvigoration of the hedge fund industry is an example.

So has anything changed on Wall Street? I don't know. What I suspect is that banks will continue to have prop trading in the near future, and that there will be more hedge funds. These firms all need new servers, grid software, analytics and more to deal with high frequency trading, market data, and new regulations like Basel III. That's good news for technology firms like Intel, Sybase, and Platform Computing.

Oh, by the way, Happy Bastille Day.

Right Message, Not So Right Forum

I attended SIFMA in June, courtesy of Sybase, who provided a demo station for Platform Computing within their booth. Platform showcased the integration of Platform Symphony and Sybase RAP - The Trading Edition. Faced with the need for intraday and/or more rigorous analytics, banks of all sizes have expressed interest in technology that can accelerate calculations. The combination of Sybase RAP's Complex Event Processing (CEP) and Platform Symphony grid software can speed up complex analytics by 20x or more.

On the second day of SIFMA, our firms jointly presented our development in a seminar at the Hilton. In spite of the superb location of Sybase's booth and the highly marketed seminar, attendance was light overall. The attendees who did drop by all expressed strong interest in the joint offering. They also noted the subdued nature of SIFMA compared with shows of past years. In contrast, the STAC quarterly meeting was very well attended by banks, funds, and vendors of all sizes.

So what are my conclusions? I believe Platform and Sybase have the right solution for meeting the analytical challenges faced by banks and funds. Rather than large general trade shows like SIFMA, vendors need to focus on regional and specialized events like the STAC meeting and/or social media for getting their messages out. Otherwise, a lot of good work goes unheard.

IT Execs Betting on Private Cloud in 2010

Today we announced the findings of our third annual delegate research survey, which took place at the International Supercomputing Conference (ISC’10) in Hamburg in June. Our enthusiasm stems from the fact that demand for private cloud among IT executives remains undiminished from last year, in spite of the difficult year businesses have faced, with 28% of delegates planning a deployment this year.

It’s interesting to see that the main drivers have changed dramatically, however, reflecting, in my opinion, improved awareness and understanding of the benefits that private clouds can deliver. Improving efficiency was the main motivator in 2009 (41%), but the 2010 survey reveals that the drivers for deploying private cloud have evened out: efficiency (27%), cost cutting (25%), experimenting with cloud (19%), resource scalability (17%) and IT responsiveness (6%).

Furthermore, the increased significance placed on cost cutting (25%, compared to 17% in 2009) also suggests that the cautious economic climate has influenced drivers for adoption. 62% of executives also think that clouds are an extension of clusters and grids, with only 17% thinking they are a new technology. This indicates that greater awareness of private clouds is leading to recognition that they are the natural next step for organizations already using clusters or grids.

It’s also telling that while the appetite for private cloud in 2010 remains as strong as in 2009 and general understanding is better, IT executives seem unconvinced about the benefits of using an external service provider for ‘cloud bursting’ – where the public cloud is tapped when a company’s own resources reach capacity – with over three quarters (79%) stating that they have no plans to do this in 2010.

Predictions are always difficult, but I expect that private clouds will continue to outpace public cloud models and that the correlation between private clouds and hybrid use-cases such as ‘cloud bursting’ will increase over time. Only ongoing measurement and time will tell. Meanwhile, we’re extremely positive about the long-term future of cloud computing.