As demand continues to soar, the suppliers of our natural oil and gas resources are under increasing pressure to deliver product and keep prices low – these are basic economic principles, after all. But less well known is the role HPC is playing to ease the burden on companies in this market.
The applications of HPC are diverse and virtually limitless. Analytics, simulation, modelling, seismic imaging – all of these help to extend the life of oilfields, identify new submarine oilfields, achieve drastic reductions in drilling time, and make real-time and forward-looking calculations about risk.
At Platform Computing we’ve experienced the challenges oil and gas companies are facing first-hand through our work with leading companies in this industry, including Statoil Hydro and other oilfield services providers and energy giants. For example, a top-five global energy company turned to us recently to help cut manufacturing and capital costs. Our analysis revealed that they could achieve this with a simple cluster management solution that is easy to use and administer without dedicated IT admin resources.
As exploration and extraction takes companies to ever more extreme and challenging environments, HPC can be the difference between good and bad decision making... and, ultimately, how much profit the oil giants can turn.
The report is significant as the first “apples-to-apples” comparison of vendors using customer references, a compulsory 30-minute demonstration, and written responses. Only 15 vendors were able to clear this initial hurdle, suggesting that for a number of vendors, marketing far outstrips reality. Vendors were scored 0-4 across 10 criteria. Forrester did not add up the scores, but the media quickly did the sub-analysis and published a ranking table. The result: Platform Computing received the most points.
Source: SYS-CON Media
The #2 vendor was VMware, and we congratulate them on their effort. Given that one of the key report findings is to “prepare for islands of hypervisors” – meaning an increase in the need to support other hypervisors such as KVM and Xen – companies should pause before locking into a single hypervisor vendor. Cloud is much more than VMs. Also, be careful about pricing models that penalize you for utilizing your cloud environment more efficiently (i.e., per VM versus per server).
First, the report attempts to catalog vendors based on their origins, e.g., enterprise systems management vendors versus pure-play cloud solutions. Our belief is that regardless of origins, customers should choose a vendor with a comprehensive technology footprint, a solid customer base, strong financials, and a global support system. Claiming you are #1 among pure-play or grid vendors is a meaningless, artificial distinction. At the end of the day, one wants the best solution to a set of problems. Period.
Second, Mr. Staten recommends evaluating software-only vendors differently from those whose offerings include physical compute and storage. If you take those points away from HP, IBM, Dell, Microsoft, and BMC, the Platform lead only grows. And if you want an appliance, Platform has partners that offer a complete pre-integrated solution today.
Finally, the report recommends a strategy of trying before you buy. Since a private cloud project involves both technology and process change, we recommend running a proof-of-concept (POC) to gain buy-in from end user communities and different business units. A POC can also help validate assumptions against your business case for justifying the value of a private cloud.
- Learn more about Platform's private cloud solution here
- Download the Platform ISF product here
- Schedule a discovery session with one of our cloud consultants by contacting us at email@example.com.
Are there any criteria that you would have added to the evaluation?
There is nothing better than a real world customer case study to explain the value of deploying a product. Cut through the marketing and explain:
- The underlying problem / pain points
- The vendor selection process
- Key decisions and lesson learned
- Actual value being achieved
At the recent Wall Street & Technology Capital Markets Cloud Symposium, the Chief Infrastructure Architect from State Street, Kevin Sullivan, discussed how cloud – specifically private cloud – is being rolled out across 20+ projects.
For more details about Mr. Sullivan's presentation, there is a great article in Advanced Trading magazine by Phil Albinus. Mr. Albinus provides details about the decision making process, key vendors, and next steps. To learn more about Platform Computing's cloud solution, please visit our private cloud solution area.
Some additional metrics that were discussed include:
- Started in 2009
- Went from 400 to 150 use cases
- Operated a 6 month proof-of-concept (POC)
- Down-selected from 150 technology partners to a ‘handful’
- Provisioning has been reduced from 8 weeks to hours
- Unique Active-Active two data center configuration for 100% app high availability
One of the four key elements of the State Street cloud environment is the ‘cloud controller’. In their environment, the cloud controller manages self-service, configurations, provisioning, and automation of application construction within the infrastructure. A key benefit for developers was that they could get self-service capacity on demand much faster. This easily outweighed any cultural concerns about no longer being able to specify an IT environment request to the nth degree. If needed, IT could still request ‘custom’, but it would take longer.
In addition, Mr. Sullivan highlighted several key design principles driving the design, building, and management of the cloud:
- Simplification – creation of standards, consolidation, and self-service
- Automation – for deployment, metrics, elasticity, and monitoring
- Leverage – commodity hardware and software stack with a focus on reuse of the platform, services, and data
The benefits and successes of private cloud at State Street are becoming increasingly known as their CIO Christopher Perretta is fond of discussing the topic. Some additional resources to learn more about the State Street cloud project are:
Earlier blog posts in this series have addressed the potential High Performance Computing has in helping scientists with medical research. This post focuses on Neuroscience in particular.
High Performance Computing can help neuroscientists quickly and effectively create and test accurate models of the brain. This helps bring research to life and could potentially unlock many of the mysteries of this complex organ. HPC can be used to research mental health problems, such as depression, and also neurological disorders, such as epilepsy. HPC is also being used for Alzheimer’s research.
The Visual Neuroscience Group at Harvard’s Rowland Institute is using HPC to better understand how the brain works. I like their analogy: “the brain is a massively parallel computer, far exceeding the power available in currently available computers”. If you’re going to try and understand the most complex organ in the human body, then it makes sense to match this research with the most powerful computing available.
After attending IDC’s HPC User Forum in Houston last month and participating in an HPC cloud panel, it's clear that many potential cloud users still seem confused about the economics of cloud and when it's beneficial. One of the complaints we heard many times was about Amazon’s pricing model being several factors (nearly three times) more expensive than outright hardware purchases. While true, users who complain about this fact may be, at least partially, missing the primary use case for external cloud computing.
Our cloud panel didn’t have enough time to delineate the conditions and workload where cloud computing offers economic advantages, so it seems appropriate to start that discussion here in the first of a series of the Economics of Cloud.
There are several factors that should inform the decision to run an HPC cloud computing pilot; most are helpful, though not strictly required, conditions. These include:
· Practical input and output data sizes, or post-processing methods that can run in the cloud without transferring data back
· Serial or coarse-grained parallel workloads
· Data security policies that can be satisfied by the cloud
· Application OS and performance requirements that lead to acceptable performance in the cloud
· Unsteady workload requirements (meaning the amount of resource a workload requires varies over time)
This last factor is the one that might be the most confusing. Using IaaS can be very cost effective if the results from a workload are highly valuable and short lived. Conversely, results of unknown value and lengthy execution durations or large data requirements can have enormous charges associated with them.
One simple way of visualizing this is to understand the peak workload (expressed as a fraction of the available local resource) and the average workload. The difference between these two values, if significant, is a good indicator for whether cloud computing could have positive ROI or not. If this effect is plotted in time and the average and peak lines are overlaid, the term "peak shaving" is clearly an apt description of what benefit cloud computing can offer.
Invariably, a steady workload is most efficiently processed on a local data center resource when compared to pay-per-use rates. Indeed, most IaaS providers have built a factor of two to three times hardware costs into their pricing to account for the opportunity value of near-instantaneous access to compute resources. Thus, paying this "tax" for a steady workload could have disastrous financial consequences if adopted as a strategy.
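The peak-shaving arithmetic can be made concrete with a back-of-the-envelope comparison. This is a minimal sketch: the 3x cloud multiplier reflects the pricing factor discussed above, but the hourly rate, node counts, and burst hours are purely illustrative assumptions.

```python
# Hedged sketch: compare the cost of sizing a local cluster for peak demand
# against sizing for the average and "peak shaving" the burst into a cloud
# priced at roughly 3x equivalent hardware cost. All figures are illustrative.

def annual_cost(local_nodes, cloud_node_hours, local_rate=0.50, cloud_multiplier=3.0):
    """Local nodes are paid for 24/7; cloud node-hours only when consumed."""
    hours_per_year = 24 * 365
    local = local_nodes * hours_per_year * local_rate
    cloud = cloud_node_hours * local_rate * cloud_multiplier
    return local + cloud

# Assumed bursty workload: average demand of 40 nodes, peak of 100 nodes,
# with the demand above 40 nodes totalling 20,000 node-hours per year.
size_for_peak = annual_cost(local_nodes=100, cloud_node_hours=0)
peak_shaving = annual_cost(local_nodes=40, cloud_node_hours=20_000)

print(f"Provision for peak: ${size_for_peak:,.0f}")
print(f"Peak shaving:       ${peak_shaving:,.0f}")
```

Under these assumptions the hybrid approach wins despite the 3x "tax", because the idle capacity of a peak-sized local cluster costs more than the occasional burst. Narrow the gap between average and peak demand, though, and the comparison flips – which is exactly the steady-workload warning above.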
Anyone interested in permanent or long term cloud resource access should probably investigate longer term service contracts with a selected IaaS provider if local resources are not an option. Such an alternative agreement could easily change any potential negative financial estimates for the benefits of cloud.
High Performance Computing can assist with medical research, helping researchers and scientists achieve tangible, game-changing results. It can speed up the time taken to crunch vital medical data from a wide variety of sources – from breast cancer scans to DNA profiles. I expect the next medical breakthrough will have been powered in some part by High Performance Computing.
One of our customers, Harvard Medical School, has taken advantage of High Performance Computing to help aid their scientific discovery. Dr. Marcos Athanasoulis, DrPH, Director of IT at HMS, summed up the potential of High Performance Computing very neatly: “High performance computing is just at the center of discovery today and it’s personally gratifying for me that we are enabling researchers to one day find the cure for cancer, to continue the discovery and genomics and proteomics and that the impact of our work here can actually make a big difference on alleviating human suffering caused by disease.” He also said, “the internal grid allows our researchers to collaborate more easily than ever before and focus their attention on medical research instead of IT management.”
It is encouraging to see how HPC is being used to help researchers collaborate on finding cures for some of the world's most devastating diseases. Another example is the FightAIDS@Home project, which has set up an HPC grid to utilize idle computing cycles and build on our growing knowledge of the structural biology of AIDS. It’s incredible to think of all the home users across the world contributing to the project!
This blog post will focus on the letter L, and potential ways HPC can help solve two issues, which begin with L. The first is technical, the second is rather more hypothetical.
Linear regression assesses the relationship between two different variables to find a correlation. This method, which is popular among scientists, can sometimes be a lengthy process. However, HPC can help by automating and parallelizing it, which means that researchers can identify correlations much more quickly.
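The reason regression parallelizes so well is that each fit is independent of the others. The sketch below illustrates the pattern on a single machine with Python's `multiprocessing`; an HPC scheduler applies the same idea at cluster scale. The dataset values and the use of a thousand identical datasets are illustrative assumptions, not a real workload.

```python
# Hedged sketch: each (x, y) dataset gets an independent ordinary
# least-squares fit, so thousands of fits can be farmed out across cores.
from multiprocessing import Pool

def fit_line(data):
    """Return (slope, intercept) of the least-squares line for one dataset."""
    xs, ys = data
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

if __name__ == "__main__":
    # Many small, independent datasets (illustrative values repeated 1000x).
    datasets = [([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]) for _ in range(1000)]
    with Pool() as pool:
        results = pool.map(fit_line, datasets)  # fits run in parallel
    slope, intercept = results[0]
    print(f"slope={slope:.2f}, intercept={intercept:.2f}")
```

Because there is no communication between fits, this is exactly the "serial or coarse-grained parallel" shape of workload that scales almost linearly with the number of nodes thrown at it.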
Imagine if we could use an HPC environment to model the brain and see what happens when we fall in love? Imagine if we could pinpoint what happens in our brains when we feel those first flutters, and see how it affects our perception and sense of happiness? Or perhaps there’s a use for HPC in helping dating agencies find the perfect match for their customers? Do you think an HPC environment could solve a cosmic love equation?
Platform RTM 8 is a comprehensive operational dashboard that provides real-time workload monitoring, reporting and management for one or multiple HPC clusters. It helps cluster administrators be more efficient in their day-to-day activities by providing the information and tools needed to improve cluster efficiency, enable better user productivity, and contain or reduce costs. A flexible alerting facility quickly notifies administrators and managers of any issues so that they can take proactive action. Unlike competing tools that only monitor the infrastructure, Platform RTM is both workload and resource-aware, providing full visibility to Platform LSF clusters. With its broad set of capabilities, Platform RTM can replace multiple tools in typical Platform LSF environments, resulting in improved productivity as well as reduced cost and complexity.
Platform Analytics 8 covers the other end of the spectrum by providing a historical perspective of the datacenter. Built on top of a high-performing analytical engine, it offers a robust, easy-to-use interface and correlates multiple types of data for improved decision making, making it easier to identify and quickly remove bottlenecks, spot emerging trends, and plan capacity more effectively. The tool is fully functional “out of the box” and includes several interactive dashboards, making it quick and easy to analyze key HPC data. This is a definite advantage over other HPC analytics products, which require you to build complex analytics models from scratch, often with several intermediate steps.
Many of Platform’s customers have been using these tools to get peak utilization from their HPC datacenters, including Cadence Design Systems, Red Bull Racing, and Simulia (Dassault Systèmes). For more on today’s releases and the new features in Platform RTM 8 and Platform Analytics 8, please see today’s press release.