Waters USA 2010

Waters USA 2010 was held on December 6, 2010 at the Marriott Marquis Hotel in Times Square. I was not sure what to expect, as this was my first time attending this show. I must say that it was very well attended; my guess is that over 500 people were at the event. There were representatives from quite a few of the major Wall Street firms: JP Morgan Chase, Credit Suisse, Royal Bank of Scotland, Morgan Stanley, Deutsche Bank and Bank of America were all on hand to discuss the most interesting topics in financial services today. An especially hot topic was High Frequency Trading (HFT).

The keynote was presented by Andrew Silverman from Morgan Stanley. He shared quite a few interesting facts. For example, S&P volatility relative to price in 2008/2009 exceeded 6%, the highest we have seen. Average trade size is lower than it has been historically, yet trade volume is seven times what it was in 2004. Average daily volume in US markets had been around 12 billion shares; in 2010 it was about 8 billion. That is quite a drop! Is that a trend or just a blip? One other interesting thing he mentioned was that an estimated ("guesstimated") 50-70% of volume comes from High Frequency Trading.

There were quite a few panel discussions. Topics included how firms are trying to drive growth by increasing utilization of their existing assets and infrastructure to deliver services cost-effectively, as well as regulation of HFT, risk management and controls, and how firms can stay innovative and keep driving revenue growth.

Public sentiment, as well as the regulatory bodies, holds that the FS industry has not done enough to self-regulate. Issues like the "flash crash" and order leakage (IOIs) will likely force government regulators to impose rules now. Nobody knows exactly what future regulation will look like at this point, but everybody knows it is coming.

The main takeaways for me:
1. Firms need to try to gain additional advantage from existing infrastructure to support the growth of the business. This could include the use of grid software, private clouds and hybrid (private/public) clouds, which can be provided by vendors like Platform Computing.

2. Additional regulations, whatever shape or form they take, will require IT infrastructure that is adaptive and flexible, able to provide more computing power when it is needed to handle the multitude of computations and reporting requirements these regulations will bring. More risk management controls and reporting transparency will put more pressure on infrastructure, which means firms will either need to build more or find ways to get more out of their existing servers.

In both cases, partnering with Platform Computing to continue to grow the business, meet new regulatory requirements and still meet business service levels is the key to firms being successful in the future, whatever it may hold.

What a Part Time HPC Cluster Admin Needs

In most organizations that use small and medium HPC clusters (fewer than 200 nodes), an HPC cluster is treated as a separate system from an IT management perspective. As a result, the IT administration effort allocated to HPC clusters is very limited. Often, a part-time Linux administrator is tasked with taking care of an HPC cluster used by tens or even hundreds of users. Because most IT administrators are not HPC experts, they usually rely on the management tool included with the HPC cluster package to do their daily work. Those packages are usually a stack of open source software with very limited support, and because they are assembled from a functionality perspective, the pieces are not integrated. But what IT administrators really need is a robust, easy-to-use management tool that keeps the cluster up and running, rather than spending their time integrating the software stack provided with the cluster. This is a very different use case from large HPC shops that have lots of in-house HPC expertise.

We’ve designed and built Platform HPC specifically for organizations that need small or medium-sized clusters. Starting with an easy one-step installation, Platform HPC automates many of the complex tasks of managing a cluster, including provisioning and patching nodes, integrating applications, and troubleshooting user problems, saving time and hassle for part-time IT administrators. A web interface also makes remote administration a reality: as long as the administrator can reach the cluster's head node from a web browser, he or she can easily monitor, manage and troubleshoot the system.
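The kind of at-a-glance monitoring a part-time admin needs can be sketched in a few lines. To be clear, this is a generic illustration, not the Platform HPC API; the node names and the use of the system `ping` command are assumptions for the sketch.

```python
"""Toy cluster health check: an up/down report for a small cluster."""
import subprocess


def ping_probe(host: str, timeout_s: int = 1) -> bool:
    """True if `host` answers one ICMP ping (requires the `ping` CLI)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def cluster_report(nodes, probe):
    """Partition `nodes` into (up, down) lists using `probe(host) -> bool`."""
    up, down = [], []
    for node in nodes:
        (up if probe(node) else down).append(node)
    return up, down


# Demo with a stubbed probe so the sketch runs anywhere;
# in real use you would pass `ping_probe` instead.
up, down = cluster_report(["node001", "node002", "node003"],
                          probe=lambda h: h != "node002")
print(f"up: {up}, down: {down}")
```

A real tool layers alerting, history and a web view on top, but the core loop of probing every node and sorting the results is this simple.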

At the same time, the tools and technologies included in Platform HPC were originally designed for large-scale environments, so under the hood they are powerful, scalable, and highly customizable. For those running a small cluster, whether or not it is rapidly growing, Platform HPC would be their best choice now and for the future.

Santa and HPC

“So just how does Santa manage to travel the earth in just one evening and deliver all those presents” is a question you might get asked this Christmas. Sure, you can opt for the “magic” answer but perhaps Santa has a really awesome HPC environment up there at the North Pole?

Red Bull Racing is using HPC software to significantly accelerate the computer-aided design and engineering processes for its winning Formula One cars. If Red Bull can use HPC to optimise designs that maximise downforce and reduce the drag its cars create on the track, then why can’t Santa use HPC for his sleigh design?
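The aerodynamic trade-off both teams care about comes down to the standard drag equation, F_d = ½ρv²C_dA. A toy calculation makes the point; all the sleigh numbers below are invented purely for illustration.

```python
def drag_force(rho: float, v: float, c_d: float, area: float) -> float:
    """Aerodynamic drag in newtons: F_d = 0.5 * rho * v^2 * C_d * A."""
    return 0.5 * rho * v ** 2 * c_d * area


# Cold North Pole air (~1.3 kg/m^3), a brisk 30 m/s cruise,
# and a fairly blunt sleigh (C_d 0.8, frontal area 2.5 m^2).
force = drag_force(rho=1.3, v=30.0, c_d=0.8, area=2.5)
print(f"{force:.0f} N")  # prints 1170 N
```

Halve C_d through better shaping and the drag (and the reindeer-power needed to overcome it) halves too, which is exactly the kind of knob a CFD simulation on an HPC cluster lets you turn before building anything.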

As Santa is going to experience some turbulent and icy conditions on his route this year, I’d like to think his elves have been running simulations via HPC to ensure his sleigh can cope with these adverse conditions without sacrificing speed. Throw in the rising cost of gasoline, in both monetary and environmental terms, and Santa is going to have to ensure his sleigh is energy efficient too. Of course the rise in population is something he needs to keep in mind as well: Santa now needs to make far more stops than he did years ago, and he can take advantage of HPC to analyze all those naughty and nice lists and take the most efficient route possible.
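Routing billions of stops is a giant travelling-salesman problem, which is exactly where HPC earns its keep. A minimal sketch of the simplest heuristic, greedy nearest-neighbour, is below; the chimney coordinates are invented for illustration.

```python
import math


def nearest_neighbour_route(start, stops):
    """Greedy tour: from `start`, always fly to the closest unvisited stop.

    Not optimal (real planners do far better), but a reasonable first
    cut at a delivery route, and trivially parallelisable per region.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    route = []
    here = start
    remaining = dict(stops)  # name -> (x, y)
    while remaining:
        nxt = min(remaining, key=lambda name: dist(here, remaining[name]))
        route.append(nxt)
        here = remaining.pop(nxt)
    return route


# Invented chimney coordinates, starting from the North Pole at (0, 10).
stops = {"Oslo": (1, 9), "London": (0, 5), "Sydney": (8, -8), "Lima": (-7, -1)}
print(nearest_neighbour_route((0, 10), stops))
```

With real list sizes you would partition the globe and run one solver per region across the cluster, then stitch the regional tours together.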

The design conundrum outlined above really does point toward the need for an HPC environment at the North Pole. Santa just can’t afford to take a risk with the design of his sleigh. There would just be too many disappointed children to imagine.

So when you’re faced with the “just how does Santa do it” question, why not take this as an opportunity to introduce the child to the infinite possibilities of High Performance Computing!

Note: All opinions in this blog are my own and not officially endorsed by other people named Saint Nick.

Waters USA 2010 - NYC

This year’s Waters USA conference, held December 6th, 2010, was one of the biggest and best attended conferences I have been to all year. Held at the Marriott Marquis in the heart of Times Square, it offered a great preview of where the Financial Services industry is heading in 2011. The show’s focus was on how FS firms continue to leverage technology to achieve their trading and business strategies and to deal with new and pending regulation. Platform Computing was one of the event sponsors, along with IBM, Microsoft, HP and Bloomberg, which shows who firms look to when trying to meet these tough challenges.

The event’s program read like a who’s who of the major FS firms in the industry. Morgan Stanley, Credit Suisse, Goldman Sachs and JP Morgan Chase all had experts on hand to discuss their most pressing issues around High-Frequency Trading (HFT), risk management and controls, and how firms can drive revenue while still meeting regulatory guidelines, using technologies like Platform LSF and Platform Symphony grid computing and Platform ISF cloud computing.

The keynote address, given by Andrew Silverman, who is responsible for electronic trading at Morgan Stanley, effectively explained how the “regulatory pendulum” is starting to swing back towards rules mandated by organizations like the SEC, the Federal Reserve, the UK’s FSA (Financial Services Authority) and Japan’s FSA (Financial Services Agency). Mr. Silverman felt FS firms need to wake up and self-regulate before regulators mandate rules that are more costly to implement and ineffective to boot. He described technology as an enabler that firms cannot do without, but one that must be used ethically: putting the client’s needs ahead of the broker’s, for example through smart order routing technology to achieve best execution.

At the CIO/CTO roundtable that followed, Peter Kelso (Global CIO, DB Advisors), Michael Radziemski (Partner and CIO, Lord Abbett & Co), Scott Marcar (Global Head of Risk & Finance Technology, RBS) and Peter Richards (Managing Director and CTO, Global Head of Production and Infrastructure, JP Morgan Chase) strongly agreed that improving their existing infrastructures is a top priority. They all felt their firms need to continue to break down operating silos and change their infrastructures to increase efficiency and utilization and to lower costs. All participants stated that their IT budgets will remain “flat” in 2011, even though they are expected to “do more”. They agreed this would only be possible by leveraging existing infrastructure better, with technologies such as grid and cloud computing products, which will be a key focus for all their firms.

One other panel worth mentioning was “Past the cloud computing hype: Constructing a safe cloud environment”, where two FS technology executives, Michael Ryan (Director, Bank of America) and Uche Abalogu (CTO, Harbinger Capital Partners), and two vendors, Dr. Songnian Zhou (CEO, Platform Computing) and Matt Blythe (Product Manager, Technical Computing, Microsoft), discussed the merits and challenges of cloud computing. In summary, the panelists believe FS firms are first looking at better utilizing their existing infrastructure resources by building their own private clouds. Firms are exploring hybrid clouds and expect that this is eventually where they will head, but security, data management and latency remain key obstacles to adoption.

Overall, it seems the focus right now is on building private clouds, and partnering with a vendor like Platform, which has experience leveraging existing infrastructure to maximize efficiency and utilization while still meeting business service levels, is key to being successful.

Storm is brewing – HPC Cloud

At the 2009 Supercomputing conference in Portland, Platform Computing showed off our first generation cloud computing management tool, Platform ISF. At that time, the “cloud” buzzword was still fairly new to the HPC community, and it had several stern critics in the HPC space. To many HPC folk, “cloud” meant virtualization, and virtualization meant low performance. Very few other vendors at the conference even used the C-word, and when they did it was to describe other types of computing (e.g. enterprise computing, dynamic datacenter provisioning). So for a while, many believed, as we did, that virtualization takes the “H” out of “HPC.”

In contrast to last year’s conference, this year both software and infrastructure vendors were talking about making cloud adoption easy, with every hardware vendor trying to persuade potential customers that building a private cloud on its hardware is a smart choice, especially when the vendor offers its own IaaS model for workload overflow (Platform calls this “Cloud Bursting”).
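The core of any bursting policy is deciding when, and by how much, to overflow to rented capacity. Here is a minimal sketch of one such policy; the function and parameter names are invented for illustration and are not from any Platform product.

```python
def burst_plan(pending_jobs: int, local_free_slots: int,
               max_cloud_nodes: int, slots_per_node: int) -> int:
    """How many IaaS nodes to rent when local capacity overflows.

    Toy policy: cover the backlog the local cluster cannot absorb,
    capped by a budget limit on rented nodes.
    """
    overflow = max(0, pending_jobs - local_free_slots)
    nodes_needed = -(-overflow // slots_per_node)  # ceiling division
    return min(nodes_needed, max_cloud_nodes)


# 120 queued jobs, 40 free local slots, 8 slots per cloud node,
# budget cap of 16 nodes -> rent 10 nodes to clear the backlog.
print(burst_plan(pending_jobs=120, local_free_slots=40,
                 max_cloud_nodes=16, slots_per_node=8))  # prints 10
```

Real schedulers fold in job runtimes, data staging costs and spot pricing, but the shape of the decision, backlog over local capacity, capped by budget, stays the same.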

Also in contrast to 2009, this year showed that hypervisors and processors alike have matured to support near-hardware performance under virtualization. Indeed, for some applications the performance chasm has narrowed to a crack (for more on this, see our Platform whitepaper). Perceptions of the cloud have started to change in the HPC community as well. For the right jobs, virtualization no longer means an unacceptable performance burden, and the advantages it brings to management, not to mention flexibility, are hard to ignore.

This year at SC10 we gave almost the same demonstration, with more polish. The difference was that the reaction had turned from disdain and skepticism to curiosity and interest. Yes, there are still several issues to sort out before cloud computing is simple for HPC (licensing, data movement, and data security are the big ones). Nevertheless, HPC users are finally beginning to think about the cloud, and performance is becoming less and less of an issue. Amazon, for instance, let its HPC performance data speak for itself at the show: a cluster running HPC on EC2 placed 230th in the TOP500 (see http://www.top500.org/system/10661). So there’s no debating it: you can do HPC in the external public cloud, at least if you’re running Linpack.

Even if your application may be difficult to adapt to the cloud today, the barriers are falling one by one. Taking the longer term view, over the next five years HPC in the cloud seems not only feasible but, as we at Platform Computing believe, inevitable.