Waters USA 2010
The keynote was presented by Andrew Silverman from Morgan Stanley, and he offered quite a few interesting facts. For example, S&P volatility compared to price in 2008/2009 was greater than 6%, the highest we have seen. Average trade size is lower than it has been historically, yet trade volume is seven times what it was in 2004. The average volume of trades in US markets is around 12 billion shares; in 2010 it was about 8 billion - that is quite a drop! Is that a trend or just a blip? One other interesting point: it is "guestimated" that 50-70% of volume comes from high-frequency trading.
There were quite a few panel discussions, on topics ranging from how firms are trying to increase the utilization of their existing assets and infrastructure to deliver services more cost-effectively, to regulation of HFT, risk management and controls, and how to stay innovative and keep driving revenue growth.
Public sentiment, as well as the regulatory bodies themselves, suggests the FS industry has not done enough to self-regulate, and issues like the "flash crash" and order leakage (IOIs) will likely force government regulators to impose new rules. Nobody knows exactly what future regulation will look like at this point, but everyone knows it is coming.
The main takeaways for me:
1. Firms need to try to gain additional advantages from existing infrastructure to support the growth of the business. This could include the use of grid software, private clouds and hybrid (private/public) clouds, which can be provided by vendors like Platform Computing.
2. Additional regulations, whatever shape or form they take, will require IT infrastructure that is adaptive and flexible, able to provide more computing power on demand to handle the multitude of computations and reporting requirements that compliance will bring. More risk management controls and greater reporting transparency will put more pressure on infrastructure, which means firms will either need to build more of it or find ways to get more out of the servers they already have (a rough sketch of that kind of workload follows this list).
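To make point 2 a little more concrete, here is a minimal sketch of the kind of embarrassingly parallel risk calculation that eats compute capacity. The portfolio value, volatility and Monte Carlo model are made-up placeholders, and on a real grid each chunk would be submitted as its own job rather than run as a local process:

    # Minimal sketch: a Monte Carlo value-at-risk run fanned out across worker
    # processes. The portfolio and model here are invented purely for
    # illustration; on a grid each chunk would be a separate job.
    import random
    from concurrent.futures import ProcessPoolExecutor

    PORTFOLIO_VALUE = 100_000_000  # hypothetical book value in USD
    DAILY_VOL = 0.02               # assumed daily volatility

    def simulate_chunk(args):
        """Simulate n one-day P&L outcomes using a given RNG seed."""
        seed, n = args
        rng = random.Random(seed)
        return [PORTFOLIO_VALUE * rng.gauss(0, DAILY_VOL) for _ in range(n)]

    def value_at_risk(scenarios_per_worker=250_000, workers=8, confidence=0.99):
        """Spread the scenarios across workers, then take the loss quantile."""
        tasks = [(seed, scenarios_per_worker) for seed in range(workers)]
        outcomes = []
        with ProcessPoolExecutor(max_workers=workers) as pool:
            for chunk in pool.map(simulate_chunk, tasks):
                outcomes.extend(chunk)
        outcomes.sort()
        # VaR is the loss at the chosen confidence level (losses are negative P&L).
        return -outcomes[int((1 - confidence) * len(outcomes))]

    if __name__ == "__main__":
        print(f"1-day 99% VaR: ${value_at_risk():,.0f}")

The point is less the math than the shape of the workload: doubling the scenario count or the number of regulatory reports is simply a matter of adding workers, which is exactly where grid and cloud capacity pays off.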
In both cases, partnering with Platform Computing to keep growing the business, satisfy new regulatory requirements and still meet business service levels is the key to firms being successful in the future, whatever it may hold.
What a Part Time HPC Cluster Admin Needs
We’ve designed and built Platform HPC specifically for organizations that need small or medium-sized clusters. Starting with an easy one-step installation, Platform HPC automates many of the complex tasks of managing a cluster, including provisioning and patching nodes, integrating applications, and troubleshooting user problems, saving time and hassle for part-time IT administrators. A web interface also makes remote administration a reality: as long as administrators have a web browser that can reach the head node of the HPC cluster, they can easily monitor, manage and troubleshoot problems.
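For the curious, remote monitoring over HTTP can be as lightweight as the little script below. To be clear, the endpoint path and JSON fields here are invented stand-ins for illustration, not Platform HPC's actual web API:

    # Hypothetical sketch of polling a cluster head node over HTTP for node status.
    # The URL path and JSON layout are invented for illustration only.
    import json
    import urllib.request

    HEAD_NODE = "http://headnode.example.com"  # placeholder hostname

    def list_down_nodes():
        """Fetch a (hypothetical) node-status document and report unhealthy nodes."""
        with urllib.request.urlopen(f"{HEAD_NODE}/api/nodes/status", timeout=10) as resp:
            nodes = json.load(resp)
        return [n["name"] for n in nodes if n.get("state") != "ok"]

    if __name__ == "__main__":
        down = list_down_nodes()
        print("All nodes healthy" if not down else f"Needs attention: {', '.join(down)}")

Something along these lines, run from any laptop that can reach the head node, is enough for the check-in-between-other-duties style of administration that part-time admins actually do.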
At the same time, the tools and technologies included in Platform HPC were originally designed for large-scale environments, so under the hood they are powerful, scalable, and highly customizable. For those running a small cluster, whether or not it is growing rapidly, Platform HPC is their best choice now and for the future.
Santa and HPC
Red Bull Racing is using HPC software to significantly accelerate the computer-aided design and engineering processes behind its winning Formula One cars. If Red Bull can use HPC to optimise designs that maximise downforce and reduce the drag its cars create on the track, then why can’t Santa use HPC for his sleigh design?
As Santa is going to experience some turbulent and icy conditions on his route this year, I’d like to think his elves have been running simulations via HPC to ensure his sleigh can cope with these adverse conditions without sacrificing speed. Throw in the rising cost of gasoline, in both monetary and environmental terms, and Santa is going to have to make sure his sleigh is energy efficient too. The growth in population is something he needs to keep in mind as well: Santa now has to make far more stops than he did years ago, and he can take advantage of HPC to analyze all those naughty and nice lists and plot the most efficient route possible.
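For fun, here is a toy version of that routing problem: a greedy nearest-neighbour tour over a handful of made-up stops. A real HPC run would evaluate enormously more candidate routes (and a few billion more chimneys) in parallel; this just shows the flavour of the calculation:

    # Toy route planner for Santa: a greedy nearest-neighbour tour over a
    # made-up list of stops. Real HPC route optimization would search far more
    # thoroughly, and in parallel.
    import math

    # Hypothetical stops as (name, latitude, longitude).
    STOPS = [
        ("North Pole", 90.0, 0.0),
        ("Toronto", 43.7, -79.4),
        ("London", 51.5, -0.1),
        ("Tokyo", 35.7, 139.7),
        ("Sydney", -33.9, 151.2),
    ]

    def distance(a, b):
        """Great-circle distance in kilometres between two stops."""
        _, lat1, lon1 = a
        _, lat2, lon2 = b
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlat, dlon = p2 - p1, math.radians(lon2 - lon1)
        h = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
        return 2 * 6371 * math.asin(math.sqrt(h))

    def greedy_route(stops):
        """Always fly to the nearest unvisited stop next."""
        route, remaining = [stops[0]], list(stops[1:])
        while remaining:
            nxt = min(remaining, key=lambda s: distance(route[-1], s))
            route.append(nxt)
            remaining.remove(nxt)
        return route

    if __name__ == "__main__":
        tour = greedy_route(STOPS)
        total = sum(distance(a, b) for a, b in zip(tour, tour[1:]))
        print(" -> ".join(name for name, _, _ in tour), f"({total:,.0f} km)")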
The design conundrum outlined above really does point toward the need for an HPC environment at the North Pole. Santa just can’t afford to take a risk with the design of his sleigh. There would just be too many disappointed children to imagine.
So when you’re faced with the “just how does Santa do it” question, why not take this as an opportunity to introduce the child to the infinite possibilities of High Performance Computing!
Note: All opinions in this blog are my own and not officially endorsed by other people named Saint Nick.
Waters USA 2010 - NYC
The event’s program read like a who’s who of the major FS firms in the industry. Morgan Stanley, Credit Suisse, Goldman Sachs and JP Morgan Chase all had experts on hand to discuss their most pressing issues around high-frequency trading (HFT), risk management and controls, and how firms can drive revenue while still meeting regulatory guidelines using technologies like Platform LSF and Platform Symphony grid computing and Platform ISF cloud computing.
The keynote address, given by Andrew Silverman, who is responsible for electronic trading at Morgan Stanley, explained effectively how the “regulatory pendulum” is starting to swing back towards rules mandated by organizations like the SEC, the Federal Reserve Bank, the UK FSA and Japan’s FSA. Mr. Silverman felt FS firms need to wake up and self-regulate before regulators mandate rules that are more costly to implement and ineffective to boot. He explained that technology is an enabler firms cannot do without, but also noted that firms need to use technology ethically, such as by putting the client’s needs ahead of the broker’s, as in the use of smart order routing technology to achieve best execution.
In summary, it seems the focus now is on building private clouds, and that partnering with a vendor like Platform, which has experience in leveraging existing infrastructure to maximize efficiency and utilization while still meeting business service levels, is key to being successful.
Storm is brewing – HPC Cloud
At the 2009 Supercomputing conference in Portland, Platform Computing showed off our first-generation cloud computing management tool, Platform ISF. At that time, the “cloud” buzzword was still fairly new to the HPC community and had several stern critics in the HPC space. To many HPC folk, “cloud” meant virtualization, and virtualization meant low performance. Very few other vendors at that conference even used the C-word, and when they did it was to describe other types of computing (e.g. enterprise computing, dynamic datacenter provisioning). So for a while, many believed, as we did, that virtualization takes the “H” out of “HPC.”
In contrast to last year’s conference, this year both software vendors and infrastructure vendors were on hand talking about making cloud adoption easy, with every hardware vendor trying to persuade potential customers that building a private cloud on its hardware was a smart choice, especially when the vendor offers its own IaaS model for workload overflow (Platform calls that “Cloud Bursting”).
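Conceptually, the bursting decision itself is simple, along the lines of the sketch below, in which the queue check and provisioning calls are stand-ins rather than any vendor's real API; the hard parts are elsewhere, as discussed below:

    # Minimal sketch of a cloud-bursting policy: when the local cluster's backlog
    # exceeds a threshold, add cloud nodes; when it drains, release them. The
    # queue and provisioning functions are stand-ins, not any vendor's real API.
    BURST_THRESHOLD = 50   # pending jobs before we overflow to the cloud
    NODES_PER_BURST = 10   # hypothetical instances to add per burst decision

    def pending_jobs() -> int:
        """Stand-in for querying the local scheduler's queue depth."""
        return 120  # pretend the queue is deep right now

    def provision_cloud_nodes(count: int) -> None:
        """Stand-in for an IaaS provisioning call (EC2, a private cloud API, etc.)."""
        print(f"Requesting {count} cloud nodes for overflow work")

    def release_cloud_nodes() -> None:
        """Stand-in for tearing the overflow capacity back down."""
        print("Releasing cloud nodes; local cluster can cope again")

    def burst_policy() -> None:
        backlog = pending_jobs()
        if backlog > BURST_THRESHOLD:
            provision_cloud_nodes(NODES_PER_BURST)
        else:
            release_cloud_nodes()

    if __name__ == "__main__":
        burst_policy()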
Also in contrast to 2009, this year showed that hypervisors and processors alike have matured to deliver near-native performance with virtualization. Indeed, for some applications the performance chasm has narrowed to a crack (for more on this, see our Platform whitepaper). Perceptions of the cloud have also started to change in the HPC community. For the right jobs, virtualization doesn’t have to mean an unacceptable performance burden, and the advantages it brings to management, not to mention flexibility, are hard to ignore.
This year at SC10 we gave almost the same demonstration, with more polish. The difference was that the reaction had turned from disdain and skepticism to curiosity and interest. Yes, there are still several issues that need to be sorted out before cloud computing is simple for HPC (licensing, data movement and data security are the big ones). Nevertheless, HPC users are finally beginning to think about the cloud, and performance is becoming less and less of an issue. Amazon, for instance, let its HPC performance data speak for itself at the show: a cluster running HPC on EC2 placed 230th in the TOP500 (see http://www.top500.org/system/10661). So there’s no debating it: you can do HPC in the external public cloud, at least if you’re running Linpack.
Even if your application is difficult to adapt to the cloud today, the barriers are falling one by one. Taking the longer-term view, over the next five years HPC in the cloud doesn’t just seem feasible; it seems, as we at Platform Computing believe, inevitable.