HPC from A-Z (part 23) - W

W is for weather mapping

From Argentina to Afghanistan, Lisbon to London, and Boston to Beijing, strip away all our cultural differences and you are left with one single unifying trait of humanity: we love to talk about the weather.

Weather by its very nature is unpredictable, and it is this uncertainty that has been a thorn in humanity's side since the beginning. So, in much the same way that Ancient Greek sailors consulted an oracle about the likelihood of a smooth ocean voyage in 300 BC, we now check the weather forecast on our phones before leaving for work in 2011.

Nowadays, of course, our weather predictions are significantly more accurate than they were in Ancient Greece, but forecasting is still by no means an exact science.

For The University of Oklahoma, weather forecasting applications place one of the heaviest burdens on its computing resources. These HPC applications crunch masses of data from satellites and radars, which are then amalgamated throughout the day and processed into a forecast so that hundreds of thousands of Americans know whether to pack an umbrella or sunscreen.

Of course, while the ability to accurately predict the weather can help keep us dry or plan what to do with our long holiday weekends, it also provides a much more important service in helping scientists predict natural disasters. HPC is crucial to research into severe storm patterns in Oklahoma, the surrounding region, and across the United States, and it is quite literally helping to save lives.

HPC from A-Z (part 22) - V

V is for… virtual prototyping

While virtual prototyping may sound like a complicated process, it’s actually just a form of product development that uses Computer-Aided Design or Engineering to develop a virtual prototype before putting a physical object into production. In other words, it’s about building big, expensive stuff like spaceships, race cars, and battleships.

Organizations like NASA aren’t just going to experiment with billions of dollars in the hope that the resulting rocket successfully launches. When you’re investing that much money, you want a guarantee that your product is 99.9% likely to work as it should and that the people using it are going to come out in one piece! It’s in the design and test phase of building these products that HPC plays a crucial role.

In the last blog post we talked about developing fictional worlds for computer games. HPC helps produce a realistic virtual environment for testing manufacturing designs in a very similar way. And no company understands how important this is better than engine manufacturer MTU Aero Engines. They use Platform LSF to prioritize jobs so that time-sensitive tests can be completed quickly, making use of the maximum possible computing power to create a realistic test environment. In fact, the technology is so popular that users keep coming back for more, and the organization has had to create a special queue that lets them submit low-priority, short-duration jobs to test for immediate problems before submitting designs for thorough testing.
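As a rough, hypothetical sketch of what such a setup could look like (the queue names, priorities and limits below are invented for illustration, not MTU's actual configuration), LSF queues are defined in the lsb.queues file along these lines:

```
# Hypothetical sketch of two LSF queues in lsb.queues.
# Queue names, priorities and limits are invented, not MTU's settings.

Begin Queue
# Low-priority queue for short sanity checks (a larger PRIORITY value means higher priority in LSF)
QUEUE_NAME  = quick_check
PRIORITY    = 20
# Cap run time at 15 minutes
RUNLIMIT    = 15
DESCRIPTION = Short, low-priority jobs that flag immediate design problems
End Queue

Begin Queue
# Higher-priority queue for time-sensitive, full-scale prototype tests
QUEUE_NAME  = full_test
PRIORITY    = 60
DESCRIPTION = Time-sensitive, thorough virtual prototype tests
End Queue
```

A user could then run a quick sanity check with something like "bsub -q quick_check ./check_design" (the job script name is made up here) and submit to the full-test queue only once that passes.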

HPC from A-Z (part 21) - U

U is for universe modeling

As we near the end of our HPC ABC we’ve seen a wide range of HPC examples. It literally does not get bigger than this one…

Our H blog post explained how CERN’s Large Hadron Collider uses HPC to crunch data and probe the origins of the universe. What many don’t know is that HPC can also be used to see how our universe might grow. According to Scientific American, the diameter of the observable universe is at least 93 billion light years, or 8.8×10^26 meters [1]. So, the big question is: how fast is it expanding? HPC can help calculate that rate.
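As a back-of-the-envelope illustration of the arithmetic involved (this runs on one core, not a supercomputer), Hubble's law relates recession velocity to distance as v = H0 × d. The Hubble constant used below, roughly 70 km/s per megaparsec, is an assumed round figure for the sketch, not a value from the original post:

```python
# Back-of-the-envelope sketch of Hubble's law, v = H0 * d.
# H0 ~ 70 km/s/Mpc is an assumed round figure for illustration.

H0_KM_S_PER_MPC = 70.0          # assumed Hubble constant
LY_PER_MPC = 3.262e6            # light years in one megaparsec
C_KM_S = 299_792.458            # speed of light, km/s

radius_ly = 93e9 / 2            # radius from the 93-billion-light-year diameter
radius_mpc = radius_ly / LY_PER_MPC

recession_velocity = H0_KM_S_PER_MPC * radius_mpc   # km/s

print(f"Distance to the edge: {radius_mpc:,.0f} Mpc")
print(f"Recession velocity:   {recession_velocity:,.0f} km/s "
      f"(~{recession_velocity / C_KM_S:.1f}x the speed of light)")
```

The answer, roughly three times the speed of light for the most distant observable regions, is exactly the kind of counter-intuitive result that real cosmological simulations, which evolve billions of particles over billions of years, need HPC clusters to explore.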

Not only can HPC help model our universe, it can also help create fictional universes. Parallel processing can help render imagined virtual worlds. Many of today’s games are incredibly immersive and realistic, and HPC can play an important part in powering that level of reality by reducing the amount of time it takes to create complex graphics.
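To make that concrete, here is a minimal, hypothetical sketch (no particular game engine's approach) of data-parallel rendering in Python: every image row is shaded independently, so rows can be handed to however many CPU cores are available, and the wall-clock time falls roughly in proportion:

```python
# Minimal sketch of data-parallel rendering: each image row is shaded
# independently, so rows can be farmed out across CPU cores.
from multiprocessing import Pool

WIDTH, HEIGHT = 640, 480

def shade_row(y):
    """Toy 'shader': a smooth colour gradient standing in for real lighting maths."""
    return [((x * 255) // WIDTH, (y * 255) // HEIGHT, 128) for x in range(WIDTH)]

if __name__ == "__main__":
    with Pool() as pool:                     # one worker per CPU core by default
        image = pool.map(shade_row, range(HEIGHT))
    print(f"Rendered {len(image)} rows of {len(image[0])} pixels each")
```

Real renderers parallelize far more aggressively, across GPUs and whole render farms, but the principle is the same: independent pixels and frames are an embarrassingly parallel workload.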

Hadoop Summit 2011 Validates Platform MapReduce

“Big Data” is in and hot these days. This year’s Hadoop Summit 2011 attracted nearly 1,600 people, doubling the size of the conference from last year. Topics discussed at the Summit ranged from questions and concerns about Hortonworks, a fresh spinoff from Yahoo!, to various technical and use-case discussions around Hadoop. While the center stage was dominated by full-distribution players such as Cloudera, Hortonworks and MapR, newcomers focused on providing alternative, best-of-breed component solutions in the stack are also emerging and gaining traction in the market. This is no surprise; the market for “Big Data” is still young and fragmented, and people at various phases of the technology adoption lifecycle are looking for the solutions best suited to their needs.

So the question is: full distribution or best-of-breed? 

At Platform Computing, we believe there is a need for both. For someone who is new to Hadoop and would like to experiment with this new programming model, a full-distribution solution is an easy way to get up to speed and acquainted with Hadoop, as it contains all the elements in the stack needed to run MapReduce applications. But for someone who is already Hadoop savvy and would like to bring their MapReduce applications into production, a whole new set of requirements must be met. Customers who need a production-ready solution are seeking enterprise-class capabilities, such as:

1) superior predictability of the infrastructure and distributed runtime engine for MapReduce jobs, so it meets the organization’s SLA requirements;
2) high resource utilization that eliminates siloed environments and allows organizations to “do more with less”;
3) a rich set of management capabilities for operational efficiency;
4) high availability, so that hardware and service failures do not require jobs to be manually recovered or restarted from scratch; and
5) of course, faster performance.
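For readers new to the programming model mentioned above: a MapReduce job boils down to a map function that emits key/value pairs and a reduce function that combines the values collected for each key. The short, single-process Python sketch below is the classic word-count example, the model in miniature rather than Hadoop’s or Platform MapReduce’s actual API:

```python
# Word count expressed as map and reduce steps, run in a single process.
# Illustrative only: real Hadoop/Platform MapReduce jobs are written against
# their own APIs and spread the map and reduce tasks across a cluster.
from collections import defaultdict

def map_phase(line):
    """Map: emit a (word, 1) pair for every word in the input line."""
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(word, counts):
    """Reduce: combine all the counts emitted for one word."""
    return word, sum(counts)

lines = ["Big Data is in and hot these days",
         "the market for Big Data is still young"]

# Shuffle step: group the mapped pairs by key (word).
grouped = defaultdict(list)
for line in lines:
    for word, count in map_phase(line):
        grouped[word].append(count)

totals = dict(reduce_phase(w, c) for w, c in grouped.items())
print(totals["big"], totals["data"])   # -> 2 2
```

The appeal of the model is that the map and reduce functions are independent of how the work is distributed; the runtime engine decides where the tasks run, which is exactly the layer the rest of this post is about.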

The full-distribution solutions currently on the market do not deliver the capabilities this maturing market is looking for, which is why Platform Computing is delivering Platform MapReduce, a best-of-breed, distributed runtime engine for MapReduce workloads, to fill the gap. Launched on June 28, the eve of Hadoop Summit, Platform MapReduce received great traction at the event. As expected, users who have had a few years of experience with either open-source Hadoop or commercial solutions are well aware of the shortcomings in the existing options, and they were excited to hear about Platform MapReduce and the enterprise-class capabilities it provides.

Built on Platform Computing’s decade of experience in managing and scheduling workloads in distributed environments, Platform MapReduce uses the same core technology that has powered the most demanding, mission-critical workloads of many Fortune 1000 customers; bringing that capability to the MapReduce environment is a natural market expansion for the company. Platform MapReduce addresses the major issues that are holding back the current market, and it is designed to help organizations overcome the barriers to moving MapReduce applications into production. The positive responses we’ve already received from the market are a solid validation of our solution, and we look forward to bringing a new set of capabilities to the Hadoop world.

HPC from A-Z (part 20) - T

T is for transactional processing, time travel and tea

When it comes to “T” I can think of a few great examples of HPC use, or potential use, ranging from the very practical, to science fiction, and then back down to earth.

The very practical example is transactional processing. It can be a complicated process, but HPC can really simplify it. Consider, for example, the computing processes involved in moving a pot of money from one account to another. HPC can be used to automate this activity by processing transactions in parallel, reducing the time it takes for money to travel from one account to another.
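As a toy, hypothetical sketch of that idea (not any bank's actual system), the snippet below processes a batch of transfers concurrently, taking one lock per account in a fixed order so that no two transfers can deadlock; the account names and amounts are made up:

```python
# Toy sketch of processing account transfers in parallel.
# Per-account locks are taken in a fixed (sorted) order to avoid deadlock.
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

balances = {"alice": 500, "bob": 300, "carol": 200}        # made-up accounts
locks = {name: Lock() for name in balances}

def transfer(src, dst, amount):
    first, second = sorted([src, dst])                     # fixed lock order
    with locks[first], locks[second]:
        if balances[src] >= amount:                        # no overdrafts
            balances[src] -= amount
            balances[dst] += amount

transfers = [("alice", "bob", 100), ("bob", "carol", 50), ("carol", "alice", 25)]

with ThreadPoolExecutor(max_workers=4) as pool:
    for src, dst, amount in transfers:
        pool.submit(transfer, src, dst, amount)

print(balances)   # the total across all accounts is conserved: 1000
```

Taking the locks in sorted order is the classic way to guarantee that two concurrent transfers touching the same pair of accounts cannot deadlock; the same principle of independent, lock-protected updates is what lets large parallel systems push through far bigger batches of transactions.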

If HPC can help speed up machine processes, perhaps one day it can help us travel beyond the speed of light, possibly into the future or back into the past. HPC could model the perfect time-travelling machine, optimized to travel at high speeds across the cosmos.

Closer to home, HPC could also help create the perfect cup of tea. Calculations could be performed to discover the perfect tea leaf-to-water-and-milk ratio. Bad cups of tea could become a thing of the past!