Big congratulations to the team for this significant achievement!
Without the right tools or a great deal of practice, getting Linux and Windows to work together seamlessly and present a unified interface to end users is a very challenging task. Having both operating systems coexist in an HPC cluster adds an order of magnitude of complexity to what is already a complex HPC Linux environment.
This is because Windows and Linux “speak very different languages” in many areas, such as user account management, file paths and directory structure, cluster management practices, and application integration.
The good news is that the Platform Computing engineering team did the heavy lifting in product development for this project. Platform HPC integrates the full software stack required to run an HPC Linux cluster, and its major differentiator compared to alternative solutions is that it is application aware. When Windows HPC Server is added to the cluster, Platform HPC delivers a unified user experience across Linux and Windows, hiding the differences and complexity between the two operating systems.
The Platform HPC team has developed a step-by-step guide for implementing an end-to-end solution covering provisioning of both Windows and Linux, unified user authentication, unified job scheduling, automated workload-driven OS switching, application integration, and unified end-user interfaces.
This solution significantly reduces the complexity of a mixed Windows and Linux cluster, so users can focus on their applications and productive work rather than on managing that complexity.
The pre-SC11 season has already started to heat up, with announcements from new vendors who see the promise of cloud computing in meeting the needs of HPC users everywhere.
Just recently, a competitor of Platform Computing announced its entry into the “cloud bursting” space. The company claims its technology works with all of the common workload management systems. Without much detail, the only conclusion that can be drawn from the announcement is that they have built a system that runs “poll – act – sleep” in a loop. While possibly illustrative of the promise of cloud computing, this simplistic view ignores what we believe is the most fundamental challenge to cloud bursting: data locality.
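To make the criticism concrete, a “poll – act – sleep” burster amounts to little more than the following loop. This is a minimal illustrative sketch, not anyone's actual product: the backlog query, the provisioning call, the threshold, and the sizing heuristic are all hypothetical stand-ins.

```python
import time

def burst_loop(get_backlog, provision, threshold=100, interval_s=60, iterations=1):
    """Naive 'poll - act - sleep' cloud bursting (illustrative sketch only).

    get_backlog: callable returning the pending-job count from the
                 workload manager (hypothetical stand-in).
    provision:   callable that requests N cloud nodes (hypothetical stand-in).
    """
    provisioned = []
    for _ in range(iterations):
        backlog = get_backlog()            # poll the workload manager
        if backlog > threshold:            # act: burst to the cloud
            nodes = max(1, backlog // 10)  # crude, made-up sizing heuristic
            provision(nodes)
            provisioned.append(nodes)
        time.sleep(interval_s)             # sleep until the next poll
    return provisioned
```

Note what the loop never asks: where the jobs' input data lives, or how long staging it to the new nodes would take. That omission is exactly the data-locality gap discussed next.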
By “data locality” we mean that compute resources must have access to data in order to be useful. When datasets get large (input or output), access to globally distributed compute resources may be of dubious value until workload schedulers understand what data exists where and can effect transfers of data between localities to take better advantage of those remote resources.
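A data-aware scheduler would fold staging time into its placement decision rather than chasing free capacity blindly. Here is a toy sketch of that idea; the site names, queue waits, and bandwidth figures are invented for illustration.

```python
def pick_site(dataset_gb, dataset_site, sites):
    """Choose the site with the lowest estimated time-to-start for a job,
    where time-to-start = queue wait + time to stage the input dataset.

    sites maps a site name to (queue_wait_s, transfer_gbps from the
    dataset's current location). All figures here are illustrative.
    """
    def time_to_start(name):
        wait_s, gbps = sites[name]
        # No staging cost if the data is already at this site.
        transfer_s = 0.0 if name == dataset_site else dataset_gb * 8 / gbps
        return wait_s + transfer_s
    return min(sites, key=time_to_start)

# A busy on-premise cluster vs. an idle cloud site behind a 1 Gbps link:
sites = {"onprem": (7200.0, 10.0), "cloud": (0.0, 1.0)}
pick_site(1000, "onprem", sites)  # large dataset: staging dominates, stay put
pick_site(10, "onprem", sites)    # small dataset: bursting wins
```

Even this toy model shows why “poll – act – sleep” falls short: with a terabyte of input, the idle cloud site loses to a two-hour local queue once staging time is counted.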
Separately, we are very pleased to see others in the HPC industry focusing on ease of use and ease of build. In fact, just recently Rightscale made an announcement to this effect about building clusters in the cloud. Platform has been talking about the importance of this for some time, notably with our Platform HPC product. Stay tuned for more announcements as we take this same story into the cloud.
Finally, we were also very happy to see that HPC as a service continues to gain momentum. Just recently there was a story from the Netherlands, where a small cluster of very “thick” servers running KVM has been used to create a service-based HPC infrastructure to be rented to researchers in the academic and government communities.
No doubt these announcements are just the beginning of a coming onslaught from the many vendors waiting to announce during the week. We expect that cloud and HPC will take one step closer together than they were last year in New Orleans. Stay tuned to the Platform blogs; we’ll provide a summary of the show and any key items we hear about.