Cloud-based initiative spearheaded by Imperial leads push for variety in processor usage
The EU-backed HARNESS project, led by Professor Alexander Wolf of Imperial College London's Department of Computing, is set to transform the cloud computing landscape. Its primary goal is to address a key limitation of the current cloud model: it is not well-suited to applications that require non-commodity hardware, such as GPGPUs, FPGAs, and other specialized processors.
HARNESS aims to develop a unified hardware abstraction layer, enabling cloud providers to transparently incorporate diverse hardware types into their services. This layer is designed to preserve the economic benefits of the cloud model, including scalability, cost-efficiency, and on-demand access.
By abstracting the underlying hardware heterogeneity, HARNESS provides a common programming and execution environment that supports different hardware accelerators. This approach maintains the flexibility and economic advantages of commodity cloud models while expanding the available hardware types to meet growing demands for compute-intensive scientific and industrial workloads.
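To make the idea concrete, the sketch below shows one way such an abstraction layer could look: a common interface that CPU, GPGPU, or FPGA backends would implement, with a runtime that dispatches work to whatever hardware is present. The class and method names (Accelerator, CpuBackend, Runtime) are illustrative assumptions rather than part of the HARNESS software, and only a CPU backend is actually implemented here.

"""Minimal sketch of a hardware abstraction layer in the spirit of HARNESS.
All names here are illustrative assumptions, not the actual HARNESS API."""
from abc import ABC, abstractmethod


class Accelerator(ABC):
    """Common interface every hardware backend must implement."""

    @abstractmethod
    def available(self) -> bool:
        """Report whether this hardware is present on the current node."""

    @abstractmethod
    def run(self, kernel: str, data: list) -> list:
        """Execute a named kernel on the backend and return the result."""


class CpuBackend(Accelerator):
    def available(self) -> bool:
        return True  # a CPU is always there as the commodity fallback

    def run(self, kernel: str, data: list) -> list:
        if kernel == "square":
            return [x * x for x in data]
        raise ValueError(f"unknown kernel: {kernel}")


class Runtime:
    """Dispatches work to the first available backend, hiding heterogeneity."""

    def __init__(self, backends: list[Accelerator]):
        self._backends = backends

    def execute(self, kernel: str, data: list) -> list:
        for backend in self._backends:
            if backend.available():
                return backend.run(kernel, data)
        raise RuntimeError("no usable backend found")


if __name__ == "__main__":
    runtime = Runtime([CpuBackend()])  # a GPGPU or FPGA backend would slot in here
    print(runtime.execute("square", [1, 2, 3]))  # -> [1, 4, 9]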
The project's objectives address key challenges linked to integrating advanced hardware in clouds, such as complexity, vendor lock-in, and cost escalation. By doing so, HARNESS fosters innovation and adoption of next-generation high-performance computing (HPC) services in the cloud.
One potential benefit of the project is unlocking the performance of SAP's HANA in-memory database in the cloud, which could ease concerns about application performance for businesses looking to move their ERP applications there. The project also aims to accelerate certain database operations using non-standard processors such as GPGPUs or FPGAs.
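As a rough illustration of that idea, the sketch below expresses a database-style aggregation (a filtered sum) as one logical operation with a CPU reference implementation and a hypothetical accelerator hook; in a real system the same query plan would be compiled to a GPGPU or FPGA kernel, but only the CPU path runs here.

"""Illustrative sketch of offloading a database-style aggregation.
The accelerator path is a stand-in; only the CPU path below actually runs."""

def filtered_sum_cpu(values, threshold):
    """Reference CPU implementation of SELECT SUM(v) WHERE v > threshold."""
    return sum(v for v in values if v > threshold)


def filtered_sum(values, threshold, accelerator=None):
    """Run the aggregation on an accelerator when one is provided."""
    if accelerator is not None:
        # Hypothetical hook: a GPGPU/FPGA backend would receive the same
        # logical operation and execute it with its own kernel.
        return accelerator.run("filtered_sum", values, threshold)
    return filtered_sum_cpu(values, threshold)


if __name__ == "__main__":
    print(filtered_sum([3, 8, 15, 4], threshold=5))  # -> 23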
SAP's interest in HARNESS is clear: the company has announced the availability of its HANA in-memory database for its Business Suite ERP application portfolio. With HARNESS, cloud providers could offer customers options to allocate resources in their data centers based on the performance requirements of applications.
In the longer term, the research could lead to cloud environments that automatically detect the best kind of processor for a given workload. That would be a significant leap forward; by the end of the three-year project, the aim is for cloud providers to be able to offer a range of deployment options based on the performance requirements of applications.
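A toy placement heuristic gives a flavor of what such automatic detection might involve: inspect a few workload traits and map them to a processor class. The traits, thresholds, and mapping below are invented for illustration; a production scheduler would rely on profiling data and cost models instead.

"""Toy placement heuristic: pick a processor class from workload traits."""
from dataclasses import dataclass


@dataclass
class Workload:
    data_mb: int              # working-set size in megabytes
    parallel_fraction: float  # share of the work that is data-parallel
    fixed_pipeline: bool      # True if the computation is a fixed streaming pipeline


def choose_processor(w: Workload) -> str:
    if w.fixed_pipeline and w.data_mb > 1_000:
        return "FPGA"   # streaming pipelines map well onto reconfigurable logic
    if w.parallel_fraction > 0.8:
        return "GPGPU"  # highly data-parallel kernels favor many small cores
    return "CPU"        # everything else stays on commodity processors


if __name__ == "__main__":
    print(choose_processor(Workload(4_000, 0.3, True)))    # -> FPGA
    print(choose_processor(Workload(200, 0.95, False)))    # -> GPGPU
    print(choose_processor(Workload(50, 0.2, False)))      # -> CPU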
Amazon Web Services already offers servers based on GPGPUs as part of its high-performance computing suite, but the user is responsible for building an application that utilizes these capabilities. The HARNESS project aims to change this by developing software techniques that would simplify this process, making high-performance computing more accessible to a wider range of users.
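One way to picture that simplification is the difference between choosing an instance type by hand and simply stating what the application needs. The hypothetical request below names only performance requirements and leaves the choice of hardware to the platform; the field names are assumptions for illustration, not an actual HARNESS or AWS interface.

"""Hypothetical performance-oriented resource request (field names assumed)."""
resource_request = {
    "application": "in-memory-analytics",
    "requirements": {
        "max_query_latency_ms": 50,   # what the user actually cares about
        "working_set_gb": 256,
        "monthly_budget_eur": 4000,
    },
    # Note: no instance type or accelerator is named; the platform chooses.
}

print(resource_request["requirements"]["max_query_latency_ms"])  # -> 50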
The homogeneous nature of cloud providers' IT environments allows for automated systems management and lower HR costs. However, the current cloud model is not designed to accommodate non-commodity hardware. The HARNESS project aims to overcome this limitation, opening up new possibilities for cloud computing.
In summary, HARNESS is a groundbreaking EU-funded initiative that aims to enable cloud providers to support non-commodity hardware. By developing a unified hardware abstraction layer, the project promises to preserve the economic benefits of the cloud model while broadening the range of hardware available for compute-intensive scientific and industrial workloads.