Could RISC-V Become A Force In High Performance Computing?
Analysis The RISC-V architecture looks set to become more prevalent in the high performance computing (HPC) sector, and could even become the dominant architecture, at least according to some technical experts in the field.
Meanwhile, the European High Performance Computing Joint Undertaking (EuroHPC JU) has just announced a project aimed at the development of HPC hardware and software based on RISC-V, with plans to deploy future exascale and post-exascale supercomputers based on this technology.
RISC-V has been around for at least a decade as an open source instruction set architecture (ISA), while actual silicon implementations of the ISA have been coming to market over the past several years.
Among the attractions of this approach are that the architecture is not only free to use, but can also be extended, meaning that application-specific functions can be added to a RISC-V CPU design, and accessed by adding custom instructions to the standard RISC-V set.
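To make that extensibility concrete, here is a toy sketch of the idea: a base instruction set modeled as a dispatch table, with a vendor-defined custom instruction layered on top. The instruction names and the fused operation are invented for illustration; the real RISC-V specification reserves dedicated "custom" opcode space for exactly this kind of addition.

```python
# Toy model of an extensible ISA: a base instruction set plus a
# vendor-defined custom extension. Names and semantics here are
# invented for illustration, not real RISC-V encodings.

BASE_ISA = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

# An application-specific extension: fused multiply-add in one step,
# the kind of operation a designer might add for a hot inner loop.
CUSTOM_EXT = {
    "fmadd": lambda a, b, c: a * b + c,
}

# The extended ISA is simply the base set plus the custom instructions.
ISA = {**BASE_ISA, **CUSTOM_EXT}

def execute(op, *args):
    """Dispatch an instruction by name, whether base or custom."""
    return ISA[op](*args)
```

The point of the sketch is that software targeting the base set keeps working unchanged, while code aware of the extension can use the new instruction.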
This extensibility could prove to be a driving factor for broader adoption of RISC-V in the HPC sector, according to Aaron Potler, Distinguished Engineer at Dell Technologies.
“There's synergy and growing strength in the RISC-V community in HPC,” Potler said, “and so RISC-V really does have a very, very good chance to become more prevalent on HPC.”
Potler was speaking in a Dell HPC Community online event, outlining perspectives from Dell’s Office of the Chief Technology and Innovation Officer.
However, he conceded that to date, RISC-V has not really made much of a mark in the HPC sector, largely because it wasn't initially designed with that purpose in mind, but that there is “some targeting now to HPC” because of the business model it represents.
He made a comparison of sorts with Linux, which like RISC-V, started off as a small project, but which grew and grew in popularity because of its open nature (it was also free to download and run, as Potler acknowledged).
“Nobody would have thought then that Linux would run on some high end computer. When in 1993, the TOP500 list came out, there was only one Linux system on it. Nowadays, all the systems on the TOP500 list run Linux. Every single one of them. It's been that way for a few years now,” he said.
If Linux wasn’t initially targeting the HPC market, but was adopted for it because of its inherent advantages, perhaps the same could happen with RISC-V, if there are enough advantages, such as it being an open standard.
“If that's what the industry wants, then the community is going to make it work, it's gonna make it happen,” Potler said.
He also made a comparison with the Arm architecture, which eventually propelled Fujitsu’s Fugaku supercomputer to the number one slot in the TOP500 rankings, and which notably accomplished this by extending the instruction set to support the 512-bit Scalable Vector Extension (SVE) units in the A64FX processor.
“So why wouldn't a RISC-V-based system be number one on the TOP500 someday?” he asked.
There has already been work done on RISC-V instructions and architecture extensions relating to HPC, Potler claimed, especially those for vector processing and floating point operations.
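The distinctive feature of the ratified RISC-V vector ("V") extension is that code is vector-length agnostic: each loop iteration asks the hardware how many elements it will process, so one binary runs on implementations with different vector register sizes. The following Python sketch mimics that strip-mining pattern; the `VLMAX` value is an arbitrary stand-in for a hardware-dependent limit.

```python
# Sketch of the strip-mining pattern used by the RISC-V vector
# extension: each pass requests a vector length vl = min(remaining,
# VLMAX), so the same loop works for any hardware vector width.
# VLMAX is a made-up stand-in; on real hardware it depends on the
# implementation's vector register length.

VLMAX = 8  # elements per "vector register" in this toy model

def vector_axpy(a, x, y):
    """Compute y[i] += a * x[i], processed up to VLMAX elements at a time."""
    n = len(x)
    i = 0
    while i < n:
        vl = min(n - i, VLMAX)       # vsetvli-style length request
        for j in range(i, i + vl):   # stands in for one vector operation
            y[j] += a * x[j]
        i += vl                      # advance by however many were done
    return y
```

Because the loop never hard-codes the vector width, the same code would, in principle, exploit wider vector units on future hardware without recompilation, which is one reason the extension is seen as relevant to HPC.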
All of this means that RISC-V has potential, but could it really make headway in the HPC sector, which once boasted systems with a variety of processor architectures but is now dominated almost entirely by x86 and Arm?
“RISC-V does have the potential to become the architecture of choice for the HPC market,” said Omdia chief analyst Roy Illsley. “I think Intel is losing its control of the overall market and the HPC segment is becoming more specialized.”
Illsley pointed out that RISC-V’s open-source nature means that any chipmaker can produce RISC-V-based designs without paying royalties or licensing fees, and that it is supported by many silicon makers as well as by open-source operating systems.
Manoj Sukumaran, Principal Analyst for Datacenter Compute & Networking at Omdia agreed, saying that the biggest advantage for RISC-V is that its non-proprietary architecture lines up well with the technology sovereignty goals of various countries. “HPC Capacity is a strategic advantage to any country and it is an inevitable part of a country’s scientific and economic progress. No country wants to be in a situation like China or Russia and this is fueling RISC-V adoption,” he claimed.
RISC-V is also a “very efficient and compelling instruction set architecture” and the provision to customize it for specific computing needs with additional instructions makes it agile as well, according to Sukumaran.
The drive for sovereignty, or at least greater self-reliance, could be one motive behind the call from the EuroHPC JU for a partnership framework to develop HPC hardware and software based on RISC-V as part of an EU-wide ecosystem.
This is expected to be followed up by an ambitious plan of action for building and deploying exascale and post-exascale supercomputers based on this technology, according to the EuroHPC JU.
It stated in its announcement that the European Chips Act identified RISC-V as one of the next-generation technologies where investment should be directed in order to preserve and strengthen EU leadership in research and innovation. This will also reinforce the EU’s capacity for the design, manufacturing and packaging of advanced chips, and the ability to turn them into manufactured products.
High-performance RISC-V designs already exist from chip companies such as SiFive and Ventana, but these are typically either designs that a customer can take and have manufactured by a foundry company such as TSMC, or available as a chiplet that can be combined with others to build a custom system-on-chip (SoC) package, which is Ventana’s approach.
Creating a CPU design with custom instructions to accelerate specific functions would likely be beyond the resources of most HPC sites, though perhaps not those of a large user group or consortium. However, a chiplet approach could de-risk the project somewhat, according to IDC Senior Research Director for Europe, Andrew Buss.
“Rather than trying to do a single massive CPU, you can assemble a SoC from chiplets, getting your CPU cores from somewhere and an I/O hub and other functions from elsewhere,” he said, although he added that this requires standardized interfaces to link the chiplets together.
But while RISC-V has potential, the software ecosystem is more important, according to Buss. “It doesn’t matter what the underlying microarchitecture is, so long as there is a sufficient software ecosystem of applications and tools to support it,” he said.
Potler agreed with this point, saying that “One of the most critical parts for HPC success is the software ecosystem. Because we've all worked on architectures where the software came in second, and it was a very frustrating time, right?”
Developer tools, especially compilers, need to be “solid, they need to scale, and they need to understand the ISA very well to generate good code,” he said.
This also plays a part in defining custom instructions, as this calls for a profiler or other performance analysis tools to identify time-consuming sequences of code in the applications in use, and to gauge whether specialized instructions could accelerate them.
“So if I take these instructions out, I need a simulator that can simulate this [new] instruction. If I put it in here and take the other instructions out, the first question is, are the answers correct? Then the other thing would be: does it run enough to make it worthwhile?”
Another important factor is whether the compiler could recognize the sequences of code in the application and replace it with the custom instruction to boost performance, Potler said.
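The evaluation loop Potler describes can be sketched in a few lines: scan an instruction trace for a hot sequence, simulate rewriting it into a single custom instruction, check that the result is equivalent, and estimate whether the sequence occurs often enough to pay off. Everything below (the trace format, the `fmadd` fusion) is invented for illustration, not a real toolchain.

```python
# Hedged sketch of custom-instruction evaluation: profile a trace for
# a fusable pattern, rewrite it with a hypothetical fused op, and
# measure the saving. The trace format and opcodes are made up.

def count_fusable(trace):
    """Profiler stand-in: count mul-then-add pairs a fused op could replace."""
    return sum(1 for a, b in zip(trace, trace[1:]) if (a, b) == ("mul", "add"))

def fuse(trace):
    """Simulator stand-in: rewrite each mul+add pair as one custom 'fmadd'."""
    out, i = [], 0
    while i < len(trace):
        if trace[i:i + 2] == ["mul", "add"]:
            out.append("fmadd")   # one custom instruction replaces two
            i += 2
        else:
            out.append(trace[i])
            i += 1
    return out

# Worked example: a trace with two fusable pairs.
trace = ["load", "mul", "add", "store", "mul", "add", "store"]
fused = fuse(trace)
savings = len(trace) - len(fused)  # instructions eliminated by fusion
```

In a real flow the correctness check (do the answers still come out right?) would run the rewritten code through an instruction simulator, and the profitability check would weigh dynamic execution counts, not just static occurrences.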
“You also see that extensions to the instruction set architecture will provide performance benefits to current and future HPC applications, whatever they may be,” he added.
However, Buss warned that even if there is a great deal of interest in RISC-V, it will take time to get there for users at HPC sites.
“There’s nothing stopping RISC-V, but it takes time to develop the performance and power to the required level,” he said, pointing out that it took the Arm architecture over a decade to get to the point where it could be competitive in this space.
There was also the setback of Intel pulling its support for the RISC-V architecture last month, after earlier becoming a premier member of RISC-V International, the governing body for the standard, and pledging to offer validation services for RISC-V IP cores optimized for manufacturing in Intel fabs.®