Power To The Engineering People
The value of great engineering is often overlooked, yet almost every object we use daily has been meticulously designed and tested by somebody, somewhere, to deliver the best possible performance while meeting exacting cost and efficiency requirements.
Those processes have become considerably more sophisticated with the evolution of finite element analysis (FEA), which now plays a critical role in the computational simulation of physical components using mathematical techniques. FEA has become a staple feature of the modeling software that engineers across multiple verticals use to optimize their designs, running virtual experiments that reduce the number of physical prototypes they then have to build.
FEA is used by virtually any company that does engineering, from aircraft and rocket manufacturers to healthcare companies making the stents used in coronary arteries. It is geared toward structural analysis, which is why FEA is one of the key tools in computer-aided engineering (CAE). The simulation process involves generating a mesh that maps a 3D drawing of an object's overall shape into millions of small elements defined by mathematical points, which are then analyzed with a structural physics solver.
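To make the mechanics concrete, here is a deliberately tiny sketch of the same idea in one dimension: a bar fixed at one end and pulled at the other is split into small elements, each element's stiffness contribution is assembled into a global system K u = f, and the system is solved for the nodal displacements. Every name and parameter below is invented for illustration; production FEA solvers do this over millions of 3D elements with far more sophisticated numerics.

```python
# Toy 1D finite element analysis: axial stretching of a bar.
# Illustrative sketch only -- all names/values are invented here.

def solve_bar(n_elems, length, E, A, force):
    """Bar fixed at x=0 with an axial point load at x=length.
    Assembles 2-node element stiffnesses into a global system
    K u = f and solves for the nodal displacements u."""
    h = length / n_elems            # element size
    k = E * A / h                   # stiffness of one element
    n = n_elems + 1                 # number of nodes
    # Global stiffness matrix (dense, for clarity)
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):        # assemble element contributions
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    f = [0.0] * n
    f[-1] = force                   # load applied at the free end
    # Fixed boundary condition at node 0: drop its row and column
    K = [row[1:] for row in K[1:]]
    f = f[1:]
    m = len(f)
    # Naive Gaussian elimination (fine for a toy system)
    for i in range(m):
        for j in range(i + 1, m):
            r = K[j][i] / K[i][i]
            for c in range(i, m):
                K[j][c] -= r * K[i][c]
            f[j] -= r * f[i]
    u = [0.0] * m
    for i in range(m - 1, -1, -1):
        u[i] = (f[i] - sum(K[i][c] * u[c]
                           for c in range(i + 1, m))) / K[i][i]
    return [0.0] + u                # prepend the fixed node

# Tip displacement should match the analytic value F*L/(E*A)
disp = solve_bar(n_elems=10, length=2.0, E=200e9, A=1e-4, force=1000.0)
print(disp[-1])   # approximately 1e-4 m for these values
```

Even this trivial example shows why FEA scales so badly with problem size: refining the mesh grows the linear system, and in three dimensions a realistic model quickly runs into millions of unknowns that demand HPC-class solvers.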
As you can imagine, FEA needs a lot of compute power to solve structural physics equations over those millions of computational elements. A single large engineering simulation job can run on 1,000 CPU cores for hours, and the design process for a single product or component can involve hundreds of individual simulations and thousands of jobs. FEA also places specific demands on hardware depending on the type of problem to be solved: there are dozens of different problem types and applications, each with its own memory and throughput requirements, for example.
Just getting access to that sort of compute resource can be a significant challenge for companies involved in FEA simulation.
For many, it makes more sense to use the flexibility and agility offered by the cloud. Cloud-hosted HPC infrastructure often provides far more in the way of flexible billing, ease of access, scale and redundancy than any HPC cluster owned and operated in-house, enabling engineering firms to concentrate on doing what they do best: building products.
Amazon EC2 Hpc6id instances head into town
AWS has a proven history of delivering HPC infrastructure services for customers as diverse as Boeing, Volkswagen Group, Formula 1 and Western Digital. In recognition of the fact that engineers want to run ever more complex FEA workloads on cloud-hosted HPC clusters more quickly and cost efficiently, the company is also investing in scaling up the power and speed of the HPC-optimized Amazon EC2 instances it offers using the latest processors from Intel.
AWS announced general availability of Amazon Elastic Compute Cloud (Amazon EC2) Hpc6id instances at its annual re:Invent conference last November. These instances are optimized to efficiently run memory bandwidth-bound, data-intensive HPC workloads, and specifically FEA models, powered by 3rd Gen Intel® Xeon® Scalable processors. That raw CPU muscle is supplemented by the AWS Nitro System and Elastic Fabric Adapter (EFA) network interconnect which delivers 200Gbit/s of inter-node throughput between different instances, so that customers can instantly scale their HPC resources to handle even the most demanding of FEA workloads.
EC2 Hpc6id instances have 15.2TB of local NVMe storage for data-intensive workloads. They also have very fast network interconnect bandwidth, meaning multiple machines can be clustered together to solve large problems rapidly. As a result, EC2 Hpc6id instances deliver double the compute speed of the previous comparable instance in the AWS line, says Amazon.
The performance boost from 3rd Gen Intel Xeon Scalable processors, coupled with ample local storage, the AWS Nitro System, and EFA, means that customers can run their FEA simulations on a smaller number of instances. That in turn means faster job completion and reduced infrastructure and licensing costs. FEA workloads also need to read and write data very quickly. To meet that fast I/O requirement, AWS has attached dedicated local NVMe storage to EC2 Hpc6id instances, eliminating the latency associated with networked storage components.
Engineering simulation for the digital masses
This is the sort of infrastructure that enables companies to solve particularly intense FEA problems, such as linear static analysis or vibration analysis. Think of safety simulations on the shell of an aircraft, where the intensity of vibration on the wing inevitably creates stresses and strains that can constrain the number of passengers and the load it can safely carry.
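Vibration analysis ultimately reduces to an eigenvalue problem: the natural frequencies of a structure satisfy K x = ω²M x, where K is the stiffness matrix and M the mass matrix from the FEA mesh. As a hedged toy illustration (all names and values below are invented, and real models have millions of degrees of freedom rather than two), the smallest non-trivial case can be solved in closed form:

```python
# Toy modal (vibration) analysis of a 2-DOF mass-spring chain:
# wall --k1-- m1 --k2-- m2 (free end). Illustrative sketch only.
import math

def natural_frequencies(m1, m2, k1, k2):
    """Return the two natural frequencies (rad/s), lowest first,
    from the generalized eigenproblem K x = w^2 M x."""
    # Stiffness matrix and (diagonal) mass matrix of the chain
    K = [[k1 + k2, -k2],
         [-k2,      k2]]
    M = [m1, m2]
    # det(K - lam * diag(M)) = 0 is a quadratic in lam = w^2:
    #   (m1*m2) lam^2 - (m1*K11 + m2*K00) lam + det(K) = 0
    a = M[0] * M[1]
    b = -(M[0] * K[1][1] + M[1] * K[0][0])
    c = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    disc = math.sqrt(b * b - 4 * a * c)
    lams = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
    return [math.sqrt(l) for l in lams]

# Equal masses and springs: modes at sqrt(k/m) times ~0.618 and ~1.618
w1, w2 = natural_frequencies(1.0, 1.0, 100.0, 100.0)
print(w1, w2)
```

A designer then checks that no excitation (engine order, aerodynamic buffeting) sits near those frequencies. At industrial scale the same eigenproblem is solved iteratively across thousands of cores, which is exactly the class of memory-bandwidth-bound work these instances target.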
Automotive companies, too, make extensive use of engineering simulation to design everything from the chassis, engine and thermal cooling systems to the battery, electric motor and electronic sensors, as well as the overall aerodynamic profile. One prominent example of FEA usage is car crash simulation. Hundreds of thousands of virtual crash tests have been run as FEA simulations over the past couple of decades, and the lessons learned have led to safer car designs that may save lives.
High-tech companies involved in the manufacture of CPUs, memory, batteries, antennae and other electronic components also make wide use of FEA. EC2 Hpc6id instances improve accessibility and cost for businesses in this sector that may not previously have had access to compute resources of this power and scale.
HPC in the cloud brings benefits
Irrespective of the precise CPU architecture those HPC workloads utilize, the very fact that they are hosted in the cloud delivers many advantages for enterprises compared to running the same jobs on their own on-prem HPC clusters. That realization is reflected in market trends, with more organizations shifting their workloads to the cloud.
Research firm Hyperion has forecast that the HPC cloud market will grow twice as quickly as its on-prem equivalent, at a 17.6 percent CAGR versus 6.9 percent, as companies in various industry verticals shift their workloads to externally hosted infrastructure, led by the manufacturing, financial services and government sectors. The firm notes that small organizations, including workgroups and departments within larger companies, are particularly keen to adopt cloud resources for their HPC jobs, the better to align procurement with the budgets, timescales and skillsets that dictate their operational schedules.
The shift to a pay-as-you-go model, rather than the fixed costs of infrastructure owned and operated in-house, brings its own rewards in flexible billing and in economies of scale that in many cases lower the overall total cost of ownership. Associated productivity improvements usually follow from making more scalable compute resources available to larger numbers of concurrent users.
Customers might have a limited number of on-prem servers for example, but if they have a lot of users submitting jobs, there can be long queues for using those resources. In the cloud though, all of those users can run their jobs simultaneously simply because they have access to a vast pool of HPC-optimized instances from AWS at the same time. That gives them a lot of flexibility when it comes to optimizing CPU, memory, storage and networking utilization rates which can vary significantly at different stages of the engineering development and testing process, or during specific times of the year when companies see peaks in seasonal demand.
Even a single user can run fluctuating volumes of FEA workloads at different points in time, particularly when simulating multiple jobs to support the testing and release schedule of a particular product. If they need burst access to a very large quantity of resources and those resources are not available on-prem due to fixed capacity, the job can stall. On AWS, by contrast, readily available resources can scale up in step with demand.
The world has some tough problems to solve over the next decade, and the ability to streamline crash testing and design more energy efficient structures and components can go some way to addressing them. Clever engineers using FEA simulation are undoubtedly up to the task, but they might need the backing of instantly available, powerful compute resources like EC2 Hpc6id instances to help them complete it.
Sponsored by AWS and Intel.