You're Right Not To Rush Into Running AMD, Intel's New Manycore Monster CPUs
Opinion Intel recently teased a 128-core Granite Rapids Xeon 6 processor, and your humble vulture thinks you can ignore these chips – indeed, ignoring them might be your safest course of action.
That's because Intel, and AMD, will encourage you to put a lot of eggs in their manycore baskets. I've heard both argue that parts such as the 72-to-128-core 6900P processor family, the 144-core Sierra Forest Xeon 6, a promised 288-core monster Xeon, and the imminent 192-core Turin Epyc offer the chance for a new round of server consolidation by packing more cores into a single machine.
The chipmakers suggest that replacing your current servers with machines running their monster silicon will free up as much as half your rack space and slash your power bills. They are all but evoking a moment in which the datacenter ops folk who put this new tech to work will bask in the glow of a job well done, a planet protected, and a bonus pocketed.
You don't have to do this. If you weren't planning to, hold firm. Plenty of orgs have standardized on modest hardware and done just fine. But if the boss reads about the chance for a fresh wave of server consolidation in an airline magazine, ask them to consider a few things.
One is the concentration of risk: A manycore box can run so many workloads that its failure would be catastrophic.
Yes, failover to another server is a mature art.
Making memory remains a tricky and uncertain business, which is why it's still so expensive. A server running hundreds of cores is going to need huge amounts of RAM to handle all the workloads it hosts, and that memory can end up costing more than the rest of the server.
That might explain why memory-maker Micron is so excited about the prospect of manycore servers driving up demand for its products.
But CFOs won't be excited if you buy RAM-crammed servers that only ever run at low utilization so they have capacity spare for DR duties when your other manycore-equipped servers fall over.
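For the sake of argument, here's a back-of-envelope sketch of how the memory bill can come to dominate the price of a consolidated box. Every figure in it – core count, GB per core, price per GB, and the cost of the rest of the server – is an assumption picked for illustration, not a quote from any chipmaker or memory vendor.

```python
# Back-of-envelope sizing sketch. Every number is an assumption for
# illustration only - not a quote from Intel, AMD, Micron, or anyone else.
cores = 192                # assumed manycore part
gb_per_core = 16           # assumed RAM-to-core ratio for a mixed VM estate
ram_price_per_gb = 8.0     # assumed USD per GB for high-density server DIMMs
rest_of_server = 20_000    # assumed USD for chassis, CPUs, NICs, and storage

ram_gb = cores * gb_per_core
ram_cost = ram_gb * ram_price_per_gb

print(f"RAM needed: {ram_gb} GB at ~${ram_cost:,.0f}")
print(f"Rest of the server: ~${rest_of_server:,}")
print(f"Memory's share of the bill: {ram_cost / (ram_cost + rest_of_server):.0%}")
```

Change the assumptions and the ratio moves, of course – which is rather the point: the RAM line item is the one to model before anyone signs off on a consolidation plan.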
Next, ponder whether your DR rig is set up to quickly absorb 128 cores' worth of workload. Failover and VM teleportation tech like VMware's vMotion remains almost miraculous. But the DR practices you built for your current fleet may not hold up when each failover means moving far more data. Data protection and storage vendors will claim they're ready, but their reference architectures won't survive contact with the enemy.
Check your software licenses, too. Does your software vendor let you pay for fewer cores than are present in the box you're using? Some don't on bare metal, or insist on minimum core counts for VMs. You'll need to plan carefully to ensure these big new boxes don't complicate licensing.
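To make that concrete, here's a hypothetical sketch of how per-core licensing can bite when a vendor counts every core in the chassis rather than the cores you actually use. The per-core price and core counts below are invented for illustration; your contracts will differ.

```python
# Hypothetical per-core licensing comparison. The price and core counts
# are made up for illustration - they are not any vendor's real terms.
license_per_core = 1_000   # assumed annual licence cost per core (USD)

# Scenario A: the workload needs 64 cores, spread across two 32-core boxes.
small_fleet_cores = 2 * 32

# Scenario B: the same workload lands on one 128-core box, and the vendor
# insists on licensing every core present on bare metal.
big_box_cores = 128

print(f"Two 32-core servers: ${small_fleet_cores * license_per_core:,}/year")
print(f"One 128-core server:  ${big_box_cores * license_per_core:,}/year")
```

If the vendor will instead license a capped VM or a sub-capacity arrangement, the gap shrinks – which is exactly the contract detail worth pinning down before the big boxes arrive.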
Consider, too, that handling hardware risk at this scale is not a core competency for many organizations.
But I can think of a few for which it is absolutely essential: AWS, Microsoft, Google, Oracle, Alibaba, and a handful of other hyperscalers.
Those orgs can buy servers by the boatload and understand how to make them pay without tying up capital. They’re also masters of resilience and redundancy and have built predicted hardware failure rates into their pricing and plans.
It's not your job to match them. Nor is it a realistic ambition for the managed service provider you trust to tend your colo, or for smaller clouds.
Hyperscale clouds are therefore the natural destination for manycore machines, which look less like a fresh consolidation opportunity and more like a current tugging you into the cloud.
And the cloud, as we've come to understand, is an environment that brings cost uncertainty and lock-in risk.
So by all means, join The Register as we gaze in awe at the astounding CPUs coming our way in 2025. Then pick your jaw off the floor and do some real thinking about whether you will ever be ready to operationalize such monsters. If you’re not, that’s fine. And it’s far better to be fine than offline. ®