Nvidia Extends Its Commodity Server On-prem AI Push Into Hyperconverged Tin
![](https://regmedia.co.uk/2021/08/24/handout_nvidia_ai_enterprise_with_2u_server.jpg)
Nvidia has extended its on-premises AI push into the wonderful world of hyperconverged infrastructure.
The company's move into the mainstream data centre has two prongs. One is a pair of small GPUs that fit into typical 2U servers and won't burn them down or instantly torch your budget – the A10 and A30 cost around $2,000 and $3,000 respectively.
The other is "NVIDIA AI Enterprise", a bundle of AI tools – PyTorch, TensorFlow, Nvidia's inference server and more – packaged and ready to run inside VMware's vSphere virtualization environment, either as VMs or containers.
Virtzilla and Nvidia have also worked together to virtualize GPUs so they can be carved into logical slices that are shared out to applications, rather than tightly coupled to servers. Nvidia has also run a server certification programme, signing up the box-makers that count – Atos, Dell, GIGABYTE, HPE, Inspur, Lenovo and Supermicro – so they can guarantee that AI Enterprise on VMware will work as promised on their tin.
Everything mentioned above is now on sale, having made it through beta testing.
Also shipping now is a Dell EMC VxRail rig running AI Enterprise, an offer Nvidia is quite excited about as it means its new bundle runs on both vanilla servers and hyperconverged infrastructure.
The firm also talked up its new relationship with MLOps outfit Domino Data Lab, which will integrate with AI Enterprise on vSphere so that when analytics types ask IT to start running a new model, VMware's platform can quickly spawn just the right containers or VMs to do the job.
Nvidia thinks this all matters for two reasons. One is that IT teams are tired of their analytical colleagues treating the cloud as the default destination for AI and ML workloads, creating governance and security worries as they send data beyond the reach of on-prem policy. The other is that line-of-business applications increasingly require AI and/or ML, and mostly run on-prem, where they create demand for easier adoption tools.
If those who sign up for AI Enterprise on commodity servers outgrow it and end up with either Nvidia servers or Nvidia in a cloud, the company has still built itself an on-ramp – although some might suggest gateway drug is a better metaphor. ®