Why AI Builds Best On Private Clouds
Sponsored Feature As petabyte-heavy AI workloads stack up, hyperscale providers are having to beef up their infrastructure to maintain compute, network, and data storage availability thresholds.
It could take years of upgrades for the hyperscalers to make those infrastructures truly 'AI-fit'. Until then they can find consolation in surging revenue streams from customers' escalating need to store and shift datasets through the AI model lifecycle – while being billed each step of the way.
Cost is just one of the reasons why enterprises are looking for on-premises options for their burgeoning AI workloads, which they also hope will provide them with higher performance, tighter management, and improved data security.
"Enterprise customers are realizing that leveraging AI for optimal performance, but with integrated cost control, calls for a fundamentally different approach to the management and storage of their proprietary data," says Chris Greenwood, Data Services & Storage Vice President (UKIMEA) at HPE. "They're looking for AI-optimized enablement platforms that don't incur usage surcharges or impact their other line-of-business applications."
For organizations exploring how AI and generative AI can help them develop new revenue-generating products and services, the way ahead is beset with risk and daunting technological complexity.
Various studies indicate that the majority of AI projects do not succeed as planned – some unverified figures suggest the failure rate is higher than 80 percent. Research by RAND found that a top root cause for this high failure rate is that organizations do not have adequate infrastructure to store and manage their data, and then deploy completed AI models.
"There's continued buzz around AI, how it can both 'unlock' data value as well as accelerate speed-to-market of revenue-generation solutions," reports James Watson-Hall, Worldwide Field CTO - Hybrid IT at HPE. "One of the pitfalls of AI adoption for customers is that while it promises opportunities it can also lead them up developmental dead ends. Private clouds mitigate a lot of that risk. They enable customers to access the proven benefits of hyperscaler platforms, while securely retaining the control and data ownership of on-premises operations."
Private cloud plus points
Effective speed-to-market needs low-latency compute and networking to support AI development environments. Ensuring that the inference and tuning stages occur close to – i.e., on the same physical premises as – the data sources can significantly boost the speed and accuracy of AI models. Public cloud solutions, however, can entail transferring data across geographical distances, which introduces latency and other inefficiencies.
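To illustrate the data-gravity point, consider a rough back-of-the-envelope sketch – the dataset size and link speeds below are hypothetical, not figures from HPE – of how long it takes simply to move a training corpus to remote compute versus reading it over a local fabric:

```python
# Back-of-the-envelope sketch: time to move a training corpus to remote compute.
# All figures are hypothetical and purely illustrative.
DATASET_TB = 500    # size of the training corpus in terabytes
WAN_GBPS = 10       # assumed inter-site / cloud ingress link
LOCAL_GBPS = 100    # assumed on-premises fabric between storage and GPUs

def transfer_hours(size_tb: float, link_gbps: float) -> float:
    """Ideal transfer time in hours, ignoring protocol overhead and contention."""
    bits = size_tb * 8e12
    return bits / (link_gbps * 1e9) / 3600

print(f"Over a {WAN_GBPS} Gbit/s WAN link:   {transfer_hours(DATASET_TB, WAN_GBPS):6.1f} hours")
print(f"Over a {LOCAL_GBPS} Gbit/s local fabric: {transfer_hours(DATASET_TB, LOCAL_GBPS):6.1f} hours")
```

Real-world throughput varies widely, but the order-of-magnitude gap is the point: keeping tuning and inference on the same premises as the data removes that movement entirely.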
In addition, the on-premises approach allows for closer customer governance, which simplifies management, clarifies accountability, and facilitates secure, collaborative workflows, Watson-Hall points out. It also makes expenditure easier to monitor: "Cost control comes from flexible financial models, avoiding the unanticipated costs associated with public cloud contracts. Private cloud lets customers optimize their technology stack for specific use cases, so AI deployments are tailored from pilot to deployment for utmost efficiency and to foster innovation."
These factors – and others – demonstrate why HPE's Private Cloud AI, introduced earlier this year, was designed and built to meet the specific requirements of AI workloads running in on-premises environments. Part of the NVIDIA AI Computing by HPE portfolio, Private Cloud AI is a turnkey, scalable private cloud designed to speed up AI project deployment and ensure the data it uses is entirely under enterprise control.
The solution combines NVIDIA accelerated computing, networking and software with HPE's industry-leading compute platforms and data storage portfolio through the HPE GreenLake Cloud, along with HPE capabilities for data pipelines, orchestration and machine learning operations (MLOps).
The solution meets the long-standing need for a resource-efficient, fast and flexible development and deployment environment for AI and generative AI applications, says HPE. "AI workloads are unlike other workloads – you can't really run one on infrastructure that's not optimized for AI and expect to achieve optimal results," Greenwood cautions.
"Also, AI projects must show highly quantifiable speed-to-value to justify investment. And successful AI workloads must be easily and quickly deployable to maintain competitive status within given vertical sectors and industries. They must show tangible ROI within a period of months, not years."
Greenwood adds: "This is only feasible if pre-defined, integrated and tested tools are provided from a single, integrated platform – that's what HPE Private Cloud AI does."
Success starts with data
Of course, all successful AI outcomes start with data – information that's stored and managed optimally for AI.
"Enterprises are anxious to interrogate their data to uncover value, but before they can reach that stage they need to understand where the data is, and what they actually possess – i.e., is the data trusted – that's challenge number one," says Greenwood. "Then they need to ensure they are using the correct interrogation platform that will enable them to achieve their aims."
HPE Private Cloud AI functions help enterprises locate, identify and understand their separate data assets, and then provide a data pipeline that allows interrogation to take place across data domains. These tools are designed to be used across enterprise personas, from data scientists and data analysts to non-techie business owners.
Consolidating dispersed datasets into a central repository that's accessed through HPE GreenLake for File Storage – fully integrated across HPE Private Cloud AI configurations – makes it much easier and more cost-effective to interrogate data as part of a single, unified storage environment, Watson-Hall says: "HPE GreenLake's suite of Infrastructure-as-a-Service (IaaS) options allows customers to control how IT resources are consumed. And it scales as stored data assets grow."
And grow they almost certainly will. Referencing data derived from research company IDC, JLL expects AI to drive datacenter storage capacity up at a compound annual growth rate of 18.5 percent, roughly doubling it from 10.1 ZB in 2023 to 21.0 ZB in 2027.
"Storage architectures of old were not designed to scale at that pace," according to Watson-Hall. "They are not designed to meet the needs of these AI-driven requirements. The scale is beyond traditional array architectures. We're seeing 100 PB single-platform systems out there. That's way beyond what most datacenter storage systems can manage without gasping. It's an enormous challenge for even large businesses to stay on top of without tech partner solutions and support."
Furthermore, data storage best practice must take account of sustainability, says Greenwood. "It has become one of the issues that's causing organizations to take on board that they cannot keep all of their data forever. Data storage incurs an environmental cost even when data is archived. That must be factored into TCO assessments."
Greenwood adds: "These issues are heightened because of the increase in data AI is generating. Making enterprises better able to manage their data, with tools to optimize its use, contributes to better sustainability outcomes."
Industry sectors bringing home the AI advantage
Opportunities to improve process efficiency by using AI to automate operations and deliver new products and services abound across verticals and industries. But in today's ultra-competitive environments, scale and speed of deployment are vital.
"Businesses realize that they cannot afford to devote human and financial resources in large measure to bring their AI projects to market," Greenwood reports. "In financial services, for instance, banks are now investing in pretested, as-a-service solutions – rather than pursuing a 'build-it-ourselves' approach." He cites the example of Barclays recent reaffirmation of HPE GreenLake Cloud as a core pillar of its hybrid cloud strategy.
In healthcare, HPE solutions with data platform WEKA enable radiologists to use increasing amounts of medical imaging data to train AI models that assist clinicians in diagnostic testing and enhance patient experiences.
Sectors like public services are looking to AI for operational efficiencies and better workforce management, automating routine tasks and improving workflows. "This reduces operational costs, ramps up service delivery, and allows human resources to focus on more complex, value-added activities," says Watson-Hall.
Central to these accelerated development lifecycles is HPE Private Cloud AI's use of NVIDIA Inference Microservices (NIM) – a growing catalog of prebuilt, containerized inference services that enable enterprises across sectors and industries to adopt cloud-enabled AI apps and use cases at speed.
NVIDIA NIM helps to create data pipelines and to develop and fine-tune generative AI models. When deployed, NIM exposes industry-standard APIs for simple integration into AI applications, development frameworks and workflows.
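NVIDIA documents NIM as exposing OpenAI-compatible endpoints, so applications can typically reach a deployed microservice with a stock OpenAI client. The snippet below is a minimal sketch assuming a NIM container is already serving a chat model on the local network; the endpoint URL, API key and model name are placeholders rather than details of any particular HPE Private Cloud AI deployment:

```python
# Minimal sketch of calling an on-premises NIM endpoint via its
# OpenAI-compatible API. URL, API key and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://nim.internal.example:8000/v1",  # hypothetical on-prem endpoint
    api_key="not-used-on-prem",                      # many on-prem setups ignore the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",              # example NIM model identifier
    messages=[
        {"role": "system", "content": "You are an internal data-analysis assistant."},
        {"role": "user", "content": "Summarize last quarter's support-ticket themes."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```

Because the interface matches the widely used chat completions API, existing frameworks and orchestration tooling can usually be pointed at the private endpoint without code changes.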
Securing against LLM leaks
Large language models (LLMs) are key to deriving insights when enterprises interrogate their proprietary data. Until recently, hyperscale clouds were pretty much the only environments in which organizations could develop and run AI/LLM-class workloads – giving rise to data security concerns. LLMs can be targeted by cyber-attacks, and training them can draw in sensitive data that, if mishandled, poses security risks.
"Working with HPE Private Cloud AI, enterprises can now run LLMs on-premises, with all the cloud-credible performance attributes they want, but with a secure wraparound," Greenwood says. "If LLM data leaks into the public domain, it could cause formidable problems if it contains a business's intellectual property. HPE Private Cloud AI is designed to secure the LLMs it helps to develop and deploy."
The rapid adoption of AI by the commercial mainstream is proving a potent check on the public cloud advocacy that shapes many enterprises' IT strategies. As Greenwood and Watson-Hall describe, from pilot to production there are many factors that favor an AI-ready organization choosing private cloud for more complete control of its projects. HPE recognizes that optimizing data storage – improving unification, latency and manageability – is an essential priority for successful AI ventures.
Sponsored by HPE.