AI Puts Value In Data. So How Do We Get It Out?

Sponsored Feature AI is driving an explosion in infrastructure spending. But while GPU-enabled compute may grab the headlines, data management and storage are also central to determining whether enterprises ultimately realize value from their AI investments and drive broader transformation efforts.

The worldwide AI infrastructure market is expected to hit $100bn by 2027, according to IDC. Servers are expected to account for the lion's share of this spending, but storage investment is increasing in line with overall growth as tech leaders cater for the massive datasets AI requires, along with the need for training, checkpoint, and inference data repositories.

While AI is fueling this spending boom, many of the underlying challenges facing CIOs haven't changed, explains HPE's SVP and GM for storage, Jim O'Dorisio. These include driving innovation, streamlining operations, and reducing the total cost of operations, all within the maelstrom of a constantly evolving tech and business landscape.

Data, and therefore storage, play into all of this. AI relies on data. But so do the myriad other, more traditional operations that companies regularly undertake. And it must be the right data, available to the right systems at the right time, and at the right speed, says O'Dorisio.

"If you go back 15 years ago, 10 years ago, storage was really just where data sat. Increasingly, it's where we create value now, right," he explains.

Dealing with the issues of data gravity and location is particularly challenging, a situation aggravated by the broader span and complexity in customer IT environments. The last two decades have seen a rush to the cloud, for example. But many enterprises are now wondering just how much they actually need to be off premises, particularly when it comes to managing all the data they need in order to realize value from AI.

That decision may come down to higher-than-expected costs, or any given cloud provider's inability to meet strict organizational performance or security requirements, especially for real-time and/or AI workloads. IDC notes that even cloud-native organizations are beginning to question whether private cloud or on-prem has a role to play for them.

And beyond creating value through AI or other advanced applications, enterprise data still needs to be protected and managed as well. The cyberthreat is more acute than ever – with threat actors themselves enthusiastically leveraging AI.

The cyber challenge is clearly right up there, says O'Dorisio, but this repatriation of data also creates additional hybrid complexity. There's sustainability to consider as well, for example. Complex systems require energy to run, and data should be managed efficiently. But the underlying storage should also be as efficient as possible. That includes optimizing energy consumption but also considering the impact of overprovisioning and unduly short life cycles.

This is a legacy problem

The crucial question for an organization's storage systems, then, is whether they can keep up with the speed of change. The answer, too often, is that they can't, for multiple reasons.

Traditional architectures that rigidly tie together compute and storage can pose problems when scaling up to meet increasingly complex or large workloads. Expanding storage capacity can mean spending on compute that isn't really needed, and vice versa. This can lead to silos of systems built out for a particular business unit or workload, or a particular location, for example, core datacenters or edge deployments.
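The arithmetic behind that overprovisioning argument is easy to sketch. The following Python snippet is purely illustrative; the node specs and workload figures are invented assumptions, not vendor numbers. It shows how a coupled architecture forces you to buy whichever resource you need *more* of, dragging the other along.

```python
import math

# Illustrative node specs -- assumptions for the sketch, not vendor figures.
NODE_COMPUTE = 100   # arbitrary compute units per coupled node
NODE_CAPACITY = 50   # TB of storage per coupled node

def coupled_nodes(compute_need: int, capacity_tb: int) -> int:
    """Coupled scaling: one node type carries both resources, so you buy
    enough nodes to cover the larger requirement; the other dimension
    is overprovisioned as a side effect."""
    return max(math.ceil(compute_need / NODE_COMPUTE),
               math.ceil(capacity_tb / NODE_CAPACITY))

def disaggregated_nodes(compute_need: int, capacity_tb: int) -> tuple[int, int]:
    """Disaggregated scaling: compute and storage nodes are sized
    independently, so each dimension is bought only as needed."""
    return (math.ceil(compute_need / NODE_COMPUTE),
            math.ceil(capacity_tb / NODE_CAPACITY))

# A capacity-heavy workload: modest compute need, lots of data.
compute, capacity = 150, 600
print(coupled_nodes(compute, capacity))        # 12 nodes, 1200 compute units bought for 150 needed
print(disaggregated_nodes(compute, capacity))  # (2, 12): 2 compute nodes, 12 storage nodes
```

For this capacity-heavy example, the coupled model buys eight times the compute actually required; the disaggregated model buys just what each dimension needs.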

Likewise, legacy architectures are often targeted at specific types of storage: block, file, or object. But AI doesn't distinguish between data formats. It generally wants to chew through all the data it can, wherever it is.

This lack of flexibility can be aggravated by legacy systems that were designed for a particular type of organization or scale, such as "enterprise" or a medium-sized business. Integrating a raft of standalone systems can present a clear architectural issue as well as management challenges.

Disparate hardware often means disparate management systems and consoles, for example, leaving managers with a fragmented view of their overall estate. That situation can force team members to specialize in a subset of the organization's infrastructure, which often results in inefficiencies and increased operational costs.

These fragmented, siloed, and often hard to scale systems don't lend themselves well to the hybrid operations that are increasingly becoming the norm. Any organization contemplating repatriating some or all of its data will likely balk at losing the ease of use of managing their data in the cloud.

This can all contribute to a massive bottleneck when it comes to maximizing the value of all the data available. "The architectures are typically complex, and they're siloed," explains O'Dorisio. "And it makes extracting value from the data very difficult."

Where is the value?

HPE has sought to address these challenges with its HPE Alletra Storage MP platform. The architecture disaggregates storage and compute, meaning each can be scaled separately. So, as the demands of AI increase, infrastructure can be scaled incrementally, sidestepping the likelihood of silos or wasteful overprovisioning, says HPE. This is bolstered by HPE's Timeless program, which ensures a free, nondisruptive controller refresh, cutting TCO by 30 percent compared to standard forklift upgrades, according to HPE estimates.

The MP stands for multiprotocol, with common underlying hardware optimized for particular applications. The HPE Alletra Storage MP B10000 modernizes enterprise block storage with AI-driven cloud management, disaggregated scaling, and 100 percent data availability for all workloads, says HPE. The HPE Alletra Storage MP X10000, meanwhile, is purpose-built for intelligent, high-performance object storage. The AMD EPYC embedded processors at their core are designed to offer a scalable x86 CPU portfolio delivering maximum performance with enterprise-class reliability in a power-optimized profile.

An upcoming release of the X10000 system will add the ability to tag data and attach metadata as it is stored. Users will be able to add vector embeddings and similar enrichments to support downstream GenAI RAG pipelines. "Our whole notion is really to add the intelligence and create value as the data is being stored, which really significantly reduces time to value for our customers," O'Dorisio says. Together with the unified global namespace in HPE Ezmeral Data Fabric, customers can aggregate data from across their enterprise to fuel AI initiatives.

But, even if tech leaders have good reason to situate some or even all their storage infrastructure outside the cloud, giving up the ease of management the cloud offers is a harder sell. Step forward the HPE GreenLake cloud, designed to deliver a single cloud operating model to manage the entire storage estate, across the core, edge and cloud.

Any form of disruption to IT operations, whether due to a disaster or a cyberattack, is now considered an inevitability rather than a misfortune. However, by harnessing the Zerto ransomware detection and recovery software, organizations "can really recover in hours and days, versus maybe weeks and months when you're trying to recover a bunch of data from a cloud," says O'Dorisio.

Intelligent data savings

This intelligent approach to architecture and ownership also supports a reduction in associated emissions by half, O'Dorisio adds, by reducing overprovisioning and the need for forklift upgrades.

HPE's own research shows that HPE Alletra Storage MP's disaggregated architecture can reduce storage costs by up to 40 percent. Better still, intelligent self-service provisioning can deliver up to 99 percent operational time savings, calculates the company.

One major global telecom provider recently deployed HPE Alletra Storage MP B10000 to refresh its legacy storage arrays. In the process, the company dramatically reduced the costs associated with support, energy and cooling, as well as datacenter space, says HPE.

The move helped reduce operating expenses by more than 70 percent while allowing the telco to accommodate a higher volume of traditional databases as well as more modern applications. The increased storage capacity in a smaller footprint means the telco also now has space in its datacenter to accommodate future growth.

None of that is to suggest that storage in the AI age is anything less than complex. After all, as O'Dorisio says, "The data really spans, from private to the edge to the public cloud. Data sits across all those environments. Data is more heterogeneous."

But deploying block, file, or object storage services on a common cloud-managed architecture means both managing and extracting value from that data becomes much easier and more efficient.

Sponsored by Hewlett Packard Enterprise and AMD.
