The Smart Way To Tackle Data Storage Challenges

Sponsored Feature: Object storage used to trade performance for cost and scalability. With high performance, data intelligence, and simple management, HPE is updating the technology for modern use cases.
When object storage first launched in the late 1990s, it enabled companies to tackle a perennial problem: how to store large amounts of data at low cost. That requirement hasn't gone away, with research company IDC predicting that the volume of enterprise data stored globally will expand at a CAGR of 28.2 percent from 2022 to 2027. But now there's an additional need to process information faster to meet the demands of more modern applications and workloads.
Object storage scales out to manage vast amounts of data across distributed systems of general-purpose storage nodes. It's excellent for unstructured data like multimedia files, storing extensive metadata with those objects for more advanced data management and retrieval. However, this mass-scale storage traditionally came with a performance penalty.
Companies have lived with that trade-off by restricting their object storage to low-performance applications such as archiving and large digital repositories where retrieval speed wasn't a factor. Now, with the rise of data-intensive applications like analytics, AI, and modern data protection, there's more demand for high-performance object storage that can handle low-latency, high-speed data storage and retrieval.
Traditional object storage can't keep up
Traditional object storage has not been able to keep up. In November 2024, HPE addressed the issue by launching HPE Alletra Storage MP X10000, powered by AMD EPYC™ embedded processors. The company's first home-grown entry into the object storage market is an all-flash solution that adds speed to scalability and data intelligence capabilities, while also making object storage easier to manage.
The X10000 handles the same high-volume storage applications that legacy object storage tackles, but its fast operation makes it particularly well-suited for modern use cases, including AI, says HPE. It's good at supporting the AI lifecycle, for example (an application area that IDC estimates will drive $21.9 billion in global enterprise spending by 2028) because its low-latency retrieval can help to accelerate training and inference.
HPE is especially targeting HPE Alletra Storage MP X10000 at the newer generation of generative AI (GenAI) applications. With enterprises embracing retrieval-augmented generation (RAG) as a way to tailor large language model (LLM) technology to their own applications and data, they need rapid retrieval of indexed unstructured data.
The driving force behind the X10000's improved performance is what HPE calls data intelligence: a real-time process in which data is scanned as it is ingested into the object store. HPE creates vector embeddings from the data, numerical values that represent semantic meaning, enabling the system to quickly retrieve data based on similar concepts. The device stores these embeddings in a vector database that enables large language models to fold the retrieved data into their responses.
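HPE doesn't publish the internals of its data intelligence pipeline, but the general mechanism it describes, embedding-based similarity retrieval, can be sketched in a few lines. In this illustrative example a toy hash-based embedding stands in for a trained embedding model (a deliberate simplification; real systems use learned models and a dedicated vector database):

```python
import math
import zlib

def embed(text, dim=16):
    # Toy deterministic embedding: hashes words into a fixed-size vector.
    # A real pipeline would use a trained embedding model instead.
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[zlib.crc32(word.encode()) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are unit-normalized, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def nearest(query, corpus):
    # Retrieve the stored document whose embedding is closest to the query's.
    q = embed(query)
    return max(corpus, key=lambda doc: cosine(q, embed(doc)))

docs = [
    "how to reset a password",
    "shipping and delivery times",
    "warranty claim procedure",
]
print(nearest("password reset", docs))
```

The key property is that retrieval is driven by vector similarity rather than exact keyword matches, which is what lets a RAG application surface conceptually related documents for an LLM to draw on.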
Data intelligence makes data stored on the X10000 ready for AI applications to use as soon as it is ingested. The company has a demo of this, where the X10000 ingests customer support documents and enables users to instantly ask relevant natural language questions via a locally hosted version of the DeepSeek LLM. This kind of application wouldn't be possible with low-speed legacy object storage, says the company.
The X10000's all-NVMe storage architecture helps to support low-latency access to this indexed and vectorized data, avoiding front-end caching bottlenecks. Advances like these provide up to 6x faster performance than the X10000's leading object storage competitors, according to HPE's benchmark testing.
Software-defined flexibility
HPE designed the X10000 from the start to be deployed on-premises and in the cloud thanks to its software-defined architecture. Its control software runs on a containerized Kubernetes (K8s) platform. The X10000 is a modular solution based on what HPE has dubbed a Shared Everything Disaggregated Architecture (SEDA), with compute and storage (Just a Bunch Of Flash, or JBOF) nodes scaling independently of each other. Organizations can lean towards performance or capacity according to their own needs when adding modules, rather than having to scale storage and compute linearly with each other. HPE's research suggests that this disaggregated architecture can reduce storage costs by up to 40 percent.
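The practical effect of disaggregation is that node counts can be sized against two independent targets, capacity and throughput, instead of one. This minimal sketch uses hypothetical per-node figures (they are placeholders, not X10000 specifications) to show how the two dimensions decouple:

```python
import math

def plan_nodes(capacity_tb, throughput_gbps,
               tb_per_jbof=300, gbps_per_compute=10, min_nodes=3):
    """Size compute and JBOF node counts independently.

    The per-node figures are hypothetical placeholders, not X10000 specs.
    """
    storage = max(min_nodes, math.ceil(capacity_tb / tb_per_jbof))
    compute = max(min_nodes, math.ceil(throughput_gbps / gbps_per_compute))
    return {"compute": compute, "storage": storage}

# A capacity-heavy archive vs a throughput-heavy AI workload:
print(plan_nodes(capacity_tb=3000, throughput_gbps=20))
print(plan_nodes(capacity_tb=300, throughput_gbps=120))
```

In a coupled architecture the larger of the two requirements would force overprovisioning of the other; here each dimension is bought only to its own target.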
The containerized architecture opens up options for inline and out-of-band software services, such as automated provisioning and life cycle management of storage resources. It is also easier to localize a workload's data and compute resources, minimizing data movement by enabling workloads to process data in place rather than moving it to other compute nodes. This is an important performance factor in low-latency applications like AI training and inference.
Another benefit of the containerized architecture is that all workloads can interact with the same object storage layer. The X10000 offers native support for Amazon's S3, widely regarded as the de facto standard object storage API.
Data analytics is the third leg of this market, representing a $17.1 billion opportunity in 2028, calculates IDC. These applications increasingly draw on data lakes, holding all manner of structured and unstructured data. The X10000's low latency makes it suitable for advanced real-time analytics applications, which increasingly use AI algorithms.
Good for data protection
While AI and analytics are strong growth areas for the X10000, it's also likely to gain significant traction in high-speed, scalable backup and restore applications. This is a focal point for HPE, which views data protection and backup storage as a core market for the device. Its all-flash architecture gives it the IOPS to recover and restore data quickly, which becomes more important as the scale of backup data grows. Meanwhile, high-performance recovery helps to minimize the business impact of an outage.
Enterprises can rarely, if ever, afford data corruption or leakage, which is why the X10000 includes several features designed to secure data. These include always-on monitoring and auditing for data integrity, along with object lock-based immutability for ransomware protection. Customers can apply authentication and encryption options, both for at-rest and in-flight data. The system also features erasure coding, which gives customers more data resilience while preserving storage space: splitting data across multiple nodes and adding parity blocks enables customers to recover data without storing full copies of it.
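The parity idea behind erasure coding can be shown in its simplest form, single-parity XOR. Production systems use Reed-Solomon-style codes that tolerate multiple simultaneous failures, but the recovery principle is the same: any one lost shard can be rebuilt from the survivors plus parity, without ever storing a full second copy:

```python
def make_parity(shards):
    # Parity shard: byte-wise XOR of all data shards (equal-length shards).
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving_shards, parity):
    # XOR of the parity with the surviving shards reconstructs the lost shard.
    lost = bytearray(parity)
    for shard in surviving_shards:
        for i, b in enumerate(shard):
            lost[i] ^= b
    return bytes(lost)

data = b"object-data!"                          # 12 bytes of payload
shards = [data[i:i + 4] for i in range(0, 12, 4)]  # 3 data shards of 4 bytes
parity = make_parity(shards)

# Simulate losing shard 1 on a failed node and rebuilding it.
rebuilt = recover([shards[0], shards[2]], parity)
assert rebuilt == shards[1]
```

Here the storage overhead is one parity shard per three data shards (33 percent) versus 100 percent or more for full replication, which is the space saving the article refers to.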
This performance boost has already been demonstrated in real-world environments. French technology integrator and IT service provider AntemetA was a prime candidate to beta test the X10000, given its work with AI applications and its provision of backup-as-a-service options for customers.
AntemetA primarily tested the X10000 for data protection and data analytics applications, says its pre-sales architect Jeff Charpentier, speaking at a recent HPE Discover conference. The former is a key application for the company given regulatory changes in the EU. "Data protection is evolving at the moment. There are regulations around the world, like DORA [the Digital Operational Resilience Act] in Europe, and our customers want to accelerate recovery," he says.
The X10000 passed the company's tests, and then some, according to Charpentier. "We were quite amazed by the performance," he continues. "On this system, we were able to reach 40 gigabits per second of write, and also 40 gigabits per second of read."
Mastering storage management
Charpentier was also impressed by the X10000's management features, which are based on the Data Services Cloud Console (DSCC). This integrates with HPE GreenLake to create a cloud-based management system that enables customers to manage their entire HPE storage infrastructure - including on-premises and cloud-based systems like the HPE Alletra Storage MP B10000 (formerly GreenLake for Block) - in a single interface.
Unifying management of the entire HPE Alletra Storage range makes it easier to monitor and control not just structured and unstructured data from that one interface, but also block, file, and object storage. This is a big advantage for the X10000 over legacy object storage systems which can be difficult to manage, says HPE.
DSCC enables customers to control functions ranging from performance optimization based on AI-driven predictive analytics, through to rapid data restoration. It supports security features including encryption, intrusion detection, and detailed auditing. It also helps to simplify onboarding with automatic detection, deployment, and integration of new hardware components. That was a big deal for Charpentier, who experienced a component failure during his beta test.
"We do maintenance for our customers, so we are used to failing parts and failures," he says. "During our test, we had a failure of one flash module among the 24. HPE just shipped a module, we replaced it online, and everything went on as usual with no interruption to the service."
By supporting identity management for controlled access, DSCC provides a role-based management model that makes it easier for customers to use the same system for multiple functions without confusion, Charpentier adds: "What is interesting also in the DSCC console for management is that you can segregate infrastructure management roles from the DevOps roles."
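The role segregation Charpentier describes is a standard role-based access control (RBAC) pattern: each role carries a disjoint set of permitted actions, so infrastructure and DevOps duties can't bleed into each other. The role names and actions below are hypothetical illustrations, not DSCC's actual model:

```python
# Hypothetical role definitions for illustration; DSCC's real model differs.
ROLE_PERMISSIONS = {
    "infrastructure_admin": {"replace_hardware", "configure_nodes", "view_metrics"},
    "devops": {"create_bucket", "set_lifecycle_policy", "view_metrics"},
}

def is_allowed(role, action):
    # An action is permitted only if it appears in the role's permission set.
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("devops", "create_bucket"))      # DevOps can manage buckets
print(is_allowed("devops", "replace_hardware"))   # but not touch hardware
```

Because each check consults only the caller's own role, multiple teams can share one console without one team's mistakes reaching the other's resources.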
Maximizing performance for multiple AI applications
HPE has been forging industry cooperation to drive storage performance at a deep technical level, enabling direct memory access transfers between GPU memory, system memory, and indexed metadata storage. This results in reduced latency and CPU overhead, says HPE, with a further boost to overall system performance delivered by the HPE Alletra Storage MP's AMD EPYC embedded CPUs. The AMD EPYC embedded processors at its core are designed to offer a scalable x86 CPU portfolio that delivers high performance with enterprise-class reliability in a power-optimized profile.
HPE is clearly targeting a big list of applications for the new X10000 model, which means it had to build in sufficient flexibility for customers to tailor it for their specific use cases. The company has also tried to make that scalability more financially manageable by offering the product on a subscription basis.
How can companies best take advantage of a unit like the X10000? It's a massively scalable system, but you don't have to start big. Beginning with as few as three nodes, you can test your assumptions and gain confidence with the management interface before scaling up, concentrating on storage capacity, compute, or a mixture of both. That will help you to minimize the total cost of ownership by avoiding overprovisioning.
Object storage isn't going anywhere. IDC expects it to grow at a five-year CAGR of 14.9 percent through 2027 in on-premises and public cloud environments. But it needed reinventing for a new age of AI, data analytics, and data protection at mass scale. The X10000's software-defined architecture and data intelligence feature deliver high performance while also simplifying management by unifying everything under a simple HPE GreenLake cloud interface.
With these enhancements, HPE is banking not so much on nudging the object storage needle as sending it spinning at rapid speed. It seems to be succeeding - and it has the customer stories to prove it.
Sponsored by Hewlett Packard Enterprise and AMD.