Surviving A Cyberattack? It Ain't What You Store, It's The Way You Restore It

Sponsored Feature What's the biggest endorsement of critical data backup systems? It could be the fact that when a threat actor breaks into an organization, their first target is usually the victim's primary storage, followed immediately by the backup storage. They know that if they compromise both, their target has little option but to pay up, if only because it knows it has to get back to business as quickly as possible.

Data breaches disclosed by an attacker, including those caused by ransomware, cost companies significantly more than other breaches, according to research by the Ponemon Institute for IBM.

Yet while cyber criminals are crystal clear on the importance of an organization's backup systems, management can sometimes underestimate their priority and lump them into a broader architectural strategy.

Organizations often simply buy their backup hardware from the same vendor that supplies their primary storage, for example – even to the point of just replicating their primary storage behind their chosen backup application.

The problem here is that those systems are unlikely to be tuned specifically for backup. There might be some rudimentary deduplication included within the chosen backup applications, but organizations will often find they are still retaining large amounts of data on expensive storage. And because the entire architecture is network facing, it will be easier for threat actors to make the leap from primary storage systems to the backup architecture.

Don't assume the backup is secure

A solution to the data size issue might be to use an inline dedupe appliance as part of the backup process, but dedupe is compute intensive, dramatically slowing down backups. And in the event of an attack – and there will be an attack – restores will also be painfully slow, as data has to be rehydrated from its deduplicated form for each restore request. That's assuming the backup is secure at all – this setup is still network facing, after all.
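To see why restores from deduplicated storage are slow, it helps to sketch how inline dedupe works. The following is a hypothetical, simplified illustration – not any vendor's actual implementation – using tiny four-byte chunks where real systems chunk at kilobyte scale. Each backup is reduced to a "recipe" of content hashes, and every restore has to rehydrate the stream chunk by chunk from the shared pool:

```python
import hashlib

# Illustrative sketch of inline deduplication (hypothetical, simplified).
# Duplicate chunks are stored only once; restores must reassemble the
# original stream from the pool, which is the "rehydration" cost.

CHUNK_SIZE = 4      # tiny for illustration; real systems use KB-scale chunks
store = {}          # content hash -> chunk bytes (the deduplicated pool)

def backup(data: bytes) -> list:
    """Return a recipe (list of chunk hashes) describing the backup."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # a duplicate chunk costs nothing extra
        recipe.append(digest)
    return recipe

def restore(recipe: list) -> bytes:
    """Rehydrate: reassemble the original stream one chunk at a time."""
    return b"".join(store[d] for d in recipe)

r1 = backup(b"AAAABBBBAAAA")        # three chunks, but only two unique
assert len(store) == 2              # the repeated chunk is stored once
assert restore(r1) == b"AAAABBBBAAAA"
```

The space saving is real, but note that `restore` touches the pool once per chunk, which is why rehydrating a large backup under time pressure is painful.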

Moreover, expanding the capacity of such a system typically means adding not just more storage but an entire additional front-end controller – processor, memory, and all. This also raises the prospect of forklift upgrades, or, possibly worse, realizing your existing system is nearing obsolescence.

Another problem with both these approaches is they assume that all data is created equal.

But as ExaGrid's president and CEO Bill Andrews explains, it's more helpful to think of two types of data. First there are information assets that the company needs simply to maintain daily operations – production data, customer data, VMs etc. An organization can't predict when it's going to be hit by a ransomware attack, or any other sort of catastrophic outage, but knows that it needs this data to be restored as quickly as possible so it can get back to business. The longer the restore process takes, the more likely the business is to lose customers, then revenue, then the faith of suppliers and investors – followed by ballooning costs and, possibly, collapse.

Tiering up

Then there's data which needs to be retained for the longer-term and which needs to be produced for financial or regulatory audits, legal discovery purposes, or to meet compliance requirements. There's no question this information is important. However, storage professionals will typically have time to produce that data, for example, in response to a legal discovery request.

"We looked at that, and we said, we need to do something very different," says Andrews. "We call it Tiered Backup Storage."

The tiering refers to ExaGrid's division of its appliance into a front-end, network-facing "Landing Zone" and a non-network-facing Repository Tier.

Rather than directing backup data via a dedupe appliance, data is written directly to the Landing Zone. "There's nothing slowing the backups down," as Andrews says. Speed is achieved by getting the dedupe process out of the way and letting the backup software – the company supports upwards of 25 backup applications – write straight to disk, doing what it does best.

"We tweak the OS to deal with large backup jobs. Most primary storage is tweaked for small NAS files," he explains. ExaGrid's system also allows for job concurrency: "We do the jobs in parallel coming at all of our appliances just to make sure you're really getting a lot of stream coming at the exit rates." At the same time, front end load balancing means jobs are sprayed across all the available appliances.

In addition, Andrews says, ExaGrid uses the "most advanced protocols for performance we can find." And the backup app's encryption can be switched off – avoiding a 20 to 30 percent performance hit – because the ExaGrid system uses self-encrypting drives instead. Andrews says moving encryption down to drive level delivers another 20 to 30 percent improvement in ingest performance.

Those backups are then deduplicated from the Landing Zone into the Repository Tier for long-term retention, he explains. "As the backups hit our Landing Zone, we start deduplicating and replicating them into the repository." This means the backup process is completed as quickly as possible, shortening the backup window. At which point, more system resources can be switched to deduping data into the repository.
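The land-first, deduplicate-later flow described above can be sketched in a few lines. This is a hypothetical simplification of the general post-process dedupe pattern, not ExaGrid's code: the fast path is a plain write into the landing area, and a separate background pass chunks and hashes the landed data into a deduplicated repository once the backup window has closed:

```python
import hashlib
from collections import deque

# Hypothetical sketch of post-process ("land first, dedupe later") backup.
# Ingest does no dedupe work, so the backup window stays short; a later
# background pass moves the data into the deduplicated repository.

landing_zone = {}   # backup name -> raw bytes (native format, fast restore)
repository = {}     # content hash -> chunk (deduplicated, long-term)
pending = deque()   # backups waiting for the background dedupe pass

def ingest(name: str, data: bytes) -> None:
    """Fast path: a straight write, nothing slowing the backup down."""
    landing_zone[name] = data
    pending.append(name)

def dedupe_pass(chunk_size: int = 4) -> None:
    """Background pass: chunk and hash landed backups into the repository."""
    while pending:
        data = landing_zone[pending.popleft()]
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            repository.setdefault(hashlib.sha256(chunk).hexdigest(), chunk)

ingest("mon", b"AAAABBBB")
ingest("tue", b"AAAACCCC")    # shares a chunk with Monday's backup
dedupe_pass()
assert len(repository) == 3   # AAAA stored once, plus BBBB and CCCC
```

Note the trade-off the design makes explicit: recent backups sit in the landing zone in native format for quick restore, while the repository absorbs the dedupe cost outside the backup window.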

While the Repository Tier is physically on the same appliance, it is not network facing, meaning the data is effectively air gapped. As Andrews explains, "Our software is the only thing that can get to the repository."

The Repository Tier takes care of long-term data retention, according to policies set by the user. The most recent backups reside in the Landing Zone, undeduplicated and in their native format, meaning they are available for rapid restore should an organization's production systems be compromised – whether by ransomware, file corruption, or the myriad other reasons companies lose data. There is no need for the lengthy, compute intensive rehydration deduped files would require.

Activate the delayed delete system

The Repository Tier stores the data as immutable data objects. So, should corrupted or encrypted data find its way to the repository, "That is all new data to us when we see encrypted data and it does not change any of the previous deduplication images." Unusual delete or encryption activity will be picked up by the system, and an alarm is sent to the user to investigate the activity.
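The quote above hinges on a property of content-addressed, write-once object stores that is worth making concrete. In this hypothetical sketch (an illustration of the general technique, not ExaGrid's implementation), objects are keyed by content hash and never overwritten, so ransomware-encrypted data simply lands as brand-new objects – and the sudden collapse of the dedupe ratio is itself a detectable anomaly:

```python
import hashlib

# Hypothetical sketch: an immutable, content-addressed repository.
# Encrypted or corrupted data hashes to new keys, so it cannot alter
# previously stored objects; an abnormal share of "new" chunks is a
# signal worth alarming on.

repository = {}   # content hash -> immutable chunk (write-once)

def store_backup(data: bytes, chunk_size: int = 4) -> float:
    """Store chunks; return the fraction of chunks that were new."""
    new = total = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        total += 1
        if digest not in repository:
            repository[digest] = chunk   # never overwritten afterwards
            new += 1
    return new / total

store_backup(b"AAAABBBBAAAA")                           # initial full backup
ratio = store_backup(b"AAAABBBBCCCC")                   # incremental: mostly dupes
assert ratio < 0.5
ratio = store_backup(b"\x91\x7f\x02\x33\x5a\x1c\x88\x40\x07\x6e\x2d\xb9")
if ratio > 0.9:  # high-entropy "encrypted" data dedupes against nothing
    print("ALERT: dedupe ratio collapsed - possible encryption event")
```

The existing objects are untouched throughout: the encrypted backup is "all new data," exactly as Andrews describes.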

Further security comes through what ExaGrid calls Retention Time-Lock, a delayed delete policy. This means that admins can define a policy so that whenever a delete command is issued for data in the Repository Tier, the command is delayed for a specified period of time. Andrews says, "90 percent of our customers think 10 days is the right number." So, should an attacker manage to hack into the network and issue a delete command, the most recent backups are still available in the Repository Tier within the delayed delete window.
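A delayed-delete policy of this kind can be sketched as a simple queue of pending deletes. This is a hypothetical illustration of the pattern, not ExaGrid's implementation: a delete command only records a request, nothing is purged until the lock window expires, and an administrator who spots the alert can cancel the request in the meantime:

```python
import time

# Hypothetical sketch of a delayed-delete (time-lock style) policy.
# A delete only queues the object; data is purged after the window
# expires, so a malicious delete can be caught and cancelled.

LOCK_SECONDS = 10 * 24 * 3600   # the 10-day window most customers choose

repository = {"backup-2024-06-01": b"..."}
delete_queue = {}               # object name -> time the delete was requested

def request_delete(name: str, now: float) -> None:
    delete_queue[name] = now    # nothing is actually removed yet

def cancel_delete(name: str) -> None:
    delete_queue.pop(name, None)

def purge_expired(now: float) -> None:
    """Only delete requests older than the lock window remove data."""
    for name, requested in list(delete_queue.items()):
        if now - requested >= LOCK_SECONDS:
            repository.pop(name, None)
            del delete_queue[name]

t0 = time.time()
request_delete("backup-2024-06-01", t0)   # e.g. issued by an attacker
purge_expired(t0 + 3600)                  # an hour later: data still there
assert "backup-2024-06-01" in repository
cancel_delete("backup-2024-06-01")        # admin spots the alarm and cancels
```

The key property is that the attacker's delete buys the defender time instead of destroying data immediately.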

Multifactor authentication and role-based access control (RBAC) requirements further secure these policies. "So, if you want to change the policy, if you want to do abnormal deletes, you've got to get the Security Officer and the Admin to sign into the system," Andrews explains. "You have 2FA enabled, so if the threat actors are monitoring password usage and get a hold of the passwords, it's useless because they would need your phone to get in."

One distinctive aspect of ExaGrid's platform is that it is hard disk based. HDDs are, as a rule, cheaper than flash-based drives. Andrews says that the longer the retention policy a customer has, the more this makes a difference. "Because we're deduplicating all the way down, the longer the retention, the larger the benefit for the user."

Of course, the longer the retention policy, the more data needs to be stored. The nature of ExaGrid's architecture means expansion is through a scale-out approach, adding more appliances. This avoids the need for complex upgrades – and the fear of products being end-of-lifed just as storage needs increase.

The company's appliances range from models with raw capacity of 72TB and maximum backup throughput of 6.09TB/hour, up to 192TB models with throughput of 15.25TB/hour. Up to 32 appliances can be combined into a single scale-out system with 6.14PB of usable capacity, supporting a full backup of 2.7PB plus retention, at a throughput of 488TB/hour.

Backup window stays constant

This approach means the backup window can be kept constant. "If the backup window is six hours at 100TB, it's six hours at 1PB and six hours at 2PB, and it will be six hours at 5PB," says Andrews. In addition to adding additional appliances across multiple datacenters via cross replication and global deduplication to support disaster recovery, the platform also supports replication to AWS and Azure.
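The constant-window claim is simple arithmetic: if ingest bandwidth grows in step with data, the window holds. A back-of-envelope check using the per-appliance throughput quoted above (15.25TB/hour for the largest model – the data sizes and appliance counts here are illustrative pairings, not vendor sizing guidance):

```python
# Back-of-envelope check of the "constant backup window" claim:
# when appliances are added in proportion to data growth, the window
# stays flat because ingest bandwidth scales with capacity.

PER_APPLIANCE_TB_PER_HOUR = 15.25   # largest model's quoted throughput

def backup_window_hours(data_tb: float, appliances: int) -> float:
    return data_tb / (appliances * PER_APPLIANCE_TB_PER_HOUR)

# Illustrative pairings: data grows 32x, appliance count grows 32x.
for data_tb, appliances in [(100, 1), (800, 8), (1600, 16), (3200, 32)]:
    w = backup_window_hours(data_tb, appliances)
    print(f"{data_tb:>5} TB on {appliances:>2} appliances: {w:.1f} h window")

# 32 appliances deliver 32 * 15.25 = 488 TB/hour, matching the quoted
# maximum system throughput.
assert abs(32 * PER_APPLIANCE_TB_PER_HOUR - 488) < 1
```

Each scaled-up configuration yields the same window as the single-appliance case, which is the point of the scale-out design.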

That potential for global replication is matched by a global ExaGrid support team composed of only level 2 engineers. The company has more than 4,000 active installations across 80 countries, ranging from the Salvation Army to multiple financial institutions to Los Alamos National Laboratory. Each installation has an assigned customer support engineer, who also has expertise in the specific backup software platform that installation relies on. "We have no gatekeepers, no junior level 1 techs," says Andrews. "Our customers don't go through any rotation. They are able to work with the same person all the time."

Is this all enough to ensure your data is never attacked? No – threat actors are relentless, and the potential rewards are too great. As Andrews explains, "We're not going to prevent the ransomware attack, because that happens on the primary network." So, organizations still need intrusion prevention systems, virus scanners, and the full gamut of modern security tools. Even then, the odds are attackers will still, on occasion, make it through and compromise data.

But, as Andrews says, "We have customers who have been attacked time and time again, every week. And they're able to restore their data from the ExaGrid system. We're saving a lot of IT professionals and their organizations by enabling them to restore after an attack. It's a great thing."

Sponsored by ExaGrid.
