Why ‘RTO’ is Key to Business Continuity

If you have never seen or heard the term ‘RTO’ in the context of your business continuity plans or tests, then this will give you a solid next step to ensure that you’re in a good position. Unfortunately, nearly 80% of all SMBs are in the same boat, a gap that criminal organizations have exploited, and continue to exploit, using ransomware to make money. Lots of it.

To paraphrase an old tech adage: “if you can’t recover quickly, then it’s not a backup.”

What is RTO?

Recovery Time Objective, or RTO, is the time it will take to restore business operations after any event of downtime, whether caused by hardware failure, ransomware infection, software errors, human error, or natural disaster.

Unfortunately, for many businesses, the problems that arise when RTO is not a key component of the plan aren’t realized until it’s too late. Many organizations have found this out over the last few years because of the ever-growing threat of ransomware attacks.

Many businesses with preventive measures and backups in place end up in a bad situation because their plan didn’t factor in the recovery time for restoring production databases or mission-critical applications. Read our Tale of Two Ransomware Victims for more info.

What is business continuity and what role does RTO play?

Business continuity is the ability of a business to remain in operation despite risks, events of downtime, and disasters. By the numbers, 80% of businesses experience some type of unplanned downtime. Of this total, some experience catastrophic outages that knock them offline for 3-5 days, and a portion of these never recover, ultimately going out of business as a result of the outage.

Simply put, RTO is Business Continuity.

A proper business continuity plan includes:

  1. Identify potential downtime risks
  2. Evaluate the business impact of those risks
  3. Identify ways to prevent those risks
  4. Identify ways to recover from downtime
  5. Regularly test those methods against specific risks
  6. Regularly re-evaluate risks & methods

Your prevention and recovery needs are directly based on the evaluation of risks. Such an evaluation is known formally among Project Management Professionals (PMPs) as a “risk register.” Don’t worry, it sounds like more work than it is.

It’ll actually save you time and ensure that all your bases are covered by helping you understand your critical systems and their dependencies.

Evaluating Your Risks

Evaluating risks can start pretty general and become more specific as you get closer to making buying decisions. For example, the table below, developed by American Precision Industries, focuses on recovery at a system level.

Application/Data/System | Impact | Chance | Risk Factor | Recovery Plan
CAD application server | 99% | 100% | 99% | Infrascale Disaster Recovery replicating from site A to site B. Local boot for testing or individual machines. File recovery readily available from either site. Spare hardware required in the event of hardware destruction. Restore time is less than 20 minutes once hardware is available for recovery.
Machine Tools | 100% | <1% | <1% | N/A. These units are closed systems.
CAD files | 80% | 100% | 80% | Files are protected by Infrascale Disaster Recovery, replicated to a secondary DR appliance, and available for restore within minutes. Files can be recovered to any USB device to then be fed to the machining tools’ systems.
Payroll DB | 60% | 100% | 60% | Infrascale Disaster Recovery replicating from site A to site B. Local boot available for recovery in less than 10 minutes. Production recovery time dependent on available hardware, less than 20 minutes once available.
Customer/Order DB | 80% | 100% | 80% | Infrascale Disaster Recovery replicating from site A to site B. Local boot available for recovery in less than 10 minutes. Production recovery dependent on available hardware, less than 20 minutes once available.
CAD user endpoints | 70% | 100% | 70% | Systems are backed up centrally and covered in DR backups onsite and replicated to the secondary. Endpoints can be restored within 20 minutes once hardware or a VM is available.


The table above shows the impact to the business in terms of “how much of the business will be inoperable if this system goes down?” with the chance of that system experiencing downtime (all risks included), and the risk factor, which is the product of Impact and Chance. The rule of thumb is to pay close attention to any Risk Factor over 10%.
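The risk-factor arithmetic above is simple enough to sketch in a few lines. The entries below mirror a few rows from the table (expressed as fractions rather than percentages); the names and threshold check are illustrative, not a prescribed tool:

```python
# Hypothetical risk-register entries: (impact, chance) as fractions of 1.0.
systems = {
    "CAD application server": (0.99, 1.00),
    "Machine Tools":          (1.00, 0.01),
    "Payroll DB":             (0.60, 1.00),
}

def risk_factor(impact, chance):
    """Risk factor is the product of impact and chance, as a fraction."""
    return impact * chance

# The article's rule of thumb: pay close attention to anything over 10%.
for name, (impact, chance) in systems.items():
    rf = risk_factor(impact, chance)
    flag = "ATTENTION" if rf > 0.10 else "ok"
    print(f"{name}: risk factor {rf:.0%} -> {flag}")
```

Ranking systems by this product makes it obvious where to spend your recovery budget first.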

Once all systems are listed and evaluated, you can begin weighing options for disaster recovery solutions and Recovery Time Objectives. This will ensure that you have the plan you need rather than one that does “too much” or, even worse, “too little”.

You can also add specific uptime goals for specific systems, like this:

Application/Data/System | Hardware | OS | RTO, Uptime
CAD application server | IBM compatible | Windows | <12 hours, 99%
Machining Tools | Proprietary | Proprietary | NA, 99.9%
CAD files | IBM compatible | Windows | <12 hours, 99%
Payroll DB | IBM compatible | Windows | <24 hours, 99%
Customer/Order DB | IBM compatible | Windows | <24 hours, 99%
CAD user endpoints | Various | Windows | <12 hours, 99%
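When setting uptime goals like those above, it helps to translate the percentage into an annual downtime budget, so you can check that the stated RTO even fits inside it. A rough sketch:

```python
# Convert an uptime goal into an annual downtime budget, to sanity-check
# whether a stated RTO is compatible with the uptime target.
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_budget_hours(uptime_pct):
    """Hours of downtime per year permitted by a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

print(f"99%   uptime allows about {downtime_budget_hours(99):.1f} h/year")
print(f"99.9% uptime allows about {downtime_budget_hours(99.9):.1f} h/year")
```

In other words, a 99% target leaves roughly 87.6 hours of downtime per year, so a <12-hour RTO per incident is plausible; a 99.9% target leaves under 9 hours, which demands much faster recovery.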


The benefit of this preplanning far outweighs any time saved by skipping it and “hoping” it’ll be enough. Every year, thousands of businesses discover that “hope” is a poor plan when something takes their business out of operation and they scramble to get back online.

Unfortunately, when it comes to recovery, there are no second chances.


Announcing the 2017 Gartner Magic Quadrant for DRaaS

This week, Gartner released its annual Magic Quadrant report on Disaster Recovery as a Service (DRaaS). While Infrascale was named a Leader in this evolving space – an accomplishment of which we’re fiercely proud – there are other notable market shifts in the report.

We believe Gartner’s own scoring criteria, for example, can be very telling in terms of where the market is heading and the types of questions they’ve received from their enterprise clients. This is Gartner’s third MQ for DRaaS and comes at a point of maturation in the DRaaS space, when the innovators can better differentiate and separate themselves from the pack.

But first, let’s start with some background on Gartner’s Magic Quadrant (or MQ, as it’s more commonly called) and the Critical Capabilities report. The Gartner MQ is widely seen as the ultimate bake-off for vendors in any given technology market. By applying a graphical treatment to a uniform set of evaluation criteria, the Magic Quadrant helps enterprise IT buyers quickly ascertain how well technology providers are executing their stated visions and how well they are performing against Gartner’s market view.

Gartner’s first order of business is defining the inclusion criteria and determining which vendors to include in the MQ.  The DRaaS market consists of hundreds of providers all with different approaches and capabilities to Disaster Recovery as a Service. This year, 24 providers met the inclusion criteria, demonstrating a 20% increase over 2016 and a 43% increase over 2015.

It’s Gartner’s aim to provide some objective guardrails for assessing the different vendors in the space. According to Gartner, “Some vendors have continued to build upon prior momentum, and some have pivoted in terms of strategic direction regarding DRaaS in their portfolios. Some did not put forth the level of investment or make the progress expected in the past twelve months; while yet others have made investments or acquisitions but need time to further mature and capitalize on them.”

This is how the Gartner MQ shook out this year.

Gartner DRaaS Magic Quadrant 2017


If I were an enterprise buyer of DRaaS, I would not instantly dismiss any of the vendors based on their location in the MQ.  While selecting a vendor from the “leader” quadrant is always going to be a solid choice, there are situations where it might make sense to consider other vendors from the non-leader quadrants. Vendors featured in the Leader quadrant may have more complete technology, but they can sometimes be expensive; while vendors in the Niche quadrant may have new technology that is ideal for a specific use case or audience.

It’s also important to pay attention to the relative moves of vendors from one year to the next within the MQ.  I believe this could be a signal of a change in vision, investment and execution relative to their competitors.  For example, vendors that are investing in the DRaaS category, both in terms of R&D and strategic focus, should result in an increase of “completeness of vision” and “ability to execute”, moving up and to the right year-over-year.

If I were a data center manager, I would also review Gartner’s scoring algorithm. According to Gartner, “What were once differentiating attributes only a couple years ago are now considered mere table stakes. Meanwhile, customer expectations have increased with respect to the ability to perform more granular recovery of workloads and address a variety of triggers that cause disasters, including ransomware.”

Understanding which solutions performed best based on the criteria that you care about should also be factored into your decision.

I would also recommend checking out Gartner’s lesser-known but essential companion report, the Critical Capabilities report. This report provides deeper insight into providers’ product and service offerings by extending the Magic Quadrant analysis, and allows would-be buyers to further investigate product and service ratings based on key differentiators. In the case of DRaaS, each vendor was stack-ranked on 15 criteria across four specific use cases:

  • Low complexity customer environment
  • Medium complexity customer environment
  • Small enterprise complexity customer environment
  • Mid Enterprise complexity customer environment

Collectively reviewing these reports, and the year-over-year shifts, should help you arrive at a solid short list of vendors to consider.  Then it’s just a matter of fit, budget, and vendor rapport.

To get your copy of this year’s Gartner MQ for DRaaS, click here.

How do you think these vendors stacked up? Let us know via email at team@infrascale.com; we look forward to hearing your thoughts.



GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

Tape Backup and Disaster Recovery Just Don’t Mix

This isn’t going to be a finger-wagging post, but let’s look at some jaw-dropping stats.

A 2016 study of 400 IT professionals found that many organizations are still using an outdated approach to backup and disaster recovery. This includes 36% of respondents who do not perform data backups at all and 44% of respondents who still rely on external drives (like tape) for their backup needs. I’m not going to waste my time on the 36% — they’re sitting ducks for ransomware and will likely suffer significant downtime with any sort of server or site-wide failure.

I do want to speak to the 44%.  I know there’s a subset of folks within this group that are salvageable. To this group, let me share the story of CenCal Health – a company squarely in the 44% crowd.

CenCal Health needed a modern backup solution that would centralize all their data under one system. In light of ever-looming ransomware attacks, fast failover was also a critical consideration. CenCal needed the solution to protect all the key applications running in their mixed (physical and virtual) Windows environment, including MS Exchange and SQL databases. They also required the data to be encrypted, both in transit and at rest, for HIPAA compliance. Another key requirement, given their history with tape backups, was the ability to meet more stringent RTO standards, measured in minutes, not days.

CenCal Health recently made the move to DRaaS and hasn’t looked back.  Check out the table below and their case study at www.infrascale.com.

DRaaS enables organizations of every shape and size to benefit from near-instant data recovery – the type of high availability historically reserved for the largest of enterprises. And organizations like CenCal Health can leapfrog from antiquated tape backups to an industry-leading failover solution – quickly, securely and affordably.

As noted in last year’s post, “Is DRaaS the Next Leapfrogging Technology?” (Nov 2016), tape backup still has its place, but the use cases are dwindling. Tape backups still suffer from media corruption, inaccessibility, and slow recovery times.

With today’s increased threat of ransomware and cost of downtime, tape backup becomes an impractical, outdated and risky form of disaster recovery. IT professionals who are increasingly embracing the cloud and virtualization must now embrace these same technologies to protect their organizations from modern threats with always-on availability and business continuity.

What’s your data protection plan of choice? Email us at team@infrascale.com to weigh in; we look forward to hearing from you.

Kicking the Cyber Security Can down the Road

Over the weekend, a WannaCry decryption tool was released by parties unknown. While the tool has saved some people, it’s not always effective.

According to The Hacker News, Adrien Guinet, a security researcher at Quarkslab, found a flaw in the way WannaCry operates that allowed him to create a decryptor. The WannaCry ransomware generates a pair of keys on the victim’s computer – a public and private key for encryption/decryption – which rely on prime numbers. Although WannaCry erases the keys from the system, forcing the victim to pay $300 in Bitcoin to the cybercriminals, there’s a catch. Guinet says that the malware “does not erase the prime numbers from memory before freeing the associated memory.”

This flaw is what made the decryption tool possible. To use it, the key material must still be present in the infected machine’s memory.
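To illustrate the principle (this is a textbook RSA sketch with toy-sized numbers, not WannaCry’s or Guinet’s actual code): if the two primes survive in memory, the private key can be rebuilt from them, and with it the data decrypted.

```python
# Illustrative only: RSA private keys are derived from two primes p and q.
# If those primes can be recovered from memory, the private exponent d can
# be reconstructed -- the principle behind the WannaCry decryptor.
def rebuild_private_key(p, q, e):
    """Given the two recovered primes and public exponent e, rebuild (n, d)."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)  # modular inverse of e mod phi (Python 3.8+)
    return n, d

# Toy primes for demonstration; real keys use primes of ~1024 bits each.
p, q, e = 61, 53, 17
n, d = rebuild_private_key(p, q, e)

message = 42
ciphertext = pow(message, e, n)          # "ransomware" encrypts with the public key...
assert pow(ciphertext, d, n) == message  # ...the rebuilt private key decrypts it
```

The hard part in practice, of course, is that the primes are only recoverable while the process memory has not been reused or cleared, which is exactly the point the next paragraphs make.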

This is great, right?

This is where things get a bit fuzzy. Per our own ransomware guidance, the first step in any ransomware attack is to isolate the infected machine and confirm that the ransomware is present. If it is, the best course of action is to power off the machine and begin the recovery of clean files and applications from your most recent “clean” backup (or better yet, spin up a clean VM to recover your apps in minutes).

Unfortunately, this is the problem. By powering off the infected machine, you will flush its memory, including the encryption key.


Backups and DRaaS are a huge help here. If you can get the decryption tool working, restoring the data set from a secondary location (as painful as that may seem) and simply running the decryption tool from that location will prevent the proverbial snake from biting its own tail, assuming the infection hasn’t spread further.

Stop Kicking the Can

Patching and decryption tools are important tools, but they don’t help you get on your front foot.

Companies of all stripes need a more comprehensive data protection plan that addresses the blocking-and-tackling security best practices, which include regular patching, AV protection, and offsite backup. But that’s not good enough. You should also train your users to recognize phishing attacks, since most ransomware attacks are still spread that way (WannaCry notwithstanding). Finally, you need a Plan B that empowers you to quickly recover your files and running systems when you’re infected. This is the sweet spot of DRaaS.

Over the weekend, The Economist  published a great article entitled: “WannaCry should make people treat cyber-crime seriously.”  This quote struck a chord with me:

“Despite the flurry of headlines, WannaCry is not the worst malware infection the world has seen. Other worms—Conficker, MyDoom, ILOVEYOU—caused billions of dollars of damage in the 2000s. But Bruce Schneier, a noted independent security expert, points out that people seem to have a fundamental disregard for security. They frequently prefer to risk the long-term costs of ignoring it rather than pay actual cash for it in the present.”

No one cybersecurity company has THE answer.  Instead, modern businesses must rely on a best-of-breed multi-prong approach.  So, if you’re looking for long-term guidance, looking for ways to address the problem here-and-now and reduce the protracted downtime associated with most ransomware attacks, let’s talk. Soon.



Aaron Jordan, Infrascale Sales Engineer

Aaron Jordan is a Sales Engineer and Sr. Technical Support Manager at Infrascale, maniacally focused on helping our customers eradicate downtime, data loss and ransomware.

How WannaCry just made You a Bigger Target for Ransomware

If your only takeaway from the WannaCry ransomware attack is “gosh, we need a better patch management process” or “maybe, it’s time to move off these old operating systems,” then you’re probably a soft target for the next attack. This watershed moment signals a major and important shift in the evolution of ransomware.

As a disaster recovery professional, I keep a close eye on all events that cause data-loss or prevent people from doing work (downtime), so we can modify our products or better educate people on how to protect themselves. Since 2015, ransomware has been the number one threat to downtime. And to me, the biggest threat born from the May 2017 WannaCry ransomware attack is the false sense of security many people may feel after they’ve patched their Microsoft systems.

The organization(s) involved in the WannaCry campaign weren’t so unique from other ransomware campaigns: they paired an exploit kit with ransomware to gain access to systems, encrypt data and collect payments.

What separated WannaCry from the pack was how they acquired the exploit itself (i.e., from the National Security Agency, the NSA) and the sheer size of the campaign.

I’m going to put aside the obvious point that the NSA and other government organizations need to seriously wake up when it comes to their own security, and focus on how this sets the tone for the future of cybercriminal organizations using ransomware. Here are three key takeaways from the WannaCry pandemic:

  1. There will be more, many more. Anytime someone starts making money, other people join in. When someone makes a lot of money, the market floods with new actors trying to snatch a piece of that pie. The growth of ransomware was already accelerating, but the massive success of WannaCry will surely signal even more growth and beckon new criminal organizations to join the fray. As Bruce Schneier says: “Criminals go where the money is, and cybercriminals are no exceptions.”
  2. Expect more, larger-scale campaigns. WannaCry succeeded as a global campaign despite some junior execution errors, including typos, grammar mistakes, and a kill switch left in the code (which effectively neutered the ransomware once someone registered a cryptic domain name for $10.96). No doubt a better, more coordinated campaign at the same scale will happen; it’s probably already being planned, and it will wreak significantly more havoc.
  3. Increased demand for Ransomware-as-a-Service. To date, most ransomware campaigns use exploits that take advantage of known issues found in recent patch notes for operating systems and firewalls, or that simply target common gaps in poorly managed IT environments with loose user-account controls (UACs). WannaCry’s success with a weaponized worm and a stolen operating system exploit has certainly increased demand for professionally (criminally) developed and/or stolen exploits.

Fortunately, these trends should not change your security and business continuity game plan: you still need the ability to recover clean systems and data quickly, whatever the cause of downtime.


Such solutions exist, and the good ones will protect you from a broad range of downtime threats, including ransomware, hardware failures, software errors and natural disasters.

Consider again the words of security expert Bruce Schneier:

I’ve never figured out the fuss over ransomware…the single most important thing any company or individual can do to improve security is have a good backup strategy. It’s been true for decades, and it’s still true today.[i]

The implication here is that a ‘good backup strategy’ includes a good recovery plan, which depends on your business needs and just how quickly you need to recover full systems and/or files in the wake of an attack or server outage. If you can reliably and quickly recover your systems, you’ve completed the most crucial part of your ransomware preparation and helped shrink the target on your back.



Derek Wood

The optimist in me expects people to learn and act, the pessimist expects the wrong lessons will be learned with no action, so the realist writes and educates.



[i] Bruce Schneier Blog, June 16, 2008

How to be an IT Hero by Reducing Downtime to Minutes

Let me set the stage.

I head up the Marketing department here at Infrascale and am not as technically savvy as I probably should be.  So, when technology breaks I tend to get a little whiny and demand instant help from my IT team.

I’m sure the tech team at every organization must cope with us non-technical Muggles as we navigate new technology and demand always-on availability.

This past week, I was having an impromptu online meeting when the line went dead halfway through the conversation.  We had lost Internet connectivity and, after a few choice “driving words,” I quickly contacted our IT support team pleading for an immediate resolution.

Keep in mind, we’re a disaster recovery company. This is what we do for a living. Now, most people wouldn’t consider this a disaster, but in my mind it at least qualifies as a micro-disaster.

Sergio, our intrepid IT support specialist, discovered that this wasn’t an isolated event (translation: not user error), as other users were also unable to connect to our wi-fi network. As Sergio began to tick off the steps of his standard troubleshooting procedure, he realized that our DHCP domain controller had frozen. Perhaps Sergio felt my withering gaze, but he knew that he couldn’t wait to complete his normal troubleshooting protocol. Sergio knew that the only way he could possibly get us back online was to boot a virtual machine of the domain controller from a recent backup stored on our local appliance. Finding the root cause of the problem could wait. Job #1 was to get us back online.

Sergio logged into our Disaster Recovery appliance – just like we instruct our partners and customers to do — and booted the domain controller from the most recent backup.  This enabled us to get back online within a minute or two.  I quickly resumed my online meeting.  No harm done.

When everyone was back online, Sergio was then able to focus on resolving the root issue which ended up requiring a forced reboot and 15-20 minutes of rolling back and reconfiguring Windows updates. These were important minutes that I didn’t have to spare.

Because of Sergio’s swift decision making, he reduced our downtime from around 30 minutes to just five minutes – simply by following our own guidance.

Think about how many of these micro-disasters happen to organizations every day.  Think about the business costs, lost productivity, and opportunity costs if you aren’t able to open emails for 30 minutes, four hours or even a day?   These micro-disasters add up, and significantly impact an organization’s bottom line.

But thankfully, there are now DRaaS solutions that let IT pros like Sergio step in and save the day.  You’ll still have to contend with us non-technical Muggles, but now you have a powerful and affordable tool in your arsenal to combat downtime, data loss, and us whiny executives.

99 problems but Automated Orchestration Ain’t 1

When it comes to your IT responsibilities, being able to recover data and maintain business continuity is among the most critical. But there are tradeoffs to weigh between cost, convenience and benefit.

That’s why “DR tests” have become much maligned: they require too much work, time, and effort to execute properly. Fortunately, technologies like disaster recovery as a service (DRaaS) and CloudBoot Orchestration can alleviate the time, cost and complexity required to perform system recoveries.

01. Why are some DR tasks dreaded?

It truly comes down to a classic cost-benefit analysis. Dreaded tasks are the ones with high costs and moderate-to-high benefits, especially when those benefits are not immediate—think about how often most people floss. When you look at a routine IT task like ‘run a site-wide recovery test,’ it’s obviously valuable, albeit not immediate, but requires a lot of upfront work and time – two commodities in increasingly short supply.

At Infrascale, we’re trying to make the process of scheduling and conducting DR tests simple. Brain-dead simple.  And our new drag and drop orchestration is the next step in that evolution.

[Chart: the cost-benefit of DR testing, comparing convenience, cost, benefit, and test frequency]

02. Simplifying Recovery

Recovery of a full network can be a complicated affair. The administrator needs to be aware of the dependencies of all the applications and machines in use to get a business back and in production.

Usually, this knowledge is shared tribally among the techs, or the expertise is simply assumed to be in place when disaster strikes. But this requires the system admin to deduce the recovery steps (and the system dependencies) by looking at what’s going on in the network. That’s a pretty tall order, especially when the disaster is highly visible, stressful, and revenue-impacting. This is not a good recovery plan and leaves too many opportunities for human error to creep into the equation. A recovery plan should make the steps to recover your systems second nature, no learning needed.

With DRaaS-based orchestration, the administrator can predefine the order in which machines are recovered, and can add time-delays between specific machines or groups of machines to allow for additional tasks to be performed or to accommodate application/database load times.

This means that any technician tasked with recovering a full site can reduce many steps that once required pre-existing knowledge to a single series of documented steps that is simple, straightforward, and transferable.
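The orchestration idea above can be sketched in a few lines. The plan structure, machine names, and delays below are illustrative, not Infrascale’s actual API: boot groups in dependency order, pausing between groups so databases finish loading before the applications that depend on them.

```python
import time

# Hypothetical recovery plan: ordered groups of machines, with a pause after
# each group so dependencies (DNS, databases) settle before the next boots.
RECOVERY_PLAN = [
    {"group": ["domain-controller"],            "delay_after": 0.1},
    {"group": ["sql-server", "exchange"],       "delay_after": 0.1},
    {"group": ["app-server-1", "app-server-2"], "delay_after": 0.0},
]

def run_recovery(plan, boot):
    """Boot each group in order, pausing between groups as configured."""
    booted = []
    for step in plan:
        for machine in step["group"]:
            boot(machine)          # in real life: spin up the VM from backup
            booted.append(machine)
        time.sleep(step["delay_after"])
    return booted

order = run_recovery(RECOVERY_PLAN, boot=lambda m: print(f"booting {m}"))
```

Because the ordering lives in the plan rather than in someone’s head, any technician can execute (or test) the same recovery the same way.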

03. Audits, SLAs and Trust

When recoveries and test recoveries become convenient, high-value tasks, the pain normally associated with audits falls by the wayside.

Easier testing means you can test more granular scenarios and ensure that your dependencies have all been mapped out and tested successfully. For example, you can simulate a ransomware attack that has infected all of your end users, the local backup files, mission-critical databases and your file servers. You can even simulate a RAID failure on the hardware running vSphere, or practice restoring an important presentation within an hour from a CEO’s laptop crushed by the TSA. Now you have the time, before an actual disaster strikes, to play out these scenarios, uncover any ‘gotchas’ and maintain your service agreements.

Regular testing and role playing translates into a smoother and more confident approach to real-life disaster scenarios. Click here to see how our Drag & Drop Orchestration simplifies your disaster recovery plans and system recoveries.

A Tale of Two Ransomware Victims

I have often touted how DRaaS should be deployed to mitigate the damage, downtime, and data loss associated with a ransomware attack. But when one of our partners, Pervasive Solutions, had two clients hit by ransomware within 24 hours, it offered powerful real-world evidence of the power of DRaaS.

You can read the entire case study here: Pervasive Solutions Case Study. But, here’s the abridged version.

Pervasive Solutions is a Victor, New York-based managed services provider, information security consulting firm, and Infrascale partner. Just after Thanksgiving last year, two of their clients – one a retailer and the other a local manufacturer – were targeted with ransomware. The actual company names have been obscured since these companies don’t want the fact that they were infected to be on the public record. This is pretty typical, and is why many experts believe the ransomware threat to be much larger and more pervasive (pun intended) than has been reported.

What makes this story especially interesting is that the manufacturer was protected with Infrascale Disaster Recovery; the other wasn’t.  And this distinction made a huge difference when it came to Pervasive’s ability to recover their clients’ data and systems.

Here’s a quick snapshot of the results:


The key stat in this table is the amount of time required to get each client fully operational post infection. With our DRaaS solution, Jason Miglioratti, Director of Managed Services at Pervasive, was able to restore the manufacturer’s systems in less than an hour. Without a commercial grade DRaaS solution, it took Jason and his team two full weeks to get the retailer back online and fully operational.

For many companies, being down for a few days would be catastrophic. Thankfully, the retailer was able to weather this storm. But it’s clear that all businesses need to bake operational resiliency into their IT infrastructure. Without a data protection and recovery strategy in place, organizations are leaving themselves wide open to significant financial and reputational loss.

Thankfully, Pervasive Solutions is part of a new and emerging breed of MSPs that are going well beyond reselling IT solutions. They’re educating their clients that data protection requires a four-pronged approach which includes user education, strong security systems (e.g., AV, firewall, email-filtering, application white-listing, etc.), cloud-based disaster recovery, and regular DR testing.

Just as importantly, SMBs need to wake up and start asking important questions. How soon could our company get back up and running if it gets infected by ransomware? How quickly can you isolate and halt the spread of an infection? What would you do if your production database got encrypted? Would you be willing to pay the ransom?

If you want to start getting some answers to those questions, talk to us or one of our trusted partners.


How Ransomware is Beating your Backup

Traditional approaches to backup and DR simply don’t work against ransomware

It’s been over two hours since ransomware hit your business; you still have no update from your techs, and none of your employees can work.

After what seems like an eternity, your technician emerges with a not-so-confident look and sheepishly admits “the problem is that the ransomware has infected your backups. I’m doing what I can to see how far back we can recover, but it doesn’t look good. We should begin setting up a bitcoin account in case we can’t recover from the backups within the next 15 hours, which is the amount of time we’ve been given to pay or they’ll delete the encryption key for good.”

You’re overcome with mixed emotions. You’ve been violated. You’re mad as hell. You’re unsure whether you’ll get your data back even if you pay the ransom. As you go through the phases of grief, you dwell on the effects beyond the business, into your personal life. Your head clears enough for you to start asking yourself how this came to be.

You did everything you thought was going to keep you safe, didn’t you?

  • You paid for a business-grade backup system
  • Your backups were regularly tested to make sure that they’re working properly
  • Your backup drives were refreshed to protect against hardware failure

Why then? How did ransomware beat the system that was supposed to save you?

This is not uncommon. In fact, in North America alone, over $1 billion was paid in ransoms over the course of 2016 in scenarios just like this one. 2017 is predicted to be worse. Much worse.

Here are four reasons why your backups didn’t save you:

One. These are criminal organizations and attacks are not random.

They purposefully design their viruses and exploit kits to maximize the success rate of collecting ransom payments. They use social media, and even your own website, to figure out how best to penetrate your business. Who works there? What servers and services do your users and business rely on?

Two. Ransomware attacks are increasingly targeting your critical applications.

Earlier malware was largely covert, quietly stealing data for as long as possible without being discovered. In 2015, ransomware targeted users by encrypting files on individual machines before presenting clear instructions for payment.

By 2016, ransomware operators began targeting businesses, using your employees as entry points before accessing and encrypting critical applications (e.g., your Exchange server, SQL servers, Oracle databases) on your network, locking you and your users out with strong encryption algorithms.

Any application, service, or network location with heavy traffic becomes a major target because the impact of downtime is heightened, increasing the value of the data being held hostage and, therefore, the likelihood that you’ll pay the ransom.

Three. Backup systems are their kryptonite, and are their top priority.

They know that a business that can recover its data and critical systems on its own has little reason to pay. Therefore, these operators target backup files as a top priority before triggering their virus to encrypt files and display a ransom notice.

If backup and/or DR files are stored on a network-accessible drive, the ransomware viruses will be able to locate them.

Typical backup programs write files in a proprietary or common format. Known file-types are easy to search and discover once network access is gained.

In addition to file-type searches, ransomware kits will look at Volume Shadow Service (VSS) logs as an easy way to find where backups are being written since many backup services will use VSS to create backups for databases and other open files.

Once the location is discovered, only a short time stands between the virus and your critical applications and files.
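The discovery step described above is easy to reproduce from the defender’s side. Here is a minimal sketch (the share path and extension list are illustrative assumptions, not an exhaustive inventory) of the kind of scan you can run to audit what an intruder with network access would find:

```python
import os

# Common backup file extensions (illustrative list; extend for your tools)
BACKUP_EXTENSIONS = {".bak", ".vbk", ".vhd", ".vhdx", ".tib", ".bkf"}

def find_backup_files(root):
    """Walk a network-accessible path and list files with known backup extensions."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            if ext in BACKUP_EXTENSIONS:
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    # Hypothetical share path; point this at your own backup locations
    for path in find_backup_files(r"\\fileserver\backups"):
        print(path)
```

If this trivial walk can enumerate your backup files, so can a ransomware kit that has gained the same network access.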

Four. Backup systems typically store files on administratively accessible drives/locations.

Gaining network administrative access is a primary objective because it lets ransomware variants read and write data in the most critical locations on the network. With this access, they can encrypt the backup files themselves, meaning there is no way even to test-restore and check for infected files; the backup file itself is completely useless. That leaves a single option to recover the data: pay the ransom.

What can you do?

Get a cloud backup/DR system.

By moving backup/DR files to the cloud, you can at least recover a version from before the infection took place, since the virus cannot access and infect files already stored in the cloud.

You still have to download and recover the files to a safe location and test recoveries for individual file infections before moving to a production environment. This can take time, but at least you haven’t lost valuable information.
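The recovery logic here is simple in principle: pick the most recent cloud version taken before the infection. A minimal sketch (the version list, IDs, and timestamps are all hypothetical; real backup products expose this through their own catalogs or portals):

```python
from datetime import datetime

# Hypothetical backup catalog: (timestamp, version_id) pairs
versions = [
    (datetime(2017, 3, 1, 2, 0), "v101"),
    (datetime(2017, 3, 2, 2, 0), "v102"),
    (datetime(2017, 3, 3, 2, 0), "v103"),
]

def last_clean_version(versions, infection_time):
    """Return the most recent version taken strictly before the infection."""
    clean = [v for v in versions if v[0] < infection_time]
    return max(clean)[1] if clean else None

infection = datetime(2017, 3, 2, 14, 30)  # when the ransomware struck
print(last_clean_version(versions, infection))  # prints: v102
```

Everything from the chosen version forward is lost, which is why the gap between your backup interval and the infection time directly determines how much work you have to redo.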

Get an enterprise grade Disaster Recovery as a Service (DRaaS) solution.

A proper DRaaS solution will lock administrators and intruders out of the storage used for the backups and DR files while still being stored on the network. Management access to these files is only granted through the software/portal given to you by the solution provider and no level of network administrative access will allow viruses like ransomware to infect the actual backup files.

A cloud DRaaS solution in which all backups are replicated offsite allows much faster recovery: entire machines can be booted in the cloud, letting your users keep working while a production environment is prepared for final recovery.

What a ransomware experience should be…

It’s been roughly 30 minutes since your tech began investigating. All critical servers have already been failed over to the cloud and verified to be virus free. You’ve been given an estimate of roughly 1 hour before your users will be reconnected and ready to work. You tell your staff to take an executive lunch but to be ready for work upon their return.

Infrascale + Google Cloud: Faster Failover Starts with a Faster Cloud

Today, we’re announcing a partnership with Google that pairs our cloud backup and disaster-recovery-as-a-service solutions with the Google Cloud Platform. I know it’s a cliché in any partner announcement that one plus one equals three, but I think in this case there’s real proof in the pudding.

The Problems We’re Trying to Solve

Let’s start with some jaw-dropping stats that are impacting organizations of all sizes, but that are especially devastating to SMBs, which often lack the IT resources and manpower to defend against and quickly recover from prolonged periods of downtime.

  • Ransomware: Ransomware is the number one cyber threat on the planet. 70% of businesses hit by ransomware paid the hackers to regain access to systems and data. Of those attacked, 20% paid over $40,000 to retrieve data, while more than half paid more than $10,000. Source: IBM X-Force’s Ransomware, December 2016.
  • Downtime: At an estimated $700 billion in losses per year, the average cost of IT downtime is about $9,000 per minute for most midmarket enterprises (this estimate obviously varies by the size and type of organization). Consider that complete unplanned outages, on average, last 66 minutes longer than partial outages. Do the math and the impact is scary. Source: Ponemon Institute, Cost of Data Center Outages report, January 2016.
  • Data Loss: Data loss statistics can be chilling. Studies suggest that nearly 3 out of 4 companies lose critical data every year – from mission-critical software applications to virtual machines to critical files. Adding insult to injury, more and more organizations are being breached by cybercriminals, and no location, industry, or organization is immune from attack. Source: The Cost of Server, Application, and Network Downtime: North American Enterprise Survey and Calculator, IHS Inc., January 2016.
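To make the downtime math above concrete, here is a quick back-of-the-envelope calculation using the figures cited (the 4-hour outage duration is an illustrative assumption, not a figure from the report):

```python
COST_PER_MINUTE = 9000  # average midmarket downtime cost, per the figures above

# A complete outage averages 66 minutes longer than a partial one,
# so that difference alone carries a six-figure price tag.
complete_vs_partial_penalty = 66 * COST_PER_MINUTE
print(complete_vs_partial_penalty)  # prints: 594000

# Illustrative: a hypothetical 4-hour (240-minute) complete outage
four_hour_outage_cost = 240 * COST_PER_MINUTE
print(four_hour_outage_cost)  # prints: 2160000
```

Even at a fraction of these averages, a few hours of downtime can dwarf the cost of a backup and DR solution.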

To help address these threats to uptime, we’re integrating our data protection solutions with Google Cloud.

Backup to the Google Cloud

Infrascale has always offered broad OS support and a variety of cloud targets – whether they be public or private clouds. Now, Infrascale customers and the partners who serve them can replicate to Google Cloud.  Google Cloud Platform lets you focus on what’s next for your business and frees you from the overhead of managing infrastructure, provisioning servers and configuring networks. This powerful infrastructure improves the performance of backup and recovery for everyone.

Disaster Recovery in the Google Cloud

Infrascale Disaster Recovery protects your organization against server failures, site-wide disasters, and even ransomware attacks. It delivers guaranteed 15-minute failover of mission-critical applications in the event of a minor or major crash. In fact, our own testing for VM failover within the Google Cloud is lightning fast and measured in seconds, not hours.

You are probably skeptical, and justifiably so. But this is the type of performance you have to experience for yourself. Given the opportunity, we’ll combat your skepticism with real results, real performance, and real proof. The pairing of our leading-edge CloudBoot technology with the Google Cloud Platform delivers eye-popping boot speeds. You can read about our failover technology, but a big part of the performance gain is the faster cloud that Google offers.

How can a public cloud deliver that kind of failover performance? 

There’s a lot of innovation that Google has baked into its data centers and worldwide fiber network that gives it a leg up on other cloud infrastructures, including:

  • Sub-second Archive Restore: Google Cloud delivers sub-second data availability and provides high throughput for prompt restoration of data. Competing systems take 4-5 hours to do the same data archiving tasks, offer substantially lower throughput, and often charge confusing and expensive fees for restore.
  • Global load balancers that scale to 1 million+ users instantly: Google Cloud’s built-in load balancers are part of a worldwide distributed system for delivering enterprise-class infrastructure to organizations, big and small — the same system that supports Google Maps, Gmail, and YouTube.
  • Faster boot times: Google Cloud Compute Engine instances boot in the range of 40-50 seconds, roughly 1/5th of the time required by competing clouds.
  • Reduced Latency: Google’s global network footprint, with over 75 points-of-presence across more than 33 countries, ensures you receive the same low latency and responsiveness customers expect from Google’s own services.

Continued Cloud Evolution

In the early days, cloud platforms like Google focused their efforts on attracting start-ups and young, agile companies that were ripe for the cloud. This made sense, as their platforms offered these companies a quick and easy alternative to conventional, on-prem IT—as well as the ability to scale their operations without a lengthy procurement process.

But now the tables are turning. Midmarket and enterprises are waking up to the benefits of on-demand, pay-as-you-go infrastructure and following in the footsteps of the early adopters. That’s why this partnership is so exciting to us. We’re combining the speed and innovation of our own failover services with the power and performance of the Google cloud to protect an organization’s most valuable assets – uptime and data.

This partnership gives us the opportunity to equip organizations of all sizes with much improved operational resiliency and ransomware insurance that’s affordable, simple, and secure.