We put the A in ITRA

You are going to be hearing a lot about ITRA in the near future. It is a term that industry pundits devised to denote the next evolution of DRaaS: Information Technology Resilience Assurance. According to the Oxford dictionary, assurance means “a positive declaration intended to give confidence; a promise.” When applied to a description of a solution, it would follow that the word “assurance” denotes a promise of performance. Yet most solutions labeled as ITRA solutions today lack any promise to perform, or any SLA specifying the expected performance and timeframe for full recovery.

Today, businesses want the ability to maintain acceptable service levels even in the event of severe disruptions to their applications, data and IT systems. This means not waiting around for a disaster to occur, but rather incorporating early detection of events that may lead to downtime along with automated processes to mitigate any damage and minimize their impact on uptime.  To meet the needs of the vast majority of businesses, this requires a solution that is affordable, automated and easy to use.

Infrascale takes the last letter very seriously. In fact, we believe the industry is applying the ITRA label too freely, to solutions that do not really fit the definition of the term. If you are buying assurance, you want a guarantee, not fluff. As the only vendor with a solution that includes a 15-Minute Failover Guarantee, we believe we put the A in ITRA.

2018 Proper Disaster Recovery Planning

The backup and disaster recovery industry has seen plenty of changes over the years, but with these changes has also come increased cost and complexity to most BDR environments. If you’re wondering why disaster recovery is so expensive and complex, you aren’t the only one. Most of us are left crossing our fingers and hoping whatever we have in place will work.

But the reality is that every modern company depends on data and operational uptime for its survival. There are no exceptions. Because of this, IT is tasked with finding an appropriate solution that is cost effective and works every time – guaranteed. This can prove to be quite a daunting task!

In order to avoid data disaster in 2018, a proper disaster recovery plan must be put in place. But protecting large data sets in a mixed environment isn’t simple or affordable with traditional DR solutions. It’s why every business thinks push-button failover is out of reach. Let’s break down the challenges of ensuring data and operational uptime, by looking at three key considerations in proper disaster recovery planning:

1. Compatibility

A proper disaster recovery plan includes a flexible solution that can meet your needs. Can it protect any OS and device? Can you store your data in any cloud? Can it be deployed as physical or virtual? These are some of the questions you want to ask in order to plan your DR strategy properly. Making sure that you find a solution compatible with your environment is a critical component.

2. Complexity

Simple is key. Push-button failover should mean exactly that – so your disaster recovery planning should allow you to failover to a second site in minutes or seconds (not hours or days). No additional IT resources needed! When considering the complexity of a solution, built-in orchestration is one of the key differentiators in DR solution providers. Proper disaster recovery planning includes failback, and it’s important to understand exactly how the solution plans to failover your applications, and then failback, in addition to how much customization and control you have in the whole orchestration process.

3. Cost

Look for a solution that is truly as-a-service, meaning no add-on charges or professional service fees. When keeping costs in mind, remember that no additional secondary site means no additional hardware costs. Planning your DR strategy with a provider who can give you a low, monthly subscription service means that everything is included — support, maintenance, unlimited testing, and hardware upgrades to name a few. A service solution provides the benefit of a single monthly subscription payment, without the unnecessary add-on fees.

Disaster recovery planning doesn’t need to make your head spin. Keep these three considerations in mind as you go about your strategy and implementation. Here at Infrascale, we minimize the risk of downtime at a price that will make your CFO smile. We allow IT to stop buying and managing disparate hardware and software to solve their DR needs. An administrative dashboard, accessible from any browser or device, makes it easy to recover mission critical applications and systems with push-button simplicity.

Spring Cleaning Starts Early in 2018 – Disaster Recovery Release v6.13.2


As part of our continued efforts to make using Infrascale a pleasant experience that simplifies your backup and disaster recovery lives, we’ve started spring cleaning early this year with the release of Infrascale Disaster Recovery (IDR) v6.13.2.

Again, big thanks to all of our partners and admins that helped report these issues and find resolutions.

Continue reading for a detailed explanation of the release or scroll to the bottom for the list.

For Those Protecting VMware Environments

Issues Running Hourly Backups of VMware

A large piece in the fight against data loss is your recovery point objective (RPO), or how frequently your backups run. More frequent backups mean your risk of data loss is reduced – that’s good. However, many partners trying to reduce that risk to a mere hour reported JVM crashes when running in VMware environments, requiring a manual full backup to resolve the issue. We’ve fixed this issue and are happy to say that you can now run hourly backups without concern.

VMware Snapshots Causing Storage Issues

Next were reports of production storage being consumed by left-over VMware snapshots. Occasionally, our system left these snapshots behind rather than cleaning them up, leaving some rather tedious work for admins. We’ve fixed the automated clean-up of these snapshots, so you’ll no longer run into this problem.

Appliance Disconnects from VMware After Reboot

Classic tech-support steps: is it plugged in? Try restarting it. In many cases, we found that primary appliances would not automatically reconnect to VMware after a reboot, meaning no backups would run. The result was an influx of monitoring errors that backup jobs were skipped, sending support into a frenzy. We’ve fixed this, so reboots no longer require a manual reconnection: once you update to v6.13.2, VMware will automatically reconnect.

Additionally, we improved memory usage during VMware backup, so your backups should perform a bit faster now with fewer memory peaks.

And Now, the Bulk of the Release

Remote Access Goes Dark After a Connection Interruption

You’re working on a recovery, test or real, and suddenly you lose remote access – cue heart palpitations and expletives. Previously, to reconnect you had to go to the primary and either restart the whole appliance or disable and re-enable remote access. That’s a huge problem if you’re not on site, which is most cases; it costs time, and time is money, especially in a real downtime scenario. We’ve added some logic on our end to prevent this from happening.

When accessing remote VMs after running cloud boot, admins would receive timeouts on sessions 7 and beyond. We’ve upped the limit on how many remote access windows you can open from the Dashboard at a single time. You’ll still want to keep an eye on the performance of your machine as you increase the number of sessions, but now you can launch as many as you like.

“Unknown” Status on Appliance Page

Similar to the remote access fix, those scary instances of “lost” appliances were also resolved. When a connection interruption occurred between the appliance and the cloud infrastructure, admins would simply receive a shoulder shrug from the dashboard – no monitoring data, no usage data, nothing – and the only fix was to reboot the appliance. We’ve both changed the behavior so that your appliance doesn’t disappear in such a case, and put in work to ensure that connections are more stable.

Backups Stop with error “VimSDK Error: Bad Parameters of Function”

There were some reports from the community of a “VimSDK error: bad parameters of function” that started to pop up. We found that the issue was caused when Windows provisions a disk with a partition larger than the disk, causing backups to fail with the aforementioned error. Our system now recognizes this occurrence and continues on with backups as before.

Can’t boot inconsistent NTFS Volumes

In the scramble after hard-resetting a production server, administrators will often need to run a system utility, ChkDsk, to put the system back into a consistent state. If admins didn’t get the chance before a backup ran, our system would be unable to boot that or any subsequent version. While we can’t make things nicer on the Windows side, we did add a pre-boot check for inconsistency, and, if necessary, we’ll run the ChkDsk utility so the boot performs as expected.

Primary appliance stops working if the secondary is running a different version

For paired-appliance setups that are not replicating to Infrascale’s cloud, there was no automated update on the secondary appliance upon updating the primary. This caused the backups to fail as well as any replications, loading up your ticketing queue with a ton of errors. We’ve now automated the update of the secondary appliance once you’ve updated your primary.

Sluggish Backups after a Firmware Update

In a few cases, we had reports of extremely sluggish backup performance after a firmware update. We found an error that moved a vital catalog off the solid-state drive (SSD) and onto the primary storage drives. While we can’t automate the fix, we have put in place a warning telling the administrator to contact Infrascale support so we can dig in and move the catalog back to the right spot.

Unable to Download Files via “Browse and Restore”

The granular file recovery from the cloud appliance didn’t work. This is obviously a super-critical issue and we commend both the reporter and our team for jumping on it ASAP.

Hyper-V Recovery Speed and Bandwidth Improvement

During a backup, we protect only the data that exists and make a note of the empty blocks on each volume. But during recovery, we were transferring these empty blocks anyway. Transferring one empty block isn’t so bad, but transferring millions of them can significantly impact recovery time and waste valuable download bandwidth. We’ve changed the behavior to simply no longer send the empty blocks, and instead instruct the recovery engine to provision as many empty blocks on each volume as there were when it was protected.
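To make the idea concrete, here is a minimal sketch of the technique described above – skip empty blocks during transfer and re-provision them locally on restore. This is an illustration only, not Infrascale’s actual recovery engine; the block size and function names are invented for the example.

```python
# Hypothetical sketch: during backup, note which blocks are empty and keep
# only the data blocks; during recovery, re-create the empty blocks locally
# instead of downloading them.
BLOCK_SIZE = 4096
EMPTY = b"\x00" * BLOCK_SIZE

def backup(volume_blocks):
    """Return (data_blocks, empty_indexes); only non-empty blocks are kept."""
    data, empty = {}, []
    for i, block in enumerate(volume_blocks):
        if block == EMPTY:
            empty.append(i)      # just a note in the catalog; nothing transferred
        else:
            data[i] = block
    return data, empty

def recover(data, empty, total):
    """Rebuild the volume: empty blocks are provisioned locally, no download."""
    return [data.get(i, EMPTY) for i in range(total)]

blocks = [b"a" * BLOCK_SIZE, EMPTY, b"b" * BLOCK_SIZE, EMPTY]
data, empty = backup(blocks)
assert recover(data, empty, len(blocks)) == blocks
print(len(data), "blocks transferred instead of", len(blocks))  # 2 instead of 4
```

The saving scales with sparseness: a mostly empty volume transfers almost nothing on recovery.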

Unnecessary Job Replication

This is another, yet larger, bandwidth saver. If you had an appliance running without replication, then down the line began replicating offsite, your appliance might have been unnecessarily replicating data that would just be removed due to retention settings for the job. To resolve this, we now check the retention settings before each replication event begins, and, if the data is set to be deleted upon arrival, we simply don’t replicate and cancel the job. The replication status will indicate that the job was cancelled due to retention policies.
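The retention check described above boils down to a simple comparison. Here is a hedged sketch of the logic, with invented names and a simple age-based policy standing in for whatever the appliance actually evaluates:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the pre-replication retention check, not the actual
# appliance code: if a job would be deleted on arrival under the retention
# policy, cancel the replication instead of wasting the bandwidth.
def should_replicate(job_created, retention_days, now=None):
    now = now or datetime.utcnow()
    expires = job_created + timedelta(days=retention_days)
    return expires > now  # replicate only if the job would survive retention

now = datetime(2018, 3, 10)
print(should_replicate(datetime(2018, 1, 1), retention_days=30, now=now))  # False: cancelled
print(should_replicate(datetime(2018, 3, 1), retention_days=30, now=now))  # True: replicate
```

In the real product, a cancelled job’s replication status indicates it was skipped due to retention policies, so admins can tell the difference from a failure.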


Release Notes

  • FIX: JVM crash during frequent incremental VMware backups
  • FIX: Background cleanup of VMware snapshots that were left behind
  • FIX: Do not replicate jobs that will be deleted remotely due to retention
  • FIX: Remote access may become unavailable after interruption of network connection between appliance and cloud infrastructure
  • FIX: Unable to open more than 6 remote access windows from Dashboard at the same time
  • FIX: Stalled information and “Unknown” status on Appliances page in Dashboard after interruption of network connection between appliance and cloud infrastructure
  • FIX: Restore of Hyper-V VMs only transfers information inside disk image and doesn’t transfer empty blocks
  • FIX: Proper DR Image backup of partitions that are outside of disk bounds (“VimSDK error: bad parameters of function”)
  • FIX: Primary appliance stops working after some time if secondary is on incompatible version
  • FIX: Notification on Appliance UI if Catalog volume is not on SSD
  • FIX: Always reconnect to VMware after reboot of appliance
  • FIX: Unable to “Browse and Restore” files from cloud appliance
  • FIX: Unable to perform boot of Windows machines with inconsistent NTFS
  • FIX: Memory leak in JNI during backup of VMware

10 Ways DRaaS Can Save Your Bacon

Names can be so misleading.

Take baking soda. Many of us think of baking soda as only an ingredient used for baking, or maybe something that helps to keep our refrigerators odor-free. But baking soda has many other uses, and is surprisingly good for your health and home, too. Its use cases range from basic daily hygiene to injuries, digestive issues, stomach pain, coughs, and even sore throats.

DRaaS (disaster recovery as a service) faces the same perceived limitations. There are many applications of DRaaS that go beyond recovering your operations in the wake of a genuine site-wide disaster.

That’s why we created this infographic: 10 Ways DRaaS Can Save Your Bacon.
Download your copy:  HERE.

  1. Recover from Ransomware…Fast

    It’s one thing for a user’s files to get infected by ransomware; it’s quite another to have a production database or mission-critical application infected. But restoring these databases and apps from a traditional backup solution (appliance, cloud, or tape-based backup) will take hours or even days – which can cost a business tens of thousands of dollars.

  2. Acts of Nature – Hurricanes, Tornadoes & Floods

    If your data center gets knocked offline by mother nature, you need a Plan B to restore operations, so your employees can stay productive and your customers aren’t disrupted. DRaaS offers simplicity, rapid recovery, and lower costs (both in terms of infrastructure and administrative overhead). Just as important, replicating your backups and other key resources in geographically disparate data centers also means they won’t be wiped out by local disasters.  Because your data and VMs are replicated to the cloud, failing over production systems in the cloud takes minutes …even if your server is under water.

  3. End User Errors

    Recent ransomware attacks, including WannaCry and Petya, prove the adage that a chain is only as strong as the weakest link, and the weakest link in a data security chain is very often the end-user. Ransomware spreads easily across connected systems once a user unwittingly allows entry. Spoofing and phishing are not simply about stealing data or credit card numbers, they are about stealing access to systems. DRaaS equips you with a reset switch that helps you recover from end user mistakes – whether it be a phishing attack or accidental deletion of critical files.

  4. Spilt Coffee, Power Surges, and Bad Disks (Micro-Disasters)

    While hurricanes and natural disasters grab all the headlines, it’s far more likely a company will face downtime from such mundane causes as hardware failure, corrupted software, human error, or even spilt coffee. For this reason, a cloud DR solution that includes an on-site storage component and the ability to provide local, rapid recovery for failed servers has considerable appeal. DRaaS is absolutely built for these types of micro-disasters.

  5. Hardware Upgrades

    Traditionally, hardware refresh cycles have averaged around five years, but they have accelerated during the last decade. Some businesses now work on a three-year replacement cycle. Replacing servers and other critical hardware allows organizations to deploy updated equipment intended to improve reliability, enable new and anticipated capabilities, and save money in the long term, but these replacements are usually accompanied by significant “planned downtime” — usually performed in the middle of the night or on weekends. With DRaaS, you can failover your production environment to the cloud, where you can comfortably run your production operations – effectively eliminating any planned downtime. Once running in the cloud, you can perform the upgrade or refresh to your production equipment and then shift replication from the cloud back to your production data center via “failback” procedures.

  6. Sandbox for Production Testing

    Maintaining a separate test environment can be expensive, especially when you want to do a test against full production data. Modern disaster recovery as a service solutions often include the ability to “sandbox” or partition virtual machines so testing can be done without impacting the still-functional production servers. Sandboxing is often much more difficult in typical on-premises solutions using traditional virtualization management tools.  Since it already contains replicas of your systems and built-in network connectivity, your DRaaS environment can easily be repurposed as a sandbox for production testing.

  7. Lift and Shift Workloads to the Cloud

    According to the Harvey Nash/KPMG 2016 CIO survey, 31% of responding CIOs said they are investing significantly in the cloud today and 49% expect to do so over the next three years. In fact, Forrester expects 50% of large enterprises to have production workloads running in the cloud by 2018. But migrating workloads to a public cloud demands a seamless, non-disruptive transition. This migration process is known as “lift and shift.” And here again, DRaaS can play an important role by enabling you to automatically capture workloads and migrate them to the cloud – effectively running your applications in failover mode in perpetuity.

  8. Pass Compliance Mandates with Flying Colors

    Disaster recovery solutions are just about table stakes for any modern organization, but they are especially important for public companies and organizations governed by compliance mandates (such as financial services, banking, and healthcare organizations). With DRaaS, testing and monitoring your DR plan is becoming simpler with Drag and Drop Orchestration, all baked into the cost of the subscription. Plus, leading DRaaS providers offer smaller agencies enterprise-class security and encryption of data in transit and at rest within top-tier data centers.

  9. Stolen Laptops

    Because of the mobility of today’s workforce and the dispersion of important intellectual property within your organization, companies are increasingly asking, “So, I understand all this data is out there. What happens to it if something gets lost or stolen?” When your company laptop goes missing, it’s time to leap into action – whether it was stolen from your car, forgotten in the airport security line, or physically wrenched from your hands in a grab-and-run. With DRaaS, you can quickly recover your data and applications on a new (or temporary) laptop, since your data is always protected within the cloud.

  10. Ahhh…Peace of Mind

    It’s hard to put a price tag on peace of mind.  Real-time DR solutions are expensive, complex and require a fair amount of hand holding. DRaaS presents a refreshing alternative that provides flexibility in terms of commitment, capacity and cost.  But, more importantly, it’s your insurance policy that protects against the unexpected. DRaaS provides operational resiliency that lets you spin up VMs — locally or in the cloud — in minutes.  With a proper DRaaS solution, you’ll minimize the loss of production data and impact of downtime to your business.

DRaaS isn’t just for disasters any more. It’s for micro and macro outages. It’s designed for rapid failover of routine server outages and rapid recovery from full-blown ransomware attacks.  It’s for system upgrades, hardware refreshes and lift and shift migrations.

Like baking soda, there are probably other compelling use cases of DRaaS – ones that we haven’t even imagined.  If you’re leveraging DRaaS in interesting and unexpected ways to protect your organization, we want to hear from you — email us at team@infrascale.com.

Top 5 Ways to Protect Against Ransomware

You’ve seen the headlines — when it comes to ransomware strains like Locky, Wannacry, and Petya, we’re all at risk. What’s more, with the growing ransomware-as-a-service (RaaS) trend, cybercrime is now at an all-time high and accessible to nearly anyone.

Since the introduction of RaaS, negotiating with hackers is now a business in and of itself. We see websites offering up the latest advice to hackers, ransomware customer service lines, and FAQ available to help victims make Bitcoin payments.

So, why do organizations pay the ransom anyway? Well, in many cases, an organization’s systems were never backed up properly, or the backups were too old. In others, the recovery attempts failed – maybe there was no DR testing, leaving no usable backups from which to recover. Often the amount of time it takes to recover is far more costly — in terms of downtime — than paying the ransom fee itself. In other words, the process is simply broken.

What’s critical to understand is how ransomware gets into your organization, and more importantly, how you can protect your business from current and future threats of ransomware.

1. Best Practices for Ransomware Prevention

First and foremost, to protect against ransomware, start by doing what you can from a prevention standpoint.

  • Make sure servers and firewalls are all patched.
  • Update your anti-virus software with latest signatures.
  • Train users to recognize suspicious emails and attachments, and to identify nefarious websites.

While this may sound like old news, it’s a critical component to ensuring that you have a proper disaster recovery prevention plan in place.

2. Update Your Backup Process

Long gone are the days when overnight backups every 24 hours were sufficient for proper data protection. A quick and easy fix? Increase your backup frequency. To minimize downtime associated with an outage, you should be backing up in 15-minute increments. Your solution should be able to set policies on those backups and alert the administrator to any errors.

Also, to protect against ransomware, data should be safely stored both on-premise and off-site. In addition, you want to ensure that you protect all of the servers in your environment, whether they be physical or virtual, with the same level of security. You may instinctively focus on mission-critical applications like Microsoft SQL, Exchange, and your financial systems, but don’t overlook those file servers that are also susceptible to attack.

3. Evaluate New Technology

The requirements mentioned above are now considered table stakes, and legacy backup systems simply don’t cut it. Traditional backup applications cannot sufficiently address the capabilities needed for a modern data protection and ransomware solution, because they take too long to recover running systems. That’s where Disaster Recovery as a Service comes in, better known as DRaaS. DRaaS replicates and protects your entire environment and lets you quickly failover your systems – not just files and folders – to ensure uptime and availability when something goes wrong, like a ransomware attack.

When considering new technologies to protect against ransomware, take into account that there are many different ways to define a DRaaS solution. Ensure that you’re comfortable with how you will be able to back up and recover critical systems and data, as well as the flexibility in backup targets and recovery options. Ensure that your chosen solution also addresses compliance mandates, as needed.

4. Early Detection Capability

In a ransomware attack, time is your worst enemy. By the time encryption hits, you could have thousands of files encrypted in mere seconds. What’s worse, if you wait for your end users to identify that encryption is spreading via a ransomware attack, you’re going to have a much larger problem on your hands. The longer it takes to detect an issue, the more files are getting encrypted!

Ransomware can spread like wildfire, but early detection capabilities are available. IT needs a solution that will measure high change rates in files, thus using the way ransomware works — against it. Ransomware opens files and changes files in the system. Protect against ransomware by utilizing a solution that can identify a high change rate of modified files on a per-user basis.

If you’re using the 15-minute backup frequency we recommend, you can prevent most of the damage of the attack by simply having this proactive alerting system in place.
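The detection approach described above can be sketched in a few lines: count file modifications per user inside a sliding time window and alert when the rate looks abnormal. This is a hypothetical illustration of the general technique, not Infrascale’s implementation; the class name, window size, and threshold are all invented for the example.

```python
from collections import deque
import time

WINDOW_SECONDS = 60           # look-back window (assumed tuning parameter)
MAX_CHANGES_PER_WINDOW = 200  # normal users rarely exceed this (assumed)

class ChangeRateMonitor:
    """Per-user sliding-window counter of file modifications."""

    def __init__(self, window=WINDOW_SECONDS, threshold=MAX_CHANGES_PER_WINDOW):
        self.window = window
        self.threshold = threshold
        self.events = {}  # user -> deque of modification timestamps

    def record_change(self, user, ts=None):
        """Record one file modification; return True if the user's change
        rate now exceeds the threshold (i.e. looks ransomware-like)."""
        ts = time.time() if ts is None else ts
        q = self.events.setdefault(user, deque())
        q.append(ts)
        # Drop events that have fallen out of the window.
        while q and q[0] < ts - self.window:
            q.popleft()
        return len(q) > self.threshold

monitor = ChangeRateMonitor()
# A burst of 300 modifications within one second from a single user trips the alert.
alerts = [monitor.record_change("alice", ts=1000.0 + i / 300) for i in range(300)]
print(any(alerts))  # True
```

Combined with 15-minute backups, an alert like this bounds the damage: at worst you roll back to a restore point taken minutes before the burst began.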

5. Lightning Fast Failover

If you are infected with ransomware and have to recover your data and systems, an important concern is to ensure the recovery process is faster and easier than paying the ransom. There could be hundreds of thousands of files infected, and you need to recover them quickly. Your best bet will be recovering the full server, rather than individual files.

Failover technology will give you the ability to boot and run from a backup. But, not all failover solutions are created equal. Only certain solutions give you the ability to boot from the backup and run either on-premise or in the cloud.

With the Infrascale Disaster Recovery (DRaaS) solution you can simultaneously cloud boot multiple versions of the same machine to determine the safe version to recover, and boot either to the cloud, a virtual environment, or recover to production hardware. The total downtime is about 1-2 minutes, saving a lot of time and money. The Infrascale DRaaS solution also includes built-in failover orchestration that lets you create predetermined failover plans, which can be scheduled to boot simultaneously or in a specific order.

No matter what DR solution you choose, it’s so important to understand exactly how the solution plans to failover your applications, and then failback, in addition to how much customization and control you have in the whole orchestration process.

With these 5 recommendations in place, you’re closer to staying protected against the current threats of ransomware. There’s no telling what ransomware attacks will look like in 2018, but we know that Ransomware will continue to get more sophisticated, more intelligent, and more harmful as time goes on. You can’t completely prevent ransomware, but you can keep yourself educated and up-to-date on the most recent technology solutions available. Also, look to the experts to vet and validate what you learn when it comes to ransomware protection.

The Infrascale approach is getting a lot of attention from leading analyst firms like Gartner and others. Gartner named us the 2015 Cool Vendor in Business Continuity and Disaster Recovery, a 2016 Visionary in Disaster Recovery as a Service, and a Leader in the 2017 Magic Quadrant for DRaaS.

Want to see for yourself? Download a copy of the report here.

Why ‘RTO’ is Key to Business Continuity

If you have never seen or heard the term ‘RTO’ in the context of your business continuity plans or tests, then this will give you a solid next step to ensure that you’re in a good position. Unfortunately, nearly 80% of all SMBs are in the same boat, which has been and continues to be massively exploited by criminal organizations using ransomware to make money. Lots of it.

To paraphrase an old tech adage “if you can’t recover quickly, then it’s not a backup.”

What is RTO?

Recovery Time Objective, or RTO, is the time it will take to restore business operations in the event of downtime caused by hardware failures, ransomware infections, software errors, human error, or natural disasters.

Unfortunately, for many businesses, the problems that arise when RTO is not a key component of the plan aren’t realized until it’s too late. Many organizations have found this out over the last few years because of the ever-growing threat of ransomware attacks.

Many businesses with preventive measures and backups in place end up in a bad situation because their plan didn’t factor in the recovery time for restoring production databases or mission-critical applications. Read our Tale of Two Ransomware Victims for more info.

What is business continuity and what role does RTO play?

Business continuity is the ability of a business to remain in operation despite risks, downtime events, and disasters. By the numbers, 80% of businesses experience some type of unplanned downtime. Of this total, some experience catastrophic outages that knock them offline for 3-5 days – and a portion of these never recover, ultimately going out of business as a result of the outage.

Simply put, RTO is Business Continuity.

A proper business continuity plan includes:

  1. Identification of potential downtime risks
  2. Evaluating the business impact of those risks
  3. Identifying ways to prevent those risks
  4. Identifying ways to recover from downtime
  5. Regular testing of those methods against specific risks
  6. Regular re-evaluation of risks & methods

Your prevention and recovery needs are directly based on the evaluation of risks. Such an evaluation is known formally by Project Management Professionals (PMPs) as a “Risk Registry.” Don’t worry, it sounds like more work than it is.

It’ll actually save you time and ensure that all your bases are covered by helping you understand your critical systems and their dependencies.

Evaluating Your Risks

Evaluating risks can start pretty general and become more specific as you get closer to making buying decisions. For example, the table below was developed by American Precision Industries that focuses on recovery at a system level.

Application/Data/System | Impact | Chance | Risk Factor | Recovery Plan
CAD application server | 99% | 100% | 99% | Infrascale Disaster Recovery replicating from site A to site B. Local boot for testing or individual machines. File recovery readily available from either site. Spare hardware required in the event of hardware destruction. Restore time is less than 20 minutes once hardware is available for recovery.
Machine Tools | 100% | <1% | <1% | N/A. These units are closed systems.
CAD files | 80% | 100% | 80% | Files are protected by Infrascale Disaster Recovery and replicated to a secondary DR appliance and are available for restore within minutes. Files can be recovered to any USB device to then be fed to the machining tools’ systems.
Payroll DB | 60% | 100% | 60% | Infrascale Disaster Recovery replicating from site A to site B. Local boot available for recovery in less than 10 minutes. Production recovery time dependent on available hardware, less than 20 minutes once available.
Customer/Order DB | 80% | 100% | 80% | Infrascale Disaster Recovery replicating from site A to site B. Local boot available for recovery in less than 10 minutes. Production recovery dependent on available hardware, less than 20 minutes once available.
CAD user endpoints | 70% | 100% | 70% | Systems are backed up centrally and covered in DR backups onsite and replicated to the secondary. Endpoints can be restored within 20 minutes once hardware or a VM is available.


The table above combines the impact to the business (in terms of “how much of the business will be inoperable if this system goes down?”) with the chance of that system experiencing downtime (all risks included), and the risk factor, which is the product of Impact and Chance. The rule of thumb is to pay close attention to any Risk Factor over 10%.
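Since Risk Factor is simply Impact × Chance, the arithmetic behind the table is easy to reproduce. The short sketch below uses rounded figures in the spirit of the table above (not exact production data) to show how the 10% rule of thumb sorts systems:

```python
def risk_factor(impact, chance):
    """Risk Factor = Impact x Chance, both expressed as fractions of 1."""
    return impact * chance

# Illustrative figures modeled on the table above.
systems = {
    "CAD application server": (0.99, 1.00),
    "Machine Tools":          (1.00, 0.01),  # closed systems, ~<1% chance
    "Payroll DB":             (0.60, 1.00),
}

for name, (impact, chance) in systems.items():
    risk = risk_factor(impact, chance)
    flag = "review" if risk > 0.10 else "ok"  # rule of thumb: watch anything over 10%
    print(f"{name}: risk factor {risk:.0%} ({flag})")
```

Note how a system with total impact but near-zero chance (the machine tools) falls below the 10% line, while moderate-impact systems that are certain to see downtime eventually still demand a recovery plan.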

Once all systems are listed and evaluated, you can begin weighing various disaster recovery options and RTO objectives. This will ensure that you have the plan you need, rather than a mix of “too much” or, even worse, “too little.”

You can also add specific uptime goals for specific systems, like this:

| Application/Data/System | Hardware | OS | RTO, Uptime |
| --- | --- | --- | --- |
| CAD application server | IBM compatible | Windows | <12 hours, 99% |
| Machining Tools | Proprietary | Proprietary | N/A, 99.9% |
| CAD files | IBM compatible | Windows | <12 hours, 99% |
| Payroll DB | IBM compatible | Windows | <24 hours, 99% |
| Customer/Order DB | IBM compatible | Windows | <24 hours, 99% |
| CAD user endpoints | Various | Windows | <12 hours, 99% |
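It helps to know what those uptime percentages actually permit. As a rough sketch (assuming a 365-day year and no scheduled-maintenance carve-outs), converting an uptime goal into a yearly downtime budget looks like this:

```python
# Convert an uptime percentage into the maximum downtime it allows per year.
# Assumes a plain 365-day year with no maintenance-window exclusions.

HOURS_PER_YEAR = 365 * 24  # 8760

def max_downtime_hours(uptime_pct):
    """Hours of downtime per year permitted by a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

print(f"99%   uptime allows {max_downtime_hours(99):.2f} hours of downtime/year")
print(f"99.9% uptime allows {max_downtime_hours(99.9):.2f} hours of downtime/year")
```

A 99% target still leaves room for roughly 87.6 hours of downtime a year, which is why the RTO column matters as much as the uptime column.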


The benefit of this preplanning far outweighs any time you saved by skipping it and “hoping” it’ll be enough. Every year, thousands of businesses discover that their “hope” was indeed a poor plan when something takes their business out of operations and they scramble to get back online.

Unfortunately, when it comes to recovery, there are no second chances.


Announcing the 2017 Gartner Magic Quadrant for DRaaS

This week, Gartner released its annual Magic Quadrant report on Disaster Recovery as a Service (DRaaS). While Infrascale was named a Leader in this evolving space – an accomplishment of which we’re fiercely proud – there are other notable market shifts in the report.

We believe Gartner’s own scoring criteria, for example, can be very telling in terms of where the market is heading and the types of questions they’ve received from their enterprise clients. This is Gartner’s third MQ for DRaaS, and it comes at a point of maturation in the DRaaS space when the innovators are better able to differentiate and separate themselves from the pack.

But first, let’s start with some background on Gartner’s Magic Quadrant (or MQ as it’s more commonly called), and the Critical Capabilities Report. The Gartner MQ is widely seen as the ultimate bake-off for vendors in any given technology market.  By applying a graphical treatment to a uniform set of evaluation criteria, the Magic Quadrant helps enterprise IT buyers quickly ascertain how well technology providers are executing their stated visions and how well they are performing against Gartner’s market view.

Gartner’s first order of business is defining the inclusion criteria and determining which vendors to include in the MQ.  The DRaaS market consists of hundreds of providers all with different approaches and capabilities to Disaster Recovery as a Service. This year, 24 providers met the inclusion criteria, demonstrating a 20% increase over 2016 and a 43% increase over 2015.

It’s Gartner’s aim to provide some objective guardrails for assessing the different vendors in the space. According to Gartner, “Some vendors have continued to build upon prior momentum, and some have pivoted in terms of strategic direction regarding DRaaS in their portfolios. Some did not put forth the level of investment or make the progress expected in the past twelve months; while yet others have made investments or acquisitions but need time to further mature and capitalize on them.”

This is how the Gartner MQ shook out this year.

[Figure: Gartner 2017 DRaaS Magic Quadrant]

If I were an enterprise buyer of DRaaS, I would not instantly dismiss any of the vendors based on their location in the MQ.  While selecting a vendor from the “leader” quadrant is always going to be a solid choice, there are situations where it might make sense to consider other vendors from the non-leader quadrants. Vendors featured in the Leader quadrant may have more complete technology, but they can sometimes be expensive; while vendors in the Niche quadrant may have new technology that is ideal for a specific use case or audience.

It’s also important to pay attention to the relative moves of vendors from one year to the next within the MQ.  I believe this could be a signal of a change in vision, investment and execution relative to their competitors.  For example, vendors that are investing in the DRaaS category, both in terms of R&D and strategic focus, should result in an increase of “completeness of vision” and “ability to execute”, moving up and to the right year-over-year.

If I were a data center manager, I would also review Gartner’s scoring algorithm. According to Gartner, “What were once differentiating attributes only a couple years ago are now considered mere table stakes. Meanwhile, customer expectations have increased with respect to the ability to perform more granular recovery of workloads and address a variety of triggers that cause disasters, including ransomware.”

Understanding which solutions performed best based on the criteria that you care about should also be factored into your decision.

I would also recommend checking out Gartner’s lesser known but essential companion report, known as the Critical Capabilities Report.  This report provides deeper insight into providers’ product and service offerings by extending the Magic Quadrant analysis. The Critical Capabilities Report allows would-be buyers to further investigate product and service ratings based on key differentiators.  In the case of DRaaS, each vendor was stack-ranked on 15 criteria within four specific use cases:

  • Low-complexity customer environment
  • Medium-complexity customer environment
  • Small-enterprise complexity customer environment
  • Mid-enterprise complexity customer environment

Collectively reviewing these reports, and the year-over-year shifts, should help you arrive at a solid short list of vendors to consider.  Then it’s just a matter of fit, budget, and vendor rapport.

To get your copy of this year’s Gartner Magic Quadrant 2017 for DRaaS, click here.

How do you think these vendors stacked up? Let us know via email at team@infrascale.com; we look forward to hearing your thoughts.


GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

Tape Backup and Disaster Recovery Just Don’t Mix

This isn’t going to be a finger-wagging post, but let’s look at some jaw dropping stats.

A 2016 study of 400 IT professionals found that many organizations are still using an outdated approach to backup and disaster recovery.  This includes 36% of respondents who do not perform data backups at all and 44% of respondents who still rely on external drives (like tape) for their backup needs.  I’m not going to waste my time on the 36%; they’re sitting ducks for ransomware and will likely suffer significant downtime with any sort of server or site-wide failure.

I do want to speak to the 44%.  I know there’s a subset of folks within this group that are salvageable. To this group, let me share the story of CenCal Health – a company squarely in the 44% crowd.

CenCal Health needed a modern backup solution that would centralize all their data with one solution. In light of ever-looming ransomware attacks, the ability to have fast failover was also a critical consideration. CenCal needed the Windows solution to protect all the key applications running in a mixed (physical and virtual) environment — including MS Exchange and SQL databases. They also required the data to be encrypted, both in transit and at rest, for HIPAA compliance. Another key requirement, given their history with tape backups, was the ability to meet more stringent RTO standards, measured in minutes, not days.

CenCal Health recently made the move to DRaaS and hasn’t looked back.  Check out the table below and their case study at www.infrascale.com.

DRaaS enables organizations of every shape and size to benefit from near-instant data recovery – the type of high availability historically reserved for the largest of enterprises. And organizations like CenCal Health can leapfrog from antiquated tape backups to an industry-leading failover solution – quickly, securely and affordably.

As noted in last year’s post, “Is DRaaS the Next Leapfrogging Technology?” (Nov 2016), tape backup still has its place, but the use cases are dwindling. Tape backups still suffer from media corruption, inaccessibility, and slow recovery times.

With today’s increased threat of ransomware and cost of downtime, tape backup becomes an impractical, outdated and risky form of disaster recovery. IT professionals who are increasingly embracing the cloud and virtualization must now embrace these same technologies to protect their organizations from modern threats with always-on availability and business continuity.

What’s your data protection plan of choice? Email us at team@infrascale.com to weigh in, we look forward to hearing from you.

Kicking the Cyber Security Can down the Road

Over the weekend, a WannaCry decryption tool was released. While the tool has saved some people, it’s not always effective.

According to The Hacker News, Adrien Guinet, a security researcher at Quarkslab, found a flaw in the way WannaCry operates that allowed him to create a decryptor. The WannaCry ransomware generates a pair of keys on the victim’s computer – a public and private key for encryption/decryption – which rely on prime numbers. WannaCry erases the keys from the system, forcing the victim to pay $300 in Bitcoin to the cybercriminals, but there’s a catch: Guinet found that the malware “does not erase the prime numbers from memory before freeing the associated memory.”
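To see why leftover primes matter, consider this illustrative sketch (not the actual decryptor, and using textbook-sized numbers): if the two RSA primes can be read out of memory, the private key can be rebuilt even though the key file itself was erased.

```python
# Illustration only: rebuilding an RSA private key from its two primes,
# the same idea the WannaCry decryptor exploits. Toy-sized numbers; real
# keys use primes of 1024+ bits.

def reconstruct_private_exponent(p, q, e):
    """Rebuild the RSA private exponent d from the recovered primes."""
    phi = (p - 1) * (q - 1)
    return pow(e, -1, phi)  # modular inverse of e mod phi (Python 3.8+)

# Toy primes standing in for the ones recovered from RAM.
p, q, e = 61, 53, 17
n = p * q
d = reconstruct_private_exponent(p, q, e)

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the rebuilt private key
print(recovered)  # 42
```

Without p and q in memory, recovering d means factoring n, which is exactly the hard problem RSA relies on; that is why powering off the machine (see below) destroys your chance of free decryption.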

This flaw is what made the decryption tool possible. To use it, the prime numbers (and thus the encryption key) must still be resident in the infected machine’s memory.

This is great, right?

This is where things get a bit fuzzy. Per our own ransomware guidance, the first step in any ransomware attack is to isolate the infected machine and confirm that the ransomware is present. If it is, the best course of action is to power off the machine and begin the recovery of clean files and applications from your most recent “clean” backup (or better yet, spin up a clean VM to recover your apps in minutes).

Unfortunately, this is the problem: by powering off the infected machine, you flush that memory, including the encryption key.


Backups and DRaaS are a huge help here. If you can get the decryption tool working, restoring the data set from a secondary location (as painful as that may seem) and running the decryption tool there will prevent the proverbial snake from biting its own tail, assuming the infection hasn’t spread further.

Stop Kicking the Can

Patching and decryption tools are important, but they don’t help you get on your front foot.

Companies of all stripes need a more comprehensive data protection plan that addresses the blocking-and-tackling security best practices, including regular patching, AV protection, and offsite backup.  But that’s not good enough. You should also train your users to recognize phishing attacks, since most ransomware attacks are still spread that way (WannaCry notwithstanding).  Finally, you need a Plan B that empowers you to quickly recover your files and running systems when you’re infected. This is the sweet spot of DRaaS.

Over the weekend, The Economist  published a great article entitled: “WannaCry should make people treat cyber-crime seriously.”  This quote struck a chord with me:

“Despite the flurry of headlines, WannaCry is not the worst malware infection the world has seen. Other worms—Conficker, MyDoom, ILOVEYOU—caused billions of dollars of damage in the 2000s. But Bruce Schneier, a noted independent security expert, points out that people seem to have a fundamental disregard for security. They frequently prefer to risk the long-term costs of ignoring it rather than pay actual cash for it in the present.”

No one cybersecurity company has THE answer.  Instead, modern businesses must rely on a best-of-breed multi-prong approach.  So, if you’re looking for long-term guidance, looking for ways to address the problem here-and-now and reduce the protracted downtime associated with most ransomware attacks, let’s talk. Soon.



Aaron Jordan, Infrascale Sales Engineer

Aaron Jordan is a Sales Engineer and Sr. Technical Support Manager at Infrascale, maniacally focused on helping our customers eradicate downtime, data loss and ransomware.

How WannaCry just made You a Bigger Target for Ransomware

If your only takeaway from the WannaCry ransomware attack is “gosh, we need a better patch management process” or “maybe, it’s time to move off these old operating systems,” then you’re probably a soft target for the next attack. This watershed moment signals a major and important shift in the evolution of ransomware.

As a disaster recovery professional, I keep a close eye on all events that cause data-loss or prevent people from doing work (downtime), so we can modify our products or better educate people on how to protect themselves. Since 2015, ransomware has been the number one threat to downtime. And to me, the biggest threat born from the May 2017 WannaCry ransomware attack is the false sense of security many people may feel after they’ve patched their Microsoft systems.

The organization(s) involved in the WannaCry campaign weren’t so unique from other ransomware campaigns: they paired an exploit kit with ransomware to gain access to systems, encrypt data and collect payments.

What separated WannaCry from the pack was how they acquired the exploit itself (i.e., from the National Security Agency, the NSA) and the sheer size of the campaign.

I’m going to put aside the obvious point that the NSA and other government organizations need to seriously wake-up when it comes to their own security and focus on how this sets the tone for the future of cybercriminal organizations using ransomware.   Here are three key takeaways from the WannaCry pandemic:

  1. There will be more, many more. Anytime someone starts making money, other people join in. When someone makes a lot of money, the market floods with new actors trying to snatch a piece of that pie. The growth of ransomware was already accelerating, but the massive success of WannaCry will surely signal even more growth and beckon new criminal organizations to join the fray. As Bruce Schneier says: “Criminals go where the money is, and cybercriminals are no exceptions.”
  2. Expect more, larger-scale campaigns. WannaCry succeeded as a global campaign despite some junior execution errors, including typos, grammar mistakes, and a kill switch left in the code (which effectively neutered the ransomware once a researcher registered a cryptic domain name for $10.96). No doubt a better, more coordinated campaign at the same scale is probably already being planned, and it will wreak significantly more havoc.
  3. Increased Demand for Ransomware-as-a-Service. To date, most ransomware campaigns use exploits that take advantage of known issues that can be found in recent patch notes for operating systems, firewalls or simply look for common gaps in poorly managed IT environments with loose user-account controls (UACs). WannaCry’s success with a weaponized worm and a stolen operating system exploit has certainly increased demand for professionally (criminally) developed and/or stolen exploits.

Fortunately, these trends should not change your security and business continuity game plan; they simply underscore the need for a solution that can quickly recover your files and systems when prevention fails.


Such solutions exist, and the good ones will protect you from a broad range of downtime threats, including ransomware, hardware failures, software errors and natural disasters.

Consider again the words of security expert Bruce Schneier:

I’ve never figured out the fuss over ransomware…the single most important thing any company or individual can do to improve security is have a good backup strategy. It’s been true for decades, and it’s still true today.[i]

The implication here is that a ‘good backup strategy’ includes a good recovery plan, which is dependent upon your business needs and just how quickly you need to recover full systems and/or files in the wake of an attack or server outage. If you can reliably and quickly recover your systems, you’ve completed the most crucial part of your ransomware preparation and reduced the size of that target on your back.



Derek Wood

The optimist in me expects people to learn and act, the pessimist expects the wrong lessons will be learned with no action, so the realist writes and educates.



[i] Bruce Schneier Blog, June 16, 2008