Fall 2018 Release – Infrascale DR v6.16

This Fall, we have releases for 2 of our services, Infrascale Disaster Recovery (IDR) and Infrascale Cloud Backup (ICB).

IDR v6.16 boasts quality of life improvements, key fixes and new support for VMware v6.7!

ICB v7.3 includes many fixes, performance improvements and remote management improvements for distributed networks, plus new support for Windows Server 2019 and MS SQL 2017.

Again, big thanks to our Partners, who continue to be a cornerstone of our mission to eradicate downtime and data loss for businesses of all sizes.

And of course, hats off to our Product and Dev teams for their agile performance and turn-around time!


VMware 6.7 Support (IDR 6.16)

This is a big but easy one to describe. While most functions remained stable after the update to VMware v6.7, we employed a handful of workarounds so our administrators could regain full and consistent functionality. With IDR v6.16, all the functionality and performance experienced with previous versions of VMware has been restored, with the added benefit of the items listed below.

Custom QEMU Commands during LocalBoot (IDR 6.16)

Like any high-performance machine, it’s sometimes handy to get in and make some tweaks before hitting the road. Administrators can now do this without editing a config file each time by configuring custom QEMU commands to run with boot operations.
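As a sketch of the idea only (the option names and storage format here are our own illustration, not the appliance’s actual configuration schema), saved custom QEMU options could be appended to every boot command rather than edited into a config file each time:

```python
# Sketch: appending stored custom options to a QEMU boot command.
# The option names and helper are hypothetical, not Infrascale's code.
import shlex

def build_boot_command(image_path, custom_options=""):
    """Build a QEMU command line, appending any admin-defined options."""
    base = ["qemu-system-x86_64", "-hda", image_path, "-m", "4096"]
    # Custom options persist across boots instead of requiring a
    # config-file edit before each LocalBoot.
    return base + shlex.split(custom_options)

cmd = build_boot_command("/backups/client1.img", "-cpu host -smp 4")
```

The key point is persistence: the custom options are stored with the client and re-applied on every boot operation.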

Define Client Schedules During Client Creation (IDR 6.16)

This is one of those simple but important time-savers. Rather than configuring a client, THEN going back to edit the schedule, we’ve combined these steps into a single, fluid motion.

“Legal Hold” backup/DR jobs (IDR 6.16)

Whether for a legal hold or any other reason, sometimes you just need to stop a backup from getting purged by the retention policies you have set. Administrators can now “pin” backup jobs to do exactly this. When an administrator “pins” a job, they can set the time frame before the job cycles back into the regular retention policy for that particular location (primary or secondary).

*Note: this feature, like retention policies, is set per backup location, either primary or secondary. We note this because most services in the market require identical retention periods for replication services, but we figured we’d let you decide.
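To make the behavior concrete, here is a minimal sketch (our own illustration, with a hypothetical data model, not Infrascale’s implementation) of how a retention sweep for one location would treat pinned jobs:

```python
# Illustrative sketch of per-location retention with pinning.
# The job fields ("created", "pinned_until") are hypothetical.
from datetime import date

def jobs_to_purge(jobs, today, retention_days):
    """Return IDs of jobs past retention, skipping pinned ones."""
    purge = []
    for job in jobs:
        age = (today - job["created"]).days
        if age <= retention_days:
            continue  # still inside the retention window
        pinned_until = job.get("pinned_until")
        if pinned_until and today <= pinned_until:
            continue  # pinned: exempt until the date the admin chose
        purge.append(job["id"])
    return purge

jobs = [
    {"id": "a", "created": date(2018, 1, 1)},                                    # old, unpinned
    {"id": "b", "created": date(2018, 1, 1), "pinned_until": date(2019, 1, 1)},  # old, pinned
    {"id": "c", "created": date(2018, 9, 20)},                                   # recent
]
print(jobs_to_purge(jobs, date(2018, 10, 1), 90))  # only "a" is purged
```

Once the pin’s “until” date passes, the job simply falls back under the normal policy, which matches the cycle-back behavior described above.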

Improved Mass Deployment/Management for Cloud Backup (ICB v7.3)

We continue to find great success with our administrators using ICB to protect data at the point of data creation, rather than limiting BCDR plans to central servers. As this trend grows, and as it remains a major differentiator against traditional DR vendors, we’ve received many improvement requests to keep the momentum going.

Details can be found in the table below, but a number of improvements help administrators protect data across all of their business’s devices. These include streamlined deployment workflows, relaxed permission requirements that help when password resets are in order, and easier protection of personal folders when the agent is deployed with administrative credentials.


Full List – 

The table below contains the full release notes, and while there are too many to highlight, they add up to yet another major quality of life improvement for our techs and their customers.

Short Description – Explanation
FIX: Weekly appliance report could not be generated Some customers were unable to get helpful weekly reports and would need to manually gather the information. This is no longer an issue. This bug fix saves 4-6 steps per report.
FIX: Update of Agent installed on client could not be completed due to a service not stopping properly In this case, the administrator needed additional steps and time to manually stop the service before restarting the update. This bug fix saves roughly 6 steps per agent.
FIX: DR backup could fail sometimes In some cases, a DR backup would fail, which triggers alerts and additional follow-up. The causes are varied, and most fixes were simply to re-run a backup. We’ve fixed what seems to be central to the issue, saving roughly 4-6 steps per event.
UPDATE: New technology for the Browse & Restore function – expanded list of supported file systems, improved stability, multiple bug fixes Some newer file systems could not be browsed during restore before this technology was introduced. This drastically reduces the time spent restoring one or a few files – customers no longer need to restore the whole system or mount a non-browsable disk.
UPDATE: Full support for VMware 6.7 VMware 6.7 is now fully supported, whereas before, users could run into some issues here and there.
NEW: Ability to add DR schedule during creation of Client, column shown by default now in the list of Clients Before, these were separate steps. Setting up new clients is now 4-6 clicks shorter.
FIX: Some jobs could not be deleted by retention Retention periods are important to reduce unnecessary usage on both the primary and secondary appliances. Now, all jobs can be removed by policies unless exempted by the “pinning” feature.
FIX: Hardware Monitoring section did not show CPU and Hard Disk information on some models Fixed so all data shows for all models.
UPDATE: Support for local boot of DR images/VM backups of clients with disks with 32 sectors per track Some customers run specific models of hardware with this disk geometry; these machines are now fully protected, and we will be able to boot them in case of a DR event.
NEW: Ability to specify custom QEMU commands during LocalBoot In very specific cases, local/cloud boot settings need some tweaks by support inside the appliance, and these were not persistent. With this update, such settings can be stored without the need to edit configuration files.
FIX: Automatically set correct CPU model during LocalBoot
NEW: Ability to change CPU model during LocalBoot
During a DR restore, we automatically recommend/use settings based on the machine being booted, to reduce the chance of booting a machine into an environment where it will underperform, while saving time in the process. We’ve improved this automation and extended customization of boot settings from cloud boots only to restores on the local appliance as well.
REMOVED: Auto-archive of differential and incremental levels of backups on Virtual Machines We now treat all VM backups as fulls. The old behavior was an inconsistency, and this change is a preparation step for the Retention 2.0 policy (coming in 6.17).
FIX: Disabled ability to edit clients on secondary appliance Editing clients on the secondary appliance had no effect and only caused confusion, so the option has been removed.
FIX: Hide Boot Verification images in the list of clients when Boot Verification is turned off If Boot Verification was turned off after running for some time, very old verification results would still be shown, which was misleading to customers.
NEW: Ability to connect to appliances inside the local network with This removes the steps of connecting a keyboard and mouse during initial setup just to learn the IP address of the appliance – after power-on, any appliance will become visible on
FIX: Ability to bond network interfaces on some models of appliances The new 550 appliance, which does not have local boot ability, had some network settings disabled, including those required to increase connection speed by bonding network interfaces. These settings are now available.
UPD: Set “date until” to +10 years from the current moment for previously pinned jobs Many compliance requirements concern the length of data retention, and the “pinned jobs” feature released earlier this year did not let jobs be excluded from retention policies long enough, requiring a reminder and manual adjustment of the “pin” after some time.
FIX: Unable to boot VM with dynamic disks in certain circumstances This was a major fix for customers in this situation, as they would have to go through a much more traditional baremetal recovery process rather than a quick, easy DR recovery flow.
UPDATE: Reduce amount of storage used for performing DR backups This helps reduce the need to upgrade primary appliances due to the need for temporary storage of files during DR backup procedures. The temporary storage footprint is now significantly smaller. The result of not having enough space would be that backup jobs would fail, or run much more slowly and require a storage upgrade.
UPDATE: Preserve disk order during LocalBoot on the Appliance or in the Cloud Some VMs have a large number of disks, and in some cases our boot process could not detect the correct boot disk, which led to boot failures. We have implemented a more sophisticated algorithm, so the number of boot failures will decrease.
FIX: Ability to use UK keyboard layout
NEW: Ability to choose keyboard layout for locally booted machine
Keyboards aren’t all the same. The default option was the US English QWERTY layout; we now also support the UK English QWERTY layout.
UPDATE: More concise description of requirements for performing LocalBoot on Virtual Appliance We’ve removed a pain point: customers trying to boot an image inside the virtual appliance were told only that not enough resources were assigned. The message now states exactly how much memory and CPU are required.
UPDATE: Correct link to the Knowledge Base in Support tab of Appliance We now route customers to the maintained and updated version of Infrascale’s Knowledge Base.
FIX: Sometimes a VMware backup of a turned-off machine could fail If a VM within VMware was turned off, the backup would occasionally fail. This is common for VMs created as templates but not regularly used.
UPD: Show user-friendly message on Browse and Restore if no data can be found In some corner cases, when we could not browse a disk and show its content, we just showed an empty screen, which confused users. Now we show a friendly message explaining the situation.
FIX: Jobs re-imported from archive or pushed back from secondaries will not be deleted by retention at once With different retention settings on the primary (shorter) and secondary (longer), a job restored from archive or pushed back to the primary could already be past retention and get deleted immediately. Now it is automatically pinned so the user can restore and work with the job.
NEW: Ability to pin a particular job until some date in the future “Pinning” a job excludes it from retention policies for as long as the user determines when pinning it. This is helpful in many cases: compliance, breach investigation, employee turnover, legal disputes.
FIX: Garbage Collection history now shows data Space on the Appliance is freed during Garbage Collection – it’s now possible to see how it is working.
FIX: VMware clients now show backup progress Reduced the number of clicks needed to check backup progress – it is now visible for all Clients.
FIX: Backup of a Hyper-V guest migrated to another host would fail
UPDATE: Stability of DR backups
UPDATE: Stability of Hyper-V backups
In general, this is a group of updates and fixes that improve the reliability and reduce the TCO when protecting Hyper-V environments as well as fixing an issue caused after Hyper-V VMs migrate to new hosts.

Previously, these issues would result in alerts/warnings that would take techs time to evaluate and resolve.

NEW: RAID status is shown on console screen Many of our administrators cited this as a big help rather than having to dig around for the info, deeper in the console. Here you go!
UPDATE: Ability to perform Linux DR backup with non-root user credentials Many Linux distributions now deactivate the root user by default as a security measure, so our customers could not back up such Linux machines. Backups can now be performed with any user’s credentials (after some additional configuration).
UPDATE: Changed customization of replication bandwidth limit Some of our customers do not work Monday–Friday and wanted a more flexible way to customize the replication bandwidth limit – now any day of the week can be set to reduce speed.
FIX: Daily report will show the correct restore point date if the last backup has failed The daily report was showing the first (very old) restore point rather than the latest, creating the impression that no recent backups were available for a machine. Now we show the correct, most recent restore point.
NEW: Securely shred all data when decommissioning an appliance We have added a script so customers can securely delete all data on an appliance when decommissioning it.
FIX: DDFS is now unmounted immediately on shutdown We changed the file-deletion logic in DDFS so the filesystem can be unmounted at once.


What’s Next?

We’re on the cusp of some major changes under the hood that will result in simpler deployment steps and improved performance.

We’re also going to be announcing updated and new integrations with Connectwise Automate.

Stay tuned!

-The Infrascale Product Team

Summer 2018 Release Part 2 – Infrascale DR v6.15

Part 2 of our summer release schedule is days away! Meet IDR v6.15.

IDR v6.15 boasts quality of life improvements and key fixes. This release removes many common procedures by adding automation and/or shorter workflow options. We estimate a reduction of roughly 20 steps across multiple recurring scenarios, which could easily mean 200-500 fewer tasks performed per year per IDR deployment–money in the bank.

Our ability to turn around such a quick and valuable release is due largely to our terrific community of Partners that continue to be a cornerstone of our mission to eradicate downtime and data loss for businesses of all sizes.

And of course, hats off to our Product and Dev teams for their agile performance and turn-around time!

Quality of Life Improvements

NEW: Automated, Online DDFS compact

Previously, admins would receive a warning/error from the monitoring system saying storage limits have been reached or were close to being reached.

Next, the admin would try to free up some space by deleting jobs and/or would contact support for assistance.

Support would then suggest running “compact” to free up the needed space. Doing so requires shutting down the entire appliance and could take days to complete; roughly 1 day per 1TB of freed space.

By automating the DDFS compact task to take place in the background, we’ve eliminated at least 4 steps (per occurrence) and eliminated downtime during such an event.

NEW: Ability to unlock VMware VM migration option from Appliance

Before a VMware VM is protected, we must disable the ability for a VM to be migrated to ensure the backup is successful. Sometimes, the VM doesn’t properly unlock afterwards, requiring the user to manually unlock the VM by either running another backup or going into vCenter.

While we’re still working on an automated resolution, adding an option to unlock a VM from within our system now allows an administrator to manually unlock the VM in 1 step instead of 2 or 3. We’re working to better avoid the issue altogether in later releases.

NEW: Archive Option – Ability to Pin/Unpin Jobs to be ignored by retention policies

Whether due to a hardware refresh, employee turn-over or some other event, it’s important to be able to retain specific backups despite the retention policy set for them by the client. This new “pinning” feature allows administrators to do just that.

First, a vocab review – within Infrascale IDR, a client is created by defining a retention policy and a schedule for a particular machine (virtual or physical). When a backup runs according to the configured client, that’s called a job. Jobs are stored on the primary and/or secondary for as long as the retention policy indicates.

Pinning a job will exclude it from the retention policies set for that client and will default to keeping that job indefinitely–essentially, an archival option.

For example, if you’re decommissioning a machine but want (or need) to keep a backup of it, you’d pin the jobs you want to keep before removing the client. Compliance, maintained.

This is the first step of many toward adding more flexibility and customization for job-level exceptions.

*Important note: Like retention policies, pinning jobs is done individually on the primary and the secondary (or cloud) appliances; pinning a job on your primary will not auto-pin the same job on the secondary. Be sure to pin jobs on the appliance where you want the archived job to be kept, or on both.


We’ve also added ‘deleting a client’ as a recorded event in audit logs and removed a few clicks from everyone’s life by adding the firmware version on the login screen in addition to the settings area. Strangely but understandably, we slowed down the initiation of a local boot with a 5 second delay so admins have time to smash into safe mode for some debug.

•    UPDATE: Audit logging – added “client deletion” as a new event

•    UPDATE: Firmware version is now visible on login screen

•    UPDATE: Added 5 second delay before localboot (to access safe mode)

•    UPDATE: Change filtering logic by dates in all grids on Appliance interface


•    FIXED: Issues restoring DR image backups with multiple disks

Introduced in 6.14, there was a reported issue wherein a second disk on a DR image backup would time out during a restore; we’ve squashed it.

•    FIXED: VMware VMs that have not been powered on will not be protected

This is for all you VM templaters out there. If you upload a VM image to the VMware host, as a template, then this template VM would not be protected by our system until it was powered on. Now, admins can protect their VM templates without doing this step.

•    FIXED: Incremental Backup fails on VMware VM with a newly added disk

The workaround for this was to run a full after a new disk was added to a VM. That step is no longer needed.

•    FIXED: Hyper-V backup jobs hang/freeze

6.14 introduced new parallel processing to handle multiple jobs at one time. Some of our partners reported instances wherein Hyper-V backups would freeze and this has been resolved.

•    FIXED: Improved Replication Process

Customers may not have noticed a problem here as the job would restart where it got stuck and, at most, it would appear that a replication job took a bit longer than expected. We resolved this issue so replication jobs are more stable.

•    FIXED: False positive – failed verification status after replication

Verification of jobs would appear failed until an automated task would run after the job was completely closed. We’ve changed this verification step to instead be a part of the backup process, eliminating these false positives from alarming administrators and filling up ticketing queues (and the steps that go with closing them).

•    FIX: Hyper-V backup fails if the password had been modified

Not bad for less than 2 months since our last release, eh?

What’s Next?

Good news: the 6.16 release is set for September/October 2018.

Key highlights for 6.16 are VMware 6.7 support and a new Super Agent with setup and configuration improvements.

Thank you!

-The Infrascale Product Team

As of April 1st, 2019, Infrascale Cloud Backup will no longer support backup of Windows XP and Windows Server 2003 endpoints

As of April 1st, 2019, Infrascale Cloud Backup will no longer support backup of Windows XP and Windows Server 2003 endpoints, regardless of the installed version of the application. For other versions of Windows, we recommend installing the latest updates.

Additionally, as of April 1st, 2019, Infrascale Cloud Backup will no longer support outdated application versions of desktop applications: Windows clients below v6.8 and Mac clients below 3.7.

What is happening and why?

As we release new versions of our Cloud Backup software to include additional features, better performance, and enhanced security, these versions are not always compatible with older operating systems. In fact, Microsoft stopped supporting Windows XP in April 2014 and Windows Server 2003 in July 2015.

Currently, Windows XP and Windows Server 2003 use the TLS v1.0 protocol with the 3DES and AES128 ciphers (encryption algorithms), which pose known vulnerabilities. To learn more about why Microsoft is encouraging its users to update from TLS v1.0, please read this blog from Microsoft:

The Transport Layer Security (TLS) protocol is intended to serve as a secure link between a client machine and the server or Web application. While Infrascale supports a variety of other, more secure TLS protocol versions, we have decided to stop supporting TLS v1.0 due to security concerns.

Phasing out outdated versions of our software and disabling vulnerable 3DES and AES128 ciphers allows us to strengthen the security of your data.
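For readers curious what enforcing a modern TLS floor looks like in practice, here is a generic sketch using Python’s standard library (our illustration, not Infrascale’s actual server configuration):

```python
# Generic illustration of enforcing a TLS 1.2+ floor; not
# Infrascale's actual server configuration.
import ssl

context = ssl.create_default_context()
# Refuse TLS 1.0 and 1.1 handshakes outright.
context.minimum_version = ssl.TLSVersion.TLSv1_2
# Clients limited to TLS 1.0 (such as Windows XP / Server 2003)
# can no longer negotiate a session with a server configured this way.
```

A server configured like this simply fails the handshake with legacy clients, which is why updating the endpoint OS (Option A below) is the preferred path.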

The National Institute of Standards and Technology (NIST) advises all users to migrate to stronger ciphers:

Your Options

Option A (Preferred): If you are running Infrascale Cloud Backup on a Windows XP or Windows Server 2003 computer, you need to update your operating system to Windows Vista or a later version by following the instructions on Microsoft’s support site.

Option B: You can put the data on a network share that is accessible from other computers in your network. Infrascale’s Online Backup and Recovery Manager can then be installed on another computer, and you can continue backing up your data from the network share.

Also, please make sure that you are running the latest version of Infrascale software, which can be downloaded here:

If you have any questions, please email support at

Summer 2018 Release – Infrascale DR v6.14

Infrascale Disaster Recovery (IDR) version 6.14 is our big summer release, and has much to offer our customers and partners in terms of improved quality-of-life changes, security features, performance improvements and bug fixes. We’d like to start by thanking all our partners, especially those that worked with us to find solid and timely solutions to not just issues, but to overall usability improvements.

The IDR v6.14 release is scheduled for public availability July 2nd, 2018.


Our partners and customers will be happy to know that, in our lab, our teams were seeing massive (up to 5X) speed improvements when protecting VMware environments with v6.14.

When protecting Hyper-V, we also saw significant performance improvements by enabling backup jobs to run in parallel (improvements will be greater for those with larger appliances protecting many smaller backup jobs versus those with fewer, but larger jobs).

In both cases, the performance improvements let customers get their environments protected more quickly, which means less hassle managing network and system I/O during initial and regular backups and a chance to improve recovery point objective (RPO) goals (less data loss due to more frequent backups).


Quality of Life (QoL) improvements make up the bulk of the line items in the release notes below. They range from usability improvements in the GUI, to new features that let administrators automate testing and verification of backup integrity, to time-saving additions. Enjoy.

The QoL list is highlighted by the new boot verification option. This means admins can run these tests and have automated reports with screenshots of systems running to help themselves and everyone around them sleep easy knowing the system will be there for them when it counts.

In addition, there are a ton of time-savers in here like allowing administrators to perform tasks from within the secondary appliance GUI rather than having to switch to the primary, automation during initial setup and the ability to define individual disks on VMware VMs for backup rather than being limited to selecting entire VMs.

There’s a lot here, so check out the list below:

  • NEW TIME SAVER: Mass-update appliance firmware from Dashboard (for firmware after 6.14.0)
  • NEW PEACE OF MIND: Boot Verification of backup jobs (individual jobs, stay tuned for boot orchestration verification!)
  • NEW REPORTS: Daily backup reports have added clarity regarding overall daily backups and the inclusion of the new* boot verification results
  • NEW CONTROL: Ability to select specific VMWare disks within a VM to help save on local and cloud (secondary) backup space usage
  • NEW: Support of IDR 550 appliance – stay tuned for more info on this new, little workhorse for those smaller offices
  • TIME SAVER: Auto-configure RAID during initial provisioning of appliance, no reboot required
  • TIME SAVER: Simplification of QuickStart Wizard: no “Certificate” step, Time zone/Date/Time steps are combined for easier deployment and management of multiple IDR appliances
  • TIME SAVER: Allow manual deletion of jobs from secondary appliance
  • UX IMPROVEMENT: Revisited columns in Client / Summary view based on customer feedback
  • UX IMPROVEMENT: Client/Summary shows date of last successful backup for each client
  • UX IMPROVEMENT: Number of jobs pending replication is shown in Dashboard
  • UX IMPROVEMENT: Job message logs show timestamp
  • UPDATE: Default retention for new appliances is set to 3 months
  • UPDATE: Automatically delete failed jobs after 7 days
  • UPDATE: Protected Space calculation support for various file systems, software RAID, LVM, Windows Dynamic Disks


Security has long been a pillar of strength here at Infrascale, and we’ve brought some previously “upon request” options straight to your fingertips. In addition, there are access controls that IT teams will appreciate, including the much in-demand ability to have multiple administrative logins. Check ’em out:

  • NEW: Ability to create multiple admin accounts on appliance (command line-only)
  • NEW: Email notification on login event for administrators
  • NEW: Option to require an appliance-specific password for remote access via Dashboard (that’s 2 sets of credentials, now)
  • NEW: Option to disallow Infrascale staff from accessing secondary appliance (we’ll ping you when needed)
  • NEW: Audit logs on-demand or via daily digest emails for key events–logins, DR boots, job deletions.
  • UPDATE: Email server settings support custom SMTP port and encryption
  • UPDATE: Enable/disable remote access on Dashboard credentials entry screen (enabled by default)


Every sprint, our teams dedicate a portion of their efforts to killing bugs, dead. Here are the bugs we smashed with 6.14:

  • FIX: Multiple stability improvements for MS Exchange backup and recovery
  • FIX: Auto-archive option is now working
  • FIX: We’ve prevented a number of VMWare errors that would be thrown during backup/restore by using dynamic buffer size
  • FIX: Rather than shutting off before the process finished, the auto-download firmware update now remains on until the appliance has been successfully set up
  • FIX: Resolved some issues with large files replicating (but failing) to a secondary appliance (or cloud)
  • FIX: Remote access and Support Tunnel stability
  • FIX: Sort devices alphabetically in Orchestration

Click here to join or view our IDR v6.14 Release Webinar.

There is still a lot of summer left, stay tuned for news on the next 6.15 release for even more improvements.

Thank you!

-The Infrascale Product Team

Spring Cleaning Starts Early in 2018 – Disaster Recovery Release v6.13.2


As part of our continued efforts to make using Infrascale a pleasant experience that simplifies your backup and disaster recovery lives, we’ve started spring cleaning early this year with the release of Infrascale Disaster Recovery (IDR) v6.13.2.

Again, big thanks to all of our partners and admins that helped report these issues and find resolutions.

Continue reading for a detailed explanation of the release or scroll to the bottom for the list.

For Those Protecting VMware Environments

Issues Running Hourly Backups of VMware

A large piece in the fight against data loss is your recovery point objective (RPO), or how frequently your backups run. More frequent backups mean your risk of data loss is reduced; that’s good. However, many partners trying to reduce that risk to a mere hour reported JVM crashes in VMware environments, requiring a manual full backup to resolve the issue. We’ve fixed this and are happy to say that you can now run hourly backups without concern.

VMware Snapshots Causing Storage Issues

Next were reports of production storage being consumed by leftover VMware snapshots. Occasionally, our system left these snapshots behind rather than cleaning them up, leaving some rather tedious work for admins. We’ve fixed the automated clean-up of these snapshots, so you’ll no longer run into this problem.

Appliance Disconnects from VMware After Reboot

Classic tech-support steps: is it plugged in? Try restarting it. In many cases, we found that primary appliances would not automatically reconnect to VMware after a reboot, meaning no backups would run. The result was an influx of monitoring errors that backup jobs were skipped, sending support into a frenzy. We’ve fixed this so reboots no longer require a manual reconnection; after you update to v6.13.2, VMware will automatically reconnect.

Additionally, we improved memory usage during VMware backup, so your backups should perform a bit faster now with fewer memory peaks.

And Now, the Bulk of the Release

Remote Access Goes Dark After a Connection Interruption

You’re working on a recovery, test or real, and suddenly you lose remote access, cue heart palpitations and expletives. To reconnect, you had to go to the primary and either restart the whole appliance or disable and re-enable remote access. That’s a huge problem if you’re not on site, which is most cases, and it costs time, and time is money, especially in a real downtime scenario. We’ve added some logic on our end to prevent this from happening.

When accessing remote VMs after running cloud boot, admins would receive timeouts on the seventh session and beyond. We’ve raised the limit on how many remote access windows you can open from the Dashboard at one time. You’ll still want to keep an eye on your machine’s performance as you increase the number of sessions, but now you can launch as many as you like.

“Unknown” Status on Appliance Page

Similar to the remote access fix, those scary instances of “lost” appliances were also resolved. If a connection interruption occurred between the appliance and the cloud infrastructure, admins would simply receive a shoulder shrug from the Dashboard: no monitoring data, no usage data, nothing. Previously, the fix was to reboot the appliance. We’ve both changed the behavior so that your appliance doesn’t disappear in such a case, and put in work to ensure that connections are more stable.

Backups Stop with error “VimSDK Error: Bad Parameters of Function”

There were some reports from the community of a “VimSDK error: bad parameters of function” starting to pop up. We found that the issue occurred when Windows provisions a disk with a partition larger than the disk, causing backups to fail with the aforementioned error. Our system now recognizes this occurrence and continues with backups as before.

Can’t boot inconsistent NTFS Volumes

In the scramble after hard-resetting a production server, administrators often need to run a system utility, ChkDsk, to put the system back into a consistent state. If admins didn’t get the chance before a backup ran, our system would be unable to boot that or any subsequent version. While we can’t make things nicer on the Windows side, we did add a pre-boot check for inconsistency, and, if necessary, we run the ChkDsk utility so the boot performs as expected.
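The pre-boot check amounts to logic like the following sketch (the helper names are hypothetical stand-ins; the real appliance inspects the NTFS dirty state and invokes ChkDsk internally):

```python
# Sketch of a pre-boot consistency check. volume_is_dirty() and
# run_chkdsk() are hypothetical stand-ins for the appliance's internals.
def prepare_volume_for_boot(volume, volume_is_dirty, run_chkdsk):
    """Repair an inconsistent NTFS volume before booting, if needed."""
    if volume_is_dirty(volume):
        run_chkdsk(volume)  # bring NTFS back to a consistent state
        return "repaired"
    return "clean"
```

The check adds a little time only when the volume is actually inconsistent, so clean volumes boot as fast as before.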

Primary appliance stops working if the secondary is running a different version

For paired-appliance setups that are not replicating to Infrascale’s cloud, there was no automated update on the secondary appliance upon updating the primary. This caused the backups to fail as well as any replications, loading up your ticketing queue with a ton of errors. We’ve now automated the update of the secondary appliance once you’ve updated your primary.

Sluggish Backups after a Firmware Update

In a few cases, we had reports of extremely sluggish backup performance after a firmware update. We found an error that moved a vital catalog off the solid-state drive (SSD) and onto the primary storage drives. While we can’t automate the fix, we have put in place a warning telling the administrator to contact Infrascale support so we can dig in and move the catalog back to the right spot.

Unable to Download Files via “Browse and Restore”

The granular file recovery from the cloud appliance didn’t work. This is obviously a super-critical issue and we commend both the reporter and our team for jumping on it ASAP.

Hyper-V Recovery Speed and Bandwidth Improvement

During a backup, we protect only the data that exists and make a note of the empty blocks on each volume. But, during recovery, we were transferring these ’empty’ blocks. Transferring an empty block isn’t so bad, but transferring millions of them could significantly impact recovery time and waste valuable download bandwidth. We’ve changed the behavior to simply no longer send the empty blocks, and provide instructions for the recovery engine to provision as many empty blocks on each volume as when it was protected.
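The before-and-after behavior can be modeled with a small sketch (the data structures here are our own illustration, not the actual backup format): only non-empty blocks are transferred, and the recovery side provisions the empty ones locally.

```python
# Illustrative model of the optimization: transfer only non-empty
# blocks plus a block count; the recovery engine provisions the rest.
def pack_volume(blocks):
    """Backup side: record total size and only the non-empty blocks."""
    data = {i: b for i, b in enumerate(blocks) if b != b"\x00"}
    return {"total_blocks": len(blocks), "data": data}

def restore_volume(packed):
    """Recovery side: provision empty blocks locally, fill in real data."""
    blocks = [b"\x00"] * packed["total_blocks"]  # no download cost
    for i, b in packed["data"].items():
        blocks[i] = b
    return blocks

original = [b"\x00", b"A", b"\x00", b"B"]
assert restore_volume(pack_volume(original)) == original
```

On a mostly-empty volume, the transferred payload shrinks to roughly the occupied data, which is where the recovery-time and bandwidth savings come from.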

Unnecessary Job Replication

This is another, yet larger, bandwidth saver. If you had an appliance running without replication, then down the line began replicating offsite, your appliance might have been unnecessarily replicating data that would just be removed due to retention settings for the job. To resolve this, we now check the retention settings before each replication event begins, and, if the data is set to be deleted upon arrival, we simply cancel the job instead of replicating it. The replication status will indicate that the job was cancelled due to retention policies.
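The pre-replication check boils down to something like this sketch (field names are illustrative, not the actual implementation):

```python
# Sketch of the pre-replication retention check; names are illustrative.
from datetime import date

def should_replicate(job_created, today, secondary_retention_days):
    """Skip replication if the job would be purged on arrival."""
    age = (today - job_created).days
    return age <= secondary_retention_days

# A 100-day-old job with 30-day secondary retention is not worth sending:
assert not should_replicate(date(2018, 6, 1), date(2018, 9, 9), 30)
```

Cancelling before transfer, rather than deleting after arrival, is what saves the bandwidth.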

Release Notes

  • FIX: JVM crash during frequent incremental VMWare backups
  • FIX: Background cleanup of VMWare snapshots that were left behind
  • FIX: Do not replicate jobs that will be deleted remotely due to retention
  • FIX: Remote access may become unavailable after interruption of network connection between appliance and cloud infrastructure
  • FIX: Unable to open more than 6 remote access windows from Dashboard at the same time
  • FIX: Stalled information and “Unknown” status on Appliances page in Dashboard after interruption of network connection between appliance and cloud infrastructure
  • FIX: Restore of Hyper-V VMs only transfers information inside the disk image and doesn’t transfer empty blocks
  • FIX: Proper DR Image backup of partitions that are outside of disk bounds (“VimSDK error: bad parameters of function”)
  • FIX: Primary appliance stops working after some time if secondary is on incompatible version
  • FIX: Notification on Appliance UI if Catalog volume is not on SSD
  • FIX: Always reconnect to VMware after reboot of appliance
  • FIX: Unable to “Browse and Restore” files from cloud appliance
  • FIX: Unable to perform boot of Windows machines with inconsistent NTFS
  • FIX: Memory leak in JNI during backup of VMWare