This Fall, we have releases for two of our services, Infrascale Disaster Recovery (IDR) and Infrascale Cloud Backup (ICB).
IDR v6.16 boasts quality of life improvements, key fixes and new support for VMware v6.7!
ICB v7.3 includes many fixes, performance improvements and remote management improvements for distributed networks, plus new support for Windows Server 2019 and MS SQL 2017.
Again, big thanks to our Partners, who continue to be a cornerstone of our mission to eradicate downtime and data loss for businesses of all sizes.
And of course, hats off to our Product and Dev teams for their agile performance and turn-around time!
VMware 6.7 Support (IDR 6.16)
This is a big but easy one to describe. While most functions remained stable after the update to VMware v6.7, there were a handful of workarounds we employed so our administrators could regain full and consistent functionality. With IDR v6.16, all of the functionality and performance experienced with previous versions of VMware has been restored, with the added benefits of the items listed below.
Custom QEMU Commands during LocalBoot (IDR 6.16)
Like any high-performance machine, it’s sometimes handy to get in and make some tweaks before hitting the road. Administrators can now do this without editing a config file each time, by configuring custom QEMU commands to run with boot operations.
Define Client Schedules During Client Creation (IDR 6.16)
This is one of those simple but important time-savers. Rather than configuring a client, THEN going back to edit the schedule, we’ve combined these steps into a single, fluid motion.
“Legal Hold” backup/DR jobs (IDR 6.16)
Whether for a legal hold or any other reason, sometimes you just need to stop a backup from getting purged by the retention policies you have set. Administrators can now “pin” backup jobs to do exactly this. When an administrator “pins” a job, they are able to set the time-frame before the job would cycle back into the regular retention policy for that particular location (primary or secondary location).
*Note: this feature, like retention policies, is set per backup location, either primary or secondary. We note this because most services in the market require identical retention periods for replication services, but we figured we’d let you decide.
Improved Mass Deployment/Management for Cloud Backup (ICB v7.3)
We continue to find great success with our administrators using ICB to protect data at the point of data creation, rather than limiting BCDR plans to central servers. With this growing trend, a major differentiator compared with traditional DR vendors, we’ve received many improvement requests to keep this growth going strong.
Details can be found in the table below, but there were a number of improvements to help administrators protect data across all devices in their respective businesses. These include streamlined deployment workflows, changes to required permissions that help when password resets are in order, and fixes for protecting personal folders when the agent is deployed with administrative credentials.
Full List –
The table below contains the full release notes, and while there are too many to highlight, they add up to yet another major quality of life improvement for our techs and their customers.
|FIX: Weekly appliance report could not be generated
|Some customers were unable to get helpful weekly reports and would need to manually gather the information. This is no longer an issue. This bug fix saves 4-6 steps per report.
|FIX: Update of Agent installed on client could not be completed due to service not stopping properly
|In this case, it would require additional steps and time for the administrator to manually stop the service before restarting the update. This bug fix saves roughly 6 steps per agent.
|FIX: DR backup could fail sometimes
|In some cases, a DR backup would fail, which triggers alerts and additional follow-up. The causes are varied, and most fixes were simply to re-run a backup. We’ve fixed what seems to be central to the issue, saving roughly 4-6 steps per event.
|UPDATE: New technology for Browse&restore function – increased list of supported file systems, stability, multiple bug fixes
|Before this technology was introduced, some newer file system types could not be browsed during restore. Restoring one or a few files is now drastically faster, since customers no longer need to restore the whole system or mount a non-browsable disk.
|UPDATE: Full support for VMware 6.7
|VMware 6.7 is now fully supported, whereas before, some operations required workarounds.
|NEW: Ability to add DR schedule during creation of Client, column shown by default now in the list of Clients
|Before, these were separate steps. Setting up new clients is now 4-6 clicks shorter.
|FIX: Some jobs could not be deleted by retention
|Retention periods are important to reduce unnecessary usage on both the primary and secondary appliances. Now, all jobs can be removed by policies unless exempted by the “pinning” feature.
|FIX: Hardware Monitoring section did not show CPU and Hard Disk information on some models
|Fixed so all data shows for all models.
|UPDATE: Support for local boot of DR images /VM backups of clients with disks with 32 sectors per track
|Customers with specific hardware models and disk geometries are now fully protected: we will be able to boot their machines in the event of a DR failover.
|NEW: Ability to specify custom QEMU commands during LocalBoot
|In very specific cases, local/cloud boot settings need tweaks made by support inside the appliance, and those tweaks were not persistent. With this update, these settings can be stored without the need to edit configuration files.
|FIX: Automatically set correct CPU model during LocalBoot
NEW: Ability to change CPU model during LocalBoot
|During a DR restore, we automatically recommend and apply settings based on the machine being booted, reducing the chance of booting a machine into an environment that will underperform while also saving time. We’ve improved this automation and extended customization of boot settings, previously available only for cloud boots, to restores on the local appliance as well.
|REMOVED: Autoarchive of differential and incremental levels of backups on Virtual Machine
|All VM backups are now treated as fulls. The old behavior was inconsistent, and this change is a preparation step for the Retention 2.0 policy (coming in 6.17).
|FIX: Disabled ability to edit clients on secondary appliance
|Editing clients on the secondary appliance had no effect and caused confusion, so the option has been disabled.
|FIX: Hide Boot Verification images in the list of clients when Boot Verification is turned off
|If Boot Verification was turned off after running for some time, very old verification results would still be shown, which was misleading to customers.
|NEW: Ability to connect to appliances inside local network with https://devices.infrascale.com
|This removes the need to connect a keyboard and mouse during initial setup just to learn the IP address of the appliance. After it is powered on, any appliance becomes visible on devices.infrascale.com.
|FIX: Ability to bond network interfaces on some models of appliances
|The new 550 appliance, which does not have local boot capability, had some network settings disabled that were required to increase connection speed by bonding network interfaces.
|UPDATE: Set “date until” to +10 years from the current moment for previously pinned jobs
|Many compliance requirements concern the length of data retention, and the “pinned jobs” feature released earlier this year did not allow jobs to be excluded from retention policies long enough, requiring a reminder and manual adjustment of the “pin” after some amount of time.
|FIX: Unable to boot VM with dynamic disks in certain circumstances
|This was a major fix for customers in this situation, as they would have to go through a much more traditional baremetal recovery process rather than a quick, easy DR recovery flow.
|UPDATE: Reduce amount of storage used for performing DR backups
|This helps reduce the need to upgrade primary appliances due to the need for temporary storage of files during DR backup procedures. The temporary storage footprint is now significantly smaller. The result of not having enough space would be that backup jobs would fail, or run much more slowly and require a storage upgrade.
|UPDATE: Preserve disk order during LocalBoot on the Appliance or in the Cloud
|Some VMs have a large number of disks, and in some cases our boot process could not detect the correct boot disk, leading to boot failure. We have implemented a more sophisticated detection algorithm, so boot failures will decrease.
|FIX: Ability to use UK keyboard layout
NEW: Ability to choose keyboard layout for locally booted machine
|Keyboards aren’t all the same. The default option was the US ENGLISH QWERTY keyboard layout, and we now support the UK ENGLISH QWERTY keyboard.
|UPDATE: More concise description of requirements for performing LocalBoot on Virtual Appliance
|We have removed a pain point: customers trying to boot an image inside the Virtual Appliance were only told that not enough resources were assigned. The message now states exactly how much memory and CPU are required.
|UPDATE: Correct link to the Knowledge Base in Support tab of Appliance
|Customers are now routed to the maintained and updated version of Infrascale’s Knowledge Base.
|FIX: VMware backup of a powered-off machine could sometimes fail
|If a VM within VMware was powered off, the backup would occasionally fail. This is common for VMs created as templates but not regularly used.
|UPDATE: Show a user-friendly message in Browse and Restore if no data can be found
|In some corner cases where we could not browse a disk and show its contents, users were confused by an empty screen. We now show a friendly message explaining the situation.
|FIX: Jobs re-imported from archive or pushed back from secondaries will not be deleted by retention at once
|With different retention settings on the primary (shorter) and secondary (longer), a job restored from archive or pushed back to the primary could have already passed its retention window and be deleted immediately. Such jobs are now automatically pinned so the user can restore and work with them.
|NEW: Ability to pin particular job until some date in future
|“Pinning” a job excludes it from retention policies for as long as the user determines when “pinning” it. This is helpful in many cases: compliance, breach investigation, employee turnover, legal disputes.
|FIX: Garbage Collection history now shows data
|Freeing space on the appliance happens during Garbage Collection; it is now possible to see how it is working.
|FIX: VMware clients now show backup progress
|Fewer clicks are needed to check backup progress; it is now visible for all clients.
|FIX: Backup of a Hyper-V guest would fail after migration to another host
UPDATE: Stability of DR backups
UPDATE: Stability of Hyper-V backups
|In general, this is a group of updates and fixes that improve the reliability and reduce the TCO when protecting Hyper-V environments as well as fixing an issue caused after Hyper-V VMs migrate to new hosts.
Previously, these issues would result in alerts/warnings that would take techs time to evaluate and resolve.
|NEW: RAID status is shown on console screen
|Many of our administrators cited this as a big help rather than having to dig around for the info, deeper in the console. Here you go!
|UPDATE: Ability to perform Linux DR backup with non-root user credentials
|Many Linux distributions now deactivate the root user by default as a security measure, so our customers could not back up such Linux machines. Backups can now be performed with any user’s credentials (after some configuration).
|UPDATE: Changed customization of replication bandwidth limit
|Some of our customers do not work Monday through Friday and wanted a more flexible way to customize the replication bandwidth limit. A reduced speed can now be scheduled on any day of the week.
|FIX: Daily report shows the correct restore point date when the last backup has failed
|The daily report was showing the first (very old) restore point instead of the latest one, creating the impression that no recent backups were available for a machine. It now shows the correct, most recent restore point.
|NEW: All data can be securely shredded when decommissioning an appliance
|We have added a script so customers can securely delete all data on an appliance when decommissioning it.
|FIX: DDFS is now unmounted immediately on shutdown
|We changed the file-deletion logic in DDFS so that the filesystem can be unmounted at once.
We’re on the cusp of some major changes under the hood that will result in simpler deployment steps and improved performance.
We’re also going to be announcing updated and new integrations with ConnectWise Automate.
-The Infrascale Product Team