In our last post, “Managing Administrative Access to an Azure-based Cardholder Data Environment,” we outlined ways to secure administrative workflows by using various Azure technologies. We’ll resume with part eight of the Azure Secure Cloud Migration blog series, covering considerations for backing up VMs in Azure and all the associated restrictions and caveats.
Device backup and backup infrastructure efforts are usually underestimated, and they become especially complicated during projects like mergers, separations, and other infrastructure migrations. One question we consistently get is, “We’re going to the cloud, so my data will be backed up, right?” Unfortunately, there is no easy or straightforward answer. Typical backup requirements call for multiple copies of data, and some of those copies often must be stored off-site (somewhere geographically separate from your production workload). Luckily, in an IaaS architecture you can choose, by design, which region of the world stores your data and/or VMs, which satisfies that part of a backup strategy. In addition, most cloud providers offer supplementary redundancy at an additional cost, so that if you choose to store your backups in a different region than your production workload, they won’t be affected by a regional outage.
Recently, we migrated a client’s infrastructure to Microsoft’s cloud platform, Azure. Due to the nature of the client’s business model and standard operating procedures, backups were a crucial part of their SaaS platform as well as their disaster recovery (DR) and business continuity (BC) plans. We deployed servers using Azure Resource Manager (ARM) to future-proof the environment, per Microsoft’s recommendation to use ARM over Azure Service Manager (ASM, or “Classic Mode”) objects. One challenge we had to overcome (at the time) was that Azure did not offer full server backups for ARM objects; instead, it offered only file and folder backups using Backup Vaults.
It should be noted that in the time since our deployment, Microsoft has released Recovery Services Vault (RSV) functionality for ARM objects. What once was a manual combination of Classic Mode services and objects is now, for the most part, an automated process. One major benefit of RSVs is that they support Resource Manager Mode objects natively, including full server backups for ARM objects. This means no more dealing with file and folder backups or Classic Mode Backup Vault objects. RSV also includes the ability to push the backup client to servers instead of installing it manually.
In Azure, the concept of a Backup Vault is precisely what it sounds like: a vault that contains backups of one or many servers. In order to use Backup Vaults with ARM objects (since Backup Vaults are a Classic Mode object/service), we first had to add the Backup Vault ARM extension to our Azure subscription (via PowerShell) to allow the Backup Vaults to communicate with our ARM objects. Once our subscription had the extension added, we were able to create vaults that were compatible with ARM objects. We created a Backup Vault for every unique service to keep things simple and structured (e.g., Domain Controller Backup Vault, SQL Server Backup Vault, File Server Backup Vault, etc.). One of the available options we could set on our vaults was how the data was stored. For server backups, the choice was between locally redundant storage, which keeps copies within a single region, and geo-redundant storage, which also replicates backup data to a paired region.
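The steps above can be sketched in PowerShell. This is a minimal illustration using the AzureRM-era cmdlets that matched the Backup Vault service at the time; the resource group, vault names, and region are placeholders, and current Az modules replace these with Recovery Services cmdlets:

```powershell
# Register the Backup resource provider so classic Backup Vaults can
# work with ARM objects (one-time operation per subscription).
Register-AzureRmResourceProvider -ProviderNamespace "Microsoft.Backup"

# Create one vault per service tier, with geo-redundant storage so
# backup data is replicated to a paired region.
$services = @("DomainController", "SqlServer", "FileServer")
foreach ($svc in $services) {
    New-AzureRmBackupVault -ResourceGroupName "rg-backups" `
        -Name "$svc-BackupVault" `
        -Region "East US" `
        -Storage GeoRedundant
}
```

One vault per service keeps registration, reporting, and recovery-key management scoped to a single workload type rather than the whole environment.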
After our Backup Vaults had been created, we needed to deploy the Microsoft Azure Recovery Services (MARS) agent to all servers in scope and register each server to a specific vault (e.g., SQL servers registered to the SQL Server vault). During the registration process, we needed to create a unique recovery password per server. This password is not stored on any Microsoft server, and Microsoft cannot recover it for you. This means that if we lost access to a server’s unique recovery key and couldn’t log in to the server to reset it, we lost access to all of that server’s backup data. It is imperative that this key be stored in a secure location, such as Azure Key Vault, and that it can be correlated with the server it belongs to. We could have used the same key for all of our servers, but if someone were to gain access to that key, it would in turn grant them access to all backup data instead of a limited portion of the environment.
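One way to script this per server is sketched below. This is an illustration, not our exact tooling: the Key Vault name, credentials path, and passphrase length are assumptions, and the `Start-OBRegistration`/`Set-OBMachineSetting` cmdlets come with the MARS agent (its parameter names have varied across agent versions):

```powershell
# Run on each server after installing the MARS agent.
$server = $env:COMPUTERNAME

# Generate a unique recovery passphrase for this server (40 characters,
# comfortably above the agent's minimum length).
Add-Type -AssemblyName System.Web
$passphrase = [System.Web.Security.Membership]::GeneratePassword(40, 8)
$secure = ConvertTo-SecureString $passphrase -AsPlainText -Force

# Record the passphrase in Key Vault, keyed by server name, BEFORE
# applying it -- Microsoft cannot recover this value for you.
Set-AzureKeyVaultSecret -VaultName "kv-backup-keys" `
    -Name "mars-passphrase-$server" -SecretValue $secure

# Register the server against its vault and set the encryption passphrase.
Start-OBRegistration -VaultCredentials "C:\creds\SqlServerVault.VaultCredentials"
Set-OBMachineSetting -EncryptionPassphrase $secure
```

Naming each secret after its server is what makes the key-to-server correlation trivial when you need to restore months later.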
Domain Controller (DC) Backup
Due to Backup Vaults having limited functionality with ARM objects, we had yet another challenge to overcome: backing up domain controllers without being able to do a system state backup. We solved this by attaching an additional data drive to each DC strictly for backups and using Microsoft’s native Backup and Restore tool to write full DC backups to that drive. We then configured the MARS agent to back up the files on that backup drive to the DC’s respective Backup Vault.
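The two-stage approach looks roughly like this on each DC. The drive letter and task name are assumptions; `wbadmin` is the command-line front end for Windows Server Backup:

```powershell
# Stage 1: write a full backup (including system state) of the DC to the
# dedicated backup drive. The MARS agent then backs up F:\ to the vault
# on its own schedule.
wbadmin start backup -backupTarget:F: -allCritical -systemState -quiet

# Optionally schedule the local backup nightly (task name is illustrative).
schtasks /Create /TN "DC-NightlyBackup" /SC DAILY /ST 01:00 `
    /TR "wbadmin start backup -backupTarget:F: -allCritical -systemState -quiet" `
    /RU SYSTEM
```

Scheduling the local `wbadmin` run to finish before the MARS backup window ensures the vault always captures a complete, current system state image.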
Once we deployed the MARS agent to all of our servers and triggered the first round of backups, we were able to log in to the Azure classic portal and see a dashboard view of all the Backup Vaults. This view includes data such as the number of servers attached to each vault, total backup size, and backup schedules. The portal also lets you create a new vault and quickly download the latest MARS agent.
When architecting your backup solution, it is crucial to understand the platform’s limitations in order to construct a solution that meets all of your requirements. It’s important to remember that deploying your infrastructure in the cloud does not automatically protect you from data loss.