Last week, we covered the segmentation of the client environment using Azure’s Network Security Groups. For an overview of the Azure Secure Cloud Migration blog series and a list of the topics being covered, see the introductory post, Preparing to Migrate to a Secure Cloud. This week, in part 4 of the Azure Secure Cloud Migration blog series, we’ll cover the implementation of hard disk encryption using BitLocker in Azure.
BitLocker is a part of the Windows operating system, so it isn’t unique to Azure. However, implementing it in Azure is different from doing so on-premises. At its core, BitLocker works just as it would on a physical workstation or server. The big difference is that without a trusted platform module present in the virtual hardware, or a way for a user to reach a boot screen or console window to type in a password, there’s no way to provide a startup key or password to unlock an encrypted OS disk. A combination of an Azure VM extension and the Azure Key Vault (AKV) addresses this problem: the extension emulates USB storage at boot time to unlock the OS disk, and the BitLocker Encryption Keys (BEKs) are stored securely in the key vault as AKV “secrets.”
Environment configuration is simple: we created and configured an AKV, marking it as usable for BitLocker key storage, then created an Azure Active Directory (AAD) application with an application secret and granted it permission to write to the key vault.
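As a rough sketch of that setup (vault, resource group, and application names are placeholders, and the cmdlets are from the AzureRM PowerShell module current at the time of writing), the configuration looked something like this:

```powershell
# Create a key vault and allow the Azure platform to use it for disk encryption
New-AzureRmKeyVault -VaultName "contoso-kv" -ResourceGroupName "contoso-rg" `
    -Location "East US" -EnabledForDiskEncryption

# Create the AAD application (and its service principal) that VMs will authenticate as
$app = New-AzureRmADApplication -DisplayName "DiskEncryptionApp" `
    -HomePage "https://localhost/DiskEncryptionApp" `
    -IdentifierUris "https://localhost/DiskEncryptionApp" `
    -Password "<application-secret>"
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId

# Grant the application permission to wrap keys and write secrets (the BEKs)
Set-AzureRmKeyVaultAccessPolicy -VaultName "contoso-kv" `
    -ServicePrincipalName $app.ApplicationId `
    -PermissionsToKeys "wrapKey" -PermissionsToSecrets "set"
```

The access policy is deliberately narrow: the application only needs to write BEK secrets (and wrap them, if a KEK is used), not read them back.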
With that configuration in place, a single-line PowerShell command installs a VM extension on each IaaS VM. This command also specifies the AKV that will store the BEKs and the AAD application the VM should use to write to the key vault. If the BitLocker feature is not already installed, the process adds it automatically. After a reboot, the VM starts encryption.
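That single enable step looks roughly like the following (again a sketch with placeholder names, using the AzureRM cmdlet available during the preview):

```powershell
# Look up the vault so we can pass its URL and resource ID to the extension
$kv = Get-AzureRmKeyVault -VaultName "contoso-kv" -ResourceGroupName "contoso-rg"

# Install the disk-encryption VM extension and point it at the key vault;
# this triggers the reboot after which encryption begins
Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName "contoso-rg" -VMName "contoso-vm01" `
    -AadClientID "<aad-application-id>" -AadClientSecret "<application-secret>" `
    -DiskEncryptionKeyVaultUrl $kv.VaultUri -DiskEncryptionKeyVaultId $kv.ResourceId `
    -VolumeType "All"
```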
After rebooting, you’ll notice a new, very small USB drive attached to each encrypted VM that contains the BEKs for any encrypted volumes on that VM. It is important to note that this USB drive is not exposed as a virtual disk on an Azure storage account, so it can’t be downloaded along with the encrypted VHDs to gain access outside of Azure. And because BEKs are also stored in the AKV, if this drive is ever damaged or the keys are ever lost, the VM extension can “heal” the VM by automatically retrieving the BEKs from the key vault and replacing them on this emulated USB storage. During testing, after purposely deleting the keys from the USB drive, all that was required to bring the server back online was to stop (deprovision) and restart it. After a few minutes, the VM had healed itself and was able to boot again.
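Because the BEKs live in the vault as secrets, you can verify both sides of this arrangement from PowerShell. A sketch (the `DiskEncryptionKeyFileName` tag filter reflects how the extension tagged BEK secrets in our vault, and may vary):

```powershell
# Check encryption progress and status on the VM
Get-AzureRmVMDiskEncryptionStatus -ResourceGroupName "contoso-rg" -VMName "contoso-vm01"

# List the BEK secrets the extension has written to the vault
Get-AzureRmKeyVaultSecret -VaultName "contoso-kv" |
    Where-Object { $_.Tags -and $_.Tags.ContainsKey("DiskEncryptionKeyFileName") }
```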
There is an additional option to wrap your BEKs with a Key Encrypting Key (KEK), but because Azure BitLocker is still a preview feature, the APIs for interacting with the key vault are still works in progress. As we were implementing, there wasn’t a method (PowerShell, REST, or otherwise) to “unwrap” an AKV secret. There are REST methods for unwrapping AKV keys, but since BEKs are secrets (not keys), wrapping them in a KEK means they’re effectively inaccessible to you until the API grows an unwrap mechanism for key vault secrets. In a true emergency where the AKV is unavailable, this means encrypted Azure hard disks could not be recovered by unwrapping the BEK and decrypting the data yourself. Although the scenario where we’d need to download Azure VHDs and manually unlock them with BEKs is pretty unlikely, it’s still something we considered, and eventually we opted not to wrap the BEKs.
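For reference, the KEK option we decided against is just two extra parameters on the same cmdlet. A hedged sketch with placeholder names:

```powershell
# Create a software-protected KEK in the vault
$kv  = Get-AzureRmKeyVault -VaultName "contoso-kv" -ResourceGroupName "contoso-rg"
$kek = Add-AzureKeyVaultKey -VaultName "contoso-kv" -Name "contoso-kek" -Destination "Software"

# Supplying the KEK parameters causes each BEK secret to be wrapped before storage.
# We left these off, since wrapped secrets could not yet be unwrapped via the API.
Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName "contoso-rg" -VMName "contoso-vm01" `
    -AadClientID "<aad-application-id>" -AadClientSecret "<application-secret>" `
    -DiskEncryptionKeyVaultUrl $kv.VaultUri -DiskEncryptionKeyVaultId $kv.ResourceId `
    -KeyEncryptionKeyUrl $kek.Key.Kid -KeyEncryptionKeyVaultId $kv.ResourceId
```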
Unfortunately, implementing BitLocker on Azure VMs in this fashion doesn’t satisfy all Payment Card Industry Data Security Standard (PCI DSS) requirements related to encryption; considerations need to be made regarding key encryption, storage, and management. In our case, because BEKs are stored as secrets, they cannot be protected in the hardware-backed portion of the AKV, which fails PCI DSS requirement 3.5.2. But even though BitLocker does not meet PCI DSS requirements, implementing it is still a good idea as an extra layer of data security. Next week, we’ll cover how we implemented SQL Transparent Data Encryption, which did meet the encryption requirements of the PCI DSS.