Picking up from last week’s post on Preparing to Migrate to a Secure Cloud, the first part of our Azure Secure Cloud Migration blog series, we’ll jump right into how and why the client’s Azure network was architected. Connectivity and traffic flow between Azure, on-premises locations, and even business partners are things to consider immediately, as they impact the overall structure of the environment.
One of the first considerations when migrating to Azure is the region in which the environment will reside.
- Regions that are geographically distant from you introduce network latency, which can impact latency-sensitive tasks (e.g. database calls, remote debugging, and small transactions).
- Always consider the full round trip path of your application flows, not just from point A to point B.
- Azure ExpressRoute can help provide more stable, high-bandwidth connectivity, but latency remains a physical constraint—there is no magic bullet.
- Regions dictate the features and services available to your deployment; feature richness may come at the cost of increased latency.
In our recent client engagement, factors such as satellite office locations, VM sizing requirements, and available services all weighed into the decision. In the end, because the deployed environment was mostly internet facing, increased latency was an acceptable tradeoff for feature richness.
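Before committing to a region, it’s worth measuring the latency tradeoff empirically rather than guessing. Below is a minimal sketch that times TCP connection establishment to a candidate endpoint—the hostnames in the commented example are illustrative placeholders, not official Azure addresses:

```python
import socket
import time

def tcp_connect_rtt(host: str, port: int = 443, samples: int = 5) -> float:
    """Return the median TCP connect time to host:port, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        # Connect time approximates one network round trip to the endpoint.
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return sorted(times)[len(times) // 2]

# Compare candidate regions before deciding; these hostnames are
# placeholders for endpoints you would stand up in each region.
# for host in ["probe-eastus2.example.com", "probe-westeurope.example.com"]:
#     print(host, f"{tcp_connect_rtt(host):.1f} ms")
```

Remember the earlier point about full round-trip paths: measure from every location that matters (offices, partners, users), not just from your desk.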
Creating multiple virtual networks (VNETs) depends heavily on the requirements of a deployment. To meet PCI requirements and limit PCI scope, we were required to segregate card holder data from the rest of the environment. We took the following approach with scope limitations in mind:
- Individual VNETs would function as independent network segments and would not provide connectivity without a VPN tunnel, effectively limiting compliance scope. We carefully considered what constituted the PCI cardholder data environment (CDE) and made it a discrete VNET.
- Subnets within a VNET were used as a logical application function grouping, making it easier to write and maintain network security groups (NSGs).
- A thoughtful design of VNET boundaries, along with NSGs, can limit compliance scope, and generally make a more manageable environment.
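Address-space planning for VNETs and subnets is easy to get wrong by hand. The sketch below uses Python’s standard `ipaddress` module to check that proposed subnets fall inside a VNET’s address space and don’t overlap one another; all CIDR ranges shown are illustrative placeholders:

```python
import ipaddress

def validate_subnets(vnet_cidr: str, subnet_cidrs: list[str]) -> list[str]:
    """Return a list of problems: subnets outside the VNET, or overlapping."""
    vnet = ipaddress.ip_network(vnet_cidr)
    subnets = [ipaddress.ip_network(c) for c in subnet_cidrs]
    problems = []
    for s in subnets:
        if not s.subnet_of(vnet):
            problems.append(f"{s} is not contained in VNET {vnet}")
    # Check every pair of subnets for overlap.
    for i, a in enumerate(subnets):
        for b in subnets[i + 1:]:
            if a.overlaps(b):
                problems.append(f"{a} overlaps {b}")
    return problems

# Placeholder plan: per-application-function subnets inside one VNET.
print(validate_subnets("10.10.0.0/16",
                       ["10.10.1.0/24", "10.10.2.0/24", "10.20.1.0/24"]))
# → ['10.20.1.0/24 is not contained in VNET 10.10.0.0/16']
```

A discrete CDE VNET, as described above, would simply be a second call with its own non-overlapping address space.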
Making sure you fully understand the segmentation, scalability, and overall network requirements of a cloud deployment is a crucial first step, and lays the foundation for the rest of the environment.
Most environments require secure and reliable connectivity between the cloud and their on-premises locations. It’s also important to identify which 3rd parties require VPN access, since these are endpoints you have no administrative control over.
- VNET to VNET, and VNET to on-premises connectivity: The native Azure VPN gateways provide simple, solid, and easy-to-manage connectivity at a reasonable price point, but have stringent support requirements. In nearly all scenarios a route-based gateway is your only real option, and newer-generation hardware at the office locations is required for compatibility.
- Azure VNET to 3rd party: While Azure VPN gateways are reliable and suitable in most cases, connectivity to remote parties often requires a broad selection of features and encryption options. In our experience, nearly every business uses a Cisco ASA to provide VPN connectivity, a device not supported by route-based gateways. For the limited number of 3rd-party connections required, we chose to deploy a Cisco CSR 1000v router from the Azure Marketplace as a ‘Swiss army knife’ of connectivity.
- 3rd party connectivity often becomes a ‘battle of wits’ to get configured correctly, with debugging information needed on both sides to help determine the missing step. Because Azure VPN gateways behave as ‘black boxes’ and provide little or no troubleshooting information, a more traditional VPN appliance can eliminate a lot of the headaches that come with coordinating connectivity outside your control.
- Azure ExpressRoute providers were selected for any physical circuits at office locations, allowing dedicated circuits to easily replace VPN connectivity as bandwidth requirements grew and organizations moved to more cloud-dependent systems.
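The tradeoffs in the list above reduce to a small decision table: peers we control that support route-based IPsec go to the native gateway; everything else lands on the full-featured NVA. A sketch of that logic (the class, field names, and return labels are our own illustrative simplification, not an Azure API):

```python
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    supports_route_based: bool        # can the remote device do route-based IPsec?
    administratively_controlled: bool  # do we manage the remote endpoint?

def choose_vpn_endpoint(peer: Peer) -> str:
    """Illustrative decision logic distilled from the tradeoffs above."""
    if peer.supports_route_based and peer.administratively_controlled:
        # Native Azure VPN gateway: simple, managed, reasonable price point.
        return "azure-vpn-gateway"
    # Peers we don't control, or that can't do route-based IPsec (such as
    # the Cisco ASA deployments mentioned above), terminate on a
    # full-featured NVA like the CSR 1000v, which also exposes real
    # debugging output during tunnel negotiation.
    return "nva-router"

print(choose_vpn_endpoint(Peer("branch-office", True, True)))   # → azure-vpn-gateway
print(choose_vpn_endpoint(Peer("partner-asa", False, False)))   # → nva-router
```

In practice each branch also carries the operational notes above: the native gateway is cheaper to run, while the NVA buys troubleshooting visibility for connections outside your control.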
While most cloud services can be easily deleted and re-provisioned, the overall architecture needs to meet all of an environment’s requirements from the beginning. Our approach provided a mix of administrative ease for internal components, while providing rich feature-sets for external connectivity outside our control.
Stay tuned for an in-depth look at implementing, and more importantly, managing network security within Azure.