Controlling Inbound and Outbound Traffic Flow in an Azure-based Cardholder Data Environment

In our previous post, we examined the trade-offs when implementing a Public Key Infrastructure (PKI) in Azure using Active Directory Certificate Services. For an overview of the blog series and a list of the topics being covered, see the introductory post, “Preparing to Migrate to a Secure Cloud”. This week, we will continue the Azure Secure Cloud Migration blog series by discussing methods to secure traffic moving in and out of your Azure environment.

Customer Access

Customer interaction with resources in a cardholder data environment (CDE) is one of the primary means by which large amounts of traffic are authorized to enter and exit a CDE, and these methods must therefore be heavily scrutinized. During this project, the SaaS application that was migrated to Azure offered its services through public web applications. Under the Payment Card Industry Data Security Standard (PCI DSS), this public access meant that the applications had to be protected against threats and vulnerabilities on an ongoing basis, and that no direct connections to the CDE from the internet could be established.

To address these requirements, we implemented a cloud-based web application firewall (WAF). The WAF served two purposes – to continually filter traffic and block layer 7 attacks like cross-site scripting, SQL injection, privilege escalation, and cross-site request forgery, and to act as a (reverse) proxy to the web servers. By configuring clustered web servers as the backend pool of an Azure load balanced set and placing the WAF in front, we created a tiered architecture that disallowed direct connections to the web servers from the internet. The WAF acted as a relay point between the CDE and the internet by making use of DNS aliasing. The hosted WAF service that was selected provided public certificates for customer-to-WAF traffic, and allowed us to upload specific certificates for WAF-to-server traffic. The figure below shows the high-level architecture that was implemented.

The deployed design allowed us to filter traffic before it was load balanced, and to further control traffic allowances with appropriate Network Security Group (NSG) rules. As previously mentioned in “Securing Cloud Networks”, we used whitelist-based NSG rules to specifically permit traffic between all the resources in the CDE. Instead of a blanket “permit ANY” statement, we created inbound rules for the blocks of IPs from which the hosted WAF service contacted our web servers. As these IPs were publicly available, the specific NSG rules were easy to implement and manage. Ultimately, this meant that no inbound NSG rule applied to a web server contained ANY/INTERNET as a source.
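A minimal sketch of such a rule pair with the Azure CLI – the IP ranges below (`203.0.113.0/24`, `198.51.100.0/24`) are placeholders that would be replaced with the WAF vendor's published egress blocks, and the NSG and resource group names are hypothetical:

```shell
# Permit HTTPS to the web servers only from the hosted WAF's published
# egress IP ranges (placeholder documentation-range addresses shown).
az network nsg rule create \
  --resource-group cde-rg \
  --nsg-name web-nsg \
  --name AllowWafInbound \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.0/24 198.51.100.0/24 \
  --destination-port-ranges 443

# Explicitly deny all other internet-sourced traffic. NSGs deny by default,
# but an explicit rule makes the whitelist intent auditable.
az network nsg rule create \
  --resource-group cde-rg \
  --nsg-name web-nsg \
  --name DenyInternetInbound \
  --priority 4096 \
  --direction Inbound \
  --access Deny \
  --protocol '*' \
  --source-address-prefixes Internet \
  --destination-port-ranges '*'
```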

Operational Requirements

Besides customer interaction, there are other services found in CDEs that require connectivity to the internet. Typically, these services are operational tools that support business processes or are services required by regulations. Oftentimes, deployment of these tools using an infrastructure as a service (IaaS) public cloud model does not differ greatly from an on-premises implementation. Below are some key things to keep in mind when configuring these services in a CDE in either case.

  • Anti-malware
    • Many anti-malware vendors give managed clients the ability to download updates from a management station, an update provider server (proxy), the internet, or a combination of these, depending on configuration. By default, most clients are configured to connect to vendor servers over the internet to download updates – if you take this approach, confirm whether a proxy server is available in your environment.
  • Managed Services
    • If you use a managed security service provider for IDS alerting/reporting (or any service, for that matter), they may require access to a management appliance for troubleshooting or applying updates. If you are unable to configure a secure reverse proxy, consider creating a micro-demilitarized zone (DMZ) – a network segment containing a single server, governed by whitelist-based NSG rules that require any traffic to be explicitly permitted. For more information about the considerations of employing a vendor in PCI DSS compliant environments, see our blog post, “Addressing the Magic Bullet – Part 1”.
  • Network Time Protocol (NTP)
    • In a CDE, the PCI DSS dictates that centralized time server(s) must obtain time from a reliable external source and that all other servers must derive their time from the central time server(s). Combine this with the requirement that only one primary function be implemented on any given server, and it follows that your primary domain controller emulator cannot serve as your central time server!
  • File Transfer Protocol (FTP)
    • Use an edge server in a micro-DMZ to handle user interactions, so that direct connections to the CDE are disallowed. Using a proxy server in this fashion ensures that the location where potential cardholder data may be at rest is not directly exposed to the internet. Take this step in addition to hardening the FTP server(s).
  • General Administrative Access
    • Because this is a very particular topic, we’ve decided to dedicate the entire next post in this blog series to discuss this. Make sure to check back next week when we take a deep dive into securing administrative access.
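The NTP hierarchy described above can be sketched on Windows with the built-in `w32tm` tool (the peer list below is illustrative – substitute your chosen reliable external source – and the commands must be run from an elevated prompt):

```shell
# On the dedicated central time server: sync from a reliable external source
# and advertise as a reliable time source for the rest of the environment.
w32tm /config /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8" /syncfromflags:manual /reliable:yes /update

# On every other server in the CDE: derive time from the internal hierarchy
# (or point manualpeerlist at the central time server directly).
w32tm /config /syncfromflags:domhier /update

# Restart the time service and verify synchronization status.
net stop w32time && net start w32time
w32tm /query /status
```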

Whether your CDE is hosted in the cloud or not, disallowing direct connections from the internet is critical. I hope this review helps you add security when designing and implementing your PCI DSS environment.
