The recent data breach at Equifax adds to a long list of hacked organizations (Target Stores, OPM, Sony, and others) whose breaches were caused by vulnerabilities in application code. These companies seemed just as surprised as their customers and had to wonder, “How did this happen?” They did not need to look further than their own unintentional coding policies. Let’s identify the four coding policies that almost guarantee a data breach, then replace them with intentional strategies that shore up security.
- Relying only on firewalls and scanners to safeguard websites.
The network connection is the first defended area of a computing system, since it is the simplest point for a hacker to attack. In the early days of the internet, websites were built rapidly and without protection against potential misuse. Eventual mischief on the internet led to the creation of firewalls, which block known forms of “bad” network requests from hackers. Because firewalls are quick to install, have a known cost, and provide a “feeling” of security, many organizations have them in place. Unfortunately, many stop there, leaving the firewall as the sole defense. That is unwise: today’s hackers can use encrypted web connections to slip right past these firewalls undetected, then use the web server itself to extract and download data.
- Requiring development teams never to alter “working code.”
Secondly, code vulnerabilities remain in place because of the developer’s adage, “if it ain’t broke, don’t fix it.” When software is updated, there is a cost to run the many tests that validate the changes. Reducing testing costs is often a priority, which results in a formal or informal policy to fix only feature bugs in production code, not vulnerabilities. The result is a growing level of cybersecurity debt: vulnerabilities in the code that still await remediation.
- Identifying vulnerabilities late in the software development process.
Thirdly, coding teams are measured by the number of valued features added to new software, and much of their attention and time is focused on completing those features before the software is released to production. When code is scanned for vulnerabilities late in the development process, there is seldom time for remediation, and the software still ships on the promised date. The code may be functionally correct, yet it also carries known, unremediated vulnerabilities, because the need to deliver new features has outweighed the cost of making the repairs. Tale as old as time.
- Missing integral security controls from the coding language itself.
Fourthly, the computer languages used to code websites were not originally designed with security in mind. Instead, each was created to solve a different problem:
C/C++: Machine instructions were simplified into general-purpose logic and data storage.
SQL: Simple grammar was invented to allow easier search and filtering of databases.
PHP: HTML formatting tags and browser requests were merged with SQL commands to allow quick programming of websites.
Java: Automatic memory reuse, rich libraries, and portable compiled code were combined to allow the same code to run on any operating system.
.NET: High-speed and scalable code architecture was designed to leverage the elastic computing resources of server farms and the cloud.
While feature-rich, these languages lack security controls that are essential in a web application, such as data validation, context-aware encoding, and secure database access. Security must be thoughtfully programmed in, using appropriate add-on libraries, templates, and custom code. And since developers are busy focusing on actual features (see policy #3, above), any security controls missing from the chosen language will go unnoticed, as long as the application functions as expected, until it is too late and hackers have breached the system.
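To make “data validation” concrete, here is a minimal Python sketch of an allow-list validator. The field name, pattern, and length limits are illustrative assumptions, not from any particular application: the point is that input is accepted only when it matches what the field legitimately needs, rather than trying to enumerate every dangerous character.

```python
import re

# Hypothetical allow-list for a username field: letters, digits,
# underscore, and hyphen, between 3 and 32 characters. Anything
# else is rejected before it ever reaches a query or a web page.
USERNAME_RE = re.compile(r"[A-Za-z0-9_-]{3,32}")

def validate_username(value: str) -> str:
    """Return the value unchanged if it passes the allow-list; raise otherwise."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value
```

An allow-list inverts the usual failure mode: a forgotten dangerous character causes a rejected input, not a breach.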
On any web server, an attack can target something as simple as a data search page. Input typed into the browser can include special characters that trigger a database command. The web page’s program must filter these out before using the search input; otherwise a hacker who discovers the flaw can exploit the page to retrieve or modify any of the application’s data. Some databases even allow system commands to be run, which opens the potential to install malware or viruses onto the server.
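This attack (SQL injection) and its standard fix can be shown in a few lines of Python. The sketch below uses the standard library’s sqlite3 module and a hypothetical products table; the table and queries are illustrative assumptions:

```python
import sqlite3

# In-memory database with a hypothetical "products" table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.execute("INSERT INTO products VALUES ('widget', 9.99), ('gadget', 19.99)")

def search_unsafe(term):
    # VULNERABLE: user input is spliced directly into the SQL string,
    # so a term like "' OR '1'='1" rewrites the query itself.
    return conn.execute(
        f"SELECT name FROM products WHERE name = '{term}'"
    ).fetchall()

def search_safe(term):
    # SAFE: the input is passed as a bound parameter, so special
    # characters are treated as data, never as SQL commands.
    return conn.execute(
        "SELECT name FROM products WHERE name = ?", (term,)
    ).fetchall()

malicious = "' OR '1'='1"
print(search_unsafe(malicious))  # returns every row in the table
print(search_safe(malicious))    # returns no rows: no product has that name
```

The parameterized query costs nothing extra at runtime; the vulnerability exists only because string concatenation was the easier habit.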
Another example is a common way that a virus infects a PC from a website’s content. If a web page sends the browser dynamic content that includes unsanitized user input, a hacker can send additional commands through this same pathway. These commands can allow the hacker to take over the victim’s machine, install additional malware, or launch a local network attack.
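This pathway (cross-site scripting) hinges on whether user input is encoded before it is echoed into the page. A minimal Python sketch, using the standard library’s html.escape; the page fragment and the stealCookies function name are hypothetical:

```python
import html

def render_comment_unsafe(user_input):
    # VULNERABLE: user input is echoed into the page untouched,
    # so an embedded <script> tag executes in every visitor's browser.
    return f"<p>{user_input}</p>"

def render_comment_safe(user_input):
    # SAFE: context-aware encoding turns markup characters into
    # harmless entities before they reach the browser.
    return f"<p>{html.escape(user_input)}</p>"

payload = "<script>stealCookies()</script>"
print(render_comment_unsafe(payload))  # browser would run the script
print(render_comment_safe(payload))    # browser renders it as literal text
```

Note that the correct encoding depends on context: HTML body, attribute values, JavaScript, and URLs each need different escaping, which is why this control belongs in a templating library rather than ad hoc code.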
Stay tuned for the second post in this series, where we will look at actionable solutions.