12 Database Security Landmines, Failures, and Mistakes That Doom Your Data

In most business stacks today, the database is where all of our secrets wait. Part refuge, part preparation room, part staging ground, it holds items that can be intensely personal or extremely valuable. Defending it against all incursions is one of the most important jobs for the DBAs, programmers, and DevOps teams that depend on it.

Alas, the work is not easy. Database creators give us the tools: they build in good security measures and document them. Yet dozens of potential oversights and silly but understandable mistakes make protecting the data a never-ending challenge.

To help you keep track and stay on your toes, here’s a list of the failure modes that have tripped up even the best of us.

1. Inadequate access management

Many databases live on their own machine, and that machine should be as locked down as possible. Only essential users should be able to log in as the DBA, and connections should be limited to a restricted set of networks and machines. Firewalls can block everything else by IP address. The same rules should also apply to the operating system layer and, if running in a virtual machine, to the hypervisor or cloud administration. These constraints will slow down the work of updating software and resolving issues, but restricting the paths attackers can take is worth it.
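The same allowlist thinking can be mirrored in your own tooling. Here is a minimal sketch, assuming a hypothetical list of approved networks and a client address supplied by whatever handles incoming connections:

import ipaddress

# Hypothetical allowlist: only these networks may reach the database tier.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.5.0/24"),   # application servers
    ipaddress.ip_network("10.0.9.10/32"),  # bastion host for DBAs
]

def connection_permitted(client_ip: str) -> bool:
    """Return True only if the client address falls inside an approved network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(connection_permitted("10.0.5.17"))    # True
print(connection_permitted("203.0.113.4"))  # False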

2. Easy physical access

There’s no telling what a clever attacker can do once inside the server room. Cloud computing companies and colocation facilities offer locked cages inside heavily guarded buildings with limited access. If your data stays in your own data center, follow the same rules and ensure that only trusted people have access to the room containing the physical disks.

3. Unprotected backups

It’s not uncommon for a team to do a great job securing a database server, but then forget about backups. They hold the same information and therefore need the same care. Tapes, drives and other static media should be locked away in a safe, preferably in another location where they will not be damaged by the same fire or flood that might destroy the originals.

4. Unencrypted data at rest

Data scrambling algorithms are generally reliable: they have been tested extensively, and current standards have no publicly known weaknesses. Adding good encryption to databases and backups is now easy to do for all data at rest. Even when the algorithms and implementations are solid, though, the keys must be carefully protected. Cloud providers and server makers now offer dedicated key-protection hardware that sits apart from the ordinary workflow, so keys stored inside it are harder to steal. Even if these systems are not perfect, they are better than nothing. When data will remain encrypted at rest for a long stretch, some teams prefer to keep the keys in a different physical location, preferably offline. Some even print out the keys and put the paper in a safe.
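For backups and other files headed to rest, even a small wrapper around a well-tested library goes a long way. A minimal sketch, assuming the third-party cryptography package and placeholder file names; the key itself should live somewhere other than next to the ciphertext:

# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# Generate the key once and store it away from the data it protects --
# in a key-management service, dedicated hardware, or even offline.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a backup file before it leaves the server (file names are placeholders).
with open("backup.sql", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("backup.sql.enc", "wb") as f:
    f.write(ciphertext)

# Decryption later requires the same key, which is why key custody matters.
plaintext = fernet.decrypt(ciphertext)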

5. Failure to use privacy-protecting algorithms

Encryption is a good tool for protecting physical copies of the database, as long as you can protect the key. A wide variety of good algorithms go further and keep the data scrambled even while it is in use. They can’t fix every problem, but they can be surprisingly effective when the application doesn’t need every sensitive value in the clear. The simplest approach may be to replace real names with random nicknames. Dozens of other techniques apply just the right amount of math to protect personal data while leaving enough structure for the database to do its job.
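Here is a minimal sketch of the nickname idea, using a keyed hash so the same person always maps to the same meaningless token. The PEPPER value and field names are assumptions for illustration, and production systems often reach for stronger tooling such as tokenization services:

import hmac
import hashlib
import secrets

# Secret pepper kept outside the database; without it, tokens can't be reversed
# by guessing names. (Generated here only for illustration.)
PEPPER = secrets.token_bytes(32)

def pseudonym(name: str) -> str:
    """Replace a real name with a stable, meaningless token."""
    digest = hmac.new(PEPPER, name.encode("utf-8"), hashlib.sha256).hexdigest()
    return "user_" + digest[:12]

row = {"name": "Ada Lovelace", "purchase_total": 129.95}
safe_row = {"name": pseudonym(row["name"]), "purchase_total": row["purchase_total"]}
print(safe_row)  # e.g. {'name': 'user_3f9c0a1b2d4e', 'purchase_total': 129.95}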

6. Lack of proliferation control

When data is used, it gets copied to caches and running servers. The goal of data storage architects is to minimize the number of copies and ensure they are destroyed as soon as the data is no longer needed. Many databases offer mirroring or routine backups as protection against machine crashes. Although this can be essential for a stable service, it is worth thinking carefully about proliferation during design. In some cases it may be possible to limit creeping copying without compromising the service too much. Sometimes it is better to choose slower, less redundant options if they limit the number of places an attacker could break in.

7. Lack of database access controls

The best databases are the product of decades of evolution, driven by endless testing and security research. Choose a good one. Moreover, the creators of the database have added good tools to manage and limit access. You should use them. Make sure only the right applications can see the right tables. Do not reuse the same password for all applications. Definitely don’t use the default. Limit access to local processes or the local network when possible.
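Here is a sketch of what least privilege can look like, expressed as PostgreSQL-flavored statements generated by a small Python helper. The role names, table names, and grant lists are made up for illustration, and other databases spell the same controls differently:

import secrets

# Hypothetical application roles, each limited to the tables it actually needs.
# PostgreSQL-flavored SQL; adjust for your database's grant syntax.
APPS = {
    "reporting_app": ["GRANT SELECT ON orders, customers TO reporting_app;"],
    "checkout_app":  ["GRANT SELECT, INSERT ON orders TO checkout_app;"],
}

for role, grants in APPS.items():
    # A unique, generated password per application -- never shared, never the default.
    # Hand these to your secret store or migration tooling rather than keeping a printout.
    password = secrets.token_urlsafe(24)
    print(f"CREATE ROLE {role} LOGIN PASSWORD '{password}';")
    for grant in grants:
        print(grant)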

8. Vulnerable secondary databases

Many stacks use fast in-memory caches like Redis to speed up responses. These secondary databases and content delivery networks often hold copies of the same information as the primary database. Spend as much time configuring them securely as you do the main database.
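For example, a cache client connection might look like the following sketch, which assumes the third-party redis package, a server already configured to require a password and TLS, and placeholder host and credential names:

# Requires the third-party "redis" package: pip install redis
import os
import redis

# Connection details are placeholders; the password should come from a secret
# store or environment variable, never from source code.
cache = redis.Redis(
    host="cache.internal.example.com",  # private address, not exposed publicly
    port=6379,
    password=os.environ["REDIS_PASSWORD"],
    ssl=True,  # only if the server is configured for TLS
)

cache.set("session:abc123", "cached-profile-data", ex=300)  # expire after 5 minutes
print(cache.get("session:abc123"))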

9. Vulnerable apps with access to data

All the painstaking database security isn’t worth much when a trusted app misbehaves, especially when the trusted app has access to all the data. A common problem is SQL injection, an attack that tricks a poorly coded application into passing malicious SQL code into the database. Another is simply poor security for the app itself. In many architectures, the application sees everything. If it fails to block the right users, all that data can go out the front door.
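A parameterized query is the standard defense against injection. This sketch uses Python’s built-in sqlite3 module and made-up table data to show the difference:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

user_supplied = "alice@example.com' OR '1'='1"  # a classic injection attempt

# Vulnerable: string formatting splices attacker input straight into the SQL.
# result = conn.execute(f"SELECT * FROM users WHERE email = '{user_supplied}'")

# Safe: a parameterized query treats the input as data, never as SQL.
result = conn.execute("SELECT * FROM users WHERE email = ?", (user_supplied,))
print(result.fetchall())  # [] -- the injection text matches nothing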

10. Risky internet exposure

Databases are ideal candidates for living in a part of the network without public access. While some developers want to make life easier by opening up the database to the general Internet, anyone keeping non-trivial information should think differently. If your database is only going to talk to front-end servers, it can happily live on a part of the network where only those front-end servers can reach it.
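One cheap extra guardrail is to have the application refuse to start if its database host looks publicly routable. A sketch, assuming a hypothetical DATABASE_HOST setting:

import ipaddress
import os
import socket
import sys

# Hypothetical configuration value naming the database server.
db_host = os.environ.get("DATABASE_HOST", "10.0.4.20")

# Resolve the host and check whether the address is publicly routable.
addr = ipaddress.ip_address(socket.gethostbyname(db_host))
if addr.is_global:
    sys.exit(f"Refusing to start: {db_host} resolves to a public address ({addr})")
print(f"Database host {db_host} resolves to a private address ({addr})")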

11. Lack of integrity management

Modern databases offer a wide variety of features that will prevent errors and inconsistencies from entering the data set. Specifying a schema for data ensures that individual data items conform to a set of rules. Using transactions and locking prevents errors from being introduced when one table or row is updated and another is not. Deploying these integrity management options adds computational overhead, but using as many as possible reduces the effects of random errors and can also prevent users from inserting inconsistent or incorrect data.
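A small sketch with Python’s built-in sqlite3 module shows both ideas at once: a CHECK constraint that rejects impossible values, and a transaction that rolls back a half-finished transfer. The table and column names are invented for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")

# Schema rules: the database itself rejects rows that break them.
conn.execute("""
    CREATE TABLE accounts (
        id      INTEGER PRIMARY KEY,
        owner   TEXT NOT NULL,
        balance INTEGER NOT NULL CHECK (balance >= 0)
    )
""")
conn.execute("INSERT INTO accounts (owner, balance) VALUES ('alice', 100), ('bob', 20)")
conn.commit()  # make the starting balances permanent before the transfer

# A transaction keeps the two halves of a transfer consistent: either both
# updates land, or neither does.
try:
    with conn:  # commits on success, rolls back on any exception
        conn.execute("UPDATE accounts SET balance = balance + 50 WHERE owner = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE owner = 'bob'")
except sqlite3.IntegrityError:
    print("Transfer rejected; the partial update was rolled back")

# Both balances are unchanged: [('alice', 100), ('bob', 20)]
print(conn.execute("SELECT owner, balance FROM accounts ORDER BY owner").fetchall())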

12. Retention of unnecessary data

Sometimes the safest solution is to destroy the data. Development teams often think like packrats, storing information for a future that may never come, but the easiest way to protect against a breach can be to erase the data outright. If you don’t need the bits to provide future service and customers will never ask to see them, zero them out; there is nothing left to protect. If you are not completely certain the data will never be needed again, erase the online copies and keep only offline backups in deep storage, where access is even more limited.
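A retention job can be as small as a scheduled query. Here is a sketch with Python’s built-in sqlite3 module and an invented events table, assuming a hypothetical two-year retention rule:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created_at TEXT, payload TEXT)")
conn.execute(
    "INSERT INTO events (created_at, payload) VALUES ('2019-03-01', 'old'), (date('now'), 'recent')"
)

# Hypothetical retention rule: anything older than two years is erased outright.
RETENTION = "-2 years"
with conn:
    deleted = conn.execute(
        "DELETE FROM events WHERE created_at < date('now', ?)", (RETENTION,)
    ).rowcount
print(f"Purged {deleted} expired rows")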


Maria H. Underwood