Securing Your Cloud DB Instance Against Unauthorized Access or Modification | by Keith Tay | Culture Geek | April 2022
If you’re an avid reader of cybersecurity news, you’ll have noticed that many data leaks and data-tampering incidents are the result of misconfigurations or weak configurations. Just this month, a Fox News database with 13 million records was exposed online through an unprotected database. Looking at the cyberwar space, the hacking group “Anonymous” has exposed scores of databases belonging to the Russian Federation. According to the research cited in that reporting, 92 out of 100 of the compromised databases were accessed through non-password-protected cloud storage repositories. In addition to accessing the contents of the databases, attackers deleted files and launched a cyber-vandalism campaign to send a message.
The call to action for all readers is: “Never leave your cloud database publicly available”. By analogy, if you place a safe you own in the middle of the street, people will simply try to open it. Even with a combination lock on the safe, it is only a matter of time before an adversary bypasses that control. To put it into perspective, the Internet is an “open” space: adversaries can scan the network and attempt to break into any database that is publicly reachable.
All developers, DevOps engineers, IT professionals, and security practitioners must adopt the mindset that once a database is made public, whether intentionally or not, attacks against it may begin within the first hour. As documented by Imperva’s honeypot research, attackers will attempt to log into the database and brute-force its authentication (if applicable).
*This article uses Amazon Web Services (AWS) for illustration. Remember that security is a shared responsibility when using a cloud service provider.
Root causes of publicly accessible cloud databases:
Lack of secure design considerations — During the design phase of a system/application, it is essential to evaluate the design and architecture holistically through threat modeling, secure design patterns, and security architectures. AWS documentation recommends running public-facing systems (for example, web servers) in the public subnet and back-end services (for example, application and database servers) in the private subnet. This improves the security posture, as adversaries will not be able to directly reach your internal resources. The secure design phase is often omitted or shortened to allow faster deployment to market.
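The public/private subnet distinction above comes down to routing: a subnet is effectively “public” when its route table sends internet-bound traffic to an internet gateway. As a minimal sketch, the check below operates on dicts that loosely mirror the EC2 DescribeRouteTables response shape; the field names shown are illustrative, not a complete schema.

```python
# Sketch: flag subnets that are effectively "public" because their route
# table sends 0.0.0.0/0 traffic to an internet gateway (igw-*).
# Dict shape loosely mirrors EC2 DescribeRouteTables output (illustrative).

def is_public_subnet(route_table: dict) -> bool:
    """Return True if any route points default traffic at an internet gateway."""
    for route in route_table.get("Routes", []):
        if (route.get("DestinationCidrBlock") == "0.0.0.0/0"
                and route.get("GatewayId", "").startswith("igw-")):
            return True
    return False

# A database server should sit behind a route table with no such route:
private_rt = {"Routes": [{"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"}]}
public_rt = {"Routes": [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0abc1234"},
]}
```

In practice you would feed this from `boto3`’s EC2 client, but the routing logic is the same.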
In the event that it is a business requirement to enable public availability of the database, the organization must determine potential threats, apply the necessary security controls, and assess the residual risk.
*Note: In 2021, OWASP created a new category titled “A04: Insecure Design” as one of the 10 most critical application security risks.
Weak or misconfigured network policy — Several network controls regulate database connections: security groups, network ACLs, and network or host-based firewalls. Developers can become careless (intentionally or not) by allowing all connections (0.0.0.0/0); allowing connections from an excessively large CIDR block; and/or using the default network ACL (which allows all inbound and outbound traffic) or a public subnet (one whose main route table sends internet-bound traffic to an internet gateway).
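Overly broad source CIDRs like these can be caught mechanically. A minimal sketch, using Python’s standard `ipaddress` module; the /24 threshold is an arbitrary example for illustration, not an AWS default — tune it to your own network policy.

```python
import ipaddress

# Sketch: flag ingress rules whose source CIDR is broader than a chosen
# threshold. MAX_PREFIX = 24 is an illustrative policy choice.

MAX_PREFIX = 24  # anything broader (smaller prefix length) gets flagged

def is_overly_permissive(cidr: str, max_prefix: int = MAX_PREFIX) -> bool:
    network = ipaddress.ip_network(cidr, strict=False)
    return network.prefixlen < max_prefix

# "0.0.0.0/0" (allow-all) is the classic misconfiguration:
print(is_overly_permissive("0.0.0.0/0"))      # flagged
print(is_overly_permissive("203.0.113.0/24")) # within policy
```

A check like this can run in a CI pipeline over infrastructure-as-code templates before anything is deployed.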
Lack of policy visibility and enforcement — As an organization scales, it can be difficult to keep track of all developer actions performed in the cloud (these can also span different AWS accounts). Organizations often give developers complete cloud autonomy without designing security baselines for cloud deployment. Even if an internal process (a non-technology control) is articulated to developers, human error and negligence can still lead to a misconfigured database.
I would like to share three steps to better secure your DB instance (also applicable to other use cases and cloud services) against unauthorized access and modification.
Step 1: Always plan the design (including security) of your solution before development and list any residual risks
The saying goes, “If you fail to plan, you plan to fail.” Before building a solution on the cloud, it is essential to plan the design and highlight any potential risks to stakeholders. The architecture should be aligned with business goals and organizational strategies, including security.
Let’s take as an example a simple three-tier architecture: web server, application server, and database server. Should we deploy all three servers in the public subnet, or should we follow security best practice by deploying the web server in the public subnet and the application and database servers in the private subnet? To be frank, there is no straight answer to this scenario; it really depends on your business needs. Is there a use case that requires the application and database servers to be publicly available? If so, perform threat modeling and assess the security controls that can be put in place to eliminate or mitigate the identified risks, since public access opens up a larger attack surface. Depending on the organization’s budget and risk appetite, each risk can be accepted, mitigated, or avoided. If it is avoided, an alternative design should be considered until stakeholders are comfortable with the risk.
Security is not meant to be a barrier for organizations, but instead the risks associated with each design must be articulated and understood.
Step 2: Leverage technology controls to enforce a security baseline
If the organization has clear security or regulatory requirements, it is possible to translate them into a cloud policy document. In AWS, we can leverage Service Control Policies (SCPs) or IAM policies (with permission boundaries) to dictate what action is allowed or denied. For example, a rule can be configured to deny the launch of any DB instance in public subnets. Additionally, it is possible to enforce which security groups can be attached to the deployed DB instance.
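Note that SCPs cannot directly inspect subnet routing, so “no DB instances in public subnets” is usually backed by the detective controls in Step 3. As a simpler, hedged illustration of the SCP mechanism itself, the sketch below builds a policy that denies RDS instance creation outside an approved region using the `aws:RequestedRegion` condition key; the region and statement ID are examples only.

```python
import json

# Sketch: an SCP denying RDS instance creation outside an approved region.
# "ap-southeast-1" and the Sid are illustrative placeholders.

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRDSOutsideApprovedRegion",
            "Effect": "Deny",
            "Action": "rds:CreateDBInstance",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["ap-southeast-1"]}
            },
        }
    ],
}

# The JSON document is what you attach via the Organizations console or CLI:
scp_document = json.dumps(scp, indent=2)
```

Generating policy documents in code like this keeps them versionable and reviewable alongside the rest of your infrastructure.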
Leveraging technology controls on the cloud can help eliminate human error and help developers get in sync with the organization’s business and security strategy.
Note: If the database is going to be made public, consider enforcing a granular whitelist of IP addresses and revise the policy from time to time. Otherwise, the DB instance should only allow connections from the application server.
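A granular whitelist check is straightforward to express in code. A minimal sketch, again using the standard `ipaddress` module; the allowlist entries and client IPs are hypothetical documentation-range addresses.

```python
import ipaddress

# Sketch: validate a client IP against a granular whitelist before a
# security group rule is added or retained. Entries are hypothetical.

ALLOWLIST = ["203.0.113.10/32", "198.51.100.0/28"]  # e.g. office egress ranges

def is_allowed(client_ip: str, allowlist=ALLOWLIST) -> bool:
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in allowlist)
```

Reviewing this list periodically, as the note above suggests, keeps stale entries from accumulating.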
Step 3: Perform proactive audits and analysis
An important security principle is ensuring sufficient visibility into the cloud. At a minimum, I would recommend AWS Security Hub, a cloud security posture management service that performs security best-practice checks, aggregates alerts, and enables automated remediation. Security Hub supports several recognized industry standards, such as the CIS AWS Foundations Benchmark, PCI DSS, and the AWS Foundational Security Best Practices. For database security, it raises findings such as: the DB instance is publicly accessible, encryption at rest is not enabled, deletion protection is not enabled, database logging is not enabled, and many more. The organization can use Security Hub findings for risk detection and articulation whenever a deployment deviates from security best practices. Additionally, Security Hub checks run every 12 hours, giving an up-to-date view of your cloud resources.
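The same classes of findings can be reproduced in a lightweight audit pass of your own. A minimal sketch: the dict shape loosely follows the RDS DescribeDBInstances response (in practice you would feed it from `boto3`’s RDS client), and the instance data shown is fabricated for illustration.

```python
# Sketch: a proactive audit over RDS instance descriptions, flagging the
# kinds of issues Security Hub raises. Dict keys mirror the RDS
# DescribeDBInstances response; the sample instance below is fictitious.

def audit_db_instance(db: dict) -> list[str]:
    findings = []
    if db.get("PubliclyAccessible"):
        findings.append("DB instance is publicly accessible")
    if not db.get("StorageEncrypted"):
        findings.append("Encryption at rest is not enabled")
    if not db.get("DeletionProtection"):
        findings.append("Deletion protection is not enabled")
    return findings

risky = {
    "DBInstanceIdentifier": "orders-db",
    "PubliclyAccessible": True,
    "StorageEncrypted": False,
    "DeletionProtection": False,
}
for finding in audit_db_instance(risky):
    print(f"{risky['DBInstanceIdentifier']}: {finding}")
```

Run on a schedule across accounts, a script like this complements Security Hub rather than replacing it.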
If your organization’s resources permit, consider tasking a team with building automated scripts that proactively scan your deployed cloud instances. In one essay, a security researcher described discovering thousands of open Elasticsearch databases on AWS; these databases revealed personal customer information and even production logs that could leak the internals of an organization’s network.
Note: AWS Config can be used to monitor changes (for example, a DB instance launched in a public subnet) and act on them, and AWS Config rules enable automatic remediation of non-compliant resources. In addition, Amazon GuardDuty can be enabled to detect anomalies or potential brute-force attempts against your DB instance.
Migrating to the cloud has a myriad of benefits. But before deploying a system/application to the cloud, it is essential to ensure that the design is well thought out, which includes alignment with business and security objectives and strategy. A small mistake, such as a misconfigured database instance or inadequate security controls for your database, can lead to unauthorized access or modification of your database by attackers. We must have the mentality that everything made available on the public internet is an open space where anyone can try to enter.
As part of risk identification, it is essential to apply threat modeling to assess all potential attacks based on your design. Thereafter, the organization should implement the relevant security controls to eliminate or mitigate each risk and determine whether the residual risk is something the organization can accept. Otherwise, a redesign and re-evaluation is required.
Additionally, organizations can enforce policies through SCPs or permissions boundaries to align their developers’ configurations with business and security policy. Finally, always take proactive steps to audit and analyze the cloud configuration. Any deviation from the organization’s baseline must be rectified immediately (this can also be automated).
Finally, cloud security is a shared responsibility. The cloud user has a huge role to play in ensuring that secure configurations are enforced for the systems and applications deployed by the organization.