Architectural Differences Between Public Cloud and On-Premises Databases

Oct 22, 2020 / by SecureCloudDB

Over the past decade, a significant portion of IT infrastructure has moved from proprietary data centers into public clouds. This move requires reexamining the techniques used to manage and secure these resources, especially databases. Adequately securing data in the cloud requires an understanding of the architectural differences between public cloud and on-premises databases.

The first point to consider is that the actual database systems have changed. In the data center, databases were typically large, monolithic RDBMS, with the leading systems being Oracle, Sybase, IBM DB2, and Microsoft SQL Server. Few people run Oracle, Sybase, or IBM DB2 in the public cloud; these systems simply don’t lend themselves to the new cloud architectures. New types of databases have emerged as the leaders. In the public cloud, RDBMS are mostly systems such as MySQL, PostgreSQL, and Amazon Aurora. Data has also moved into NoSQL databases such as Amazon DynamoDB, into warehouses such as Amazon Redshift, and into in-memory databases such as Amazon ElastiCache. Securing data in the public cloud therefore requires visibility across many new database types and engines.
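To make that inventory problem concrete, here is a minimal sketch, not taken from the white paper, of what enumerating data stores across several engine types might look like. It assumes an AWS account with configured credentials, the boto3 SDK, and a single illustrative region; a real inventory would paginate results and sweep every region.

```python
# Minimal cross-engine inventory sketch (assumes boto3 and AWS credentials).
# Illustrative only: a real tool would paginate and cover all regions.
import boto3

region = "us-east-1"  # assumption: one region, for brevity

rds = boto3.client("rds", region_name=region)
for db in rds.describe_db_instances()["DBInstances"]:
    print("RDS:", db["DBInstanceIdentifier"], db["Engine"])

dynamodb = boto3.client("dynamodb", region_name=region)
for table in dynamodb.list_tables()["TableNames"]:
    print("DynamoDB:", table)

redshift = boto3.client("redshift", region_name=region)
for cluster in redshift.describe_clusters()["Clusters"]:
    print("Redshift:", cluster["ClusterIdentifier"])

elasticache = boto3.client("elasticache", region_name=region)
for cc in elasticache.describe_cache_clusters()["CacheClusters"]:
    print("ElastiCache:", cc["CacheClusterId"], cc["Engine"])
```

Even this toy version has to call four different service APIs, which is the point: there is no longer one database server to walk up to.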

The second point to consider is how data and databases live in the public cloud. In the data center, databases were monolithic and static. A database server was a physical device with an overprovisioned CPU, memory, and operating system, with complex disk storage and network cables running out of the back of a rack. Once a database was set up, it stayed there for years. In the public cloud, how a database exists is fundamentally different. Databases can be scaled up and down dynamically as needed. When a database fails, it is decommissioned and a fresh database is created from a backup. Databases such as Amazon Redshift run as clusters whose nodes are spun up and torn down as needed. Consider the complexity of inventorying, validating the security of, and monitoring nodes that are dynamic and ephemeral.
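As an illustration of how fluid that inventory is, the sketch below (again assuming boto3 and AWS credentials; this is not the vendor's method) records the node membership of each Redshift cluster at a point in time. A security tool would have to take this kind of snapshot continuously to track nodes that may not exist an hour later.

```python
# Sketch: snapshot Redshift cluster node membership so ephemeral nodes
# can be tracked over time (assumes boto3 and AWS credentials).
import boto3
from datetime import datetime, timezone

redshift = boto3.client("redshift", region_name="us-east-1")
snapshot_time = datetime.now(timezone.utc).isoformat()

for cluster in redshift.describe_clusters()["Clusters"]:
    for node in cluster.get("ClusterNodes", []):
        # In practice each record would go to a persistent inventory store,
        # so successive snapshots can be diffed to detect churn.
        print(snapshot_time,
              cluster["ClusterIdentifier"],
              node["NodeRole"],
              node["PrivateIPAddress"])
```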

Finally, consider the difference in how database access is controlled in the public cloud. In the data center, database security was often lax because databases were buried far behind layers of firewalls and were rarely exposed directly to the public internet. This strategy becomes less effective as devices become more exposed to one another. As a result, a new idea, zero trust, has emerged. According to CloudFlare:

"Zero trust security is an IT security model that requires strict identity verification for every person and device trying to access resources on a private network, regardless of whether they are sitting within or outside of the network perimeter. ...Traditional IT network security is based on the castle-and-moat concept."

Public clouds are shared resources. Adopting this concept of zero trust security for your databases is now a requirement. When moving databases to the public cloud, organizations will need to adopt new procedures, policies, and products to be successful. However, the net effect will be much more robust and reliable security. Now is the time to learn how to navigate this new world and put proper security in place before poor security leads to a data breach.
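As one concrete, hedged example of that shift, the sketch below (assuming boto3 and AWS credentials) flags RDS instances whose PubliclyAccessible attribute is set. This is only a basic posture check, but it captures the zero trust instinct: never assume a database is safe just because it sits inside a network.

```python
# Sketch: flag databases reachable from the public internet, a simple
# zero trust posture check (assumes boto3 and AWS credentials).
import boto3

rds = boto3.client("rds", region_name="us-east-1")
for db in rds.describe_db_instances()["DBInstances"]:
    if db.get("PubliclyAccessible"):
        print("WARNING: publicly accessible instance:",
              db["DBInstanceIdentifier"],
              db.get("Endpoint", {}).get("Address", "no endpoint yet"))
```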


This is an excerpt taken from Database Security: Moving to the Public Cloud. Interested in learning more? Download your copy of the white paper today.
