The role of the DBA in light of DevOps and Cloud Migration

It’s an old problem. The more results you deliver, the more that are expected. The faster you provide them, the faster they are expected. Next thing you know, old methods don’t work as well as they used to.

New demands require new workflows. On top of that, new technologies keep appearing, making it seem like your old tools are no longer needed. Next thing you know, the environment in which you work has changed. It is barely recognisable, and you might fear that you are no longer relevant.

The above sentences could apply to just about any occupation. It is no less the case for Database Administrators. It’s happening both in the areas of workflow and business culture, particularly with the growth of DevOps, and in the technological area of database servers migrating to the cloud.

The fears are real, and not unfounded; perhaps, though, they are a bit exaggerated. The need for DBAs in either environment is certainly not going away. Let’s address each area separately.

DevOps and Changing Workflows

As the software development cycle has, by necessity, sped up, the way roles are defined is changing. Traditionally (if we can say traditionally about a field that’s actually not that old; SQL itself was created less than half a century ago), roles were pretty well defined. Developers wrote code. Sysadmins managed the servers. DBAs built and managed the databases, and handled all deployment from development, to staging, to testing, to production. QA noticed the mistakes everyone made, and made nobody happy (except the customers, through a reduction in complaints… oh, who are we kidding? Customers always complain). The workflow was pretty clear, but change was slow. At first that wasn’t much of a problem. Customers didn’t have a lot of choice, and competition was relatively limited, so it was okay if any new development took months to come to fruition.

However, for better or worse, that has changed. Shifting business models forced a faster way of bringing products, data, and fixes to customers. Next thing you know, it has become necessary to bring everyone on board throughout the whole process, and to make it far more iterative. If you’re not open to change, it can be very disconcerting (especially with QA jumping into the fray throughout the process). Even worse, developers are committing changes to the database schema. And it seems to be working (at least in some cases). What’s a DBA to do?

Well, the first thing is to remember: “Don’t Panic.” As a DBA, you know whether the database is running properly. Remember that nobody else knows the inner workings of databases, or how to manage their performance, the way you do. It’s important to remember that DBAs are integral members of the “Ops” part of DevOps. DBAs are crucial to this role quite simply because they know how differences in one system can affect another. On top of this, some of these changes need DBAs in different ways than in the past.

DBAs must change a little to fit the new machine. While it was once possible to get away with taking things slowly and making sure everything was running at its best before releasing a product, customer demands have increased, and that is no longer a real option. As a response, it is crucial for DBAs to learn more about the Agile process, wherein problems are broken into smaller pieces and addressed iteratively, in the real world (I know this is a simplification, but this isn’t an article about Agile).

Of course, it is true that some shops still take too long to deploy code. The DevOps folks are completely right about this. However, if we are going to use these shorter cycles, the risk of bad code getting released is greater than it ever has been before. Bad code can take down a database faster than you can say “Bobby Tables.”

Of course, you can’t stop the pace of development, which means that the need for DBAs to monitor the databases (with good analytic tools) is greater now than ever before, precisely because of these shorter cycles. Sure, the code worked in testing, but nobody can realistically tell how it will behave under real-world volumes.

For any smart business to provide a quality product, it’s absolutely critical to be monitoring status and performance. With good tools that can effectively monitor and analyse the behaviour of this code in real life, DBAs can keep track of any problems that need to be fixed, or database performance that needs to be tweaked.
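To make this concrete, here is a minimal monitoring query, sketched for a MySQL 8 server with the performance_schema enabled (other platforms expose similar views under different names). It lists the statement patterns that have consumed the most total execution time since the server last restarted, which is often the first place to look when freshly deployed code starts to misbehave under real-world volume.

    -- Top statement patterns by total execution time (timer columns are in picoseconds)
    SELECT digest_text,
           count_star                      AS executions,
           ROUND(sum_timer_wait / 1e12, 2) AS total_seconds,
           ROUND(avg_timer_wait / 1e9, 2)  AS avg_milliseconds,
           sum_rows_examined               AS rows_examined
    FROM performance_schema.events_statements_summary_by_digest
    ORDER BY sum_timer_wait DESC
    LIMIT 10;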

Cloud Migration and Changing Technologies

Another major change affecting DBAs is the actual location of the database servers. In the past, our databases were all run on locally hosted servers. While this still is the case in many places (there will always be that dark freezing room in the basement where new forms of artificial intelligence are creating themselves and preparing for the singularity), more and more we are starting to host in the cloud.

Cloud hosting has its advantages, but new threats emerge, hidden in the silky words of the cloud host’s marketing language. There’s a claim that their systems are now “fully automated.” All you need to do, they claim, is choose a few configurations, push a button, and you’re all set. That language getting into corporate managers’ ears is the stuff of any DBA’s nightmare.

As we know (and anyone with any experience of the cloud will confirm), these promises are pretty much fantasies. Sure, there are many advantages to cloud hosting. In many ways it does run smoother; there’s better replication and uniformity of access in the cloud. However, these systems still need to be monitored, maybe not entirely in the same way as your local servers, and sometimes for different reasons.

The supposedly “self-managed” instances you are running in the cloud? Those are typically only subsets of what you are used to running locally. They may require less manual work quite simply because they offer less functionality than your local systems, which you can tinker with to your heart’s delight. They are often virtual and shared. The hardware they are using? Opaque to you, and likely highly variable from one location to another, so instances may behave entirely differently in one location than in another.


On top of this, applications rushed into production as a result of the DevOps process will not suddenly perform better in the cloud. Sure, you can throw more resources at the problem, but that may not be the most efficient answer, and every extra resource, whether new instances or more replication, increases the expense.

This brings us to our next point: cost. Typically, most cloud hosts charge by usage. If you leave the management of these databases to them, they have no real incentive to make them run well, or to identify processes that are chewing up extra processor time and/or bandwidth. In fact, it is in their interest not to fix these. So DBAs will need to monitor activity, keep good records, identify when a bad process appears, and be able to fix it. You need to be able to fine-tune performance, decide whether adding resources makes sense, or, in some cases, clean up unused resources.
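As a rough illustration of the kind of housekeeping this involves, the sketch below (MySQL-flavoured; other platforms have equivalent catalogue views) totals the storage each schema is consuming. It is a simple starting point for spotting bloated or abandoned data that is quietly driving up the monthly bill.

    -- Storage consumed per schema; large, rarely touched schemas are cleanup candidates
    SELECT table_schema,
           ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
    FROM information_schema.tables
    GROUP BY table_schema
    ORDER BY size_gb DESC;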

Overall, despite changes in workflow and technology, DBAs remain important, just for new reasons. With the right tools, their value and relevance have actually increased as business needs have grown.


Security considerations in database operations

As most DBAs know, security of data is one of the most difficult yet important tasks in maintaining a large estate of databases. It has kept more than one administrator up at night worrying about potential threats and pitfalls. With the growth of the information economy, not only is most information stored in databases, but the value of that information has grown. And as with anything of value, the threats to its security increase in direct proportion to its worth.

If you are handling a great deal of sensitive data, you already know this and must deal with it on a daily basis, not only out of the necessity of maintaining business integrity, but also because of the potential legal pitfalls if sensitive information were to leak. It doesn’t take much browsing of technology or business news to read about some large company that leaked tremendous amounts of user data and was subjected to millions, if not billions, of dollars in losses.

Even if the data you store is not particularly sensitive in nature, that does not leave you invulnerable. Perhaps this data is integral to running your business? What if you lost access to it? Even if you have backups of everything, the sheer amount of time lost repairing data can become astronomical very quickly. And on top of this, nefarious users may not even care; they may break in just for the “fun” of it. No matter what type of system you run, security is a serious concern.

Confidence in the security of your database operations is fundamental to your business.

Some of the major vulnerabilities to databases include (but are certainly not limited to) default or weak passwords, SQL injection, improper user security, and DBMS packages with too many features enabled.

Before you start to panic: while there is no failsafe solution to protecting the integrity of your databases, there are quite a few steps you can take to reduce the likelihood of this sort of disaster.

Keep Sensitive Databases Separate

As any black-hat hacker knows, all it takes is one weak spot to get into a system. For that reason, never assume that all of your security should exist externally. If someone malicious (or even someone careless) gets in, they should run into more walls.

If a particular database contains very sensitive information, it should be quarantined from all other systems. If it’s not possible to keep this data completely offline, make sure nothing else can reach it with any ease.

Regularly Monitor

Keep a record of your database inventories, and regularly monitor each of them for anomalies in behaviour. Having a good system for keeping track of statistics and flagging unusual activity will go a long way towards spotting potential breaches.
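A full monitoring system will do far more than this, but even a quick, regularly scheduled check can catch something odd. The sketch below assumes a MySQL server with the performance_schema enabled and simply lists connection counts per account; an account that suddenly holds far more connections than usual is worth a closer look.

    -- Current and lifetime connection counts per account; unexpected spikes deserve attention
    SELECT user, host, current_connections, total_connections
    FROM performance_schema.accounts
    ORDER BY current_connections DESC;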

Role-Based Access Control

There’s a fundamental truism associated not just with databases, but with all systems: the most vulnerable part of any system is the human component. For a multitude of reasons, including inattention, forgetfulness, laziness, or even outright malicious intent, people are just not as reliable as we’d like them to be.

For this reason, do not give admin rights to all DBAs by default; instead, create roles and assign them to DBAs. It is easier to revoke a role than to change admin passwords all around. Also, start from the assumption that your DBAs only need minimal access. It’s a lot easier to deal with frustrated users than it is to put out the fire after the barn has already burnt down.
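A minimal sketch of what this looks like in practice, using MySQL 8 role syntax (the role, schema, and account names here are purely illustrative):

    -- Create narrowly scoped roles instead of handing out admin rights
    CREATE ROLE 'dba_read', 'dba_operate';
    GRANT SELECT ON reporting.* TO 'dba_read';
    GRANT SELECT, INSERT, UPDATE, DELETE ON reporting.* TO 'dba_operate';

    -- Assign roles to individual accounts; pulling access back later is a single REVOKE
    GRANT 'dba_read' TO 'alice'@'%';
    GRANT 'dba_operate' TO 'bob'@'%';
    REVOKE 'dba_operate' FROM 'bob'@'%';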

Don’t let your developers have administrative power over users. The temptation to simply “test” a piece of code has a way of accidentally opening security holes and creating “temporary solutions” that never get patched.

You should also consider giving your developers access only to views instead of tables. If for some reason a hole gets left open, this will reduce the likelihood of actual data destruction.
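For example (again MySQL-flavoured, with hypothetical table and role names), a view can expose only the columns developers actually need while the base table stays out of reach:

    -- Developers read through the view; the underlying table is never granted directly
    CREATE VIEW app.customer_contact AS
        SELECT customer_id, first_name, last_name, email
        FROM app.customers;

    GRANT SELECT ON app.customer_contact TO 'developer_role';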

Centralise Access Control

If you are running on a Windows network, you can use Active Directory to handle access rights and roles. Use a central login point: let your command console connect through the firewall to the management server, and then connect from there to the instances.

Try to place management servers in subnets behind the firewalls, so you do not have to open all firewalls to allow connections directly to all instances.

Encryption

Don’t forget that even secure connections like SSH have vulnerabilities at their endpoints. Encrypt all connections where possible. If someone is sniffing your database connections (the larger you are, the more likely this is happening, and for extremely sensitive data it should be assumed), make sure that any packets they intercept are encrypted, preferably with 256-bit encryption, which should be enough to prevent most brute-force attacks.
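How you enforce this depends on the platform; as one hedged example, MySQL lets you refuse unencrypted connections either per account or server-wide (the account name below is hypothetical):

    -- Refuse unencrypted connections for a single account
    ALTER USER 'reporting_app'@'%' REQUIRE SSL;

    -- Or refuse them server-wide by setting this in my.cnf:
    --   require_secure_transport = ON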

Software Security

While, as mentioned before, you shouldn’t depend on software alone to handle the security of your databases, it’s generally a good idea to enforce security in the software layer as well. It never hurts to have extra layers, so if you have developers accessing your databases, or you are developing against them, consider using stored procedures and transactions with rollback handling wherever possible. If software must access the database from a public interface, make sure inputs are passed as parameters or objects rather than spliced into query text. In other words, never allow raw input to reach your database directly. Again, as mentioned before, use views rather than direct access to tables.
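As a small sketch of both ideas, here is a MySQL-flavoured example (table, column, and procedure names are invented for illustration): a prepared statement keeps user input out of the SQL text, and a stored procedure wraps the actual write in a transaction that rolls back on any error.

    -- Prepared statement: the input travels as a parameter, never as part of the SQL text
    PREPARE find_customer FROM 'SELECT customer_id, email FROM customers WHERE email = ?';
    SET @candidate = 'robert@example.com';
    EXECUTE find_customer USING @candidate;
    DEALLOCATE PREPARE find_customer;

    -- Stored procedure: callers get one narrow entry point, and errors undo the whole unit of work
    DELIMITER //
    CREATE PROCEDURE record_payment(IN p_account INT, IN p_amount DECIMAL(10,2))
    BEGIN
        DECLARE EXIT HANDLER FOR SQLEXCEPTION
        BEGIN
            ROLLBACK;
            RESIGNAL;
        END;
        START TRANSACTION;
        UPDATE accounts SET balance = balance - p_amount WHERE account_id = p_account;
        INSERT INTO payments (account_id, amount) VALUES (p_account, p_amount);
        COMMIT;
    END //
    DELIMITER ;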

Stay Up To Date

Keep abreast of all security news, particularly as it relates to your databases. It’s a good idea to regularly check for updates of any sort, not only for the database platforms themselves, but also for any software or connections you use to manage your systems. When an update does come through, apply it immediately, lest you be left vulnerable to newly disclosed exploits.

Conclusion

This is really only a cursory overview of approaches to take when maintaining the security of your databases. As any security professional will tell you, there is no such thing as a completely secure system. However, if you take a few steps to make the effort of getting in far greater than the payoff, you will have warded off most attacks.