Extending Database Monitoring Into Cloud

No matter what the service, purpose, or product, the cloud is being touted as the answer to scalability by pretty much every major vendor. It started with data storage and rapidly moved to databases. In the past we were limited by the actual hardware that we owned. If we needed to expand, we needed to purchase more memory, and in many cases new hardware as the old wore out or became incapable of handling what we needed it to do.
However, we also had full control over the functionality; we got to know our systems and could modify them to meet our needs. Sure, this was a lot of work and required a lot of maintenance, but the systems functioned reasonably well, and typically the only ongoing cost was the human hours spent maintaining them.

Now, we are told, if we move everything to the cloud, everything will run without much effort at all. All we need to do is to simply allocate more resources and at the push of a button we will have more available to us. As a result we have migrated many of our database holdings to cloud services.

This has, however, produced mixed results. Let’s break this down into the good, the bad, and the ugly, and then try to find some solutions to work it all out.

The Good

Moving to the cloud for database management does have some genuine advantages. In many cases there is a lot less need for hands-on management for most processes. Some aspects of these services really do run effectively, and many of the automated processes take away some of the grunt work associated with on-premise DB management.

Flexibility within cloud database services is certainly better. Scaling up and down is a relatively seamless process, which makes it considerably easier to adapt quickly to meet company needs. As a result, cloud-hosted databases are excellent for testing and development. The cloud makes it easy to quickly set up a new server and determine how well it works. You also have some flexibility with platforms; for example, while not exactly the same as MS SQL Server, Azure databases are very agile. It’s also relatively easy to move your databases into the cloud: servers can be set up extremely quickly, all you need to do is transfer the data, and the interfaces tend to be very straightforward.

This flexibility also makes it easy to manage peak loads without a huge amount of intervention. For most of the complicated capacity problems you had before, you can simply move resources around or add more.


The Bad

I gather you are already seeing one of the flaws here. Simply adding more resources may be effective, but it’s not particularly efficient.


While with our on-premise databases we own what we have purchased, in the cloud everything is on a pay-per-usage model. Every time you add resources, the cost increases; this is exactly why the hosting providers are happy to let you scale that way. It can become very expensive to simply throw resources at a problem, and the expense can grow rapidly if left unchecked.

Your locally-hosted servers offer considerably more functionality. Sure, they tend to need a lot more maintenance, but you also have a much larger suite of tools to work with; you can get your hands dirty and solve any problems that come up yourself. One of the reasons the cloud-based hosts are easier to manage? They have a lot less functionality. While streamlined, they are essentially limited versions of what you are used to running; much of the access you are used to just doesn’t exist.

The Ugly

And then it gets more complicated. Without going into too many details, throwing more resources at a problem is like sweeping it under the rug. You aren’t solving the problem, you’re just hiding it.

As mentioned before, with each bit of resource added, the expense increases. The cloud hosting providers love this, but of course your company doesn’t. You still have to manage these costs, and you now need to determine whether resources are being under- or over-utilised. This never used to be as much of a problem, but in the cloud the meter is constantly ticking. Getting around this can be difficult: because of the reduced functionality in the cloud, solving real problems can become complicated quickly.

Those offerings that supposedly made it easier to manage peak loads in the cloud? Counter-intuitively, they can result in a lot of extra work for the DBA. Maybe you had some good monitoring tools of your own that worked great on your local servers. They might have been a mess, and often sluggish, but you knew how they ran. Getting them to work with the cloud means developing a whole new set of tools. Sure, the cloud services will provide you with their own monitoring tools, but they may not be anywhere near as flexible as what you had for your on-premise servers.

Another problem is that these cloud services are not quite as homogeneous as the providers would have you believe. They might be running the same platform, but they are typically running on different servers, with unknown hardware, located who-knows-where. For these reasons, one instance will often perform quite differently from a supposedly identical one in a different location.

So you need to find a way to get your monitoring tools to work in the cloud. As a DBA, you are still responsible for managing the databases; regardless of whether they are hosted on premise or in the cloud, they can’t be left alone. Especially if your company has been around for more than a few minutes, you most likely have a hybrid environment, with some databases housed on-premise and others in the cloud. You still need a way to track both.


With your local servers, your main challenge was to make sure you had enough resources to deliver performance, or to find a way to squeeze the best performance out of what you had. The needs involved with managing databases in the cloud are in some ways the same as on-premise, but in others different. You still need to deliver good performance with what you have, but now poor performance will also cost you financially. On top of that, you may have resources you aren’t even using; unlike on your on-premise servers, you don’t own them, yet you are still paying for them every day. So you have to balance what you need against what you have on a constant basis, and that requires constant and vigilant monitoring.
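As a rough sketch of what that balancing act can look like in practice, the snippet below classifies instances by how much of their paid-for capacity they actually use. The metric names, thresholds, and instance names are purely illustrative assumptions; real numbers would come from whatever monitoring source you use.

```python
# Hypothetical sketch: flag cloud instances that are paying for idle
# capacity, or running hot enough to need investigation. The 20%/85%
# thresholds are illustrative assumptions, not recommendations.

def classify_instance(avg_cpu_pct, avg_storage_used_pct,
                      low=20.0, high=85.0):
    """Return a rough utilisation verdict for one instance."""
    if avg_cpu_pct < low and avg_storage_used_pct < low:
        return "under-utilised: consider downsizing or consolidating"
    if avg_cpu_pct > high:
        return "over-utilised: investigate workload before adding resources"
    return "ok"

# Example metrics (CPU %, storage %) for a small, made-up fleet.
fleet = {
    "sales-db-eu": (12.0, 15.0),
    "reporting-db-us": (91.0, 60.0),
    "inventory-db": (55.0, 40.0),
}

for name, (cpu, storage) in fleet.items():
    print(f"{name}: {classify_instance(cpu, storage)}")
```

Even a crude check like this, run daily, turns the "constantly ticking meter" from a surprise on the invoice into something you can act on.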

Moving to the cloud may be helpful in some ways, but it does not reduce the amount of work required for the DBA; it only moves it around. However, one way of making sure things run more efficiently, no matter which approach you take (or are forced to take), is to have good tools which can help you monitor each database and instance, regardless of where it lives.

Start your free trial today! download here

The role of the DBA in light of DevOps and Cloud Migration

It’s an old problem. The more results you deliver, the more that are expected. The faster you provide them, the faster they are expected. Next thing you know, old methods don’t work as well as they used to.

New demands require new workflows. On top of that, new technologies are appearing, making it seem like your old tools are no longer needed. Next thing you know, the environment in which you work has changed. It is barely recognisable, and you might be afraid that you are no longer relevant.

The above could apply to just about any occupation, and it is no less the case for Database Administrators. Change is happening both in workflow and business culture, particularly with the growth of DevOps, and in the technological shift of database servers migrating to the cloud.

The fears are real, and not unfounded; however, perhaps they are a bit exaggerated. The need for DBAs in either environment is certainly not going away. Let’s address these separately.

DevOps and Changing Workflows

As the software development cycle has, by necessity, sped up, the way roles are defined is changing. Traditionally (if we can say “traditionally” about a field that’s actually not that old; SQL itself was created less than half a century ago), roles were pretty well defined. Developers wrote code. Sysadmins managed the servers. DBAs built and managed the databases, and handled all deployment from development, to staging, to testing, to production. QA noticed the mistakes that everyone made and made nobody happy (except the customers, by their reduction in complaints… oh, who are we kidding? Customers always complain). The workflow was pretty clear, but change was slow. At first this wasn’t much of a problem. Customers didn’t have a lot of choice, and competition was relatively limited, so it was okay if any new development took months to come to market.

However, for better or worse, that has changed. Changes in business models forced the need for a faster way of bringing products, data, and fixes to customers. Next thing you know, it has become necessary to bring everyone on board throughout the whole process, and to make it far more iterative. If you’re not open to change, it can be very disconcerting (especially with QA jumping into the fray throughout the process). Even worse, developers are committing changes to the database schema. And it seems to be working (at least in some cases). What’s a DBA to do?

Well, the first thing is to remember: “Don’t Panic.” As a DBA, you know whether the database is running properly. Remember that nobody else knows the inner workings of databases, or how to manage their performance. DBAs are integral members of the “Ops” part of DevOps, quite simply because they know how differences in one system can affect another. On top of this, some of the changes require DBAs in different ways than they did in the past.

DBAs must change a little to fit the new machine. While in the past it was possible to get away with taking things slowly and making sure everything was running at its best before releasing a product, customer demands have increased, and this is no longer a real option. In response, it is crucial for DBAs to learn more about the Agile process, wherein work is broken into pieces and problems are addressed iteratively (I know this is a simplification, but this isn’t an article about Agile).

Of course, it is true that some shops still take too long to deploy code. The DevOps folks are completely right about this. However, if we are going to use these shorter cycles, the risk of bad code getting released is greater than it ever has been before. Bad code can take down a database faster than you can say “Bobby Tables.”

Of course you can’t stop the pace of development, which means that the need for DBAs to be monitoring the databases (with good analytic tools) is greater now than ever before, quite simply because of these shorter cycles. Sure, the code worked in testing, but nobody can realistically tell how it will work with the volume that is seen in the real world.

For any smart business to provide a quality product, it’s absolutely critical to monitor status and performance. With good tools that can effectively monitor and analyse the behaviour of this code in real life, DBAs can keep track of any problems that need to be fixed, or database performance that needs to be tweaked.

Cloud Migration and Changing Technologies

Another major change affecting DBAs is the actual location of the database servers. In the past, our databases were all run on locally hosted servers. While this still is the case in many places (there will always be that dark freezing room in the basement where new forms of artificial intelligence are creating themselves and preparing for the singularity), more and more we are starting to host in the cloud.

Cloud hosting has its advantages, but new threats emerge, hidden in the silky words of the cloud host’s marketing language. There’s a claim that their systems are now “fully automated.” All you need to do, they claim, is choose a few configurations, push a button, and you’re all set. This language getting into corporate managers’ ears is fodder for any DBA’s nightmares.

As we know (and anyone with any experience of the cloud will know), these promises are pretty much fantasies. Sure, there are many advantages to cloud hosting. In many ways it does run smoother; there’s better replication and uniformity of access in the cloud. However, these systems still need to be monitored, maybe not entirely in the same way as your local servers, and partly for different reasons.

The supposedly “self-managed” instances you are running in the cloud? Those are typically only subsets of what you are used to running locally. They may require less manual work quite simply because they have less functionality than your local systems. They are often virtual and shared. The hardware they are using? Opaque to you, and likely highly variable from one location to another; instances may behave entirely differently in one location than in another. Typically, the functionality is severely limited compared to your own servers, which you can tinker with to your heart’s delight.


On top of this, applications rushed into production as a result of the DevOps process will not suddenly perform better in the cloud. Sure, you can add more resources to mask the problem, but that may not be the most efficient way, and each bit of extra resource, such as new instances or more replication, increases the expense.

This brings us to our next point: cost. Typically, most cloud hosts charge by usage. If you leave the management of these databases to them, they have no real incentive to make them run well, or to identify processes that are consuming extra processor time and/or bandwidth. In fact, it is in their interest not to fix these. So DBAs will need to monitor activity, keep good records, identify bad processes, and be able to fix them. You need to be able to fine-tune performance, identify whether adding resources makes sense, and in some cases clean up unused resources.

Overall, despite changes in workflow and technology, DBAs remain important, and for new reasons. With the right tools, their relevance has actually increased as business needs have grown.


Security considerations in database operations

As most DBAs know, security of data is one of the most difficult yet important tasks in maintaining a large estate of databases. It has kept more than one administrator up at night worrying about potential threats and pitfalls. With the growth of the information economy, not only is most information stored in databases, the value of that information has grown. As with anything of value, the threats to its security increase in direct correlation to its worth.

If you handle a great deal of sensitive data, you already know this and must deal with it on a daily basis, not only due to the necessity of maintaining business integrity, but also due to the potential legal pitfalls if sensitive information were to leak. It doesn’t take much browsing of technology or business news to read about some large company that leaked tremendous amounts of user data and was subjected to millions, if not billions, of dollars in losses.

Even if the data you store is not particularly sensitive in nature, that does not leave you invulnerable. Perhaps this data is integral to running your business. What if you lost access to it? Even if you have backups of everything, the sheer amount of time lost repairing data can become astronomical very quickly. And on top of this, nefarious users may not even care what the data is; they may break in just for the “fun” of it. No matter what type of system you run, security is a serious concern.

Confidence in the security of your database operations is fundamental to your business.

Some of the major vulnerabilities to databases include (but are certainly not limited to) default/weak passwords, SQL injection, improper user security, DBMS packages with too many features enabled, and more.

Before you start to panic: while there is no failsafe solution for protecting the integrity of your databases, there are quite a few steps you can take to reduce the likelihood of this sort of disaster.

Keep Sensitive Databases Separate

As any black-hat hacker knows, all it takes is one weak spot to get into a system. For this reason, never assume that all of your security should exist at the perimeter. If someone gets in, maliciously or even accidentally, they should run into more walls.

If a particular database contains very sensitive information, it should be quarantined from all other systems. If it’s not possible to keep this data completely offline, make sure nothing else can reach it with any ease.

Regularly Monitor

Keep a record of your database inventories, and regularly monitor each of them to determine whether there are any anomalies in their behaviour. A good system for keeping track of statistics and flagging unusual activity will go a long way towards spotting potential breaches.
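A minimal sketch of what "flag unusual activity" can mean in practice: compare the latest figure for a metric against its recent history and flag values far from the mean. The metric (failed logins per day) and the three-sigma threshold are illustrative assumptions; any real system would track many metrics and tune its thresholds.

```python
# Flag a metric value that sits far outside its recent history.
# The 3-sigma cut-off is an illustrative assumption, not a recommendation.
import statistics

def is_anomalous(history, latest, sigmas=3.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) > sigmas * stdev

# e.g. failed login attempts per day over the last two weeks
history = [3, 5, 2, 4, 6, 3, 5, 4, 2, 5, 3, 4, 6, 3]
print(is_anomalous(history, 4))    # a typical day
print(is_anomalous(history, 48))   # a day worth investigating
```

The point is not the statistics; it is that the check is automated and runs every day, so a breach-shaped spike gets noticed by the system rather than by luck.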

Role-Based Access Control

There’s a fundamental truism associated, not just with databases, but all systems: the most vulnerable part of any system is the human component. For a multitude of reasons, including inattention, forgetfulness, laziness, or even outright malicious intent, people are just not as reliable as we’d like them to be.

For this reason, do not give admin rights to all DBAs by default; instead, create roles and assign them to DBAs. It is easier to revoke role access than to change admin passwords all around. Also, start from the assumption that your DBAs need only minimal access. It’s a lot easier to deal with frustrated users than it is to put out the fire after the barn has burnt down.

Don’t let your developers have administrative power over users. The temptation to simply “test” a piece of code has a way of accidentally opening security holes and creating “temporary solutions” that never get patched.

You should also consider giving your developers access only to views instead of tables. If for some reason a hole gets left open, this will reduce the likelihood of actual data destruction.
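The views-instead-of-tables idea can be shown in a few lines. The example below uses SQLite purely so it runs anywhere; the table, column, and view names are made up, and real permission grants (e.g. `GRANT SELECT` on the view only) depend on your platform.

```python
# Toy illustration of "give developers views, not tables": the view
# exposes only the safe columns, so the sensitive one is unreachable
# through it. Names here are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        id INTEGER PRIMARY KEY,
        name TEXT,
        card_number TEXT   -- sensitive: developers should never see this
    )""")
conn.execute("INSERT INTO customers VALUES (1, 'Ada', 'xxxx-xxxx')")

# Expose only the safe columns through a view.
conn.execute("CREATE VIEW customers_safe AS SELECT id, name FROM customers")

row = conn.execute("SELECT * FROM customers_safe").fetchone()
print(row)  # the card number is simply not there
```

Combined with platform-level permissions that deny direct table access, a mistake in developer code can at worst read or damage what the view exposes, not the underlying data.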

Centralise Access Control

If you are running on a Windows network, you can use Active Directory to handle access rights and roles. Use a central login point: let the command console connect through the firewall to the management server, and then connect from there to the instances.

Try to place management servers in subnets behind the firewalls, so you do not have to open all firewalls to allow connections directly to all instances.


Don’t forget that even secure connections like SSH have vulnerabilities at their endpoints. Encrypt all connections where possible. If someone is sniffing your database connections (the larger you are, the more likely this is occurring, and for extremely sensitive data it should be assumed), make sure that any intercepted packets are encrypted, preferably with 256-bit encryption, which should be enough to prevent most brute-force attacks.

Software Security

While, as mentioned before, you shouldn’t depend on software alone to handle the security of your databases, it’s generally a good idea to enforce software security as well. It never hurts to have extra layers. If you have developers accessing your databases, or you are developing for them, consider using stored procedures and transactions with fallbacks wherever possible, and if software must access the database from a public interface, make sure inputs are passed as bound parameters or objects rather than raw SQL text. In other words, never allow inputs to reach your database directly. Again, as mentioned before, use views rather than direct access to tables.
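The "never let inputs reach the database directly" rule is easiest to see side by side. This sketch uses SQLite and a made-up `users` table; the same bound-parameter idea applies to stored procedure arguments on any platform.

```python
# Sketch: user input passed as a bound parameter vs. spliced into SQL text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Bad: the input becomes part of the SQL and changes its meaning.
bad = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'").fetchall()

# Good: the driver treats the input as a value, never as SQL.
good = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(len(bad))   # the injection matched every row
print(len(good))  # nobody is literally named that, so nothing matches
```

The concatenated version returns the whole table; the parameterised version returns nothing, because the "OR" trick is just an odd-looking name to the driver.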

Stay Up To Date

Keep abreast of all security news, particularly as it relates to your databases. Regularly check for updates of any sort, not only for your database platforms themselves, but also for any software or connections you use to manage your systems. When an update does come through, apply it immediately, lest you be left vulnerable to newly disclosed exploits.


This is really only a cursory overview of approaches to maintaining the security of your databases. As any security professional will tell you, there is no such thing as a completely secure system. However, if you can take a few steps to make the effort of getting in far greater than the payoff, you will have thwarted most types of attack.

The growing problem of “complexity creep” and how to avoid it

Complexity is often a natural condition of most successful businesses. We build databases to handle complex data, and to maintain a layer of structure for important business information.

However, when building a database, or a cluster of databases, the needs and requirements typically change over time. New divisions or projects spring up. This is generally not a bad thing for a business or organisation; in most cases growth is good. However, in order to accommodate it without incurring a huge amount of expense, you often add these new modules into existing databases rather than create new ones for different purposes.

There are many advantages to this approach; it makes accessing data easier when needed. However, in some (read: many) cases, you need to create new databases to handle different functions. One part of a business, such as vendor contract information, may have literally nothing to do with another, such as customer service records. So new databases are created. Maybe the needs of one function, such as sales records, even grow beyond the capacity of the original database because it receives higher workloads than the others.

A problem appears on one of your instances, so you fix it. Meanwhile a different problem occurs on another, and you fix that separately, so yet more instances are created. The complexity grows even more. This problem only increases with hybrid on-premise and cloud systems, each of which comes with different needs.

Maybe there are entirely different databases all holding versions of the same data, hacked together through XML or JSON feeds, and it works okay. But the more often this happens, the more complex your system gets. This, of course, is not something you can completely avoid (consider the differences between operational and analytical structures), but ideally you can minimise it.

The next thing you know you’ve created a monster. It’s inevitable. We’ve all experienced it.

While it’s not possible to remove all complexity, sometimes it makes sense to take a step back and look at your structures. Below is a list (admittedly simplified) of techniques that you can use to try to rein in complexity creep.

Efficient Database Design

While it may seem obvious, design needs to be addressed before anything else. If you start off with a bad or inefficient database structure, complexity can spread like a virus.

Build scalability into your systems from the ground up.

As a general rule, never assume that your small little database project will always remain so small. Quick-and-dirty design may work great in the short term, but you never know when you could be creating a headache for yourself down the road.

Use constraints

Don’t depend on software to handle this. If you enforce foreign key relationships at the data level, you don’t need to worry about the different coding styles of different developers. This will ensure the integrity of your data and make it less vulnerable to bad code.
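A quick illustration of what enforcement at the data level buys you, again using SQLite so it runs anywhere (SQLite needs the pragma switched on; most other platforms enforce foreign keys by default). The table names are made up.

```python
# The database itself rejects an orphaned row, no matter which
# application or developer wrote the insert.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite-specific switch
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id)
    )""")
conn.execute("INSERT INTO customers (id) VALUES (1)")
conn.execute("INSERT INTO orders VALUES (1, 1)")  # fine: customer 1 exists

try:
    conn.execute("INSERT INTO orders VALUES (2, 99)")  # no such customer
except sqlite3.IntegrityError as exc:
    print("rejected by the database:", exc)
```

However sloppy the calling code is, the orphaned order never makes it into the table; the constraint, not developer discipline, guarantees it.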

Avoid common lookup tables

I’ve seen this more often than not. Just because two different types of tables have a similar structure does not mean that they necessarily belong together. While this may seem simpler and cleaner at first, it can cause problems down the road. Though they are both worn on your feet and have similar attributes, you don’t put shoes and socks in the same dresser drawer. The same is true of your tables. Even if you can visualise everything going together in this way, you will run into constraint problems and a ton of confusion (not to mention that your developers may want to cause you physical harm).

Good normalisation vs database speed

On the other hand, are you repeating data in multiple places? Have you configured for scalability? Sure, sometimes it seems crazy to create a new table for each new piece of data, and each lookup can slow you down, but remember that complexity is often created by oversimplified designs as circumstances change.

Standardise Configurations

The more you standardise, the less clutter and confusion. A large number of different configurations, setups, tasks, and scripts is inefficient and time-consuming, particularly for complex operations. Creating standardised configurations makes clustering easier, which will in turn increase the uptime of your instances.

Operational vs. Analytical Configurations

While in many cases it’s not possible to configure all databases in the same way, you can typically break them into some generalised categories. The reality is that different types of databases often operate in opposition to each other; what works well for business workflow may not work well for running analytics. So maybe you spin off an analytical view of the operational data.

However, you can place almost all instances and views into one of these two models. As a policy, make sure that all databases within a given purpose-type are built with the same or similar configurations, whether it seems necessary or not. It may feel unnecessary at first, but it will save you a lot of time and frustration down the line.

For operational databases, use one standardised configuration, and a different one for databases used for analytical purposes.
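One way to keep the two-baseline policy honest is to define each category’s settings once and derive every instance’s configuration from it, so per-instance exceptions are explicit rather than scattered. The setting names below are generic placeholders, not real server options.

```python
# Two standard baselines; every instance config is derived from one of
# them, with any overrides visible at the call site.
BASELINES = {
    "operational": {"max_connections": 500, "query_timeout_s": 5},
    "analytical":  {"max_connections": 50,  "query_timeout_s": 600},
}

def build_config(category, overrides=None):
    config = dict(BASELINES[category])   # start from the shared baseline
    config.update(overrides or {})       # exceptions are explicit
    return config

print(build_config("analytical"))
print(build_config("operational", {"query_timeout_s": 10}))
```

Whether the baselines live in Python, YAML, or a configuration-management tool matters less than the fact that there are exactly two of them, and every deviation is written down.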

Management Tool

Once you have everything configured in a similar fashion, you can manage everything from a central place. However, sometimes you don’t have this option, particularly if you’ve inherited a veritable forest of platforms and configurations.

The problem with working with multiple platforms and configurations is that it can become difficult to manage them without jumping from one to the other. Analytical and management tools (if they exist) tend to work for one platform or another. They may also have limitations for the number of instances they can handle within one version of the application.

However, if you can find one tool that works with all of these platforms, you will save a considerable amount of time. Ideally this tool should do more than monitor activity and statistics; it should also be able to help distribute the same scripts, tasks, and reports to all instances.

Facilitate Group Operations

Once you have a good management tool, for each database, create a standard set of group operations that you can handle as one batch process.

Try to keep these as close to identical as possible, at least for groups of instances that have the same or similar functions. If they are configured in the same way, you will likely need only one script for operations such as installations, updates, and reporting.
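The one-script-per-group idea reduces to a small loop. In this sketch, `run_task` is a stand-in for whatever your management tool actually executes (ssh, an API call, sqlcmd, and so on), and the instance names are hypothetical.

```python
# Sketch of a batch "group operation": run the same task against every
# instance in a group and collect the results.
def run_task(instance, task):
    # Placeholder for a real remote call (ssh, API, sqlcmd, ...).
    return f"{task} completed on {instance}"

def group_operation(instances, task):
    results = {}
    for instance in instances:
        results[instance] = run_task(instance, task)
    return results

reporting_group = ["rpt-01", "rpt-02", "rpt-03"]
for line in group_operation(reporting_group, "index maintenance").values():
    print(line)
```

The payoff of standardised configurations is precisely that this loop works: one script, one task definition, and the group membership list is the only thing that changes between environments.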

These are only a few examples of ways to reduce complexity creep and gain a stronger hold over your database operations, but if you use these methods, you will certainly be in a better place.

5 inevitable red flags to watch out for in complex database systems

The goal of any organisation, business or otherwise, is to grow.

In the past, the size of a company was determined by the number of physical products that one either created or sold. Say you built and sold left-handed toolboxes. If you sold enough of these left-handed toolboxes, and if they were of good enough quality, more people wanted your toolboxes, so you created more.

With this growth came a need to keep records of these products, customers, sales, and so on. Companies simply kept records with pen and paper, in stacks of ledgers. Soon these started to fill filing cabinets, and the cabinets started to fill rooms and even buildings.

Computers made things a lot easier; instead of paper, ledgers were stored in text files. After that came spreadsheets, which were fantastic at first (anyone remember Lotus 1-2-3?) but soon became inadequate for large amounts of data. Next, relational databases helped organise information. 

As this data became more and more complex, the economy shifted to focus on information itself, and information storage became essential to even the most basic operations of any company.

However with change comes new problems.

A Mix of Different Platforms and Versions

Much as in the past, as companies and organisations grew larger, the methods of storing information grew more and more complicated. As a result, that simple database your organisation used to manage its data was no longer sufficient. More databases were added, and as new platforms became available, new functions of your business were stored differently.

Your earliest business data was stored in an old Access database (yes, admit it, you know that’s how you started; or maybe you can earn some real credibility and talk about your old DB2 systems to anyone who will listen… but they won’t).

So you moved to more complex systems, like Oracle or Sybase. However as new segments of your business were added, different platforms were used (or new DBAs preferred them).

Soon you ended up with a far more complex array of different systems. Even among databases running the same platform, you now have different versions, not all of which are compatible. Why not upgrade the older ones? Several factors make this impractical, and it may not even be a good idea. Sure, you can migrate everything to a new platform, but this is time-consuming as well as expensive, and every time some new system came out, you’d need to do it all over again. That would quite simply slow down your entire organisation and leave your business in the dust.

Many Different Tools All With Different and Complex UI

So you are stuck with this rainbow of platforms. Of course you need to monitor and maintain each of these systems, but due to the differences between them, you may end up with an entire infrastructure of tools, one set for monitoring each platform.

Each tool is designed for a specific platform.

You have tools for monitoring your MSSQL databases (such as SQL Monitor) which work well on this platform, but do not play well with others.
For more on this, see our in-depth article: Why Simplicity Is Key When Managing Your Complex Database Systems

Lack of Proper Tools

Sometimes you don’t even have tools that do what you need. You can purchase tools for the other platforms, or use variants of the above which claim to do the job but have drawbacks. Maybe each works fine for its specific system.

However, reality being what it is, different sets of DBAs work with different platforms.

Each tool has its own UI and its own processes for reporting data. Maybe one tool can monitor emergencies well but can’t spot problems before they occur, or requires specialised knowledge to identify potential pitfalls in the logs.

Maybe another system is good at the latter, but doesn’t provide clear enough information on other crucial information, such as whether an index has been dropped, or whether it has just disappeared into the aether never to be seen again.

Missing Overview and Routines

Many of these systems come hermetically sealed. Much like many popular computer companies (who shall be left unnamed) these have beautiful designs and look great out of the box. However, if something doesn’t work that way that you want, there is virtually no way to get under the hood, at least without voiding the complex warranties.

So customisation becomes a hardship.

Others might be a little rougher, and even though you might be able to hack into them, their proprietary languages make managing them somewhat difficult, and customising them for other systems virtually impossible.

Home-Grown Tools

Sometimes it’s possible, especially if you have some particularly talented DBAs, to create some specialised tools. You have a genius working in your systems area who has created her own tools which, in their own convoluted way, manage to keep records of each system.

She has managed to hack through the proprietary systems (of course voiding the warranty, but hey, she’s talented, who needs warranties anyway, right? Right???) and create something that works.

Unfortunately, she works alone and doesn’t like to share information, and one day gets sick, or rage-quits due to the complexity of all the different systems and the constant changes (you add a MongoDB, and suddenly the idea of a non-relational NoSQL database system causes her brain to break).

So now you have everyone else left to manage these great tools. Much like the platforms, including the older ones (yes, vendors keep updating them to keep up with the newer systems, creating unknown horrors for your DBAs), the tools quickly become incompatible.

Due to human limitations (and no, the Singularity has not yet occurred, so you can’t replace your DBAs with robots… thankfully), keeping all of these scripts and tools up to date is a virtual impossibility.

So what do you do?

Ideally what you need is a system that can handle all of these tasks. If only you could find some tool that provided a top-level view of all of your systems, with an interface that remains at least mostly consistent, and that presented the same information for each of your platforms and versions. Add the ability to customise the system to handle what you need it to do, while keeping everything accessible and manageable from a central location – well, that would be great, wouldn’t it?
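The core idea behind such a tool can be sketched in a few lines: translate each platform’s status report into one shared shape, so a single overview can render every instance the same way. The platform names and status fields below are illustrative assumptions, not any real product’s API.

```python
# Normalise heterogeneous per-platform status reports into one
# common schema that a single dashboard view could consume.

def normalise(platform: str, raw: dict) -> dict:
    """Map platform-specific status fields onto a common schema."""
    if platform == "mssql":
        return {"instance": raw["server"],
                "up": raw["state"] == "ONLINE",
                "free_gb": raw["free_space_mb"] / 1024}
    if platform == "postgres":
        return {"instance": raw["host"],
                "up": raw["accepting_connections"],
                "free_gb": raw["disk_free_bytes"] / 1024**3}
    raise ValueError(f"unknown platform: {platform}")

# Two very different raw reports, one consistent overview:
overview = [
    normalise("mssql", {"server": "sql01", "state": "ONLINE",
                        "free_space_mb": 51200}),
    normalise("postgres", {"host": "pg01", "accepting_connections": True,
                           "disk_free_bytes": 64 * 1024**3}),
]
print(overview[0]["free_gb"])  # 50.0
print(overview[1]["free_gb"])  # 64.0
```

The value isn’t in any one mapping, but in the fact that every platform and version ends up speaking the same language to the person watching the screen.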

Could a database dashboard improve your business’s strategic decisions?

In large businesses and organisations, it’s often vitally important that strategic decisions are made on up-to-date reliable information. Decisions made on the basis of poor information can lead to costly mistakes and problems further down the road.

In an IT and database setting, this means having access to a complete overview of your entire server farm, with the ability to monitor every one of your database and VM instances.

This sort of overview is what’s necessary for making the right decisions when it comes to resourcing and planning for the future. A high level of insight into your instances can help you to overcome or sidestep any problems.

In this article, we’ll take a look at how a ‘database dashboard’ could help your strategic decision making:

Information-Based Decision Making

Decision making as the manager of a large number of instances can be difficult without access to all the relevant information. Deciding what to do in regards to resource allocation, company-wide security changes, and other management processes can be extremely slow without instant access to the figures that you need.

Thankfully, tools exist that can provide decision makers (DMs) with a ‘database dashboard’ – the means to see all relevant information in one place. These dashboards can provide all the information that a DM needs to make any decision without the need for lengthy reports and studies, saving both time and money.

Decisions made without access to this level of insight and information can be risky. For example, resourcing estimates can be wrong without an accurate readout of current usage and trends, leading to costly over- or under-resourcing.
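As a toy illustration of what trend-based resourcing means in practice, the sketch below fits a straight line to recent disk-usage samples and projects when the volume will fill up. A real dashboard does this continuously and with far better models; the figures here are invented.

```python
# Project days until a volume is full by fitting a least-squares
# line to daily usage samples and extrapolating to capacity.

def days_until_full(samples_gb: list[float], capacity_gb: float) -> float:
    """Linear extrapolation from daily usage samples to the capacity limit."""
    n = len(samples_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_gb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_gb)) \
        / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return float("inf")  # usage flat or shrinking: no deadline
    return (capacity_gb - samples_gb[-1]) / slope

# Growing ~10 GB/day on a 1000 GB volume currently at 700 GB:
print(days_until_full([670, 680, 690, 700], 1000))  # 30.0
```

Without the usage samples – the ‘accurate readout’ the text describes – there is simply nothing to extrapolate from, and resourcing becomes guesswork.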

Complete Instance Control

Just as a database dashboard can give you all the information that you need to make decisions, it can also provide the total control that you need to effect changes across all of your instances from one place.

Database tools that work with all of your instances, regardless of their number or the platform they run on, allow you to quickly make global changes to your organisation’s server systems. Management and monitoring tools from dbWatch do just this, giving you the control you need to make informed changes across your entire business.

Whether you need to make the resourcing changes that we’ve already mentioned, or change roles and security measures, or perform routine maintenance, having a top-down view with total control makes most jobs quicker and easier, and thus cheaper too.

Preventing Future Issues

When it comes to cutting costs and saving your staff time and headaches, there are few more important things than reducing the risk of future issues or problems with your servers. Disasters that bring your server systems down, or cause data leaks, can have costly implications for your entire organisation too, so it’s important to keep the risk to a minimum.

A database dashboard provides the information necessary for DMs to identify issues before they become a problem, and to take appropriate action well in advance of any disaster.

Whether you need to make structural or resource changes, monitor performance over time or establish routine maintenance plans, a dashboard gives you the information that you need to make informed decisions and the power to make them from just one place.


The most important tool for any database manager is full control over their system – and a database dashboard provides both the complete overview and the deep insight necessary for that full control.

With database tools that provide you with the information you need quickly and all in one place, you’ll be best placed to make information-based strategic decisions for the benefit of your entire business.

3 ways to remain consistent when performance tuning your SQL servers

Tuning the performance of SQL servers can be a complex and time-consuming task that can affect consistency. Frustratingly, there are no shortcuts either – it’s a job that needs doing and cutting corners can spell disaster for your organisation and require some costly maintenance to put right.

Here, we’ll take a look at three ways to keep the process consistent when you performance tune SQL servers:

Total Control

For many organisations today, server instances can number in the hundreds, if not the thousands, and taking the time to individually tune each one can be a very long process.

Thankfully, some database tools are designed to work with large and scalable systems, such as those offered by dbWatch, which work with thousands of instances at a time. With the help of these tools, DBAs can now work in a new way, comparing and tuning their entire environment together, without needing to dig down into each instance, one at a time.

Not only does this level of control greatly reduce the time needed for performance tuning, but it can provide much-needed oversight too. From just one interface DBAs can access all of the information that they need, managing and monitoring their entire system without the need for multiple reports or a suite of different tools.

Top To Bottom Tweaks

While the total oversight offered by database tools is useful for implementing sweeping changes and seeing their effects at a glance, the same tools can also be used to dive deeper into the system to make smaller instance-specific tweaks.

Thanks to the all-in-one interface provided by these tools, allowing you to see and manage all of your instances from within one tool, diving into an individual instance is a simple task. This level of control again helps to save DBAs valuable time, as well as cutting down on the number of tools used by your business – saving you money on licensing and installation costs too.

Planning Ahead

No matter how many instances you operate, or how complex the changes you need to make may be, no good project can start without adequate planning. So, before a DBA can start the process of tuning a server, they need to know which areas are fit for improvement and what sort of changes should be made.

To help with this, many teams use tools which automatically collect and analyse data about load and resourcing. Armed with the relevant reports, DBAs can waste no time in making the best possible tweaks to an instance, providing the exact benefits that their business needs. For more on monitoring, data collection and analysis, check out SQL Monitoring – 5 steps to full control.

These same automated tools can be used to provide data to show the effectiveness of any performance changes, giving DBAs an idea of how successful their tuning was. This can help to inform any future maintenance and serve as evidence to others in your organisation of the benefits of the work that you’ve completed.


SQL server tuning can be a long and complicated job, but there are tools out there designed to make it a little easier. 

With access to reports and figures to guide you, and control over all your instances from just one tool, your DBAs will be able to get the job done quicker than ever before without cutting any corners.