3 ways tools can streamline your database maintenance procedures

Database management and monitoring tools are a great solution for streamlining the way your DBAs work. With the potential to save time, make workflows more efficient, and save your company money, these tools can change the way that IT teams work, providing access to greater automation and oversight of a business’s infrastructure.

Many large organisations around the world already make use of tools like these to help with managing and maintaining their servers, and here we’ll explore three ways that they can help your team too:

Automated Maintenance

Automation is one of the best ways to streamline database maintenance, saving staff time, cutting your costs and giving DBAs the ability to focus on other important tasks.

Many tools designed for large and scalable database servers provide the ability to automate maintenance and a host of other tasks too. From automatic backups and updates to routine report production and real-time alerts, these tools can handle the small tasks that would otherwise take up a DBA’s time.
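
The scheduling logic behind this kind of automation can be sketched in a few lines. This is a minimal illustration using Python’s standard sched module, with hypothetical backup and reporting jobs standing in for the tasks a real tool would run:

```python
import sched
import time

# Hypothetical maintenance jobs -- stand-ins for the backup, update and
# reporting tasks a real tool would automate.
def run_backup(log):
    log.append("backup completed")

def produce_report(log):
    log.append("report produced")

def schedule_maintenance(jobs, delay=0.01):
    """Run each (job, args) pair on a simple timer using the stdlib
    sched module; a production tool would use cron or its own engine."""
    scheduler = sched.scheduler(time.monotonic, time.sleep)
    for i, (job, args) in enumerate(jobs):
        scheduler.enter(delay * (i + 1), 1, job, argument=args)
    scheduler.run()

log = []
schedule_maintenance([(run_backup, (log,)), (produce_report, (log,))])
print(log)  # → ['backup completed', 'report produced']
```

The point is simply that routine tasks become code that runs on a timer, rather than work a DBA has to remember to do.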

Automated maintenance can also offer an organisation an additional level of security and stability. With DBAs occupied with other important tasks, database tools can ensure that essential work is still completed and that your system is well maintained, meaning you’ll never have to worry about vital work being overlooked by busy staff.

Faster Deployment

Many businesses now rely on large and rapidly expanding numbers of servers and instances for their work. While this growth brings advantages in additional storage and resource availability, larger systems can be harder for DBAs to manage.

Monitoring and management solutions that have been designed to work with large and varied systems can help in these cases, however. Capable of providing an overview and access to a very high number of instances at once, regardless of their platform or version, these tools can allow DBAs to apply changes to all instances across their business at once.

Being able to effect changes on all of your instances at once from within just one tool reduces the time it takes DBAs to complete maintenance and procedural changes, and can help to reduce system downtime too. Long periods of downtime, or slowdowns caused by DBAs working on systems, can have a negative impact on customers and potentially cost you valuable business.

A Unified Approach

Not only do these tools dramatically reduce the amount of time needed to make changes across a large number of servers, but they can also provide a simple and unified overview of your entire system.

Making use of these solutions, DBAs can control, monitor and report on all instances in one place, regardless of software versions or platform, streamlining their processes and cutting down on the inefficient use of time and money. DBAs can edit databases, control user groups and security, and make all other changes that they need without having to load up a whole suite of tools – just one scalable tool can handle the entire system.

Using just one interface like this has the advantage of saving DBAs time by streamlining their maintenance and monitoring processes. With less time spent switching between tools, accessing different servers and completing routine tasks, DBAs can allocate their time to more pressing issues to help your business.


With the help of database tools designed specifically to help streamline and automate your database maintenance procedures, your DBAs can work more efficiently to save your business time and money.

4 ways database management tools can reduce your IT costs

For many organisations today, spiralling IT costs are a major concern. From the cost of operating large server farms, to staffing and maintenance costs, IT budgets are growing rapidly in many industries.

To help businesses mitigate some of the costs of working with large server farms, and to help their staff be more efficient, several solutions now exist for monitoring and managing databases. From cutting down on unnecessary bills to making DBA teams more productive and streamlining workflows, these tools are built from the ground up for efficiency, scalability and use by large enterprises.

Here, we’ll cover four ways that these solutions can keep your IT costs in check and improve efficiency.

Fewer Serious Incidents

Disasters and serious incidents represent a very high financial cost for businesses, both in the time spent planning and the cost of restoring services and maintaining security.

For many IT teams, a lack of oversight over their systems means that emerging incidents aren’t discovered until it’s too late. By making use of database monitoring solutions, DBAs can utilise proactive monitoring and analytics to detect and correct problems before they become serious. Monitoring tools can also be used to discover security weaknesses, to identify struggling infrastructure, and to institute regular preventative maintenance across your servers and instances.

Lower Licensing Costs

Many software providers offer several forms of license for organisations to choose from, with an enterprise license usually being the most expensive and offering access to the full range of software features. Quite often, however, DBAs won’t use or need the full suite of features that enterprise licenses unlock. Despite that, many businesses still pay for enterprise licenses which they don’t need, either by accident or because their needs have changed and they lack the time, tools or expertise to detect and correct the situation.

Database management tools help to solve this problem. By analysing actual usage and reporting on which software and features are being used versus licensing level, DBAs can leverage management tools to track and adjust their licensing levels.

In the case of Oracle in particular, many DBAs inadvertently overstep the standard license terms. Oracle supplies its full suite of tools to DBAs regardless of license, and may later demand more money if an audit discovers that a company used tools or features it hadn’t licensed – something easily done when changing just one option can trigger a full and costly Enterprise license.

Optimised Performance

The performance of your IT systems is vital to your business, not just because it provides your customers and staff with a smooth service but because it saves you money too. Poorly performing servers can lead to slowdowns at work, potentially disrupting services to customers which could cost your organisation valuable business and hurt your reputation.

With the help of database management tools, DBAs can optimise the performance of your organisation’s IT systems, improving your setup to ensure that it consistently runs smoothly.

Be More Productive

DBA productivity is one of the key areas that can be improved by any enterprise looking to cut costs and reduce operational expenses. With the right solutions, DBAs can get access to the analytics, insight and monitoring that they need to perform their jobs efficiently and to allocate their time where it’s most needed.

Many database management tools offer streamlined interfaces that allow DBAs easy access to all the information that they need and the ability to monitor multiple instances at once. Tools like this also combine the features of many other individual server tools, saving time for DBAs and reducing the amount of time spent training to use new software.

The best practice guide to database scalability excellence

We’ve spoken before on our blog about the importance of database scalability to modern businesses. Companies can quite easily be held back by rigid databases and server setups, leading to higher costs, difficulty with future business, and reduced scope for further growth.

In this article, we’ll look at best practice for businesses that are currently upgrading their IT infrastructure and moving towards greater scalability, or are still in the planning stage. From adequate testing and planning to putting the right tools and procedures in place, these tips should help you to transition smoothly into a scalable way of working and give your business the edge going forwards.

Here are our top best practices for database scalability:

Capacity Planning

One of the key reasons for a business using scalable technology is the continued growth in both database size and the number of instances needed.

Best practice when transitioning to a more scalable infrastructure is to properly analyse and plan out your capacity in terms of storage, performance, capability, redundancy and other resources. This means looking at the required capacity of your servers both today and in the future, and planning your new systems around that. Think about how much expansion you expect your business to experience in the coming years too, and plan accordingly – choosing both infrastructure and software that suit your requirements.

A lack of proper planning at this point could cost you in the future. Low resource capacity could lead to you needing to undergo this process again in the coming years, bringing more cost and disruption to your company.

Security First

Security should be a priority for any team looking at improving or expanding their business’s server equipment and databases.

From encryption and certification concerns through to full user access controls, it’s important that your team is able to maintain complete control over your database’s security at all times.

Key to this is having the right tools in place – tools that allow complete control over your database instances and will integrate with Active Directory and/or Kerberos. A strong role- and/or group-based access control regime will facilitate secure access to your data. It is possible that the existing tools you have in place, those used for managing a smaller database farm, won’t scale to the higher number of instances and users that you are now planning for – so don’t overlook a potential upgrade of your tools.

Adequate Monitoring

Just as with the security tools mentioned above, it’s vital that you consider putting the right monitoring tools in place during your scalable transition.

Monitoring tools are about more than just resource tracking – though that is a big feature. Having tools capable of monitoring across thousands of instances will also give you access to health checks, report production, real-time alerts and statistics, and load and performance data that will give you valuable insights necessary for performance optimisation, server consolidation, Oracle/SQL Server license optimisation and other procedures that should be considered routine in larger environments.

Again, it’s important to check whether you’re using tools that will work with your new systems. Many database monitoring tools aren’t able to monitor the large number of instances required by modern businesses, and some are tied to particular platforms or versions, restricting your options for growth in the future.

Whatever stage of your scalability journey you’re at – whether you’re only now at the planning phase or the transition is already in progress – these tips should help your business to move to a secure, reliable and scalable database in the future.

How database monitoring tools can help assess your need for DB expansion

Databases and IT systems are the backbone of most businesses today – businesses of any size in any industry. From tracking transactions to hosting online services and storing business information, database servers are used for countless vital parts of companies, and business can grind to a halt if they’re not properly maintained and supported.

A key part of managing a company database system is knowing when it’s time to expand or adjust your system – whether that means growing the number of instances that you operate, changing the platform you work on or making other structural changes.

To know when it’s time to make those changes, and to see whether your business could benefit from IT infrastructure expansion, DBAs need to make use of database monitoring tools. With the right tools, IT staff can see at a glance what is holding their system back and put a plan in place to improve their services.

Here are just some of the ways in which database monitoring tools can help you to assess your need for database server expansion.

Real-time Alerts

A key part of any great database monitoring solution is a real-time alert system. Capable of alerting you and your team to system outages, usage spikes, poor performance or other issues, alerts give you the means to adopt a proactive approach to monitoring, to see exactly what areas of your system are falling behind and what effect that is having on your business. By configuring tools to monitor the weaknesses in your system, you can receive alerts to prompt action and to warn you of any impending problems. With the data that these alerts provide, you can then put a plan in place to make sure that problem doesn’t occur again. If the solution you have allows it, you can even extend your plan to include custom tasks to be scheduled to run whenever required, thereby achieving more automation and consistency in future.
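
At its core, a threshold-based alert check is simple. The sketch below is illustrative – the metric names and thresholds are assumptions, not any particular tool’s configuration:

```python
# Illustrative thresholds; a real monitoring tool would make these
# configurable per instance and per metric.
THRESHOLDS = {
    "cpu_percent": 90.0,
    "disk_used_percent": 85.0,
    "active_sessions": 500,
}

def check_alerts(sample):
    """Compare one metrics sample against the configured thresholds
    and return an alert message for every breach."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = sample.get(metric)
        if value is not None and value > limit:
            alerts.append(f"ALERT {metric}={value} exceeds {limit}")
    return alerts

sample = {"cpu_percent": 97.2, "disk_used_percent": 40.0, "active_sessions": 120}
print(check_alerts(sample))  # → ['ALERT cpu_percent=97.2 exceeds 90.0']
```

In practice the same check runs continuously against live samples from every instance, which is what turns raw metrics into the proactive warnings described above.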

Resource and Usage Monitoring

One of the most important jobs of any database tool is resource monitoring – whether your servers are physical, virtual or cloud-based – and this forms the basis of the alerts and reports mentioned elsewhere in this article. By keeping an active log of resource usage, including disk space and performance, CPU and memory usage, and other variables too, you can identify any bottlenecks and areas due for upgrade.

When looking for a solution to help you with database monitoring, it’s important to choose one that is compatible with all of the systems that you use. Scalable tools from dbWatch can work across thousands of instances and support a wide range of platforms and versions, making them a great choice for resource monitoring on large, varied systems.

Database System Reports

In addition to real-time alerts, reports and longer-term logs produced by monitoring tools can be invaluable when assessing your database’s performance.

By producing reports that look at your usage and performance statistics, and server usage over a certain period of time, your team can properly assess the need for expansion. Times of day or week of heavy usage can be identified and capacity increased to compensate, and weaker parts of your infrastructure can be noted and replaced.

Reports backed up with facts and figures can also help DBAs to make a stronger case for investment into IT systems, providing a real-world example of how improved server systems could benefit the business.

Database monitoring solutions can be invaluable for determining the need for database server expansion. With the right tools in place, you can receive real-time and long-term views of performance and usage and put plans in place to improve the areas of your system to best support your business.

What is meant by horizontal and vertical database scalability?

Database server scalability is a common practice in use by IT departments today to help them cope with ever-growing databases and server requirements.

As data requirements grow, the number of server instances in use by businesses explodes, and the importance of stable and reliable IT systems increases, it’s no longer possible for businesses to cope with rigid, non-scalable systems and tools.

Making your database servers and solutions scalable isn’t necessarily a simple task, however, and there are two main variations of server scalability to take into account. In this article, we’ll outline the differences, pros and cons of horizontal and vertical scaling.

Horizontal Scaling

“Scaling out”, or horizontal scaling, is the practice of adding more instances or servers, spreading databases across more machines to deal with low capacity or increased demand. When more capacity is needed in a system, DBAs can simply add more machines to keep up. In database terms, this means that data is often partitioned across the many machines that make up the cluster, with each individual server holding one part of the whole database. Horizontally scaled servers can also make use of data replication, whereby one machine holds a primary copy of the entire database while the other copies are used for read-only load.
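
The partitioning idea can be illustrated with a stable hash that maps each row key to one node. The node names here are hypothetical, and real systems typically use more sophisticated schemes such as consistent hashing:

```python
import hashlib

NODES = ["db-node-1", "db-node-2", "db-node-3"]  # hypothetical cluster

def node_for_key(key, nodes=NODES):
    """Map a row key to one node with a stable hash, so each server
    holds one partition of the whole dataset."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Every key lands on exactly one node, and the mapping is deterministic,
# so reads and writes for a given key always go to the same machine.
assignments = {k: node_for_key(k) for k in ["order-1001", "order-1002", "order-1003"]}
```

One design note: a plain modulo hash like this reshuffles most keys when a node is added or removed, which is why production clusters prefer consistent hashing or range-based partitioning.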

Horizontal scaling has several advantages over a vertical approach, particularly in terms of cost. Not only is establishing a scalable system easier this way, with individual nodes being cheaper than in a vertical set-up, but upgrades are also quicker and more affordable. Maintenance can be easier too, with faulty instances quickly switched out with minimal disruption to the rest of the system.

Conversely, a high number of instances adds complexity to a system, which can make monitoring, administration and troubleshooting harder and can increase recovery time from disasters. Licensing fees can also be much higher under a horizontal system if machines are licensed individually, and the physical space needed to house multiple servers can also bring cost and logistical issues.

Vertical Scaling

Vertical scaling, or “scaling up”, involves adding more resources to a smaller number of server instances – the opposite approach to a horizontal system. Through increasing CPU resources, memory and storage or network bandwidth, performance of every individual node can be improved, scaling even the smallest servers to handle large databases.

Compared to horizontally scaled servers, this offers the advantage of being much easier to establish and administer – as it is just a small number of machines, or even just one. Vertical systems can also offer advantages in terms of stability, reliability and development, and cost savings through being suitable for smaller data centres and licence costs being lower.

Vertically scaled systems do come with some disadvantages though. Not only can initial hardware costs be high due to the need for high-end hardware and virtualisation, but upgrades can be both expensive and limited – there is, after all, only so much you can add to one machine before your database outgrows it. Normally, clustering such as Always On or RAC is applied to these large servers to make them reliable and give them enough capacity to handle the load.

You may also find yourself quickly ‘locked-in’ to a particular database vendor by following this strategy, and moving away from that vendor later could mean very expensive server upgrades.

As database requirements continue to grow, your organisation will need to adopt a form of scalability in order to keep up. While horizontal scaling is widely considered to be the modern, flexible approach, and certainly does have some advantages, it can bring some unwanted complexity into your IT infrastructure.

Horizontal scalability brings complexity, vertical scalability brings cost and upgrade limits. In either case, good tools for monitoring, analysis and administration can reduce the challenges, help you deliver greater productivity and performance, and keep costs in check.

The approach that you choose will depend on your business’s requirements, but whichever you opt for will help your business to continue growing and to keep up with ever-expanding databases.

4 early warning signs that you’re outgrowing your database systems

Modern businesses of all sizes rely on large and scalable databases for the smooth running of their IT systems. From logging transactions and stock levels, to providing staff logins and sharing information throughout a business, databases and their associated server farms provide essential functions for organisations in every industry.

But, as more and more business is conducted online, and as every business moves to digital storage and transmission of data, those databases and server farms are growing larger than ever. Because of continued expansion, many server administrators and IT managers are finding that their servers and database management tools can no longer keep up with demand, and that their business has outgrown their hardware.

Here are four signs that you might have outgrown your business’ database tools and infrastructure:

Poor Performance

A key indicator of a database that has been outgrown is poor performance for the system’s users.

Whether it’s resulting from a lack of storage, inadequate hardware, slow connections or a lack of proper maintenance and management, a poorly performing database system can negatively affect almost every aspect of your business. With applications performing poorly and users unable to access what they need when they need it, your business could be running inefficiently, leading to lost business and wasted money.

While a scalable server farm can help to alleviate some of the strain, you can also look to management and monitoring tools to track down and analyse performance issues and to better maintain your services.

Too Many Tools

If your morning starts with logging into multiple database tools – say more than four – it could be a sign that it’s time to clean up your server environment and make use of tools specifically designed for a system of your size.

While multiple tools may work on small numbers of instances, larger systems can often benefit from tools that scale with them – avoiding a duplication of data and effort as well as cutting down on licensing costs and staff training time.

Slow Reporting Performance

If you dread running reports and auditing your systems, or often find yourself running reports overnight to avoid system downtime or busy periods, it might be time to think about improving your database systems and tools.

Getting accurate data quickly is vital for many businesses, and having access to reports on demand can mean the difference between making or losing a lot of money. And while some database monitoring and reporting tools can cope well with small numbers of instances, as your database and server numbers grow you may need to invest in new scalable services and infrastructure.

High IT Costs

Another sign that your business might be running an inadequate database system is if your IT maintenance costs have risen significantly.

As databases and their servers grow, they often require more maintenance and the use of specialised tools to help keep them running smoothly. If your DBA is facing a daily battle to keep your business online and has had to resort to patches and quick fixes over a strategic plan for expansion, it might be time to make a proper investment into your infrastructure.

Taking the time and money to create a stable, scalable and large-enough system for your business can save all of your staff many headaches down the line – and spiralling costs and a beaten-down DBA team are sure signs that you need to take action.

You can also potentially cut your IT costs by working to consolidate your servers and by ensuring that you’re making use of the correct software licences. Many businesses can keep costs down by using correctly sized licences, making sure that they aren’t paying for something that they don’t need or don’t actually use.

Keeping your database and server instances at an appropriate scale for your business, and keeping them running smoothly, is essential in today’s data-driven world. With the help of scalable tools, proper planning, and investment in the correct services, your system can continue to grow with you to support your business for years to come.

3 performance issues to watch out for when monitoring and managing a large number of SQL instances

Large databases are a key ingredient of today’s online world – most businesses, organisations, applications and software rely on them every second of the day to keep running smoothly.

But maintaining stability and smooth performance across a large and ever-growing set of databases isn’t simple, and that’s why many IT teams and DBAs rely on monitoring and management tools to identify and solve issues quickly.

Monitoring and management tools give you the means to spot anomalies, to track server performance, and to make performance optimisation tweaks. But to solve problems and to develop strategies to improve system performance, you need to know what you’re looking for, and understand how to use the tools at your disposal.

To help, here are three key performance issues to look out for if you manage a large database:

System Slowdowns or Outages

One job of a DBA is to translate often vague complaints from colleagues and clients into action – and that is especially true when it comes to complaints of slow software and poor system performance. Using monitoring and maintenance tools, an experienced DBA can identify the source of problems for a system’s users, and put fixes in place to improve application performance.

Frequent system outages or crashes can also be identified and remedied by a DBA using the correct monitoring tools. Using the right tools can also help you to spot issues before they become a problem for users, allowing you to preempt slowdowns and outages and to correct problems before they affect your business.

Low System Resources

Many of the issues that your end-users might experience can be attributed to a lack of server resources – from slow or busy CPUs to a poor-performing network or simply not enough memory available.

Using server monitoring tools, a DBA can keep an eye on system resources and compile reports. These reports can be cross-referenced with known issues or user reports to determine whether a lack of resources is responsible for your users’ issues. Resource monitoring is a key aspect of any database and server monitoring tool – and it is vitally important when looking for issues and diagnosing their source.

A Poorly Maintained System

While system resources and related issues can lead to a poor service for customers, a database and server system that isn’t well maintained and tuned is vulnerable to even more problems.

Using a database management tool, DBAs can continually tweak and tune a system to perform at its best: reorganizing indexes, performing consistency checks, optimizing memory usage and tuning the overall system.
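
As a concrete example of the tuning decisions involved, a widely used rule of thumb for SQL Server index maintenance is to reorganize lightly fragmented indexes and rebuild heavily fragmented ones. A sketch of that decision, using the common 5%/30% cut-offs:

```python
# A sketch of the common index-maintenance rule of thumb: leave low
# fragmentation alone, reorganize moderate fragmentation, rebuild
# heavy fragmentation. The 5%/30% cut-offs follow widely used SQL
# Server guidance; real plans also weigh index size and workload.
def index_action(fragmentation_percent):
    if fragmentation_percent < 5:
        return "none"
    if fragmentation_percent < 30:
        return "reorganize"
    return "rebuild"

print([index_action(p) for p in (2, 12, 45)])  # → ['none', 'reorganize', 'rebuild']
```

A management tool applies this kind of rule automatically across every index it monitors, which is exactly the routine tuning a busy DBA would otherwise have to script by hand.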

Without regular maintenance, you’re likely to see a rise in system slowdown, user complaints, corrupted data and more, and, in that case, it can be hard to determine a cause. A DBA equipped with the best monitoring and management tools can look out for issues and create maintenance plans to suit – scheduling automated maintenance, or putting processes in place for regular resource reviews.

With the help of server monitoring and database management, it is possible to keep your organisation’s IT infrastructure running smoothly, regardless of the amount of data or users it handles. Use your tools to keep an eye on upcoming issues, to monitor resources, and to develop fixes before users are affected.

Monitoring database servers is an important topic for DBAs and is covered in depth in SQL Monitoring – 5 steps to full control.

Get started with dbWatch - 30 day free trial

Optimizing Oracle and SQL Server licensing cost with dbWatch

Want to lower the cost of your database operations? You might start by consolidating database servers and loads, reducing the number of servers or standardising software. Even if you’ve settled on a single database engine like Oracle, it’s common to have multiple different versions in use; out of all the organizations using our dbWatch tools, only one managed to converge on a single version of Oracle. But just as you need to see if you’re making the most of your hardware and software environment, you also need to check if you’re using your budget efficiently on what can be complex licensing decisions.

You don’t want to waste money by over-licensing, or risk an audit that discovers you’re not paying for the database versions and features you’re using. Efficient licensing means having the correct licenses for the features and the number of databases you run, so you need to know whether the licenses you have match the licenses you really need. The license reports in dbWatch are an excellent way to do that.

Inadvertently scaling up Oracle

With Oracle licenses, becoming an Enterprise customer inadvertently is as simple as running a command that requires a single Enterprise Edition feature (and the database tracks which features are in use). If you restore a database and turn on compression, you’ve immediately become an Enterprise customer. Partitioning a database is an Enterprise feature; are you turning that on for databases so small you don’t get any performance benefits?

Even performance monitoring and database statistics change the licenses you need (something to beware of if you need to supply performance information as part of a support call). If you look at historical performance, you trigger both the Tuning and Diagnostics Packs, over and above the Enterprise Edition license, even if you’re not using Enterprise Manager, and their per-processor cost is more than a quarter that of an Oracle processor licence. Ironically, the best way to discover your license liability for your Oracle environment is to run monitoring scripts, which themselves trigger features that need extra licenses, and the tables of information they produce can require significant expertise to analyse.

dbWatch can report on what Oracle features you’re using and what license you have so that if there’s a difference between the two, you can take action before an audit happens – whether that’s turning off those Enterprise features or allocating budget to cover the licenses. The reports also include a handy list of which Oracle features trigger extra licensing requirements. You can see a sample report of license issues here, section 1.3.
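
The reconciliation such a report performs boils down to a set difference between the features in use and the features you hold licenses for. A minimal sketch with illustrative feature names (Oracle itself records usage in the DBA_FEATURE_USAGE_STATISTICS view):

```python
# Hypothetical feature-usage data of the kind a licensing report draws
# on; the feature names and entitlements below are illustrative, not a
# real Oracle license position.
used_features = {"Partitioning", "Advanced Compression", "Basic Table Compression"}
licensed_features = {"Basic Table Compression"}

def license_gaps(used, licensed):
    """Features in use but not covered by a license -- candidates to
    switch off or budget for before an audit finds them."""
    return sorted(used - licensed)

print(license_gaps(used_features, licensed_features))
# → ['Advanced Compression', 'Partitioning']
```

Running this comparison on every instance, on a schedule, is what turns a one-off audit scramble into routine housekeeping.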

That tracking can also be useful for auditing your audit. Because the dbWatch reports show how many times a feature has been used, and the first and last date it was in use, one customer facing a very large bill was able to prove that the feature that made them liable for an enterprise license was only used by an Oracle consultant on one single day rather than as part of their normal production service.


You also need to consider what hardware you’re running your databases on, and even the virtual environment they’re in. Oracle uses different CPU licensing factors for different versions of VMware, and if you use an Enterprise feature in one database in a virtual environment, the licensing applies to all the physical CPUs in that environment. Another dbWatch customer who used a feature that made them liable for an Enterprise license on a single Oracle database running on a 50-core VMware cluster was presented with a seven-figure licensing bill – which they were able to reduce significantly by using the licensing report in their negotiations with Oracle.

For more insight into and tuning of your database server farm, start reading about SQL monitoring.

The dbWatch engine itself works with Oracle Standard Edition as well as Enterprise Edition, and includes many of the configuration, tuning and management tools you might otherwise need Enterprise Edition and Oracle Grid Control for.

Optimizing and troubleshooting a database server means looking at how well it’s been running, so you need to look back at performance and diagnostics information to get all the relevant information, and that’s even more important in a clustered environment, where the cost of an Enterprise license can be extremely high. If you’re dealing with a complex Oracle installation – and especially if you’re bringing in consultants to tune the system – you need to either budget for an Enterprise license with the extra Tuning and Diagnostics packs, or use a third-party tool like dbWatch (which has a much lower price).

Often, Enterprise features that you didn’t know you were using turn out to have been turned on by consultants – or by someone clicking through the Oracle Grid Control interface to see what features are available, so you’ll want to run these reports regularly. That could be every month or even more often, because you only have ten days to remediate your usage of Enterprise features to avoid being liable for the extra license. If you don’t want to schedule reports, dbWatch can even send you an alert when you make a change to a database that triggers extra cost features.

Scaling down SQL Server

With Microsoft SQL Server, you have the opposite problem. Even though the SQL Server APIs are now unified across the Express, Standard, Enterprise and Developer editions – so the programming interface always looks the same to the database developer – you can’t use any features you haven’t already licensed. That makes it tempting to install Enterprise edition to make sure all the features are available.

Enterprises frequently have large numbers of instances with Microsoft SQL Server Enterprise edition installed by default, even if they’re small and relatively simple departmental databases that don’t need the partitioning, compression, clustering and other high-end features a data warehouse requires, and could run happily on Standard edition. Sometimes that’s because more features have migrated into Standard edition over the years and a database uses a feature that used to require Enterprise edition but no longer does. And database admins who are used to older, less capable server hardware may install Enterprise edition to get performance that new server hardware can deliver with Standard edition.

You can use dbWatch to see if you need to scale up your Microsoft SQL Server environment and take an Enterprise license to utilise more CPUs and cores to improve performance, or if you don’t really need the editions you’re paying for and can scale down. The reports also include resource information like the on-disk size of each database so you can see whether you need Enterprise edition features like compression and partitioning or not, or whether you can even consolidate onto fewer servers, giving you less to monitor and upgrade.
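The scale-down decision boils down to comparing the features a database actually uses against the set that still requires Enterprise edition. In a real environment that feature list would come from SQL Server’s `sys.dm_db_persisted_sku_features` view; the feature names below are a simplified, invented set for illustration, and which features are Enterprise-only varies by SQL Server version.

```python
# Illustrative, partial set - the real Enterprise-only list depends on
# your SQL Server version; build it from the documentation for that version.
ENTERPRISE_ONLY = {"PartitionedTables", "OnlineIndexRebuild"}

def can_run_on_standard(features_in_use: set[str]) -> bool:
    """True if none of the features in use still require Enterprise edition."""
    return not (features_in_use & ENTERPRISE_ONLY)

print(can_run_on_standard({"Compression"}))                        # True
print(can_run_on_standard({"Compression", "PartitionedTables"}))   # False
```

Run per database, this is the core of a scale-down report: every instance where the answer is True is a candidate for a cheaper Standard license.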

With the sheer number of databases businesses use these days, you need good tools to make sense of the cost of licenses and the level of performance your database environment is delivering. The information you can see through dbWatch lets you consider performance management, consolidation and license management together, analysing the database environment and optimising licensing costs and hardware provisioning as part of optimising performance for your business needs. Instead of doing that manually (a lengthy process when you have a large number of instances), dbWatch lets you automate collecting and analysing the information about your databases, so you can set up a continuous cycle of monitoring, auditing and optimising your database environment.

Final note: clouds like Azure SQL change everything again – you no longer pay for a license, but for usage. How does that affect your business? What tools will you need to manage database costs in the cloud?

The database feedback loop: How visibility drives better design

You have a solid database architecture. You spent all the requisite time making sure your models are normalised to provide the cleanest structures. However, reality being what it is, there are occasional problems.
Rarely is any database complete and perfect the first time it is deployed.

Sure, you can handle any problems that arise at first; every now and then something doesn’t run the way it is supposed to. You can see most of what you need through a quick examination of the logs; there are a few slow-downs, but for the most part, thanks to your diligence, you can fix them with a few tweaks. But now the business’s needs have grown and you need to create more and more instances.

This is where the trouble begins
You’ve reached a point where you now have hundreds of instances, on as many virtual servers. Maybe everything is functioning whenever you look at one instance at one point in time, but you’re seeing signs of slowdowns in individual locations and you are unsure of what is causing it.

You have this sneaking suspicion that this is happening in more places than you can manage. Something may have crashed, but you have no idea that it even occurred. Indexes may have been dropped, but you get no indication. You might find it in the logs, but that is extremely time-consuming if you don’t know where to look. Not only that, poring over them becomes arduous, and by the time you’ve found a problem in one place, new ones have cropped up somewhere else.

Proactive vs. Reactive Database Management

The immediate advantages of a proactive approach are fairly self-evident. Too often, database administration is centred around critical event response: you respond to a problem when it occurs, and if nothing bad is happening, the assumption is that nothing should be changed (“If it isn’t broken, why fix it?”). However, when something does go wrong, the impact can be mission-critical.

If you are only working when something has gone wrong, you are likely missing a lot of invisible problems. It’s extremely difficult to handle these issues if you don’t even know when they are occurring. A view of a single instance at one time may not show any problems; however, you could be suffering slow-downs (or worse, dropped transactions) due to minor crashes and dropped indexes at times when you are not looking at the specific location where the problem occurred.

If you are running at a high volume, a few seconds per transaction may not seem like a lot. However, small delays build up quickly, and even a few milliseconds per transaction can have ripple effects across an entire organisation.
Consider sales volume: if you are running an online system where your users are customers, human behaviour being what it is, your business could be losing potential customers. For instance, in the US, on a recent tax deadline, many users were unable to file their returns because the IRS’s databases could not handle the volume of transactions. While the federal government will always get its “business”, that is clearly not the case for many private enterprises. For each second lost, you can lose users and business. Evidence shows that user patience has decreased as transaction speeds have increased; at this point, most people will give up on an action within ten seconds.
It is worse still if the transaction never goes through.
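A back-of-the-envelope sketch of how small per-transaction delays add up. All figures here (volume, delay, abandonment rate) are invented for illustration; plug in your own numbers.

```python
def lost_customers(daily_transactions: int, extra_seconds: float,
                   abandon_rate_per_second: float) -> float:
    """Estimated daily abandonments from added latency, assuming a
    simple linear relationship between delay and abandonment."""
    return daily_transactions * extra_seconds * abandon_rate_per_second

# 100,000 transactions/day, 2 extra seconds each, 1% abandonment per second:
print(lost_customers(100_000, 2.0, 0.01))  # 2000.0 potential lost actions per day
```

Even under these modest assumptions, a two-second slowdown translates into thousands of lost actions a day, which is the kind of invisible cost a single-instance snapshot never shows.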

If you can have a higher-level view of how your system is running, you can be much more proactive in your maintenance and stop problems before they occur.

Database Design Decisions

All of those great ERDs you built may not have accounted for usage. Some portions of your system may be getting considerably more traffic than others, and your designs may not account for the true business needs. Worse still, certain areas of the business may suddenly be making decisions and creating their own uses for the database without notifying you.

With a good top-level view, you can get an idea of where certain areas may need some re-indexing, or perhaps some different workflows in your scripts. You may realise a need to re-design at least some segments of your structures or applications.

Resource Deployment

It could be that some instances are getting hit more frequently, depending on the time of day, the physical location of servers, and traffic. With a higher-level view, you may realise that certain parts of your database need more attention at specific times. You can run scripts to handle some of these load problems, but it is a lot easier when you know where and when to deploy them.

Platform Selection

As is sometimes the case, you may be (for whatever reason) running instances on different platforms, or at the very least you may wish to test functionality on different platforms, be it Oracle, SQL Server, PostgreSQL, or even MySQL/MariaDB. A more visible reporting system can help you identify and test which platforms work best; it could be that some parts of your database need to be segregated into a separate system.

Human Resource Needs

Of course, you can’t automate everything. No matter how well you designed your system, you are likely going to need DBAs to manage it. A good high-level view of your database will give you a better idea of which pieces need attention. You may have too many cooks, which may mean some are duplicating their tasks. If you can see existing patterns, it may be easier to create scripts to handle some of those functions and redirect your personnel to areas where they can provide the greatest benefit.

If you consider all of these factors, it’s not a stretch to recognise that the larger your environment grows, the more you need a method for gaining greater visibility into your databases. Ignoring this can have seriously negative repercussions for your organisation. However, if you plan based on actual real-time data, you can head off many of these problems before they occur.


To discover how dbWatch can give you the freedom to monitor all of your database instances, in real-time, across multiple platforms in an all-in-one solution, contact the team today.

dbWatch: Database Operations Redefined

About the Author: 

Rey Lawrence Torrecampo is a Pre-sales Engineer for dbWatch and a full-time Database Administrator. He has extensive knowledge of Postgres and MSSQL database management systems, with SQL as his most proficient language.

Today, managing databases is no longer driven by a concentrated focus on single-instance optimization. Gone are the days when squeezing out every last ounce of improvement was the norm. At the same time, Database Administrators are caught in a whirlwind of modern problems:

1. The looming threat of database security breaches
2. Databases continually being distributed over the cloud or outsourced to hosting platforms
3. The complexity of handling hundreds or thousands of databases and instances

Slowly, DBAs are losing control over the instances they manage. It is not that DBAs lack the knowledge or capabilities to address these problems – far from it. They lack the necessary tools and time to provide a better solution. Modern problems require modern solutions, and what better solution than a monitoring and management tool like dbWatch Control Center?

What is dbWatch Control Center?

dbWatch Control Center is a database monitoring and management solution launched by dbWatch in 2021. It has an intuitive user interface and is easy to pick up with minimal training.


But dbWatch Control Center is not just for aesthetics. Above all else, it champions functionality.

dbWatch Control Center installs jobs on your database instances. These are agentless jobs triggered by the dbWatch Server. Data is kept on the database server and only pulled when needed. This method conserves resources for both the dbWatch Server and the database server, so either machine will feel little or no additional resource consumption.
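The pull model described above can be sketched in a few lines: results stay where they are produced and are fetched only on demand, instead of being streamed continuously. The class and method names here are hypothetical illustrations, not dbWatch’s actual API.

```python
# Hypothetical sketch of a pull-based monitoring model (not dbWatch's API).
class MonitoredInstance:
    def __init__(self, name: str):
        self.name = name
        self._results: dict[str, str] = {}   # job results stay on the instance

    def run_job(self, job: str, result: str) -> None:
        self._results[job] = result          # store locally, send nothing

    def pull(self, job: str) -> str:
        return self._results[job]            # fetched only when the server asks

inst = MonitoredInstance("prod-sql-01")
inst.run_job("backup_check", "ok")
print(inst.pull("backup_check"))  # "ok" - no data moved until this call
```

The design choice is the point: because nothing is pushed until the server asks, idle instances generate no monitoring traffic at all.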

Once the dbWatch Server pulls data, it displays it in graphs and tables in dashboards and views. Non-DBAs reading them will have an easier time understanding the presented database information. In essence, dbWatch is also accessible to non-technically inclined staff members who want to see accurate, contextual information about their database systems.

But that’s not all. Control Center can monitor hundreds of databases simultaneously. Whether you are database instances are MySQL, MS SQL, Postgres, or Oracle, connecting to them is possible. You can read our blogs on database farm management to learn more about it. 

On top of that, Control Center is reliable even in unstable environments or across multiple domains. You can connect your existing dbWatch servers or centralize them under one dbWatch Server. These servers will run smoothly even over low-bandwidth connections.

dbWatch Control Center gives you the tools to improve efficiency and productivity in every role, whether you are an IT manager, an operations manager, or a DBA.

Making best practices better

Manual execution is the common industry practice for index maintenance, disaster recovery, and version, log, and license management. Although this is a widespread practice that most DBAs observe, it is inefficient and time-consuming. Even big IT departments fall prey to it.

dbWatch Control Center presents a solution to the problems of manual execution. Through automation, suitable policies can be applied throughout your databases. No longer does a DBA have to micromanage an entire database farm. For instance, they can easily track the compatibility level of their monitored SQL instances, assured that versions are aligned and compatible across the farm.

Another application is maintenance. Using Control Center, you can get maintenance information across your SQL environments.


With the Farm Module, you can check which instances need indexes reorganized or rebuilt. Using the severity indicator, you can see whether an instance deserves attention. The tabs presented give DBAs options to rebuild or reorganize indexes, clean up data fragmentation, and check their databases for corruption. Coupled with compatibility-level tracking, you can standardize your entire SQL farm. With compatible SQL jobs installed on every monitored instance, it’s easier to monitor them with the correct jobs and deploy necessary maintenance with just a click of a button.
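The reorganize-vs-rebuild decision itself can be sketched using the commonly cited fragmentation thresholds from Microsoft’s index maintenance guidance (roughly 5–30% reorganize, over 30% rebuild). Treat the cutoffs as tunable defaults, not hard rules.

```python
# Thresholds follow Microsoft's commonly cited index maintenance guidance;
# tune them for your own workload.
def index_action(fragmentation_pct: float) -> str:
    """Suggest a maintenance action for a given fragmentation percentage."""
    if fragmentation_pct > 30:
        return "rebuild"
    if fragmentation_pct >= 5:
        return "reorganize"
    return "none"

for pct in (2.0, 12.5, 45.0):
    print(pct, index_action(pct))  # none, reorganize, rebuild respectively
```

A farm-level tool applies exactly this kind of rule across every monitored instance, which is what turns a per-database chore into a one-click operation.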

With Control Center’s monitoring, you can detect potential problems before they occur. dbWatch scans database instances for backups and memory management. It provides timely alerts color-coded in red or yellow. Users can act on potential problems before they become critical. 

In addition, it can scan for new database servers over a range of IP addresses, so you no longer need to add database instances manually.

dbWatch is not just for automating operations; it also offers customizable, reliable reports. You can schedule a report that is accessible to your colleagues and managers, and personalize it with your organization’s logo and desired file format, such as HTML or PDF.

In Control Center, you can also customize your dashboard. Using the Farm Data Language, you can manipulate fields and display the output data to match your organization’s needs. Like the image above, you can create your own dashboard and personalize it the way you want.

In an industry where best practice is at the forefront of an organization’s operations, automated monitoring, auto-generated reports, and customizable dashboards are vital differentiators. dbWatch Control Center delivers these services. Consequently, it redefines your approach to database operations.

Counting the cost 

Databases are among your most valuable assets – the bread and butter of your organization – and maintaining them is a priority. But fixating on maintenance costs should not compromise your organization’s goal of delivering excellent services to your customers. Control Center offers a solution that adds value to your organization and, at the same time, minimizes costs in the long run.

License management is one of dbWatch Control Center’s features. By trimming down underutilized databases, you save money in the process. This way, you take control of your database license portfolio and can decommission unwanted database instances.

The same idea applies to cloud services or any hosting platform. Cloud services follow a pay-per-use model: you can add resources to your machine, or pool from existing resources, on the fly, but this can drive up costs rapidly.

Exhibiting control over your cloud database instances is of utmost importance to most businesses, which need a way to project costs while maintaining a level of service. dbWatch delivers in this aspect. As it monitors all your registered cloud instances, you get accurate database utilization figures and estimated cost projections. Your databases’ monthly utilization metrics are vital information: with them, you can decommission underutilized cloud databases and prevent overutilization. Not only that, but you can also monitor and manage them using the same tool, without needing to open another one.

You can see this best exemplified when migrating to Azure cloud services. Accurate information on your local databases’ utilization gives you a close estimate for setting up your cloud services, so you can select the most appropriate service tier and instance configuration. With this information, you no longer need to mix and match service tiers and configurations blindly. Upon deployment, you are assured of a seamless transition from your on-premise database server to the cloud, saving you time and money.
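Tier selection from utilization data can be sketched as picking the smallest tier whose capacity covers your observed peak. The tier names and capacities below are invented for illustration; map them to the real Azure SQL tiers you are considering.

```python
# Hypothetical tiers for illustration - substitute real Azure SQL tiers
# and capacities (DTUs or vCores) from your provider's documentation.
TIERS = [
    ("Basic", 5),
    ("Standard S2", 50),
    ("Premium P1", 125),
]

def pick_tier(peak_utilization: int) -> str:
    """Smallest listed tier whose capacity covers the observed peak."""
    for name, capacity in TIERS:
        if peak_utilization <= capacity:
            return name
    raise ValueError("peak exceeds all listed tiers; consider scaling out")

print(pick_tier(40))   # Standard S2
```

The point of measuring on-premise utilization first is exactly this: the peak figure, not guesswork, drives the tier choice.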

Any database. Anywhere

dbWatch Control Center can operate with any cloud, virtual, or physical server. Whether your company adopts an exclusively cloud or on-premise setup, it works well with most database engines. As long as there is an IP connection – even across subnets with differing firewalls and policies, and whatever the bandwidth and latency – dbWatch Control Center can connect to it.

Control Center employs each engine’s native SQL dialect, calling procedures and functions and running scripts through a native interface. ODBC connectors and Java drivers support different versions of Oracle, MS SQL, Postgres, and MySQL. Whether your database runs on Linux or Windows, Control Center offers flexibility in any environment.

That sort of flexibility creates a seamless experience when working across different kinds of database engines. Using dbWatch Control Center, you can check the status of one of your MSSQL Servers, then check the status of an Oracle Server, from the comfort of one screen.

Generating a report also encompasses all registered database instances. You do not have to create a report for each instance individually; with just a click of a button, it accounts for every instance registered on your local machine. That is more efficient than using multiple tools to do a single task and get an overview of your entire database environment.

dbWatch also restricts the privileges and access of other users of the application. Fine-grained access control provides the needed security and control: you can modify individual users’ access to specific modules, database connections, and authorizations.


Putting it all together, dbWatch gives you the tools to go beyond simple performance tuning and manage your entire database environment, providing efficiency and cost savings while still delivering the performance the business needs. In the modern age of bigger data, a tool as trusty as dbWatch will redefine how you look at database operations.

Get your free license for five instances, valid for six months, now! dbWatch Control Center free license

If you have any questions about dbWatch Control Center, feel free to contact me: rey@dbwatch.com 

For more information, visit www.dbWatch.com or the dbWatch wiki pages  

#DevOps #DBA #DatabaseManagement #databasemonitoring #sqlserver #oracle #mysql #mariadb #postgresql #sqlmonitor #sqlmanagement #dbmonitor #databasemonitor #databasefarms #clouddb #Azure