Database Locks – How to Monitor and Manage

An open lock symbolizes how this key unlocks your database problems

Database locks have long been a bane for most DBAs.


They cause unnecessary problems for databases and business processes. In this blog, we will examine what database locks are, the issues that can arise from them, and ways to deal with them on SQL Server.


Not only that, but we will also walk through an example implementation in dbWatch Control Center. In this article:

  • What is a database lock?
  • The Most Common Table Locks in SQL

What is a database lock?

Database locks occur to secure shared resources during transactions. Locks safeguard databases as a means of:

  1. Enforcing an all-or-nothing outcome for multiple, separate transactions
  2. Preserving consistency in the database state
  3. Isolating transactions from one another until each transaction is complete
  4. Preserving committed transactions even in the event of abnormal termination


They adhere to the ACID properties of a transaction (atomicity, consistency, isolation, durability). To put this in perspective, imagine a piece of paper shared by three writers.


The graphic below gives an example of how locks work. Writer One finishes their message and gives the paper to Writer Two. Writer Two then begins to draft their message. Now all other writers must wait for Writer Two to finish writing their message before they can write anything on that piece of paper.

Visual representation of acid properties

Locks work in the same fashion as the example above. They prevent simultaneous writes and updates, thereby maintaining data integrity.

Locks within Database Access Levels

Locks come in a variety of forms, corresponding to different levels of access to a database. Below are the kinds of locks that can occur:

  • Row Level Locking
    Most common type of locking. It locks a row of a table.
  • Column Level Locking
    Simply put, it locks a column or columns within a table. Database vendors do not commonly offer this, since applying it requires a lot of resources.
  • Table Level Locking
    Locks an entire table. It prevents another user from modifying the table properties or updating any of its rows.
  • File Level Locking
    Locks a database file, which can affect an entire table, parts of that table, or even other tables stored in the same file. This is the least preferred method of locking since it can cause unintended locking.
  • Database Level Locking
    Restricts access so that only one database session can apply changes within the database. This type of locking is rarely seen, but it is widely used during software updates.


Database locks most commonly appear during record updates or uncommitted SQL statements. They isolate a database component, such as a table, so that session updates and alterations take effect successfully and no data is lost.


Locks are not a problem when only a few active sessions transact in your database. However, with more database users accessing and utilizing your databases, locks will impact your database resources and potentially your business. There are three kinds of locking problems I will discuss: lock contention, long-term blocking, and deadlocks.

The Most Common Table Locks in SQL

First, it’s important to understand what the most common table locks are. After that is clear, we can start working on solving the problems.

Lock contention

The most common form of SQL table lock is lock contention. It happens when concurrent sessions compete to update the same data. As discussed earlier, locks are enforced when a user alters a table property or updates data. This, in turn, blocks other session users from accessing that same database component, i.e., table. SELECT and INSERT statements run slower when accessing that table as they wait for the lock to end. Worse, lock contention may lead to indefinitely blocked access to a particular table and high CPU usage on the database server.
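To see which locks sessions currently hold or are waiting on in SQL Server, you can query the lock DMV. The column selection here is illustrative:

```sql
-- Show current locks, the sessions holding or requesting them,
-- and their status. request_status = 'WAIT' indicates a session
-- stuck behind another session's lock.
SELECT request_session_id,
       resource_type,
       resource_database_id,
       request_mode,
       request_status
FROM sys.dm_tran_locks
ORDER BY request_session_id;
```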

Long Term Blocking

This leads us to the second kind of table lock in SQL: long-term blocking. Like lock contention, long-term blocking happens when multiple users access the same database component. The notable difference is that the user holds on to that component for a long time, blocking dependent sessions and objects from reading and writing.


Database Deadlocks

Database deadlocks happen when two or more database sessions each hold a lock on an object that the other's transaction depends on. Neither session can continue unless the other releases its hold on that dependent object.


The flow chart below simplifies how deadlocks occur. Database management systems (DBMS) can detect deadlocks: the DBMS selects one session as the victim and rolls back that session's transaction. The caveat is that the DBMS usually chooses the transaction that is cheapest to roll back.

Simplified visual of two session Deadlocks

Sure, deadlocks are manageable when they concern user-driven ad-hoc queries. But when applications and SQL jobs are involved, they can lead to data loss and logic problems. Transactions that are highly dependent on timing and data can end up writing or outputting incorrect information.
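A classic two-session deadlock can be sketched like this (hypothetical tables table_a and table_b; each half runs in a separate session):

```sql
-- Session 1
BEGIN TRANSACTION;
UPDATE dbo.table_a SET val = 1 WHERE id = 1;  -- locks a row in table_a

-- ... meanwhile Session 2 runs:
--   BEGIN TRANSACTION;
--   UPDATE dbo.table_b SET val = 1 WHERE id = 1;  -- locks a row in table_b

UPDATE dbo.table_b SET val = 2 WHERE id = 1;  -- now waits on Session 2

-- Session 2 then runs:
--   UPDATE dbo.table_a SET val = 2 WHERE id = 1;  -- waits on Session 1: deadlock
-- SQL Server detects the cycle and rolls one session back with error 1205.
```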

What can I do when database locks occur?

Now that we better understand the kinds of locks and how they occur, we can discuss how to deal with and prevent them in the future. First, you need to understand how a lock can occur in a session.


I have prepared two separate sessions to act as our users in this example. Session 1 will be our first user, and Session 2 our second user.


Figure 1 below shows a simple query that locks the designated table, test_table. After that, we will execute another query in Session 2 that selects from the same table.


How to select the specified table when dealing with database locks.
Figure 1
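The queries behind Figure 1 might look something like this (a sketch; test_table and its columns are assumptions):

```sql
-- Session 1: open a transaction and leave it uncommitted,
-- holding an exclusive lock on the updated row(s).
BEGIN TRANSACTION;
UPDATE dbo.test_table SET name = 'locked' WHERE id = 1;
-- (no COMMIT yet)

-- Session 2: this SELECT blocks, waiting for Session 1's lock.
SELECT * FROM dbo.test_table;
```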

USE master;
EXEC sp_who2;

The stored procedure above displays, in tabular form, the information you need to see who is blocking whom. As seen below in Figure 2, session 54 (Session 2) is being blocked by session 76 (Session 1).


Showing how the blocked session is highlighted.
Figure 2

Similarly, you can use the following query to get the information you want:


USE master;
SELECT session_id, start_time, command, status,
       blocking_session_id, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;

You will still end up with the same information as before.


Identify the blocked session in database before killing the query.
Figure 3

After identifying the blocking session, you can kill it using the command below:


KILL <blocking_session_id>
-- in this example two queries were executed:
-- KILL 76
-- KILL 54
-- You can also use: KILL <session_id> WITH STATUSONLY

Killing a session terminates it, and the client must start a new session. In this example, session_id 54 (Session 2) was killed.


Showing the killed session message for a locked database fix.
Figure 4

That seems more straightforward than expected. But this is still a bare-bones example with negligible impact on our business processes and databases. In reality, killing a session rolls back its transaction, undoing its changes to the database state. Kill sessions with caution: recovering the rolled-back work may take a long time, and the kill may affect dependent objects during data updates and writes.


You need sufficient knowledge when dealing with table locks. Do not nonchalantly kill sessions as a quick fix. In queries using BEGIN TRANSACTION, set savepoints after every critical alteration to the database. Remember to end your transactions with ROLLBACK or COMMIT, since open transactions are what hold these table locks. Finally, only kill sessions when doing so will have minimal impact on your business operations and data dependencies; this is your last resort.
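The transaction hygiene described above might be sketched like this (dbo.accounts is a hypothetical table; SAVE TRANSACTION creates the savepoint):

```sql
BEGIN TRANSACTION;

UPDATE dbo.accounts SET balance = balance - 100 WHERE id = 1;
SAVE TRANSACTION after_debit;  -- savepoint after a critical change

UPDATE dbo.accounts SET balance = balance + 100 WHERE id = 2;

IF @@ERROR <> 0
    ROLLBACK TRANSACTION after_debit;  -- undo only work past the savepoint

COMMIT TRANSACTION;  -- always end the transaction to release its locks
```

Ending the transaction promptly, one way or the other, is what keeps these locks from turning into long-term blocking.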


Another option is to utilize database monitoring and management tools such as dbWatch Control Center. This not only saves you time with querying for information but also gives you an overall view of the state of your database.


How to Fix Table Locks in SQL with dbWatch Control Center

Since dbWatch Control Center is both a monitoring and management solution, we will do both in one active window. For reference, we will simulate the previous example step-by-step using Control Center.


How to simulate Database locks in dbwatch Control center
Screenshot 1

First of all, we will change both the warning threshold and the alarm threshold. The picture above shows the default values for the blocking statistics job: a warning status appears when blocking lasts more than 5 minutes, and an alarm status is raised when it exceeds 30 minutes.


How to configure blocking statistics when Simulating Database locks in dbwatch.
Screenshot 2

I will alter it to warn me when blocking reaches 4 minutes and raise an alarm when it exceeds 12 minutes.


How to know if blocking has occurred when Simulating Database locks in dbwatch.
Screenshot 3

The screenshot above confirms that the blocking has already taken effect.


Using a query to see which database locks are occurring.
Screenshot 4

Using the query from earlier, I checked whether table locks were occurring in the database. They were: session 75 is blocking session 58. For the following steps, we will not need Management Studio open; these two screenshots only confirm that table locks are occurring.


Screen shot of monitoring module for database locks.
Screenshot 5

In the monitoring module, the blocking statistics job checks whether table locks are occurring in the database. As we defined earlier, it warns us after 4 minutes of blocking. Clicking on the blocking statistics shows a complete overview of the following:


  • Who? – which sessions are blocking which
  • What? – what query is being executed
  • Where? – which management tool host the blocking is occurring on

The alarm status for a locked database.
Screenshot 6

After waiting a little while, the status changes to an alarm, indicating that the blocking has passed the 12-minute mark.


The tree in the monitoring module.
Screenshot 7

Now, we will go to the monitoring module. Click on the tree, find "Performance," and click the arrow adjacent to it. Underneath it, you will see "Blocking Sessions." Click the arrow beside it to display "Blocking Session History," which helps us determine whether logs are present for the lock. As we can see in the screenshot above, the lock was recorded, sorted starting from the latest blocking.


The blocking sessions module that manages databases in dbWatch.
Screenshot 8

After confirming the logs, we will return to “Blocking sessions” to manage our database. Two sections will appear on that view – “Blocking sessions” and “Blocked Sessions.” One record should appear in each section. By right-clicking on the record, you can “Kill the session” or “Show current SQL.” For now, we will not kill the session. We will investigate further.


Where to check both sessions to resolve a locked database.
Screenshot 9

Checking both sessions with "Show current SQL" shows that Session 2 (SPID 76) only runs a select statement, while Session 1 (SPID 58) is blank, indicating that a transaction is in progress. Knowing this, I can terminate Session 2, since it only runs a select statement, or terminate Session 1 to stop the transaction. In this example I killed Session 2, since it is only a select statement.


The locks sql server error message.
Screenshot 10

Upon checking, Session 2 has been terminated, releasing the lock in the process. Session 1 continues to execute its transaction.


In summary, tools like dbWatch make it much easier to deal with locks. You get alarm notifications, quick insight into the situation, and the ability to resolve the issue quickly.

Discover how to best manage your database locks – try dbWatch Control Center today


Multiplatform Database Farm Monitoring and Inventory Management

Database administrators (DBAs) require a dedicated solution to assist them in their daily tasks of monitoring and managing the entire database farm. The custom scripts they once used for a few servers might no longer be effective in overseeing the current, more extensive database farm.



A database farm comprises all the database servers, instances of various platforms, and versions used in an enterprise to support its day-to-day business operations or functions.      

As a DBA managing a large database farm, one of the most important things is to have a complete overview of your database farm.   

You should be able to answer questions such as:   

  • What is the health status of all databases/instances? Which databases need attention right now, and which should I prioritize?   
  • How many disk, memory, and CPU cores have been allocated to the database farm?   
  • What is the resource utilization for each instance?   
  • How many instances exist in my database farm?    
  • What are the database platforms and versions running in the database farm?   

In virtualized environments, resource allocation and utilization are the key issues. Carefully analyze all the resources allocated in your database farm: you should have a clear overview of how resources like disk, memory, and CPU are utilized across it. Determine whether your servers have been over- or under-provisioned, and strive to allocate resources across the farm and VMs to maximize utilization and performance.


Documenting a large database farm also becomes demanding as the farm expands. With proper documentation, you have up-to-date information on the databases in your farm. It makes maintenance easier, supports management and DBA decision-making, and reduces risk. It is far more straightforward to assess or plan ahead with your database documentation as a guide.


When you manage a database farm, you rarely have the time to create, let alone maintain, complete documentation of all resources in your farm. You need tools to automate the documentation process.


In your database farm, you need to ensure that all your backup jobs are running optimally and as scheduled. You need complete control over your databases' backup and recovery plan; it is essential when an unexpected situation occurs, such as corrupted databases, sudden server failure, or other forms of data loss. You will use existing backups to restore the lost information to the system.


Imagine having a database farm with hundreds or even thousands of instances. Would you still have enough time to remote-connect to each database server to check its health status?





dbWatch Control Center helps users with their capacity and resource planning projects.  It features built-in views that display the capacity overview like your database resource consumption in the database farm.     


This overview provides DBAs and managers with the information to determine which instances consume the most or least memory in the database farm. It clearly shows which instances have excess resources allocated to them, which you can reallocate to instances with greater need.


For example, this view displays the total disk usage in the database farm so you can determine which instances have the most disk or volume space allocated. It also shows the usage rate per volume.



It’s a complete database farm monitoring and management solution, which offers automation, consolidated views, and performance reports giving you the key information you need at your fingertips. Automating your routine tasks helps you save time and effort on manually managing databases on your farm by focusing on more critical tasks at hand. At the same time, dbWatch Control Center does all the proactive monitoring of your database farm, allowing you to have complete control over your database farm.   


dbWatch Control Center offers the following:   

  • Farm overview – consolidated views of performance, capacity, and maintenance for your entire database farm
  • Monitoring – in-depth monitoring of all your instances and platforms
  • Performance – identify performance bottlenecks within the farm
  • Scalability – the ability to adapt to your growing database farm
  • Alerting – alerting and third-party extensions tailored to your business requirements
  • Security – role-based access controls, Kerberos integration, encrypted connections, and support for security certificates


With dbWatch Control Center as your partner in monitoring and managing your database farm, you achieve the following:

  • Proactive monitoring displays a health overview of all databases in your farm and provides the information you need to act on possible issues.
  • Complete information on your database farm's total allocated hardware and resource utilization, such as CPU, memory, and disk usage.
  • Information on all instances in the database farm.
  • Auto-discovery of instances on specified network ranges.
  • Performance metrics, plus identification of the database platforms and versions running in your farm.

Discover Control Center today – download a trial version of dbWatch that monitors up to 5 instances for 6 months.

Monitor your SQL Server Indexes

Person monitors SQL server indexes


I want to view the performance of my SQL Server indexes across the database farm and determine the top instances by index disk usage. I also want to look for servers that encounter problems when running the update index statistics job.


dbWatch Control Center can monitor your Microsoft SQL Server’s index usage rate and perform database index maintenance to make sure that your servers are running smoothly. 


Microsoft SQL Server's guidance is to reorganize indexes with a fragmentation level between 5% and 30%; if fragmentation is greater than 30%, perform an index rebuild instead.
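You can check fragmentation yourself with SQL Server's physical-stats DMV. This sketch applies the thresholds from the guidance above:

```sql
-- Report fragmentation per index in the current database
-- and suggest an action based on the 5%/30% thresholds.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE
           WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
           WHEN ips.avg_fragmentation_in_percent >= 5 THEN 'REORGANIZE'
           ELSE 'OK'
       END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE i.name IS NOT NULL;
```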

In this blog, you will learn how to analyze the database indexes within your database farm and determine which instance has SQL Server maintenance plans or index maintenance jobs running longer than usual, using dbWatch Control Center features.

Example 1: Index Analyze Statistics Job 

On the monitoring view, right-click your SQL Server instance and click Configure Jobs.


A new window will appear; tick the job Index usage statistics (all databases) and click Install.


As seen in the screenshot below, this job collects database index and table usage, showing the object/table name, the type of index, and total seeks and scans. It tells you how big your tables are and whether queries use an index scan or an index seek; ideally, you want your indexes served by index seeks.
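Outside of dbWatch, similar seek/scan information is available directly from SQL Server's index usage DMV (a sketch):

```sql
-- Seeks vs. scans per index for the current database.
SELECT OBJECT_NAME(us.object_id) AS table_name,
       i.name AS index_name,
       us.user_seeks,
       us.user_scans,
       us.user_lookups
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
  ON i.object_id = us.object_id AND i.index_id = us.index_id
WHERE us.database_id = DB_ID()
ORDER BY us.user_scans DESC;  -- scan-heavy indexes first
```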


In the Update index statistics job view, you can see useful information such as the database name, the index count per database, and the total indexes (clustered/non-clustered) analyzed per database. It also shows the number of errors, if any, during the execution of the update statistics task.


In this view you can easily sort the columns in ascending or descending order. This helps determine which instance within the database farm has the longest or shortest elapsed time when analyzing indexes; that is useful if some maintenance jobs are taking longer than usual to finish.


An example: you have a weekly scheduled index rebuild job, and from week 1 to week 7 its average elapsed time was around 15-20 minutes. Suddenly, in week 8, the dbWatch Control Center view shows it took 80-90 minutes. The average time increased to four times its usual value, which tells you something is going on with your SQL Server indexes; from there, you can start investigating the cause of the problem.

Example 2: Index Reorganize and Rebuild View 

dbWatch Control Center also provides you with index maintenance jobs such as Index rebuild and reorganize.


This view displays the index rebuild/reorganize statistics. Here you will see the list of instances, the number of databases checked within each instance, and which databases were rebuilt and which were ignored.


In this view you will be able to sort the column to determine which instance consumes the most disk space.


The important part of this view is the size before and after the index rebuild took place and the disk usage reduced by the operation.  

If you right-click a selected instance, you have the option to execute the task, starting an index reorganize/rebuild on the instance's databases, or to configure the task and its schedule.

If you choose Details, you will see a report on the history of reorganize and rebuild executions within that instance. It also shows the average table and index fragmentation and the disk usage before and after the reorganize or rebuild operation.


With the examples provided, you will be able to determine which instances have the biggest database index disk usage within your database farm. You will also be able to assess whether a certain index operation's schedule should be adjusted, based on the performance and other details shown in the reporting view.

With dbWatch Control Center, database admins can keep track of their SQL Server indexes and their performance. As a DBA, you can analyze how your database indexes are being used in your environment and plan how to maintain them efficiently.

dbWatch Control Center assists database administrators to efficiently run their databases, from proactive monitoring of database performance to the management of tasks.   

Discover how dbWatch Control Center can monitor your SQL Server Indexes, try dbWatch Control Center today

Running SQL Queries with Tools in dbWatch Control Center

If you develop any kind of application, whether for the web or for mobile, it is inevitable that you will sometimes run a couple of SQL queries – you need to interact with your database systems anyway, don't you?


Don't fret – all database users run SQL queries. It's important to run them in a proper medium, though. While most MySQL database administrators, say, run their queries inside phpMyAdmin, some look for different alternatives.

Indeed, there are quite a few. If you are deeply immersed in the database management world, you may have noticed that you can run SQL queries from within database management tools. Most tools that let you run SQL queries against your database instances also let you do other things – for example, run database jobs that show the full power of your instances, or display the number of logical reads (and writes) going on in an instance:

In this blog we focus on another angle: why it's useful to get assistance while running SQL queries inside your database instance, with tools like the one provided by dbWatch.

Running SQL Queries with dbWatch Control Center 

You see, while dbWatch isn’t a SQL client, it does have an ability to help you craft your SQL queries – simply observe the left-hand side of dbWatch and choose the fourth option from the top: 

Once you’re done, make sure dbWatch is actually connected to the database instance you want to run queries on – click on the Reload Auths button: 

Once the authentication is reloaded, you should be able to see the username next to your database instance: 

Tick the checkbox next to your database instance and you should be ready to run queries against it (note that if you do not, dbWatch will show an error):

Now it's time to write the queries that interact with your database instance. dbWatch provides a clean user interface with the results of a given query shown underneath. Input your query in the SQL field and click "Execute" to run it; you can also clear or save the input and output.

While your query is running, you will see how long it has been executing. If you see no results but the query keeps executing (see the example below), adding partitions or indexes to your table might fix the issue:

If your query ran successfully, you will be able to observe the output. In this case, our column is called "column", a reserved word in MySQL, so we needed to escape it with backticks (note the `column` inside the query):
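The escaped query might look something like this (a sketch; the table name demo_table is an assumption):

```sql
-- `column` is a reserved word in MySQL, so the identifier
-- must be quoted with backticks to use it as a column name.
SELECT `column`
FROM demo_table
WHERE `column` IS NOT NULL;
```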

Underneath the SQL window displayed by dbWatch, you can see which queries failed to run and which ran successfully:

See the “!” in the triangle-shaped box? Those queries failed. See the tick next to the last query in the list? That query ran successfully! 

The ability to run queries is not everything dbWatch comes with, though. As noted, you can also "load input" (e.g., import SQL files containing queries for dbWatch to run), clear the input (the SQL queries you just ran) or the output (their results), and save either of them. For example, click "Clear Input" and your SQL queries disappear:

See? Easy as that – no manual deletion needed; dbWatch clears everything for you.

Output Panels 

In case you didn't know, dbWatch also provides one output panel per result set returned. Here's how that looks:


It doesn't matter whether you are a developer who deals with databases on and off or a full-fledged DBA of Oracle, MySQL, MariaDB, or even Sybase – sooner or later you will need to run queries against a specific database instance. dbWatch comes with a SQL query feature that shows the results underneath the query, along with how many of your queries succeeded and how many failed. Use it, and your database instances should be headed toward a more performant future!

If you run into any issues when using the tool, keep in mind that dbWatch has a support team of top-notch database administrators; don't be afraid to reach out, and you will receive the help you need quickly. We hope you enjoyed this blog and will stick around – we have a lot more content coming up.

Job Statuses in dbWatch – the How, Why and When

Monitoring the status of database instances is a daily task for every database administrator and even developers. Database monitoring can be incredibly complex and time-consuming when performed manually; thankfully, tools like dbWatch can help in times of need. If you are a dbWatch user, you may have noticed something called "job statuses": this blog post provides some insight into them.



What are Job Statuses? 


Job statuses in dbWatch, simply put, describe how a certain job executed: was it successful? Did it produce a warning? An alarm? There are three categories of job statuses in dbWatch, and here is how everything works:


  • A job status of "success" means the job ran successfully and can provide interesting details on your database availability, capacity, or performance if instructed to do so (simply right-click the job and click Details to observe them in all their glory);
  • A job status of "warning" means the job encountered some errors along the way (it might also mean that your database farms or instances are not up to par and could perform better – check the Details section as instructed above to learn more);
  • A job status of "alarm" means that something is (probably) terribly wrong – such a status means something very bad is happening in your database farms and/or instances and needs immediate attention. If left unresolved, it can negatively impact your database availability, capacity, or performance.


Statuses and Your Database Instances

To check the status of your database jobs with dbWatch, you might first want to check which database instances you are running. To do that, click the Database Farm icon (the third icon from the top on the left-hand side), then expand each platform in the inventory overview section:


That way you will get a nice animated view of what database instances your servers are running! 

With dbWatch, you can also go back to the main page and observe the statuses of your database instances themselves (statuses are not specific to jobs; they also apply to database instances – keep that in mind!).




Once you have ensured that your database instances are running smoothly, it's time to keep an eye on your database jobs. Right-click your database instance, click Configure Jobs, and you should see output similar to this (black checkmarks mean the job is installed, green checkmarks mean it's enabled):



Once you have installed the jobs you wish to use on your database instance, it’s time to put them to the test! Choose a job you like, afterwards you can configure it if you wish: 


Finally, run it by right-clicking it and clicking Run now:


Once your job has run, you will see its status along with a comment on how it executed: a green settings icon with a tick means the job executed well, an orange settings icon means the job produced some errors (check the Details section to see what they are), and a red settings icon means the job found something wrong with your instances:




In general, job statuses in dbWatch are like small advisors for your database instance, always keeping an eye on its availability, capacity, and performance. When any of those falls short, check the Details section of the jobs to see what can be improved. It's worth keeping an eye on job statuses in dbWatch: they are easy to deploy and help ensure your database instances always perform in tip-top shape.

The Challenges of Scaling MySQL

Man works on scaling MySQL Server.

If you have ever worked with scaling MySQL, or if you are a MySQL DBA, you have probably faced the challenge of scaling your MySQL instances. This blog post will outline those challenges and show how dbWatch can help you scale, monitor and manage your MySQL (and other) database servers efficiently, while also providing you with total control over all aspects of operation, performance and resource usage.


When Would You Need to Scale MySQL?

In general, the challenge of scaling MySQL frequently appears when your MySQL-backed application’s traffic increases by tens, hundreds or even thousands of times and you start seeing query timeouts more and more often.

How to Scale MySQL

As far as the actual scaling of MySQL is concerned, you have a few options to choose from: you can apply application-level optimizations, you can apply optimizations at the MySQL level, and you can scale through infrastructure. Since this post is about the challenges of scaling MySQL, we will go through some of the things you can do yourself to optimize performance at the MySQL level, and then show how dbWatch can help you scale your database instances too.

4 Things to Keep in Mind When Optimizing Your Database Instances

If you want to optimize your database instances at the MySQL level, here are some basic things to keep in mind:

  1. Only Ask For The Data You Need.

    Your queries should only fetch the data that is absolutely needed. Avoid queries like SELECT * unless absolutely necessary, because working with too much data is one of the most basic reasons a query performs poorly. The less data your queries access, the faster they will be – do not ask the database for data you don’t need.

  2. Optimize Query Performance with EXPLAIN.

    The EXPLAIN clause can be very useful for optimizing query performance. EXPLAIN shows the access type MySQL uses to find results, ranging from a full table scan to index scans, range scans, unique index lookups and constants. Understanding the general concepts of scanning a table, scanning an index, range accesses and single-value accesses can prove very valuable when optimizing at the MySQL level. The type ALL means that MySQL has run a full table scan, while ref means that MySQL used the ref access type on some sort of index.

  3. Understand SQL Query Execution.

    Keep in mind that there are some basic things related to query execution. First, the client sends the SQL statement to the server, then the server checks the query cache. If there’s a hit in the query cache, it returns the stored result from the cache, otherwise the server parses, preprocesses and optimizes the SQL into a query execution plan. Afterwards, the query execution engine executes the plan by making calls to the storage engine API and the server sends the result to the client. Keeping these things in mind could put you on a good path when scaling your MySQL instances.

  4. Know when to Use a Wildcard Search.

    If you are using wildcard searches (for example, LIKE queries), do not start your search term with a wildcard – only use a wildcard at the end. With a leading wildcard, MySQL doesn’t know what the search term begins with, so an index might not be used even if one exists on the column you are querying.
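The tips above can be sketched in a few queries. The customers table, its columns, and the index name here are hypothetical, so treat this as an illustration rather than a recipe:

```sql
-- Tip 1: ask only for the data you need.
-- Wasteful: SELECT * drags every column over the wire.
SELECT * FROM customers WHERE country = 'NO';
-- Better: fetch only the columns the application uses.
SELECT id, email FROM customers WHERE country = 'NO';

-- Tip 2: check the access type with EXPLAIN.
-- Without an index on country this typically reports type = ALL
-- (a full table scan); with the index below it becomes type = ref.
CREATE INDEX idx_country ON customers (country);
EXPLAIN SELECT id, email FROM customers WHERE country = 'NO';

-- Tip 4: keep wildcards at the end of the search term.
-- Trailing wildcard: an index on last_name can be used.
SELECT id FROM customers WHERE last_name LIKE 'And%';
-- Leading wildcard: the prefix is unknown, so the index is
-- typically ignored and MySQL falls back to scanning.
SELECT id FROM customers WHERE last_name LIKE '%sen';
```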

The advice above should set you on a good path regarding fixing query timeouts, deadlocks and the like, but if you need to scale MySQL seriously, chances are that you are going to need certain tools suitable for the task.

Solving MySQL Scaling Challenges with dbWatch

If you want to solve your MySQL scaling challenges using tools, dbWatch can help. dbWatch is a highly scalable software solution that helps enterprises monitor and manage their database server instances efficiently, while providing total control over all aspects of operation, performance and resource usage. dbWatch can help you scale all kinds of database management systems – it doesn’t matter whether you are using MSSQL, Oracle, PostgreSQL, Sybase, or MySQL. Here’s how the user interface of dbWatch looks:

dbWatch offers a few options to scale your MySQL (or any kind of database management system) instances:

1. You can (and should) make use of the dbWatch jobs that are available – the dbWatch jobs are split across three categories (Availability, Capacity and Performance) and each of those categories contains certain jobs that perform specific tasks.

For example, the Availability category for MySQL consists of database monitoring jobs that check the uptime of your DBMS and give you alerts. The Capacity category provides you with aggregated and detailed database growth rates. The Performance category consists of database jobs that check the effectiveness of the InnoDB buffer pool and the MyISAM key buffer, provide database lock statistics, analyze the memory setup of your database server, and report on your session load, the query cache hit rate and so on. Keep in mind that jobs can be configured (or even disabled if you so desire), and they also have a details section with more information. Simply right click on a job and click on “Details”, “Configure” or “Set Schedule”:

For example, here’s the Details section on one of MySQL jobs in all of its beauty:

2. Make use of job scheduling – if you have a lot of database instances (and with dbWatch you can), job scheduling can be an invaluable resource. To schedule your jobs with dbWatch, simply right click a job and click Set Schedule. You will then be able to set a schedule for the job to run every minute, hour, day of the week or week of the year:

3. When you’re using dbWatch, you can also connect to or shut down all servers at once – that might help with scaling your database instances too.


4. You can also make use of FDL (Farm Data Language) – this one deserves an entire book of its own (take a look at the documentation), but in short, FDL can help you with a multitude of different things, ranging from filtering instances by database name to sorting the results of a column in ascending or descending order. For example, if you use #sort, your instances will be sorted in a certain order (use asc for ascending and desc for descending):

FDL can be really useful if you want to scale MySQL further and push your MySQL instances to their limits.


If you’re a MySQL DBA, chances are that sometime in the future you will face issues in relation to scaling MySQL. Your queries might become slower and slower, your traffic might increase more and more etc.

When using dbWatch you can make sure these kinds of problems will be solved quickly, securely, and effectively no matter what kind of database instance you use. 

Learn how to solve your challenges of scaling MySQL or other kinds of database instances, book a demo today.

5 Tips for Server Migration Planning

Server migration planning

As a DBA, infrastructure admin, or manager, you may have encountered a scenario where you are tasked to lead, plan and execute a server migration project for reasons such as improving application performance with new or upgraded hardware, or optimizing the enterprise architecture.


Planning for a server migration can be quite complex, especially in large enterprises with many different servers used for various web applications, internal applications, databases, and many more. Today I will discuss some of the issues you need to consider when planning a server migration.


The 5 R’s to consider when planning a Server Migration are the following:

1. Re-hosting by Moving to a New Host (also known as Lift and Shift)

This approach involves moving your current infrastructure, applications, or databases to another server, another data center, or moving it to the cloud as it is.

Advantages of Re-hosting:

  • Simple migration by re-hosting all applications and databases as they are
  • No optimizations, no architecture changes, no code changes – moving to other servers or to the cloud without additional changes

Disadvantages of Re-hosting:

  • Can cause security risks if users and roles are not applied effectively
  • Can cause application failure or service unavailability if processes, security roles, or jobs are not synchronized correctly

 2. Re-platform with New Database Platform or Infrastructure

Re-platform involves migrating to a new platform or infrastructure, for example, moving your on-premise databases to an Azure Managed instance in the cloud or moving your on-premise web application to AWS ElasticBeanstalk.

Advantages of Re-platforming

  • Not changing the core architecture but may require some code changes for optimization

Disadvantages of Re-platforming

  • It will most likely require extra time and effort to apply code changes
  • You may need to use different tool kits and packages only available on the new platform

3. Re-target to Another Platform

Re-targeting means moving to a different product or database platform, for example migrating your Oracle databases to MariaDB to save substantial licensing costs. Moving from one database platform to another, such as Oracle to MariaDB, requires changes to your stored procedures and packages.
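As a rough illustration of why stored code must be rewritten, here is the same trivial procedure (the table and names are hypothetical) in Oracle PL/SQL and in MariaDB – note the different parameter syntax, delimiters, and date functions:

```sql
-- Oracle PL/SQL version:
-- CREATE OR REPLACE PROCEDURE log_visit(p_user IN VARCHAR2) IS
-- BEGIN
--   INSERT INTO visits (user_name, visited_at) VALUES (p_user, SYSDATE);
-- END;

-- MariaDB equivalent:
DELIMITER //
CREATE PROCEDURE log_visit(IN p_user VARCHAR(100))
BEGIN
  INSERT INTO visits (user_name, visited_at) VALUES (p_user, NOW());
END //
DELIMITER ;
```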

Advantages of Re-targeting

  • Migrate the database to a better solution for reasons such as cost-benefit, feature/function availability.

Disadvantages of Re-targeting

  • It may consume time migrating to a new database platform as you need to map out every process happening within the database.
  • You will need to learn new tools and tech if you are not already familiar with a target platform

4. Refactor

Restructuring the enterprise architecture (databases and applications)

Advantages of Refactoring

  • Overhaul the application, driven by the need of business to add new features and functions
  • Optimizing the overall application and usage rate of resources by query optimization and rewriting queries
  • Long term cost savings as your applications/database are optimally running and properly provisioned.

Disadvantages of Refactoring

  • It may require a lot of time and effort on the part of DBAs/Developers to work on the project to refactor the whole application/database architecture
  • May introduce new bugs or issues
  • It will require extensive planning and testing before stable and ready

5. Retire

Turn off things that are no longer used or running, perhaps as a result of refactoring the whole environment. Consolidate infrequently used database instances onto other servers that have spare capacity and resources.

Advantages of Retiring

  • Save costs up to 10 – 30%
  • It helps reduce security vulnerabilities
  • Fewer servers to maintain, monitor, and manage
  • Simplify environment
  • Remove old legacy platforms and apps
  • Better overview
  • Cleanup

Disadvantages of Retiring

  • Hard to verify instances or databases no longer in use without extensive activity logs going back a long time or extensive manual checks
  • Moving databases may introduce unexpected side effects

Summary for Your Server Migration Planning

When planning your server migration, always remember the 5 R’s, which are:

  • Re-hosting
  • Re-platform
  • Re-target
  • Refactor
  • Retire

Before you migrate your servers, you should put a monitoring and management solution in place to keep track of your applications or databases’ health.


dbWatch Control Center allows you to track your database health and performance using different database monitoring jobs for every performance metric.


Control Center has built-in reporting capabilities for the management to have a quick overview of their database farms’ status, capacity, and resource usage.

Track your database health, try dbWatch Control Center today.

Monitoring the Performance of Database Indexes with dbWatch

monitor database performance

In general, most database administrators sometimes face a few common problems. One of those issues is optimizing the performance of queries – whenever query optimization is mentioned, the chances are that you will often see some advice regarding indexes. Today we will try to see why indexes are so essential and dive into how to monitor your database indexes’ performance with dbWatch.  


What are Database Indexes?

In the database world, indexes are data structures that are frequently used to improve the speed of data retrieval operations. Indexes make data retrieval faster because, when indexes are in use, databases can quickly locate data without having to scan through every row in a table every time it’s accessed. The usage of indexes, of course, has both its upsides and downsides – we will start with the good things, then go into the minuses, and finally we will tell you how to monitor the performance of your database indexes using dbWatch.

Advantages and Disadvantages of Using Database Indexes

There are a few main benefits of using database indexes as far as databases are concerned. We will use MySQL as an example. In this relational database management system, among other things, indexes can be used to:

  • Quickly and efficiently find rows matching a WHERE clause.
  • Retrieve rows from other tables in JOIN operations. 
  • Save disk I/O when values are retrieved straight from the index structure. 

However, we mustn’t forget that what has advantages probably has disadvantages too. Here are the disadvantages of using indexes in MySQL: 

  • One of the main drawbacks of using indexes in MySQL is that your data will consume more space than usual. 
  • Indexes degrade the performance of certain types of queries in MySQL – INSERT, UPDATE and DELETE queries can be significantly slower on indexed columns. When data is updated, the index needs to be updated together with it.
  • You may create redundant indexes in MySQL (e.g., you might index the same column two or three times by adding an ordinary INDEX, a FULLTEXT index, and a PRIMARY KEY or a UNIQUE INDEX). In this case, it’s helpful to remember that MySQL does not error out when you use multiple types of indexes on the same column, so it never hurts to be careful.
  • We also must not forget that there are multiple index types in MySQL. You can use a PRIMARY KEY (this type of index allows you to use automatically incrementing values); an ordinary INDEX accepts NULL values and is frequently used to speed up SELECT operations (while slowing down INSERT, UPDATE and DELETE queries); a UNIQUE INDEX can be used to reject duplicate rows; you can use full-text search capabilities with a FULLTEXT index; and if you want to store index entries in descending order, you can use a DESCENDING INDEX.
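To make the index types above concrete, here is a sketch of a hypothetical table declaring each of them (descending indexes are honored from MySQL 8.0 onward):

```sql
CREATE TABLE articles (
    id INT AUTO_INCREMENT PRIMARY KEY,        -- PRIMARY KEY index
    slug VARCHAR(100),
    body TEXT,
    published_at DATETIME,
    UNIQUE INDEX idx_slug (slug),             -- rejects duplicate slugs
    INDEX idx_published (published_at DESC),  -- descending index
    FULLTEXT INDEX idx_body (body)            -- full-text search on body
) ENGINE=InnoDB;
```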

Monitoring the Performance of Indexes Using dbWatch

To monitor the performance of your database indexes using dbWatch, you can utilize a couple of methods outlined below: 

  • dbWatch allows you to see your database growth rates. For that, dbWatch has two specific jobs letting you see the aggregated and detailed growth rates of your databases regardless of the platform you use. Here’s how the aggregated growth rates look:

Database growth rates as seen in dbWatch.

The red line depicts the data size, the orange one the index size, and the green one the total size.


By observing aggregated growth rates of your database you can easily see the data and index size derived from your database server, letting you decide whether your indexes are starting to be redundant or not. 


Here’s how the detailed growth rates look:

detailed growth rate as seen in dbWatch

Detailed growth rates show a chart detailing the growth rate of the largest databases on the server. Both jobs also display dates, letting you observe how your database grew over time.


If your indexes’ size is very small, it might be time to look into a different optimization method. On the other hand, if your indexes are larger, they can become the primary reason your queries run efficiently. It all depends on the index – indexes are critical for good performance, but people often misunderstand them, so indexing can cause more hassle than it’s worth. To get the best out of the indexes in use in your database management system, you can also utilize the InnoDB buffer pool checking job or the MyISAM key buffer checking job – these jobs give an excellent indication of buffer pool utilization in InnoDB or key buffer utilization in MyISAM.


The InnoDB buffer pool check job can be configured to give an alarm or a warning if the buffer utilization exceeds a certain value in percent, allowing you to keep an eye on the buffer pool at all times – since the buffer pool is maintained primarily for caching data and indexes in memory, monitoring its performance can be a crucial aspect of monitoring the performance of your database indexes with dbWatch: 

monitoring performance menu in dbWatch

Configure menu in dbWatch

The same can be said about the MyISAM key buffer check job. Once again, this job can be found by simply looking to the dbWatch Control Center’s left side. All that’s left to do is to configure and enable it for it to work: 

The dbWatch configure menu.

When configuring the job, keep in mind that there are a couple more parameters that you can use: 

  • You can choose the number of days you want to keep the data for – after that number of days has passed, the data will be discarded.
  • The job can give you an alarm or a warning if the buffer utilization exceeds certain specified values in percent. 
  • The job can give you an alarm or a warning if the read ratio exceeds certain specified values in percent. 

The configure key buffer


The key buffer utilization alarms can be beneficial not only if you want to know whether the indexes you use in MyISAM are effective, but also if you want to know when to upgrade your database instances or the servers you run them on (e.g., if buffer utilization constantly exceeds, say, 90%, it might be time to look at how you can extend your resources to accommodate the data and indexes that you use).

 Summary of Database Performance Monitoring with dbWatch

Monitoring the indexes’ performance in your database with dbWatch can be a substantial step if you want to ensure that some of your database queries (e.g., search queries) stay fast and efficient. Do keep in mind that indexes usually slow down certain types of queries too (e.g., INSERT and UPDATE queries), but if you have a lot of data, indexes can be very handy. When using indexes, keep in mind that there are separate types of them (for example, B-Tree indexes and FULLTEXT indexes; PRIMARY KEYs are also indexes) and that you can have multiple types of indexes on the same column at once.


Software developed by dbWatch can help you monitor the performance of the database indexes that you use – the database growth rate jobs can help you check the size of your indexes, helping you decide whether they’re efficient or not; the InnoDB buffer pool checking job can help you monitor the data and indexes of your InnoDB tables; and the key buffer checking job can help you monitor your MyISAM table performance.

Understand more about database performance monitoring, book a demo today.



Monitoring MyISAM Performance with dbWatch, a Guide


MyISAM was the default MySQL storage engine for MySQL versions prior to 5.5, released in 2009. The MyISAM engine is based on older ISAM code. One of the key differences between InnoDB and MyISAM is that MyISAM is significantly faster for COUNT(*) queries without a WHERE clause, because MyISAM stores the exact row count in the table metadata while InnoDB does not.

What do You Need to Monitor in MyISAM?

Monitoring your MyISAM-based table performance is crucial if you use MyISAM to store data. One of the most crucial things related to this engine is the MyISAM key buffer (also called the key cache). MyISAM employs a cache mechanism to keep the most frequently accessed table blocks in memory to minimize disk I/O.

The key cache (or key buffer) structure contains a number of block buffers where the most-used index blocks are placed. When the MyISAM key cache is in use, multiple sessions can access the cache concurrently. To control the size of the key cache in MyISAM, use the key_buffer_size variable. If this variable is equal to zero, no key cache is used.
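You can inspect and size the key cache yourself; the status counters below are standard MySQL variables, and the buffer size in the last statement is just an example value:

```sql
-- Current key cache size (0 disables the cache entirely).
SHOW VARIABLES LIKE 'key_buffer_size';

-- Key_read_requests counts index block lookups; Key_reads counts
-- lookups that had to hit disk. A high Key_reads/Key_read_requests
-- ratio suggests the key buffer is too small.
SHOW GLOBAL STATUS LIKE 'Key_read%';

-- Resize at runtime (value in bytes; 64 MB here as an example).
SET GLOBAL key_buffer_size = 64 * 1024 * 1024;
```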


How to Monitor the Performance of MyISAM with dbWatch?

Thankfully, if you want to monitor your MyISAM-based tables with dbWatch, doing so is pretty simple. Open up the dbWatch Control Center, navigate to your MySQL instance and expand the performance section as shown. You should see a “Key buffer check” job underneath:



The next thing you should probably do is configure the job. Here are your options:

This task can be configured to:

– Change the buffer utilization alarm threshold – dbWatch will alert you with a status of ALARM if this value exceeds the specified value in percent.

– The buffer utilization warning threshold means that dbWatch will present a warning when the buffer utilization of the MyISAM engine exceeds a value specified, which can be useful if you want to monitor your MyISAM performance or even know when to migrate to a new server.

– The read ratio alarm threshold will give you an alarm if the reads exceed a specified value in percent.

– The read ratio warning threshold will present you with a warning if the reads exceed a specified value in percent.


In general, this job can be beneficial if you want to ensure that you use your server resources with the MyISAM engine well. If the values don’t satisfy your requirements, you are free to change them.



To summarize, the MyISAM key buffer cache checking job in the dbWatch Control Center can be very useful if your MySQL data is stored using the MyISAM engine. The job can help you ensure that your server resources are utilized to ensure the MyISAM engine’s best performance.


Monitoring SQL Server error logs

Screen giving an idea of Monitoring sql server error logs

As a data platform expert who manages and maintains hundreds or maybe thousands of database instances, you may be looking for a better way to manage this number of servers more efficiently. An environment this huge, with a lot of vital applications relying on those databases, can be quite difficult to keep track of – especially when you’re busy with a lot of administration work and, with little time and few resources, have difficulties prioritizing your daily tasks.


As your day goes on, you will encounter the usual errors which happen from time to time. They may have different causes: for example, a user tried to input their credentials but failed to log in because the credentials had expired, a job failed due to a query timeout, or a backup job failed for an unknown reason.


The errors mentioned are automatically logged in your SQL Server error logs. You can view them in Management Studio by expanding the Management node (SQL Server Logs); there you will see all the logs of your database instance.
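You can also read the error log with T-SQL; for example, the built-in sp_readerrorlog procedure lets you filter a log for a search string:

```sql
-- First argument: log file number (0 = current log).
-- Second argument: 1 = SQL Server log, 2 = SQL Server Agent log.
-- Third argument: text to search for.
EXEC sp_readerrorlog 0, 1, N'error';
```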


When monitoring tens or maybe hundreds of database instances, you need to be able to track and efficiently monitor errors happening in each instance, within the database farm you are managing.

How to Easily Track SQL Instances

You may think that there is no easy way to manage and have full control of everything happening within each of your instances. Luckily, dbWatch can assist you with your daily monitoring tasks.


dbWatch contains various tasks which help you manage and monitor the error logs of your database instances. dbWatch helps you keep track of all errors encountered on your database farm. You can also monitor error log file sizes and check whether a file is growing too much or too fast within a period. And lastly, you can configure when you wish to recycle error logs within your database instance, which helps free up the disk space consumed by the growing number of logs.

How to Track Error Logs Within dbWatch

  • You’ll need to start by going to the monitoring module.
  • Here you’ll select a SQL Server instance. Right click and select Configure Monitoring. (See Figure 1, below)
Selecting SQL server, screen shot
Figure 1

In this window make sure that you have installed the tasks for monitoring error logs in SQL Server. (See figure 2, below)

The following tasks are:

  • Instance error log
  • Instance error log file size check
  • Cycle error log

showing how to install the tasks for monitoring error logs in SQL Server

After installing the tasks, go back to the monitoring module.

In the Name column, type “error” in the filter to find the tasks regarding the error logs. (Figure 3, below)

how to input error on the filter to find the tasks regarding the error logs.
Figure 3

If you right click on instance error log, you have an option to run the task to retrieve the updated information of the collected error logs from the instance. (Figure 4, below)

How to retrieve the updated information of the collected error logs
Figure 4

Click on Details to view the actual errors collected from this SQL Server instance. (Figure 5, below)

How to view the actual errors
Figure 5

A new window will appear, showing you details of the errors which occurred in your sql server instance.

For example, the last lines read from error log table displays all lines read from the error log file by this task during its last execution.


The error history details table shows the history of the last 20 records collected from your SQL Server error log file (error text defined by the “error text” parameter). (Figure 6, below)


Later in the blog, you’ll learn how to define the error text parameter to filter out errors that you’d like excluded.

Figure 6

SQL Server Error History

The error history graph shows the number of errors per day being registered in the log, helping you determine if the errors you are encountering are growing daily. (Figure 7)

SQL server error history graph.
Figure 7

You don’t want the monitoring task to consume too many resources, and if it takes a long time to read the error log you should investigate (even if there are no errors in the log file). If the number of records read is very high, it could indicate potential problems in your instance. (See Figures 8 and 9, below.)

Showing a high readings in the SQL error log graph.
Figure 8
Another example of high error log readings in SQL Server
Figure 9

And lastly, the alert execution aggregated statistics graph: the left vertical axis shows the total rows read per day, and the right vertical axis shows the total time spent reading the SQL Server error log per day. This helps you determine if you are experiencing an increase in errors on your instance. With the information available, you can dive deep into the instance where the number of errors is growing.


Going back to the monitoring view, click on Configure. (Figure 10, below)

Image shows how to return to monitoring view
Figure 10

A configuration window will appear for this Instance error log task. (Figure 11, below)

The Confirmation window in dbWatch.
Figure 11

You can modify the configuration based on your requirements.

The options are:

  • Error text – defines which string (errors) the check must look for. Values must be separated with commas (,).
  • Error text allowed – Specifies the text strings (specified by the “error text” parameter) to be excluded when found in the same record (error log line).
  • History threshold – the maximum number of error messages the history table will retain (error_log_err_histr_tab table).
  • Last run – the last time the error log has been checked.
  • Max elapsed time – the maximum execution time (in seconds) for the task. If this threshold is reached a warning is raised.
  • Max lines per execution – the maximum number of lines the task will read during execution before a warning/alarm is raised.
  • Return status – return status value (ALARM – 2, WARNING – 1, or OK – 0) when “error text” is found (or “max lines per execution” is reached).
  • Click apply to save the configuration.

You can set the schedule on when you wish this task to execute. (Figures 12 and 13, below)

How it looks when you schedule a task in dbWatch
Figure 12
The Pop-up window when scheduling a task in dbWatch.
Figure 13

Right click on the Instance error log file size check and choose Details.

Note: In order to install this task (Instance error log file size check) and make it work, you need to enable xp_cmdshell in SQL Server by executing the command below (Figure 14), or you can set it via the configuration parameter. (Shown later.)

 EXEC sp_configure 'xp_cmdshell', 1;
 RECONFIGURE;



Instance error log file size check
Figure 14

A new window will appear showing you the information in your error log directory. It shows the error log file name, create date and file size. (Figure 15, below)

How the error log directory looks in dbWatch.
Figure 15

The error log file size history graph shows you information of your historical error log size growth rate. (Figure 16)

The error log file size history in dbWatch.
Figure 16

The error log directory size history graph displays the number of files within your log directory and the total file size of the accumulated logs. This information helps you decide whether to clear error log files that are no longer needed, since they consume additional disk space within your environment. (Figure 17, below)

Error log directory size history as seen in dbWatch.
Figure 17

Go back to the monitoring view, and right click again on the Instance error log file size check. Choose Configure. (Figure 18)

How to check on Instance error log file size check.
Figure 18

Again, you can modify the following configuration based on your requirements

The options are:

  • Enable xp_cmdshell – if set to “YES” the sql server instance configuration xp_cmdshell will be set to enabled. This parameter is required if you want to allow the task to execute operating system commands, to discover the size of files and directories.
  • Error log directory size threshold – the maximum total size (in MB) of all files in the error log-file directory.
  • Error log file path – path where the error log file is located.
  • Error log size threshold – maximum size (in MB) of error log file before a warning or an alarm is returned by the alert.
  • File count threshold – the maximum number of files in the error log catalog.
  • History threshold – the maximum number of days of error log file size statistics to keep.
  • Return status – return status value (ALARM – 2, WARNING – 1, or OK – 0) when the “* threshold” parameters are reached.
  • Click apply to save the configuration.
Showing how to check cycle error log.
Figure 19

Choose Cycle error log. Right click, then select Details.

A new window will appear which displays the specific log file and the date when it was recycled. This information gives you an idea if the error log was cleared on a specific date. (Figure 20)

The window shown here displays the specific log file.
Figure 20

Go back to the monitoring view. Right click and choose Configure.

A configuration window will appear which you can modify. (Figure 21)

The options are:

  • Cycle History – the number of days of history to show for when the error log and agent error log were recycled.
  • Number of SQL Server error logs – by default, there are 7 SQL Server error logs – Errorlog and Errorlog.1 to Errorlog.6. This parameter modifies the registry value (REG_DWORD type) to the number of logs you wish to maintain.

Click Apply to save the configuration.
A configuration window to modify.
Figure 21
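The Number of SQL Server error logs parameter maps to a registry value on the server. As an illustration of the underlying mechanism (assuming sysadmin rights; the value 12 is just an example), it can also be set with xp_instance_regwrite, which resolves the instance-specific registry path for you:

```sql
-- Keep 12 archived error logs instead of the default 6.
EXEC xp_instance_regwrite
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs',
     REG_DWORD,
     12;
```

Raising this value is useful when you cycle the error log frequently, since otherwise older logs are overwritten sooner than you may want for troubleshooting.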

Summary of Monitoring SQL Server Error Logs

Now you have an idea of how to easily track the various errors happening within your database farm. You can set the return status so that you will be notified once a certain baseline or error has been identified by dbWatch.


You can also configure your own parameters or baselines based on your business requirements. This is a great way of customizing alerts for the errors retrieved by the dbWatch task. For more information visit: 


Monitor your database error logs, try dbWatch Control Center today

Managing SQL Server Farms

A woman stands next to a database, managing SQL Server farms

Imagine transitioning from managing a small family farm to overseeing an extensive industrial farm. That considerable change is similar to what a DBA experiences when moving from a handful of SQL Server instances to a large server farm.


In this webinar, hosted by the DBA Fundamentals virtual group in PASS, Per Christopher explores the journey from managing a handful of SQL Server instances to operating vast database server farms. Per draws parallels between agricultural farm management and SQL Server farm management.

As a Senior DBA with over 15 years of experience at dbWatch AS, a Norwegian software company specializing in database operations solutions, Per Christopher shares invaluable lessons on upgrading your mindset and toolset as your farm grows.

“Most DBAs are cowboys, but there is much to learn from the world of cow herding and dairy farms that can help us manage SQL Server instances better. SQL Server farms are the future of large-scale database management, and we will teach you the ropes. Introducing the FarmQueryLanguage brings another dimension of control to large-scale management. Managing large database farms requires a different mindset and tools compared to having only a few instances.”

Learn How to Manage SQL Server Farms

Webinar highlights include an in-depth discussion of the new challenges that managing a large farm brings: from how you think (see the forest, not just the trees) to how you shift your focus from instance-centric performance and health to farm-centric resource management.


Watch the webinar to understand how to better manage SQL Server farms.