Database Locks – How to Monitor and Manage


Database locks have long been a bane to DBAs.


They cause unnecessary problems for databases and business processes. In this blog, we will examine what database locks are, the issues that can arise from them, and ways to deal with them on SQL Server.


Not only that, but we will also implement an example in dbWatch Control Center. In this article:

  • What is a database lock?
  • The Most Common Table Locks in SQL

What is a database lock?

Database locks occur to secure shared resources during transactions. Locks are safeguards for databases as a means of:

  1. Enforcing an all-or-nothing outcome for multiple, separate transactions (atomicity)
  2. Preserving consistency in the database state (consistency)
  3. Isolating in-flight transactions from one another until each is complete (isolation)
  4. Preserving committed transactions even in the event of abnormal termination (durability)


They adhere to the ACID properties of a transaction. To put it in perspective, imagine a piece of paper being shared by three writers.


The graphic below gives an example of how locks work. Writer One finishes their message and gives the paper to Writer Two. Writer Two then begins to draft their message. All other writers must now wait for Writer Two to finish writing before they can write anything on that piece of paper. Locks work in the same fashion: they prevent simultaneous writes and updates, thereby maintaining data integrity.

Visual representation of acid properties


Locks within Database Access Levels

Locks can occur in a variety of forms, at different levels of access to a database. Below are the kinds of locks that can occur:

  • Row Level Locking
    The most common type of locking. It locks a single row of a table.
  • Column Level Locking
    Locks a column or columns within a table. Database vendors do not commonly offer this, since it requires significant resources to apply.
  • Table Level Locking
    Locks an entire table, preventing other users from modifying the table properties or updating its rows.
  • File Level Locking
    Locks database files, preventing alterations to an entire table, parts of that table, or other tables stored in the same file. This is the least preferred method of locking, since it can cause unintended locking.
  • Database Level Locking
    Allows only one database session to apply changes within that database. This type of locking is rarely seen, but it is widely applied during software updates.
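On SQL Server, you can observe the granularity of the locks currently held through the sys.dm_tran_locks dynamic management view. A minimal sketch (the columns shown are standard DMV columns):

```sql
-- List currently held or requested locks and their granularity
SELECT request_session_id,
       resource_type,        -- e.g. RID or KEY (row), PAGE, OBJECT (table), DATABASE
       request_mode,         -- e.g. S (shared), X (exclusive), IX (intent exclusive)
       request_status        -- GRANT, WAIT, or CONVERT
FROM sys.dm_tran_locks
ORDER BY request_session_id;
```

The resource_type column maps directly to the locking levels listed above.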


Database locks most commonly appear during record updates or uncommitted SQL statements. They primarily isolate a database component, such as tables, so that session updates and alterations will successfully take effect and no data loss will be experienced.


Locks are not a problem when only a few active sessions transact in your database. However, with more database users accessing and utilizing your databases, locks will impact your database resources and potentially your business. There are three kinds of locking that I will discuss: lock contention, long-term blocking, and deadlocks.

The Most Common Table Locks in SQL

First, it’s important to understand what the most common table locks are. After that is clear, we can start working on solving the problems.

Lock contention

The most common form of SQL table lock is lock contention. It happens when concurrent sessions compete to update the same data. As discussed earlier, locks are enforced when a user alters a table property or updates data. This, in turn, blocks other session users from accessing that same database component, i.e., table. SELECT and INSERT statements run slower when accessing that table, as they wait for the lock to end. Worse, lock contention may lead to indefinitely blocked access to a particular table and high CPU usage on the database server.
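One way to gauge how much lock contention an instance has accumulated is SQL Server's wait statistics; lock-related wait types all start with LCK_M_. A sketch:

```sql
-- Cumulative lock waits since the last restart: high values point to lock contention
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK_M_%'
ORDER BY wait_time_ms DESC;
```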

Long Term Blocking

This leads us to the second kind of table lock in SQL: long-term blocking. Like lock contention, long-term blocking happens when multiple users access the same database component. The notable difference is that the user holds on to that component for a long time, so dependent sessions and objects are blocked from reading and writing.


Database Deadlocks

Database deadlocks happen when two or more database sessions each hold a lock on an object that the other's transaction depends on. Each session can only continue if the other concedes its hold on the dependent object.


The flow chart below simplifies how deadlocks occur. Database Management Systems (DBMS) can detect deadlocks: the DBMS will select a session and roll back that session’s transaction. The only caveat is that the DBMS will choose the transaction that is cheapest to roll back.

Simplified visual of two session Deadlocks

Sure, deadlocks are manageable when they concern ad-hoc user queries. But when applications and SQL jobs are involved, deadlocks can lead to data loss and logic problems. Transactions highly dependent on timing and data can end up writing or outputting incorrect information.
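As a sketch, the two-session deadlock described above could play out like this in T-SQL (table and column names are hypothetical):

```sql
-- Session A:
BEGIN TRANSACTION;
UPDATE table_one SET val = 1 WHERE id = 1;   -- A locks a row in table_one

-- Session B:
BEGIN TRANSACTION;
UPDATE table_two SET val = 1 WHERE id = 1;   -- B locks a row in table_two

-- Session A:
UPDATE table_two SET val = 2 WHERE id = 1;   -- A now waits for B's lock

-- Session B:
UPDATE table_one SET val = 2 WHERE id = 1;   -- B now waits for A's lock: deadlock
-- SQL Server detects the cycle, picks a victim, and rolls it back with error 1205
```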

What can I do when database locks occur?

Now that we better understand the kinds of locks and how they occur, we can discuss how to deal with and prevent them in the future. First, you need to understand how a lock can occur in a session.


I have prepared two separate sessions to act as our users in this example. Session 1 will act as our first user, and Session 2 as our second user.


Figure 1 below shows a simple query that locks the designated table test_table. After that, we will execute another query in Session 2 that selects from the specified table.
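The locking query appears only as a screenshot in Figure 1; a hypothetical equivalent looks like this (test_table and its columns are placeholders):

```sql
-- Session 1: open a transaction and hold an exclusive lock on test_table
BEGIN TRANSACTION;
UPDATE test_table SET col1 = 'locked' WHERE id = 1;
-- deliberately left uncommitted

-- Session 2: this select now waits for Session 1's lock
SELECT * FROM test_table;
```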


How to select the specified table when dealing with database locks.
Figure 1

USE master;
EXEC sp_who2;

The stored procedure above displays, in tabular format, the information you need to see who is blocking whom. As seen below in Figure 2, session ID 54 (Session 2) is being blocked by session 76 (Session 1).


Showing how the blocked session is highlighted.
Figure 2

Similarly, you can use the following query to get the information you want:


USE master;
SELECT session_id, start_time, command, status, blocking_session_id, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;

You will still end up with the same information as before.


Identify the blocked session in database before killing the query.
Figure 3

After identifying the blocking session, you can kill the offending query using the command below:


KILL <blocking_session_id>
-- in this example two queries were executed:
-- KILL 76
-- KILL 54
-- You can also use: KILL <session_id> WITH STATUSONLY

Killing them resets the sessions. In this case it was done by killing session_id 54 (Session 2).


Showing the killed session message for a locked database fix.
Figure 4

That seems more straightforward than expected. But this is still a bare-bones example with negligible impact on our business processes and databases. In reality, killing a session rolls back its transaction, undoing its changes to the database state. Kill with caution, since it may take a long time for the session to roll back the lost transaction, and doing so may affect dependent objects during data updates and writes.
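If a killed session takes a long time to disappear, you can check how far its rollback has progressed; both of these are standard SQL Server commands (76 is the hypothetical session ID from the example):

```sql
KILL 76 WITH STATUSONLY;   -- reports estimated rollback completion

-- Or query the rollback progress directly:
SELECT session_id, command, status, percent_complete
FROM sys.dm_exec_requests
WHERE command = 'KILLED/ROLLBACK';
```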


You need sufficient knowledge when dealing with table locks. Do not nonchalantly kill sessions as a quick fix to the problem. In queries using BEGIN TRANSACTION, always set savepoints after every critical alteration to the database. Remember to end your transactions with ROLLBACK or COMMIT, since an open transaction holds its table locks. Finally, only kill sessions when doing so will have minimal impact on your business operations and data dependencies. This is your last resort.
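In T-SQL, savepoints are set with SAVE TRANSACTION; a minimal sketch with a hypothetical accounts table:

```sql
BEGIN TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
SAVE TRANSACTION after_debit;   -- savepoint after a critical alteration

UPDATE accounts SET balance = balance + 100 WHERE id = 2;
IF @@ERROR <> 0
    ROLLBACK TRANSACTION after_debit;  -- undo only the work after the savepoint

COMMIT TRANSACTION;  -- always end with COMMIT or ROLLBACK to release the locks
```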


Another option is to utilize database monitoring and management tools such as dbWatch Control Center. This not only saves you time with querying for information but also gives you an overall view of the state of your database.


How to Fix Table Locks in SQL with dbWatch Control Center

Since dbWatch Control Center is both a monitoring and management solution, we will do both in one active window. For reference, we will simulate the previous example step-by-step using Control Center.


How to simulate Database locks in dbwatch Control center
Screenshot 1

First, we will change both the warning threshold and the alarm threshold. The picture above shows the default values of the blocking statistics job: a warning status appears when blocking lasts more than 5 minutes, and an alarm status is raised when it exceeds 30 minutes.


How to configure blocking statistics when Simulating Database locks in dbwatch.
Screenshot 2

I will alter it to warn me when blocking reaches 4 minutes and to alarm me when it exceeds 12 minutes.


How to know if blocking has occurred when Simulating Database locks in dbwatch.
Screenshot 3

The screenshot above confirms that blocking has already taken effect.


Using a query to see which database locks are occurring.
Screenshot 4

Using the earlier query, I checked whether table locks were occurring in the database. They are: session 75 is blocking session 58. For the following steps, we will not need Management Studio open; these two screenshots are only here to confirm that table locks are occurring.


Screen shot of monitoring module for database locks.
Screenshot 5

In the monitoring module, the blocking statistics job checks whether table locks are occurring in the database. As we defined earlier, the monitoring job warns us after 4 minutes of blocking. Clicking on the blocking statistics shows a complete overview of the following:


  • Who? -> which sessions are blocking whom
  • What? -> what query is being executed
  • Where? -> which management tool host the blocking is occurring on

The alarm status for a locked database.
Screenshot 6

After waiting a little while, the status is changed to an alarm, indicating that it has already passed the 12-minute mark.


The tree in the monitoring module.
Screenshot 7

Now, we will go to the monitoring module. Click on the tree, find “Performance,” and click the arrow adjacent to it. Underneath it you will see “Blocking sessions.” Click the arrow beside it to display “Blocking session history,” which helps us determine whether logs were recorded during the lock. As we can see in the above screenshot, the lock was recorded and sorted starting from the latest blocking.


The blocking sessions module that manages databases in dbWatch.
Screenshot 8

After confirming the logs, we will return to “Blocking sessions” to manage our database. Two sections will appear on that view – “Blocking sessions” and “Blocked Sessions.” One record should appear in each section. By right-clicking on the record, you can “Kill the session” or “Show current SQL.” For now, we will not kill the session. We will investigate further.


Where to check both sessions to resolve a locked database.
Screenshot 9

Checking both sessions with “Show current SQL” shows that Session 2 (SPID 76) only runs a select statement. In contrast, Session 1 (SPID 58) is blank, indicating that a transaction is in progress. Knowing this, I can terminate Session 2, since it only runs a select statement, or terminate Session 1 to stop the transaction. In this example I killed Session 2, since it is only a select statement.


The locks sql server error message.
Screenshot 10

Upon checking, Session 2 has been terminated, releasing the lock in the process. Session 1 continues to execute its transaction.


In summary, tools like dbWatch make it much easier to deal with locks. You get alarm notifications, quick insight into the situation, and the ability to resolve the issue quickly.

Discover how to best manage your database locks, try dbWatch Control Center today


Track your SQL Server database compatibility level



In this blog, I will discuss SQL Server's database compatibility level. Using dbWatch, we will check the current setting and configure the right compatibility level based on the version you are using.


Each new version of SQL Server introduces new functionality and performance benefits. These are identified by the compatibility level and cardinality estimator.


As you can see, each database version is associated with a compatibility level. If you want to utilize the newest features available for your database, you need to check and set the right compatibility level for your instance.


As a DBA, you must ensure the compatibility level of each database matches your instance's current SQL Server version.


A common problem in enterprises is taking too long to migrate to a newer version of SQL Server.


Some DBAs end up migrating in two steps. For example, they first migrate a SQL Server 2000 database to SQL Server 2008 R2, and then migrate it to SQL Server 2019. This is a reason NOT to wait too long before migrating a SQL Server database to its latest version.


Sooner or later, your compatibility level will have to change, and this will impact your applications and processes. By doing it sooner, you will reduce the amount of work you will have to do later.

In SQL Server Management Studio, you can manually check each database's compatibility level by opening the database properties window.
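The same information is also available by query; sys.databases exposes the compatibility level of every database on the instance:

```sql
-- Compatibility levels by version: 100 = 2008/2008 R2, 110 = 2012, 120 = 2014,
-- 130 = 2016, 140 = 2017, 150 = 2019
SELECT name, compatibility_level
FROM sys.databases
ORDER BY compatibility_level, name;
```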

But at the database farm level, especially when managing hundreds or thousands of databases, this is not an ideal way to check the compatibility level of your databases.


dbWatch Control Center provides an overview of your databases' compatibility levels and displays other attributes of the SQL Server instance they run on.


Let us start by clicking the Farm Management icon on the left side of your Control Center monitor, as seen in the image below.


Expand the Inventory Overview.

Click on the Per platform option and choose MS SQL Server. 


This view displays different attributes: instance status, name, database version, edition, port number, instance name, host, host IP, OS type, database creation date, and total DB count.

With this information, you get a quick overview of the different versions of SQL Server running within your database farm.


On top, choose the compatibility tab. 

At the topmost part of this view, you can see the server and database compatibility levels.


It has 2 groups: 

  • Databases with lower compatibility than the server 
  • Databases with the same compatibility as the server

As you can see, there are 2 databases using a lower compatibility level than the server's version, and 60 databases with the same compatibility level as the server.


On the right side, you have a pie chart that displays the different compatibility levels configured on your databases. We have 2 databases with SQL Server 2014 (120) compatibility level, 50 databases with SQL Server 2016 (130) compatibility level and 10 databases with SQL Server 2019 (150). 


The bottom of the view displays the instance name, database, status, create date, recovery model, database compatibility, server compatibility, and server version.

With this information, you can analyze which databases you need to reconfigure, updating their assigned compatibility level to match the server's.


In the image below, you can see a database farm with a total of 903 SQL Server databases on SQL Server 2008 R2. This feature is very handy when managing huge numbers of databases, as it helps you keep track of their compatibility levels relative to the latest available version.

A change in compatibility level tells the instance to change its feature set; it might add new features or remove old ones, so be wary of changes that might affect your existing applications.
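The change itself is a one-line command (SalesDb is a hypothetical database name):

```sql
-- Raise a database's compatibility level to SQL Server 2019 (150)
ALTER DATABASE SalesDb SET COMPATIBILITY_LEVEL = 150;
```

Test applications against the new level in a non-production environment first, since optimizer behavior can change with it.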


The compatibility report view enables DBAs to find potential performance improvements by upgrading the compatibility level of databases that might be a version behind.





5 Tips for Server Migration Planning


As a DBA, infrastructure admin, or manager, you may have been tasked to lead, plan, and execute a server migration project for reasons such as improving application performance with new or upgraded hardware, or optimizing the enterprise architecture.


Planning for a server migration can be quite complex, especially in large enterprises with many different servers used for various web applications, internal applications, databases, and many more. Today I will discuss some of the issues you need to consider when planning a server migration.


The 5 R’s to consider when planning a Server Migration are the following:

1. Re-hosting by Moving to a New Host (also known as Lift and Shift)

This approach involves moving your current infrastructure, applications, or databases to another server, another data center, or moving it to the cloud as it is.

Advantages of Re-hosting:

  • Simple migration: all applications and databases are re-hosted as-is
  • No optimizations, no architecture changes, no code changes; you move to other servers or the cloud without additional changes

Disadvantages of Re-hosting:

  • Can cause security risks if users and roles are not applied effectively
  • Can cause application failure or service unavailability if processes, security roles, and jobs are not synchronized correctly

2. Re-platform with a New Database Platform or Infrastructure

Re-platform involves migrating to a new platform or infrastructure, for example, moving your on-premise databases to an Azure Managed instance in the cloud or moving your on-premise web application to AWS ElasticBeanstalk.

Advantages of Re-platforming

  • Not changing the core architecture but may require some code changes for optimization

Disadvantages of Re-platforming

  • It will most likely require extra time and effort to apply code changes
  • You may need to use different tool kits and packages only available on the new platform

3. Re-target to Another Platform

Moving to a different product or database platform. For example, you migrate your Oracle databases to MariaDB to save substantial licensing costs. A move from one database platform to another requires changes to your stored procedures and packages.

Advantages of Re-targeting

  • Migrate the database to a better solution for reasons such as cost-benefit, feature/function availability.

Disadvantages of Re-targeting

  • It may take considerable time to migrate to a new database platform, as you need to map out every process happening within the database.
  • You will need to learn new tools and tech if you are not already familiar with a target platform

4. Refactor

Restructuring the enterprise architecture (databases and applications)

Advantages of Refactoring

  • Overhaul the application, driven by the need of business to add new features and functions
  • Optimizing the overall application and usage rate of resources by query optimization and rewriting queries
  • Long term cost savings as your applications/database are optimally running and properly provisioned.

Disadvantages of Refactoring

  • It may require a lot of time and effort on the part of DBAs/Developers to work on the project to refactor the whole application/database architecture
  • May introduce new bugs or issues
  • It will require extensive planning and testing before stable and ready

5. Retire

Turn off systems that are no longer used or running, perhaps as a result of refactoring the whole environment. Consolidate infrequently used database instances onto other servers that have spare capacity and resources.

Advantages of Retiring

  • Save up to 10–30% in costs
  • It helps reduce security vulnerabilities
  • Fewer servers to maintain, monitor, and manage
  • Simplify environment
  • Remove old legacy platforms and apps
  • Better overview
  • Cleanup

Disadvantages of Retiring

  • Hard to verify that instances or databases are no longer in use without extensive activity logs going back a long time, or extensive manual checks
  • Moving databases may introduce unexpected side effects

Summary for Your Server Migration Planning

When planning your server migration, always remember the 5 R’s, which are:

  • Re-hosting
  • Re-platform
  • Re-target
  • Refactor
  • Retire

Before you migrate your servers, you should put a monitoring and management solution in place to keep track of your applications or databases’ health.


dbWatch Control Center allows you to track your database health and performance using different database monitoring jobs for every performance metric.


Control Center has built-in reporting capabilities for the management to have a quick overview of their database farms’ status, capacity, and resource usage.

Track your database health, try dbWatch Control Center today.

Monitoring the Performance of Database Indexes with dbWatch


Most database administrators face a few common problems. One of them is optimizing query performance, and whenever query optimization is mentioned, chances are you will see advice regarding indexes. Today we will look at why indexes are so essential and dive into how to monitor your database indexes’ performance with dbWatch.


What are Database Indexes?

In the database world, indexes are data structures that are frequently used to improve the speed of data retrieval operations. Indexes make data retrieval faster because, when indexes are in use, databases can quickly locate data without having to scan through every row in a table every time it is accessed. The usage of indexes has both upsides and downsides: we will start with the benefits, then cover the drawbacks, and finally show how to monitor the performance of your database indexes using dbWatch.

Advantages and Disadvantages of Using Database Indexes

There are a few main benefits of using database indexes as far as databases are concerned. We will use MySQL as an example. In this relational database management system, indexes can be used, among other things, to:

  • Quickly and efficiently find rows matching a WHERE clause.
  • Retrieve rows from other tables in JOIN operations. 
  • Save disk I/O when values are retrieved straight from the index structure. 
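As a minimal MySQL sketch (table and index names are hypothetical), an ordinary secondary index lets the optimizer satisfy a WHERE clause without a full table scan:

```sql
CREATE TABLE customers (
    id    INT AUTO_INCREMENT PRIMARY KEY,
    email VARCHAR(255) NOT NULL,
    INDEX idx_email (email)   -- ordinary secondary index on the lookup column
);

-- Can be resolved through idx_email instead of scanning every row:
SELECT id FROM customers WHERE email = 'alice@example.com';

-- EXPLAIN shows whether the index is used (check the "key" column of the output):
EXPLAIN SELECT id FROM customers WHERE email = 'alice@example.com';
```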

However, we mustn’t forget that what has advantages probably has disadvantages too. Here are the disadvantages of using indexes in MySQL: 

  • One of the main drawbacks of using indexes in MySQL is that your data will consume more space than usual. 
  • Indexes degrade the performance of certain types of queries in MySQL: INSERT, UPDATE, and DELETE queries can be significantly slower on indexed columns. When data is updated, the index needs to be updated together with it. 
  • You may use redundant indexes in MySQL (e.g., you might index the same column two or three times by adding an ordinary INDEX, a FULLTEXT index, and a PRIMARY KEY or a UNIQUE INDEX, etc.) In this case, it’s helpful to remember that MySQL does not error out when you use multiple types of indexes on the same column, so it never hurts to be careful. 
  • We also must not forget that there are multiple types of indexes in MySQL. You can use a PRIMARY KEY (this type of index allows you to use automatically incrementing values); an ordinary INDEX, which accepts NULL values and is frequently used to speed up SELECT operations (while slowing down INSERT, UPDATE, and DELETE queries); a UNIQUE INDEX, which can be used to prevent duplicate rows; a FULLTEXT index, which provides full-text search capabilities; or a DESCENDING INDEX, if you want to store index rows in descending order. 
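The index types above can be sketched in one hypothetical table definition:

```sql
CREATE TABLE articles (
    id   INT AUTO_INCREMENT,
    slug VARCHAR(100),
    body TEXT,
    PRIMARY KEY (id),                -- PRIMARY KEY index
    UNIQUE INDEX idx_slug (slug),    -- rejects duplicate slugs
    FULLTEXT INDEX idx_body (body)   -- enables full-text search
);

-- The FULLTEXT index is used through MATCH ... AGAINST:
SELECT id FROM articles WHERE MATCH(body) AGAINST('database locks');
```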

Monitoring the Performance of Indexes Using dbWatch

To monitor the performance of your database indexes using dbWatch, you can utilize a couple of methods outlined below: 

  • dbWatch allows you to see your database growth rates. For that, dbWatch has two specific jobs letting you see the aggregated and detailed growth rates of your databases regardless of the platform you use. Here’s what the aggregated growth rates look like: 

Database growth rates as seen in dbWatch.

The red line depicts the data size, the orange one the index size, and the green one the total size.


By observing the aggregated growth rates of your database, you can easily see the data and index sizes on your database server, letting you decide whether your indexes are becoming redundant.


Here’s what the detailed growth rates look like:

detailed growth rate as seen in dbWatch

Detailed growth rates show a chart of the growth rate for the largest databases on the server. Both jobs also display dates, letting you observe how your databases grew over time.


If your indexes’ size is very small, it might be time to look into a different optimization method. On the other hand, if the size of your indexes is a bit bigger, indexes can become the primary reason your queries run efficiently. It all depends on the index – indexes are critical for good performance, but people often misunderstand them, so indexing can cause more hassle than it’s worth too. To get the best out of the indexes that are in use in your database management system, you can also utilize the InnoDB buffer pool checking job or the MyISAM key buffer checking job – these jobs can give you an excellent indication of the buffer pool utilization in InnoDB or the key buffer utilization in MyISAM. 


The InnoDB buffer pool check job can be configured to give an alarm or a warning if the buffer utilization exceeds a certain value in percent, allowing you to keep an eye on the buffer pool at all times – since the buffer pool is maintained primarily for caching data and indexes in memory, monitoring its performance can be a crucial aspect of monitoring the performance of your database indexes with dbWatch: 

monitoring performance menu in dbWatch

Configure menu in dbWatch

The same can be said about the MyISAM key buffer check job. Once again, this job can be found by looking at the left side of the dbWatch Control Center. All that’s left to do is to configure and enable it:

The dbWatch configure menu.

When configuring the job, keep in mind that there are a couple more parameters that you can use: 

  • You can choose the number of days you want to keep the data for – after a specified amount of days has passed, data will be discarded. 
  • The job can give you an alarm or a warning if the buffer utilization exceeds certain specified values in percent. 
  • The job can give you an alarm or a warning if the read ratio exceeds certain specified values in percent. 

The configure key buffer


The key buffer utilization alarms can be beneficial not only if you want to know whether the indexes you use in MyISAM are effective, but also if you want to know when to upgrade your database instances or the servers you run them on (e.g., if buffer utilization constantly exceeds, say, 90%, it might be time to look at how you can expand your resources to accommodate the data and indexes that you use).

Summary of Database Performance Monitoring with dbWatch

Monitoring the indexes’ performance in your database with dbWatch can be a substantial step if you want to ensure that some of your database queries (e.g., search queries) stay fast and efficient. Do keep in mind that indexes usually slow down certain types of queries too (e.g., INSERT and UPDATE queries), but if you have a lot of data, indexes can be handy. When using indexes, remember that there are separate types of them (for example, B-Tree indexes and FULLTEXT indexes; PRIMARY KEYs are also indexes) and that you can have multiple types of indexes on the same column at once.


Software developed by dbWatch can help you monitor the performance of the database indexes that you use: the database growth rate job can help you check the size of your indexes, helping you decide whether they’re efficient or not; the InnoDB buffer pool checking job can help you monitor the data and indexes of your InnoDB tables; and the key buffer checking job can help you monitor your MyISAM table performance.

Understand more about database performance monitoring, book a demo today.



Fix and Optimize Database Performance Degradation


Online transactions are the new normal. From banks and retailers to the food industry, information systems have adapted with the times to meet customer demand.


However, not everything is fine in the ‘new normal.’ The increase in demand can cause degradation in your database performance. The most common symptom is a slow-running database, visible at the application level.

What is database slowness?

Let’s discuss database performance first. Database performance is the speed at which your database responds to requests for its stored data. To visualize it: a simple SELECT statement, when executed, pulls resources from your database and returns tabulated data for display. This seems fine for the first thousand records, but when you query tens of thousands of rows or more, reduced performance can often be noticed.

Database slowness is that stunted performance. The main difference between a single slow SELECT statement and database slowness is that the latter is a chronic ailment of your databases.

Slowness can be attributed to multiple factors such as: 

  • network problems
  • missing indexes
  • improper caching
  • un-optimized queries
  • database architecture

Reduced database performance is heavily influenced by your database's existing structure, and as DBAs it is our primary responsibility to mitigate these effects.

How you can Increase Database Speed

Monitor your network and memory

  • Other nodes
  • Estimate disk space and cache memory

Constantly check which applications consume a lot of resources. Be wary of cache memory and disk space, as they can deplete with continuous reading and writing to your databases. Monitor your network connections to avoid unplanned downtime. See to it that the allocated bandwidth is sufficient and that latency to and between database servers is kept to a minimum.

Coordinate with your network team, developers, and other related departments to determine the root cause of the problem, especially if it is database-performance related. Fixing slowness won’t happen overnight, but with gradual steps the system will be tuned to its peak performance.

Review your database architecture and structure

Poorly defined and maintained databases cause retrieval latency and performance issues. Missing indexes and unnecessary table locks are not the only contributors to undesirable database performance; database architecture also plays a significant role.

You must review whether your overall architecture follows correct normalization. Duplicate data and missing primary keys are common problems in fledgling database systems. These are easy to shelve as backlog items. Yet by constantly pushing them down your priority list, the cracks in your database architecture begin to show. Sooner or later, addressing those cracks becomes labor-intensive, as DBAs will have to work longer hours, and costly, because you need to plan when and how much time you will dedicate to fixing them.

Take, for example, re-indexing tables containing millions of master records. The best approach would be to schedule it on a weekend with low transaction volume against the database. Then formulate a strategy for the execution and rollback, disseminate the scheduled maintenance hours to all stakeholders, and, most importantly, secure DBAs to work during that time frame. Essentially, procrastination negatively impacts business when bad quality persists.

When reviewing your architecture, ask yourself:

  1. Does the design make sense?
  2. What is the probability of duplicate data per row? Is it low? Is it high?
  3. When I query, will I need to go to another schema just to retrieve that other table?
  4. Is there redundant data stored in every field?
  5. Are the data types properly defined? Are there composite data types I should be wary of?
  6. Are the primary and foreign keys properly defined? Will they lead to conflicts when inputting data?
  7. Is the chosen indexing approach sufficient to optimize my searches?
  8. Should I use VIEWS as a substitute for this table?
  9. How will we handle orphan data?
  10. Will we automatically flush or clean cache memory once data rows are deleted?

Based on your answers, you get a feel for what your ideal database architecture should be versus the reality of it. Although this falls under the authority of DevOps and developers, as DBAs, it is also our responsibility to manage databases.
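As a small, hypothetical sketch of the normalization point above: if customer details are repeated on every order row, splitting them into their own table removes the duplication and gives you a proper key to join on (all names here are illustrative):

```sql
-- Before: customer name and address duplicated on every order row.
-- After: customers live in one table, orders reference them by key.
CREATE TABLE dbo.Customers (
    CustomerId   INT IDENTITY PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL,
    Address      NVARCHAR(200) NOT NULL
);

CREATE TABLE dbo.Orders (
    OrderId    INT IDENTITY PRIMARY KEY,
    CustomerId INT NOT NULL
        REFERENCES dbo.Customers (CustomerId),
    OrderDate  DATE NOT NULL
);
```

The foreign key both prevents orphan rows and answers several of the checklist questions at once.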

Query Plan. Plan. Plan

Whether you are a senior or junior DBA, always remember to value your query execution plans. Utilize your EXPLAIN statements and execution plan tabs, and do the same on Oracle, Postgres, or any other platform.

Double check your database objects and queries such as: 

1. Stored procedures
2. Functions
3. Ad hoc queries
4. Connecting application queries

See to it that upon their execution, they do not consume a large portion of your database resources. 

JOIN and SUBQUERY statements can also be headaches to deal with. Careless developers will match up tables containing tens of thousands of records with ambiguous or incorrect join keys. Poorly scripted subqueries often return results slowly and can produce unexpected NULL values. Where possible, rewrite subqueries as JOIN statements; the optimizer can usually evaluate joins more efficiently and limit the rows processed before the final result is returned.

DBAs should search for these anomalies while finding ways to optimize them. Fine-tuning these queries is no joke; it may take hours just to optimize a single query.
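To make the JOIN-versus-subquery point concrete, here is a sketch (table and column names are hypothetical) of the kind of rewrite worth testing against your execution plan:

```sql
-- Turn on I/O statistics before comparing the two forms
SET STATISTICS IO ON;

-- Subquery form: can be slower on large tables
SELECT o.OrderId, o.TotalAmount
FROM dbo.Orders AS o
WHERE o.CustomerId IN
      (SELECT c.CustomerId FROM dbo.Customers AS c WHERE c.Region = 'EU');

-- JOIN form: often easier for the optimizer to handle well
SELECT o.OrderId, o.TotalAmount
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
WHERE c.Region = 'EU';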

Using a Tool to make Database Management Effective and Efficient

With all this talk about optimization, management, and monitoring, DBAs have a lot on their hands. It will be a challenge to focus on other tasks, especially if your company is suffering from reduced database performance. However, you don’t have to do these tasks manually.

A database management and monitoring solution such as dbWatch will assist you in tracking down resource-heavy queries. dbWatch Control Center monitors your database’s memory and storage, such as disk space, cache memory, and temporary tables. These are supervised by jobs that deliver the information right to your dashboard. Boxed in red are the jobs checking for memory usage.

Screenshot showing ways to fix and optimize database performance degradation

Database and server connections are also constantly checked by dbWatch Control Center. The monitoring dashboard lists all possible issues and provides detailed statistics on databases when a problem occurs. The screenshot below highlights, in red, the issues encountered when the job is scheduled to run. Most prominently, no backup plans were established for the listed instances, which categorizes them as recurring issues.

Monitoring Your Instances Constantly

The green box shows the statistics of instances. There are five categories with differing colors. Red indicates a major problem has occurred within that instance. Yellow shows that the DBA’s intervention may be needed to properly address that instance’s issue. Green highlights instances showing no signs of problems. Blue displays instances that are down, while gray points out added instances without monitoring jobs installed.

How dbWatch lists all possible issues and provides statistics

DBA in a Single Window


On the database management side, you can perform your database administration task in a single window. Upon accessing the management module, look at the left side pane of the screen. Here, you will see a tree-like structure (see Red Box) where you can do the following: 

1. Configure your database security
2. Alter indexes
3. Set up backups and restore them
4. Configure your SQL agent

You don’t need to access your database management studio to accomplish these tasks. All you need is your dbWatch Control Center monitor. 

Overview of dbWatch showing statistics.

How’s that for convenient? An all-in-one tool that provides solutions to your database problems.


To understand more about using dbWatch for database management, book a demo today.

Monitoring SQL Server error logs

Screen giving an idea of Monitoring sql server error logs

As a data platform expert who manages and maintains hundreds or maybe thousands of database instances, you may be looking for a better way to manage this number of servers efficiently. An environment this large, with many vital applications relying on those databases, can be quite difficult to keep track of. Especially when you’re busy with a lot of administration work, and with little resources and time, you might have difficulty prioritizing your daily tasks.


As your day goes on, you will encounter the usual errors, which happen from time to time. They may have different causes: for example, a user tried to log in but failed because their credentials had expired, a job failed due to a query timeout, or a backup job failed for an unknown reason.

Screen giving an idea of Monitoring sql server error logs

The errors mentioned are automatically logged in the SQL Server error logs. You can view them in your management studio by expanding the Management node and then SQL Server Logs (the SQL Agent node holds the agent’s own error logs); there you will see all the logs of your database instance.
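You can also read the error log directly with the xp_readerrorlog procedure; the search string below is just an illustration of filtering for login failures:

```sql
-- Arguments: log number (0 = current), log type (1 = SQL Server, 2 = Agent),
-- and an optional search string to filter the output
EXEC xp_readerrorlog 0, 1, N'failed';
```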


When monitoring tens or maybe hundreds of database instances, you need to be able to track and efficiently monitor errors happening in each instance, within the database farm you are managing.

How to Easily Track SQL Instances

You may think there is no easy way to manage and have full control of everything happening within each of your instances. Luckily, dbWatch can assist you with your daily monitoring tasks.


dbWatch contains various tasks that help you manage and monitor the error logs of your database instances. dbWatch helps you keep track of all errors encountered on your database farm. You can also monitor error log file sizes and check whether a file is growing too much or too fast within a period. And lastly, you can configure when you wish to recycle the error logs within your database instance, which helps free up the disk space consumed by the growing number of logs.

How to Track Error Logs Within dbWatch

  • You’ll need to start by going to the monitoring module.
  • Here you’ll select a SQL Server instance. Right-click and select Configure monitoring. (See Figure 1, below)
Selecting SQL server, screen shot
Figure 1

In this window make sure that you have installed the tasks for monitoring error logs in SQL Server. (See figure 2, below)

The following tasks are:

  • Instance error log
  • Instance error log file size check
  • Cycle error log

showing how to install the tasks for monitoring error logs in SQL Server

After installing the tasks, go back to the monitoring module.

In the Name column, type “error” into the filter to find the tasks regarding the error logs. (Figure 3, below)

how to input error on the filter to find the tasks regarding the error logs.
Figure 3

If you right click on instance error log, you have an option to run the task to retrieve the updated information of the collected error logs from the instance. (Figure 4, below)

How to retrieve the updated information of the collected error logs
Figure 4

Click on Details, to view the actual errors collected from this SQL Server instance. (Figure 5, below)

How to view the actual errors
Figure 5

A new window will appear, showing you details of the errors which occurred in your sql server instance.

For example, the “last lines read from error log” table displays all lines read from the error log file by this task during its last execution.


The error history details table shows the history of the last 20 collected records from your SQL Server error log file (error text defined by the “error text” parameter). (Figure 6, below)


Later in the blog, you’ll learn how to define the error text parameter to filter out errors that you’d like excluded.

Figure 6

SQL Server Error History

The error history graph shows the number of errors per day being registered in the log, helping you determine if the errors you are encountering are growing daily. (Figure 7)

SQL server error history graph.
Figure 7

You don’t want the monitoring task to consume too many resources, and if it takes a long time to read the error log, you should investigate (even if there are no errors in the log file). If the number of records read is very high, it could indicate potential problems in your instance. (See Figures 8 and 9, below.)

Showing a high readings in the SQL error log graph.
Figure 8
Another example of high error log readings in SQL Server
Figure 9

And lastly, there is the alert execution aggregated statistics graph. The left vertical axis shows the total rows read per day, and the right vertical axis shows the total time spent reading the SQL Server error log per day. This helps you determine whether you are experiencing an increase in errors on your instance. With this information, you can dive deep into the instance where the number of errors is growing.


Going back to the monitoring view, click on Configure. (Figure 10, below)

Image shows how to return to monitoring view
Figure 10

A configuration window will appear for this Instance error log task. (Figure 11, below)

The Confirmation window in dbWatch.
Figure 11

You can modify the configuration based on your requirements.

The options are:

  • Error text – defines which string (errors) the check must look for. Values must be separated with commas (,).
  • Error text allowed – Specifies the text strings (specified by the “error text” parameter) to be excluded when found in the same record (error log line).
  • History threshold – the maximum numbers of error messages the history table will retain (error_log_err_histr_tab table).
  • Last run – the last time the error log has been checked.
  • Max elapsed time – the maximum execution time (in seconds) for the task. If this threshold is reached a warning is raised.
  • Max lines per execution – the maximum number of lines the task will read during execution before a warning/alarm is raised.
  • Return status – return status value (ALARM – 2, WARNING – 1, or OK – 0) when “error text” is found (or “max lines per execution” is reached).
Click Apply to save the configuration.

You can set the schedule on when you wish this task to execute. (Figures 12 and 13, below)

How it looks when you schedule a task in dbWatch
Figure 12
The Pop-up window when scheduling a task in dbWatch.
Figure 13

Right click on the Instance error log file size check and choose Details.

Note: To install and make this task (Instance error log file size check) work, you need to enable xp_cmdshell in SQL Server by executing the commands below (Figure 14), or you can set it via the configuration parameter. (Shown later.)

 EXEC sp_configure 'show advanced options', 1;
 RECONFIGURE;
 EXEC sp_configure 'xp_cmdshell', 1;
 RECONFIGURE;



Instance error log file size check
Figure 14

A new window will appear showing you the information in your error log directory. It shows the error log file name, create date and file size. (Figure 15, below)

How the error log directory looks in dbWatch.
Figure 15

The error log file size history graph shows you information of your historical error log size growth rate. (Figure 16)

The error log file size history in dbWatch.
Figure 16

The error log directory size history graph displays the number of files within your log directory and the total file size of the accumulated logs. This information helps you decide when to consider clearing error log files that are no longer needed, as they consume additional disk space within your environment. (Figure 17, below)

Error log directory size history as seen in dbWatch.
Figure 17

Go back to the monitoring view, and right click again on the Instance error log file size check. Choose Configure. (Figure 18)

How to check on Instance error log file size check.
Figure 18

Again, you can modify the following configuration based on your requirements.

The options are:

  • Enable xp_cmdshell – if set to “YES”, the SQL Server instance configuration xp_cmdshell will be enabled. This parameter is required if you want to allow the task to execute operating system commands to discover the size of files and directories.
  • Error log directory size threshold – the maximum total size (in MB) of all files in the error log-file directory.
  • Error log file path – path where the error log file is located.
  • Error log size threshold – maximum size (in MB) of error log file before a warning or an alarm is returned by the alert.
  • File count threshold – the maximum number of files in the error log catalog.
  • History threshold – the maximum numbers of history statistics (in days) of error log file size.
  • Return status – return status value (ALARM – 2, WARNING – 1, or OK – 0) when the “* threshold” parameters are reached.
Click Apply to save the configuration.
Showing how to check cycle error log.
Figure 19

Choose Cycle error log. Right-click, then select Details.

A new window will appear which displays the specific log file and the date when it was recycled. This information gives you an idea if the error log was cleared on a specific date. (Figure 20)

The window shown here displays the specific log file.
Figure 20

Go back to the monitoring view. Right click and choose Configure.

A configuration window will appear which you can modify. (Figure 21)

The options are:

  • Cycle History – number of days to provide information when the error log and agent error log has been recycled.
  • Number of SQL Server error logs – by default, there are 7 SQL Server error logs – Errorlog and Errorlog 1 to 6. This parameter can be used to modify the registry value (REG_DWORD type) to the number of logs which you wish to maintain.
Click Apply to save the configuration.
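For reference, recycling can also be triggered manually with the built-in procedures below; the registry value mentioned above controls how many old logs are retained:

```sql
-- Close the current SQL Server error log and start a new one
-- (the old log becomes Errorlog.1, Errorlog.1 becomes Errorlog.2, and so on)
EXEC sp_cycle_errorlog;

-- The SQL Server Agent log has its own recycle procedure
EXEC msdb.dbo.sp_cycle_agent_errorlog;
```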
A configuration window to modify.
Figure 21

Summary of Monitoring SQL Server Error Logs

Now you have an idea on how you can easily track the various errors happening within your database farm. You can set the return status so that you will be notified once a certain baseline, or error has been identified by dbWatch.


You can also configure your own parameters or baselines based on your business requirements. It is a great way to customize different alerts based on the errors retrieved by the dbWatch task.


Monitor your database error logs, try dbWatch Control Center today

InnoDB: High Performance vs. Reliability with dbWatch

If you are a developer that deals with MySQL or a MySQL database administrator, you probably know what MySQL database engines are. One of the most popular database engines as far as MySQL or MariaDB is concerned is InnoDB. This storage engine is very widely regarded as a high-performance storage engine that also balances high performance with high reliability.


This storage engine replaced MyISAM as the default in MySQL 5.5, which was released in 2010. This blog post will go through what MySQL can offer in this space and how dbWatch can help monitor performance and reliability issues.

How does InnoDB Ensure High Performance and Reliability? 

If you ask a MySQL database administrator or a developer who deals with databases how InnoDB ensures high performance and reliability, you will probably hear the term “ACID” mentioned. As it relates to databases, ACID is an acronym for four words:

  • Atomicity
  • Consistency
  • Isolation
  • Durability


Here’s how InnoDB ensures that the ACID parameters are being followed:

  • Can ensure that statements in a transaction operate as an indivisible unit, and their effects are either seen collectively or not seen at all.
  • Has logging mechanisms that record all of the changes to the database.
  • Provides row-level locking.
  • Tracks all of the changes to the system by maintaining a log file.

It is worth noting that not all InnoDB instances are ACID-compliant “out of the box”. ACID compliance for InnoDB is controlled by the innodb_flush_log_at_trx_commit variable in my.cnf.


This variable has three possible options: zero (0), one (1), and two (2). The default value is 1 – this value makes InnoDB ACID compliant. The other two values, 0 and 2, can be used to achieve faster write speeds, but then InnoDB will no longer be ACID-compliant, and so the engine can lose up to one second’s worth of transactions.


In general, the innodb_flush_log_at_trx_commit parameter controls how fsync operations are performed. fsync() is a Linux function that transfers (“flushes”) all modified data, forcing a physical write from the buffer cache. It ensures that all data up to the time the fsync() call was invoked will be recorded on disk, even after a system crash, power outage, or any other hiccup.
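As a quick sketch, you can inspect and change this setting at runtime; the comments summarize the three modes described above:

```sql
-- Check the current value
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

-- 1 (default): write and fsync the log at every commit - fully ACID
-- 0: write and flush about once per second - fastest, may lose ~1s of transactions
-- 2: write at every commit, flush about once per second
SET GLOBAL innodb_flush_log_at_trx_commit = 1;
```

A SET GLOBAL change does not survive a restart; make it permanent in my.cnf.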

How can dbWatch Help Ensure InnoDB High Performance and Reliability? 

If you want to ensure that your MySQL InnoDB instances follow high performance and reliability principles, keep an eye on dbWatch. dbWatch has quite a few jobs aimed at ensuring that your InnoDB instances follow high-performance and high-reliability principles. Here’s how that looks at the time this blog post is written:

Image 1 – dbWatch Performance jobs

Simply expand the Performance job section and you will see a couple of database jobs that can help you monitor the binlog cache, your database load, and your lock statistics. It can show you your memory setup, your query cache hit rate, session load, temporary table status, etc. But we are interested in one job in particular: the InnoDB buffer pool checking job.


Right-click the job, click Details, and you should see this screen which explains what the job does in detail:

Image 2 – InnoDB buffer pool hit ratio details

This graph depicts the hit ratio for the InnoDB buffer pool. In order to ensure that your InnoDB instances follow high-performance principles, aim for:

  • The hit ratio to be as high as possible – when InnoDB cannot read from the buffer pool, the disk is accessed. Queries hitting the disk are usually slower.
  • A large InnoDB buffer pool value – the larger it is, the less disk I/O is needed to access data in tables.


To set these parameters up, you might want to make use of the free -h command (this command displays how much RAM is free on your system in a human-readable format). To make a good decision, evaluate your project needs upfront and account for the RAM usage of the applications that will run on your server.


To account for the InnoDB buffer pool value properly, keep in mind that this value can be set to up to 80% of free memory on Linux (on Windows machines, it’s a little bit less). The more memory you allow for InnoDB to use in this scenario, the more performant it will be.
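For example, on MySQL 5.7.5 and later the buffer pool can be resized online; the 6 GB figure below is purely illustrative and should come from your own free -h numbers and application requirements:

```sql
-- Resize the InnoDB buffer pool at runtime (MySQL 5.7.5+);
-- for earlier versions, set innodb_buffer_pool_size in my.cnf and restart
SET GLOBAL innodb_buffer_pool_size = 6 * 1024 * 1024 * 1024;  -- 6 GB
```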


dbWatch also shows you a graph that depicts the actual usage of the InnoDB buffer pool by your database instances. It shows the total number of buffer pool read requests and how many of them accessed the disk:

Image 3 – the usage of the InnoDB buffer pool

dbWatch also allows you to configure this job easily. Right-click and click Configure, and you should see this screen:

As you can see, dbWatch also lets you configure the hit ratio alarm and warning thresholds, meaning that you will be presented with a warning or an alarm if the InnoDB buffer pool hit ratio falls below specific percentage values.



InnoDB is widely known as a high-performance and high-reliability storage engine among developers who deal with MySQL and among MySQL DBAs. It’s important to push your InnoDB instances to the next level and help ensure they stay highly performant and reliable.


Keep your database instances running smoothly, try dbWatch Control Center today

How is increased complexity harming your DB performance?

Dealing with a wide variety of databases, platforms and versions is a reality for many companies. While it would be nice to be able to have one platform to handle all tasks necessary within an organisation, the reality is that different needs require different solutions. As a result, we create (or at least are forced to work with) diverse systems.

A variety of systems can be great for providing solutions which might not be easily available within one database platform. Unfortunately, this diversity also can bring unwanted complexity to the database management process. When a system becomes too complex, it can sometimes feel like you are losing track of the needs and goals of the organisation. While solving one problem, another problem occurs in a different database, which behaves entirely differently than the one you are working on.

Problems start to snowball, and you feel like you are losing your grasp over the multitude of systems.

In order to address this, we need to step back and make a list of the various things that can go wrong.
As many in the therapeutic fields state, “the first step is recognising that you have a problem.”

Okay, you’ve already reached that point (otherwise why would you be reading this?). So let’s get to the second step; at this point we need to figure out and document what the problems are; we can’t get started solving problems unless we know where they exist, right? Only after we understand what the pitfalls are in our database management process can we begin to map out solutions.


Complex Environment

We often don’t have the option of having our entire array of databases running on the same platform, and even if we do, they are likely not all running the same system. You may have any combination of SQL Server, Oracle, Sybase, PostgreSQL, MySQL or more (not to mention MongoDB or other NoSQL databases).

We could also be hosting different instances on different virtual servers, often in multiple locations. Each of these platforms may have useful tools for tracking down problems, but they don’t all work for all systems. Simply trying to keep track of everything slows pretty much everything down.

Ineffective Use of DBA Time

Your DBAs may be spending a lot of time fixing a small but pesky problem, while missing the fact that there’s a larger systemic problem that may be more important in the long run.

For example, maybe there is an annoying problem where an index keeps getting dropped on one particular table. One of your DBAs sees this, and rebuilds it, and moves on to something else. It gets dropped again the next day, and another DBA fixes this. Or maybe it gets dropped in the middle of the night, and nobody knows why; it happens silently, but the system has slowed down and the next day time gets wasted trying to find the cause.

While working on individual problems, it may be missed that there’s a fundamental issue with the database structure itself. Perhaps a larger fix, which may take a little more time, may solve the problem once and for all. While this might be caught by someone paying attention at a specific time, it’s quite easy to miss these, so the same work gets repeated over and over again.

Troubleshooting Time

Often, as is described above, it’s difficult to see what happened. The DBA is staring at the problem without seeing a larger picture of why it occurred, so he or she spends a lot of time trying to track down the root cause. There’s no overview so unless detailed notes are kept (in ways that everyone understands), the same process will likely need to be repeated each time it occurs.

Firefighting Becomes The Main Focus

Instead of being able to perfect a particular database, there’s little time for planning and tuning, much less redesigning databases, as putting out fires becomes a daily activity. Simply having to take your attention away from a project can slow you down in exponential ways.

Even if a fix is quick, the time and energy spent solving problems (especially if they are complicated ones) can be extremely distracting. Simply getting back into the groove on the original project takes time, and it becomes even worse if something else goes down on a different system, server, or DB instance.

At the end of the day, there is virtually no time to do any preventative maintenance or improve the overall flow of your procedures. Even if you manage to come up with a few good scripts that can handle some of this, one that will work on Oracle will not work on SQL Server, and definitely not on PostgreSQL. The complexity itself becomes the enemy.

Finding Answers

The first step is to remember these two words:

Don’t Panic.

Simply understanding where the problems are can go a long way toward helping you find solutions, and we are already partly there. Most of the problems associated with complexity tend to stem from each other.

What you need is an overview.

One of the main problems in a complex system is not knowing what is happening to all of your systems at once. It is the classic problem of only seeing the trees, but not the entire forest.

If we can find a way to step back and see the entire forest at once, patterns will begin to appear. Ideally, you would be responsible only for the larger picture, without getting stuck in the mud of individual issues; however, as we know, that’s typically not an option.
And even then, you’d still need a way of viewing all of these issues at once, rather than poring through logs just to find patterns. See where I’m going here?

The main point is that to see what is going on, you need a good tool: software that can enable you to take a step back. Maybe this isn’t your primary role, but if problems can be spotted before they arise, then half of your problem is solved.

This tool needs to be able to examine multiple systems and platforms at once. Instead of jumping from one tool to another, each of which has a different interface, functionality, and with varying options for customisation; it’s best to have something that can give you a high-level view of all of your systems.

Once we have this, the rest of the problems start to solve themselves. If we can spot problems before they arise, there’s better communication and less need for firefighting.


4 Warning Signs You’re Out-growing Your Database Systems

Modern businesses of all sizes rely on large and scalable databases for the smooth running of their IT systems. From logging transactions and stock levels, to providing staff logins and sharing information throughout a business, databases and their associated server farms provide essential functions for organisation in every industry.

But, as more and more business is conducted online, and as every business moves to digital storage and transmission of data, those databases and server farms are growing larger than ever. Because of continued expansion, many server administrators and IT managers are finding that their servers and database management tools can no longer keep up with demand, and that their business has outgrown their hardware.

Here are four signs that you might have outgrown your business’ database tools and infrastructure:

Poor Performance

A key indicator of a database that has been outgrown is poor performance for the system’s users.


Whether it’s resulting from a lack of storage, inadequate hardware, slow connections or a lack of proper maintenance and management, a poorly performing database system can negatively affect almost every aspect of your business. With applications performing poorly and users unable to access what they need when they need it, your business could be running inefficiently, leading to lost business and wasted money.


While a scalable server farm can help to alleviate some of the strain, you can also look to management and monitoring tools to track down and analyse performance issues and to better maintain your services.

Too Many Tools

If your morning starts with logging into multiple database tools – say more than four – it could be a sign that it’s time to clean up your server environment and make use of tools specifically designed for a system of your size.


While multiple tools may work on small numbers of instances, larger systems can often benefit from tools that scale with them – avoiding a duplication of data and effort as well as cutting down on licensing costs and staff training time.

Slow Reporting Performance

If you dread running reports and auditing your systems, or often find yourself running reports overnight to avoid system downtime or busy periods, it might be time to think about improving your database systems and tools.


Getting accurate data quickly is vital for many businesses, and having access to reports on demand can mean the difference between making or losing a lot of money. And while some database monitoring and reporting tools can cope well with small numbers of instances, as your database and server numbers grow you may need to invest in new scalable services and infrastructure.

High IT Costs

Another sign that your business might be running an inadequate database system is if your IT maintenance costs have risen significantly.


As databases and their servers grow, they often require more maintenance and the use of specialised tools to help keep them running smoothly. If your DBA is facing a daily battle to keep your business online and has had to resort to patches and quick fixes over a strategic plan for expansion, it might be time to make a proper investment into your infrastructure.


Taking the time and money to create a stable, scalable and large-enough system for your business can save all of your staff many headaches down the line – and spiralling costs and a beaten-down DBA team are sure signs that you need to take action.


You can also potentially cut your IT costs by working to consolidate your servers and by ensuring that you’re making use of the correct software licences. Many businesses can keep costs down by using the right sized licences, making sure that they aren’t paying for something that they don’t need or don’t actually use.


Keeping your database and server instances at an appropriate scale for your business, and keeping them running smoothly, is essential in today’s data-driven world. With the help of scalable tools, proper planning, and investment in the correct services, your system can continue to grow to support your business for years to come.


Understand more about database management, book a demo today.