Database Locks – How to Monitor and Manage


Database locks have long been the bane of most DBAs.


They cause unnecessary problems for databases and business processes. In this blog, we will examine what database locks are and the issues that can arise from them, and provide ways to deal with them on SQL Server.


We will also demonstrate how to handle them in dbWatch Control Center as an example.

What is a database lock?

Database locks occur to secure shared resources during transactions. Locks safeguard databases by:

  1. Enforcing an all-or-nothing outcome for multiple, separate transactions (atomicity)
  2. Preserving consistency in the database state (consistency)
  3. Isolating uncommitted transactions from one another until each is complete (isolation)
  4. Preserving committed transactions even in the event of abnormal termination (durability)


They adhere to the ACID properties of a transaction. To put it in perspective, imagine a piece of paper being shared by three writers.


The graphic below gives an example of how locks work. Writer One finishes their message and gives the paper to Writer Two. Writer Two then begins to draft their message. All other writers must now wait for Writer Two to finish writing their message before they can write anything on that piece of paper.

Visual representation of ACID properties

Locks work in the same fashion as the example above: they prevent simultaneous writes and updates, maintaining data integrity.

Locks within Database Access Levels

Locks come in a variety of forms and occur at different levels of access to a database. Below are the kinds of locks that can occur:

  • Row Level Locking
    Most common type of locking. It locks a row of a table.
  • Column Level Locking
    Simply put, it locks a column or columns within a table. Database vendors do not commonly offer this since it requires a lot of resources to apply it.
  • Table Level Locking
    Locks an entire table. It prevents another user from modifying the table's properties or updating any of its rows.
  • File Level Locking
    Locks database files, preventing alterations to an entire table, parts of that table, or another table. This is the least preferred method of locking since it can cause unintended locking.
  • Database Level Locking
    Restricts changes within a database to a single database session. This type of locking is rarely seen, but it is widely used during software updates.


Database locks most commonly appear during record updates or uncommitted SQL statements. They primarily isolate a database component, such as tables, so that session updates and alterations will successfully take effect and no data loss will be experienced.


Locks are not a problem when only a few active sessions transact in your database. However, as more database users access and utilize your databases, locks will impact your database resources and potentially your business. There are three kinds of locking problems I will discuss: lock contention, long-term blocking, and deadlocks.

The Most Common Table Locks in SQL

First, it’s important to understand what the most common table locks are. After that is clear, we can start working on solving the problems.

Lock contention

The most common form of SQL table lock is lock contention. It happens when concurrent sessions compete to update the same data. As discussed earlier, locks are enforced when a user alters a table property or updates data. This, in turn, blocks other sessions from accessing that same database component, e.g., a table. SELECT and INSERT statements against that table run slower as they wait for the lock to be released. Worse, lock contention may lead to indefinitely blocked access to a particular table and high CPU usage on the database server.
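To see contention as it happens, you can query SQL Server's lock DMV directly. This is a generic sketch, independent of any monitoring tool:

```sql
-- List lock requests that are currently waiting (i.e., contended locks)
SELECT tl.request_session_id,
       tl.resource_type,
       DB_NAME(tl.resource_database_id) AS database_name,
       tl.request_mode,        -- e.g. S, X, IX
       tl.request_status       -- GRANT, WAIT, CONVERT
FROM sys.dm_tran_locks AS tl
WHERE tl.request_status = 'WAIT';
```

Sessions listed here are stuck waiting on locks held by someone else.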

Long Term Blocking

This leads us to the second kind of table lock in SQL: long-term blocking. Like lock contention, long-term blocking happens when multiple users access the same database component. The notable difference is that the blocking user holds on to that component for a long time, so dependent sessions and objects are blocked from reading and writing.


Database Deadlocks

Database deadlocks happen when two or more database sessions each hold a database object that the other's transaction depends on. Each session can only continue if the other releases its hold on the dependent object.


The flow chart below shows, in simplified form, how deadlocks occur. Database Management Systems (DBMS) can detect deadlocks: the DBMS selects a victim session and rolls back that session's transaction. The only caveat is that the DBMS typically chooses the transaction that is cheapest to roll back.

Simplified visual of two session Deadlocks

Sure, deadlocks are manageable when they only concern user-driven ad-hoc queries. But when applications and SQL jobs are involved, they can lead to data loss and logic problems. Transactions that are highly dependent on timing and data may end up writing or outputting incorrect information.
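A minimal way to reproduce this cycle, assuming two hypothetical tables t1 and t2 that each contain a row with id = 1:

```sql
-- Session A:
BEGIN TRANSACTION;
UPDATE t1 SET val = 1 WHERE id = 1;   -- A locks the row in t1

-- Session B:
BEGIN TRANSACTION;
UPDATE t2 SET val = 1 WHERE id = 1;   -- B locks the row in t2

-- Session A:
UPDATE t2 SET val = 2 WHERE id = 1;   -- A blocks, waiting on B

-- Session B:
UPDATE t1 SET val = 2 WHERE id = 1;   -- cycle complete: SQL Server detects the
                                      -- deadlock and kills one session (error 1205)
```

The victim's transaction is rolled back; the surviving session continues as if nothing happened.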

What can I do when database locks occur?

Now that we better understand the kinds of locks and how they occur, we can discuss how to deal with and prevent them in the future. First, you need to understand how a lock can occur in a session.


I have prepared two separate sessions to act as our users in this example: Session 1 will be our first user, and Session 2 our second.


Figure 1 below shows a simple query that locks the designated table, test_table. After that, we will execute another query in Session 2 that selects from that table.


How to select the specified table when dealing with database locks.
Figure 1
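A query along these lines reproduces the situation in Figure 1 (the column names are assumed for illustration):

```sql
-- Session 1: open a transaction and leave it uncommitted,
-- holding an exclusive lock on the updated row
BEGIN TRANSACTION;
UPDATE test_table SET col1 = 'locked' WHERE id = 1;
-- no COMMIT/ROLLBACK yet: the lock is held until the transaction ends

-- Session 2: this statement blocks until Session 1 releases the lock
SELECT * FROM test_table;
```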

USE master;
EXEC sp_who2;

The stored procedure above displays, in tabular form, the information you need to see who is blocking whom. As seen below, in Figure 2, session 54 (Session 2) is being blocked by session 76 (Session 1).


Showing how the blocked session is highlighted.
Figure 2

Similarly, you can use the following query to get the information you want:


USE master;
SELECT session_id, start_time, command, status,
       blocking_session_id, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;

You will still end up with the same information as before.


Identify the blocked session in database before killing the query.
Figure 3

Once you have identified the blocking session, you can kill it using the command below:


KILL <blocking_session_id>
-- in this example two queries were executed:
-- KILL 76
-- KILL 54
-- You can also use: KILL <session_id> WITH STATUSONLY

Killing a session terminates it and rolls back its open transaction; the user will have to reconnect. In this example, session_id 54 (Session 2) was killed.


Showing the killed session message for a locked database fix.
Figure 4

That seems more straightforward than expected, but this is still a bare-bones example with negligible impact on our business processes and databases. In reality, killing a session rolls back its transaction, undoing any uncommitted changes to the database state. Kill sessions with caution: recovering the lost transaction may take a long time, and it may affect dependent objects during data updates and writes.


You need sufficient knowledge when dealing with table locks; do not nonchalantly kill a session as a quick fix. In scripts using BEGIN TRANSACTION, create savepoints after every critical alteration to the database, and remember to end your transactions with ROLLBACK or COMMIT, since open transactions hold the locks on these tables. Finally, only kill sessions when doing so will have minimal impact on your business operations and data dependencies. This is your last resort.
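The savepoint pattern can be sketched as follows (the accounts table and savepoint name are hypothetical):

```sql
BEGIN TRANSACTION;

UPDATE accounts SET balance = balance - 100 WHERE id = 1;
SAVE TRANSACTION after_debit;          -- savepoint after a critical step

UPDATE accounts SET balance = balance + 100 WHERE id = 2;

IF @@ERROR <> 0
    ROLLBACK TRANSACTION after_debit;  -- undo back to the savepoint only

COMMIT TRANSACTION;                    -- always end with COMMIT or ROLLBACK
```

Rolling back to a savepoint releases only the work done after it, rather than discarding the whole transaction.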


Another option is to utilize database monitoring and management tools such as dbWatch Control Center. This not only saves you time with querying for information but also gives you an overall view of the state of your database.


How to Fix Table Locks in SQL with dbWatch Control Center

Since dbWatch Control Center is both a monitoring and management solution, we will do both in one active window. For reference, we will simulate the previous example step-by-step using Control Center.


How to simulate Database locks in dbwatch Control center
Screenshot 1

First, we will change both the warning threshold and the alarm threshold. The picture above shows the default values for the blocking statistics: a warning status appears when blocking lasts more than 5 minutes, and an alarm status is raised when it lasts more than 30 minutes.


How to configure blocking statistics when Simulating Database locks in dbwatch.
Screenshot 2

I will alter it to warn me when blocking reaches 4 minutes and to raise an alarm when it exceeds 12 minutes.


How to know if blocking has occurred when Simulating Database locks in dbwatch.
Screenshot 3

The screenshot above confirms that blocking has already taken effect.


Using a query to see which database locks are occurring.
Screenshot 4

Using the earlier query, I checked whether table locks were occurring in the database. They were: session 75 is blocking session 58. For the following steps, we will not need Management Studio open; these two screenshots are only here to confirm that table locks are occurring.


Screen shot of monitoring module for database locks.
Screenshot 5

In the monitoring module, the blocking statistics job checks whether table locks are occurring in the database. As we defined earlier, the monitoring job warns us after 4 minutes of blocking. Clicking on the blocking statistics shows a complete overview of the following:


  • Who? – which sessions are blocking which
  • What? – what query is being executed
  • Where? – which management tool host the blocking is occurring on
The alarm status for a locked database.
Screenshot 6

After waiting a little while, the status is changed to an alarm, indicating that it has already passed the 12-minute mark.


The tree in the monitoring module.
Screenshot 7

Now, we will go to the monitoring module. In the tree, find “Performance” and click on the arrow adjacent to it. Underneath it, you will see “Blocking Session.” Click the arrow beside it to display “Blocking Session History,” which helps us determine whether logs are present for the lock. As we can see in the above screenshot, the lock was recorded, sorted starting from the latest blocking.


The blocking sessions module that manages databases in dbWatch.
Screenshot 8

After confirming the logs, we will return to “Blocking sessions” to manage our database. Two sections will appear on that view – “Blocking sessions” and “Blocked Sessions.” One record should appear in each section. By right-clicking on the record, you can “Kill the session” or “Show current SQL.” For now, we will not kill the session. We will investigate further.


Where to check both sessions to resolve a locked database.
Screenshot 9

Checking both sessions with “Show current SQL” shows that Session 2 (SPID 76) only runs a select statement, while Session 1 (SPID 58) is blank, indicating that a transaction is in progress. Knowing this, I can terminate Session 2, since it only runs a select statement, or terminate Session 1 to stop the transaction. In this example I killed Session 2, since it was only a select statement.


The locks sql server error message.
Screenshot 10

Session 2 has now been terminated, releasing the lock in the process. Session 1 continues to execute its transaction.


In summary, tools like dbWatch make it much easier to deal with locks. You get alarm notifications, quick insight into the situation, and the ability to resolve the issue quickly.

Discover how to best manage your database locks: try dbWatch Control Center today.


Analyzing SQL Server With New Performance Tools


Analyzing database performance on Microsoft SQL Server is a complex issue. In dbWatch Control Center, our primary aim is to provide structure and toolsets to avoid running into performance problems in the first place.

Consistently deploying preventative maintenance routines takes care of many problems that build up over time, such as index and table fragmentation and stale statistics. This foundation ensures we don't waste time investigating preventable causes.


As most DBAs will know, unexpected performance issues can occur even with the best foundation, and we will need to be able to investigate the underlying causes.


For this reason, dbWatch Control Center has a lot of monitoring jobs focused on monitoring the behavior of different performance metrics. At its core, all performance issues occur when resource limits are met. It can be the CPU, disk, memory, or internal database limits.


The first thing we need to do to start our investigation is to find the time window in which the performance issues occurred. The typical problem is that one or more SQL statements overloaded the database resources during the period in which they were running. In this blog, we will briefly investigate performance issues caused by SQL statements.


In the SQL Performance section in the management module of dbWatch Control Center, you can look at historical resource usage, together with the SQL statements that were performed in that time period.



In this section, we can see spikes in logical/physical reads and writes, which often (but not always) coincide with performance loads. We can also look at other metrics in the same time frame:

  • Time – the elapsed time for all queries in a time frame, which could indicate slow query response
  • Waits – spikes where the database is waiting for specific resources
  • CPU – any spike in CPU load
  • Data cache – whether there was contention for memory resources

plus other graphs focusing on metrics that could be relevant.



Once a peak has been identified, you can use the mouse to zoom in on the time period.

The SQL performance dashboard will then zoom in, and looking at the SQL handle statistics, it’s easy to see what SQL handle is responsible for this peak.


You can then right-click and select “Show statistics” to open the SQL handle and view the SQL code, its statistics, and the SQL statements inside that SQL handle.


If we here are interested in the SQL statement in this SQL handle that is causing the most logical reads, we can quickly identify it in the list and right-click to show this SQL statement or open the query plan in SQL Server Management Studio for further investigation.


If we choose the “Show SQL statement and statistics,” we can open the SQL statement.


In this case, it was the fetch statement in the procedure. In the SQL code for the procedure, we can see that this is the select_cur cursor, which is defined as:




DECLARE @col1 INT, @col2 VARCHAR(1000)

DECLARE select_cur CURSOR FOR
    SELECT * FROM dbw_test_tab


This lets us quickly see what is going on: in this case, a select statement from a performance load routine, so nothing out of the ordinary.
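Outside Control Center, a comparable list of the statements driving the most logical reads can be pulled from the plan-cache DMVs. A rough sketch:

```sql
-- Top 5 cached statements by cumulative logical reads
SELECT TOP (5)
       qs.total_logical_reads,
       qs.execution_count,
       SUBSTRING(st.text,
                 qs.statement_start_offset / 2 + 1,
                 (CASE qs.statement_end_offset
                      WHEN -1 THEN DATALENGTH(st.text)
                      ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```

Note that these statistics only cover statements whose plans are still in cache.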

For a more in-depth look at the SQL performance dashboard, see the dbWatch wiki pages.


With dbWatch Control Center, SQL Server DBAs can monitor their resource usage trends and analyze the peak hours in their database environment based on the available charts.


dbWatch Control Center assists SQL Server DBAs in running their databases efficiently, from proactive database performance monitoring to managing SQL Server tasks.


Control Center also lets SQL Server DBAs play back their historical performance to drill down and pinpoint problematic queries from the past few weeks.


Control Center offers the following:  


  • Farm overview – consolidated views for performance, capacity, and maintenance for your entire database farm 
  • Monitor in-depth all your instances and platforms 
  • Performance – identify performance bottlenecks within the farm 
  • Scalability – ability to adapt to your growing database farm 
  • Alerting – alerting and third-party extensions tailored to your business requirements 
  • Security – role-based access controls, Active Directory and Kerberos integration, encrypted connections, and support for security certificates


You can try dbWatch Control Center today. Use this link to download a version of Control Center that can monitor up to 5 instances for free:  Download dbWatch Control Center  


If you have any questions or you would like to know more about how dbWatch Control Center can assist you in your current enterprise database monitoring and database management situation, feel free to contact me directly:     


For more information, visit the dbWatch wiki pages.





Monitor your SQL Server Indexes



I want to view the performance of my SQL Server indexes across the database farm and determine the top instances by index disk usage. I also want to look for servers that encounter problems when running the update index statistics job.


dbWatch Control Center can monitor your Microsoft SQL Server’s index usage rate and perform database index maintenance to make sure that your servers are running smoothly. 


Microsoft recommends reorganizing indexes with a fragmentation level between 5% and 30%; if fragmentation is greater than 30%, you should perform an index rebuild instead.
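You can check where each index in the current database stands against those thresholds with the physical-stats DMV; a sketch:

```sql
-- Fragmentation per index, with the suggested action
-- based on the 5%/30% guidance above
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE
           WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
           WHEN ips.avg_fragmentation_in_percent >= 5 THEN 'REORGANIZE'
           ELSE 'OK'
       END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE i.name IS NOT NULL
ORDER BY ips.avg_fragmentation_in_percent DESC;
```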

In this blog, you will learn how to analyze the database indexes within your database farm and determine which instances have SQL Server maintenance plans or index maintenance jobs running longer than usual, using dbWatch Control Center features.

Example 1: Index Analyze Statistics Job 

In the monitoring view, right-click on your SQL Server instance and click Configure Jobs.


A new window will appear. Tick the job Index usage statistics (all databases) and click Install.


As seen in the screenshot below, this job collects database index and table usage, showing information such as the object/table name, index type, and total seeks and scans. It tells you how big your tables are and whether they are mostly accessed via index scans or index seeks. Ideally, you want your indexes to be used with index seeks.


In the Update index statistics job view, you can see useful information such as the database name, the index count per database, and the total indexes (clustered/non-clustered) analyzed per database. It also shows the number of errors, if any, during execution of the update statistics task.


In this view you can easily sort the columns in ascending or descending order. This helps determine which instance within the database farm has the longest or shortest elapsed time when analyzing indexes, which is useful when some maintenance jobs take longer than usual to finish.


As an example, suppose you have a weekly scheduled index rebuild job. From week 1 to week 7, its average elapsed time was around 15-20 minutes. Suddenly, in week 8, the dbWatch Control Center view shows it took 80-90 minutes to finish. The elapsed time increased to four times its usual value, which tells you something is going on with your SQL Server indexes; from there, you can start investigating the cause of the problem.

Example 2: Index Reorganize and Rebuild View 

dbWatch Control Center also provides index maintenance jobs such as index rebuild and reorganize.
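Jobs like these ultimately issue statements along the following lines (dbo.my_table is a placeholder name):

```sql
-- Lightweight defragmentation; always an online operation
ALTER INDEX ALL ON dbo.my_table REORGANIZE;

-- Full rebuild; ONLINE = ON requires Enterprise edition
ALTER INDEX ALL ON dbo.my_table REBUILD WITH (ONLINE = ON);
```

Reorganize is cheaper and interruptible; rebuild recreates the index from scratch and also refreshes its statistics.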


This view displays the index rebuild/reorganize statistics. Here you will see the list of instances, the number of databases checked within each instance, and which databases were rebuilt and which were ignored.


In this view you can sort the columns to determine which instance consumes the most disk space.


The important part of this view is the size before and after the index rebuild and the disk usage reclaimed by the operation.

If you right-click on a selected instance, you can execute the task, starting an index reorganize/rebuild on the instance's databases, or configure the task and its schedule.

If you choose Details, you will see a report on the history of reorganize or rebuild executions within that instance, including the average table and index fragmentation and the disk usage before and after each operation.


With the examples provided, you can determine which instances have the biggest database index disk usage within your database farm. You can also assess whether a given index operation schedule should be adjusted, based on the performance and other details shown in the reporting view.

With dbWatch Control Center, database admins can keep track of their SQL Server indexes and their performance. As a DBA, you can analyze how your database indexes are being used in your environment and plan how to maintain them efficiently.

dbWatch Control Center assists database administrators in running their databases efficiently, from proactive monitoring of database performance to the management of tasks.

Discover how dbWatch Control Center can monitor your SQL Server indexes: try dbWatch Control Center today.

Running Databases: Docker vs. Alternatives


For many developers, Docker is a frequent friend. Some even turn to Docker to run their database systems: applications are safe in containers, containers are usually lightweight, and containers let devs bundle all the software they use into “boxes” called Docker containers. But how do they compare to the alternatives?



Running Databases in Containers 

Running software in containers is mainstream, but what about databases or database servers (for example, SQL Server, MySQL, Oracle, PostgreSQL, MariaDB)? Have you ever heard of a developer running their databases in containers? Most likely not, because doing so requires extra resources and dedication to the craft of containers. To run your databases in a containerized environment and use database monitoring properly, here's what you should know:

  •  Containers let you approach your databases as an on-demand utility (each application can have its own database.) 
  • Containers let you have less “mess” on your machines – you always know what’s where. 
  • Multiple containers can run on the same OS. 

Running databases in containers is not very easy, though. If we think of containers as a lightweight alternative to VMs, we quickly realize that each container might run a different operating system, and in the case of databases, each DBMS would likely be “encapsulated” in its own container image. When dealing with Docker, we usually also need .yml files that define the services our containers run.
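As an illustration of that .yml overhead, a minimal docker-compose sketch for a SQL Server container might look like this (the image tag and environment variables follow Microsoft's published container images; the password is a placeholder):

```yaml
services:
  mssql:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "YourStrong!Passw0rd"  # placeholder; set your own
    ports:
      - "1433:1433"
    volumes:
      - mssql-data:/var/opt/mssql               # persist data outside the container
volumes:
  mssql-data:
```

Even this small example shows the extra moving parts: EULA acceptance, port mappings, and a named volume just to keep your data when the container is recreated.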


Everything looks pretty complex, doesn’t it? How do we decide whether we need that in regards to databases? It might help to look at alternatives before answering this question. 

Alternatives to Containerization with Databases

The most frequent alternative to containerization is simply running a database on a standalone server. This process is quite a lot easier than running databases on containers, and it can easily be accomplished both locally and in a live environment. On a local server, you need to spin up WAMP or its alternative if you’re using Linux, and when you’re in a live environment, buy access to shared hosting, a VPS, or a dedicated server if you’re building services requiring more hardware resources, and you should be good to go. In many cases, databases will even come pre-installed to the services you use. 

What’s Better? 

If you ask a developer or even a database administrator which option is better, running your databases on Docker or in a more “natural” environment on servers, you will probably hear an answer like “Don't bother with Docker.” Want to know why? Running databases in a containerized environment isn't always a smart choice due to complexity: Docker might bring less clutter, but the way it's structured can bring more headaches.


Running your databases on Linux is the traditional and straightforward way to get on the web. When monitoring is concerned, everything can be set up in a more “traditional” fashion when containers don’t get in the way. Look at dbWatch: define your connection details, and you’re done! Isn’t that awesome? 

After your database instances (of whatever database management system flavor you're using) are imported into dbWatch, they can be monitored. Within minutes, a range of expert-made database jobs can be run.

With a wide variety of database jobs, dbWatch proves a good choice for many business owners and database administrators: a couple of clicks, and many database issues are handled. There's no need to reinvent the wheel and run databases in a containerized environment using Docker. In fact, not many people run their databases on Docker at all, and for good reason: doing so is complex even for advanced database administrators. In many cases, database administrators are not sysadmins, so the traditional way is preferred. As for junior developers and DBAs, Docker is not a good choice for them either; the added complexity may steer them away from the actual database work.

Summary of Running Databases: Docker vs. Alternatives

As far as containers are concerned, they certainly have their own upsides, but running databases inside of them isn’t one of them. 


Running your databases in a traditional, non-containerized way lets you develop and roll out features faster, and you will be able to solve issues faster as well. For example, post a problem you're having with your databases on Stack Overflow, and chances are you will instantly be asked “Why are you using Docker?” or “Isn't the traditional way good enough for you?” So consider the traditional path instead. Whatever path you choose, remember that there are plenty of database monitoring tools, and dbWatch is one of them.

Understand more about database management, book a demo of dbWatch today.

Track your SQLServer database compatibility level



In this blog, I will discuss SQL Server's database compatibility level. Using dbWatch, we will check the current setting and configure the right compatibility level for the version you are using.


Each new version of SQL Server introduces new functionality and performance benefits. These are identified by compatibility levels and cardinality estimators.


Each database version is associated with a compatibility level, so if you want to use the newest features available for your database, you need to check and set the right compatibility level for your instance.


As a DBA, you must ensure the compatibility level of each database matches your instance's current SQL Server version.


A common problem in enterprises is waiting too long to migrate to a newer version of SQL Server.


Some DBAs end up migrating in two steps. For example, they first migrate a SQL Server 2000 database to SQL Server 2008 R2, and then migrate it to SQL Server 2019. This is a reason NOT to wait too long before migrating a SQL Server database to the latest version.


Sooner or later, your compatibility level will have to change, and this will impact your applications and processes. By doing it sooner, you will reduce the amount of work you will have to do later.

You can manually check each database's compatibility level by opening the database properties window in SQL Server Management Studio.

But at the database farm level, especially when managing hundreds or thousands of databases, this is not an efficient way to investigate the compatibility levels of your databases.
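On a single instance, the same check can be scripted; a sketch (the database name in the ALTER is hypothetical):

```sql
-- Compatibility level of every database on this instance
SELECT name, compatibility_level
FROM sys.databases
ORDER BY compatibility_level;

-- Raise one database to the SQL Server 2019 level (150)
ALTER DATABASE MyDb SET COMPATIBILITY_LEVEL = 150;
```

This still only covers one instance at a time, which is exactly why a farm-wide view helps.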


dbWatch Control Center provides an overview of your databases' compatibility levels and displays other attributes of the SQL Server version they are using.


Let us start by clicking on the Farm Management Icon, which is at the left side of your Control Center monitor as seen in the image below. 


Expand Inventory Overview Performance. 

Click on the Per platform option and choose MS SQL Server. 


This view displays different attributes: instance status, name, database version, edition, port number, instance name, host, host IP, OS type, the creation date of your database, and total DB count.

With this information you will have a quick understanding of the different versions of SQL Server running within your database farm.


On top, choose the compatibility tab. 

At the topmost part of this view, you can see the server and database compatibility levels.


It has 2 groups: 

  • Databases with lower compatibility than the server 
  • Databases with the same compatibility as the server

As you can see, there are 2 databases that use a lower compatibility level than the server's version, and 60 databases with the same compatibility level as the server.


On the right side, you have a pie chart that displays the different compatibility levels configured on your databases. We have 2 databases with SQL Server 2014 (120) compatibility level, 50 databases with SQL Server 2016 (130) compatibility level and 10 databases with SQL Server 2019 (150). 


The bottom of the view displays the instance name, database, status, create date, recovery model, database compatibility, server compatibility, and server version.

With this information, you can analyze which databases need to be reconfigured and have their compatibility level updated to match the server's.


In the image below, you can see a database farm with a total of 903 SQL Server 2008 R2 databases. This feature is very handy when managing huge numbers of databases, as it helps you keep track of your databases' compatibility levels relative to the latest available version.

Changing the compatibility level tells the instance to change its feature set; it may add new features or remove old ones, so be wary of changes that might affect your existing applications.


The compatibility report view enables DBAs to find potential performance improvements by upgrading the compatibility level of a database that is a version behind.



For more info and tutorials, visit the dbWatch website. 

Follow us on Facebook, LinkedIn, and Twitter - dbWatch 

Also check out our YouTube channel dbWatch


SQL Server Uptime: When to Restart Your SQL Server Instances

A pair of glasses in front of a screen showing SQL Server Uptime

If you are a database-savvy developer or a database administrator, you probably know how important it is to keep an eye on your database servers. Some servers might need more attention and upkeep than others, but none of them should be neglected. When your database instances are facing issues, one of the things you might want to watch is SQL Server uptime, and sometimes your database instances might also need to be restarted.

An out of focus screen calculating sql-server uptime.

As easy as it may sound, restarting your database instances can be a tedious task: you might need to shut down all running processes, keep an eye on your database schemas, and also track their uptime. Thankfully, there are tools that can help with that. 

dbWatch for Improving SQL Server Uptime

dbWatch, for example, is a fully-fledged database management solution that lets you keep an eye on all kinds of database instances, from Oracle and Sybase to PostgreSQL. Here, however, we are most interested in the capabilities dbWatch provides for Microsoft SQL Server (MSSQL). If you’re using SQL Server Express, you will be able to make use of dbWatch as well! 

Open up dbWatch and import a SQL Server instance (in this example we will use SQL Server 2017) – you will see a number of “folders.” Those folders house categories of database jobs that can help you monitor your database availability, capacity, replication capabilities, maintenance, or performance: 

Screen shot showing Database server uptime in dbWatch.

In this case, you might have noticed that we marked a job called “Database Server uptime” – check what it says and you will notice that it depicts the amount of time your SQL Server has been running: 

Message about how long instances run.
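If you want to verify the uptime figure yourself, the instance start time is exposed through a dynamic management view. A minimal sketch (standard DMV, not dbWatch-specific):

```sql
-- sqlserver_start_time records when the instance last started;
-- the difference from the current time is the uptime.
SELECT sqlserver_start_time,
       DATEDIFF(HOUR, sqlserver_start_time, SYSDATETIME()) AS uptime_hours
FROM sys.dm_os_sys_info;
```

This is the same underlying information a monitoring job reads, so it is a quick cross-check when you suspect an unexpected restart.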

When to Restart SQL Server Instances

Seeing this shouldn’t be surprising – but how do you decide when it’s time to restart your SQL Server instances? In that kind of scenario, we need to consider a few aspects. We should think about: 

  1. Do we really need to restart our SQL Server instances in the first place? Perhaps we could do without a restart and just tweak a couple of queries here and there? 
  2. When should we restart our SQL Server instances (i.e. what time of restart would be most beneficial for us without interrupting the work of our clients, employees, etc.)? 
  3. How should we restart our database (SQL Server) instances? Manually? Should we make use of certain tools (e.g. dbWatch)? How should we use such tools to our advantage? 

Solve SQL Server Issues

Answering these three questions shouldn’t be too difficult. What might be more difficult is that, even if you restart your database (in this case, SQL Server) instances to solve a specific issue, you may still need to keep an eye on several other aspects of your database instances, ranging from availability to maintenance, at the same time. How do you do that?

How Monitoring Can Help You

You should probably use software that is up to the task, such as dbWatch. For example, expand the Performance section of database jobs in dbWatch and you will see several jobs suitable for improving your database performance. dbWatch can monitor your databases for high activity, which is useful for deciding when to restart your database instances – you should do so at the lowest point of usage so as not to disrupt anybody using your services: 

Message about activity monitor.
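For a quick ad-hoc look at current activity outside of a monitoring tool, the session and request DMVs can be joined. A minimal sketch (standard DMVs; column selection is illustrative):

```sql
-- User sessions that currently have an active request,
-- with a rough indication of how much work each is doing.
SELECT s.session_id,
       s.login_name,
       r.status,
       r.command,
       r.cpu_time,        -- milliseconds of CPU used by the request
       r.logical_reads    -- pages read from the buffer cache
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_requests AS r
  ON r.session_id = s.session_id
WHERE s.is_user_process = 1;
```

Few or no rows here, sampled over time, is a reasonable signal that you are near a low point of usage and a restart would be least disruptive.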

dbWatch can also provide you with statistics on the usage of your indexes inside a certain database instance: 

Menu from dbWatch about performance.
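The underlying index usage data is available from a DMV as well, if you want to inspect it directly. A sketch for the current database (note: these counters reset when the instance restarts):

```sql
-- Reads (seeks/scans/lookups) versus writes (updates) per index
-- in the current database, since the last instance restart.
SELECT OBJECT_NAME(ius.object_id) AS table_name,
       i.name                     AS index_name,
       ius.user_seeks,
       ius.user_scans,
       ius.user_lookups,
       ius.user_updates
FROM sys.dm_db_index_usage_stats AS ius
JOIN sys.indexes AS i
  ON i.object_id = ius.object_id
 AND i.index_id  = ius.index_id
WHERE ius.database_id = DB_ID();
```

Indexes with high `user_updates` but near-zero reads are maintenance cost without benefit, which is exactly the kind of pattern these statistics help surface.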

Deciding your Next Steps

However, that only covers performance. dbWatch also has other similar MS SQL database jobs (simply expand the categories of Availability, Capacity, and the like), and it can show you a set of colourful statistics to help you decide the next step for your database instances (simply click Monitor -> Management to observe the following screen): 

Screen shot showing instance information regarding SQL instances.

This output can also be useful when deciding when to restart your SQL Server database instances. You should restart your database instances when they are at the lowest point of usage (for that, you could use the logical reads per second and sessions graphs). See how everything starts to work together so nicely? If you ever need support, the dbWatch team is always there for you – ping them and they will get back to you to help solve your database issues sooner than you think. 

Conclusion on Improving SQL Server Uptime with Restarts

We hope that this blog post has been something of an eye-opener about what restarting SQL Server and other database instances usually comes down to.


Interested in trying dbWatch to help with SQL Server management?  Book a demo today.



5 Tips for Server Migration Planning

Server migration planning

As a DBA, infrastructure admin, or manager, you may have been tasked to lead, plan, and execute a server migration project, for reasons such as improving application performance with new or upgraded hardware, or optimizing the enterprise architecture.


Planning for a server migration can be quite complex, especially in large enterprises with many different servers used for various web applications, internal applications, databases, and many more. Today I will discuss some of the issues you need to consider when planning a server migration.

Server migration planning

The 5 R’s to consider when planning a Server Migration are the following:

1. Re-hosting by Moving to a New Host (also known as Lift and Shift) 

This approach involves moving your current infrastructure, applications, or databases to another server, another data center, or moving it to the cloud as it is.

Advantages of Re-hosting:

  • Simple migration by re-hosting all applications and databases as-is 
  • No optimizations, architecture changes, or code changes – everything moves to other servers or the cloud without additional changes 

Disadvantages of Re-hosting:

  • Can cause security risks if users and roles are not applied effectively
  • Can cause application failure or service unavailability if processes, security roles, and jobs are not synchronized correctly 

 2. Re-platform with New Database Platform or Infrastructure

Re-platform involves migrating to a new platform or infrastructure, for example, moving your on-premise databases to an Azure Managed instance in the cloud or moving your on-premise web application to AWS ElasticBeanstalk.

Advantages of Re-platforming

  • Does not change the core architecture, but may require some code changes for optimization 

Disadvantages of Re-platforming

  • It will most likely require extra time and effort to apply code changes
  • You may need to use different toolkits and packages available only on the new platform 

3. Re-target to Another Platform

Moving to a different product or database platform. An example is migrating your Oracle databases to MariaDB to save substantial licensing costs. The move from one database platform to another requires changes to your stored procedures and packages, for example when moving from Oracle to MariaDB. 

Advantages of Re-targeting

  • Migrate the database to a better solution for reasons such as cost-benefit, feature/function availability.

Disadvantages of Re-targeting

  • It may consume time migrating to a new database platform as you need to map out every process happening within the database.
  • You will need to learn new tools and tech if you are not already familiar with a target platform

4. Refactor

Restructuring the enterprise architecture (databases and applications)

Advantages of Refactoring

  • Overhaul the application, driven by the business’s need to add new features and functions 
  • Optimizing the overall application and usage rate of resources by query optimization and rewriting queries
  • Long term cost savings as your applications/database are optimally running and properly provisioned.

Disadvantages of Refactoring

  • It may require a lot of time and effort on the part of DBAs/Developers to work on the project to refactor the whole application/database architecture
  • May introduce new bugs or issues
  • It will require extensive planning and testing before it is stable and ready 

5. Retire

Turn off systems that are no longer used or running, perhaps as a result of refactoring the whole environment. Consolidate infrequently used database instances onto other servers that have spare capacity and resources. 

Advantages of Retiring

  • Save up to 10–30% in costs 
  • It helps reduce security vulnerabilities
  • Fewer servers to maintain, monitor, and manage
  • Simplify environment
  • Remove old legacy platforms and apps
  • Better overview
  • Cleanup

Disadvantages of Retiring

  • Hard to verify instances or databases no longer in use without extensive activity logs going back a long time or extensive manual checks
  • Moving databases may introduce unexpected side effects 

Summary for Your Server Migration Planning

When planning your server migration, always remember the 5 R’s, which are:

  • Re-hosting
  • Re-platform
  • Re-target
  • Refactor
  • Retire

Before you migrate your servers, you should put a monitoring and management solution in place to keep track of your applications or databases’ health.


dbWatch Control Center allows you to track your database health and performance using different database monitoring jobs for every performance metric.


Control Center has built-in reporting capabilities for the management to have a quick overview of their database farms’ status, capacity, and resource usage.

Track your database health, try dbWatch Control Center today.

Monitoring SQL Server error logs

Screen giving an idea of Monitoring sql server error logs

As a data platform expert who manages and maintains hundreds or maybe thousands of database instances, you may be looking for a way to manage this number of servers more efficiently. An environment this large, with many vital applications relying on those databases, can be difficult to keep track of. When you’re busy with administration work and short on resources and time, you may have difficulty prioritizing your daily tasks.


As your day goes on, you will encounter the usual errors that happen from time to time. They may have different causes: for example, a user failed to log in because their credentials had expired, a job failed due to a query timeout, or a backup job failed for an unknown reason.


The errors mentioned above are automatically recorded in the SQL Server error logs. You can view them by expanding SQL Server Agent in Management Studio, where you will see all the logs for your database instance.
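You can also read the error log directly from a query window. A commonly used approach is the `sp_readerrorlog` system procedure (it wraps the underlying `xp_readerrorlog`); a minimal sketch, where the search string `N'failed'` is just an illustrative filter:

```sql
-- Read the current SQL Server error log (file 0).
-- 2nd argument: log type (1 = SQL Server error log, 2 = SQL Agent log).
-- 3rd argument (optional): only return lines containing this string.
EXEC sp_readerrorlog 0, 1, N'failed';
```

Passing `1` instead of `0` as the first argument reads the previous (archived) log file, which is handy when the current log was recently cycled.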


When monitoring tens or maybe hundreds of database instances, you need to be able to track and efficiently monitor errors happening in each instance, within the database farm you are managing.

How to Easily Track SQL Instances

You may think that there is no easy way to manage and have full control of everything happening within each of your instances. Luckily, dbWatch can assist you with your daily monitoring tasks.


dbWatch contains various tasks that help you manage and monitor the error logs of your database instances. dbWatch helps you keep track of all errors encountered in your database farm. You can also monitor error log file sizes and check whether a file is growing too much or too fast within a period. And lastly, you can configure when you wish to recycle the error logs within your database instance, which helps free up the disk space consumed by growing logs. 

How to Track Error Logs Within dbWatch

  • You’ll need to start by going to the monitoring module.
  • Here you’ll select a SQL Server instance. Right-click and select Configure monitoring. (See Figure 1, below) 
Selecting SQL server, screen shot
Figure 1

In this window, make sure that you have installed the tasks for monitoring error logs in SQL Server. (See Figure 2, below) 

The following tasks are:

  • Instance error log
  • Instance error log file size check
  • Cycle error log

showing how to install the tasks for monitoring error logs in SQL Server 
Figure 2

After installing the tasks, go back to the monitoring module.

On the column Name, input error on the filter to find the tasks regarding the error logs. (Figure 3, below)

how to input error on the filter to find the tasks regarding the error logs.
Figure 3

If you right-click on Instance error log, you have the option to run the task to retrieve updated information from the error logs collected from the instance. (Figure 4, below) 

How to retrieve the updated information of the collected error logs
Figure 4

Click on Details to view the actual errors collected from this SQL Server instance. (Figure 5, below) 

How to view the actual errors
Figure 5

A new window will appear, showing you details of the errors which occurred in your SQL Server instance.

For example, the Last lines read from error log table displays all lines read from the error log file by this task during its last execution.


The error history details table shows the history of the last 20 collected records from your SQL Server error log file (error text defined by the “error text” parameter). (Figure 6, below) 


Later in the blog, you’ll learn how to define the error text parameter to filter out errors that you’d like excluded. 

Figure 6

SQL Server Error History

The error history graph shows the number of errors per day being registered in the log, helping you determine if the errors you are encountering are growing daily. (Figure 7)

SQL server error history graph.
Figure 7

You don’t want the monitoring task to consume too many resources, and if it takes a long time to read the error log, you should investigate (even if there are no errors in the log file). If the number of records read is very high, it could indicate potential problems in your instance. (See Figures 8 and 9, below.) 

Showing a high readings in the SQL error log graph.
Figure 8
Another example of high error log readings in SQL Server
Figure 9

And lastly, there is the alert execution aggregated statistics graph. The left vertical axis shows the total rows read per day, and the right vertical axis shows the total time spent reading the SQL Server error log per day. This helps you determine whether you are experiencing an increase in errors on your instance. With this information, you can take a deep dive into the instance where the number of errors is growing.


Going back to the monitoring view, click on Configure. (Figure 10, below)

Image shows how to return to monitoring view
Figure 10

A configuration window will appear for this Instance error log task. (Figure 11, below)

The Confirmation window in dbWatch.
Figure 11

You can modify the configuration based on your requirements.

The options are:

  • Error text – defines which string (errors) the check must look for. Values must be separated with commas (,).
  • Error text allowed – Specifies the text strings (specified by the “error text” parameter) to be excluded when found in the same record (error log line).
  • History threshold – the maximum number of error messages the history table will retain (error_log_err_histr_tab table). 
  • Last run – the last time the error log was checked. 
  • Max elapsed time – the maximum execution time (in seconds) for the task. If this threshold is reached a warning is raised.
  • Max lines per execution – the maximum number of lines the task will read during execution before a warning/alarm is raised.
  • Return status – return status value (ALARM – 2, WARNING – 1, or OK – 0) when “error text” is found (or “max lines per execution” is reached).
  • Click apply to save the configuration.

You can set the schedule on when you wish this task to execute. (Figures 12 and 13, below)

How it looks when you schedule a task in dbWatch
Figure 12
The Pop-up window when scheduling a task in dbWatch.
Figure 13

Right click on the Instance error log file size check and choose Details.

Note: To install and make this task (Instance error log file size check) work, you need to enable xp_cmdshell in SQL Server by executing the command below (Figure 14), or you can set it via the configuration parameter (shown later).

 EXEC sp_configure 'show advanced options', 1; 
 RECONFIGURE; 
 EXEC sp_configure 'xp_cmdshell', 1; 
 RECONFIGURE; 



Instance error log file size check
Figure 14

A new window will appear showing you the information in your error log directory. It shows the error log file name, create date and file size. (Figure 15, below)

How the error log directory looks in dbWatch.
Figure 15

The error log file size history graph shows you information of your historical error log size growth rate. (Figure 16)

The error log file size history in dbWatch.
Figure 16

The error log directory size history graph displays the number of files within your log directory and the total file size of the accumulated logs. This information helps you decide whether to clear error log files that are no longer needed, as they consume additional disk space in your environment. (Figure 17, below) 

Error log directory size history as seen in dbWatch.
Figure 17

Go back to the monitoring view, and right click again on the Instance error log file size check. Choose Configure. (Figure 18)

How to check on Instance error log file size check.
Figure 18

Again, you can modify the following configuration based on your requirements. 

The options are:

  • Enable xp_cmdshell – if set to “YES”, the SQL Server instance configuration xp_cmdshell will be enabled. This parameter is required if you want to allow the task to execute operating system commands to discover the size of files and directories. 
  • Error log directory size threshold – the maximum total size (in MB) of all files in the error log-file directory.
  • Error log file path – path where the error log file is located.
  • Error log size threshold – maximum size (in MB) of error log file before a warning or an alarm is returned by the alert.
  • File count threshold – the maximum number of files in the error log catalog.
  • History threshold – the maximum number of history statistics (in days) of error log file size. 
  • Return status – return status value (ALARM – 2, WARNING – 1, or OK – 0) when the “* threshold” parameters are reached.
  • Click apply to save the configuration.
Showing how to check cycle error log.
Figure 19

Choose Cycle error log. Right-click, then select Details.

A new window will appear which displays the specific log file and the date when it was recycled. This information gives you an idea if the error log was cleared on a specific date. (Figure 20)

The window shown here displays the specific log file.
Figure 20
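The same recycling operation can be triggered manually with the built-in system procedures, which close the current log and start a new one without restarting the instance:

```sql
-- Cycle the SQL Server error log (current log becomes Errorlog.1, etc.).
EXEC sp_cycle_errorlog;

-- Cycle the SQL Server Agent error log (lives in msdb).
EXEC msdb.dbo.sp_cycle_agent_errorlog;
```

Cycling on a schedule keeps individual log files small and fast to read, which also benefits the monitoring tasks described above.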

Go back to the monitoring view. Right click and choose Configure.

A configuration window will appear which you can modify. (Figure 21)

The options are:

  • Cycle History – the number of days of information to retain about when the error log and agent error log have been recycled. 
  • Number of SQL Server error logs – by default, there are 7 SQL Server error logs: Errorlog and Errorlog.1 to Errorlog.6. This parameter can be used to modify the registry value (REG_DWORD type) to the number of logs you wish to maintain. 
  • Click apply to save the configuration.
A configuration window to modify.
Figure 21

Summary of Monitoring SQL Server Error Logs

Now you have an idea of how to easily track the various errors happening within your database farm. You can set the return status so that you are notified once a certain baseline or error has been identified by dbWatch.


You can also configure your own parameters or baselines based on your business requirements. It is a great way of customizing alerts based on the errors retrieved by the dbWatch task. 


Monitor your database error logs, try dbWatch Control Center today

4 Ways Database Management Tools Reduce IT Costs

graph on computer showing how to keep database it costs down

For many organizations today, spiraling IT costs are a major concern. From the cost of operating large server farms, to staffing and maintenance costs, IT budgets are growing exponentially in many industries.


To help businesses mitigate some of the costs of working with large server farms, and to help their staff be more efficient, several solutions now exist for monitoring and managing databases. From cutting unnecessary bills to making DBA teams more productive and streamlining workflows, these tools are built from the ground up for efficiency, scalability, and use by large enterprises.

Here, we’ll cover four ways that these solutions can keep your IT costs in check and improve efficiency.

1. Fewer Serious Incidents

Disasters and serious incidents represent a very high financial cost for businesses, both in the time spent planning and the cost of restoring services and maintaining security.


For many IT teams, a lack of oversight over their systems means that emerging incidents aren’t discovered until it’s too late. By making use of database monitoring solutions, DBAs can utilise proactive monitoring and analytics to detect and correct problems before they become serious.


Monitoring tools can also be used to discover security weaknesses, to identify struggling infrastructure, and to institute regular preventative maintenance across your servers and instances.

2. Lower Licensing Costs

Many software providers offer several forms of license for organisations to choose from, with an enterprise license usually being the most expensive and offering access to the full range of software features. Quite often, however, DBAs won’t use or need the full suite of features that enterprise licenses unlock. Despite that, many businesses still pay for enterprise licenses which they don’t need, either by accident or because their needs have changed and they lack the time, tools or expertise to detect and correct the situation.


Database management tools help to solve this problem. By analysing actual usage and reporting on which software and features are being used versus licensing level, DBAs can leverage management tools to track and adjust their licensing levels.


In the case of Oracle in particular, many DBAs overstep the standard license terms inadvertently. Oracle supplies its full suite of tools to DBAs regardless of license, and later demands more money if it discovers via an audit that a company used tools or features it hadn’t licensed – something easily done, when changing just one option can trigger full and costly enterprise licensing.

3. Optimised Performance

The performance of your IT systems is vital to your business, not just because it provides your customers and staff with a smooth service but because it saves you money too. Poorly performing servers can lead to slowdowns at work, potentially disrupting services to customers which could cost your organisation valuable business and hurt your reputation.


With the help of database management tools, DBAs can optimise the performance of your organisation’s IT systems, improving your setup to ensure that it constantly runs smoothly.

4. Be More Productive

DBA productivity is one of the key areas that can be improved by any enterprise looking to cut costs and improve operational expenses. With the right solutions, DBAs can get access to the analytics, insight and monitoring that they need to perform their jobs efficiently and to allocate their time where it’s most needed.


Many database management tools offer streamlined interfaces that allow DBAs easy access to all the information that they need and the ability to monitor multiple instances at once. Tools like this also combine the features of many other individual server tools, saving time for DBAs and reducing the amount of time spent training to use new software.


How to Keep Down Database and IT Costs

Navigating the complexities of escalating IT and database expenditures causes concern for many organizations. The financial burdens associated with maintaining extensive server infrastructures, along with personnel and upkeep expenses, are on an upward trajectory.


To combat these rising costs, a variety of database monitoring and management technologies have emerged, designed to enhance operational efficiency and scalability while curbing spending and amplifying the effectiveness of database administration teams.


This discussion delved into four pivotal approaches:

  • Deploying preemptive monitoring to avert critical system failures
  • Optimizing license expenditures by aligning software usage with actual needs
  • Enhancing the efficiency of IT infrastructure to avoid performance-related disruptions
  • Improving the productivity of database administrators through sophisticated tools.


These strategies collectively contribute to a more streamlined operational workflow, significantly reducing the financial strain associated with IT and database management and offering a sustainable competitive advantage.


Learn how dbWatch can help you gain a better operational workflow, book a demo today.

Analyze Oracle Performance With New Performance Tools