5 Key Features You Need in a Database Activity Monitoring Tool

In database work, we all need an overview. Database activity monitoring tools show how your databases function and tell you if users can quickly access accurate data. Monitoring database activity prevents costly downtime, enhances security, and informs strategic decisions.

Because the databases’ health affects an entire organization, it’s vital to ensure that you choose the proper database monitoring tool that matches your current and future needs.

When you’re shopping around, it’s important to consider these five points:

Keep in mind that tools can be costly, so choose wisely.

1 Keep Your Database Monitoring Cost-effective

As with any purchase your business makes, it’s important to consider the cost of adopting a new database activity monitoring tool. The right database management tools do more than just monitoring; they provide analytics and insights that can lead to significant cost savings. You can optimize resource allocation by finding inefficiencies and bottlenecks, reducing unnecessary spending on hardware upgrades or unneeded added capacity.

Preventing downtime can result in considerable cost savings. Databases are critical for day-to-day operations, and downtime can lead to direct revenue loss and damage to customer trust. Practical monitoring tools mitigate these risks, ensuring businesses can avoid these costly interruptions. According to a Gartner study on the impact of downtime, even a short period of downtime can lead to significant financial losses.

Finding a Cost-effective Database Activity Monitoring Tool

When evaluating a new tool, consider the cost per instance of the software you plan to deploy and how the tool will fit into your current business. Don’t forget the hidden costs of time spent on training and adapting workflows. An ideal tool should minimize the need for extensive training on all platforms so you can focus on other critical tasks.

2 Manage Databases Cross-platform

Companies often end up with more than one type of database platform. It’s very helpful to have a monitoring tool that spans database platforms to get everything into one view, so you don’t have to switch tools.

When choosing a new database tool, make sure it will be compatible with your current and future server setup. A tool that works across all your platforms and versions can keep up with however many servers you run in the future.

In modern IT environments, no tool is an island. Integration capability ensures the database monitoring tool can communicate and cooperate with other systems within the IT infrastructure. This includes everything from security tools to application performance management software.

Integrated systems unify IT operations, enhancing the ability to diagnose and resolve issues quickly. They also enable automated workflows where the monitoring tool can trigger actions in other systems, such as scaling resources or starting backups, further streamlining operations, and enhancing response times.

Scalability is more than size; it’s also about the complexity of the environment. Many organizations have databases in the cloud alongside others kept on-premises for security reasons. Often, these different database environments require different monitoring tools. The more systems you have, the more fragmented your monitoring becomes, and the more likely something will be missed because the right system wasn’t monitored at the right time. It’s far easier to pick one tool that covers everything from the start.

Database Integration Tip

Remember that scalable tools can typically monitor and manage large numbers of servers across various platforms. Before adopting a new tool, check with the vendor to make sure your tool of choice will remain relevant to your business for a long time to come.

Integration capability is critical in modern IT environments. A new tool must be compatible with existing and future server setups.

3 Choose Scalability to Future-proof Your Database Systems

As organizations grow, so do their data management needs. A scalable database monitoring tool is vital for accommodating increased loads without compromising performance. Scalability ensures that as the volume of data or the number of transactions increases, the monitoring system can handle the added workload without needing a complete overhaul.

This flexibility is essential in an era where business scales rapidly, and data influx can vary unpredictably. A scalable monitoring system can adjust dynamically, ensuring that sudden spikes in data and gradual increases over time are managed efficiently.

Why You Need Your Tool to Detect New Databases

Detecting new databases is important in IT environments. New databases can suddenly pop up, and often, you don’t hear about them. Unfortunately, it’s a typical scenario: a project sets up 30 databases and doesn’t inform IT. The DBA team first hears about them after they crash, when the project lead requests a restore from backup.

Having automated scans spot and integrate new databases into your monitoring systems is a game-changer. This way, you won’t get blindsided. You’ll know about databases right from the start and manage them smoothly.
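As a rough sketch of what such a scan might run under the hood (assuming SQL Server; the 7-day window is arbitrary), a query against sys.databases can surface databases the monitoring inventory doesn’t know about yet:

```sql
-- Sketch: find databases created in the last 7 days that may not yet
-- be registered in the monitoring inventory (SQL Server).
SELECT name, create_date
FROM sys.databases
WHERE create_date > DATEADD(DAY, -7, GETDATE())
ORDER BY create_date DESC;
```

A monitoring tool runs this kind of check on a schedule and compares the result against its known inventory.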


4 Real-Time Database Monitoring

Real-time database monitoring helps you pinpoint performance issues and bottlenecks before they cause considerable damage. By identifying issues proactively, you maintain database performance and customers aren’t affected.

 

This approach supports service continuity and helps your organization avoid costly downtime and minimize the impact on users and customers.

 

The benefits include:

– Detect potential failures before they occur

– Provide a continuous audit of data transactions

– Craft a proactive response framework

– Reduce downtime

– Avoid the costly disruption of unexpected failure

Pick a Tool that Monitors As-is and What Was

While knowing what’s happening as it happens is critical, it’s also helpful when a monitoring tool can analyze past events.

 

Consider this scenario: There’s a ticket complaining that the database was slow at noon. If you don’t know what happened at 12 o’clock, you could try to recreate it, but it’s unlikely. Instead, ensure your database activity monitoring tool includes historical resource usage and the SQL statements performed in that period.
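On SQL Server, for instance, if Query Store is enabled you can reconstruct that window after the fact. The sketch below (the timestamps are placeholders) lists the slowest statements that ran around noon:

```sql
-- Sketch (SQL Server, Query Store enabled): the slowest queries
-- that executed in a given historical window.
SELECT TOP (10)
       qt.query_sql_text,
       rs.avg_duration / 1000.0 AS avg_duration_ms,
       rs.count_executions
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
     ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p
     ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
     ON rs.plan_id = p.plan_id
JOIN sys.query_store_runtime_stats_interval AS i
     ON i.runtime_stats_interval_id = rs.runtime_stats_interval_id
WHERE i.start_time >= '2024-06-01 11:45'
  AND i.end_time   <= '2024-06-01 12:15'
ORDER BY rs.avg_duration DESC;
```

A monitoring tool that keeps this history for you saves the trouble of enabling and querying it by hand.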

5 Track Your Database Resources

Data needs to be viewed efficiently. By looking at the resources an instance consumes, you can see if enough resources are available for the instance based on the hardware components. When we see the big picture of how an instance uses time, memory, and resources, we can quickly know if the instance is working efficiently.
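As a starting point on SQL Server, one DMV already summarizes the hardware an instance has to work with (a sketch, not a full resource audit):

```sql
-- Sketch (SQL Server): the CPU and memory available to the instance,
-- plus when the instance last started.
SELECT cpu_count,
       physical_memory_kb / 1024 AS physical_memory_mb,
       sqlserver_start_time
FROM sys.dm_os_sys_info;
```

Comparing these figures against actual usage over time shows whether the instance is over- or under-provisioned.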

 

You can also use that information to match the database with the applications. To give an analogy: if databases were cars, applications would be drivers. Consider the Mini Cooper. It’s a smart little car, but it won’t win a race even with the best racing car driver. Conversely, a Ferrari driven by this blog writer would not win any races either.

 

We must ensure the car and driver match to have a system running optimally. Monitoring information will help define what is needed and ensure you get a Formula 1 Mercedes driven by Max Verstappen when you need to race. However, if you need milk at the store around the block, a Mini Cooper, driven by a careful driver, would be more logical.

Keep Track of the User Pattern

The number of users on the database also gives insight. Choose a tool that tracks long-term user patterns; these reveal how the database is actually used.

 

If something goes wrong, you can see the activity and resource consumption and when the problem happened. Did it happen at high speed/peak database usage? Or when the car was parked?

 

This is also useful when deciding about making changes to your databases. What once was the ‘milk run’ to the store now has many users and needs a ‘bus’ instead of a ‘car’ because the database is much more active. The user patterns help you see these changes and make informed decisions about what, if any, action to take.

 

When you can see what isn’t in use, space can be cleared for higher-priority active instances. When an instance’s use is minimal, it can be taken offline or made read-only.

Database Activity Monitoring Tool Checklist

Use the checklist below to help you when choosing a tool. The right tool will keep your databases running smoothly and avoid disruptions. 

 

Try it out with dbWatch: sign up for a trial and see the benefits for yourself.

Checklist showing what you need in a database activity monitoring tool.

5 Reasons for Proactive Database Monitoring

Get jobs done with proactive database management

Just as proactive leadership helps a company, proactive database monitoring supports the business. Looking ahead to fix problems ensures database operations enhance, rather than hinder, business performance. While good database management supports organizational efficiency, poor management causes business bottlenecks.

 

By adopting proactive management, you address these areas before they become painful, transforming your database management from reactive, with no time for anything but ‘putting out fires,’ to proactive. In other words, a database tool that predicts issues gives you more time to do what needs to be done.

Avoiding DBA Alarm Fatigue

Alarms help you swiftly identify irregularities or potential issues before they impact the broader system. However, if the alarm threshold is set too low, never-ending notifications cause alarm fatigue; if it’s set too high, a small problem can grow into a downtime issue. You need the right warnings and alarms in time to respond.

A fire alarm box shows the problem of constant alarm notifications.

 

Some tools come without alarm thresholds, and everything is an alert. Constant alerts cause problems: nothing gets attention if everything is an alert. It doesn’t have to be this way. For example, our product, dbWatch, comes with preconfigured alarm thresholds. The thresholds are based on over 20 years of feedback from DBAs about what they need to know and when they need to know it.

 

dbWatch alarms and warnings identify issues before they become problems and advise on the urgency of each issue. dbWatch still collects the underlying data needed for background research. This way, the numbers are available for reports, capacity planning, and performance planning.

Be on Top of Database Issues

Modern database monitoring tools not only react to issues but anticipate them. A database performance monitoring tool provides automated alerts and notifications to help organizations identify and resolve issues before they cause significant damage.

 

When a metric reaches a certain threshold, the tool flags a potential issue. Then you can investigate as you have time, rather than waiting until there’s an alarm.

 

In addition, a good tool tracks information, so you know what was done, and when it happened. Using a versatile tool like dbWatch, you can monitor information on one centralized platform for all your databases. In addition, you can customize warnings and alarms to address the specific issues in your databases to ping you early when a problem occurs.

 

Looking at this data helps you decide how to allocate future resources. For example, when you know how capacity will increase or decrease seasonally, you can make changes accordingly. However, filters are essential. You need enough information without being overwhelmed.

Minimize Database Downtime

All database systems have downtime for patching and other activities. It’s essential to pause your monitoring tool during these windows so planned maintenance isn’t counted against your uptime statistics.

 

If you have scheduled downtimes, automate your tool to disconnect during those windows so you don’t get alarms while making changes, and to reconnect automatically afterward.

 

Proactive monitoring allows IT teams to perform maintenance and optimizations during scheduled database downtimes. This planned approach minimizes disruptions like unexpected failures.

A Calendar shows selected dates, as you might plan database downtime.

Plan Routine Maintenance and Automate

It’s easy to start a maintenance routine and then leave it to work independently without checking its effectiveness. However, it’s important to verify that routines still work after they’re deployed. Some need almost no attention, while others require adjusting and tuning before they are optimal.

 

Routine maintenance tasks like index reorganization, updates, and patches can be automated and optimized based on the insights gained from monitoring, ensuring the longevity and health of the database system.

Use Automated Tools Effectively

Let’s face it: routine jobs are repetitive and time-consuming, especially if you have many databases. Automation can save you endless work hours, so you can focus on tasks that need human input.

 


With a database management tool, many jobs run automatically on the system to look for things that need to be slightly adjusted to improve performance or the system’s overall health. Within dbWatch, there are 15 to 30 automated jobs that focus on preventative maintenance of the databases. They’re things that a DBA should do, but they can be automated and deployed on your system so that they’re automatically fixed and the databases stay in optimal health.


If you’re using a tool like dbWatch, monitoring jobs alert you to how long ago the last patch was applied. If you have many databases, you know exactly which systems can be patched and where the patches are available. This overview helps you plan the maintenance window for applying them.


Access to current and historical data is useful for many organizations, and making well-informed business decisions from reports or adjusting IT workflow depending on server load can be essential in fast-moving industries or large businesses.

What you Gain With Proactive Database Monitoring

Proactive database monitoring helps with more than operational upkeep. Organizations adopting a robust database monitoring tool catch issues before they escalate. Finding a tool that can provide all these services across many varied servers is critical for large and growing businesses in every industry.


As databases grow and become more complex, businesses must evolve their monitoring strategies to stay agile and prepared. Stay ahead of your problems: Find a DBA monitoring software that helps you achieve these five steps to proactive management:


1. Avoid DBA Alarm Fatigue
Database health in IT infrastructure management underpins the entire digital framework. Achieving the perfect balance in database monitoring lets organizations quickly identify and address irregularities or potential issues, fostering a responsive strategy that supports proactive maintenance and long-term health of databases.


2. Anticipate Database Issues
Modern database monitoring tools transform database management by anticipating issues rather than reacting. These tools use automated alerts and a centralized information platform to keep databases running optimally with minimal downtime and inform decisions regarding resource allocation.


3. Schedule Database Downtimes

Proactive monitoring empowers IT teams to perform maintenance and optimizations during scheduled downtimes, minimizing disruptions and maintaining database efficiency. This approach allows automation and optimization of routine maintenance tasks like index reorganization and software updates, enhancing database system longevity and health.


4. Plan Routine Maintenance

Often, after setting up maintenance routines, there’s a tendency to neglect them; however, checking their effectiveness regularly is vital. dbWatch can help ensure these routines function optimally, which is crucial for maintaining healthy database systems.


5. Use Automated Tools Effectively

Access to both current and historical data aids organizations in making informed business decisions and adjusting IT workflows. The right tools provide comprehensive reporting and real-time monitoring across various servers, essential for managing performance and resources effectively in large and growing businesses.



Start managing your databases proactively today. Try a free dbWatch trial.


Monitor and Manage Database Locks


Locked databases and locks in SQL influence your database performance and complicate database management. Many elements are involved in database locks, from row-level locks to entire-database exclusions.

 

Identifying how the locks contribute to inefficiencies or bottlenecks might seem overwhelming. However, with a clear understanding of database locks, you can pinpoint critical issues and implement solutions with precision to ensure your database system operates optimally at every level of interaction.  

 

This blog delves into the concept of database locks, focusing on SQL Server, where locks are a common occurrence during transactions and can significantly influence both database and business operations.

 

By the end of this blog, you’ll have a comprehensive understanding of the different types of locks, their effects on transaction processing, and practical strategies for managing and resolving lock issues effectively. You can also see how a database management tool can be used to detect, manage, and prevent database locks.  

What is a Database Lock? 

Database locks are a critical component of database management. Locks help ensure data integrity and consistency because they stop multiple processes from simultaneously making conflicting changes to the data. However, while databases need locks, the locks can also introduce problems if not managed carefully.  

 

The main problem with database locks is that they block. A transaction holding a lock often blocks other transactions from proceeding, which can lead to a cascading effect. Blocks can delay multiple transactions, impacting overall application performance and user experience. Simply put: Locks block. 

How Databases Lock 

Database locks occur to secure shared resources during transactions. Locks safeguard databases because they:

1. Enforce an all-or-nothing outcome for multiple, separate transactions

2. Preserve consistency in the database state

3. Isolate transactions from one another until each transaction is complete

4. Preserve committed transactions even in the event of abnormal termination

 

They adhere to the ACID properties (atomicity, consistency, isolation, and durability) of a transaction. To put it in perspective, imagine a piece of paper being shared by three writers. 

 

The illustration below shows an example of how locks work. Writer One (🙂) finishes their message and gives the paper to Writer Two (😁). Then, Writer Two (😁) begins to draft their message. Now all other writers (🙂, 😱) are blocked. They must wait until Writer Two finishes before they can write anything. Locks work in the same fashion. They maintain data integrity by preventing simultaneous writing and updates. 

A graphic shows how three programs can experience a database lock.
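In T-SQL terms, the shared piece of paper is a transaction holding locks. A minimal sketch of atomicity and isolation, assuming a hypothetical accounts table:

```sql
-- Hypothetical accounts table: both updates succeed together or
-- neither does. The affected rows stay locked until COMMIT/ROLLBACK,
-- so other sessions cannot write to them in the meantime.
BEGIN TRANSACTION;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT TRANSACTION;
```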

Locks within Database Access Levels 

Locks can occur in a variety of forms. They appear in levels of access to a database. Below is a short list of common locks.  

 

Row Level Locking 

The most common type of locking; it locks a single row of a table.

 

Column Level Locking 

Locks a column or columns within a table. Database vendors do not commonly offer this, since it is resource-intensive to implement.

 

Table Level Locking 

Locks an entire table. It prevents modifying the table properties or updating all the rows by another user. 
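On SQL Server you can observe the difference between row-level and table-level locking directly. This sketch reuses the test_table from later in this post and assumes hypothetical id and status columns:

```sql
-- Row-level: by default the UPDATE locks only the affected row(s).
BEGIN TRANSACTION;
UPDATE test_table SET status = 'done' WHERE id = 42;
ROLLBACK TRANSACTION;

-- Table-level: the TABLOCKX hint takes an exclusive lock on the
-- whole table until the transaction ends.
BEGIN TRANSACTION;
SELECT * FROM test_table WITH (TABLOCKX);
ROLLBACK TRANSACTION;
```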

Why Databases Lock 

Database locks often appear during record updates or uncommitted SQL statements. They primarily isolate a database component, such as tables or parts of a table, so that session updates and alterations will successfully take effect and no data loss will be experienced. 

 

Locks are not a problem when only a few active sessions transact in your database. However, with more database users accessing and utilizing your databases, locks will impact your database resources and potentially your business. There are three kinds of locking that we will discuss: lock contention, long-term blocking, and deadlocks. 

Common Table Locks in SQL 

Before diving in, let’s make sure we all have a common understanding of table locks. Once that’s clear, we can start working on solving the problems.

Database Lock Contention or Long-Term Database Blocking 

Lock contention refers to the bottleneck that happens when several database sessions are trying to access the same information. Waiting for other programs to finish accessing the information can cause a significant bottleneck in database performance. 

A transaction holding a lock can block other transactions from proceeding for a long time, sometimes causing a cascading effect that delays multiple transactions and hurts overall application performance and user experience.

 

Like Lock Contention, Long Term Blocking happens when multiple users access the same database component. The notable difference occurs when one user holds on to that component for a long time. Dependent sessions and objects will be blocked from reading and writing during this time. 
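Lock contention is easy to reproduce with two sessions. A sketch, again assuming a hypothetical test_table with id and status columns:

```sql
-- Session 1: update a row and leave the transaction open.
BEGIN TRANSACTION;
UPDATE test_table SET status = 'pending' WHERE id = 1;
-- (no COMMIT yet: the row lock is held)

-- Session 2: this statement now waits on Session 1's lock
-- until Session 1 commits or rolls back.
UPDATE test_table SET status = 'retry' WHERE id = 1;
```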

Database Deadlocks 

Database deadlocks happen when two or more transactions wait on each other to give up their locks. In this situation, neither can roll back or commit, as neither can move forward.

Below is a simplified example.

– Session A acquires a lock on Table 1.

– Session B acquires a lock on Table 2.

– Session A then tries to acquire a lock on Table 2, but it is already held by Session B, so Session A is put on hold.

– Session B then tries to acquire a lock on Table 1, but it is already held by Session A, so Session B is put on hold.

– Both transactions are now waiting for each other to be released.

Graphic showing how deadlocks occur and block databases.
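The steps above translate to T-SQL like this (table1, table2, and their columns are hypothetical):

```sql
-- Session A
BEGIN TRANSACTION;
UPDATE table1 SET val = 1 WHERE id = 1;   -- A locks a row in table1

-- Session B
BEGIN TRANSACTION;
UPDATE table2 SET val = 1 WHERE id = 1;   -- B locks a row in table2

-- Session A: blocks, waiting for B's lock on table2
UPDATE table2 SET val = 2 WHERE id = 1;

-- Session B: blocks, waiting for A's lock on table1 -> deadlock
UPDATE table1 SET val = 2 WHERE id = 1;

-- SQL Server detects the cycle, picks one session as the deadlock
-- victim, and rolls it back with error 1205; the other proceeds.
```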

MS Management Studio and Database Locks 

Microsoft SQL Server Management Studio (SSMS) is a comprehensive tool for managing, configuring, and administering SQL Server databases. While it’s free, it isn’t built for complex, multi-database environments. However, it can be used to deal with database locks.

In this example with SQL Server Management Studio, we open two separate sessions to act as the users. The screenshot below shows a simple query that locks the designated table test_table. Next, User 2 executes a query in Session 2 that selects from that table.

How to select the specified table when dealing with database locks.

Use Master 
GO 
exec sp_who2 
GO 

The stored procedure above displays, in tabular form, the information you need to see who is blocking whom. As seen below in Figure 2, session ID 54 (Session 2) is being blocked by session 76 (Session 1).

The blocked session is highlighted.

 

Similarly, you can use the following query to get the information you want: 

  

USE Master
GO
SELECT session_id, start_time, command, status,
       blocking_session_id, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
GO

You will still end up with the same information as before. 

Identify the blocked session in database before killing the query.

Remember, in queries using BEGIN TRANSACTION, always set savepoints after every critical alteration to the database. Also, end your transactions with ROLLBACK or COMMIT, since the transaction holds locks on the affected tables until it ends.
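A sketch of that savepoint pattern (test_table and its columns are hypothetical):

```sql
BEGIN TRANSACTION;
UPDATE test_table SET status = 'step1' WHERE id = 1;
SAVE TRANSACTION AfterStep1;          -- savepoint after a critical change

UPDATE test_table SET status = 'step2' WHERE id = 2;
-- If step 2 turns out to be wrong, undo only that part:
ROLLBACK TRANSACTION AfterStep1;

COMMIT TRANSACTION;                   -- ends the transaction, releases locks
```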

Finally, only kill sessions if they have minimal impact on your business operations and data dependencies. Killing a session is your last resort. 

If you need that last resort, here are the commands: 

KILL <blocking_session_id>
-- In this example, two queries were executed:
-- KILL 76
-- KILL 54
-- You can also use: KILL [session_id] WITH STATUSONLY

After being killed, the session will restart. In this example, session_id 54 (Session 2) was killed.

Showing the killed session message for a locked database fix.

How to Deal with Database Locks and Blocks 

Now that you better understand the kinds of locks and how they occur, you can see how to deal with them in dbWatch. Download a free version of dbWatch to try this in real-time.  

Monitor Database Locks and Blocks  

First, please open dbWatch and find the instance you want to monitor.  

 

You’ll see three numbers in the screenshot below.  

 

#1. Now that you’re in the correct instance, look in the ‘performance’ folder. 

 

#2. Click on ‘Blocking statistics.’ After clicking, you can see the status of the job.

 

#3. Under ‘Job execution status,’ you can see that the system automatically checks for blocked sessions by reviewing the active database locks. In this example, it checks every five minutes to see if any blocks are happening.

 

On either side of ‘Schedule’ you’ll see when the last check was run (12:30:18) and when the next run will happen (12:35:18).

Screenshot shows first steps of fixing database lock in dbWatch

For the next screenshot, we created a lock on this instance. While the session we created remains uncommitted, all other sessions are blocked.

 

#4. Shows that the session is blocked. When you hover over the details, a pop-up box tells you the session ID, where it’s located, and how long the block has lasted.

 

#5. Here you’ll find the ‘alarm threshold,’ set to 12 minutes here, and the ‘warning threshold,’ set to three minutes. Depending on how you’ve configured dbWatch, the alarms and warnings can trigger emails or alerts.

 

The status of the job here is still OK because the time threshold has not yet passed.

 

 

In addition to the alerts you’ve configured, you’ll find that the icon changes color to yellow (warning) shown below, or red (alert) as the block continues. 

screenshot shows how to see monitor database locking statistics

dbWatch always gives you an overview first; then you can dig deeper. Let’s start by getting a report. (Screenshot below.)

 

#6. Right-click on the ‘Blocking Statistics’ line.

 

#7. Select and click on ‘Details.’  

screenshot on how to see database locks execution statistics

Now you’ll have all the details about the blocking session. (See screenshot below.) You’ll see the lock statistics, what kind of blocks there are, and the blocked session history. The session history clarifies what happened with any previous blocking.

 

You’ll also find the session number, what it has blocked, how long it’s been blocked, etc.

screenshot of a blocked session as shown in dbWatch

How to Fix Table Locks in SQL with dbWatch Control Center

So far, we’ve only monitored which tables are locked and creating blocks, basically extracting information. Now we’ll switch to managing and resolving the issues.

 

Right-click on the instance you’ve been working on. You’ll see a drop-down menu. Scroll down to ‘Management.’

change to management mode to manage database locks

In Management, you need to go to the blocking sessions view. Navigate there from the left-side menu. Go to ‘Server Activity’ and then ‘Blocked Sessions.’  You can see details about the blocking and blocked session. You can decide which session to terminate.

How to view blocked sessions in dbWatch

Killing a Locked or Blocking Database Session

If a session has been blocking or locked for hours, you should resolve it. Normally, the only way to resolve it is to kill it. But which session should you kill? The one that is waiting, or the one that is blocking?

Sometimes it’s a person causing the lock, but most often it’s a batch job. There isn’t a special science to deciding which session to kill. As a rule, if you know someone works with the locked or blocking instance, always ask them first.

Before killing a session, it’s often helpful to investigate the blocked session history and answer a few questions like: 

 

– Where did the block happen?

– Which login was involved with the block?

– How often has it happened?

– Which program created the block? For example: if the program is SQLCMD, normally it could be a CRM system.

 

You can choose if you want to see the history of blocked sessions or blocking sessions. The blocked history tells you which sessions were blocked. The blocking history shows which sessions created the block.

 

In dbWatch you can investigate what’s going on before you talk to someone. The filters are very helpful when digging into blocked and blocking sessions history. You can check the type of database, and which program, etc.

 

You can also check out these statistics in the side bar where you can view them per database, per login, or per program.

Finally, it’s easy to kill a session in dbWatch.

  

First, go back to the blocking and blocked session overview. Choose whether to kill the blocking session or the blocked session. Right-click on the session you want to kill and choose ‘kill session.’ Now you’re a successful session murderer, and the block is resolved.

But wait! Before you relax, check the statistics and history (below) to see what kind of queries were running and what they were trying to do. If it’s relevant, pass this information to the product developer or the person responsible for that program so they’re aware of the problem.

screenshot showing the status of instances

Effectively managing database locks is crucial to maintaining optimal performance and preventing bottlenecks in your SQL databases. By monitoring, analyzing, and resolving locks using tools like dbWatch, you can ensure that your databases run smoothly and efficiently.  

 

With the right approach, you can stay on top of potential problems and maintain a healthy, high-performing database environment. 

 

Stay ahead of database bottlenecks! Sign up for a free dbWatch trial and see how it can help you.  


Multiplatform Database Farm Monitoring and Inventory Management

 
Database administrators (DBAs) require a dedicated solution to assist them in their daily tasks of monitoring and managing the entire database farm. The custom scripts they once used for a few servers might no longer be effective in overseeing the current, more extensive database farm.
 


 

A database farm comprises all the database servers and instances, across the various platforms and versions, that an enterprise uses to support its day-to-day business operations.

As a DBA managing a large database farm, one of the most important things is to have a complete overview of your database farm.   

You should be able to answer questions such as:   

  • What is the health status of all databases/instances? Which databases need attention right now, and which should I prioritize?   
  • How much disk and memory, and how many CPU cores, have been allocated to the database farm?
  • What is the resource utilization for each instance?   
  • How many instances exist in my database farm?    
  • What are the database platforms and versions running in the database farm?   

In virtualized environments, resource allocation and utilization are the key issues. Carefully analyze all the resources allocated in your database farm so that you have a clear overview of how disk, memory, and CPU are utilized, and determine whether your servers have been over- or under-provisioned. Strive to allocate resources across the farm and VMs to maximize utilization and performance.
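As a sketch of what such an analysis looks like in practice, the snippet below classifies servers by average utilization. The thresholds and server names are hypothetical, purely for illustration:

```python
# Hypothetical thresholds -- real limits depend on your workload and on
# what your monitoring tool actually reports.
OVER_PROVISIONED_BELOW = 20.0
UNDER_PROVISIONED_ABOVE = 85.0

def classify_provisioning(avg_utilization_pct):
    """Classify a server's resource allocation from its average utilization."""
    if avg_utilization_pct < OVER_PROVISIONED_BELOW:
        return "over-provisioned"    # paying for capacity that sits idle
    if avg_utilization_pct > UNDER_PROVISIONED_ABOVE:
        return "under-provisioned"   # at risk of contention under peak load
    return "well-provisioned"

# Average CPU utilization per server, as a monitoring tool might report it
servers = {"db-prod-01": 12.0, "db-prod-02": 91.5, "db-prod-03": 55.0}
report = {name: classify_provisioning(pct) for name, pct in servers.items()}
```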

 

Documenting a large database farm also becomes demanding as the farm expands. With proper documentation, you have up-to-date information on all the databases in your farm: maintenance is easier, management and DBA decision-making improves, and risk diminishes. It is also more straightforward to assess the farm or plan ahead using your documentation as a guide.

 

When you manage a database farm, you rarely have the time to create, let alone maintain, complete documentation of all resources in your farm. You need tools that automate the documentation process.

 

In your database farm, you need to ensure that all backup jobs run optimally and on schedule. Complete control over your databases' backup and recovery plan is essential when an unexpected situation occurs, such as corrupted databases, sudden server failure, or other forms of data loss: you will rely on existing backups to restore the lost information.
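As an illustration of the kind of check a monitoring tool automates, here is a minimal backup-freshness test; the 24-hour window and the timestamps are assumptions for the example:

```python
from datetime import datetime, timedelta

def backup_overdue(last_backup, now, max_age=timedelta(hours=24)):
    """Return True when the most recent backup is older than the allowed window."""
    return now - last_backup > max_age

now = datetime(2024, 1, 10, 8, 0)
two_days_old = backup_overdue(datetime(2024, 1, 8, 8, 0), now)    # overdue
six_hours_old = backup_overdue(datetime(2024, 1, 10, 2, 0), now)  # still fresh
```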

 

Imagine a database farm with hundreds or even thousands of instances. Would you still have enough time to connect remotely to each database server to check its health status?

 


  

 

dbWatch Control Center helps users with capacity and resource planning. It features built-in views that display a capacity overview, such as database resource consumption across the farm.

 

This overview gives DBAs and managers the information to determine which instances consume the most or least memory in the database farm. It also clearly shows which instances have excess resources allocated to them, which you can reallocate to instances that need them more.

 

For example, this view displays the total disk usage in the database farm so you can determine which instances have the most disk space or volumes allocated. It also shows the usage rate per volume.

  


dbWatch Control Center is a complete database farm monitoring and management solution, offering automation, consolidated views, and performance reports that put the key information you need at your fingertips. Automating routine tasks saves the time and effort of managing databases manually, freeing you to focus on more critical work while dbWatch Control Center handles the proactive monitoring, giving you complete control over your database farm.

   

dbWatch Control Center offers the following:   

  • Farm overview – consolidated views for performance, capacity, and maintenance across your entire database farm
  • In-depth monitoring – monitor all your instances and platforms
  • Performance – identify performance bottlenecks within the farm
  • Scalability – ability to adapt to your growing database farm
  • Alerting – alerting and third-party extensions tailored to your business requirements
  • Security – role-based access control, Kerberos integration, encrypted connections, and support for security certificates

Summary   

With dbWatch Control Center as your partner in monitoring and managing your database farm, you achieve the following:

  • Proactive monitoring displays a health overview of all databases in your farm and provides the information you need to act on potential issues.
  • Complete information on your farm's total allocated hardware and resource utilization, such as CPU, memory, and disk usage.
  • Information on all instances in the database farm.
  • Auto-discovery of instances on specified network ranges.
  • Performance metrics, including which database platforms and versions are running in your farm.

Discover Control Center today: download a trial version of dbWatch that monitors up to 5 instances for 6 months.

Monitor your SQL Server Indexes


Problem 

I want to view the performance of my SQL Server indexes across the database farm and determine the top instances by index disk usage. I also want to look for servers that encounter problems when running the update index statistics job.

Solution 

dbWatch Control Center can monitor your Microsoft SQL Server index usage and perform database index maintenance to make sure your servers run smoothly.


Microsoft recommends reorganizing SQL Server indexes with a fragmentation level between 5% and 30%; if fragmentation is greater than 30%, you should perform an index rebuild instead.
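That guidance boils down to a simple decision rule, sketched below (thresholds per Microsoft's general recommendation; always sanity-check against your own workload):

```python
def index_action(fragmentation_pct):
    """Suggest index maintenance from the fragmentation percentage:
    reorganize between 5% and 30%, rebuild above 30%, otherwise leave alone."""
    if fragmentation_pct > 30:
        return "rebuild"
    if fragmentation_pct >= 5:
        return "reorganize"
    return "none"

# A few sample fragmentation readings
actions = {pct: index_action(pct) for pct in (2, 12, 45)}
```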

In this blog, you will learn how to analyze the database indexes within your database farm and, using dbWatch Control Center features, determine which instances have SQL Server maintenance plans or index maintenance jobs running longer than usual.

Example 1: Index Analyze Statistics Job 

In the monitoring view, right-click on your SQL Server instance and click Configure Jobs.


A new window will appear. Tick the job Index usage statistics (all databases) and click Install.


As seen in the screenshot below, this job collects database index and table usage, showing the object/table name, the index type, and the total seeks and scans. It tells you how big your tables are and whether they are accessed mostly by index scans or index seeks. Ideally, you want your queries to use index seeks.


In the Update index statistics job view, you can see useful information such as the database name, the index count per database, and the total indexes (clustered/non-clustered) analyzed per database. It also shows the number of errors, if any, from the execution of the update statistics task.


In this view you can easily sort the columns in ascending or descending order. This helps you determine which instance within the database farm has the longest or shortest elapsed time when analyzing indexes – useful when some maintenance jobs take longer to finish than usual.


An example: suppose you have a weekly scheduled index rebuild job, and from week 1 to week 7 the average elapsed time was around 15-20 minutes. Then, in week 8, the dbWatch Control Center view shows it took 80-90 minutes – roughly four times the usual execution time. That tells you something is going on with your SQL Server indexes, and from there you can start investigating the cause of the problem.
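The underlying check is easy to express in code. This sketch (with made-up numbers matching the example above) flags a run that takes more than twice the historical average:

```python
def runtime_anomalous(history_minutes, latest_minutes, factor=2.0):
    """Flag a job whose latest run exceeded `factor` times its historical average."""
    baseline = sum(history_minutes) / len(history_minutes)
    return latest_minutes > factor * baseline

history = [15, 18, 17, 16, 20, 15, 19]          # weeks 1-7, minutes per run
week8_flagged = runtime_anomalous(history, 85)  # week 8 took 85 minutes
```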

Example 2: Index Reorganize and Rebuild View 

dbWatch Control Center also provides you with index maintenance jobs such as Index rebuild and reorganize.


This view displays the index rebuild/reorganize statistics. Here you will see the list of instances, the number of databases checked within each instance, and which databases were rebuilt and which were ignored.


In this view you can sort the columns to determine which instance consumes the most disk space.


The important part of this view is the index size before and after the rebuild, and the disk usage reduced by the operation.

If you right-click on a selected instance, you can execute the task immediately – starting an index reorganize/rebuild on the instance's databases – or configure the task and its schedule.

If you choose Details, you will see a report on the history of reorganize or rebuild executions on that instance, including the average table and index fragmentation and the disk usage before and after each operation.


With the examples provided, you can determine which instances have the largest database index disk usage within your farm. You can also assess whether a given index operation schedule should be adjusted, based on the performance and other details shown in the reporting view.

With dbWatch Control Center, database admins can keep track of their SQL Server indexes and their performance. As a DBA, you can analyze how your database indexes are used in your environment and plan how to maintain them efficiently.

dbWatch Control Center helps database administrators run their databases efficiently, from proactive performance monitoring to task management.

Discover how dbWatch Control Center can monitor your SQL Server indexes – try dbWatch Control Center today.

Running SQL Queries with Tools in dbWatch Control Center

If you develop applications of any kind, web or mobile, you will inevitably find yourself running SQL queries now and then – you need to interact with your database systems anyway, don't you?


Don’t fret – all database users run SQL queries. It’s important to run them in a proper medium, though – while many MySQL database administrators run their queries inside phpMyAdmin, others find themselves searching for alternatives.

Indeed, there are quite a few – if you are deeply immersed in the database management world, you may have noticed that you can run SQL queries from within some database management tools. Most tools that let you run SQL queries against your database instances also let you do other things – for example, run database jobs that show the full state of your instances, or display the number of logical reads and writes going on in an instance.

In this blog we focus on another angle – why it helps to have assistance from tools like dbWatch when running SQL queries against your database instances.

Running SQL Queries with dbWatch Control Center 

You see, while dbWatch isn’t a SQL client, it does have the ability to help you craft SQL queries – simply look at the left-hand side of dbWatch and choose the fourth option from the top:

Once you’re done, make sure dbWatch is actually connected to the database instance you want to run queries on – click on the Reload Auths button: 

Once the authentication is reloaded, you should be able to see the username next to your database instance: 

Tick the checkbox next to your database instance and you are ready to run queries against it (note that if you don’t, dbWatch will show an error):

Now it’s time to write the queries that interact with your database instance. dbWatch provides a clean user interface and, underneath it, the results of a given query. Input your query in the SQL field and click “Execute” to run it; you can also clear the input and output, or save them.

While your query is running, you can see how long it has been executing. If you see no results but the query keeps running (see the example below), you may want to add partitions or indexes to your table – that often helps:

If your query ran successfully, you will see the output. In this case our column is called “column”, so we needed to escape it with backticks (note the `column` inside the query):
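The same backtick trick can be reproduced outside dbWatch. The snippet below uses Python's built-in sqlite3 driver (SQLite accepts MySQL-style backticks around identifiers) purely to illustrate quoting a column named column:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# "column" clashes with an SQL keyword, so the identifier is backtick-quoted,
# just like in the dbWatch example above.
con.execute("CREATE TABLE demo (`column` TEXT)")
con.execute("INSERT INTO demo (`column`) VALUES ('hello')")
rows = con.execute("SELECT `column` FROM demo").fetchall()
```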

Underneath the SQL window, dbWatch shows which queries failed and which ran successfully:

See the “!” in the triangle-shaped box? Those queries failed. See the tick next to the last query in the list? That query ran successfully! 

Running queries isn’t all dbWatch offers, though. As noted, you can also load input (e.g., import SQL files containing queries for dbWatch to run), clear the input (the SQL queries you just ran) or the output (their results), and save either if you wish. For example, click “Clear Input” and you will see your SQL queries disappear:

See? Easy as that – no manual deletion needed; dbWatch clears everything for you.

Output Panels 

In case you didn’t know, dbWatch also provides one output panel per result set returned. Here’s how that looks:

Summary 

It doesn’t matter whether you are a developer who deals with databases on and off or a full-fledged DBA of Oracle, MySQL, MariaDB, or even Sybase – sooner or later you will need to run queries against a specific database instance. dbWatch comes with a SQL query feature that shows the results underneath the query, along with how many of your queries succeeded and how many failed – use it, and your database instances should be headed toward a more performant future!

If you run into any issues when using the tool, keep in mind that dbWatch has a support team of top-notch database administrators – don’t be afraid to reach out, and you will get the help you need quickly. We hope you enjoyed this blog and will stick around – we have a lot more content coming up.

Job Statuses in dbWatch – the How, Why and When

Monitoring the status of database instances is a daily task for every database administrator, and even for developers. Database monitoring can be incredibly complex and time-consuming when performed manually; thankfully, tools like dbWatch can help in time of need. If you are a dbWatch user, you have probably noticed that it has “job statuses”: this blog post provides some insight into them.


 

What are Job Statuses? 

 

Job statuses in dbWatch, simply put, describe how a given dbWatch job executed – was it successful? Did it produce a warning? An alarm? There are three categories of job statuses, and here is how they work:

 

  • A job status of “success” means the job ran successfully and can provide interesting details on your database availability, capacity, or performance (simply right-click the job and click Details to see them in all their glory);
  • A job status of “warning” means the job encountered some errors along the way (it might also mean your database farms or instances are not up to par and could perform better – check the Details section as described above to learn more);
  • A job status of “alarm” means something is (probably) terribly wrong – it indicates that something very bad is happening in your database farms or instances and needs immediate attention. Left unresolved, it can negatively impact your database availability, capacity, or performance.
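If you were scripting around statuses like these, the triage logic might look like the hypothetical sketch below (the status names mirror dbWatch's three levels; the job names are invented):

```python
# Severity ranking mirroring the three dbWatch status levels
SEVERITY = {"success": 0, "warning": 1, "alarm": 2}

def needs_attention(job_statuses):
    """Return the jobs whose status is warning or alarm, worst first."""
    flagged = [(n, s) for n, s in job_statuses.items() if SEVERITY[s] > 0]
    flagged.sort(key=lambda pair: SEVERITY[pair[1]], reverse=True)
    return [name for name, _ in flagged]

statuses = {"backup check": "success", "disk growth": "warning", "uptime": "alarm"}
todo = needs_attention(statuses)
```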

 

Statuses and Your Database Instances

To check the status of your database jobs with dbWatch, first check which database instances you are running. To do that, click the Database Farm icon (the third icon from the top on the left-hand side), then expand each platform in the inventory overview section:

 

That way you will get a nice animated view of what database instances your servers are running! 

With dbWatch, you can also go back to the main page and observe the statuses of the database instances themselves (statuses are not specific to jobs – they also apply to database instances, so keep that in mind!).

 

 

 

Once you have ensured that your database instances are running smoothly, it’s time to keep an eye on your database jobs. Right-click your database instance, click Configure Jobs, and you should see output similar to this (black checkmarks mean the job is installed, green checkmarks mean it’s enabled):

 

 

Once you have installed the jobs you wish to use on your database instance, it’s time to put them to the test! Choose a job, and configure it if you wish:

 

Finally, run it by right-clicking it and clicking Run now:

 

Once your job has run, you will see its status along with a comment on how it executed – a green settings icon with a tick means the job executed well, an orange settings icon means the job produced some errors (check the Details section to see what they are), and a red settings icon means the job found something wrong with your instances:

 

 

Summary

In general, job statuses in dbWatch are like small advisors for your database instances, always keeping an eye on availability, capacity, and performance – when any of those falls short, check the Details section of the jobs to see what can be improved. Keep an eye on job statuses in dbWatch: they are easy to deploy and help ensure your database instances always perform in tip-top shape.

The Challenges of Scaling MySQL


If you have ever worked with scaling MySQL, or if you are a MySQL DBA, you have probably faced the challenge of scaling your MySQL instances. This blog post outlines the challenges of scaling MySQL and shows how dbWatch can help you scale, monitor, and manage your MySQL (and other) database servers efficiently, while giving you total control over all aspects of operation, performance, and resource usage.


When Would You Need to Scale MySQL?

In general, the challenge of scaling MySQL typically appears when your MySQL-backed application’s traffic increases by tens, hundreds, or even thousands of times, and you start seeing query timeouts more and more often.

How to Scale MySQL

As far as the actual scaling of MySQL is concerned, you have a few options. You can apply application-level optimizations, you can optimize at the MySQL level, and you can scale through infrastructure. Since this post is about the challenges of scaling MySQL, we will go through some of the things you can do to optimize performance at the MySQL level yourself, and show how dbWatch can help you scale your database instances too.

4 Things to Keep in Mind When Optimizing Your Database Instances

If you want to optimize your database instances at the MySQL level, here are some basic things to keep in mind:

  1. Only Ask for the Data You Need.

    Your queries should fetch only the data that is absolutely needed. Avoid SELECT * unless it’s truly necessary: a query working with too much data is one of the most common reasons a query performs poorly. The less data your queries access, the faster they will be – do not ask the database for data you don’t need.

  2. Optimize Query Performance with EXPLAIN.

    The EXPLAIN statement is very useful for optimizing query performance – it shows the access type MySQL uses to find results (access types range from a full table scan to index scans, range scans, unique index lookups, and constants). Understanding the general concepts of scanning a table, scanning an index, range accesses, and single-value accesses is very valuable when optimizing at the MySQL level. The type ALL means MySQL ran a full table scan, while ref means MySQL used the ref access type on some kind of index.

  3. Understand SQL Query Execution.

    Keep in mind the basic steps of query execution. First, the client sends the SQL statement to the server, then the server checks the query cache; on a cache hit it returns the stored result, otherwise the server parses, preprocesses, and optimizes the SQL into a query execution plan. The query execution engine then executes the plan by making calls to the storage engine API, and the server sends the result to the client. (Note that the query cache was deprecated in MySQL 5.7 and removed in MySQL 8.0.) Keeping these steps in mind puts you on a good path when scaling your MySQL instances.

  4. Know when to Use a Wildcard Search.

    If you use wildcard searches (for example, LIKE queries), do not start your search term with a wildcard – only use a wildcard at the end. With a leading wildcard, MySQL cannot know what the search term begins with, so an index might not be used even if one exists on the column you are querying.
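Points 2 and 4 above are easy to see in practice. The sketch below uses Python's built-in sqlite3 driver and SQLite's EXPLAIN QUERY PLAN (MySQL's EXPLAIN output looks different, but the idea is the same): an equality lookup on an indexed column is answered from the index, while a leading-wildcard LIKE forces a full table scan.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, email TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [(1, "ann@example.com"), (2, "bob@example.com")])
con.execute("CREATE INDEX idx_email ON users (email)")

def plan(sql):
    """Join the query plan's detail column into one readable string."""
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# Equality on the indexed column -> the plan mentions the index...
indexed = plan("SELECT * FROM users WHERE email = 'ann@example.com'")
# ...while a pattern starting with a wildcard falls back to a full scan.
scanned = plan("SELECT * FROM users WHERE email LIKE '%example.com'")
```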

The advice above should set you on a good path toward fixing query timeouts, deadlocks, and the like, but if you need to scale MySQL seriously, chances are you will need tools suited to the task.

Solving MySQL Scaling Challenges with dbWatch

If you want to solve your MySQL scaling challenges with tools, dbWatch can help. dbWatch is a highly scalable software solution that helps enterprises monitor and manage their database server instances efficiently while providing total control over all aspects of operation, performance, and resource usage. dbWatch can help you scale all kinds of database management systems – it does not matter whether you use MSSQL, Oracle, PostgreSQL, Sybase, or MySQL. Here’s how the dbWatch user interface looks:

dbWatch offers a few options to scale your MySQL (or any kind of database management system) instances:

1. You can (and should) make use of the available dbWatch jobs. They are split across three categories (Availability, Capacity, and Performance), and each category contains jobs that perform specific tasks.

For example, the Availability category for MySQL consists of monitoring jobs that check the uptime of your DBMS and alert you; the Capacity category provides aggregated and detailed database growth rates; and the Performance category consists of jobs that check the effectiveness of the InnoDB buffer pool and the MyISAM key buffer, provide database lock statistics, analyze the memory setup of your database server, and report on session load, the query cache hit rate, and so on. Keep in mind that jobs can be configured (or disabled if you wish), and each has a Details section with more information. Simply right-click a job and click “Details”, “Configure”, or “Set Schedule”:

For example, here’s the Details section on one of MySQL jobs in all of its beauty:

2. Make use of job scheduling. If you have many database instances (and with dbWatch you can), job scheduling can be an invaluable resource. To schedule a job, right-click it and click Set Schedule. You can then set the job to run every minute, hour, day of the week, or week of the year:

3. With dbWatch you can also connect to, or shut down, all servers at once – that can help when scaling your database instances too.

 

4. You can also make use of FDL (Farm Data Language). This one deserves an entire book of its own (take a look at the documentation), but in short, FDL helps with a multitude of things, from filtering instances by database name to sorting the results of a column in ascending or descending order. For example, with #sort your instances are sorted in the given order (use asc for ascending, desc for descending):
instance{#sort(asc)}

FDL can be really useful if you want to scale MySQL further and push your MySQL instances to their limits.

Summary

If you’re a MySQL DBA, chances are you will face MySQL scaling issues sometime in the future. Your queries might become slower and slower, your traffic might keep increasing, and so on.

When using dbWatch you can make sure these kinds of problems will be solved quickly, securely, and effectively no matter what kind of database instance you use. 

Learn how to solve your challenges of scaling MySQL or other kinds of database instances – book a demo today.

5 Tips for Server Migration Planning


As a DBA, infrastructure admin, or manager, you may have been tasked to lead, plan, and execute a server migration project – for example, to improve application performance with new or upgraded hardware, or to optimize the enterprise architecture.

 

Planning for a server migration can be quite complex, especially in large enterprises with many different servers used for various web applications, internal applications, databases, and many more. Today I will discuss some of the issues you need to consider when planning a server migration.


The 5 R’s to consider when planning a Server Migration are the following:

1. Re-hosting by Moving to a New Host (also known as Lift and Shift)

This approach involves moving your current infrastructure, applications, or databases as-is to another server, another data center, or the cloud.

Advantages of Re-hosting:

  • Simple migration: all applications and databases are re-hosted as they are
  • No optimizations, architecture changes, or code changes – you move to other servers or the cloud without making additional changes

Disadvantages of Re-hosting:

  • Can introduce security risks if users and roles are not applied correctly
  • Can cause application failure or service unavailability if processes, security roles, and jobs are not synchronized correctly

2. Re-platform with a New Database Platform or Infrastructure

Re-platforming involves migrating to a new platform or infrastructure – for example, moving your on-premises databases to an Azure managed instance in the cloud, or your on-premises web application to AWS Elastic Beanstalk.

Advantages of Re-platforming

  • The core architecture does not change, though some code changes may be required for optimization

Disadvantages of Re-platforming

  • It will most likely require extra time and effort to apply code changes
  • You may need to use different toolkits and packages available only on the new platform

3. Re-target to Another Platform

Moving to a different product or database platform (e.g., Oracle to MariaDB). For example, you might migrate your Oracle databases to MariaDB to save substantial licensing costs. Moving from one database platform to another requires changes to your stored procedures and packages.

Advantages of Re-targeting

  • You migrate the database to a better-suited solution, for reasons such as cost-benefit or feature/function availability.

Disadvantages of Re-targeting

  • Migrating to a new database platform can be time-consuming, as you need to map out every process happening within the database.
  • You will need to learn new tools and technologies if you are not already familiar with the target platform

4. Refactor

Restructuring the enterprise architecture (databases and applications)

Advantages of Refactoring

  • Overhauls the application, driven by the business need to add new features and functions
  • Optimizes the overall application and its resource usage through query optimization and query rewriting
  • Long-term cost savings, as your applications and databases run optimally and are properly provisioned.

Disadvantages of Refactoring

  • It may require a lot of DBA/developer time and effort to refactor the whole application/database architecture
  • It may introduce new bugs or issues
  • It requires extensive planning and testing before it is stable and ready

5. Retire

Turn off things that are no longer used or running, perhaps as a result of refactoring the whole environment. Consolidate infrequently used database instances onto other servers that have spare capacity and resources.

Advantages of Retiring

  • Save costs up to 10 – 30%
  • It helps reduce security vulnerabilities
  • Fewer servers to maintain, monitor, and manage
  • Simplify environment
  • Remove old legacy platforms and apps
  • Better overview
  • Cleanup

Disadvantages of Retiring

  • Hard to verify that instances or databases are no longer in use without extensive activity logs going back a long time, or extensive manual checks
  • Moving databases may introduce unexpected side effects

Summary for Your Server Migration Planning

When planning your server migration, always remember the 5 R’s, which are:

  • Re-hosting
  • Re-platform
  • Re-target
  • Refactor
  • Retire

Before you migrate your servers, you should put a monitoring and management solution in place to keep track of your applications or databases’ health.

 

dbWatch Control Center allows you to track your database health and performance using different database monitoring jobs for every performance metric.

 

Control Center has built-in reporting capabilities that give management a quick overview of the database farm’s status, capacity, and resource usage.

Track your database health, try dbWatch Control Center today.

Monitoring the Performance of Database Indexes with dbWatch


Most database administrators face a few common problems, one of which is optimizing query performance. Whenever query optimization is mentioned, chances are you will see advice about indexes. Today we look at why indexes are so essential and dive into how to monitor your database indexes’ performance with dbWatch.


What are Database Indexes?

In the database world, indexes are data structures that are frequently used to improve the speed of data retrieval operations. Indexes make data retrieval faster because, when indexes are in use, databases can quickly locate data without having to scan through every row in a table each time it’s accessed. Using indexes has both upsides and downsides – we will start with the good, then cover the minuses, and finally show how to monitor the performance of your database indexes using dbWatch.

Advantages and Disadvantages of Using Database Indexes

There are a few main benefits of using database indexes. We will use MySQL as an example; in this relational database management system, indexes can be used to, among other things:

  • Quickly and efficiently find rows matching a WHERE clause.
  • Retrieve rows from other tables in JOIN operations. 
  • Save disk I/O when values are retrieved straight from the index structure. 

However, we mustn’t forget that what has advantages probably has disadvantages too. Here are the disadvantages of using indexes in MySQL: 

  • One of the main drawbacks of using indexes in MySQL is that your data will consume more space than usual. 
  • Indexes degrade the performance of certain types of queries in MySQL – INSERT, UPDATE, and DELETE queries can be significantly slower on indexed columns. When data is updated, the index needs to be updated together with it. 
  • You may create redundant indexes in MySQL (e.g., you might index the same column two or three times by adding an ordinary INDEX, a FULLTEXT index, and a PRIMARY KEY or a UNIQUE INDEX). Keep in mind that MySQL does not error out when you define multiple indexes on the same column, so it never hurts to be careful. 
  • We also must not forget that there are multiple index types in MySQL. A PRIMARY KEY allows you to use automatically incrementing values; an ordinary INDEX accepts NULL values and is frequently used to speed up SELECT operations (while slowing down INSERT, UPDATE, and DELETE queries); a UNIQUE INDEX can be used to prevent duplicate rows; a FULLTEXT index provides full-text search capabilities; and if you want to store rows in descending order, you can use a DESCENDING INDEX. 

Monitoring the Performance of Indexes Using dbWatch

To monitor the performance of your database indexes using dbWatch, you can utilize a couple of methods outlined below: 

  • dbWatch allows you to see your database growth rates. For that, dbWatch has two specific jobs that show the aggregated and detailed growth rates of your databases, regardless of the platform you use. Here’s what the aggregated growth rates look like: 

Database growth rates as seen in dbWatch.

The red line depicts the data size, the orange line the index size, and the green line the total size. 

 

By observing the aggregated growth rates of your database, you can easily see the data and index size on your database server, helping you decide whether your indexes are becoming redundant. 

 

Here’s what the detailed growth rates look like: 

detailed growth rate as seen in dbWatch

Detailed growth rates show a chart detailing the growth rate for the largest databases on the server. Both jobs also display dates, letting you observe how your databases grew over time. 

 

If your indexes’ size is very small, it might be time to look into a different optimization method. On the other hand, if your indexes are larger, they can be the primary reason your queries run efficiently. It all depends on the index – indexes are critical for good performance, but they are often misunderstood, so indexing can cause more hassle than it’s worth. To get the best out of the indexes in use in your database management system, you can also utilize the InnoDB buffer pool check job or the MyISAM key buffer check job – these jobs give you an excellent indication of buffer pool utilization in InnoDB or key buffer utilization in MyISAM. 

 

The InnoDB buffer pool check job can be configured to give an alarm or a warning if the buffer utilization exceeds a certain value in percent, allowing you to keep an eye on the buffer pool at all times – since the buffer pool is maintained primarily for caching data and indexes in memory, monitoring its performance can be a crucial aspect of monitoring the performance of your database indexes with dbWatch: 

monitoring performance menu in dbWatch

Configure menu in dbWatch

The same can be said about the MyISAM key buffer check job. Once again, this job can be found on the left side of the dbWatch Control Center. All that’s left is to configure and enable it:

The dbWatch configure menu.

When configuring the job, keep in mind that there are a couple more parameters that you can use: 

  • You can choose the number of days you want to keep the data for – after the specified number of days has passed, the data is discarded. 
  • The job can give you an alarm or a warning if the buffer utilization exceeds certain specified values in percent. 
  • The job can give you an alarm or a warning if the read ratio exceeds certain specified values in percent. 

The configure key buffer

 

The key buffer utilization alarms are beneficial not only when you want to know whether the indexes you use in MyISAM are effective, but also when deciding when to upgrade your database instances or the servers they run on (e.g., if buffer utilization constantly exceeds, say, 90%, it might be time to look at how you can extend your resources to accommodate the data and indexes you use). 
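As a rough sketch of the arithmetic behind such a threshold, the snippet below computes key buffer utilization from MySQL’s key_buffer_size variable and Key_blocks_unused status counter (using the formula given in the MySQL manual for Key_blocks_unused) and maps it to a status code. The 80%/90% thresholds and the helper names are illustrative assumptions, not dbWatch’s actual defaults or implementation.

```python
def key_buffer_utilization(key_buffer_size, key_blocks_unused,
                           key_cache_block_size=1024):
    """Fraction of the MyISAM key buffer in use.

    Formula from the MySQL manual's description of Key_blocks_unused:
    1 - ((Key_blocks_unused * key_cache_block_size) / key_buffer_size).
    """
    return 1 - (key_blocks_unused * key_cache_block_size) / key_buffer_size

def buffer_status(utilization, warn=0.80, alarm=0.90):
    """Map utilization to a dbWatch-style status code.

    0 = OK, 1 = WARNING, 2 = ALARM. Thresholds here are illustrative."""
    if utilization >= alarm:
        return 2
    if utilization >= warn:
        return 1
    return 0

# Example: a 128 MB key buffer with 6400 unused 1 KB blocks is ~95% used.
util = key_buffer_utilization(128 * 1024 * 1024, 6400)
print(f"{util:.0%}", buffer_status(util))
```

With these sample numbers the utilization crosses the 90% mark, so the sketch reports an alarm – the situation the paragraph above describes.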

Summary of Database Performance Monitoring with dbWatch

Monitoring your indexes’ performance with dbWatch can be a substantial step toward ensuring that some of your database queries (e.g., search queries) stay fast and efficient. Keep in mind that indexes usually slow down certain types of queries (e.g., INSERT and UPDATE queries), but if you have a lot of data, indexes can be very handy. When using indexes, remember that there are separate types of them (for example, B-Tree indexes and FULLTEXT indexes; PRIMARY KEYs are indexes too) and that you can have multiple types of indexes on the same column at once. 

 

Software developed by dbWatch can help you monitor the performance of the database indexes you use – the database growth rate job helps you check the size of your indexes so you can decide whether they’re efficient, the InnoDB buffer pool check job helps you monitor the data and indexes of your InnoDB tables, and the key buffer check job helps you monitor your MyISAM table performance.

To understand more about database performance monitoring, book a demo today.

 

 

Monitoring MyISAM Performance with dbWatch, a Guide

Why MyISAM?

MyISAM was the default MySQL storage engine prior to MySQL 5.5, released in 2009. The MyISAM engine is based on older ISAM code. One key difference between InnoDB and MyISAM is that MyISAM is significantly faster for COUNT(*) queries, because MyISAM stores the row count in the table metadata; InnoDB does not.

What do You Need to Monitor in MyISAM?

Monitoring your MyISAM-based table performance is crucial if you use MyISAM to store data. One of the most crucial things related to this engine is the MyISAM key buffer (also called the key cache). MyISAM employs a cache mechanism to keep the most frequently accessed table blocks in memory to minimize disk I/O.

The key cache (or key buffer) structure contains a number of block buffers where the most-used index blocks are placed. When the MyISAM key cache is in use, multiple sessions can access the cache concurrently. To control the size of the key cache in MyISAM, use the key_buffer_size variable. If this variable is equal to zero, no key cache is used.

 

How to Monitor the Performance of MyISAM with dbWatch?

Thankfully, if you want to monitor your MyISAM-based tables with dbWatch, doing so is pretty simple. Open up the dbWatch Control Center, navigate to your MySQL instance and expand the performance section as shown. You should see a “Key buffer check” job underneath:

 

 

The next step is to configure the job. The task’s options are:

– Buffer utilization alarm threshold – dbWatch will raise an ALARM status if the buffer utilization exceeds the specified value in percent.

– Buffer utilization warning threshold – dbWatch will present a WARNING when the buffer utilization of the MyISAM engine exceeds the specified value, which is useful if you want to monitor your MyISAM performance or know when to migrate to a new server.

– Read ratio alarm threshold – raises an alarm if the reads exceed the specified value in percent.

– Read ratio warning threshold – presents a warning if the reads exceed the specified value in percent.

 

In general, this job is beneficial if you want to ensure that the MyISAM engine uses your server resources well. If the default thresholds don’t fit your needs, you are free to change them.

 


Summary


To summarize, the MyISAM key buffer check job in the dbWatch Control Center can be very useful if your MySQL data is stored using the MyISAM engine. The job helps you verify that your server resources are utilized for the MyISAM engine’s best performance.

 

Monitoring SQL Server error logs

Screen giving an idea of Monitoring sql server error logs

As a data platform expert who manages and maintains hundreds or maybe thousands of database instances, you may be looking for a better way to manage this number of servers more efficiently. An environment this huge, with many vital applications relying on those databases, can be quite difficult to keep track of – especially when you’re busy with administration work and, with little time and few resources, struggle to prioritize your daily tasks.

 

As your day goes on, you will encounter the usual errors that happen from time to time. They may have different causes: a user tried to enter their credentials but failed to log in because the credentials had expired, a job failed due to a query timeout, or a backup job failed for an unknown reason.


The errors mentioned are automatically logged in your SQL Server error logs. You can view all of these by expanding SQL Server Agent in Management Studio, where you will see the logs of your database instance.

 

When monitoring tens or maybe hundreds of database instances, you need to be able to track and efficiently monitor errors happening in each instance, within the database farm you are managing.

How to Easily Track SQL Instances

You may think there is no easy way to manage and have full control of everything happening within each of your instances. Luckily, dbWatch can assist you with your daily monitoring tasks.

 

dbWatch contains various tasks that help you manage and monitor the error logs of your database instances. dbWatch helps you keep track of all errors encountered in your database farm. You can also monitor error log file sizes and check whether a file is growing too much or too fast within a given period. And lastly, you can configure when you wish to recycle the error logs within your database instance, which helps free up the disk space consumed by the growing number of logs. 

How to Track Error Logs Within dbWatch

  • You’ll need to start by going to the monitoring module.
  • Here you’ll select a SQL Server instance. Right-click and select Configure monitoring. (See Figure 1, below)
Selecting SQL server, screen shot
Figure 1

In this window, make sure that you have installed the tasks for monitoring error logs in SQL Server. (See Figure 2, below)

The following tasks are:

  • Instance error log
  • Instance error log file size check
  • Cycle error log

showing how to install the tasks for monitoring error logs in SQL Server

After installing the tasks, go back to the monitoring module.

In the Name column, type error in the filter to find the tasks related to error logs. (Figure 3, below)

how to input error on the filter to find the tasks regarding the error logs.
Figure 3

If you right-click on Instance error log, you have the option to run the task to retrieve updated information from the error logs collected from the instance. (Figure 4, below)

How to retrieve the updated information of the collected error logs
Figure 4

Click on Details to view the actual errors collected from this SQL Server instance. (Figure 5, below)

How to view the actual errors
Figure 5

A new window will appear, showing you details of the errors that occurred in your SQL Server instance.

For example, the last lines read from error log table displays all lines read from the error log file by this task during its last execution.

 

The error history details table shows the history of the last 20 records collected from your SQL Server error log file (with the error text defined by the “error text” parameter). (Figure 6, below)

 

Later in the blog, you’ll learn how to define the error text parameter to filter out errors that you’d like excluded.

Figure 6

SQL Server Error History

The error history graph shows the number of errors per day being registered in the log, helping you determine if the errors you are encountering are growing daily. (Figure 7)

SQL server error history graph.
Figure 7
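A per-day error count like the one shown in the error history graph can be sketched in a few lines. The sample records below are invented purely for illustration; dbWatch derives the real numbers from the collected error log.

```python
from collections import Counter
from datetime import date

# Hypothetical pre-parsed error log records: (date, message).
records = [
    (date(2023, 5, 1), "Login failed for user 'app'"),
    (date(2023, 5, 1), "BACKUP failed to complete"),
    (date(2023, 5, 2), "Login failed for user 'app'"),
    (date(2023, 5, 2), "Login failed for user 'etl'"),
    (date(2023, 5, 2), "Query timeout expired"),
]

# Count errors per day, as the history graph does.
errors_per_day = Counter(day for day, _ in records)
days = sorted(errors_per_day)
for day in days:
    print(day, errors_per_day[day])

# A simple growth signal: is the latest day's count higher than the previous?
growing = errors_per_day[days[-1]] > errors_per_day[days[-2]]
print("errors increasing:", growing)
```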

You don’t want the monitoring task to consume too many resources, and if it takes a long time to read the error log, you should investigate (even if there are no errors in the log file). If the number of records read is very high, it could indicate potential problems in your instance. (See Figures 8 and 9, below.)

Showing a high readings in the SQL error log graph.
Figure 8
Another example of high error log readings in SQL Server
Figure 9

And lastly, the alert execution aggregated statistics graph: the left vertical axis shows the total rows read per day, and the right vertical axis shows the total time spent reading the SQL Server error log per day. This helps you determine whether you are experiencing an increase in errors on your instance. With this information, you can dive deeper into the instance where the number of errors is growing.

 

Going back to the monitoring view, click on Configure. (Figure 10, below)

Image shows how to return to monitoring view
Figure 10

A configuration window will appear for this Instance error log task. (Figure 11, below)

The Confirmation window in dbWatch.
Figure 11

You can modify the configuration based on your requirements.

The options are:

  • Error text – defines which string (errors) the check must look for. Values must be separated with commas (,).
  • Error text allowed – Specifies the text strings (specified by the “error text” parameter) to be excluded when found in the same record (error log line).
  • History threshold – the maximum number of error messages the history table will retain (the error_log_err_histr_tab table).
  • Last run – the last time the error log has been checked.
  • Max elapsed time – the maximum execution time (in seconds) for the task. If this threshold is reached a warning is raised.
  • Max lines per execution – the maximum number of lines the task will read during execution before a warning/alarm is raised.
  • Return status – return status value (ALARM – 2, WARNING – 1, or OK – 0) when “error text” is found (or “max lines per execution” is reached).
  • Click apply to save the configuration.
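The interplay of the error text, error text allowed, and return status parameters above can be sketched as follows. This mirrors the documented behavior rather than dbWatch’s actual implementation, and the sample log lines are invented.

```python
def scan_error_log(lines, error_text, error_text_allowed=(), return_status=2):
    """Sketch of the 'Instance error log' matching logic described above.

    error_text: substrings that flag a line; error_text_allowed: substrings
    that exclude an otherwise-matching line. Returns (status, matches),
    where status is 0 (OK) or the configured return_status
    (1 = WARNING, 2 = ALARM)."""
    matches = []
    for line in lines:
        if any(err in line for err in error_text):
            if any(ok in line for ok in error_text_allowed):
                continue  # excluded by the "error text allowed" parameter
            matches.append(line)
    return (return_status if matches else 0), matches

log = [
    "2023-05-02 10:01:12 Login failed for user 'app'.",
    "2023-05-02 10:05:00 DBCC CHECKDB completed without errors.",
    "2023-05-02 10:30:41 BACKUP failed to complete the command.",
]
status, hits = scan_error_log(
    log,
    error_text=("failed", "error"),
    error_text_allowed=("without errors",))
print(status, hits)
```

Note how the “without errors” exclusion keeps the benign DBCC line from raising an alarm even though it contains the string “error”.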

You can set the schedule on when you wish this task to execute. (Figures 12 and 13, below)

How it looks when you schedule a task in dbWatch
Figure 12
The Pop-up window when scheduling a task in dbWatch.
Figure 13

Right-click on the Instance error log file size check and choose Details.

Note: To install and make this task (Instance error log file size check) work, you need to enable xp_cmdshell in SQL Server by executing the command below (Figure 14); you may also need to enable the ‘show advanced options’ setting first. Alternatively, you can set it via the configuration parameter. (Shown later.)

 EXEC sp_configure 'xp_cmdshell', 1;

 RECONFIGURE

 GO

Instance error log file size check
Figure 14

A new window will appear showing you the information in your error log directory. It shows the error log file name, create date and file size. (Figure 15, below)

How the error log directory looks in dbWatch.
Figure 15

The error log file size history graph shows you information of your historical error log size growth rate. (Figure 16)

The error log file size history in dbWatch.
Figure 16

The error log directory size history graph displays the number of files in your log directory and the total file size of the accumulated logs. This information helps with decision making – you may consider clearing error log files that are no longer needed, as they consume additional disk space in your environment. (Figure 17, below)

Error log directory size history as seen in dbWatch.
Figure 17

Go back to the monitoring view, and right-click again on the Instance error log file size check. Choose Configure. (Figure 18)

How to check on Instance error log file size check.
Figure 18

Again, you can modify the following configuration based on your requirements.

The options are:

  • Enable xp_cmdshell – if set to “YES”, the SQL Server instance configuration xp_cmdshell will be enabled. This parameter is required if you want to allow the task to execute operating system commands to discover the size of files and directories.
  • Error log directory size threshold – the maximum total size (in MB) of all files in the error log-file directory.
  • Error log file path – path where the error log file is located.
  • Error log size threshold – maximum size (in MB) of error log file before a warning or an alarm is returned by the alert.
  • File count threshold – the maximum number of files in the error log catalog.
  • History threshold – the maximum number of days of error log file size history statistics to retain.
  • Return status – return status value (ALARM – 2, WARNING – 1, or OK – 0) when the “* threshold” parameters are reached.
  • Click apply to save the configuration.
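As a rough illustration of what the directory size and file count thresholds measure, the sketch below sums the file sizes in a directory and applies both checks. The function and threshold values are hypothetical; dbWatch’s real task gathers these figures inside SQL Server via xp_cmdshell.

```python
import os
import tempfile

def check_log_directory(path, dir_size_mb_threshold=100.0,
                        file_count_threshold=7):
    """Sketch of the directory size and file count checks described above.

    Returns (total_mb, file_count, status), where status is 0 (OK) or
    1 (WARNING) once either illustrative threshold is reached."""
    files = [e for e in os.scandir(path) if e.is_file()]
    total_mb = sum(e.stat().st_size for e in files) / (1024 * 1024)
    status = 1 if (total_mb >= dir_size_mb_threshold
                   or len(files) >= file_count_threshold) else 0
    return total_mb, len(files), status

# Demo against a throwaway directory holding two small "error log" files.
with tempfile.TemporaryDirectory() as tmp:
    for name in ("ERRORLOG", "ERRORLOG.1"):
        with open(os.path.join(tmp, name), "w") as f:
            f.write("log line\n" * 100)
    result = check_log_directory(tmp, dir_size_mb_threshold=1.0,
                                 file_count_threshold=2)
print(result)
```

Here the demo trips the file count threshold (2 files) rather than the size threshold, so the check returns a warning status.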
Showing how to check cycle error log.
Figure 19

Choose Cycle error log. Right-click, then select Details.

A new window will appear displaying the specific log file and the date when it was recycled. This tells you whether the error log was cleared on a specific date. (Figure 20)

The window shown here displays the specific log file.
Figure 20

Go back to the monitoring view. Right-click and choose Configure.

A configuration window will appear which you can modify. (Figure 21)

The options are:

  • Cycle History – the number of days of information to keep about when the error log and agent error log were recycled.
  • Number of SQL Server error logs – by default, there are 7 SQL Server error logs: Errorlog and Errorlog 1 to 6. This parameter can be used to modify the registry value (REG_DWORD type) to the number of logs you wish to maintain.
  • Click apply to save the configuration.
A configuration window to modify.
Figure 21

Summary of Monitoring SQL Server Error Logs

Now you have an idea of how easily you can track the various errors happening within your database farm. You can set the return status so that you will be notified once a certain baseline or error has been identified by dbWatch.

 

You can also configure your own parameters or baselines based on your business requirements. It is a great way to customize different alerts based on the errors retrieved by the dbWatch task. For more information, visit: wiki.dbWatch.com 

 

Monitor your database error logs, try dbWatch Control Center today