Track your SQL Server database compatibility level with Control Center

In this blog, I will discuss SQL Server's database compatibility level. Using dbWatch, we will check the current setting and configure the right compatibility level based on the version you are using.

Each new version of SQL Server introduces new functionality and performance benefits. These are identified by compatibility levels and cardinality estimators.

As you can see, each database version is associated with a compatibility level, so if you want to utilize the newest features available for your database, you need to check and set the right compatibility level for your instance.

As a DBA, you must ensure the compatibility level of each database matches your instance's current SQL Server version.

A common problem in enterprises is taking too long to migrate to a newer version of SQL Server.

Some DBAs end up migrating in two steps. For example, they first migrate a SQL Server 2000 database to SQL Server 2008 R2, and then migrate it to SQL Server 2019. This is a good reason NOT to wait too long before migrating a SQL Server database to the latest version.

Sooner or later, your compatibility level will have to change, and this will impact your applications and processes. By doing it sooner, you will reduce the amount of work you will have to do later.

You can manually check each database's compatibility level by opening the database properties window in SQL Server Management Studio.
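If you prefer a script to clicking through properties windows, the same information is exposed through the sys.databases catalog view. A minimal check in plain T-SQL (nothing dbWatch-specific) looks like this:

-- List every database on the instance with its compatibility level
SELECT name, compatibility_level
FROM sys.databases
ORDER BY compatibility_level;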

But at the database farm level, especially when managing hundreds or even thousands of databases, checking databases one by one is not an efficient way to investigate their compatibility levels.

dbWatch Control Center provides an overview of your databases' compatibility levels and displays other attributes of the SQL Server instances they run on.

Let us start by clicking the Farm Management icon on the left side of your Control Center monitor, as seen in the image below.

Expand Inventory Overview Performance. 

Click on the Per platform option and choose MS SQL Server. 

This view displays different attributes: instance status, name, database version, edition, port number, instance name, host, host IP, OS type, the creation date of your database, and total DB count.

With this information, you will have a quick understanding of the different versions of SQL Server running within your database farm.

At the top, choose the Compatibility tab.

At the top of this view, you can see the server and database compatibility levels.

It has 2 groups: 

Databases with lower compatibility than the server

Databases with the same compatibility as the server

As you can see, there are 2 databases that use a lower compatibility level than the server's version, and 60 databases with the same compatibility level as the server.

On the right side, a pie chart displays the different compatibility levels configured on your databases: 2 databases at the SQL Server 2014 (120) compatibility level, 50 at SQL Server 2016 (130), and 10 at SQL Server 2019 (150).

The bottom of the view displays the instance name, database, status, creation date, recovery model, database compatibility, server compatibility, and server version.

With this information, you can analyze which databases you need to reconfigure and update their assigned compatibility level to match the server's compatibility level.
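Once you have identified a database that lags behind, raising its level is a one-line change. A sketch, using a hypothetical database named SalesDB on a SQL Server 2019 instance; test your applications first, since the change takes effect immediately:

-- Raise the database to the SQL Server 2019 (150) compatibility level
ALTER DATABASE SalesDB
SET COMPATIBILITY_LEVEL = 150;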

In the image below, you can see a database farm with a total of 903 SQL Server databases on SQL Server 2008 R2. This feature is very handy when managing such huge numbers of databases, as it helps you keep track of your databases' compatibility levels relative to the latest available version.

Changing the compatibility level tells the instance to change its feature set; it might add new features or remove old ones, so be wary of changes that might affect your existing applications.
The compatibility report view enables DBAs to find potential performance improvements by upgrading the compatibility level of databases that are a version behind.

For more info visit the website www.dbWatch.com  

For more tutorials visit wiki.dbWatch.com/ControlCenter 

Follow us on Facebook, LinkedIn, and Twitter - dbWatch  

Also check out our YouTube channel dbWatch  

Database Performance Degradation: Ways to fix and optimize it

The New Normal solidified the preference for online transactions. From banks to retailers to the food industry, information systems have adapted with the times to meet customer demand. However, not everything is rosy in this period of the new normal. The increase in demand can degrade your database performance. The most common symptom is a slow-running database that is visible at the application level.

What is database slowness?

Let’s discuss database performance first. Database performance is the speed at which your database responds to your request to access its stored data. To visualize it: a simple SELECT statement, when executed, pulls resources from your database and returns tabulated data for display. This seems fine for the first thousand records, but when you are querying tens of thousands of rows or more, reduced performance can often be noticed.

Database slowness is that stunted performance. The main difference between that one slow SELECT statement and database slowness is that the latter is a chronic ailment of your databases.

Slowness can be attributed to multiple factors such as: 

– network problems
– missing indexes
– improper caching
– unoptimized queries
– database architecture

Your database's existing structure contributes heavily to reduced performance, and as DBAs, it is our primary responsibility to mitigate these effects.

What can you do?

Monitor your network and memory

– other nodes
– estimate disk space and cache memory

Constantly check which applications are consuming a lot of resources. Be wary of cache memory and disk space, as they can deplete with continuous reading and writing to your databases. Monitor your network connections to avoid any unplanned downtime. See to it that the allocated bandwidth is sufficient and that latency to and between database servers is kept to a minimum.
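For a quick manual spot check of free disk space as SQL Server sees it, one convenient (if undocumented) option is the xp_fixeddrives extended procedure; treat this as a sketch for ad hoc checks, not a monitoring strategy:

-- Free space (MB) per fixed drive, as seen by the SQL Server service
EXEC master..xp_fixeddrives;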

Coordinate with your network team, developers, and other related departments to determine the root cause of the problem, especially if it is related to database performance. Fixing slowness won't happen overnight, but with gradual steps, the system will be tuned to its peak performance.

Review your database architecture and structure

Poorly defined and maintained databases cause retrieval latency and performance issues. Missing indexes and unnecessary table locks are not the only contributors to undesirable database performance; database architecture also plays a significant role.

You must review whether your overall architecture follows correct normalization. Duplicate data and missing primary keys are common problems in fledgling database systems. These are easy to shelve in the backlog. Yet, by constantly pushing them down your priority list, the cracks in your database architecture begin to show. Sooner or later, addressing those cracks becomes labor-intensive, as DBAs will have to work longer hours, and costly, because you need to plan when and how much time to dedicate to fixing them.

Take, for example, re-indexing tables containing millions of master records. The best approach would be to schedule it on a weekend with few transactions against the database. Then formulate a strategy for the execution and rollback, and communicate the scheduled maintenance hours to all stakeholders. Most importantly, secure DBAs to work during that time frame. Essentially, procrastination negatively impacts the business when bad quality persists.
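For the re-indexing example above, the weekend maintenance window often boils down to a few statements. A sketch, assuming a hypothetical dbo.MasterRecords table and an edition that supports online rebuilds:

-- Rebuild every index on the table without taking it offline
ALTER INDEX ALL ON dbo.MasterRecords
REBUILD WITH (ONLINE = ON);

When you then sit down to review the architecture itself, the following questions help structure the exercise: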

  1. Does the design make sense?
  2. What is the probability of duplicate data per row? Is it low? Is it high?
  3. When I query, will I need to go to another schema just to retrieve that other table?
  4. Are there redundant data that will be stored in every field?
  5. Are the data types properly defined? Are there composite data types I should be wary of?
  6. Are the primary and foreign keys properly defined? Will they lead to a conflict of inputting data?
  7. Is the chosen indexing approach sufficient to optimize my searches?
  8. Should I use VIEWS as a substitute for this table?
  9. How will we handle orphan data?
  10. Will we automatically flush or clean cache memory once data rows are deleted?
Based on your answers, you get a feel for what your ideal database architecture should be versus the reality of it. Although this falls under the authority of DevOps and developers, as DBAs it is also our responsibility to manage databases.

Query Plan. Plan. Plan

Whether you are a senior DBA or a junior DBA, always remember to value your query execution plans. Utilize your EXPLAIN statements and execution plan tabs. Do this with Oracle, Postgres, and any other platform as well.
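In SQL Server, the counterpart of EXPLAIN is to ask the engine for the plan or for I/O and timing statistics before running the query. A minimal sketch, assuming a hypothetical dbo.Orders table:

-- Show logical reads and CPU/elapsed time alongside the results
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM dbo.Orders;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

You can also press Ctrl+L in Management Studio to display the estimated execution plan without running the query.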

Double check your database objects and queries such as: 

1. Stored procedures 

2. Functions
3. Ad hoc queries
4. Connecting application queries

See to it that upon their execution, they do not consume a large portion of your database resources. 

JOIN and subquery statements can also be headaches to deal with. Careless developers will match up tables containing tens of thousands of records with ambiguous or incorrect join keys. Poorly scripted subqueries will often lead to slower returns and can return NULL values even when the conditions are met. If possible, use JOIN statements, as the join is evaluated as part of the FROM clause and limits the returned records before the WHERE condition is applied.
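To illustrate, the two queries below both return customers that have at least one order, assuming hypothetical dbo.Customers and dbo.Orders tables. On modern versions the optimizer often produces similar plans for both, so always verify with the execution plan:

-- Subquery form: easy to write, harder to read at scale
SELECT c.name
FROM dbo.Customers AS c
WHERE c.customer_id IN (SELECT o.customer_id FROM dbo.Orders AS o);

-- JOIN form: the row sets are matched and reduced up front
SELECT DISTINCT c.name
FROM dbo.Customers AS c
JOIN dbo.Orders AS o ON o.customer_id = c.customer_id;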

DBAs should search for these anomalies while finding ways to optimize them. Fine-tuning these queries is no joke; it may take hours to optimize a single query.

————————–

With all this talk about optimization, management, and monitoring, DBAs have a lot on their hands. It will be a challenge to focus on other tasks, especially if your company is suffering from reduced database performance. However, you don't have to do these tasks manually.

A database management and monitoring solution such as dbWatch will assist you in tracking down resource-heavy queries. dbWatch Control Center will monitor your database's memory and storage, such as disk space, cache memory, and temporary tables. These are supervised by jobs that deliver the information right to your dashboard. Boxed in red are the jobs checking memory usage.

Database and server connections are also constantly checked by dbWatch Control Center. The monitoring dashboard lists all possible issues and provides keen statistics on databases when a problem occurs. The screenshot below highlights in red the issues encountered when the scheduled job runs. Predominantly, no backup plans were established for the instances listed, which categorizes them as recurring issues.

The green box shows the statistics of instances. There are 5 categories with differing colors. Red indicates a major problem has occurred within that instance. Yellow shows that a DBA's intervention may be needed to properly address that instance's issue. Green highlights instances showing no signs of problems. Blue displays instances that are down, while gray points out newly added instances without monitoring jobs installed.

On the database management side, you can perform your database administration tasks in a single window. Upon accessing the management module, look at the left pane of the screen. Here, you will see a tree-like structure (see red box) where you can do the following:

1. Configure your database security

2. Alter indexes

3. Set up backups and restore them

4. Configure your SQL agent

You don’t need to access your database management studio to accomplish these tasks. All you need is your dbWatch Control Center monitor. 

How's that for convenience? An all-in-one tool that provides solutions to your database problems.

 

Tips On How You Can Achieve A Cost-Efficient Database Farm With dbWatch

IT planning within an enterprise is crucial, whether you are a startup company or a big enterprise undergoing major changes. You need full cooperation from various teams within the company, working together to determine how to improve the business in terms of finances, infrastructure, new projects, etc.

For example, your CTO has decided to migrate various applications and database systems from physical servers to virtual machines. He would also like to migrate from one database platform to another for budgetary reasons. These changes should be carefully planned; proper communication and coordination within the business and among its key members determine the success of the project.

Having said that, another focus of the management is to save IT cost without undermining the efficiency and reliability of the whole IT infrastructure.

We will discuss key points that will help you analyze and identify resources for optimization, achieve cost efficiency by tracking license usage, and save time and money by properly utilizing available resources.

Here are the key points:

Analyze resource data – disk usage rate, disk/memory free space, unused/used servers

Task Automation – automate routine tasks to save DBAs (Database Administrator) time

Database licensing and feature usage

Analyze Resource Data

Daily operations in an enterprise might generate terabytes to petabytes of data. It could be customer transactions, employee records, operations data, reporting data for data warehousing and many more.

Proper handling of this data can help your company find opportunities to make the best business decisions. In an enterprise database farm, you need the proper tools to analyze data within the farm. You need to determine which instances or servers have been over- or under-provisioned with resources, and identify the CPU and memory utilization of each server.

dbWatch is an enterprise monitoring and management solution that helps you keep track of your overall server utilization and keeps track of instances that are over or underutilized.

This will help you and your management decide whether to purchase, consolidate, or re-provision resources for the servers that need more power. That leads to cost savings, because you don't need to buy expensive resources if you can consolidate the ones that are rarely used.

Here is a simple example of a dbWatch dashboard which gives you an overview of memory and disk usage within your database farm. In the screenshot below, you can see key indicators such as the total physical memory usage (Physical (MB)) and physical disk usage per instance.

With this overview available, you can easily determine which instances use the most or least memory.

In the next screenshot, you can see key indicators such as the instance usage percentage, the number of disk volumes allocated per instance, and the size in GB per volume.

Having this overview, especially when you are managing a large number of database instances within your enterprise, helps you have full control of your database farm. It will help you make better decisions when doing your server consolidation. Instead of buying new hardware resources, you can easily save money by re-purposing some excess resources available.
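If you ever want to cross-check the dashboard numbers on a single SQL Server instance, the raw figures come from dynamic management views such as sys.dm_os_process_memory (VIEW SERVER STATE permission required); a sketch:

-- Physical memory currently used by this SQL Server process
SELECT physical_memory_in_use_kb / 1024 AS memory_in_use_mb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;

dbWatch collects this kind of data for you across every instance, which is what makes the farm-wide overview possible.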

Task Automation

As DBAs (Database Administrators) or developers, we always look for ways to make tasks simpler and easier, focusing on automating the tasks we work on daily.

When managing a database farm with a large number of instances, you cannot simply execute your maintenance scripts manually on each database. When monitoring that many instances, it is very inefficient to remotely log in and manually connect to each server just to check the performance, status, and running jobs of each instance.

dbWatch helps you monitor and automate your routine maintenance tasks so that you can focus on more critical ones.
In the screenshot below, you can see the maintenance tab, where dbWatch displays the various maintenance tasks executed on different instances.

Here you can see the key indicators: number of databases (DB) checked, number of index rebuilds/reorganizes, disk space before rebuilding, disk space after rebuilding, and total size reduction.

You can easily schedule various maintenance tasks, and dbWatch will automate all the routine tasks in your database farm. dbWatch can also automate your monitoring through configurable alerts and notifications; refer to the article Performance alerts and notifications with dbWatch to learn more.

Database licensing and feature usage

It is necessary to keep track of your database licensing, especially when using Oracle databases, where each feature has a certain cost. You need an audit of the features enabled within your database farm to avoid unexpected charges from using these functions.
As an example, consider a scenario where a manager asks his team how they can track the various features currently enabled within a specific Oracle instance. This is relevant and important, as he needs to know whether they have added costs coming from using certain Oracle features.

dbWatch can keep track of Oracle licensing within each instance, show all Oracle features enabled within the instance, and display information on whether an option being used has an extra cost.
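For reference, the data behind such a report is exposed by Oracle itself in the dba_feature_usage_statistics view, so you can sanity-check any single instance by hand (DBA privileges required); a sketch:

-- Which Oracle features have been detected in use on this database
SELECT name, version, detected_usages, currently_used
FROM dba_feature_usage_statistics
WHERE detected_usages > 0
ORDER BY name;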

The screenshot shows a report generated in dbWatch along with Oracle component features being used within the instance.

With this information, we are aware of our Oracle instance licensing and of the features that carry an added cost, helping avoid unexpected Oracle charges. This feature is very useful for internal auditing and cost control.

These three tips will help you save time and money: analyze resource data intelligently, automate routine tasks, and audit the licensing of the instances within your database farm.
 
If you have any questions or would like to know more about how dbWatch can assist with your enterprise database monitoring and management, please contact us at info@dbwatch.com so that we can schedule a meeting.
For more information visit www.dbWatch.com or wiki.dbWatch.com/ControlCenter
Follow us on Facebook, LinkedIn, and Twitter, and subscribe to our YouTube channel – dbWatch

Can a database management solution help future-proof your business?

Future proofing is part and parcel of any business strategy. You need to plan for your business to succeed. A noteworthy approach is investing in a flexible and reliable database monitoring tool. By investing in it, your staff is equipped to increase productivity, and your business adapts to the changing landscape of your IT infrastructure. However, future proofing does not end with getting the right tool. How you apply the tool and how it adds value to your business determine your business's longevity and success.

In this article, we will discuss what a database monitoring tool is, why future proofing matters, and how a database monitoring tool can be applied to your business.

Tools For Growing Businesses

Future proofing plans are indispensable when a business is continually growing. As management plots the course for the company's direction, keeping everyone and everything on the same page becomes more important than ever.

A pragmatic approach for future proofing is equipping your IT staff with the right tools. In terms of database management, this enables them to handle the growing number of databases dispersed into numerous clustered instances.

As your business grows, your databases and servers will need to handle the additional strain. You can opt to upgrade your hardware resources or optimize your databases by scaling them horizontally or vertically. Either method will cope with the increase in demand, but implementing them can be costly.

With a database management tool, you get a more cost-effective way of applying either or both methods. Financially, you will not needlessly implement hardware upgrades, since the tool monitors your database's memory and provides real-time health checks. Time-wise, it saves you the additional steps of cascading database optimizations and monitoring all your databases, because you can do it in a single window. Won't it be great to reap the rewards of your business venture without worrying about future headaches?

Flexible Database Management Software

Business growth is unpredictable. You are not 100% certain what your exact needs for your IT systems will be five years down the line. In that regard, flexibility is another key attribute to look for in a database monitoring tool.
Future proofing reassures you that your business is prepared for whatever adversities may come. With a flexible database monitoring tool, you can grow your business the way you want.

Some database monitoring solutions are designed to work across multiple platforms and versions. This is great for businesses that are currently using multiple technologies or adopting a new database management system or license.
Flexible tools capable of working across multiple platforms and varying sizes of servers can be beneficial for your long-term business goals. Not only is your return on investment larger when you invest in a flexible database monitoring tool, you also get the advantage of:

  1. not needing to purchase different database tools,
  2. preventing additional cost to train your staff in handling the tool
  3. and decreasing the period of learning as your staff familiarizes themselves with the tool.

Evidently, there are numerous advantages, and they can be packaged neatly into one database monitoring tool.

Application of Future-proofing

Growing, scaling and monitoring

There are two ways of scaling: horizontal and vertical. Traditionally, vertical scaling has been the preferred method, since it involves focusing on a single component and improving upon it. But times change, and businesses are more attuned to horizontal scaling because it is more cost-effective and flexible than its counterpart.

Whether you choose to implement vertical scaling or horizontal scaling, one thing is certain: your databases will continue to expand. This is where a database monitoring tool comes in handy. For your monitoring needs, the tool hides the complexity of deep diving into health checks and status displays. You don't need to be a veteran DBA to understand it; the tool does the work for you. Even an amateur DBA can perform database monitoring without repeatedly relying on SQL scripting.

When a downtime scenario happens, a database monitoring tool will be your reliable partner. It pinpoints what went wrong and where the problem occurred. Even if your databases are not the cause of the issue, you can eliminate them as a suspect and move on to analyzing your servers. This saves steps in root-cause analysis and saves your staff time. Potentially, you can recuperate downtime losses faster and reassure your customers that they matter.

But what if your database is the issue? Automation is the key word. Automated backups and real-time analysis become your go-to options. As mentioned earlier, you get real-time analysis of the problems encountered. If your database is no longer viable, you have backups that can be restored to multiple instances at the press of a button.

Cost-benefit with hybrid environments

Moving into a cloud-based environment is a good alternative for future proofing. Scaling becomes more convenient, and you can quickly adapt to business demands. But there's a catch to adopting a cloud-based environment: it's a pay-per-usage model. Unchecked resource utilization and the vendor's discretion in provisioning cloud databases can eventually become the bane of your business.

But this should not discourage you from integrating the cloud into your business. Cloud services have their own unique advantages, and by utilizing them correctly, you gain more value in the long run.

Database monitoring is one key ingredient in tracking your cost. By knowing your actual resource utilization, cost can be efficiently reduced. Insights into acquiring additional resources or trimming them down through cost-benefit analysis are unlocked at the strategic level, while database performance monitoring, even inside a cloud platform, benefits DBAs at the technical level.

You can proportionally balance the utilization of both cloud and on-premises resources. It is mostly up to the business how to hybridize its IT structure, but with a database monitoring tool, these challenges are dealt with.

Aside from savings in your IT infrastructure, you also save on auxiliary costs for hybrid environments. You won't need to shoulder the incremental costs of procuring additional licenses, nor do you need to retrain your DBAs to familiarize themselves with cloud platform databases. You save them hours of extra effort so they can proceed with more relevant tasks. It's a win for you and for them.

Multiple platforms in a single view

Database platforms offer differing advantages and disadvantages in their structure and performance. One platform may partition its layers into multiple instances, while others use one instance subdivided into schemas. There are also platforms that prevent cross-database scripting, while others openly embrace this feature. Whatever the case may be, when a business decides to adopt a new platform while maintaining an old one, the burden on the DBA grows.

Imagine a business deciding to transition to Postgres while retaining their MSSQL legacy database. The DBA will be forced to learn both database structure, limitation of their SQL languages and maintain both databases into two separate windows. But what if the business wants to try out Oracle then MySQL? This becomes problematic as you add more database platforms to your business.


Without the need to access multiple database management tools, a database monitoring tool can access several heterogeneous databases and consolidate their statuses into one window. This makes it easier to run health checks on your databases, optimize functions and stored procedures, and monitor the uptime of your databases.

Monitoring an Oracle instance, for example, becomes a lot easier since the tool hides the complexity of the standard DBA procedures for health checks and optimizations. Not only that, you can save more in terms of licenses.

Usually, businesses tend to conclude that additional licenses are their only option to meet the growing demand for storing data. The alternative of hiring or directing staff to gather utilization information on their databases is no different: it costs time and money. But with a database monitoring tool, you won't needlessly purchase additional licenses or allocate significant manpower to the task. The tool does it for you.

Ultimately, your DBAs' needs are met, and so are your business needs.

Customization creates clear communication

A report can only convey so much information. A very lengthy and thorough report can bombard your readers with unwanted information and even discourage them from reading on.

Customization is another essential aspect of a database monitoring tool. By customizing your dashboard, you get an overview of your databases' performance. When generating reports, you provide only the most important information management seeks. No need for lengthy emails explaining the issue at hand; a visual graph will summarize it for you. This line of communication delivers transparency between your managers and DBAs.

Customizing your dashboard and reports brings salient and accurate information for management to decide on their future strategy. In addition, with visual facts at their disposal, DBAs can confidently and easily execute solutions.

Recommendations

To sum it all up, having a database monitoring tool at your disposal is very reassuring when you are future proofing your business. It creates freedom of choice, convenience, and cost savings. However, do not just settle for any database management tool. You also need to assess your company's needs and the value the tool offers. To help with your decision process, below are questions you might want to consider:

– Is this tool compatible with the database management system I am currently using?
– Will this tool still be compatible even if I change my database management system?
– Does this tool support major database management systems such as Oracle and Microsoft?
– With this tool, can my staff perform at the same level of efficiency even if my business rapidly grows?

You may not have the answers now, but at some point you will need to answer them. It is very hard to decide which database monitoring tool to use; there are several products on the market. But starting now with a brand like dbWatch can help you go a long way.

Monitoring SQL Server error logs

As a data platform expert who manages and maintains hundreds or maybe thousands of database instances, you may be looking for a way to manage this number of servers more efficiently. An environment this huge, with many vital applications relying on those databases, can be quite difficult to keep track of. Especially when you're busy with a lot of administration work, with little time and few resources, you may have difficulty prioritizing your daily tasks.

As your day goes on, you will encounter the usual errors which happen from time to time. They can have different causes: for example, a user tried to log in but failed because the credentials had already expired, a job failed due to a query timeout, or a backup job failed for an unknown reason.

The errors mentioned are automatically written to your SQL Server error logs. You can view them all by expanding SQL Server Agent in your Management Studio, where you will see the logs of your database instance.
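For a quick look without leaving a query window, SQL Server also exposes the log through the widely used (though undocumented) xp_readerrorlog procedure; a sketch:

-- Search the current error log for lines containing 'error'
-- Arguments: log number (0 = current), log type (1 = SQL Server, 2 = SQL Agent), search string
EXEC master..xp_readerrorlog 0, 1, N'error';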

When monitoring tens or maybe hundreds of database instances, you need to be able to track and efficiently monitor the errors happening in each instance within the database farm you are managing.

You may think there is no easy way to manage and keep full control of everything happening within each of your instances; luckily, dbWatch will assist you with your daily monitoring tasks.

dbWatch contains various tasks which help you manage and monitor the error logs of your database instances. dbWatch helps you keep track of all errors encountered in your database farm. You can also monitor error log file sizes and check whether a file is growing too much or too fast within a given period. And lastly, you can configure when to recycle the error logs within your database instance, which helps free up the disk space consumed by the growing number of logs.

Here is an example on how to keep track of your error logs within dbWatch.

We will start by going to the monitoring module.

Here we will select a SQL Server instance. Right-click and select Configure Monitoring.

In this window make sure that you have installed the tasks for monitoring error logs in SQL Server.

The following tasks are:

Instance error log
Instance error log file size check
Cycle error log

After installing the tasks, go back to the monitoring module.

In the Name column, type "error" into the filter so we can find the tasks related to error logs.

If you right-click on Instance error log, you have the option to run the task and retrieve updated information from the error logs collected from the instance.

Click on Details to view the actual errors collected from this SQL Server instance.

A new window will appear, showing you details of the errors which occurred in your SQL Server instance.

For example, the "last lines read from error log" table displays all lines read from the error log file by this task during its last execution.

The error history details table shows the history of the last 20 records collected from your SQL Server error log file (error text defined by the "error text" parameter).

Later, I will show you how to filter out errors you want excluded by defining them in the error text parameter.

The error history graph shows the number of errors per day registered in the log, helping you determine whether the errors you are encountering are growing daily.

You don't want the monitoring task to consume too many resources, and if it takes a long time to read the error log, you should investigate (even if there are no errors in the log file). If the number of records read is very high, it could indicate potential problems in your instance.

And lastly, the alert execution aggregated statistics graph: the left vertical axis shows the total rows read per day, and the right vertical axis shows the total time spent reading the SQL Server error log per day. This helps you determine whether you are experiencing an increase in errors on your instance. With this information, you can deep-dive into the instance where the number of errors is growing.

Going back to the monitoring view, click on Configure.

A configuration window will appear for this Instance error log task.

You can modify the configuration based on your requirements.

The options are:

Error text – defines which string (errors) the check must look for. Values must be separated with commas (,).

Error text allowed – Specifies the text strings (specified by the “error text” parameter) to be excluded when found in the same record (error log line).

History threshold – the maximum number of error messages the history table will retain (error_log_err_histr_tab table).

Last run – the last time the error log has been checked.

Max elapsed time – the maximum execution time (in seconds) for the task. If this threshold is reached a warning is raised.

Max lines per execution – the maximum number of lines the task will read during execution before a warning/alarm is raised.

Return status – return status value (ALARM – 2, WARNING – 1, or OK – 0) when “error text” is found (or “max lines per execution” is reached).

Click apply to save the configuration.

You can set the schedule on when you wish this task to execute.

Right click on the Instance error log file size check and choose Details.

Note: In order to install and make this task (Instance error log file size check) work, you need to enable xp_cmdshell in SQL Server by executing the commands below (xp_cmdshell is an advanced option, so "show advanced options" must be enabled first), or you can set it through the configuration parameter which I will show later.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
GO

A new window will appear showing you the information in your error log directory. It shows the error log file name, create date and file size.

The error log file size history graph shows you information of your historical error log size growth rate.

The error log directory size history graph displays the number of files within your log directory and the total file size of the accumulated logs. With this information, you can decide whether to clear error log files which are no longer needed, as they consume additional disk space within your environment.

Go back to the monitoring view, and right click again on the Instance error log file size check. Choose Configure.

Again, you can modify the following configuration based on your requirements

The options are:

Enable xp_cmdshell – if set to "YES", the SQL Server instance configuration xp_cmdshell will be enabled. This parameter is required if you want to allow the task to execute operating system commands to discover the size of files and directories.

Error log directory size threshold – the maximum total size (in MB) of all files in the error log-file directory.

Error log file path – path where the error log file is located.

Error log size threshold – maximum size (in MB) of error log file before a warning or an alarm is returned by the alert.

File count threshold – the maximum number of files in the error log catalog.

History threshold – the maximum number of days of error log file size history statistics to retain.

Return status – return status value (ALARM – 2, WARNING – 1, or OK – 0) when the “* threshold” parameters are reached.

Click apply to save the configuration.

Choose Cycle error log. Right-click, then select Details.

A new window will appear displaying the specific log file and the date when it was recycled. This information tells you whether the error log was cleared on a specific date.
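dbWatch schedules the recycling for you; for reference, the manual T-SQL equivalent is the standard system procedure sp_cycle_errorlog, which closes the current log and starts a new one:

-- Close the current error log and start a fresh one
-- (existing logs shift down: Errorlog becomes Errorlog.1, and so on)
EXEC sp_cycle_errorlog;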

Go back to the monitoring view. Right click and choose Configure.

A configuration window will appear which you can modify.

The options are:

Cycle History – the number of days of information to keep on when the error log and agent error log were recycled.

Number of SQL Server error logs – by default, there are 7 SQL Server error logs: Errorlog and Errorlog.1 to Errorlog.6. This parameter can be used to modify the registry value (REG_DWORD type) to the number of logs you wish to maintain.

Click apply to save the configuration.

Now you have an idea of how easily you can track the various errors happening within your database farm. You can set the return status so that you are notified once a certain baseline or error is identified by dbWatch.

You can also configure your own parameters and baselines based on your business requirements. It is a great way of customizing different alerts based on the errors retrieved by the dbWatch task.

For more information visit: wiki.dbWatch.com 

Follow us on Facebook, Twitter, YouTube, and LinkedIn

Assigning user roles in dbWatch

As a database administrator managing and monitoring tens or hundreds of database instances, you are plenty busy as it is. While you're busy with your daily routine, sometimes your manager or developers will ask for your assistance regarding the status of a certain database you're in charge of, as they are experiencing slow-running queries, blocking sessions, or deadlocks in the staging environment.

Normally, you would have to drop what you are doing to help them out before you can resume your own tasks. With dbWatch, you have another, time-saving option: you can give restricted or limited access to specific databases or instances, so your manager or developers can check the status themselves without having to ask for your help all the time.

dbWatch offers role-based access controls. With these, you can create roles with very specific, restricted rights to access certain databases and perform specific operations. Using role-based access controls, you can grant developers, managers, and others exactly the rights they need to perform their tasks, and no more. You can limit or secure access to specific databases or instances, and to specific operations.

For example, suppose you only want certain users to view the monitoring module and select the development group; you can achieve this by granting a specific role to a specific user.

Here is an example on how to set this up.  

We will start by creating a new user.  

At the top, choose Configure under Server Configuration.

Select Manage users and roles  

This window shows the different users, roles, and object sets available.

Users are the ones who have the right to connect to dbWatch.

Roles in dbWatch define a set of rights that can be granted to users. 

Object sets in dbWatch define the objects you can have rights on.

Also in this window, you can secure areas such as restrictions on various instances within the management and monitoring modules, user rights and roles, creation of tasks, customization of reports, customization of server groups, extensions, file access, and object sets. By default, the boxes are unchecked, meaning all users have read, write, and execute permissions on the areas mentioned.

 In order to secure these areas, make sure to check the boxes.  

And click apply to save the changes 

Currently we're using the admin user. Let's create separate user credentials for the developers.

Right-click on the Users folder and click Add new user.

In this window specify your credential. 

You can demand a password change on first login and allow empty password if you want. 

But in this case, we will just set a username: OPSDEV.

After creating the new user, we will create a new role. 

Under Users, right-click on Roles > choose Add new role.

Specify your role name: DevRole.

After creating the role, grant permissions to the role by clicking the Add Grant button.

Specify the rights for this role by clicking the ellipsis.

A drop-down box will appear showing the options: read, write, execute, and administrator rights.

We will grant a read permission for this role. 

On the right, specify an object set to which you will assign the read permission.

For this developer role we will select default monitoring and development instances only. 

Click apply to save the changes made to the role.

Go back to the OPSDEV user to assign the DevRole. 

Click the add grant button. 

Click the ellipsis.

A drop-down will appear; select the DevRole.

Click Apply to save the changes made to the user then click Done. 

Let’s use the OPSDEV Credential we’ve created. 

In the task bar below, right-click the server and choose Change Authentication.

In this window specify the user we’ve created.  

Go to the management module and try to click on an instance to expand it.

A window will appear asking for valid credentials with permission to access the instances in the management module. Note that the OPSDEV user we created only has read access for monitoring the Development server group.

Go to the Monitoring Module. 

Let’s select the SQL Express instance.  

Choose the database backup task.

Right click and select Run Now. 

An error message will appear stating that the user has insufficient privileges, as we've only granted read-only permission through the DevRole.

We have successfully created a user account for developers with limited access to the monitoring dashboard (Development server group). Based on this example, you should be able to easily customize and create your own roles.

For more information visit: wiki.dbWatch.com  

Follow us on Facebook, Twitter, YouTube, and LinkedIn

Performance alerts configuration with dbWatch

In an enterprise with multiple systems across various departments, database administrators find not only multiple database platforms, such as Oracle, SQL Server, MySQL, and PostgreSQL, but also different versions of each platform. DBAs usually manage tens, hundreds, or even thousands of database instances across different platforms and versions within the company.

Each database platform, and sometimes each version, comes with its own set of tools; it is a waste of time to switch between multiple applications just to monitor and manage hundreds of database instances.

dbWatch is the perfect solution to monitor and track all your instances. The dbWatch alert module is powerful and flexible: you can configure precisely how you receive alerts, which alerts you receive, who receives them, and at what time of the day or week. This way, you can create alert rules that match work schedules, with different alert schemes for office hours, nighttime, and weekends.

See the example below.

Problem

I want to stay on top of our database farm and receive performance alerts, such as blocking transactions, deadlocks, failed database backups, and high CPU and memory usage, through email or SMS, so that I can respond quickly when an issue arises in our database farm, especially outside working hours.

Solution

dbWatch can easily monitor and track the health and performance of your database systems. dbWatch has an alert module which you can configure based on your business requirements. Say, for example, you want to monitor the status of your weekly DBCC CheckDB maintenance job for SQL Server; you can configure it by following the step-by-step procedure below.
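For context, the integrity job being monitored here runs SQL Server's standard consistency check. Executed manually against a hypothetical database named SalesDB, it would be:

-- Full database consistency check, suppressing informational messages
DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS;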

On top choose Configure > Extensions

In this window, you will enable the E-mail extension for this demonstration.

Within this window you will see various server extensions which you can use to send signals across different enterprise monitoring solutions, such as email, Nagios, SMS, OpenView etc.

Right click on E-mail extension and click enable.

After enabling the email extension, choose Configure at the top.


A window will appear; specify the following:

SMTP Host – input your mail server/SMTP settings
SMTP Port – input your SMTP port number
Username/Password – input the email credentials which you will use to send the alerts
From address – the same as the username, as this email will be the one sending the alerts
Subject – input the subject for the alert

After filling in the properties, click Add Rule.

A box will appear below; specify the following information:

Rule name – input the name of this rule
Type – select whether you want to send it thru email or SMS
Receivers – specify the receivers of this notification
Schedules – click Edit to set a schedule for the frequency check
Statuses – click Edit to choose from the available statuses
Group filter – click Edit, you can either include or exclude the selected server groups
DBMS type filter – click Edit, you can either include or exclude the selected dbms types (ex. oracle, sqlserver, MySQL)
Instance filter – click Edit, you can either include or exclude the selected instances
Package filter – click Edit, you can either include or exclude the selected packages (ex. AlwaysOn, Capacity, Cluster, Replication, Maintenance, Performance)
Check filter – click Edit, you can either include or exclude the selected performance checks (ex. DBCC CheckDB, High Activity, Blocking alerts, etc..)

After filling in all the necessary information for this rule, click Save.

Here is a sample email alert for a failed check execution in Oracle, which I received from the dbWatch monitor.

We have successfully configured an alert notification. You can easily customize and create your own rules based on your requirements.

For more information visit: wiki.dbWatch.com

Follow us on Facebook, Twitter, YouTube, and LinkedIn

Analyzing SQL Server Indexes with dbWatch

Problem

I want to view the performance of my SQL Server indexes across the database farm and determine the top instances by index disk usage. I also want to look for servers which encountered problems when running the update index statistics job.

Solution

dbWatch is capable of monitoring your servers' index usage rates and performing maintenance routines to make sure that your servers run smoothly.

In this blog, you will learn how to analyze indexes within your database farm and determine which instance has maintenance jobs running longer than usual, using dbWatch's available views.

Example 1: Index Analyze Statistics View

On the left side, choose the Monitoring tab > go to the Maintenance tab and select the Analyze Index tab.

Within this view, you will see useful information such as the database count per instance, the index count per instance, and the total indexes analyzed by the task. It will also show you the number of errors in the execution of the update statistics task, if there are any.

You can easily sort the columns in ascending or descending order. This helps you determine which instance within the farm has the longest or shortest elapsed time when analyzing indexes, which is useful if some jobs take longer to finish than usual.

An example: you have a weekly scheduled index rebuild job, and from week 1 to week 7 the average elapsed time was around 15-20 minutes. But suddenly, the dbWatch view shows that this week (week 8) it took 80-90 minutes to finish.

The average time increased to four times its usual execution; this tells you that something is going on within your servers, and from there you can start investigating the cause of the problem.

Going back to the views, if you right-click on a selected instance, you have the option to execute the task and update the index statistics. You can also configure the tasks. If you choose Details, you will see a report on the history of the update index statistics executions, the total number of indexes scanned, and the error history of the update statistics task.
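For reference, the manual T-SQL such a task automates is UPDATE STATISTICS per table, or sp_updatestats for the whole database; a sketch with a hypothetical dbo.Orders table:

-- Refresh statistics for one table with a full scan
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Or refresh all out-of-date statistics in the current database
EXEC sp_updatestats;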

Example 2: Index Reorganize and Rebuild View

On the left side, choose the Monitoring tab > go to the Maintenance tab and select the Reorganize Index or Rebuild Index tab.

This view displays the index rebuild/reorganize statistics. Here you will see the list of instances and the number of databases checked within each instance; it also shows which databases were rebuilt and which were ignored.

In this view you will be able to sort the column to determine which instance consumes the most disk space.

The important part of this view is the size before and after the index rebuild took place and the disk usage reduced by the operation. In this example, the highlighted instance was reduced from 24.5 GB to 23.2 GB, a total reduction of 1.3 GB.

If you right-click on a selected instance, you have the option to execute the task and start an index reorganize/rebuild on the instance's databases, or to configure the task and its schedule.
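For reference, the manual T-SQL equivalents of these tasks are ALTER INDEX REORGANIZE and REBUILD. A common rule of thumb, sketched with a hypothetical dbo.Orders table: reorganize at light fragmentation (roughly 5-30%), rebuild when it is heavier:

-- Light fragmentation: reorganize the indexes in place
ALTER INDEX ALL ON dbo.Orders REORGANIZE;

-- Heavy fragmentation: rebuild the indexes from scratch
ALTER INDEX ALL ON dbo.Orders REBUILD;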

If you choose Details, you will see a report on the history of reorganize or rebuild executions within that instance; it also shows information about the average table and index fragmentation and the disk usage before and after the reorganize or rebuild operation.

With the examples provided, you will be able to determine which instances have the biggest index disk usage within your database farm. You will also be able to assess whether a certain index operation's schedule should be adjusted, based on the performance and other details shown in the reporting view.