What is Horizontal and Vertical Database Scalability?


The total amount of data in the universe is always growing, and a lot of it ends up in our databases. DBAs need scalable database instances to cater to both the growth in data volumes and the increase in traffic and workload.


This strains our database engines as we push ever-increasing demands on indexes, searches, references, updates, reliability and availability.

Planning Database Scalability

As data and workloads grow, the number of server instances businesses run grows accordingly, and the importance of stable, reliable IT systems increases. It is no longer possible for businesses to cope with rigid, non-scalable systems and tools.


Making your databases and applications scalable is not necessarily a simple task. There are two main variations of server scalability to consider: vertical and horizontal scaling.

Working With Vertical Scaling

Vertical scaling, or “scaling up”, involves adding more resources to a server instance. By increasing CPU resources, memory, and storage or network bandwidth, the performance of every individual node can be improved, scaling small servers to handle large databases.


This is the “old school” way of scaling up.  You either procure the largest machine you can afford or give your existing server as many resources as you can to fit the workload demand.


Advantages of Vertical Scaling

Vertical scaling is typically easy for application developers, as they do not have to change much to support a larger machine. The same goes for DBAs – one big server is usually quite easy to manage. But at some point you reach the capacity ceiling of the hardware you have, and you must spend an ever-increasing amount of time optimizing the application, the SQL, and the server instance to wring out every last bit of performance. The returns diminish as you approach the maximum size a single server can reach, until the cost and effort of scaling vertically hit a ceiling and you can no longer scale up.

Disadvantages of Vertical Scaling

Vertically scaled systems do come with some disadvantages, though. Not only can initial hardware costs be high due to the need for high-end hardware and virtualization, but upgrades can be both expensive and limited – there is, after all, only so much you can add to one machine before your database outgrows it. Normally, clustering such as Always On or RAC is applied to these large servers to make them reliable and give them enough capacity to handle the load.


You may also find yourself quickly locked in to a particular database vendor by following this strategy, and moving away from that vendor later could mean very expensive server upgrades.


In summary, vertical scaling is much easier to establish and administer, as it involves only a small number of machines, or even just one. Vertical systems can also offer advantages in stability, reliability, and development effort, plus cost savings from fitting in smaller data centers; license costs might be lower as well.

Using Horizontal Scaling

Horizontal Scaling, or “scaling out,” is adding capacity by adding more database server instances. This is done by spreading out databases on more machines to answer the call for increased demand and capacity.


This is now often the preferred way of scaling databases and is often referred to as the “cloud native” way of doing things.


When more capacity is needed in a system, DBAs can add more machines to keep up. In database terms, this means that data must be partitioned across all the machines that make up the cluster, with each individual server holding one part of the database.


Usually, applications need to be written specifically for horizontal scaling. But once built, it becomes very easy to scale the application to meet any workload.


Unless you are certain that your application will fit comfortably on a single server forever, you should write it so it can be horizontally scaled. Even if you are certain, consider making it horizontally scalable anyway, to keep it future-proof and cost-effective to deploy in a cloud.


Horizontally scaled systems can also make use of data replication, whereby one machine holds the primary copy of the entire database while multiple read-only copies and/or caches share the read load. You often see this in large cloud-based solutions, where data is distributed across geographic regions and data centers by means of database read replicas. This provides improved response times, better performance, and higher availability.
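As a sketch of how such a read replica is wired up in MySQL (host names and credentials below are hypothetical; the statements use the MySQL 8.0.22+ replication syntax):

```sql
-- On the primary: create an account the replicas will connect with
CREATE USER 'repl'@'%' IDENTIFIED BY 'choose-a-password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On each read replica: point it at the primary and start replicating
CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = 'primary.example.com',   -- hypothetical host
  SOURCE_USER = 'repl',
  SOURCE_PASSWORD = 'choose-a-password',
  SOURCE_AUTO_POSITION = 1;              -- requires GTID-based replication
START REPLICA;

-- Check that the replica is connected and applying changes
SHOW REPLICA STATUS\G
```

Writes still go to the primary; the application or a proxy then spreads read queries across the replicas.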

Advantages of Horizontal Scaling

Horizontal scaling has several key advantages over a vertical approach. The main advantage is that once built, horizontally scalable systems can scale to almost any size and take full advantage of elastic cloud computing – adding more servers to handle peak load. Not only is establishing a scalable system easier, with individual nodes cheaper than in a vertical set-up, but upgrades are also quicker and more affordable. Maintenance can be easier, too, with faulty instances swapped out with minimal disruption to the rest of the system.


Disadvantages of Horizontal Scaling

A high number of instances adds complexity to a system. It makes monitoring, administering, and troubleshooting more difficult and time-consuming, especially during disaster recovery. When machines are licensed individually, licensing fees can also be higher. Not to mention, the physical space needed to house multiple servers brings its own costs and logistical issues.

A Summary of Scaling

As database requirements continue to grow, your organization will need to adopt a form of scalability to keep up. While horizontal scaling is widely considered to be modern, flexible, and advantageous, it can bring some unwanted challenges that you will eventually need to manage.


Whether you scale vertically or horizontally, you will always need to calculate the optimum hardware and software combination, especially from a cost perspective. You may want to use many small and cheap 4-core blade servers, or one or a few fast and large 24-core or larger servers. Even more important, check the software license cost for the total system based on the number of nodes and cores. Do the math; it is a non-trivial and important cost decision.


Vertical scalability brings capacity ceilings and vendor lock-in, while horizontal scalability brings complexity and logistical challenges. In either case, good tools for monitoring, analysis, and administration are necessary to track performance, reduce the challenges, deliver greater productivity, and keep costs in check.


Whichever approach you choose will depend on your business's requirements. Regardless of what you opt for now, keep in mind that as your business continues to grow, your databases will keep expanding – and your scaling strategy will have to keep up.

Properly Matching Character Sets and Collations in MySQL

Have you ever come across a scenario where your data looks good until you import it into MySQL? The imported data is suddenly displayed as "???", leaving you wondering what's wrong with your database configuration. The cause of this problem is a character set or collation mismatch – handled properly, character sets and collations make working with MySQL a dream; handled badly, a nightmare.



Collations and character sets have their place – they tell MySQL how to interpret the data it stores and how best to work with it internally. Since we can choose from a whole sea of charsets and collations within the RDBMS, properly matching them to our use case is crucial.


What are Character Sets and Collations? 

Simply put, character sets define which characters are considered "legal" in a string, while collations are sets of rules that tell MySQL how to compare and sort the characters within it. That's why, when we import data from, say, a Chinese, Taiwanese, or Russian source without preparing for it, we see "????" instead of our character string – the database wasn't set up to store it.

Character sets and collations aren't unique to MySQL – they are relevant to every database management system we find ourselves using, including Oracle, Sybase, TimescaleDB, or, you've guessed it, any flavor of MySQL as well. A proper match of character sets and collations is crucial if we want to work with data from "exotic" (non-English) sources. Some of the character sets and collations considered default in MySQL are as follows:


  • The historical default character set (up to and including MySQL 5.7) is latin1 with a collation of latin1_swedish_ci; MySQL 8.0 switched the default to utf8mb4 with utf8mb4_0900_ai_ci. latin1 fits Western European languages, Swedish included (MySQL was built by Swedes), but it cannot store Cyrillic – Russian text needs a charset such as cp1251 or, better, utf8mb4. 
  • If we find ourselves using traditional Chinese characters, a fit would be the big5 charset with its collation big5_chinese_ci; utf8mb4 covers both simplified and traditional Chinese as well. 
  • For general use cases, we should use the utf8mb4 character set with a collation of utf8mb4_general_ci (or utf8mb4_unicode_ci). utf8mb4 is the "true" form of UTF-8, since it supports up to 4 bytes per character while MySQL's legacy utf8 (utf8mb3) only supports 3. For linguistically accurate sorting across languages, prefer the utf8mb4_unicode_ci variation. 
  • Should we need Hebrew characters, we can use the hebrew character set with its collation hebrew_general_ci. 

You get the point – there are multiple character sets and collations we can choose from for a concrete use case. For a complete list, connect to MySQL and issue a SHOW CHARACTER SET query. 



MySQL will even provide you with a description of the character set and collation, allowing you to make easier decisions. 
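A minimal session might look like this (output abbreviated; the default collations shown vary by version – on MySQL 8.0, for example, utf8mb4 defaults to utf8mb4_0900_ai_ci):

```sql
-- List every supported character set, its description,
-- default collation, and maximum bytes per character:
SHOW CHARACTER SET;
-- latin1  | cp1252 West European     | latin1_swedish_ci  | 1
-- big5    | Big5 Traditional Chinese | big5_chinese_ci    | 2
-- utf8mb4 | UTF-8 Unicode            | utf8mb4_0900_ai_ci | 4
-- ...

-- List all collations belonging to one character set:
SHOW COLLATION WHERE Charset = 'utf8mb4';
```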

Properly Matching Character Sets and Collations 

As you can tell by now, there are numerous character sets and collations to choose from. Properly matching them is a skill every database administrator picks up with experience and with MySQL's help (see above). There will inevitably be cases where we mismatch character sets and collations and only find out after the fact – that's not a big issue, and it helps us learn as well. 

Some of you might know that some character sets consume more disk space than others, which can become an issue once we store a lot of data. To estimate the cost, issue a SHOW CHARACTER SET query and look at the "Maxlen" column (see above) – the maximum bytes per character. For UTF-8, the name tells the story: utf8mb4 uses up to 4 bytes per character instead of 3, though characters that fit in fewer bytes still only use what they need. 

With all these character sets and collations, properly matching them can be a challenge if we are not educated in how databases work – even with the query given above, MySQL doesn't tell us much if we aren't aware of the basics of charsets and collations. Don't worry, though – the descriptions, while short and to the point, are a good starting point. There's no need to go too deep – look at the language descriptors after the charset name and decide from there. Before deciding, though, you should understand when to use different character sets and how to choose between them. 
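Once you have chosen, it is best to declare the character set and collation explicitly rather than rely on server defaults. A sketch (the database, table, and column names are hypothetical):

```sql
-- Database-level default:
CREATE DATABASE app_db
  CHARACTER SET utf8mb4
  COLLATE utf8mb4_general_ci;

-- Table- and column-level overrides:
CREATE TABLE app_db.customers (
  id INT PRIMARY KEY,
  name VARCHAR(100),
  notes_he VARCHAR(500) CHARACTER SET hebrew COLLATE hebrew_general_ci
) CHARACTER SET utf8mb4;

-- Convert an existing table that was created with the wrong character set
-- (rewrites the table, so plan for the downtime on big tables):
ALTER TABLE app_db.customers
  CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
```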


When to Use Different Character Sets? 

People mostly use different character sets (i.e., not the defaults provided by MySQL) when working with data from sources MySQL would consider "exotic" (i.e., not English or Swedish in nature). The rule of thumb: switch character sets whenever your data is written in an alphabet the current charset cannot represent – Chinese, Taiwanese, Japanese, Korean, Russian, Hebrew, Arabic, and so on. If you are not sure when to make the switch, or to what character set, turn to the documentation for more explanation, but the rule above should put you closer to your goal. 



Going Beyond Character Sets and Collations 

Once you've properly matched character sets and collations, there are a couple more things to be aware of for your database to perform at the very best of its capability – you need to keep a very close eye on database performance and availability as well. Not to mention security – the number of data breaches has continued to rise since COVID, and no one wants their database infrastructure to become the next target. 


For that, consider employing database monitoring inside your database infrastructure – database monitoring tools such as those provided by dbWatch, or by database vendors themselves, will let you see how your database is doing from the availability, performance, and security points of view at all times. The catch is that database management software from database vendors (think MariaDB, Percona, TimescaleDB, Redis, and the like) is generally expensive, and you often pay more for the name than for the software's capabilities. With dbWatch it's a different story – it's a relatively small company, but its software is used by well-known companies such as H&M, tieto evry, ATEA, and the like, so we're confident dbWatch won't let your company down either. Give dbWatch a shot today by grabbing the 30-day free trial available on the website, and until next time! 

Alternative Storage Engines in MySQL

If you've been a frequent user of MySQL, now or in the past, you know that MySQL lets us choose between multiple storage engines. The most common is InnoDB, known as a "high performance" storage engine: optimized properly via the parameters MySQL exposes in my.cnf, it can do wonders. MyISAM, by contrast, is considered obsolete, though it has its own use cases, including providing an exact count of the rows in a table and storing database files in a recoverable format – the files MyISAM creates are not bound to any other file, unlike InnoDB tables, which depend on ibdata1. 


What Alternative Storage Engines are Provided by MySQL? 


InnoDB is the default storage engine provided by MySQL – that’s a given. However, there are a bunch of other storage engines provided by the RDBMS, including the following: 

  • Percona XtraDB – this storage engine is only available on Percona's infrastructure (i.e., only for those running Percona Server), and it's essentially meant for those who want InnoDB capabilities on their infrastructure. It's considered an enhanced version of InnoDB: designed to scale on modern hardware, with several other features aimed at high performance. It's also fully backwards compatible with InnoDB, so no worries if you try it, dislike it, and want to switch back. 
  • MyISAM – this storage engine was the default until MySQL 5.5 rolled around, so it's showing its age, and most database administrators advise against using it. In fact, current versions of MySQL treat it as obsolete: it's not crash-safe and is prone to corruption due to how it's built. It does have a couple of upsides: MyISAM stores each table's data in a .MYD (MYData) file alongside a .MYI (MYIndex) file, and these do not depend on a shared file the way InnoDB depends on ibdata1. The engine is also known for storing the exact row count of each table in its own metadata (which is why MyISAM can return a row count instantly and InnoDB cannot), but that's about the extent of its usefulness. Users of MyISAM face frequent crashes, data corruption, and other issues, so it is not a fit for anyone looking for performance or reliability, now or in the future. It's useful if we want to play around with it, though! 
  • MEMORY – as the name suggests, this storage engine stores all of its data in memory. It's a special-purpose engine: because its entire contents live in RAM, a server shutdown wipes all of the data. On the other hand, that makes it very suitable as a temporary work area or as a read-only cache for data derived from other tables. 
  • CSV – this storage engine stores data in plain-text .csv files, i.e., values separated by commas (","), which is exactly where its name comes from. It isn't used very often, but it can be handy in some corners of project development – for bulk data imports, for example, or simply to make life easier for an accountant working with Excel, since such files can be exchanged with MySQL without any additional hassle.
  • ARCHIVE – this storage engine, as the name suggests, is suitable for archival purposes. Its primary use case is storing large amounts of rarely accessed data: rows are compressed on insert, so the engine's footprint on disk is small, letting DBAs and server admins focus on other tasks rather than scaling up the server simply to host an archive of data.
  • FEDERATED – this storage engine lets us access data in a remote database without using replication or clustering: when we query a local FEDERATED table, MySQL automatically pulls the data from the corresponding table on the remote server. As easy as that! One thing to note is that FEDERATED is usually not enabled by default; it can be enabled by starting MySQL with the --federated option. 
  • BLACKHOLE – if there's a storage engine in MySQL that makes you question the existence of the universe, it's BLACKHOLE. Literally – the engine is named after a black hole because it accepts data but never stores it. You might wonder what the use case is, but MySQL's explanation is simple: such an engine is useful for finding performance bottlenecks not related to the storage engine itself, or for running triggers on data that shouldn't be kept. 
  • EXAMPLE – last but not least, there's an example in the MySQL source tree that lets you grasp how storage engines are built in the first place. That's the job of the EXAMPLE storage engine: it doesn't store data (better put, it has no functionality at all); rather, it's intended as a template for building or extending storage engines yourself. If you want an example of how storage engines are written in the MySQL source code, look no further – glance at the MySQL documentation and start building one yourself. 
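To see which of these engines your server actually supports, and to try one out, something like the following works (the table names are hypothetical):

```sql
-- Which storage engines does this server support, and which is the default?
SHOW ENGINES;

-- Choose an engine per table with the ENGINE clause:
CREATE TABLE session_cache (
  session_id CHAR(36) PRIMARY KEY,
  payload VARCHAR(2000)
) ENGINE = MEMORY;           -- contents are lost on server restart

CREATE TABLE audit_archive (
  logged_at DATETIME,
  message TEXT
) ENGINE = ARCHIVE;          -- compressed; supports INSERT and SELECT only

-- An existing table can be converted, which may rewrite the whole table:
ALTER TABLE audit_archive ENGINE = InnoDB;
```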

Exploring MySQL Beyond Storage Engines 

Should you want to explore MySQL beyond its storage engine capabilities, be advised that it's a dangerous world out there, in the sense that quite a bit of the information you'll find might not be true – for up-to-date information, consider reading blogs such as those provided by MariaDB, Percona, and other database vendors.

Of course, the dbWatch blog is a very good place to start – it will provide you with information on various kinds of database management systems aside from MySQL, and on how best to solve the issues related to whichever database management system you elect to use. Keep an eye on the dbWatch blog, grab a free trial of dbWatch to see how your databases fare against the issues targeting them, free for 30 days, and until next time! 

Monitor your SQL Server Indexes



I want to view the performance of my SQL Server indexes across the database farm and determine the top instances by index disk usage. I also want to look for servers that encounter problems when running the update index statistics job.  


dbWatch Control Center can monitor your Microsoft SQL Server’s index usage rate and perform database index maintenance to make sure that your servers are running smoothly. 


Microsoft recommends reorganizing indexes with a fragmentation level between 5% and 30%; if fragmentation is greater than 30%, you should perform an index rebuild instead. 
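If you want to check those fragmentation numbers by hand, the standard way is the sys.dm_db_index_physical_stats DMV – a sketch applying the 5%/30% guideline (the index and table names in the ALTER statements are hypothetical):

```sql
-- Fragmentation per index in the current database,
-- with the suggested maintenance action:
SELECT OBJECT_NAME(ips.object_id)      AS table_name,
       i.name                          AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE
         WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
         WHEN ips.avg_fragmentation_in_percent >= 5 THEN 'REORGANIZE'
         ELSE 'OK'
       END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0;   -- skip heaps

-- Then act on a specific index:
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;
-- or, above 30% fragmentation:
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;
```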

In this blog, you will learn how to analyze the database indexes within your database farm and determine which instance has SQL Server maintenance plans or index maintenance jobs running longer than usual, using dbWatch Control Center features. 

Example 1: Index Analyze Statistics Job 

On the monitoring view, right-click on your SQL Server instance and click Configure Jobs. 

[Screenshot: the instance context menu with Configure Jobs]

A new window will appear; tick the job Index usage statistics (all databases) and click Install.

[Screenshot: the job installation window]

As seen in the screenshot below, this job collects database index and table usage, showing the object/table name, the index type, and the total seeks and scans. It tells you how big your tables are and whether they are mostly accessed via index scans or index seeks. Ideally, you want your indexes to be used through index seeks.

[Screenshot: the Index usage statistics job output]
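The same seek-versus-scan picture can be pulled manually from SQL Server's sys.dm_db_index_usage_stats DMV, if you want to cross-check what the job reports (note that these counters reset when the service restarts):

```sql
-- Seeks vs. scans per index in the current database:
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks,
       s.user_scans,
       s.user_lookups,
       s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_scans DESC;   -- frequent scans on big tables merit a look
```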

In the Update index statistics job view, you will see useful information such as the database name, the index count per database, and the total indexes (clustered/non-clustered) analyzed per database. It will also show the number of errors, if any, from the execution of the update statistics task.  

[Screenshot: the Update index statistics job details]

In this view you can easily sort the columns in ascending or descending order. That helps determine which instance within the database farm has the longest or shortest elapsed time when analyzing indexes – useful when some maintenance jobs are taking longer to finish than usual.  

[Screenshot: job results sorted by elapsed time]

An example application of these jobs: you have an index rebuild job scheduled weekly, and from week 1 to week 7 the average elapsed time was around 15-20 minutes. Then, in week 8, the dbWatch Control Center view shows it took 80-90 minutes to finish. The average time quadrupled, which tells you something is going on with your SQL Server indexes – and from there you can start investigating the cause of the problem.  

Example 2: Index Reorganize and Rebuild View 

dbWatch Control Center also provides you with index maintenance jobs such as Index rebuild and reorganize.

[Screenshot: the index rebuild and reorganize jobs]

This view displays the index rebuild/reorganize statistics. Here you will see the list of instances, the number of databases checked within each instance, and which databases were rebuilt and which were ignored.

[Screenshot: index rebuild/reorganize statistics]

In this view you will be able to sort the column to determine which instance consumes the most disk space.

[Screenshot: instances sorted by index disk usage]

The important parts of this view are the index size before and after the rebuild took place, and the disk usage reclaimed by the operation.  

If you right-click on a selected instance, you have the option to execute the task – starting an index reorganize/rebuild on the instance's databases – or to configure the task and its schedule. 

If you choose Details, you will see a report on the history of reorganize and rebuild executions within that instance; it also shows the average table and index fragmentation, plus the disk usage before and after each reorganize or rebuild operation.

[Screenshot: the reorganize/rebuild history report]

With the examples provided, you will be able to determine which instances have the biggest database index disk usage within your database farm. You will also be able to assess whether a certain index operation's schedule should be adjusted, based on the performance figures and other details shown in the reporting view. 

With dbWatch Control Center, database admins can keep track of their SQL Server indexes and their performance. As a DBA, you can analyze how your database indexes are being used in your environment and plan how to maintain these SQL Server indexes efficiently.   

dbWatch Control Center assists database administrators to efficiently run their databases, from proactive monitoring of database performance to the management of tasks.   

Discover how dbWatch Control Center can monitor your SQL Server Indexes, try dbWatch Control Center today

Working with MySQL vs. MariaDB – What’s Different?

Everyone has heard of MySQL. Well, probably not everyone – but certainly everyone in the development and database monitoring space. Ever wonder why? MySQL is everywhere. The industries that use MySQL as their primary database management system include the automobile industry (think Volvo, BMW, etc.), the fashion industry (think SheIn and the like), and the finance industry; it's prevalent in the education industry as well.



Do we even need to talk about software developers? In the software industry, MySQL is used even more heavily. We mean, think about it. Have you ever heard of anybody in the industry saying “I know of a software developer with experience who hasn’t heard of MySQL”? Chances are, you did not. And chances are, most developers would laugh at that statement – MySQL is one of the most widely available database management systems.

Most developers more familiar with MySQL probably know of MySQL flavors as well. Well, we’re talking Percona Server and MariaDB – you know the drill. In many cases, those are similar, but they have their own differences as well. In this blog, we are going to tell you how one of those (MariaDB in particular) differs from MySQL. 



What’s MariaDB and MySQL? 

Believe it or not, this question is one of the popular ones hiring managers ask junior database administrators. Why? Because it's a good way to determine whether a beginner knows their craft. Most novices think MariaDB and MySQL must be one and the same – after all, both are built to solve the same problems, the queries you run are the same, and the way the databases are configured is pretty much the same, so it looks like just a question of the name. Ask a junior DBA such a question and you will hear "the same DBMS, just with a different name" without much consideration. That's not the ideal answer, though. In reality, MariaDB is a fork of MySQL – one that comes with many of its own variations of the features available in MySQL. 

That’s not to say that MySQL is a bad DBMS though – far from that – it’s just that MariaDB has a set of features not available in MySQL, and vice versa. 


MySQL vs. MariaDB 

Here are some of the key differences between both of the database management systems so you could better understand how database farm management might work with those: 



The two might not seem too far apart – at least not until you look deeper into them. Many database administrators electing to use MariaDB over its counterpart do it because MariaDB comes with its own load balancer (MaxScale), though that's not the core reason MariaDB competes with MySQL – the main reason is that MariaDB comes with more storage engines to choose from. 

Think about it: 

  • Is it possible to work with petabytes of data? In MariaDB, yes – via the ColumnStore storage engine; MySQL has no equivalent. 
  • Is it possible to have a crash-safe take on MyISAM, which is known for its unreliability? In MariaDB, yes – via Aria; not in MySQL. 
  • Want a read-only storage engine that stores all of its data in S3 object storage? Mission impossible in MySQL – turn to MariaDB and use the S3 engine instead. 
  • Want an actively developed version of FEDERATED? Turn to MariaDB and use FederatedX, an option not available in MySQL. 
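As a quick, hedged sketch of what that looks like in practice on a MariaDB server (the table names are hypothetical, and the S3 engine additionally needs bucket credentials configured in the server before the ALTER will succeed):

```sql
-- MariaDB only: a crash-safe, MyISAM-style table via Aria
CREATE TABLE lookup_codes (
  code  CHAR(8) PRIMARY KEY,
  label VARCHAR(100)
) ENGINE = Aria TRANSACTIONAL = 1;

-- MariaDB only: archive a table into S3 object storage (read-only afterwards)
ALTER TABLE old_orders ENGINE = S3;

-- Verify which engines this particular server actually offers:
SHOW ENGINES;
```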

Is it Worth Using MySQL? 

With all of these unique storage engines available in MariaDB, a natural question arises – should one still use MySQL for daily operations? Shouldn't MariaDB always be the option, given that it covers more use cases? Isn't database monitoring easier on MariaDB? 

Well, don't jump to conclusions too quickly. There are indeed a couple of things MySQL doesn't have that MariaDB does, but there are a couple of things exclusive to MySQL as well. For example, have you heard of the Enterprise edition of MySQL? It's not cheap, but it comes with a nice amount of features, including the ability to take "hot" online backups of your databases and Enterprise data masking to protect your organization's most sensitive data by hiding it where required. Perhaps the two most widely known features in this space, though, are the firewall and the audit plugin. The MySQL Enterprise Firewall protects your MySQL installations from a wide range of cyber security threats, including SQL injection: you can create an allowlist of permitted SQL statements and block the rest, monitor the database for unauthorized activity, and so on. That's one feature MariaDB doesn't offer, and MariaDB users miss out in this area. 


What to Choose? 

If you want to weigh your options before choosing, we have good news – it's not that hard a choice! If you're a novice just learning about MySQL, go with MySQL, then transition to MariaDB when you know the DBMS a little better. If you're looking to work with big data sets or more advanced storage engines, go with MariaDB; however, if you're looking for high-level (we're talking enterprise-level) security measures, you're better off using MySQL. 

Neither choice is "wrong", though – you can easily switch from MySQL to MariaDB and vice versa if the feature list no longer satisfies you. Moreover, all of the monitoring tools available on the market, including dbWatch, let you monitor both database management systems – so you never lose out after choosing one of them.

As far as monitoring goes, we have more good news – monitoring these two database management systems almost never differs. Seriously – you can improve performance in MySQL in the exact same way you improve performance in MariaDB, and vice versa. The reason tools like dbWatch exist is plain and simple: they save you the time and money you would spend monitoring databases manually. Think of all the database jobs running across your farm: would you have time to keep an eye on each and every one of them? Not to mention across tens of database instances.  



Tools like dbWatch help you schedule these kinds of database jobs and make sure they always run at the exact time specified – not only that, dbWatch will run your jobs properly, making sure they never miss deadlines and do the work they are supposed to do.

Do you have the time to take care of everything yourself? Chances are, you do not – try dbWatch for free and tell us what you think. We are sure – you will be impressed. Give dbWatch a spin and we will see you in the next one! 



Other blogs:

How is The Memory of Your SQL Server Doing?

Top Tips for Oracle Database Monitoring

Databases for Senior Developers


Track your SQL Server database compatibility level

Two people work to improve their sql compatibility level


In this blog, I will discuss SQL Server’s database compatibility level. Using dbWatch, we will check the current setting and configure the right compatibility level based on the version you are using. 


Each new version of SQL Server introduces new functionality and performance benefits. These are identified by the compatibility level and the cardinality estimator.  


As you can see, each database version is associated with a compatibility level, so if you want to utilize the newest features available for your database, you need to check and set the right compatibility level for your instance. 


As a DBA, you must ensure that the compatibility level of each database matches your instance’s current SQL Server version. 


A common problem in enterprises is that some wait too long to migrate to a newer version of SQL Server. 


Some DBAs end up taking two steps. For example, they first migrate a SQL Server 2000 database to SQL Server 2008 R2, and then migrate it to SQL Server 2019. This is a good reason NOT to wait too long before migrating a SQL Server database to the latest version.  


Sooner or later, your compatibility level will have to change, and this will impact your applications and processes. By doing it sooner, you will reduce the amount of work you will have to do later.

In this view, you can manually check each database’s compatibility level by opening the database properties window in SQL Server Management Studio.
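If you prefer a query over the properties window, the same information is available from the system catalog. A standard T-SQL sketch (the database name below is just an example):

```sql
-- List every database on the instance with its compatibility level
SELECT name, compatibility_level
FROM sys.databases;

-- Raise a single database to the SQL Server 2019 level (150)
ALTER DATABASE SalesDB
SET COMPATIBILITY_LEVEL = 150;
```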

But at the database farm level, especially when managing hundreds or even thousands of databases, this is not the ideal way to investigate the compatibility levels of your databases.


dbWatch Control Center provides an overview of your databases’ compatibility levels and displays other attributes of the SQL Server databases you are running. 


Let us start by clicking the Farm Management icon on the left side of your Control Center monitor, as seen in the image below. 


Expand Inventory Overview Performance. 

Click on the Per platform option and choose MS SQL Server. 


This view displays different attributes: instance status, name, database version, edition, port number, instance name, host, host IP, OS type, the creation date of your database, and total DB count. 

With this information you will have a quick understanding of the different versions of SQL Server that are running within your database farm. 


At the top, choose the Compatibility tab. 

At the topmost part of this view, you can see the server and database compatibility levels.  


It has 2 groups: 

  • Databases with lower compatibility than the server 
  • Databases with the same compatibility as the server

As you can see, there are 2 databases that use a lower compatibility level than the server’s version, and 60 databases with the same compatibility level as the server. 


On the right side, you have a pie chart that displays the different compatibility levels configured on your databases. We have 2 databases with SQL Server 2014 (120) compatibility level, 50 databases with SQL Server 2016 (130) compatibility level and 10 databases with SQL Server 2019 (150). 


The bottom of the view displays the instance name, database, status, creation date, recovery model, database compatibility, server compatibility, and server version. 

With this information, you will be able to analyze which databases you need to reconfigure and update to the server’s compatibility level.  


In the image below, you can see a database farm with a total of 903 SQL Server databases running SQL Server 2008 R2. This feature is very handy when managing such huge numbers of databases, as it helps you keep track of your databases’ compatibility levels relative to the latest available version. 

A change in compatibility level tells the instance to change its feature set; it might add new features or remove old ones, so be wary of changes that might affect your existing application.  


The compatibility report view enables DBAs to find potential performance improvements by upgrading the compatibility level of databases that might be a version behind. 



For more info visit the website www.dbWatch.com 

For more tutorials visit wiki.dbWatch.com/ControlCenter 

Follow us on Facebook, LinkedIn, and Twitter - dbWatch 

Also check out our YouTube channel dbWatch


The Challenges of Scaling MySQL

Man works on scaling MySQL Server.

If you have ever worked with scaling MySQL, or if you are a MySQL DBA, you have probably faced the challenge of scaling your MySQL instances. This blog post will outline the challenges of scaling MySQL and explain how dbWatch can help you scale, monitor, and manage your MySQL (and other) database servers efficiently while providing total control over all aspects of operation, performance, and resource usage.


When Would You Need to Scale MySQL?

In general, the challenge of scaling MySQL frequently appears when your MySQL-backed application’s traffic increases by tens, hundreds, or even thousands of times and you start seeing query timeouts more and more often.

How to Scale MySQL

As far as the actual scaling of MySQL is concerned, you have a few options to choose from. You can apply application-level optimizations, you can apply optimizations at the MySQL level, and you can also scale through infrastructure. Since we’re talking about the challenges of scaling MySQL in this blog post, we will go through some of the things you can do yourself to optimize performance at the MySQL level, and explain how dbWatch can help you scale your database instances too.

4 Things to Keep in Mind When Optimizing Your Database Instances

If you want to optimize your database instances at the MySQL level, here are some basic things to keep in mind:

  1. Only Ask For The Data You Need.

    Your queries should only fetch the data that is absolutely needed. Avoid queries like SELECT * unless absolutely necessary, because working with too much data is one of the most basic reasons a query might not perform well. Optimize your queries so that they access less data. The less data your queries access, the faster they will be – do not ask the database for data you don’t need.
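For example (the table and column names here are hypothetical), fetching only the columns the application actually displays avoids shipping and scanning unneeded data:

```sql
-- Wasteful: pulls every column, including large ones you never use
SELECT * FROM customers WHERE country = 'NO';

-- Better: fetch only what the application actually needs
SELECT id, name, email
FROM customers
WHERE country = 'NO';
```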

  2. Optimize Query Performance with EXPLAIN.

    The EXPLAIN statement can be very useful if you want to optimize your query performance – EXPLAIN shows you the access type MySQL is using to find results (access types range from a full table scan through index scans, range scans, and unique index lookups to constants). Understanding the general concepts of scanning a table, scanning an index, range accesses, and single-value accesses can prove very valuable when optimizing your database instances at the MySQL level. The type ALL means that MySQL ran a full table scan, while ref means that MySQL used the ref access type on some sort of index.
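A minimal sketch (hypothetical table; the exact plan EXPLAIN reports depends on your schema and data):

```sql
-- Without an index on `email`, EXPLAIN typically reports type: ALL
-- (a full table scan)
EXPLAIN SELECT id, name FROM customers WHERE email = 'a@example.com';

-- After adding an index, the same query can use a ref lookup instead
CREATE INDEX idx_customers_email ON customers (email);
EXPLAIN SELECT id, name FROM customers WHERE email = 'a@example.com';
```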

  3. Understand SQL Query Execution.

    Keep in mind some basics of query execution. First, the client sends the SQL statement to the server, then the server checks the query cache. If there’s a hit in the query cache, it returns the stored result from the cache; otherwise the server parses, preprocesses, and optimizes the SQL into a query execution plan. Afterwards, the query execution engine executes the plan by making calls to the storage engine API, and the server sends the result to the client. (Note that the query cache was removed in MySQL 8.0.) Keeping these steps in mind can put you on a good path when scaling your MySQL instances.

  4. Know when to Use a Wildcard Search.

    If you are using wildcard searches (for example, LIKE queries), do not start your search term with a wildcard – only use a wildcard at the end. With a leading wildcard, MySQL won’t know what the search term begins with, so an index might not be used even if one exists on the column you are querying.
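To illustrate (hypothetical table, assuming an index exists on the `name` column):

```sql
-- Leading wildcard: the index on `name` cannot be used,
-- forcing a full scan
SELECT id FROM customers WHERE name LIKE '%son';

-- Trailing wildcard only: MySQL can use an index range scan
SELECT id FROM customers WHERE name LIKE 'John%';
```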

The advice above should set you on a good path regarding fixing query timeouts, deadlocks and the like, but if you need to scale MySQL seriously, chances are that you are going to need certain tools suitable for the task.

Solving MySQL Scaling Challenges with dbWatch

If you want to solve your MySQL scaling challenges using tools, dbWatch can help you. dbWatch is a highly scalable software solution that helps enterprises monitor and manage their database server instances efficiently while providing total control over all aspects of operation, performance, and resource usage. dbWatch can help you scale all kinds of database management systems – it does not matter whether you are using MSSQL, Oracle, PostgreSQL, Sybase, or MySQL. Here’s what the user interface of dbWatch looks like:

dbWatch offers a few options to scale your MySQL (or any kind of database management system) instances:

1. You can (and should) make use of the dbWatch jobs that are available – the dbWatch jobs are split across three categories (Availability, Capacity, and Performance), and each of those categories contains jobs that perform specific tasks.

For example, the Availability category for MySQL consists of database monitoring jobs that check the uptime of your DBMS and give you alerts; the Capacity category provides aggregated and detailed database growth rates; and the Performance category consists of database jobs that check the effectiveness of the InnoDB buffer pool and the MyISAM key buffer, provide database lock statistics, analyze the memory setup of your database server, and report on your session load, the query cache hit rate, and so on. Keep in mind that jobs can be configured (or even disabled if you so desire), and they also have a details section with more information. Simply right-click on a job and click “Details”, “Configure”, or “Set Schedule”:

For example, here’s the Details section on one of MySQL jobs in all of its beauty:

2. Make use of job scheduling – if you have a lot of database instances (and you can with dbWatch), job scheduling can be an invaluable resource. To schedule your jobs with dbWatch, simply click Set Schedule after you right-click a job. You can then set a schedule for the job to run every minute, hour, day of the week, or week of the year:

3. When you’re using dbWatch, you can also connect to or shut down all servers at once – that might help with scaling your database instances too.


4. You can also make use of FDL (Farm Data Language) – this one deserves an entire book of its own (take a look at the documentation), but in short, FDL can help you with a multitude of things, ranging from filtering instances by database name to sorting the results of a column in ascending or descending order. For example, if you use #sort, your instances will be sorted in a certain order (use asc for ascending and desc for descending):

FDL can be really useful if you want to scale MySQL further and push your MySQL instances to their limits.


If you’re a MySQL DBA, chances are that sometime in the future you will face issues in relation to scaling MySQL. Your queries might become slower and slower, your traffic might increase more and more etc.

When using dbWatch you can make sure these kinds of problems will be solved quickly, securely, and effectively no matter what kind of database instance you use. 

Learn how to solve your challenges of scaling MySQL or other kinds of database instances – book a demo today.

5 Tips for Server Migration Planning

Server migration planning

As a DBA, infrastructure admin, or manager, you may have encountered a scenario where you are tasked to lead, plan, and execute a server migration project for reasons such as improving application performance with new or upgraded hardware, or optimizing the enterprise architecture.


Planning for a server migration can be quite complex, especially in large enterprises with many different servers used for various web applications, internal applications, databases, and many more. Today I will discuss some of the issues you need to consider when planning a server migration.


The 5 R’s to consider when planning a Server Migration are the following:

1. Re-hosting by Moving to a New Host (also known as Lift and Shift)

This approach involves moving your current infrastructure, applications, or databases to another server, another data center, or moving it to the cloud as it is.

Advantages of Re-hosting:

  • Simple migration by re-hosting all applications and databases as they are
  • No optimizations, no architecture changes, no code changes – moving to other servers or to the cloud without additional changes

Disadvantages of Re-hosting:

  • Can cause security risks if users and roles are not applied effectively
  • Can cause application failure or service unavailability if processes, security roles, or jobs are not synchronized correctly

2. Re-platform with a New Database Platform or Infrastructure

Re-platforming involves migrating to a new platform or infrastructure, for example moving your on-premise databases to an Azure Managed Instance in the cloud or moving your on-premise web application to AWS Elastic Beanstalk.

Advantages of Re-platforming

  • Does not change the core architecture, but may require some code changes for optimization

Disadvantages of Re-platforming

  • It will most likely require extra time and effort to apply code changes
  • You may need different toolkits and packages available only on the new platform

3. Re-target to Another Platform

Re-targeting means moving to a different product or database platform (for example, Oracle to MariaDB). Say you are migrating your Oracle databases to MariaDB to save substantial licensing costs. Moving from one database platform to another requires changes to your stored procedures and packages.
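A small, hypothetical illustration of why stored code needs rework – the same null-handling lookup written in Oracle’s dialect and in MariaDB’s (MariaDB has no NVL outside its Oracle compatibility mode):

```sql
-- Oracle dialect
SELECT NVL(phone, 'unknown') AS phone FROM customers;

-- MariaDB equivalent: use IFNULL (or the portable COALESCE)
SELECT IFNULL(phone, 'unknown') AS phone FROM customers;
```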

Advantages of Re-targeting

  • Migrate the database to a better solution for reasons such as cost-benefit, feature/function availability.

Disadvantages of Re-targeting

  • It may take considerable time to migrate to a new database platform, as you need to map out every process happening within the database.
  • You will need to learn new tools and technologies if you are not already familiar with the target platform

4. Refactor

Restructuring the enterprise architecture (databases and applications)

Advantages of Refactoring

  • Overhaul the application, driven by the need of business to add new features and functions
  • Optimizing the overall application and usage rate of resources by query optimization and rewriting queries
  • Long term cost savings as your applications/database are optimally running and properly provisioned.

Disadvantages of Refactoring

  • It may require a lot of time and effort on the part of DBAs/Developers to work on the project to refactor the whole application/database architecture
  • May introduce new bugs or issues
  • It will require extensive planning and testing before it is stable and ready

5. Retire

Turn off things that are no longer used or running, perhaps as a result of refactoring the whole environment. Consolidate infrequently used database instances onto other servers that have extra capacity and resources.

Advantages of Retiring

  • Save costs up to 10 – 30%
  • It helps reduce security vulnerabilities
  • Fewer servers to maintain, monitor, and manage
  • Simplify environment
  • Remove old legacy platforms and apps
  • Better overview
  • Cleanup

Disadvantages of Retiring

  • Hard to verify instances or databases no longer in use without extensive activity logs going back a long time or extensive manual checks
  • Moving databases may introduce unexpected side effects

Summary for Your Server Migration Planning

When planning your server migration, always remember the 5 R’s, which are:

  • Re-hosting
  • Re-platform
  • Re-target
  • Refactor
  • Retire

Before you migrate your servers, you should put a monitoring and management solution in place to keep track of your applications or databases’ health.


dbWatch Control Center allows you to track your database health and performance using different database monitoring jobs for every performance metric.


Control Center has built-in reporting capabilities for the management to have a quick overview of their database farms’ status, capacity, and resource usage.

Track your database health, try dbWatch Control Center today.

Deploying Databases Locally vs. In a Live Environment

Database working on deployment live.

All developers and database administrators (DBAs) have been in a situation where they need to test database deployment before going live. There’s no way around it. Want to go live? Test your environment in a sandbox before moving on to publicly visible spaces (e.g. the web.)


But have you ever wondered what DBAs need to consider before deploying their database environments both locally and live? We will tell you all about that in this blog post. 


Local Database Deployment

Neither developers nor higher-tier DBAs have many worries when deploying databases locally. Perhaps they worry about configuration, but not much else. With local deployment you already know all of the parameters of your system, so the only thing you need to do is allocate resources appropriately. Literally – that‘s it! If you’re running MySQL, just open up my.cnf, set your parameters to use 60-80% of RAM, restart WAMP, and that’s it. Not that hard! 
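As a rough sketch (the values below are illustrative for a dedicated machine with around 16 GB of RAM, not recommendations for your workload):

```ini
# my.cnf - give InnoDB the bulk of available memory
[mysqld]
innodb_buffer_pool_size = 10G   # roughly 60-80% of RAM on a dedicated box
innodb_log_file_size    = 512M  # larger redo logs smooth heavy writes
max_connections         = 200   # size to your expected concurrency
```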


To deploy a database, you simply use phpMyAdmin or log in through the CLI and run a couple of CREATE DATABASE queries (advanced DBAs might even have scripts that accomplish all of these tasks for them.) Select your collation; the default storage engine can be set in my.cnf (by default that’s InnoDB or XtraDB, depending on the database version you elect to use), and that’s it – you’re off to a good start.
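For instance (the database name is hypothetical), the whole step can be as small as:

```sql
-- Create the database with an explicit character set and collation
CREATE DATABASE shop_dev
  CHARACTER SET utf8mb4
  COLLATE utf8mb4_unicode_ci;

-- Verify the settings
SHOW CREATE DATABASE shop_dev;
```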


Unfortunately, everything gets more difficult in a live environment. The reasons are numerous – live environments usually need more care and nourishment than local ones, and before adjusting your settings you must carefully consider all aspects of your server. Before you know it, customers start pouring in – what do you do now? Adjust the server’s configuration on the fly? No – that wouldn’t work. 

Deploying Databases in a Live Environment 

Developers and database administrators deploying databases (or software in general) in a live environment need to consider the following things: 


  1. How big is the customer base of our present project? 5 customers are not much to worry about with regard to downtime – 5,000,000, however, is a big deal. In that case, 5 minutes of downtime could cost your customers thousands. Are your databases prepared for that? 
  2. What parts of our project are used most frequently? Before deploying a database, perhaps we can deploy it on a server that’s used less frequently than others? If one server is always under 88% load and another under 6%, it might be time to have a closer look at the second one. Why is that the case? Perhaps we can deploy our databases on that server to avoid hassle and load issues for other servers? 
  3. Do we need to deploy one additional database? Seriously, this question is overlooked so frequently these days – developers just deploy databases here or there. Before deploying one, ask yourself: why does our team need a new database? Is it going to be normalized? If yes, were the rest of the databases not normalized? Why? What’s the use of one additional database? Is it crucial for our project? Will it be a burden for the underlying infrastructure? Will it help us adhere to certain regulatory requirements (e.g. GDPR and the like)? 

Consider those questions carefully. Weigh your choices on three principles: 

  1. Who? Who is deploying the database? Who is it directed at? 
  2. What? What’s the function of the database? 
  3. Why? Why is the database being deployed? 

The answers to these three questions will determine whether deploying a database is a good option for your use case. If the answers to all of those questions are, in your opinion, solid, deploy a database. Before deploying one, though, investigate your configuration files and adjust the parameters so that they make the most of what your operating system provides. Look at your RAM, how much space you have on your hard drives, and what your CPU can handle, then decide what parameters you are going to set and to what values. If you have a hard time deciding, automating database processes can help. 

Automating Processes in Deployment 

“Decide what parameters you are going to set and to what values” – as if it were that easy. 1, 2, 3, done? No. This step requires very careful consideration. One step in the wrong direction and your database is in ruins. Thankfully, there are tools that can assist you in this realm: one of them is dbWatch. 


With dbWatch Control Center, your databases are in safe hands, because dbWatch supports pretty much all database flavors out there: MySQL, MariaDB, PostgreSQL, Oracle, Sybase, even SQL Server. Once you know what database flavor you’re going to use, think of the database tasks you’re automating: dbWatch comes with a set of database jobs (tasks) that can be scheduled and run at a specific time, but the jobs available depend on the database management system you are using. If you’re using MySQL, the selection isn’t very extensive yet, since dbWatch’s MySQL job support is still in active beta, but import a Microsoft SQL Server instance into dbWatch and you will be able to choose from tens of database jobs ranging from performance to security to replication.  


Here’s an example of what MySQL jobs look like, so you can imagine what would happen for other DBMSes: 

Choose what job you want to run, schedule it by right-clicking (click Set Schedule), and your job will be automated: 

Once your job is automated, the work doesn’t end there. You can stop here, but we don’t recommend it, because there are other parts of dbWatch you will need to keep an eye on if you want to put your database instances on the highway to performance with database farm management. One of those is the Database Farm available within dbWatch (that’s the third icon from the top.) The database farm lets you gain an overview of the inventory your databases are using (platforms, versions, etc.) and also lets you keep a close eye on your database jobs (both those you have automated, and those not yet set up.) This page gives you an overview of jobs with a status of “OK”, “Warning”, or “Alarm”.


The jobs you will need to keep a close eye on when deploying in a live environment are those with the statuses “Warning” and “Alarm”, simply because jobs with those statuses imply that something is either wrong or seriously wrong in your database instance. Inspect those jobs once they turn orange (a warning) or red (an alarm) – that means something has gone off the rails and needs immediate attention. 

With dbWatch database monitoring, everything is simple, though – you don’t need another client to run queries to fix problems here or there. Click the fourth icon from the top (that’s the SQL client) and use the client built into dbWatch. This SQL client will help you run queries to fix problems, while switching back to the monitor will tell you whether the queries you ran fixed the issue. While the SQL client is not perfect, it can help you execute your queries both as SQL and PL and load files to be run (think of loading SQL files with a bunch of statements, etc.) Do keep in mind that this feature is useful both in live environments and for testing, since in a testing environment it will help you verify whether you configured settings correctly: 

dbWatch Control Center is a great fit whether you’re deploying databases locally or in a live environment – as already noted, the main features of dbWatch Control Center will help you run database-related jobs, while the SQL client will let you run queries to fix any issues you encounter along the way.


Most clients see an impact on their organization within the first week of using the tool. Whether you’re deploying database instances in a local or a live environment, remember: use the guidance outlined in the documentation.


Discover the impact dbWatch can make on your organization, download a trial version of dbWatch that monitors up to 5 instances for 6 months.