The New Normal solidified the shift toward online transactions. From banks to retailers to the food industry, information systems have adapted to meet customer demand. However, not everything is rosy in this period of a new normal. With the increase in demand often comes a degradation in database performance. The most common symptom is a slow-running database whose sluggishness is plainly visible at the application layer.
What is database slowness?
Let’s discuss database performance first. Database performance is the speed at which your database responds to requests for its stored data. To visualize it: a simple SELECT statement, when executed, consumes resources in your database and returns tabulated data for display. This seems fine for the first thousand records, but when you are querying tens of thousands of rows or more, reduced performance often becomes noticeable.
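To make that concrete, here is a minimal sketch using Python's built-in sqlite3 module (the table, column names, and row counts are invented for illustration): the same aggregate query is timed against 1,000 rows and then 100,000 rows, and the full scan over the larger table takes measurably longer.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, sensor TEXT, value REAL)")

def scan_time(rows):
    """Reload the table with `rows` rows, then time one full-scan aggregate."""
    conn.execute("DELETE FROM readings")
    conn.executemany(
        "INSERT INTO readings (sensor, value) VALUES (?, ?)",
        ((f"s{i % 50}", float(i)) for i in range(rows)),
    )
    start = time.perf_counter()
    conn.execute("SELECT sensor, AVG(value) FROM readings GROUP BY sensor").fetchall()
    return time.perf_counter() - start

small, large = scan_time(1_000), scan_time(100_000)
print(f"1k rows: {small:.5f}s  100k rows: {large:.5f}s")
```

The absolute numbers will vary by machine; the point is only that scan cost grows with the number of rows touched.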
Database slowness is that stunted performance. The difference between one slow SELECT statement and database slowness is that the latter is a chronic ailment of your databases.
Slowness can be attributed to multiple factors such as:
– missing indexes
– improper caching
– unoptimized queries
– database architecture
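The first factor, missing indexes, is easy to demonstrate. The sketch below uses Python's sqlite3 module purely for illustration (table and index names are hypothetical): EXPLAIN QUERY PLAN reveals a full table scan before an index exists and an index search after one is created.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(10_000)],
)

# Before indexing: the planner has no choice but a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()

conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

# After indexing: the planner searches the index instead of scanning.
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()

print(plan_before[-1][-1])  # a SCAN step over the whole table
print(plan_after[-1][-1])   # a SEARCH step using idx_orders_customer
```

Other platforms expose the same information through their own EXPLAIN variants or execution-plan views; the habit of checking the plan is what transfers.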
Reduced database performance is heavily influenced by your database’s existing structure, and as DBAs, it is our primary responsibility to mitigate these effects.
What can you do?
Monitor your network and memory
– estimate disk space and cache memory
Constantly check which applications are consuming the most resources. Be wary of cache memory and disk space, as they can be depleted by continuous reading and writing to your databases. Monitor your network connections to avoid unplanned downtime. See to it that the allocated bandwidth is sufficient and that latency to and between database servers is kept to a minimum.
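Disk-space checks in particular are easy to automate. Below is a minimal sketch using the Python standard library's shutil.disk_usage; the path and the 90% warning threshold are arbitrary assumptions you would tune for your own environment.

```python
import shutil

def disk_usage_report(path="/", warn_pct=90.0):
    """Return (used %, free GiB, alarm flag) for the filesystem holding `path`."""
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    free_gib = usage.free / 1024**3
    return used_pct, free_gib, used_pct >= warn_pct

pct, free_gib, alarmed = disk_usage_report("/")
print(f"disk used: {pct:.1f}%  free: {free_gib:.1f} GiB  alarm: {alarmed}")
```

A report like this, run on a schedule against each database server's data and log volumes, gives you early warning before continuous writes exhaust the disk.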
Coordinate with your network team, developers, and other related departments to determine the root cause of the problem, especially if it is related to database performance. Fixing slowness won’t happen overnight, but with gradual steps, the system can be tuned to its peak performance.
Review your database architecture and structure
Poorly defined and maintained databases will cause retrieval latency and performance issues. Missing indexes and unnecessary table locks are not the only contributors to undesirable database performance; database architecture also plays a significant role.
You must review whether your overall architecture follows correct normalization. Duplicate data and missing primary keys are common problems in fledgling database systems. These are easy to shelve as backlog items. Yet by constantly pushing them down your priority list, the cracks in your database architecture begin to show. Sooner or later, addressing those cracks becomes labor-intensive, as DBAs will have to work longer hours, and costly, because you need to plan when and how much time to dedicate to fixing them.
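Missing primary keys are worth fixing early because the database can then police duplicates for you. A minimal sketch using Python's sqlite3 (table and names invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Without a primary key, nothing stops duplicate rows from accumulating.
conn.execute("CREATE TABLE customers_bad (customer_id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers_bad VALUES (1, 'Alice')")
conn.execute("INSERT INTO customers_bad VALUES (1, 'Alice')")  # silently accepted

# With a primary key, the database itself rejects the duplicate.
conn.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Alice')")
try:
    conn.execute("INSERT INTO customers VALUES (1, 'Alice')")
except sqlite3.IntegrityError as err:
    print("duplicate rejected:", err)

bad_rows = conn.execute("SELECT COUNT(*) FROM customers_bad").fetchone()[0]
good_rows = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
print(bad_rows, good_rows)  # 2 1
```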
Take, for example, re-indexing tables containing millions of master records. The best approach would be to schedule it on a weekend with a low volume of transactions against the database. Then formulate a strategy for execution and rollback, disseminate the scheduled maintenance window to all stakeholders, and, most importantly, secure DBAs to work during that time frame. Essentially, procrastination hurts the business as long as poor quality persists. When reviewing your architecture, ask yourself:
- Does the design make sense?
- What is the probability of duplicate data per row? Is it low or high?
- When I query, will I need to go to another schema just to retrieve that other table?
- Are there redundant data that will be stored in every field?
- Are the data types properly defined? Are there composite data types I should be wary of?
- Are the primary and foreign keys properly defined? Will they lead to a conflict of inputting data?
- Is the chosen indexing approach sufficient to optimize my searches?
- Should I use a VIEW as a substitute for this table?
- How will we handle orphan data?
- Will we automatically flush or clean cache memory once data rows are deleted?
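On the orphan-data question in particular, foreign keys with a cascade rule let the database clean up dependent rows by itself. A minimal sketch using Python's sqlite3 (which only enforces foreign keys when they are switched on per connection; the tables are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    """CREATE TABLE books (
           id INTEGER PRIMARY KEY,
           author_id INTEGER REFERENCES authors(id) ON DELETE CASCADE,
           title TEXT
       )"""
)
conn.execute("INSERT INTO authors VALUES (1, 'Ann Author')")
conn.execute("INSERT INTO books VALUES (1, 1, 'First Book')")

# Deleting the parent also removes its dependent rows, so no orphans survive.
conn.execute("DELETE FROM authors WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM books").fetchone()[0]
print(remaining)  # 0
```

Whether cascading deletes are the right policy is itself a design decision; some teams prefer RESTRICT plus an explicit cleanup job so nothing disappears implicitly.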
Query Plan. Plan. Plan
Whether you are a senior or junior DBA, always remember to value your query execution plans. Utilize EXPLAIN statements and execution-plan tabs, and do this on Oracle, PostgreSQL, or any other platform as well.
Double check your database objects and queries such as:
1. Stored procedures
See to it that upon their execution, they do not consume a large portion of your database resources.
2. JOIN and subquery statements
JOIN and SUBQUERY statements can also be headaches to deal with. Careless developers will match up tables containing tens of thousands of records using ambiguous or incorrect join keys. Poorly scripted subqueries often lead to slower returns and can yield NULL values even when the conditions are met. Where possible, prefer JOIN statements, since the optimizer can evaluate them early in the plan and limit the number of rows carried through the rest of the query.
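As a small worked example (sketched in Python's sqlite3 with invented tables), the same question "which customers have at least one order?" can be written as an IN-subquery or as a JOIN; both return the same rows, but the JOIN form typically gives the optimizer more freedom to choose a good plan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Ann"), (2, "Bob"), (3, "Cid")])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [(1, 1, 10.0), (2, 1, 20.0), (3, 3, 5.0)])

# Subquery form: customers who appear in the orders table.
sub = conn.execute(
    "SELECT name FROM customers WHERE id IN (SELECT customer_id FROM orders)"
).fetchall()

# Equivalent JOIN form; DISTINCT collapses customers with several orders.
join = conn.execute(
    "SELECT DISTINCT c.name FROM customers c JOIN orders o ON o.customer_id = c.id"
).fetchall()

print(sorted(sub) == sorted(join))  # True: both return Ann and Cid
```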
DBAs should search for these anomalies while finding ways to optimize them. Fine-tuning these queries is no joke; it may take hours to optimize a single query.
With all this talk about optimization, management, and monitoring, DBAs have a lot on their hands. It will be a challenge to focus on other tasks, especially if your company is suffering from reduced database performance. However, you don’t have to do these tasks manually.
A database management and monitoring solution such as dbWatch will assist you in tracking down resource-heavy queries. dbWatch Control Center monitors your database’s memory and storage, such as disk space, cache memory, and temporary tables. These are supervised by jobs that deliver the information right to your dashboard. Boxed in red are the jobs checking for memory usage.
Database and server connections are also constantly checked by dbWatch Control Center. The monitoring dashboard lists all possible issues and provides detailed statistics on your databases when a problem occurs. The screenshot below highlights, in red, the issues encountered when the job is scheduled to run. Predominantly, no backup plans were established for the instances listed, which categorized them as recurring issues.
The green box shows the statistics of instances. There are five categories with differing colors. Red indicates a major problem has occurred within that instance. Yellow shows that the DBA’s intervention may be needed to properly address the instance’s issue. Green highlights instances showing no signs of problems. Blue displays instances that are down, while gray points out newly added instances without monitoring jobs installed.
On the database management side, you can perform your database administration tasks in a single window. Upon accessing the management module, look at the left-hand pane of the screen. Here you will see a tree-like structure (see red box) where you can do the following:
1. Configure your database’s security
2. Set up backups and restore them
3. Configure your SQL agent
You don’t need to access your database management studio to accomplish these tasks. All you need is your dbWatch Control Center monitor.
How’s that for convenience? An all-in-one tool that provides solutions to your database problems.