MySQL Benchmarking Concepts and Strategies

MySQL is the world’s most popular open source relational database management system (RDBMS), with over ten million installations spread around the globe. In reality, though, only a small percentage of those are high-performance, optimized, and tuned servers. This four-part article series is aimed at MySQL database and system administrators, covering strategies to help you benchmark and optimize databases and tune servers for outstanding performance.

You are reading the first segment of this multi-part series. In this part we will first explain the importance of benchmarking and then discuss a few techniques for benchmarking MySQL servers. It is important to understand the basics behind benchmarking and stress-testing, the distinctions between the two, and when to use which. Finally, I will present the tools necessary for accomplishing these tasks.

The upcoming three segments of this series will cover in great detail the tools you need for benchmarking, various database performance optimization strategies, and MySQL server tuning strategies, respectively. For now we should focus on benchmarking alone, because we need to grasp these “metrics” before we can analytically evaluate the results of our optimizations later on. The beauty of benchmarks is that they give us numerical feedback.

Throughout this series we assume that the reader is somewhat familiar with the inner workings of relational databases, has a basic understanding of MySQL servers, and knows how to carry out administrative tasks in the operating system of choice (e.g., a chosen Linux distro or Windows).

Having said this, let’s begin. Please bookmark and/or save these articles for future reference, because throughout the upcoming parts we will often refer back to earlier articles and take for granted that you are familiar with the concepts discussed in previous parts.

{mospagebreak title=Basics of Benchmarking}

Before we get into the details of benchmarking we need to understand its basics. In order to explain as clearly as possible the concepts of “benchmarking” and “stress-testing” when it comes to the world of database servers, we’ll approach the situation with familiar analogies from generic computer benchmarking and stress-testing.

Since the graphics card industry started booming back in the early 2000s, ever newer video card benchmarking utilities have come out, such as the popular 3DMark series. In a nutshell, these tools play snippets of bleeding-edge video games that test the overall video processing performance of the system during those scenes (scores are calculated from frame rates, measured in frames per second).

In the end, the user is given a score. That score can be compared with the scores of other users with similar or identical hardware, or even totally different system setups. As a result, users get a “sense” of the capabilities of their system, how it performs in comparison to others, and ultimately whether or not it can handle the latest video game releases on the market.

Benchmarking becomes truly crucial the moment users try to tweak the settings of their components and even attempt overclocking. After each and every change, running the benchmarking utility shows whether the tweak was useful or not, whether it improves or degrades performance, and so forth. The same applies to benchmarking in the world of database servers, as the sketch below illustrates.
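As a minimal sketch of that change-and-measure loop, the mysqlslap load-emulation client that ships with MySQL can produce a repeatable baseline. The credentials, concurrency, and query counts below are placeholder values to adapt to your own environment:

    # Baseline run against auto-generated test data; the user,
    # concurrency, and query counts here are illustrative only.
    mysqlslap --user=root --password \
      --auto-generate-sql \
      --concurrency=50 \
      --iterations=5 \
      --number-of-queries=10000

    # Apply ONE configuration change, then re-run the exact same
    # command and compare the reported average run times.

Running the identical command before and after each tweak is what turns the raw numbers into meaningful comparisons.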

Benchmarking, by definition, is a set of standards by which the performance of components can be measured. These standards are created from real-world scenarios. In video card benchmarking they are snippets from various new video games. In database benchmarking, as you will learn, they are based on day-to-day situations that can reasonably be expected to occur during everyday database activity.

Herein lies the key distinction between benchmarking and stress-testing. The former gives us results in the shape of numbers; what we care about most are the differences between those numbers as we apply tweaks, change server/system settings, and so forth. Stress-testing, on the other hand, is testing at the extreme edge: finding the server’s/system’s limits.

The purpose of this article series is to show you how to get the most out of your MySQL server by applying tweaks, optimizing your database, and tuning the performance of your server. Therefore, we won’t cover extreme stress-testing. Keep in mind, however, that stress-testing isn’t useless; knowing the limits of your infrastructure, MySQL database, and so on is valuable. Covering that area, though, would be beyond the scope of this multi-part series.

Needless to say, by the end of this series you will know the factors professionals usually consider critical for benchmarking purposes, and in the next segment you will learn about a variety of tools that benchmark those criteria. With a little imagination you will realize that using those exact same tools and feeding them “extreme” data is akin to stress-testing. It all depends on the source data.

Using an unrealistic amount of data, such as billions of rows in a table, while running the exact same benchmarking utilities may generate bizarre results, but such results are exactly what matters in stress-testing. When your server crashes at 1,500 simultaneous connections, you know it cannot accomplish the requested task at that load. That’s how you find its limits.
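As a rough sketch, mysqlslap can be pushed into stress-test territory simply by ramping up the concurrency. The figures and credentials below are illustrative; note that the server’s max_connections setting must be raised above the highest value, or the extra clients will never even connect:

    # Step the concurrency up until something breaks.
    mysqlslap --user=root --password \
      --auto-generate-sql \
      --concurrency=500,1000,1500 \
      --number-of-queries=100000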

Now that we know what benchmarking is about, we need to define the sub-concepts of MySQL server performance. In general, we consider throughput, latency (response times), and scalability the three most important performance factors. Combined, these factors represent the overall performance of a database server. There are individual benchmark utilities that test each of these, but we’ll discuss those on the next page. Let’s move on.

{mospagebreak title=Benchmarking Concepts}

On the previous page we mentioned the so-called “trio” of performance factors. Throughput is, by definition, the amount of data transferred in a specific amount of time. In our case this refers to transactions, specifically the throughput of the MySQL server’s client-server connections. We need to realize that handling simultaneous transactions is an absolute necessity if starvation is to be avoided.
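For a rough feel of throughput, the server’s own counters can be consulted; dividing Questions by Uptime gives the average number of statements per second since the last restart. A minimal sketch, assuming a local server and root credentials:

    # Two status counters: total statements and seconds running.
    mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Questions';
                         SHOW GLOBAL STATUS LIKE 'Uptime';"

    # mysqladmin reports the resulting "Queries per second avg" directly.
    mysqladmin -u root -p status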

Starvation is a technical term among DBAs for the situation in which a user waits too long for his or her request to be served. Wait times should be kept as low as possible; when the delay grows large, we call the situation starvation: the user is starving for data and the server is unable to respond.

An extension of the previous performance factor is latency, or response time. Slow responses can happen due to overall high load on the server: the server tries to respond instantly, but it takes a while for the results to reach the user. Latency can also be caused by unnecessary DNS resolutions, lags in the network infrastructure (too many uncontrolled hops?), and, of course, the most obvious reason: a query cache that is too small. Increasing its size can help.
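Here is a quick sketch of how you might check two of those causes from the command line. The variable names apply to MySQL 5.x (the query cache was removed in MySQL 8.0), and the credentials are placeholders:

    # Is the server doing a reverse DNS lookup for every new
    # connection? Enabling skip-name-resolve in my.cnf avoids it.
    mysql -u root -p -e "SHOW VARIABLES LIKE 'skip_name_resolve';"

    # Current query cache configuration and hit/miss statistics.
    mysql -u root -p -e "SHOW VARIABLES LIKE 'query_cache%';
                         SHOW GLOBAL STATUS LIKE 'Qcache%';"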

The third and last factor is scalability. Some may consider scalability unnecessary to take into account, but in the real world this factor should never be neglected. There is always room for improvement, and capacity planning is something no business should ever ignore. This is where we benchmark how well our infrastructure scales.

For example, how does adding new hardware change the performance of our server? What would happen if our database size doubled, or even increased tenfold? The possibility that the number of queries and simultaneous connections also increases shouldn’t be excluded either. What happens if the transaction count becomes eight times what it is now? Does adding more memory alleviate the problem?

As you can see, each of these factors should be taken seriously. And yes, the last factor mentioned is akin to stress-testing, but scalability testing usually shouldn’t be taken that far. During stress-testing we want to find the extremes of our current server configuration. During scalability testing we consider plausible real-world situations (e.g., the database size doubling) and then run benchmarks against them.
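One way to approximate such a scalability test is to run the same workload against progressively larger data sets. The sketch below uses the classic sysbench OLTP test (0.4/0.5 syntax; recent sysbench releases invoke it as "sysbench oltp_read_write ..." instead), with placeholder credentials and row counts:

    # Benchmark the same OLTP workload at 1x, 2x, and 10x the data
    # volume and watch how throughput and latency degrade.
    for rows in 1000000 2000000 10000000; do
      sysbench --test=oltp --oltp-table-size=$rows \
               --mysql-user=bench --mysql-password=secret prepare
      sysbench --test=oltp --oltp-table-size=$rows \
               --mysql-user=bench --mysql-password=secret \
               --num-threads=16 run
      sysbench --test=oltp \
               --mysql-user=bench --mysql-password=secret cleanup
    done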

Moreover, we need to understand that benchmarking is a fantastic strategy for measuring performance, but it can be very misleading, especially if the benchmarks aren’t run accurately. Always double-check the input data with which benchmarking utilities are fed, execute multiple runs of each benchmark (at least five), and average the results after completing those runs.

Anybody who is already familiar with benchmarking knows that the outcomes of benchmarking applications are very sensitive. There is nothing more dangerous than assuming your benchmarked score is accurate after one run. After playing around a while, you will realize that your results are frequently influenced by factors you may or may not be aware of. And don’t forget that query caching does happen; it’s recommended that you restart your MySQL server between runs to eliminate unwanted caching effects.
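A minimal sketch of that discipline in shell form; "bench.sh" is a hypothetical wrapper around whatever benchmark command you use, and the restart command depends on your init system:

    # Five cold-cache runs: restart MySQL between runs to flush
    # caches, collect the outputs, and average them afterwards.
    for i in 1 2 3 4 5; do
      sudo service mysql restart   # or: sudo systemctl restart mysql
      sleep 10                     # let the server settle
      ./bench.sh >> results.txt    # hypothetical benchmark wrapper
    done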

Beginners sometimes fall into the trap of suspiciously positive results. These happen when, for whatever reason, your benchmarking tool runs several times faster than usual, giving you an unrealistic outcome. For example, you run the benchmark five times; four of the results are very similar, but one is stupendously different. In that case, it’s advisable to simply throw out all of your results and re-bench.

Benchmarking databases in professional environments should never be treated as a competition. Assuring heightened, optimized, and scalable server performance is not a joke; these tasks should be approached with a great deal of dedication and care. Therefore, don’t fall into any traps or fool yourself.

Now that we’re talking about the causes of poor benchmarking, let’s name a few others: always running benchmarks from a single host, especially from the server system itself (which ignores the effects of the network infrastructure); using the default settings of the MySQL server (not configuring your server for the specific needs of your company is a mistake); and failing to eliminate caching artifacts, among others.

{mospagebreak title=Taking a Break}

You’ve come to the end of this segment. Throughout this part we introduced the concepts of benchmarking, and now you are familiar with the most common strategies. In the upcoming article we will present the tools to accomplish these tasks and to benchmark the trio of criteria we described. You won’t want to miss it!

The first two segments of this four-part series shouldn’t be neglected, because you can only get the most out of the database optimization and server tuning tips covered in the third and fourth segments if you have real experience with benchmarking. Each company has individual needs and totally different databases. There are no universal rules, optimal server settings, or tweaks that work on every kind of setup.

Therefore, it is critical to be able to tailor everything you read and learn in this article series to your own database, server, and infrastructure. Benchmarking will help you compare results. Always change one thing at a time, re-bench, and see the numerical differences for yourself. And yes, arm yourself with patience.

As a finale, I’d like to invite you to join our experienced community of technology professionals on all areas of IT&C starting from software and hardware up to consumer electronics at Dev Hardware Forums. Also, be sure to check out the community of our sister site at Dev Shed Forums. We are friendly and we’ll do our best to help you.
