Tomcat Performance Tuning

If you want a "pure Java" HTTP web server environment for Java code to run on, you may have already discovered Tomcat. This five-part article series shows you how to tune Tomcat’s performance and increase its efficiency. It is excerpted from chapter four of Tomcat: The Definitive Guide, Second Edition, written by Jason Brittain and Ian F. Darwin (O’Reilly; ISBN: 0596101066). Copyright © 2008 O’Reilly Media, Inc. All rights reserved. Used with permission from the publisher. Available from booksellers or direct from O’Reilly Media.

Once you have Tomcat up and running, you will likely want to do some performance tuning so that it serves requests more efficiently on your computer. In this chapter, we give you some ideas on performance tuning the underlying Java runtime and the Tomcat server itself.

The art of tuning a server is a complex one. It consists of measuring, understanding, changing, and measuring again. The following are the basic steps in tuning:

  1. Decide what needs to be measured.
  2. Decide how to measure.
  3. Measure.
  4. Understand the implications of what you learned.
  5. Modify the configuration in ways that are expected to improve the measurements.
  6. Measure and compare with previous measurements.
  7. Go back to step 4.

Note that, as shown, there is no "exit from loop" clause; perhaps that reflects real life. In practice, you will need to set a threshold below which minor changes are insignificant enough that you can get on with the rest of your life. You can stop adjusting and measuring when you believe you're close enough to the response times that satisfy your requirements.

To decide what to tune for better performance, you should do something like the following.

Set up your Tomcat on a test computer as it will be in your production environment. Try to use the same hardware, the same OS, the same database, etc. The more similar it is to your production environment, the closer you’ll be to finding the bottlenecks that you’ll have in your production setup.

On a separate machine, install and configure your load generator and the response tester software that you will use for load testing. If you run it on the same machine that Tomcat runs on, you will skew your test results, sometimes badly. Ideally, you should run Tomcat on one computer and the software that tests it on another. If you do not have enough computers to do that, then you have little choice but to run all of the software on one test computer, and testing it that way will still be better than not testing it at all. But, running the load test client and Tomcat on the same computer means that you will see lower response times that are less consistent when you repeat the same test.

Isolate the communication between your load tester computer and the computer you’re running Tomcat on. If you run high-traffic tests, you don’t want to skew the test data by involving network traffic that doesn’t belong in your tests. Also, you don’t want to busy computers that are uninvolved with your tests due to the heavy network traffic that the test will produce. Use a switching hub between your tester machine and your mock production server, or use a hub that has only these two computers connected.

Run some load tests that simulate various types of high-traffic situations that you expect your production server to have. Additionally, you should probably run some tests with higher traffic than you expect your production server to have so that you’ll be better prepared for future expansion.

Look for any unusually slow response times and try to determine which hardware and/or software components are causing the slowness. Usually it’s software, which is good news because you can alleviate some of the slowness by reconfiguring or rewriting software. In extreme cases, however, you may need more hardware, or newer, faster, and more expensive hardware. Watch the load average of your server machine, and watch the Tomcat logfiles for error messages.
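As a minimal sketch of that kind of monitoring (assuming a Unix-like server and that CATALINA_HOME points at your Tomcat installation; the fallback path below is an assumption, not a Tomcat requirement), you might watch the load average and scan the main logfile for errors while a test runs:

```shell
# Show the one-, five-, and fifteen-minute load averages.
uptime

# Scan Tomcat's main logfile for recent error messages, if present.
# The path is an assumption; adjust it to your installation's layout.
LOG="${CATALINA_HOME:-/usr/local/tomcat}/logs/catalina.out"
if [ -f "$LOG" ]; then
  grep -iE 'error|exception|severe' "$LOG" | tail -n 20
fi
```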

In this chapter, we show you some of the common Tomcat things to tune, including web server performance, Tomcat request thread pools, JVM performance, DNS lookup configuration, and JSP precompilation. We end the chapter with a word on capacity planning.

Measuring Web Server Performance

Measuring web server performance is a daunting task, to which we shall give some attention here and supply pointers to more detailed works. There are far too many variables involved in web server performance to do it full justice here. Most measuring strategies involve a “client” program that pretends to be a browser but, in fact, sends a huge number of requests more or less concurrently and measures the response times.*

You’ll need to choose how to performance test and what exactly you’ll test. For example, should the load test client and server software packages run on the same machine? We strongly suggest against doing that. Running the client on the same machine as the server is bound to change and destabilize your results. Is the server machine running anything else at the time of the tests? Should the client and server be connected via a gigabit Ethernet, or 100baseT, or 10baseT? In our experience, if your load test client machine is connected to the server machine via a link slower than a gigabit Ethernet, the network link itself can slow down the test, which changes the results.

Should the client ask for the same page over and over again, mix several different kinds of requests concurrently, or pick randomly from a large lists of pages? This can affect the server’s caching and multithreading performance. What you do here depends on what kind of client load you’re simulating. If you are simulating human users, they would likely request various pages and not one page repeatedly. If you are simulating programmatic HTTP clients, they may request the same page repeatedly, so your test client should probably do the same. Characterize your client traffic, and then have your load test client behave as your actual clients would.

Should the test client send requests regularly or in bursts? For benchmarking, when you want to know how fast your server is capable of completing requests, you should make your test client send requests in rapid succession without pausing between requests. Are you running your server in its final configuration, or is there still some debugging enabled that might cause extraneous overhead? For benchmarks, you should turn off all debugging, and you may also want to turn off some logging. Should the HTTP client request images or just the HTML page that embeds them? That depends on how closely you want to simulate human web traffic. We hope you see the point: there are many different kinds of performance tests you could run, and each will yield different (and probably interesting) results.

Load-Testing Tools

The point of most web load measuring tools is to request one or more resources from the web server a certain (large) number of times, and to tell you exactly how long it took from the client's perspective (or how many times per second the page could be fetched). Many web load measuring tools are available on the Web. A few measuring tools of note are the Apache Benchmark tool (ab, included with distributions of the Apache httpd web server), Siege, and JMeter from Apache Jakarta.

Of those three load-testing tools, JMeter is the most featureful. It is implemented in pure multiplatform Java, sports a nice graphical user interface that is used for both configuration and load graphing, is flexible for web testing and report generation, can be used in a text-only mode, and has detailed online documentation showing how to configure and use it. In our experience, JMeter gave the most reporting options for the test results, is the most portable to different operating systems, and supports the most features. But, for some reason, JMeter was not able to request and complete as many HTTP requests per second as ab and siege did. If you're not trying to find out how many requests per second your Tomcat can serve, JMeter works well because it probably implements all of the features you'll need. But, if you are trying to determine the maximum number of requests per second your server can successfully handle, you should instead use ab or siege.

If you are looking for a command-line benchmark tool, ab works wonderfully. It is only a benchmarking tool, so you probably won't be using it for regression testing. It does not have a graphical user interface, nor can it be given a list of more than one URL to benchmark at a time, but it does exceptionally well at benchmarking one URL and giving sharply accurate and detailed results. On most non-Windows operating systems, ab is preinstalled with Apache httpd, or there is an official Apache httpd package to install that contains ab, making the installation of ab the easiest of all of the web load-testing tools.

Siege is another good command-line (no GUI) web load tester. It does not come preinstalled in most operating systems, but its build and install instructions are straightforward and about as easy as they can be, and Siege's code is highly portable C code. Siege supports many different authentication features and can perform benchmark testing and regression testing, and it also supports an "Internet" mode that attempts to more closely simulate the load your webapp would get with many real users over the Internet. With other, less featureful tools, there seems to be spotty support for webapp authentication: they support sending cookies, but some may not support receiving them. And, while Tomcat supports several different authentication methods (basic, digest, form, and client-cert), some of these less featureful tools support only HTTP basic authentication. Form-based authentication is testable with any tool that can submit a POST HTTP request for the login form submission (JMeter, ab, and siege each support sending POST requests like this), but only some tools can. Being able to closely simulate the production user authentication is an important part of performance testing, because authentication itself is often a heavyweight operation and does change the performance characteristics of a web site. Depending on which authentication method you are using in production, you may need to find different tools that support it.
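As one illustration of testing against an authenticated site (the hostname, credentials, and cookie value below are placeholders, and the command is guarded so the sketch is harmless where ab is absent or the host does not resolve), ab can exercise HTTP basic authentication with its -A switch and send a cookie with -C:

```shell
# -A user:password adds an HTTP basic-auth header to every request;
# -C name=value attaches a cookie (e.g., a Tomcat session ID).
# The guard and trailing "|| true" keep this sketch a no-op when ab
# is missing or the placeholder hostname does not resolve.
if command -v ab >/dev/null 2>&1; then
  ab -n 1000 -c 10 -A testuser:testpass \
     -C "JSESSIONID=placeholder" http://tomcathost:8080/ || true
fi
```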

As this book was going to print, a new benchmarking software package became available: Faban. Faban is written in pure Java 1.5+ by Sun Microsystems and is open source under the CDDL license. Faban appears to be focused on nothing but careful benchmarking of servers of various types, including web servers. Faban is carefully written for high performance and tight timing so that any measurements will be as close as possible to the server's real performance. For instance, the benchmark timing data is collected when no other Faban code is running, and analysis of the data happens only after the benchmark has concluded. For best accuracy, this is the way all benchmarks should be run. Faban also has a very nice configuration and management console in the form of a web application. In order to serve that console webapp, Faban comes with its own integrated Tomcat server! Yes, Tomcat is a part of Faban. Any Java developers interested in both Tomcat and benchmarking can read Faban's documentation and source code and optionally also participate in Faban's development. If you are a Java developer and you are looking for the most featureful, long-term benchmarking solution, Faban is probably what you should use. We did not have enough time to write more about it in this book, but luckily Faban's web site has excellent documentation.

ab: The Apache benchmark tool

The ab tool takes a single URL and requests it repeatedly in as many separate threads as you specify, with a variety of command-line arguments to control the number of times to fetch it, the maximum thread concurrency, and so on. A couple of nice features include the optional periodic printing of progress reports and the comprehensive report it issues.

Example 4-1 shows a run of ab. We instructed it to fetch the URL 100,000 times with a maximum concurrency of 149 threads. We chose these numbers carefully. The smaller the number of HTTP requests the test client makes during the benchmark, the less accurate the results are likely to be, because the Java VM's garbage collection pauses make up a higher percentage of the total testing time. The higher the total number of HTTP requests you run, the less significant the garbage collection pauses become and the more likely the benchmark results will show how Tomcat performs overall. You should benchmark by running a minimum of 100,000 HTTP requests. Also, you may configure the test client to spawn as many client threads as you would like, but you will not get helpful results if you set it higher than the maxThreads you set for your Connector in Tomcat's conf/server.xml file. By default, maxThreads is set to 150. If you set your tester to exceed this number and make more requests in more threads than Tomcat has threads to receive and process them, performance will suffer because some client request threads will always be waiting. It is best to stay just under the number of your Connector's maxThreads, such as using 149 client threads.
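For reference, the maxThreads setting lives on the Connector element in conf/server.xml; the fragment below mirrors Tomcat's stock configuration, with maxThreads shown explicitly at its default value:

```xml
<!-- conf/server.xml (fragment): maxThreads caps the request-processing
     thread pool for this Connector; Tomcat's default is 150. -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxThreads="150" />
```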

Example 4-1. Benchmarking with ab

$ ab -k -n 100000 -c 149 http://tomcathost:8080/
This is ApacheBench, Version 2.0.40-dev <$Revision$> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd,
Copyright 1997-2005 The Apache Software Foundation,

Benchmarking tomcathost (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Finished 100000 requests

Server Software:           Apache-Coyote/1.1
Server Hostname:           tomcathost
Server Port:               8080

Document Path:             /
Document Length:           8132 bytes



Concurrency Level:         149
Time taken for tests:      19.335590 seconds
Complete requests:         100000
Failed requests:           0
Write errors:              0
Keep-Alive requests:       79058
Total transferred:         830777305 bytes
HTML transferred:          813574072 bytes
Requests per second:       5171.81 [#/sec] (mean)
Time per request:          28.810 [ms] (mean)
Time per request:          0.193 [ms] (mean, across all concurrent requests)
Transfer rate:             41959.15 [Kbytes/sec] received


Connection Times (ms)

              min  mean[+/-sd] median   max
Percentage of the requests served within a certain time (ms)
  50%     29
  66%     30
  75%     31
  80%     45
  90%     47
  95%     48
  98%     48
  99%     49
 100%     65 (longest request)

If you leave off the -k in the ab command line, ab will not use keep-alive connections to Tomcat, which is less efficient because it must open a new TCP socket to Tomcat for each HTTP request. The result is that fewer requests per second will be handled, and the throughput from Tomcat to the client (ab) will be smaller (see Example 4-2).

Example 4-2. Benchmarking with ab with keep-alive connections disabled

$ ab -n 100000 -c 149 http://tomcathost:8080/
This is ApacheBench, Version 2.0.40-dev <$Revision$> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd,
Copyright 1997-2005 The Apache Software Foundation,

Benchmarking tomcathost (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Finished 100000 requests


Server Software:        Apache-Coyote/1.1
Server Hostname:        tomcathost
Server Port:            8080

Document Path:          /
Document Length:        8132 bytes


Concurrency Level:      149
Time taken for tests:   28.201570 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      831062400 bytes
HTML transferred:       814240896 bytes
Requests per second:    3545.90 [#/sec] (mean)
Time per request:       42.020 [ms] (mean)
Time per request:       0.282 [ms] (mean, across all concurrent requests)
Transfer rate:          28777.97 [Kbytes/sec] received

Connection Times (ms)

              min  mean[+/-sd] median   max
Connect:        0   18   11.3     19    70
Processing:     3   22   11.3     22    73
Waiting:        0   13    8.4     14    59
Total:         40   41    2.4     41    73

Percentage of the requests served within a certain time (ms)
  50%     41
  66%     41
  75%     42
  80%     42
  90%     43
  95%     44
  98%     46
  99%     55
 100%     73 (longest request)
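Putting the two ab runs side by side, keep-alive connections bought roughly 45% more requests per second; the integer arithmetic below uses the rounded mean requests-per-second figures from the two reports:

```shell
# Percentage improvement in requests/sec from keep-alive connections,
# using the rounded mean figures reported by ab in Examples 4-1 and 4-2.
WITH_KEEPALIVE=5171      # requests/sec with -k
WITHOUT_KEEPALIVE=3545   # requests/sec without -k
GAIN=$(( (WITH_KEEPALIVE - WITHOUT_KEEPALIVE) * 100 / WITHOUT_KEEPALIVE ))
echo "keep-alive gained about ${GAIN}% requests/sec"
```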

Siege

To use siege to perform exactly the same benchmark, the command line is similar, only you must give it the number of requests you want it to make per thread. If you're trying to benchmark 100,000 HTTP requests with 149 concurrent clients, you must tell siege that each of the 149 clients needs to make 671 requests (as 671 requests times 149 clients approximately equals 100,000 total requests). Give siege the -b switch, telling siege that you're running a benchmark test. This makes siege's client threads not wait between requests, just like ab. By default, siege waits a configurable amount of time between requests, but in benchmark mode it does not wait. Example 4-3 shows the siege command line and the results from the benchmark test.

Example 4-3. Benchmarking with siege with keep-alive connections disabled

$ siege -b -r 671 -c 149 tomcathost:8080
** siege 2.65
** Preparing 149 concurrent users for battle.
The server is now under siege..      done.
Transactions:                  99979 hits
Availability:                 100.00 %
Elapsed time:                  46.61 secs
Data transferred:             775.37 MB
Response time:                  0.05 secs
Transaction rate:            2145.01 trans/sec
Throughput:                    16.64 MB/sec
Concurrency:                  100.62
Successful transactions:       99979
Failed transactions:               0
Longest transaction:           23.02
Shortest transaction:           0.00
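The 671-requests-per-client figure follows from integer division of the total-request target by the client count; note that 149 times 671 is 99,979, which matches the transaction count siege reported above:

```shell
# Derive siege's -r (repetitions per client) from a total-request target
# and a client count, using floor division as the book's numbers imply.
TOTAL=100000
CLIENTS=149
REPS=$(( TOTAL / CLIENTS ))
echo "-r $REPS -c $CLIENTS covers $(( REPS * CLIENTS )) requests"
```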

Some interesting things to note about siege's results are the following:

  • The number of transactions per second completed by siege is significantly lower than that of ab. (This is with keep-alive connections turned off in both benchmark clients,* and all of the other settings the same.) The likely explanation is that siege isn't as efficient a client as ab, which suggests that siege's benchmark results are not as accurate as those of ab.
  • The throughput reported by siege is significantly lower than that reported by ab, probably because siege could not execute as many requests per second as ab.
  • The total data transferred reported by siege is approximately equal to the total data transferred with ab.
  • ab completed the benchmark in slightly more than half the time siege took; however, we do not know how much of that time siege spent between requests in each thread. It may simply be that siege's request loop does not move on to the next request as quickly.

For obtaining the best benchmarking results, we recommend you use ab instead of siege. However, for other kinds of testing, when you must closely simulate web traffic from human users, ab is not suitable because it offers no feature to configure an amount of time to wait between requests. Siege does offer this feature, in the form of waiting a random amount of time between requests. In addition, siege can request random URLs from a prechosen list of your choice. Because of this, siege can be used to simulate human user load, whereas ab cannot. See the siege manual page (by running "man siege") for more information about siege's features.
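A sketch of that human-traffic usage follows; the hostname and paths are placeholders for your own application's URLs, -f reads the URL list, -i ("Internet" mode) picks from it at random, and -d adds a random delay of up to the given number of seconds between requests. The siege invocation is guarded and made failure-tolerant since the placeholder host will not resolve outside a real test lab:

```shell
# Build a small URL list for siege to sample from (placeholder URLs).
cat > urls.txt <<'EOF'
http://tomcathost:8080/
http://tomcathost:8080/docs/
http://tomcathost:8080/examples/
EOF

# "Internet" mode (-i) with random think-time (-d) approximates human
# browsing rather than a raw benchmark; -t bounds the run to 30 seconds.
if command -v siege >/dev/null 2>&1; then
  siege -i -f urls.txt -d 5 -c 25 -t 30S || true
fi
```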

Please check back next week for the continuation of this article.

