Benchmarking Tomcat Performance

In this second part of a five-part article series on Tomcat performance tuning, you will learn how to do benchmarks so you can see what effects your changes produced. It is excerpted from chapter four of Tomcat: The Definitive Guide, Second Edition, written by Jason Brittain and Ian F. Darwin (O’Reilly; ISBN: 0596101066). Copyright © 2008 O’Reilly Media, Inc. All rights reserved. Used with permission from the publisher. Available from booksellers or direct from O’Reilly Media.

Apache Jakarta JMeter

JMeter can be run in either graphical mode or in text-only mode. You may run JMeter test plans in either mode, but you must create the test plans in graphical mode. The test plans are stored as XML configuration documents. If you need to change only a single numeric or string value in the configuration of a test plan, you can probably change it with a text editor, but it’s a good idea to edit them inside the graphical JMeter application for validity’s sake.
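For example, a thread group's loop count is stored in the saved test plan as a simple property element, something like the following fragment (the element and property names reflect JMeter's .jmx format; the value is just an example):

```xml
<stringProp name="LoopController.loops">671</stringProp>
```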

Before running a JMeter benchmark test against Tomcat, make sure that you start JMeter’s JVM with enough heap memory so that it doesn’t slow down while it performs its own garbage collection in the middle of the benchmark. This is especially important if you are doing benchmark testing in graphical mode. In the bin/jmeter startup script, there is a configuration setting for the heap memory size that looks like this:

  # This is the base heap size -- you may increase or decrease it to fit your
  # system's memory availability:
  HEAP="-Xms256m -Xmx256m"

JMeter will make use of as much heap memory as you can give it; the more it has, the less often it needs to perform garbage collection. If you have enough memory in the machine on which you’re running JMeter, you should change both of the 256 numbers to something higher, such as 512. It is important to do this first because this setting’s default could skew your benchmark test results.
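For example, after editing, the HEAP line in bin/jmeter might read as follows (512 MB is only a suggestion; size it to the memory your machine can spare):

```shell
# Larger fixed heap for benchmarking; adjust to your system's available RAM
HEAP="-Xms512m -Xmx512m"
```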

To create a test plan for the benchmark, first run JMeter in graphical mode, like this:

  $ bin/jmeter

JMeter’s screen is laid out as a tree view on the left and a selection details panel on the right. Select something in the tree view and you can see the details of that item in the details panel on the right. To run any tests, you must assemble and configure the proper objects in the tree, and then JMeter can run the test and report the results.

To set up a benchmark test like the one we did above with both ab  and siege , do this:

  1. In the tree view, right-click on the Test Plan tree node and select Add -> Thread Group.
  2. In the Thread Group details panel, change the Number of Threads (users) to 149, change the Ramp-Up Period (in seconds) to 0, and the Loop Count to 671.
  3. Right-click on the Thread Group tree node and select Add -> Sampler -> HTTP Request.
  4. In the HTTP Request details panel, change the Web Server settings to point to your Tomcat server and its port number, and change the Path under the HTTP Request settings to the URI in your Tomcat installation that you would like to benchmark, for instance /.
  5. Right-click on the Thread Group tree node again and select Add -> Post Processors -> Generate Summary Results.
  6. In the top pull-down menu, select File -> Save Test Plan As, and type in the name of the test plan you wish to save. JMeter’s test plan file extension is .jmx, which has an unfortunate similarity to the unrelated Java Management eXtensions (JMX).
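The Thread Group numbers in step 2 determine the total number of samples the test sends: each of the 149 threads runs the HTTP Request sampler 671 times. A quick shell check confirms the total that the summary results will report:

```shell
# 149 concurrent users, each looping 671 times
total=$((149 * 671))
echo "$total"    # 99979 samples
```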

Figure 4-1 shows the JMeter GUI with the test plan assembled and ready to run. The tree view is on the left, and the detail panel is on the right.

Once you are done building and saving your test plan, you are ready to run the benchmark. Choose File -> Exit from the top pull-down menu to exit from the graphical JMeter application. Then, run JMeter in text-only mode on the command line to perform the benchmark, like this:

  $ bin/jmeter -n -t tc-home-page-benchmark.jmx
  Created the tree successfully
  Starting the test
  Generate Summary Results = 99979 in 71.0s = 1408.8/s Avg:   38 Min:    0 Max:
  25445 Err:     0 (0.00%)
  Tidying up …
  … end of run

Notice that the requests per second reported by JMeter (an average of 1408.8 requests per second) is significantly lower than that reported by both ab and siege, for the same hardware, the same version of Tomcat, and the same benchmark. This demonstrates that JMeter’s HTTP client is slower than that of ab and siege. You can use JMeter to find out whether a change to your webapp, your Tomcat installation, or your JVM accelerates or slows the response times of web pages; however, you cannot use JMeter to determine the maximum number of requests per second that the server can successfully serve, because JMeter’s HTTP client appears to be slower than Tomcat’s server code.

Figure 4-1.  Apache JMeter GUI showing the fully assembled test plan
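As a sanity check, the reported average rate can be recomputed from the summary line’s sample count and elapsed time; the small difference from 1408.8 is due to JMeter rounding the elapsed seconds in the summary output:

```shell
# Samples divided by elapsed seconds from the Generate Summary Results line
awk 'BEGIN { printf "%.0f\n", 99979 / 71.0 }'    # prints 1408
```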

You may also graph the test results in JMeter. To do this, run JMeter in graphical mode again, then:

  1. Open the test plan you created earlier.
  2. In the tree view, select the Generate Summary Results tree node and delete it (one easy way to do this is to hit the Delete key on your keyboard once).
  3. Select the Thread Group tree node, then right-click on it and select Add -> Listener -> Graph Results.
  4. Save your test plan under a new name, this time for graphical viewing of test results.
  5. Select the Graph Results tree node.

Now, you’re ready to rerun your test and watch as JMeter graphs the results in real time.

Again, make sure that you give the JMeter JVM enough heap memory so that it does not run its own garbage collector often during the test. Also, keep in mind that the Java VM must spend time graphing while the test is running, which will decrease the accuracy of the test results. How much the accuracy decreases depends on how fast the computer running JMeter is (the faster the better). But if you just want to watch results in real time as a test runs, graphing is a great way to observe.

When you’re ready to run the test, you can either select Run -> Start from the top pull-down menu, or you can hit Ctrl-R. The benchmark test will start again, but you will see the results graph being drawn as the responses are collected by JMeter. Figure 4-2 shows the JMeter GUI graphing the test results.

Figure 4-2.  Apache JMeter graphing test results

You can either let the test run to completion or you can stop the test by hitting Ctrl-. (hold down the Control key and hit the period key). If you stop the test early, it will likely take JMeter some seconds to stop and reap all of the threads in the request Thread Group . To erase the graph before restarting the test, hit Ctrl-E. You can also erase the graph in the middle of a running test, and the test will continue on, plotting the graph from that sample onward.

Using JMeter to graph the results gives you a window into the running test so you can watch it, fix any problems with the test, and tailor it to your needs before running it on the command line. Once you think you have the test set up just right, save a test plan that does not have a Graph Results tree node but does have a Generate Summary Results tree node, so that you can run it on the command line; then save the test plan again under a new name that conveys the kind of test it is and that it is configured to be run from the command line. Use the results you obtain on the command line as the authoritative results. Again, the ab benchmark tool gives you more accurate benchmark results but does not offer as many features as JMeter.

JMeter also has many more features that may help you test your webapps in numerous ways. See JMeter’s online documentation for more information about this great test tool.

Web Server Performance Comparison

In the previous sections, you read about some HTTP benchmark clients. Now we show a useful example that demonstrates a benchmark procedure from start to finish and also yields information that can help you configure Tomcat so that it performs better for your web application.

We benchmarked all of Tomcat’s web server implementations, plus Apache httpd standalone, plus Apache httpd ’s modules that connect to Tomcat to see how fast each configuration is at serving static content. For example, is Apache httpd  faster than Tomcat standalone? Which Tomcat standalone web server connector implementation is the fastest? Which AJP server connector implementation is the fastest? How much slower or faster is each? We set out to answer these questions by benchmarking different configurations, at least for one hardware, OS, and Java combination.

Because benchmark results are highly dependent on the hardware they were run on and on the versions of all software used at the time, the results can and do change with time: new hardware and new versions of each software package perform differently, and the performance characteristics of a different combination of hardware and software change with them. Also, the configuration settings used in the benchmark affect the results significantly. By the time you read this, the results below will likely be out of date. And even if you read this shortly after it is published, your hardware and software combination is not likely to be exactly the same as ours. The only way you can really know how your installation of Tomcat and/or Apache httpd will perform on your machine is to benchmark it yourself, following a similar benchmark test procedure.

Tomcat connectors and Apache httpd connector modules

Tomcat offers implementations of three different server designs for serving HTTP and implementations of the same three designs for serving AJP:

JIO (java.io)

This is Tomcat’s default connector implementation, unless the APR Connector ’s libtcnative library is found at Tomcat startup time. It is also known as “Coyote.” It is a pure Java TCP sockets server implementation that uses the core Java network classes. It is a fully blocking implementation of both HTTP and AJP. Being written in pure Java, it is binary portable to all operating systems that fully support Java. Many people believe this implementation to be slower than Apache httpd  mainly because it is written in Java. The assumption there is that Java is always slower than compiled C. Is it? We’ll find out.

APR (Apache Portable Runtime)

This is Tomcat’s default connector implementation if you install Tomcat on Windows via the NSIS installer, but it is not the default connector implementation for most other stock installations of Tomcat. It is implemented as some Java classes that include a JNI wrapper around a small C library named libtcnative, which in turn depends on the Apache Portable Runtime (APR) library. The Apache httpd web server is also implemented in C and uses APR for its network communications. The goals of this alternate implementation include offering a server implementation that uses the same open source C code as Apache httpd to outperform the JIO connector, and offering performance that is at least on par with Apache httpd. One drawback is that because it is mainly implemented in C, a single binary release of this Connector cannot run on all platforms the way the JIO connector can. This means that Tomcat administrators need to build it, so a development environment is necessary, and there could be build problems. But the authors of this Connector justify the extra setup effort by claiming that Tomcat’s web performance is fastest with this Connector implementation. We’ll see for ourselves by benchmarking it.

NIO (java.nio)

This is an alternate Connector implementation written in pure Java that uses the java.nio core Java network classes that offer nonblocking TCP socket features. The main goal of this Connector design is to offer Tomcat administrators a Connector implementation that performs better than the JIO Connector by using fewer threads by implementing parts of the Connector in a nonblocking fashion. The fact that the JIO Connector blocks on reads and writes means that if the administrator configures it to handle 400 concurrent connections, the JIO Connector must spawn 400 Java threads. The NIO Connector , on the other hand, needs only one thread to parse the requests on many connections, but then each request that gets routed to a servlet must run in its own thread (a limitation mandated by the Java Servlet Specification). Since part of the request handling is done in nonblocking Java code, the time it takes to handle that part of the request is time that a Java thread does not need to be in use, which means a smaller thread pool can be used to handle the same number of concurrent requests. A smaller thread pool usually means lower CPU utilization, which in turn usually means better performance. The theory behind why this would be faster builds on a tall stack of assumptions that may or may not apply to anyone’s own webapp and traffic load. For some, the NIO Connector could perform better, and for others, it could perform worse, as is the case for the other Connector designs.

Alongside these Tomcat Connectors, we benchmarked Apache httpd in both prefork and worker Multi-Processing Module (MPM) build configurations, plus configurations of httpd prefork and worker where the benchmarked requests were sent from Apache httpd to Tomcat via an Apache httpd connector module. We benchmarked the following Apache httpd connector modules:


mod_jk

This module is developed under the umbrella of the Apache Tomcat project. It began years before Apache httpd’s mod_proxy included support for the AJP protocol (Tomcat’s AJP Connectors implement the server side of the protocol). This is an Apache httpd module that implements the client end of the AJP protocol. The AJP protocol is a TCP packet-based binary protocol with the goal of relaying the essentials of HTTP requests to another server software instance significantly faster than could be done with HTTP itself. The premise is that HTTP is very plain-text oriented, and thus requires slower, more complex parsers on the server side of the connection, and that if we instead implement a binary protocol that relays the already-parsed text strings of the requests, the server can respond significantly faster, and the network communications overhead can be minimized. At least, that’s the theory. We’ll see how significant the difference is. As of the time of this writing, most Apache httpd users who add Tomcat to their web servers to support servlets and/or JSP build and use mod_jk, mainly because they believe that it is significantly faster than mod_proxy, or because they do not realize that mod_proxy is an easier alternative, or because someone suggested mod_jk to them. We set out to determine whether building, installing, configuring, and maintaining mod_jk was worth the resulting performance.


mod_proxy_ajp

This is mod_proxy’s AJP protocol connector support module. It connects with Tomcat via TCP to Tomcat’s AJP server port, sends requests through to Tomcat, waits for Tomcat’s responses, and then Apache httpd forwards the responses to the web client(s). The requests go through Apache httpd to Tomcat and back, and the protocol used between Apache httpd and Tomcat is the AJP protocol, just as it is with mod_jk. This connector became part of Apache httpd itself as of httpd version 2.2 and is already built into the httpd that comes with most operating systems (or it is prebuilt as a loadable httpd module). No extra compilation or installation is usually necessary to use it, just configuration of Apache httpd. Also, this module is a derivative of mod_jk, so mod_proxy_ajp’s code and features are very similar to those of mod_jk.


mod_proxy_http

This is mod_proxy’s HTTP protocol connector support module. Like mod_proxy_ajp, it connects with Tomcat via TCP, but this time it connects to Tomcat’s HTTP (web) server port. A simple way to think about how it works: the web client makes a request to Apache httpd’s web server, and then httpd makes that same request on Tomcat’s web server, Tomcat responds, and httpd forwards the response to the web client. All communication between Apache httpd and Tomcat is done via HTTP when using this module. This connector module is also part of Apache httpd, and it usually comes built into the httpd binaries found on most operating systems. It has been part of Apache httpd for a very long time, so it is available to you regardless of which version of Apache httpd you run.

Benchmarked hardware and software configurations

We chose two different kinds of server hardware to benchmark running the server software. Here are descriptions of the two types of computers on which we ran the benchmarks:

Desktop: Dual Intel Xeon 64 2.8GHz CPU, 4G RAM, SATA 160G HD 7200RPM
   This was a tower machine with two Intel 64-bit CPUs; each CPU was single core and hyperthreaded.

Laptop: AMD Turion64 ML-40 2.2GHz CPU, 2G RAM, IDE 80G HD 5400RPM
   This was a laptop that has a single 64-bit AMD processor (single core).

Because one of the machines is a desktop machine and the other is a laptop, the results of this benchmark also show the difference in static file serving capability between a single processor laptop and a dual processor desktop. We are not attempting to match up the two different CPU models in terms of processing power similarity, but instead we benchmarked a typical dual CPU desktop machine versus a typical single processor laptop, both new (retail-wise) around the time of the benchmark. Also, both machines have simple ext3 hard disk partitions on the hard disks, so no LVM or RAID configurations were used on either machine for these benchmarks.

Both of these machines are x86_64 architecture machines, but their CPUs were designed and manufactured by different companies. Also, both of these machines came equipped with gigabit Ethernet, and we benchmarked them from another fast machine that was also equipped with gigabit Ethernet, over a network switch that supported gigabit Ethernet.

We chose to use the ApacheBench (ab) benchmark client. We wanted to make sure that the client supported HTTP 1.1 keep-alive connections, because that’s what we wanted to benchmark, and that the client was fast enough to give us the most accurate results. Yes, we are aware of Scott Oaks’s blog article about ab (read it at http://ab_considered_h.html). While we agree with Mr. Oaks on his analysis of how ab works, we carefully monitored the benchmark client’s CPU utilization and ensured that ab never saturated the CPU it was using during the benchmarks we ran. We also turned up ab’s concurrency so that more than one HTTP request could be active at a time. The fact that a single ab process can use exactly one CPU is okay, because the operating system performs context switching on the CPU faster than the network can send and receive request and response packets; per CPU, everything is a single stream of CPU instructions on the hardware anyway. With the hardware we used for our benchmarks, the web server machine did not have enough CPU cores to saturate ab’s CPU, so we really did benchmark the performance of the web server itself.

We’re testing Tomcat version 6.0.1 (this was the latest release available when we began benchmarking—we expect newer versions to be faster, but you never know until you benchmark it) running on Sun Java 1.6.0 GA release for x86_64, Apache version 2.2.3, mod_jk from Tomcat Connectors version 1.2.20, and the APR connector (libtcnative ) version 1.1.6. At the time of the benchmark, these were the newest versions available—sorry we cannot benchmark newer versions for this book, but the great thing about well-detailed benchmarks is that they give you enough information to reproduce the test yourself. The operating system on both machines was Fedora Core 6 Linux x86_64 with updates applied via yum . The kernel version was

Tomcat’s JVM startup switch settings were:

  -Xms384M -Xmx384M -Djava.awt.headless=true

Here is our Tomcat configuration for the tests: stock conf/web.xml; stock conf/server.xml, except that the access logger was not enabled (no logging per request), and these connector configs, which were enabled one at a time for the different tests:

  <!-- The stock HTTP JIO connector. -->
  <Connector port="8080" protocol="HTTP/1.1"
             maxThreads="150" connectionTimeout="20000"
             redirectPort="8443" />

  <!-- The HTTP APR connector. -->
  <Connector port="8080"
             protocol="org.apache.coyote.http11.Http11AprProtocol"
             enableLookups="false" redirectPort="8443" />

  <!-- The HTTP NIO connector. -->
  <Connector port="8080"
             maxThreads="150" connectionTimeout="20000"
             protocol="org.apache.coyote.http11.Http11NioProtocol" />

  <!-- The AJP JIO/APR connector, switched by setting LD_LIBRARY_PATH. -->
  <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

  <!-- The AJP NIO connector. -->
  <Connector protocol="AJP/1.3" port="0" />

The APR code was enabled by using the HTTP APR connector configuration shown, plus setting and exporting LD_LIBRARY_PATH to a directory containing libtcnative  in the Tomcat JVM process’s environment, and then restarting Tomcat.
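For example, the environment could be set in the shell that starts Tomcat, along these lines (the library directory shown is an assumed install location, not necessarily yours):

```shell
# Assumed libtcnative install directory; substitute your own build's lib path
LD_LIBRARY_PATH=/usr/local/apr/lib
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"    # verify before starting Tomcat from this shell
```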

Building the APR Connector

We built the APR connector like this:

  # CFLAGS="-O3 -falign-functions=0 -march=athlon64 -mfpmath=sse -mmmx -msse -msse2 \
    -msse3 -m3dnow -mtune=athlon64" ./configure --with-apr=/usr/bin/apr-1-config
  # make && make install

We used the same CFLAGS when building Apache httpd  and mod_jk . Here’s how we built and installed mod_jk :

  # cd tomcat-connectors-1.2.20-src/native
  # CFLAGS="-O3 -falign-functions=0 -march=athlon64 -mfpmath=sse -mmmx -msse -msse2 \
    -msse3 -m3dnow -mtune=athlon64" ./configure --with-apxs=/opt/httpd/bin/apxs
  [lots of configuration output removed]
  # make && make install

This assumes that the root directory of the Apache httpd  we built is /opt/httpd .

We built the APR connector, httpd , and mod_jk  with GCC 4.1.1:

  # gcc --version
  gcc (GCC) 4.1.1 20061011 (Red Hat 4.1.1-30)
  Copyright (C) 2006 Free Software Foundation, Inc.
  This is free software; see the source for copying conditions.  There is NO
  warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

We downloaded Apache httpd version 2.2.3 and built it two different ways, benchmarking each of the resulting binaries. We built it for the prefork MPM and for the worker MPM; these are different multiprocess and multithreading models that the server can use. Here are the settings we used for the prefork and worker MPMs:

  # prefork MPM
  <IfModule prefork.c>
    StartServers          8
    MinSpareServers       5
    MaxSpareServers      20
    ServerLimit         256
    MaxClients          256
    MaxRequestsPerChild 4000
  </IfModule>

  # worker MPM
  <IfModule worker.c>
    StartServers          3
    MaxClients          192
    MinSpareThreads       1
    MaxSpareThreads      64
    ThreadsPerChild      64
    MaxRequestsPerChild   0
  </IfModule>

We disabled Apache httpd’s common access log so that it would not need to log anything for each request (just as we configured Tomcat). And we turned on Apache httpd’s KeepAlive configuration option:

  KeepAlive On
  MaxKeepAliveRequests 100
  KeepAliveTimeout 5

We enabled mod_proxy in one of two ways at a time. First, for proxying via HTTP:

  ProxyPass        /tc
  ProxyPassReverse /tc

Or, for proxying via AJP:

  ProxyPass        /tc ajp://
  ProxyPassReverse /tc ajp://
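For illustration, a complete pair of AJP proxy directives, with a hypothetical Tomcat host and AJP port filled in, would look something like this:

```apacheconf
# Host and port below are hypothetical; use your Tomcat server's AJP address
ProxyPass        /tc ajp://localhost:8009/tc
ProxyPassReverse /tc ajp://localhost:8009/tc
```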

And we configured mod_jk by adding this to httpd.conf:

  LoadModule jk_module /opt/httpd/modules/
  JkWorkersFile /opt/httpd/conf/
  JkLogFile /opt/httpd/logs/mod_jk.log
  JkLogLevel info
  JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
  JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
  JkRequestLogFormat "%w %V %T"
  JkMount /tc/* worker1

Plus, we created a file for mod_jk at the path we specified in the httpd.conf file:
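As a sketch, a minimal workers.properties defining the worker1 that the JkMount directive references might look like this (the filename, host, and port shown are assumptions, not necessarily the values we used):

```properties
# Hypothetical worker definition for the worker1 named in JkMount
worker.list=worker1
worker.worker1.type=ajp13
worker.worker1.host=localhost
worker.worker1.port=8009
```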


Of course, we enabled only one Apache httpd connector module at a time in the configuration.

Please check back next week for the continuation of this article.
