
Tomcat Benchmark Procedure

In this third part of a five-part series focusing on Tomcat performance tuning, you will learn benchmarking procedures and some of the qualities of the application that you can benchmark. This article is excerpted from chapter four of Tomcat: The Definitive Guide, Second Edition, written by Jason Brittain and Ian F. Darwin (O'Reilly; ISBN: 0596101066). Copyright © 2008 O'Reilly Media, Inc. All rights reserved. Used with permission from the publisher. Available from booksellers or direct from O'Reilly Media.

TABLE OF CONTENTS:
  1. Tomcat Benchmark Procedure
  2. Benchmark results and summary
  3. Benchmark results and summary continued
  4. What else we could have benchmarked
By: O'Reilly Media
February 26, 2009

Benchmark procedure

We benchmarked two different types of static resource requests: small text files and 9k image files. For both of these types of benchmark tests, we set the server to be able to handle at least 150 concurrent client connections, and set the benchmark client to open no more than 149 concurrent connections so that it never attempted to use more concurrency than the server was configured to handle. We set the benchmark client to use HTTP keep-alive connections for all tests.

For the small text files benchmark, we’re testing the server’s ability to read the HTTP request and write the HTTP response where the response body is very small. This mainly tests the server’s ability to respond fast while handling many requests concurrently. We set the benchmark client to request the file 100,000 times, with a possible maximum of 149 concurrent connections. This is how we created the text file:

  $ echo 'Hello world.' > test.html

We copied this file into Tomcat’s ROOT webapp and also into Apache httpd’s document root directory.

Here is the ab command line showing the arguments we used for the small text file benchmark tests:

  $ ab -k -n 100000 -c 149 http://192.168.1.2/test.html

We changed the requested URL appropriately for each test so that it made requests that would benchmark the server we intended to test each time.
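Each `ab` run prints a summary to stdout, and the figures the procedure below records (requests per second, failed requests) can be pulled out of that summary with standard text tools. A minimal sketch, assuming `ab`'s usual summary format — the excerpt below is canned sample output, not a live run, and the numbers are illustrative:

```shell
#!/bin/sh
# Canned excerpt of ab's summary output; in a real run you would capture
# it with something like: out=$(ab -k -n 100000 -c 149 http://192.168.1.2/test.html)
out='Complete requests:      100000
Failed requests:        0
Requests per second:    4213.77 [#/sec] (mean)'

# Extract the requests-per-second figure and the failed-request count
rps=$(printf '%s\n' "$out" | awk '/^Requests per second:/ { print $4 }')
failed=$(printf '%s\n' "$out" | awk '/^Failed requests:/ { print $3 }')
echo "rps=$rps failed=$failed"
```

Parsing the output this way makes it easy to script repeated runs instead of reading each summary by hand.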

For the 9k image files benchmark, we’re testing the server’s ability to serve a larger amount of data in the response body to many clients concurrently. We set the benchmark client to request the file 20,000 times, with a possible maximum of 149 concurrent connections. We specified a lower total number of requests for this test because the size of the data was larger, so we adjusted the number of requests down to compensate somewhat, but still left it high to place a significant load on the server. This is how we created the image file:

  $ dd if=a-larger-image.jpg of=9k.jpg bs=1 count=9126

We chose a size of 9k because if we went much higher, both Tomcat and Apache httpd would easily saturate our 1 Mb Ethernet link between the client machine and the server machine. Again, we copied this file into Tomcat’s ROOT webapp and also into Apache httpd’s document root directory.
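As a back-of-the-envelope check on that choice, the bandwidth a given response rate consumes is roughly body size × responses per second × 8 bits. The 1000 responses/sec figure below is our own illustrative number, not a measured result, and header overhead is ignored:

```shell
#!/bin/sh
# Rough link usage: bytes_per_body * responses_per_sec * 8 bits / 1e6,
# ignoring HTTP and TCP/IP header overhead (illustrative figures only).
mbits=$(awk 'BEGIN { printf "%.1f", 9126 * 1000 * 8 / 1000000 }')
echo "At 1000 responses/sec of 9126-byte bodies: ${mbits} Mbit/s"
```

This kind of estimate shows why the body size has to be capped: past a point, the benchmark measures the network link rather than the server.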

Here is the ab command line showing the arguments we used for the 9k image file benchmark tests:

  $ ab -k -n 20000 -c 149 http://192.168.1.2/9k.jpg

For each invocation of ab, we obtained the benchmark results by following this procedure:

  1. Configure and restart the Apache httpd and/or Tomcat instances that are being tested.
  2. Make sure the server(s) do not log any startup errors. If they do, fix the problem before proceeding.
  3. Run one invocation of the ab command line to get the servers serving their first requests after the restart.
  4. Run the ab command line again as part of the benchmark.
  5. Make sure that ab reports that there were zero errors and zero non-2xx responses, when all requests are complete.
  6. Wait a few seconds between invocations of ab so that the servers go back to an idle state.
  7. Note the requests per second in the ab statistics.
  8. Go back to step 4 if the requests per second change significantly; otherwise, this iteration’s requests per second are the result of the benchmark. If the numbers continue to change significantly, give up after 10 iterations of ab, and record the last requests per second value as the benchmark result.
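The stopping rule in steps 4 through 8 can be sketched as a small shell function: treat a run as settled when its requests-per-second figure is within a few percent of the previous run's. The 3% tolerance and the sample readings are our own illustrative choices, not from the book:

```shell
#!/bin/sh
# Succeed (exit 0) when two requests-per-second readings differ by less
# than 3%, i.e. the benchmark has settled; the tolerance is arbitrary.
stable() {
  awk -v a="$1" -v b="$2" 'BEGIN {
    d = a - b; if (d < 0) d = -d
    if (d / b < 0.03) exit 0
    exit 1
  }'
}

stable 4180 4213 && echo "settled" || echo "keep iterating"   # prints "settled"
stable 3500 4213 && echo "settled" || echo "keep iterating"   # prints "keep iterating"
```

In a full script, this check would sit inside a loop that re-runs ab, compares the new reading against the previous one, and gives up after 10 iterations as the procedure describes.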

The idea here is that the servers will be inefficient for the first few invocations of ab, but the server software then arrives at a state where everything is well initialized. The Tomcat JVM begins to profile itself and natively compile the most heavily used code for that particular use of the program, which further speeds response time. It takes a few ab invocations for the servers to settle into their more optimal runtime state, and it is this state that we should be benchmarking—the state the servers would be in if they were serving for many hours or days, as production servers tend to do.
