3

I have a Medium server on EC2. I don't know that much about Apache or Tomcat - they are up and running, but beyond that I don't have advanced knowledge of how to tinker with them. I know that I can set the min/max JVM heap size for Tomcat, and that I can set how many threads Apache can fork off, but I don't know what "reasonable" values for these parameters are.

  1. I realize the answer is subjective, but are there common settings I should start off with?
  2. Is there a simple way to load/performance test my application?

Thanks.

EDIT:

The system is an EC2 Medium:

High-CPU Medium Instance:

  • 1.7 GB of memory
  • 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each)
  • 350 GB of instance storage
  • 32-bit platform
  • I/O Performance: Moderate
  • API name: c1.medium

The only services I am running are Apache and Tomcat. Nothing else is on the server.

skaz
  • 135
  • 7
  • What are the specs of your system and what percentage of resources would you like to be available to Tomcat (is the server solely running Tomcat or multiple services)? – Michael Aug 26 '11 at 08:15
  • @Mikaveli I have updated my question with answers to yours. – skaz Aug 26 '11 at 13:44

3 Answers

4

Apache

Check out Apache's own documentation; it goes into more detail than I could here:

http://httpd.apache.org/docs/2.0/misc/perf-tuning.html
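
For a rough sense of what the prefork MPM settings look like, here's a sketch with illustrative values only (not tuned recommendations - the right MaxClients depends on how much memory each httpd process uses and how much RAM is left after Tomcat's heap):

    # httpd.conf (prefork MPM) - illustrative starting values only
    <IfModule prefork.c>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients          150
        MaxRequestsPerChild   0
    </IfModule>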

JVM

Set your JVM's Xmx to no more than roughly 70% of the total free physical RAM. The reason for this is that the perm gen and the JVM's own libraries take up additional space too - the aim is that the total process memory never spills into virtual / swap memory. If you set it too high, you'll start seeing issues like "GC overhead limit exceeded".

Your GC algorithm can have a big effect on performance - make sure you're using some form of parallel collector and not the serial 'pause, mark and sweep' collector. The JVM usually does this for you automatically in -server mode.

Use a tool such as JConsole or VisualVM to inspect the GC and how much heap you're actually using, and adjust downwards to suit - too large a heap can increase garbage collection times.
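
Pulling those together, a minimal sketch of what this could look like, assuming a HotSpot JVM and Tomcat 6+ (where bin/setenv.sh is picked up by catalina.sh); the figures are illustrative for a 1.7 GB instance that's also running Apache:

    # $CATALINA_HOME/bin/setenv.sh - create it if it doesn't exist
    # ~1 GB heap leaves headroom for perm gen, thread stacks, Apache and the OS
    CATALINA_OPTS="-server -Xms1024m -Xmx1024m \
      -XX:MaxPermSize=128m \
      -XX:+UseParallelGC -XX:+UseParallelOldGC"
    export CATALINA_OPTS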

Tomcat

As for HTTP connector threads: on a single instance of Tomcat, depending on your application, you can usually up the thread count to around 600 before encountering issues. However, there's often no need for it to be quite that high - you'll just be putting more pressure on your CPU and memory.

Once you're happy with maxThreads, set minSpareThreads and maxSpareThreads relative to that, upping the values if you know you're going to get hit with spikes in new connections.

Next up is acceptCount. This is the maximum number of queued connections - connections that spill over this limit once the connector threads are used up will receive a "connection refused".

As a minor tweak, you can set enableLookups (which allows DNS hostname lookups) to false. When enabled, it (slightly) adversely affects performance.
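
For reference, here's roughly how those attributes hang together on the default HTTP connector in conf/server.xml. The numbers are only starting points, and maxSpareThreads applies to older Tomcat versions (newer ones use a shared Executor instead):

    <!-- conf/server.xml - illustrative starting values only -->
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               maxThreads="300"
               minSpareThreads="25"
               maxSpareThreads="75"
               acceptCount="100"
               enableLookups="false" />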

Also, check out the Tomcat Native Library; it uses native code to boost performance in certain operations (like file I/O).

Load Testing

For basic load / performance testing, check out Apache JMeter:

http://jakarta.apache.org/jmeter/

We use it to test basic page-load performance, with JMeter test scripts issuing hundreds of concurrent requests. You do need a fairly hefty server to run it on, though (not the same machine that's running Apache HTTPD and Tomcat).
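
If you do run it from a separate machine, JMeter's non-GUI mode is much lighter on resources; something along these lines (file names are just placeholders):

    # run a test plan headless and write the results to a file
    jmeter -n -t loadtest.jmx -l results.jtl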

Michael
  • 325
  • 5
  • 13
  • Wow! What a great answer! I am playing with JMeter right now on my local machine. I have a (I'm sure) basic question if you have any more time :) http://stackoverflow.com/questions/7208327/jmeter-cookie-manager – skaz Aug 26 '11 at 17:41
1

For the heap, I would set ms=mx=1 GB initially, or 1.5 GB if your app is memory hungry. I've never seen any point (or gain) in having a variable heap size in a server environment.
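
In Tomcat terms that's something like the following, assuming you pass options via CATALINA_OPTS (adjust the figure to your app):

    # equal min and max heap, so the heap never resizes at runtime
    CATALINA_OPTS="-Xms1024m -Xmx1024m"
    export CATALINA_OPTS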

Thread pool sizing is a chapter all of its own. I'm talking about Tomcat here, primarily.

If your application has a lot of synchronized sections (shared caches, external resources/integrations with only serial access, and so on), worker threads have diminishing returns: the more of them there are, the more time they will spend just waiting for each other. With your specs, and knowing nothing about your app, I'd say 50 is a starting point for thread pool sizing. You'll need to run some performance benchmarks to tweak this properly. Use JMeter, for instance, and create a test script that emulates one or a few of the primary use-cases on your site. Use 2 or 3 temporary EC2 instances as load generators (you'll only need them for a short period) where you run the jmeter-server application.

Run test scenarios with permutations of, for example, 60, 120, 180 and 240 JMeter request threads and 30, 50, 70 and 90 Tomcat worker threads. Compare response times and CPU and memory usage on your server. For basic CPU/memory information, you can use the standard JConsole or VisualVM output from your JVM. You can also run your Tomcat JVM with verbose garbage collection (GC logging) and study the memory and GC behavior with something like Tagtraum's GCViewer.
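
A rough sketch of that workflow, with host and file names as placeholders (the controlling JMeter instance drives the remote jmeter-server processes over RMI, so the relevant ports need to be open between the instances):

    # on each temporary EC2 load-generator instance
    jmeter-server

    # on the controlling machine: run the plan headless against the remote generators
    jmeter -n -t usecase.jmx -R loadgen1,loadgen2 -l results.jtl

    # on the Tomcat server, verbose GC logging can be enabled with HotSpot flags such as
    # -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log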

pap
  • 121
  • 2
  • 1
    If he sets the Xmx anywhere near 1.5 GiB (especially with Apache HTTPD running) he's likely to run the process into swap and get issues with slow GC. – Michael Aug 26 '11 at 14:46
0

A reasonable starting value is an initial heap size (Xms) of roughly 25% of the total heap (Xmx).
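
For example (purely illustrative figures):

    # initial heap at roughly 25% of the maximum
    CATALINA_OPTS="-Xms256m -Xmx1024m"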

I'd suggest you then profile your app based on the peak load, observe memory utilization using LambdaProbe or similar, and see what you need to change.

shinynewbike
  • 101
  • 3
  • Thanks for the tip into LambdaProbe. Is this something that I should really run on my production server? – skaz Aug 26 '11 at 13:42
  • Unless you're having issues with slow heap memory allocation, it's often better to let the JVM decide its own Xms value. – Michael Aug 26 '11 at 14:47