
I have a Tomcat 8 instance running behind an Nginx reverse proxy. It serves a regular J2EE app which we update via Maven 3 and the cargo-maven2-plugin.
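
In case it matters, the plugin configuration is basically the standard Cargo remote-deployer setup; the hostname and credentials below are placeholders rather than our real values:

    <plugin>
      <groupId>org.codehaus.cargo</groupId>
      <artifactId>cargo-maven2-plugin</artifactId>
      <version>1.6.1</version>
      <configuration>
        <container>
          <containerId>tomcat8x</containerId>
          <type>remote</type>
        </container>
        <configuration>
          <type>runtime</type>
          <properties>
            <!-- placeholders; deployment goes through the Nginx proxy to Tomcat Manager -->
            <cargo.protocol>https</cargo.protocol>
            <cargo.hostname>server</cargo.hostname>
            <cargo.servlet.port>443</cargo.servlet.port>
            <cargo.remote.username>deployer</cargo.remote.username>
            <cargo.remote.password>*****</cargo.remote.password>
          </properties>
        </configuration>
        <deployer>
          <type>remote</type>
        </deployer>
      </configuration>
    </plugin>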

Usually this works fine, but every now and then Tomcat Manager (or Nginx, hard to tell which) fails with a 413 Request Entity Too Large. The max upload size is set to 150 MB and the WAR is 85 MB, so that shouldn't be the issue.
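
For reference, by "max upload size" I mean something along these lines, in the Nginx block that proxies to Tomcat and in the manager app's web.xml (the manager's default multipart limit is 50 MB, so it had to be raised; the values are in bytes):

    # nginx.conf, in the server/location block proxying to Tomcat
    client_max_body_size 150m;

    <!-- webapps/manager/WEB-INF/web.xml -->
    <multipart-config>
      <max-file-size>157286400</max-file-size>
      <max-request-size>157286400</max-request-size>
      <file-size-threshold>0</file-size-threshold>
    </multipart-config>

This is what Maven reports when the deploy fails: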

[INFO] [edDeployerDeployMojo] Resolved container artifact org.codehaus.cargo:cargo-core-container-tomcat:jar:1.6.1 for container tomcat8x
[INFO] [mcat8xRemoteDeployer] Deploying [C:\source\web-0.1-SNAPSHOT.war]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:56 min
[INFO] Finished at: 2016-11-10T17:07:30+01:00
[INFO] Final Memory: 61M/656M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.cargo:cargo-maven2-plugin:1.6.1:deploy (default-cli) on project reforce-fasthi-web: Execution default-cli of goal org.codehaus.cargo:cargo-maven2-plugin:1.6.1:deploy failed: Failed to deploy [C:\source\web-0.1-SNAPSHOT.war]: Server returned HTTP response code: 413 for URL: https://server:443/manager/text/deploy?path=%2F&version=20161110-1604UTC -> [Help 1]

Any ideas what might be the issue here? I've tried a lot of things, such as bypassing Nginx (in which case the transfer simply times out). According to Tomcat Manager's server status page, each failed upload leaves a lingering thread that doesn't go away until the server is restarted. The logs say absolutely nothing!


1 Answer


After many long troubleshooting sessions we finally nailed the answer. It turns out that memory was the issue: with our configuration, Tomcat did not have enough free memory to spin up the new version of the application while the old one was still loaded.

Strangely enough, the behavior is quite stochastic: sometimes the deploy works even though there is very little memory left, and sometimes it fails even though there is plenty of memory available.

We ended up increasing the maximum amount of memory available to the machine (it's a virtual server) so that Tomcat can expand during the actual deployment and then settle down again. We haven't seen the issue since (knock on wood...).
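
Note that giving the machine more memory only helps if Tomcat's JVM is actually allowed to use it. As a sketch (the numbers are illustrative, not our exact values), the heap limits go in Tomcat's bin/setenv.sh, or setenv.bat on Windows:

    # $CATALINA_BASE/bin/setenv.sh
    # Leave enough heap headroom for the new version of the app to start
    # while the old one is still loaded during the deployment.
    export CATALINA_OPTS="-Xms512m -Xmx2048m"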
