3

I have noticed that my particular instance of Trac is not running quickly and has big lags. This is at the very onset of a project, so not much is in Trac (except for plugins and code loaded into SVN).

Setup Info: This is via a SELinux system hosted by WebFaction. It is behind Apache, and connections are over SSL. Currently the .htpasswd file is what I use to control access.

Are there any recommended ways to improve the performance of Trac?

torial
  • 13,085
  • 9
  • 62
  • 89
  • You could give more information about your setup - operational system, web server, protocol used, authentication scheme used. – nosklo Oct 17 '08 at 22:52

4 Answers

5

It's hard to say without knowing more about your setup, but one easy win is to make sure that Trac is running in something like mod_python, which keeps the Python runtime in memory. Otherwise, every HTTP request will cause Python to run, import all the modules, and then finally handle the request. Using mod_python (or FastCGI, whichever you prefer) will eliminate that loading and skip straight to the good stuff.
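As a sketch, a mod_python setup in Apache looks roughly like this (the `TracEnv` path and the `/trac` location are placeholders for your own values):

```apache
<Location /trac>
    SetHandler mod_python
    PythonInterpreter main_interpreter
    PythonHandler trac.web.modpython_frontend
    PythonOption TracEnv /path/to/tracenv
    PythonOption TracUriRoot /trac
</Location>
```

With something like this in place, the Python interpreter and Trac's modules stay resident between requests instead of being reloaded for each one.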

Also, as your Trac database grows and you get more people using the site, you'll probably outgrow the default SQLite database. At that point, you should think about migrating the database to PostgreSQL or MySQL, because they'll be able to handle concurrent requests much faster.
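Once the data has been migrated (the tooling for this has varied across Trac versions), the switch itself is just a change to the database connection string in `conf/trac.ini` — the credentials and database name below are placeholders:

```ini
[trac]
database = postgres://tracuser:secret@localhost/trac
```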

Justin Voss
  • 6,294
  • 6
  • 35
  • 39
  • Last I saw, their MySQL support was very beta and recommended against. I tried a postgres import of my database, but it was missing some things relating to milestones. – Jon Topper Oct 21 '08 at 10:44
  • I've been running Trac on MySQL for 8 months now and it seems to perform well. The only issue I have encountered is that the connection between Trac and MySQL is dropped if not used for 8 hours (i.e overnight). I have a script that connects to our Trac homepage at 6am to workaround this. – Chris B Jun 15 '09 at 10:04
3

We've had the best luck with FastCGI. Another critical factor was to only use https for authentication but use http for all other traffic -- I was really surprised how much that made a difference.
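One way to do that with Apache is to redirect only the login URL to https and leave everything else on plain http. A minimal sketch with mod_rewrite, assuming Trac lives under `/trac`:

```apache
RewriteEngine On
# Send only the authentication URL to https; all other traffic stays on http
RewriteCond %{HTTPS} off
RewriteRule ^/trac/login$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```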

Pat Notz
  • 208,672
  • 30
  • 90
  • 92
  • I second the use of FastCGI. It is a bit of a pain to install if you don't already have it in your distro but not terrible. Using FCGI got around some compatibility issues between Apache and Python|modPython (Modified Centos 4.3) – Sam Corder Oct 20 '08 at 17:33
2

I have noticed that if

select distinct name from wiki

takes more than 5 seconds (for example because there are a million rows in this table — a true story: we had a script that kept filling it), browsing wiki pages becomes very slow, taking over 2*t*n, where t is the execution time of the quoted query (>5 s, of course) and n is the number of Trac wiki links present on the viewed page. This is because Trac has a hardcoded 5-second cache expiry for this query, which it uses to decide what colour each link should be. We re-hardcoded the value to 30 s (we need that many pages, so every 30 s someone has to wait 6-7 s).
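The caching pattern described above can be sketched like this — a time-based cache in front of the expensive query. The function and parameter names are hypothetical, for illustration only; Trac's actual implementation differs:

```python
import time

# The answer raised this from Trac's 5 s to 30 s
CACHE_EXPIRY = 30  # seconds

_cache = {"pages": None, "stamp": 0.0}

def get_wiki_page_names(execute_query):
    """Return the cached wiki page list, refreshing it when the cache expires.

    `execute_query` stands in for the database call that runs
    `SELECT DISTINCT name FROM wiki`.
    """
    now = time.time()
    if _cache["pages"] is None or now - _cache["stamp"] > CACHE_EXPIRY:
        _cache["pages"] = execute_query()  # the slow query runs here
        _cache["stamp"] = now
    return _cache["pages"]
```

A longer expiry means the slow query runs less often, at the cost of link colours being up to 30 s stale — exactly the trade-off described above.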

It may not be what caused your problem, but it might be. Good luck speeding up your Trac instance.

SilentGhost
  • 307,395
  • 66
  • 306
  • 293
Paweł Polewicz
  • 3,711
  • 2
  • 20
  • 24
1

Serving the chrome files statically with an Expires header could help too. See the end of this page.
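With Apache and mod_expires, that can look something like the following — the htdocs path is an assumption (Trac can generate the static files for you with `trac-admin <env> deploy`):

```apache
# Serve Trac's static resources directly, bypassing Python entirely
Alias /trac/chrome/common /usr/share/trac/htdocs
<Location /trac/chrome/common>
    ExpiresActive On
    ExpiresDefault "access plus 1 week"
</Location>
```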

Macke
  • 24,812
  • 7
  • 82
  • 118