
We have built a dashboard on top of Jenkins that lets users see only the jobs relevant to their project and also trigger builds. The UI is built with ReactJS and the backend is a set of Java REST web services.

The web service calls the Jenkins API to fetch job information and converts the data to JSON to feed the UI. At present we have around 200 jobs on the dashboard, and it takes around 2 minutes for the Jenkins API to respond with the details.

Jenkins is running on a Linux box

OracleLinux 6 x Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz / 39.25 GB

Jenkins version 1.564, with 16 executors and more than 2,000 jobs

Sample API call: http://jenkins:8080/job/jobName/api/json?tree=displayName,builds[result],lastBuild[estimatedDuration,result,duration,number,timestamp,actions[causes[userName]]]

The API is called 200 times, once per job, to fetch each job's details.

Any advice on how to speed up the API response?

I have considered increasing the RAM on the Linux box, tuning the JVM options, and upgrading Jenkins to the latest LTS.

Upen
  • Do your jobs have many builds? I know the Jenkins team has been working on lazy loading of builds; I don't know which versions have those improvements, though. E.g., as soon as you load a job, it loads all of its builds; in newer versions it only loads the builds needed to render the page / answer the query. Also, the `builds[result]` part of the tree query might be extra dangerous, since in older versions (with lazy loading) it would force the job to load all builds. The reason is that no paging was done; in later versions you have to specify the range of builds to return (the default is 20, I think). – Jon S Feb 10 '17 at 06:12
  • We keep a history of 30 builds per job. I am just worried about upgrading the Jenkins core, since all the plugins might not be compatible. We are using several plugins to make things work. – Upen Feb 10 '17 at 19:08
  • Okay, it's not the lazy loading then; 30 builds is not much. It's only really a problem for jobs with 1000+ builds. – Jon S Feb 10 '17 at 19:26

2 Answers


Low-hanging fruit:

  1. Run the requests in parallel, i.e., not one after another (see the sketch after this list).
  2. If you do that and you use the standard Jetty container, try increasing the number of worker threads with the --handlerCountMax option (the default is 40).
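For illustration, here is a minimal sketch of point 1 in Java (the question's backend language), assuming Java 11+ with its built-in HttpClient; the Jenkins URL, job names, pool size, and the missing authentication are placeholders to adapt to your setup:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class ParallelJobFetcher {

    private static final String JENKINS = "http://jenkins:8080";
    private static final String TREE =
            "tree=displayName,builds[result],lastBuild[estimatedDuration,result,"
          + "duration,number,timestamp,actions[causes[userName]]]";

    public static void main(String[] args) {
        // Hypothetical job names; in the real dashboard this is the list of ~200 jobs.
        List<String> jobNames = List.of("job-a", "job-b", "job-c");

        // A bounded pool so we do not open 200 connections to Jenkins at once.
        ExecutorService pool = Executors.newFixedThreadPool(20);
        HttpClient client = HttpClient.newBuilder().executor(pool).build();

        // Fire all requests up front instead of calling the API one job at a time.
        List<CompletableFuture<String>> futures = jobNames.stream()
                .map(name -> HttpRequest.newBuilder(
                        URI.create(JENKINS + "/job/" + name + "/api/json?" + TREE)).build())
                .map(req -> client.sendAsync(req, HttpResponse.BodyHandlers.ofString())
                        .thenApply(HttpResponse::body))
                .collect(Collectors.toList());

        // Wait for every response; each element is the JSON payload of one job.
        List<String> jobJson = futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());

        System.out.println("Fetched " + jobJson.size() + " job payloads");
        pool.shutdown();
    }
}
```

The bounded pool keeps the number of concurrent connections to the master under control; a CountDownLatch over plain threads, as mentioned in the comments below, achieves the same effect.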

Ultimately, though, you should try to avoid performing 200 individual requests altogether. Depending on your setup, the security checks for every request alone can cause substantial overhead.

Therefore, the cleanest solution is to gather all the data you need with a single Groovy script on the master (you can do that via REST as well):

  • this reduces the number of requests to 1
  • and it allows further optimization, possibly circumventing the problems mentioned in Jon S's comment above (a sketch of the REST variant follows this list)
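As a rough sketch of the REST variant, assuming Java 11+, a hypothetical user:apiToken credential, and an illustrative set of fields (depending on your security settings a CSRF crumb may also be required, and the script console typically needs administrator-level permissions), the backend could POST a small Groovy script to Jenkins' /scriptText endpoint and get one aggregated JSON document back:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MasterSideAggregation {

    // Groovy script executed on the master; the collected fields are illustrative.
    private static final String GROOVY_SCRIPT =
            "import groovy.json.JsonOutput\n"
          + "def data = jenkins.model.Jenkins.instance.getAllItems(hudson.model.Job).collect { job ->\n"
          + "  def last = job.getLastBuild()\n"
          + "  [name: job.fullName,\n"
          + "   lastResult: last?.result?.toString(),\n"
          + "   lastDuration: last?.duration,\n"
          + "   lastTimestamp: last?.timestamp?.timeInMillis]\n"
          + "}\n"
          + "println(JsonOutput.toJson(data))\n";

    public static void main(String[] args) throws Exception {
        String form = "script=" + URLEncoder.encode(GROOVY_SCRIPT, StandardCharsets.UTF_8);
        // Placeholder credentials; use a real user name and API token.
        String auth = Base64.getEncoder()
                .encodeToString("user:apiToken".getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder(URI.create("http://jenkins:8080/scriptText"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The response body is exactly the JSON printed by the Groovy script.
        System.out.println(response.body());
    }
}
```

Collecting the data on the master means one authenticated request instead of 200, with no per-job HTTP or security-check overhead.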
Alex O
  • Being more of a Java person, I will try removing the for loop for the 200-odd calls and doing them in one shot using a CountDownLatch. Will keep you posted. Thanks. – Upen Feb 14 '17 at 19:01
  • I was able to improve the speed from 2 minutes to a few seconds with the change. – Upen Feb 28 '17 at 22:52
  • Thanks for the feedback. That's a decent speedup : ) – Alex O Mar 01 '17 at 06:05

Since it seems you're not hitting any lazy-loading issues on your server (you only keep 30 builds per job), the problem is probably the per-request overhead, as Alex O suggests. Alex O also suggested doing everything in a single request, which can be done with the following call:

http://jenkins:8080/api/json?tree=jobs[displayName,builds[result],lastBuild[estimatedDuration,result,duration,number,timestamp,actions[causes[userName]]]]

Instead of relying on the per-job API, this uses the Jenkins root API, which lets us fetch the data for all jobs in a single request.
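A minimal Java sketch of that single call, assuming Java 11+ and anonymous read access; the {0,30} range on builds is optional and limits the result to the 30 most recent builds per job (whether ranges are supported depends on the Jenkins version):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AllJobsInOneCall {

    public static void main(String[] args) throws Exception {
        // One request for every job; the "jobs" array in the response holds one entry per job.
        String url = "http://jenkins:8080/api/json?tree=jobs[displayName,"
                + "builds[result]{0,30},lastBuild[estimatedDuration,result,duration,"
                + "number,timestamp,actions[causes[userName]]]]";

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(HttpRequest.newBuilder(URI.create(url)).build(),
                      HttpResponse.BodyHandlers.ofString());

        // Feed the single JSON document to whatever parser the backend already uses.
        System.out.println(response.body());
    }
}
```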

Jon S