
Using these metrics (shown below), I applied a workload modeling formula (Little's Law) to derive what I believe are the correct settings to sufficiently load test the application in question.

From Google Analytics:

  • Users: 2,159
  • Pageviews: 4,856
  • Avg. Session Duration: 0:02:44
  • Pages / Session: 2.21
  • Sessions: 2,199

The formula is N = Throughput * (Response Time + Think Time)

  • We calculated Throughput as 1.35 (4,856 pageviews / 3,600 seconds in an hour)
  • We calculated (Response Time + Think Time) as 74.21 (164 seconds avg. session duration (0:02:44) / 2.21 pages per session)

Using the formula, we calculate N as 100 (1.35 Throughput * 74.21 (Response Time + Think Time)).
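For clarity, the arithmetic above can be sketched as a few lines of Python, using the Google Analytics figures listed earlier:

```python
# Little's Law: N = Throughput * (Response Time + Think Time),
# computed from the Google Analytics metrics in the question.
pageviews = 4856
avg_session_seconds = 2 * 60 + 44        # 0:02:44 -> 164 seconds
pages_per_session = 2.21

throughput = pageviews / 3600                            # pageviews per second over the peak hour
time_per_page = avg_session_seconds / pages_per_session  # response time + think time per page

n = throughput * time_per_page                           # concurrent users

print(round(throughput, 2), round(time_per_page, 2), round(n))  # 1.35 74.21 100
```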

Therefore, according to my calculations, we can simulate the load the server experienced on the peak day during the peak hour with 100 users going through the business processes at a pace of 75 seconds between iterations (think time ignored).

So, in order to determine how the system responds under a heavier than normal load, we can double (200 users) or triple (300 users) the value of N and record the average response time for each transaction.

Is this all correct?

Jay R.
2 Answers


The formula below has always worked well for me if you are looking to calculate pacing:
"Pacing = No. of Users * Duration of Test (in seconds) / Transactions you want to achieve in said Test Duration"
You should be able to get close to the number of transactions you want to achieve using this formula. If it's an API test, it's almost always accurate.

For example, say you want to achieve 1,000 transactions using 5 users in one hour of test duration:

Pacing = 5 * 3600/1000 = 18 seconds
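The pacing formula from this answer can be written as a small helper (a sketch, not tied to any particular tool):

```python
# Pacing = users * test_duration_seconds / target_transactions
# i.e. how long each user should take per iteration so that the
# target transaction count is reached within the test duration.
def pacing(users, duration_seconds, target_transactions):
    return users * duration_seconds / target_transactions

# The example from the answer: 5 users, 1 hour, 1000 transactions.
print(pacing(5, 3600, 1000))  # 18.0 seconds between iterations
```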

Vishal Chepuri

When you do a direct observation of the logs for the site, blocked by session duration, what are the maximum number of IP addresses counted in each block?

Little's Law tends to undercount sessions and their overhead in favor of transactional throughput. That's OK if you have instantaneous recovery on your session resources, but most sites are holding onto them for a period longer than 110% of the longest inter-request window for a user (the period from one request to the next).
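A minimal sketch of the log-based observation described above: group requests by IP address, split each IP's requests into sessions wherever the gap exceeds a timeout, and count how many sessions overlap at the peak. The log format (pairs of IP and epoch timestamp) and the 30-minute timeout are assumptions for illustration; adjust to your server's actual session timeout.

```python
from collections import defaultdict

SESSION_TIMEOUT = 30 * 60  # assumed 30-minute session timeout, in seconds

def sessions_by_ip(requests):
    """requests: iterable of (ip, epoch_seconds). Returns {ip: [(start, end), ...]}."""
    by_ip = defaultdict(list)
    for ip, ts in requests:
        by_ip[ip].append(ts)
    sessions = defaultdict(list)
    for ip, times in by_ip.items():
        times.sort()
        start = prev = times[0]
        for t in times[1:]:
            if t - prev > SESSION_TIMEOUT:
                # Gap exceeds the timeout: close the current session, open a new one.
                sessions[ip].append((start, prev))
                start = t
            prev = t
        sessions[ip].append((start, prev))
    return dict(sessions)

def max_concurrent(sessions):
    """Peak number of simultaneously open sessions across all IPs (sweep line)."""
    events = []
    for spans in sessions.values():
        for start, end in spans:
            events.append((start, 1))
            events.append((end, -1))
    # Process session starts before ends at the same timestamp, so a session
    # ending exactly at t still counts as open at t.
    events.sort(key=lambda e: (e[0], -e[1]))
    peak = current = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak
```

The peak returned by `max_concurrent` gives an objective count of users on the system, which can be compared against the N derived from Little's Law.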

James Pulley
  • Hi James, what would the max number of IP addresses give you for each block? The number of users you'd want to apply at minimum? How do you come up with the pacing? – Jay R. Aug 27 '20 at 19:41
  • I use logs to model user behavior. I can query the logs for the time between first and last instance of an IP address, grouped by IP. I can use this to pull session durations. With that I can count unique IP's blocked by session duration. This will provide a count of users objectively on the system. A count of requests by Page during the high window tells me the business processes by the count of the ending pages for the process. This is related to another thread you have open where you are engaging in a high risk test. See other thread – James Pulley Aug 29 '20 at 14:40
  • James, I don't understand this sentence: "A count of requests by Page during the high window tells me the business processes by the count of the ending pages for the process." What do you mean by "...tells me the business processes by the count of the ending pages for the process"? – Jay R. Aug 31 '20 at 21:36
  • Each business process typically lands on a unique page at the end of the process. Therefore, to count the instances of each page (less the parameters after the ?) tells us how many times a page has been hit, and by extension, how many times a business process tied to a page has completed successfully – James Pulley Sep 01 '20 at 11:57