I am building a small utility that packages Locust, a performance testing tool (https://locust.io/), and deploys it on Azure Functions. It's just a fun side project to get some hands-on experience with the serverless craze.
Here's the git repo: https://github.com/amanvirmundra/locust-serverless.
Now I am thinking it would be great to run Locust tests in distributed mode on a serverless architecture (Azure Functions Consumption plan). Locust supports distributed mode, but it needs the slaves to communicate with the master using its IP. That's the problem!
I can provision multiple functions, but I am not quite sure how to make them talk to each other on the fly (without manual intervention).
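For context, this is roughly how distributed Locust is normally started on plain VMs, just to show where the master IP comes in (the flag names below are from the older master/slave CLI; newer Locust releases use --worker and different option names):

```python
# Sketch only: launching a Locust master and a slave the conventional way.
# Every slave needs the master's reachable IP and Locust's default ZeroMQ
# ports (5557/5558) open -- which is exactly what's hard on Azure Functions.
import subprocess

MASTER_IP = "10.0.0.4"  # placeholder; obtaining this on the fly is the open problem

# On the master instance:
subprocess.Popen([
    "locust", "-f", "locustfile.py",
    "--master", "--no-web",
    "-c", "100", "-r", "10",
])

# On each slave instance:
subprocess.Popen([
    "locust", "-f", "locustfile.py",
    "--slave", f"--master-host={MASTER_IP}",
])
```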
Thinking out loud:
- Somehow get the IP of the master function and pass it on to the slave functions. Not sure if that's possible in Azure Functions, but some people have figured out a way to get the IP of an Azure Function using .NET libraries. Mine is a Python version, but if it can be done with .NET then there's probably a Python way as well (see the sketch after this list).
- Create some sort of VPN and map a function to a private IP. Not sure if this sort of mapping is possible in Azure.
- Someone has done this using AWS Lambda (https://github.com/FutureSharks/invokust). Ask that person or try to understand the code.
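For the first bullet, the usual Python trick for discovering the local IP would look something like the sketch below. Whether the address it returns is actually reachable from other function instances on the Consumption plan is exactly the open question:

```python
# Minimal sketch of "get my own IP from Python".
# The UDP connect() below sends no packets; it only asks the OS which local
# address it would use for outbound traffic.
import socket

def get_local_ip() -> str:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    finally:
        s.close()

print(get_local_ip())
```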
Need advice on figuring out what's possible while keeping things serverless. Open to ideas and/or code contributions :)
Update
This is the current setup:
- The performance test session is triggered by an HTTP request, which takes in the number of requests to make, the base URL, and the number of concurrent users to simulate (see the sketch after this list).
- The locustfile defines the test setup and orchestration.
- Run.py triggers the tests.
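Roughly, the HTTP entry point has the shape below. This sketch uses the azure-functions Python programming model with placeholder names (run_locust and the payload fields are illustrative, not the exact code in the repo):

```python
# Sketch of the HTTP-triggered entry point; field names and the run_locust
# wrapper (standing in for what run.py does) are hypothetical.
import azure.functions as func
from locust_runner import run_locust  # hypothetical wrapper around run.py

def main(req: func.HttpRequest) -> func.HttpResponse:
    payload = req.get_json()
    num_requests = int(payload["num_requests"])  # number of requests to make
    base_url = payload["base_url"]               # system under test
    num_users = int(payload["num_users"])        # concurrent users to simulate

    stats = run_locust(base_url, num_requests, num_users)
    return func.HttpResponse(str(stats), status_code=200)
```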
What I want to do now is to have a master/slave setup (a cluster) for a massive-scale perf test.
- I would imagine that the master function is triggered by an HTTP request with a similar payload.
- The master will in turn trigger the slaves (see the sketch after this list).
- When the slaves join the cluster, the performance session would start.
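If I had a master IP that the slaves could reach, the master function might look something like this. SLAVE_FUNCTION_URL, the payload fields, and the slave count are all hypothetical, and the Locust flags are again the older master/slave ones:

```python
# Rough sketch of the envisioned fan-out, assuming (a) the master can discover
# an IP that slaves can actually reach and (b) the slave function is exposed
# at a known HTTP endpoint -- both still open questions.
import subprocess
import requests

SLAVE_FUNCTION_URL = "https://<slave-app>.azurewebsites.net/api/slave"  # placeholder
NUM_SLAVES = 4

def start_cluster(master_ip: str, base_url: str, num_users: int, num_requests: int):
    # 1. Start the Locust master and have it wait for the slaves to join.
    subprocess.Popen([
        "locust", "-f", "locustfile.py", "--master", "--no-web",
        "--expect-slaves", str(NUM_SLAVES),
        "-c", str(num_users), "-r", "10", "-n", str(num_requests),
        "--host", base_url,
    ])

    # 2. Fan out HTTP calls to the slave function, handing each one the
    #    master's IP so it can join the cluster.
    for _ in range(NUM_SLAVES):
        requests.post(SLAVE_FUNCTION_URL, json={"master_host": master_ip})
```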