I have deployed an API built with Laravel 8 on an m6g.8xlarge server. My system architecture is as follows:
Client -> NGINX -> PHP-FPM -> Laravel API -> PGBouncer -> PostgreSQL/PostGIS
The API returns the result for each client request from a Postgres function that reads a table of 12 billion records, averaging about 300 ms per call. I load tested it with JMeter, sending 50K requests at a concurrency of 500, and got a throughput of only 390 requests/sec.
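For context, the Postgres path looks roughly like the sketch below; the route, function name, and parameters are placeholders, not the exact ones from my application.

<?php
// routes/api.php — simplified placeholder, not the real endpoint.

use Illuminate\Http\Request;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Route;

Route::get('/lookup', function (Request $request) {
    // Call the Postgres/PostGIS function through PgBouncer; the function
    // name and arguments here are hypothetical stand-ins.
    $rows = DB::select(
        'SELECT * FROM my_geo_function(:lon, :lat, :radius_mi)',
        [
            'lon'       => (float) $request->query('lon'),
            'lat'       => (float) $request->query('lat'),
            'radius_mi' => 20,
        ]
    );

    return response()->json($rows);
});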
To mitigate this, I loaded the 12 billion records into Redis and reimplemented the Postgres function's logic there with a RediSearch FT.AGGREGATE query using a geospatial filter, which returns a result in 250-300 ms on average. I then ran the same load test with 50K requests at a concurrency of 400 and got a throughput of only 45 requests/sec, which is far worse than the Postgres function approach.
Client -> NGINX -> PHP-FPM -> Laravel API -> RediSearch -> Redis
Please let me know how I can improve the throughput for concurrent requests with the Redis approach, since my assumption was that Redis would not suffer from the locking or Postgres connection issues. Thanks.
Sample Query
FT.AGGREGATE test_idx "@location:[-84.31455662683010 33.92779720276280 20 mi]"
APPLY "geodistance(-84.31455662683010,33.92779720276280,@location)" AS distance
APPLY "(@a*((20-(@distance/1609.344))/20))" AS a1
APPLY "(@a1*@b)" AS a1b
APPLY "1" AS test
GROUPBY 1 @test
REDUCE SUM 1 @a1 AS a_total
REDUCE SUM 1 @a1b AS a1b_total
APPLY "(@a_total/(3.14*(20^2))*100)" AS val_1
APPLY "(@a1b_total/@a_total)" AS val_2
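For reference, this is roughly how the query is issued from PHP. The snippet below is a simplified sketch using phpredis's rawCommand; the host, port, and client choice are placeholders, not my exact Laravel code.

<?php
// Simplified sketch only — connection details are placeholders; the real
// call happens inside the Laravel API behind PHP-FPM.

$redis = new Redis();
$redis->connect('127.0.0.1', 6379); // one TCP connection per request unless made persistent

$result = $redis->rawCommand(
    'FT.AGGREGATE', 'test_idx',
    '@location:[-84.31455662683010 33.92779720276280 20 mi]',
    'APPLY', 'geodistance(-84.31455662683010,33.92779720276280,@location)', 'AS', 'distance',
    'APPLY', '(@a*((20-(@distance/1609.344))/20))', 'AS', 'a1',
    'APPLY', '(@a1*@b)', 'AS', 'a1b',
    'APPLY', '1', 'AS', 'test',
    'GROUPBY', '1', '@test',
    'REDUCE', 'SUM', '1', '@a1', 'AS', 'a_total',
    'REDUCE', 'SUM', '1', '@a1b', 'AS', 'a1b_total',
    'APPLY', '(@a_total/(3.14*(20^2))*100)', 'AS', 'val_1',
    'APPLY', '(@a1b_total/@a_total)', 'AS', 'val_2'
);

var_dump($result);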