
I'm building a REST API in Python using Flask and Connexion.

I'm adding the API to the Connexion app using a swagger.yml file that contains the definitions of all the endpoints, methods, etc.
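
Roughly, the app is wired up like this (file name and port are just placeholders):

import connexion

app = connexion.FlaskApp(__name__, specification_dir=".")
app.add_api("swagger.yml")  # endpoints, methods, etc. are defined in swagger.yml

if __name__ == "__main__":
    app.run(port=8080)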

The question is: how can I add a rate limit to a specific resource/route/call?

I can't seem to find it in the documentation.

Thanks.

Amrou

2 Answers


Connexion is built on top of Flask, so many techniques that work with Flask also work with Connexion.
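
(The underlying Flask app is exposed as the .app attribute of the Connexion app, so any Flask extension can be attached to it; a quick sanity check, assuming Connexion 2.x:)

import connexion
import flask

cnx_app = connexion.FlaskApp(__name__)
assert isinstance(cnx_app.app, flask.Flask)  # the plain Flask app lives here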

We successfully used Flask-Limiter to do rate limiting.

import os

import connexion
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

# Redis connection details for the rate-limit storage backend (placeholder defaults).
REDIS_HOST = os.environ.get("REDIS_HOST", "localhost")
REDIS_PORT = int(os.environ.get("REDIS_PORT", "6379"))
REDIS_DB = int(os.environ.get("REDIS_DB", "0"))

# Default limits applied to every endpoint unless overridden (placeholder values).
DAILY_LIMIT = "10000/day"
HOURLY_LIMIT = "1000/hour"

APP = connexion.FlaskApp(
    __name__, specification_dir=os.environ.get("OPENAPI_LOCATION", ".")
)

# This is for rate limiting (flask-limiter)
# Ref: https://limits.readthedocs.io/en/stable/storage.html#storage-scheme
APP.app.config["RATELIMIT_STORAGE_URI"] = f"redis://{REDIS_HOST}:{REDIS_PORT}/{REDIS_DB + 1}"
# Kill switch for the rate limiter
APP.app.config["RATELIMIT_ENABLED"] = True
# Policy for what to do if Redis is down
APP.app.config["RATELIMIT_IN_MEMORY_FALLBACK"] = True
APP.app.config["RATELIMIT_HEADERS_ENABLED"] = True
APP.app.config["RATELIMIT_SWALLOW_ERRORS"] = os.environ.get("ENV", "") != "DEV"


def exempt_when() -> bool:
    # Return True to skip rate limiting for a request (e.g. trusted internal callers).
    return False


# The limiter attaches to the underlying Flask app (APP.app), not the Connexion wrapper.
# (Flask-Limiter < 3.0 constructor shown; 3.x takes key_func first and app as a keyword.)
LIMITER = Limiter(
    APP.app,
    key_func=get_remote_address,
    default_limits=[DAILY_LIMIT, HOURLY_LIMIT],
)


# Per-route limit on the handler function that Connexion resolves from the spec.
@LIMITER.limit("3/second", override_defaults=True, exempt_when=exempt_when)
def simple_search_dsl_async() -> str:
    return "Hi"
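
For the limit to apply to a specific route, the decorated function just needs to be the handler Connexion resolves from the spec via operationId. A sketch of the corresponding path entry (the path and module name service.views below are placeholders):

  # swagger.yml / openapi.yaml excerpt (module path is a placeholder)
  /search:
    get:
      operationId: service.views.simple_search_dsl_async
      responses:
        "200":
          description: Search results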

MatthewMartin

You could use the X-Rate-Limit-* HTTP headers along with the HTTP 429 status code.

In practice, this looks like the following in OpenAPI:

  ....
  responses:
    "200":
      description: Success response
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/YourResponseModel"
      headers:
        X-Rate-Limit-Limit:
          description: The number of allowed requests in the current period
          schema:
            type: integer
        X-Rate-Limit-Remaining:
          description: The number of remaining requests in the current period
          schema:
            type: integer
        X-Rate-Limit-Reset:
          description: The number of seconds left in the current period
          schema:
            type: integer
    "429":
      description: Too many requests
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/ErrorMessageResponse"
  ....
Zsolt Normann
  • But where do I exactly set the limit on the requests? Because, if I understood correctly, what you wrote is only going to illustrate the rate limit in the response's headers. – Amrou Jun 04 '21 at 12:48
  • Maybe I misunderstand what you are searching for, but a rate limit is basically a tool for the provider/server to defend its underlying infrastructure. This means that it is not the consumer calling the service who sets the rate limit; rather, the server/provider tells the caller what the rate limit is for a timeframe (X-Rate-Limit-Limit), how much of the limit dedicated to the caller remains after the call (X-Rate-Limit-Remaining), and when the rate limit will reset (X-Rate-Limit-Reset) to the consumable maximum. Hope this clarification helps. – Zsolt Normann Jun 05 '21 at 08:09
  • Moreover, the provider could expose an endpoint for clients to get the current state of the rate limits, similar to what GitHub does. See the GET /rate_limit endpoint at https://docs.github.com/en/rest/reference/rate-limit. – Zsolt Normann Jun 05 '21 at 08:11
  • Thanks for explaining. I already know this. What I meant is: I'm the provider, right? I'm developing the API and all the endpoints using Python's Connexion module. The question is, how do I set rate limits on specific endpoints, since the API is basically defined in the YAML file? – Amrou Jun 06 '21 at 09:46
  • One more thing from an infrastructure point of view: usually rate limiting is not the responsibility of the provider's backend service (which you would like to generate) but of, e.g., an API Gateway, which is responsible for throttling endpoints. See, e.g., the AWS docs for this case: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html – Zsolt Normann Jun 21 '21 at 05:57