The FastAPI documentation recommends using an lru_cache-decorated function to retrieve the settings object. That makes sense, to avoid repeatedly doing the I/O of reading the env file.

#config.py
from pydantic import BaseSettings


class Settings(BaseSettings):
    app_name: str = "Awesome API"
    admin_email: str
    items_per_user: int = 50

    class Config:
        env_file = ".env"

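For reference, the .env file this reads from might look something like the following; the values are just placeholders, and pydantic matches the field names case-insensitively:

# .env (illustrative values)
ADMIN_EMAIL="admin@example.com"
APP_NAME="Awesome API"
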
and then, in other modules, the documentation defines a function that returns the settings:

#module_omega.py
from functools import lru_cache

from . import config

@lru_cache()
def get_settings():
    return config.Settings()

settings = get_settings()
print(settings.app_name)
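For context, the documentation actually injects this function as a dependency rather than calling it at import time as above, so the settings are only constructed when a request needs them. A rough sketch of that usage (the route path and module layout are my own assumptions):

# main.py (sketch)
from fastapi import Depends, FastAPI

from .config import Settings
from .module_omega import get_settings

app = FastAPI()

@app.get("/info")
def info(settings: Settings = Depends(get_settings)):
    # FastAPI calls get_settings() per request; lru_cache returns the same cached instance
    return {"app_name": settings.app_name}
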

I am wondering whether this method is better practice, or otherwise advantageous, compared to just initializing a settings object in the config module and importing it, like below.

#config.py
from pydantic import BaseSettings


class Settings(BaseSettings):
    app_name: str = "Awesome API"
    admin_email: str
    items_per_user: int = 50

    class Config:
        env_file = ".env"

settings = Settings()

#module_omega.py

from .config import settings

print(settings.app_name)
mowienay
  • These methods are pretty much equivalent, with only a stylistic difference. The `lru_cache` looks slightly more intuitive if you are going to subclass Settings to support different environments. – Marat Mar 18 '21 at 02:22
  • Why would it be more intuitive if supporting different environments? Is it better than having if conditions that decide which subclass to initialize? – mowienay Mar 18 '21 at 02:24
  • Because you get settings in a uniform way, instead of getting a module variable in one case and instantiating a class in others. – Marat Mar 18 '21 at 02:26
  • also because you can use dependency overrides of the get_settings function, see https://fastapi.tiangolo.com/advanced/settings/#settings-and-testing – dh762 Aug 17 '23 at 09:06
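As a rough sketch of what the comments mean by subclassing Settings per environment while keeping a single get_settings entry point; the DevSettings/ProdSettings classes and the APP_ENV variable are just illustrative names:

# config.py (sketch)
import os
from functools import lru_cache

from pydantic import BaseSettings


class Settings(BaseSettings):
    app_name: str = "Awesome API"
    admin_email: str
    items_per_user: int = 50

    class Config:
        env_file = ".env"


class DevSettings(Settings):
    admin_email: str = "dev@example.com"


class ProdSettings(Settings):
    items_per_user: int = 500


@lru_cache()
def get_settings() -> Settings:
    # Callers always go through get_settings(); only this function
    # needs to know which environment-specific subclass to build.
    env = os.getenv("APP_ENV", "dev")
    return ProdSettings() if env == "prod" else DevSettings()

Callers keep doing get_settings() (or Depends(get_settings)) and never care which subclass they received.
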

1 Answer


I realize it's been a while since you asked, and though I agree with the commenters that these can be functionally equivalent, I can point out another important difference that I think motivates the use of @lru_cache.

What the @lru_cache approach can help with is limiting the amount of code that is executed when the module is imported.

settings = Settings()

By doing this, as you suggested, you are exporting an instance of your settings, which means you're transitively executing any code needed to create your settings as soon as your module is imported.

While module-level objects are cached on first import in much the same way @lru_cache would cache them, you don't have as much control over deferring the loading of your settings, since in Python we typically place our imports at the top of a file.

The @lru_cache technique is especially useful if building your settings is more expensive, such as reading from the filesystem or going over the network. That way you can defer loading your settings until you actually need them.

# some_other_module.py
from .module_omega import get_settings  # wherever get_settings is defined

def do_something_with_deferred_settings():
    # Settings are only constructed on the first call to get_settings()
    print(get_settings().app_name)

if __name__ == "__main__":
    do_something_with_deferred_settings()

Other things to look into:

  • @cache in Python 3.9+ instead of @lru_cache
  • Module __getattr__ doesn't add anything here IMO, but it can be useful when working with dynamism and the import system; a sketch follows below.
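For what it's worth, a rough sketch of the module __getattr__ variant (PEP 562, Python 3.7+); the _build_settings helper is just an illustrative name:

# config.py (sketch using module __getattr__)
from functools import lru_cache

from pydantic import BaseSettings


class Settings(BaseSettings):
    app_name: str = "Awesome API"
    admin_email: str
    items_per_user: int = 50

    class Config:
        env_file = ".env"


@lru_cache()
def _build_settings() -> Settings:
    return Settings()


def __getattr__(name):
    # Called only when an attribute is not found in the module, so
    # `config.settings` builds the instance lazily on first access.
    if name == "settings":
        return _build_settings()
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

A caller then does import config and touches config.settings only when needed; note that from .config import settings would still resolve the attribute at import time, which is part of why this doesn't add much here.
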

Zev Isert
  • Good point... however, I find it unlikely that someone would want to defer settings retrieval in the case of an API. But the answer explains the potential benefits in that context. – mowienay Apr 24 '21 at 13:33
  • I can give you a good use case I encountered many, many times in an API: if you defer the loading of settings, you can run the test suite and disable secrets loading (network + decryption is time-expensive) without hacks, monkey patching, or anything like that. – Drachenfels Aug 21 '23 at 09:12
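A sketch of the testing pattern those comments describe, using FastAPI's dependency_overrides so the real get_settings (and any secrets loading it does) is never invoked; the myapp module layout, the /info route, and the lightweight_settings helper are assumptions:

# tests/test_info.py (sketch)
from fastapi.testclient import TestClient

from myapp.config import Settings
from myapp.main import app
from myapp.module_omega import get_settings


def lightweight_settings() -> Settings:
    # No network calls, no secret decryption -- just cheap in-memory values.
    return Settings(admin_email="test@example.com")


app.dependency_overrides[get_settings] = lightweight_settings


def test_info():
    client = TestClient(app)
    response = client.get("/info")
    assert response.status_code == 200

When get_settings() is called directly rather than injected, get_settings.cache_clear() combined with environment variables gives a similar escape hatch in tests.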