I have a Django application running on Heroku. To store and serve my static files, I'm using django-storages with my S3 bucket, along with Django's standard ManifestFilesMixin. I'm also using django-pipeline.
In code:
from django.contrib.staticfiles.storage import ManifestFilesMixin
from storages.backends.s3boto import S3BotoStorage
from pipeline.storage import PipelineMixin
class S3PipelineManifestStorage(PipelineMixin, ManifestFilesMixin, S3BotoStorage):
    pass
The setup works; however, the staticfiles.json manifest is also stored on S3. I see two problems with that:
1. My app's storage instance has to fetch staticfiles.json from S3 instead of just reading it from the local file system. This makes little sense performance-wise: the only consumer of the manifest file is the server app itself, so it might as well be stored locally instead of remotely. I'm not sure how significant this issue is, since I suppose (or hope) the server app caches the file after reading it once.
2. The manifest file is written during deployment by collectstatic, so if any already-running instances of the previous version of the app read the manifest from S3 before the deployment finishes and the new slug takes over, they could fetch the wrong static files - ones that should only be served by instances of the new slug. Note that on Heroku specifically, new app instances can pop up dynamically, so even if the app does cache the manifest file, its first fetch could happen during the deployment of the new slug.
This scenario as described is specific to Heroku, but I'd guess similar issues would arise in other environments.
The obvious solution would be to store the manifest file on the local file system. Each slug would have its own manifest file, performance would be optimal, and there wouldn't be any deployment races as described above.
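For illustration, here's the shape I imagine such an override might take. This is only a sketch: `read_manifest` and `save_manifest` are the hooks Django's ManifestFilesMixin uses for the manifest file, but the base classes below are minimal stand-in stubs (not the real Django/django-storages classes) so the example runs standalone; in the real project they would be the imports shown above, and the local paths would come from settings.

```python
import json
import os

# Stand-in stubs so this sketch runs without Django or django-storages
# installed; in the real project these would be ManifestFilesMixin,
# S3BotoStorage and PipelineMixin from the imports above.
class StubS3Storage:
    """Pretend remote storage: files live in a dict instead of an S3 bucket."""
    def __init__(self):
        self.remote = {}

class StubManifestMixin(StubS3Storage):
    """Exposes the two manifest hooks the real ManifestFilesMixin defines
    (their real bodies are more involved; simplified here)."""
    manifest_name = "staticfiles.json"

    def read_manifest(self):
        return self.remote.get(self.manifest_name)

    def save_manifest(self, paths):
        self.remote[self.manifest_name] = json.dumps({"paths": paths})

class LocalManifestStorage(StubManifestMixin):
    """Static files still go through the (stub) S3 backend, but the
    manifest is read from and written to the local file system."""
    def __init__(self, manifest_dir):
        super().__init__()
        self._manifest_path = os.path.join(manifest_dir, self.manifest_name)

    def read_manifest(self):
        try:
            with open(self._manifest_path, encoding="utf-8") as f:
                return f.read()
        except FileNotFoundError:
            return None

    def save_manifest(self, paths):
        with open(self._manifest_path, "w", encoding="utf-8") as f:
            json.dump({"paths": paths}, f)
```

With the real classes, the idea would be to subclass S3PipelineManifestStorage and route only the manifest through local storage, while the static files themselves still go to S3 - but I don't know whether this is safe or supported, hence the question.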
Is it possible?