In our project we decided to use presigned URLs as a basic authentication mechanism.
Trimmed down, our setup involves:
- the storage server
- the API server
- the client (an Angular SPA running in the browser)
We use presigned URLs to upload and download files directly between the client and the storage server.
Upload flow (simplified):
- the client tells the API: hey, I want to upload this
- the API does authorization and validation, does some database work, and returns a presigned upload URL (see the sketch below)
- the client uploads the file directly to the storage server
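
For context, the API side creates the upload URL roughly like this. It's a minimal sketch, assuming an S3-compatible storage server and the AWS SDK v3 for Node; the region, bucket, and expiry values are placeholders.

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "eu-central-1" });

// Called after authorization/validation and the database work succeeded.
// The returned URL lets the client PUT the file directly to the storage server.
async function createUploadUrl(bucket: string, key: string): Promise<string> {
  const command = new PutObjectCommand({ Bucket: bucket, Key: key });
  return getSignedUrl(s3, command, { expiresIn: 900 }); // valid for 15 minutes
}
```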
So far so good. The big problem is the "download" flow.
- the client asks the API: hey, show me a list of what you have
- the API does authorization and validation and returns a JSON list of objects, each of which also holds a presigned GET URL for displaying the file (an image); see the sketch below
- the client renders the list data and embeds the images, which are downloaded directly from the storage server via the presigned URLs
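
The list endpoint looks roughly like this (again a sketch with the same assumptions as above; the record shape and bucket name are made up):

```typescript
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "eu-central-1" });

interface ImageListItem {
  id: string;
  name: string;
  imageUrl: string; // presigned GET URL, used directly as <img src> in the SPA
}

// Builds the JSON list the client renders. getSignedUrl computes a fresh
// signature (and expiry) on every call, so the same object gets a new URL
// each time the list endpoint is hit.
async function listImages(
  records: { id: string; name: string; key: string }[]
): Promise<ImageListItem[]> {
  return Promise.all(
    records.map(async (r) => ({
      id: r.id,
      name: r.name,
      imageUrl: await getSignedUrl(
        s3,
        new GetObjectCommand({ Bucket: "my-bucket", Key: r.key }),
        { expiresIn: 900 }
      ),
    }))
  );
}
```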
This works great, but it blows the browser cache up to multiple GB of RAM.
This happens because the presigned URLs generated by separate calls are not identical: the authorization part (e.g. a fresh signature and lifetime) differs on every request. When a user pages back and forth through the paginated list, the client receives different URLs for the same objects, and the browser cache treats them as different images.
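
To illustrate (assuming S3-style SigV4 signing; the host, bucket, and parameter values are made up), two consecutive list calls might return the following URLs for the same object:

```
https://storage.example.com/my-bucket/img/42.jpg?X-Amz-Expires=900&X-Amz-Date=20240501T100000Z&X-Amz-Signature=aaa111...
https://storage.example.com/my-bucket/img/42.jpg?X-Amz-Expires=900&X-Amz-Date=20240501T100207Z&X-Amz-Signature=bbb222...
```

Same object, different query string, so the browser caches two separate entries.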
So far this seems to be correct behaviour on the browser side (a different URL means a different image).
So far this seems to be correct behaviour on the API side (a new call returns a new lifetime).
Are there any intended ways to handle this?
Are the flows themselves wrong?
Are there any ways to solve this besides implementing a centralized presigned URL cache (sketched below) when running multiple instances of the API?
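
For reference, by "centralized presigned URL cache" I mean something like the following sketch (assuming Redis via ioredis and the same S3-style setup as above), which I'd prefer not to have to run and operate:

```typescript
import Redis from "ioredis";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const redis = new Redis();                  // shared by all API instances
const s3 = new S3Client({ region: "eu-central-1" });

const URL_TTL_SECONDS = 900;                // lifetime of the presigned URL
const CACHE_TTL_SECONDS = 600;              // hand out the same URL for 10 minutes

// Returns the cached presigned GET URL if one exists, otherwise signs a new one.
// Because all API instances share the same Redis, repeated list calls return
// identical URLs and the browser can reuse its cached images.
async function getCachedDownloadUrl(bucket: string, key: string): Promise<string> {
  const cacheKey = `presigned:${bucket}:${key}`;
  const cached = await redis.get(cacheKey);
  if (cached) return cached;

  const url = await getSignedUrl(
    s3,
    new GetObjectCommand({ Bucket: bucket, Key: key }),
    { expiresIn: URL_TTL_SECONDS }
  );
  // The cache TTL is shorter than the URL lifetime so an expired URL is never served.
  await redis.set(cacheKey, url, "EX", CACHE_TTL_SECONDS);
  return url;
}
```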
Maybe someone could also give advice on meaningful tags I could use.