Background: I have built a typical website which lets users upload photos and videos (nothing too large, mind). Currently I host it on a single DigitalOcean droplet (Nginx, PHP, MySQL, Memcached for sessions/cache) in London, UK. I have an S3 bucket in Ireland (eu-west-1) to which the server uploads the files.
The Problem: For users outside the UK and Europe, uploads are slower, sometimes much slower (e.g. from the USA or Australia), which leaves them frustrated.
AWS have some pretty cool stuff like S3 Transfer Acceleration and S3 presigned POSTs (letting users upload directly to a bucket without AWS credentials). But the latter isn't very AJAX-friendly: you can ask S3 to return an XML response, but because the request is cross-domain that response isn't readable from JS, so there's no way of knowing the key (file) name. AFAIK.
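For context, here's roughly the presigning flow I experimented with, sketched in TypeScript with the AWS SDK v3 (my real backend is PHP; the bucket name, key scheme, and size cap are placeholders, and I'm assuming Transfer Acceleration is already enabled on the bucket):

```typescript
// Server-side sketch: generate a presigned POST where *we* choose the key,
// so the browser never needs to parse S3's XML response to learn it.
import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";
import { randomUUID } from "crypto";

// useAccelerateEndpoint routes uploads via S3 Transfer Acceleration
// (assumes acceleration is enabled on the bucket).
const s3 = new S3Client({ region: "eu-west-1", useAccelerateEndpoint: true });

export async function presignUpload(userId: string, filename: string) {
  // Placeholder key scheme -- the server picks the key up front.
  const key = `uploads/${userId}/${randomUUID()}-${filename}`;
  const { url, fields } = await createPresignedPost(s3, {
    Bucket: "my-uploads-bucket", // placeholder
    Key: key,
    Conditions: [["content-length-range", 0, 50 * 1024 * 1024]], // cap at 50 MB
    Expires: 600, // policy valid for 10 minutes
  });
  // Hand the key back too, since the client can't read it from S3's response.
  return { url, fields, key };
}
```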
I have also tried uploading directly to an S3 bucket with the AWS JavaScript SDK, but I think that would force my users to re-authenticate (which I want to avoid).
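To make it concrete, the AJAX flow I'd like looks like this (again a sketch: `/api/presign` is a hypothetical endpoint on my own server that would call the function above, so the user's existing session is the auth; it also assumes the bucket's CORS policy allows POSTs from my origin):

```typescript
// Browser-side sketch: AJAX upload straight to S3 with no AWS credentials
// in the page and no re-authentication.
async function uploadToS3(file: File): Promise<string> {
  const res = await fetch(`/api/presign?filename=${encodeURIComponent(file.name)}`);
  const { url, fields, key } = await res.json();

  const form = new FormData();
  for (const [name, value] of Object.entries(fields)) {
    form.append(name, value as string);
  }
  form.append("file", file); // S3 requires the file to be the last field

  // A 2xx from S3 is all we need; we already know `key`, so the
  // cross-domain XML body doesn't matter.
  const upload = await fetch(url, { method: "POST", body: form });
  if (!upload.ok) throw new Error(`S3 upload failed: ${upload.status}`);
  return key;
}
```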
I was thinking of adding a server in the US, for example, and separating the services: two web servers, a session server, and a database server, then using latency-based routing with Route 53 to route each user to the lowest-latency web server. But then I'd have a high-latency problem, with the session and DB servers in a different location from one of the web servers, plus the sticky-sessions problem.
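For concreteness, the Route 53 side would be something like this (a sketch with the AWS SDK v3; the hosted zone ID and IPs are placeholders):

```typescript
// Sketch: two latency-based A records for the same name, one per region.
// Route 53 answers each query with the record closest (latency-wise) to the user.
import {
  Route53Client,
  ChangeResourceRecordSetsCommand,
} from "@aws-sdk/client-route-53";

const r53 = new Route53Client({ region: "us-east-1" }); // Route 53 is global

await r53.send(new ChangeResourceRecordSetsCommand({
  HostedZoneId: "Z123EXAMPLE", // placeholder
  ChangeBatch: {
    Changes: [
      {
        Action: "UPSERT",
        ResourceRecordSet: {
          Name: "www.example.com",
          Type: "A",
          SetIdentifier: "london",
          Region: "eu-west-2", // London
          TTL: 60,
          ResourceRecords: [{ Value: "203.0.113.10" }], // placeholder IP
        },
      },
      {
        Action: "UPSERT",
        ResourceRecordSet: {
          Name: "www.example.com",
          Type: "A",
          SetIdentifier: "us-east",
          Region: "us-east-1",
          TTL: 60,
          ResourceRecords: [{ Value: "198.51.100.20" }], // placeholder IP
        },
      },
    ],
  },
}));
```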
I was wondering if anyone had any suggestions for this; maybe I'm overcomplicating it and there's a simpler solution. Any advice is greatly appreciated!