
I've been looking at creating a highly available setup on EC2. It would involve several web server nodes acting as front ends to multiple NFS and database servers (back ends). Ideally, the platform would incorporate several load balancers to distribute traffic evenly across the front-end nodes.

I just came across a project that lets you mount an S3 bucket directly into the Linux file system. It supports AWS authentication, so you could keep non-public data there. Has anyone used this type of setup (Web Server -> S3 + DB -> Browser)?
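For concreteness, the mount would look something like this with s3fs-fuse (the bucket name and mount point are placeholders, and this assumes credentials are already in the s3fs default passwd file):

```shell
# Assumes s3fs-fuse is installed and ~/.passwd-s3fs contains a line of the
# form ACCESS_KEY_ID:SECRET_ACCESS_KEY, with 600 permissions.
mkdir -p /mnt/s3-assets
s3fs my-bucket /mnt/s3-assets -o passwd_file="${HOME}/.passwd-s3fs"

# The bucket contents now appear as an ordinary directory:
ls /mnt/s3-assets

# Unmount when finished:
fusermount -u /mnt/s3-assets
```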

Trent Scott

1 Answer


In all honesty, it doesn't work very well; the performance just isn't up to snuff. You're far better off putting all your static data into S3 (probably as a tarball) and having nodes download and extract it into ephemeral storage at boot (or at deploy time), then serve it locally.
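A minimal user-data sketch of that boot-time approach (the bucket name, object key, and destination directory are all placeholders; assumes the AWS CLI and instance credentials are available):

```shell
#!/bin/sh
# Boot-time sketch: fetch the static-asset tarball from S3 and unpack it
# onto ephemeral storage so the web server can serve it from local disk.
set -e

BUCKET=my-assets-bucket        # placeholder bucket name
KEY=static-assets.tar.gz       # placeholder object key
DEST=/mnt/ephemeral/static     # typical ephemeral mount point; adjust to taste

mkdir -p "$DEST"
aws s3 cp "s3://$BUCKET/$KEY" /tmp/static.tar.gz
tar -xzf /tmp/static.tar.gz -C "$DEST"
rm -f /tmp/static.tar.gz
```

Re-running the same script at deploy time gives you a cheap way to push asset updates without rebuilding the AMI.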

This doesn't solve the customer-assets problem, but there are better solutions for that too (I'm a fan of dedicated storage servers that serve files directly, or over a higher-level, application-specific protocol, as I've discussed previously). Don't forget you can also use S3 or CloudFront to serve assets directly to customers, which works well in the right circumstances.

womble
  • Great answer, thanks! I currently use Akamai CDN for serving assets/content to customers and it works well. Thought I'd ask and you confirmed my suspicions about using S3 in this manner. – Trent Scott Sep 17 '11 at 21:59