
Please use this picture to understand the architecture as I see it. I need to implement request caching at the [balancer] level for a VoD service.

When request1 for file1 comes to the balancer, it is sent by round-robin to the varnish1 server, and the balancer saves this request in a local db/cache, etc.

Next, if varnish1 does not have this file in its cache, it sends a request to its local webserver (nginx-VOD1), and nginx then fetches file1 from the filesystem for varnish1.

So if the next request is for file1, the balancer sends it directly to varnish1, and the content in the Varnish caches is not duplicated.

So the root of the issue is this: how do I implement this scheme, or some other scheme, in such a way that content is not duplicated across the different Varnish caches?
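For example, I imagine the balancer could achieve this with URI-based consistent hashing instead of a separate request database. A minimal sketch, assuming nginx as the balancer (hostnames and ports are placeholders, not my real setup):

```
# Hash on the request URI so the same file always goes to the same
# Varnish node, avoiding duplicate cache entries across nodes
upstream varnish_pool {
    hash $request_uri consistent;     # consistent (ketama-style) hashing
    server varnish1.internal:6081;
    server varnish2.internal:6081;
    server varnish3.internal:6081;
    server varnish4.internal:6081;
}

server {
    listen 80;

    location / {
        proxy_pass http://varnish_pool;
    }
}
```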

1) Is there a ready-made free/paid solution?

2) Is this scheme justified?

3) What should I use as the balancer (nginx, HAProxy, Varnish)?

4) Where should Varnish keep its cache: on an SSD or in memory?
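For context on question 4, my understanding is that Varnish's storage backend is chosen with the -s option when starting varnishd; the paths and sizes below are only examples, not my real values:

```
# Keep the cache entirely in RAM (fastest, limited by memory size)
varnishd -a :6081 -f /etc/varnish/default.vcl -s malloc,8G

# Keep the cache in a file on SSD (can be much larger than RAM,
# but file storage is not persistent across restarts)
varnishd -a :6081 -f /etc/varnish/default.vcl -s file,/ssd/varnish_cache.bin,200G
```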

Thanks

Kein
  • Have you looked at [CloudFront](https://aws.amazon.com/cloudfront/)? It seems to do much of what you're depicting, minimal config required, and only charges for bandwidth -- not storage -- so the size of the cache doesn't matter. Since you appear to be unfamiliar with exactly how to engineer this, it might be a better option. Google Cloud CDN and CloudFlare are similar services. – Michael - sqlbot Jun 13 '18 at 01:34
  • Thanks Michael. Are these services DMCA-ignored? ) – Kein Jun 13 '18 at 12:58
  • There are no legitimate service providers that ignore laws and regulations related to intellectual property rights... so, no, it is not "DMCA ignored." – Michael - sqlbot Jun 13 '18 at 19:43

1 Answer


I think your scheme is too complex and does not make optimal use of the Varnish cache.

I propose simplifying it by putting a single, bigger Varnish server in front, which distributes the requests across the 4 backends.
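For example, a minimal VCL sketch of that idea, assuming Varnish 4.x and placeholder backend names:

```
vcl 4.0;
import directors;

# The four nginx-VOD backends behind a single, larger Varnish
backend vod1 { .host = "nginx-vod1.internal"; .port = "80"; }
backend vod2 { .host = "nginx-vod2.internal"; .port = "80"; }
backend vod3 { .host = "nginx-vod3.internal"; .port = "80"; }
backend vod4 { .host = "nginx-vod4.internal"; .port = "80"; }

sub vcl_init {
    # Round-robin director spreads cache misses across the backends
    new vod = directors.round_robin();
    vod.add_backend(vod1);
    vod.add_backend(vod2);
    vod.add_backend(vod3);
    vod.add_backend(vod4);
}

sub vcl_recv {
    set req.backend_hint = vod.backend();
}
```

This way there is only one cache, so nothing is stored twice, and the balancing question moves to the backend side.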