I am interested in widening my knowledge of designing large-scale distributed systems, and I have come across the topics of caches and proxy servers. For proxy servers we use Nginx, for example, and Nginx also has its own cache. What is the major difference between the cache inside Nginx and Redis? I have read many articles about proxy servers, but I am not completely getting their point. Some articles mention that load balancing, proxying, and caching are all done in Nginx itself, while others use a separate server for each. Could anyone please elaborate on this? Thanks in advance.
1 Answer
They each serve their own purpose. NGINX can cache the static assets you have deployed, and also responses from the backend servers, depending on their Cache-Control headers. This is useful for data that is repeatedly downloaded: JS/HTML/image files, etc. It can also cache the HTTP response for identical requests that it receives.
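As a rough sketch of NGINX response caching (the paths, zone name, and upstream here are illustrative, not from the question):

```nginx
# Illustrative config only -- cache path, zone name, and backend are made up.
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g;

upstream backend {
    server 127.0.0.1:8000;
}

server {
    listen 80;

    location /api/ {
        proxy_cache app_cache;          # serve repeated requests from cache
        proxy_cache_valid 200 10m;      # keep 200 responses for 10 minutes
        proxy_pass http://backend;      # backend Cache-Control headers still apply
    }
}
```

With this in place, identical requests within the validity window are answered from NGINX's disk cache without hitting the backend at all.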
However, in your application, you may decide that some data is expensive to compute on the fly or will be repeatedly requested, so you decide to cache this data in Redis. This could be any data you choose, and not necessarily the entire API response for an HTTP request, but even intermediary data where you don't want to hit your database but would rather return results from your cache.
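A minimal sketch of that cache-aside pattern (the function names and keys here are invented for illustration; the `cache` dict stands in for a Redis client, where `cache.get`/`cache[key] = ...` would become `r.get`/`r.setex` with redis-py):

```python
import time

# Stand-in for a Redis client: maps key -> (expiry_timestamp, value).
# With redis-py this would be redis.Redis(...), and the TTL handling
# below would be done by Redis itself via SETEX.
cache = {}

def expensive_query(user_id):
    """Simulates a slow database aggregation we don't want to repeat."""
    time.sleep(0.01)
    return {"user_id": user_id, "score": user_id * 42}

def get_user_score(user_id, ttl=60):
    key = f"user:score:{user_id}"
    entry = cache.get(key)
    if entry is not None and entry[0] > time.time():
        return entry[1]                       # cache hit: skip the database
    value = expensive_query(user_id)          # cache miss: compute...
    cache[key] = (time.time() + ttl, value)   # ...and store with an expiry
    return value
```

The point is that what you cache (a full response, a partial aggregation, a session object) and for how long is an application-level decision, which is exactly why this layer sits in Redis rather than in the proxy.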

– cppNoob
-
So I can conclude that we can cache wherever we need to, based on the type of data, frequency of computation, and other such parameters, and that a cache is something we can include wherever we might need it in the design, rather than relying on one cache unit for the entire system. Is my understanding correct? – Nikhilesh A S Jan 13 '21 at 13:06
-
@NikhileshAS A cache is a concept: store repeatedly accessed data somewhere that provides faster access. How a cache is implemented is system- and problem-specific. On a CPU, this is low-latency memory. In a cloud application deployed as a complex set of interacting services, there might be numerous cache and proxy services, each configured to match its workload/use case. There is no one-size-fits-all, and many factors can influence the decision, such as standardizing toolsets on Redis and Nginx. In some cases, a purpose-built cache is faster. – nelaaro Jun 08 '21 at 08:27