Microservices Caching Patterns

Sadil Chamishka
4 min read · Oct 12, 2022


Caching is a technique used in computer systems to boost performance. Latency on I/O-bound tasks limits system performance, and caching addresses this by keeping frequently accessed data close to the application. There are several caching patterns suited to different use cases, and as developers we should be aware of these architectural patterns to design more resilient applications.

1. Embedded Cache

This is the simplest possible caching pattern: the cache lives inside the application process itself. Frequent database reads can be served from the embedded cache, reducing load on the database and improving application performance. The inherent challenge of embedded caches is cache coherence in multi-node deployments, since each node's local cache must be synchronised to stay consistent. The solution is the embedded distributed cache.
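As a minimal sketch of an embedded cache in Java, the snippet below uses the Caffeine library; the `findUserInDb` loader and the size/TTL values are hypothetical placeholders, not something from the original article.

```java
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

import java.util.concurrent.TimeUnit;

public class EmbeddedCacheExample {

    // Hypothetical database lookup; stands in for a real repository call.
    static String findUserInDb(String userId) {
        return "user:" + userId;
    }

    public static void main(String[] args) {
        // In-process cache: entries live in the application's own heap.
        LoadingCache<String, String> userCache = Caffeine.newBuilder()
                .maximumSize(10_000)                       // bound memory usage
                .expireAfterWrite(5, TimeUnit.MINUTES)     // simple TTL-based eviction
                .build(EmbeddedCacheExample::findUserInDb);

        // First call loads from the "database"; later calls hit the cache.
        System.out.println(userCache.get("42"));
        System.out.println(userCache.get("42"));
    }
}
```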

2. Embedded Distributed Cache

The distributed cache solves the stale-data problem that can arise across embedded local caches. The cache nodes coordinate among themselves, issuing cache invalidations to keep the whole system consistent. Distributed caching systems like Hazelcast provide in-memory data grid capabilities by partitioning cached data across the nodes with replication, instead of keeping the same cached data on every node. These distributed caches are proven to be fault tolerant, and as developers we can use this pattern to boost application performance while preserving consistency.
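A minimal sketch with Hazelcast embedded in each application node is shown below (assuming Hazelcast 4.x or later; the map name `users` is illustrative). Members started with the same configuration discover each other and share one partitioned map.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class DistributedCacheExample {
    public static void main(String[] args) {
        // Starts an embedded Hazelcast member; members on the same network
        // form a cluster and partition the map data among themselves.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // A distributed map: entries are partitioned across cluster members
        // and kept with backups, rather than duplicated on every node.
        IMap<String, String> users = hz.getMap("users");

        users.put("42", "Alice");              // visible to all members
        System.out.println(users.get("42"));   // may be served from a remote partition

        hz.shutdown();
    }
}
```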

3. Client-Server Cache

Here the caching layer is separated from the application and exposed as a service accessed by the application nodes. The benefit of this approach is that the cache server can be managed separately (e.g. scaling, security). Communication happens over well-defined protocols, so client libraries for many programming languages are available to integrate with the cache servers. Since communication goes over the network, it costs some latency compared to the embedded caches discussed above. Caching solutions like Redis and Memcached offer this type of deployment.
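Below is a minimal sketch of the client-server pattern using the Jedis client for Redis; the host name `cache.internal`, the key, and the TTL are assumptions for illustration.

```java
import redis.clients.jedis.Jedis;

public class ClientServerCacheExample {
    public static void main(String[] args) {
        // Connect to a separately managed cache server over the network.
        try (Jedis jedis = new Jedis("cache.internal", 6379)) {
            // Write with a TTL so stale entries expire on their own.
            jedis.setex("user:42", 300, "Alice");

            // Read back; a null result means a cache miss, so fall back to the DB.
            String cached = jedis.get("user:42");
            System.out.println(cached != null ? cached : "cache miss");
        }
    }
}
```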

4. Cloud Cache

The cloud cache builds on the client-server caching pattern but moves the cache servers to a managed cloud service, which reduces the cost of managing them and brings features like auto scaling. The application and the cache servers can be deployed in the same VPC to reduce the latency cost.
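From the application's point of view only the endpoint changes. The sketch below reads the endpoint from an environment variable so the same code works against a local or a managed cache; the ElastiCache-style hostname is made up for illustration.

```java
import redis.clients.jedis.Jedis;

public class CloudCacheExample {
    public static void main(String[] args) {
        // The managed cache endpoint is injected via configuration; the
        // hostname below is a hypothetical ElastiCache-style address.
        String host = System.getenv().getOrDefault(
                "CACHE_HOST", "my-cache.abc123.use1.cache.amazonaws.com");

        // Same client-server code as before; only the endpoint differs.
        try (Jedis jedis = new Jedis(host, 6379)) {
            jedis.setex("user:42", 300, "Alice");
            System.out.println(jedis.get("user:42"));
        }
    }
}
```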

5. Sidecar Cache

In the Kubernetes world, the sidecar pattern is mostly used to separate cross-cutting concerns from the application and reduce its internal complexity. A sidecar is a container deployed alongside the application container within the same Pod, which guarantees that both containers run on the same physical machine. The sidecar cache pattern can be seen as a mixture of the embedded and client-server caching patterns: the cache server is deployed along with the application container but acts as a separate service.
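Assuming, for illustration, a Redis container added to the Pod as a sidecar, the application reaches it over the Pod's shared loopback interface; the sketch below reuses the Jedis client from earlier.

```java
import redis.clients.jedis.Jedis;

public class SidecarCacheExample {
    public static void main(String[] args) {
        // The sidecar cache shares the Pod's network namespace, so it is
        // always reachable on localhost with near-embedded latency.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.setex("session:abc", 600, "payload");
            System.out.println(jedis.get("session:abc"));
        }
    }
}
```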

6. Reverse Proxy Cache

Up to now we have discussed caching patterns where the cache layer sits between the application and the persistence layer. But a cache layer can also be placed between the load balancer and the application. Responses are cached against the requests handled by the load balancer, and cached responses are served when available. The application is completely unaware of this caching, so cache invalidation has to be done with TTL-based methods.
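A reverse proxy such as NGINX or Varnish can be configured with TTLs entirely on its own side, keeping the application fully unaware; a common alternative is for the application to hint the TTL via the standard Cache-Control header, which such proxies honor. The sketch below, using Java's built-in HTTP server, shows an endpoint hinting a 60-second TTL; the path and payload are illustrative.

```java
import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class ReverseProxyCacheExample {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        server.createContext("/products", exchange -> {
            byte[] body = "[{\"id\":1,\"name\":\"book\"}]".getBytes(StandardCharsets.UTF_8);

            // Hint a 60-second TTL; a reverse proxy in front of the app can
            // cache the response and serve it without reaching the application.
            exchange.getResponseHeaders().set("Cache-Control", "public, max-age=60");
            exchange.getResponseHeaders().set("Content-Type", "application/json");

            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        server.start();
        System.out.println("Listening on :8080 (put the caching proxy in front)");
    }
}
```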

7. Reverse Proxy Sidecar Cache

Service mesh technology provides a sidecar container that acts as a proxy for the application, taking over burdens like service discovery and other cross-cutting concerns. This proxy can also be leveraged as a caching layer that stands in front of the application and serves cached responses.
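To a calling service the caching sidecar is transparent: it simply sends requests to the proxy's local listener instead of the remote service directly. The sketch below assumes a hypothetical sidecar proxy listening on localhost:15001; the port and path are illustrative.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SidecarProxyClientExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // All outbound traffic goes through the sidecar proxy on localhost;
        // the proxy can answer from its cache or forward to the real service.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:15001/products/1"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```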
