ASP.NET Core 2.2 & 3 REST API #25 — Response caching using Redis
This is not a tutorial on what Redis is and how it works.
Caching with Redis makes your data-access queries faster by storing frequently accessed (or all) entities in memory, thus avoiding disk seeks, which are typically several orders of magnitude slower.
- Add the `Microsoft.Extensions.Caching.StackExchangeRedis` NuGet package.
- Start by creating a new `IResponseCacheService` interface (make an implementation too).
- You already know the job: create a new installer, `CacheInstaller`, read the configuration from `appsettings.json`, add it as a singleton for future usage, and finally populate the `RedisCacheSettings` class (located under the `Cache` directory).
If the cache is disabled, the installer returns without registering anything. If it is enabled, we register the Redis cache using the package above and then add our interface to the DI container as well.
That’s it for the setup.
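The installer described above can be sketched like this. It assumes the series’ `IInstaller` convention and a `RedisCacheSettings` class exposing `Enabled` and `ConnectionString` properties (those property names are assumptions):

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class CacheInstaller : IInstaller
{
    public void InstallServices(IServiceCollection services, IConfiguration configuration)
    {
        // Bind the "RedisCacheSettings" section of appsettings.json and expose it via DI
        var redisCacheSettings = new RedisCacheSettings();
        configuration.GetSection(nameof(RedisCacheSettings)).Bind(redisCacheSettings);
        services.AddSingleton(redisCacheSettings);

        // Cache disabled: register nothing else
        if (!redisCacheSettings.Enabled)
            return;

        // Register the Redis-backed IDistributedCache plus our own abstraction
        services.AddStackExchangeRedisCache(options =>
            options.Configuration = redisCacheSettings.ConnectionString);
        services.AddSingleton<IResponseCacheService, ResponseCacheService>();
    }
}
```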
You don’t want to cache everything in your application: create/update/delete responses should not be cached. Read/get operations, however, can be. Take a blog, for example; the posts don’t change much (if at all).
Again, there are lots of ways to make this possible; we are going to use [Attributes]. This is pretty similar to the previous tutorial on API key-based authentication.
Let’s create some new filter middleware, exactly like the last time.
We need to do 2 things in our service:
- Cache things (with a time-to-live)
- Retrieve things (from the cache)
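A minimal shape for the interface could look like this (the method and parameter names are illustrative):

```csharp
using System;
using System.Threading.Tasks;

public interface IResponseCacheService
{
    // Store a serialised response under a key, with a time-to-live
    Task CacheResponseAsync(string cacheKey, object response, TimeSpan timeToLive);

    // Retrieve a cached response, or null on a cache miss
    Task<string> GetCachedResponseAsync(string cacheKey);
}
```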
Redis ultimately accepts bytes, so the actual responses will be converted from strings to bytes when stored.
Create some stub methods in the implementation and let’s start making the service.
For the saving part, the logic is pretty straightforward.
- If the response does not exist, return and save nothing
- Otherwise, serialise it and save it, passing `DistributedCacheEntryOptions` in order to support a TTL
Retrieving is even simpler:
- Find the entry by its cache key
- If it does not exist, return `null`; otherwise return the actual value
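Putting both operations together, a sketch of the implementation might look like this; it relies on `IDistributedCache` (registered by `AddStackExchangeRedisCache`) and uses Newtonsoft.Json for serialisation:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Newtonsoft.Json;

public class ResponseCacheService : IResponseCacheService
{
    private readonly IDistributedCache _distributedCache;

    public ResponseCacheService(IDistributedCache distributedCache)
    {
        _distributedCache = distributedCache;
    }

    public async Task CacheResponseAsync(string cacheKey, object response, TimeSpan timeToLive)
    {
        // Nothing to cache
        if (response == null)
            return;

        var serializedResponse = JsonConvert.SerializeObject(response);

        // The entry options carry the TTL
        await _distributedCache.SetStringAsync(cacheKey, serializedResponse,
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = timeToLive
            });
    }

    public async Task<string> GetCachedResponseAsync(string cacheKey)
    {
        var cachedResponse = await _distributedCache.GetStringAsync(cacheKey);

        // Cache miss: return null so the caller knows to hit the database
        return string.IsNullOrEmpty(cachedResponse) ? null : cachedResponse;
    }
}
```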
Serving Cached Responses
Middleware logic is pretty straightforward:
- If caching is not enabled, simply invoke the next filter without doing a thing
- If the response is cached with a valid TTL, return it. We are only caching GET requests here, which is why we always return `application/json` with an `HTTP 200` status code. Adapt this as you wish.
- Otherwise, query the database by invoking the next filter, save the result to the cache, and return it
The last missing part is the `GenerateCacheKeyFromRequest` method. Here’s what we are going to use:

- The request’s path
- The request’s query parameters, ordered by key to avoid duplicate entries for the same query, appended as `key-value` pairs

Using a `StringBuilder` here is a must: strings are immutable, so every manipulation would otherwise create a new instance.
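The filter and the key generation can be sketched together like this (the `[Cached]` name and the 600-second TTL in the usage example are illustrative choices):

```csharp
using System;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.DependencyInjection;

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class CachedAttribute : Attribute, IAsyncActionFilter
{
    private readonly int _timeToLiveSeconds;

    public CachedAttribute(int timeToLiveSeconds)
    {
        _timeToLiveSeconds = timeToLiveSeconds;
    }

    public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
    {
        // 1. Caching disabled: pass straight through
        var cacheSettings = context.HttpContext.RequestServices.GetRequiredService<RedisCacheSettings>();
        if (!cacheSettings.Enabled)
        {
            await next();
            return;
        }

        var cacheService = context.HttpContext.RequestServices.GetRequiredService<IResponseCacheService>();
        var cacheKey = GenerateCacheKeyFromRequest(context.HttpContext.Request);

        // 2. Cache hit: short-circuit with the stored JSON and a 200
        var cachedResponse = await cacheService.GetCachedResponseAsync(cacheKey);
        if (!string.IsNullOrEmpty(cachedResponse))
        {
            context.Result = new ContentResult
            {
                Content = cachedResponse,
                ContentType = "application/json",
                StatusCode = 200
            };
            return;
        }

        // 3. Cache miss: run the rest of the pipeline, then store the result
        var executedContext = await next();
        if (executedContext.Result is OkObjectResult okObjectResult)
        {
            await cacheService.CacheResponseAsync(cacheKey, okObjectResult.Value,
                TimeSpan.FromSeconds(_timeToLiveSeconds));
        }
    }

    private static string GenerateCacheKeyFromRequest(HttpRequest request)
    {
        var keyBuilder = new StringBuilder();
        keyBuilder.Append(request.Path);

        // Order by key so ?a=1&b=2 and ?b=2&a=1 produce the same cache entry
        foreach (var (key, value) in request.Query.OrderBy(x => x.Key))
        {
            keyBuilder.Append($"|{key}-{value}");
        }

        return keyBuilder.ToString();
    }
}
```

Usage is then a matter of decorating a GET action, e.g. `[Cached(600)]` for a ten-minute TTL.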
We can test this easily. First, spin up a Redis instance; we highly recommend using Docker: `docker run -p 6379:6379 redis`.
Let’s see what happens when we use the API. Right after hitting an endpoint, the response is cached inside Redis.
We can also prove this by looking at the console logs. Only the first request actually queries the database. All subsequent ones just serve from the cache.
In this example, we can also take speed into account.
- The first request took `411ms` (this is terrible because it’s the first call ever made to the database; normally this would be `30-40ms`)
- Subsequent requests should be in the `7-12ms` range, which is more than four times faster
You can now update the `docker-compose.yml` to have a Redis instance start up with the other services as well.
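A minimal sketch of the extra compose entry (the service name `redis` and the rest of your compose file are assumptions):

```yaml
services:
  # Redis instance for response caching, exposed on the default port
  redis:
    image: redis
    ports:
      - "6379:6379"
```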
Up Next: Pagination
The code is available on GitHub and the instructional videos are located on YouTube.
Keep Coding