memory and distributed caching in .net core

Today we shall be discussing caching. The concept of caching is rather straightforward: store your data on a faster secondary source, typically in memory, rather than only on your primary data source, typically a database. That way, when your application receives requests, the data is pulled from the faster source, which means faster response times.

In .Net Core there are two options: memory caching or distributed caching. Memory caching is, as the name implies, in memory, contained within the memory of the web server the application is running on. If your application runs on multiple web servers, then distributed caching (or sticky sessions) would be a better option. Distributed caching makes your application scalable, allows session data to be shared between the web servers and does not reset when a new version is deployed. Both types of caching store values as key-value pairs. In this post I will be using Redis as my choice for distributed caching. But what if we use them both at the same time? Here's a small POC to show how they can work together.

I created a new .Net Core solution and selected the API template. This template comes with a default WeatherForecast controller and I used that as my skeleton to implement memory and distributed caching. I figured that the temperature is a realistic value that can be cached for a few minutes since it’s not a value that changes rapidly.

I left that untouched for now and instead created a class library to act as my business layer. In there I added a new interface which acts as my caching service, with the following logic: check if the key is in the memory cache and, if found, return the value. If the key is not found, check the distributed cache and, if found, return the value. If the key is still not found, look up the value from the primary source and save it in both the memory and distributed caches. In order to connect to Redis I had to download and install the NuGet package StackExchange.Redis.
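
The interface itself only exposes a single method; a minimal sketch, matching how it is used further down (the exact file in my repository may differ slightly), is:

public interface ICacheService
{
    Task<T> GetOrSet<T>(string key, Func<Task<T>> factory, TimeSpan cacheExpiry);
}

And here is the implementation of the caching service: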

public class CacheService : ICacheService
{
    private readonly IConnectionMultiplexer _muxer;
    private readonly IDatabase _conn;
    private readonly IMemoryCache _memCache;

    public CacheService(IConnectionMultiplexer muxer, IMemoryCache memCache)
    {
        _muxer = muxer;
        _conn = _muxer.GetDatabase();
        _memCache = memCache;
    }

    public async Task<T> GetOrSet<T>(string key, Func<Task<T>> factory, TimeSpan cacheExpiry)
    {
        // First level: in-memory cache. If the key is missing, fall through to Redis.
        var value = await _memCache.GetOrCreateAsync<T>(key, entry =>
        {
            entry.AbsoluteExpiration = DateTime.UtcNow.Add(cacheExpiry);
            return GetFromRedis(key, factory, cacheExpiry);
        });
        return value;
    }

    private async Task<T> GetFromRedis<T>(string key, Func<Task<T>> factory, TimeSpan cacheExpiry)
    {
        try
        {
            // Second level: Redis. If the key is found, deserialize and return it.
            var value = await _conn.StringGetAsync(key);
            if (value.HasValue)
            {
                try
                {
                    return JsonConvert.DeserializeObject<T>(value);
                }
                catch (Exception)
                {
                    // The value was not stored as JSON, e.g. a plain string or number.
                    return (T)Convert.ChangeType((string)value, typeof(T));
                }
            }

            // Miss on both levels: fetch from the primary source and store the result in Redis.
            var item = await factory.Invoke();
            if (item != null)
            {
                var serializedValue = JsonConvert.SerializeObject(item);
                await _conn.StringSetAsync(key, serializedValue, cacheExpiry, When.Always, CommandFlags.None);
                return item;
            }
            return default(T);
        }
        catch (Exception)
        {
            // Swallow Redis errors and return the default value instead of failing the request.
            return default(T);
        }
    }
}
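
Worth pointing out: GetOrCreateAsync only invokes the Redis lookup when the key is missing from the memory cache, so a memory-cache hit never touches Redis or the primary data source. If Redis throws, the service swallows the exception and returns the default value rather than bubbling the error up.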

I decided to use an API HTTP request as my primary source instead of a database call. Sticking with the weather theme, I consumed the Open Weather API to get that feeling of playing around with live data. Because the second parameter of the caching service's GetOrSet method is a function, I created a new weather service whose responsibility is to consume the Open Weather API. As I said earlier, this function could just as well be a database call; in that case we would inject the function that retrieves the data instead. For completeness' sake, and in case anyone wants a code snippet showing how to consume the Open Weather API, here's my implementation.

public class WeatherService : IWeatherService
{
    public async Task<OpenWeather> GetWeather(string cityName)
    {
        if (string.IsNullOrWhiteSpace(cityName))
            throw new ArgumentNullException(nameof(cityName), "Provide city name");

        var apiKey = "your OpenWeather API key";
        using (var httpClient = new HttpClient())
        {
            using (var response = await httpClient.GetAsync($"https://api.openweathermap.org/data/2.5/weather?q={cityName}&appid={apiKey}&units=metric"))
            {
                // Fail fast on a non-success status code instead of deserializing an error body.
                response.EnsureSuccessStatusCode();
                return JsonConvert.DeserializeObject<OpenWeather>(await response.Content.ReadAsStringAsync());
            }
        }
    }
}
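
For reference, OpenWeather is just a set of POCOs that Json.NET deserializes the Open Weather response into. A trimmed-down sketch covering only the fields used in this post (the real response, and the model in my repository, contain more) could look like this:

public class OpenWeather
{
    public Main main { get; set; }
    public List<Weather> weather { get; set; }
}

public class Main
{
    public double temp { get; set; }
}

public class Weather
{
    public string description { get; set; }
}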

I then updated the default WeatherForecast controller to use the caching service and weather service. Originally this was returning some random data and was not connected to any data source whatsoever.

[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    private readonly ILogger<WeatherForecastController> _logger;
    private readonly ICacheService _cacheService;
    private readonly IWeatherService _weatherService;

    public WeatherForecastController(ILogger<WeatherForecastController> logger, ICacheService cacheService, IWeatherService weatherService)
    {
        _logger = logger;
        _cacheService = cacheService;
        _weatherService = weatherService;
    }

    [HttpGet]
    public async Task<WeatherForecast> GetAsync(string city)
    {
        var cacheExpiry = TimeSpan.FromSeconds(10);
        var weather = await _cacheService.GetOrSet<OpenWeather>(city, () => _weatherService.GetWeather(city), cacheExpiry);

        return new WeatherForecast
        {
            Date = DateTime.Now,
            TemperatureC = weather.main.temp,
            Summary = weather.weather[0].description
        };
    }
}
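
Hitting GET /weatherforecast?city=London, or any other city, now returns live data on the first request and the cached temperature on subsequent requests, until the ten-second expiry passes and both caches are repopulated.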

The services are injected into the WeatherForecast controller using dependency injection, so I had to update the ConfigureServices method inside the Startup class and register both services. I also registered the memory and distributed caching services.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddMemoryCache();
    services.AddSingleton<IConnectionMultiplexer>(provider => ConnectionMultiplexer.Connect("your redis connection string"));
    services.AddScoped<ICacheService, CacheService>();
    services.AddScoped<IWeatherService, WeatherService>();
}
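
Note that the ConnectionMultiplexer is registered as a singleton because it is designed to be created once and shared across the whole application, whereas the caching and weather services are simply scoped per request.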

Last but not least, I also created some unit tests to wrap everything up nicely.

[TestClass]
public class CacheServiceTests
{
    private CacheService _cacheService;
    private Mock<IConnectionMultiplexer> _mockMuxer;
    private Mock<IDatabase> _mockRedisDb;

    public CacheServiceTests()
    {
        _mockMuxer = new Mock<IConnectionMultiplexer>();
        _mockRedisDb = new Mock<IDatabase>();
        // Make sure the CacheService receives the mocked IDatabase when it calls GetDatabase().
        _mockMuxer.Setup(m => m.GetDatabase(It.IsAny<int>(), It.IsAny<object>())).Returns(_mockRedisDb.Object);
    }

    [TestMethod]
    public async Task GetOrSet_KeyFoundInMemoryCache_ReturnsValue()
    {
        // Arrange
        var key = "TestKey";
        var value = "TestValue";
        var memoryCache = new MemoryCache(new MemoryCacheOptions());
        memoryCache.Set(key, value);
        _cacheService = new CacheService(_mockMuxer.Object, memoryCache);

        // Act
        var result = await _cacheService.GetOrSet<string>(key, () => Task.FromResult(value), TimeSpan.FromSeconds(30));

        // Assert
        Assert.IsInstanceOfType(result, typeof(string));
        Assert.AreEqual(value, result);
    }
}
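
To illustrate how the Redis fallback could be covered as well, here is a rough sketch of a second test (not part of the repository, so treat it as an illustration) where the key is missing from the memory cache but present in Redis, using the same mocks set up above:

[TestMethod]
public async Task GetOrSet_KeyNotInMemoryButFoundInRedis_ReturnsValue()
{
    // Arrange
    var key = "TestKey";
    var value = "TestValue";
    var serialized = JsonConvert.SerializeObject(value);
    _mockRedisDb.Setup(db => db.StringGetAsync(It.IsAny<RedisKey>(), It.IsAny<CommandFlags>()))
        .ReturnsAsync((RedisValue)serialized);
    var memoryCache = new MemoryCache(new MemoryCacheOptions());
    _cacheService = new CacheService(_mockMuxer.Object, memoryCache);

    // Act
    var result = await _cacheService.GetOrSet<string>(key, () => Task.FromResult("should not be used"), TimeSpan.FromSeconds(30));

    // Assert
    Assert.AreEqual(value, result);
}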

You can find the entire solution in one of my GitHub repositories, and feel free to test it out or make your own changes. This was just a proof of concept and could certainly do with some improvements, such as storing sensitive keys and connection strings in a more secure location, or supplying two different expiry times for the memory and distributed caches (a rough sketch of that idea is below). Equally, the caching service could easily be put inside a "common" project and then re-used as a NuGet package/artifact by different solutions.
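
For instance, the two-expiry idea could be as simple as an extra overload on the caching service interface; a rough sketch of what could be added (not part of the current repository) is:

// Hypothetical overload: keep the in-memory copy for a shorter time than the Redis copy.
Task<T> GetOrSet<T>(string key, Func<Task<T>> factory, TimeSpan memoryCacheExpiry, TimeSpan distributedCacheExpiry);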

That’s a wrap for today and I hope you enjoyed this blog post. Don’t be shy to leave any comments or get in touch if anything is unclear.

Peace out,
Bjorn

improving a website’s performance – part 2

A while back (i.e. more than a year ago! yeah, I kept myself busy) I wrote the first part of this post, improving a website's performance – part 1: a few tricks and tips which I've learnt over the years that improve a website's performance and, in turn, give the end user a better web experience. A better web experience increases and attracts traffic, and if your website generates money then that translates to profit *cha-ching*.

In the first part of this article the focus was on minification of resources, minimising HTTP requests and optimisation of images. In this post we will be looking at the implementation of caching, server-side improvements and useful performance tools.

Implement Caching

Modern web browsers can, and probably will, cache a website's external files, such as CSS, JS and images. As a developer, enabling HTTP caching allows these files to be stored on the user's machine after they are downloaded when the user visits the website for the first time. Once stored, they are used again whenever the user re-visits the website instead of being re-downloaded, which improves the loading and response time of the website. Think about it from this point of view: every time you hit that "Back" button, most of the data that has already been downloaded is fetched from your machine's storage instead of being pulled over the web again. One needs to be careful not to overuse this feature, though. Content that changes often must not be cached aggressively, or the user might end up reading stale information; this can be avoided by using shorter HTTP cache expiry times to force the web browser to fetch the latest version. A good trick to make sure that the web browser requests the latest files from the server, and not from the user's storage, is to change the file name (for example by appending a version number), since a new name is treated as a new resource.
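
To give one concrete example from the .NET side (other stacks have equivalent settings), here is a minimal sketch of serving static files with a Cache-Control header in an ASP.NET Core pipeline; assume app is the IApplicationBuilder in Startup.Configure, and the seven-day max-age is an arbitrary choice:

app.UseStaticFiles(new StaticFileOptions
{
    OnPrepareResponse = ctx =>
    {
        // Ask the browser to keep CSS, JS and images for 7 days (604800 seconds).
        ctx.Context.Response.Headers["Cache-Control"] = "public,max-age=604800";
    }
});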

Server Side Improvements

Improving a website's performance does not necessarily mean only tweaking things on the client side; it also means optimising processes on the server side. Do review your code and don't be afraid to make changes if a bottleneck is found, because these optimisations will reflect on the website's response and page load times. Tighten loops where possible by moving unnecessary logic out of them, and if a loop with many iterations has already achieved its result after just a few, exit the loop and move on to the next process. Likewise, if a database connection or authentication call is being created and populated inside a loop, do yourself a favour and move it outside the loop to minimise the number of calls being made (see the sketch below). Other ways to write efficient code are to declare variable types when the data type is known, avoid unnecessary variable conversions and, once processing is complete, set variables which aren't going to be used again to null, particularly if they are of complex types and hold large amounts of data. Lastly, before writing the code, analyse the flows and consider an appropriate design. You would be surprised what a difference it makes in the long run to actually sit down and sketch the logic on a piece of paper before implementing it on your computer.
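
As a small illustration of the loop advice above, here is a sketch using a made-up ICustomerRepository (all the types and method names are hypothetical); the first version makes a database round trip on every iteration and keeps looping after the answer has been found, while the second batches the call outside the loop and exits early:

public class Customer
{
    public int Id { get; set; }
    public bool IsVip { get; set; }
}

// Hypothetical repository used only for this illustration.
public interface ICustomerRepository
{
    Customer GetById(int id);                               // one database round trip per call
    IEnumerable<Customer> GetByIds(IEnumerable<int> ids);   // one round trip for the whole batch
}

public static class CustomerLookup
{
    // Before: a database round trip on every iteration, and the loop keeps running
    // even after the result has been found.
    public static Customer FindVipSlow(ICustomerRepository repository, IEnumerable<int> customerIds)
    {
        Customer vip = null;
        foreach (var id in customerIds)
        {
            var customer = repository.GetById(id);
            if (customer.IsVip && vip == null)
                vip = customer;
        }
        return vip;
    }

    // After: one batched call outside the loop, and an early exit as soon as a match is found.
    public static Customer FindVipFast(ICustomerRepository repository, IEnumerable<int> customerIds)
    {
        foreach (var customer in repository.GetByIds(customerIds))
        {
            if (customer.IsVip)
                return customer;
        }
        return null;
    }
}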

Performance Tools

To test the improvements, or simply to get an idea of your website's current rating, there are a few tools you can make use of. These tools analyse websites and grade the features related to performance, such as minification of resources, the number of HTTP requests and the size of the images being loaded; in other words, the implementations we have been discussing above and in the previous blog post. On top of that they check the website's response time from different geographical locations and different web browsers, and some also check for mobile responsiveness. There is quite a selection of tools to choose from: some are online tools which test and produce their results via their website, whereas others have to be downloaded and installed on your computer or in your web browser. Some of the most popular are Pingdom, KeyCDN, Website Speed Test and GTmetrix, the last two being completely free to use. Worth mentioning is that GTmetrix makes use of Yahoo's YSlow and Google's PageSpeed, two reliable tools that perform the same job; GTmetrix takes both results, compares them and produces a final rating.

Developing a website which looks nice and attractive is quite important if you would like a steady stream of visitors, because at the end of the day design is considered a huge factor in web development. Having said that, it is just as important to give the visitor a website with a quick response time and an overall better user experience. When developing a website, keep in mind that it should run smoothly, and implementing the features we have discussed will surely help to achieve that result.