Using the MemoryCache class can save your application a great deal of time by caching the results of expensive operations such as database queries or web service requests. However, care must be taken so that managing the cache's scope and its keys does not become a source of bugs.
The best recommendation is to let the MemoryCache class manage its own scope by using the Default cache instance.
```csharp
var memoryCache = MemoryCache.Default;
```
The class lazily creates a singleton cache the first time it is referenced, regardless of where in your application that happens. This is good for your application: the cache is always available, and you never have to manage its lifetime or the scope of the objects using it.
Using the cache can be as simple as the example below shows.
```csharp
public class FooProvider
{
    private readonly MemoryCache _defaultCache = MemoryCache.Default;
    private readonly FooExpensiveProvider _fooExpensiveProvider = new FooExpensiveProvider();

    // Static so the count is shared across FooProvider instances.
    public static int CacheMissCount = 0;

    public Foo GetFooItem(string name)
    {
        var fooItem = _defaultCache.Get(name) as Foo;
        if (fooItem == null)
        {
            fooItem = _fooExpensiveProvider.GetFooItem(name);
            _defaultCache.Add(name, fooItem, DateTimeOffset.UtcNow.AddHours(1));
            Interlocked.Increment(ref CacheMissCount);
        }
        return fooItem;
    }
}

public class FooExpensiveProvider
{
    public Foo GetFooItem(string name)
    {
        // Stand-in for an expensive operation such as a database query.
        return new Foo { Name = Guid.NewGuid().ToString(), Content = "Content" };
    }
}
```
With this in place, we can write a simple test to see it working.
```csharp
class Program
{
    static void Main(string[] args)
    {
        var fooProvider = new FooProvider();
        Parallel.For(0, 1000, i =>
        {
            var fooItem = fooProvider.GetFooItem("test");
        });
        Console.WriteLine("CacheMissCount = " + FooProvider.CacheMissCount);
    }
}
```
Whoops! We got back more than one cache miss! What happened? Multiple concurrent requests to the FooProvider's GetFooItem method entered the code at the same time, each found the cache empty, and each performed the expensive operation before updating the cache.
We can prevent concurrent requests from all performing the expensive operation by taking a lock around the critical section.
```csharp
public class FooProvider
{
    private readonly MemoryCache _defaultCache = MemoryCache.Default;
    private readonly FooExpensiveProvider _fooExpensiveProvider = new FooExpensiveProvider();
    private static readonly object _cacheLock = new object(); // must be static!

    public static int CacheMissCount = 0;

    public Foo GetFooItem(string name)
    {
        var fooItem = _defaultCache.Get(name) as Foo;
        if (fooItem == null)
        {
            lock (_cacheLock)
            {
                // Check again: another thread may have populated the cache
                // between our first check and acquiring the lock.
                fooItem = _defaultCache.Get(name) as Foo;
                if (fooItem == null)
                {
                    fooItem = _fooExpensiveProvider.GetFooItem(name);
                    _defaultCache.Add(name, fooItem, DateTimeOffset.UtcNow.AddHours(1));
                    Interlocked.Increment(ref CacheMissCount);
                }
            }
        }
        return fooItem;
    }
}
```
Now when we run the test program, we get back the expected single cache miss. Notice also that it is necessary to check the cache again after acquiring the lock, because another request may have updated the cache between the first check and the lock acquisition.
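As an alternative to the explicit lock, a common variation (not part of the original example) wraps the expensive call in a Lazy&lt;Foo&gt; and lets MemoryCache.AddOrGetExisting arbitrate which thread's entry wins; only the winning Lazy ever runs its factory. A sketch, assuming the same Foo and FooExpensiveProvider types as above:

```csharp
using System;
using System.Runtime.Caching;

public class LazyFooProvider
{
    private readonly MemoryCache _defaultCache = MemoryCache.Default;
    private readonly FooExpensiveProvider _fooExpensiveProvider = new FooExpensiveProvider();

    public Foo GetFooItem(string name)
    {
        var newLazy = new Lazy<Foo>(() => _fooExpensiveProvider.GetFooItem(name));

        // AddOrGetExisting returns null if our entry was inserted, or the
        // previously cached Lazy if another thread won the race.
        var existingLazy = _defaultCache.AddOrGetExisting(
            name, newLazy, DateTimeOffset.UtcNow.AddHours(1)) as Lazy<Foo>;

        // Only the Lazy that made it into the cache ever runs its factory,
        // so the expensive operation executes at most once per key.
        return (existingLazy ?? newLazy).Value;
    }
}
```

Lazy&lt;T&gt;'s default thread-safety mode (ExecutionAndPublication) guarantees the factory runs only once even if multiple threads request Value concurrently, which removes the need for a global lock around every cache miss.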
We can also demonstrate that, because MemoryCache.Default is a singleton, multiple instances of the FooProvider class still share the same cache, and the cache item is only missed once.
```csharp
class Program
{
    static void Main(string[] args)
    {
        Task[] tasks = new Task[3];
        for (int t = 0; t < tasks.Length; t++)
        {
            tasks[t] = Task.Run(() =>
            {
                // Each task uses its own FooProvider instance.
                var fooProvider = new FooProvider();
                for (int i = 0; i < 100; i++)
                {
                    var fooItem = fooProvider.GetFooItem("test");
                }
                Console.WriteLine("FooProvider.CacheMissCount = " + FooProvider.CacheMissCount);
            });
        }
        Task.WaitAll(tasks);
    }
}
```
Using Named Caches
It is possible to use a named cache instead of the Default. It is strongly recommended that you do not, because it requires you to carefully manage the lifetime and scope of the cache yourself. This can lead to situations where more than one cache is created, or where cache-related resources are never released, causing memory leaks.
> Do not create MemoryCache instances unless it is required. If you create cache instances in client and Web applications, the MemoryCache instances should be created early in the application life cycle. You must create only the number of cache instances that will be used in your application, and store references to the cache instances in variables that can be accessed globally. For example, in ASP.NET applications, you can store the references in application state. If you create only a single cache instance in your application, use the default cache and get a reference to it from the Default property when you need to access the cache.
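If you do need a separate cache despite the recommendation above, a minimal sketch looks like the following (the cache name "ReportCache" is just an illustration). Note that MemoryCache implements IDisposable, so a named instance you create must be disposed when you are done with it, or its internal resources (including the timers used for expiration) are leaked.

```csharp
using System;
using System.Runtime.Caching;

class NamedCacheExample
{
    static void Main()
    {
        // Create one named instance early and hold onto a single reference.
        using (var reportCache = new MemoryCache("ReportCache"))
        {
            reportCache.Add("report-42", "cached report data",
                            DateTimeOffset.UtcNow.AddMinutes(30));

            var report = reportCache.Get("report-42") as string;
            Console.WriteLine(report);
        } // Dispose releases the cache's internal resources.
    }
}
```

Unlike MemoryCache.Default, nothing stops two parts of your application from each calling `new MemoryCache("ReportCache")` and ending up with two separate caches that happen to share a name, which is exactly the lifetime-management headache the Default instance avoids.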
Caching Multiple Types of Objects
Since the recommended strategy is to use only the Default cache, what if we wish to cache multiple types of object? Since the cache can store any type of object, we can differentiate cached items by prefixing the object type to the cache key. Notice in the example below that the only difference from the previous examples is the cache key: it uses a prefix to denote what kind of object we're after. This technique can also be used to cache objects of the same type under different keys.
```csharp
public Foo GetFooItem(string name)
{
    // Prefix the key with the object type to avoid collisions in the shared cache.
    string cacheKey = string.Format("Foo.ByName.{0}", name);
    var fooItem = _defaultCache.Get(cacheKey) as Foo;
    if (fooItem == null)
    {
        lock (_cacheLock)
        {
            fooItem = _defaultCache.Get(cacheKey) as Foo;
            if (fooItem == null)
            {
                fooItem = _fooExpensiveProvider.GetFooItem(name);
                _defaultCache.Add(cacheKey, fooItem, DateTimeOffset.UtcNow.AddHours(1));
                Interlocked.Increment(ref CacheMissCount);
            }
        }
    }
    return fooItem;
}
```