Setting Up the Cache

The cache system requires a small amount of setup. Namely, you have to tell it where your cached data should live: in a database, on the filesystem, or directly in memory. This is an important decision that affects your cache’s performance.

Your cache preference goes in the CACHES setting in your settings file.

Memcached

The fastest, most efficient type of cache supported natively by Django, Memcached is an entirely memory-based cache server, originally developed to handle high loads at LiveJournal.com and subsequently open-sourced by Danga Interactive. It’s used by sites such as Facebook and Wikipedia to reduce database access and dramatically increase site performance.
Memcached runs as a daemon and is allotted a specified amount of RAM. All it does is provide a fast interface for adding, retrieving and deleting data in the cache. All data is stored directly in memory, so there’s no overhead of database or filesystem usage.

After installing Memcached itself, you’ll need to install a Memcached binding. There are several Python Memcached bindings available; the two most common are python-memcached and pylibmc. To use Memcached with Django:

  • Set BACKEND to django.core.cache.backends.memcached.MemcachedCache or django.core.cache.backends.memcached.PyLibMCCache (depending on your chosen Memcached binding).
  • Set LOCATION to ip:port values, where ip is the IP address of the Memcached daemon and port is the port on which Memcached is running, or to a unix:path value, where path is the path to a Memcached Unix socket file.

In this example, Memcached is running on localhost (127.0.0.1) port 11211, using the python-memcached binding:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
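
Once Memcached and the binding are installed and a setting like the one above is in place, you can quickly confirm that Django can reach the cache from a python manage.py shell session using the low-level cache API (the key name greeting here is arbitrary):

>>> from django.core.cache import cache
>>> cache.set('greeting', 'hello', 30)    # store 'hello' under 'greeting' for 30 seconds
>>> cache.get('greeting')
'hello'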

In this example, Memcached is available through a local Unix socket file /tmp/memcached.sock using the python-memcached binding:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'unix:/tmp/memcached.sock',
    }
}

One excellent feature of Memcached is its ability to share a cache over multiple servers. This means you can run Memcached daemons on multiple machines, and the program will treat the group of machines as a single cache, without the need to duplicate cache values on each machine. To take advantage of this feature, include all server addresses in LOCATION, either separated by semicolons or as a list.

In this example, the cache is shared over Memcached instances running on the IP addresses 172.19.26.240 and 172.19.26.242, both on port 11211:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': [
            '172.19.26.240:11211',
            '172.19.26.242:11211',
        ],
    }
}

In the following example, the cache is shared over Memcached instances running on the IP addresses 172.19.26.240 (port 11211), 172.19.26.242 (port 11212), and 172.19.26.244 (port 11213):

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': [
            '172.19.26.240:11211',
            '172.19.26.242:11212',
            '172.19.26.244:11213',
        ],
    }
}
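
If you prefer, the server list can also be written as a single semicolon-delimited string rather than a list. For example, the first multi-server configuration above could equivalently be written as:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '172.19.26.240:11211;172.19.26.242:11211',
    }
}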

A final point about Memcached is that memory-based caching has a disadvantage: because the cached data is stored in memory, the data will be lost if your server crashes.

Clearly, memory isn’t intended for permanent data storage, so don’t rely on memory-based caching as your only data storage. Without a doubt, none of the Django caching backends should be used for permanent storage – they’re all intended to be solutions for caching, not storage – but we point this out here because memory-based caching is particularly temporary.

Database Caching

Django can store its cached data in your database. This works best if you’ve got a fast, well-indexed database server. To use a database table as your cache backend:

  • Set BACKEND to django.core.cache.backends.db.DatabaseCache
  • Set LOCATION to tablename, the name of the database table. This name can be whatever you want, as long as it’s a valid table name that’s not already being used in your database.

In this example, the cache table’s name is my_cache_table:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
        'LOCATION': 'my_cache_table',
    }
}

Creating The Cache Table – Before using the database cache, you must create the cache table with this command:
python manage.py createcachetable

This creates a table in your database that is in the proper format that Django’s database-cache system expects. The name of the table is taken from LOCATION. If you are using multiple database caches, createcachetable creates one table for each cache. If you are using multiple databases, createcachetable observes the allow_migrate() method of your database routers. Like migrate, createcachetable won’t touch an existing table. It will only create missing tables.
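
When several databases are configured, you can also point createcachetable at a specific connection with its --database option. As a sketch, assuming a database alias named cache_primary (the alias is just an example):

python manage.py createcachetable --database=cache_primary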

Multiple Databases – If you use database caching with multiple databases, you’ll also need to set up routing instructions for your database cache table. For the purposes of routing, the database cache table appears as a model named CacheEntry, in an application named django_cache. This model won’t appear in the models cache, but the model details can be used for routing purposes.

For example, the following router would direct all cache read operations to cache_replica, and all write operations to cache_primary. The cache table will only be synchronized onto cache_primary:

class CacheRouter(object):
    """A router to control all database cache operations."""

    def db_for_read(self, model, **hints):
        # All cache read operations go to the replica
        if model._meta.app_label in ('django_cache',):
            return 'cache_replica'
        return None

    def db_for_write(self, model, **hints):
        # All cache write operations go to the primary
        if model._meta.app_label in ('django_cache',):
            return 'cache_primary'
        return None

    def allow_migrate(self, db, model):
        # Only install the cache model on the primary
        if model._meta.app_label in ('django_cache',):
            return db == 'cache_primary'
        return None
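
For a router like this to take effect, it must be listed in the DATABASE_ROUTERS setting. As a sketch, assuming the class above lives in a module such as myproject/routers.py (the module path is made up):

DATABASE_ROUTERS = ['myproject.routers.CacheRouter']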

If you don’t specify routing directions for the database cache model, the cache backend will use the default database. Of course, if you don’t use the database cache backend, you don’t need to worry about providing routing instructions for the database cache model.

Filesystem Caching

The file-based backend serializes and stores each cache value as a separate file. To use this backend, set BACKEND to 'django.core.cache.backends.filebased.FileBasedCache' and LOCATION to a suitable directory.

For example, to store cached data in /var/tmp/django_cache, use this setting:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/var/tmp/django_cache',
    }
}

If you’re on Windows, put the drive letter at the beginning of the path, like this:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': 'c:/foo/bar',
    }
}

The directory path should be absolute – that is, it should start at the root of your filesystem. It doesn’t matter whether you put a slash at the end of the setting. Make sure the directory pointed to by this setting exists and is readable and writable by the system user under which your Web server runs. Continuing the above example, if your server runs as the user apache, make sure the directory /var/tmp/django_cache exists and is readable and writable by the user apache.
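
For example, on a typical Linux setup where the web server runs as the user apache, you might create and hand over the directory roughly like this (adjust the user and path to your own environment):

sudo mkdir -p /var/tmp/django_cache
sudo chown apache /var/tmp/django_cache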

Local-Memory Caching

This is the default cache if another is not specified in your settings file. If you want the speed advantages of in-memory caching but don’t have the capability of running Memcached, consider the local-memory cache backend. To use it, set BACKEND to django.core.cache.backends.locmem.LocMemCache. For example:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'unique-snowflake',
    }
}

The cache LOCATION is used to identify individual memory stores. If you only have one locmem cache, you can omit the LOCATION; however, if you have more than one local memory cache, you will need to assign a name to at least one of them in order to keep them separate.
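
For instance, here is a sketch of a settings file with two separate local-memory caches, kept apart by their LOCATION names (the alias sessions and both names are arbitrary):

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'default-store',
    },
    'sessions': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'session-store',
    }
}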

Note that each process will have its own private cache instance, which means no cross-process caching is possible. This obviously also means the local memory cache isn’t particularly memory-efficient, so it’s probably not a good choice for production environments. It’s nice for development.

Dummy Caching (For Development)

Finally, Django comes with a dummy cache that doesn’t actually cache – it just implements the cache interface without doing anything. This is useful if you have a production site that uses heavy-duty caching in various places but a development/test environment where you don’t want to cache and don’t want to have to change your code to special-case the latter. To activate dummy caching, set BACKEND like so:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
    }
}
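
One common pattern (a sketch, assuming your development settings set DEBUG = True and production uses Memcached as configured earlier) is to pick the dummy backend only when debugging, so the rest of your code stays untouched:

if DEBUG:
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
        }
    }
else:
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': '127.0.0.1:11211',
        }
    }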

Using A Custom Cache Backend

While Django includes support for a number of cache backends out-of-the-box, sometimes you might want to use a customized cache backend. To use an external cache backend with Django, use the Python import path as the BACKEND of the CACHES setting, like so:

CACHES = {
    'default': {
        'BACKEND': 'path.to.backend',
    }
}

If you’re building your own backend, you can use the standard cache backends as reference implementations. You’ll find the code in the django/core/cache/backends/ directory of the Django source.
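
As a rough illustration of the shape such a backend takes, here is a minimal sketch that subclasses BaseCache and keeps values in a plain dictionary, ignoring timeouts and culling entirely (the class name SimpleDictCache and the module path myproject/dictcache.py are made up):

# myproject/dictcache.py (hypothetical module)
from django.core.cache.backends.base import BaseCache, DEFAULT_TIMEOUT


class SimpleDictCache(BaseCache):
    """Illustration only: stores values in a per-process dict and ignores timeouts."""

    def __init__(self, location, params):
        super(SimpleDictCache, self).__init__(params)
        self._store = {}

    def get(self, key, default=None, version=None):
        return self._store.get(self.make_key(key, version=version), default)

    def set(self, key, value, timeout=DEFAULT_TIMEOUT, version=None):
        self._store[self.make_key(key, version=version)] = value

    def add(self, key, value, timeout=DEFAULT_TIMEOUT, version=None):
        # Only store the value if the key isn't already present
        key = self.make_key(key, version=version)
        if key in self._store:
            return False
        self._store[key] = value
        return True

    def delete(self, key, version=None):
        self._store.pop(self.make_key(key, version=version), None)

    def clear(self):
        self._store.clear()

It would then be referenced from CACHES with 'BACKEND': 'myproject.dictcache.SimpleDictCache'.
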
Without a really compelling reason, such as a host that doesn’t support them, you should stick to the cache backends included with Django. They’ve been well-tested and are easy to use.

Cache Arguments

Each cache backend can be given additional arguments to control caching behavior. These arguments are provided as additional keys in the CACHES setting. Valid arguments are as follows:

  • TIMEOUT: The default timeout, in seconds, to use for the cache. This argument defaults to 300 seconds (5 minutes). You can set TIMEOUT to None so that, by default, cache keys never expire. A value of 0 causes keys to expire immediately (effectively, don’t cache).
  • OPTIONS: Any options that should be passed to the cache backend. The list of valid options will vary with each backend, and cache backends backed by a third-party library will pass their options directly to the underlying cache library. Cache backends that implement their own culling strategy (i.e., the locmem, filesystem and database backends) honor the following options:
      • MAX_ENTRIES: The maximum number of entries allowed in the cache before old values are deleted. This argument defaults to 300.
      • CULL_FREQUENCY: The fraction of entries that are culled when MAX_ENTRIES is reached. The actual ratio is 1 / CULL_FREQUENCY, so set CULL_FREQUENCY to 2 to cull half the entries when MAX_ENTRIES is reached. This argument should be an integer and defaults to 3. A value of 0 for CULL_FREQUENCY means that the entire cache will be dumped when MAX_ENTRIES is reached. On some backends (the database backend in particular) this makes culling much faster at the expense of more cache misses.
  • KEY_PREFIX: A string that will be automatically included (prepended by default) in all cache keys used by the Django server.
  • VERSION: The default version number for cache keys generated by the Django server.
  • KEY_FUNCTION: A string containing a dotted path to a function that defines how to compose a prefix, version and key into a final cache key.

In this example, a filesystem backend is being configured with a timeout of 60 seconds, and a maximum capacity of 1000 items:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/var/tmp/django_cache',
        'TIMEOUT': 60,
        'OPTIONS': {'MAX_ENTRIES': 1000},
    }
}
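
As a small illustration of KEY_FUNCTION, the following sketch mirrors what Django does by default, joining the prefix, version and key with colons (the module path myapp/cache_utils.py is made up):

# myapp/cache_utils.py (hypothetical module)
def make_key(key, key_prefix, version):
    # Same recipe Django uses by default: '<prefix>:<version>:<key>'
    return '%s:%s:%s' % (key_prefix, version, key)

It would be wired up by adding 'KEY_FUNCTION': 'myapp.cache_utils.make_key' to the relevant CACHES entry.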
