Django’s cache framework



A fundamental trade-off in dynamic Web sites is, well, they’re dynamic. Each time a user requests a page, the Web server makes all sorts of calculations – from database queries to template rendering to business logic – to create the page that your site’s visitor sees. This is a lot more expensive, from a processing-overhead perspective, than your standard read-a-file-off-the-filesystem server arrangement.

★ A basic fact of dynamic sites is that every request must be computed before a result is returned, and that computation is expensive.

For most Web applications, this overhead isn’t a big deal. Most Web applications aren’t washingtonpost.com or slashdot.org; they’re simply small- to medium-sized sites with so-so traffic. But for medium- to high-traffic sites, it’s essential to cut as much overhead as possible.

★ In practice, low-traffic sites don’t need the cache framework.

That’s where caching comes in.

To cache something is to save the result of an expensive calculation so that you don’t have to perform the calculation next time. Here’s some pseudocode explaining how this would work for a dynamically generated Web page:

★ Caching stores the results of expensive calculations so they don’t have to be repeated.

given a URL, try finding that page in the cache
if the page is in the cache:
    return the cached page
else:
    generate the page
    save the generated page in the cache (for next time)
    return the generated page
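The pseudocode above can be sketched as runnable Python. The dict-based cache and the `generate_page` function are illustrative stand-ins, not part of Django:

```python
# Minimal sketch of the cache-aside pattern from the pseudocode above.
cache = {}

def generate_page(url):
    # In a real site this would run queries, render templates, etc.
    return "<html>content for %s</html>" % url

def get_page(url):
    if url in cache:             # the page is in the cache
        return cache[url]
    page = generate_page(url)    # generate the page
    cache[url] = page            # save it for next time
    return page
```

After the first call for a given URL, subsequent calls return the saved result without recomputing it.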

Django comes with a robust cache system that lets you save dynamic pages so they don’t have to be calculated for each request. For convenience, Django offers different levels of cache granularity: You can cache the output of specific views, you can cache only the pieces that are difficult to produce, or you can cache your entire site.

★ Django ships with a robust cache system.

Django also works well with “downstream” caches, such as Squid and browser-based caches. These are the types of caches that you don’t directly control but to which you can provide hints (via HTTP headers) about which parts of your site should be cached, and how.

★ Django works well with downstream caches such as Squid and browser-based caches. You don’t control these caches directly, but you can tell them via HTTP headers which parts of the site should be cached, and how.

Setting up the cache   ★ A brief survey of the available cache mechanisms

The cache system requires a small amount of setup. Namely, you have to tell it where your cached data should live – whether in a database, on the filesystem or directly in memory. This is an important decision that affects your cache’s performance; yes, some cache types are faster than others.

★ The cache system needs a little setup: you must tell it where the cached data should live – in a database, on the filesystem, or directly in memory. This is an important decision because it determines how well your cache performs; some cache types are simply faster than others.

Your cache preference goes in the CACHES setting in your settings file. Here’s an explanation of all available values for CACHES.

★ Caching behavior is configured via the CACHES setting in your settings file.

Memcached

By far the fastest, most efficient type of cache available to Django, Memcached is an entirely memory-based cache framework originally developed to handle high loads at LiveJournal.com and subsequently open-sourced by Danga Interactive. It is used by sites such as Facebook and Wikipedia to reduce database access and dramatically increase site performance.

★ By far the fastest cache available to Django is Memcached; by cutting database access it dramatically increases site performance.

Memcached runs as a daemon and is allotted a specified amount of RAM. All it does is provide a fast interface for adding, retrieving and deleting arbitrary data in the cache. All data is stored directly in memory, so there’s no overhead of database or filesystem usage.

★ Memcached runs as a daemon and is allotted a fixed amount of RAM. All it does is provide a fast interface for adding, retrieving and deleting data in the cache. Everything is kept in memory; no database or filesystem is involved.

After installing Memcached itself, you’ll need to install a memcached binding. There are several python memcached bindings available; the two most common are python-memcached and pylibmc.

★ After installing Memcached itself, install a Python binding for it; the two most common are python-memcached and pylibmc.

To use Memcached with Django:   ★ How to configure Memcached in Django

  • Set BACKEND to django.core.cache.backends.memcached.MemcachedCache or django.core.cache.backends.memcached.PyLibMCCache (depending on your chosen memcached binding)
      ★ Set BACKEND
  • Set LOCATION to ip:port values, where ip is the IP address of the Memcached daemon and port is the port on which Memcached is running, or to a unix:path value, where path is the path to a Memcached Unix socket file.
      ★ Set LOCATION

★ Concrete examples of configuring Memcached follow.

In this example, Memcached is running on localhost (127.0.0.1) port 11211, using the python-memcached binding:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

In this example, Memcached is available through a local Unix socket file /tmp/memcached.sock using the python-memcached binding:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'unix:/tmp/memcached.sock',
    }
}

One excellent feature of Memcached is its ability to share cache over multiple servers. This means you can run Memcached daemons on multiple machines, and the program will treat the group of machines as a single cache, without the need to duplicate cache values on each machine. To take advantage of this feature, include all server addresses in LOCATION, either separated by semicolons or as a list.

★ Memcached can be shared across multiple servers; an example follows.

In this example, the cache is shared over Memcached instances running on IP address 172.19.26.240 and 172.19.26.242, both on port 11211:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': [
            '172.19.26.240:11211',
            '172.19.26.242:11211',
        ]
    }
}

In the following example, the cache is shared over Memcached instances running on the IP addresses 172.19.26.240 (port 11211), 172.19.26.242 (port 11212), and 172.19.26.244 (port 11213):

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': [
            '172.19.26.240:11211',
            '172.19.26.242:11212',
            '172.19.26.244:11213',
        ]
    }
}

A final point about Memcached is that memory-based caching has one disadvantage: Because the cached data is stored in memory, the data will be lost if your server crashes. Clearly, memory isn’t intended for permanent data storage, so don’t rely on memory-based caching as your only data storage. Without a doubt, none of the Django caching backends should be used for permanent storage – they’re all intended to be solutions for caching, not storage – but we point this out here because memory-based caching is particularly temporary.

★ Memcached has one drawback: the cached data lives in memory, so it is lost if the server crashes. Memory is not reliable permanent storage, so never rely on a memory-based cache as your only data store. Remember, memory is for caching, not storage – data cached there is inherently temporary.

Database caching

To use a database table as your cache backend, first create a cache table in your database by running this command:

★ To use a database table as the cache backend, first create the cache table with the command below.

$ python manage.py createcachetable [cache_table_name]

...where [cache_table_name] is the name of the database table to create. (This name can be whatever you want, as long as it’s a valid table name that’s not already being used in your database.) This command creates a single table in your database that is in the proper format that Django’s database-cache system expects.

Once you’ve created that database table, set your BACKEND setting to "django.core.cache.backends.db.DatabaseCache", and LOCATION to tablename – the name of the database table. In this example, the cache table’s name is my_cache_table:

★ Once the table exists in the database, caching can be used: set BACKEND to django.core.cache.backends.db.DatabaseCache and LOCATION to the table name. For example:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
        'LOCATION': 'my_cache_table',
    }
}

The database caching backend uses the same database as specified in your settings file. You can’t use a different database backend for your cache table.

Database caching works best if you’ve got a fast, well-indexed database server.

★ Database caching works best if you have a fast, well-indexed database server.

Database caching and multiple databases

If you use database caching with multiple databases, you’ll also need to set up routing instructions for your database cache table. For the purposes of routing, the database cache table appears as a model named CacheEntry, in an application named django_cache. This model won’t appear in the models cache, but the model details can be used for routing purposes.

★ If database caching is used with multiple databases, you must also set up routing instructions for the cache table. For routing purposes it appears as a model named CacheEntry in an application named django_cache.

For example, the following router would direct all cache read operations to cache_slave, and all write operations to cache_master. The cache table will only be synchronized onto cache_master:

class CacheRouter(object):
    """A router to control all database cache operations"""

    def db_for_read(self, model, **hints):
        "All cache read operations go to the slave"
        if model._meta.app_label in ('django_cache',):
            return 'cache_slave'
        return None

    def db_for_write(self, model, **hints):
        "All cache write operations go to master"
        if model._meta.app_label in ('django_cache',):
            return 'cache_master'
        return None

    def allow_syncdb(self, db, model):
        "Only synchronize the cache model on master"
        if model._meta.app_label in ('django_cache',):
            return db == 'cache_master'
        return None

If you don’t specify routing directions for the database cache model, the cache backend will use the default database.

Of course, if you don’t use the database cache backend, you don’t need to worry about providing routing instructions for the database cache model.

Filesystem caching

To store cached items on a filesystem, use "django.core.cache.backends.filebased.FileBasedCache" for BACKEND. For example, to store cached data in /var/tmp/django_cache, use this setting:

★ To use filesystem caching, set BACKEND to django.core.cache.backends.filebased.FileBasedCache.

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/var/tmp/django_cache',
    }
}

If you’re on Windows, put the drive letter at the beginning of the path, like this:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': 'c:/foo/bar',
    }
}

The directory path should be absolute – that is, it should start at the root of your filesystem. It doesn’t matter whether you put a slash at the end of the setting.

★ The path must be absolute, starting at the root of the filesystem.

Make sure the directory pointed-to by this setting exists and is readable and writable by the system user under which your Web server runs. Continuing the above example, if your server runs as the user apache, make sure the directory /var/tmp/django_cache exists and is readable and writable by the user apache.

★ Make sure the specified directory exists and is readable and writable by the web server’s user.

Each cache value will be stored as a separate file whose contents are the cache data saved in a serialized (“pickled”) format, using Python’s pickle module. Each file’s name is the cache key, escaped for safe filesystem use.

★ Note that each cache value is stored as a separate file whose contents are the data serialized (“pickled”) with Python’s pickle module. Each file’s name is the cache key, escaped for safe filesystem use.
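As an illustration of the serialization step only (Django’s actual on-disk file format also stores an expiry, so this is just a sketch of the pickling itself):

```python
import pickle

# Any picklable Python object can be stored as a cache value this way.
value = {'user': 'alice', 'count': 3}
data = pickle.dumps(value)          # the bytes written to the cache file
assert pickle.loads(data) == value  # reading it back restores the object
```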

Local-memory caching

This is the default cache if another is not specified in your settings file. If you want the speed advantages of in-memory caching but don’t have the capability of running Memcached, consider the local-memory cache backend. This cache is multi-process and thread-safe. To use it, set BACKEND to "django.core.cache.backends.locmem.LocMemCache". For example:

★ When Memcached isn’t an option, consider the local-memory cache backend. It is multi-process and thread-safe; just set BACKEND.

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'unique-snowflake'
    }
}

The cache LOCATION is used to identify individual memory stores. If you only have one locmem cache, you can omit the LOCATION; however, if you have more than one local memory cache, you will need to assign a name to at least one of them in order to keep them separate.
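For example, two local-memory caches could be kept separate like this (a hypothetical settings fragment; the alias and LOCATION strings are arbitrary names chosen for illustration):

```python
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'default-store',   # names the first memory store
    },
    'sessions': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'session-store',   # a distinct name keeps it separate
    },
}
```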

Note that each process will have its own private cache instance, which means no cross-process caching is possible. This obviously also means the local memory cache isn’t particularly memory-efficient, so it’s probably not a good choice for production environments. It’s nice for development.

Dummy caching (for development)

★ Use the dummy cache in development/testing so that nothing is actually cached, without having to change your code.

Finally, Django comes with a “dummy” cache that doesn’t actually cache – it just implements the cache interface without doing anything.

This is useful if you have a production site that uses heavy-duty caching in various places but a development/test environment where you don’t want to cache and don’t want to have to change your code to special-case the latter. To activate dummy caching, set BACKEND like so:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
    }
}

Using a custom cache backend

While Django includes support for a number of cache backends out-of-the-box, sometimes you might want to use a customized cache backend. To use an external cache backend with Django, use the Python import path as the BACKEND of the CACHES setting, like so:

CACHES = {
    'default': {
        'BACKEND': 'path.to.backend',
    }
}

If you’re building your own backend, you can use the standard cache backends as reference implementations. You’ll find the code in the django/core/cache/backends/ directory of the Django source.

Note: Without a really compelling reason, such as a host that doesn’t support them, you should stick to the cache backends included with Django. They’ve been well-tested and are easy to use.

★ This section covers writing a custom cache backend. Prefer the built-in ones – they are well tested.

Cache arguments

Each cache backend can be given additional arguments to control caching behavior. These arguments are provided as additional keys in the CACHES setting. Valid arguments are as follows:

★ Every cache backend accepts additional arguments that control caching behavior.

  • TIMEOUT: The default timeout, in seconds, to use for the cache. This argument defaults to 300 seconds (5 minutes).

  • OPTIONS: Any options that should be passed to the cache backend. The list of valid options will vary with each backend, and cache backends backed by a third-party library will pass their options directly to the underlying cache library.

    Cache backends that implement their own culling strategy (i.e., the locmem, filesystem and database backends) will honor the following options:

    • MAX_ENTRIES: The maximum number of entries allowed in the cache before old values are deleted. This argument defaults to 300.

    • CULL_FREQUENCY: The fraction of entries that are culled when MAX_ENTRIES is reached. The actual ratio is 1 / CULL_FREQUENCY, so set CULL_FREQUENCY to 2 to cull half the entries when MAX_ENTRIES is reached. This argument should be an integer and defaults to 3.

      A value of 0 for CULL_FREQUENCY means that the entire cache will be dumped when MAX_ENTRIES is reached. On some backends (database in particular) this makes culling much faster at the expense of more cache misses.

  • KEY_PREFIX: A string that will be automatically included (prepended by default) to all cache keys used by the Django server.

    See the cache documentation for more information.

  • VERSION: The default version number for cache keys generated by the Django server.

    See the cache documentation for more information.

  • KEY_FUNCTION: A string containing a dotted path to a function that defines how to compose a prefix, version and key into a final cache key.

    See the cache documentation for more information.

In this example, a filesystem backend is being configured with a timeout of 60 seconds, and a maximum capacity of 1000 items:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/var/tmp/django_cache',
        'TIMEOUT': 60,
        'OPTIONS': {
            'MAX_ENTRIES': 1000
        }
    }
}

Invalid arguments are silently ignored, as are invalid values of known arguments.

The per-site cache

Once the cache is set up, the simplest way to use caching is to cache your entire site. You’ll need to add 'django.middleware.cache.UpdateCacheMiddleware' and 'django.middleware.cache.FetchFromCacheMiddleware' to your MIDDLEWARE_CLASSES setting, as in this example:

★ Once the cache is set up properly, the simplest way to use it is to cache the entire site. Configure it as follows:

MIDDLEWARE_CLASSES = (
    'django.middleware.cache.UpdateCacheMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.cache.FetchFromCacheMiddleware',
)

Note

No, that’s not a typo: the “update” middleware must be first in the list, and the “fetch” middleware must be last. The details are a bit obscure, but see Order of MIDDLEWARE_CLASSES below if you’d like the full story.

Then, add the following required settings to your Django settings file:

  • CACHE_MIDDLEWARE_ALIAS – The cache alias to use for storage.
  • CACHE_MIDDLEWARE_SECONDS – The number of seconds each page should be cached.
  • CACHE_MIDDLEWARE_KEY_PREFIX – If the cache is shared across multiple sites using the same Django installation, set this to the name of the site, or some other string that is unique to this Django instance, to prevent key collisions. Use an empty string if you don’t care.
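Put together, a hypothetical settings fragment for the per-site cache might look like this (the prefix string and the timeout are example values):

```python
CACHE_MIDDLEWARE_ALIAS = 'default'       # which CACHES entry to use for storage
CACHE_MIDDLEWARE_SECONDS = 600           # cache each page for 10 minutes
CACHE_MIDDLEWARE_KEY_PREFIX = 'mysite'   # avoids key collisions between sites
```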

The cache middleware caches GET and HEAD responses with status 200, where the request and response headers allow. Responses to requests for the same URL with different query parameters are considered to be unique pages and are cached separately. The cache middleware expects that a HEAD request is answered with the same response headers as the corresponding GET request; in that case, it can return a cached GET response for a HEAD request.

Additionally, the cache middleware automatically sets a few headers in each HttpResponse:

  • Sets the Last-Modified header to the current date/time when a fresh (uncached) version of the page is requested.
  • Sets the Expires header to the current date/time plus the defined CACHE_MIDDLEWARE_SECONDS.
  • Sets the Cache-Control header to give a max age for the page – again, from the CACHE_MIDDLEWARE_SECONDS setting.

See Middleware for more on middleware.

If a view sets its own cache expiry time (i.e. it has a max-age section in its Cache-Control header) then the page will be cached until the expiry time, rather than CACHE_MIDDLEWARE_SECONDS. Using the decorators in django.views.decorators.cache you can easily set a view’s expiry time (using the cache_control decorator) or disable caching for a view (using the never_cache decorator). See the using other headers section for more on these decorators.

If USE_I18N is set to True then the generated cache key will include the name of the active language – see also How Django discovers language preference. This allows you to easily cache multilingual sites without having to create the cache key yourself.

Cache keys also include the active language when USE_L10N is set to True and the current time zone when USE_TZ is set to True.

The per-view cache

django.views.decorators.cache.cache_page()

A more granular way to use the caching framework is by caching the output of individual views. django.views.decorators.cache defines a cache_page decorator that will automatically cache the view’s response for you. It’s easy to use:

★ A more fine-grained way to use the caching framework is to cache the output of individual views. django.views.decorators.cache defines a cache_page decorator that automatically caches a view’s response. It is easy to use.

from django.views.decorators.cache import cache_page

@cache_page(60 * 15)
def my_view(request):
    ...

cache_page takes a single argument: the cache timeout, in seconds. In the above example, the result of the my_view() view will be cached for 15 minutes. (Note that we’ve written it as 60 * 15 for the purpose of readability. 60 * 15 will be evaluated to 900 – that is, 15 minutes multiplied by 60 seconds per minute.)

★ Pass a timeout argument, in seconds, to specify how long the result should be cached.

The per-view cache, like the per-site cache, is keyed off of the URL. If multiple URLs point at the same view, each URL will be cached separately. Continuing the my_view example, if your URLconf looks like this:

urlpatterns = patterns('',
    (r'^foo/(\d{1,2})/$', my_view),
)

then requests to /foo/1/ and /foo/23/ will be cached separately, as you may expect. But once a particular URL (e.g., /foo/23/) has been requested, subsequent requests to that URL will use the cache.

★ The per-view cache is keyed off the URL, so different URLs are cached separately.

cache_page can also take an optional keyword argument, cache, which directs the decorator to use a specific cache (from your CACHES setting) when caching view results. By default, the default cache will be used, but you can specify any cache you want:

★ The default cache is used unless you pick another one with the cache="xxx" argument.

@cache_page(60 * 15, cache="special_cache")
def my_view(request):
    ...

You can also override the cache prefix on a per-view basis. cache_page takes an optional keyword argument, key_prefix, which works in the same way as the CACHE_MIDDLEWARE_KEY_PREFIX setting for the middleware. It can be used like this:

★ The key_prefix="xxx" argument works like the CACHE_MIDDLEWARE_KEY_PREFIX setting for the middleware: it matters when the cache is shared by several sites.

@cache_page(60 * 15, key_prefix="site1")
def my_view(request):
    ...

The two settings can also be combined. If you specify a cache and a key_prefix, you will get all the settings of the requested cache alias, but with the key_prefix overridden.

Specifying per-view cache in the URLconf

The examples in the previous section have hard-coded the fact that the view is cached, because cache_page alters the my_view function in place. This approach couples your view to the cache system, which is not ideal for several reasons. For instance, you might want to reuse the view functions on another, cache-less site, or you might want to distribute the views to people who might want to use them without being cached. The solution to these problems is to specify the per-view cache in the URLconf rather than next to the view functions themselves.

★ Hard-coding cache_page in the view isn’t ideal; specify it in the URLconf instead.

Doing so is easy: simply wrap the view function with cache_page when you refer to it in the URLconf. Here’s the old URLconf from earlier:

urlpatterns = patterns('',
    (r'^foo/(\d{1,2})/$', my_view),
)

Here’s the same thing, with my_view wrapped in cache_page:

from django.views.decorators.cache import cache_page

urlpatterns = patterns('',
    (r'^foo/(\d{1,2})/$', cache_page(60 * 15)(my_view)),
)

Template fragment caching

If you’re after even more control, you can also cache template fragments using the cache template tag. To give your template access to this tag, put {% load cache %} near the top of your template.

★ Template fragments can be cached with the cache template tag; just put {% load cache %} near the top of the template.

The {% cache %} template tag caches the contents of the block for a given amount of time. It takes at least two arguments: the cache timeout, in seconds, and the name to give the cache fragment. The name will be taken as is, do not use a variable. For example:

★ The cache tag takes at least two arguments: the timeout in seconds (which may be a variable) and a name for the fragment (which must not be a variable).

{% load cache %}
{% cache 500 sidebar %}
    .. sidebar ..
{% endcache %}

Sometimes you might want to cache multiple copies of a fragment depending on some dynamic data that appears inside the fragment. For example, you might want a separate cached copy of the sidebar used in the previous example for every user of your site. Do this by passing additional arguments to the {% cache %} template tag to uniquely identify the cache fragment:

★ Sometimes you need several cached copies of a fragment depending on dynamic data; just pass extra arguments to the tag.

{% load cache %}
{% cache 500 sidebar request.user.username %}
    .. sidebar for logged in user ..
{% endcache %}

It’s perfectly fine to specify more than one argument to identify the fragment. Simply pass as many arguments to {% cache %} as you need.

If USE_I18N is set to True the per-site middleware cache will respect the active language. For the cache template tag you could use one of the translation-specific variables available in templates to achieve the same result:

★ If USE_I18N is set to True, include the active language in the fragment’s cache key yourself.

{% load i18n %}
{% load cache %}
{% get_current_language as LANGUAGE_CODE %}
{% cache 600 welcome LANGUAGE_CODE %}
    {% trans "Welcome to example.com" %}
{% endcache %}

The cache timeout can be a template variable, as long as the template variable resolves to an integer value. For example, if the template variable my_timeout is set to the value 600, then the following two examples are equivalent:

{% cache 600 sidebar %} ... {% endcache %}
{% cache my_timeout sidebar %} ... {% endcache %}

This feature is useful in avoiding repetition in templates. You can set the timeout in a variable, in one place, and just reuse that value.

★ The timeout can be a variable that resolves to an integer, which avoids repeating the value across templates.

django.core.cache.utils.make_template_fragment_key(fragment_name, vary_on=None)

If you want to obtain the cache key used for a cached fragment, you can use make_template_fragment_key. fragment_name is the same as the second argument to the cache template tag; vary_on is a list of all additional arguments passed to the tag. This function can be useful for invalidating or overwriting a cached item, for example:

★ This function returns the fragment’s cache key, useful for invalidating or overwriting a cached item.

>>> from django.core.cache import cache
>>> from django.core.cache.utils import make_template_fragment_key
# cache key for {% cache 500 sidebar username %}
>>> key = make_template_fragment_key('sidebar', [username])
>>> cache.delete(key) # invalidates cached template fragment

The low-level cache API

Sometimes, caching an entire rendered page doesn’t gain you very much and is, in fact, inconvenient overkill.

Perhaps, for instance, your site includes a view whose results depend on several expensive queries, the results of which change at different intervals. In this case, it would not be ideal to use the full-page caching that the per-site or per-view cache strategies offer, because you wouldn’t want to cache the entire result (since some of the data changes often), but you’d still want to cache the results that rarely change.

★ Pages built from heavyweight queries may not be worth caching wholesale; you can instead cache just the parts that rarely change.

For cases like this, Django exposes a simple, low-level cache API. You can use this API to store objects in the cache with any level of granularity you like. You can cache any Python object that can be pickled safely: strings, dictionaries, lists of model objects, and so forth. (Most common Python objects can be pickled; refer to the Python documentation for more information about pickling.)

★ For such cases Django exposes a simple, low-level cache API that can store any Python object that can be pickled.

Accessing the cache

django.core.cache.get_cache(backend, **kwargs)

The cache module, django.core.cache, has a cache object that’s automatically created from the 'default' entry in the CACHES setting:

★ The cache module, django.core.cache, has a cache object that is created automatically from the 'default' entry in the CACHES setting.

>>> from django.core.cache import cache

If you have multiple caches defined in CACHES, then you can use django.core.cache.get_cache() to retrieve a cache object for any key:

★ If several caches are defined in CACHES, use django.core.cache.get_cache() to fetch a cache object by name.

>>> from django.core.cache import get_cache
>>> cache = get_cache('alternate')

If the named key does not exist, InvalidCacheBackendError will be raised.

★ If the named key does not exist, an InvalidCacheBackendError is raised.

Basic usage

The basic interface is set(key, value, timeout) and get(key):

★ The basic interface is the set() and get() methods.

>>> cache.set('my_key', 'hello, world!', 30)
>>> cache.get('my_key')
'hello, world!'

The timeout argument is optional and defaults to the timeout argument of the appropriate backend in the CACHES setting (explained above). It’s the number of seconds the value should be stored in the cache. Passing in None for timeout will cache the value forever.

★ The timeout argument is optional and defaults to the value configured in CACHES. It is the number of seconds to keep the value; passing None caches the value forever.


Changed in Django 1.6:

Previously, passing None explicitly would use the default timeout value.

If the object doesn’t exist in the cache, cache.get() returns None:

★ If the key does not exist in the cache, cache.get() returns None.

# Wait 30 seconds for 'my_key' to expire...
>>> cache.get('my_key')
None

We advise against storing the literal value None in the cache, because you won’t be able to distinguish between your stored None value and a cache miss signified by a return value of None.

★ Storing a literal None is discouraged, because you can’t tell a stored None value from a cache miss.
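If you do need to store None, a common workaround is a sentinel default. The sketch below uses a plain dict-backed stand-in (`DictCache` is hypothetical, not Django’s cache class), since the pattern only relies on get() accepting a default:

```python
# Sentinel pattern for distinguishing a stored None from a cache miss.
class DictCache:
    """Illustrative stand-in with a Django-like set()/get() interface."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

sentinel = object()          # unique object no caller could have stored
cache = DictCache()
cache.set('maybe_none', None)

assert cache.get('maybe_none', sentinel) is None   # key exists, value is None
assert cache.get('absent', sentinel) is sentinel   # genuine cache miss
```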

cache.get() can take a default argument. This specifies which value to return if the object doesn’t exist in the cache:

★ cache.get() takes a default argument specifying what to return when the key is missing.

>>> cache.get('my_key', 'has expired')
'has expired'

To add a key only if it doesn’t already exist, use the add() method. It takes the same parameters as set(), but it will not attempt to update the cache if the key specified is already present:

★ add() does not overwrite a key that already exists.

>>> cache.set('add_key', 'Initial value')
>>> cache.add('add_key', 'New value')
>>> cache.get('add_key')
'Initial value'

If you need to know whether add() stored a value in the cache, you can check the return value. It will return True if the value was stored, False otherwise.

★ To know whether add() actually stored the value, check its return value: True if the value was stored, False otherwise.

There’s also a get_many() interface that only hits the cache once. get_many() returns a dictionary with all the keys you asked for that actually exist in the cache (and haven’t expired):

★ get_many() returns the values of several keys that exist in the cache, hitting the cache only once.

>>> cache.set('a', 1)
>>> cache.set('b', 2)
>>> cache.set('c', 3)
>>> cache.get_many(['a', 'b', 'c'])
{'a': 1, 'b': 2, 'c': 3}

To set multiple values more efficiently, use set_many() to pass a dictionary of key-value pairs:

★ The efficient way to set multiple values is set_many().

>>> cache.set_many({'a': 1, 'b': 2, 'c': 3})
>>> cache.get_many(['a', 'b', 'c'])
{'a': 1, 'b': 2, 'c': 3}

Like cache.set(), set_many() takes an optional timeout parameter.

★ Like set(), set_many() takes an optional timeout parameter.

You can delete keys explicitly with delete(). This is an easy way of clearing the cache for a particular object:

★ Keys can be deleted from the cache with delete().

>>> cache.delete('a')

If you want to clear a bunch of keys at once, delete_many() can take a list of keys to be cleared:

★ To clear several keys at once, use delete_many().

>>> cache.delete_many(['a', 'b', 'c'])

Finally, if you want to delete all the keys in the cache, use cache.clear(). Be careful with this; clear() will remove everything from the cache, not just the keys set by your application.

★ To delete every key in the cache, use cache.clear().

>>> cache.clear()

You can also increment or decrement a key that already exists using the incr() or decr() methods, respectively. By default, the existing cache value will be incremented or decremented by 1. Other increment/decrement values can be specified by providing an argument to the increment/decrement call. A ValueError will be raised if you attempt to increment or decrement a nonexistent cache key:

★ Keys can be incremented and decremented arithmetically.

>>> cache.set('num', 1)
>>> cache.incr('num')
2
>>> cache.incr('num', 10)
12
>>> cache.decr('num')
11
>>> cache.decr('num', 5)
6

Note

incr()/decr() methods are not guaranteed to be atomic. On those backends that support atomic increment/decrement (most notably, the memcached backend), increment and decrement operations will be atomic. However, if the backend doesn’t natively provide an increment/decrement operation, it will be implemented using a two-step retrieve/update.
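The non-atomic two-step retrieve/update fallback mentioned in the note can be sketched like this, with a plain dict standing in for a backend that lacks a native increment operation:

```python
# Two-step retrieve/update increment. Not atomic: another process could
# write the key between the read and the write below.
store = {'num': 1}

def incr(key, delta=1):
    if key not in store:
        raise ValueError("Key %r not found" % key)
    store[key] = store[key] + delta   # step 1: retrieve, step 2: update
    return store[key]
```

Mirroring the interactive session above, incr('num') returns 2 and incr('num', 10) then returns 12.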

You can close the connection to your cache with close() if implemented by the cache backend.   ★ The connection to the cache backend can be closed.

>>> cache.close()

Note

For caches that don’t implement close methods it is a no-op.

Cache key prefixing

If you are sharing a cache instance between servers, or between your production and development environments, it’s possible for data cached by one server to be used by another server. If the format of cached data is different between servers, this can lead to some very hard to diagnose problems.

★ If a cache is shared between servers, or between production and development environments, data cached by one server may be used by another. If the cached data formats differ between servers, this can cause very hard-to-diagnose problems (such as key collisions).

To prevent this, Django provides the ability to prefix all cache keys used by a server. When a particular cache key is saved or retrieved, Django will automatically prefix the cache key with the value of the KEY_PREFIX cache setting.   ★ To prevent this, Django can prefix every cache key a server uses: whenever a key is saved or retrieved, the value of the KEY_PREFIX setting is automatically prepended.

By ensuring each Django instance has a different KEY_PREFIX, you can ensure that there will be no collisions in cache values.  ★ Giving each Django instance a different KEY_PREFIX guarantees no collisions between cached values.
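A hypothetical settings fragment giving one instance its own prefix (the prefix string is an example; a second instance would use a different one):

```python
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'KEY_PREFIX': 'site1',   # e.g. 'site2' on the other instance
    }
}
```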

Cache versioning

When you change running code that uses cached values, you may need to purge any existing cached values. The easiest way to do this is to flush the entire cache, but this can lead to the loss of cache values that are still valid and useful.

★ When you change code that uses cached values, you may need to invalidate the existing cached data. The simplest way is to flush the entire cache, but that throws away cached values that are still valid and useful.

Django provides a better way to target individual cache values. Django’s cache framework has a system-wide version identifier, specified using the VERSION cache setting. The value of this setting is automatically combined with the cache prefix and the user-provided cache key to obtain the final cache key.

By default, any key request will automatically include the site default cache key version. However, the primitive cache functions all include a version argument, so you can specify a particular cache key version to set or get. For example:

# Set version 2 of a cache key
>>> cache.set('my_key', 'hello world!', version=2)
# Get the default version (assuming version=1)
>>> cache.get('my_key')
None
# Get version 2 of the same key
>>> cache.get('my_key', version=2)
'hello world!'

The version of a specific key can be incremented and decremented using the incr_version() and decr_version() methods. This enables specific keys to be bumped to a new version, leaving other keys unaffected. Continuing our previous example:

# Increment the version of 'my_key'
>>> cache.incr_version('my_key')
# The default version still isn't available
>>> cache.get('my_key')
None
# Version 2 isn't available, either
>>> cache.get('my_key', version=2)
None
# But version 3 *is* available
>>> cache.get('my_key', version=3)
'hello world!'

Cache key transformation

As described in the previous two sections, the cache key provided by a user is not used verbatim – it is combined with the cache prefix and key version to provide a final cache key. By default, the three parts are joined using colons to produce a final string:  ★ The key actually used is a combination of the cache prefix, the version, and the user-provided key.

def make_key(key, key_prefix, version):
    return ':'.join([key_prefix, str(version), key])

If you want to combine the parts in different ways, or apply other processing to the final key (e.g., taking a hash digest of the key parts), you can provide a custom key function.

The KEY_FUNCTION cache setting specifies a dotted-path to a function matching the prototype of make_key() above. If provided, this custom key function will be used instead of the default key combining function.
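As one possible custom key function, you could hash the user-supplied part so the final key is short and memcached-safe no matter what the caller passed in. This is a sketch: the choice of MD5 is an assumption, and the function would be referenced from KEY_FUNCTION via a dotted path of your choosing (e.g. a hypothetical 'myproject.utils.make_key'):

```python
import hashlib

def make_key(key, key_prefix, version):
    """Hash the user-supplied key so the final key contains no whitespace
    and has a bounded length; keep the prefix and version readable so
    keys are still easy to inspect when debugging."""
    digest = hashlib.md5(key.encode('utf-8')).hexdigest()
    return ':'.join([key_prefix, str(version), digest])
```

Note the trade-off: hashing makes keys opaque, so you can no longer read the original key off the cache server directly.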

Cache key warnings

Memcached, the most commonly-used production cache backend, does not allow cache keys longer than 250 characters or containing whitespace or control characters, and using such keys will cause an exception. To encourage cache-portable code and minimize unpleasant surprises, the other built-in cache backends issue a warning (django.core.cache.backends.base.CacheKeyWarning) if a key is used that would cause an error on memcached.  ★ Memcached imposes rules on key format; to keep cache code portable and reduce surprises, the other built-in backends emit a warning whenever a key would fail on memcached.

If you are using a production backend that can accept a wider range of keys (a custom backend, or one of the non-memcached built-in backends), and want to use this wider range without warnings, you can silence CacheKeyWarning with this code in the management module of one of your INSTALLED_APPS:

★ The warning can be silenced as follows:

import warnings

from django.core.cache import CacheKeyWarning

warnings.simplefilter("ignore", CacheKeyWarning)

If you want to instead provide custom key validation logic for one of the built-in backends, you can subclass it, override just the validate_key method, and follow the instructions for using a custom cache backend. For instance, to do this for the locmem backend, put this code in a module:    ★ To supply your own key validation, subclass the backend and override its validate_key method, as in the following example.

from django.core.cache.backends.locmem import LocMemCache

class CustomLocMemCache(LocMemCache):
    def validate_key(self, key):
        """Custom validation, raising exceptions or warnings as needed."""
        # ...

...and use the dotted Python path to this class in the BACKEND portion of your CACHES setting.
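For instance, if the subclass lived in a module of your own, the setting might look like this sketch (the module path 'myproject.cache_backends' is hypothetical):

```python
# A sketch -- the dotted path below is an illustrative placeholder for
# wherever you actually put the CustomLocMemCache subclass.
CACHES = {
    'default': {
        'BACKEND': 'myproject.cache_backends.CustomLocMemCache',
        'LOCATION': 'unique-snowflake',
    }
}
```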

Downstream caches

So far, this document has focused on caching your own data. But another type of caching is relevant to Web development, too: caching performed by “downstream” caches. These are systems that cache pages for users even before the request reaches your Web site.

Here are a few examples of downstream caches:

  • Your ISP may cache certain pages, so if you requested a page from http://example.com/, your ISP would send you the page without having to access example.com directly. The maintainers of example.com have no knowledge of this caching; the ISP sits between example.com and your Web browser, handling all of the caching transparently.
  • Your Django Web site may sit behind a proxy cache, such as Squid Web Proxy Cache (http://www.squid-cache.org/), that caches pages for performance. In this case, each request first would be handled by the proxy, and it would be passed to your application only if needed.
  • Your Web browser caches pages, too. If a Web page sends out the appropriate headers, your browser will use the local cached copy for subsequent requests to that page, without even contacting the Web page again to see whether it has changed.

Downstream caching is a nice efficiency boost, but there’s a danger to it: Many Web pages’ contents differ based on authentication and a host of other variables, and cache systems that blindly save pages based purely on URLs could expose incorrect or sensitive data to subsequent visitors to those pages.

For example, say you operate a Web email system, and the contents of the “inbox” page obviously depend on which user is logged in. If an ISP blindly cached your site, then the first user who logged in through that ISP would have his user-specific inbox page cached for subsequent visitors to the site. That’s not cool.

Fortunately, HTTP provides a solution to this problem. A number of HTTP headers exist to instruct downstream caches to differ their cache contents depending on designated variables, and to tell caching mechanisms not to cache particular pages. We’ll look at some of these headers in the sections that follow.

Using Vary headers

The Vary header defines which request headers a cache mechanism should take into account when building its cache key. For example, if the contents of a Web page depend on a user’s language preference, the page is said to “vary on language.”

By default, Django’s cache system creates its cache keys using the requested path and query – e.g., "/stories/2005/?order_by=author". This means every request to that URL will use the same cached version, regardless of user-agent differences such as cookies or language preferences. However, if this page produces different content based on some difference in request headers – such as a cookie, or a language, or a user-agent – you’ll need to use the Vary header to tell caching mechanisms that the page output depends on those things.

To do this in Django, use the convenient django.views.decorators.vary.vary_on_headers() view decorator, like so:

from django.views.decorators.vary import vary_on_headers

@vary_on_headers('User-Agent')
def my_view(request):
    # ...

In this case, a caching mechanism (such as Django’s own cache middleware) will cache a separate version of the page for each unique user-agent.

The advantage to using the vary_on_headers decorator rather than manually setting the Vary header (using something like response['Vary'] = 'user-agent') is that the decorator adds to the Vary header (which may already exist), rather than setting it from scratch and potentially overriding anything that was already in there.

You can pass multiple headers to vary_on_headers():

@vary_on_headers('User-Agent', 'Cookie')
def my_view(request):
    # ...

This tells downstream caches to vary on both, which means each combination of user-agent and cookie will get its own cache value. For example, a request with the user-agent Mozilla and the cookie value foo=bar will be considered different from a request with the user-agent Mozilla and the cookie value foo=ham.

Because varying on cookie is so common, there’s a django.views.decorators.vary.vary_on_cookie() decorator. These two views are equivalent:

@vary_on_cookie
def my_view(request):
    # ...

@vary_on_headers('Cookie')
def my_view(request):
    # ...

The headers you pass to vary_on_headers are not case sensitive; "User-Agent" is the same thing as "user-agent".

You can also use a helper function, django.utils.cache.patch_vary_headers(), directly. This function sets, or adds to, the Vary header. For example:

from django.utils.cache import patch_vary_headers

def my_view(request):
    # ...
    response = render_to_response('template_name', context)
    patch_vary_headers(response, ['Cookie'])
    return response

patch_vary_headers takes an HttpResponse instance as its first argument and a list/tuple of case-insensitive header names as its second argument.

For more on Vary headers, see the official Vary spec.

Controlling cache: Using other headers

Other problems with caching are the privacy of data and the question of where data should be stored in a cascade of caches.

A user usually faces two kinds of caches: his or her own browser cache (a private cache) and his or her provider’s cache (a public cache). A public cache is used by multiple users and controlled by someone else. This poses problems with sensitive data–you don’t want, say, your bank account number stored in a public cache. So Web applications need a way to tell caches which data is private and which is public.

The solution is to indicate a page’s cache should be “private.” To do this in Django, use the cache_control view decorator. Example:

from django.views.decorators.cache import cache_control

@cache_control(private=True)
def my_view(request):
    # ...

This decorator takes care of sending out the appropriate HTTP header behind the scenes.

Note that the cache control settings “private” and “public” are mutually exclusive. The decorator ensures that the “public” directive is removed if “private” should be set (and vice versa). An example use of the two directives would be a blog site that offers both private and public entries. Public entries may be cached on any shared cache. The following code uses django.utils.cache.patch_cache_control(), the manual way to modify the cache control header (it is internally called by the cache_control decorator):

from django.utils.cache import patch_cache_control
from django.views.decorators.vary import vary_on_cookie

@vary_on_cookie
def list_blog_entries_view(request):
    if request.user.is_anonymous():
        response = render_only_public_entries()
        patch_cache_control(response, public=True)
    else:
        response = render_private_and_public_entries(request.user)
        patch_cache_control(response, private=True)
    return response

There are a few other ways to control cache parameters. For example, HTTP allows applications to do the following:

  • Define the maximum time a page should be cached.
  • Specify whether a cache should always check for newer versions, only delivering the cached content when there are no changes. (Some caches might deliver cached content even if the server page changed, simply because the cache copy isn’t yet expired.)

In Django, use the cache_control view decorator to specify these cache parameters. In this example, cache_control tells caches to revalidate the cache on every access and to store cached versions for, at most, 3,600 seconds:

from django.views.decorators.cache import cache_control

@cache_control(must_revalidate=True, max_age=3600)
def my_view(request):
    # ...

Any valid Cache-Control HTTP directive is valid in cache_control(). Here’s a full list:

  • public=True
  • private=True
  • no_cache=True
  • no_transform=True
  • must_revalidate=True
  • proxy_revalidate=True
  • max_age=num_seconds
  • s_maxage=num_seconds

For explanation of Cache-Control HTTP directives, see the Cache-Control spec.

(Note that the caching middleware already sets the cache header’s max-age with the value of the CACHE_MIDDLEWARE_SECONDS setting. If you use a custom max_age in a cache_control decorator, the decorator will take precedence, and the header values will be merged correctly.)

If you want to use headers to disable caching altogether, django.views.decorators.cache.never_cache is a view decorator that adds headers to ensure the response won’t be cached by browsers or other caches. Example:

from django.views.decorators.cache import never_cache

@never_cache
def myview(request):
    # ...

Other optimizations

Django comes with a few other pieces of middleware that can help optimize your site’s performance:

  • django.middleware.http.ConditionalGetMiddleware adds support for modern browsers to conditionally GET responses based on the ETag and Last-Modified headers.
  • django.middleware.gzip.GZipMiddleware compresses responses for all modern browsers, saving bandwidth and transfer time. Be warned, however, that compression techniques like GZipMiddleware are subject to attacks. See the warning in GZipMiddleware for details.

Order of MIDDLEWARE_CLASSES

If you use caching middleware, it’s important to put each half in the right place within the MIDDLEWARE_CLASSES setting. That’s because the cache middleware needs to know which headers to vary the cache storage by. Middleware always adds something to the Vary response header when it can.

UpdateCacheMiddleware runs during the response phase, where middleware is run in reverse order, so an item at the top of the list runs last during the response phase. Thus, you need to make sure that UpdateCacheMiddleware appears before any other middleware that might add something to the Vary header. The following middleware modules do so:

  • SessionMiddleware adds Cookie
  • GZipMiddleware adds Accept-Encoding
  • LocaleMiddleware adds Accept-Language

FetchFromCacheMiddleware, on the other hand, runs during the request phase, where middleware is applied first-to-last, so an item at the top of the list runs first during the request phase. The FetchFromCacheMiddleware also needs to run after other middleware updates the Vary header, so FetchFromCacheMiddleware must be after any item that does so.
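Putting those two constraints together, an ordering that satisfies both can be sketched like this (the entries between the two cache halves are illustrative, not required):

```python
# A sketch of MIDDLEWARE_CLASSES ordering for the cache middleware.
MIDDLEWARE_CLASSES = (
    'django.middleware.cache.UpdateCacheMiddleware',         # must come first
    'django.contrib.sessions.middleware.SessionMiddleware',  # adds Cookie to Vary
    'django.middleware.gzip.GZipMiddleware',                 # adds Accept-Encoding to Vary
    'django.middleware.locale.LocaleMiddleware',             # adds Accept-Language to Vary
    'django.middleware.common.CommonMiddleware',
    'django.middleware.cache.FetchFromCacheMiddleware',      # must come last
)
```

Because the response phase runs in reverse order, UpdateCacheMiddleware at the top runs after every Vary-modifying middleware below it; and because the request phase runs top to bottom, FetchFromCacheMiddleware at the bottom runs after them too.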
