Ehcache 1.5.0 User Guide - Introduction (2)



E-mail: jianglike18@163.con

Blog: http://blog.csdn.net/jianglike18

qq:29396597

Web Page Caching

An observed speed-up from caching a web page is 1000 times. Ehcache can retrieve a page from its SimplePageCachingFilter in a few milliseconds.

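As a sketch of how SimplePageCachingFilter is typically wired up in web.xml (the URL pattern here is illustrative, not from the guide; the filter also expects a matching cache to be configured in ehcache.xml):

```xml
<!-- Illustrative web.xml wiring for SimplePageCachingFilter;
     the url-pattern is an example only. -->
<filter>
  <filter-name>SimplePageCachingFilter</filter-name>
  <filter-class>net.sf.ehcache.constructs.web.filter.SimplePageCachingFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>SimplePageCachingFilter</filter-name>
  <url-pattern>/CachedPage.jsp</url-pattern>
</filter-mapping>
```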

 

Because the web page is the end result of a computation, it has a proportion of 100%.

The expected system speedup is thus:

        1 / ((1 - 1) + 1 / 1000)

        = 1 / (0 + .001)

        = 1000 times system speedup


 

Web Page Fragment Caching

Caching the entire page is a big win. Sometimes the liveness requirements vary in different parts of the page. Here the SimplePageFragmentCachingFilter can be used.


 

Let's say we have a 1000-fold improvement on a page fragment that takes 40% of the page render time.


The expected system speedup is thus:

        1 / ((1 - .4) + .4 / 1000)

        = 1 / (.6 + .0004)

        = 1.6 times system speedup

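Both worked examples above can be checked with a short sketch of Amdahl's law (the class and method names here are illustrative, not part of Ehcache):

```java
// Amdahl's law: overall speedup when a proportion p of the work
// is sped up by factor s. Illustrative helper, not part of Ehcache.
public class AmdahlSketch {
    static double amdahl(double proportion, double speedup) {
        return 1.0 / ((1.0 - proportion) + proportion / speedup);
    }

    public static void main(String[] args) {
        // Whole page cached (proportion 1.0), 1000-fold speedup on it:
        System.out.println(amdahl(1.0, 1000)); // ~1000
        // Fragment taking 40% of render time, 1000-fold speedup on it:
        System.out.println(amdahl(0.4, 1000)); // ~1.67 (the text truncates to 1.6)
    }
}
```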

 

 

Cache Efficiency

In real life, caches do not live forever. Some examples that come close are "static" web pages or fragments of them, like page footers, and, in the database realm, reference data such as the currencies of the world.


 

Factors which affect the efficiency of a cache are:

- liveness: how live the data needs to be. The less live the data, the more of it can be cached.

- proportion of data cached


 

What proportion of the data can fit into the resource limits of the machine? For 32-bit Java systems, there was a hard limit of 2GB of address space. While that limit has since been relaxed, garbage collection issues make it hard to go much larger. Various eviction algorithms are used to evict excess entries.

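One classic eviction policy is LRU (least recently used). A minimal sketch of the idea, using `java.util.LinkedHashMap` in access order; this is illustrative only, not how Ehcache's MemoryStore is implemented:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU eviction sketch: once capacity is exceeded, the
// least-recently-accessed entry is evicted. Illustrative only.
public class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruSketch(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: iteration order is LRU
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict eldest when over capacity
    }

    public static void main(String[] args) {
        LruSketch<String, Integer> cache = new LruSketch<String, Integer>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // "a" is now the most recently used entry
        cache.put("c", 3); // evicts "b", the least recently used
        System.out.println(cache.keySet()); // [a, c]
    }
}
```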

 

Shape of the usage distribution

If only 300 out of 3000 entries can be cached, but the Pareto distribution applies, it may be that 80% of the time, those 300 will be the ones requested. This drives up the average request lifespan.


 

Read/Write ratio

The proportion of times data is read compared with how often it is written. Things such as the number of rooms left in a hotel will be written to quite a lot. However the details of a room sold are immutable once created so have a maximum write of 1 with a potentially large number of reads.


Ehcache keeps these statistics for each Cache and each element, so they can be measured directly rather than estimated.


 

Cluster Efficiency

Also in real life, we generally do not find a single server.

Assume a round robin load balancer where each hit goes to the next server.

The cache has one entry which has a variable lifespan of requests, say caused by a time to live. The following table shows how that lifespan can affect hits and misses.


  Server 1    Server 2    Server 3    Server 4

  M           M           M           M
  H           H           H           H
  H           H           H           H
  H           H           H           H
  H           H           H           H
  ...         ...         ...         ...

The cache hit ratios for the system as a whole are as follows:


  Entry Lifespan   Hit Ratio   Hit Ratio   Hit Ratio   Hit Ratio
  in Hits          1 Server    2 Servers   3 Servers   4 Servers

  2                1/2         0/2         0/2         0/2
  4                3/4         2/4         1/4         0/4
  10               9/10        8/10        7/10        6/10
  20               19/20       18/20       17/20       16/20
  50               49/50       48/50       47/50       46/50

The efficiency of a cluster of standalone caches is generally:


    (Lifespan in requests - Number of Standalone Caches) / Lifespan in requests

Where the lifespan is large relative to the number of standalone caches, cache efficiency is not much affected.

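The table's hit ratios follow from this formula; a small sketch (the clamp to zero for very short lifespans is my addition, to match the 0/2 rows in the table):

```java
// Hit ratio of a cluster of standalone caches behind a round robin
// balancer: each of the N servers takes one miss before its own
// copy is populated. Clamped at zero for very short lifespans.
public class ClusterEfficiency {
    static double efficiency(int lifespanInHits, int servers) {
        return Math.max(0.0, (double) (lifespanInHits - servers) / lifespanInHits);
    }

    public static void main(String[] args) {
        System.out.println(efficiency(10, 1)); // 0.9 -> the 9/10 table row
        System.out.println(efficiency(10, 4)); // 0.6 -> the 6/10 table row
        System.out.println(efficiency(2, 3));  // 0.0 -> the 0/2 table row
    }
}
```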

However when the lifespan is short, cache efficiency is dramatically affected.


(To solve this problem, ehcache supports distributed caching, where an entry put in a local cache is also propagated to other servers in the cluster.)


 

A cache version of Amdahl's law

From the above we now have:

 

 1 / ((1 - Proportion Sped Up * effective cache efficiency) +
      (Proportion Sped Up * effective cache efficiency) / Speed up)

 where effective cache efficiency = cache efficiency * cluster efficiency

 

Web Page example

Applying this to the earlier web page cache example where we have cache efficiency of 35% and average request lifespan of 10 requests and two servers:

 

  cache efficiency = .35

 

  cluster efficiency = (10 - 1) / 10

                     = .9

 

  effective cache efficiency = .35 * .9

                             = .315

 

        1 / ((1 - 1 * .315) + 1 * .315 / 1000)

 

        = 1 / (.685 + .000315)

 

        = 1.45 times system speedup

What if, instead, the cache efficiency is 70%, a doubling of efficiency? We keep to two servers.

 

  cache efficiency = .70

 

  cluster efficiency = (10 - 1) / 10

                     = .9

 

  effective cache efficiency = .70 * .9

                             = .63

 

        1 / ((1 - 1 * .63) + 1 * .63 / 1000)

 

        = 1 / (.37 + .00063)

 

        = 2.69 times system speedup

What if, instead, the cache efficiency is 90%? We keep to two servers.

 

  cache efficiency = .90

 

  cluster efficiency = (10 - 1) / 10

                     = .9

 

  effective cache efficiency = .9 * .9

                             = .81

 

        1 / ((1 - 1 * .81) + 1 * .81 / 1000)

 

        = 1 / (.19 + .00081)

 

        = 5.24 times system speedup

Why is the reduction so dramatic? Because Amdahl's law is most sensitive to the proportion of the system that is sped up.
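The three scenarios can be reproduced together in one short sketch (class and method names are illustrative):

```java
// Cache version of Amdahl's law from this section:
//   effective cache efficiency = cache efficiency * cluster efficiency
//   speedup = 1 / ((1 - p*eff) + (p*eff) / rawSpeedup)
public class CacheAmdahl {
    static double speedup(double proportion, double rawSpeedup,
                          double effectiveCacheEfficiency) {
        double p = proportion * effectiveCacheEfficiency;
        return 1.0 / ((1.0 - p) + p / rawSpeedup);
    }

    public static void main(String[] args) {
        double clusterEff = (10 - 1) / 10.0; // .9, as in the examples above
        // Cache efficiencies of 35%, 70% and 90%; whole page (proportion 1):
        System.out.println(speedup(1.0, 1000, 0.35 * clusterEff)); // ~1.46
        System.out.println(speedup(1.0, 1000, 0.70 * clusterEff)); // ~2.70
        System.out.println(speedup(1.0, 1000, 0.90 * clusterEff)); // ~5.24
    }
}
```

The text's 1.45 is the first value truncated rather than rounded.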

 
