Cache (computing)
In computer science, a cache [BE:kaʃ, AE:kæʃ] is a collection of data duplicating original values stored elsewhere or computed earlier, where the original data is expensive (usually in terms of access time) to fetch or compute relative to reading the cache. Once the data is stored in the cache, future use can be made by accessing the cached copy rather than refetching or recomputing the original data, so that the average access time is lower.
Caches have proven extremely effective in many areas of computing because access patterns in typical computer applications have locality of reference. There are several kinds of locality, but this article primarily deals with data that are accessed close together in time. The data might or might not be located physically close to each other.
Operation
A cache is a block of memory for temporary storage of data likely to be used again. The CPU and hard drive frequently use a cache, as do web browsers. Put simply, a cache is a temporary storage area where frequently accessed data can be kept for rapid access.
A cache is made up of a pool of entries. Each entry has a datum, which is a copy of the datum in some backing store. Each entry also has a tag, which specifies the identity of the datum in the backing store of which the entry is a copy.
When the cache client (a CPU, web browser, operating system) wishes to access a datum presumably in the backing store, it first checks the cache. If an entry can be found with a tag matching that of the desired datum, the datum in the entry is used instead. This situation is known as a cache hit. So, for example, a web browser program might check its local cache on disk to see if it has a local copy of the contents of a web page at a particular URL. In this example, the URL is the tag, and the contents of the web page is the datum. The percentage of accesses that result in cache hits is known as the hit rate or hit ratio of the cache.
The alternative situation, when the cache is consulted and found not to contain a datum with the desired tag, is known as a cache miss. The datum fetched from the backing store during miss handling is usually inserted into the cache, ready for the next access.
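The lookup, hit, and miss behaviour described above can be sketched in a few lines of Python. The class, method, and function names below (including the download_page fetch function in the usage comment) are hypothetical and chosen only for illustration; they do not correspond to any particular cache implementation.

class SimpleCache:
    def __init__(self, fetch_from_backing_store):
        self.entries = {}                      # tag -> datum
        self.fetch = fetch_from_backing_store  # called on a miss
        self.hits = 0
        self.misses = 0

    def get(self, tag):
        if tag in self.entries:                # cache hit: use the cached copy
            self.hits += 1
            return self.entries[tag]
        self.misses += 1                       # cache miss: fetch the original datum
        datum = self.fetch(tag)
        self.entries[tag] = datum              # insert it, ready for the next access
        return datum

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

# Example use, treating a URL as the tag and the page contents as the datum:
# cache = SimpleCache(fetch_from_backing_store=download_page)
# page = cache.get("http://example.org/")     # miss: downloaded and stored
# page = cache.get("http://example.org/")     # hit: served from the cache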
If the cache has limited storage, it may have to eject some other entry in order to make room. The heuristic used to select the entry to eject is known as the replacement policy. One popular replacement policy, LRU, replaces the least recently used entry; a sketch of this policy follows below. More sophisticated caches weigh use frequency against the size of the stored contents, as well as the latencies and throughputs of both the cache and the backing store. While this works well for larger amounts of data, longer latencies, and slower throughputs, such as those of a hard drive or the Internet, it is not efficient for caching main memory (RAM).
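As an illustrative sketch (not tied to any particular system), an LRU replacement policy might be written in Python using the standard library's OrderedDict; the class name here is hypothetical:

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()           # ordered from least to most recently used

    def get(self, tag):
        if tag not in self.entries:
            return None                        # miss; the caller fetches from the backing store
        self.entries.move_to_end(tag)          # mark as most recently used
        return self.entries[tag]

    def put(self, tag, datum):
        if tag in self.entries:
            self.entries.move_to_end(tag)
        self.entries[tag] = datum
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict the least recently used entry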
When a datum is written to the cache, it must at some point be written to the backing store as well. The timing of this write is controlled by what is known as the write policy. In a write-through cache, every write to the cache causes a write to the backing store. Alternatively, in a write-back cache, writes are not immediately mirrored to the store. Instead, the cache tracks which of its locations have been written over (these locations are marked dirty). The data in these locations is written back to the backing store when that data is evicted from the cache. For this reason, a miss in a write-back cache will often require two memory accesses to service: one to retrieve the needed datum, and one to write replaced data from the cache to the store.
Data write-back may be triggered by other policies as well. The client may make many changes to a datum in the cache, and then explicitly notify the cache to write back the datum.
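A rough Python sketch of a write-back policy with dirty tracking, eviction-time write-back, and an explicit flush follows. The class name and the backing-store interface (read and write methods) are assumptions made for illustration, and the eviction here picks an arbitrary victim rather than using a real replacement policy:

class WriteBackCache:
    def __init__(self, backing_store, capacity):
        self.backing_store = backing_store     # assumed to provide read(tag) and write(tag, datum)
        self.capacity = capacity
        self.entries = {}                      # tag -> datum
        self.dirty = set()                     # tags written to the cache but not yet to the store

    def write(self, tag, datum):
        if tag not in self.entries:
            self._make_room()
        self.entries[tag] = datum
        self.dirty.add(tag)                    # not mirrored to the backing store yet

    def read(self, tag):
        if tag not in self.entries:            # miss: may need two accesses (write back, then fetch)
            self._make_room()
            self.entries[tag] = self.backing_store.read(tag)
        return self.entries[tag]

    def flush(self, tag):
        if tag in self.dirty:                  # explicit write-back requested by the client
            self.backing_store.write(tag, self.entries[tag])
            self.dirty.discard(tag)

    def _make_room(self):
        while len(self.entries) >= self.capacity:
            victim, datum = self.entries.popitem()       # arbitrary victim, for brevity
            if victim in self.dirty:
                self.backing_store.write(victim, datum)  # dirty data written back on eviction
                self.dirty.discard(victim)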
The data in the backing store may be changed by entities other than the cache, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the client updates the data in the cache, copies of that data in other caches will become stale. Communication protocols between the cache managers which keep the data consistent are known as coherency protocols.
Applications
CPU caches
- Main article: CPU cache.
Small memories on or close to the CPU chip can be made faster than the much larger main memory. Most CPUs since the 1980s have used one or more caches, and modern general-purpose CPUs inside personal computers may have as many as half a dozen, each specialized to a different part of the problem of executing programs.
Disk buffer
(also known as disk cache or cache buffer)
Hard disks have historically often been packaged with embedded computers used for control and interface protocols. Since the late 1980s, nearly all disks sold have these embedded computers and either an ATA, SCSI, or Fibre Channel interface. The embedded computer usually has some small amount of memory which it uses to store the bits going to and coming from the disk platter.
The disk buffer is physically distinct from and is used differently than the page cache typically kept by the operating system in the computer's main memory. The disk buffer is controlled by the embedded computer in the disk drive, and the page cache is controlled by the computer to which that disk is attached. The disk buffer is usually quite small, 2 to 16 MB, while the page cache is generally all unused physical memory, which in 2006 may be as much as 4 GB for desktop computers and 8 GB for servers. While data in the page cache is reused multiple times, the data in the disk buffer is typically never reused. In this sense, the phrases disk cache and cache buffer are misnomers, and the embedded computer's memory is more appropriately called the disk buffer.
The disk buffer has multiple uses:
- Readahead / readbehind
- When executing a read from the disk, the disk arm moves the read/write head to (or near) the correct track, and after some settling time the read head begins to pick up bits. Usually, the first sectors to be read are not the ones that have been requested by the operating system. The disk's embedded computer typically saves these unrequested sectors in the disk buffer, in case the operating system requests them later.
- Speed matching
- The speed of the disk's I/O interface to the computer almost never matches the speed at which the bits are transferred to and from the hard disk platter. The disk buffer is used so that both the I/O interface and the disk read/write head can operate at full speed.
- Write acceleration
- The disk's embedded microcontroller may signal the main computer that a disk write is complete immediately after receiving the write data, before the data are actually written to the platter. This early signal allows the main computer to continue working even though the data have not actually been written yet. This can be somewhat dangerous, because if power is lost before the data are permanently fixed in the magnetic media, the data will be lost from the disk buffer, and the file system on the disk may be left in an inconsistent state. On some disks, this vulnerable period between signaling the write complete and fixing the data can be arbitrarily long, as the write can be deferred indefinitely by newly arriving requests. For this reason, the use of write acceleration can be controversial. Consistency can be maintained, however, by using a battery-backed memory system in the disk controller for caching data, although this is typically only found in high-end RAID controllers. Alternatively, the caching can simply be turned off when the integrity of data is deemed more important than write performance.
- Command queuing
- Newer SATA and most SCSI disks can accept multiple commands while any one command is in operation. These commands are stored by the disk's embedded computer until they are completed. Should a read reference the data at the destination of a queued write, the write's data will be returned. Command queuing is different from write acceleration in that the main computer's operating system is notified when data are actually written onto the magnetic media. The OS can use this information to keep the filesystem consistent through rescheduled writes.
Other caches
CPU caches are generally managed entirely by hardware. Other caches are managed by a variety of software. The cache of disk sectors in main memory is usually managed by the operating system kernel or file system. The BIND DNS daemon caches a mapping of domain names to IP addresses, as does a resolver library.
Write-through operation is common when operating over unreliable networks (like an Ethernet LAN), because of the enormous complexity of the coherency protocol required between multiple write-back caches when communication is unreliable. For instance, web page caches and client-side network file system caches (like those in NFS or SMB) are typically read-only or write-through specifically to keep the network protocol simple and reliable.
A cache of recently visited web pages can be managed by the user's web browser. Some browsers are configured to use an external proxy web cache, a server program through which all web requests are routed so that it can cache frequently accessed pages for everyone in an organization. Many internet service providers use proxy caches to save bandwidth on frequently accessed web pages.
Search engines also frequently make web pages they have indexed available from their cache. For example, Google provides a "Cached" link next to each search result. This is useful when web pages are temporarily inaccessible from a web server.
Another type of caching is storing computed results that will likely be needed again, or memoization. An example of this type of caching is ccache, a program that caches the output of compilation in order to speed up subsequent compilations; a simpler illustration of memoization follows below.
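As a minimal illustration (in Python, using only the standard library), memoization can be added to a function with the functools.lru_cache decorator, so that a repeated call with the same argument reads the cached result instead of recomputing it:

import functools

@functools.lru_cache(maxsize=None)             # results are stored after the first call
def fibonacci(n):
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(100))                          # fast: intermediate results are served from the cache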
The difference between buffers and caches
The terms are not mutually exclusive, and the functions are frequently combined, but there is a difference in intent. A buffer is a temporary storage location where a large block of data is assembled or disassembled. This may be necessary for interacting with a storage device that requires large blocks of data, or when data must be delivered in a different order than that in which it is produced, or merely desirable when small blocks are inefficient. The benefit is present even if the buffered data is written to the buffer once and read from the buffer once.
A cache, on the other hand, hopes that the data will be read from the cache more often than it is written there. Its purpose is to eliminate accesses to the underlying storage, rather than make them more efficient. ...