Working set

Working set is a concept in computer science which defines the amount of memory that a process requires in a given time interval.[1]

Definition

Peter Denning (1968) defines "the working set of information W(t, τ) of a process at time t to be the collection of information referenced by the process during the process time interval (t − τ, t)".[2] Typically the units of information in question are considered to be memory pages. This is suggested to be an approximation of the set of pages that the process will access in the future (say during the next τ time units), and more specifically is suggested to be an indication of what pages ought to be kept in main memory to allow most progress to be made in the execution of that process.
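
As a minimal illustration (not from Denning's paper; the reference string and the value of τ below are arbitrary), the working set can be computed directly from a recorded page reference string:

    # Sketch: compute the working set W(t, tau) from a page reference string.
    # The reference string and tau are illustrative values only.
    def working_set(references, t, tau):
        """Pages referenced during the process-time interval (t - tau, t]."""
        start = max(0, t - tau)
        return set(references[start:t])

    refs = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1]   # page referenced at each time step
    print(working_set(refs, t=10, tau=4))   # {1, 5, 7}: pages touched in the last 4 steps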

Rationale

The choice of which pages are kept in main memory (as distinct from being paged out to auxiliary storage) is important: if too many pages of a process are kept in main memory, then fewer other processes can be ready at any one time. If too few pages of a process are kept in main memory, then its page fault frequency is greatly increased and the number of active (non-suspended) processes currently executing in the system approaches zero.

The working set model states that a process can be in RAM if and only if all of the pages that it is currently using (often approximated by the most recently used pages) can be in RAM. The model is an all-or-nothing model: if the number of pages the process needs grows and there is no room in RAM, the process is swapped out of memory entirely to free the memory for other processes to use.
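
A minimal sketch of this all-or-nothing rule follows (the frame counts and process names are hypothetical, and the working set sizes are assumed to be known): a process is kept resident only if its whole estimated working set fits in the free page frames; otherwise it stays swapped out.

    # Sketch of working-set admission: a process is resident only if its whole
    # estimated working set fits in free frames; otherwise it stays swapped out.
    # All numbers are invented for illustration.
    total_frames = 100
    resident = {}                        # process name -> working set size in frames

    def try_admit(name, ws_size):
        free = total_frames - sum(resident.values())
        if ws_size <= free:
            resident[name] = ws_size     # the whole working set fits: admit
            return True
        return False                     # it does not fit: the process stays swapped out

    try_admit("A", 60)
    try_admit("B", 30)
    print(try_admit("C", 20))            # False: only 10 frames are free, so C is not admitted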

Often a heavily loaded computer has so many processes queued up that, if all the processes were allowed to run for one scheduling time slice, they would refer to more pages than there is RAM, causing the computer to "thrash".

Swapping some processes out of memory means that processes, even those temporarily removed from memory, finish much sooner than they would if the computer attempted to run them all at once. They also finish much sooner than they would if the computer ran only one process at a time to completion, since other processes can run and make progress while one process is waiting on the hard drive or some other global resource.

In other words, the working set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. Thus it optimizes CPU utilization and throughput.

Implementation

The main hurdle in implementing the working set model is keeping track of the working set. The working set window is a moving window. At each memory reference a new reference appears at one end and the oldest reference drops off the other end. A page is in the working set if it is referenced in the working set window.

To avoid the overhead of keeping a list of the last k referenced pages, the working set is often implemented by keeping track of the time t of the last reference, and considering the working set to be all pages referenced within a certain period of time.
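
A sketch of this timestamp-based approximation (the variable names and the value of the period are illustrative): the time of the last reference to each page is recorded, and every page referenced within the last τ ticks of virtual time is considered part of the working set.

    # Sketch: approximate the working set by the time of each page's last reference.
    # Pages whose last reference falls within the last `tau` ticks are in the set.
    last_ref = {}        # page -> virtual time of its most recent reference
    clock = 0

    def reference(page):
        global clock
        clock += 1
        last_ref[page] = clock

    def working_set(tau):
        return {p for p, t in last_ref.items() if clock - t < tau}

    for page in [3, 1, 4, 1, 5, 9, 2, 6]:
        reference(page)
    print(working_set(tau=4))    # {2, 5, 6, 9}: the pages touched by the last four references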

The working set is not itself a page replacement algorithm, but page replacement algorithms can be designed to remove only pages that are not in the working set of a particular process. One example is WSClock, a modified version of the clock algorithm.
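
A simplified sketch of a WSClock-style scan is shown below (the frame record layout, the handling of dirty pages, and the threshold τ are assumptions made for this example, not the full published algorithm): the clock hand skips pages that were referenced recently, that is, pages that appear to be in the working set, and evicts the first sufficiently old, clean page it finds.

    # Simplified WSClock-style victim selection: scan the frames in a circle,
    # skip pages referenced within the last `tau` ticks, and evict the first
    # old, clean page found.  Dirty-page handling is simplified for the sketch.
    def wsclock_victim(frames, hand, now, tau):
        """frames: list of dicts with 'referenced', 'last_use' and 'dirty' keys."""
        for _ in range(2 * len(frames)):           # at most two passes around the clock
            f = frames[hand]
            if f["referenced"]:                    # recently used: clear the bit and skip
                f["referenced"] = False
                f["last_use"] = now
            elif now - f["last_use"] > tau:        # old page, outside the working set
                if not f["dirty"]:
                    return hand                    # clean victim found
                f["dirty"] = False                 # pretend the write-back has completed
            hand = (hand + 1) % len(frames)
        return hand                                # fallback: evict the current frame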

Variants

The working set can be divided into a code working set and a data working set. This distinction is important when code and data are separate at the relevant level of the memory hierarchy: if either working set does not fit in that level, thrashing will occur. In addition to the code and data themselves, on systems with virtual memory the memory-map entries (from virtual memory to physical memory) for the pages of the working set must be cached in the translation lookaside buffer (TLB) for the process to progress efficiently. This distinction exists because code and data are cached in small blocks (cache lines), not entire pages, but address lookup is done at the page level. Thus even if the code and data working sets fit into cache, if the working sets are split across many pages, the virtual-address working set may not fit into the TLB, causing TLB thrashing.
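
A worked example of the effect described above (the TLB size, page size, and working-set figures are hypothetical): even a data working set that fits in cache can exceed the range of addresses the TLB can map at once if it is scattered across many pages.

    # Hypothetical numbers: a 64-entry TLB with 4 KiB pages can map at most
    # 64 * 4 KiB = 256 KiB of virtual address space at once (the "TLB reach").
    tlb_entries = 64
    page_size = 4 * 1024
    tlb_reach = tlb_entries * page_size        # 262144 bytes = 256 KiB

    # A 128 KiB data working set spread thinly over 200 distinct pages still
    # needs 200 TLB entries, so address translation thrashes even though the
    # data itself would fit comfortably in cache.
    pages_touched = 200
    print(pages_touched > tlb_entries)         # True: TLB thrashing is likely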

Analogs of the working set exist for other limited resources, most significantly processes. If a set of processes requires frequent interaction among its members, then it has a process working set that must be coscheduled in order to make progress:[3]

parallel programs have a process working set that must be coscheduled (scheduled for execution simultaneously) for the parallel program to make progress.

If the processes are not scheduled simultaneously – for example, if there are two processes but only one core on which to execute them – then the processes can only advance at the rate of one interaction per time slice.

Other resources include file handles or network sockets. For example, copying one file to another is most simply done with two file handles (one for input, one for output) and thus has a "file handle working set" of size two. If only one file handle is available, copying can still be done, but it requires acquiring a file handle for the input, reading from it (say into a buffer), releasing it, then acquiring a file handle for the output, writing to it, releasing it, then acquiring the input file handle again, and repeating. Similarly, a server may require many sockets, and if the number available is limited it would need to repeatedly release and re-acquire sockets. Rather than causing thrashing, a shortage of such resources typically causes the program simply to fail if it cannot acquire enough of them.
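
A sketch of copying with a file-handle working set of one follows (the function name, file names, and buffer size are illustrative): the single available handle is repeatedly released and re-acquired, alternating between the input and the output.

    # Sketch: copy a file when only one file handle may be open at a time.
    # The function name, file names, buffer size, and offsets are illustrative.
    def copy_with_one_handle(src, dst, bufsize=4096):
        offset = 0
        open(dst, "wb").close()                 # create/truncate the output, then release
        while True:
            with open(src, "rb") as f:          # acquire the handle for input
                f.seek(offset)
                chunk = f.read(bufsize)         # read into a buffer, then release
            if not chunk:
                break
            with open(dst, "ab") as f:          # re-acquire the handle for output
                f.write(chunk)                  # write the buffer, then release
            offset += len(chunk)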

References

  1. Denning, Peter J. (2021-02-02). "Working Set Analytics". ACM Computing Surveys. 53 (6). Association for Computing Machinery (ACM): 1–36. doi:10.1145/3399709. ISSN 0360-0300.
  2. Denning, Peter J. (1968). "The working set model for program behavior" (PDF). Communications of the ACM. 11 (5): 323–333. doi:10.1145/363095.363141. S2CID 207669410.
  3. Ousterhout, J. K. (1982). "Scheduling Techniques for Concurrent Systems" (PDF). Proceedings of Third International Conference on Distributed Computing Systems: 22–30.

Further reading

  • Tanenbaum, Andrew S. (2009). Modern Operating Systems (3rd ed.). pp. 209–210.
  • Denning, Peter J. (1980). "Working Sets Past and Present". IEEE Transactions on Software Engineering. SE-6 (1): 64–84.
  • Silberschatz, A.; Galvin, P. B.; Gagne, G. (2005). Operating System Concepts (7th ed.). Wiley. p. 346.