
Cell-probe model

From Wikipedia, the free encyclopedia

In computer science, the cell-probe model is a model of computation similar to the random-access machine, except that all computation is free and only memory accesses are counted. This model is useful for proving lower bounds for data structure problems.

Overview


The cell-probe model is a modification of the random-access machine model, in which computational cost is only assigned to accessing memory cells.

The model is intended for proving lower bounds on the complexity of data structure problems. One type of such problem has two phases: a preprocessing phase and a query phase. The input to the first phase, the preprocessing phase, is a set of data from which to build some structure out of memory cells. The input to the second phase, the query phase, is a query parameter. The query must consult the data structure in order to compute its result; for example, a query may ask whether the query parameter was included in the original input data set. Another type of problem involves both update operations, which modify the data structure, and query operations. For example, an update may add an element to the structure or remove one. In both cases, the cell-probe complexity of the data structure is measured by the number of memory cells accessed during preprocessing, queries and (if relevant) updates. The cell-probe complexity is a lower bound on the time complexity of the corresponding operations on a random-access machine, where memory transfers are among the operations counted in measuring time.

An example of such a problem is the dynamic partial sum problem.[1][2]
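As a minimal illustration of the model's accounting, an array of memory cells can be instrumented so that only reads and writes are charged; the class and method names below are ours for illustration, not part of the model's formal definition:

```python
class ProbeCountingMemory:
    """An array of cells in which only memory accesses are charged.

    All other computation (arithmetic, comparisons, control flow) is
    free, as in the cell-probe model.
    """

    def __init__(self, num_cells):
        self.cells = [0] * num_cells
        self.probes = 0  # number of cell accesses charged so far

    def read(self, i):
        self.probes += 1
        return self.cells[i]

    def write(self, i, value):
        self.probes += 1
        self.cells[i] = value


mem = ProbeCountingMemory(8)
for i in range(8):
    mem.write(i, i * i)                     # 8 probes: preprocessing
total = sum(mem.read(i) for i in range(8))  # 8 probes: one query
assert mem.probes == 16
```

The cell-probe cost of an algorithm run against such a memory is exactly the final value of `probes`, regardless of how much free computation happens in between.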

History


Andrew Yao's 1981 paper "Should Tables Be Sorted?"[3] is considered the introduction of the cell-probe model. Yao used it to establish a lower bound on the number of memory cell "probes", or accesses, necessary to determine whether a given query datum exists within a table stored in memory. In 1989, Fredman and Saks initiated the study of cell-probe lower bounds for dynamic data-structure problems (i.e., those involving updates and queries), and introduced the notation CPROBE(b) for the cell-probe model in which a memory cell (word) consists of b bits.[4]

Notable results


Searching Tables


Yao[3] considered a static data-structure problem where one has to build a data structure ("table") to represent a set of n elements out of {1, …, m}. The query parameter is a number q, and the query has to report whether q is in the table. A crucial requirement is that the table consist of exactly n entries, where each entry is an integer between 1 and m. Yao showed that as long as the table size n is bounded independently of m and m is large enough, a query must perform ⌈log₂(n+1)⌉ probes in the worst case. This shows that a sorted table together with binary search for queries is an optimal scheme in this restricted setting.
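The optimal scheme in this restricted setting, a sorted table queried by binary search, can be sketched as follows; the helper names are ours, and only reads of the table are counted as probes:

```python
import math

def build_table(data):
    """Preprocessing: store the n elements in sorted order."""
    return sorted(data)

def member(table, q):
    """Query: binary search, counting each table read as one probe."""
    probes = 0
    lo, hi = 0, len(table) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1          # one cell probe: read table[mid]
        x = table[mid]
        if x == q:
            return True, probes
        if x < q:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, probes


table = build_table([3, 1, 4, 5, 9, 2, 6])  # n = 7 distinct elements
n = len(table)
# The worst case over all queries never exceeds ceil(log2(n + 1)).
worst = max(member(table, q)[1] for q in range(11))
assert worst <= math.ceil(math.log2(n + 1))
```

Binary search meets Yao's lower bound for bounded n, which is what makes the sorted table optimal in this setting.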

It is worth noting that in the same paper, Yao[3] also showed that if the problem is relaxed to allow the data structure to store n + 1 entries, then queries can be performed using only two probes.[note 1] This upper bound, like the lower bound described above, requires m to be sufficiently large as a function of n. Remarkably, this upper bound uses only one additional table entry compared with the setting for which the lower bound applies.

Dynamic Partial Sums


The dynamic partial sum problem defines two operations: Update(k, v), which sets the value in an array A at index k to be v, and Sum(k), which returns the sum of the values in A at indices 0 through k. A naïve implementation storing A verbatim would take O(1) time for Update and O(k) time for Sum.[5]
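The naïve implementation can be sketched directly (illustrative code, not taken from the cited notes):

```python
class NaivePartialSums:
    """Store A verbatim: Update touches one cell, Sum scans k + 1 cells."""

    def __init__(self, n):
        self.A = [0] * n

    def update(self, k, v):
        self.A[k] = v               # O(1): a single cell write

    def sum(self, k):
        return sum(self.A[:k + 1])  # O(k): reads cells 0 through k


ps = NaivePartialSums(8)
ps.update(2, 5)
ps.update(5, 7)
assert ps.sum(4) == 5
assert ps.sum(7) == 12
```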

Instead, the values can be stored as the leaves of a balanced binary tree whose inner nodes each store the sum over the subtree rooted at that node. In this structure, Update requires O(log n) time to update each node on the leaf-to-root path, and Sum similarly requires O(log n) time to traverse the tree from root to leaf, summing the values of all subtrees to the left of the query index.
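The tree scheme can be sketched as an iterative segment tree in which the values occupy the leaves and each inner node stores its subtree's sum; the layout below (rounding n up to a power of two) is one standard choice, not the only one:

```python
class TreePartialSums:
    """Partial sums via a perfect binary tree of subtree sums."""

    def __init__(self, n):
        size = 1
        while size < n:          # round n up to a power of two
            size *= 2
        self.size = size
        self.tree = [0] * (2 * size)  # tree[size + i] is leaf i

    def update(self, k, v):
        i = self.size + k
        self.tree[i] = v
        i //= 2
        while i >= 1:            # O(log n): refresh the leaf-to-root path
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]
            i //= 2

    def sum(self, k):
        # O(log n): walk from the root toward leaf k, adding each whole
        # left subtree that lies entirely to the left of the path.
        total, i = 0, 1
        lo, hi = 0, self.size - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if k <= mid:
                i, hi = 2 * i, mid
            else:
                total += self.tree[2 * i]  # entire left subtree counted
                i, lo = 2 * i + 1, mid + 1
        return total + self.tree[i]


tps = TreePartialSums(8)
tps.update(2, 5)
tps.update(5, 7)
assert tps.sum(4) == 5 and tps.sum(7) == 12
```

Both operations touch only one root-to-leaf path, so each performs O(log n) cell probes.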

Improving on a result of Fredman and Saks, Mihai Pătrașcu used the cell-probe model and an information transfer argument to show that the partial sums problem requires Ω(log n) time per operation in the worst case (i.e., the worst of the query and update times must be at least this large), assuming Θ(log n) bits per word.[1][2] He further exhibited the trade-off curve between update time and query time and investigated the case in which updates are restricted to small numbers (of δ bits, for δ below the word size).

Disjoint Set Maintenance (Union-Find)


In the disjoint-set data structure, the structure represents a collection of disjoint sets; there is an update operation, called Union, which unites two sets, and a query operation, called Find, which identifies the set to which a given element belongs. Fredman and Saks proved that in the model CPROBE(log n), any solution for this problem requires Ω(m α(m, n)) probes in the worst case (even in expectation) to execute a sequence of m unions and finds on n elements, where α is an inverse of Ackermann's function.[4] This shows that the classic data structure described in the article on disjoint-set data structure is optimal.
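The classic structure that this bound shows to be optimal, union by rank with path compression, can be sketched as follows (illustrative code):

```python
class DisjointSets:
    """Disjoint sets with union by rank and path compression."""

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own set
        self.rank = [0] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:   # path compression: repoint the
            self.parent[x], x = root, self.parent[x]  # path at the root
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:  # union by rank: attach the
            rx, ry = ry, rx                # shallower tree to the deeper
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1


ds = DisjointSets(6)
ds.union(0, 1)
ds.union(1, 2)
assert ds.find(0) == ds.find(2)
assert ds.find(3) != ds.find(0)
```

Its amortized cost per operation is bounded by an inverse-Ackermann term, matching the Fredman–Saks lower bound.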

Approximate Nearest Neighbor Searching


The exact nearest neighbor search problem is to determine the point in a set of input points that is closest to a given query point. An approximate version of this problem is often considered instead, since many applications of the problem involve very high-dimensional spaces, and solving it exactly in high dimensions requires time or space exponential in the dimension.[6]

Chakrabarti and Regev proved that the approximate nearest neighbor search problem on the d-dimensional Hamming cube, using polynomial storage and word size polynomial in d, requires a worst-case query time of Ω(log log d / log log log d).[6] This proof used the cell-probe model and information-theoretic techniques from communication complexity.

The Cell-Probe Model versus Random Access Machines


In the cell-probe model, limiting the range of values that can be stored in a cell is paramount (otherwise one could encode the whole data structure in a single cell). The idealized random-access machine used as a computational model in computer science does not impose a limit on the contents of each cell (in contrast to the word RAM). Thus cell-probe lower bounds apply to the word RAM but not to the idealized RAM. Certain techniques for cell-probe lower bounds can, however, be carried over to the idealized RAM with an algebraic instruction set, yielding similar lower bounds.[7]


References


Notes

  1. ^ See Theorem 3 of Yao's paper.

Citations

  1. ^ a b Pătraşcu, Mihai; Demaine, Erik D. (2006). "Logarithmic lower bounds in the cell-probe model". SIAM Journal on Computing. 35 (4): 932–963. arXiv:cs/0502041. Bibcode:2005cs........2041P. doi:10.1137/s0097539705447256. S2CID 2202874.
  2. ^ a b Pătraşcu, Mihai. "Lower Bounds for Dynamic Partial Sums" (PDF). Retrieved 9 April 2014.
  3. ^ a b c Yao, Andrew Chi-Chih (July 1981). "Should Tables Be Sorted?". J. ACM. 28 (3): 615–628. doi:10.1145/322261.322274.
  4. ^ a b Fredman, Michael; Saks, Michael (1989). "The cell probe complexity of dynamic data structures". Proceedings of the twenty-first annual ACM symposium on Theory of computing - STOC '89. Vol. 21. pp. 345–354. doi:10.1145/73007.73040. ISBN 0897913078. S2CID 13470414.
  5. ^ Zatloukal, Kevin (November 10, 2010). "Notes on "Logarithmic Lower Bounds in the Cell-Probe Model"" (PDF). Retrieved 9 April 2014.
  6. ^ Chakrabarti, Amit; Regev, Oded (2004). "An Optimal Randomised Cell Probe Lower Bound for Approximate Nearest Neighbour Searching". 45th Annual IEEE Symposium on Foundations of Computer Science. pp. 473–482. doi:10.1109/FOCS.2004.12. ISBN 0-7695-2228-9. S2CID 3004976.
  7. ^ Ben-Amram, Amir M.; Galil, Zvi (2002). "Lower bounds for dynamic data structures on algebraic RAMs". Algorithmica. 32 (3): 364–395. doi:10.1007/s00453-001-0079-6. S2CID 22324845.