Talk:Fibonacci search technique
This article is rated Start-class on Wikipedia's content assessment scale.
Largely incorrect
Fibonacci search is not faster than binary search, nor is it primarily useful for sorted arrays. It takes <math>\log_\varphi n</math> probes in the worst case, more by a factor of 1.44 than the <math>\log_2 n</math> probes used by binary search. Its primary use is in searching unimodal arrays, and it is well-described for that use already in Golden section search. I don't see the point in listing it here as a separate article. —David Eppstein (talk) 16:24, 29 May 2008 (UTC)
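For reference (this small derivation is mine, not part of the comment above), the factor 1.44 is simply the change of logarithm base:
<math display="block">\log_\varphi n \;=\; \frac{\log_2 n}{\log_2 \varphi} \;\approx\; \frac{\log_2 n}{0.694} \;\approx\; 1.44\,\log_2 n .</math>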
- David: I am not qualified to comment on accuracy, but consulting reliable sources, and examining the sources cited in the article, should resolve that. If you do not think that the technique warrants a separate article, then you should be proposing that this article be merged to Golden section search. Merely suggesting it in a Talk page post will not make it happen. Fibonacci search already redirects to Golden section search, so merging this article would just create another redirect. Finell (Talk) 19:54, 29 May 2008 (UTC)
- I tried to calculate, for large arrays, how binary search compares to Fibonacci search. In binary search, about <math>\log_2 n</math> comparisons are required for a large array size <math>n</math>. (For simplicity, I assume that the search is not broken off when an element equal to the searched-for item is found. We have to continue searching, for example, when we want to find the first match in an array that may contain duplications.) My calculation goes as follows (with <math>\varphi = (1+\sqrt5)/2 \approx 1.618</math>):
- Because <math>F_k \approx \varphi^k/\sqrt5</math> (see Fibonacci number#Computation_by_rounding), we get <math>k \approx \log_\varphi(n\sqrt5) = \log_\varphi n + \log_\varphi \sqrt5</math>. For large <math>n</math>, this is approximately <math>\log_\varphi n \approx 1.44\,\log_2 n</math>.
- The expected number of comparisons, as a function of <math>k</math>, is <math>c(k) = 1 + \tfrac{F_{k-1}}{F_k}\,c(k-1) + \tfrac{F_{k-2}}{F_k}\,c(k-2)</math>. (With probability <math>\tfrac{F_{k-1}}{F_k}</math>, the searched item is smaller than the probed element at index <math>F_{k-1}</math>.) I assume that this is a linear function in <math>k</math>, so <math>c(k) \approx ak + b</math> for some constant <math>a</math>. Then I get the equality <math>1 = a\left(\tfrac{F_{k-1}}{F_k} + 2\,\tfrac{F_{k-2}}{F_k}\right) \approx a\left(\tfrac1\varphi + \tfrac2{\varphi^2}\right)</math>, which solves to <math>a = \tfrac{\varphi^2}{\varphi^2+1} \approx 0.7236</math>.
- Combining the two results, I get that the expected number of comparisons, as a function of <math>n</math>, is approximately <math>0.7236\,\log_\varphi n \approx 1.04\,\log_2 n</math>.
- So Fibonacci search would require, on average, about 4% more comparisons than binary search (on large input files). Cache effects (see my other comment below) may well lead to more than a 4% acceleration, so Fibonacci search may be faster in practice.
- [Edit:] The same result, about 4% slow-down as compared with binary search on average, was reported by K. J. Overholt: Efficiency of the Fibonacci search method. BIT Numerical Mathematics 13(1)1973:92–96. -- David N. Jansen (talk) 02:51, 30 June 2017 (UTC)
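The following is a minimal numerical sketch of that calculation (my own cross-check, not code from Overholt's paper): it iterates the recurrence for the expected number of comparisons, extracts its asymptotic slope, and compares with the comparisons needed by binary search.
<syntaxhighlight lang="python">
from math import log2, sqrt

phi = (1 + sqrt(5)) / 2

# c(k) = expected comparisons of Fibonacci search on an array of size F_k,
# following the recurrence c(k) = 1 + (F_{k-1}/F_k)*c(k-1) + (F_{k-2}/F_k)*c(k-2).
fib = [1, 1]          # two consecutive Fibonacci numbers to start the recurrence
c = [0.0, 1.0]        # base cases (their exact values do not affect the asymptotic slope)
for k in range(2, 61):
    fib.append(fib[-1] + fib[-2])
    c.append(1 + fib[k - 1] / fib[k] * c[k - 1]
               + fib[k - 2] / fib[k] * c[k - 2])

a = c[-1] - c[-2]     # asymptotic comparisons per level, ~ phi^2/(phi^2 + 1) ~ 0.7236
print(f"comparisons per Fibonacci level: {a:.4f}")
print(f"levels needed relative to binary search: {1 / log2(phi):.4f}")
print(f"Fibonacci/binary comparison ratio: {a / log2(phi):.4f}")
</syntaxhighlight>
The printed ratio comes out at about 1.04, matching the roughly 4% overhead derived above.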
Today's changes, stating more specific circumstances under which Fibonacci search could be an effective replacement for binary search, allay my concerns, so I have removed the disputed tag. —David Eppstein (talk) 15:29, 9 June 2008 (UTC)
- The original motivation for Fibonacci search, as given by Ferguson in his 1960 article, was that a “machine having no binary shift requires a division” to find the next array index in a binary search, while in Fibonacci search, “successive increments are found by subtraction”. -- David N. Jansen (talk) 03:33, 30 June 2017 (UTC)
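To make Ferguson's point concrete, here is a minimal sketch (my own illustration in the style of Knuth's Algorithm F, not Ferguson's original code) in which, after the initial set-up, every new probe index is obtained purely by addition and subtraction, with no division and no shift:
<syntaxhighlight lang="python">
def fibonacci_search(keys, key):
    """Return the index of key in the sorted list keys, or None if absent."""
    n = len(keys)
    if n == 0:
        return None
    # Initialise: find F_k with F_{k+1} - 1 >= n, then probe at i = F_k
    # with p = F_{k-1} and q = F_{k-2} (1-based indices).
    fk_1, fk = 1, 1
    while fk + fk_1 - 1 < n:
        fk_1, fk = fk, fk + fk_1
    i, p, q = fk, fk_1, fk - fk_1
    while True:
        if i > n or key < keys[i - 1]:
            # Key lies in the lower part: step back by q.
            if q == 0:
                return None
            i, p, q = i - q, q, p - q
        elif key > keys[i - 1]:
            # Key lies in the upper part: step forward by q.
            if p == 1:
                return None
            i, p, q = i + q, p - q, q - (p - q)
        else:
            return i - 1            # 0-based index of the match
</syntaxhighlight>
For example, fibonacci_search([2, 3, 5, 7, 11, 13], 11) returns 4.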
What is m?
In step 1 of the algorithm k = m is assigned, but what is m? —Preceding unsigned comment added by 122.169.85.134 (talk) 14:21, 2 November 2010 (UTC)
- See the first line of the section: <math>n = F_m</math> is the array size. —David Eppstein (talk) 14:47, 2 November 2010 (UTC)
I wrote the following implementation of a Fibonacci search, for finding the largest non-negative number satisfying some predicate:
<syntaxhighlight lang="python">
def search(test):
    # Standard binary search invariants:
    #   i <= lo then test(i)
    #   i >= hi then not test(i)
    # Extra invariants:
    #   hi - lo = b
    #   a, b = F_{k-1}, F_k
    a, b = 0, 1
    lo, hi = 0, 1
    while test(hi):
        a, b = b, a + b
        hi = b
    while b != 1:
        mi = lo + a
        if test(mi):
            lo = mi
            a, b = 2*a - b, b - a
        else:
            hi = mi
            a, b = b - a, a
    return lo

>>> search(lambda n: n**2 <= 25)
5
>>> search(lambda n: 2**n <= 256)
8
</syntaxhighlight>
It is useful when you're trying to do binary search over a group, where you can't take averages of indices. If anybody finds it clearer than the current descriptions, I can clean it up and add a description to the wiki page. Thomasda (talk) 00:24, 15 June 2014 (UTC)
Huh?
[edit]"Compared to binary search where the sorted array is divided into two arrays which are both examined recursively and recombined, Fibonacci search only examines locations whose addresses have lower dispersion."
- (1) The first part reads like a description of binary mergesort. In binary search only one subarray is examined and there is no recombining. (2) If I can teach algorithms for 36 years without knowing what "lower dispersion" means, it probably needs a definition. It is not one of the 21 possibilities listed at dispersion. Knuth (Vol 3, Ed 2) doesn't have it. Anyway, lower than what? And where is a source? McKay (talk) 06:30, 21 October 2016 (UTC)
- I stumbled upon this article and it caught my interest. I also was at first confused about what “dispersion” might actually mean, but then I made up the following possible interpretation: “dispersion” measures how uniformly the memory accesses are distributed over memory locations. In Fibonacci search, array elements whose index is a Fibonacci number (or a sum of very few Fibonacci numbers) are accessed much more often than other elements, so the dispersion is low. These array elements would be copied to the cache and stay there, accelerating subsequent searches. The same happens in principle also in binary search – so dispersion is not the relevant effect to explain the efficiency of Fibonacci search. However, the array indices probed in binary search are often spaced in such a way that, in a direct-mapped (non-associative) cache, multiple often-accessed array elements would be mapped to the same cache line, leading to fewer cache hits. This effect is strongest if the total number of array elements is nearly a power of two. Can someone check my reasoning? If it is correct, we might copy it to the article.
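Here is a rough sketch (my own experiment with made-up cache parameters, not something from the article or a cited source) of that cache-aliasing guess: collect the indices probed in the first few levels of binary search and of Fibonacci search on an array whose size is a power of two, and count how many distinct sets of a hypothetical direct-mapped cache they touch.
<syntaxhighlight lang="python">
from collections import Counter

def binary_probes(n, depth=4):
    """Indices probed in the first `depth` levels of binary search on size n."""
    probes, intervals = [], [(0, n)]          # half-open intervals [lo, hi)
    for _ in range(depth):
        nxt = []
        for lo, hi in intervals:
            if hi <= lo:
                continue
            mid = (lo + hi) // 2
            probes.append(mid)
            nxt += [(lo, mid), (mid + 1, hi)]
        intervals = nxt
    return probes

def fibonacci_probes(n, depth=4):
    """Indices probed in the first `depth` levels of Fibonacci search on size n."""
    a, b = 1, 1                               # consecutive Fibonacci numbers
    while b < n + 1:
        a, b = b, a + b
    probes, intervals = [], [(0, b, a)]       # (offset, F_k, F_{k-1}): F_k - 1 slots
    for _ in range(depth):
        nxt = []
        for off, fk, fk1 in intervals:
            probe = off + fk1 - 1
            if fk <= 2 or probe >= n:         # nothing left, or probe past the array end
                continue
            probes.append(probe)
            nxt += [(off, fk1, fk - fk1),
                    (off + fk1, fk - fk1, 2 * fk1 - fk)]
        intervals = nxt
    return probes

CACHE_SETS = 64          # hypothetical direct-mapped cache, one element per line
n = 2 ** 16              # array size that is exactly a power of two
for name, probes in (("binary", binary_probes(n)), ("Fibonacci", fibonacci_probes(n))):
    sets = Counter(p % CACHE_SETS for p in probes)
    print(f"{name:9s}: {len(probes):2d} hot probes fall into {len(sets):2d} distinct cache sets")
</syntaxhighlight>
With these parameters the fifteen hottest binary-search probes all land in the same cache set, while the Fibonacci-search probes scatter over as many sets as there are probes; whether this is what the article's author meant by “dispersion” is of course still a guess.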
- Another use of the Fibonacci search technique, mentioned in the reference linked from the article, might be to accelerate searching an element on a sorted tape file, where the access time depends on the distance from the current position of the tape head. However, to get the best effect here, Step 4 [Increase i] would have to be replaced by a step that sets i (the desired position of the tape head) not to <math>i + q</math> (i. e. <math>q = F_{k-2}</math> elements from the current position), but to <math>i + p - q</math> (i. e. <math>p - q = F_{k-3}</math> elements from the current position, which is a Fibonacci number smaller than <math>q</math>). Note that then, while the position of the tape head still divides the relevant portion of the file (approximately) in the golden ratio, as in the current section, it may be that it sometimes is <math>1 : \varphi</math> instead of <math>\varphi : 1</math>, similar to Golden section search for the case <math>f_{4a}</math> is found (see the figure and explanation of Golden section search). -- David N. Jansen (talk) 02:51, 30 June 2017 (UTC)
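A tiny numeric illustration of that last point (my own toy numbers, not anything from Ferguson's paper): with 54 = 55 − 1 elements still ahead of the head after an unsuccessful “greater than” comparison, jumping 34 elements ahead (the usual Fibonacci step) or only 21 elements ahead (the modified, shorter step) both split the remaining stretch roughly in the golden ratio, but the shorter jump moves the tape head less:
<syntaxhighlight lang="python">
ahead = 55 - 1                      # elements still ahead: one less than a Fibonacci number
for jump in (34, 21):               # usual Fibonacci jump vs. the smaller one proposed above
    left, right = jump - 1, ahead - jump    # elements before / after the new probe position
    print(f"jump {jump}: split {left} / {right}, ratio {left / right:.3f}")
</syntaxhighlight>
The two ratios come out near <math>\varphi \approx 1.618</math> and <math>1/\varphi \approx 0.618</math>, which is the “<math>1 : \varphi</math> instead of <math>\varphi : 1</math>” situation mentioned above.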