Talk:Fibonacci heap
This article is rated C-class on Wikipedia's content assessment scale.
Mergeable priority queue ADT
Has anyone else ever heard it mentioned that Fibonacci heaps can be used to efficiently implement mergeable priority queues? If you're familiar with CS then you can see from the definitions that it's clearly true (and that merging is far more efficient than if a traditional heap had been used for the priority queues) though I have yet to find an application where merging priority queues needs to be done frequently enough to make it useful :) Anyway, I'd like some opinions on adding that tidbit to the article. DaveWF 07:48, 14 February 2007 (UTC)
Edmonds' algorithm depends on the mergeable property. In that algorithm you have collapsed node sets and are interested in the optimal incoming edge over all nodes in the set. Whenever such a collapsing happens, the individual queues are merged. 2.207.170.18 (talk) 23:15, 25 November 2014 (UTC) Joerg Sonnenberger
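For illustration, here is a minimal sketch of why the merge (union) operation is cheap; the class and field names are hypothetical, and a real implementation would splice circular doubly linked root lists rather than concatenate Python lists:

    class FibHeapSketch:
        """Sketch of a Fibonacci-heap-like structure, reduced to the parts relevant to merging."""
        def __init__(self):
            self.roots = []       # top-level trees, kept in no particular order
            self.min_node = None  # cached pointer to the root with the smallest key

        def insert(self, key):
            node = {"key": key, "children": []}
            self.roots.append(node)
            if self.min_node is None or key < self.min_node["key"]:
                self.min_node = node
            return node

        def merge(self, other):
            # Union just concatenates the root lists and compares the two cached minima;
            # nothing is re-heapified.  With circular doubly linked root lists the splice
            # is O(1); the Python list concatenation here is only for readability.
            self.roots += other.roots
            if other.min_node is not None and (
                self.min_node is None or other.min_node["key"] < self.min_node["key"]
            ):
                self.min_node = other.min_node
            return self

By contrast, merging two ordinary binary heaps generally means rebuilding one of them, which is why merge-heavy uses such as Edmonds' algorithm benefit from the structure.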
Why "Fibonacci"
Although the article mentions that Fibonacci numbers "are used" in the running time analysis, and that Fibonacci numbers give a constraint for subtree size given the order of a node, the "why" of this is not described. I will look into it and add an explanation if I can, but if I don't, someone else could. --CyHawk (talk) 08:42, 15 August 2008 (UTC)
- This article is very confusing, technically rambling, and does not provide a good intro to the topic. In fact, the basic data structure is NEVER DEFINED clearly or explicitly. The rest of the comments are more theory of computer science. Frankly, we need curated sources, or better yet, send people to read a book rather than waste their time. 2601:647:C001:97C0:CC7:1833:5E7B:4D00 (talk) 17:18, 12 November 2023 (UTC)
I think the analysis relies on the fact that the number of ancestors of any node is exponential in its degree. This is proven by showing that the number of ancestors grows at least as fast as the Fibonacci numbers do, since F(n) ~ φ^n. In particular, you can show that A(i) ≥ F(i+2), and so since F(i+2) ≥ φ^i, we get A(i) ≥ φ^i, and therefore i ≤ log_φ A(i) = O(log n). A(i) denotes the number of ancestors of a node of degree i. Something like that. WuTheFWasThat (talk) 17:28, 13 September 2009 (UTC)
- Descendants. The number of descendants of a node of degree i (including the node itself) is less than a Fibonacci number. A node of degree 0 has fewer than 2 descendants. A node of degree 1 has fewer than 3 descendants. A node of degree 2 has fewer than 5 descendants. A node of degree 3 has fewer than 8 descendants. Etc. —David Eppstein (talk) 19:48, 12 November 2023 (UTC)
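For illustration, a small sketch (assuming the usual lower-bound recurrence: the i-th child linked to a node had degree at least i−1 at link time and loses at most one of its own children afterwards) showing that the smallest possible number of descendants of a degree-d node follows the Fibonacci sequence:

    def fib(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    def min_descendants(d):
        # Smallest possible number of descendants (including the node itself)
        # of a node of degree d under the cascading-cut marking rule:
        # S(0) = 1 and S(d) = 2 + S(0) + S(1) + ... + S(d-2) for d >= 1.
        if d == 0:
            return 1
        return 2 + sum(min_descendants(i) for i in range(d - 1))

    for d in range(8):
        print(d, min_descendants(d), fib(d + 2))   # the two columns agree: 1, 2, 3, 5, 8, 13, ...

Since the subtree of a degree-d node therefore contains at least F(d+2) ≥ φ^d nodes, the maximum degree in an n-node heap is O(log n), which is where the logarithm in extract-min comes from.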
Summary of running times
The linked list deletemin and delete operations assume a search is required. If one is given a pointer to the item that must be deleted, then deletion can be done in Theta(1) worst-case time. (In fact, all the running times should be made more precise - I would list them as Theta with an explanation that this is worst-case running time.) —Preceding unsigned comment added by Trachten (talk • contribs) 15:14, 7 October 2009 (UTC)
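For illustration, a minimal sketch of the Theta(1) deletion-by-pointer mentioned above, assuming an unsorted doubly linked list (node and field names are hypothetical):

    class Node:
        def __init__(self, value):
            self.value = value
            self.prev = None
            self.next = None

    def delete(node):
        # Unlinking touches only the node's two neighbours, so the cost is
        # constant regardless of the length of the list; no search is needed
        # when the caller already holds a pointer to the node.
        if node.prev is not None:
            node.prev.next = node.next
        if node.next is not None:
            node.next.prev = node.prev
        node.prev = node.next = None

A head (or cached minimum) pointer, if one is kept, needs only the same constant amount of patching.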
This list seems to be a little inconsistent. Are accessmin and deletemin really possible in O(1) for binary trees? Also, the statement for linked lists appears to be confusing, as it depends on which case (insertion or removal) is preferred by an implementation.--85.178.10.62 (talk) 20:01, 31 August 2008 (UTC)
I think it is misleading that this table states that the insert operation is O(log(n)) on a binary heap, and doesn't mention anything about average time. After each of the log(n) swaps that *could possibly* occur during insertion into a binary heap, the likelihood that the swapping must continue decreases (on average by a factor of about 2). Thus, insertion into a binary heap averages about O(1). (1/2 + 1/4 + 1/8 + ... = 1.) --128.187.80.2 (talk) 18:26, 30 July 2009 (UTC)
- The table is only concerned with worst case and amortized worst case complexity.Hiiiiiiiiiiiiiiiiiiiii (talk) 17:37, 7 March 2011 (UTC)
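For what it's worth, the averaging claim above is easy to check empirically; a quick sketch (assuming uniformly random keys, which is what the 1/2 + 1/4 + ... argument presupposes):

    import random

    def insert(heap, key):
        """Standard binary min-heap sift-up; returns the number of swaps performed."""
        heap.append(key)
        i, swaps = len(heap) - 1, 0
        while i > 0 and heap[(i - 1) // 2] > heap[i]:
            heap[i], heap[(i - 1) // 2] = heap[(i - 1) // 2], heap[i]
            i = (i - 1) // 2
            swaps += 1
        return swaps

    random.seed(0)
    heap, total, n = [], 0, 100_000
    for _ in range(n):
        total += insert(heap, random.random())
    print(total / n)  # stays around a small constant instead of growing like log n

The worst case for a single insertion is still Theta(log n) (e.g. keys arriving in decreasing order), which is what the table reports.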
What about adding sorted linked list and/or sorted (dynamic) array to this summary? -- akerbos 17:55 Oct 23 2010 —Preceding unsigned comment added by 95.80.12.63 (talk) 15:55, 23 October 2010 (UTC)
This section does not seem to belong here at all. I shall remove it if nobody opposes. --Yecril (talk) 19:58, 17 January 2014 (UTC)
Introduction does little to explain how the Fibonacci heap works
The introduction appears to do less to provide an overview of how the Fibonacci heap works, and more to give a mind-numbing discussion of Big-O notation garbage. I believe that this is a fatal error. 130.56.92.62 (talk) 17:03, 19 May 2011 (UTC)
decrease key should update the minimum pointer, shouldn’t it?
It seems necessary when the key is decreased below the minimum. I understand it requires constant time, but still… --Yecril (talk) 19:46, 17 January 2014 (UTC)
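For illustration, a sketch of where that constant-time update would go (field names hypothetical; the cut and cascading-cut machinery is elided):

    def decrease_key(heap, node, new_key):
        assert new_key <= node.key, "decrease-key must not increase the key"
        node.key = new_key
        # ... cut the node from its parent and perform cascading cuts if the
        #     heap property was violated (omitted in this sketch) ...
        # Final constant-time step: keep the cached minimum pointer correct.
        if new_key < heap.min_node.key:
            heap.min_node = node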
Complexity in entry section
Describing find-minimum as amortized O(1) is misleading. Given the implementation described afterwards, it is a constant-time operation; this is a stricter bound. Same for insert and union. The comparison with binomial heaps is also a bit confusing, as three sentences are used but only two groups are discussed. 2.207.170.18 (talk) 23:25, 25 November 2014 (UTC) Joerg Sonnenberger
@David Eppstein: Thanks for the clarifying edits and your explanations in the summaries. However, I still don't understand how the sum of two amortized complexities can be a worst-case complexity. Or, maybe, some knowledge about e.g. insert's and delete's worst-case time is tacitly used to infer the sequence's worst-case time? - Jochen Burghardt (talk) 09:58, 5 February 2016 (UTC)
- The sum of amortized times for any sequence of operations in any data structure, starting from when it is initialized and including all the operations that are performed, is a valid bound on the worst case time for the same operation sequence. That's what amortized time bounds mean. If you're using the potential function method of amortization, the changes to the potential telescope out, leaving you only the sum of actual times. —David Eppstein (talk) 16:46, 5 February 2016 (UTC)
- Oops, I confused amortized time and some notion of average time; I should have read the (somewhat hidden) formal part of amortized analysis first. Thanks for your explanation again. - Jochen Burghardt (talk) 11:10, 7 February 2016 (UTC)
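For reference, the telescoping described above, written out with a potential function Φ (actual cost c_i, amortized cost a_i, Φ_0 = 0 and Φ_m ≥ 0):

    a_i = c_i + \Phi_i - \Phi_{i-1}
    \quad\Longrightarrow\quad
    \sum_{i=1}^{m} a_i \;=\; \sum_{i=1}^{m} c_i + \Phi_m - \Phi_0 \;\ge\; \sum_{i=1}^{m} c_i

so the sum of the amortized bounds is a genuine upper bound on the total actual time of any operation sequence.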
In the infobox, there is information about average complexity, whereas in the article the complexity is described with amortized complexity. They are not the same, and amortized complexity is stronger than average complexity. The difference is that average complexity makes assumptions about what is an "average input", meaning that under certain circumstances, if the input wasn't random for example, the actual complexity might be a lot worse. On the other hand, amortized makes no assumptions, meaning that you can't forge a specific input for which it is going to be slow, in the long run. To be clear: if I were to run a lot of operations, each with O(1) average complexity but O(n) worst-case time (say, I run m of them), most of the time I'm getting O(m) complexity, but nothing prevents me from getting O(nm) complexity, especially if I carefully crafted the input. Now, I run m operations, each with O(1) amortized complexity, and O(n) worst-case time. I'm going to get O(m) complexity, no matter what. — Preceding unsigned comment added by TheBlackBeans (talk • contribs) 07:41, 11 February 2021 (UTC)
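For illustration, the textbook example of that distinction is the doubling dynamic array: a single append can cost O(n), but no input sequence can make m appends cost more than O(m), which is exactly the amortized guarantee. A minimal sketch (hypothetical class name):

    class DoublingArray:
        def __init__(self):
            self.data = [None]   # capacity 1
            self.size = 0

        def append(self, x):
            # Worst case for one append is O(n): the copy when capacity is full.
            # Amortized over any sequence of m appends the total work is O(m),
            # because each doubling pays for the cheap appends that preceded it;
            # no adversarial input changes this.
            if self.size == len(self.data):
                self.data = self.data + [None] * len(self.data)   # double the capacity (O(n) copy)
            self.data[self.size] = x
            self.size += 1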
Time complexity of extract minimum
The article says: "Therefore, the amortized running time of this phase is O(d) = O(log n)." This seems wrong to me. I think it probably should be: "O(d) ⊇ O(log n)" McGucket (talk) 16:10, 27 August 2017 (UTC)
- If you're using subset-O notation, you have it backwards, but as stated it is a correct usage of the more common =-O notation. —David Eppstein (talk) 22:41, 27 August 2017 (UTC)
- You're right. I messed the subset symbol up ('p' instead of 'b'). The "=-O" notation is just stupid. It doesn't make any sense. It talks about an equality when there is none. In this case, it's especially bad because both sides of the "equation" are sets of functions, so you have to read it a specific way, even though this is never the case with an equals sign in any other field. I have even seen lecture slides in which the "=-O" notation is written the other way around when talking about 2 functions: f = O() = g McGucket (talk) 22:50, 28 August 2017 (UTC)
- See WP:SOAPBOX. We should be following field-standard conventions, not pushing minority conventions merely because we think they're better. And wake me up when you have an alternative subset-based semantics that also works in contexts like n^{O(1)} or 2^{O(n)}. —David Eppstein (talk) 00:14, 29 August 2017 (UTC)