
Wikipedia:Reference desk/Archives/Mathematics/2017 September 25

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


September 25


Given a rational number x, is there a simple expression f(x) for its sum up the Stern-Brocot tree (the sum of the number and all its parents up to 1/1; for example, f(3/5) = 3/5 + 2/3 + 1/2 + 1/1 = 83/30)? 68.0.147.114 (talk) 16:43, 25 September 2017 (UTC)[reply]

It sounds unlikely to me that there would be anything simpler than using the algorithm to generate the sequence and add them up, but it would be quite interesting to be proved wrong. Anyway, you might be interested in this nice AMS feature column on them: Trees, Teeth, and Time: The mathematics of clock making Dmcq (talk) 20:05, 25 September 2017 (UTC)[reply]
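For what it's worth, the brute-force approach is easy to code: descend from the root 1/1 by mediants until you reach x, summing every node visited along the way (the path is finite because every positive rational appears exactly once in the tree). A sketch, with a function name of my own choosing:

```python
from fractions import Fraction

def stern_brocot_sum(x: Fraction) -> Fraction:
    """Sum x and all its Stern-Brocot ancestors, up to and including 1/1.

    Descends from the root by mediants; every node on the path from
    the root down to x is an ancestor of x, so we just accumulate them.
    """
    ln, ld = 0, 1        # left bound  0/1
    rn, rd = 1, 0        # right bound "1/0" (acts as infinity)
    total = Fraction(0)
    while True:
        mn, md = ln + rn, ld + rd      # mediant of the two bounds
        m = Fraction(mn, md)
        total += m
        if m == x:
            return total
        if x < m:
            rn, rd = mn, md            # descend into the left subtree
        else:
            ln, ld = mn, md            # descend into the right subtree

print(stern_brocot_sum(Fraction(3, 5)))   # → 83/30, matching the example
```

The number of steps equals the sum of the terms of the continued-fraction expansion of x, so while this doesn't give a closed form, it is a fast exact computation.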

Looking for an equilibrium stable solution for a lowest-unique-number game. What have I done wrong?


The following question is inspired by the Maths puzzle challenge in the Guardian today: [1]

Suppose we have a game where independent players each choose a positive number, the winner being the one who picks the lowest unique number. Can we find an equilibrium stable solution for the game?

-- ie can we find a probability function p_k so that if each of the players independently chose their number weighted by that probability, then whichever number a player picked, they would expect the same chance of success. With N players, that should work out to give each the chance of success 1/N; so we seek Pr(success | choose k) = 1/N for each k in 1, 2, 3, ...

Success requires the combination of (a) no other player having chosen the same number; and (b) no lower number having already won.

The probability for (a) should be close to e^{-(N-1)p_k}, the probability of getting zero from a Poisson distribution with mean (N-1)p_k (the other N-1 players each choosing number k with probability p_k).

The probability for condition (b) on its own, by construction, is 1 - (k-1)/N; = (N-k+1)/N.

If (a) and (b) could be taken as independent, this would give the requirement:

e^{-(N-1)p_k} × (N-k+1)/N = 1/N
=> (N-1)p_k = ln(N-k+1)
=> p_k = ln(N-k+1)/(N-1)

This appears to fall off quite nicely in a way that makes qualitative sense -- the higher numbers are a bit less likely to be bet on, so are more likely to be empty; compensating for the opposite trend that the higher the number, the more likely a lower number will already have won.

However, when I tabulate the proposed values of p_k for N = 100, they add up to over 3.67 -- a lot more than one. So something is very wrong.
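The over-count is easy to reproduce. Under the independence assumption the weights come out (in the reconstruction above) as p_k = ln(N-k+1)/(N-1), and their sum is ln(N!)/(N-1); taking N = 100:

```python
import math

N = 100

# Weights from the (flawed) independence assumption:
#   e^{-(N-1) p_k} * (N-k+1)/N = 1/N   =>   p_k = ln(N-k+1)/(N-1)
p = [math.log(N - k + 1) / (N - 1) for k in range(1, N + 1)]

total = sum(p)                         # equals ln(N!)/(N-1)
print(f"sum of proposed weights: {total:.4f}")   # ≈ 3.674, far above 1
```

The sum ln(N!)/(N-1) grows like ln N, so the normalisation failure only gets worse for larger N.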

I can see that the assumption that I could treat (a) and (b) independently could be wrong -- the knowledge that there has been no winning number in 1, ..., k-1 perhaps gives additional information as to how many players have chosen numbers in that range, which could change the conditional probability of none of them having chosen number k. But would it make such a drastic difference? Is there a qualitative argument that this should drive the weight given to higher numbers down so much, if each number is still to have Pr(success) = 1/N?

And if the simple combination of (a) and (b) is what is at fault, is there a way to work around it? A paper that treats a more advanced version of the question [2] goes via a probability generating function. But would that get round the problem? I'm not seeing why.

Or have I made a different mistake somewhere?

Any thoughts would be very much appreciated, because it's been starting to do my head in. Jheald (talk) 23:41, 25 September 2017 (UTC)[reply]

You're talking about a Unique bid auction. The article points to some studies of such auctions but doesn't describe the results; still, it should give you a start. By the way, I haven't read much of what you said, but your supposition that the chance of success should be 1/N is wrong - it may be that no-one gets a unique lowest number. It might be an idea to try N=2 first. Dmcq (talk) 08:01, 26 September 2017 (UTC)[reply]
By the way I just had a think about that N=2 case and if you limit the top number then you get a version of the Unexpected hanging paradox. Dmcq (talk) 09:25, 26 September 2017 (UTC)[reply]
Well that's interesting. For N=2 it doesn't work at all, they'd both choose 1. For N>2 there doesn't seem to be a problem if a limiting number is set. All the other players may be eliminated because of clashes! Thanks for the puzzle. Dmcq (talk) 14:04, 26 September 2017 (UTC)[reply]
Thanks User:Dmcq. I've now looked at a couple of papers specifically on this variant of the unique bid problem -- this variant is called the "Lowest Unique Positive Integer" (LUPI) problem, distinguished in that one doesn't have to worry about the value of the bids (though these are usually trivial in the auction problem, so it makes little difference, and the broader literature on "Lowest Unique Bid Auctions" (LUBA) is certainly relevant). A rejoinder to the first paper (Flitney response to Zeng et al 2007) gives the exact ESS solution for N=2, N=3 and N=4; the second (Östling et al 2011) plots an ESS for N=53,783 with analysis, also looking at the behaviour of real players and the 'learnability' of the game.
In terms of my worries about the independence, I think the easiest way to think about it qualitatively is to start with (a), and then consider the conditional probability of (b) given (a), as opposed to the overall probability of (b). To consider the effect of (a) on Pr(b|a): the knowledge that none of the other (N-1) players has bid in bin k means they must have placed their bids in other bins. For most of the range, that will very slightly increase the chance of there not being a unique number less than k. Taking this into account, it would therefore make sense to put slightly more weight on k than if this was not in play. In the analysis of Östling et al, and some other papers on Unique bid auctions, this dependency is avoided (I think) by making N Poisson distributed, rather than a fixed number. This has the effect of decoupling knowledge of how many bids there are in bin k from how many there may be in any other bin. As they say, "Remarkably, assuming a variable number of players rather than a fixed number makes computation of equilibrium simpler, if the number of players is Poisson-distributed", and later in note 7 state, "For small N, we show in online Appendix A that the equilibrium probabilities for fixed-N Nash and Poisson-Nash equilibrium are practically indistinguishable (Figure A1)."
So it's not the independence or lack of it that was the main factor in my mis-intuition.
Instead, as you suggest, the key error appears to be my assumption that the probability of winning for each player under the ESS should be 1/N.
When N is small, as you suggest, there is a real probability of there being no winner, and this may account for why Flitney finds expected pay-offs of only 0.287 (N=3) and 0.134 (N=4).
On the other hand, for larger N, at least in the real games reported by Östling, no-win situations are apparently rare. Yet Östling's proposed ESS for N=53,783 strikingly has strategy weights falling to pretty much zero for k above about 5,500. If I am picking up the signals in the paper correctly, with this distribution the chance of 'no win' on the numbers up to there falls below the payoff rate, which is below 1/53,783, and so there is no point in picking a higher number, even if you were certain to have it to yourself.
Yet even if the probability of winning for each of the numbers less than 5,500 were close to 1/N, that would leave a total payout probability of only just over 0.1 -- so where is the rest going???
There is something I am still fundamentally missing here. Jheald (talk) 10:43, 26 September 2017 (UTC)[reply]
Hi, Jheald. There are some discussions of the topic at StackExchange sites.
You may like to see an answer on Guessing the smallest unique positive integer at Computer Science and articles linked in an answer on Lowest Unique Bid at MathOverflow. --CiaPan (talk) 11:09, 26 September 2017 (UTC)[reply]
In the N=2 case the Nash equilibrium would be for both to always say 1 - which isn't at all a clever strategy as no-one ever gets anything and they just throw their entry money away. It's a bit like the WarGames scene [3] Dmcq (talk) 14:25, 26 September 2017 (UTC)[reply]
Sorted it! What I was messing up was that I wasn't distinguishing (A) my chance of success if I choose bin k, from (B) the overall chance of success from somebody choosing bin k.
B = A × Np_k. The two have to be different, so that I get the same overall chance that the game has a winner (about 1), whether I sum over the players, or over the numbers being backed. Since we're dealing with a Poisson distribution, A involves e^{-Np_k}, the chance that nobody else has backed number k (based on which I may or may not choose to), and no lower number has won; while B involves Np_k e^{-Np_k}, the chance that out of everybody (including me) exactly one person has backed number k (and again, no lower number has won). Hence B is A times Np_k, because Np_k is the Poisson mean for the distribution of bids for bin k. One could also think of the expectation of a win on bin k, ie B, being shared out over the expected number of players likely to have backed bin k, namely Np_k. This is slightly confusing, because Np_k players couldn't actually all have backed bin k, because then it wouldn't have won. But because the statistics are Poisson, it does actually work (a unique property of Poisson games, apparently) -- under Poisson statistics, the probability that one person backed k gets shared out 1 way, the probability that two people backed it gets shared out 2 ways, etc -- the whole thing separates. This is also what makes the ESS well calibrated -- the weight you should give a particular number matches its overall probability of success -- which is what makes the game so learnable: if it's played repeatedly, and people start picking numbers roughly in line with the observed pattern of previously successful numbers, Östling et al found that the distribution of numbers chosen rapidly converges to something close to the ESS.
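The "sharing out" step can be checked numerically. A small sketch of my own (with λ = 2 as an arbitrary bin mean): if the number of backers of a bin is Poisson(λ), the long-run fraction of backers who find themselves alone in the bin equals e^{-λ}, the same as the chance an outside observer sees nobody there -- the separation property used above.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's simple Poisson sampler (adequate for small lam)."""
    threshold = math.exp(-lam)
    k, prod = 0, rng.random()
    while prod > threshold:
        k += 1
        prod *= rng.random()
    return k

rng = random.Random(42)
lam, trials = 2.0, 200_000

backers = winners = 0
for _ in range(trials):
    m = poisson(lam, rng)       # number of people backing this bin
    backers += m
    winners += (m == 1)         # the bin pays out only when exactly one backs it

# Per-backer chance of being unique vs the outsider's empty-bin chance:
print(winners / backers, math.exp(-lam))
```

The two printed numbers agree, since E[1{M=1}] / E[M] = λe^{-λ} / λ = e^{-λ}: the probability of a single winner, divided out over the expected number of backers, is exactly the chance an extra entrant would see the bin empty.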
[Image: Nash equilibrium distribution]
In terms of my calculation above, my mistake was to estimate the probability of a particular number j winning as 1/N, when calculating the chance of a lower number having already won, when I should have been estimating that probability as Np_j × (1/N), = p_j. This automatically takes care of the normalisation, if I start building up the ESS weights from k = 1 upwards. Once I get to the top of the support, it is almost certain that the game has been won on a lower number, so there is no point in giving any weight to any higher numbers, so forcing the normalisation of the weight distribution to 1, as desired. (Or close enough at any rate, when N is large.) So with that one tweak, the method I sketched above for drawing up a very nearly correct ESS should indeed work.
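In code, that one-tweak construction looks like the following sketch (my own; N = 100 for concreteness, keeping the fixed-N Poisson mean (N-1)p_k from the earlier derivation): build p_k up from k = 1, using the already-assigned weights as the win probabilities of the lower numbers.

```python
import math

def ess_weights(N):
    """Approximate ESS weights from the corrected recursion:
         e^{-(N-1) p_k} * (1 - sum_{j<k} p_j) = 1/N
      => p_k = ln(N * (1 - sum_{j<k} p_j)) / (N - 1),
    accumulated from k = 1 while the weight stays positive."""
    p = []
    remaining = 1.0              # chance no lower number has already won
    while N * remaining > 1.0:
        pk = math.log(N * remaining) / (N - 1)
        p.append(pk)
        remaining -= pk          # Pr(number k wins) ≈ p_k
    return p

p = ess_weights(100)
print(len(p), sum(p))            # weights sum to ≈ 1 - 1/N, as hoped
```

The weights come out strictly decreasing, and the loop stops by itself once the cumulative win probability approaches 1 - 1/N, which is exactly the self-normalisation described above.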
Nice set-up. Pretty, the way it all comes out. Jheald (talk) 18:01, 26 September 2017 (UTC)[reply]
To the right is the Nash equilibrium distribution calculated for N=100. Points of interest include that the weights p_k decrease for all k; and that it predicts an over 50% chance that the winning number lies in the lower half of the support. Jheald (talk) 22:35, 26 September 2017 (UTC)[reply]
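As a rough consistency check on a distribution of this kind, here is a simulation sketch of my own: it rebuilds the approximate N = 100 weights from the corrected recursion e^{-(N-1)p_k}(1 - Σ_{j<k} p_j) = 1/N, renormalises them into a strategy, and plays the game repeatedly to see how often anyone wins at all.

```python
import bisect
import math
import random

N = 100

# Approximate ESS weights from the corrected recursion.
p, remaining = [], 1.0
while N * remaining > 1.0:
    pk = math.log(N * remaining) / (N - 1)
    p.append(pk)
    remaining -= pk

# Renormalise the ~0.99 total mass into a sampling distribution.
total = sum(p)
cum, acc = [], 0.0
for w in p:
    acc += w / total
    cum.append(acc)

rng = random.Random(1)
trials, wins = 10_000, 0
for _ in range(trials):
    counts = {}
    for _ in range(N):                       # each player draws a number
        k = bisect.bisect_left(cum, rng.random()) + 1
        counts[k] = counts.get(k, 0) + 1
    if any(c == 1 for c in counts.values()):  # a unique number exists: game won
        wins += 1

print(f"fraction of games with a winner: {wins / trials:.3f}")
```

In line with the discussion above, the no-winner probability for N = 100 under these weights should come out small, of the order of 1/N.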
Thanks very much for that. So are you going to try some other sizes and see if you can figure out some rule from them? I find it surprising that people can approximate that sort of solution heuristically. Dmcq (talk) 08:05, 27 September 2017 (UTC)[reply]
@Dmcq: The paper by Östling et al has some interesting details of choices made and how they evolved over a period of 49 runs in a real-world game with an average of about 50,000 players, and also 49 runs of a lab game with an average of 27 players (observations include "focal" numbers that people tended to avoid, and unusually preferred numbers, such as (probably) years of birth).
In both cases players learnt quite quickly that high numbers, above the approximate cut-off, don't win. In the big game there was also a considerable initial excess of very low numbers, which (it seems to me) might unconsciously reflect choices embodying either an exponential distribution, p_k ∝ e^{-λk}, or perhaps a scale-independent distribution, p_k ∝ 1/k. This too had largely disappeared by the last week, presumably as people became more aware of the kind of numbers that were typically winning.
What seems not to disappear is a tendency in larger games for people not to bid quite as high as the cut-off suggested by the Nash distribution, something that Pigolotti et al also find. I think this makes sense, if people are basically revising their strategies to reflect the kinds of numbers that have tended to win previously. If the population is consistently betting low (compared to Nash), the winning number will tend to be slightly biased towards the higher end of the numbers they are betting on -- but it will not appear in the numbers they are not betting on. Also, 'spikes' on inappropriately preferred popular numbers will take probability away from the high end. A distribution like that in figure 8 of the Östling paper (page 20) may not be the Nash equilibrium stable distribution, but it may in some sense only be rather weakly unstable, giving only a very weak signal to the population that their behaviour is sub-optimal, so (I'd conjecture) only very slow and weak further evolution towards the theoretical distribution.
Feel free to add a section to Unique bid auction on deviations from theoretical Nash equilibrium behaviour, if you are interested! Jheald (talk) 15:53, 27 September 2017 (UTC)[reply]