User talk:Volunteer Marek/gt
Very interesting
My game theory is totally rusty, so I'm thinking in plain English:
Typical players:
1. Win at all cost ("caring")
2. Willing to compromise ("still caring")
3. Willing to lose an argument if it allows a greater likelihood of winning another ("still caring"), a form of compromise; or simply willing to lose ("not caring")
The degree to which any of these are willing to "out-care" the competition in defense of their position sets the stage for how any conflict plays out. While much is made of editors outvoting each other and of the relative population sizes of adherents to any position and their effect on "consensus," my experience is that any conflict will give rise to two or three primary antagonists who set the tone for the conflict. When the conflict is personalized in such a manner, any of the primary antagonists may adopt any of #1, #2, or #3 at any one point as a strategy. I'll have to think on this a bit.
(There are also different methods of compromise: changing content or an implicit or explicit quid pro quo which may be intra- or inter-article, but that's getting ahead of myself.) PЄTЄRS J VЄСRUМВА ►TALK 22:28, 20 September 2010 (UTC)
- Here the Type 2 players are assumed to "care about winning" more than about "compromise" (c>a). However, they still prefer "compromise" to "open conflict" (a>d). That's what creates the Prisoner's Dilemma. Type 1 players prefer "compromise" to "winning" (a>c) (and of course to "open conflict"), but still prefer "open conflict" to "getting screwed" (d>b). (A quick numerical sketch of these orderings follows below this list.)
- Your #3 is an issue I hope to consider in the repeated version of the game(s), though the thing is, repeated games typically suffer from having a buttload of equilibria through the folk theorem (another crappy article); given that the players are patient enough, almost any kind of behavior can be an equilibrium.
- The N>2 version of the game would be interesting to consider, particularly to analyze the various "alliances of convenience" that occur on Wikipedia. But I haven't even begun to really think about how to set that up.
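To make the payoff orderings above concrete, here is a rough Python sketch (the specific numbers are made up purely for illustration - only the orderings matter, and the c>d ranking for Type 1 is my own assumption, since only a>c and d>b are pinned down above) that brute-forces the pure-strategy equilibria for the three possible pairings:

```python
# Rough illustration of the 2x2 games described above.
# Payoffs: a = compromise, b = "get screwed", c = "win", d = open conflict.
# Only the orderings matter; the numbers below are made up.
# Type 1 ("prefers compromise"): a > c > d > b
# Type 2 ("cares about winning"): c > a > d > b
from itertools import product

TYPE1 = {"a": 4, "b": 1, "c": 3, "d": 2}
TYPE2 = {"a": 3, "b": 1, "c": 4, "d": 2}

def payoff(p, own, other):
    """Map a pair of actions (C = compromise, F = fight) to player p's payoff."""
    if own == "C" and other == "C":
        return p["a"]          # compromise
    if own == "C" and other == "F":
        return p["b"]          # get screwed
    if own == "F" and other == "C":
        return p["c"]          # win
    return p["d"]              # open conflict

def pure_nash(p1, p2):
    """Return the pure-strategy Nash equilibria of the 2x2 game."""
    eq = []
    for s1, s2 in product("CF", repeat=2):
        best1 = all(payoff(p1, s1, s2) >= payoff(p1, alt, s2) for alt in "CF")
        best2 = all(payoff(p2, s2, s1) >= payoff(p2, alt, s1) for alt in "CF")
        if best1 and best2:
            eq.append((s1, s2))
    return eq

print("Type 1 vs Type 1:", pure_nash(TYPE1, TYPE1))  # [('C','C'), ('F','F')] - coordination
print("Type 1 vs Type 2:", pure_nash(TYPE1, TYPE2))  # [('F','F')] only
print("Type 2 vs Type 2:", pure_nash(TYPE2, TYPE2))  # [('F','F')] only - Prisoner's Dilemma
```

Two Type 1's have both {C,C} and {F,F} as equilibria (a coordination problem), while any pairing that includes a Type 2 leaves {F,F} as the only equilibrium (and two Type 2's facing each other is exactly the Prisoner's Dilemma described above).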
My comments:
- mind you, my game theory is probably as rusty as Peters' :)
- you could link some terms to Wikipedia essays and policies (ex. when you mention battleground, I'd link WP:BATTLEGROUND)
- Is your Type 2 player the same I describe as "true believer"?
- "Of course there may be (are) individuals who prefer d>a (i.e. ones who enjoy conflict for conflict's own sake)". There are; they are called trolls :)
- The "2 types, 3 different possibilities" section - are the shaded areas in the graphs the most likely outcomes? If so, why is d,d shaded as well?
- outcomes b and d are not explained (I found explanations of outcomes a and c very useful in Case 1 - Two Type 1 players)
- "Case 2 and case 3 are indistinguishable to an outside observer" - an EXCELLENT point, one that needs further elaboration on! Why are they indistinguishable is a KEY question.
- "Type 2 will choose F she chooses F as well (since d>b, i.e. the "nobody likes to get screwed" assumption" - how about giving up and leaving? Would this be b>d or something else? I think it is a common outcome and needs to be addressed, see my argument here.
- "Type 1's will play C only if they believe the other player to also be a Type 1". I have some thoughts on how Type 1 becomes Type 2 or at least comes to believe others are Type 2 here
- "if conflict - the outcome {F,F} - occurs, both players will suffer a cost, which in turn lowers the payoff they get in that case" - but that is not always the case, often enough, only one editor is punished, not both; or the punishment vary
- "If Type 2 were to cooperate Type 1 would also choose C" - I am not sure I understand this and the distinction between b,c and c/b
- "However, it is not clear what kind of realistic "carrots" can be employed on Wikipedia which reward editors who find themselves in disputes for resolving these through compromise." - there are some barnstars and such, but it requires the situation to be noticed by somebody who is willing to give such rewards out. And their value is debatable (certainly, positive, but how significant)? Crucially, is it outweighting the stress caused by being in conflict?
- I also wonder if we can distinguish threatening to use the stick from actually using it; and could a threat of the stick be seen as a carrot (as in, it is a nicer option than the stick)?
- "once a dispute or two is solved through the application of carrots the area ceases to be a battleground" - that raises a related thought: solving disputes is rarely remembered and recognized. I have solved many disputes (hundreds), but they are rarely remembered and never brought to light, whereas each failure to solve one is likely to be remembered and brought to light by a critic. This inequality I feel feeds the stick solution (as it generates evidence that such approach is necessary).
- "the more other "cooperators" are out there, the higher the average payoff to cooperating" - this raises the question of how the community has changed over the years; was once the payoff of cooperating higher? One can hypothesize that over time, cooperators burn out and leave, making the payoff less and less (in an accelerating trend). I see you are getting at the same point in the section that follows.
- I assume you have read this?
- "And by assumptions these kinds of editors enjoy conflict." - this probably needs to be expanded on; I also do not recall this assumption from your previous section (unless we are talking about the place where I commented on trolls?)
Hmmm, I think that's it for now. PS. An interesting read? --Piotr Konieczny aka Prokonsul Piotrus| talk 17:42, 22 September 2010 (UTC)
Re
2. Yeah, that kind of formatting will be done later. For now I'm just trying to get the models down. I'm a bit stuck at the evolutionary version with more than just the 2 types, which overall I think would be the most realistic and applicable version (and usually the most realistic case is hardest to model)
3. The "true believers" you describe are probably pretty close to the Type 2's but they seem to have some of the "enjoy conflict for conflict's sake" in them too. So my definition's are a bit more precise ;)
4. Yes, the point here is that even without outright trolls, certain topic areas can still (in fact, are likely to) degenerate into conflict.
5. Yes, the purple cells are the resulting equilibria. {F,F} is an equilibrium outcome in each case, with the corresponding payoffs (d,d). In the good case (two Type 1's) it is an equilibrium because even nice people don't like to be taken advantage of, and if they believe that will happen they'll "protect themselves" by being confrontational as well (playing F). However, in the "good case" this equilibrium is not "focal" in the sense of the Nobel prize winner Thomas Schelling (btw, really cool guy, got to meet him once) - it doesn't take much for it to be avoided as an outcome; pretty much all that is needed is for someone to propose the good equilibrium for it to be achieved (by saying "hey, let's compromise!"). But as soon as you get even one Type 2 in there, {F,F} (with payoffs (d,d)), or "conflict", is the ONLY resulting equilibrium.
6. b and d are payoffs, not outcomes. Outcomes are pairs of actions like {C,C} or {C,F}, which then map to individual payoffs. I'm probably being sloppy with game theoretic terminology somewhere in there. Anyway, "b" is the payoff you get if you try to compromise but the other person "fights" - it's the "get screwed" payoff, where a player is being taken advantage of and they "lose". "d" is the payoff you get if outright conflict results: both players fight. For completeness: "a" is the payoff you get if there's a compromise. "c" is the payoff you get if you're the one doing the "screwing over" - choosing to fight when the other person wants to compromise and then you "win".
7. This is one of the key points. You CANNOT necessarily infer a person's type from their actions alone. Their actions are a response to the environment and the incentives they face. If you put them in a different environment and give them different incentives, they might behave differently. And part of the point here is that even people who WANT TO compromise will not do it if they know they will be taken advantage of.
8. "Getting up and leaving" would be something that needs to be incorporated in a dynamic setting so it's better left for the evolutionary model section. Though, one could incorporate it into the static setting I guess with something like this:
Though I'd have to think about it a little more.
9. Yes, one could make a player's type a function of previous outcomes. Again, this kind of thing is better left for the dynamic setting - unfortunately that case gets quite messy quite fast as it is.
10. To the extent that it's hard to predict ahead of time who exactly will be punished if conflict occurs, assuming the same cost for both parties is reasonable. Of course, if it could be identified who is Type 1 and who's Type 2 then the problem wouldn't exist.
11. Again, this is the difference between an action (F or C), an outcome (a pair of actions) and payoffs, which are functions of outcomes. The statement is just an analysis of a best response strategy. IF the other player is going to choose action {C}, a Type 1 player's best response is to choose {C} as well. But IF the other player is going to choose action {F}, then the best response is to choose {F}. Of course Type 2's will never actually choose C (without changing the payoffs), but they do have the option (i.e. for them {F} is a strictly dominant strategy). Type 1's of course know this and won't play {C} either. Basically, to find (and understand) the equilibrium outcome we consider ALL possible strategies, even those which will not be played in equilibrium.
12. Yes, I'm thinking about writing a general section on carrots on Wikipedia. Even barnstars, whose value has probably been much degraded by inflation, are generally given for things OTHER than dispute resolution (content, clean up, etc.).
13. Yes, I totally agree. I actually think of the Danzig/Gdansk vote as an example of a successful conflict/dispute resolution "intervention". Note however that this particular intervention was very much content oriented rather than behavior oriented - which I think is what made it work (and the model suggests that conclusion).
16. Yeah, I'm not being clear. They enjoy conflict not in the sense of conflict for its own sake, but in the sense that they mind it less than the other type. I need to explain this better. 21:36, 22 September 2010 (UTC)
Typo
Hey, I don't want to edit your user space, but you have a typo: "loose" when you intended "lose" in the Definitions section. --JaGatalk 23:45, 28 October 2010 (UTC)
Informal requested move
I'd suggest moving this to User:Volunteer Marek/Game-theoretic models of Wikipedia behavior; your short name will still work as a redir for personal convenience, but people browsing the user essays category will have some idea what this is about. — SMcCandlish ☏ ¢ 😼 04:11, 30 June 2018 (UTC)
Simulation video
First off, thank you for this awesome essay, very insightful! Here is a video of a simulated repeated evolutionary scenario, similar to the one you show: [1]. Also, although most of your assumptions make sense, I don't really see why only case 4 is more likely. In addition, the game mostly assumes balanced payoffs, whereas unbalanced payoffs can drastically change the equilibrium point(s). I think it should now be possible, and would be very interesting, to gather data from Wikipedia to estimate empirical payoff values in these situations, to better estimate or even simulate what equilibria exist in practice - that would make an awesome wiki research project (a bare-bones sketch of such a simulation is appended at the bottom of this page). --Signimu (talk) 07:46, 1 November 2019 (UTC)
Symbols seem cluttered if spaces are absent
I'm utterly ignorant of the orthographic principles of formal game theory, but the essay uses symbols with no spaces between them, and so it is difficult to comprehend what is written. For example, please compare the following pairs of contrasting styles (the two parts of every pair are separated by spaced commas, and pairs are separated by spaced semicolons; I am doing so in order to preserve the appearance of the text as it occurs in the article): a>b , a > b ; d>b , d > b ; c>a , c > a ; b>c , b > c . Hopefully, these examples convincingly demonstrate the greater intelligibility of symbols separated by spaces. The essay would greatly benefit from a revision which applied this style of inserting spaces between contiguous symbols. catsmoke (talk) 11:20, 31 August 2020 (UTC)
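A bare-bones sketch of the kind of simulation Signimu describes in the Simulation video section above - replicator dynamics over the two behaviors C ("compromise") and F ("fight"). The payoff numbers are made up for illustration and are not taken from the essay or the linked video:

```python
# Replicator-dynamics toy model: x is the population share of editors playing C,
# everyone is randomly matched each round, and a strategy's share grows when it
# earns more than the population average. Payoffs follow the c > a > d > b ordering.
a, b, c, d = 3, 1, 4, 2     # compromise, get screwed, win, open conflict

def step(x, dt=0.1):
    """One discrete replicator update for the share x of C-players."""
    payoff_C = x * a + (1 - x) * b           # expected payoff of playing C
    payoff_F = x * c + (1 - x) * d           # expected payoff of playing F
    mean = x * payoff_C + (1 - x) * payoff_F
    return x + dt * x * (payoff_C - mean)    # x grows iff C beats the average

x = 0.9                                       # start with 90% compromisers
for _ in range(100):
    x = step(x)
print(round(x, 3))   # ends up near 0: with these payoffs F takes over the population
```

Changing the payoff numbers (or making them asymmetric across players, as Signimu suggests) changes where the population ends up, which is exactly why empirical estimates of the payoffs would be interesting.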