Wikipedia talk:Wikipedia Signpost/2015-03-11/Special report


Discuss this story

This refers to a WMF banner last year that listed a minimum recommended donation of £3, describing it as "the price of buying a programmer a coffee", which became the topic of a Wikipediocracy article. ResMar 06:07, 13 March 2015 (UTC)[reply]
A cup of coffee isn't particularly cheap in countries like mine. --NaBUru38 (talk) 00:00, 14 March 2015 (UTC)[reply]

I see ads on Wikipedia fairly frequently. Maybe this report supports the case that we need to do more about removing / preventing them. Doc James (talk · contribs · email) 06:21, 13 March 2015 (UTC)[reply]

    • "(13–20%) thought that Wikipedia is supported by advertising." 13-20% is in my opinion a good estimate of the proportion of wikipedia articles that are primarily promotional. Probably half are worth rewriting. DGG ( talk ) 06:38, 13 March 2015 (UTC)[reply]
  • There are a fair number of WP mirrors that present WP content (using the same formatting as we do), and often these mirrors do have ads (which to me are different from articles that are promotional/POV). This may explain a part of those 13–20%. --Randykitty (talk) 10:37, 13 March 2015 (UTC)[reply]
I'd be surprised if that were the case as I can't imagine very many people use the mirrors. If I had to guess I'd say it'd be that people are classifying global messages they get in the header (particularly fundraising prompts) as advertisements. ResMar 14:59, 13 March 2015 (UTC)[reply]

The report from the researchers is on Commons and also on the Fundraising Meta page. Lgruwell-WMF (talk) 15:56, 13 March 2015 (UTC)[reply]

It's very good that the WMF did a professional survey of readers. We can count on these results. It should be noted that the results in general are technically negative - we haven't been doing anything to turn off potential donors. Sometimes negative results are really positive, and they are worth the money spent on the survey. After all, we really want to be sure that we're not killing the goose that lays the golden egg!
I'm wondering how much the survey cost. Not that I want to complain, but it may be worthwhile having a professional survey of readers and editors on other topics as well. For example, how many of our readers are women? How many of our editors are women? Evidence on the first question is likely hidden in the current survey, but wasn't published as far as I can tell.
The second question is more difficult to answer. The same sample selection method as the current survey wouldn't work - but another could be devised. This question is at the heart of a current controversy - how to get more women to contribute. To my reading, the evidence currently adds up to "somewhere between 10–20% of editors are women", but any details such as "is the percentage of women editors increasing?" are well beyond analysis with the current data. It would be very nice to get a better handle on these questions. A professional survey using reasonable sampling methods should do the job. Smallbones(smalltalk) 17:30, 13 March 2015 (UTC)[reply]
Thanks for the questions Smallbones. I think WMF as a whole is interested in having better, randomized opinion research. This has already been very informative to the fundraising team and has sparked a lot of ideas that we want to test. We decided not to ask any questions at all about editing in this reader survey. It became clear that with a sample size of 2,300 we would not find very many people who had edited Wikipedia among the audience of people who read Wikipedia at least once a month in these five countries. The number would not likely be statistically significant, so we really couldn't draw many conclusions from it. However, the number that I am excited about is that 49,123 of the 250,000 donors who completed the donor survey in December said they wanted to learn how to edit Wikipedia. With regard to gender, we do have demographic data paired with the responses in this reader survey. We are just beginning to get that analysis and will share if we find any interesting differences on the gender front. Again though, this was focused on understanding reader opinions of our fundraising efforts. We did not touch on editing in this research.--Lgruwell-WMF (talk) 22:47, 13 March 2015 (UTC)[reply]
Yes, it is very good to focus on one major issue at a time, especially to keep the survey short enough so that people will answer the last half as seriously as the first half. So far, so good.
But I do think that further surveys of readers, donors (I haven't seen the 2014 donor survey - is that 250,000 who completed the survey?), and editors would be very useful. In general I think folks don't take readers' opinions seriously enough here. You can get a random sample just from the people who enter the site without logging in, though there will be some selection bias from people who don't answer. Just don't try to get everyone who enters the site; perhaps folks who view 3 articles in a row might be a better starting population. Also sample in proportion to the readership at different times of day (so you don't get a higher sample from Europe, the US and Canada, Australia, or India than is normal).
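A minimal sketch of the reader-sampling scheme described above, under stated assumptions: the reader records, field names, and hourly traffic shares below are hypothetical placeholders, not anything from the actual survey. It screens for logged-out readers who viewed at least 3 articles, then draws each hour's quota in proportion to that hour's share of total readership.
<syntaxhighlight lang="python">
import random

def draw_reader_sample(readers, hourly_share, total_n):
    """readers: dicts like {"id": ..., "hour": 0-23, "pages_viewed": int}
    (hypothetical fields); hourly_share: hour -> fraction of daily
    readership; total_n: overall sample size to draw."""
    # Screening step: only logged-out readers with 3+ page views qualify.
    eligible = [r for r in readers if r["pages_viewed"] >= 3]
    sample = []
    for hour, share in hourly_share.items():
        pool = [r for r in eligible if r["hour"] == hour]
        # This hour's quota, proportional to its share of readership.
        quota = min(len(pool), round(share * total_n))
        sample.extend(random.sample(pool, quota))
    return sample
</syntaxhighlight>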
Surveying editors can be important for issues related to the site's governance. Have editors been subjected to gender discrimination? Do they see sexism in our articles? How do they feel about the editing environment (or civility in particular) on the site? Do they see commercial advertisements hidden in our articles? Are the admins and other governance mechanisms responsive to their needs? Tough issues, but sooner or later we're going to have to get a handle on how our editors view them.
The sample of editors will have to have input from the WMF (e.g. the list of editors who made more than 5 edits last month or similar), so there likely will be some concern that the data be kept confidential. I'd suggest something like this: the WMF lists the overall population to be sampled; a computer selects the sample and sends it to an outside contractor with a key. The contractor has no idea who from the population has been sampled and only gives aggregate results to the WMF. In any case, a method can be devised to keep the sample and individual responses totally confidential.
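A minimal sketch of that keyed protocol, assuming the WMF supplies the eligible-editor list; all names here are hypothetical. The token-to-username key stays on the WMF/selection side, and only opaque tokens ever reach the contractor.
<syntaxhighlight lang="python">
import random
import secrets

def select_confidential_sample(eligible_editors, n):
    """eligible_editors: usernames supplied by the WMF (e.g. everyone
    with more than 5 edits last month). Returns (tokens_for_contractor,
    key); the key never leaves the WMF side."""
    sampled = random.sample(eligible_editors, n)
    # Map an opaque token to each sampled editor; the contractor sees
    # only the tokens, so it cannot tell who was sampled.
    key = {secrets.token_hex(16): user for user in sampled}
    return list(key.keys()), key
</syntaxhighlight>
Under this design the contractor would return responses keyed by token and publish only aggregates; re-identifying any respondent would require the key held on the WMF side.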
I'll also suggest several small sample surveys per year rather than one big survey. For example, 3 surveys per year (1 every 4 months) of 400 editors each will give more information than an annual survey of 1,200 editors. You'll get to see if there are changes over time. The increase in the confidence interval for the smaller samples (say, to ±5% from ±2%) would not be that important for many issues.
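As a rough check on those figures (my arithmetic, not anything from the survey report): the standard worst-case margin of error for a proportion at 95% confidence, taking p = 0.5, gives
<math display="block">
E = z_{0.025}\sqrt{\frac{p(1-p)}{n}}, \qquad
E_{400} = 1.96\sqrt{\frac{0.25}{400}} \approx \pm 4.9\%, \qquad
E_{1200} = 1.96\sqrt{\frac{0.25}{1200}} \approx \pm 2.8\%,
</math>
so ±5% for samples of 400 is about right, while a single annual survey of 1,200 lands nearer ±3% than ±2% under this worst-case bound.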
But I'm not saying "follow my guidelines or else you won't get anything meaningful." I'm just saying that there are questions that are commonly discussed on Wikipedia as being important for governance, but nobody has tried to get the answers according to some of the fairly standard statistical methods. Survey professionals should be able to guide the WMF on how to do this properly. Smallbones(smalltalk) 02:38, 14 March 2015 (UTC)[reply]
@Smallbones: With regards to editor surveys, that is definitely on the radar for the new Community Engagement Department. Exact form still to be determined. —Luis V. (WMF) (talk) 18:46, 16 March 2015 (UTC)[reply]

Countries


"The survey questioned a sample of 2,300 people who said they used Wikipedia at least once a month. They were from five primarily English-speaking countries: the United States, the United Kingdom, Canada, Australia, and New Zealand, with the last two countries conflated into one sample group."

The survey is interesting from a journalistic point of view, but it's hardly statistically significant. --NaBUru38 (talk) 23:53, 13 March 2015 (UTC)[reply]

Back-of-the-envelope math: with 97.5% confidence the capped inaccuracy for the smallest sample-size population is ±5%. With α = 0.05 that is very reasonable...maybe the Foundation is using the same maths I am! ResMar 01:59, 14 March 2015 (UTC)[reply]
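The calculation itself doesn't survive here, but it was presumably the same worst-case bound as above: at α = 0.05, a two-sided interval uses the 97.5th-percentile z-value (hence "97.5%"), and solving for the sample size that keeps the margin of error within ±5% gives
<math display="block">
n \ge \left(\frac{z_{0.025}}{E}\right)^{2} p(1-p)
  = \left(\frac{1.96}{0.05}\right)^{2} \times 0.25 \approx 384,
</math>
so any country subgroup of roughly 400 or more of the 2,300 respondents stays within that ±5% cap.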
Dear Resident Mario, I agree that 2,300 is a great sample size. I was questioning the small selection of countries, which doesn't fit the great diversity of Wikipedia readers. --NaBUru38 (talk) 03:23, 16 March 2015 (UTC)[reply]
I'd guess the issues of importance on the English, Russian, and Indonesian Wikipedias (as well as for many other languages) can be quite different. I'll suggest that we survey one language WP at a time to avoid mixing up issues and getting the issues of the Indonesian Wikipedia watered down. For example, start with surveys (400 is a great sample size - it allows many surveys to be taken) of the English WP, then the Spanish, German, French, Russian, ... Indonesian, ..., Vietnamese, ..., Catalan, etc. We wouldn't be able to do all language versions, but if we did one per month, within a year we'd be able to see how the important issues vary and be able to tell how the needs of the different language versions can be addressed. Mixing it all into one big bag would likely just confuse things. Smallbones(smalltalk) 14:40, 16 March 2015 (UTC)[reply]
Since this is a fundraising survey, you would expect them to exclude all of the (many) countries and languages that the fundraising campaign doesn't target. In fact, this particular survey was about a single campaign, called "Big English", which was (a) only on the English Wikipedia and (b) only shown to logged-out users of the English Wikipedia whose IP address geolocated to those five countries. As a survey of that particular campaign, the survey's limitations were a perfect match. WhatamIdoing (talk) 17:34, 16 March 2015 (UTC)[reply]