Draft:Crowd science

Crowd science is an approach to scientific research that involves contributions from individuals outside traditional labs or research institutions.[1] These contributors, referred to as "crowds," can include anyone from laypeople to experts. The method relies on open calls for participation, enabling diverse individuals to engage in activities such as data collection, analysis, problem-solving, or even co-designing research projects. Crowd science spans many scientific disciplines, from biology and medicine to social sciences and the humanities.[2]

Crowd science is related to citizen science but conceptually distinct. The defining feature of crowd science projects is an open call for contributions from large numbers of people outside a focal organization.[3] The defining feature of citizen science projects is that participants are members of the general public and not participating as professional scientists ("amateurs").[4] Although many projects fit both classifications, projects that call primarily for contributions from other professional scientists qualify as crowd science projects but not citizen science projects. Conversely, groups of local citizens that monitor air quality would qualify as citizen science but not crowd science.[3]

The term crowd science also describes an area of scholarly research on such projects.[3] Scholars studying crowd science come from various disciplines including management, economics, innovation studies, and the natural sciences.[5][6] They often build on research in the field of crowdsourcing, which studies the benefits and challenges of involving crowds in various (non-scientific) tasks such as product development or the creation of artistic works.[7][8]

Rationales for crowd involvement

Research projects can benefit from involving crowds for different reasons. Beck et al. discuss five different "paradigms" that clarify the underlying rationale for crowd involvement.[9][10] These include:

  • Crowd volume. This paradigm leverages large groups to contribute substantial effort to standardized tasks, such as data collection or image processing. The scalability of crowd volume allows for vast datasets to be generated quickly, cost-effectively, and across a large geographic space. Tasks are often simple, enabling broad participation. Examples include Zooniverse, where volunteers classify galaxies, animals, and other objects, producing vast amounts of processed data for researchers. eBird enlists birdwatchers worldwide to record sightings, creating one of the largest biodiversity databases.
  • Broadcast search. Broadcast search highlights that issuing an open call to diverse crowds enables individuals with unique knowledge, skills, or resources to self-select and contribute to a project. The goal is not to produce a large number of contributions but to identify a few very high-value contributions. This approach is especially effective for solving specialized or complex problems that require creativity or rare expertise. Examples include Foldit, which allows gamers to solve complex protein-folding puzzles, advancing biochemistry research. On InnoCentive, experts proposed creative solutions to challenges like the Exxon Valdez oil spill cleanup.[8]
  • User crowd. This paradigm relies on participants with experiential knowledge, such as patients or practitioners, who provide unique insights that professional scientists may lack. Their lived experiences or domain-specific expertise can improve the relevance and applicability of research findings. User crowds are particularly valuable in applied fields like medicine and agriculture.[11] Examples include PatientsLikeMe, which engages patients to share their health experiences, helping researchers identify treatment effects and design new approaches. On the island of Samothraki, researchers have worked with local farmers to co-develop practical solutions to overgrazing issues.[12]
  • Community production. This paradigm emphasizes collaboration and interaction among contributors, often from diverse backgrounds, to co-create innovative solutions. By sharing knowledge and perspectives, these interactions lead to superior problem-solving, particularly for complex challenges. This paradigm thrives in settings where creativity and consensus-building are essential. Examples include Epidemium, which fosters collaborations among statisticians, medics, and social scientists to address cancer research challenges.[13]
  • Crowd wisdom. Crowd wisdom aggregates the independent judgments, estimates, or preferences of individuals to produce accurate predictions or informed decisions. The diversity of opinions and the averaging effect help neutralize individual biases (see the note following this list), making this approach highly effective for forecasting and for prioritizing research topics. It is particularly valuable in decision-making contexts. Examples include image classification on platforms such as Zooniverse, where multiple contributors classify the same images, enabling organizers to use consensus mechanisms to infer the correct classification even though individual contributors may make errors. Prediction markets aggregate collective judgments to forecast outcomes such as the success of clinical trials.
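A standard statistical sketch of this averaging effect (a textbook argument, not taken from the cited projects): if $n$ contributors provide independent, unbiased estimates $x_i = \theta + \varepsilon_i$ of a true quantity $\theta$, with $\mathbb{E}[\varepsilon_i] = 0$ and $\operatorname{Var}(\varepsilon_i) = \sigma^2$, then the crowd average $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ satisfies

$$\mathbb{E}[\bar{x}] = \theta, \qquad \operatorname{Var}(\bar{x}) = \frac{\sigma^2}{n},$$

so the expected error of the aggregate shrinks as the crowd grows, provided individual errors are not strongly correlated.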
Figure: Crowd science diamond completed for the project Foldit, illustrating the five paradigms.[14]

The five paradigms highlight distinct mechanisms, but many real-world projects benefit from multiple paradigms at once. Projects on Zooniverse, for example, primarily benefit from crowd volume but also draw on crowd wisdom via consensus classifications, as sketched below.
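As a minimal, hypothetical sketch of such a consensus mechanism (the function name, labels, and agreement threshold below are illustrative assumptions, not code from Zooniverse or any other platform), a simple majority vote over redundant classifications could look like this in Python:

  from collections import Counter

  def consensus_label(labels, min_agreement=0.6):
      # Return the most common label among independent crowd
      # classifications, or None if agreement falls below the threshold.
      if not labels:
          return None
      top_label, top_count = Counter(labels).most_common(1)[0]
      if top_count / len(labels) >= min_agreement:
          return top_label
      return None

  # Five volunteers independently classify the same galaxy image:
  votes = ["spiral", "spiral", "elliptical", "spiral", "spiral"]
  print(consensus_label(votes))  # prints: spiral (4/5 agreement)

Requiring a minimum level of agreement rather than a bare plurality is one plausible design choice: items without sufficient consensus can be routed to additional contributors or to expert review, trading coverage for accuracy.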

The "crowd science diamond" is visual tool that enables projects to analyze how important each of the five paradigms is to achieve the project's scientific goals.[2]

Research on crowd science

Research on crowd science covers a range of topics using different disciplinary lenses. Topics include:

  • Benefits of crowd involvement. This includes, among others, work estimating the contributions made by crowd members or the volume and quality of outputs.[6][15]
  • Organization of crowd science projects. This includes work on challenges such as recruiting and motivating crowd members, coordination of crowd members, the allocation of tasks to crowd members, evaluating crowd contributions, and ensuring quality.[5][16]
  • Interplay between crowds and artificial intelligence (AI). This includes studies of the potential of AI to automate tasks, augment the work of crowd members, or increase project effectiveness through algorithmic management.[17][18]

Conferences

Various academic conferences discuss crowd science approaches or showcase crowd and citizen science projects.

References

  1. ^ Franzoni, Chiara; Sauermann, Henry (2014-02-01). "Crowd science: The organization of scientific research in open collaborative projects". Research Policy. 43 (1): 1–20. doi:10.1016/j.respol.2013.07.005. ISSN 0048-7333.
  2. ^ a b Poetz, Marion K.; Sauermann, Henry (2024-12-05), How And When To Involve Crowds In Scientific Research, Edward Elgar Publishing, doi:10.4337/9781802204315, ISBN 978-1-80220-431-5, retrieved 2024-12-30
  3. ^ a b c Franzoni, Chiara; Poetz, Marion; Sauermann, Henry (2022-02-07). "Crowds, citizens, and science: a multi-dimensional framework and agenda for future research". Industry and Innovation. 29 (2): 251–284. doi:10.1080/13662716.2021.1976627. ISSN 1366-2716.
  4. ^ Vohland, Katrin; Land-Zandstra, Anne; Ceccaroni, Luigi; Lemmens, Rob; Perelló, Josep; Ponti, Marisa; Samson, Roeland; Wagenknecht, Katherin, eds. (2021). The science of citizen science. Cham: Springer. ISBN 978-3-030-58277-7.
  5. ^ a b Barbosu, Sandra; Gans, Joshua S. (2022-01-01). "Storm crowds: Evidence from Zooniverse on crowd contribution design". Research Policy. 51 (1): 104414. doi:10.1016/j.respol.2021.104414. ISSN 0048-7333.
  6. ^ a b Sauermann, Henry; Franzoni, Chiara (2015-01-20). "Crowd science user contribution patterns and their implications". Proceedings of the National Academy of Sciences. 112 (3): 679–684. Bibcode:2015PNAS..112..679S. doi:10.1073/pnas.1408907112. PMC 4311847. PMID 25561529.
  7. ^ Afuah, Allan; Tucci, Christopher L.; Viscusi, Gianluigi, eds. (2018). Creating and capturing value through crowdsourcing. Oxford: Oxford University Press. ISBN 978-0-19-254819-1.
  8. ^ a b Jeppesen, Lars Bo; Lakhani, Karim R. (2010). "Marginality and Problem-Solving Effectiveness in Broadcast Search". Organization Science. 21 (5): 1016–1033. doi:10.1287/orsc.1090.0491. ISSN 1047-7039.
  9. ^ Beck, Susanne; Brasseur, Tiare-Maria; Poetz, Marion; Sauermann, Henry (2022-05-01). "Crowdsourcing research questions in science". Research Policy. 51 (4): 104491. doi:10.1016/j.respol.2022.104491. ISSN 0048-7333.
  10. ^ Beck, Susanne; Fraisl, Dilek; Poetz, Marion; Sauermann, Henry (2024-04-17). "Multi-disciplinary Perspectives on Citizen Science—Synthesizing Five Paradigms of Citizen Involvement". Citizen Science: Theory and Practice. 9 (1). doi:10.5334/cstp.691. ISSN 2057-4991.
  11. ^ Caron-Flinterman, J. Francisca; Broerse, Jacqueline E. W.; Bunders, Joske F. G. (2005-06-01). "The experiential knowledge of patients: a new resource for biomedical research?". Social Science & Medicine. 60 (11): 2575–2584. doi:10.1016/j.socscimed.2004.11.023. ISSN 0277-9536. PMID 15814182.
  12. ^ Petridis, Panos; Fischer-Kowalski, Marina; Singh, Simron J.; Noll, Dominik (2017-05-01). "The Role of Science in Sustainability Transitions: Citizen Science, Transformative Research, and Experiences From Samothraki Island, Greece". Island Studies Journal. 12 (1): 115–134. doi:10.24043/isj.8.
  13. ^ Benchoufi, Mehdi; Fournier, Marc; Magrez, David; Macaux, Gaspard; Barué, Vanessa; Mansilla Sanchez, Alicia; de Fresnoye, Olivier; Fillaudeau, Romain; Tauvel-Mocquet, Ozanne; Chalabi, Nassera; Petit-Nivard, Jean Frédéric; Blondel, Leo; Santolini, Marc; Ben Hadj Yahia, Béchir (2018-05-20). "Epidemium: A multidisciplinary community to tackle cancer using big and open data". Journal of Clinical Oncology. 36 (15_suppl): e13604. doi:10.1200/JCO.2018.36.15_suppl.e13604. ISSN 0732-183X.
  14. ^ Poetz, Marion K.; Sauermann, Henry (2024-12-05), How And When To Involve Crowds In Scientific Research, Edward Elgar Publishing, doi:10.4337/9781802204315, ISBN 978-1-80220-431-5, retrieved 2024-12-30
  15. ^ Lee, Jeehyung; Kladwang, Wipapat; Lee, Minjae; Cantu, Daniel; Azizyan, Martin; Kim, Hanjoo; Limpaecher, Alex; Gaikwad, Snehal; Yoon, Sungroh; Treuille, Adrien; Das, Rhiju; EteRNA Participants (2014-02-11). "RNA design rules from a massive open laboratory". Proceedings of the National Academy of Sciences. 111 (6): 2122–2127. Bibcode:2014PNAS..111.2122L. doi:10.1073/pnas.1313039111. PMC 3926058. PMID 24469816.
  16. ^ Lyons, Elizabeth; Zhang, Laurina (2019-11-21). "Trade-offs in motivating volunteer effort: Experimental evidence on voluntary contributions to science". PLOS ONE. 14 (11): e0224946. Bibcode:2019PLoSO..1424946L. doi:10.1371/journal.pone.0224946. ISSN 1932-6203. PMC 6871885. PMID 31751358.
  17. ^ Koehler, Maximilian; Sauermann, Henry (2024-05-01). "Algorithmic management in scientific research". Research Policy. 53 (4): 104985. doi:10.1016/j.respol.2024.104985. ISSN 0048-7333.
  18. ^ Palmer, Meredith S.; Huebner, Sarah E.; Willi, Marco; Fortson, Lucy; Packer, Craig (2021-07-27). "Citizen science, computing, and conservation: How can "Crowd AI" change the way we tackle large-scale ecological challenges?". Human Computation. 8 (2): 54–75. doi:10.15346/hc.v8i2.123. ISSN 2330-8001.