Wikipedia:Village pump (idea lab)

== question re policy for BLP items ==
[[File:Дети_-шахтёры.jpg|thumb|Miners below the age of 10 {{right|{{ndash}}[[User:EEng#s|<b style="color:red;">E</b>]][[User talk:EEng#s|<b style="color:blue;">Eng</b>]]}}]]

I have a question about usage of [[WP:BLP]]. We currently have articles on two minors below the age of ten, highlighting their royal status as members of a royal family. These two individuals have had absolutely no voice in whether these articles should be established or not. Their parents have publicly distanced themselves from the referenced royal family.



The idea lab section of the village pump is a place where new ideas or suggestions on general Wikipedia issues can be incubated, for later submission for consensus discussion at Village pump (proposals). Try to be creative and positive when commenting on ideas.
Before creating a new section, note:

Before commenting, note:

  • This page is not for consensus polling. Stalwart "Oppose" and "Support" comments generally have no place here. Instead, discuss ideas and suggest variations on them.
  • Wondering whether someone already had this idea? Search the archives below, and look through Wikipedia:Perennial proposals.

Discussions are automatically archived after remaining inactive for two weeks.


Unified Review Forum

The idea of a unified review forum has been raised a few times recently; the primary benefit would be that it would provide a location for reviewing closes that currently lack a clear venue for review (mergers, splits, redirects, miscellany, etc.).

Depending on the specifics, it may also allow us to move RfC close reviews out, shifting the administrators' noticeboard back towards being an administrators' noticeboard - i.e., a place primarily used by administrators to coordinate administrative tasks - and away from its current state as a catch-all dramaboard.

In addition, it may also allow us to merge move and deletion reviews in. This would diversify the range of editors who contribute to those discussions (the current boards are comparatively insular), reduce the number of noticeboards editors may wish to pay attention to, and permit us to create a unified process by which reviews should be conducted. For example (and this would not be part of the proposal), we could always split reviews into two sections: one for uninvolved editors to !vote, and a second for involved editors to do so.

As an initial draft for an RfC on this I suggest the following:

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


RfC on creating a unified close review forum

Should WP:Close reviews be created to review all closes not currently covered by a designated forum?

RfC on creating a unified close review forum - RfC close reviews

If the forum is created, should close reviews of RfCs be relocated from WP:AN to the forum?

RfC on creating a unified close review forum - Move reviews

If the forum is created, should WP:MRV be closed and move reviews relocated to the forum?

RfC on creating a unified close review forum - Deletion reviews

If the forum is created, should WP:DRV be closed and deletion reviews relocated to the forum?

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

BilledMammal (talk) 12:40, 24 May 2023 (UTC)[reply]

First obvious question: Why WP:Village pump (close reviews) and not just WP:Close reviews or WP:Close reviews noticeboard? The general idea behind a village pump is that it's where people hang out and chat about Wikipedia (e.g., ideas for improving it, problems they need solved, etc.). None of them are for handling specific processes. WhatamIdoing (talk) 23:39, 24 May 2023 (UTC)[reply]
That's a good point. Changed to Close reviews noticeboard, thank you. BilledMammal (talk) 13:04, 25 May 2023 (UTC)[reply]
If you wanted, you could probably keep the format for the other reviews (WP:CRV currently redirects to an essay, but the redirect is only used on 63 pages). If your sub-suggestions are rejected, close review could just direct users to the other pages (WP:MRV and WP:DRV) for the type-specific close reviews.
I saw a variant of this presented (section link) at WP:VPP, and, at the time, I was going to indicate my support, but the conversation seemed to have been dying down, and I wasn't that confident about it. But, having been on Wikipedia a bit more since then, I feel a bit more confident now.
Still, based on the oppose votes, I'd be somewhat cautious about suggesting MRV and DRV be merged into a close review forum; I think that might actually lead people to be more wary of that forum existing at all (and, as some editors said then, there's some practical merit to separating off reviews that can only be done by admins). Just a thought.--Jerome Frank Disciple 17:09, 25 May 2023 (UTC)[reply]
That's a good point; I've changed to WP:Close reviews.
"I'd be somewhat cautious about suggesting MRV and DRV be merged into a close review forum" That's part of the reason I've split them off into separate questions, or are you thinking the general association might be enough to cause people to oppose the creation of the forum?
"there's some practical merit to separating off reviews that can only be done by admins" Deletion reviews typically need to be closed by admins, but move reviews don't. Perhaps if we just remove the question about DRV? BilledMammal (talk) 02:20, 29 May 2023 (UTC)[reply]
The first question ("closes not currently covered by a designated forum") is going to be unclear. Specific examples (e.g., merges and splits) would probably help.
The three sub-questions could be delayed to another day, but they could also be handled as a single question: "There are three existing designated forums: AN (for RFCs), MRV and DRV. Do you want any of those to be added to the new noticeboard, if it's created?" I could imagine editors saying, e.g., yes to everything except merging DRV into the new process. WhatamIdoing (talk) 04:29, 1 June 2023 (UTC)[reply]
Good point!
I also think WhatamIdoing makes a good point about the confusion in the first question, but I'm not sure there's any avoiding that.
Just as a potential structure, you could have:
  1. WP:CRV—close reviews—a non-noticeboard that merely lists where to review certain actions, containing:
    1. WP:DRV
    2. WP:MRV
    3. WP:OCRV (other close reviews?)
Not sure about that, but just food for thought.
Honestly I think your suggestion makes a ton of sense. The more I've thought about it, the more I support it. The only frustrations here come from the fact that this wasn't done in the first place!--Jerome Frank Disciple 20:27, 5 June 2023 (UTC)[reply]
I'd call it a review process rather than a noticeboard, to dampen the inevitable "we don't need another noticeboard" opposes. Apparently people disagree on what a noticeboard is. With merging DRV and MRV, it doesn't hurt to ask, but I doubt it will find consensus. – Joe (talk) 08:29, 19 June 2023 (UTC)[reply]

Cornell team seeking feedback on a planned user study

Hey everyone! I'm part of a research group at Cornell (together with @Cristian_at_CornellNLP) studying ways to encourage healthier online discussions. Our group has had prior successful experiences collaborating with the Wikipedia community towards this goal.1 We are now planning a user study which will directly involve the participation of Wikipedia editors. In developing this study, we share Wikipedia’s commitment to transparency and accountability, and want to ensure that the study is implemented as a true collaboration with the Wikipedia community. To this end, before we officially launch the study and start recruitment, we wanted to explain our ideas to the community and give you a chance to voice your feedback, thoughts, and questions—which you may do by replying to this thread, posting to our user talk page, or emailing us directly. We have also been consulting with Wikipedia Administrator @Moneytrees in case you’d feel more comfortable reaching out to them instead.

The planned study revolves around a prototype browser extension “ConvoWizard” which uses AI technology2 to provide Wikipedia editors with real-time warnings of rising tension within conversations. Specifically, whenever an editor who has ConvoWizard installed replies to a discussion on a talk page or noticeboard, the tool will provide an estimate of whether or not the discussion looks to be getting tense (i.e., likely to deteriorate into incivility), as well as feedback on how the editor’s own draft reply might affect the estimated tension. This is based on a tool that we previously piloted on Reddit, so those interested in finding out more can check out NPR’s coverage of the study.

Once the study officially begins, participants will be asked to install and use the ConvoWizard browser extension for a specified period of time. During this period, ConvoWizard will record participants' commenting behavior with the tool enabled (e.g., what edits they make to their draft before posting it) to enable research on the effects of using the tool. All data collected during the study will be stored securely and confidentially on Cornell servers. The study has been reviewed and approved under Cornell IRB #2007009714. Participants will also be asked to fill out a pre-survey and post-survey, which will ask general questions about their commenting habits and their thoughts on ConvoWizard.

Again, we are extremely interested in your thoughts and feedback before we move forward with this study, and invite you to let us know what you think. We look forward to hearing from you!

Finally, if you think you might be interested in participating, or if you have any suggestions on how best to get the word out, please reach out as well.


1For a recent example, see this study we conducted on talk page moderation.

2For those with a technical background in machine learning and/or natural language processing who are interested in more details about the technology, it is introduced in this paper; the model is also open-source and its training data is publicly accessible and documented.

-- Jonathan at CornellNLP (talk) 17:42, 1 June 2023 (UTC)[reply]

Thank you, I'm interested, but how can I reach out to you?
Thanks
Samuel from Addis Ababa, Ethiopia Sammthe (talk) 04:17, 2 June 2023 (UTC)[reply]
If you don't get a response here, try Special:EmailUser/Jonathan_at_CornellNLP. Graeme Bartlett (talk) 07:54, 2 June 2023 (UTC)[reply]
Thanks for your interest! As noted in the original post, the study hasn't officially begun yet, but once we begin we'll reach out with more details. In the meantime, as the other reply pointed out, you can get in touch with us either via the Wikipedia user email (if you want to keep the communication private) or by posting to my (or @Cristian_at_CornellNLP's) user talk page (if you want a public record of the communication). Jonathan at CornellNLP (talk) 13:20, 2 June 2023 (UTC)[reply]
This sounds like an interesting and very promising approach. However, the opportunity I, and I suspect other admins, would find interesting would be a list of currently heated discussions. I'd also point out that "current" can be very asynchronous on Wikipedia, with disputes running between editors in very different timezones. ϢereSpielChequers 08:45, 2 June 2023 (UTC)[reply]
That's a great insight, and we agree! We've actually been approaching this problem from multiple angles; our previous collaboration with Wikipedia actually addresses this exact idea. In a nutshell, we spoke to some admins to hear their thoughts on this idea of having a list of currently heated discussions (and in fact some of the things they brought up are closely related to the issue you mention about what constitutes "current"). If you're interested, I definitely encourage you to check out the paper (which can be found here)! Jonathan at CornellNLP (talk) 13:25, 2 June 2023 (UTC)[reply]
Very interesting. Are there any requirements for the participants? ✠ SunDawn ✠ (contact) 03:31, 5 June 2023 (UTC)[reply]
ConvoWizard itself only works on desktop browsers, and specifically requires Firefox or Chrome (in theory it will also work on any other Chromium-based browser, e.g. Arc, Brave, newer versions of MS Edge, but we only officially support Firefox and Chrome). So that's the main technical requirement. ConvoWizard is also designed to integrate with the "reply" feature on Wikipedia discussion pages, so participants will need to use that feature rather than directly editing the page source (this is necessary in order for ConvoWizard to be able to detect where you are replying). Other than these technical requirements, the only other "soft requirement" is that we ideally want editors who frequently participate in discussions, but there's no strict threshold for what "frequently" means here :) Jonathan at CornellNLP (talk) 14:18, 5 June 2023 (UTC)[reply]
Please feel free to contact the Editing team if you have any questions about interacting with the "reply" feature. ESanders (WMF) (talk) 12:57, 6 June 2023 (UTC)[reply]
Thank you all for your valuable feedback so far! We did get a question over email about whether the data collected during the study is stored permanently or only temporarily. Just to publicly answer for anyone who may have the same question: in accordance with our IRB policy, data is kept only for the duration of the research project (which encompasses the study and subsequent analysis period), and will be deleted afterwards.
As an additional note, to make it easier for people to indicate their potential interest in the study, we have set up an anonymous form you can fill out. Filling this out does not constitute any formal commitment; it is simply a way for us to gauge initial interest in advance of official recruitment. Jonathan at CornellNLP (talk) 19:11, 6 June 2023 (UTC)[reply]
  1. Regarding transparency and accountability, it is recommended to start a documentation page for your research project at meta:Research:Projects (compare e.g. this page for a currently ongoing user survey). No need to get super detailed or fill out every field in the "add your project" form - you can probably mostly reuse the text from your message. This page can serve as a reference point later (instead of the post and discussion here which will get archived soon).
  2. Regarding "All data collected during the study will be stored securely and confidentially on Cornell servers", it would be good to clarify what data is being collected (editors' IP addresses? user names? information from their browsing history or whatever else the browser extension may have access to?). And thanks for clarifying that any sensitive data will be deleted - that question was on my mind too when I first read your post. On the other hand, it would also be interesting to know whether it is planned to publish non-sensitive replication data from this project, and/or open-source its code. (The aforementioned research project form on Meta-wiki also asks about this.)
  3. Beyond that, I'm a bit unclear about what kind of feedback you are looking for? E.g. I'm sure it would be useful to get experienced editors' eyes on the survey questions (per general practices in survey research, as unclear or easily misunderstood questions can easily affect the quality of the data; the Wikimedia Foundation has done this often for its own surveys, too). But it doesn't seem that you have posted them for review? Also, in case you received feedback from the Wikipedia community that would make you want to change the study's design significantly, would your IRB even allow you to implement these without having to go through additional review?
  4. In any case though, thanks for reaching out proactively to the community, and in particular for aiming to turn your research into a practical tool that might have the potential to meaningfully improve Wikipedia. (Apropos, is that ConvoWizard browser extension already generally available for Reddit users, and do you have an idea how widely used it might be there?)
Regards, HaeB (talk) 02:54, 7 June 2023 (UTC)[reply]
Thanks for your detailed comments, and for pointing us to the meta-wiki research page! I've gone ahead and created a page for the project, which can be found here: ConvoWizard: Understanding the Effects of Providing Interlocutors with Information about the Trajectory of their Ongoing Conversations.
The page should contain the answers to your questions about data collection (but let us know if anything is still unclear). As for the kind of feedback we're looking for: the comments you just gave us are pretty much exactly the sort of thing we were hoping for (so thanks again!) --- that is to say, pointers to best practices for engaging with the Wikipedia community and norms around conducting research, and questions about the details of the study. Jonathan at CornellNLP (talk) 16:58, 7 June 2023 (UTC)[reply]
Is there any option of having the ConvoWizard tool installed into a user's on-wiki js, rather than a browser extension? How can one be certain it is not monitoring activity or gathering data from the user's activity on other websites? What steps are being taken to safeguard privacy in off-wiki browsing? ~ ONUnicorn(Talk|Contribs)problem solving 18:03, 7 June 2023 (UTC)[reply]
I second this comment. I'd personally feel a lot more comfy with this if it integrated into Wikipedia's current userscript infrastructure rather than being a browser extension. Loki (talk) 19:30, 7 June 2023 (UTC)[reply]
To be fair, the team already clarified what data is being collected by the browser extension.
I guess though that if you don't want to rely on statements alone, you could ask whether the extension will be open source or at least publicly available in an official browser extension repository like Chrome's web store (where there is some level of scrutiny by the browser vendor and/or their community of reviewers). From a quick Google search, this doesn't seem to be the case yet (which may also partially answer my own question 4. above).
Regards, HaeB (talk) 00:48, 8 June 2023 (UTC)[reply]
Thank you all for the feedback, this is definitely a good point to address. We use standard browser security features to ensure that the tool does not have access to any page outside of wikipedia.org domains. This is verifiable through your browser: when you first install the extension, the browser will report what permissions are being requested, and afterwards, if you click on the extension while on a non-wikipedia domain, Chrome or Firefox will report that the extension does not have access to the page contents (For more information about the browser permissions system, please see this Mozilla documentation page; it is also possible, at least on Chrome, to manually manage these permissions). Furthermore, during the Cornell IRB approval process we had to specifically declare what data is being collected when the extension is active on the wikipedia domain, and these are documented on the project's research page.
We hope this is sufficient to provide confidence in installing and using the extension, but of course feel free to follow up if you still have any other thoughts or suggestions. Jonathan at CornellNLP (talk) 17:27, 8 June 2023 (UTC)[reply]

How do we deal with sources that omit something when that omission is worth its weight in gold?

I am currently in the process of trying to find a compromise to an edit war that has been raging for at least two years and has intensified ever since the person it's over decided to run for president. I'm currently concentrating only on the lead headline/introduction.

One of the main points of contention is the many mainstream articles that describe a person as something. Some people don't like it and don't agree with it. The person himself doesn't agree with it. Others like it and agree with it and want to keep it. Others still like it, completely disagree with it, and want to keep it.


So there are essentially three camps, not just two. The person can easily be found in my edit history, but I'll not mention it here so as to stay on topic. But if you want you can easily look it up.

Here's the crux of the issue. Several sources describe the person as this something. Mainstream, reliable sources. Some highly politicized but others highly neutral and scientific.

Yet many others that are easy to find completely omit that aspect of the person in describing their career, history and impact.

Others yet, equally reliable and mainstream, rephrase it.


The side that is currently winning is the one that wants to focus on this particular issue and the sources that focus on it.

I'd like a balanced approach that shows that there's more to the person or that the issue is complex. To do that I'd either want to include his denial of it (that may or may not fall under https://wiki.riteme.site/wiki/Wikipedia:Mandy_Rice-Davies_applies) or as an alternative include the fact that many sources either rephrase it or omit it completely.

As it's highly contentious and just getting more and more inflamed, I'm looking for an "imperfect solution" to build consensus. But I'm stepping over too many toes when doing it, as the solution ends up suiting nobody. Still, it may ultimately come down to some kind of dispute resolution.

So what is one to do? Is there a policy to deal with it or could we start building one, if not to save this article from chaos then to save others in the future?

Very few people will introduce something or someone as something they are not. They won't, for example, say "XYZ is a nice fellow, college educated, who loves tacos and absolutely rejects the term shoplifter".

Other sources may say "XYZ is A, B, C, and likes shoplifting".

But many more may simply say "XYZ is A, B and C."

How do we account for the diversity of opinion that's shown in the media by omission, and where a person's own denial may be taken by some to fall under https://wiki.riteme.site/wiki/Wikipedia:Mandy_Rice-Davies_applies ?

The question is complicated because it's not just a question of what sources are reliable or what should be included in the article as a whole, but what should be highlighted as introductory and defining characteristics/aspects of the person. And in that regard I'd think that it's interesting if there are many mainstream reliable sources that completely omit something.

Overall also, if you'd indulge me, what's everyone's opinion of https://wiki.riteme.site/wiki/Wikipedia:Mandy_Rice-Davies_applies ? Doesn't it create some situations where there's no opportunity for objectivity? Of course someone might reject the term shoplifter, and they may have valid reasons for it. Maybe they stole because they were hungry and feel that term doesn't fit them, maybe they feel they were framed, etc. But naturally (most of them) "they would deny it". So doesn't that put them in an impossible situation? CompromisingSuggestion (talk) 00:59, 3 June 2023 (UTC)[reply]

This is a question of WP:DUEWEIGHT, and MANDY is just an essay. ScottishFinnishRadish (talk) 01:22, 3 June 2023 (UTC)[reply]
That's making me a bit frustrated because it came up here as if it's something I as a new editor have to follow. And I didn't realize it was "just an essay"... man, why would someone who isn't even involved in this dispute lie to me, or were they confused themselves? Edit: they did mention it was an essay, but in passing; at least I got the impression that it's what defines what has weight and what doesn't. https://wiki.riteme.site/wiki/Wikipedia:Teahouse#What_can_you_do_when_you_feel_bullied_by_other_editors?
Anyway, I'll read through DUEWEIGHT, but how do you weigh omission? Even me, who's arguing this, could argue from the perspective of a devil's advocate that there's a lot more weight to an accusation than to simply the lack of one. How does one balance this? CompromisingSuggestion (talk) 01:27, 3 June 2023 (UTC)[reply]
Ahh, RFK and I'm assuming without actually reading, vaccination stuff. ScottishFinnishRadish (talk) 01:24, 3 June 2023 (UTC)[reply]
  • First the WP:MANDY essay is itself highly controversial. We recently had a lengthy debate about it, with no consensus.
My own take: in a BLP, we should always give a degree of respect to what the subject says about themself… whether they are talking gender identity, political labels, criminal accusations or anything else. However, this does not mean we ignore what others say. Those other opinions should be covered as well. We should cover all significant viewpoints on a BLP subject… and that includes what they say about themselves. Blueboar (talk) 01:31, 3 June 2023 (UTC)[reply]
Is there a policy or a precedent ruling I could base this argument on, because people have tried it before me and have been shut down, or it's resulted in edit wars? Thank you. CompromisingSuggestion (talk) 01:34, 3 June 2023 (UTC)[reply]
No and yes. I'm trying to focus the start of the lead more on his career as a human rights lawyer and environmentalist, while as a compromise keeping the aspects about the increasingly inflammatory accusations (it's gone from vaccine activist years ago to anti-vaccine propagandist now), while adding at least the context that he himself rejects the label. But even adding half a sentence more about his other actions is upsetting people, and so is adding his denial.
Previously, "supporters" of RFK tried to remove the whole vaccine-propaganda thing, or revert back to at least vaccine or anti-vaccine activist, but that resulted in an edit war. There's probably astroturfers from both sides there now; it's a mess. On the other side there is the third camp, supporters of his most controversial vaccination statements, who actually want that in the article because they hate vaccines.
Just trying to build an understanding of existing policy, or move to rapidly develop new policies, so that this can be avoided in the future and in preparation for what will obviously be a dispute resolution. CompromisingSuggestion (talk) 01:33, 3 June 2023 (UTC)[reply]
You might be interested in Wikipedia:Mandy Rice-Davies does not apply. I personally find this to be more persuasive. -BRAINULATOR9 (TALK) 02:52, 7 June 2023 (UTC)[reply]

Sockblocks and ambiguous loss

A few times a year, we get some high-profile sockblock that really catches the community off-guard. In the two most recent such cases, I'd been involved in the investigation, so that's given me a perspective to watch with a bit of detachment how the community responds.

Clearly, it throws people. Many are visibly upset, even those who've been here long enough to know the dark side of the wiki. And that makes sense. Such blocks are a form of ambiguous loss, a relationship (be it close or the wiki equivalent of colleagues waving hello in the hallway) that is taken away with no real chance of closure. People not only don't know what to feel, but they don't know how to feel.

A personal anecdote: When I was in high school, a classmate confided in me and a few others that she was being abused by her parents. She said she wanted to run away to live with friends in another state, so we all chipped in to help her. When the cops found her, they thoroughly investigated her family before sending her back, and found that not only was there no evidence of abuse, but many other things she'd told all of us, even little inconsequential things, were fabricated wholecloth. That was a shock to all of us, and not one we had any expectation of being able to address with her. So we sat and talked it through, various groupings of about 40 kids sharing our feelings over the course of a few days. It wasn't closure but it was something.

WP:DENY is one of the guiding principles of dealing with sockpuppetry, and Wikipedians aren't always the most "talk about your feelings" crowd, and these things make it hard to address this kind of ambiguous loss in the same way my peers and I did in high school. But is there something we can do to address the emotional impact of these cases? Maybe just an essay to write that people could point to in the future, or maybe a place to have a discussion that would be blanked or even deleted at the end, or... I don't know. I think too often we neglect editors' emotional needs, expect everyone to be cold and stoic; but I don't have a great solution in mind here. -- Tamzin[cetacean needed] (she|they|xe) 19:43, 7 June 2023 (UTC)[reply]

You are free to write such an essay. Ruslik_Zero 20:25, 7 June 2023 (UTC)[reply]
Well, yes, but I'm wondering if anyone has any better ideas. -- Tamzin[cetacean needed] (she|they|xe) 20:38, 7 June 2023 (UTC)[reply]
What bothers me most is often the nagging doubt that the CUs have got it wrong. Maybe there was a genuine WP:LITTLEBROTHER using the same device. Maybe the birthday paradox reared its ugly head. Maybe there was some unknown unknown that wasn't even considered. No matter how low the probability, I have trouble changing my opinion of someone based on evidence I can't see.
And the ones blocked as "socks" when no one will publicly name the master are the worst. Are they some sociopath planning to game CU rights for themselves and out everyone? Or are they some teenager who was CIR-blocked a dozen times when they were 10, and are too embarrassed to request an unblock from the original account? Suffusion of Yellow (talk) 21:05, 7 June 2023 (UTC)[reply]
I agree with the lack of information being an issue. The practice of Template:Blockedwithouttags is a deliberate reduction of information to other editors, resulting not only in nagging doubts but also hampering wider ability to deal with these problems. CMD (talk) 02:53, 8 June 2023 (UTC)[reply]
But then you get vandals whose sole objective is to fill a category with themselves, which is remarkably common, and then you get impersonators, copycats, and fanboys. Personally I'll often try to remember to add a name in the block log, even if it's only periodic. Often, with checkuser blocks, like a lot of LTA and VOA blocks, we might not even have a 'master' to name. If you look at Gustin Kelly's SPI, they didn't have a public name for at least 6 months. It might also be a joe-job, which is another common problem. And with checkuser blocks, by definition, there's probably non-public data involved. But I do think most checkusers/admins will usually be happy to answer any queries, even if they can't tell you much. As for the rest of this thread, users and admins have easily located talk pages. I'm not sure what else would be wanted. -- zzuuzz (talk) 13:06, 8 June 2023 (UTC)[reply]
I understand the arguments, but as time has gone on I've felt less and less like they balance the downsides. CMD (talk) 13:57, 8 June 2023 (UTC)[reply]
I'm less concerned with the obviously disruptive users who are blocked as socks-with-no-master; that's just an indication of why no one assumed good faith. (Welcome to Wikipedia. One of your recent edits forged an admin's signature to close an AFD. So if you could not do that again, that would be great...)
I'm talking about well-established users, who, to all outward appearance, were here to build an encyclopedia. That's when I have my doubts. This is no different than wondering if the police really arrested the right person. They are innocent until proven guilty in a court of law. We don't have the resources for anything like that on Wikipedia, but that's a necessary evil that we shouldn't pretend is good. Suffusion of Yellow (talk) 16:25, 9 June 2023 (UTC)[reply]
We don't automatically revoke TPA, though even when that avenue is available it isn't often used, there's also e-mail through which correspondence can be continued even if on-wiki access is revoked, although as with TPA you may be talking to a void. In the past I've tolerated socks on my own user talk page so long as they weren't acting unhinged or disrupting things elsewhere, though I don't think my words were very often taken to heart.
Historically community discussion about these blocks has taken place across a variety of user talk pages and on the noticeboards. In a high-profile case comments are virtually guaranteed on the talk page of blocker and blockee alike, though norms limit the tenor of those discussions. Perhaps somewhere with a different set of norms would help, perhaps it would just add to the drama.
There are bad CU blocks; we don't like to talk about it, but there are. I suppose that's what ARBCOM is supposed to be for, yet there are public cases where even they are nearly evenly split; it follows that the same holds true in some private ones too.
I guess I hit resignation a long time ago, there are some things I don't understand, and will never understand; I don't judge. My advice has generally been to approach it all with dispassion, but as with so many things, that is easier said than done. 74.73.224.126 (talk) 02:19, 8 June 2023 (UTC)[reply]
@Tamzin: Exactly which sock users are you talking about? Feel free to let me know privately if necessary, but I'm sure I'm not the only one who's curious ... I skim-read the noticeboards and these village pumps but I haven't heard of a case like this in recent memory. I do sometimes miss things though ... Graham87 08:54, 8 June 2023 (UTC)[reply]
I'm happy with the answer in the recent thread on my talk page, unless of course Tamzin you want to add something there (or by email). Graham87 10:12, 9 June 2023 (UTC)[reply]
This is inspired by, but kind of tangential to, the idea of "balancing the downsides":
I've been thinking about the problem of disclosing information. Security is complicated. Disclosing nothing is usually the safest outcome. For one thing, if I say "Alice got blocked, but I want to reassure you that it's not for any criminal reason", but for Bob, I say "Hmm, I can't really say anything about Bob", then people are going to guess that Bob was blocked because of a criminal investigation, and having that be publicly suspected could hamper the investigation or prompt inappropriate reactions (e.g., criminal harassment of the accused). Consequently, nobody can have information about any of the cases.
Another factor is our tendency to hope for justice. Specifically, if someone gets blocked for reasons we don't understand (or agree with), we worry that we, too, could be unjustifiably blocked. For core community members, a long-term block feels like getting fired from your real-world job, or having all your friends reject you. It is like a social "death sentence". You want to stay part of the community, so you watch for the behaviors that get others thrown out: People get blocked for vandalism, so you don't vandalize (not that you would want to anyway). People get blocked for unintentionally screwing up, so you're careful about your edits. People get blocked for throwing temper tantrums, so you try your best to stay cool. People get blocked for pushing too hard for a particular point of view, so you avoid contentious topics or let it go.
Against this background of you trying really hard to fit in, someone you know gets blocked or banned. And you have no idea what happened. How do you prevent yourself from making the same mistake if you don't know what the mistake is? This event adds uncertainty to your plan. The uncertainty worries you. You feel stressed. The community may have rid itself of a problematic user, but the downside is that you are more stressed than you were before. And from where you sit, the downside is large, personal, and concrete, but the upside is remote and theoretical at best.
Sure, there's always a gossip calling for "transparency" because he wants to revel in the juicy details, and there's always someone who wants "transparency" because he wants an excuse to share his views that rape threats aren't bad enough to justify blocking an editor. But I think that most editors are concerned about this because (a) they want to avoid getting blocked, and (b) they feel that they will not be able to do that unless they know every possible block-worthy offense.
I've been thinking a lot recently about the loss of trust in institutions. This is a global, real-world thing driven largely by the pandemic's isolating effects, but it's also a problem on wiki. We trust ArbCom less than we did. We trust admins less than we did. This isn't because ArbCom or the admin corps or any other group is objectively worse than it was pre-pandemic. It's because we are individually feeling less safe in trusting anything and anyone, and that includes our institutions on wiki. We have always had editors experience uncertainty over surprise blocks. What's different now is that we are experiencing that uncertainty in combination with a loss of trust. Previously, we were shocked and surprised that the editor who was nice to us was blocked for undisclosed reasons, but many of us could reassure ourselves that the decision was likely correct, because we believed that good, smart, trusted people and groups made the decision. Now, we are shocked and surprised and not as willing to believe that anyone except ourselves can make good decisions. Without personally having full information, I can't check your work and prove to myself that you made the correct decision this time, so I assume that you are wrong. It is easier for us to believe that you are wrong than for us to believe that villains don't display bad behavior in every single action throughout their entire lives. WhatamIdoing (talk) 18:00, 8 June 2023 (UTC)[reply]
I am not sure that the community as a whole has less trust in admins than it did years ago. There has always been a current of distrust of admins, although how it is expressed may have changed. For instance, I don't remember seeing a serious rant about the admin cabal for several years now. Of course, my viewpoint may be biased, having been an admin for more than 16 years. From my perspective, there are enough checkusers and Arbcom members who do have full access to the information behind blocks (other than office actions) to prevent one or a few checkusers from misusing blocks. Every action with the CU tools is subject to review by every other checkuser. In the same vein, there are enough Arbcom members to prevent a cabal from acting inappropriately. If the community cannot trust a committee that is elected by secret ballot under strict security in two tranches, then the community cannot survive. Yes, it is annoying not being in on the details, especially if the editor who was blocked was someone you knew and liked, but there are privacy and other issues that mean some things have to remain unavailable to anyone who has not signed an NDA with the Foundation. Donald Albury 18:40, 8 June 2023 (UTC)[reply]
Well... the reliable sources say that humans around the world have lost trust in institutions during the last few years.[1][2][3][4][5][6][7][8][9] Editors are humans, and we have institutions here. I doubt that the humans who contribute to Wikipedia and the institutions they create and maintain are magically immune to an effect that is destabilizing basically all other people and all other institutions. Or, to put it another way, I start with the very Wikipedian assumption that I am not a reliable source, and that when all the sources say that trust in institutions has been declining for several years and that it got worse during the pandemic, then I assume that they are more likely to be correct than my own personal experience. I assume that you do, too. WhatamIdoing (talk) 23:57, 8 June 2023 (UTC)[reply]
But, do any reliable sources say that Wikipedia editors have lost trust in its institutions? I'm pretty sure that there isn't any polling history on the topic. Donald Albury 19:32, 9 June 2023 (UTC)[reply]
We have reliable sources saying that people in general have lost trust in all institutions. We would need a source that says "except Wikipedia" to overcome that. We shouldn't start from a position of Wikipedian exceptionalism.
The only polling history I'm aware of is m:Community Insights, and it doesn't ask specifically about ArbCom (probably because it's a global survey, and very few wikis have an ArbCom). WhatamIdoing (talk) 19:52, 9 June 2023 (UTC)[reply]
"We trust ArbCom less than we did." Please substantiate this claim. This isn't about "exceptionalism"; it's about having a source that actually says that 1) ArbCom acts as such a body and 2) that there is evidence that people indeed trust it less. From what I have observed, the wiki seems to trust it more rather than less in the past 5 years. Extrapolating from both corporate and political bodies to onwiki bodies which are neither of those two is a false equivalence. Izno (talk) 22:22, 9 June 2023 (UTC)[reply]
I've been away for about a decade. Admins are better now than they were back when I was an admin. The same is true of ArbCom. The community has higher standards. --A. B. (talkcontribsglobal count) 23:25, 9 June 2023 (UTC)[reply]
I do not think there is any solution. As some may remember, some time ago I blocked two users for edit-warring, was dragged to ANI, was forced to apologize ("either you unblock and apologize immediately or you get a personal ArbCom case filed against you"), got a personal case anyway, was fully cleared by ArbCom, and now one of the two users is blocked as a sock and another one is topic banned, mind you, for exactly this bad behavior - did anyone on Wikipedia, just any user, from that ANI crowd, come to my talk page and apologize? Nope. It just remains my personal problem to remember who is a piece of shit here. Ymblanter (talk) 14:32, 10 June 2023 (UTC)[reply]
I'm sorry you had to go through that. It's good that your blocks have been affirmed later on. But man, that's tough. SWinxy (talk) 18:14, 10 June 2023 (UTC)[reply]
If Wikipedia editors feel emotionally bereft when someone is blocked then they are making a serious category error. People that you only "know" from an online site are not your friends; friends are people who you have actually met and have an attachment to. And being blocked only means that you can't edit just one web site. It's not as if blocking takes away your money, your liberty or (in some countries) your life, as criminal sanctions can do. If the emotional impact of someone being blocked lasts more than a few minutes then that's a sure sign that you are spending too long on Wikipedia. Phil Bridger (talk) 20:06, 10 June 2023 (UTC)[reply]
If an editor feels emotionally bereft after someone is blocked from (or ghosts) an online space, that means they are capable of forming emotional attachments in more than one way, which expands their potential domain of experience and connection with others. The flip side, of course, is that they can feel real loss if it is interrupted, but that's the risk you take when you open yourself up, and one that is worth taking, at least to those who experience it. Mathglot (talk) 04:02, 12 June 2023 (UTC)[reply]
Ignoring problems because their causes are things that someone should not have done, when a large number of people do it anyway, is not how we make progress, even if you are right. Snowmanonahoe (talk · contribs · typos) 14:36, 12 June 2023 (UTC)[reply]
Telling people how they "should" feel is generally a waste of time, but I don't agree with this. Of course, some of us have met editors in person; beyond the usual in-person events, I understand that there are a handful of marriages in this community. But it's "the community" that I think is the key aspect. If you feel like you belong, that you have been accepted, that you are part of this community, then having the community kick you out is going to hurt. IMO you can't have a community without having some people be inside the group and others be outside of it. If you want to be inside the fold, and the others force you out, then of course you're going to feel rejected and excluded. That's just how normal humans respond to being rejected and excluded. WhatamIdoing (talk) 03:59, 14 June 2023 (UTC)[reply]

Encouragement to use the article's talk page

How can we encourage participation in the discussion on the article talk page? Is there any template for the article to add in main space, maybe in external links? Eurohunter (talk) 05:33, 11 June 2023 (UTC)[reply]

Encouraging participation in the discussion on an article's talk page on Wikipedia can be a valuable way to engage with other editors and improve the quality of the article. There are several effective strategies to encourage discussion: being proactive, providing clear subject headings, being respectful and open-minded, inviting specific editors, notifying relevant WikiProjects, and using inline templates. Always assume good faith, remain civil, and focus on improving the encyclopedia. Royalesignature (talk) 10:10, 11 June 2023 (UTC)[reply]
Hi @Royalesignature. Are you using any AI tools, generative models or other automated processes (ChatGPT or otherwise) to produce content for Wikipedia? Barnards.tar.gz (talk) 10:34, 11 June 2023 (UTC)[reply]
Further context: 1, 2, 3. Folly Mox (talk) 11:38, 11 June 2023 (UTC)[reply]
@Royalesignature: What do you mean by inline templates? Eurohunter (talk) 10:36, 11 June 2023 (UTC)[reply]
No, I don't make use of and Ai tools inline template means how the content of an article are arranged and flagged if needed warranty in the sense that it easily draw the attention of an expert editor to an article Royalesignature (talk) 11:40, 11 June 2023 (UTC)[reply]
Let's remember that AGF is not a suicide pact. Phil Bridger (talk) 11:48, 11 June 2023 (UTC)[reply]
I believe that User:Royalesignature is acting in good faith, and also that they are being dishonest in denying inappropriate use of ChatGPT or a similar tool, possibly because they didn't understand that / why it was wrong. Folly Mox (talk) 12:55, 11 June 2023 (UTC)[reply]
User:Eurohunter, the only template I'm aware of that invites discussion from mainspace is Template:Dubious, which only applies to certain types of discussion. Usually a single neutral notification to Wikiprojects is the route. Sometimes, we're the only one active on Wikipedia who cares enough. Folly Mox (talk) 15:49, 12 June 2023 (UTC)[reply]

CSD for LLM written articles requiring WP:TNT

We're having more and more articles that are clearly written by a large language model, with fake sources. I just deleted Voice acting in India as a G3 hoax, as it was made up by a language model, and then a bunch of fake sources were added. Should we have a CSD specifically to cover LLM creations, or is G3 sufficient? ScottishFinnishRadish (talk) 17:27, 12 June 2023 (UTC)[reply]

I think it’s too early in the LLM era to state definitively that articles created by (or with the assistance of) LLMs cannot be valid. Models are improving in capability very rapidly. We should monitor this and be vigilant, but in the meantime, I think G3 (and possibly A11) should suffice. Barnards.tar.gz (talk) 18:43, 12 June 2023 (UTC)[reply]
We already have a draft WP:LLM policy which strongly discourages their use.
There are two main issues with them:
1. LLMs are completely unaware of Wikipedia policy and are thus very likely to commit copyright infringement, say something libelous about a living person, rely on sources we wouldn't consider acceptable, or violate NPOV.
2. LLMs are very likely to hallucinate completely false info and even fake citations. Loki (talk) 18:52, 12 June 2023 (UTC)[reply]
Not sure. Could you describe the scale of the problem in more concrete terms? Loki (talk) 18:52, 12 June 2023 (UTC)[reply]
The scale? No idea. I noticed this one because the editor's user page was on my watchlist from some warnings I left them a year ago. I know others have found LLM created articles as well. It's also an issue that's going to get worse. ScottishFinnishRadish (talk) 22:26, 12 June 2023 (UTC)[reply]
I think the LLM-written articles that we need to worry about are the ones where the use of an LLM is not obvious. That would not then be a valid CSD reason. Phil Bridger (talk) 19:29, 12 June 2023 (UTC)[reply]
This is the problem. LLM detection tools are prone to both false positives and false negatives. Often, they themselves use LLMs. Policies like these tend to be based on the obvious cases, but the reason those cases are obvious is because they flagrantly violate some other existing policy. What happens when it's unclear? Gnomingstuff (talk) 15:22, 13 June 2023 (UTC)[reply]
I think any new article filled with fake sources, regardless of what tools might have been used to create it, should be subject for deletion, and G3 as a blatant hoax seems like a suitable criterion. Not having any sources at all is a trickier issue, since traditionally deletion discussions are based on whether or not the subject meets English Wikipedia's standards for having an article, rather than the current state of the article. isaacl (talk) 20:51, 12 June 2023 (UTC)[reply]
That's why I specifically called out a TNT situation. It's likely Voice acting in India is a notable enough topic, but what was there was irredeemably tainted. G3 somewhat applies, but CSD is normally pretty tightly interpreted. If there's consensus that G3 covers it, I'm happy with that, but it seems like it might be stretching it in some situations. ScottishFinnishRadish (talk) 22:24, 12 June 2023 (UTC)[reply]
When you asked "Should we have a CSD specifically to cover LLM creations," did you mean just LLM creations that had fake sources? If not, then the question would be when the current state of an article should be deemed irredeemable as a starting point. Personally I would prefer to focus on the characteristics of the article that make it irredeemable, regardless of what tools may have been used to create it. isaacl (talk) 23:06, 12 June 2023 (UTC)[reply]
I should have repeated it in my main question, but as the section heading says, specifically for TNT situations. There's nothing worth keeping when an article is a hallucinatory essay written by an algorithm. ScottishFinnishRadish (talk) 23:28, 12 June 2023 (UTC)[reply]
If an article warrants deletion due to no redeeming characteristics, it doesn't matter if it was written entirely by hand or with the assistance of a program. I'm not sure there's a good way to define that in a clear-cut manner in the nature of the speedy deletion criteria, though. isaacl (talk) 23:44, 12 June 2023 (UTC)[reply]
We delete bad content rather than taking an ad hominem (ad machina?) approach, but it's useful to have some way of finding pages that smell of AI so they can be judged, not on author but on article quality (or lack of it). Certes (talk) 00:01, 13 June 2023 (UTC)[reply]
(ad machinam) Folly Mox (talk) 23:04, 13 June 2023 (UTC)[reply]
How large is the problem quantifiably? Are the number of articles created that this criterion would apply to large enough that AfD would be stressed? Tazerdadog (talk) 22:22, 12 June 2023 (UTC)[reply]
AfD is already stressed. Aside from that, is AfD the venue to bring an article that was created with no effort by an algorithm with no real sources? Editor time is the most valuable resource in Wikipedia, so anything that avoids waste is important. ScottishFinnishRadish (talk) 22:31, 12 June 2023 (UTC)[reply]
When we say "editor time" in this context, we often mean "people who don't really create content". We don't seem to care much about wasting the time of the editors who created the articles.
I see two relevant possibilities:
  • The accurate identification: The article was created by an LLM, and was correctly identified as being a problematic article.
  • The false accusation: The article was not created by an LLM, but someone wants to get rid of it (for any reason, including a genuine mistake about its origin).
In the first case, AFD might waste the AFD respondents' time; in the second case, CSD would definitely waste the content creator's time. The question is: Whose time do we want to waste?
Since whether a page was created by an LLM is not uncontroversial, and CSD is supposed to be for uncontroversial deletions, I don't think we can stick to the principles of CSD and also have CSD for articles that one editor claims, usually without indisputable evidence, that its origin involved LLM. WhatamIdoing (talk) 04:10, 14 June 2023 (UTC)[reply]
By editors we should mean editors, rather than splitting the community based on personal opinions. Wasting the time of any editor on articles that took seconds to create and are likely full of AI hallucinations, is time that they could have spent on improving the encyclopedia. -- LCU ActivelyDisinterested transmissions °co-ords° 20:09, 14 June 2023 (UTC)[reply]
Do we have any tools (bots?) for checking that the source URLs in new articles exist? Of course, sources that don't exist may be valid (RS deleted an ephemeral news item) or good-faith errors (mistyped URL); and sources which exist may be invalid (unreliable source cited, either wilfully or in error); but a check that they exist might flag up some LLM content in a useful way. Certes (talk) 23:53, 12 June 2023 (UTC)[reply]
I suppose there's IABot, which can analyze a page and then tag everything as a dead link. Snowmanonahoe (talk · contribs · typos) 23:56, 12 June 2023 (UTC)[reply]
I wonder if it could wave a red flag in a suitable venue if it finds a new article full of dead links. Certes (talk) 23:58, 12 June 2023 (UTC)[reply]
Not a bad idea. Snowmanonahoe (talk · contribs · typos) 23:59, 12 June 2023 (UTC)[reply]
Not a bad idea but an imperfect one, since LLMs can also hallucinate print sources (or, worse, spit out the name of a real book that has absolutely nothing to do with the "fact" it stated). Gnomingstuff (talk) 15:29, 13 June 2023 (UTC)[reply]
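A minimal sketch of the kind of link-liveness check being discussed above (hypothetical PHP only, not IABot code or any existing tool; the helper names and example URLs are invented for illustration):

    <?php
    // Hypothetical sketch: flag a new article whose cited URLs mostly fail to resolve.
    // Assumes the cited URLs have already been extracted from the article's <ref> tags.

    function urlIsLive(string $url): bool {
        $headers = @get_headers($url); // returns false when the request fails entirely
        if ($headers === false) {
            return false;
        }
        // $headers[0] looks like "HTTP/1.1 200 OK"; treat 2xx/3xx responses as live.
        return (bool) preg_match('/\s[23]\d\d\s/', $headers[0]);
    }

    function shareOfDeadLinks(array $citedUrls): float {
        if (count($citedUrls) === 0) {
            return 0.0;
        }
        $dead = 0;
        foreach ($citedUrls as $url) {
            if (!urlIsLive($url)) {
                $dead++;
            }
        }
        return $dead / count($citedUrls);
    }

    // Example with made-up URLs: raise a flag when most of a new article's links are dead.
    $citedUrls = ['https://example.org/story-123', 'https://example.org/archive/456'];
    if (shareOfDeadLinks($citedUrls) > 0.5) {
        echo "High share of unresolvable citations in this new article; worth a human look.\n";
    }

As noted above, such a check could only ever be a red flag rather than proof, since hallucinated print sources and legitimately dead links would look the same to it.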
There was no consensus for a new CSD at WT:CSD a month ago or four months ago. So I support G3 for this. No one wants to waste hours fixing LLM outputs that took a second to add, and no one wants to come to Wikipedia to read raw LLM outputs when they can get the same shitty output directly from the LLM. Many LTAs will delight in adding this stuff. If we don't clamp down on that spam, many editors & readers will give up on Wikipedia altogether, and its newfound perception of reliability among the press will wither away. See also broken windows theory: if vandals add raw LLM outputs and notice that nothing is done about it, they'll be emboldened, and if readers notice these articles, they might join in on the "fun". To respond to WAID, I'll repeat what I said at WT:LLM: right now, it's mostly easy to tell and uncontroversial (for example, see this unanimous MfD), so this isn't an issue. DFlhb (talk) 12:49, 14 June 2023 (UTC)[reply]

Rethinking autoconfirm rules

Right now, autoconfirmed means 10 edits and 4 days, in either order. That's good enough to catch random impulsive vandals, but we all know that LTAs routinely warehouse sleepers which they can trivially activate whenever they need a new confirmed account. I'm thinking it would work better if the 4-day clock started running after your 10th edit. Then at least it would become more obvious which accounts are sleepers and which are perfectly innocent new users who simply haven't edited yet. There have been a number of technology changes in the past year or two which have really eaten into the utility of checkuser. This would help move the balance back in the other direction.

The number of edits and number of days are configurable per-wiki, but this would require code changes. Let's assume for the moment that's not a blocker. -- RoySmith (talk) 17:13, 13 June 2023 (UTC)[reply]
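To make the proposed rule concrete, here is a minimal sketch of the check being suggested (illustrative PHP only; the constants and function below are invented for the example and are not MediaWiki's actual autopromote code):

    <?php
    // Illustrative only: the names below are made up, not MediaWiki APIs.
    const REQUIRED_EDITS        = 10;
    const REQUIRED_WAIT_SECONDS = 4 * 24 * 60 * 60; // 4 days

    // Current rule: 10 edits AND 4 days since registration, satisfied in either order.
    // Proposed rule: the 4-day clock only starts once the 10th edit has been made.
    function wouldBeAutoconfirmed(int $editCount, ?int $tenthEditTimestamp, int $now): bool {
        if ($editCount < REQUIRED_EDITS || $tenthEditTimestamp === null) {
            return false;
        }
        return ($now - $tenthEditTimestamp) >= REQUIRED_WAIT_SECONDS;
    }

    // Under the current rule, a sleeper registered long ago can make 10 quick edits and
    // be confirmed immediately; under this version it would still have to wait 4 days
    // after that burst of edits, which makes the warehousing pattern easier to spot.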

It's not a bad idea. One reservation I have is that it would be pretty annoying for xwiki users without global rollback, especially if this were to be adopted by other wikis with edit counts for autoconfirmed. Snowmanonahoe (talk · contribs · typos) 17:20, 13 June 2023 (UTC)[reply]
I'm not following what the xwiki issue would be. -- RoySmith (talk) 17:23, 13 June 2023 (UTC)[reply]
That seems like a pretty sensible change. A fair number will still create accounts, make 10 edits to their sandbox and wait to log back on, but that is a bit more visible. ScottishFinnishRadish (talk) 17:26, 13 June 2023 (UTC)[reply]
Is there a way within the existing settings that we could exclude edits to commonly gamed pages from counting towards autoconfirmed? Or would that just lead to the same behaviour, just on different targets? Sideswipe9th (talk) 18:17, 13 June 2023 (UTC)[reply]
  • Sorry, but I'm going to reject the premise that needing software to be created and deployed to production isn't a blocker, because it is. There are other potential options in moving autoconfirmed out of base autopromote and into flaggedrevs that are at least somewhat more feasible. In flaggedrevs there are all these options:
		$wgFlaggedRevsAutoconfirm = [
			'days'               => 30,    # days since registration
			'edits'              => 50,    # total edit count
			'spacing'            => 3,     # spacing of edit intervals
			'benchmarks'         => 7,     # how many edit intervals are needed?
			'excludeLastDays'    => 2,     # exclude the last X days of edits from edit counts
			// Either totalContentEdits reqs OR totalCheckedEdits requirements needed
			'totalContentEdits'  => 150,   # $wgContentNamespaces edits OR...
			'totalCheckedEdits'  => 50,    # ...Edits before the stable version of pages
			'uniqueContentPages' => 8,     # $wgContentNamespaces unique pages edited
			'editComments'       => 20,    # how many edit comments used?
			'email'              => false, # user must be emailconfirmed?
			'neverBlocked'       => true,  # Can users that were blocked be promoted?
		];
  • So I think exploring what already has some support would be better. (And with enwiki being huge, moving to this may not be feasible, but it is much more feasible than rewriting the autopromote software.) — xaosflux Talk 17:32, 13 June 2023 (UTC)[reply]
    OK, fair enough. Putting on my software developer hat, one of the things that drives me nuts is requirements which are half "this is what I want to do" and half "this is how you should implement it". So, guilty as charged on that count. If there's a better way to implement what I want, I'm all for it. -- RoySmith (talk) 17:48, 13 June 2023 (UTC)[reply]
    Just being realistic, no way an extension for just enwiki is going to go over for this; and I really doubt that core autopromote will be rewritten (but feel free to open a feature request in the meantime, worst case it just gets ignored). On the other hand, there are other projects using FR options. I expect the "what you want" (dealing with sleepers) can be addressed with some of those options (putting aside the "use x days THEN y edits" implementation part, much less building software to implement that requirement). Looking over the FR options, could you see some of those fixing the underlying issue? (They work in AND mode). — xaosflux Talk 17:54, 13 June 2023 (UTC)[reply]
    (For what it is worth, this also has other tech issues, as the "autoconfirmed" mechanism is designed primarily to stop spam-bots - but we could make another group and move certain permissions from autoconfirmed to it). — xaosflux Talk 17:57, 13 June 2023 (UTC)[reply]
    I've never used FR, so I'd have to do some research, but from a naive reading of the flag definitions, it sounds like "edits >= 10 and excludeLastDays == 4" is pretty much what I'm asking for. -- RoySmith (talk) 17:59, 13 June 2023 (UTC)[reply]
    @Xaosflux OK, if I understand things right, flaggedrevs is only exposed on enwiki via the pending changes protection mechanism? I've never used that before, so I've played around a little with Wikipedia:Pending changes/Testing/10. I'm kind of hazy on the details. It looks like you have to pre-define sets of flaggedrevs criteria, and the only one that currently exists is PC1 (Review revisions from new and unregistered users). How do I create other sets? -- RoySmith (talk) 18:13, 14 June 2023 (UTC)[reply]
    @RoySmith that requires configuration requests; the possible idea was to use the flaggedrevs promotion system that is more flexible as a possible option - but what the desired outcome really needs to be sussed out (e.g. delay the ability to create 'articles', require captcha more often, prevent moves, etc). Introducing a new "protection level" probably isn't needed. — xaosflux Talk 18:17, 14 June 2023 (UTC)[reply]
    spacing and benchmarks look interesting: that could avoid promoting someone who makes ten edits in quick succession to game the system. However, it might make it harder to explain to someone why they've not yet been autoconfirmed despite making twenty unfortunately spaced edits over eight days. Certes (talk) 17:57, 13 June 2023 (UTC)[reply]
    @Certes indeed. An underlying item to consider is: what do you want these people to not be able to do? (All of the autoconfirmed permissions, just a subset of them?) — xaosflux Talk 17:59, 13 June 2023 (UTC)[reply]
    That's a question for RoySmith, but I'd guess we want to stop them creating client biography/CV articles. Certes (talk) 18:05, 13 June 2023 (UTC)[reply]
    If that is the real primary problem, "createpagemainns" permissions could come off of "autoconfirmed" and get applied to a new higher threshold. — xaosflux Talk 18:22, 13 June 2023 (UTC)[reply]
    Another thing paid editors tend to do is create a new draft, make ten edits to it, and then move it to mainspace. Perhaps the uniqueContentPages param could be set higher. Sungodtemple (talkcontribs) 12:55, 14 June 2023 (UTC)[reply]
    If that is a common pattern, we should not make it harder to detect by making people edit differently. With paid editing, the focus needs to be on catching it (we can't prevent that it happens unless we destroy the wiki by locking down everything). —Kusma (talk) 13:04, 14 June 2023 (UTC)[reply]
Genuine new non-autoconfirmed editors already face enough hurdles and are made unwelcome. Vandalism is overall quite low compared to 15 years ago. In my view the suggested change goes in the wrong direction. —Kusma (talk) 18:18, 13 June 2023 (UTC)[reply]
Come work SPI for a while. -- RoySmith (talk) 18:21, 13 June 2023 (UTC)[reply]
Apart from the fact that your suggestion is still trivial to bypass for determined LTAs, I think the threat to the wiki from pissing off potential new editors is greater than that of socks. I don't have data to back up my gut feeling, do you? —Kusma (talk) 18:43, 13 June 2023 (UTC)[reply]
Unfortunately, the privacy requirements around checkuser prevent me from giving specific examples, but I can say that on a regular basis, when I do range checks, I often find lots of newly created accounts which are good technical matches but have zero edits; it's difficult to justify blocking those as sleepers. I'm sure any CU will tell you the same. On the other hand, with a brand new account which made 10 garbage edits and then didn't do anything else, it would be pretty obvious what was going on. -- RoySmith (talk) 18:58, 13 June 2023 (UTC)[reply]
The problem is that the kind of clever and obsessive LTAs this targets tend to watch things closely and would adapt quite quickly. What are we going to do if they start making easy, minor constructive changes instead? By what measure is something a garbage edit anyway? Our obsolete markup isn't familiar to most people, and there's a dizzying array of policies, guidelines, norms, and expectations for good-faith new users to navigate; how do we avoid false positives? Even if we take the view that all accounts that make exactly 10 edits before going quiet should be blocked, they'll quickly shift to making some random number between 11 and 20.
The Autoconfirmed permission is simply tied to too many different things to be tinkered with casually, and I doubt there would be community consensus to do so anyway.
Soft-blocking potential sleepers is another option, but not one well supported by current policy. 74.73.224.126 (talk) 19:25, 13 June 2023 (UTC)[reply]
I read RoySmith as addressing the subset of accounts that are good technical matches which make exactly 10 edits and then go quiet, not all such accounts with that behaviour. Not sure how that affects your analysis, IP editor. Folly Mox (talk) 20:07, 13 June 2023 (UTC)[reply]
Good technical match is a bit of a squishy term; in some parts of the world people editing on common, technically indistinguishable mobile devices can share IPs within minutes of each other. But as I said, the bigger concern is that LTAs are not static: we make a move, they make a move. In this case several low-effort countermoves that leave us exactly where we were before suggest themselves with just a moment's thought, so we should find a better move instead. Feel free to refer to me as 74, since no other unregistered users beginning with those numbers are currently commenting in this discussion. 74.73.224.126 (talk) 20:19, 13 June 2023 (UTC)[reply]
I ran a query recently looking for suspicious editing patterns (exactly ten edits in quick succession, then nothing). I found very few, and those I did find seemed to be constructive good-faith editors, rather than obvious red flags such as adding and removing a space five times. Certes (talk) 21:50, 13 June 2023 (UTC)[reply]
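(For anyone curious what such a check can look like in practice: a rough sketch against the public API, not the query Certes actually ran, which I haven't seen. The username, the "within an hour" and "30 days idle" thresholds, and the 50-edit cap are all placeholder assumptions.)

    # Rough sketch: does a given account match "exactly ten edits in quick
    # succession, then nothing"? Not the actual query referred to above.
    from datetime import datetime, timedelta, timezone
    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def contribution_times(user):
        """Timestamps of a user's edits, oldest first (capped at 50 for this sketch)."""
        resp = requests.get(API, params={
            "action": "query", "list": "usercontribs", "ucuser": user,
            "ucprop": "timestamp", "uclimit": 50, "ucdir": "newer",
            "format": "json", "formatversion": 2,
        }, timeout=30).json()
        return [datetime.fromisoformat(c["timestamp"].replace("Z", "+00:00"))
                for c in resp["query"]["usercontribs"]]

    def looks_like_sleeper(user, now):
        """Exactly ten edits, all within one hour, and nothing in the last 30 days."""
        times = contribution_times(user)
        return (len(times) == 10
                and times[-1] - times[0] <= timedelta(hours=1)
                and now - times[-1] >= timedelta(days=30))

    # Example with a hypothetical username; 'now' must be timezone-aware:
    # print(looks_like_sleeper("ExampleUser", datetime.now(timezone.utc)))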
mw:Growth/Personalized first day/Welcome survey#2022 responses might be worth a perusal, but I'm not sure it applies directly. Folly Mox (talk) 19:22, 13 June 2023 (UTC)[reply]
I'd also like to make good-faith new editors feel more welcome, though I do appreciate the problem with unwanted editing, especially UPE. My comments above were addressing the technical aspects rather than whether we should tighten things up at all. As others have implied, it might be better to improve the identification of UPEs rather than create hurdles which will bamboozle genuine newcomers but soon be overcome by a professional sock farmer. Certes (talk) 13:25, 14 June 2023 (UTC)[reply]
I don't think "tighten things up" is the right way to think about this. By making it easier to differentiate between legitimate new users and obvious sleeper creations, this will reduce the number of erroneous blocks of legitimate new users because right now there's no way to tell, and in a situation where sleepers are coming out of the woodwork, you're likely to err on the side of blocking. Also, we often semi-protect pages that are frequent targets and then have to bump that up to ECP because semi is providing no protection in the face of large sleeper warehouses. If it were harder to warehouse sleepers, we wouldn't have to resort to ECP as often, and that would be a net plus for that large set of editors who don't meet ECP requirements. I also disagree with the argument that "the hard-core LTAs and UPEs will just adjust their game". With an attitude like that, we wouldn't do anything at all. -- RoySmith (talk) 14:33, 14 June 2023 (UTC)[reply]
I'm still mulling over the suggestion writ large (particularly vis-a-vis the technical obstacles), but I agree with RoySmith that the proposed change's effect on new editors operating only their one account would be minimal, and that the net effect would be that fewer new editors would be erroneously blocked for sockpuppetry. signed, Rosguill talk 14:42, 14 June 2023 (UTC)[reply]
Speaking only for myself (though North8000 encapsulates the same idea below quite succinctly), the point in bringing up the dynamic nature of the issue is not to say we shouldn't look for ways to minimize disruption with as little collateral as possible, just that we need to think through the problems to find solutions that are more optimal and efficient; in other words, to find a better move.
Take for example ECP. The reason it works is not because it flawlessly prevents all disruption, it can be and is in fact gamed routinely, but because it shifts an important dynamic in our favor. They spend an hour to game it, and we b/lock them in two minutes. It's far from perfect, and the collateral is greater than we would like, but judiciously applied it's quite satisfactory. Now we are dealing with obsessives, so it doesn't stop them, but it does reduce the frequency.
For this proposal, however, and even setting aside the technical issues, I don't really see what non-trivial dynamics it shifts in our favor. More formally, it's unclear that the benefits will outweigh the costs. I'm not trying to shut down discussion by any means (this is what VPI is for, after all); I'm all ears for a better way to handle LTAs, I just don't think this is it. 74.73.224.126 (talk) 16:43, 14 June 2023 (UTC)[reply]

Wouldn't it be ultra easy for bad actors to adapt to this change? If so, the benefit would be microscopic. North8000 (talk) 16:37, 14 June 2023 (UTC)[reply]

Pronouns for Individuals

I think that biography articles on individuals should list their pronouns, making it easier to find how someone identifies without having to check across the page. Wookieepedia added that feature and it really helps CatdemonBlahaj (talk) 21:09, 14 June 2023 (UTC)[reply]

What would you consider appropriate sourcing for such statements? AndyTheGrump (talk) 21:26, 14 June 2023 (UTC)[reply]
WP:ABOUTSELF allows sources published by the individuals themselves to be used as reliable sources for such statements.
Maybe this can be useful for people whose pronouns are not immediately obvious from the picture/prose. A parameter in the Infobox would be a nice place for it. Carpimaps talk to me! 14:33, 18 June 2023 (UTC)[reply]

Bot creation to replace a blacklisted ref

Following this discussion with an admin, it appears a pseudoscience source frequently used by novice medical editors is going to be blacklisted by consensus at WP:RSN. The admin reports the source is already in use in some 500 articles, which may be disrupted by a blacklisting notice once the blacklisting is completed.

Could a bot be created, similar to the rapid function of AnomieBOT in restoring deleted references, to find and replace the blacklisted source with {{citation needed|date}}? Zefr (talk) 18:10, 16 June 2023 (UTC)[reply]

User:Zefr: I just did this for another RSN case at WP:URL_change_requests#Purging_all_mainspace_links_to_fmg.ac/Projects/MedLands, which was in about 1000 pages. You can make a request on that page; I can probably do it. -- GreenC 19:03, 16 June 2023 (UTC)[reply]
GreenC - thanks for the reply and possible solution. I agree with your recommendation in that discussion to remove the content and the blacklisted source together (rather than just leaving a [cn] notice), although that may need admin input. Would the bot 1) find and revert the blacklisted source and edit for the 500 existing uses (note: as an example, AnomieBOT gives an explanation of its activity) and 2) notify future input editors that the source is blacklisted and prevent the edit before publishing per WP:SPB? cc: Ohnoitsjamie. Zefr (talk) 21:31, 16 June 2023 (UTC)[reply]
I think for 1) the answer is yes. My bot can (only) 'terminate with extreme prejudice', e.g. eliminate the entire reference between ref tags including the ref tags themselves, links in external links, etc. Everything related to this source, including named refs like <ref name="example" />, disappears. The text that the citations support would stay in place; there is no revert of prior edits, only deletion of citations. It could replace ref citations with a cite needed. If you want to keep the reference but eliminate the URLs, I think Headbomb's program might be able to do that. For 2) the bot is not involved with the spam blacklist. --GreenC 22:15, 16 June 2023 (UTC)[reply]
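(An unofficial illustration of the kind of find-and-replace being described; this is emphatically not GreenC's bot, whose internals I haven't seen. It assumes pywikibot and a crude regex; the domain and the article list are placeholders, and bundled citations and reuse of named refs would need the extra handling discussed just below.)

    # Illustration only: swap <ref>...</ref> citations pointing at a blacklisted
    # domain for {{citation needed}}. Not GreenC's bot. The domain and article
    # list are placeholders; bundled and named refs need extra care.
    import re
    import pywikibot

    BAD_DOMAIN = "blacklisted-example.com"           # placeholder domain
    REF_RE = re.compile(r"<ref[^>/]*>.*?</ref>", re.DOTALL | re.IGNORECASE)

    def replace_blacklisted_refs(text, domain, date="June 2023"):
        """Replace whole <ref>...</ref> blocks containing the domain with {{citation needed}}."""
        def repl(match):
            ref = match.group(0)
            return "{{citation needed|date=%s}}" % date if domain in ref else ref
        return REF_RE.sub(repl, text)

    site = pywikibot.Site("en", "wikipedia")
    for title in ["Example article"]:                # in practice, a list from a linksearch
        page = pywikibot.Page(site, title)
        new_text = replace_blacklisted_refs(page.text, BAD_DOMAIN)
        if new_text != page.text:
            # Each diff would be reviewed by a human before saving, per this thread.
            page.text = new_text
            page.save(summary="Replace blacklisted source with {{citation needed}} (semi-automated)")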
This sounds on the face of it like a project that will require some human oversight. In cases where there are bundled citations in a single pair of ref tags, or multiple references supporting the same prose, the default behaviour won't lead to good outcomes. Regarding removing named references to blacklisted sites, you'll have to track down any other uses of the named references to make sure Anomie Bot doesn't rescue them and restore the blacklisted source. Folly Mox (talk) 22:27, 16 June 2023 (UTC)[reply]
Every edit is checked; there is no way to do this kind of work 100% full auto with no checks. -- GreenC 22:35, 16 June 2023 (UTC)[reply]
Headbomb, this seems to be in your wheelhouse if you have any thoughts given WP:UPSD. KoA (talk) 19:24, 16 June 2023 (UTC)[reply]
I'm waiting for formal closure before updating WP:UPSD Headbomb {t · c · p · b} 19:27, 16 June 2023 (UTC)[reply]

Trust Network

There used to be a "Vertrauensnetz" (Web of Trust) on the German Wikipedia, I was hoping we could do something similar here but with a Template on a smaller more decentralized scale, I was thinking it could have three parameters, 1st the name of user B whom user A trusts, then a reason parameter (Voluntary) with for example "this user trusts user B, because of user Bs extensive knowledge in X topic" and one uneditable parameter which always says "This user trusts" so nobody can manipulate it into saying something mistrusting about another user. Is something like this even allowed? Crainsaw (talk) 16:52, 17 June 2023 (UTC)[reply]

It would be a nice idea, but, unfortunately, I'm sure it would be gamed by untrustworthy editors trusting one another. The way that content is trusted is via reliable sources, so the particular Wikipedia editor doesn't come into it. Phil Bridger (talk) 17:37, 17 June 2023 (UTC)[reply]
Untrustworthy editors trusting each other is not necessarily a problem. A web of trust is supposed to give a path from someone you trust to other people; if there's no connection between your web and the untrustworthy editors' web, the fact that they all "trust" each other isn't supposed to make a difference.
The question I'd have about this is what this "trust" is supposed to be for. Does anything we do here rely on trust where a web would be useful? RFA and other elevated permissions rely on trust to some extent, but a web of it doesn't seem too useful there. Anomie 11:38, 18 June 2023 (UTC)[reply]
Makes sense. The Trust Network shouldn't have any formal power; it should just be a tool that makes editors trust each other more. For example, if a certain editor is on someone else's trust list, they'll become friendlier and trust the other editor more. Crainsaw (talk) 11:42, 18 June 2023 (UTC)[reply]
You say "There used to be a "Vertrauensnetz" (Web of Trust) on the German Wikipedia" - what happened to it, and why? Johnbod (talk) 12:30, 18 June 2023 (UTC)[reply]
It was shut down because, along with the Trust Network, they also created a Mistrust Network (self-explanatory). It was created to stop canvassing (since editor A couldn't ping other editors if their page said that editor B distrusts editor C, who was in a discussion with editor A). But then people started to view it as personal attacks, and for reasons I still don't understand the rage somehow spilled over to the Trust Network, and both were shut down. That's why I'm only for a template, rather than entire user subpages dedicated to the criteria for how they trust/mistrust users and their extensive lists of users. And also why I'm only for a trust network and not a mistrust network Crainsaw (talk) 13:38, 18 June 2023 (UTC)[reply]
Interestingly, the template-based/decentralised method seems to have been attempted in 2006, as far as I can work out from Wikipedia:Trust network - I don't think it got any traction. Andrew Gray (talk) 19:33, 18 June 2023 (UTC)[reply]
It was also tried by some users, such as here, but I was thinking of a smaller, more compact template which you would put at the end of your userpage, along the lines of:
This user trusts:
User A
User B
...

Crainsaw (talk) 19:39, 18 June 2023 (UTC)[reply]

Merging help forums Teahouse and Help desk

Is there any reason why WP:Teahouse and WP:Help Desk are two different forums? They essentially achieve the same thing (asking about "how to use or edit Wikipedia"). I think this is an unnecessary split of volunteers and is confusing for beginners. I would like to propose a merge, but I want to know if I missed anything. Carpimaps talk to me! 14:28, 18 June 2023 (UTC)[reply]

The key question is what do the participating volunteers think? The help desk and teahouse have different approaches and so may interest different types of volunteers. isaacl (talk) 15:43, 18 June 2023 (UTC)[reply]
Help Desk volunteers are people who are willing to answer the question "how do I create an article" twice a day, Teahouse volunteers are people who are willing to answer it more than twice a day.[Humour] -- Random person no 362478479 (talk) 16:11, 18 June 2023 (UTC)[reply]
They are intended for different audiences: the Teahouse for complete beginners, the Help Desk for people who know their way around Wikipedia in general but have questions on details. Looking at the questions asked in both, this separation seems to work to a certain degree. Of course some questions end up in the "wrong" place, but in general Teahouse questions do seem to tend to be on a more basic level. I don't know whether or not there really is an issue of new users being too intimidated to ask at the Help Desk. As a still relatively new user I asked my first question at the Teahouse, but I would have had no issues asking it at the Help Desk. In general I don't have the impression that people asking elementary questions at the Help Desk are treated poorly, but of course it is possible that this is only because they don't pop up all the time. One thing I noticed is that Help Desk questions tend to get more answers than Teahouse questions, and while that can sometimes be explained by the nature of the question, that is not always the case. On the question of merging the two I have no strong opinions one way or the other. -- Random person no 362478479 (talk) 16:26, 18 June 2023 (UTC)[reply]
As a regular at both venues, I oppose a merge per my short answer over at WT:HD. —Tenryuu 🐲 ( 💬 • 📝 ) 03:30, 19 June 2023 (UTC)[reply]

New design

Hey everyone. Given that Template:Village pump has looked the same for years, I suggest considering a more modern and attractive design for it. 𝔹𝕒𝕣𝕗𝕚𝕣𝕥𝕒𝕝𝕜 18:37, 19 June 2023 (UTC)[reply]

Do you have a proposal for a design? —TheDJ (talkcontribs) 21:13, 19 June 2023 (UTC)[reply]

A Wikipedia Museum

In my opinion, it would be good if what vandals did were archived in some way.

I think that when people see that Wikipedia doesn't tolerate vandals, it will discourage people from becoming vandals. Showing the horrors of what vandals did would put into people's minds the idea that vandalism of Wikipedia is bad and shall not be tolerated.

I would like to hear your opinions on the matter.

Pastalavist (talk) 18:43, 19 June 2023 (UTC)[reply]

Oppose. We should WP:DENY recognition to vandals. I think it unlikely the proposed target audience of "potential vandals" will see it. There are already numerous ways good-faith/curious readers can learn about the damage vandalism can do and that it is not acceptable. DMacks (talk) 18:52, 19 June 2023 (UTC)[reply]
yo think of it as glorification i think of it as learning about the dangers of vandalism Pastalavist (talk) 18:58, 19 June 2023 (UTC)[reply]
the "yo" was meant to be you
sorry Pastalavist (talk) 18:59, 19 June 2023 (UTC)[reply]
A Wikipedia Museum could be valuable, but it should not curate vandalism per WP:DENY. A collection of landmark discussions that those with institutional memory consider to have been important in shaping the current culture and policy could be a nice thing to have. @Graham87 and Iridescent:, do you know of any such collection? Folly Mox (talk) 19:00, 19 June 2023 (UTC)[reply]
It should not curate vandalism; rather, it should curate anti-vandalism Pastalavist (talk) 19:02, 19 June 2023 (UTC)[reply]
Every contribution to Wikipedia is archived, except for deleted pages and deleted revisions. It is its own museum, but of course it is far too large and complex to be appreciated in one visit. There are various attempts to curate the history and guide the visitor. A few pages such as Wikipedia:List of hoaxes on Wikipedia document specific types of vandalism, but generally it is something we prefer not to celebrate or encourage. Certes (talk) 19:19, 19 June 2023 (UTC)[reply]
thank you certes for giving me clarity Pastalavist (talk) 19:26, 19 June 2023 (UTC)[reply]
@Folly Mox: There's an old page at Wikipedia:History of Wikipedian processes and people, Wikipedia:Milestones (for earlier years especially), and the Historical archive for some really out-of-the way pages. For more recent news there's the Signpost archives which go back almost uninterrupted to 2005. Curating a history of Wikipedia would be difficult because different people's ideas of what is and is not historically significant vary wildly and the importance of a particular page/discussion might not become apparent until much later. To this end I made a personal Wikipedia timeline which might interest some here. Graham87 04:23, 20 June 2023 (UTC)[reply]
Thank you User:Graham87; I knew you were the right person to ask. Very interesting and educational. Folly Mox (talk) 05:37, 20 June 2023 (UTC)[reply]

Can chatgpt be used on Wikipedia

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


please tell me Pastalavist (talk) 19:08, 19 June 2023 (UTC)[reply]

If bots can be used to get rid of vandalism, why shouldn't they be used to create articles? Pastalavist (talk) 19:09, 19 June 2023 (UTC)[reply]
This question is currently being debated at Wikipedia:Large language models and its talk page. ChatGPT might be helpful if used carefully and with close supervision, but we're very unlikely to allow AI bots to create articles without human scrutiny. Certes (talk) 19:15, 19 June 2023 (UTC)[reply]
No. ChatGPT is a stochastic parrot that generates superficially believable word salad and fake references. We already have more than enough trouble dealing with misfiring bots. The last thing we need is to hook Wikipedia up to an algorithmic sewer pipe. XOR'easter (talk) 19:55, 19 June 2023 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

question re policy for BLP items

[Image caption: Miners below the age of 10 – EEng]

I have a question about usage of WP:BLP. We currently have articles on two minors below the age of ten, highlighting their royal status as members of a royal family. These two individuals have had absolutely no voice in whether these articles should be established or not. Their parents have publicly distanced themselves from the referenced royal family.

I am wondering if BLP can be used to preserve the privacy of individuals who have not reached adulthood, and hopefully to remove the articles until they do. I am sincerely trying to consider the long-term well-being of these two minors. Eventually they will presumably gain access to the Internet, once they are old enough to do so. Furthermore, they do not currently reside in the country where they would have royal status.

I don't feel that Wikipedia should be increasing these minors' public visibility before they've had a chance to decide what public role they wish to have, if any. Can anything be done to help with this situation? I'm truly open to any ideas. I recognize that our policy may or may not apply. Thanks. --Sm8900 (talk) 19:56, 21 June 2023 (UTC)[reply]

You might try asking on WP:BLPN too. You are very vague about the details, so it's hard to tell, but for instance your description applies to the children of Prince Harry, Duke of Sussex. In that case, they are for better or worse subjects of intense media attention, and they are naturally going to be the subject of editors' interest; I can't think of any policy-based reason that we shouldn't have an article on them (which isn't to take a position on whether we morally should, just to say that current BLP policy allows it!), and I don't know that without a strong policy based reason you would be able to find consensus to delete or even merge these articles. There is an essay, WP:MINORS, which basically says "be even more careful editing about living children than even other living subjects", but even that doesn't suggest that we shouldn't have articles on notable minors. Caeciliusinhorto-public (talk) 12:06, 22 June 2023 (UTC)[reply]
There's also WP:BLPREQUESTDELETE which says that if a relatively unknown person requests deletion of their article, a no-consensus discussion can be closed as delete; even if we took that to include parents requesting deletion on behalf of their minor children I don't know whether e.g. Harry & Meghan's kids would count as "relatively unknown"... Caeciliusinhorto-public (talk) 12:08, 22 June 2023 (UTC)[reply]
@Caeciliusinhorto-public OK. Let's expand the usage of "relatively unknown", because in fact no one knows these kids. In any way. They have made no public statements. Even their own grandfathers don't know them! And one of those grandfathers is the basis for any supposed royal status! Their utter absence of any public role, actions or statements does in fact extend this protection to them. Sm8900 (talk) 14:27, 22 June 2023 (UTC)[reply]
  • I don’t think there is a “one size fits all” rule for this. In the case of children of royalty, I would say that (as a minimum) they should be discussed in the article about their Royal parent… but whether and when they would deserve a separate article is a more difficult question. Blueboar (talk) 12:24, 22 June 2023 (UTC)[reply]
The points above from all of you are all notable. However, for me the problem is that these kids literally haven't done anything at all in any public role... except, ya know, be born, and live in Montecito. Shouldn't there be some way to take down an article on a minor, if it is based on a public role that they do not and probably will not ever actually assume in any practical way?
In other words, the minute that one of them actually makes any public statement, or takes any action, or even visits their supposed home country for literally the first time (other than as infants), then maybe we could consider whether any article is warranted or justified. --Sm8900 (talk) 14:05, 22 June 2023 (UTC)[reply]
Well, you are welcome to nominate the articles for deletion at WP:AFD, but I suspect the consensus will be to keep. People do tend to think Royals are notable just for existing, and there are sources that have discussed them. Your best arguments would probably be “privacy of a minor”, and “merge” into article on parents. Good luck. Blueboar (talk) 14:21, 22 June 2023 (UTC)[reply]
Thanks @Blueboar! In 10 or 15 years, the press coverage may have melted away. These kids may be trying to get on with their own normal adolescent lives. Meanwhile, do we really need an entire Wikipedia article on them? What happens when they try to go to their first house party in Montecito, and just want to chill with their preteen friends? Haven't we all been there at some point in our lives? Do we really have any basis or reason to put this albatross of an article onto them? Your reply above is helpful. Sm8900 (talk) 14:25, 22 June 2023 (UTC)[reply]
I’m not the one you have to convince. Blueboar (talk) 17:45, 22 June 2023 (UTC)[reply]
  • This is an area where I am definitely in disagreement with current Wikipedia policy. Current policy makes no distinction according to the age of anyone mentioned on Wikipedia. It should. I don't have the time to undertake the process for such a change myself, but I would certainly support any reasonable proposal made in that direction. Phil Bridger (talk) 17:50, 22 June 2023 (UTC)[reply]
    That's an excellent point. I may formulate something and then post it on the policy tab here at the village pump. Sm8900 (talk) 02:28, 23 June 2023 (UTC)[reply]
    How about this?
    Proposed text:
    I would like to propose a new rule to be added to BLP; the proposed rule is that any minors who have taken no public actions and have done nothing notable on their own should not have any article created about them based on the public role of their parents.
    Does that make sense, @Phil Bridger? What else can or should be added to fully address this? Sm8900 (talk) 02:55, 23 June 2023 (UTC)[reply]