
Wikipedia:Bot requests/Archive 37



A more user-friendly archive bot

True, we have ClueBot III and MiszaBot, but they're not user-friendly. We should have a more user-friendly archive bot that does not overwhelm users with syntax. RussianReversal (talk) 02:34, 18 June 2010 (UTC)

What specifically is not user-friendly? What would you want changed? In your opinion, how should the syntax work? -- Cobi(t|c|b) 02:37, 18 June 2010 (UTC)
Archives should be daily and have a searchable box by default. It should have relatively few options: freq, age, format, and box. Format should be md,y; mdy; ymd; or dmy. Now that's user-friendly! RussianReversal (talk) 02:46, 18 June 2010 (UTC)
No, that's just not flexible. FinalRapture - 03:24, 18 June 2010 (UTC)
Most talk pages don't get enough traffic to be archived by day. Furthermore:
  • As for format — CB3 has this. Just the values are slightly different: "F j, Y", "F j Y", "Y F j", and "j F Y" respectively.
  • As for age — CB3 has this and allows for fine-grained control down to the hour.
  • As for freq — I don't know what you want this to do, and how it's different from age.
  • As for box — CB3 also has this, but called archivebox.
  • As for a search box — CB3 has this, too.
I don't understand what else you want. -- Cobi(t|c|b) 04:14, 18 June 2010 (UTC)

Perhaps make a PHP script that generates the code for them. The user simply selects the options they want, and then copies the code to their talk page. --Chris 11:04, 23 June 2010 (UTC)

Removing unnecessary hyphens

I often see adverbs ending in -ly as part of compound modifiers which are incorrectly hyphenated, e.g. "a highly-motivated Wikipedian" should be just "a highly motivated Wikipedian". It seems like a simple regex could search for this pattern and correct it in a semi-automated fashion. Are there bots available to do this already or should I make a new one? (I am proficient in C# .NET). Thanks. -Cwenger (talk) 00:39, 23 June 2010 (UTC)
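
A rough sketch of the pattern involved, in plain Python rather than C#; the not-an-adverb whitelist is illustrative only, and the need for it is exactly why this would stay semi-automated:

import re

# "-ly" adverb + hyphen + word, e.g. "highly-motivated".
LY_HYPHEN = re.compile(r"\b(\w+ly)-(\w+)\b")

# Words ending in -ly that are not adverbs and must keep their hyphen (illustrative, not exhaustive).
NOT_ADVERBS = {"family", "early", "only", "friendly", "assembly", "supply"}

def dehyphenate(text):
    def fix(match):
        if match.group(1).lower() in NOT_ADVERBS:
            return match.group(0)                     # e.g. "family-owned" stays as it is
        return match.group(1) + " " + match.group(2)  # "highly-motivated" -> "highly motivated"
    return LY_HYPHEN.sub(fix, text)

print(dehyphenate("a highly-motivated Wikipedian at a family-owned firm"))
# -> a highly motivated Wikipedian at a family-owned firm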

Bots are not permitted to do spell-checking, but maybe this could be added to WP:AWB/T? –xenotalk 17:44, 25 June 2010 (UTC)

We need another User:WebCiteBOT

That one never quite got the job done, and its runner was talented, so we need someone who really knows their stuff. Probably also someone who is willing to license their bot code under a free license in case they don't want to run it. Thanks. - Peregrine Fisher (talk) 06:21, 13 June 2010 (UTC)

It's not running, AFAIK, because WebCite couldn't handle the load. Checklink does some light preemptive archiving, but they still seem to have some trouble with that load. — Dispenser 20:05, 13 June 2010 (UTC)
I could write it easily enough, but I agree with Dispenser that the service is somewhat weak. Tim1357 talk 21:18, 13 June 2010 (UTC)
Any ideas on how we could just do it ourselves? Why couldn't they do it? I never saw any on-wiki conversation about it. What should we do? Who should I ask about it? Thanks. - Peregrine Fisher (talk) 07:35, 20 June 2010 (UTC)
I don't know. Making another Web Citation service is a huge amount of work. Not only would the operator have to worry about developing the service, but also about copyright issues, and how/where to host it. I think our best bet is to use an external, 3rd party service like WebCitation. Perhaps as they further develop, they will be more reliable, and bots like WebCiteBOT will make more sense. Tim1357 talk 01:10, 26 June 2010 (UTC)
You're probably right. Even when webcitation is working, I don't think the bot is run. Can someone take it over, or something? I'd be happy to email webcitation, if I thought we had a bot that would use them. - Peregrine Fisher (talk) 03:18, 26 June 2010 (UTC)
I emailed them about 4 weeks ago and have a bot ~50% complete. However, I was going to do higher-priority links first (e.g. all FAs, GAs, FLs, restricted TLDs like .edu and .gov, most-cited links), not the newest links, and then work my way down from there. The only limitation webcitation.org asked that we keep was the 1 request every 5 seconds one, and to notify them when the project becomes operational. Though they did say "In the future we will likely remove this limitation."
As far as creating another service, that would be the ideal solution, since you wouldn't have to trust another service not to go down. The main issue there is who is going to host it; using the toolserver is out of the question. Even if a project like that didn't break the rules (it breaks rule 4 and possibly 3), I can almost guarantee they wouldn't be able to provide the disk space you would need. It needs to be running on server(s) in a datacenter, not off of someone's laptop in their bedroom, since the latter is virtually guaranteed to be even less reliable than webcitation.org. The only real solution for something like that is either colocation or renting dedicated servers. Dedicated servers run anywhere from ~$70-300/mo per server depending on the specs of the server and whether you get other add-ons like IDS systems, automated backup, or support. Colocation isn't that much cheaper, but you have the benefit of not having to pay a monthly fee for upgrades to the server, like additional hard drive space or RAM; the disadvantage is that you actually have to buy a server and install it, or have it installed, usually for a fee. Colocation is cheaper in the long run, but most datacenters want to deal with people renting 1/4 rack or more, not some 1U server.

According to this article you're looking at about 305 KB per page. At that rate you should be able to store about 3.2 million pages on a 1 TB hard drive without a backup (931 GB on a 1 TB drive due to the difference between 1000 and 1024 for sizes), though you might be able to increase that significantly with compression. For backup reasons you'd need to make sure you stored everything on two different drives, so really you're looking at 1.6 million pages per 1 TB drive uncompressed. Keep in mind those numbers are based on an article that is three years old; it's very likely that the trend has continued since then and the average web page is significantly larger. Also note that is the average for web pages with images. If you were to archive other types of content like PDFs or Flash objects, those would be much larger.

As far as copyright, that shouldn't be too big of a deal as long as you obey any takedown requests and other standard spidering conventions like robots.txt, as you should be good under the DMCA's safe harbor provision if you're operating in the US (see the recent ruling in Viacom Inc. v. YouTube). --nn123645 (talk) 00:25, 28 June 2010 (UTC)
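
Sanity-checking the storage figures above (the 305 KB and 931 GB inputs are from the comment; the rest is straight arithmetic):

page_size_kb = 305                 # average page size from the cited article, in KB
usable_gb = 931                    # usable space on a "1 TB" drive (1000 vs. 1024)

usable_kb = usable_gb * 1024 * 1024
pages_without_backup = usable_kb // page_size_kb
pages_with_backup = pages_without_backup // 2    # every page kept on two drives

print(pages_without_backup)        # ~3.2 million pages, matching the figure above
print(pages_with_backup)           # ~1.6 million pages once everything is duplicated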

(redent) Wow!! You're on it. WP thanks you for looking into all this. In the short term, if we can get a bot that works, that should take care of it. After that, I guess we talk to the foundation. I'll drop a pointer to this discussion on Jimbo's page. Maybe that will do something. - Peregrine Fisher (talk) 03:12, 28 June 2010 (UTC) Still digesting what you said, but prioritizing FAs on down is a great idea. - Peregrine Fisher (talk) 03:14, 28 June 2010 (UTC)

If we could get the foundation to host something, that would be awesome :D. If we wanted to host something independently, there are several thousand hosting providers around that provide dedicated server hosting, one of the more popular, and more expensive, being serverbeach, though there are also 1&1, theplanet, hivelocity, and many others available with a Google search. I think you'd probably be able to host it for around $300-500 per month, or $3,600-$6,000 per year, if you compressed the pages that you are archiving. Considering how much data that is, it really isn't too bad, but that is more than I am personally able or willing to contribute. You might be able to offset the costs through advertising, but I don't think you'd make enough to pay for the project, and you might run into copyright problems monetizing other people's content. --nn123645 (talk) 03:39, 28 June 2010 (UTC)
We need to get the foundation to do it. It's too much money to ask one person to pay for it. I don't think ads will work, for the reason you mention. - Peregrine Fisher (talk) 03:55, 28 June 2010 (UTC)

Track deprods

Hi. Please write a bot (or add functionality to an existing bot) to track deprods per Wikipedia_talk:Edit_filter/Archive_4#Filter_200.2C_or_should_the_EF_be_engaged_to_track_non-abusive.2C_non-.22wrong.22_edits.3F. A typical behavior of simply reverting the addition of a PROD should be trackable, and the articles this happens to should get more attention (say, via a category or a listing on a page), possibly resulting in an AfD nomination as a contested PROD. Thanks!   — Jeff G. ツ 20:10, 15 June 2010 (UTC)

Here is a prototype that is running in user space: User:PSBot/Deprods. Admittedly, it is very incomplete, but after some hours you should see some pages listed. Source code is on the talk page. PleaseStand (talk) 06:10, 20 June 2010 (UTC)
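
For reference, the core of the detection is just comparing consecutive revisions of a page for the PROD tag; a minimal sketch of that check in Python (not PSBot's actual code, which is on its talk page as noted above):

import re

# {{Proposed deletion}}, {{Proposed deletion/dated}}, {{prod}} and {{prod blp}};
# illustrative only, real tagging has more redirects and edge cases.
PROD_RE = re.compile(r"\{\{\s*(proposed deletion(?:/dated)?|prod(?: blp)?)\s*[|}]", re.IGNORECASE)

def is_deprod(old_wikitext, new_wikitext):
    """True if an edit removed a PROD tag that the previous revision carried."""
    return bool(PROD_RE.search(old_wikitext)) and not PROD_RE.search(new_wikitext)
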
Excellent work so far. Might I suggest using {{Ln}} for each line item? Thanks!   — Jeff G. ツ 15:22, 20 June 2010 (UTC)
I have changed the script to use such a template. I hope the new version of my code works. PleaseStand (talk) 19:11, 20 June 2010 (UTC)
Just a note that I've run a similar thing with SDPatrolBot, where it lists all the deprods at User:SDPatrolBot/prodResults. However, I've recently been very bad at running this, so it's not exactly up to date. If PleaseStand wants to take over that's fine, although SDPatrolBot did also notify users if their prod had been removed, which you could consider doing as well PleaseStand...? If you want I can help with the coding (although I only really know .NET), or feel free to use User:SDPatrolBot/source#ProdNotify.cs. Best, - Kingpin13 (talk) 17:45, 25 June 2010 (UTC)
What is the advantage in leaving a notification of a PROD removal? WP:PROD says, "Consider adding the article to your watchlist." PleaseStand (talk) 04:08, 29 June 2010 (UTC)
Yes, but just having the page on your watchlist doesn't mean you're necessarily going to spot the deprodding, since they rarely have a nice edit summary, and some people prod enough pages that it's impractical to have them all on their watchlist. I know that while I was running the bot, a number of people found the notifications useful. - Kingpin13 (talk) 05:08, 29 June 2010 (UTC)

French footballers

I have created a new WikiProject Football France task force and now a load of existing articles need tagging on the talk page. I am currently finding articles already tagged with the {{Football}} or {{WikiProject Football}} templates, and adding the parameter France=yes. Using AWB, I have tagged around 500 of these but it's getting tedious so I wonder whether a bot could do the same job? I am currently performing a normal regex find & replace for ({{\s*)([Ff]ootball)(\s*(?:\s*|⌊⌊⌊⌊M?\d+⌋⌋⌋⌋\s*)?(\|((?>[^\{\}]+|\{(?<DEPTH>)|\}(?<-DEPTH>))*(?(DEPTH)(?!))))?)\}\} or ({{\s*)([Ww]ikiProject Football)(\s*(?:\s*|⌊⌊⌊⌊M?\d+⌋⌋⌋⌋\s*)?(\|((?>[^\{\}]+|\{(?<DEPTH>)|\}(?<-DEPTH>))*(?(DEPTH)(?!))))?)\}\} and replacing with $1$2$3|France=yes}}. Any talk pages already tagged with the France=yes parameter need skipping. I would like the bot to do this on all articles in the categories Category:French footballers, Category:Footballers in France by club and Category:French football managers. Cheers, BigDom 07:28, 20 June 2010 (UTC)

This is a pretty standard request, and can bypass BRFA. I suggest asking Xenobot Mk V to help you out (however, any of the WikiProject tagging bots will do). Follow the instructions on the bot's page, and hopefully Xeno will be able to run his bot to do this. Tim1357 talk 22:50, 25 June 2010 (UTC)
Thanks, I'll have a look there. BigDom 05:58, 26 June 2010 (UTC)

 Doing... on DodoBot. There's a fairly large number of pages, so probably won't be done till this evening or even tomorrow. - EdoDodo talk 09:23, 27 June 2010 (UTC)

 Done Should all be tagged. Feel free to leave me a message on my talk page if you think of any other categories that could be tagged. - EdoDodo talk 19:45, 27 June 2010 (UTC)

Tagging duplicate files from Commons with NowCommons

Can a bot tag all duplicates from Commons that do not have {{NowCommons}} with NowCommons? Also, any images that are OLDER on Commons should be tagged with {{Db-f8}}, explaining in the text of the tag that the Commons version is older. There is no realistic circumstance where an image is older on Commons and should still be here. The bot should skip images with {{NoCommons}}, {{C-uploaded}}, and {{M-cropped}}. CommonsClash - a tool listing duplicates on the English-language Wikipedia and Wikimedia Commons. Backlog on 9 May: about 34,000 media files. Less than half of these images are marked in any way as duplicates. ▫ JohnnyMrNinja 13:58, 22 June 2010 (UTC)

So I'm pretty sure that when nobody responds to a bot request (not even to tell you how much your request can't be done), it means that the request is a seething pile of gibberish. Let's start over. There are at least 34,000 files that are duplicated here and on Commons (maybe a lot more now, actually). While many of them are marked with the tag {{NowCommons}}, these are still much less than half of the total. The others are only distinguishable by a feature of MediaWiki that shows a little link at the bottom of the page. These images aren't even added to a category automatically. The only templates we have now for these images are NowCommons (or the derivative NCD) and {{db-f8}}. Of these 34,000 files, probably about 33,900 should be deleted. I would simply like someone to put these images into categories so that people can start easily processing them. I do not want a bot to delete anything, and a human will eventually be able to tell if any one of these images shouldn't be deleted, and will mark the image as such. This is not creating more work, this is making work that is already being done easier. If this seems too controversial, or doesn't make sense, or doesn't seem worth it, please let me know. ▫ JohnnyMrNinja 17:37, 25 June 2010 (UTC)
There is a bot that did this, or something similar. Fact is, there are not many admins cleaning the nowcommons category so it will probably just add on to the backlog. –xenotalk 17:43, 25 June 2010 (UTC)
I think there are more admins than normal working on this right now, we just had one user who was moving tons of images per day. That was part of the reason I wanted to have this bot start working now, because there are so many eyes on these images right now. Images are now being sorted into subcategories, and there is a reviewer tag for images to be checked by non-admins so admins can delete faster (that's only a few days old). And I don't think I'd agree that it's adding to the backlog; it's just making visible a portion that was invisible, as these images already should have been tagged when they were duplicated in the first place. ▫ JohnnyMrNinja 18:10, 25 June 2010 (UTC)
So maybe contact that user and see if they have some code for this... –xenotalk 18:12, 25 June 2010 (UTC)
I'm pretty sure that his bot was going through images marked as "candidates to be moved to Commons", as that is what I usually see his bot doing, and that's why so many images at once. That is what the other editor was also doing recently, moving thousands per day. That is increasing the backlog. I will still ask them to drop by here. ▫ JohnnyMrNinja 18:29, 25 June 2010 (UTC)

Note, for whoever does this: don't rely on the hash values of the images alone; there is no guarantee that they will be there for older images or that they will be the same (for example, if someone cropped off whitespace, colour-corrected, etc.). You should also check the height/width attributes of the images before tagging, and the template that is placed should have a bot switch so the admin who deletes can clearly see that it was a bot check and can pay more attention to it. Peachey88 (Talk Page · Contribs) 08:34, 26 June 2010 (UTC)

I like the idea. But perhaps we should try to do something about the images already tagged with "NowCommons" before we add more to this backlog. We need admins to delete the images, but every user can help check and fix the images. Perhaps we can all try to advertise this project to get more users to help check the images. If we get 100 users to check 100 images each, we have fixed 10,000 images :-)
Of course, that does not mean that this idea has to be killed. Let's create the bot so it is ready to roll when the backlog has been reduced. If clearing the backlog takes too long, we could create a new template to add to the images that do not match 100%, so they end up in a new category. --MGA73 (talk) 17:27, 26 June 2010 (UTC)
Again, this isn't adding to the backlog, this is just labeling it. If it is controversial for whatever reason, it can wait. Could someone generate a list to make sure that this is even worthwhile? A list of images that are duplicated at Commons and are not marked with a {{NowCommons}} tag? ▫ JohnnyMrNinja 07:48, 28 June 2010 (UTC)
I did a bot run again earlier this week. All files with the same hash (SHA1) here and at Commons are tagged with {{NowCommons}}. When the backlog gets a bit lower I'll have this bot run at a regular interval. multichill (talk) 08:20, 30 June 2010 (UTC)
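
For anyone reimplementing this, the core check is a SHA-1 comparison through the API; a sketch, assuming the allimages aisha1 filter and the imagerepository field behave as I understand them:

import requests

ENWIKI = "https://wiki.riteme.site/w/api.php"
COMMONS = "https://commons.wikimedia.org/w/api.php"

def local_sha1(filename):
    """SHA-1 of a locally uploaded File:<filename> on en.wiki, or None."""
    r = requests.get(ENWIKI, params={
        "action": "query", "format": "json",
        "titles": "File:" + filename,
        "prop": "imageinfo", "iiprop": "sha1",
    }).json()
    page = next(iter(r["query"]["pages"].values()))
    if page.get("imagerepository") != "local":
        return None                       # no local upload (it already lives on Commons only)
    return page["imageinfo"][0]["sha1"]

def commons_duplicates(sha1):
    """Names of Commons files whose content has the given SHA-1."""
    r = requests.get(COMMONS, params={
        "action": "query", "format": "json",
        "list": "allimages", "aisha1": sha1,
    }).json()
    return [img["name"] for img in r["query"]["allimages"]]

# A local file is a {{NowCommons}} candidate when commons_duplicates(local_sha1(name)) is non-empty.
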
Gadzooks. Ok, I guess you're awesome. Thanks. ▫ JohnnyMrNinja 09:37, 30 June 2010 (UTC)

Dusty Articles (WP:DUSTY) needs to be updated

Administrator User:Fuhghettaboutit referred me here after I posted at Wikipedia:Village pump (miscellaneous) about the fact that Wikipedia:Dusty articles page hasn't been updated since 15 Feb 2010. He informed me that User:DustyBot went offline at that date, and its operator, User:Wronkiew, has not edited any pages since 4 July 2009.

I would like to know if it is possible to get another bot to update Wikipedia:Dusty articles, as it was a useful tool to help keep articles clean and updated. --Eastlaw talk ⁄ contribs 07:53, 27 June 2010 (UTC)

I am going to contact Wronkiew by email to see if he would be willing to get the bot working again or give the code to somebody else so they can run it. - EdoDodo talk 07:58, 27 June 2010 (UTC)

It seems that Svick has opened a BRFA for a bot to do this task. - EdoDodo talk 16:12, 27 June 2010 (UTC)

Collaborative filtering of Wikipedia content

I have been working on fast distributed algorithms for classification, clustering, and collaborative filtering and I want to build a proof of concept to demonstrate the value such deep analytics can provide. My first thought was to leverage the Wikipedia articles and create a mashup that allows social networks and customer behavior to collaboratively filter Wiki content. I just got started to see what tools exist to interact with the Wikipedia content and found the bots and different bot frameworks.

The mash-up I have in mind would only read Wiki content; social network user management and behavior logging would be handled in a side channel.

Is that an idea that has been suggested or done? If yes, would you be so kind as to educate me [User:Tomtzigt] and point me to the principals. If not, what are the thoughts about such functionality? —Preceding unsigned comment added by Tomtzigt (talkcontribs) 22:32, 28 June 2010 (UTC)

Article Milestones

Can somebody make a bot that adds the article milestones to a page whenever a peer review, good article review, featured article review, etc are done?--Iankap99 (talk) 07:08, 30 June 2010 (UTC)

Like User:GimmeBot does? Anomie 10:56, 30 June 2010 (UTC)
Yes, but it doesn't always work; for example, Bronx High School of Science has had a good article nomination and a peer review, yet it doesn't have that template. --Iankap99 (talk) 17:36, 30 June 2010 (UTC)
You should probably take that up at User talk:GimmeBot. Anomie 18:00, 30 June 2010 (UTC)
I don't think it's an error; I don't think the bot was ever meant to do that. --Iankap99 (talk) 21:36, 30 June 2010 (UTC)
Does anybody know if it was meant to do that? Because if not, a bot is certainly needed to do that.--Iankap99 (talk) 20:52, 1 July 2010 (UTC)
If not, GimmeBot would probably be the best to do it if Gimmetrow is interested. And the best way to find out whether or not GimmeBot is meant to do that is to ask at User talk:GimmeBot. Anomie 21:08, 1 July 2010 (UTC)
OK, I dropped a note. --Iankap99 (talk) 21:36, 1 July 2010 (UTC)

I've never written a bot before, so probably this should be considered a request for someone else to write a bot.

I think it would be useful to automatically add links from all the species information pages of the Hawaiian Ecosystems at Risk project (HEAR) to the appropriate Wikipedia species pages (see links from http://www.hear.org/species/). The information necessary to do this (species name [& family] & URL of the species info page) is available via a relational database (so could be provided as XML or whatever). Species that don't exist as pages already on Wikipedia should probably just be ignored (because the HEAR database includes many older/non-current names), but if there were a bot that could add links to existing pages it could be run periodically to update Wikipedia. (It would need to check to ensure that the link doesn't already exist on the Wikipedia page.) It could add the links to the "References" or "Other links" section (or go through an appropriate list of headings & auto-add to the appropriate heading if it exists, else create a new heading if it doesn't).

Additionally, similar auto-gen'd links could be added specifically for IMAGES for other HEAR pages (e.g., see links from http://www.hear.org/starr/images/?o=plants).

Anyone interested in helping me work on this?

Aloha, Philip Thomas pt@philipt.com (aka webmaster@hear.org)

I suppose this could be done by a bot reading the |binomial= parameter of the taxobox (or perhaps also |trinomial=, if HEAR includes subspecies), but it's not exactly clear what you're asking: you first write about links from HEAR to Wikipedia, then imply that Wikipedia should be updated. Ucucha 18:54, 4 July 2010 (UTC)

I need a bot that detects spam and bad words and reports them to the wikia founder, along with the page it's on and the user who wrote it. It should also be able to write stuff on lots of pages quickly.

Try asking on wikia. Q T C 03:14, 5 July 2010 (UTC)

The Vitis International Variety Catalogue web site is used as a reference in many articles covered by WikiProject Wine. It is primarily used as a reference in grape articles for pedigrees and synonyms. The web site has had its name/URL changed. It is now vivc.de instead of vivc.bafz.de, as the bafz.de domain has been deactivated.

Removing the "bafz" part seems to restore proper links; it seems that the index for database entries is unchanged. Due to the sheer number of links to change (see http://wiki.riteme.site/w/index.php?title=Special:LinkSearch&target=*.vivc.bafz.de), this task is best suited to a bot such as DeadLinkBOT or Polbot. ~Amatulić (talk) 20:42, 1 July 2010 (UTC)

Would it be acceptable for you if MerlLinkBot would do this job instead? This is the only bot flagged at 50+ Wikimedia Wikis for this job. [1] Merlissimo 21:28, 1 July 2010 (UTC)
Y Done Merlissimo 13:04, 2 July 2010 (UTC)
Thanks! I must have missed MerLinkBot in the list of active bots. ~Amatulić (talk) 18:58, 5 July 2010 (UTC)

DustyBot replacement for WP:FPC

Crosslinked at commons:Commons:Bots/Work_requests#DustyBot_replacement_for_en:WP:FPC

DustyBot formerly did the important job of keeping English Wikipedia Featured Pictures noted as such on Commons. I'd like to request a bot to do the following, in order to replace it:

Main tasks

1. [Once a week to once a month] Go through all images in Category:Featured pictures, and, if the images are on Commons as well, mark them with {{Assessments|enwiki=1}}, if no Assessments tag exists already; otherwise, add the enwiki=1 parameter to the existing Assessments tag.

2. [Once a week to once a month] Go through all files in Category:Featured sounds, same as above, only use enwiki=3 instead.

Optional extras [Only do these if not too much trouble]:

A. [Once a day] Attempt to keep up with new FPs, with a daily check of the archives for the current month and [if near the start of a month] the previous. These archives are always of the form Wikipedia:Featured_picture_candidates/July-2010. They contain transcluded nominations; those that are promoted will almost always contain the exact text: {{FPCresult|Promoted|File:FILENAME.EXT}} - e.g. {{FPCresult|Promoted|File:Ardea modesta.jpg}} would tell you that File:Ardea modesta.jpg had been promoted. Featured sounds are a much slower process (as well as lacking a standardised closing), and so can likely be skipped for this.

B. [Once a month] Check that featured pictures appear in articles; leave a note on WT:FPC if there are any that miss this requirement.

C. [Once a month] Collate Featured pictures thumbs, the Featured picture pages, and Category:Featured pictures; leave a note (on WT:FPC) if any appear in one or two of them, but not all three. This would indicate a mistake was made in closing or delisting, and should be dealt with manually.

This would be extremely helpful to our featured media projects. Adam Cuerden (talk) 00:13, 6 July 2010 (UTC)
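
A rough pywikibot sketch of main task 1 only (task 2 is the same loop over Category:Featured sounds with enwiki=3); the handling of an existing {{Assessments}} tag is deliberately naive and the edit summary is invented:

import re
import pywikibot

enwiki = pywikibot.Site("en", "wikipedia")
commons = pywikibot.Site("commons", "commons")

for page in pywikibot.Category(enwiki, "Category:Featured pictures").members():
    if not page.is_filepage():
        continue
    commons_file = pywikibot.FilePage(commons, page.title())
    if not commons_file.exists():
        continue                      # only files that also exist on Commons are in scope
    old = commons_file.text
    if re.search(r"\{\{\s*Assessments\b", old, flags=re.IGNORECASE):
        # Template already there: add the parameter unless some enwiki= value is already set.
        new = old if "enwiki=" in old else re.sub(
            r"(\{\{\s*Assessments\b)", r"\1|enwiki=1", old, count=1, flags=re.IGNORECASE)
    else:
        new = "{{Assessments|enwiki=1}}\n" + old
    if new != old:
        commons_file.text = new
        commons_file.save(summary="en.wiki Featured Picture: adding {{Assessments|enwiki=1}}")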

If the bot is editing commons files, it should probably be proposed over there. Peachey88 (Talk Page · Contribs) 01:57, 6 July 2010 (UTC)
It's a mixture of en-wiki (Tasks B and C) and Commons work (1,2,A), but I'll put a request there as well. Adam Cuerden (talk) 03:04, 6 July 2010 (UTC)

Is it possible for someone to do a quick check through Category:Merge by month for two things:

(1) Look to see if any talk/userspace pages are tagged? I've seen a few and that's short work to clean up.
(2) Look to see if any of the mergers are only on one of the two pages (dated by month so I can see how long ago that was). For example, Culture of Chennai had a merger listing from December 2007 up, but Chennai doesn't have the merger template listed anymore, so I removed it as moot.

Just create a subpage in my userspace and inform me if you can. Thanks! -- Ricky81682 (talk) 18:51, 4 July 2010 (UTC)

Comment: To clarify, transclusions only, not direct links. I can see from this there's plenty of duplicate notices (both on talk and on front page) but I don't want to have to run through all the various templates to try and cut down the backlog. -- Ricky81682 (talk) 02:11, 8 July 2010 (UTC)

Category moves

Please move Category:Ulsan Hyundai Horang-i players to Category:Ulsan Hyundai FC players. This Korean association football club's official name has been changed from Ulsan Hyundai Horang-i to Ulsan Hyundai FC.

I think speedy renaming at CFD is probably a better way to go. -- Ricky81682 (talk) 08:25, 8 July 2010 (UTC)

Category: Cryptozoology Bot

Category:Cryptozoology requires a bot to automatically update its list of relevant articles...--Gniniv (talk) 10:31, 17 July 2010 (UTC)

Italicising article names for species and genera

The scientific names of species and genera are italicised by convention. There are various ways of doing this. See a discussion about it that started at WikiProject Arthropods. In short, there are various templates and workarounds to achieve it. I started a process of adding "{{italictitle}}" to relevant pages using AWB, but it takes a while. As suggested by other project participants, here's a bot request. (I couldn't find anything suitable at the Bot status page.)

The bot would need to check whether any of the "genus", "species", or "binomial" parameters specified in the Taxobox template exactly match the title of the page. If one does, the bot should prepend the page with "{{italic title}}". There are two forms of the template: "{{italictitle}}" and "{{italic title}}". The bot should prepend with the latter. (I've just discovered this morning that "{{italic title}}" is the active form.)

Of course, the bot should ignore any article containing "{{italictitle}}" or "{{italic title}}".

Lastly, some pages will achieve the italic title by using the "DISPLAYTITLE" magic word. If possible, it would be better to delete lines of text containing "DISPLAYTITLE" and replace them with "{{italic title}}". Otherwise, ignore pages containing "DISPLAYTITLE".

Although this started as a conversation in WikiProject Arthropods, the issue exists through all WikiProjects under the WikiProject Tree of Life. Thus, there is no need to limit the bot to particular categories or other article classifications.

I don't know how often the bot should run, but suggest that something like weekly would be enough (if it even works that way).

Happy to assist with any clarification, testing and refinement required. Heds (talk) 00:35, 26 June 2010 (UTC)
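
A rough sketch of the decision rule described above, assuming the taxobox parameters can be pulled out with plain regexes rather than a real template parser (the discussion below questions whether the tag is needed at all, since the taxobox can italicise the title itself):

import re

def needs_italic_title(title, wikitext):
    """Decide whether {{Italic title}} should be prepended, following the rules above."""
    if re.search(r"\{\{\s*italic\s?title\b", wikitext, re.IGNORECASE):
        return False                      # already tagged, in either spelling
    if "DISPLAYTITLE" in wikitext:
        return False                      # leave the magic-word cases for manual review
    for param in ("genus", "species", "binomial"):
        m = re.search(r"\|\s*" + param + r"\s*=\s*([^|\n]+)", wikitext)
        if m and m.group(1).strip().strip("'[]") == title:
            return True
    return False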

I oppose this as currently framed. With the right parameters, the {{Taxobox}} already automatically italicizes the title; what Arthropods has been doing is adding some useless code to make it more explicit that the title is being italicized. That causes edits like this, which do not change the display of the page in any way. If the Arthropods project wants to do this, fine, but there is no consensus to do this across Tree of Life. Ucucha 06:18, 26 June 2010 (UTC)
According to Template talk:Italic title, an RFC some time last year did show consensus for italicizing titles in Tree of Life. I also found a small poll from around the same time in WT:WikiProject Tree of life/Archive26 that showed no consensus for preferring either {{taxobox}} or {{italic title}} for doing so. Maybe later discussions reached a different consensus, please link them if so.
Note that a bot can detect the display title for a page using a query like this. Also note that the edit you complain about doesn't actually do what is requested here, as it (uselessly?) adds the "name=" parameter to {{taxobox}} which then makes the article require {{italictitle}} for proper italicization.
That said, before having AnomieBOT work on this I would want to see a clear specification of which articles specifically should be touched and a new poll at WT:WikiProject Tree of Life to verify consensus. Anomie 15:24, 26 June 2010 (UTC)
That the title should be italicized is not in question, but what Heds, and the Arthropods project, appear to want is adding {{italictitle}} and the redundant |name= parameter, instead of using the built-in italictitle functionality of the taxobox; they believe this makes the syntax more intuitive. Ucucha 15:37, 26 June 2010 (UTC)
Firstly, I would modify the behaviour implemented in the taxobox. I mean: currently, if the "name" field is not given and the "binomial" field matches the title of the article, the title automatically appears in italics. I think it would be better to simply apply italics whenever the "binomial" field matches the title, independently of what appears in "name". This would solve most of the cases, without bots and without {{italictitle}}. This is the most logical approach too. We should assume that if the author chose the binomial as the title and rejected the common name, he knew what he was doing, don't you think? Flakinho (talk) 05:14, 1 July 2010 (UTC)
For the binomial, that would work, yes (or at least I can't think of any examples where it won't work). But we also need to italicize genera, where the situation is a little more complicated. For example, chinchilla shouldn't have an italicized title, even though the |genus= parameter does match the article title. Ucucha 05:40, 1 July 2010 (UTC)
Yes, there are articles with a common name title. These ought not to have an italic title, hence the use of {{italictitle}} or the current {{taxobox}} approaches. Come the weekend I'll go back to the earlier discussions at WT:WikiProject Tree of Life (thanks for the pointers), and if it's been a while, seek a new poll. Thanks for the thoughts here so far. I'll report back once the poll is done, either to restate or cancel the request. Cheers. Heds (talk) 11:08, 1 July 2010 (UTC)
Wow, the example about chinchilla is very good. In these cases, would it be a good idea to add a parameter to the taxobox to shut off the automatic italicization when needed? Flakinho (talk) 13:44, 2 July 2010 (UTC)
This parameter is unnecessary: just specify |name=Chinchilla and the taxobox and article titles will not be italicized. Martin (Smith609 – Talk) 20:00, 12 July 2010 (UTC)
That might be best, yes; I imagine a situation like at chinchilla is fairly rare. I think we can italicize if |binomial=, |trinomial=, or |genus= equals {{PAGENAME}}, unless a parameter is set (for example, |ital=no). There are other problems, though, with articles with parentheses in the page title (for example, Ambondro (genus)). That leaves us with the question of what this will do to improve the current situation. Ucucha 19:22, 2 July 2010 (UTC)
There are a bunch of different cases (you can see an example of the current complexity in this table: es:Wikipedia:Votaciones/2010/Títulos en cursiva/Complejidad en la nomenclatura binomial usada como título de artículos (Spanish Wikipedia)). But the complex cases are the minority. For "title (genus)" you can use {{italictitle}}; for the others, either make the code of the taxobox more complex to include every exception, or detect punctuation symbols in the title and avoid italics... there are many options, I think. The important thing is that most articles are simple: "genus" or "genus species". And, in my opinion, an easy change in the code will improve the current situation. Flakinho (talk) 19:51, 2 July 2010 (UTC)
Title (genus) is already italicized to Title (genus) by Template:Taxobox. Martin (Smith609 – Talk) 20:01, 12 July 2010 (UTC)

In my opinion, although it is very clever coding that makes the taxobox force an italic title, it is a barrier to understanding. The down-side to {{Italic title}} is that people then copy the syntax to articles about poems, books etc. where we had (maybe still have) consensus not to use italics. Rich Farmbrough, 11:41, 8 July 2010 (UTC).

Heds, if nobody else will code/run this for you, I will. It will be a while because I am busy, maybe a few weeks until I can sit down to do this. Alternatively, I think you can do all of this from within AWB, with some clever regexes. If you want to run this bot yourself, I can help you with the regexes. Tim1357 talk 20:39, 17 July 2010 (UTC)
Such a bot should not be run before there is consensus for it; it would involve editing thousands of pages with an edit that does not change the appearance of the page in any way. Ucucha 18:05, 17 July 2010 (UTC)

Hello. Could someone program a bot to replace "List of hydroelectric power stations" with "List of conventional hydroelectric power stations"? The page was recently moved and is now a disambiguation page. Thanks. Rehman(+) 12:16, 17 July 2010 (UTC)

Just to be clear, for each page currently listed on List of conventional hydroelectric power stations, you want to replace the links to List of hydroelectric power stations in the 'See Also' section of each page? This would be pretty easy to do with AutoWikiBrowser, and wouldn't require any bot approval. I'd be happy to take care of it if you like. Winston365 (talk) 23:51, 17 July 2010 (UTC)
Yes, that's what I want to do. The problem is, I have never used AutoWikiBrowser and don't want to screw things up (for now), so if you don't mind, may I ask you to do it? Rehman(+) 03:12, 18 July 2010 (UTC)
 Done Many of the pages on that list didn't have a link to List of hydroelectric power stations, but the 28 that did should be fixed. Winston365 (talk) 03:52, 18 July 2010 (UTC)
Thank you. I will manually fix any remaining links. Kind regards. Rehman(+) 07:47, 18 July 2010 (UTC)

Bots for tagging

WP:WikiProject Cryptozoology needs a bot to tag articles of interest to the project.--Gniniv (talk) 11:17, 17 July 2010 (UTC)

Any of the bots in Category:WikiProject tagging bots can do this for you; I suggest taking a look at User:Xenobot_Mk_V. Make sure you show that you have consensus from the WikiProject before leaving a request. Good luck, Tim1357 talk 16:05, 17 July 2010 (UTC)
Give me a list of articles that you need tagged, and I can have my bot do the work. –MuZemike 19:45, 17 July 2010 (UTC)
Articles that mention Cryptozoology without having the tag should be tagged, and generally any new article related to the subject should be tagged as well. Thank you for your assistance. --Gniniv (talk) 02:47, 18 July 2010 (UTC)
Tagging any article that mentions something will often tag irrelevant articles. For example Lancaster, Ohio and List of Greek words with English derivatives both link to Cryptozoology, but I doubt you'd want either of them tagged. I suggest you put together a list of categories you'd like us to tag, for example: Category:Cryptozoology, Category:Cryptozoologists, etc. since this will contain more relevant articles. - EdoDodo talk 06:57, 18 July 2010 (UTC)

Thanks for the help! We need tags on Category:Dragons and Category:Mythology.--Gniniv (talk) 07:05, 18 July 2010 (UTC)

Category:Mythology contains articles such as Mythological conundrum, which clearly have nothing to do with cryptozoology. I also question the wisdom of tagging for a WikiProject that seems to have exactly one active user. Anomie 13:01, 18 July 2010 (UTC)

WikiProject Stagecraft

Stagecraft now has a newsletter. Requesting a bot, or suggestions of a pre-existing one, which can deliver notifications of new issues to membership. DJSparky huh? 21:47, 23 July 2010 (UTC)

Well, my new bot MessageDeliveryBot has just been approved for a trial, and I would love to have it deliver your first issue as part of its trial run. The way it works is that when you have a new issue, you add a message delivery request on this web page (you can also schedule message deliveries ahead of time, if you like, by changing the date). Requests are manually checked by me or other approved users, but if you make requests regularly then I can add you to the list of approved users, so your requests are automatically marked as checked. The bot then runs all the checked requests at midnight UTC. Sound good? If not, then there are plenty of other bots to deliver newsletters; see Category:Newsletter delivery bots. - EdoDodo talk 06:44, 24 July 2010 (UTC)

Request for (existing) bot iw maintenance help

The interwiki links of other language sisters of the article Common periwinkle are in many cases rather incomplete. The probable reason is that many of them referred to the wrongly capitalised redirect page Common Periwinkle instead. I've corrected this link by hand in the various language versions, and hoped that some iw bot would catch this and update the respective iw link lists. This has not happened; I do not know why.

Hence, I wonder if someone could put their iw bot on Common periwinkle, updating it, as if all its iw links were new.

If this is the wrong place to ask for this assistance, please give me a hint of where I should make it. (This is not a request for a new 'bot; nor for a modification of existing ones, unless the reason for no bot to react was some easily identifiable deficiency.)

Best, JoergenB (talk) 16:04, 26 July 2010 (UTC)

SineBot on other Wikipedias/Wikiprojects

I wish to implement SineBot on other Wikipedias/Wikiprojects, most notably the simple:Simple English Wikipedia. See the discussion here. :| TelCoNaSpVe :| 20:11, 15 July 2010 (UTC)

Judging by User talk:Slakr/Archive 12#Sinebot 5, it doesn't appear Slakr is willing to provide the code for public or private use and judging from other threads it doesn't appear they are willing to run it elsewhere. (Though that doesn't mean someone can't re-write a similar bot from scratch) –xenotalk 20:22, 15 July 2010 (UTC)
Hmmm... how so? "From scratch" seems like long strings of code to me. :| TelCoNaSpVe :| 18:02, 17 July 2010 (UTC)
Perhaps you could ask the owner of de:User:CopperBot. Merlissimo 00:08, 18 July 2010 (UTC)
He's German! :| TelCoNaSpVe :| 00:22, 18 July 2010 (UTC)
 Not done I've posted a message to him, but it doesn't look like he's responding. :| TelCoNaSpVe :| 17:20, 27 July 2010 (UTC)

Substing Template:Ut

This template should either always be substed or not exist. At the moment, it's transcluded on 700+ pages which is clearly not necessary. Axem Titanium (talk) 09:48, 3 August 2010 (UTC)

Hmm... Why is it necessary to subst: these? While I understand that there is no harm in substituting these, and that transclusion is not necessary, I see no benefit in substituting all of them either. Also, neither Wikipedia:SUBST#Templates that should be substituted, nor its documentation page say that it should be substituted, so if there is a reason why it should be substituted then perhaps those should be updated. - EdoDodo talk 12:58, 3 August 2010 (UTC)

Serbian towns and infoboxes

  1. Can someone replace Template:Geobox (example Ledinci) with Template:Infobox settlement (example Bački Jarak)?
  2. Is it possible to add Template:Infobox settlement to articles that don't have either of those two infoboxes? Data can be extracted from the Serbian or French Wikipedias. If it is too complicated, I would be satisfied with just |official_name = {{PAGENAME}} -- Bojan  Talk  07:48, 5 August 2010 (UTC)

No footnotes, morefootnotes, unreferenced

I would like a bot to check all articles that transclude {{No footnotes}} and do the following:

  • If the article already has inline citations, change "No footnotes" to "More footnotes"
  • If the article has no references at all, change "No footnotes" to "Unreferenced"

The first part can be done with AWB, and my bot, Yobot, has approval for auto-tagging and can do it, but I am not sure about the second part, so I would like help on that. -- Magioladitis (talk) 09:45, 5 August 2010 (UTC)

Many of the problem articles are stubs or starts by non-experienced editors. There may be footnotes that are not references or references that aren't formatted as footnotes. I've seen people doing "This stuff's good.[1] *[1] Book About This Stuff." with manual formats. Also footnotes can be things like "[1]British English spelling is without "r"". This needs a clear way of defining what is and what isn't a reference footnote. —  HELLKNOWZ  ▎TALK 10:29, 5 August 2010 (UTC)
The only one that I can think of that wouldn't return any false positives is changing 'no footnotes' to 'more footnotes' only if a citation template is found (i.e. <ref>{{cite); anything else is prone to errors because of the things that H3llkn0wz mentioned. - EdoDodo talk 13:03, 5 August 2010 (UTC)
Yes, that's what AWB does. As I said this is the easy case. The others need some good ideas. -- Magioladitis (talk) 13:07, 5 August 2010 (UTC)
The unreferenced one is impossible for a bot to do, in my opinion. A bot has no way of identifying references that are not done using footnotes. A bot has no way of knowing if references are under References, Sources, Notes, or even not under a section heading. As for changing no footnotes to more footnotes I suppose this could be done by checking for both citation templates or a References section that contains a reflist, but that would still be very much prone to error, so I doubt it would get approved. - EdoDodo talk 18:02, 5 August 2010 (UTC)
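
For what it's worth, the "safe" case described above boils down to something like this (a sketch; the template-name matching is simplified and ignores redirects):

import re

def retag_no_footnotes(wikitext):
    """Swap {{No footnotes}} for {{More footnotes}} only when the article already
    has at least one inline citation that uses a citation template."""
    has_inline_cite = re.search(r"<ref[^>/]*>\s*\{\{\s*cit(e|ation)", wikitext, re.IGNORECASE)
    if has_inline_cite:
        return re.sub(r"\{\{\s*no\s*footnotes", "{{More footnotes", wikitext,
                      count=1, flags=re.IGNORECASE)
    return wikitext   # the "unreferenced" half of the request is left to humans
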
Also relevant: Template talk:Multiple issues#Pagenumbers + No footnotes. Magog the Ogre (talk) 21:05, 5 August 2010 (UTC)

BeamerBot

I would like to request a bot called BeamerBot. I am requesting a bot because I want to use it to issue warnings for behaviour that is not tolerated under Wikipedia's rules. Regards, Beamer103 18:03, 6 August 2010 (UTC)

Bot requests for approval is that way. Also, make sure your bot has been tested and coded right. Ensure that your bot operates within community norms regarding vandalism, civility, etc. –MuZemike 18:05, 6 August 2010 (UTC)

Changing line break tags from <br /> to <br>

<br> is the form currently to be used per WP:LINEBREAK and the explanation here. However, many, many articles on the English Wikipedia, including guidelines, essays, and other docs in the "Wikipedia:" and "Template:" subspaces, still use <br /> (probably also <br/>). It's a simple replacing task, so a bot makes the most sense to use here. Prime Blue (talk) 11:31, 6 August 2010 (UTC)

While I understand why <br> is preferred over <br />, I really don't think it would be a good idea to run a bot that will edit thousands, if not millions, of pages for something so trivial. Bots should generally not be editing to make only minor changes like this one. It may be a good idea to have it added to AWB though, so that it will gradually get fixed by users and bots using AWB. - EdoDodo talk 11:46, 6 August 2010 (UTC)
Sorry, I never worked with bots before, I thought that was the only possibility for automated editing. I'll read up on AWB. Prime Blue (talk) 12:05, 6 August 2010 (UTC)
This will indeed create a lot of trivial edits, in violation of the restrictions on bots making minor cosmetic edits, so better to update the AWB general fixes base. Also, that explanation is not a true discussion. —  HELLKNOWZ  ▎TALK 12:10, 6 August 2010 (UTC)
AWB already fixes the similar WP:CHECKWIKI error 2: Article with <br\> or <\br> or <br.>. Are you sure we have consensus to change <br />? -- Magioladitis (talk) 12:14, 6 August 2010 (UTC)
MediaWiki 1.16 uses XHTML. There are plans to change this to HTML 5 in future releases, but that's not implemented yet. Merlissimo 12:17, 6 August 2010 (UTC)
So, still <br /> until then? Prime Blue (talk) 12:32, 6 August 2010 (UTC)
I have always used <br>. But many use <br/> or <br />. Just a preference, like placing or not placing spaces between == == in titles. I suppose the correct one is <br>, which is why it can be proposed for the AWB general fixes list. —  HELLKNOWZ  ▎TALK 12:42, 6 August 2010 (UTC)
I wouldn't want to change it if some users disagree. Is there a good place to discuss this? The talk page of WP:LINEBREAK is not very useful. Prime Blue (talk) 12:49, 6 August 2010 (UTC)

The MediaWiki software automatically converts <br> or <br/> into <br /> so that it follows proper XHTML rules. I used to razz on users for this until I found that out. –MuZemike 13:17, 6 August 2010 (UTC)

Yeah, it really makes no difference to how the page is rendered by MediaWiki, which is why this change is completely trivial. - EdoDodo talk 16:31, 6 August 2010 (UTC)

Found this older discussion. More people were leaning towards using <br>, but there was no definite result on what to use. My gripe was just with how the different versions across pages confuse editors. Prime Blue (talk) 17:30, 6 August 2010 (UTC)

Before this is added to the general fixes for AWB there should be strong consensus for it. Perhaps you could start a RFC or a village pump discussion. - EdoDodo talk 22:14, 6 August 2010 (UTC)
There is a stronger consensus to add the slash back in, making it consistent with other XML-style self-closing tags (e.g. <ref/>). Mirrors may not necessarily be running HTML Tidy and would have invalid XHTML. The extra markup does not break HTML, but removing it breaks XHTML, so it makes sense to keep it. Finally, the space is irrelevant and was only a recommendation for browsers that could operate in the short-tags mode of SGML. No one has found one, as it would have been unusable at the time of its development. — Dispenser 05:27, 7 August 2010 (UTC)

From what I'm seeing in this discussion, it seems that there isn't very strong consensus either way, so perhaps it would be better to do nothing. It's a fairly trivial change anyway, since as mentioned the MediaWiki software will change it automatically. - EdoDodo talk 12:17, 7 August 2010 (UTC)

  •  Not done Task requested is an exceedingly trivial edit which doesn't have firm consensus anyway (Did anyone mention that the edit toolbar provides the br / version?). –xenotalk 14:22, 10 August 2010 (UTC)

Apply template to album and single durations in Infobox album & Track listing

Please can someone convert album and single lengths, inside {{Infobox album}} and {{Track listing}}, to use {{Duration}}. Here's a sample edit.

If you like, at the same time convert {{Singles}}' release date to use {{Start date}}, as in the same edit, but only if {{Start date}} is already in use (or converted at the same time) for the main album release date. df=y must be used where appropriate.

This will allow dates and durations to be emitted as metadata. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 00:32, 9 August 2010 (UTC)
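
A sketch of the duration half of the request, assuming {{Duration}} takes |h=, |m= and |s= as in the sample edit (the parameter names here are an assumption); the {{Start date}} conversion would follow the same pattern:

import re

# Convert a plain "m:ss" or "h:mm:ss" length into a {{Duration}} call.
TIME_RE = re.compile(r"(\|\s*(?:Length|length\d*)\s*=\s*)(?:(\d+):)?(\d+):(\d{2})\b")

def add_duration_template(wikitext):
    def wrap(m):
        prefix, h, mins, secs = m.group(1), m.group(2), m.group(3), m.group(4)
        if h:
            return f"{prefix}{{{{Duration|h={h}|m={mins}|s={secs}}}}}"
        return f"{prefix}{{{{Duration|m={mins}|s={secs}}}}}"
    return TIME_RE.sub(wrap, wikitext)

print(add_duration_template("| Length     = 3:45"))
# -> "| Length     = {{Duration|m=3|s=45}}"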

Metacritic urls

Metacritic has changed their site's design and URLs, in the way shown in this edit. In general, it looks like the new URL is just the entry page's product title with dashes in place of spaces. Dan56 (talk) 11:29, 12 August 2010 (UTC)

The problem is that we don't know the page product title if the old link is already dead. Merlissimo 13:06, 12 August 2010 (UTC)
The old URL sends me to a page called "redirectcritic", so it looks like the site is supposed to be redirecting to the appropriate new URL and it's just not working. Reach Out to the Truth 14:17, 12 August 2010 (UTC)
It's working only sometimes: http://www.metacritic.com/film/titles/simpsonsmovie vs. http://www.metacritic.com/film/titles/40yearoldvirgin/ Merlissimo 14:24, 12 August 2010 (UTC)
Yeah! The solution is to remove the slash. Merlissimo 14:26, 12 August 2010 (UTC)
But there are only 250 links ending with a slash. Films and music/artists (the one above was maybe broken before) without a slash are redirecting fine (so no rewrite necessary). Books/authors (84) and film/awards seem to have been removed completely. Merlissimo 14:44, 12 August 2010 (UTC)

Hi all - I've just done the page move mentioned in this section title, to turn the basic title into a dab page - the Canadian district is one of several Hamilton Wests, but not the one which garners the most ghits or wikipage hits. Unfortunately, it is the one with the most internal links. I've moved all the "Hamilton West" links that were not about Canada but were meant to point to the Hamilton West in New Zealand, or the one in Scotland, or the Nicaraguan footballer - but that leaves about 120 internal links to the Canadian electorate. Could I please get someone to run a bot to dab the links currently to Hamilton West, all of which should now point to the Canadian electoral district? Thanks in advance... Grutness...wha? 12:08, 6 August 2010 (UTC)

BRFA filed (DodoBot 3) I've filed a BRFA for this. Even though it's a fairly simple task I need approval for it first because it's the first time my bot will be doing work with links. Apologies for the delay. - EdoDodo talk 12:29, 6 August 2010 (UTC)
No hurry. Better make the links point to Hamilton West (electoral district), BTW, due to Canadian naming conventions, despite the likelihood of confusion with electorates and constituencies in New Zealand and Scotland. Thanks again, Grutness...wha? 01:37, 7 August 2010 (UTC)

Well, it looks like meanwhile this was taken care of semi-automatically by YUL89YYZ, so I've withdrawn the BRFA :(. - EdoDodo talk 11:28, 13 August 2010 (UTC)

bot to remove the word infamously from articles

This word "infamously" is POV and should be removed from all articles. I was doing this by hand, but discovered that there are now 1000s of articles that need attention in this manner.

It would be nice to remove the word "infamous" from articles by use of a bot, but that is more complicated because some articles are about something called Infamous. Kingturtle (talk) 19:47, 9 August 2010 (UTC)

Bots are not well-suited for "typo fixing" or "de-euphemizing" or other similar changes to prose. Have you considered using WP:AWB to assist? –xenotalk 19:53, 9 August 2010 (UTC)
Alternatively you could use user:Botlaf, this would enable you to exclude articles where it is legit, such as in quotations. ϢereSpielChequers 23:36, 9 August 2010 (UTC)
That bot doesn't seem to be approved for edits outside of the bot's userspace, and probably won't be because, as Xeno said, bots are generally not allowed to fix typos. - EdoDodo talk 00:55, 10 August 2010 (UTC)
It doesn't fix typos, and it doesn't need to edit outside its own namespace as it just produces reports which you then process manually. I've filed most of the requests at User:Botlaf/Job requests and it works well for keeping tabs on this sort of thing. It is particularly useful for ongoing problem words like posses where there are over 100 valid uses of the word that you don't want to trawl through each time you check for new occurrences. ϢereSpielChequers 05:57, 10 August 2010 (UTC)
Ah, okay. Makes sense :). - EdoDodo talk 11:06, 10 August 2010 (UTC)

I'll look into AWB. Kingturtle (talk) 05:41, 10 August 2010 (UTC)

  • Some uses will be in quotations, or accurate in the NPOV sourced sense (though most won't be), so a blanket removal of all instances wouldn't be appropriate. It would have to be manually assisted. FT2 (Talk | email) 11:09, 16 August 2010 (UTC)

singlechart formatting consistency

Trying to make record charts comply with WP:ACCESS#Data tables means that I have to have the option to make the record chart name output by {{singlechart}} be formatted as a row header, not a data cell. Unfortunately, people have been using the template inconsistently: this article uses | before each template call, which forces it to be a data cell, and I can't override it to turn it back into a header. As you can see in this version, it formats perfectly without the |. It seems that somewhere around 30% of the articles using the template use the pipe.

What I need is a bot/script that will go through all of the articles in Category:Singlechart, look for pipes before the singlechart calls, and remove the pipe. Obviously the bot shouldn't be tricked by the pipe in {{tl|singlechart}}.—Kww(talk) 22:52, 11 August 2010 (UTC)
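
A sketch of what the cleanup amounts to, as a pywikibot loop; it only handles the usual case of the pipe sitting at the start of the table line, and the edit summary is invented:

import re
import pywikibot

site = pywikibot.Site("en", "wikipedia")
# A pipe at the start of a table line directly in front of a {{singlechart}} call.
# {{tl|singlechart}} is safe: the pipe there is not followed by "{{".
PIPE_BEFORE_CALL = re.compile(r"^\|\s*(\{\{\s*singlechart\b)", re.IGNORECASE | re.MULTILINE)

for page in pywikibot.Category(site, "Category:Singlechart").members():
    new_text = PIPE_BEFORE_CALL.sub(r"\1", page.text)
    if new_text != page.text:
        page.text = new_text
        page.save(summary="Removing stray table pipe before {{singlechart}} (see WT:Record charts)")
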

So, correct me if I'm wrong, the bot would just have to go through Category:Singlechart and replace all instances of |{{singlechart with {{singlechart, or are there different cases that should be accounted for? Also, although needed for WP:ACCESS#Data tables compliance, this seems like a fairly trivial change. Is there consensus that it would be useful for a bot to do it? - EdoDodo talk 23:08, 11 August 2010 (UTC)
I'm pretty much the maintainer of {{singlechart}}, and it is being discussed at WT:Record charts#Accessibility Issues. This is a ground-clearing operation: once it's in place, I can either do nothing, make the addition of !scope="row" turned on by a parameter, make it a default that is turned off by a parameter, or make it mandatory behaviour, depending on what people decide. As it stands, I'm paralyzed: any change I make is either ineffective in 70% of articles or breaks the other 30%. This change in itself does nothing to the generated HTML. The only special case I can think of is when someone is discussing the template, and the pipe in {{tl|singlechart}} itself shouldn't get removed. As for it being a bot job, there are around 7000 uses of the template scattered over 713 articles. Certainly not one to do by hand.—Kww(talk) 23:30, 11 August 2010 (UTC)
BRFA filed (DodoBot 4) Sounds good, I've opened a BRFA. This is my second BRFA open at the same time... I didn't find anything against the rules in opening two at once, but apologies to BAG if it is annoying. - EdoDodo talk 02:04, 12 August 2010 (UTC)
I'm afraid that it is required to make sure that at least all of the charts rendered using {{singlechart}} comply with the accessibility changes. Like Kww said, because people use the template with the "pipe" before it, a change in syntax would have no effect on improving the accessibility of the chart table. -- Lil_℧niquℇ №1 (talk2me) 03:51, 12 August 2010 (UTC)
Yeah, I understand. - EdoDodo talk 11:08, 12 August 2010 (UTC)

 Done by DodoBot. - EdoDodo talk 12:56, 16 August 2010 (UTC)

Bot for {{subst:nld}}

Apparently there used to be one or two bots that could tag an image when it's uploaded without a license. Could we create another bot for this? It's a task that screams for bot attention: the coding is fairly straightforward, it's very tedious to do as a human, and it's a massive task. Magog the Ogre (talk) 21:00, 11 August 2010 (UTC)

User:ImageTaggingBot does it right now, it seems. fetch·comms 01:41, 17 August 2010 (UTC)

Bot to comment out or remove redlinked files

As at July 6, there were over 13,000 articles containing redlinked files used: Wikipedia:Database reports/Articles containing red-linked files. A bot similar to CommonsDelinker (talk · contribs) should delink these files by commenting out or removing the offending image. –xenotalk 17:58, 30 July 2010 (UTC)

I've been doing some work regarding this myself. I'll get to as many as I can, but some often require manual cleanup due to some crazy formatting. ΔT The only constant 01:55, 31 July 2010 (UTC)
The issue is knowing why a link has gone red. Was it deleted from Commons? Was it deleted locally? Was an image redirect deleted by accident? Did the page title change and cause a bot to mistakenly mark it for fair use violation without anyone noticing? Was the page vandalized? Was the image code changed by a rogue script that the user didn't notice (this happens quite a bit more often than you'd imagine)? A bit of discussion about this is here and a bit is here. There has also been discussion on past bot requests, I'm sure. The counter-argument is, of course, that most manual editors don't check the reason for an image now being a red link either. I'm not sure that's a great argument for bot intervention, even though the red links do make an article look much more unprofessional.

It's been suggested that a change to MediaWiki could cause red-linked images to not be so obvious (or maybe not be so obvious to logged-out users). The world is bright with possibilities. --MZMcBride (talk) 02:08, 31 July 2010 (UTC)

Hey, if you are looking for a bot like CommonsDelinker (talk · contribs), I might be able to program one. I know a fair amount about regular expressions and HTML, and I can program in Perl and similar languages, so I might be able to write a bot to parse articles and remove red-linked files. Usb10 Let's talk 'bout it! 17:46, 6 August 2010 (UTC)
I was actually looking for something like this myself, albeit for userspace rather than mainspace. It would make it handier to clean up a lot of maintenance reports, like User:Avicennasis/todo/orphtem and User:Avicennasis/todo/csdf8. I know of several other users who could make good use of this as well. Avicennasis @ 00:47, 10 August 2010 (UTC)

I totally disagree with red link removal without prior judgement. Red links are usually articles to be created. In some cases they are caused by wrong capitalisation, diacritics, etc. Check the Red link recovery project for more. -- Magioladitis (talk) 19:12, 14 August 2010 (UTC)

This isn't about redlinks to articles, it's about redlinks to files. - EdoDodo talk 19:16, 14 August 2010 (UTC)

I run User:ImageRemovalBot to do this for deleted files. The bot is currently offline while I figure out how to handle two situations:

  1. How to comment out an image with a comment in the image's caption (HTML doesn't handle nested comments).
  2. How to handle redirects to images (the bot's original design assumed that an image could only have one name).

If I get around to fixing these, I'll reactivate the bot and start working through the backlog. --Carnildo (talk) 03:09, 15 August 2010 (UTC)
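
For the first point, one workaround (a sketch only; the marker substitution is an arbitrary choice, not anything Carnildo has committed to) is to neutralise any comment markers already present before wrapping the image syntax:

    def comment_out(image_wikitext):
        """Wrap wikitext in an HTML comment, defusing any comments already inside it,
        since HTML comments cannot nest."""
        inner = image_wikitext.replace("<!--", "<!~-").replace("-->", "-~>")
        return "<!-- " + inner + " -->"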

Is there a way to do this with linked files (vs. displayed), or, say, template links without transclusion? And if so, is the source code open? :) Avicennasis @ 03:38, 7 Elul 5770 / 17 August 2010 (UTC)

There are broken links here to this template. If these can be removed from all the articles, that would be great. Special:WhatLinksHere/Template:Muslims and controversies. --Matt57 (talkcontribs) 21:36, 16 August 2010 (UTC)

I will have User:SporkBot do it. This is actually the one and only task it is approved to perform. I usually patrol WP:TFD/H, but I have been generally off wiki for the past several days. Plastikspork ―Œ(talk) 22:11, 16 August 2010 (UTC)
Cool, thanks. --Matt57 (talkcontribs) 17:07, 17 August 2010 (UTC)

Anti vandalism bot work

I've spotted a good area where a bot could identify a particular kind of vandalism that's both common and difficult to spot otherwise, and would have a low error rate. I'd rather not give WP:BEANS.

The bot would need to be able to pull diffs corresponding to the RC feed as opposed to the revision text alone, but some bots already do that. The logic required is probably fairly simple.

Anyone with bot development experience willing to discuss this please email me. Thanks. FT2 (Talk | email) 11:07, 16 August 2010 (UTC)

I have an idea, too, and I program all day, though I have not written a bot. Maybe it's aligned with yours. How will we know or get approval without giving beans? -- ke4roh (talk) 23:05, 21 August 2010 (UTC)

Orphaning a template

If anyone wants to help me orphan {{Do not delete}}, please feel free. The majority of transclusions are due to a lag in the "what links here" cache. Hence, they can be cleared by simply opening the page, making no changes, and saving, which will purge the cache. I currently have SporkBot working on it, but with over 25k transclusions and me being on the road, it will take some time. If you don't feel comfortable making any edits, you can always just do the purging part, which will add nothing to your edit history, but will help weed out the real transclusions. Thanks! Plastikspork ―Œ(talk) 21:47, 24 August 2010 (UTC)

I wish I could help, I've already got code for this. But due to my restrictions I cannot help. However, the cache will automatically be purged by the server. ΔT The only constant 22:47, 24 August 2010 (UTC)
Null edits shouldn't be done, just let the job queue do it. There's no rush... (See also WT:RIF#Like watching grass grow...) –xenotalk 22:49, 24 August 2010 (UTC)
Okay, I suppose massive numbers of null edits would be a bad idea, given that it would just add to the server load. However, if anyone wants to help out with the orphaning, I would still appreciate it. The actual transclusions won't orphan themselves. By the way, it is currently listed as having over 190k transclusions on Wikipedia:Database reports/Templates transcluded on the most pages. Even with say one edit per 5 seconds this will take some time (at least a week or two). I am hopeful that these are mostly false transclusions and the server will indeed weed many of them out. Plastikspork ―Œ(talk) 23:07, 24 August 2010 (UTC)
I just checked the toolserver database and there are 171009 inclusions, and 1-3 are disappearing per second. ΔT The only constant 01:00, 25 August 2010 (UTC)
I could help, but on further thought, I'm not so sure about the necessity of orphaning or deleting this template. It's a template that emits nothing, and just because CAT:TEMP is deprecated doesn't mean that administrators won't sometimes be tempted to delete user pages... –xenotalk 13:49, 25 August 2010 (UTC)
The transclusion count is dropping quite a bit, which appears to be mostly from the server catching up after I removed it from various user warning/sockpuppet/publicIP templates. I don't have a strong opinion on the deletion/orphaning, but it was closed as such at WP:TFD and as far as I can tell there has been no WP:DRV. However, I think we both agree that removing it isn't a high priority. You are right about the utility of the template, which is basically nil as far as I can tell. Thanks! Plastikspork ―Œ(talk) 14:27, 25 August 2010 (UTC)
Oh, absolutely - the TFD was a delete result and I don't fault you at all for commencing the removal task. (P.S. this should clear a lot too) I'm just not sure the participants thought about the fact that deleting this template will have an essentially null effect, so it may as well just be left in place rather than committing edits. Will think about this some more. –xenotalk 14:28, 25 August 2010 (UTC)

Change discontinued template to replacement

Hello, several Food and Drink related WikiProjects have been folded back into the main Food and Drink WikiProject due to lack of interest. What we need to do now is to have a bot go through all the talk pages that have the old projects' templates and replace them with the Food and Drink template.

Here is what needs to be done:

This is about a thousand or so pages total.

Thanks, --Jeremy (blah blahI did it!) 06:00, 25 August 2010 (UTC)

I'll have AnomieBOT start doing this in a little bit. Anomie 11:47, 25 August 2010 (UTC)
A few comments/questions:
  • It would have been easier had you left {{WikiProject Soft Drinks}} unredirected until after the merge, as then the bot could have just looked for "{{WikiProject Soft Drinks}} and all redirects" instead of "certain specific redirects to {{WikiProject Food and drink}}". Oh well.
  • How about {{WPSD|coffee=yes}} and {{WPSD|tea=yes}}?
  • To be clear, {{WPSD|c&t=yes}} should be changed to {{WikiProject Food and drink|c&t=yes}}, not {{WikiProject Food and drink|c&t=yes|soda=yes}}?
  • You mentioned that "several" projects have been merged. It would be more efficient to do them all at once.
  • Also, would you like the bot to auto-assess stubs and/or copy the assessment and importance from the {{WPSD}} or from any other WikiProject banner?
Anomie 14:02, 25 August 2010 (UTC)
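
A rough sketch of the kind of substitution being discussed, keeping whatever parameters are already on the banner (the set of old names to match is an assumption drawn from this thread):

    import re

    # Old banner names (and redirects) mentioned in this thread; extend as needed.
    OLD_BANNER = re.compile(r"\{\{\s*(?:WPSD|WikiProject Soft Drinks)\s*(?=[|}])", re.IGNORECASE)

    def retarget_banner(talk_wikitext):
        """Rename the banner in place, leaving its existing parameters untouched."""
        return OLD_BANNER.sub("{{WikiProject Food and drink", talk_wikitext)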

Thanks a lot, that would be very helpful.

This is a lot of pages; this won't cause any issues? I don't want to rile anyone or cause any problems.

Thank you again, --Jeremy (blah blahI did it!) 20:03, 25 August 2010 (UTC)

Shouldn't cause any problem. Doing... Anomie 03:19, 26 August 2010 (UTC)
Y Done 1379 pages were edited. 12 had multiple banners, 803 had soda banners, 58 had ice cream banners, 508 had mixed drink banners (with or without bar=yes), and 0 had {{WikiProject Bartending}} or {{WPBAR}} (they had probably been merged into mixed drinks sometime in the past). Anomie 14:41, 26 August 2010 (UTC)

Thank you very much for that. --Jeremy (blah blahI did it!) 18:37, 28 August 2010 (UTC)

Lists redirects

Hi. I think that a bot could create redirects for articles like List of hospitals in Spain. The proposed variations are:

This can be applied to all list titles with "in"/"of". We can also split off the "List of" prefix and create redirects. Regards. emijrp (talk) 09:15, 29 August 2010 (UTC)

Has there been discussion showing that people think these redirects would be useful? Anomie 14:41, 29 August 2010 (UTC)
@emijrp: the search engine is supposed to display all of the most appropriate results at the top. Anyway, I'm sure that we can't reliably choose between the paronyms to create only one link. JackPotte (talk) 15:57, 29 August 2010 (UTC)

Archiving on WP:ITN/C

I'm wondering if it's possible to get a bot to archive the oldest day on WP:In the news/Candidates at 0000 UTC every day and add the new day of nominations. I don't think it would be very complicated for someone who knows what they're doing. HJ Mitchell | Penny for your thoughts? 02:14, 31 August 2010 (UTC)

So basically what was done in this and this? I'll look at having AnomieBOT do it. Anomie 03:44, 31 August 2010 (UTC)
BRFA filed Anomie 14:51, 31 August 2010 (UTC)
Yeah, that's right. Thanks a lot for the help! HJ Mitchell | Penny for your thoughts? 14:59, 31 August 2010 (UTC)

Replacement for Wolterbot

From WP:VPT

Is it difficult for someone to replace the late Wolterbot with a bot that goes through the FAs and counts how many cleanup categories are also listed at the bottom, to update Wikipedia:Featured articles/Cleanup listing? I cannot imagine it to be a difficult task YellowMonkey (new photo poll) 00:54, 23 August 2010 (UTC)

Maybe request a listing at WP:DBR? fetch·comms 02:36, 23 August 2010 (UTC)
Try asking at WP:BOTREQ Peachey88 (T · C) 07:22, 23 August 2010 (UTC)
 Done-I've made a similar listing at Wikipedia:Featured_articles/reports/Aug_2010_Cleanup_listing...(it does need a little cleaning though). Let me know if it helps. Smallman12q (talk) 15:32, 28 August 2010 (UTC)
(Other FAR delegate jumping in here!) Thank you Smallman, this is indeed helpful. However, it would be really great if we could get either a reincarnated Wolterbot or a very similar bot. The listings by type of template, number of templates, etc were extremely helpful for figuring out which articles were in need of the quickest attention. Also, Wolterbot was used to maintain similar listings for many projects (see WP:WikiProject Equine/Cleanup listing for one project's list) and to do it on a fairly regular schedule (at least every couple of months) in order to keep it semi-updated. While your listing is helpful, it includes many templates that aren't cleanup templates, which makes it rather more cluttered, and the listing by article rather than cleanup template is harder to use, IMO. When you had all of, for example, the articles with NPOV banners in one place, it made it easier to focus on these without having to sort through all of the articles with fact tags or some other minor issue. I know this really sounds like I'm complaining about your work, but please believe me when I say that I'm not trying to be witchy. I'm just hoping there's a possibility of either fixing Wolterbot or coming up with something quite similar. Dana boomer (talk) 17:02, 28 August 2010 (UTC)
Yes, I'll code something similar to Wolterbot... this was a very rudimentary post, more for my own interest, to see just how clean FAs were. I'll let you know when it's done. Smallman12q (talk) 17:46, 28 August 2010 (UTC)
I've looked at it again, and I'm not sure if a bot is needed. One could do the same with Magnus's CatScan2. For example, to get all FAs with citation needed, you could use this query: FA's with citation needed. Perhaps a table of CatScan2 templates would provide the same functionality? Smallman12q (talk) 20:45, 28 August 2010 (UTC)

That is an interesting tool - I hadn't seen it before, although I am familiar with the original CatScan. One of my favorite things about WolterBot was that it listed articles with multiple templates together - for example all articles with five templates were listed first, then all with four, etc., then provided a breakdown of per-template articles. I think a table of CatScan2 templates would actually be more helpful for the latter (as it has the added benefit of being constantly updated), but I'm not sure how it would work for the former. The main reason I'm interested in the former is because it allowed you to see at a glance which articles needed the most work. Although not always foolproof (one NPOV tag can take up more editing time than a dozen fact tags), it was handy. Also, would there be a way of making a bot to make the tables of CatScan2 templates for each WikiProject, possibly on the existing "Project/Cleanup listing" pages that WB used? (Forgive me if any of these questions/comments are really dumb - I have essentially no experience in coding! Also, I'm watching this page, so tb templates are not necessary, unless you really like using them!) Dana boomer (talk) 21:13, 28 August 2010 (UTC)
Wolterbot was invaluable to us at WPEQ, where we have to keep an eye on something like 3,000 articles. It would be good to have it reactivated or replaced with something very similar. Montanabw(talk) 03:21, 29 August 2010 (UTC)

I'll see if I can write something up this week. From a technical standpoint, it's a fairly easy bot... just a matter of downloading the categories into arrays, checking if an article is in both of them (intersection), and posting the result. I'm a bit dismayed to see that a replacement hasn't been written by anyone else. Smallman12q (talk) 22:50, 29 August 2010 (UTC)
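
The category-intersection step described above is straightforward against the API; a minimal sketch (using today's API continuation parameters and example category names, not the bot's actual code):

    import requests

    API = "https://wiki.riteme.site/w/api.php"

    def category_members(category):
        """Yield page titles in a category, following API continuation."""
        params = {"action": "query", "list": "categorymembers", "cmtitle": category,
                  "cmlimit": "500", "format": "json"}
        while True:
            data = requests.get(API, params=params).json()
            for member in data["query"]["categorymembers"]:
                yield member["title"]
            if "continue" not in data:
                break
            params.update(data["continue"])

    featured = set(category_members("Category:Featured articles"))
    tagged = set(category_members("Category:Articles with unsourced statements"))
    print(sorted(featured & tagged))  # FAs carrying this particular cleanup tag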

I've finished writing the bot...it's pretty short (less than 400 lines)...just waiting for trial approval...see Wikipedia:Bots/Requests for approval/CleanupListingBot.Smallman12q (talk) 20:17, 30 August 2010 (UTC)
Yay! Let WPEQ know when it's live! Or, better yet, see if you can make it run for us! Montanabw(talk) 02:44, 31 August 2010 (UTC)
Thank you for making this bot! The WikiProject Chemistry is vastly appreciative. Please add us to this bot!Scientific29 (talk) 04:03, 31 August 2010 (UTC)
WikiProject Cryptozoology has greatly benefited by the services of the old Wolter bot. Please update us onto your list as well!--Gniniv (talk) 06:51, 31 August 2010 (UTC)

I've made a sample posting for WPEQ at User:CleanupListingBot/WPEQ Report and User:CleanupListingBot/WPEQ Report (Table). It's a work in progress...but let me know what you think.Smallman12q (talk) 00:50, 1 September 2010 (UTC)

Anyone?Smallman12q (talk) 23:11, 1 September 2010 (UTC)
I'm looking for some feedback as to whether the sortable table and the listing is what people had in mind....Smallman12q (talk) 23:41, 2 September 2010 (UTC)
Sorry for the slow response. The report/table combination looks like exactly what we would need. Don't know if it's possible to put them on the same page - WolterBot (are you getting tired of being compared to this yet?!) would suppress table-style listings of articles with only 1-2 categories to reduce page length if needed. My only concern/question is that in the table some articles seem to have the same category listed multiple times, sometimes 4-5 listings for the same category, which is artificially inflating the cleanup category count and throwing them off a bit. Any way to resolve this? Thanks so much for your hard work on this, Smallman - it will be great to have a bot for this back in place! Dana boomer (talk) 01:10, 3 September 2010 (UTC)
Dittoes to Dana's, nice work, and with only a few tweaks, we'll be happy. Here are some of mine: In my browser (Safari) there are no columns, which Wolterbot had, and were quite handy, I work off a laptop with limited vertical scroll and it helps to use the horizontal space as much as possible. I personally (and this is just me, Dana may be different) prioritize articles by the cleanup category (merges, references, etc.) or by number of flags more than by date. Montanabw(talk) 05:06, 3 September 2010 (UTC)
I'll add columns, replace the subsections that have months with "(month year)" in parentheses (this would eliminate the "duplicate categories"), and I'm also planning to add article quality and WikiProject importance... this could be done with a few more category scans. In addition to the regular listing, have you also looked at the sortable table produced? I don't mind being compared to WolterBot... software has evolution too =D. It'll probably be another week or so before it's all good to go... I still need to optimize the program, and vb.net is awful with arraylists full of arraylists full of objects... so I'll likely have to port a piece to a C++ DLL. Once the bot is done though, I don't plan to run it myself... I'll look for a dedicated bot op. Smallman12q (talk) 22:58, 3 September 2010 (UTC)

Typobot

I am just curious. Is there a bot for fixing typos? That could be of good use. Jhenderson 777 23:38, 30 August 2010 (UTC)

Not a good general task for a bot. See WP:BFDB —  HELLKNOWZ  ▎TALK 23:50, 30 August 2010 (UTC)
I didn't think it seemed possible, but I am just checking. I also like the idea of bots that get rid of red links and deleted image links. Jhenderson 777 23:58, 30 August 2010 (UTC)
AWB does a pretty good job of fixing typos. See Wikipedia:AutoWikiBrowser/Typos. -- œ 09:10, 31 August 2010 (UTC)
Red links are seen as a "Good Thing", and there is discussion about removing deleted images; some bots also commented these out in the past. Rich Farmbrough, 17:31, 4 September 2010 (UTC).

Need a bot to find free images on other wikis that can be used here

Hi, I noticed, following some interwikis, that some wikis have free images of subjects that are not being used in the English version. It would be really cool if a bot could run through a specific category checking the interwikis and identifying 1) which en articles are lacking images, 2) which interwikis have an image that is hosted on Commons, and 3) which interwikis have images hosted on the other wiki (if possible, sorted by license type). I could then go through the lists fairly quickly with AWB and add the appropriate ones to the en articles. Cheers Spartaz Humbug! 17:55, 31 August 2010 (UTC)

Have you seen m:FIST? it does something very similar. ΔT The only constant 18:32, 31 August 2010 (UTC)
Nope, I don't work on images much and bots and programming could be ancient greek for all I understand them. That looks like just the ticket though, thanks thingy. Spartaz Humbug! 18:41, 31 August 2010 (UTC)
Wow, what a great tool. thanks for that Spartaz Humbug! 19:33, 31 August 2010 (UTC)
The best thing is to move free images from "the other wiki" to Commons; then they are available to all projects. Rich Farmbrough, 17:32, 4 September 2010 (UTC).

If possible, can someone have a bot replace all uses of {{amg}} with {{allmusic}} (the former is a redirect to the latter). {{amg}} currently has 2466 transclusions, which is a bigger job than I can handle myself with AWB. My reasons for wanting to orphan this redirect are twofold:

  1. {{amg}} would make more sense as a redirect to {{AMG}}.
  2. "amg" is ambiguous: it's not immediately obvious whether it refers to "All Music Guide" or "All Movie Guide". No such problems with the {{AMG}} template, which covers both.

Thanks in advance for any assistance! PC78 (talk) 22:25, 31 August 2010 (UTC)

I can do that. My bot has approval in updating templates already. -- Magioladitis (talk) 22:27, 31 August 2010 (UTC)
Marvelous, thanks! PC78 (talk) 22:31, 31 August 2010 (UTC)
 Doing... 2,440 files loaded and Yobot started running. -- Magioladitis (talk) 22:35, 31 August 2010 (UTC)
 Done 2,438 edits. I fixed 8 occurrences in non-mainspace manually. -- Magioladitis (talk) 00:54, 1 September 2010 (UTC)
I was under the impression we don't do simple template redirect changes like that? Peachey88 (T · C) 01:05, 1 September 2010 (UTC)
The way I understand it is that {{amg}} will no longer redirect to {{allmusic}}, but to {{AMG}}. Before that redirect could be made {{amg}} had to be replaced with {{allmusic}} or all articles meant to be transcluding {{allmusic}} through {{amg}} would be transcluding {{AMG}} instead. §hepTalk 01:31, 1 September 2010 (UTC)
Exactly. We try to avoid templates that vary only in capitalisation and redirects that vary only in capitalisation and have different targets. -- Magioladitis (talk) 09:35, 1 September 2010 (UTC)
Thanks again Magioladitis, I've now redirected {{amg}} to {{AMG}}. PC78 (talk) 15:43, 1 September 2010 (UTC)
Indeed: see Wikipedia:Templates with names differing only in capitalization. Rich Farmbrough, 17:34, 4 September 2010 (UTC).

Reminder bot

I have a page at User:Hammersoft/tick file, which is a reminder list for myself. It's useful, but not particularly so; I have to remember to look at it all the time. It would be much more useful if there were a bot that scanned the file once a day, and if it found an event in the past would:

  • Mark the event as having passed on the file (so it doesn't keep reminding of past events)
  • Notify the owner of the file about the event.

Such a file would have to adhere to some standards of course, but those would not be hard. Thoughts? --Hammersoft (talk) 15:56, 1 September 2010 (UTC)

Well, if you need a message sent to yourself, or someone else, on a specific date then you could make a MessageDeliveryBot request for that date, and that would serve as a reminder. Not quite the same as having an on-wiki list, but just a thought (and a bit of advertising for my bot ;)). It would also involve a lot fewer edits, as there would just be one edit, the bot leaving you the message, instead of a minimum of three edits (you add a reminder, the bot reminds you, the bot marks it as done). - EdoDodo talk 16:01, 1 September 2010 (UTC)
I actually thought about doing something like that a few weeks ago, but I haven't pursued it yet because I wasn't sure anyone would want to use it. I may look at it now. Anomie 16:41, 1 September 2010 (UTC)
  • Hmm, no, there isn't; it's a good idea though, I might add it. By the way, I think I may have misunderstood what you were asking. It's probably better to have a bot specific to this, as the more I think about it the more it seems that a dedicated bot would work better (for example, it could combine reminders due on the same day into one message, skip the confirmation edits, etc.). - EdoDodo talk 17:06, 1 September 2010 (UTC)
  • Yeah, I'm also thinking this notional bot could remove the manual checking aspect of this. The bot could be restricted to editing only the tick file page, and the talk page of the owner of the tick file. It couldn't be used to spam, canvas, etc. Just send reminders to yourself. Also, being able to manage the tick file would allow self management of what you get reminded of. Thoughts? --Hammersoft (talk) 17:10, 1 September 2010 (UTC)

Is this personal use bot really a good idea? The only edits seem to be personal calendar reminders. This serves no bigger encyclopaedic purpose beyond an editor's own convenience. I agree it's nifty and useful to some, but one should really use proper tools for the job, like Google Calendar or any other of many organiser apps. I would want to see this as a general-purpose proposal for a project page, such as "WikiProject Future Events\Task List", rather than individual checklists. —  HELLKNOWZ  ▎TALK 18:05, 1 September 2010 (UTC)

  • Huh? WP:UP exhorts us to use our userpages for Wikipedia work. Obviously, if we put something in a tick file to be reminded of, it would be for Wikipedia work. A generic task list for the entire project is completely meaningless to the individual user. My personal tick file contains only things for Wikipedia. For example, my latest addition is a reminder to myself that in one month's time I need to check back in on List of Irish counties' coats of arms, as it will have been one month since I added {{non-free}} to it, and it bears looking at for removal of guideline-violating fair use content. Why in heck would I use some obtuse Google calendar for this when a bot here locally could assist me in conducting my own Wikipedia work with nothing but a single edit to one's tick file per task, as opposed to logging into a separate service, learning that service, and figuring out how to make it work specifically for Wikipedia tasks? I'm sorry, but that just doesn't make sense to me. --Hammersoft (talk) 18:13, 1 September 2010 (UTC)

I've had a bit of a look at this, and while I like the idea I'm having a little trouble figuring out some aspects of it. First, here is what I have determined:

  1. Either the "tick file" pages will be required to be a subpage in the user's userspace, or they will have to be something like "Template:Remind me/Username". Not both, please. If the former, they will need either a template or a category so the bot can find them (this could possibly be the same as the template in #2).
  2. The format of the entries on the page needs to be something easy for the bot to parse and at the same time easy for users to use. IMO, template syntax fits the bill here well: people who would use this should already have at least a passing familiarity with using templates, and AnomieBOT is quite good at parsing even complicated template invocations. The template should have a (named) delivery date parameter (for which the bot can easily accept DMY, MDY, and YMD, both with and without a time of day included) and a (named) parameter for the text of the reminder; a (possibly optional) parameter for the section title wouldn't be amiss either.
  3. The bot will keep a whitelist of other bots and completely ignore any edits to the "tick file" page that are not by the page's owner or a whitelisted bot (i.e. it will go through the history). This prevents malicious editors from putting spam on an unwatched "tick file" page and having the bot deliver it for them.
  4. The bot will ignore a "tick file" for any user that is blocked or has a fully-protected talk page or "tick file" page. It may as well also respect {{nobots}} on both the talk and "tick file" pages.
  5. The bot will, of course, only work for registered users. No IPs.

The things I am having trouble with:

  1. Which option for #1? User subpages are much more straightforward for users and are less "overhead" if this bot idea doesn't really take off, but usernamed subpages of a central template (as with Template:Editnotices/Page/...) can be easier to manage and can much more easily have a helpful editnotice.
  2. Is Template:Remind me a good name, or would something like User:ReminderBot/Remind me be better? The former is more straightforward, but the latter doesn't "pollute" the Template namespace.
  3. What should the output of "Template:Remind me" look like? What should the documentation say to be easy to understand?
  4. Should the bot mark delivered notices (by adding a "delivered=yes" parameter to the template), or just remove them from the "tick file" page?
  5. Should I even create a "ReminderBot" account, or just run it under AnomieBOT? The difference here is trivial (it's more work for me to create the Wikipedia account than to set up the AnomieBOT subprocess to use it), and for this I lean towards "ReminderBot".
  6. Should the bot allow for specifying time of day for each reminder, or should it only accept dates and deliver once a day at a time chosen by the user in the "tick file" page header template? If the former, what granularity should it honor? For example, if the user specifies "05:05", should the bot only worry about delivering it sometime between 05:00 and 06:00?

Anomie 04:25, 5 September 2010 (UTC)
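
To illustrate the delivery check in #2, a sketch of how the bot might parse reminders; the template name, parameter names, and date format here are placeholders, since none of them have been decided yet:

    import re
    from datetime import datetime, timezone

    # Hypothetical entry: {{Remind me|date=2010-10-01|text=Recheck the non-free gallery}}
    REMINDER = re.compile(
        r"\{\{\s*Remind me\s*\|\s*date\s*=\s*([^|}]+?)\s*\|\s*text\s*=\s*([^}]*?)\s*\}\}")

    def due_reminders(tick_file_wikitext, now=None):
        """Return the text of every reminder whose date is today or earlier."""
        now = now or datetime.now(timezone.utc)
        due = []
        for date_text, message in REMINDER.findall(tick_file_wikitext):
            try:
                when = datetime.strptime(date_text, "%Y-%m-%d").replace(tzinfo=timezone.utc)
            except ValueError:
                continue  # leave unparseable dates for a human to fix
            if when <= now:
                due.append(message)
        return due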

This is actually a request for two scripts (although it would also work for a bot to run these once, automatically, when articles show up at the SHIPS A-class review page).

  • FAC reviewers sometimes require that we link terms only once in an article (not counting infoboxes or endsections). It's a really tedious job to go through an article checking for this, and seems ideal for an automated job; it's rare that we actually need or want to link a term twice.
  • Even better: a script that checks a list of terms that are commonly linked in SHIPS articles, then runs through an article adding the standard link at the first occurrence of each of those terms. The results won't be perfect all the time, but if the script leaves me in the "changes" screen so I can quickly check the results for myself before committing to the edit, that would be awesome. - Dank (push to talk) 03:57, 8 September 2010 (UTC)
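
A sketch of the duplicate-link check in the first point; it ignores the infobox/endsection exclusions and link piping subtleties, so it is only a starting point:

    import re

    WIKILINK = re.compile(r"\[\[([^|\]#]+)")

    def repeated_link_targets(wikitext):
        """Return link targets that are linked more than once in the text."""
        seen, repeats = set(), set()
        for target in WIKILINK.findall(wikitext):
            key = target.strip()
            if key:
                key = key[0].upper() + key[1:]  # first letter of a title is case-insensitive
            if key in seen:
                repeats.add(key)
            seen.add(key)
        return repeats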

Bypassing redirects with accent characters

I have written code for bypassing redirects on disambiguation pages (per MoS); this code is similar to sorting collation. It treats accented and unaccented characters the same, as well as having substitutions for various symbols and romanizations (so Yūgiō and Yu-Gi-Oh! are considered the same).

I'm thinking about incorporating this into my common fixes library. However, I would like to know what objections, if any, people would have; specifically, cases where the author may use accent characters to differentiate. — Dispenser 20:48, 8 September 2010 (UTC)
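
Dispenser's actual code isn't shown here, but the accent-folding part can be approximated with Unicode decomposition; the punctuation stripping below is a guess at the "substitutions" mentioned (romanization mapping such as ō→oh would need an extra table):

    import re
    import unicodedata

    def fold(title):
        """Collapse accents, case, and separator punctuation so variant titles compare equal."""
        decomposed = unicodedata.normalize("NFKD", title)
        no_accents = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
        return re.sub(r"[\s\-!']+", "", no_accents).lower()

    fold("Guipúzcoa") == fold("Guipuzcoa")  # True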

Bot to assist in updating sparse citations

I go through citations and update them, adding Author, Date, Archiveurl, etc. and would like a helper bot to assist in that job. What I am looking for is something that would search an article for the type of citation used (cite web, cite news, etc.) and bring up an input form related to that template. It would show a link to the reference on the web and a link to the Internet Archive (http://web.archive.org/web/*/http://SomeWebSite) so I could check for archives. This would allow one to check the reference and add any missing information in to the form. The form would have an Update and a Skip to next button to either update that reference or skip to the next reference. See this sort-of-example for what I mean. Is this kind of thing possible for a bot? Thanks. -      Hydroxonium (talk) 19:55, 8 September 2010 (UTC)

I think what you're looking for is what would normally be called a script or an editing tool (such as Twinkle or AutoWikiBrowser, respectively). Bots generally carry out lots of similar edits on their own with little to no human interaction; what you're envisioning would require frequent input from the user for direction on how to proceed. You might be able to find someone who could help at Wikipedia:WikiProject User scripts/Requests. Hersfold (t/a/c) 20:35, 8 September 2010 (UTC)
Have you seen WP:REFLINKS? –xenotalk 20:50, 8 September 2010 (UTC)
Thanks very much for the help. I did not know about REFLINKS. I checked it out a little, but I think I will have to spend some more time with it in order to get it to do what I want. Thanks. -      Hydroxonium (talk) 01:35, 9 September 2010 (UTC)

Bot to tag not English newpages

We have a problem at newpage patrol in that some newbies submit new articles in their own language without realising that this is just the English-language version of Wikipedia. We have procedures and templates to tag these as {{Not English}}, refer the article for translation, and ideally tell the author about the Wikipedia in their language. Sometimes this doesn't work, not least because some newpage patrollers and also some admins don't have the software on their PC to identify articles in scripts such as Sinhalese, so නිරිපොල would appear to them as a row of boxes. Would it be possible for someone to write a bot that:

  1. Checked all new pages and identified ones that were predominantly in a non-Latin script.
  2. Ideally identified which script (Arabic, Sinhalese, etc.) had been used
  3. Tagged those articles with {{Not English}}
  4. Added a section at Wikipedia:Pages needing translation into English
  5. Ideally if the Bot is able to identify the script, post a note on the authors talkpage using the appropriate template from Wikipedia:Pages needing translation into English/Templates for user talk pages

I'm not sure how often this would happen; we currently have a few such articles identified per day - but we don't know how many more are being deleted under {{G1}} because neither the tagger nor the admin can see that it is non-English material. ϢereSpielChequers 12:15, 9 September 2010 (UTC)
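
A rough sketch of the script-detection step in items 1 and 2; it simply counts letters by the script prefix of their Unicode names, which is crude but enough to flag a predominantly non-Latin page:

    import unicodedata

    def dominant_script(text):
        """Guess the dominant script of a page's letters, e.g. 'LATIN', 'ARABIC', 'SINHALA'."""
        counts = {}
        for ch in text:
            if ch.isalpha():
                script = unicodedata.name(ch, "UNKNOWN").split(" ")[0]
                counts[script] = counts.get(script, 0) + 1
        return max(counts, key=counts.get) if counts else None

    dominant_script("නිරිපොල")  # 'SINHALA'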

Google has open-sourced its compact language detection library, which can detect which language text has been written in. — Dispenser 12:34, 9 September 2010 (UTC)

We have these two redundant categories. As in the English-language Wikipedia the name of this province is Gipuzkoa and not Guipúzcoa, all the instances of the text Category:People from Guipúzcoa contained in this Wikipedia's articles should be replaced with Category:People from Gipuzkoa. Thank you in advance. --Xabier Armendaritz(talk) 12:41, 10 September 2010 (UTC)

 Done There were only 36 articles so I did them semi-manually using WP:AWB. -- Magioladitis (talk) 13:37, 10 September 2010 (UTC)

USGS Image library import

The USGS has a photo library consisting of 30,000+ high resolution images. Perhaps it could be imported to Wikipedia or the Commons?Smallman12q (talk) 20:49, 10 September 2010 (UTC)

I just dropped them an email, but their site is fairly complex and extracting images from their site will be difficult unless we can get some assistance from them. ΔT The only constant 21:16, 10 September 2010 (UTC)
It's not terribly complex... still, it's probably best to get a response from them before we download 30,000+ high-res images. I'm also interested in their portrait collection; though small, it does have superior high-res images compared to ours, such as the one for William Henry Holmes (compare ours and theirs). Do let us know when they respond.Smallman12q (talk) 00:14, 11 September 2010 (UTC)

WebCiteBOT still down, replacement growing more urgent

Prior discussion led to Wikipedia:Bot_requests/Archive_37#We_need_another_User:WebCiteBOT. Now the question has been raised again at WP:Village pump (policy)/Archive 78#WebCiteBOT is down - Dead links at record high. Any word of progress on either front? LeadSongDog come howl! 18:41, 24 August 2010 (UTC)

The original bot owner posted a link to the source code here (Wikipedia:Bots/Requests for approval/WebCiteBOT 2) if anybody is interested in working on this.
- Hydroxonium (talk | contribs) 00:51, 25 August 2010 (UTC)
Hey guys. I've been away for a while, but the monitoring portion of WebCiteBOT was still running - just not the portion that edits Wikipedia. Thus, the easiest solution would be for me to just start actually running it again. I'll look over the code, make sure everything is up to date, and get it up & running again within the next few days. --ThaddeusB (talk) 02:13, 26 August 2010 (UTC)
Welcome back! Could you clarify whether the transactions with WebCitation have been continuing too? There's been some anxiety over the question of whether they've got a sustainable and scalable future. LeadSongDog come howl! 06:26, 26 August 2010 (UTC)

┌─────────────────────────────────┘
@ThaddeusB, we missed you. Welcome back. Thanks for WebCiteBOT, it's one of the best bots on Wikipedia.

  • Can we get a 2nd WebCiteBOT running, please?

It's alright if the answer is no. I would just like to know one way or the other. Thanks. - Hydroxonium (talk | contribs) 10:17, 26 August 2010 (UTC)

You can do what WebCiteBOT does using Checklinks, minus the automatically saving part. — Dispenser 04:12, 27 August 2010 (UTC)
It's been about 2 weeks and so far ThaddeusB is a no-show. On a positive note, nn123645 is back and is working on another bot for WebCite and could use some help. He is allowing me to post a link to the code he has on the toolserver tools:~nn123645/webcite/source-9-7-10.zip. Can anybody help with this please? Thanks very much for any input. -      Hydroxonium (talk) 20:23, 7 September 2010 (UTC)
I will reactivate my Wayback link bot in a short while after I'm done with my work deadline, that may be of some help. I can probably set it to check WebCite archives as well afterwards. But given how slow browsing and revision parsing is, there certainly needs to be more of these bots. Tim1357 had one, not sure if it's working on this task now. —  HELLKNOWZ  ▎TALK 22:45, 7 September 2010 (UTC)
Thanks very much, Hellknowz. Dead links have been increasing exponentially for the last few months, so any help is greatly appreciated. I appreciate your support. Thanks. -      Hydroxonium (talk) 00:10, 8 September 2010 (UTC)

Just restarted mine. It's throttled heavily to not be too much of a strain on the toolserver, but I'll let it run for a long time. Tim1357 talk 00:29, 8 September 2010 (UTC)

That's great, Tim. Thank you very much. I really appreciate all the support. Thanks. -      Hydroxonium (talk) 12:17, 8 September 2010 (UTC)
Which bot is it? It sounds great, and I'd like to see its contribs. - Peregrine Fisher (talk) 03:22, 10 September 2010 (UTC)
Tim1357's bot is DASHBot (busy with other tasks), mine would be H3llBot (in development/testing). I am unaware of any other active and fully automated bots doing archiving. —  HELLKNOWZ  ▎TALK 11:43, 10 September 2010 (UTC)
I had forgotten how buggy my bot was. It's down for the moment. Tim1357 talk 01:44, 11 September 2010 (UTC)
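
For the archive-lookup step Hellknowz mentions, the query itself is simple; a sketch against the Wayback Machine's availability endpoint (a newer interface than the bots in this thread used):

    import requests

    def wayback_snapshot(url):
        """Return the closest archived copy of a URL, or None if the Wayback Machine has none."""
        resp = requests.get("https://archive.org/wayback/available", params={"url": url})
        closest = resp.json().get("archived_snapshots", {}).get("closest")
        return closest["url"] if closest and closest.get("available") else None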

Bot needed for Article Feedback Pilot

For the upcoming Article Feedback Tool pilot (see description from the Signpost, we need to add every article in WikiProject United States Public Policy to a hidden category, Category:Article Feedback Pilot. I was hoping a bot operator could help; it should be a pretty trivial bot task.--Sage Ross - Online Facilitator, Wikimedia Foundation (talk) 18:29, 14 September 2010 (UTC)

I think that this is an AWB task. I'll do this. I-20the highway 22:31, 15 September 2010 (UTC)
Awesome. Don't start in just yet, because I want to double check that we aren't excluding any articles or changing the category name or anything else. But I'll leave a message on your talk page probably tomorrow confirming that it's a go.--Sage Ross - Online Facilitator, Wikimedia Foundation (talk) 23:13, 15 September 2010 (UTC)
Alright. All edits will be checked manually by me and made under the account AutoGeek, my semi-automated editing account. I-20the highway 01:03, 16 September 2010 (UTC)
Great! Thanks.--Sage Ross - Online Facilitator, Wikimedia Foundation (talk) 01:12, 16 September 2010 (UTC)

Dup category cleanup bot requested

Bot needed, to go through Category:Wikipedian usernames editors have expressed concern over and Category:User talk pages with conflict of interest notices - and remove all users currently blocked. Thank you, -- Cirt (talk) 16:22, 10 September 2010 (UTC)
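
The per-user check is a single API call; a sketch (the category members would first be mapped from "User talk:Example" back to the bare username):

    import requests

    API = "https://wiki.riteme.site/w/api.php"

    def is_blocked(username):
        """True if the named account currently has an active block."""
        data = requests.get(API, params={
            "action": "query", "list": "users", "ususers": username,
            "usprop": "blockinfo", "format": "json"}).json()
        user = data["query"]["users"][0]
        return "blockid" in user  # present only while a block is in force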

Already doing the first with KingpinBot (haven't been running much over summer, and computer just got wiped+OS reinstalled, so bear with me). I could probably do the other one using pretty much the same code. - Kingpin13 (talk) 16:24, 10 September 2010 (UTC)
Okay, great, thank you so much! :) -- Cirt (talk) 18:45, 10 September 2010 (UTC)
Is this working? -- Cirt (talk) 19:16, 14 September 2010 (UTC)
Eh, still doing CAT:UAA, haven't got around to doing the other one yet I'm afraid. I'll submit the BRfA in a second. Sorry about not getting around to this earlier, I've not been particularly good at following these on-wiki things up recently, especially bots, so thanks for the poke :). Best, - Kingpin13 (talk) 19:25, 14 September 2010 (UTC)
No prob, keep me posted. :) -- Cirt (talk) 19:26, 14 September 2010 (UTC)
BRFA filed - Kingpin13 (talk) 19:32, 14 September 2010 (UTC)

Y Done - Kingpin13 (talk) 15:52, 16 September 2010 (UTC)

{{Football player statistics end}} is meant as a replacement for the |} that comes after using {{Football player statistics 1}}. Using it resolves. This needs to be applied to 3197 articles at this moment, after I applied it to 30 or so manually to confirm that it is indeed the first |} to occur after {{Football player statistics 1}} that needs to be replaced. Any takers, or should I continue doing it manually? Or maybe someone wants to suggest a location where this needs to be discussed first? I proposed it at Template talk:Football player statistics 1#Suggestion and implemented it due to acquiescence.--Muhandes (talk) 15:50, 16 September 2010 (UTC)
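
A sketch of the substitution described above, assuming (as the manual spot-checks suggest) that the first |} after each {{Football player statistics 1}} call is the one to replace and that none occurs inside the call itself:

    import re

    STATS_TABLE = re.compile(r"(\{\{\s*Football player statistics 1\b.*?)\|\}", re.DOTALL)

    def close_with_template(wikitext):
        """Swap the first table-closing |} after the header template for the new end template."""
        return STATS_TABLE.sub(r"\1{{Football player statistics end}}", wikitext)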

Wikipedia:Bots/Requests for approval/KarlsenBot has been started for this task. Peter Karlsen (talk) 03:47, 17 September 2010 (UTC)

Looking for a new archive bot for Graphic Lab subpages

Currently Dycedarg's bot DyceBot archives the Graphic Lab subpages. I was looking to fiddle with the time frame the bot uses to archive sections and noticed that it looks like it's something that Dycedarg would need to change on their end. As they don't seem to be active anymore, I was hoping someone here could recommend a bot with an active owner that would be able to do what DyceBot is currently doing. Any recommendations on who I should contact? Or if someone is interested in coding something up I can give more detail on the specifics. Thanks, §hepTalk 21:48, 17 September 2010 (UTC)

A bot to export images from Wikipedia for quick uploading at Wikia

Special:Export doesn't export the .jpg or other image files in the articles. Any way to get the articles and their images within them? Also, some images are in categories such as Category:Xbox Live game covers. Any way to save all of them on my hard drive at once? Dream Focus 16:14, 17 September 2010 (UTC)

If I understand you correctly, you're looking for a tool to download all the pictures in a category and also to download all the pictures in articles in that category?Smallman12q (talk) 22:49, 17 September 2010 (UTC)
Yes. I can export the articles themselves en masse, but the pictures don't go along with them. Dream Focus 01:14, 18 September 2010 (UTC)
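
A sketch of the bulk download, using the API to list the files in a category along with their URLs (any local licensing obligations still apply to what is done with the copies):

    import os
    import requests

    API = "https://wiki.riteme.site/w/api.php"

    def download_category_files(category, dest="images"):
        """Save every file in a category (namespace 6) to a local directory."""
        os.makedirs(dest, exist_ok=True)
        params = {"action": "query", "format": "json",
                  "generator": "categorymembers", "gcmtitle": category,
                  "gcmnamespace": "6", "gcmlimit": "50",
                  "prop": "imageinfo", "iiprop": "url"}
        while True:
            data = requests.get(API, params=params).json()
            for page in data.get("query", {}).get("pages", {}).values():
                info = page.get("imageinfo")
                if not info:
                    continue
                url = info[0]["url"]
                name = url.rsplit("/", 1)[-1]
                with open(os.path.join(dest, name), "wb") as out:
                    out.write(requests.get(url).content)
            if "continue" not in data:
                break
            params.update(data["continue"])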

Hyphen consistency bot

I'd like to see a bot that flags articles that have inconsistent hyphen usage between word pairs. It would search for the first occurrence of a hyphen and store the bounding words. If the same word combinations exist elsewhere without the hyphen then have the article noted for human cleanup/intervention. I'm not sure how many false positives this approach would bring up but it should be easy to find out.--Hooperbloob (talk) 17:30, 18 September 2010 (UTC)
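
A sketch of the check as described; on a real run the word pairs would probably need a whitelist, since some pairs are legitimately hyphenated only when used attributively:

    import re

    def inconsistent_hyphen_pairs(text):
        """Word pairs that appear both hyphenated and unhyphenated in the same text."""
        flagged = set()
        for first, second in set(re.findall(r"\b([A-Za-z]+)-([A-Za-z]+)\b", text)):
            unhyphenated = r"\b%s\s+%s\b" % (re.escape(first), re.escape(second))
            if re.search(unhyphenated, text, re.IGNORECASE):
                flagged.add((first, second))
        return flagged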

Chances are this would be best done using a database dump to find the pages. You may want to ask Wikipedia:WikiProject Check Wikipedia, as they make this sort of list already for many other types of cleanup. Anomie 20:07, 18 September 2010 (UTC)

John Hopkins University -> Johns Hopkins University

It is very common (to the point where there are redirect pages for most of them) for John*s* Hopkins University to be written as John Hopkins University. This is *especially* common in references which refer to John Hopkins University Press. While there might be a few places where John Hopkins University might be appropriate to keep in stories about people's confusion, botting a change of all of John Hopkins University Press to Johns Hopkins University Press might be useful (and wikilinking it if it isn't already). According to Google there are 4000 hits on "John Hopkins University Press" on wikipedia.Naraht (talk) 02:40, 10 September 2010 (UTC)

Bots that fix common misspellings are generally not approved, because direct quotations should not be changed, and the bot has no way of knowing if text is a direct quotation or not (it can make a reasonable guess from quotes and {{quote}} but not safe enough). However, if this misspelling is common it may be a good idea to add it to WP:AutoWikiBrowser/Typos so that users using AWB will semi-automatically fix it. If nobody disagrees I can add it later today. - EdoDodo talk 05:38, 10 September 2010 (UTC)
To me, this is sort of a quarter-loaf, but thank you very much. I think that John Hopkins University *Press* is much less likely to be in a quote. Would it be safe enough to change only "John Hopkins University Press" in *references*, or in {{cite}}, or even in {{cite|publisher=}}?--Naraht (talk) 10:39, 10 September 2010 (UTC)
Even so, I doubt it would be approved because fully automatic fixing of typos is explicitly against the bot policy ("Bot processes may not fix spelling mistakes in an unattended fashion, as accounting for all possible false positives is unfeasible."). However, now that it is in AWB's typos list it will probably get done semi-automatically fairly quickly. There's only 1000 instances or so of it, so it shouldn't take too long. I might do some semi-automatically myself later. - EdoDodo talk 14:15, 10 September 2010 (UTC)
OK, I understand policy. Thank you for putting it in the typo's list. The 4000 number I was using was from site:wikipedia.org, and in some ways it is really odd that roughly 3/4 of the misspellings are from sites other than en. Shows how often it (with press) gets used as the publisher in a reference, I guess. Now all I have to do is find the equivalent to AWB for all of the other languages... 1/2 :) Thank you for all of the help.--Naraht (talk) 15:26, 10 September 2010 (UTC)
FWIW, with Google's aid I could only find 43 pages actually using "John Hopkins University" in en.wikipedia. Perhaps someone has already gone through those 1000 and corrected them. I corrected the 43 I could find. If you think this is a recurring issue I will add it to my weekly TODO list. --Muhandes (talk) 11:16, 12 September 2010 (UTC)
No, still 989. Type "john hopkins university" site:wiki.riteme.site into the search box. Even 199 if you exclude "John Hopkins University Press" (which need to be fixed as well).Naraht (talk) 03:55, 13 September 2010 (UTC)
You are correct, for some reason AWB does not return all of them on a Google search. I will fix the rest, time permitting. --Muhandes (talk) 06:44, 13 September 2010 (UTC)
Cool.Naraht (talk) 12:01, 13 September 2010 (UTC)

 Done The 400ish articles reported by this google search are corrected. I will wait a couple of weeks and do another google search to make sure, but for now there is not much else I can do. If you have a better way to find such instances than that google search, let me know. --Muhandes (talk) 22:08, 18 September 2010 (UTC)

Category change request

On 20 October, I posted the following request: "All players in Category:Philadelphia Quakers players need to be moved into Category:Philadelphia Phillies players, as they are the same team. The deprecated category for the Philadelphia Blue Jays players was also emptied, and this is a preparatory step for the expansion of the Philadelphia Phillies all-time roster. Thanks." See Wikipedia:Bot requests/Archive 38#Category change for the original. No response was ever made to the thread before it was archived, so I'm reposting it. — KV5Talk12:21, 1 November 2010 (UTC)

Link to a relevant discussion? -- Magioladitis (talk) 17:52, 8 November 2010 (UTC)
I would suggest initiating a CfD nomination (see WP:CFD#HOWTO for instructions). If there is consensus for the merge, then User:Cydebot will carry it out automatically. -- Black Falcon (talk) 01:33, 14 November 2010 (UTC)

Are there any bots that can improve this article?

  1. First of all, please remember to sign your posts using "--~~~~".
  2. Second, can you please explain what actually needs to be improved? If you explain what needs to be done and it's simple enough we may be able to refer you to a bot. If it's about the suspected copyright violations tag on the page you may also want to go to Wikipedia:Copyright problems. --vgmddg (look | talk | do) 02:04, 15 November 2010 (UTC)

Template:NFLplayer

WikiProject National Football League has designed a new template (Template:NFLplayer) which simplifies the usual text that has to be inputted into roster templates. The previous formatting is * <span style="font-family: Courier New;">99</span> [[Player name]]. After we have converted all current NFL roster templates to the new format, the NFL team season pages will have to have their roster templates converted as well. For the complete list of pages that would need to be changed, see Category:National Football League seasons. Essentially, a bot is requested to change * <span style="font-family: Courier New;">99</span> [[Player name]] into {{NFLplayer|99|Player name}} as well as the other specific parameters listed at Template:NFLplayer/doc. The new roster template is at Template:NFL roster also. Eagles 24/7 (C) 20:49, 14 November 2010 (UTC)

That category has quite a few subcategories, pages, etc. Can you provide an example page where such a change would be made (or has been made)? Plastikspork ―Œ(talk) 03:33, 19 November 2010 (UTC)
Just an example of where the change was made, [2], and the changes would be made at a page like 2007 New England Patriots season for all of the roster templates. Eagles 24/7 (C) 03:35, 19 November 2010 (UTC)

Cleanup CAT:AWBC

If some bot could be made to periodically go over CAT:AWBC and correct at least some of the pages, it would be great. Most (but not all) of the pages in that category lack a title field when using {{cite web}}, which I believe some bots are capable of fixing. The rest would probably need to be done by hand, so I would suggest tagging them for manual editing (say, in a new category Category:Articles with broken citations requiring manual attention). --Muhandes (talk) 12:30, 2 September 2010 (UTC)
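
The missing-title case is the one that lends itself to automation; a sketch of the lookup (a real bot would also need to handle encodings, dead links, and HTML entities):

    import re
    import requests

    def html_title(url):
        """Fetch a cited URL and pull out its <title> as a candidate for |title=."""
        try:
            html = requests.get(url, timeout=30).text
        except requests.RequestException:
            return None
        match = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
        return " ".join(match.group(1).split()) if match else None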

This shouldn't prove too difficult, and I can start looking into programming it; are there any other common but easily fixed errors? Hersfold (t/a/c) 12:10, 8 September 2010 (UTC)
I don't see how the other common errors can be automatically corrected, but I'll list them anyway:
  • Using "archivedate" without "archiveurl". This is either because of a misunderstanding of what "archivedate" is for, which is corrected by removing the "archivedate", or because of forgetting, or making an error, in listing the archiveurl, which requires going to search where it is archived. Distinguishing between the cases is not always easy. If the link is not dead, someone may still want to use an archiving site and just make a mistake in listing the archiveurl.
  • Using "archiveurl" without "archivedate". This actually can be corrected automatically if the website is http://www.archive.org; the date appears as part of the URL (see the sketch after this list). I'm not sure it's worth it though, it isn't that common, and can easily be done by hand.
  • Listing the title after the url, without "|title=". Very common when people move from using [url title] to using {{cite web}}, but I don't see how it can be identified.
  • Using {{cite web}} for a PDF, not providing a title. Must be done by hand.
Anyway, I think adding a title when it is missing will solve 90% of the cases anyway, leaving maybe several a day, which can be done manually. If tagged in a category as I suggested, it would be easy to maintain.
--Muhandes (talk) 14:35, 8 September 2010 (UTC)
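
The sketch referred to in the second bullet above: deriving |archivedate= from a Wayback Machine URL, whose path embeds a YYYYMMDDhhmmss timestamp:

    import re

    WAYBACK_STAMP = re.compile(r"web\.archive\.org/web/(\d{4})(\d{2})(\d{2})\d*/")

    def archive_date(archiveurl):
        """Return an ISO date taken from a web.archive.org URL, or None if it is not one."""
        match = WAYBACK_STAMP.search(archiveurl)
        if not match:
            return None
        return "-".join(match.groups())
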
  • If it requires a human judgment call as to what exactly the problem is, then it probably can't be easily managed by any bot. Unless you just want it to assume the first case and simply remove any instance of |archivedate= that doesn't have a |archiveurl= ?
  • Do you have an example archive.org address I can use? This sounds manageable, but those not on archive.org probably can't be fixed.
  • This might be correctable with a few additions to the code I've already written. I'll have to check. - Edit: Added to code, but we'll see how it works in testing.
  • Definitely can't be done by a bot, you're right, but at least I know to watch out for that now. I'll have the bot ignore any url ending in .pdf or .PDF. - Edit: Added to code.
Also, I'm seeing a number of issues with the manual tracking category you suggested, but I do have an alternative. Rather than stick each article into a category, I can have the bot edit a page at the end of every run with a list of pages that need to have manual work done. We can link to this page from the main category, and pages will get removed from the list by the bot as their problems get corrected. Would that work? Hersfold (t/a/c) 16:11, 8 September 2010 (UTC)
Just passing through:
Anomie 17:29, 8 September 2010 (UTC)
Thanks, Anomie. As for the PDF metadata, I'm not sure it would be worth the trouble. We'll see how often they come up. Hersfold (t/a/c) 19:05, 8 September 2010 (UTC)
Also, are there common errors with other citation templates that the bot could fix? This is probably less likely since presumably other templates don't involve the internet, but just in case. Hersfold (t/a/c) 17:11, 8 September 2010 (UTC)
BRFA filed Hersfold (t/a/c) 05:56, 10 September 2010 (UTC)
Apologies for not being here to answer swiftly; Rosh Hashana followed by Shabbat made for three excruciating days this year. Everything you suggested is fine. Where human judgment is required, I think it's better to leave it alone than to assume anything. I'm fine with your page-instead-of-category idea. I never looked at PDF metadata so I can't say if it will be worth the trouble. As for other templates, all of them require manual handling I'm afraid. --Muhandes (talk) 22:54, 11 September 2010 (UTC)
Since Hersfold has withdrawn due to lack of time, and the category now has over 450 articles, I humbly request if someone else is willing to take this instead.--Muhandes (talk) 09:26, 17 November 2010 (UTC)