User talk:Pbsouthwood/Archive 27
This is an archive of past discussions with User:Pbsouthwood. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Feedback request: Engineering and technology Good Article nomination
Your feedback is requested at Talk:Nissan GT-R on an "Engineering and technology" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 20:30, 22 September 2023 (UTC)
Heads up
This might interest you.
Sincerely, — The Transhumanist 22:02, 23 September 2023 (UTC)
Question from Olivia104 (11:22, 26 September 2023)
Every time I submit an edit, I am not sure what I should type in the summary. Can you give me a good example of an edit and summary? Since I am not sure what to type, I keep it as simple as possible. Is that OK? --Olivia104 (talk) 11:22, 26 September 2023 (UTC)
- Hi Olivia104, This is not an easy question to answer, as opinions vary enormously on what should be in an edit summary. The quick answer is that there are only a few circumstances where an edit summary is actually obligatory, and these are when providing attribution for content copied from a free licensed source where the license requires attribution, usually for text copied within Wikipedia. In other circumstances an edit summary can be very helpful to another editor who is trying to understand your edit, as the logic may not be obvious. There are Wikipedians who insist that all edits must have a summary, but that is not currently true, and all attempts to pass such a ruling have met with rejection. On the other hand, a misleading summary is considered disruptive editing and may be sanctioned, particularly if it appears to be deliberately misleading.
This brings up the problem of efficiency. Your time is not efficiently used if it takes you longer to compose the edit summary than to make the edit, and in many cases this would happen. While it is worth having a preset edit summary if you are going through hundreds of pages with a semi-automated tool to make the same correction each time, ad hoc improvements often involve many small changes, which will be fairly obvious to anyone looking at the diffs, and we are not actually required to explain the obvious. A common practice is to use "copy edit", or the abbreviation "ce", for these. If you can't think of a good description, and the edit is clearly an improvement, most editors will not complain if you do not leave an edit summary. If you are making many consecutive edits to an article which you are creating and miss a few summaries, that is also nothing to lose sleep over. "Expand content", "fix grammar", "fix spelling", etc. are often useful when that is what you are doing. "Add ref" or "cite" are also appreciated, though quite obvious if one checks the diffs.
If you are making an edit which you think might possibly be considered controversial by someone else who edits or watches the page, a lot of conflict can be avoided by explaining why you are making that edit, but sometimes you will make what you think is an uncontroversial edit and someone else will strongly disagree. A lot also depends on the subject matter. You can also read the guidance on edit summaries at Help:Edit summary. Cheers, · · · Peter Southwood (talk): 11:31, 27 September 2023 (UTC)
Archiving outlines in the face of potential deletion, and reviving them afterwards
Dear Southwood,
It would be a shame if the outlines got deleted. So I've been taking measures to safeguard them.
In case something happens to me, and I disappear from Wikipedia entirely, somebody besides me needs to know this stuff. I don't know how much you already know, so I'll cover it all...
Archiving the outlines
I've just finished archiving the entire set of outlines. They can be found at the Wayback Machine (aka Internet Archive).
I haven't made a list of their archive links yet, but will do so after the next phase of archiving is complete (see below). Trying to search for them all with the Internet Archive search feature would be futile.
The easiest way I've found to archive them, and access the archives, is through the use of tabs and bookmarklets. Bookmarklets are tiny programs saved as bookmarks; you save the code in the location field of the bookmark input box. The bookmarklets should be saved to the bookmarks toolbar -- they're useless if they are not displayed on the screen, as they run when you click on them.
- To create multitudes of tabs quickly, I use Snap Links: https://cpriest.github.io/SnapLinksPlus/#/
- Once installed, you hold down the right mouse button and drag the mouse to create a box around the links you wish to open, then you let go of the button.
- Bookmarklet for viewing the archive of the webpage that is in the current tab:
- javascript:location.href='https://web.archive.org/web/*/'+location.href
- Bookmarklet for archiving the webpage that is in the current tab:
- javascript:void(window.open('https://web.archive.org/save/'+location.href));
Archiving should probably be done about every 6 months, and at least annually. New outlines, though, should be archived as soon as they are constructed.
- To gather their titles for archiving, most of the outlines can be found by using All pages with prefix on "Outline of". Most of the rest can be found by searching for titles with the words "List" and "topics" in them. Once you have a list on the screen, use Snap Links on 50 or so at a time.
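The same title list can also be pulled straight from the MediaWiki API. Here's a rough JavaScript sketch of that (Node 18+ for built-in fetch; the allpages list and its apprefix parameter are standard API features, but treat this as a sketch rather than tested code):
// List every page whose title starts with "Outline of", 500 at a time,
// following the API's continuation tokens until the list runs out.
const api = 'https://wiki.riteme.site/w/api.php';
async function listOutlines() {
  const titles = [];
  let cont = {};
  while (cont) {
    const params = new URLSearchParams({
      action: 'query', list: 'allpages', apprefix: 'Outline of',
      aplimit: '500', format: 'json', ...cont,
    });
    const data = await (await fetch(api + '?' + params)).json();
    for (const p of data.query.allpages) titles.push(p.title);
    cont = data.continue; // undefined on the last batch, which ends the loop
  }
  return titles;
}
listOutlines().then((t) => console.log(t.join('\n')));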
The Internet Archive only lets you archive 3 pages at a time. Once those are done, plus a couple or so seconds' delay, it lets you do the next three.
Damn, that was tedious. If you ever need to archive 800+ pages, I recommend having a selection of movies or shows ready, and a second screen to play them on.
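If you ever have to repeat it, the tedium could probably be scripted away. A rough Node.js sketch (same web.archive.org/save/ URL pattern as the bookmarklet above; urls.txt is a placeholder file with one URL per line, and the delay is a guess):
// Request Wayback Machine snapshots of each URL in urls.txt,
// three at a time, with a pause between batches.
const fs = require('fs');
const urls = fs.readFileSync('urls.txt', 'utf8').split('\n').filter(Boolean);
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));
(async () => {
  for (let i = 0; i < urls.length; i += 3) {
    const batch = urls.slice(i, i + 3);
    // Fire off up to three save requests in parallel.
    await Promise.all(batch.map((u) =>
      fetch('https://web.archive.org/save/' + u)
        .then((res) => console.log(res.status, u))
        .catch((err) => console.error('failed:', u, err.message))
    ));
    await sleep(5000); // a few seconds between batches, per the limit above
  }
})();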
While this archive of the outlines is suitable for browsing, it is useless for restoring or repurposing -- for that you need the wikitext version of each outline.
Archiving the wikitext versions (each one's edit page)
So, I'm in the process of archiving all the "edit source" pages of all the outlines. Another movie marathon in progress!
Accessing the archives
Currently, since the outlines are still on Wikipedia, to view an outline's archive, go to the outline on WP, and click on the first bookmarklet above.
Of course, to navigate the outline archives after the outlines have been scrubbed from WP, you'd need a list of all the links. I haven't gotten that far yet. Note that the edit button on an archived WP page in the Wayback Machine leads to the archived wikitext versions of the page, if any have been archived. The editor even works on archived WP pages, but it won't let you save changes to the archived versions, so you'd have to copy and paste the work somewhere else.
Alternative archiving
Another avenue for retrieving the outlines, should they ever be lost, is from a Wikipedia database dump (see WP:DUMP). They make a new one each month, so you just go back to the last dump in which all the outlines existed. A dump XML can be loaded directly into XOWA, which can be used to access and export the wikitext pages.
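You don't strictly need XOWA to get a single page back out, either. A crude line-scanning sketch in JavaScript (the dump file name and page title are placeholders; real dumps are bz2-compressed, so decompress first):
// Scan a decompressed pages-articles XML dump and print one page's wikitext.
const fs = require('fs');
const readline = require('readline');
const rl = readline.createInterface({
  input: fs.createReadStream('enwiki-pages-articles.xml'),
});
let found = false, printing = false;
rl.on('line', (line) => {
  if (line.includes('<title>Outline of underwater diving</title>')) found = true;
  if (found && line.includes('<text')) printing = true;
  if (printing) console.log(line); // crude: includes the <text> tags themselves
  if (printing && line.includes('</text>')) rl.close();
});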
If reviving the outlines ever becomes needed
If outlines are ever nuked from WP, they could be revived from the archived wikitext pages, copied into a wiki...
Initially, the links would be broken, because they're local links. They would have to be fixed to point to Wikipedia.
After importing them into a local install of MediaWiki, you could use AWB to change all the links in the outlines to point to Wikipedia. Once that is done, the outlines would be useful for Wikipedia navigation again.
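As a rough illustration of the rewrite rule involved (a sketch of the transformation itself, not AWB's actual find-and-replace syntax; "wikipedia:" is the standard interwiki prefix on most MediaWiki installs):
// Turn local wikilinks into interwiki links that point back at Wikipedia:
//   [[Foo]]       -> [[wikipedia:Foo|Foo]]
//   [[Foo|label]] -> [[wikipedia:Foo|label]]
function pointLinksAtWikipedia(wikitext) {
  return wikitext.replace(/\[\[([^\]|]+)(\|[^\]]*)?\]\]/g, (match, target, label) => {
    if (/^(wikipedia|w|File|Image|Category):/i.test(target)) return match; // leave these alone
    return '[[wikipedia:' + target + (label || '|' + target) + ']]';
  });
}
// pointLinksAtWikipedia('See [[Scuba diving]] and [[Underwater diving|diving]].')
//   -> 'See [[wikipedia:Scuba diving|Scuba diving]] and [[wikipedia:Underwater diving|diving]].'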
Once the links are converted, the wikitext pages could be hosted as an off-site wiki, such as on Fandom, for public access to the outlines.
What good are off-site outlines?
Besides being available as a navigation tool, they could be worked on and improved, until they are ready for a proposal to be returned to the encyclopedia.
Since we are in the middle of a generative-AI technological revolution, general purpose AI tools capable of maintaining the outlines should become available soon (a few years or maybe even months). GPT-3/GPT-4 and Auto-GPT don't quite cut it, but in the next couple versions or so, they might very well.
MediaWiki
MediaWiki, the software Wikipedia runs on, is an incredibly useful program. If you don't have it installed already, I highly recommend it.
I have been using 2 instances of it for years: one as a PIM/personal wiki (beats the hell out of the directory/file system for task management, personal notes, project management, writings, etc.), and one as a library and web lab.
Examples
Here are the archive links for some of the outlines you've made or worked on. The indented ones are for the wikitext versions:
- https://web.archive.org/web/20230923080420/https://wiki.riteme.site/wiki/Outline_of_oceanography
- https://web.archive.org/web/20230927055231/https://wiki.riteme.site/wiki/Outline_of_underwater_diving
- https://web.archive.org/web/20230927055222/https://wiki.riteme.site/wiki/Outline_of_underwater_divers
Wrapping up
I'll let you know when the archiving of the wikitext versions and the link list are done.
If common sense prevails, we won't need to resort to rescuing the outlines.
I hope you found the above post informative, and maybe even useful.
Sincerely, — The Transhumanist 01:03, 28 September 2023 (UTC)
- I thought that if an article is deleted, it remains in the database, just invisible to ordinary readers. I have restored a lot of deleted pages on request, and everything was still there. Are you aware of circumstances in which the article history of a deleted article is actually erased from the database? (other than hostile hacking, and I assume there are backups against that). · · · Peter Southwood (talk): 06:24, 28 September 2023 (UTC)
- I assume you have notified other interested and affected parties, to reduce the risk of this information being lost. None of us are going to be here forever. Cheers, · · · Peter Southwood (talk): 06:41, 28 September 2023 (UTC)
Feedback request: Natural sciences Good Article nomination
Your feedback is requested at Talk:Oak on a "Natural sciences" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 09:31, 1 October 2023 (UTC)
Feedback request: Natural sciences Good Article nomination
Your feedback is requested at Talk:Punctelia on a "Natural sciences" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 19:30, 1 October 2023 (UTC)
Feedback request: Geography and places Good Article nomination
Your feedback is requested at Talk:Kolkata on a "Geography and places" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 01:30, 2 October 2023 (UTC)
Feedback request: Geography and places Good Article nomination
Your feedback is requested at Talk:Utoro, Uji on a "Geography and places" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 08:31, 2 October 2023 (UTC)
Feedback request: Wikipedia policies and guidelines request for comment
Your feedback is requested at Wikipedia:Reliable sources/Noticeboard on a "Wikipedia policies and guidelines" request for comment. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 13:30, 2 October 2023 (UTC)
DYK for Cataract surgery
On 3 October 2023, Did you know was updated with a fact from the article Cataract surgery, which you recently created, substantially expanded, or brought to good article status. The fact was ... that indirect evidence suggests that cataract surgery could have been performed as early as in ancient Egypt? The nomination discussion and review may be seen at Template:Did you know nominations/Cataract surgery. You are welcome to check how many pageviews the nominated article or articles got while on the front page (here's how, Cataract surgery), and the hook may be added to the statistics page after its run on the Main Page has completed. Finally, if you know of an interesting fact from another recently created article, then please feel free to suggest it on the Did you know talk page.
RoySmith (talk) 00:02, 3 October 2023 (UTC)
Feedback request: Natural sciences Good Article nomination
Your feedback is requested at Talk:Pulchrocladia retipora on a "Natural sciences" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 00:30, 3 October 2023 (UTC)
Feedback request: Natural sciences Good Article nomination
Your feedback is requested at Talk:Handicap principle on a "Natural sciences" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 11:30, 4 October 2023 (UTC)
Administrators' newsletter – September 2023
News and updates for administrators from the past month (September 2023).
- An RfC is open regarding amending the paid-contribution disclosure policy to add the following text:
Any administrator soliciting clients for paid Wikipedia-related consulting or advising services not covered by other paid-contribution rules must disclose all clients on their userpage.
- Administrators can now choose to add the user's user page to their watchlist when changing the usergroups for a user. This works both via Special:UserRights and via the API. (T272294)
- The 2023 CheckUser and Oversight appointments process has concluded with the appointment of one new CheckUser.
- Self-nominations for the electoral commission for the 2023 Arbitration Committee elections open on 2 October and close on 8 October.
Templates
I notice from the template discussion I raised at the village pump that you seemed able to help me. If you can help me, that would be excellent. There are similar templates, {{Infobox cricket tour}} and {{Infobox cricket series}}, which set out the style of how I would like the information to be displayed. The information I would like to display is:
- Teams, Dates, Captains
- Test match; result, Most Goal player of the series, etc.
- Fast5; including with the same information as the test series
Please let me know if this is helpful PicturePerfect666 (talk) 16:25, 6 October 2023 (UTC)
Feedback request: All RFCs request for comment
Your feedback is requested at Talk:Kfar Aza massacre on an "All RFCs" request for comment. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 12:31, 13 October 2023 (UTC)
Feedback request: Geography and places Good Article nomination
Your feedback is requested at Talk:Roanoke, Virginia on a "Geography and places" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 19:30, 13 October 2023 (UTC)
Question from EzigboEsquire (18:21, 20 October 2023)
How can I create my own profile? --EzigboEsquire (talk) 18:21, 20 October 2023 (UTC)
- EzigboEsquire, You may not use Wikipedia for advertising yourself, your business, or anything else. Please read WP:What Wikipedia is not. Cheers, · · · Peter Southwood (talk): 19:23, 20 October 2023 (UTC)
Feedback request: Engineering and technology Good Article nomination
Your feedback is requested at Talk:Nicoll Highway collapse on an "Engineering and technology" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 10:30, 22 October 2023 (UTC)
Question from Fripper23 on Draft:Schistosoma hippopotami (16:53, 22 October 2023)
how do I save a draft, without having to publish it? --Fripper23 (talk) 16:53, 22 October 2023 (UTC)
- Hi Fripper23, You click on the publish button, which saves the page in draft space. This happens because you created the article in Draft space (it has the prefix "Draft:"). To publish it in mainspace, the article must be renamed in a process called "moving", which publishes it in article space for the world to see. If you want you could alternatively save your drafts in user space, where they will mainly be ignored until you are ready to move them elsewhere. A lot depends on how soon you want them to be examined by someone else. If you expect the article to be ready soon, draft space is good, as you are more likely to get helpful suggestions. User space is good for something that may take a longer time before it is ready. As the article title suggests it will be about a recognised species, it should be easy to prove notability and should not take long. You are welcome to ping me from the talk page of the article while you are developing it if you want me to take a look. Cheers, · · · Peter Southwood (talk): 17:54, 22 October 2023 (UTC)
- Sorry Fripper23, I should have explained more clearly. When you write a draft in user space it should be a sub-page of your user-page, not the primary user-page, so something like User:Fripper23/Schistosoma hippopotami. Your user page has other uses – see WP:User pages for a more detailed explanation. Cheers, · · · Peter Southwood (talk): 08:37, 23 October 2023 (UTC)
Feedback request: All RFCs request for comment
Your feedback is requested at Talk:Christopher Columbus on an "All RFCs" request for comment. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 15:30, 23 October 2023 (UTC)
Question from Denny1957 (22:44, 25 October 2023)
Hi, I'm really interested in my direct ancestor, Captain John Gallop. I was also thinking of writing about myself and the domestic violence laws that I've had passed in Massachusetts on behalf of victims of domestic violence, and about my sister, Patrick Gordon Murphy, who was brutally murdered by her husband. I've also been on local and national news for over 10 years. I didn't want my sister's murder to be in vain. Thanks for your time and help, Denise Gorham --Denny1957 (talk) 22:44, 25 October 2023 (UTC)
- Denny1957, So what is your question? Cheers · · · Peter Southwood (talk): 05:10, 26 October 2023 (UTC)
Domestic violence
Nobody should ever live in fear of abuse, whether it's verbal, psychological, or physical. I want to shed light on this subject. Denny1957 (talk) 22:47, 25 October 2023 (UTC)
- Denny1957, I don't disagree, but also don't see why you are telling me this on my talk page. Cheers, · · · Peter Southwood (talk): 05:15, 26 October 2023 (UTC)
Perplexity.ai
Last February, I switched over to perplexity.ai, an AI search engine, as my main search engine.
If you don't already use it, you might want to check it out.
Its first main advantage is that upon doing a search, it goes into the pages in the search results and pulls out what you are looking for, and presents its summaries and additional observations to you in natural language. By composing its answers from search results, it bypasses most, but not all, of the risk of hallucinations. On some very fringe tasks where I push the thing to the limit, like forcing it to produce a specified number of entries in lists and tables, it can spew gobbledegook.
Its second major advantage is that it remembers the thread you are in with it, and can interactively converse with you about previous responses, and follow your commands on how to proceed.
The third key benefit is that you can treat a response as a project (answer, document, list, table, program, etc.), and instruct perplexity on how to modify it.
The fourth one is that it can follow a wide range of instructions, not necessarily restricted to its search results. For example, you can give it material to work on, or specify a particular web page for it to summarize, analyze, etc. As an experiment, I had it run a D&D game session as the game master, but it went senile after about 20 responses and forgot what we were doing.
Once you get good at using it, you can override its reliance on the search results alone, and bring the full force of its underlying LLMs to bear.
It's limited to returning only 6 search results at a time, but they tend to be among the best available results, because it uses an additional AI to interpret your prompt and make the search query for you behind the scenes, and it submits its query to yet another AI, an AI-enhanced version of Bing. Perplexity is an amalgamation of several AIs, and they keep adding more to it. If the 6 sources don't cut it, you can request that it try again with new sources. Phrasing a prompt in a different way, or approaching your wish from a different angle, also helps.
Oh yeah, they've added an image search AI, and it's funny that the conversing AI isn't aware of it (yet). So, conversations get weird when it can't find an answer that is sitting right there in the pictures. :)
If you use it, or try it out, let me know what you think of it.
Sincerely, — The Transhumanist 01:03, 28 September 2023 (UTC)
- The Transhumanist, I have used perplexity to find sources. It seemed to do a fairly decent job of that, but I have not used it for anything else. Cheers, · · · Peter Southwood (talk): 06:47, 28 September 2023 (UTC)
- Interesting. Do you enter a search term, or do you tell it specifically to find you sources on a specific thing? In the former case, it will typically build an essay on the subject; in the latter, it will list each source and summarize it.
- It can also follow very specific instructions. For example:
- find me non-Wikipedia sources on the damaging effects of the Great Garbage Patches, and format each one as a transclusion of Wikipedia's template:cite web, enclosing each source in double curly brackets. In each citation, include a quote from the source. Present your answer in MediaWiki wiki text format in a code block.
- Here's what it provided:
* {{cite web |url=https://www.nationalgeographic.com/environment/2018/09/ocean-cleanup-machine-great-pacific-garbage-patch/ |title=The Ocean Cleanup machine is headed to the Great Pacific Garbage Patch |website=National Geographic |access-date=2023-09-28 |quote=The Great Pacific Garbage Patch, a floating mass of plastic trash that is now twice the size of Texas, is growing faster than expected and is now three times the size of France.}}
* {{cite web |url=https://www.nature.com/articles/s41598-019-49413-5 |title=The Great Pacific Garbage Patch: A Quantitative Modelling Approach in Support of the Plastic Mass Estimation |website=Nature |access-date=2023-09-28 |quote=The Great Pacific Garbage Patch (GPGP) is the largest accumulation zone of ocean plastic pollution on Earth.}}
- You can also use it to gather what Wikipedia has on a subject, and present it as a list:
- Make me a list of articles on Wikipedia about great garbage patches, including the URLs in http format.
- Its answer:
- Here is a list of articles on Wikipedia about great garbage patches, including the URLs in http format:
- Great Pacific garbage patch - https://wiki.riteme.site/wiki/Great_Pacific_garbage_patch
- Garbage patch - https://wiki.riteme.site/wiki/Garbage_patch
- North Atlantic garbage patch - https://wiki.riteme.site/wiki/North_Atlantic_garbage_patch
- Garbage Patch State - https://wiki.riteme.site/wiki/Garbage_Patch_State
- Great Pacific Garbage Patch - https://www.nationalgeographic.org/encyclopedia/great-pacific-garbage-patch/
- Indian Ocean garbage patch - https://wiki.riteme.site/wiki/Indian_Ocean_garbage_patch
- These articles provide information on various garbage patches found in different oceans, including the Great Pacific garbage patch, North Atlantic garbage patch, Indian Ocean garbage patch, and more. They describe the causes, effects, and characteristics of these garbage patches, as well as efforts to raise awareness and clean up the debris.
- You can iteratively have it build upon the list, until you reach the program's buffer limit.
- I use it as a general purpose tool on a great many tasks. If you have any questions, feel free to ask. But, you might get better results asking it. :) Keep in mind that questions to it about ChatGPT are more fruitful than about perplexity.ai, as they both run on the same LLM (GPT-3.5), and there is a lot more coverage out there on ChatGPT. In comparison, perplexity is relatively unheard of. For laughs, if you ask it to tell you about itself, it will be GPT-3.5, and not perplexity, writing the response, though the perplexity program sometimes interjects standard (and conflicting) answers, which can make the app come across as a bit schizophrenic. ;)
- If you run into obstacles or glitches, I may be able to help you find ways around them.
- Sincerely, — The Transhumanist 08:31, 28 September 2023 (UTC)
- Why am I not surprised... ;-) I tend to ask things like "list reliable sources on topic X" and refine from there. Cheers, · · · Peter Southwood (talk): 09:27, 28 September 2023 (UTC)
- Swapping out adjectives often produces different search results, and is useful for when you want to dig deep into a subject's online sources. I haven't tried "reliable" yet. So far, I've focused on "accurate" and "recent". Thanks for the tip. — The Transhumanist 20:32, 24 October 2023 (UTC)
- Not surprised, about what? I covered a lot of territory in that response. — The Transhumanist 20:32, 24 October 2023 (UTC)
- I don't remember in detail, probably just felt like this was generically unsurprising considering the widespread interest in LLMs, and other stuff I have read lately.
Recent is not particularly likely to be a big deal with the stuff I work on, but for some topics it is quite important. It would be interesting to know whether reliable and accurate produce different results; my guess would be very similar. It could be a bit disturbing if there is a big difference. · · · Peter Southwood (talk): 05:06, 25 October 2023 (UTC)
Not a killer app, yet
It's not quite a killer app, in my opinion. It takes quite a bit of sussing to get it to do what you want. Once you figure out how to do a specific thing, it continues to do so fairly consistently, which adds another tool to your toolbox, but the learning curve is steep.
I'd say that they are on the verge of becoming killer apps: both GPT and perplexity. About 1 in 5 responses are so good that they rival highly literate articulation. They can be downright insightful. It is well within the realm of possibility that this will increase to 9 in 10 responses within a year or two. We will probably see this kind of productivity increase with the next generation of generative AI apps, including GPT-5. There is a mad dash going on between the tech behemoths to produce competent general purpose AI, and toward that end, they are combining different types of AI (such as reasoning engines) to produce LLM hybrids. David Ferrucci (the father of IBM's Watson AI) is now leading his own team that is working on this very approach.
Another area of development is in the arena of emergent abilities. The current mishmash of algorithms is so complex and advanced that when you increase the size and quality of their training data sets, all of a sudden they can do things that nobody expected. Reasoning appears to be one of those things, but there is heated debate over whether this is merely appearance ("pseudoreasoning") or the real thing. Memes in recorded language have meaning built-in, per semiotics, and when you juggle and concatenate memes, you are forming new thoughts. Does it have to be aware of what it is doing for it to be considered true thought, or does it just have to be the right thought for the circumstances? One could go bald thinking about this too much (due to pulling out one's own hair in frustration). Other emergent abilities that are much more obvious are translation to yet another language, composition in still another programming language, calculation of math problems, and increased writing skill and articulation.
One of the most mind-blowing emergent skills so far, is memory. LLMs have figured out how to create and use their own short term memory storage. This has got researchers really scratching their heads. It's a truly WTF moment in history. And scary. Because it is an obvious building block of higher intelligence. It's almost as if we are witnessing the development of a new type of mind in real time. Which leads to the question: "Will awareness spring into existence at some point?" We can only guess, and that guess is likely "it's inevitable".
Therefore, AI companies big and small are endeavoring to build their data sets with almost everything digital they can get their hands on. Things to watch for are acquisitions of and collaborations with publishers, libraries, museums, film production companies and studios, news corporations, websites, and other owners of collections of IP, and of data communication and collection systems (such as satellites).
Ray Kurzweil forecasted that we'd have the first artificial general intelligence by 2029. For him, that is as much a goal as it is a forecast, as he is still actively involved with developing it. He and the rest of the field appear to be on track with that goal/prediction. Computers surpassed the physical capacity of the human brain years ago. Now, achieving general intelligence may be just a matter of developing adequate software (including data stores) for those systems. And that can happen at any time, through new algorithms, emergent abilities, or the combining of various AI modules and types. We are also seeing assisted recursive self-improvement, where AIs trained to write code are being applied by developers to improve the programming of AI. Taking developers out of that loop will be a major sign that AGI has arrived or is about to.
Thus, the world is not only on the verge of becoming home to general purpose AI, but to AGI as well. The former will likely precede the latter by only a few months or years. And since technological advancement is a geometric rather than a linear progression, i.e., it's accelerating, achieving AGI by 2029 is highly feasible.
What does this mean for Wikipedia and people?
Positioning is paramount.
We can either gather and apply AI resources, or be left behind.
So, I'm looking forward to the development of killer apps that can build Wikipedia. If Wikipedia does not take advantage of them fast enough, then another encyclopedia or reference hybrid likely will, and leapfrog Wikipedia, to become the world's preeminent encyclopedia. A fully automated, or nearly fully automated, encyclopedia could overtake Wikipedia in volume within a few weeks of being initiated.
Or we may find that forming encyclopedic knowledge and the building of an encyclopedic data set are emergent abilities. LLM hybrids may become encyclopedias by default, if they haven't already.
We are in for some interesting times.
Sincerely, — The Transhumanist 20:32, 24 October 2023 (UTC)
- The Transhumanist, We could be looking at citogenesis writ large if the LLMs get into learning from material previously produced by other LLMs. I wonder if there is any way to prevent that from happening, since there does not seem to be any reliable way of identifying AI products or distinguishing them from human generated material. How does an AI detect or assess accuracy or reliability? How does it do its fact-checking? Is it repeatable? Is it transparent? I consider those some of the important questions as yet unanswered. As you say, interesting times. Cheers, · · · Peter Southwood (talk): 05:06, 25 October 2023 (UTC)
- You have the citogenesis problem even with a single LLM that has training data that it itself has edited. For example if GPT-5 has Wikipedia in its training data that was edited in part by GPT-3, GPT-3.5, and GPT-4. Hallucinatory material could get amplified in that way. In the near future, training data will probably become dynamic, meaning that it will be added to for existing LLMs continuously in real time, rather than expanded only between versions. That would create a more direct feedback loop.
- The problem will likely be further compounded by human editors across the Web using AI on purpose to generate fictional news and mock non-fiction web pages. Mass produced misinformation at levels way beyond what we've seen before. Organized cyber propaganda campaigns and criminal scams (fake business websites selling products and services that don't exist, or misinformation sites set up to attract traffic and host real ads) could be the worst of it, as it would be focused on manipulation of people in the real world, including stealing their money, and their votes. Imagine all that getting included in AI training data.
- Training data sets are also being developed for robots, which in addition to having informational data so that they can communicate, also have data that controls their actions based on the informational data. Robots are being designed to receive natural language and act on it. So, now you have the potential for robots to be deceived by fake news, just like people, and way way worse (fake training data).
- The obvious solution is to develop AIs for fact checking, but that runs into the same problem humans have in discovering citogenesis: how do you verify that news articles originally got their information from Wikipedia, or an AI, if they don't cite Wikipedia, or the AI, as the source?
- The problem of journalists reporting from unverified sources just got worse. If they are willing to report info from Wikipedia, are they also likely to report information they got directly from an AI, such as from AI search engine summaries (without looking at the sources) or a chatbot?
- LLMs will have to adopt Wikipedia-like policies in general, and use white lists and black lists to filter data before it goes into their training data. AIs will be needed to identify, monitor, and remove misinformation from training data, and before it has a chance to get in there. Hopefully, it will be only misinformation that they remove.
- Think about this: It's unregulated corporations controlling the training data. :)
- One good thing about dynamic training data sets is that errors can be removed right away, rather than waiting for the next version. Just like Wikipedia.
- In order to protect training data sets, fact and error checking will increasingly become cybersecurity issues and design features.
- I've come to the conclusion that it will be the training data sets themselves that will obsolete Wikipedia, unless Wikipedia becomes an AI.
- The main danger to humans isn't that an AI will be out to get us. We'll just become obsolete. This is what has inspired the 2 main coping strategies—augment humans with computer components and build friendly AIs.
- I have to go now, I'm getting dizzy just thinking about this stuff. — The Transhumanist 17:07, 25 October 2023 (UTC)
- The Transhumanist, Yes to all of this. What worries me is that if this is as obvious as it seems to be, how is it that the developers either do not see it or do not care? Or are they playing a long game where they think exposing the world to the current state of the art will alert it to the risks at a stage where it is still possible to do something about them? Sort of Y2K all over again? Cheers, · · · Peter Southwood (talk): 05:03, 26 October 2023 (UTC)
- The overall danger (AGI, artificial general intelligence) is as obvious as it seems to be. We've been warned, by many highly intelligent people in the field. I think there have been at least 4 open letters so far (2 in 2015, and 2 in 2023), the last of which pleaded that we treat AI with the same care as nukes. The world's leaders took notice, in science, business, and politics, and both the upside and the downside have been explained to them. (See the last couple of years in the Timeline of artificial intelligence). Anybody who is interested can read all about the risks on Wikipedia—it is explained there very well.
We don't appear to be in immediate danger of AGI. It's the intermediary danger that's the big variable, intermingled with great potential: there is an unknown amount of technology between what we have now and AGI. And what we have now has proven to be extremely powerful already. So, naturally, businessmen, politicians, and scientists want to push the envelope some more to bring forth AI applications that will be useful to them and to society. Risk is an inherent part of their jobs. Somehow, they must weigh the risks against potential benefits, such as all the maladies that could be cured using AI, and all the societal problems that could be solved (poverty, crime, climate change, etc.).
AI is powerful. AGI will probably be too powerful. If you can put on the brakes and stop just short of AGI, you can reap all the benefits weak AI has to offer.
Meanwhile, there may be some who believe we could control AGIs or at least get along with them. There are those who figure they won't be around long enough for it to be a problem for them, so why not make a fortune with AI and live large. And there are those who believe that man must merge with machines, and they welcome AGI as the next stage in evolution. They aren't going to stop no matter what.
At the same time, we live in a balkanized world. Nobody can trust that everybody else will stop development; secret labs, that kind of thing. From governments' points of view, they can't afford to allow their enemies to get ahead in AI: not in military applications, not in economic (financial, industrial, space, medical, etc.) applications, not in administrative applications. Therefore, development is allowed to continue.
For those in development, the stakes are high. The economic potential of AI is astronomical, and it is the obvious way forward for computer companies. Those in the forefront are all racing as fast as they can, so as not to be beaten out by the competition. They are the ones with the brake pedals, but their feet are on the accelerators all the way to the floor. It'll be up to governments to enforce safeguards, when the time comes.
When will the time come? That's anybody's guess. If this sounds a lot like Russian roulette, that's because it is, on a world scale. But so is having enough nukes to wipe out civilization several times over. And so is burning oil and the Amazon Rain Forest in the face of climate change. Society has been holding that metaphorical gun to its own head for a long time. *Click* *Click* *Click* — The Transhumanist 07:34, 26 October 2023 (UTC)
- And governments are so good at enforcing safeguards.... · · · Peter Southwood (talk): 10:12, 26 October 2023 (UTC)
- PS, You should spin the cylinder between shots in Russian roulette. · · · Peter Southwood (talk): 10:12, 26 October 2023 (UTC)
- Good point. As for the state of affairs pertaining to AI, my stance is that there is a window of opportunity open right now in which many AI resources are available to the general public. That window will not likely remain open for long, relatively speaking. Legislation that will probably hinder open source AI components and their support organizations is underway in Europe, and the US will likely soon follow. There are ample AI chatbots and search engines available for free at the moment, but once they reach their traffic or user base goals, pricing would be the next logical step. Industry consolidation will also affect availability, as competition will be reduced by acquisition. There are also several lawsuits underway aimed at restricting AI companies' use of everyone's data for free, especially source code and know-how from programming forums. Meanwhile, news agencies and other websites are wising up to their data being crawled and used in training data without permission or compensation, and they are beginning to restrict access to their data by AI companies' web crawlers and through changes to their terms of service. These and other forces will combine to reduce availability and increase the price.
The most significant capacity of chatbots right now is their ability to write program source code based on the user's description of what he or she wants the program (and its subroutines, i.e., functions) to do. This allows non-programmers to program!!! Limited only by their imagination!!! This is the main opportunity available right now: to come up with program ideas and collaborate with an AI to create the programs.
Once a person creates a program or two using this approach, possibilities expand with increased familiarity. One gets a better feel for what the AI agent can and cannot do.
Programs can increase capabilities and productivity, and they can be monetized. It would cost you between $50K and $100K per year to hire a programmer to write such programs.
My guess is that the window will be open for about another year, with access shrinking, or growing more expensive, after that. Hopefully, I'm wrong on that point. The longer the window remains open, the greater the opportunity.
The window applies to Wikipedia as well. Now is the best time for volunteers to create user scripts, Lua modules, advanced templates, etc. Perplexity.ai can also be used to explain existing code, including wiki-markup, Lua, JavaScript, HTML, and CSS. — The Transhumanist 14:29, 26 October 2023 (UTC)
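To tie that back to the archiving thread above, here is a minimal user-script sketch (mw.loader.using and mw.util.addPortletLink are standard MediaWiki APIs; the link label is arbitrary) that does the same job as the first bookmarklet, from the sidebar:
// Add a toolbox link that opens the Wayback Machine's archive listing
// for the current page. Goes in Special:MyPage/common.js.
mw.loader.using('mediawiki.util').then(function () {
  mw.util.addPortletLink(
    'p-tb',                                           // the sidebar toolbox
    'https://web.archive.org/web/*/' + location.href, // same pattern as the bookmarklet
    'Wayback archives'
  );
});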
Feedback request: Engineering and technology Good Article nomination
Your feedback is requested at Talk:Perth Underground railway station on an "Engineering and technology" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 00:30, 27 October 2023 (UTC)
Feedback request: All RFCs request for comment
Your feedback is requested at Talk:Germany on an "All RFCs" request for comment. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 02:30, 28 October 2023 (UTC)
November Articles for creation backlog drive
Hello Pbsouthwood:
WikiProject Articles for creation is holding a month-long Backlog Drive!
The goal of this drive is to reduce the backlog of unreviewed drafts to less than 2 months of outstanding reviews, down from the current 4+ months. Bonus points will be given for reviewing drafts that have been waiting more than 30 days. The drive is running from 1 November 2023 through 30 November 2023.
You may find Category:AfC pending submissions by age or other categories and sorting helpful.
Barnstars will be given out as awards at the end of the drive.
Feedback request: Natural sciences Good Article nomination
Your feedback is requested at Talk:Hurricane Dora (2023) on a "Natural sciences" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 18:30, 1 November 2023 (UTC)
Feedback request: Natural sciences Good Article nomination
Your feedback is requested at Talk:Congenital adrenal hyperplasia due to 21-hydroxylase deficiency on a "Natural sciences" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 19:30, 1 November 2023 (UTC)
Feedback request: Natural sciences Good Article nomination
Your feedback is requested at Talk:Orange-billed lorikeet on a "Natural sciences" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 10:30, 2 November 2023 (UTC)
Feedback request: Engineering and technology Good Article nomination
Your feedback is requested at Talk:Cross-site leaks on an "Engineering and technology" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 21:30, 2 November 2023 (UTC)
Feedback request: Geography and places Good Article nomination
Your feedback is requested at Talk:Majuli on a "Geography and places" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 23:30, 2 November 2023 (UTC)
Request
I've started a revamp of Wikipedia:Tools/Optimum tool set.
Please take a look and let me know if there are any essential techniques or must have tools that you think should be included.
Thank you.
Sincerely, — The Transhumanist 06:37, 5 November 2023 (UTC)
Overdue
This has been a long time coming...
The Multi-Outline Barnstar
is hereby awarded to Pbsouthwood, for the creation, development, and maintenance of:
for work on: and for extensive work on: Keep up the excellent work! — The Transhumanist 08:35, 6 November 2023 (UTC)
And you have been added to the Outline WikiProject's Hall of Recognition — The Transhumanist 09:45, 6 November 2023 (UTC)
- Thanks, it is not necessary, but it is appreciated. Cheers, · · · Peter Southwood (talk): 10:49, 6 November 2023 (UTC)
Feedback request: Engineering and technology Good Article nomination
Your feedback is requested at Talk:Cross-site leaks on an "Engineering and technology" Good Article nomination. Thank you for helping out!
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name.
Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 13:30, 6 November 2023 (UTC)