Talk:AI capability control
This article is rated B-class on Wikipedia's content assessment scale.
Tags
An editor who appears to have a COI shouldn't be just blithely removing tags. Fix the issues noted in them, else it just comes across as whitewashing - David Gerard (talk) 13:08, 12 May 2014 (UTC)
David, please assume good faith. There's no whitewashing or conflict of interest here. I think the I.J. Good / MIRI conception of an Intelligence Explosion and non-friendly singleton AGI is profoundly mistaken. My only published work on the topic has been entirely critical. I'd considered adding a reference to, e.g., Stephen Hawking's Sunday Independent article (cf. http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html). Do feel free to cite it (or some similar item) if you think the topic needs Hawking's imprimatur. --Davidcpearce (talk) 13:57, 12 May 2014 (UTC)
Problem with title of page
The article is exclusively focused on a physical AI box, but the title 'AI capability control' can refer generally to hardware, programming, and social measures. The title needs to be AI box, i.e., it requires a page move. I intend to change the title to AI box within 48 hours unless someone else does. Johncdraper (talk) 12:23, 18 September 2022 (UTC)
- Add: Okay, I can see that capability control is now being used for more than boxing, but still mainly for 'physical' hardware approaches, apart from oracle AI, which is a combination of hardware, software, and wetware options. Still, 'capability control' is rarely used in the academic literature. A search of the SCOPUS database using TITLE-ABS-KEY ( "capability control" AND ai OR "artificial intelligence" ) yields two articles, both Armstrong oracle articles. Unless there are any major textbooks using the term, we should be using something like 'Hardware approaches to constraining the existential risk from AGI', or similar terms used by the published AGI risk taxonomists. Johncdraper (talk) 16:47, 18 September 2022 (UTC)
- Here is what I mean by the term used by some of the AGI taxonomists: Kaj Sotala and Roman V. Yampolskiy, 2015, Phys. Scr. 90 018001. They use the terms 'AGI confinement' and 'AI confinement problem', which Yampolskiy uses in four articles found via an equivalent SCOPUS search. This is also the term found in various books, e.g., Callaghan, Miller, & Yampolskiy, 2017; Dawson, Eltayeb, and Omar, 2016; Yampolskiy, 2015; with Abbas, 2019, listing AI confinement and AI boxing in the index. Interestingly, Müller, in 2012, lists AI confinement in addition to mentioning AI boxing as a form of AI capability control. I don't have access to Müller, so I cannot quite see the hierarchy of these terms, especially with regard to AI confinement. However, this all raises another issue: do we mean AI capability control or AGI capability control? Müller referred to OAI (oracle AI)... Johncdraper (talk) 08:30, 19 September 2022 (UTC)
- Another add: So, I have found a few references in textbooks (other than to the capability approach in technology, which is an established subject). In 2012, Vincent C. Müller included a subsection on capability control, focusing on physical constraints and epistemic measures (oracle AI). Interestingly, I also found an early (1985) Oak Ridge National Laboratory Review mention of learning capability control of AI. In 2021, Andrew Leigh referred to the "capability control" approach in double quote marks. Chinen in 2019 also mentioned it, and Bostrom mentioned it in 2012. Dawson, Eltayeb, and Omar also referenced it. There are a few other scattered mentions, but I can't see any books on it, or chapters, or book sections. That might just be because Bostrom's (?) classification of capability control and motivation control is not that widespread, or it might just reflect that AGI existential risk management is not that widespread. Following this review, I'm tending to think "AI capability control" is the correct term, but I'm not yet convinced it can hold its own as an article title. The article may benefit from an etymology/history section. Johncdraper (talk) 17:53, 18 September 2022 (UTC)
- Final add: After much reading, I'm finally satisfied that the article title is correct. However, I'm also sure that AI confinement should be included. So, I've added it as a broadly synonymous term to the lede. Johncdraper (talk) 11:43, 20 September 2022 (UTC)