Talk:Artificial intelligence/Archive 1


DO NOT EDIT OR POST REPLIES TO THIS PAGE. THIS PAGE IS AN ARCHIVE.

This archive page covers approximately the dates between DATE and DATE.

Post replies to the main talk page, copying or summarizing the section you are replying to if necessary.

Please add new archivals to Talk:Artificial intelligence/Archive02. (See Wikipedia:How to archive a talk page.) Thank you. moxon 01:20, 21 October 2005 (UTC)[reply]

Missing Topics


Some of the more technical parts of AI are missing, such as links to rule-based languages, fuzzy logic, the Rete algorithm, forward chaining, backward chaining, expert systems, the perceptron, neural networks, simulated annealing, etc.

My suggestion is to add a subtopic such as "AI Implementation" or "AI Technology".
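One of those topics is concrete enough to sketch inline. Below is a toy forward-chaining loop in Python - a minimal sketch for illustration only, not any particular rule language; the facts and rules are invented:

```python
# Toy forward chaining: repeatedly fire any rule whose premises are
# all known facts, until no new facts can be derived.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_penguin"),
]
facts = {"has_feathers", "lays_eggs", "cannot_fly"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # fire the rule, assert its conclusion
            changed = True

print(facts)  # now also contains "is_bird" and "is_penguin"
```

A backward chainer works the other way around, starting from a goal and searching for rules that could establish it.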

Ethical problems


I don't know if you'd agree, but I think the original vision of AI has by now been thoroughly discredited by ethical problems. Creatures that satisfy the original definition of "intelligence", e.g. Great Apes, are not accorded the respect of personhood. Meanwhile, stupid programs of bad research continue to propagate themselves due to funding inertia and influence of top researchers like Minsky, who haven't produced anything worth a damn in years. Welcome to tenure, I guess, but the people building robot insects or hooking up humans into cyborg colonies (whoops, forgot to mention Steve Mann under collective intelligence) or talking about augmenting apes with speech synthesizers just don't believe any of the nonsense that Minsky believed...

Steve Mann is a moron with cool toys. Minsky, on the other hand, was a visionary.

Great Apes


Deception is the key difference between humans and Great Apes. Like four-year-old children, Great Apes do not have a "theory of mind" that enables them to lie *convincingly* by imagining how the other will think things are. Human children acquire this at four and a half or so. Great Apes never do. They lie very badly.

The thing called "intelligence" seems to me to be a combination of perception, planning, empathy, cognition and deception. One decides for oneself which to test for, and what to emphasize, and what to extinct. AI is a nonsense goal, just "infinite symbol manipulation" really, some of which symbols are maybe good enough maps to walk over rough terrain in a robot insect, but none of which are good enough to deceive a suspicious adult human.

Specific nitpicks


Firstly, thanks for the edits. The argument is now much more readable.

I still have some serious problems with the article as it stands, however.

Firstly, some specific nitpicks:

good, I'll answer in depth, although we might want to fix cognitive bias first as some of these same issues are mentioned there... relating AI and cogsci is easy enough, relating AI to cognitive *bias* might be harder... may require good articles on culture bias and notation bias first... personally I believe strongly that recognition that "oh, that's intelligent" is a combination of cognitive, culture, and notation bias. That we will give up the word and concept soon when we realize we're engaged in a hate exercise. So, my bias here revealed, here's why I believe that:
Turing also helped author the Church-Turing Thesis, an important advance in the philosophy of mathematics, which seems to imply that adult human linguistic or symbolic intelligence can be no more complex than the process of creating a mathematical proof itself.
I believe Hofstadter and Daniel Dennett claimed that this was one implication of the CTT - that to understand "intelligence" as understood in the Western world we had to understand the process of mathematical proving and discrediting of proofs... as cognitive, social or notational/formal as that may be. This would not be a controversial position w.r.t. proofs and intelligence given other beliefs in the philosophy of mathematics - and the experience of Erdős, which seems to prove that mathematics is at least to some degree "a social activity". So the bad word here may be "creating"... it's maybe "discovering", or "inspecting", or "discrediting"... is "creating" a sum of these? Hmm...

Modern theorists often reject the assertion that this, or playing chess, is in fact what humans mean when they recognize each other as being conscious, wise, or aware. Turing's Test highlights these questions by suggesting that adult humans perhaps assume too much based on mere language - while paradoxically rejecting or ignoring the intelligence of Great Apes, who can master 2000-4000 word vocabularies.

Firstly, all the Church-Turing thesis actually says is that anything that can be effectively computed, can be computed on a Turing machine. Many people do draw the implication that anything the brain "computes" can be computed on a Turing machine, so any limitations of a Turing machine are also limitations of the brain. Beyond that, I can't see your contention. Your earlier

It's the relationship between what mathematicians and scientists understand as "computed" versus what living creatures with bodies walking or swimming around with other bodies in ecologies would say had been successfully decided. Difference between "decided" and "computed" being bodily commitment. Key distinction made by body philosophers back to Wittgenstein. Turing and Wittgenstein talked about this but in terms most people don't seem to understand as being "about this"...

comment that "all forms of symbolic or linguistic intelligence [are] equivalent to the Turing Machine, which in turn was equivalent to 'mathematicians doing proofs in the usual way'" isn't directly reflected in the article, and the second part of that comment isn't obvious to me. Could you spell it out for me really slowly and clearly?

I'd best dig for my Locke, e.g. "...Human Understanding": "Men often stay not warily to examine the agreement or disagreement of two ideas which they are desirous or concerned to know; but, either incapable of such attention as is requisite in a long train of gradations, or impatient of delay, lightly cast their eyes on, or wholly pass by the proofs; and so, without making out the demonstration, determine of the agreement or disagreement of two ideas, as it were by a view of them as they are at a distance, and take it to be the one or the other, as seems most likely to them upon such a loose survey. This faculty of the mind, when it is exercised immediately about things, is called judgment; when about truths delivered in words, is most commonly called assent or dissent"
I guess if you believe Locke, it's obvious that judgement is involved in the acceptance of the CTT or any mathematical proof of same. Assent or dissent to it by any given mathematician, "delivered in words" or otherwise made clear. So, whose judgement is involved in determining if something is intelligent... in science, largely, the mathematician, or the experimenter, or the *funder* (an underexplored problem only now getting serious attention). So:
Hmmm. Whilst it doesn't appear to be crucial for your argument, if I'm reading your comment correctly you are misunderstanding the CTT. It is a thesis, not a theorem. It can never be proven.
"It can never be proven"? Well, perhaps Dennet and Hofstader intended to make it into a theorem. Parallel to what happened in Gaia theory when the Gaia Hypothesis (untestable to any ethical player) evolved into Gaia Theory (proper). It isn't crucial to this argument, but I think a thesis that cannot be proven is not a thesis at all... no falsifiability?
the funding and experimenting decisions we would recognize as guided by some very-hard-to-formalize situational and bodily ethics and constraints... leaving only the mathematician to decide if something is "intelligent" without such obvious constraints... his is the most "neutral point of view".
the fact that they are "the experts" on logic and proofs, and decide if any given evidence has invalidated a thesis via mathematical prediction, makes the community of mathematicians and its collective assent or dissent critical. How do they "commune"? "By doing proofs in the usual way", e.g. the social way of Erdős, the inspirational way of Galois, the instructional way of Euler... "Read Euler, he is the master of us all" - advice to Galois.

Secondly, who are "modern theorists", and what is the "this" in the phrase "the assertion that this"?

"this" being "symbolic or linguistic intelligence" of adult (>5 years anyway) humans. The "modern theorists" are sometimes called post-linguists, many are primatologists or others doing field studies with animal subjects, or building insect robots, etc. - basically a mixed bag of people who reject the Turing test, consider Great Apes to be "people" in every moral/emotional sense and are pushing for this legal status (http://personhood.org), *OR* are determined to work only on survival-type intelligence like that of insects... ignoring language as irrelevant. Chomsky sits firmly in the linguistic camp. However Chomsky also thinks Great Apes aren't "intelligent" in the sense of humans, which seems like "whatever computers haven't done yet, and Great Apes can't do" type of human racism. Best known people here? Those concerned with timing in language, like Goffman,


Thirdly, whilst it's hardly a peer-reviewed academic journal, an ABC News article credits chimpanzees with a 240-word vocabulary, an order of magnitude less than the 2000-4000 quoted in the article (a stat that tallies with my own recollection of the topic). Furthermore, from what I remember of my undergraduate psychology studies, signing great apes can't construct actual sentences - the best they can do is possibly construct two-word phrases - "tickle me" and "feed me" being by far the most common :)

seems to be limited to direct-object (mostly two-word) verb phrases; old studies gave rise to the figure "125", a more modern one lists 150-1000, and is written by anti-personhood experimenters. Seems to argue the equivalent of a two-year-old human child's skills - while the advocates argue that they're more like four-year-olds. The famous Koko the gorilla had 1000 words in ASL. One group claims that adult orang-utans could master 2000 words ("he already has a 2000-word vocabulary in sign language") and is shifting apparently to satisfy skeptics; bonobos trained from childhood can master 4000, according to the people doing the keyboard work. Can't validate the 4000 - maybe they withdrew it until they can satisfy all the skeptics - or maybe they projected that number based on comparisons of early progress? It does seem to require intensive training to get this far. There seem to be no challengers to Koko's claim to naming and simple direct verb-object skills. There is some question whether she can invent words. But not all of this is interesting to AI except insofar as differences between species may eventually tell us much about cognitive skills of the highest order of living creatures closest to us... but OK, enough: let's limit claims to the 150-1000 and note the "disputed claims of 2000 or more" arising from sign language.

What difference does it make to AI whether chimps, gorillas, dolphins or parrots are "intelligent" or not? It might matter to ethicists, theologians, and psychologists, but to me it seems of little philosophical import to the practice of building systems to solve problems which computers currently don't do very well but which humans (and to a large extent animals) do well, which to me seems to be the practical goal of AI.

to have "artificial intelligence" the naive mind expects you must be able to measure or recognize "intelligence". this is a good summary of non-linguistic "intelligence" that apes share with humans
Ah, this is where I disagree with the "naive mind". Consider a machine translation system. Who really cares whether it's intelligent or not, as long as it accurately translates prose from the source to the target language? The definitional arguments are certainly fascinating, but you can do lots of very useful and interesting things without getting caught up in them.
who is judging "accurately"? Avoiding these arguments is utterly unethical - it leaves the realm of science and enters mere technologies of persuasion to get more funding to do more persuasion... If you submit a message direct from God that is seeming gibberish, and a program translates it as "Vote Green Or Die", that is quite good enough for me...

As to your "ethical argument", you're absolutely right - I disagree. Whilst the "personhood" of the great apes is certainly open to debate, it has no relevance to AI research. As to the frankly disappointing research of AI research to date, that is certainly true - AI research haves't achieved nearly as much as many thought it would. However, I don't see that it follows that AI is fundamentally impossible.

some scientists call this a moral hazard, others an opportunity: extinct the culture that wild chimps and gorillas live in, and then you don't have to address their wild intelligence either - you can define intelligence to get grant money for whatever you can convince another human "intelligence" is... while I would say that preserving those cultures to understand intelligence as such is an opportunity, and probably driving the falsifiability disputes.
I think AI "progress" is "disappointing" because it sets up a false goal - deception itself - honest assessments of intelligence would set the Great Apes up as benchmarks and assume that humans are the deluded ones making up criteria for their own prestige (as a species, or as researchers specializing in that criteria).
Finally, AI that doesn't respect nearly-human creatures won't respect us either - it's not like we can patch in a moral code when we notice that it wants to slaughter us all as we are slaughtering apes... has to be part of the foundation ontology to recognize certain empathic common grounds... so this is an extension of the insect-makers' argument that you must solve the problems of getting around, finding food, getting along with others of your kind who find the same food in the same place (and maybe fight over it) before you can look at these absurdly abstract problems like chess or "go"... which are meaningless as tests of anything a living being would care about.
WRT your discussion of "whether a system should be described as intelligent" or not, you can debate until the cows come home whether something is truly "intelligent", but, for many AI researchers, there is a big shrug of "who cares"? If it does something useful, isn't that significant in itself?
useful *TO WHOM*? If I write a program to rationalize absurd corporate structures that loot shareholders blind and still satisfy the auditors, then the significance is not scientific nor "usefulness" in the sense of creating value for living beings with real life concerns. Those AI "researchers" who say "who cares" are simply criminal frauds, whose entire career is an aspiration to be Andersen... the word "persuaders" is far more appropriate for such thugs...
I'm a living being, and I care about chess :) More seriously, AI researchers lost interest in chess a very long time ago, as it became obvious that brute-force search, a technique basically useless for the more real-world problems, is the most effective method for computer chess. See the discussion I wrote on this topic in the chess article.
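(For readers wondering what "brute-force search" means concretely: it is plain exhaustive game-tree search, i.e. minimax. A minimal sketch in Python on a toy game - single-heap Nim, take 1-3 stones, last stone wins - since real chess move generation would run to pages:)

```python
# Negamax: exhaustively evaluate every line of play. The score is
# +1 if the player to move can force a win, -1 otherwise.
def negamax(stones):
    if stones == 0:
        return -1  # no stones left: the previous player just won
    return max(-negamax(stones - take)
               for take in (1, 2, 3) if take <= stones)

print(negamax(4))  # -1: four stones is a forced loss for the mover
print(negamax(5))  # +1: take one stone, leaving the opponent four
```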
I once asked a fellow on an elevator "why play chess if machines play it better?" He answered "because the machine can't talk about the game afterwards". It was the best answer ever, and I did not catch his name. I submit that you as a living being care about chess because it distracts from the problems of living...

"why do you drink?" - The Little Prince "to forget" - The Tippler "what must you forget?" "that I drink!"

As to instilling morals in AI systems, that kind of debate is so far ahead of the capabilities of current systems as to be essentially irrelevant at this time.
Two options: 1. ignore moral concerns and the nurturing of nascent intelligences by mothers, and wait until they are capable of building their own fusion reactors and extruding their own DNA sequencers; or 2. ban all smart-as-primate-or-better AI not raised by a real living primate mother, and work out the problems well in advance. Lee Kent suggested this in May 2001, long before AI-teleprompting salesmen persuading suicide bombers was in vogue: "I believe the most logical solution is to parent such a machine. Have a 'mother' bond with the machine and vice-versa. It may sound ridiculous to think of it this way but it is the way humans develop and if we wish to protect the machine from a wrong direction in progression then we have to provide an example that means something to it. And anyway its nothing more than a brand manager's job only taking the job very seriously, applying a human perspective to the machine."

I submit that The Precautionary Principle argues strongly for Lee's view.

More generally


OK, there's the nitpicks out of the way.

More generally, I find the tone of the article overly negative, and its view of AI somewhat narrow. There are quite a few alternative definitions of what AI is about, and most workers in the area are focussed on modest tasks and, over

perhaps a comparison chart that sets up "ant, bee, bird, rabbit, lesser ape, great ape, human" degrees of intelligence and uses the criteria here for non-linguistic intelligence? That seems very non-controversial, although lots of work, and a good balance to the Turing Test arguments (which have the merit of simplicity and honest reflections of how humans deem each other intelligent in conversation).
Non-controversial? Gimme a break! In any case, I go back to the question I posed before: this might be all very relevant to a discussion of *intelligence*, but how does it specifically relate to *artificial intelligence*, the topic of this article?
because the insect-robot-builders, ape-augmenters, collective-intelligencers, etc, are all different responses/objections to linguistic AI as defined by Turing (perfectly in my view, the Turing Test is an ontological thesis and has cosmology implications if you believe in The Daniel Test... AIs that convince you they are God and end the test by turning you into a suicide bomber to get them more power).

the decades, made real progress at them. I intend to add considerably more material on this later, at which point I hope to continue the discussions with you guys :) --Robert Merkel

it's all a question of how we structure it, and how we compare ecological, sexual/linguistic, ethical, or moral/restraint levels of intelligence - and also if we think 'evolution is intelligence and intelligence is evolution' as Karl Popper seemed to think...
Hmmm. Can I just suggest that while the "philosophy of AI" is a topic worthy of extensive discussion, there's much more to the field than that? I think that sums up my biggest objection to the current tone of this article. --Robert Merkel
"theology of AI" would be better as the concerns tend to be moral, ontological, and cosmological... and have nothing to do with ethics, epistemology, or metaphysics... use the "theology" breakdown when concerned with role of life forms in universe, use the "philosophy" breakdown when concerned with getting tenure. :-)

Removal of assertion (context not given)


Removed from the article:

, who can master 2000-4000 word vocabularies

What evidence is there for this? Cites?

all sign language studies, and the linguists are disputing those into nonexistence by claiming cueing. I think this is the last move by selfish people trying to preserve their provably-worthless non-field (e.g. Chomsky), and that the only linguists worth a damn are looking at polysemy, plus maybe George Lakoff whose theories are replacing Chomsky's as political darlings on both sides of the AI/not fence. He's carved out quite a niche!
You're obviously a big fan of this Lakoff character, but do you really think his theories have had time to be properly assessed by the wider academic community, particularly if they have the cross-disciplinary implications they appear to have, by what you've said about them? My big problem with some of your references to this stuff is perhaps that you're pushing a line that hasn't been widely (emphasis *widely*) debated yet. --Robert Merkel
Lakoff has been saying the same thing for 20 years to different communities. The review of "the wider academic community" has included so far about 100 gurus, a dozen solid reviews, and several interesting debates. This is worth writing about here. Is there more to say? Yes, but for that you have to join a mailing list: http://groups.yahoo.com/group/the-embodiment
and, the reviews do not suggest there is any dispute about his making strong metaphorical bindings between human cognitive science and the foundations of mathematics - specifically Euler's Identity. So far I've kept discussion of Lakoff et al. in cognitive science of mathematics and not generalized this to cover the great work in statistics bias of the 1960s. But, if pressed, I'll be writing a textbook here... and every single step towards that thesis defies and destructs both the particle physics foundation ontology and artificial intelligence along with Mutual Assured Destruction - all sad delusions from my point of view, but predictable given Descartes' error of assumption re: "Other".


The goal is


Firstly, the goal of this article, like any other article on Wikipedia, is to present the topic fairly and accurately, and let people draw their own conclusions. When the topic is an academic discipline or school of thought, a fair and accurate presentation includes fair and accurate coverage of the criticisms of such a school of thought, attributed to the people who make them.

Given that, we need to:

  • Define the goals of AI (as there are quite a few definitions, present some of the major ones and compare and contrast).
  • Discuss the different methodologies used to try and achieve them, with some sense of the progression of AI research.
  • Present the results of using those methodologies, both applied to the original goals and elsewhere.
  • Discuss how, clearly, the original goals have not been achieved in their entirety, and some of the speculation as to how the original goals might yet be achieved.
  • Discuss speculation of what it might mean if those goals *are* achieved.
  • Discuss the views of the AI skeptics - including people who believe it can't be done with existing methodologies, people who believe it can't be done, full stop, and people who think it's unethical to do so.

I believe an article presenting things in pretty much that order is the way to go.

What do you think?

Church-Turing misunderstood?


Unintelligible?


Yanked from the article:

The philosophy of mathematics asks if adult human linguistic or symbolic intelligence is any more complex than mathematical proof itself, and whether this is in fact what humans mean when they recognize each other as being conscious, wise, or aware. Turing's Test highlights these questions by suggesting that adult humans perhaps assume too much based on mere language:

Taking the second sentence first, that's a rather novel interpretation of the significance of the Turing Test, to say the least, and one that doesn't really fit with the reading of Turing's original paper IMHO. It might be a more reasonable response to some of the counterarguments raised about the Turing Test (notably the Chinese Room).

As to the first sentence, I can't parse it. --Robert Merkel


Fixed. The "mere language" issue is now illustrated by the apparent "human racism" of the theorists who reject Great Apes' intelligence. It's not that Turing's paper highlighted the issue of what matters in language but rather than Turing's Test itself did - by failing to convince people it was decisive.

The first sentence is also fixed, and refers more directly to the Church-Turing Thesis, which was the specific contribution, and which defined all forms of symbolic or linguistic intelligence as being equivalent to the Turing Machine, which in turn was equivalent to "mathematicians doing proofs in the usual way".
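(For readers who haven't met the formalism, a Turing Machine is concrete enough to simulate in a dozen lines of Python. The machine below is an invented toy - it just flips the bits of its input - but the table-driven structure is the general idea:)

```python
# Minimal Turing machine: a table maps (state, symbol read) to
# (next state, symbol to write, head movement).
def run(tape):
    table = {
        ("flip", "0"): ("flip", "1", +1),
        ("flip", "1"): ("flip", "0", +1),
        ("flip", "_"): ("halt", "_", 0),  # blank cell: stop
    }
    tape, state, pos = list(tape) + ["_"], "flip", 0
    while state != "halt":
        state, tape[pos], move = table[(state, tape[pos])]
        pos += move
    return "".join(tape).rstrip("_")

print(run("0110"))  # -> "1001"
```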

Incorrect and obscure


I removed the following paragraph:

The Church-Turing thesis, which demonstrates the undecidability of complex mathematical systems, has been interpreted by some mathematical philosophers to imply that adult human linguistic or symbolic intelligence can be no more complex than the process of creating a mathematical proof itself.
  1. The Church-Turing thesis does not demonstrate the undecidability of anything. It simply says that all "reasonable" definitions of algorithm amount to the same thing, namely to Turing machines. Maybe it was confused with Turing's proof of the undecidability of the Halting problem (a sketch of that proof follows below).
  2. The second half-sentence is unintelligible. I suspect it refers to the work of Penrose, who is not a philosopher of mathematics. His main argument is based on Gödel's incompleteness theorems, and he argues that no machine can ever do everything a human can do. This is an extreme minority position.
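On point 1, the distinction is worth making concrete. Here is the shape of Turing's actual undecidability argument as Python pseudocode; it is deliberately not executable, since the whole point is that the decider halts() cannot exist:

```python
def halts(program, data):
    """Hypothetical total decider for the Halting problem.
    Turing's argument shows no such function can be written."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever halts() predicts about us.
    if halts(program, program):
        while True:
            pass  # loop forever
    # otherwise, halt immediately

# paradox(paradox) halts exactly when halts(paradox, paradox) says
# it does not - a contradiction, so no correct halts() exists.
```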

AxelBoldt, Wednesday, April 10, 2002

What successes has AI had?


For the benefit of the skeptics among us, wouldn't it be desirable to list some of the outstanding successes of AI so far? (other than beating Kasparov at chess, I'm not sure what those are; and even that I'm not sure counts as genuine AI) --Seb

There are far fewer such successes than the average person would expect. There are some essays about AI chatbots at http://www.alicebot.org/ (the ALICE chatbot won the "best chatbot" Loebner Prize in 2000 and 2001, so the author knows what he is talking about), where the author says there has been almost no progress in this field of AI since Eliza.

The most important things to notice:

  • we still don't have machine translation
  • we still don't have something that can do Turing Test better than Eliza
  • we still don't have automatic proving of algorithm correctness
  • no "important" math theorem has been proved by an automatic proving machine
  • computers almost universally are based on strictly deterministic algorithms, not heuristics

--Taw
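(To make the Eliza point concrete: Weizenbaum's 1966 technique really was shallow pattern-reflection. A toy sketch in Python with invented rules - the real program had more patterns, but no deeper mechanism:)

```python
import re

# Toy ELIZA: match a keyword pattern, echo the captured text back
# inside a canned template.
rules = [
    (r"i am (.*)", "Why do you say you are {}?"),
    (r"i feel (.*)", "How long have you felt {}?"),
    (r".*mother.*", "Tell me more about your family."),
]

def respond(text):
    for pattern, template in rules:
        m = re.match(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."

print(respond("I am sure machines cannot think"))
# -> "Why do you say you are sure machines cannot think?"
```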

"we still don't have machine translation": this is not really an argument. When people say they want to have machine translation, they usually mean "perfect machine translation". This however is unfair, because humans cannot translate perfectly either. I am pretty sure we can get near-perfect machine translation just using statistical methods, but I would not necessarily call that intelligence.--branko

Rewrites of the article


Refactor the article by providing an outline


213, you suggest you'd like to refactor this article. Would you like to suggest an outline (have a look at my brief suggestion above) so we can collaborate on such a rewrite (which I've been meaning to do myself but haven't got around to)? BTW, have you considered getting yourself a handle? 213.x.x.x is so impersonal :> --Robert Merkel 11:11 Oct 10, 2002 (UTC)

History of AI


I am thinking about (re?)writing a section, the History of AI, reusing some information already present and also introducing more events of interest. It is probably best to break this section out into a separate article, I believe. Any ideas? /Vidstige 11:50, 2 Mar 2004 (UTC)

Humorous quote in response to von Neumann's


Regarding the von Neumann quote "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!", I am reminded of a similar quote that first appeared in the context of high-altitude balloons and later with space probes: "There is only one thing humans can do that instruments can not, but why would anyone want to do that there?". // Liftarn

Research into apes' intelligence and implications for AI


Could somebody cite specific interest from AI researchers in ape intelligence? Neural networks were originally inspired by brain research (though later neural nets don't resemble the biological model much), and there has been some research into "artificial insect" ideas, but ape cognition doesn't seem to be of sufficient interest to single it out. --Robert Merkel 03:33, 20 Aug 2003 (UTC)

Comment on "neat" and "scruffy" moved from article


The following comment added by an anonymous user, was removed from the article and moved here --Lexor 07:26, 13 Jan 2004 (UTC)

The following description of the terms "neat" and "scruffy" is incorrect. See http://dictionary.reference.com/search?q=neats%20vs.%20scruffies for a better description.
In brief, the neat/scruffy distinction was created before connectionism, or other numerical AI techniques, were. The distinction was between formalists and those that used heuristics. Both of those groups were symbolists. It would not be valid to attempt to apply that old distinction to the current debate between symbolicists and connectionists.
Also the word "evolve" below is used to describe connectionist learning, which is also incorrect usage. Connectionism and genetic algorithms do have similarities, but this entry will have to be expanded to bring those in.

I'm afraid some of us have to disagree on that. As shown in Planet of the Apes, soon apes may take over for humans at the end of the humans' reign. However, this corresponds startlingly to the plight of the robots, though they evolve only by the hands of mankind. Someday in the future, apes, robots, and the like may take over the world that has been on the fingertips of our race forever. - Legolas of the Elves of Mirkwood

Mistaken Predictions about AI


was titled "A million words of memory"

In the sixties, an eminent AI researcher—possibly John McCarthy—said something to the effect that there were no longer any significant theoretical or practical barriers to the achievement of AI except hardware limitations, and that he could demonstrate AI as soon as someone would fund his acquisition of a machine with a million words of memory. This is just fuzzy middle-aged memory and I don't have a citation for it. Does anyone have one? Seems to me this would be worth a mention in the main article if it could be confirmed. The exact wording and context are important, of course. I'm guessing the reference would be to an IBM 709, 7090, or 7094, in which case a million words would correspond to about four-and-a-half megabytes.  :-) Dpbsmith 16:56, 4 Feb 2004 (UTC)

You have found an eminent AI researcher who made a bold claim about AI which turns out to be false. Is thereby the field of AI discredited? If so then all fields of research are discredited because all of them have suffered outlandish claims. E.g. Linus Pauling won a Nobel Prize for Chemistry and went on to make outlandish claims re Vitamin C. Is Chemistry discredited? No. Shall we insert these ludicrous claims into the Chemistry page? No. Onto the Linus Pauling page? Yes. It is McCarthy (if it were him) who is discredited, not AI. Psb777 10:36, 10 Feb 2004 (UTC)
Hmmm. Interesting point, but don't you think there's merit in a section called "Mistaken Predictions About AI by Reputable Proponents"? It does highlight the point that some people in the field have been over-optimistic about the computing resources (at least, cf Shadows of the Mind and The Emperor's New Mind) required to realise it, and that the workings of the human brain are more difficult to emulate (through difference from conventional computing equipment and developmental processes, and through greater complexity) than has been thought at various times in the past. An exploration of why these mistakes have been made (the confusion caused by the superior calculating capabilities of machines, for instance) would be valuable. Conversely, one could point out that many of the most inaccurate predictions were made some time ago, and that many other mistaken predictions have been made. Mr. Jones 20:28, 22 Feb 2004 (UTC)
Artificial Intelligence is an extraordinary field because it questions our presumed special position in the Universe. The Strong AI position in particular many seem to find threatening and it provokes a near-religious opposition. Why should this article have a "list of failures" when we do not see this in articles which would be much more deserving of it. Even established hard disciplines have had their failures. On the other hand, the last thing that should happen is that the AI failures should be swept under the carpet. Perhaps we should have a section of "Those Successes of AI which were said to be Impossible by Reputable Opponents". Of course, many of these are defined by them (now) as not being "intelligence". Paul Beardsell 01:18, 23 Feb 2004 (UTC)
I think that would be closer to the ideal of NPOV. Mr. Jones 20:08, 4 Jul 2004 (UTC)

Incorrect definition: AI is not research


This article (currently) starts

Artificial intelligence, commonly abbreviated as AI, also known as machine intelligence, is the practice of developing algorithms that make machines (usually computers) able to make seemingly intelligent decisions, or act as if possessing intelligence of a human scale.

No. That's like saying evolution is the practice of working out how the different species came to exist. I deliberately choose a controversial (in the US at least) topic. Evolution is thought by some (most!) to be the way that different species have come to exist, it isn't the practice of describing how. Now please do not argue with me about evolution: I was just trying to find a controversial subject to compare with AI.

Similarly: AI is artificial intelligence, whether you believe that such a thing is possible or not. It is not some process of trying to make something appear intelligent, however unlikely you think AI is. Just because you think AI unlikely doesn't mean you can deny what it is (or might be).

Another example: "UFOs" does not denote the practice of faking photographs of them. AI, similarly, is not the practice of making something (i.e. artificial) appear intelligent. It is the artificial intelligence itself, whether you believe in it or not.

One or two of the contributors to this article seem to have an axe to grind. Wikipedia is supposed to be neutral. I am going to re-write that 1st paragraph.

Psb777 23:54, 24 Jan 2004 (UTC)

Arthur T. Murray


Regarding Wikipedia edits by Murray


Arthur T. Murray, a.k.a. Mentifex, is a notorious net.kook who has been spamming and mass-mailing his pseudoscientific writings for over thirty years. He is now repeatedly adding to Wikipedia pages inappropriate references to his own work, and repeatedly removing from pages information which presents an opposing point of view to his theories or gives evidence which may cause others to see them in a negative light. For example, he has repeatedly inserted his own name in the "List of Prominent AI Theorists" section of the AI article.

As no serious AI researcher considers Murray's work to be anything but crackpottery, please help keep this page and others related to AI free of kookery.

Please see the Arthur T. Murray/Mentifex FAQ for further details on Murray's claims and posting history. This FAQ links to much of Murray's own writing so you can make your own independent assessment of it.

Psychonaut 16:18, 20 Feb 2004 (UTC)

Mentifex article


I've started up an article stub on Mentifex, although it's very hard to maintain NPOV on something like this. Any additional info or NPOV changes would be greatly appreciated. --FleaPlus 19:10, 6 May 2004 (UTC)[reply]

FleaPlus, there previously existed articles on Mentifex and Arthur T. Murray, but because this fellow and his program aren't quite encyclopædic material, the articles were deleted after a discussion on Votes for Deletion. No offence, but I'm going to suggest your article for a speedy deletion candidate. Psychonaut 09:08, 7 May 2004 (UTC)[reply]
Is Naked and Petrified encyclopedia material? --Wikiwikifast 13:43, 7 May 2004 (UTC)[reply]
@Wikiwikifast Apparently not lol Espinozagabe (talk) 20:09, 28 July 2023 (UTC)[reply]

While I can understand the deletion, I still think it's useful to be able to look up information on suspected cranks (e.g. Time Cube) to try to get an unbiased description and analysis of their claims. Even if the article I wrote up wasn't NPOV and encyclopedic enough, I hope that someone else would still be able to write one. I personally believe that Mentifex has been around for such a long time and had such an impact on Internet discussions that he deserves an article. --NeuronExMachina 10:42, 22 Jul 2004 (UTC)

Kooks can be a big pain to deal with. Are you and the rest of the Wikipedia community going to take responsibility for watching and changing a Mentifex article on a daily basis to prevent Murray from introducing his own bias? That's exactly what he did repeatedly to the previous Mentifex article and, for a time, to the articles on Artificial intelligence and Technological singularity. I'm just trying to make sure you realize that keeping articles on kooks free of vandalism by the kooks themselves can in some cases be more effort than it's worth. I, for one, would rather have no Mentifex article than to have one I'd have to keep changing every day. —Psychonaut 12:38, 22 Jul 2004 (UTC)

Ok, I see what you mean. I still hope in the future that there might be a Mentifex article, though it would indeed likely need a number of caretakers dedicated to maintaining its NPOV. --NeuronExMachina 09:19, 23 Jul 2004 (UTC)

Perhaps there needs to be a kookwiki. 170.35.224.64 15:49, 10 Jan 2005 (UTC)

"In my view, although Arthur sometimes *presents* his ideas in a somewhat kooky way (by the standards of the mainstream scientific community, and even by the standards of this list), the ideas themselves are significantly better than most of what passes for cognitive science and AI. There is some deep thinking there. If anyone else but me is trying to survey all serious thinking on AGI, Murray's two papers I cited above should be looked at for sure. -- Ben G. "

Is the neutrality of this article still disputed?


It seems to me this article is much improved, and that all views are represented. There is a lot to be disputed on this, the Talk page, but the article seems NPOV to me. Can we remove the "The neutrality of this article is disputed." tag now? Psb777 10:11, 10 Feb 2004 (UTC)

The NPOV dispute was added by kook Arthur T. Murray (16:59, 3 Feb 2004) because I was removing references he had added to his self-published book. Murray has written other vanity articles on Wikipedia which have since been deleted by the admins. I don't know what the procedure is for removing NPOV dispute tags, but if anyone can do it, then yes, go ahead. AFAICT the only one who believes Murray should be referenced in this article is Murray himself. —Psychonaut 10:22, 10 Feb 2004 (UTC)

Cyberstalker Alert: User Psychonaut cyberstalking Mentifex


<outing content redacted>

This is an example of Arthur's typical behavior, and shows that if anyone is prone to "cyberstalking" it is he. --User:172.195.232.182
Indeed. Murray is incoherent, using wrong and outdated information, and presents a very weak case. Good showing, Psychonaut! -- Anonymous

But there is no Mentifex who has "contributed" here. We are having to put up with an Arthur Murray who keeps on trying to foist his drivel on us and Psychonaut is doing an excellent job beating him off. Maybe you have the wrong page? Psb777 15:39, 10 Feb 2004 (UTC)

Mentifex is just one of Arthur T. Murray's aliases. Ignore him unless he starts to vandalize pages again. —Psychonaut 15:49, 10 Feb 2004 (UTC)
So I figured. Psb777 16:18, 10 Feb 2004 (UTC)

To remove AI material without any knowledge of its substance is vandalism. Instead of suppressing new ideas, Wikipaedophiles ought to welcome them. --User:66.248.100.42 (presumably Arthur T. Murray)

If Wikipedia becomes a dumping ground for every kook on the Internet there will be no room for valid content. Arthur T. Murray has a LONG history of self-promoting his "work", which has been reviewed by a number of credible AI people and found to be basically meaningless. Further, Mr. Murray has been claiming to have "solved" AI, which provides an easy kook test: is his AI smart? No, it is just a random sentence generator. The code is open; any decent programmer can review it for the same conclusion. --User:172.195.232.182
It is extremely bad karma to interfere with such a disruptive technology as the Mentifex AI design. Do we want to have a chilling effect upon independent scholarship?
When the claim is made that AI has been solved, the proper response is to examine the particulars and not to engage in a priori name-calling, which merely shows the darker side of Wikipedia.
AI4U by Murray, ISBN 0595654371 should be included in the main article on artificial intelligence so as potentially to earn money for Wikipedia and so as to enhance the coming Wikipedia 1.0 CD-ROM.
--User:66.248.100.110 (presumably Arthur T. Murray)
I completely agree with Mr. Murray when he says that his material should not be removed by those who have no knowledge of it. I have therefore written a FAQ on Murray which provides a brief analysis of his theory, his Usenet posting history, and opinions of mainstream researchers on his work. Dozens of references to original material by Murray are provided should you doubt my analysis. If, after reading this FAQ and/or the original sources by Murray, you are convinced of the majority opinion (i.e., that Mentifex is a crackpot theory which has no place in a reference such as Wikipedia), then by all means, help us remove whatever self-aggrandizing information Murray inserts. —Psychonaut 16:18, 20 Feb 2004 (UTC)
Shouldn't Wikipedia consider banning Murray somehow, especially references to him inside Wikipedia itself? He is a known haunter of many many different discussions on wide-ranging topics not related to his crazy AI theories (and lies about his jargon-spewing software). While I agree that just because a person espouses a view not literally taken to be valid doesn't mean that the person's view should be censored, Arthur Murray acts as a stalker. By the way, shouldn't some psychiatrists be investigating him by now?--68.95.130.24 02:10, 3 Jul 2004 (UTC)
I'm amenable to the idea of banning persistent vandals, but since Murray always makes his edits from anonymous IP addresses from a variety of ISPs, this would entail locking out a large number of potential users. As for psychiatric treatment, Murray has mentioned on a number of occasions that his father is a psychiatrist. Perhaps some kind Seattle Wikipedian could try phoning all the Dr. Murrays in the Seattle area until he finds the right man, and beg him to put his son away (or at least on the appropriate antipsychotic medication). —Psychonaut 11:19, 4 Jul 2004 (UTC)

I apologise for removing the material reinstated by Psychonaut. Paul Beardsell 14:52, 20 Feb 2004 (UTC)

Paragraph not understood


The second is much harder, raising questions of consciousness and self, mind (including the unconscious mind) and the question of what components are involved in the only type of intelligence it is universally agreed we have available to study: that of human beings. Study of animals and artificial systems that are not just models of what exists already are widely considered very pertinent, too.

Could someone be so kind as to translate the above text into plain English? Vidstige 18:15, 8 Mar 2004 (UTC)

What don't you understand? "The second" is the question "What is intelligence?" Does it make sense now? Mr. Jones 20:11, 4 Jul 2004 (UTC)

Massive Confusion on This Page


Is the content of this page really appropriate for its topic? You'd think that the page "Artificial Intelligence" would act as an overview of the field and a gateway to subtopics in AI (which may not exist yet) -- and would be in understandable English. The sections and topics are still badly opinionated, too. The arguments aren't presented in anything resembling order. And what's with the "Electronic wavelet holographic interference" stuff?

Maybe this page's problems stem from its position as a major topic page, or it could just be people wiki-stomping on it all the time. Whatever's gone wrong, however, it needs to be fixed.

I'm halfway tempted to rewrite the page from scratch, if nobody minds. It's a bloody mess.

  • Update: Ok, I got a little hot-headed there. The last guy to edit added a few blatantly opinionated things in there about "electronic wavelets", in bad English and inappropriate areas, and ticked me off. I've removed it, and feel much better now. Khaydarian 02:23, 8 Jul 2004 (UTC)

I do agree that we need some more coverage of the field, rather than only the philosophical controversies, which are well-known in the field but generally don't take up much of its time (if only because many of them are basically intractable—"yes computers can be sentient" or "no computers can't be sentient"). In particular, a good overview of the symbolic vs. subsymbolic controversy is a necessary starting place, and then some overviews of various other approaches within the field. I'll try to start adding some when I get some time. --Delirium 18:40, Sep 27, 2004 (UTC)
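(A thumbnail of that symbolic vs. subsymbolic contrast, for whoever writes the overview: symbolic systems state knowledge as explicit rules, while subsymbolic ones bury it in numeric weights. A toy Python sketch, with an invented task and invented weights:)

```python
# Symbolic: an explicit, human-readable rule.
def symbolic_is_spam(words):
    return "lottery" in words and "winner" in words

# Subsymbolic: the same decision as a weighted threshold unit
# (a one-neuron perceptron); a real one would learn its weights.
weights = {"lottery": 0.7, "winner": 0.5, "meeting": -0.9}

def subsymbolic_is_spam(words):
    score = sum(weights.get(w, 0.0) for w in words)
    return score > 1.0

msg = ["lottery", "winner", "claim"]
print(symbolic_is_spam(msg), subsymbolic_is_spam(msg))  # True True
```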

First sentence


Currently: "Artificial intelligence, also known as machine intelligence, is defined as intelligence exhibited by anything manufactured (i.e. artificial) by humans or other sentient beings or systems (should such things ever exist on Earth or elsewhere)."

I propose: "Artificial intelligence, also known as machine intelligence, is defined as intelligence exhibited by anything manufactured (i.e. artificial)."

Mention of "other sentient beings" and "Earth or elsewhere" is needlessly speculative and sounds almost kookish. Don't want to tread on toes though.

An automated Wikipedia link suggester has some possible wiki link suggestions for the Artificial_intelligence article, and they have been placed on this page for your convenience.
Tip: Some people find it helpful if these suggestions are shown on this talk page, rather than on another page. To do this, just add {{User:LinkBot/suggestions/Artificial_intelligence}} to this page. — LinkBot 01:01, 18 Dec 2004 (UTC)


Original research


I just reverted a change in which what appeared to be a sketch paper was added to the last section. I did this firstly because the change was inappropriate content for an encyclopedia and secondly because any paper should be published elsewhere before it comes close to being of encyclopedic relevance. Barnaby dawson 08:19, 8 May 2005 (UTC)[reply]

Mainly it looks like systematic vandalism by the insertion of gibberish. Look at some of the section headings, such as "Dogfooding", or the references, for example:
[7] Hoare, C. A. R., and Schroedinger, E. On the understanding of checksums. In Proceedings of ECOOP (Sept. 2000).
Hoare and Erwin Schroedinger writing a joint paper? On checksums?
That address also vandalized the logic page earlier today, again by adding utter nonsense. Whoever is responsible for these attacks has access to books and papers on theoretical computer science. --CSTAR 14:05, 8 May 2005 (UTC)[reply]

Fashionable research areas


The first paragraph needs the addition of more fashionable research areas. Artificial life might not be fashionable any more and Bayesian Networks are certainly not the only fashionable research area in AI. What is attracting funding and conference attendee attention these days?

"Pseudoscience links"


A special edition of the Journal of Consciousness Studies (a peer-reviewed journal) was dedicated to Machine Consciousness [1]. Machine Consciousness and other variations are described under Artificial Consciousness, which is a wider term (covering, for example, possible systems with biological components). So please study the issue a bit more, and be more careful when deleting. Even psychology is not, strictly speaking, a science, neither are consciousness studies etc. This was not a good reason to delete that link; if someone has other considerations, please say. Tkorrovi 22:10, 25 May 2005 (UTC)[reply]

AI of the Robots


Many of us are fixated on the production of robots. However, not many of us have stopped and thought about what might happen to civilization if robots go too far... and begin to develop a mind of their own. Maybe, in the future, technology will grant them the ability to walk, and talk, and maybe do all sorts of things that humans can do today. Mayhaps they will form an army, and proceed to obliterate all true life on the planet. To get a better grasp on this concept, read Eoin Colfer's novel, "The Supernaturalist".

Please feel free to add your opinion to this matter

Legolas of the Elves of Mirkwood

Nitpicks - blameseeking for strong AI failure


I find the third paragraph to be pretty annoying. It currently reads:

Historically, AI researchers aimed for the loftier goal of so-called strong AI, of simulating complete, human-like intelligence. This goal is epitomised by the fictional strong AI computer HAL 9000 in the film 2001: A Space Odyssey. This goal is unlikely to be met in the near future and is no longer the subject of most serious AI research. The label "AI" has something of a bad name due to the failure of these early expectations, and aggravation by various popular science writers and media personalities such as Professor Kevin Warwick whose work has raised the expectations of AI research far beyond its current capabilities. For this reason, many AI researchers say they work in cognitive science, informatics, statistical inference or information engineering in an attempt to distance themselves from such charlatanism.

Not to be nasty here, but why is Kevin Warwick here? He appears to be just a minor character (at least his notoriety is recent) in the field, though I grant he could possibly be aggravating. Why aren't we mentioning here, for example, Marvin Minsky or Japan's Fifth Generation project? KarlHallowell 20:12, 22 August 2005 (UTC)[reply]

Criteria for AI researcher?


I was checking out the latest entry, Sankar K Pal to the list of AI researchers. His publication list (on Citeseer) is rather small (though several other people on the list have similar records) and he appears (at a glance) to have written at least one book and have a long career as a founder and administrator of AI-related research programs in India. OTOH, it's somewhat pedantic, but his research does seem rather limited.

So what qualities does someone need to warrant inclusion on this list? -- KarlHallowell 14:45, 30 August 2005 (UTC)[reply]

Names I have marked (Ref to publications?) lack references to publications. I removed them in the corresponding portal article. --Mneser 01:20, 11 October 2005 (UTC)[reply]
There are thousands of researchers in this field. For these researchers I suggest they be added to the Category link. For researchers to be mentioned here, some criterion is needed, e.g. significant academic publications on the subject.--moxon 20:35, 17 October 2005 (UTC)[reply]
Hear, hear (agreed). Only famous researchers should be listed on this page (those who have published major works on the subject or who have concepts named after them, etc). One thing though, why are many famous researchers (Alan Turing for one) not in the category? Is this an oversight or a statement? Broken S 20:40, 17 October 2005 (UTC)[reply]

Portal & Split


I have started an AI portal. The idea is that the information in this article not concerning the definition of AI be moved to appropriate sub-articles. Assistance in this process will be appreciated. --Mneser 16:56, 9 October 2005 (UTC)[reply]

Link: Portal:Artificial intelligence. I can help. — Asbestos | Talk (RFC) 00:41, 11 October 2005 (UTC)[reply]

Please do. I hope the preliminary new links I created are sufficient. The idea is not to expand the portal too much. --Mneser 01:20, 11 October 2005 (UTC)[reply]

Delete section?


The "Machines displaying some degree of intelligence" section is silly. There are many programs which display some level of intelligence. Also most of those listed are actually programs not machines. I suggest replacing it with "Famous implementaions of AI" and include deeper blue and other famous AI bots (and excluding links, if it doesn't have an article it doesn't deserve to be listed). Objecions? This article does need a lot of work. Broken S 20:21, 17 October 2005 (UTC)[reply]