Minimum intelligent signal test

The minimum intelligent signal test, or MIST, is a variation of the Turing test proposed by Chris McKinstry in which only boolean (yes/no or true/false) answers may be given to questions. The purpose of such a test is to provide a quantitative statistical measure of humanness, which may subsequently be used to optimize the performance of artificial intelligence systems intended to imitate human responses.

McKinstry gathered approximately 80,000 propositions that could be answered yes or no, e.g.:

  • Is Earth a planet?
  • Was Abraham Lincoln once President of the United States?
  • Is the sun bigger than my foot?
  • Do people sometimes lie?

He called these propositions Mindpixels.
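A minimal sketch of how such a corpus might be scored is shown below in Python. The data structure and function names are hypothetical, not McKinstry's actual Mindpixel software: each Mindpixel pairs a proposition with its consensus boolean answer, and an agent's score is simply the fraction of propositions it answers the same way humans do.

    # Minimal MIST-style scoring sketch. Hypothetical names; this is an
    # illustration of the idea, not McKinstry's Mindpixel software.
    from typing import Callable, List, Tuple

    Mindpixel = Tuple[str, bool]  # (proposition, consensus human answer)

    CORPUS: List[Mindpixel] = [
        ("Is Earth a planet?", True),
        ("Was Abraham Lincoln once President of the United States?", True),
        ("Is the sun bigger than my foot?", True),
        ("Do people sometimes lie?", True),
    ]

    def mist_score(answer: Callable[[str], bool],
                   corpus: List[Mindpixel]) -> float:
        """Return the fraction of propositions the agent answers as humans do."""
        correct = sum(answer(q) == truth for q, truth in corpus)
        return correct / len(corpus)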

These questions test both specific knowledge of aspects of culture and basic facts about the meaning of various words and concepts. The test can therefore be compared with the SAT, intelligence testing, and other controversial measures of mental ability. McKinstry's aim was not to distinguish between shades of intelligence but to identify whether a computer program could be considered intelligent at all.

According to McKinstry, a program able to do much better than chance on a large number of MIST questions would be judged to have some level of intelligence and understanding. For example, on a 20-question test, if a program were guessing the answers at random, it could be expected to score 10 correct on average. But the probability of a program scoring 20 out of 20 correct by guesswork is only one in 2²⁰, i.e. one in 1,048,576; so if a program were able to sustain this level of performance over several independent trials, with no prior access to the propositions, it should be considered intelligent.
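The arithmetic behind this argument is a simple binomial calculation: under random guessing, the number of correct answers on an n-question test follows a Binomial(n, 1/2) distribution. The short Python sketch below (illustrative only, not part of any MIST implementation) computes the chance of reaching a given score by fair guessing.

    # Probability of scoring k or more out of n by guessing each
    # yes/no answer with a fair coin: Binomial(n, 1/2) tail sum.
    from math import comb

    def p_at_least(n: int, k: int) -> float:
        """Probability of k or more correct answers out of n by fair guessing."""
        return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

    print(p_at_least(20, 20))  # ~9.54e-07, i.e. one in 1,048,576
    print(p_at_least(20, 10))  # ~0.588: scoring 10 or more is unremarkable

Sustaining a perfect score over several independent trials multiplies these already tiny probabilities together, which is why chance performance can be ruled out so quickly.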

Discussion

McKinstry criticized existing approaches to artificial intelligence such as chatterbots, saying that his questions could "kill" AI programs by quickly exposing their weaknesses. He contrasted his approach, a series of direct questions assessing an AI's capabilities, to the Turing test and Loebner Prize method of engaging an AI in undirected typed conversation.

Critics[who?] of the MIST have noted that it would be easy to "kill" a McKinstry-style AI too, because no finite set of human-generated Mindpixels can supply correct answers to all possible yes/no questions: the fact that an AI can answer the question "Is the sun bigger than my foot?" correctly does not mean that it can also answer variations like "Is the sun bigger than (my hand | my liver | an egg yolk | Alpha Centauri A | ...)" correctly.

However, McKinstry might have replied that a truly intelligent, knowledgeable entity (on a par with humans) would be able to work out answers such as (yes | yes | yes | don't know | ...) by applying its knowledge of the relative sizes of the objects named, as the sketch below illustrates. In other words, the MIST was intended as a test of AI, not as a suggestion for implementing AI.
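As a toy illustration of this reply (the size table and function below are hypothetical, not anything McKinstry built): an answerer that stores rough object sizes can resolve unseen "Is X bigger than Y?" variations by comparison and abstain when an object is unknown, whereas a pure lookup table of stored Mindpixels can only answer the exact questions it has seen.

    # Toy knowledge-based answerer. Sizes are rough characteristic
    # lengths in metres, purely illustrative.
    from typing import Optional

    SIZES_M = {
        "the sun": 1.39e9,
        "my foot": 0.27,
        "my hand": 0.19,
        "my liver": 0.15,
        "an egg yolk": 0.03,
    }

    def bigger_than(x: str, y: str) -> Optional[bool]:
        """True/False if both objects are known, None for 'don't know'."""
        if x in SIZES_M and y in SIZES_M:
            return SIZES_M[x] > SIZES_M[y]
        return None  # e.g. "Alpha Centauri A" is not in the table

    for obj in ("my hand", "my liver", "an egg yolk", "Alpha Centauri A"):
        print(obj, bigger_than("the sun", obj))
    # -> True, True, True, None, matching (yes | yes | yes | don't know)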

It can also be argued that the MIST is a more objective test of intelligence than the Turing test, a subjective assessment that some might consider more a measure of the interrogator's gullibility than of the machine's intelligence. According to this argument, a human judge in a Turing test is vulnerable to the ELIZA effect, the tendency to mistake superficial signs of intelligence for the real thing and so anthropomorphize the program. The response, suggested by Alan Turing's 1950 paper "Computing Machinery and Intelligence" (which opens with the question "Can machines think?"), is that if a program is a convincing imitation of an intelligent being, it is in fact intelligent. The dispute is thus over what it means for a program to have "real" intelligence, and by what signs it can be detected.

A similar debate exists in the controversy over great ape language, in which nonhuman primates are said to have learned some aspects of sign languages but the significance of this learning is disputed.