Timeline of artificial intelligence
This is a timeline of artificial intelligence, sometimes called synthetic intelligence.
Antiquity, Classical and Medieval eras
Date | Development |
---|---|
Antiquity | Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent automata (such as Talos) and artificial beings (such as Galatea and Pandora).[1] |
Sacred mechanical statues built in Egypt and Greece were believed to be capable of wisdom and emotion. Hermes Trismegistus would write "they have sensus and spiritus ... by discovering the true nature of the gods, man has been able to reproduce it."[2] | |
10th century BC | Yan Shi presented King Mu of Zhou with mechanical men which were capable of moving their bodies independently.[3] |
384 BC–322 BC | Aristotle described the syllogism, a method of formal, mechanical thought in the Organon.[4][5][6] Aristotle also described means–ends analysis (an algorithm for planning) in Nicomachean Ethics, the same algorithm used by Newell and Simon's General Problem Solver (1959).[7] |
3rd century BC | Ctesibius invents a mechanical water clock with an alarm. This was the first example of a feedback mechanism.[citation needed] |
1st century | Hero of Alexandria created mechanical men and other automatons.[8] He produced what may have been "the world's first practical programmable machine:"[9] an automatic theatre. |
260 | Porphyry wrote Isagogê which categorized knowledge and logic, including a drawing of what would later be called a "semantic net".[10] |
~800 | Jabir ibn Hayyan developed the Arabic alchemical theory of Takwin, the artificial creation of life in the laboratory, up to and including human life.[11] |
9th Century | The Banū Mūsā brothers created a programmable music automaton described in their Book of Ingenious Devices: a steam-driven flute controlled by a program represented by pins on a revolving cylinder.[12] This was "perhaps the first machine with a stored program".[9] |
al-Khwarizmi wrote textbooks with precise step-by-step methods for arithmetic and algebra, used in Islam, India and Europe until the 16th century. The word "algorithm" is derived from his name.[13] | |
1206 | Ismail al-Jazari created a programmable orchestra of mechanical human beings.[14] |
1275 | Ramon Llull, Mallorcan theologian, invents the Ars Magna, a tool for combining concepts mechanically based on an Arabic astrological tool, the Zairja. Llull described his machines as mechanical entities that could combine basic truth and facts to produce advanced knowledge. The method would be developed further by Gottfried Wilhelm Leibniz in the 17th century.[15] |
~1500 | Paracelsus claimed to have created an artificial man out of magnetism, sperm and alchemy.[16] |
~1580 | Rabbi Judah Loew ben Bezalel of Prague is said to have invented the Golem, a clay man brought to life.[17] |
1600–1900
Date | Development |
---|---|
1620 | Francis Bacon developed empirical theory of knowledge and introduced inductive logic in his work Novum Organum, a play on Aristotle's title Organon.[18][19][6] |
1623 | Wilhelm Schickard drew a calculating clock in a letter to Kepler. This was the first of five unsuccessful attempts at designing a direct-entry calculating clock in the 17th century (including the designs of Tito Burattini, Samuel Morland and René Grillet).[a] |
1642 | Blaise Pascal invented a mechanical calculator,[b] the first digital calculating machine.[22] |
1647 | René Descartes proposed that bodies of animals are nothing more than complex machines (but that mental phenomena are of a different "substance").[23] |
1651 | Thomas Hobbes published Leviathan and presented a mechanical, combinatorial theory of cognition. He wrote "...for reason is nothing but reckoning".[20][21] |
1654 | Blaise Pascal described how to find expected values in probability. In 1662 Antoine Arnauld published a formula for finding the maximum expected value, and in 1663 Gerolamo Cardano's solution to the same problems was published, 116 years after it was written. The theory of probability was further developed by Jacob Bernoulli and Pierre-Simon Laplace in the 18th century.[24] Probability theory would become central to AI and machine learning from the 1990s onward. |
1672 | Gottfried Wilhelm Leibniz improved the earlier machines, making the Stepped Reckoner to do multiplication and division.[25] |
1676 | Leibniz derived the chain rule.[26] The rule is used by AI to train neural networks, for example the backpropagation algorithm uses the chain rule.[9] |
1679 | Leibniz developed a universal calculus of reasoning (alphabet of human thought) by which arguments could be decided mechanically. It assigned a specific number to each and every object in the world, as a prelude to an algebraic solution to all possible problems.[27] |
1726 | Jonathan Swift published Gulliver's Travels, which includes this description of the Engine, a machine on the island of Laputa: "a Project for improving speculative Knowledge by practical and mechanical Operations" by using this "Contrivance", "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study."[28] The machine is a parody of Ars Magna, one of the inspirations of Gottfried Wilhelm Leibniz' mechanism. |
1738 | Daniel Bernoulli introduces the concept of "utility", the basis of economics and decision theory, and the mathematical foundation for the way AI represents the "goals" of intelligent agents.[29] |
1739 | David Hume described induction, the logical method of learning generalities from examples.[6] |
1750 | Julien Offray de La Mettrie published L'Homme Machine, which argued that human thought is strictly mechanical.[30] |
1763 | Thomas Bayes's work An Essay Towards Solving a Problem in the Doctrine of Chances, published two years after his death, laid the foundations of Bayes' theorem, which is used in modern AI in Bayesian networks.[24] |
1769 | Wolfgang von Kempelen built and toured with his chess-playing automaton, The Turk, which Kempelen claimed could defeat human players.[31] The Turk was later shown to be a hoax, involving a human chess player. |
1795–1805 | The simplest kind of artificial neural network is the linear network, known for over two centuries as the method of least squares or linear regression. It was used by Adrien-Marie Legendre (1805)[32] and Carl Friedrich Gauss (1795)[33] to find good rough linear fits to sets of points for the prediction of planetary movement.[9][34] A minimal least-squares sketch in code follows this table. |
1800 | Joseph Marie Jacquard created a programmable loom, based on earlier inventions by Basile Bouchon (1725), Jean-Baptiste Falcon (1728) and Jacques Vaucanson (1740).[35] Replaceable punched cards controlled sequences of operations in the process of manufacturing textiles. This may have been the first industrial software for commercial enterprises.[9] |
1818 | Mary Shelley published the story of Frankenstein; or the Modern Prometheus, a fictional consideration of the ethics of creating sentient beings.[36] |
1822–1859 | Charles Babbage & Ada Lovelace worked on programmable mechanical calculating machines.[37] |
1837 | The mathematician Bernard Bolzano made the first modern attempt to formalize semantics.[38] |
1854 | George Boole set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a calculus", inventing Boolean algebra.[39] |
1863 | Samuel Butler suggested that Darwinian evolution also applies to machines, and speculates that they will one day become conscious and eventually supplant humanity.[40] |
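The 1795–1805 entry above refers to the method of least squares. The following is a minimal illustrative sketch only: the synthetic data, the constants and the use of NumPy's np.linalg.lstsq are illustrative assumptions, not a reconstruction of Legendre's or Gauss's original calculations.

```python
# Illustrative sketch only: ordinary least squares, the "linear network" of the
# 1795-1805 entry, fitted to synthetic noisy data (all values are invented).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.5 * x + 1.0 + rng.normal(scale=0.5, size=x.size)   # noisy line y = 2.5x + 1

# Design matrix with a column of ones so the model y = a*x + b has an intercept.
A = np.column_stack([x, np.ones_like(x)])

# Minimize ||A @ [a, b] - y||^2; lstsq returns the least-squares solution.
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"fitted slope ~ {a:.2f}, intercept ~ {b:.2f}")
```

The same minimization, solved by hand, is what Legendre and Gauss applied to astronomical observations.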
20th century
1901–1950
Date | Development |
---|---|
1910-1913 | Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which showed that all of elementary mathematics could be reduced to mechanical reasoning in formal logic.[41] |
1912-1914 | Leonardo Torres Quevedo built an automaton for chess endgames, El Ajedrecista. He was called "the 20th century's first AI pioneer".[9] In his Essays on Automatics (1914), Torres published speculation about thinking and automata and introduced the idea of floating-point arithmetic.[42][43] |
1923 | Karel Čapek's play R.U.R. (Rossum's Universal Robots) opened in London. This is the first use of the word "robot" in English.[44] |
1920-1925 | Wilhelm Lenz and Ernst Ising created and analyzed the Ising model (1925)[45] which can be viewed as the first artificial recurrent neural network (RNN) consisting of neuron-like threshold elements.[9] In 1972, Shun'ichi Amari made this architecture adaptive.[46][9] |
1920s and 1930s | Ludwig Wittgenstein's Tractatus Logico-Philosophicus (1921) inspires Rudolf Carnap and the logical positivists of the Vienna Circle to use formal logic as the foundation of philosophy. However, Wittgenstein's later work in the 1940s argues that context-free symbolic logic is incoherent without human interpretation. |
1931 | Kurt Gödel encoded mathematical statements and proofs as integers, and showed that there are true statements that are unprovable by any consistent theorem-proving machine. Thus "he identified fundamental limits of algorithmic theorem proving, computing, and any type of computation-based AI,"[9] laying the foundations of theoretical computer science and AI theory. |
1935 | Alonzo Church extended Gödel's proof and showed that the Entscheidungsproblem (decision problem) has no general solution.[47] He developed the lambda calculus, which would eventually be fundamental to the theory of computer languages. |
1936 | Konrad Zuse filed his patent application for a program-controlled computer.[48] |
1937 | Alan Turing published "On Computable Numbers",[49] which laid the foundations of the modern theory of computation by introducing the Turing machine, a formal model of "computability". He used it to prove that the Entscheidungsproblem is unsolvable by showing that the halting problem is undecidable. |
1940 | Edward Condon displayed Nimatron, a digital machine that played Nim perfectly. |
1941 | Konrad Zuse built the first working program-controlled general-purpose computer.[50] |
1943 | Warren Sturgis McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity", the first mathematical description of an artificial neural network.[51] |
Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coin the term "cybernetics". Wiener's popular book by that name was published in 1948. | |
1945 | Game theory, which would prove invaluable in the progress of AI, was introduced with the 1944 book "Theory of Games and Economic Behavior" by mathematician John von Neumann and economist Oskar Morgenstern. |
Vannevar Bush published "As We May Think" (The Atlantic Monthly, July 1945) a prescient vision of the future in which computers assist humans in many activities. | |
1948 | Alan Turing produces the report "Intelligent Machinery", regarded as the first manifesto of artificial intelligence. It introduces many concepts, including the logic-based approach to problem solving, the idea that intellectual activity consists mainly of various kinds of search, and a discussion of machine learning in which he anticipates the connectionist approach to AI.[52] |
John von Neumann (quoted by Edwin Thompson Jaynes) in response to a comment at a lecture that it was impossible for a machine (at least ones created by humans) to think: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!". Von Neumann was presumably alluding to the Church–Turing thesis which states that any effective procedure can be simulated by a (generalized) computer. | |
1949 | Donald O. Hebb develops Hebbian theory, a possible algorithm for learning in neural networks.[53] A minimal sketch of the Hebbian update follows this table. |
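The 1949 entry above mentions Hebbian learning. The following is a minimal sketch of the Hebbian update rule on a single linear unit; the toy correlated input patterns, the learning rate and the random initialization are illustrative assumptions, not Hebb's own formulation, which was stated in neurophysiological rather than algorithmic terms.

```python
# Illustrative sketch only: the Hebbian update w <- w + eta * x * y
# ("cells that fire together wire together") on one linear neuron.
import numpy as np

rng = np.random.default_rng(1)
eta = 0.01                                   # learning rate (invented constant)
w = rng.normal(scale=0.1, size=3)            # small random synaptic weights

# Toy data: inputs 0 and 1 are always equal (correlated), input 2 is independent.
base = rng.choice([-1.0, 1.0], size=200)
noise = rng.choice([-1.0, 1.0], size=200)
patterns = np.column_stack([base, base, noise])

for x in patterns:
    y = w @ x                                # post-synaptic activity
    w += eta * x * y                         # strengthen co-active connections

print("learned weights:", np.round(w, 3))    # grows fastest along the correlated inputs
```

Because the plain rule only strengthens connections, the weights grow without bound; later variants such as Oja's rule add normalization.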
1950s
Date | Development |
---|---|
1950 | Alan Turing published "Computing Machinery and Intelligence", which proposes the Turing test as a measure of machine intelligence and answers the most common objections to the proposal that "machines can think".[54] |
Claude Shannon published a detailed analysis of chess playing as search (a minimal minimax search sketch follows this table).[55] | |
Isaac Asimov published his Three Laws of Robotics.[56] | |
1951 | The first working AI programs were written in 1951 to run on the Ferranti Mark 1 machine of the University of Manchester: A checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz.[53] |
1952–1962 | Arthur Samuel (IBM) wrote the first game-playing program, for checkers (draughts), to achieve sufficient skill to challenge a respectable amateur.[57] His first checkers-playing program was written in 1952, and in 1955 he created a version that learned to play.[58][59] |
1956 | The Dartmouth College summer AI conference is organized by John McCarthy, Marvin Minsky, Nathan Rochester of IBM and Claude Shannon. McCarthy coins the term artificial intelligence for the conference.[60][61] |
The first demonstration of the Logic Theorist (LT) written by Allen Newell, Cliff Shaw and Herbert A. Simon (Carnegie Institute of Technology, now Carnegie Mellon University or CMU). This is often called the first AI program, though Samuel's checkers program also has a strong claim. This program has been described as the first deliberately engineered to perform automated reasoning, and would eventually prove 38 of the first 52 theorems in Russell and Whitehead's Principia Mathematica, and find new and more elegant proofs for some.[62] Simon said that they had "solved the venerable mind–body problem, explaining how a system composed of matter can have the properties of mind".[63] | |
1958 | John McCarthy (Massachusetts Institute of Technology or MIT) invented the Lisp programming language.[58] |
Herbert Gelernter and Nathan Rochester (IBM) described a theorem prover in geometry.[58] It exploited a semantic model of the domain in the form of diagrams of "typical" cases.[citation needed] | |
Teddington Conference on the Mechanization of Thought Processes was held in the UK and among the papers presented were John McCarthy's "Programs with Common Sense" (which proposed the Advice taker application as a primary research goal)[58] Oliver Selfridge's "Pandemonium", and Marvin Minsky's "Some Methods of Heuristic Programming and Artificial Intelligence". | |
1959 | The General Problem Solver (GPS) was created by Newell, Shaw and Simon while at CMU.[58] |
John McCarthy and Marvin Minsky founded the MIT AI Lab.[58] | |
Late 1950s, early 1960s | Margaret Masterman and colleagues at the University of Cambridge design semantic nets for machine translation.[citation needed] |
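The 1950 entry on Shannon's analysis of chess as search can be illustrated with plain minimax over a toy game tree. The tree and its leaf scores below are invented for illustration; this is not Shannon's evaluation function or a chess engine.

```python
# Illustrative sketch only: minimax search over a tiny hand-made game tree.
from typing import Union

GameTree = Union[int, list]   # a leaf score, or a list of child subtrees

def minimax(node: GameTree, maximizing: bool) -> int:
    """Value of a position assuming both sides play optimally."""
    if isinstance(node, int):                 # leaf: static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 toy game: the maximizer picks a branch, then the minimizer picks a leaf.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print("value with best play:", minimax(tree, maximizing=True))   # prints 3
```

Shannon's proposal combined this kind of search with a heuristic evaluation of chess positions at the leaves.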
1960s
Date | Development |
---|---|
1960s | Ray Solomonoff lays the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction. |
1960 | "Man-Computer Symbiosis" by J.C.R. Licklider. |
1961 | James Slagle (PhD dissertation, MIT) wrote (in Lisp) the first symbolic integration program, SAINT, which solved calculus problems at the college freshman level. |
In Minds, Machines and Gödel, John Lucas[64] denied the possibility of machine intelligence on logical or philosophical grounds. He referred to Kurt Gödel's result of 1931: sufficiently powerful formal systems are either inconsistent or allow for formulating true theorems unprovable by any theorem-proving AI deriving all provable theorems from the axioms. Since humans are able to "see" the truth of such theorems, machines were deemed inferior. | |
Unimation's industrial robot Unimate worked on a General Motors automobile assembly line. | |
1963 | Thomas Evans' program, ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests. |
Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence.[65][66][67][68] | |
Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators", which described one of the first machine learning programs that could adaptively acquire and modify features and thereby overcome the limitations of simple perceptrons of Rosenblatt. | |
1964 | Danny Bobrow's dissertation at MIT (technical report #1 from MIT's AI group, Project MAC), shows that computers can understand natural language well enough to solve algebra word problems correctly. |
Bertram Raphael's MIT dissertation on the SIR program demonstrates the power of a logical representation of knowledge for question-answering systems. | |
1965 | Alexey Ivakhnenko and Valentin Lapa developed the first deep learning algorithm for multilayer perceptrons in the Soviet Union.[69][70][9] |
Lotfi A. Zadeh at U.C. Berkeley publishes his first paper introducing fuzzy logic, "Fuzzy Sets" (Information and Control 8: 338–353). | |
J. Alan Robinson invented a mechanical proof procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language. | |
Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English on any topic. It was a popular toy at AI centers on the ARPANET when a version that "simulated" the dialogue of a psychotherapist was programmed. | |
Edward Feigenbaum initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds using scientific instrument data. It was the first expert system. | |
1966 | Ross Quillian (PhD dissertation, Carnegie Inst. of Technology, now CMU) demonstrated semantic nets. |
Machine Intelligence[71] workshop at Edinburgh – the first of an influential annual series organized by Donald Michie and others. | |
A negative report on machine translation (the ALPAC report) kills much work in natural language processing (NLP) for many years. | |
The Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford University) is demonstrated interpreting mass spectra of organic chemical compounds, the first successful knowledge-based program for scientific reasoning. | |
1967 | Shun'ichi Amari was the first to use stochastic gradient descent for deep learning in multilayer perceptrons.[72] In computer experiments conducted by his student Saito, a five-layer MLP with two modifiable layers learned useful internal representations to classify non-linearly separable pattern classes.[9] |
1968 | Joel Moses (PhD work at MIT) demonstrated the power of symbolic reasoning for integration problems in the Macsyma program. First successful knowledge-based program in mathematics. |
Richard Greenblatt at MIT built a knowledge-based chess-playing program, Mac Hack, that was good enough to achieve a class-C rating in tournament play. | |
Wallace and Boulton's program, Snob (Comp.J. 11(2) 1968), for unsupervised classification (clustering) uses the Bayesian minimum message length criterion, a mathematical realisation of Occam's razor. | |
1969 | Stanford Research Institute (SRI): Shakey the robot demonstrated combining animal locomotion, perception and problem solving. |
Roger Schank (Stanford) defined conceptual dependency model for natural language understanding. Later developed (in PhD dissertations at Yale University) for use in story understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding memory by Janet Kolodner. | |
Yorick Wilks (Stanford) developed the semantic coherence view of language called Preference Semantics, embodied in the first semantics-driven machine translation program, and the basis of many PhD dissertations since such as Bran Boguraev and David Carter at Cambridge. | |
First International Joint Conference on Artificial Intelligence (IJCAI) held at Stanford. | |
Marvin Minsky and Seymour Papert publish Perceptrons, demonstrating previously unrecognized limits of this feed-forward two-layered structure. This book is considered by some to mark the beginning of the AI winter of the 1970s, a failure of confidence and funding for AI. However, by the time the book came out, methods for training multilayer perceptrons by deep learning were already known (Alexey Ivakhnenko and Valentin Lapa, 1965; Shun'ichi Amari, 1967).[9] Significant progress in the field continued (see below). A minimal sketch of a perceptron failing on the XOR problem follows this table. | |
McCarthy and Hayes started the discussion about the frame problem with their essay, "Some Philosophical Problems from the Standpoint of Artificial Intelligence". |
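The 1969 Perceptrons entry above notes the limits of single-layer threshold units. The following sketch trains the classic perceptron learning rule on AND (linearly separable) and on XOR (not linearly separable); the hyper-parameters and training setup are illustrative assumptions rather than Minsky and Papert's own analysis, which was mathematical.

```python
# Illustrative sketch only: a single-layer perceptron learns AND but cannot
# learn XOR, the kind of limitation analyzed in Perceptrons (1969).
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_and = np.array([0, 0, 0, 1])   # linearly separable
y_xor = np.array([0, 1, 1, 0])   # not linearly separable

def train_perceptron(X, y, epochs=50, lr=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - pred) * xi       # perceptron update rule
            b += lr * (yi - pred)
    return w, b

for name, y in [("AND", y_and), ("XOR", y_xor)]:
    w, b = train_perceptron(X, y)
    preds = [1 if xi @ w + b > 0 else 0 for xi in X]
    print(f"{name}: predictions {preds}, targets {y.tolist()}")
```

As the 1965 and 1967 entries note, multilayer networks trained by other methods do not share this limitation.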
1970s
Date | Development |
---|---|
Early 1970s | Jane Robinson and Don Walker established an influential Natural Language Processing group at SRI.[73] |
1970 | Seppo Linnainmaa publishes the reverse mode of automatic differentiation. This method later became known as backpropagation and is heavily used to train artificial neural networks (a minimal sketch follows this table).[74] |
Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for computer assisted instruction based on semantic nets as the representation of knowledge. | |
Bill Woods described Augmented Transition Networks (ATN's) as a representation for natural language understanding. | |
Patrick Winston's PhD program, ARCH, at MIT learned concepts from examples in the world of children's blocks. | |
1971 | Terry Winograd's PhD thesis (MIT) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language understanding program, SHRDLU, with a robot arm that carried out instructions typed in English. |
Work on the Boyer-Moore theorem prover started in Edinburgh.[75] | |
1972 | Prolog programming language developed by Alain Colmerauer. |
Earl Sacerdoti developed one of the first hierarchical planning programs, ABSTRIPS. | |
1973 | The Assembly Robotics Group at University of Edinburgh builds Freddy Robot, capable of using visual perception to locate and assemble models. (See Edinburgh Freddy Assembly Robot: a versatile computer-controlled assembly system.) |
The Lighthill report gives a largely negative verdict on AI research in Great Britain and forms the basis for the decision by the British government to discontinue support for AI research in all but two universities. | |
1974 | Ted Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrated a very practical rule-based approach to medical diagnoses, even in the presence of uncertainty. While it borrowed from DENDRAL, its own contributions strongly influenced the future of expert system development, especially commercial systems. |
1975 | Earl Sacerdoti developed techniques of partial-order planning in his NOAH system, replacing the previous paradigm of search among state space descriptions. NOAH was applied at SRI International to interactively diagnose and repair electromechanical systems. |
Austin Tate developed the Nonlin hierarchical planning system able to search a space of partial plans characterised as alternative approaches to the underlying goal structure of the plan. | |
Marvin Minsky published his widely read and influential article on Frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together. | |
The Meta-Dendral learning program produced new results in chemistry (some rules of mass spectrometry), the first scientific discoveries by a computer to be published in a refereed journal. | |
Mid-1970s | Barbara Grosz (SRI) established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, Bonnie Webber and Candace Sidner developed the notion of "centering", used in establishing focus of discourse and anaphoric references in Natural language processing. |
David Marr and MIT colleagues describe the "primal sketch" and its role in visual perception. | |
1976 | Douglas Lenat's AM program (Stanford PhD dissertation) demonstrated the discovery model (loosely guided search for interesting conjectures). |
Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford. | |
1978 | Tom Mitchell, at Stanford, invented the concept of Version spaces for describing the search space of a concept formation program. |
Herbert A. Simon wins the Nobel Prize in Economics for his theory of bounded rationality, one of the cornerstones of AI known as "satisficing". | |
The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented programming representation of knowledge can be used to plan gene-cloning experiments. | |
1979 | Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert system "shells". |
Jack Myers and Harry Pople at University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge. | |
Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated the CHI system for automatic programming. | |
The Stanford Cart, built by Hans Moravec, becomes the first computer-controlled, autonomous vehicle when it successfully traverses a chair-filled room and circumnavigates the Stanford AI Lab. | |
BKG, a backgammon program written by Hans Berliner at CMU, defeats the reigning world champion (in part via luck). | |
Drew McDermott and Jon Doyle at MIT, and John McCarthy at Stanford begin publishing work on non-monotonic logics and formal aspects of truth maintenance. | |
Late 1970s | Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrates the power of the ARPAnet for scientific collaboration. |
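The 1970 entry on the reverse mode of automatic differentiation can be illustrated with a toy scalar implementation. The small Value class below is an illustrative construction, not Linnainmaa's formulation; it shows how local derivatives are combined by the chain rule as gradients flow backwards through a computation.

```python
# Illustrative sketch only: reverse-mode automatic differentiation on scalars,
# the technique later known as backpropagation, applied to z = x*y + x.
class Value:
    """A scalar that remembers how it was computed so gradients can flow back."""
    def __init__(self, data, parents=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents          # nodes this value was computed from
        self._local_grads = local_grads  # d(self)/d(parent) for each parent

    def __add__(self, other):
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self, upstream=1.0):
        # Chain rule: accumulate upstream * local gradient into each parent.
        self.grad += upstream
        for parent, local in zip(self._parents, self._local_grads):
            parent.backward(upstream * local)

x = Value(2.0)
y = Value(3.0)
z = x * y + x          # dz/dx = y + 1 = 4, dz/dy = x = 2
z.backward()
print("dz/dx =", x.grad, " dz/dy =", y.grad)
```

Modern deep-learning frameworks automate exactly this bookkeeping over much larger computation graphs.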
1980s
Date | Development |
---|---|
1980s | Lisp machines are developed and marketed. The first expert system shells and commercial applications appear (a minimal sketch of the forward-chaining loop behind such shells follows this table). |
1980 | First National Conference of the American Association for Artificial Intelligence (AAAI) held at Stanford. |
1981 | Danny Hillis designs the Connection Machine, which utilizes parallel computing to bring new power to AI, and to computation in general. (He later founds Thinking Machines Corporation.) |
1982 | Japan's Ministry of International Trade and Industry begins the Fifth Generation Computer Systems project (FGCS), an initiative to create a "fifth generation computer" (see history of computing hardware) that was supposed to perform much of its calculation using massive parallelism. |
1983 | John Laird and Paul Rosenbloom, working with Allen Newell, complete CMU dissertations on Soar (program). |
James F. Allen invents the Interval Calculus, the first widely used formalization of temporal events. | |
Mid-1980s | Neural networks become widely used with the backpropagation algorithm, also known as the reverse mode of automatic differentiation, published by Seppo Linnainmaa in 1970 and applied to neural networks by Paul Werbos. |
1985 | The autonomous drawing program, AARON, created by Harold Cohen, is demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments). |
1986 | The team of Ernst Dickmanns at Bundeswehr University of Munich builds the first robot cars, driving up to 55 mph on empty streets. |
Barbara Grosz and Candace Sidner create the first computational model of discourse, establishing the field of research.[76] | |
1987 | Marvin Minsky published The Society of Mind, a theoretical description of the mind as a collection of cooperating agents. He had been lecturing on the idea for years before the book came out (c.f. Doyle 1983).[77] |
Around the same time, Rodney Brooks introduced the subsumption architecture and behavior-based robotics as a more minimalist modular model of natural intelligence; Nouvelle AI. | |
Commercial launch of generation 2.0 of Alacrity by Alacritous Inc./Allstar Advice Inc. Toronto, the first commercial strategic and managerial advisory system. The system was based upon a forward-chaining, self-developed expert system with 3,000 rules about the evolution of markets and competitive strategies and co-authored by Alistair Davidson and Mary Chung, founders of the firm with the underlying engine developed by Paul Tarvydas. The Alacrity system also included a small financial expert system that interpreted financial statements and models.[78] | |
1989 | The development of metal–oxide–semiconductor (MOS) Very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) technology, enabled the development of practical artificial neural network (ANN) technology in the 1980s. A landmark publication in the field was the 1989 book Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.[79] |
Dean Pomerleau at CMU creates ALVINN (An Autonomous Land Vehicle in a Neural Network), which was used in the Navlab program. |
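The first 1980s entry above mentions expert system shells. At their core, such shells repeatedly match IF-THEN rules against a working memory of facts; the tiny forward-chaining loop below is an illustrative sketch with invented rules and facts, not the engine of any particular commercial shell.

```python
# Illustrative sketch only: forward-chaining inference, the core loop of
# rule-based expert systems (rules and facts are invented for the example).
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "fatigue"}, "recommend_rest"),
    ({"has_rash"}, "possible_allergy"),
]

def forward_chain(facts: set[str]) -> set[str]:
    derived = set(facts)
    changed = True
    while changed:                      # keep firing rules until a fixed point
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"has_fever", "has_cough", "fatigue"}))
# -> includes 'possible_flu' and 'recommend_rest'
```

Real shells of the period added features such as certainty factors, backward chaining and explanation facilities.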
1990s
Date | Development |
---|---|
1990s | Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics. |
Early 1990s | TD-Gammon, a backgammon program written by Gerry Tesauro, demonstrates that reinforcement learning is powerful enough to create a championship-level game-playing program by competing favorably with world-class players (a minimal temporal-difference learning sketch follows this table). |
1991 | The DART scheduling application deployed in the first Gulf War paid back DARPA's 30 years of investment in AI research.[80] |
1992 | Carol Stoker and NASA Ames robotics team explore marine life in Antarctica with an undersea robot Telepresence ROV operated from the ice near McMurdo Bay, Antarctica and remotely via satellite link from Moffett Field, California.[81] |
1993 | Ian Horswill extended behavior-based robotics by creating Polly, the first robot to navigate using vision and operate at animal-like speeds (1 meter/second). |
Rodney Brooks, Lynn Andrea Stein and Cynthia Breazeal started the widely publicized MIT Cog project with numerous collaborators, in an attempt to build a humanoid robot child in just five years. | |
ISX corporation wins "DARPA contractor of the year"[82] for the Dynamic Analysis and Replanning Tool (DART) which reportedly repaid the US government's entire investment in AI research since the 1950s.[83] | |
1994 | Lotfi A. Zadeh at U.C. Berkeley creates "soft computing"[84] and builds a world network of research with a fusion of neural science and neural net systems, fuzzy set theory and fuzzy systems, evolutionary algorithms, genetic programming, and chaos theory and chaotic systems ("Fuzzy Logic, Neural Networks, and Soft Computing", Communications of the ACM, March 1994, Vol. 37 No. 3, pages 77–84). |
With passengers on board, the twin robot cars VaMP and VITA-2 of Ernst Dickmanns and Daimler-Benz drive more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130 km/h. They demonstrate autonomous driving in free lanes, convoy driving, and lane changes left and right with autonomous passing of other cars. | |
English draughts (checkers) world champion Tinsley resigned a match against computer program Chinook. Chinook defeated 2nd highest rated player, Lafferty. Chinook won the USA National Tournament by the widest margin ever. | |
Cindy Mason at NASA organizes the First AAAI Workshop on AI and the Environment.[85] | |
1995 | Cindy Mason at NASA organizes the First International IJCAI Workshop on AI and the Environment.[86] |
"No Hands Across America": A semi-autonomous car drove coast-to-coast across the United States with computer-controlled steering for 2,797 miles (4,501 km) of the 2,849 miles (4,585 km). Throttle and brakes were controlled by a human driver.[87][88] | |
One of Ernst Dickmanns' robot cars (with robot-controlled throttle and brakes) drove more than 1000 miles from Munich to Copenhagen and back, in traffic, at up to 120 mph, occasionally executing maneuvers to pass other cars (only in a few critical situations a safety driver took over). Active vision was used to deal with rapidly changing street scenes. | |
1996 | Steve Grand, roboticist and computer scientist, develops and releases Creatures, a popular simulation of artificial life-forms with simulated biochemistry, neurology with learning algorithms and inheritable digital DNA. |
1997 | The Deep Blue chess machine (IBM) defeats the (then) world chess champion, Garry Kasparov. |
First official RoboCup football (soccer) match featuring table-top matches with 40 teams of interacting robots and over 5000 spectators. | |
Computer Othello program Logistello defeated the world champion Takeshi Murakami with a score of 6–0. | |
Long short-term memory (LSTM) was published in Neural Computation by Sepp Hochreiter and Juergen Schmidhuber.[89] | |
1998 | Tiger Electronics' Furby is released, and becomes the first successful attempt to bring a type of AI into a domestic environment. |
Tim Berners-Lee published his Semantic Web Road map paper.[90] | |
Ulises Cortés and Miquel Sànchez-Marrè organize the first Environment and AI Workshop in Europe ECAI, "Binding Environmental Sciences and Artificial Intelligence".[91][92] | |
Leslie P. Kaelbling, Michael L. Littman, and Anthony Cassandra introduce POMDPs and a scalable method for solving them to the AI community, jumpstarting widespread use in robotics and automated planning and scheduling[93] | |
1999 | Sony introduces AIBO, an improved domestic robot similar to a Furby; it becomes one of the first artificially intelligent "pets" that is also autonomous. |
Late 1990s | Web crawlers and other AI-based information extraction programs become essential in widespread use of the World Wide Web. |
Demonstration of an Intelligent room and Emotional Agents at MIT's AI Lab. | |
Initiation of work on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network. |
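The early-1990s TD-Gammon entry above refers to temporal-difference learning. The sketch below applies tabular TD(0) to a five-state random walk; the toy environment and all constants are illustrative assumptions, and TD-Gammon itself combined TD(λ) with a neural network rather than a table.

```python
# Illustrative sketch only: tabular TD(0) value learning on a toy random walk,
# far simpler than TD-Gammon but based on the same temporal-difference idea.
import random

N_STATES = 5                  # states 0..4; walking off either edge ends the episode
ALPHA, GAMMA = 0.1, 1.0
V = [0.0] * N_STATES          # value estimates; reward 1 only for exiting right

random.seed(0)
for _ in range(5000):
    s = 2                                      # start in the middle
    while True:
        s_next = s + random.choice([-1, 1])    # random walk
        if s_next < 0 or s_next >= N_STATES:   # terminal step
            reward = 1.0 if s_next >= N_STATES else 0.0
            V[s] += ALPHA * (reward - V[s])    # terminal state has value 0
            break
        # TD(0) update: move V[s] toward reward + gamma * V[s_next]
        V[s] += ALPHA * (0.0 + GAMMA * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V])   # approaches [1/6, 2/6, 3/6, 4/6, 5/6]
```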
21st century
2000s
Date | Development |
---|---|
2000 | Interactive robopets ("smart toys") become commercially available, realizing the vision of the 18th century novelty toy makers. |
Cynthia Breazeal at MIT publishes her dissertation on sociable machines, describing Kismet, a robot with a face that expresses emotions. | |
The Nomad robot explores remote regions of Antarctica looking for meteorite samples. | |
2002 | iRobot's Roomba autonomously vacuums the floor while navigating and avoiding obstacles. |
2004 | OWL Web Ontology Language W3C Recommendation (10 February 2004). |
DARPA introduces the DARPA Grand Challenge requiring competitors to produce autonomous vehicles for prize money. | |
NASA's robotic exploration rovers Spirit and Opportunity autonomously navigate the surface of Mars. | |
2005 | Honda's ASIMO robot, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in restaurant settings. |
Recommendation technology based on tracking web activity or media usage brings AI to marketing. See TiVo Suggestions. | |
Blue Brain is born, a project to simulate the brain at molecular detail.[94] | |
2006 | The Dartmouth Artificial Intelligence Conference: The Next 50 Years (AI@50) AI@50 (14–16 July 2006) |
2007 | Philosophical Transactions of the Royal Society, B – Biology, one of the world's oldest scientific journals, puts out a special issue on using AI to understand biological intelligence, titled Models of Natural Action Selection[95] |
Checkers is solved by a team of researchers at the University of Alberta. | |
DARPA launches the Urban Challenge for autonomous cars to obey traffic rules and operate in an urban environment. | |
2008 | Cynthia Mason at Stanford presents her idea on Artificial Compassionate Intelligence, in her paper on "Giving Robots Compassion".[96] |
2009 | An LSTM trained by connectionist temporal classification[97] was the first recurrent neural network to win pattern recognition contests, winning three competitions in connected handwriting recognition.[98][9] |
2009 | Google builds autonomous car.[99] |
2010s
Date | Development |
---|---|
2010 | Microsoft launched Kinect for Xbox 360, the first gaming device to track human body movement, using just a 3D camera and infra-red detection, enabling users to play their Xbox 360 wirelessly. The award-winning machine learning for human motion capture technology for this device was developed by the Computer Vision group at Microsoft Research, Cambridge.[100][101] |
2011 | Mary Lou Maher and Doug Fisher organize the First AAAI Workshop on AI and Sustainability.[102] |
IBM's Watson computer defeated television game show Jeopardy! champions Brad Rutter and Ken Jennings. | |
2011–2014 | Apple's Siri (2011), Google's Google Now (2012) and Microsoft's Cortana (2014) are smartphone apps that use natural language to answer questions, make recommendations and perform actions. |
2012 | AlexNet, a deep learning model developed by Alex Krizhevsky, wins the ImageNet Large Scale Visual Recognition Challenge with half as many errors as the second-place winner.[103] This is a turning point in the history of AI; over the next few years dozens of other approaches to image recognition were abandoned in favor of deep learning.[104] Krizhevsky is among the first to use GPU chips to train a deep learning network.[105] |
2013 | Robot HRP-2 built by SCHAFT Inc of Japan, a subsidiary of Google, defeats 15 teams to win DARPA's Robotics Challenge Trials. HRP-2 scored 27 out of 32 points across eight tasks needed in disaster response: driving a vehicle, walking over debris, climbing a ladder, removing debris, walking through doors, cutting through a wall, closing valves and connecting a hose.[106] |
NEIL, the Never Ending Image Learner, is released at Carnegie Mellon University to constantly compare and analyze relationships between different images.[107] | |
2015 | Two techniques were developed concurrently to train very deep networks: the highway network[108] and the residual neural network (ResNet).[109] They allowed networks with over 1,000 layers to be trained. |
In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI.[110][111] | |
In July 2015, an open letter to ban development and use of autonomous weapons was signed by Hawking, Musk, Wozniak and 3,000 researchers in AI and robotics.[112] | |
Google DeepMind's AlphaGo (version: Fan)[113] defeated three-time European Go champion 2 dan professional Fan Hui by 5 games to 0.[114] | |
2016 | Google DeepMind's AlphaGo (version: Lee)[113] defeated Lee Sedol 4–1. Lee Sedol is a 9 dan professional Korean Go champion who won 27 major tournaments from 2002 to 2016.[115] |
2017 | Asilomar Conference on Beneficial AI was held, to discuss AI ethics and how to bring about beneficial AI while avoiding the existential risk from artificial general intelligence. |
Deepstack[116] is the first published algorithm to beat human players in imperfect information games, as shown with statistical significance on heads-up no-limit poker. Soon after, the poker AI Libratus, developed by a different research group, individually defeated each of its four human opponents—among the best players in the world—at an exceptionally high aggregate win rate, over a statistically significant sample.[117] In contrast to chess and Go, poker is an imperfect information game.[118] | |
In May 2017, Google DeepMind's AlphaGo (version: Master) beat Ke Jie, who at the time had continuously held the world No. 1 ranking for two years,[119][120] winning each game in a three-game match during the Future of Go Summit.[121][122] | |
A propositional logic boolean satisfiability problem (SAT) solver proves a long-standing mathematical conjecture on Pythagorean triples over the set of integers. The initial proof, 200TB long, was checked by two independent certified automatic proof checkers.[123] | |
An OpenAI bot using machine learning played at The International 2017 Dota 2 tournament in August 2017. It won during a 1v1 demonstration game against professional Dota 2 player Dendi.[124] | |
Google Lens, an image analysis and comparison tool released in October 2017, associates millions of landscapes, artworks, products and species with their text descriptions. | |
Google DeepMind revealed that AlphaGo Zero—an improved version of AlphaGo—displayed significant performance gains while using far fewer tensor processing units than AlphaGo Lee (it used the same number of TPUs as AlphaGo Master).[113] Unlike previous versions, which learned the game by observing millions of human moves, AlphaGo Zero learned by playing only against itself. The system then defeated AlphaGo Lee 100 games to zero, and defeated AlphaGo Master 89 to 11.[113] Although unsupervised learning is a step forward, much has yet to be learned about general intelligence.[125] AlphaZero masters chess in four hours, defeating the best chess engine, Stockfish 8. AlphaZero won 28 out of 100 games, and the remaining 72 games ended in a draw. | |
The Transformer architecture was invented, leading to new kinds of large language models such as BERT by Google, followed by the generative pre-trained transformer (GPT) type of model introduced by OpenAI. A minimal sketch of the attention mechanism at its core follows this table. | |
2018 | Alibaba's language processing AI outscores top humans at a Stanford University reading comprehension test, scoring 82.44 against 82.304 on a set of 100,000 questions.[126] |
The European Lab for Learning and Intelligent Systems (aka Ellis) proposed as a pan-European competitor to American AI efforts, with the aim of staving off a brain drain of talent, along the lines of CERN after World War II.[127] | |
Announcement of Google Duplex, a service to allow an AI assistant to book appointments over the phone. The Los Angeles Times judges the AI's voice to be a "nearly flawless" imitation of human-sounding speech.[128] | |
2019 | DeepMind's AlphaStar reaches Grandmaster level at StarCraft II, outperforming 99.8 percent of human players.[129] |
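The 2017 Transformer entry above refers to the attention mechanism. The sketch below computes single-head scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d)) V, on random toy data; the learned projection matrices, multiple heads and the rest of the Transformer are omitted, so this is an illustrative fragment rather than a full model.

```python
# Illustrative sketch only: single-head scaled dot-product attention, the
# core operation of the Transformer architecture, on random toy data.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of the values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                              # 4 tokens, 8-dimensional
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))

out = scaled_dot_product_attention(Q, K, V)
print(out.shape)                                     # (4, 8): one vector per token
```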
2020s
Date | Development |
---|---|
2020 | In February 2020, Microsoft introduces its Turing Natural Language Generation (T-NLG), which is the "largest language model ever published at 17 billion parameters".[130] |
In November 2020, AlphaFold 2 by DeepMind, a model that performs predictions of protein structure, wins the CASP competition.[131] | |
OpenAI introduces GPT-3, a state-of-the-art autoregressive language model that uses deep learning to produce computer code, poetry and other outputs across a variety of language tasks that are exceptionally similar to, and almost indistinguishable from, those written by humans. Its capacity was ten times greater than that of the T-NLG. It was introduced in May 2020,[132] and was in beta testing in June 2020. | |
2022 | ChatGPT, an AI chatbot developed by OpenAI, debuts in November 2022. It is initially built on top of the GPT-3.5 large language model. While it gains considerable praise for the breadth of its knowledge base, deductive abilities, and the human-like fluidity of its natural language responses,[133][134] it also garners criticism for, among other things, its tendency to "hallucinate",[135][136] a phenomenon in which an AI responds with factually incorrect answers with high confidence. The release triggers widespread public discussion on artificial intelligence and its potential impact on society.[137][138] |
A November 2022 class action lawsuit against Microsoft, GitHub and OpenAI alleges that GitHub Copilot, an AI-powered code editing tool trained on public GitHub repositories, violates the copyrights of the repositories' authors, noting that the tool is able to generate source code which matches its training data verbatim, without providing attribution.[139] | |
2023 | By January 2023, ChatGPT has more than 100 million users, making it the fastest-growing consumer application to date.[140] |
On January 16, 2023, three artists, Sarah Andersen, Kelly McKernan, and Karla Ortiz, file a class-action copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists by training AI tools on five billion images scraped from the web without the consent of the original artists.[141] | |
On January 17, 2023, Stability AI is sued in London by Getty Images for using its images in their training data without purchasing a license.[142][143] | |
Getty files another suit against Stability AI in a US district court in Delaware on February 6, 2023. In the suit, Getty again alleges copyright infringement for the use of its images in the training of Stable Diffusion, and further argues that the model infringes Getty's trademark by generating images with Getty's watermark.[144] | |
OpenAI's GPT-4 model is released in March 2023 and is regarded as an impressive improvement over GPT-3.5, with the caveat that GPT-4 retains many of the same problems of the earlier iteration.[145] Unlike previous iterations, GPT-4 is multimodal, allowing image input as well as text. GPT-4 is integrated into ChatGPT as a subscriber service. OpenAI claims that in their own testing the model received a score of 1410 on the SAT (94th percentile),[146] 163 on the LSAT (88th percentile), and 298 on the Uniform Bar Exam (90th percentile).[147] | |
On March 7, 2023, Nature Biomedical Engineering writes that "it is no longer possible to accurately distinguish" human-written text from text created by large language models, and that "It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time."[148] | |
In response to ChatGPT, Google releases in a limited capacity its chatbot Google Bard, based on the LaMDA and PaLM large language models, in March 2023.[149][150] | |
On March 29, 2023, a petition of over 1,000 signatures is signed by Elon Musk, Steve Wozniak and other tech leaders, calling for a 6-month halt to what the petition refers to as "an out-of-control race" producing AI systems that its creators can not "understand, predict, or reliably control".[151][152] | |
In May 2023, Google makes an announcement regarding Bard's transition from LaMDA to PaLM2, a significantly more advanced language model.[153] | |
In the last week of May 2023, a Statement on AI Risk is signed by Geoffrey Hinton, Sam Altman, Bill Gates, and many other prominent AI researchers and tech leaders with the following succinct message: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[154][155] | |
On July 9, 2023, Sarah Silverman files a class action lawsuit against Meta and OpenAI for copyright infringement for training their large language models on millions of authors' copyrighted works without permission.[156] | |
In August, 2023, the New York Times, CNN, Reuters, the Chicago Tribune, Australian Broadcasting Corporation (ABC) and other news companies block OpenAI's GPTBot web crawler from accessing their content, while the New York Times also updates its terms of service to disallow the use of its content in large language models.[157] | |
On September 13, 2023, in a serious response to growing anxiety about the dangers of AI, the US Senate holds the inaugural bipartisan "AI Insight Forum", bringing together senators, CEOs, civil rights leaders and other industry reps, to further familiarize senators with the nature of AI and its risks, and to discuss needed safeguards and legislation.[158] The event is organized by Senate Majority Leader Chuck Schumer (D-NY),[159] and is chaired by U.S. Senator Martin Heinrich (D-N.M.), Founder and co-chair of the Senate AI Caucus.[160] Reflecting the importance of the meeting, the forum is attended by over 60 senators,[161] as well as Elon Musk (Tesla CEO), Mark Zuckerberg (Meta CEO), Sam Altman (OpenAI CEO), Sundar Pichai (Alphabet CEO), Bill Gates (Microsoft co-founder), Satya Nadella (Microsoft CEO), Jensen Huang (Nvidia CEO), Arvind Krishna (IBM CEO), Alex Karp (Palantir CEO), Charles Rivkin (chairman and CEO of the MPA), Meredith Stiehm (president of the Writers Guild of America West), Liz Shuler (AFL-CIO President), and Maya Wiley (CEO of the Leadership Conference on Civil and Human Rights), among others.[158][159][161] | |
On October 30, 2023, US President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.[162][163] | |
In November 2023, the first global AI Safety Summit was held in Bletchley Park in the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.[164] 28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence.[165][166] | |
Google releases Gemini 1.0 Ultra. | |
2024 | On February 15, 2024, Google releases Gemini 1.5 in limited beta, capable of context length up to 1 million tokens. |
Also, on February 15, 2024, OpenAI publicly announces Sora, a text-to-video model for generating videos up to a minute long. | |
Google DeepMind unveils DNA prediction software AlphaFold which helps to identify cancer and genetic diseases. | |
On February 22, Stability AI announces Stable Diffusion 3, using a similar architecture to Sora. | |
On June 10, Apple announced "Apple Intelligence" which incorporates ChatGPT into new iPhones and Siri. | |
On October 9, Co-founder and CEO of Google DeepMind and Isomorphic Labs Sir Demis Hassabis, and Google DeepMind Director Dr. John Jumper were co-awarded the 2024 Nobel Prize in Chemistry for their work developing AlphaFold, a groundbreaking AI system that predicts the 3D structure of proteins from their amino acid sequences. |
Notes
[edit]- ^ Please see Mechanical calculator#Other calculating machines
- ^ Please see: Pascal's calculator#Competing designs
References
- ^ McCorduck 2004, pp. 4–5.
- ^ McCorduck 2004, pp. 4–5.
- ^ Needham 1986, p. 53.
- ^ Richard McKeon, ed. (1941). The Organon. Random House with Oxford University Press.
- ^ Giles, Timothy (2016). "Aristotle Writing Science: An Application of His Theory". Journal of Technical Writing and Communication. 46: 83–104. doi:10.1177/0047281615600633. S2CID 170906960.
- ^ Russell & Norvig 2021, p. 6.
- ^ Russell & Norvig 2021, p. 7.
- ^ McCorduck 2004, p. 6
- ^ Schmidhuber 2022.
- ^ Russell & Norvig 2021, p. 341.
- ^ O'Connor, Kathleen Malone (1994), The alchemical creation of life (takwin) and other concepts of Genesis in medieval Islam, University of Pennsylvania, pp. 1–435, archived from the original on 5 December 2019, retrieved 10 January 2007.
- ^ Hill, Donald R., ed. (1979) [9th century]. The Book of Ingenious Devices. Dordrecht, Netherlands; Boston; London: D. Reidel. ISBN 978-90277-0-833-5.
- ^ Russell & Norvig 2021, p. 9.
- ^ A Thirteenth Century Programmable Robot Archived 19 December 2007 at the Wayback Machine
- ^ McCorduck 2004, pp. 10–12, 37; Russell & Norvig 2021, p. 6
- ^ McCorduck 2004, pp. 13–14.
- ^ McCorduck 2004, pp. 14–15, Buchanan 2005, p. 50
- ^ Sir Francis Bacon (1620). The New Organon: Novum Organum Scientiarum.
- ^ Sir Francis Bacon (2000). Francis Bacon: The New Organon (Cambridge Texts in the History of Philosophy). Cambridge University Press.
- ^ Russell & Norvig 2021, p. 6
- ^ McCorduck 2004, p. 42.
- ^ Russell & Norvig 2021, p. 6; McCorduck 2004, p. 26
- ^ Russell & Norvig 2021, p. 6; McCorduck 2004, pp. 36–40
- ^ Russell & Norvig 2021, p. 8.
- ^ McCorduck 2004, pp. 41–42.
- ^ Leibniz, Gottfried Wilhelm Freiherr von (1920). The Early Mathematical Manuscripts of Leibniz: Translated from the Latin Texts Published by Carl Immanuel Gerhardt with Critical and Historical Notes (Leibniz published the chain rule in a 1676 memoir). Open court publishing Company. ISBN 9780598818461.
- ^ Russell & Norvig 2021, p. 6; McCorduck 2004, pp. 41–42
- ^ Quoted in McCorduck 2004, p. 317
- ^ Russell & Norvig 2021, p. 10.
- ^ McCorduck 2004, pp. 43.
- ^ McCorduck 2004, p. 17.
- ^ Adrien-Marie Legendre (1805). Nouvelles méthodes pour la détermination des orbites des comètes (in French). Ghent University. F. Didot.
- ^ Stigler, Stephen M. (1981). "Gauss and the Invention of Least Squares". Ann. Stat. 9 (3): 465–474. doi:10.1214/aos/1176345451.
- ^ Stigler, Stephen M. (1986). The History of Statistics: The Measurement of Uncertainty before 1900. Cambridge: Harvard. ISBN 0-674-40340-1.
- ^ Russell & Norvig (2021, p. 15); Razy, C. (1913), p.120.
- ^ McCorduck 2004, pp. 19–25.
- ^ Russell & Norvig 2021, p. 15; McCorduck 2004, pp. 26–34
- ^ Cambier, Hubert (June 2016). "The Evolutionary Meaning of World 3". Philosophy of the Social Sciences. 46 (3): 242–264. doi:10.1177/0048393116641609. ISSN 0048-3931. S2CID 148093595.
- ^ Russell & Norvig 2021, p. 8; McCorduck 2004, pp. 48–51
- ^ Project Gutenberg eBook of Erewhon by Samuel Butler. Archived 30 April 2021 at the Wayback Machine
- ^ Linsky & Irvine 2022, p. 2.
- ^ McCorduck 2004, pp. 59–60
- ^ Randell, Brian. "From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate, Torres, and Bush" (PDF). Archived from the original (PDF) on 21 September 2013. Retrieved 9 September 2013.
- ^ McCorduck 2004, p. 25
- ^ Brush, Stephen G. (1967). "History of the Lenz-Ising Model". Reviews of Modern Physics. 39 (4): 883–893. Bibcode:1967RvMP...39..883B. doi:10.1103/RevModPhys.39.883.
- ^ Amari, Shun-Ichi (1972). "Learning patterns and pattern sequences by self-organizing nets of threshold elements". IEEE Transactions. C (21): 1197–1206.
- ^ Church, A. (1936). "An unsolvable problem of elementary number theory (first presented on 19 April 1935 to the American Mathematical Society)". American Journal of Mathematics. 58 (2): 345–363. doi:10.2307/2371045. JSTOR 2371045.
- ^ K. Zuse (1936). Verfahren zur selbsttätigen Durchführung von Rechnungen mit Hilfe von Rechenmaschinen. Patent application Z 23 139 / GMD Nr. 005/021, 1936.
- ^ Turing, Alan Mathison (12 November 1936). "On computable numbers, with an application to the Entscheidungsproblem" (PDF). Proceedings of the London Mathematical Society. 58: 230–265. doi:10.1112/plms/s2-42.1.230. S2CID 73712.
- ^ McCorduck 2004, pp. 61–62 and see also The Life and Work of Konrad Zuse
- ^ McCorduck (2004, pp. 55–56); Russell & Norvig (2021, p. 17)
- ^ Copeland, J (Ed.) (2004). The Essential Turing: the ideas that gave birth to the computer age. Oxford: Clarendon Press. ISBN 0-19-825079-7.
- ^ Russell & Norvig 2021, p. 17.
- ^ Crevier (1993, pp. 22–25); Russell & Norvig (2021, pp. 18–19)
- ^ Russell & Norvig 2021, p. 155.
- ^ Russell & Norvig 2021, p. 1007.
- ^ Samuel (1959); Russell & Norvig (2021, p. 17)
- ^ Russell & Norvig 2021, p. 19.
- ^ Schaeffer, Jonathan. One Jump Ahead: Challenging Human Supremacy in Checkers, 1997, 2009, Springer, ISBN 978-0-387-76575-4. Chapter 6.
- ^ Russell & Norvig (2021, p. 18)
- ^ Novet, Jordan (17 June 2017). "Everyone keeps talking about A.I.—here's what it really is and why it's so hot now". CNBC. Archived from the original on 16 February 2018. Retrieved 16 February 2018.
- ^ McCorduck 2004, pp. 123–125, Crevier 1993, pp. 44–46 and Russell & Norvig 2021, p. 18
- ^ Quoted in Crevier 1993, p. 46 and Russell & Norvig 2021, p. 18
- ^ "Minds, Machines and Gödel". Users.ox.ac.uk. Archived from the original on 19 August 2007. Retrieved 24 November 2008.
- ^ Feigenbaum, Edward; Feldman, Julian, eds. (1963). Computers and thought : a collection of articles (1 ed.). New York: McGraw-Hill. OCLC 593742426.
- ^ "This week in The History of AI at AIWS.net – Edward Feigenbaum and Julian Feldman published "Computers and Thought"". AIWS.net. Archived from the original on 24 April 2022. Retrieved 5 May 2022.
- ^ "Feigenbaum & Feldman Issue "Computers and Thought," the First Anthology on Artificial Intelligence". History of Information. Archived from the original on 5 May 2022. Retrieved 5 May 2022.
- ^ Feigenbaum, Edward A.; Feldman, Julian (1963). Computers and Thought. McGraw-Hill, Inc. ISBN 9780070203709. Archived from the original on 5 May 2022. Retrieved 5 May 2022 – via Association for Computing Machinery Digital Library.
- ^ Ivakhnenko, A. G. (1973). Cybernetic Predicting Devices. CCM Information Corporation.
- ^ Ivakhnenko, A. G.; Grigorʹevich Lapa, Valentin (1967). Cybernetics and forecasting techniques. American Elsevier Pub. Co.
- ^ "The Machine Intelligence series". www.cs.york.ac.uk. Archived from the original on 5 November 1999.
- ^ Amari, Shun'ichi (1967). "A theory of adaptive pattern classifiers". IEEE Transactions on Electronic Computers. EC-16: 279–307.
- ^ Grosz, Barbara J.; Hajicova, Eva; Joshi, Aravind (2015). "Jane J. Robinson". Computational Linguistics. 41 (4): 723–726. doi:10.1162/COLI_a_00235. Retrieved 23 January 2024.
- ^ Linnainmaa, Seppo (1970). Algoritmin kumulatiivinen pyöristysvirhe yksittäisten pyöristysvirheiden Taylor-kehitelmänä [The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors] (PDF) (Thesis) (in Finnish). pp. 6–7.
- ^ "The Boyer-Moore Theorem Prover". Archived from the original on 23 September 2015. Retrieved 15 March 2015.
- ^ Grosz, Barbara; Sidner, Candace L. (1986). "Attention, Intentions, and the Structure of Discourse". Computational Linguistics. 12 (3): 175–204. Archived from the original on 10 September 2017. Retrieved 5 May 2017.
- ^ Harry Henderson (2007). "Chronology". Artificial Intelligence: Mirrors for the Mind. NY: Infobase Publishing. ISBN 978-1-60413-059-1. Archived from the original on 15 March 2023. Retrieved 11 April 2015.
- ^ "EmeraldInsight". Archived from the original on 2 February 2014. Retrieved 15 March 2015.
- ^ Mead, Carver A.; Ismail, Mohammed (8 May 1989). Analog VLSI Implementation of Neural Systems (PDF). The Kluwer International Series in Engineering and Computer Science. Vol. 80. Norwell, MA: Kluwer Academic Publishers. doi:10.1007/978-1-4613-1639-8. ISBN 978-1-4613-1639-8. Archived (PDF) from the original on 6 November 2019. Retrieved 24 January 2020.
- ^ DART: Revolutionizing Logistics Planning
- ^ Stoker, Carol R. (1995). Wolfe, William J.; Chun, Wendell H. (eds.). "From Antarctica to space: use of telepresence and virtual reality in control of a remote underwater vehicle". Mobile Robots IX. 2352: 288. Bibcode:1995SPIE.2352..288S. doi:10.1117/12.198976. S2CID 128633069. Archived from the original on 17 July 2019. Retrieved 17 July 2019.
- ^ "ISX Corporation". Archived from the original on 5 September 2006. Retrieved 15 March 2015.
- ^ "DART overview". Archived from the original on 30 November 2006. Retrieved 24 July 2007.
- ^ Zadeh, Lotfi A. (March 1994). "Fuzzy Logic, Neural Networks, and Soft Computing". Communications of the ACM. 37 (3): 77–84.
- ^ "AAAI-first-ai-env-workshop.HTML". Archived from the original on 28 July 2019. Retrieved 28 July 2019.
- ^ "Ijcai-first-ai-env-workshop". Archived from the original on 28 July 2019. Retrieved 28 July 2019.
- ^ Jochem, Todd M.; Pomerleau, Dean A. "No Hands Across America Home Page". Archived from the original on 27 September 2019. Retrieved 20 October 2015.
- ^ Jochem, Todd. "Back to the Future: Autonomous Driving in 1995". Robotic Trends. Archived from the original on 29 December 2017. Retrieved 20 October 2015.
- ^ Hochreiter, Sepp; Schmidhuber, Jürgen (1 November 1997). "Long Short-Term Memory". Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. ISSN 0899-7667. PMID 9377276. S2CID 1915014.
- ^ "Semantic Web roadmap". W3.org. Archived from the original on 6 December 2003. Retrieved 24 November 2008.
- ^ Mason, Cindy; Sànchez-Marrè, Miquel (1999). "Binding Environmental Sciences and Artificial Intelligence". Environmental Modelling & Software. 14 (5): 335–337. Archived from the original on 15 March 2023. Retrieved 27 October 2021.
- ^ "BESAI - Homepage". Archived from the original on 4 July 2019. Retrieved 12 August 2019.
- ^ Kaelbling, Leslie Pack; Littman, Michael L; Cassandra, Anthony R. (1998). "Planning and acting in partially observable stochastic domains" (PDF). Artificial Intelligence. 101 (1–2): 99–134. doi:10.1016/s0004-3702(98)00023-x. Archived (PDF) from the original on 17 May 2017. Retrieved 5 May 2017.
- ^ "Bluebrain – EPFL". bluebrain.epfl.ch. Archived from the original on 19 March 2019. Retrieved 2 January 2009.
- ^ "Modelling natural action selection". Pubs.royalsoc.ac.uk. Archived from the original on 30 September 2007. Retrieved 24 November 2008.
- ^ "Giving Robots Compassion, C. Mason, Conference on Science and Compassion, Poster Session, Telluride, Colorado, 2012". ResearchGate. Retrieved 17 July 2019.
- ^ Graves, Alex; Fernández, Santiago; Gomez, Faustino; Schmidhuber, Juergen (2006). "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks". Proceedings of the International Conference on Machine Learning, ICML 2006: 369–376. CiteSeerX 10.1.1.75.6306.
- ^ Graves, Alex; Schmidhuber, Jürgen (2009). "Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks". In Bengio, Yoshua; Schuurmans, Dale; Lafferty, John; Williams, Chris K. I.; Culotta, Aron (eds.). Advances in Neural Information Processing Systems 22 (NIPS 2009), 7–10 December 2009, Vancouver, BC. Neural Information Processing Systems (NIPS) Foundation. pp. 545–552.
- ^ Fisher, Adam (18 September 2013). "Inside Google's Quest To Popularize Self-Driving Cars". Popular Science. Archived from the original on 22 September 2013. Retrieved 10 October 2013.
- ^ "Jamie Shotton at Microsoft Research". Microsoft Research. Archived from the original on 3 February 2016. Retrieved 3 February 2016.
- ^ "Human Pose Estimation for Kinect – Microsoft Research". Archived from the original on 3 February 2016. Retrieved 3 February 2016.
- ^ "AAAI Spring Symposium - AI and Design for Sustainability". Archived from the original on 29 July 2019. Retrieved 29 July 2019.
- ^ Christian (2020, p. 24); Russell & Norvig (2021, p. 26)
- ^ Wong (2023).
- ^ Christian 2020, p. 25.
- ^ "DARPA Robotics Challenge Trials". US Defense Advanced Research Projects Agency. Archived from the original on 11 June 2015. Retrieved 25 December 2013.
- ^ "Carnegie Mellon Computer Searches Web 24/7 To Analyze Images and Teach Itself Common Sense". Archived from the original on 3 July 2015. Retrieved 15 June 2015.
- ^ Srivastava, Rupesh Kumar; Greff, Klaus; Schmidhuber, Jürgen (2 May 2015). "Highway Networks". arXiv:1505.00387 [cs.LG].
- ^ He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2016). Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE. pp. 770–778. arXiv:1512.03385. doi:10.1109/CVPR.2016.90. ISBN 978-1-4673-8851-1.
- ^ Sparkes, Matthew (13 January 2015). "Top scientists call for caution over artificial intelligence". The Telegraph (UK). Retrieved 24 April 2015.
- ^ "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter". Future of Life Institute. Retrieved 14 September 2023.
- ^ Tegmark, Max. "Open Letter on Autonomous Weapons". Future of Life Institute. Archived from the original on 28 April 2016. Retrieved 25 April 2016.
- ^ a b c d Silver, David; Schrittwieser, Julian; Simonyan, Karen; Antonoglou, Ioannis; Huang, Aja; Guez, Arthur; Hubert, Thomas; Baker, Lucas; Lai, Matthew; Bolton, Adrian; Chen, Yutian; Lillicrap, Timothy; Fan, Hui; Sifre, Laurent; Driessche, George van den; Graepel, Thore; Hassabis, Demis (19 October 2017). "Mastering the game of Go without human knowledge" (PDF). Nature. 550 (7676): 354–359. Bibcode:2017Natur.550..354S. doi:10.1038/nature24270. ISSN 0028-0836. PMID 29052630. S2CID 205261034. Archived (PDF) from the original on 24 November 2020. Retrieved 27 September 2020.
- ^ Hassabis, Demis (27 January 2016). "AlphaGo: using machine learning to master the ancient game of Go". Google Blog. Archived from the original on 7 May 2016. Retrieved 25 April 2016.
- ^ Ormerod, David. "AlphaGo defeats Lee Sedol 4–1 in Google DeepMind Challenge Match". Go Game Guru. Archived from the original on 17 March 2016. Retrieved 25 April 2016.
- ^ Moravčík, Matej; Schmid, Martin; Burch, Neil; Lisý, Viliam; Morrill, Dustin; Bard, Nolan; Davis, Trevor; Waugh, Kevin; Johanson, Michael; Bowling, Michael (5 May 2017). "DeepStack: Expert-level artificial intelligence in heads-up no-limit poker". Science. 356 (6337): 508–513. arXiv:1701.01724. Bibcode:2017Sci...356..508M. doi:10.1126/science.aam6960. ISSN 0036-8075. PMID 28254783. S2CID 1586260.
- ^ "Libratus Poker AI Beats Humans for $1.76m; Is End Near?". PokerListings. 30 January 2017. Archived from the original on 17 March 2018. Retrieved 16 March 2018.
- ^ Solon, Olivia (30 January 2017). "Oh the humanity! Poker computer trounces humans in big step for AI". The Guardian. Archived from the original on 8 April 2018. Retrieved 19 March 2018.
- ^ "柯洁迎19岁生日 雄踞人类世界排名第一已两年" (in Chinese). May 2017. Archived from the original on 11 August 2017. Retrieved 4 September 2021.
- ^ "World's Go Player Ratings". 24 May 2017. Archived from the original on 1 April 2017. Retrieved 4 September 2021.
- ^ "Google's AlphaGo Continues Dominance With Second Win in China". Wired. 25 May 2017. Archived from the original on 27 May 2017. Retrieved 4 September 2021.
- ^ "After Win in China, AlphaGo's Designers Explore New AI". Wired. 27 May 2017. Archived from the original on 2 June 2017. Retrieved 4 September 2021.
- ^ "The Science of Brute Force". ACM Communications. August 2017. Archived from the original on 29 August 2017. Retrieved 5 October 2018.
- ^ "Dota 2". Openai Blog. 11 August 2017. Archived from the original on 11 August 2017. Retrieved 7 November 2017.
- ^ Greenemeier, Larry (18 October 2017). "AI versus AI: Self-Taught AlphaGo Zero Vanquishes Its Predecessor". Scientific American. Archived from the original on 19 October 2017. Retrieved 18 October 2017.
- ^ "Alibaba's AI Outguns Humans in Reading Test". 15 January 2018. Archived 17 January 2018 at the Wayback Machine.
- ^ Sample, Ian (23 April 2018). "Scientists plan huge European AI hub to compete with US". The Guardian (US ed.). Archived from the original on 24 April 2018. Retrieved 23 April 2018.
- ^ Pierson, David (2018). "Should people know they're talking to an algorithm? After a controversial debut, Google now says yes". Los Angeles Times. Archived from the original on 17 May 2018. Retrieved 17 May 2018.
- ^ Sample, Ian (2019). "AI becomes grandmaster in 'fiendishly complex' StarCraft II". The Guardian. Archived from the original on 29 December 2020. Retrieved 30 July 2021.
- ^ Sterling, Bruce (13 February 2020). "Web Semantics: Microsoft Project Turing introduces Turing Natural Language Generation (T-NLG)". Wired. ISSN 1059-1028. Archived from the original on 4 November 2020. Retrieved 31 July 2020.
- ^ Sample, Ian (2 December 2018). "Google's DeepMind predicts 3D shapes of proteins". The Guardian. Retrieved 19 July 2019.
- ^ Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla (22 July 2020). "Language Models are Few-Shot Learners". arXiv:2005.14165 [cs.CL].
- ^ Thompson, Derek (8 December 2022). "Breakthroughs of the Year". The Atlantic. Archived from the original on 15 January 2023. Retrieved 18 December 2022.
- ^ Scharth, Marcel (5 December 2022). "The ChatGPT chatbot is blowing people away with its writing skills. An expert explains why it's so impressive". The Conversation. Archived from the original on 19 January 2023. Retrieved 30 December 2022.
- ^ Rachini, Mouhamad (15 December 2022). "ChatGPT a 'landmark event' for AI, but what does it mean for the future of human labor and disinformation?". CBC. Archived from the original on 19 January 2023. Retrieved 18 December 2022.
- ^ Vincent, James (5 December 2022). "AI-generated answers temporarily banned on coding Q&A site Stack Overflow". The Verge. Archived from the original on 17 January 2023. Retrieved 5 December 2022.
- ^ Cowen, Tyler (6 December 2022). "ChatGPT Could Make Democracy Even More Messy". Bloomberg News. Archived from the original on 7 December 2022. Retrieved 6 December 2022.
- ^ "The Guardian view on ChatGPT: an eerily good human impersonator". The Guardian. 8 December 2022. Archived from the original on 16 January 2023. Retrieved 18 December 2022.
- ^ Vincent, James (8 November 2022). "The lawsuit that could rewrite the rules of AI copyright". The Verge. Retrieved 7 December 2022.
- ^ Milmo, Dan (2 February 2023). "ChatGPT reaches 100 million users two months after launch". The Guardian. ISSN 0261-3077. Archived from the original on 3 February 2023. Retrieved 3 February 2023.
- ^ Vincent, James (16 January 2023). "AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit". The Verge.
- ^ Korn, Jennifer (17 January 2023). "Getty Images suing the makers of popular AI art tool for allegedly stealing photos". CNN. Retrieved 22 January 2023.
- ^ "Getty Images Statement". newsroom.gettyimages.com/. CNN. 17 January 2023. Retrieved 24 January 2023.
- ^ Belanger, Ashley (6 February 2023). "Getty sues Stability AI for copying 12M photos and imitating famous watermark". Ars Technica.
- ^ Belfield, Haydn (25 March 2023). "If your AI model is going to sell, it has to be safe". Vox. Archived from the original on 28 March 2023. Retrieved 30 March 2023.
- ^ "SAT: Understanding Scores" (PDF). College Board. 2022. Archived (PDF) from the original on 16 March 2023. Retrieved 21 March 2023.
- ^ OpenAI (2023). "GPT-4 Technical Report". arXiv:2303.08774 [cs.CL].
- ^ "Prepare for truly useful large language models". Nature Biomedical Engineering. 7 (2): 85–86. 7 March 2023. doi:10.1038/s41551-023-01012-6. PMID 36882584. S2CID 257403466.
- ^ Elias, Jennifer (31 January 2023). "Google is asking employees to test potential ChatGPT competitors, including a chatbot called 'Apprentice Bard'". CNBC. Archived from the original on 2 February 2023. Retrieved 2 February 2023.
- ^ Elias, Jennifer (February 2023). "Google asks employees to rewrite Bard's bad responses, says the A.I. 'learns best by example'". CNBC. Archived from the original on 16 February 2023. Retrieved 16 February 2023.
- ^ Ortiz, Sabrina (29 March 2023). "Musk, Wozniak, and other tech leaders sign petition to halt further AI developments". ZD Net. Retrieved 13 September 2023.
- ^ "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 13 September 2023.
- ^ Lappalainen, Yrjo; Narayanan, Nikesh (14 June 2023). "Aisha: A Custom AI Library Chatbot Using the ChatGPT API". Journal of Web Librarianship. 17 (3): 37–58. doi:10.1080/19322909.2023.2221477. ISSN 1932-2909. S2CID 259470901.
- ^ "Statement on AI Risk AI experts and public figures express their concern about AI risk". Center for AI Safety. Retrieved 14 September 2023.
- ^ Edwards, Benj (30 May 2023). "OpenAI execs warn of "risk of extinction" from artificial intelligence in new open letter". Ars Technica. Retrieved 14 September 2023.
- ^ Queen, Jack (10 July 2023). "Sarah Silverman sues Meta, OpenAI for copyright infringement". Reuters. Retrieved 14 September 2023.
- ^ Bogle, Ariel (24 August 2023). "New York Times, CNN and Australia's ABC block OpenAI's GPTBot web crawler from accessing content". The Guardian. Retrieved 14 September 2023.
- ^ a b Johnson, Ted (13 September 2023). "Elon Musk Says "Something Good Will Come Of This" After Senate's AI Forum, Chuck Schumer Signals AI Legislation Coming "In The General Category Of Months" — Update". Deadline. Retrieved 13 September 2023.
- ^ a b Kang, Cecilia (13 September 2023). "In Show of Force, Silicon Valley Titans Pledge 'Getting This Right' With A.I." The New York Times. Retrieved 13 September 2023.
- ^ "Read Out: Heinrich Convenes First Bipartisan Senate AI Insight Forum". 13 September 2023. Retrieved 13 September 2023.
- ^ a b Feiner, Lauren (13 September 2023). "Elon Musk, Mark Zuckerberg, Bill Gates and other tech leaders in closed Senate session about AI". CNBC. Retrieved 13 September 2023.
- ^ Morrison, Sara (31 October 2023). "President Biden's new plan to regulate AI. Now comes the hard part: Congress". Vox News. Retrieved 3 November 2023.
- ^ Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 30 October 2023, retrieved 3 November 2023
- ^ Milmo, Dan (3 November 2023). "Hope or Horror? The great AI debate dividing its pioneers". The Guardian Weekly. pp. 10–12.
- ^ "The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023". GOV.UK. 1 November 2023. Archived from the original on 1 November 2023. Retrieved 2 November 2023.
- ^ "Countries agree to safe and responsible development of frontier AI in landmark Bletchley Declaration". GOV.UK (Press release). Archived from the original on 1 November 2023. Retrieved 1 November 2023.
Sources
- Buchanan, Bruce G. (2005), "A (Very) Brief History of Artificial Intelligence" (PDF), AI Magazine, pp. 53–60, archived from the original (PDF) on 26 September 2007, retrieved 30 August 2007
- Christian, Brian (2020). The Alignment Problem: Machine learning and human values. W. W. Norton & Company. ISBN 978-0-393-86833-3. OCLC 1233266753.
- Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.
- Linsky, Bernard; Irvine, Andrew David (Spring 2022). Edward N. Zalta (ed.). "Principia Mathematica". The Stanford Encyclopedia of Philosophy.
- McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 978-1-56881-205-2
- Needham, Joseph (1986). Science and Civilization in China: Volume 2. Taipei: Caves Books Ltd.
- Russell, Stuart J.; Norvig, Peter (2021). Artificial Intelligence: A Modern Approach (4th ed.). Hoboken: Pearson. ISBN 978-0134610993. LCCN 20190474.
- Samuel, Arthur L. (July 1959), "Some studies in machine learning using the game of checkers", IBM Journal of Research and Development, 3 (3): 210–219, CiteSeerX 10.1.1.368.2254, doi:10.1147/rd.33.0210, S2CID 2126705, archived from the original on 3 March 2016, retrieved 20 August 2007
- Schmidhuber, Jürgen (2022). "Annotated History of Modern AI and Deep Learning".
- Wong, Matteo (19 May 2023), "ChatGPT Is Already Obsolete", The Atlantic
Further reading
- Berlinski, David (2000), The Advent of the Algorithm, Harcourt Books
- Brooks, Rodney (1990), "Elephants Don't Play Chess" (PDF), Robotics and Autonomous Systems, 6 (1–2): 3–15, CiteSeerX 10.1.1.588.7539, doi:10.1016/S0921-8890(05)80025-9, retrieved 30 August 2007
- Darrach, Brad (20 November 1970), "Meet Shakey, the First Electronic Person", Life Magazine, pp. 58–68
- Doyle, J. (1983), "What is rational psychology? Toward a modern mental philosophy", AI Magazine, vol. 4, no. 3, pp. 50–53
- Dreyfus, Hubert (1972), What Computers Can't Do, MIT Press
- Feigenbaum, Edward A.; McCorduck, Pamela (1983), The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World, Michael Joseph, ISBN 978-0-7181-2401-4
- Feigenbaum, Edward; Feldman, Julian, eds. (1963), Computers and thought (1 ed.), New York: McGraw-Hill, OCLC 593742426
- Hobbes (1651), Leviathan
- Hofstadter, Douglas (1980), Gödel, Escher, Bach: an Eternal Golden Braid
- Howe, J. (November 1994), Artificial Intelligence at Edinburgh University: a Perspective, retrieved 30 August 2007
- Kaplan, Andreas; Haenlein, Michael (2018), "Siri, Siri in my Hand, who's the Fairest in the Land? On the Interpretations, Illustrations and Implications of Artificial Intelligence", Business Horizons, 62: 15–25, doi:10.1016/j.bushor.2018.08.004, S2CID 158433736
- Kurzweil, Ray (2005), The Singularity is Near, Viking Press
- Lakoff, George (1987), Women, Fire, and Dangerous Things: What Categories Reveal About the Mind, University of Chicago Press, ISBN 978-0-226-46804-4
- Lenat, Douglas; Guha, R. V. (1989), Building Large Knowledge-Based Systems, Addison-Wesley
- Levitt, Gerald M. (2000), The Turk, Chess Automaton, Jefferson, N.C.: McFarland, ISBN 978-0-7864-0778-1
- Lighthill, Professor Sir James (1973), "Artificial Intelligence: A General Survey", Artificial Intelligence: a paper symposium, Science Research Council
- Lucas, John (1961), Minds, Machines and Gödel, archived from the original on 19 August 2007, retrieved 24 July 2007
- McCarthy, John; Minsky, Marvin; Rochester, Nathan; Shannon, Claude (1955), A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, archived from the original on 26 August 2007
- McCarthy, John; Hayes, P. J. (1969), "Some philosophical problems from the standpoint of artificial intelligence", Machine Intelligence, 4: 463–502
- McCulloch, W. S.; Pitts, W. (1943), "A logical calculus of the ideas immanent in nervous activity", Bulletin of Mathematical Biophysics, 5 (4): 115–127, doi:10.1007/BF02478259
- Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall
- Minsky, Marvin; Seymour Papert (1969), Perceptrons: An Introduction to Computational Geometry, The MIT Press
- Minsky, Marvin (1974), A Framework for Representing Knowledge, archived from the original on 7 January 2021, retrieved 27 December 2007
- Minsky, Marvin (1986), The Society of Mind, Simon and Schuster
- Moravec, Hans (1976), The Role of Raw Power in Intelligence
- Moravec, Hans (1988), Mind Children, Harvard University Press
- United States National Research Council (1999), "Developments in Artificial Intelligence", Funding a Revolution: Government Support for Computing Research, National Academy Press, retrieved 30 August 2007
- Newell, Allen; Simon, H. A. (1963), "GPS: A Program that Simulates Human Thought", in Feigenbaum, Edward; Feldman, Julian (eds.), Computers and Thought, New York: McGraw-Hill
- Newquist, HP (1994), The Brain Makers: Genius, Ego, And Greed In The Quest For Machines That Think, New York: Macmillan/SAMS, ISBN 978-0-9885937-1-8
- Pearl, J. (1988), Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, San Mateo, California: Morgan Kaufmann
- Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2
- Poole, David; Mackworth, Alan; Goebel, Randy (1998), Computational Intelligence: A Logical Approach, Oxford University Press, ISBN 978-0-19-510270-3
- Searle, John (1980), "Minds, Brains and Programs" (PDF), Behavioral and Brain Sciences, 3 (3): 417–457, doi:10.1017/S0140525X00005756, S2CID 55303721
- Simon, H. A.; Newell, Allen (1958), "Heuristic Problem Solving: The Next Advance in Operations Research", Operations Research, 6 (1): 1, doi:10.1287/opre.6.1.1
- Simon, H. A. (1965), The Shape of Automation for Men and Management, New York: Harper & Row
- Turing, Alan (1936–1937), "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, 2, s2-42 (42): 230–265, doi:10.1112/plms/s2-42.1.230, S2CID 73712
- Turing, Alan (October 1950), "Computing machinery and intelligence", Mind, LIX (236): 433–60, doi:10.1093/mind/LIX.236.433, archived from the original on 2 July 2008
- Weizenbaum, Joseph (1976), Computer Power and Human Reason, W.H. Freeman & Company
External links
[edit]- "The history of artificial intelligence: Complete AI timeline", Enterprise AI, TechTarget, 16 August 2023
- "Brief History (timeline)", AI Topics, Association for the Advancement of Artificial Intelligence