User:Veritas Aeterna/sandbox

From Wikipedia, the free encyclopedia

A sample citation.[1]

--Veritas Aeterna (talk) 23:17, 18 February 2012 (UTC)

Article 3

1. No State Party shall expel, return ("refouler") or extradite a person to another State where there are substantial grounds for believing that he would be in danger of being subjected to torture.

2. For the purpose of determining whether there are such grounds, the competent authorities shall take into account all relevant considerations including, where applicable, the existence in the State concerned of a consistent pattern of gross, flagrant or mass violations of human rights.


The fact we still see a positive imbalance despite the prolonged solar minimum isn't a surprise given what we've learned about the climate system...But it's worth noting, because this provides unequivocal evidence that the sun is not the dominant driver of global warming.[1]


Italics is a piped link to a section within another page.

Track I refers to CIA's Track I.

Track II refers to CIA's Track II.

Here I quote the Amazon reference.[2]

The extent of Kissinger's involvement in or support of these plans was a subject of controversy,[3] although Stephen Kinzer has opined[4] that "Kissinger would be more directly responsible for what happened in Chile than any other American, with the possible exception of Nixon himself."[5]: 177 

Chilean Socialist Party presidential candidate Salvador Allende was elected by a plurality in 1970, causing serious concern in Washington, D.C. due to his openly socialist and pro-Cuban politics. The Nixon administration authorized the Central Intelligence Agency (CIA) to encourage a military coup that would prevent Allende's inauguration, but the plan was not successful.[6] The extent of Kissinger's involvement in or support of these plans was a subject of controversy[3] until recently declassified papers, part of the Chile Declassification Project, established beyond doubt the key role he played in these covert plans. In Kornbluh's analysis of these documents, he says that "...in September of 1973 the Nixon Administration had achieved Kissinger's goal, enunciated in the fall of 1970, to create conditions which could lead to Allende's collapse or overthrow,"[1]: 115  and later that, as a result of his role in the covert actions, "Kissinger would become the first U.S. official to be 'Pinocheyed'—followed by the threat of legal proceedings from country to country."[1]: 495  Kinzer is more succinct: "Kissinger would be more directly responsible for what happened in Chile than any other American, with the possible exception of Nixon himself."[5]: 177 

References to Chapters in the Three Machine Learning Books

Machine learning

Symbolic machine learning approaches were investigated to address the knowledge acquisition bottleneck. One of the earliest is Meta-DENDRAL. Meta-DENDRAL used a generate-and-test technique to generate plausible rule hypotheses to test against spectra. Domain and task knowledge reduced the number of candidates tested to a manageable size. Feigenbaum described Meta-DENDRAL as

...the culmination of my dream of the early to mid-1960s having to do with theory formation. The conception was that you had a problem solver like DENDRAL that took some inputs and produced an output. In doing so, it used layers of knowledge to steer and prune the search. That knowledge got in there because we interviewed people. But how did the people get the knowledge? By looking at thousands of spectra. So we wanted a program that would look at thousands of spectra and infer the knowledge of mass spectrometry that DENDRAL could use to solve individual hypothesis formation problems. We did it. We were even able to publish new knowledge of mass spectrometry in the Journal of the American Chemical Society, giving credit only in a footnote that a program, Meta-DENDRAL, actually did it. We were able to do something that had been a dream: to have a computer program come up with a new and publishable piece of science.[7]
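
The generate-and-test loop described above can be sketched schematically. Everything in this sketch is an illustrative stand-in (the "candidates" are just integers), not Meta-DENDRAL's actual rule language:

```python
def generate_and_test(candidates, constraints, fits_data):
    """Generate-and-test: domain constraints prune the candidate pool
    to a manageable size before the expensive test against the data."""
    plausible = [c for c in candidates if all(ok(c) for ok in constraints)]
    return [c for c in plausible if fits_data(c)]

# Toy stand-ins: candidate "rules" are integers, domain knowledge keeps
# only even ones, and "testing against the spectra" is a divisibility check.
candidates = range(20)
constraints = [lambda c: c % 2 == 0]
fits_data = lambda c: c % 3 == 0
survivors = generate_and_test(candidates, constraints, fits_data)
```

The point is only the shape of the search: layers of knowledge steer and prune before the hypotheses are tested against data.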

In contrast to the knowledge-intensive approach of Meta-DENDRAL, Ross Quinlan invented a domain-independent approach to statistical classification, decision tree learning, starting first with ID3[8] and then later extending its capabilities to C4.5.[9] The decision trees created are glass-box classifiers, with human-interpretable classification rules.
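
ID3's core idea, greedily choosing the attribute whose split most reduces class entropy, can be sketched as follows (the toy weather data is invented for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(examples, labels, attribute):
    """Reduction in label entropy obtained by splitting on `attribute`.
    `examples` is a list of dicts mapping attribute names to values."""
    remainder = 0.0
    for v in {ex[attribute] for ex in examples}:
        subset = [lab for ex, lab in zip(examples, labels) if ex[attribute] == v]
        remainder += (len(subset) / len(labels)) * entropy(subset)
    return entropy(labels) - remainder

# Invented toy data: should we play outside?
examples = [
    {"outlook": "sunny", "windy": False},
    {"outlook": "sunny", "windy": True},
    {"outlook": "rain", "windy": False},
    {"outlook": "rain", "windy": True},
]
labels = ["no", "no", "yes", "yes"]

# ID3 greedily picks the attribute with the highest gain at each node,
# then recurses on each resulting subset.
best = max(["outlook", "windy"], key=lambda a: information_gain(examples, labels, a))
```

Here "outlook" separates the classes perfectly (gain 1 bit) while "windy" tells us nothing (gain 0), so ID3 would split on "outlook" first.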

Advances were made in understanding machine learning theory, too. Tom Mitchell introduced version space learning which describes learning as search through a space of hypotheses, with upper, more general, and lower, more specific, boundaries encompassing all viable hypotheses consistent with the examples seen so far.[10] More formally, Valiant introduced Probably Approximately Correct Learning (PAC Learning), a framework for the mathematical analysis of machine learning.[11]
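
A much-simplified sketch of version space learning follows, assuming conjunctive hypotheses over discrete attributes with '?' as a wildcard; it omits several cases of the full candidate elimination algorithm, and the weather-style data is invented:

```python
def matches(h, x):
    """A hypothesis matches an example if every position agrees or is '?'."""
    return all(hv == '?' or hv == xv for hv, xv in zip(h, x))

def generalize(h, x):
    """Minimally generalize specific hypothesis h to cover positive example x."""
    return tuple(hv if hv == xv else '?' for hv, xv in zip(h, x))

def specialize(g, x, domains):
    """Minimal specializations of general hypothesis g that exclude negative x."""
    return [g[:i] + (v,) + g[i + 1:]
            for i, gv in enumerate(g) if gv == '?'
            for v in domains[i] if v != x[i]]

def candidate_elimination(examples, domains):
    """S is the most specific hypothesis covering all positives; G holds
    maximally general hypotheses excluding every negative (batch version)."""
    positives = [x for x, label in examples if label]
    S = positives[0]
    G = {('?',) * len(domains)}
    for x, label in examples:
        if label:
            S = generalize(S, x)
            G = {g for g in G if matches(g, x)}
        else:
            G = {h for g in G
                 for h in ([g] if not matches(g, x) else specialize(g, x, domains))
                 if all(matches(h, p) for p in positives)}
    return S, G

domains = [("sunny", "rainy"), ("warm", "cold")]
examples = [(("sunny", "warm"), True), (("sunny", "cold"), True),
            (("rainy", "cold"), False)]
S, G = candidate_elimination(examples, domains)
```

On this data the lower boundary S and the upper boundary G converge to the single hypothesis ('sunny', '?'): the examples seen so far pin the concept down completely.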

Symbolic machine learning encompassed more than learning by example. For example, John Anderson provided a cognitive model of human learning in which skill practice results in the compilation of rules from a declarative format to a procedural format with his ACT-R cognitive architecture. A student might learn to apply "Supplementary angles are two angles whose measures sum to 180 degrees" as several different procedural rules: one rule might say that if X and Y are supplementary and you know X, then Y will be 180 - X. He called his approach "knowledge compilation". ACT-R has been used successfully to model aspects of human cognition, such as learning and retention. ACT-R is also used in intelligent tutoring systems, called cognitive tutors, to successfully teach geometry, computer programming, and algebra to school children.[12]
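
The supplementary-angles example can be made concrete. This tiny sketch (all function names invented, not ACT-R's notation) shows the one symmetric declarative fact being "compiled" into directional procedural rules:

```python
# Declarative fact: angles X and Y are supplementary iff X + Y == 180.
def is_supplementary(x, y):
    return x + y == 180

# "Knowledge compilation" (illustrative): the symmetric declarative fact
# becomes one directional procedural rule per quantity that may be unknown.
def solve_for_y(x):
    return 180 - x  # if X and Y are supplementary and X is known

def solve_for_x(y):
    return 180 - y  # if X and Y are supplementary and Y is known
```

For instance, solve_for_y(30) yields 150, and the pair (30, 150) satisfies the original declarative fact; the compiled rules apply directly, without re-deriving the result from the definition each time.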

Inductive logic programming was another approach to learning that allowed logic programs to be synthesized from input-output examples. For example, Ehud Shapiro's MIS (Model Inference System) could synthesize Prolog programs from examples.[13] John R. Koza applied genetic algorithms to program synthesis to create genetic programming, which he used to synthesize LISP programs. Finally, Manna and Waldinger provided a more general approach to program synthesis that synthesizes a functional program in the course of proving its specifications to be correct.[14]
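
Program synthesis from input-output examples can be illustrated with a brute-force enumerator. This is neither Shapiro's MIS nor Koza's genetic programming, just the simplest possible instance of the idea: enumerate small expressions until one fits every example.

```python
from itertools import product

def synthesize(examples, max_depth=2):
    """Enumerate small arithmetic expressions over `x` (leaves are x and the
    constants 0-3; operators are + and *) until one fits every example."""
    leaves = [("x", lambda x: x)] + \
             [(str(c), (lambda c: lambda x: c)(c)) for c in range(4)]

    def exprs(depth):
        if depth == 0:
            return leaves
        smaller = exprs(depth - 1)
        out = list(leaves)
        for (sa, fa), (sb, fb) in product(smaller, repeat=2):
            out.append((f"({sa} + {sb})",
                        (lambda fa, fb: lambda x: fa(x) + fb(x))(fa, fb)))
            out.append((f"({sa} * {sb})",
                        (lambda fa, fb: lambda x: fa(x) * fb(x))(fa, fb)))
        return out

    for depth in range(max_depth + 1):
        for text, fn in exprs(depth):
            if all(fn(x) == y for x, y in examples):
                return text
    return None

prog = synthesize([(0, 1), (1, 3), (2, 5)])  # examples fit the target 2*x + 1
```

Real synthesizers replace this blind enumeration with logical inference (MIS), evolutionary search (genetic programming), or deduction from a specification (Manna and Waldinger), but the contract is the same: examples or specifications in, a program out.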

As an alternative to logic, Roger Schank introduced case-based reasoning (CBR). The CBR approach outlined in his book, Dynamic Memory,[15] focuses first on remembering key problem-solving cases for future use and generalizing them where appropriate. When faced with a new problem, CBR retrieves the most similar previous case and adapts it to the specifics of the current problem.[16] As another alternative to logic, genetic algorithms and genetic programming are based on an evolutionary model of learning, in which sets of rules are encoded into populations, the rules govern the behavior of individuals, and selection of the fittest prunes out sets of unsuitable rules over many generations.[17]
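
The retrieve-and-adapt cycle of CBR can be sketched with a toy case base (all data invented); similarity here is simply the count of shared feature values, and adaptation just records what differs:

```python
def retrieve(case_base, problem):
    """Return the stored case sharing the most feature values with `problem`."""
    def similarity(case):
        return sum(case["features"].get(k) == v for k, v in problem.items())
    return max(case_base, key=similarity)

def adapt(case, problem):
    """Trivial adaptation: reuse the stored solution, recording the features
    of the new problem that the retrieved case does not match."""
    diffs = {k: v for k, v in problem.items()
             if case["features"].get(k) != v}
    return {"solution": case["solution"], "adapted_for": diffs}

# Invented toy case base.
case_base = [
    {"features": {"cuisine": "italian", "course": "main", "servings": 2},
     "solution": "pasta recipe"},
    {"features": {"cuisine": "french", "course": "dessert", "servings": 6},
     "solution": "souffle recipe"},
]
problem = {"cuisine": "italian", "course": "main", "servings": 6}
plan = adapt(retrieve(case_base, problem), problem)
```

A full CBR system would also revise the adapted solution after trying it and retain the outcome as a new case, closing the loop that Dynamic Memory describes.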

Symbolic machine learning was applied to learning concepts, rules, heuristics, and problem-solving. Approaches, other than those above, include:

  1. Learning from instruction or advice—i.e., taking human instruction, posed as advice, and determining how to operationalize it in specific situations. For example, in a game of Hearts, learning exactly how to play a hand to "avoid taking points."[18]
  2. Learning from exemplars—improving performance by accepting subject-matter expert (SME) feedback during training. When problem-solving fails, querying the expert to either learn a new exemplar for problem-solving or to learn a new explanation as to exactly why one exemplar is more relevant than another. For example, the program Protos learned to diagnose tinnitus cases by interacting with an audiologist.[19]
  3. Learning by analogy—constructing problem solutions based on similar problems seen in the past, and then modifying their solutions to fit a new situation or domain.[20]
  4. Apprentice learning systems—learning novel solutions to problems by observing human problem-solving. Domain knowledge explains why novel solutions are correct and how the solution can be generalized. LEAP learned how to design VLSI circuits by observing human designers.[21]
  5. Learning by discovery—i.e., creating tasks to carry out experiments and then learning from the results. Doug Lenat's Eurisko, for example, learned heuristics to beat human players at the Traveller role-playing game two years in a row.[22]
  6. Learning macro-operators—i.e., searching for useful macro-operators to be learned from sequences of basic problem-solving actions. Good macro-operators simplify problem-solving by allowing problems to be solved at a more abstract level.[23]
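
The last item can be illustrated directly: a macro-operator is just the composition of a sequence of primitive operators, applied thereafter as a single abstract step. The toy tuple operators below are invented for illustration:

```python
from functools import reduce

def make_macro(ops):
    """Compose a sequence of primitive operators into one macro-operator."""
    return lambda state: reduce(lambda s, op: op(s), ops, state)

# Invented primitive operators on a toy tuple state.
rotate = lambda s: s[1:] + s[:1]          # move the first element to the end
swap01 = lambda s: (s[1], s[0]) + s[2:]   # swap the first two elements

# A "learned" macro: rotate then swap, usable as a single search step.
rotate_then_swap = make_macro([rotate, swap01])
```

A macro-learning system searches for sequences like this that recur in successful solutions, so that later problem-solving can take one abstract step instead of many primitive ones.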


Anglo-American Bias

Track I


Some test text.

With bold and italics. And a citation, for example to Amazon,[2] and another one to a Wikipedia article.[24]

Citations

  1. ^ a b c d e f Kornbluh, Peter (2003). The Pinochet File. New York: The New Press. p. 171. ISBN 1-56584-936-1.
  2. ^ a b Test link to Amazon, just to try it out.
  3. ^ a b Alleged Assassination Plots Involving Foreign Leaders (1975), Church Committee, pages 246–247 and 250–254.
  4. ^ Anderson, John (1983). "Learning with ACT-R". In Michalski, Ryszard; Carbonell, Jaime; Mitchell, Tom (eds.). Machine learning : an artificial intelligence approach. Los Altos, Calif.: M. Kaufmann. ISBN 978-0-08-051054-5. OCLC 755005231.
  5. ^ a b Kinzer, Stephen (2006). Overthrow: America's Century of Regime Change from Hawaii to Iraq. New York: Times Books. ISBN 978-0-8050-8240-1.
  6. ^ "Church Report". U.S. Department of State. December 18, 1975. Retrieved 2006-11-20.
  7. ^ Cite error: The named reference Feignebaum Interview was invoked but never defined (see the help page).
  8. ^ Quinlan, J. Ross. "Chapter 15: Learning Efficient Classification Procedures and their Application to Chess End Games". In Michalski, Carbonell & Mitchell (1983).
  9. ^ Quinlan, J. Ross (1992-10-15). C4.5: Programs for Machine Learning (1st ed.). San Mateo, Calif: Morgan Kaufmann. ISBN 978-1-55860-238-0.
  10. ^ Mitchell, Tom M.; Utgoff, Paul E.; Banerji, Ranan. "Chapter 6: Learning by Experimentation: Acquiring and Refining Problem-Solving Heuristics". In Michalski, Carbonell & Mitchell (1983).
  11. ^ Valiant, L. G. (1984-11-05). "A theory of the learnable". Communications of the ACM. 27 (11): 1134–1142. doi:10.1145/1968.1972. ISSN 0001-0782. Retrieved 2022-08-19.
  12. ^ Koedinger, K. R.; Anderson, J. R.; Hadley, W. H.; Mark, M. A.; others (1997). "Intelligent tutoring goes to school in the big city". International Journal of Artificial Intelligence in Education (IJAIED). 8: 30–43. Retrieved 2012-08-18.
  13. ^ Shapiro, Ehud Y (1981). "The Model Inference System". Proceedings of the 7th international joint conference on Artificial intelligence. IJCAI. Vol. 2. p. 1064.
  14. ^ Manna, Zohar; Waldinger, Richard (1980-01-01). "A Deductive Approach to Program Synthesis". ACM Trans. Program. Lang. Syst. 2: 90–121. doi:10.1145/357084.357090.
  15. ^ Schank, Roger C. (1983-01-28). Dynamic Memory: A Theory of Reminding and Learning in Computers and People. Cambridge Cambridgeshire : New York: Cambridge University Press. ISBN 978-0-521-27029-8.
  16. ^ Hammond, Kristian J. (1989-04-11). Case-Based Planning: Viewing Planning as a Memory Task. Boston: Academic Press. ISBN 978-0-12-322060-8.
  17. ^ Koza, John R. (1992-12-11). Genetic Programming: On the Programming of Computers by Means of Natural Selection (1st ed.). Cambridge, Mass: A Bradford Book. ISBN 978-0-262-11170-6.
  18. ^ Mostow, David Jack. "Chapter 12: Machine Transformation of Advice into a Heuristic Search Procedure". In Michalski, Carbonell & Mitchell (1983).
  19. ^ Mitchell, Tom; Mahadevan, Sridhar; Steinberg, Louis. "Chapter 10: LEAP: A Learning Apprentice for VLSI Design". In Kodratoff & Michalski (1990), pp. 271–289.
  20. ^ Bareiss, Ray; Porter, Bruce; Wier, Craig. "Chapter 4: Protos: An Exemplar-Based Learning Apprentice". In Michalski, Carbonell & Mitchell (1986), pp. 112–139.
  21. ^ Mitchell, Tom; Mahadevan, Sridhar; Steinberg, Louis. "Chapter 10: LEAP: A Learning Apprentice for VLSI Design". In Kodratoff & Michalski (1990), pp. 271–289.
  22. ^ Lenat, Douglas. "Chapter 9: The Role of Heuristics in Learning by Discovery: Three Case Studies". In Michalski, Carbonell & Mitchell (1983), pp. 243-306.
  23. ^ Korf, Richard E. (1985). Learning to Solve Problems by Searching for Macro-Operators. Research Notes in Artificial Intelligence. Pitman Publishing. ISBN 0-273-08690-1.
  24. ^ Wikipedia policies to show how that works.

References

Michalski, Ryszard; Carbonell, Jaime; Mitchell, Tom, eds. (1983). Machine Learning : an Artificial Intelligence Approach. Vol. I. Palo Alto, Calif.: Tioga Publishing Company. ISBN 0-935382-05-4. OCLC 9262069.

Michalski, Ryszard; Carbonell, Jaime; Mitchell, Tom, eds. (1986). Machine Learning : an Artificial Intelligence Approach. Vol. II. Los Altos, Calif.: Morgan Kaufmann. ISBN 0-934613-00-1.

Kodratoff, Yves; Michalski, Ryszard, eds. (1990). Machine Learning : an Artificial Intelligence Approach. Vol. III. San Mateo, Calif.: Morgan Kaufmann. ISBN 0-934613-09-5. OCLC 893488404.