RAMnets
RAMnets is one of the oldest practical neurally inspired classification algorithms. It is also known as the "n-tuple recognition method" or a "weightless neural network".
Algorithm
Consider N sets of n distinct bit locations, selected at random; these are the n-tuples. The restriction of a pattern to an n-tuple can be regarded as an n-bit number which, together with the identity of the n-tuple, constitutes a "feature" of the pattern. The standard n-tuple recognizer operates simply as follows:
A pattern is classified as belonging to the class for which it has the most features in common with at least one training pattern of that class.
This is the β = 1 case of a more general rule whereby the class assigned to an unclassified pattern u is

\[ \arg\max_{c} \sum_{i=1}^{N} \Theta_{\beta}\left( \sum_{v \in D_c} \delta\left(\alpha_i(u), \alpha_i(v)\right) \right) \]

where D_c is the set of training patterns in class c, Θ_β(x) = x for x ≤ β and Θ_β(x) = β for x > β, δ(i, j) is the Kronecker delta (= 1 if i = j and 0 otherwise), and α_i(u) is the ith feature of the pattern u:

\[ \alpha_i(u) = \sum_{j=1}^{n} u_{\eta_i(j)} \, 2^{j-1} \]

Here u_k is the kth bit of u and η_i(j) is the jth bit location of the ith n-tuple.
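The feature-extraction step can be sketched in Python. This is an illustrative fragment, not a reference implementation; the names `make_tuples` and `feature` are invented here, and patterns are assumed to be lists of 0/1 bits:

```python
import random

def make_tuples(pattern_bits, N, n, seed=0):
    """Randomly select N n-tuples of distinct bit locations (the eta_i of the text)."""
    rng = random.Random(seed)
    return [rng.sample(range(pattern_bits), n) for _ in range(N)]

def feature(u, locs):
    """alpha_i(u): read the n bits of u at the tuple's locations as an n-bit number."""
    return sum(u[loc] << j for j, loc in enumerate(locs))

# Example: an 8-bit pattern and N = 3 tuples of n = 2 bits each.
u = [1, 0, 1, 1, 0, 0, 1, 0]
tuples = make_tuples(len(u), N=3, n=2, seed=42)
features = [feature(u, t) for t in tuples]  # one n-bit feature value per tuple
```

Each feature is a value in the range 0 … 2^n − 1, so with n = 2 each RAM node needs only four addressable locations.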
With C classes to distinguish, the system can be implemented as a network of N·C nodes, each of which is a random-access memory (RAM); hence the term RAMnet. The memory content at address α of the ith node allocated to class c is set to

\[ m_{c i \alpha} = \Theta_{\beta}\left( \sum_{v \in D_c} \delta\left(\alpha_i(v), \alpha\right) \right) \]
In the usual β = 1 case, the 1-bit content of m_{ciα} is set if any pattern of D_c has feature α_i = α and unset otherwise. Recognition is accomplished by summing the contents of the nodes of each class at the addresses given by the features of the unclassified pattern. That is, pattern u is assigned to class

\[ \arg\max_{c} \sum_{i=1}^{N} m_{c i \alpha_i(u)} \]
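The whole β = 1 scheme, training the node contents and recognizing by summed look-ups, can be sketched as follows. This is a minimal illustration under stated assumptions: patterns are Python lists of 0/1 bits, and a set of stored feature values stands in for each node's 1-bit RAM contents:

```python
from collections import defaultdict

def feature(u, locs):
    """alpha_i(u): the bits of u at locs, read as an n-bit number."""
    return sum(u[loc] << j for j, loc in enumerate(locs))

def train(patterns_by_class, tuples):
    """beta = 1 training: set the 1-bit content m[c][i][alpha] for every
    feature alpha exhibited by some training pattern of class c."""
    memory = defaultdict(set)  # memory[(c, i)] = feature values with content 1
    for c, patterns in patterns_by_class.items():
        for v in patterns:
            for i, locs in enumerate(tuples):
                memory[(c, i)].add(feature(v, locs))
    return memory

def classify(u, memory, classes, tuples):
    """Recognition: sum node contents at the addresses given by u's features."""
    scores = {c: sum(feature(u, locs) in memory[(c, i)]
                     for i, locs in enumerate(tuples)) for c in classes}
    return max(scores, key=scores.get)

# Toy example: 4-bit patterns, two classes, two 2-bit tuples.
tuples = [[0, 1], [2, 3]]
data = {"A": [[1, 1, 0, 0]], "B": [[0, 0, 1, 1]]}
mem = train(data, tuples)
# [1, 1, 0, 1] shares its first feature with class A's training pattern only.
```

Note that training is a single pass with no iterative weight updates, which is the main practical attraction of the method.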
RAM-discriminators and WiSARD
The RAMnets formed the basis of a commercial product known as WiSARD (Wilkie, Stonham and Aleksander Recognition Device), which was the first artificial neural network machine to be patented.
A RAM-discriminator consists of a set of X one-bit word RAMs with n inputs each and a summing device (Σ). Any such RAM-discriminator can receive a binary pattern of X⋅n bits as input. The RAM input lines are connected to the input pattern by means of a one-to-one (biunivocal) pseudo-random mapping. The summing device enables this network of RAMs to exhibit – just like other ANN models based on synaptic weights – generalization and noise tolerance.
To train the discriminator, one first sets all RAM memory locations to 0 and chooses a training set formed by binary patterns of X⋅n bits. For each training pattern, a 1 is stored in the memory location of each RAM addressed by that pattern. Once training is completed, the RAM memory contents hold a particular arrangement of 0s and 1s.
The information stored in the RAMs during the training phase is used to deal with previously unseen patterns. When one of these is given as input, the RAM memory contents addressed by the input pattern are read and summed by Σ. The number r thus obtained, called the discriminator response, is equal to the number of RAMs that output 1. r reaches its maximum X if the input belongs to the training set, and r = 0 if no n-bit component of the input pattern appears in the training set (not a single RAM outputs 1). Intermediate values of r express a kind of "similarity measure" of the input pattern with respect to the patterns in the training set.
A system formed by various RAM-discriminators is called WiSARD. Each RAM-discriminator is trained on a particular class of patterns, and classification by the multi-discriminator system is performed in the following way. When a pattern is given as input, each RAM-discriminator gives a response to that input. The various responses are evaluated by an algorithm which compares them and computes the relative confidence c of the highest response (e.g., the difference d between the highest response and the second highest response, divided by the highest response). A schematic representation of a RAM-discriminator and a 10 RAM-discriminator WiSARD is shown in Figure 1.[1]
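The multi-discriminator scheme can be sketched as follows. This is illustrative Python, not the original WiSARD implementation: the class and function names are invented here, the identity mapping stands in for the pseudo-random mapping so the example stays deterministic, and sets model the one-bit RAM contents:

```python
class Discriminator:
    """One RAM-discriminator: X RAMs of n inputs each, over X*n-bit patterns."""
    def __init__(self, num_rams, n, mapping):
        self.mapping = mapping  # permutation of bit positions (pseudo-random in practice)
        self.rams = [set() for _ in range(num_rams)]
        self.n = n

    def _addresses(self, pattern):
        bits = [pattern[k] for k in self.mapping]
        for i in range(len(self.rams)):
            chunk = bits[i * self.n:(i + 1) * self.n]
            yield i, sum(b << j for j, b in enumerate(chunk))

    def train(self, pattern):
        for i, addr in self._addresses(pattern):
            self.rams[i].add(addr)  # store a 1 at the addressed location

    def response(self, pattern):
        return sum(addr in self.rams[i] for i, addr in self._addresses(pattern))

def wisard_classify(pattern, discriminators):
    """Pick the class with the highest response; relative confidence
    c = (best - second best) / best. Assumes at least two discriminators."""
    responses = {c: d.response(pattern) for c, d in discriminators.items()}
    ranked = sorted(responses.values(), reverse=True)
    best_class = max(responses, key=responses.get)
    conf = (ranked[0] - ranked[1]) / ranked[0] if ranked[0] else 0.0
    return best_class, conf

# Toy example: 4-bit patterns, X = 2 RAMs of n = 2 inputs, one discriminator per class.
disc = {"A": Discriminator(2, 2, [0, 1, 2, 3]),
        "B": Discriminator(2, 2, [0, 1, 2, 3])}
disc["A"].train([1, 1, 0, 0])
disc["B"].train([0, 0, 1, 1])
label, conf = wisard_classify([1, 1, 0, 0], disc)
```

A low confidence value signals that two classes responded almost equally, which is useful for rejecting ambiguous inputs.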
See also
- Artificial neural network
- Kronecker delta
- Pattern Recognition
- Unsupervised learning
- Erlang distribution
- Machine learning
- Erlang (unit)
References
- ^ Verleysen, Michel, ed. (2009). Advances in Computational Intelligence and Learning: 17th European Symposium on Artificial Neural Networks (ESANN 2009), Bruges, Belgium, April 22–24, 2009. Evere: d-side. ISBN 978-2930307091. OCLC 553956424.
- Michal Morciniec and Richard Rohwer (1995). "The n-tuple Classifier: Too Good to Ignore"
- Hastie, Trevor; Tibshirani, Robert (2009). The Elements of Statistical Learning: Data mining, Inference, and Prediction. New York: Springer. pp. 485–586. doi:10.1007/978-0-387-84858-7_14. ISBN 978-0-387-84857-0.
- Hinton, Geoffrey; Sejnowski, Terrence J., eds. (1999). Unsupervised Learning: Foundations of Neural Computation. MIT Press. ISBN 0-262-58168-X. (This book focuses on unsupervised learning in neural networks)
- Y. Guan, J.G. Taylor, D. Gorse, T.G. Clarkson (1993). "Generalization in probabilistic RAM nets". IEEE Transactions on Neural Networks. 4 (2): 360–363. doi:10.1109/72.207603. PMID 18267737.
- A brief introduction to Weightless Neural Systems (2009)
Further reading
- N. M. Allinson, A. R. Kolcz (1997). N-Tuple Neural Networks. Boston, MA: Springer. ISBN 978-1-4615-6099-9.
- Fukunaga, Keinosuke (1990). Introduction to Statistical Pattern Recognition (2nd ed.). Boston: Academic Press. ISBN 0-12-269851-7.
- Hornegger, Joachim; Paulus, Dietrich W. R. (1999). Applied Pattern Recognition: A Practical Introduction to Image and Speech Processing in C++ (2nd ed.). San Francisco: Morgan Kaufmann Publishers. ISBN 3-528-15558-2.
- An introductory tutorial to classifiers (introducing the basic terms, with numeric example)