NETtalk (artificial neural network)

From Wikipedia, the free encyclopedia
[Figure: NETtalk structure.]

NETtalk is an artificial neural network that learns to pronounce written English text by being shown text as input and matching phonetic transcriptions for comparison.[1]

It is the result of research carried out in the mid-1980s by Terrence Sejnowski and Charles Rosenberg. The intent behind NETtalk was to construct a simplified model that might shed light on the complexity of learning human-level cognitive tasks, and to implement it as a connectionist model that could learn to perform a comparable task. The authors trained it by backpropagation.[2]

The network was trained on a large set of English words and their corresponding pronunciations, and was able to generate pronunciations for unseen words with a high level of accuracy. The success of NETtalk inspired further research in pronunciation generation and speech synthesis, and demonstrated the potential of neural networks for solving complex NLP problems. The output of the network was a stream of phonemes, which was fed into DECtalk to produce audible speech. It achieved popular success, appearing on the Today show.[3]: 115 

Training


The training dataset was a 20,008-word subset of the Brown Corpus, with a manually annotated phoneme and stress mark for each letter. The development process was described in a 1993 interview: it took three months (about 250 person-hours) to create the training dataset, but only a few days to train the network.[4][5]
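The per-letter annotation can be pictured as aligned strings. This is a hypothetical sketch only: the actual file layout and symbol conventions are documented in nettalk.names, and the phoneme and stress symbols below are assumptions.

```python
# Hypothetical sketch of one annotated entry: one phoneme symbol and one
# stress mark per letter. The symbols and the '-' (silent letter) convention
# are assumptions, not the dataset's actual notation.
entry = {
    "word":     "phone",
    "phonemes": "f-on-",   # 'ph' -> /f/, final 'e' silent (assumed)
    "stress":   ">1<<<",   # hypothetical stress/syllable marks
}

# One training example per letter: the letter with its target labels.
examples = list(zip(entry["word"], entry["phonemes"], entry["stress"]))
```

Pairing each letter with its own phoneme and stress label is what makes the task a letter-by-letter classification problem rather than a whole-word one.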

After the network was trained successfully on this corpus, the authors tried it on a phonological transcription of an interview with a young Latino boy from a barrio in Los Angeles. The resulting network reproduced his Spanish accent.[3]: 115 

The original NETtalk was implemented on a Ridge 32, which took 0.275 seconds per learning step (one forward and one backward pass). Training NETtalk became a benchmark for testing the efficiency of backpropagation implementations. For example, an implementation on the Connection Machine CM-1 (with 16,384 processors) achieved a 52x speedup, and an implementation on a 10-cell Warp a 340x speedup.[6][7]

The following table compiles the benchmark scores as of 1988.[6][7][8] Speed is measured in millions of connections per second (MCPS). For example, the original NETtalk on the Ridge 32 took 0.275 seconds per forward-backward pass, giving about 0.07 MCPS. Relative times are normalized to the MicroVax.

Performance comparison (as of 1988)

System              MCPS    Relative time
MicroVax            0.008       1
Sun 3/75            0.01        1.3
VAX-11/780          0.027       3.4
Sun 160 with FPA    0.034       4.2
DEC VAX 8600        0.06        7.5
Ridge 32            0.07        8.8
Convex C-1          1.8       225
16,384-core CM-1    2.6       325
Cray-2              7         860
65,536-core CM-1   13        1600
10-cell Warp       17        2100
10-cell iWarp      36        4500
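The MCPS figure for the original implementation can be reproduced from the numbers given elsewhere in this article (18,629 weights, 0.275 seconds per pass):

```python
# Millions of connections per second (MCPS) for the original Ridge 32 run:
# one learning step visits all 18,629 weights in 0.275 seconds.
weights = 18_629
seconds_per_pass = 0.275

mcps = weights / seconds_per_pass / 1e6
print(f"{mcps:.3f} MCPS")   # rounds to the 0.07 MCPS reported in the table
```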

Architecture


The network had three layers and 18,629 adjustable weights, large by the standards of 1986. There were worries that it would overfit the dataset, but it was trained successfully.[3]

The input layer has 203 units, divided into 7 groups of 29 units each. Each group is a one-hot encoding of one character. There are 29 possible characters: the 26 letters, comma, period, and word boundary (whitespace).
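A minimal sketch of this encoding follows; the ordering of characters within a group is an assumption, not the original layout.

```python
import numpy as np

# Sketch of the input encoding: a 7-letter window, each position one-hot
# over 29 symbols (26 letters + comma + period + word boundary).
# The symbol ordering here is an assumption.
ALPHABET = "abcdefghijklmnopqrstuvwxyz,. "   # ' ' stands for word boundary

def encode_window(window: str) -> np.ndarray:
    """One-hot encode a 7-character window into a 203-dimensional vector."""
    vec = np.zeros((7, 29))
    for i, ch in enumerate(window):
        vec[i, ALPHABET.index(ch)] = 1.0
    return vec.reshape(203)   # 7 groups of 29 units each

x = encode_window("  hello")
```

Exactly one unit per group is active, so every input vector has 7 ones among its 203 components.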

The hidden layer has 80 units.

The output has 26 units: 21 units encode articulatory features (point of articulation, voicing, vowel height, etc.) of phonemes, and 5 units encode stress and syllable boundaries.
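The layer sizes can be sketched as follows. The weights here are random placeholders rather than the trained network, and the sigmoid activations are an assumption of this sketch.

```python
import numpy as np

# Shape-only sketch of NETtalk's three layers:
# 203 input units -> 80 hidden units -> 26 output units.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(80, 203)), np.zeros(80)
W2, b2 = rng.normal(scale=0.1, size=(26, 80)), np.zeros(26)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)   # hidden layer, 80 units
    y = sigmoid(W2 @ h + b2)   # 21 articulatory-feature units + 5 stress units
    return y

y = forward(np.zeros(203))
```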

Sejnowski studied the learned representations in the network and found that phonemes that sound similar cluster together in representation space. The output of the network degrades, but remains understandable, when some hidden neurons are removed.[9]

Achievements and limitations


NETtalk was created to explore the mechanisms of learning to correctly pronounce English text. The authors note that learning to read involves a complex mechanism engaging many parts of the human brain. NETtalk does not model the image-processing and letter-recognition stages of the visual cortex; rather, it assumes that the letters have already been recognized, and that these letter sequences, comprising words, are shown to the network during training and performance testing. NETtalk's task is to learn the proper association between the correct pronunciation and a given sequence of letters, based on the context in which the letters appear. In other words, NETtalk learns to use the letters surrounding the one currently being pronounced as cues to its intended phonemic mapping.
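The seven-letter context window described under Architecture can be sketched as a sliding window padded with word-boundary spaces:

```python
# Slide a 7-character window across a word, centring each letter in turn.
# Word boundaries are padded with spaces, matching the whitespace boundary
# symbol in the input encoding.
def windows(word: str, size: int = 7):
    pad = " " * (size // 2)
    padded = pad + word + pad
    return [padded[i:i + size] for i in range(len(word))]

for w in windows("cat"):
    print(repr(w))
```

Each window supplies three letters of context on either side of the letter being pronounced, which is what lets the network disambiguate, for example, the different pronunciations of "c" in "cat" and "cent".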

References

  1. ^ Sejnowski, Terrence J.; Rosenberg, Charles R. (1987). "Parallel Networks that Learn to Pronounce English Text". Complex Systems. 1: 145–168.
  2. ^ Sejnowski, Terrence J.; Rosenberg, Charles R. (1987). "Parallel networks that learn to pronounce English text". Complex Systems. 1 (1): 145–168.
  3. ^ a b c Sejnowski, Terrence J. (2018). The deep learning revolution. Cambridge, Massachusetts London, England: The MIT Press. ISBN 978-0-262-03803-4.
  4. ^ Anderson, James A.; Rosenfeld, Edward, eds. (2000-02-28). Talking Nets: An Oral History of Neural Networks. The MIT Press. doi:10.7551/mitpress/6626.001.0001. ISBN 978-0-262-26715-1.
  5. ^ See the nettalk.names file in the original dataset archive. https://archive.ics.uci.edu/dataset/150/connectionist+bench+nettalk+corpus
  6. ^ a b Pomerleau; Gusciora; Touretzky; Kung (1988). "Neural network simulation at Warp speed: How we got 17 million connections per second". IEEE International Conference on Neural Networks. IEEE. pp. 143–150 vol.2. doi:10.1109/icnn.1988.23922. ISBN 0-7803-0999-5.
  7. ^ a b Borkar, S.; Cohn, R.; Cox, G.; Gleason, S.; Gross, T. (1988-11-01). "iWarp: an integrated solution of high-speed parallel computing". Proceedings of the 1988 ACM/IEEE Conference on Supercomputing. Supercomputing '88. Washington, DC, USA: IEEE Computer Society Press: 330–339. ISBN 978-0-8186-0882-7.
  8. ^ Blelloch, Guy; Rosenberg, Charles R. (1987-08-23). "Network learning on the connection machine". Proceedings of the 10th International Joint Conference on Artificial Intelligence - Volume 1. IJCAI'87. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.: 323–326.
  9. ^ "Learning, Then Talking". The New York Times. August 16, 1988. Retrieved November 4, 2024.