Artificial Intelligence/Neural Networks/Distributed Processing

Parallel Distributed Processing

In the 1970s, after Marvin Minsky showed that the perceptron could only compute linearly separable functions, the bloom came off neural networks. Having been hyped to near-miracle status, they struck many people as no longer a valid field of study. Neuroscientists knew better, but those who had hoped to earn money from the new systems abandoned them temporarily.
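The limitation is easy to demonstrate. Below is a minimal sketch in Python (assuming NumPy; the learning rate and epoch count are arbitrary choices) in which a single-layer perceptron learns the linearly separable AND function but can never fit XOR, since no single line separates XOR's two classes.

    import numpy as np

    def train_perceptron(X, y, epochs=100, lr=0.1):
        """Classic single-layer perceptron with a step activation."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, target in zip(X, y):
                pred = 1 if xi @ w + b > 0 else 0
                err = target - pred          # 0 when correct, +/-1 when wrong
                w += lr * err * xi
                b += lr * err
        return w, b

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

    # AND is linearly separable, so the perceptron converges:
    w, b = train_perceptron(X, np.array([0, 0, 0, 1]))
    print([1 if x @ w + b > 0 else 0 for x in X])   # [0, 0, 0, 1]

    # XOR is not linearly separable; no (w, b) can ever fit it:
    w, b = train_perceptron(X, np.array([0, 1, 1, 0]))
    print([1 if x @ w + b > 0 else 0 for x in X])   # never [0, 1, 1, 0]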

In the 1980s, however, David Rumelhart, James McClelland, and the PDP Research Group published work on Parallel Distributed Processing showing that neural networks could be seen as a method of decomposing complex tasks into simple procedures, each manageable with only a rudimentary processing element. Since that time, neural network research has split into two separate fields: Natural Neural Networks, and Parallel Distributed Processing, which is sometimes called Artificial Neural Networks.

The primary difference between these two fields is that Natural Neural Networks is limited to modelling real, biological neural networks, while Parallel Distributed Processing is free to make any changes it wants to the basic model in order to get better speed for the same process, or a better fit to a particular processing task. Some models, like Igor Aleksander's weightless neurons, seem so far from the standard neural network that they might be mistaken for an Artificial Neural Network; but because they are applied to the task of modelling a natural neural network, they fall within that school of thought, even though there is no reason to assume that natural neurons exist without synapses.
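To make the contrast concrete, here is a minimal sketch of a RAM-based weightless neuron in Python. It has no weights or synapses at all, only a lookup table addressed by the binary input pattern; the class name and one-shot training scheme are illustrative assumptions rather than Aleksander's exact formulation.

    class RAMNeuron:
        """A weightless neuron: a lookup table addressed by binary inputs."""

        def __init__(self, n_inputs):
            # One table entry per possible binary input pattern.
            self.table = [0] * (2 ** n_inputs)

        def _address(self, bits):
            # Interpret the input bits as a binary address into the table.
            addr = 0
            for b in bits:
                addr = (addr << 1) | b
            return addr

        def train(self, bits):
            # "Learning" is just writing a 1 at the pattern's address.
            self.table[self._address(bits)] = 1

        def fire(self, bits):
            return self.table[self._address(bits)]

    neuron = RAMNeuron(3)
    neuron.train([1, 0, 1])
    print(neuron.fire([1, 0, 1]))   # 1: this pattern was stored
    print(neuron.fire([0, 1, 1]))   # 0: this pattern was never seen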

Because the point of Parallel Distributed Processing is to decompose complex functions into smaller, easily processed chunks, the nature of the models changes: we no longer need both a neuron model and a network model, and the nature of the learning algorithm depends on the application.

Thus some Artificial Neural Networks lack neurons, some lack networks, and some have customized learning algorithms. The title of neural network seems somewhat tenuous in these cases, but the Parallel Distributed Processing people don't see calling their programs neural networks as a problem. Besides, enough of the original neural networks still exist to keep them current in the journals.

As an example, consider the SOM, or self-organizing map. Although this model is meant to simulate functions thought to occur in the visual cortex, its processing element owes more to topology than to neural models. Essentially, it reduces a high-dimensional topology to a relatively small, usually two-dimensional, topology while attempting to retain the topological features of the higher-dimensional structure.
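A minimal sketch of that idea in Python, assuming NumPy; the grid size, learning rate, and neighbourhood schedule here are arbitrary choices. A 10x10 grid of units is fitted to random three-dimensional points, so that units which are neighbours on the grid end up holding similar weight vectors.

    import numpy as np

    rng = np.random.default_rng(0)
    grid_h, grid_w, dim = 10, 10, 3
    weights = rng.random((grid_h, grid_w, dim))       # one weight vector per unit
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)  # grid positions

    data = rng.random((500, dim))                     # e.g. random RGB colours

    n_steps = 2000
    for t in range(n_steps):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the unit whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(dists.argmin(), dists.shape)
        # Learning rate and neighbourhood radius both decay over time.
        lr = 0.5 * (1 - t / n_steps)
        radius = max(1.0, (grid_h / 2) * (1 - t / n_steps))
        # Units near the BMU on the grid are pulled toward x; distant ones barely move.
        grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
        influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
        weights += lr * influence[..., None] * (x - weights)

    # Nearby grid units now hold similar vectors: the 3-D input topology
    # has been flattened onto the 2-D grid.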

It is still possible to see the neural network within this structure, even though the neurons have nothing to do with nerves, and the network is built to fit the topology rather than the topology being fitted to an existing network.