Artificial Intelligence/Neural Networks/Natural Neural Networks

Natural Neural Networks

The primary difference between a natural neural network and a distributed-processing analogue of a neural network is the attempt to capture, in the model, the function of a real neuron and the natural arrangements of real neurons. From the Hebbian model used in early perceptrons to modern models that attempt to capture the biochemical threads implementing different forms of memory within a single cell, the idea has always been to find a reasonable model of the neuron and, from implementations of that model, to learn how natural neural systems might work. It has been a long, hard road, and while neural networks have gained and lost prominence in A.I., neuroscientists have returned to the neural network model time and time again as the most ethical approach to learning about natural networks of neurons.

Unlike other forms of neuroscience, neural modeling does not kill animals to obtain their neurons, does not torture animals to see how they react, and does not involve real animals at all; instead it "tortures" recyclable electrons by making them flow through computer circuits. Since electrons are not thought to be alive, there is no ethical cause for concern, except perhaps the use of electricity, and since computers are so efficient, modeling is not a heavy consumer of it. In short, there is very little ethically wrong with torturing electrons.

The problem, then, becomes finding the best models for:

  1. Neurons
  2. Networks
  3. Learning Algorithms

Neuron Models

Neuron models can be classified in a number of ways:

For instance, some neuron models have binary outputs, a few have integer outputs, and most have floating-point outputs. Some have what are called spiking outputs. Some have outputs that are linear with respect to their inputs, and some have non-linear or dynamic outputs.

Another distinction is the formula used to calculate the output: some models use a simple summation over the synapses, some use a second-order calculation, and some have no synapses at all. Then there is the range function, which maps the summed input to the output value and can take several shapes depending on the type of neuron being modeled, and so on.
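
As a rough illustration, the following sketch computes a single neuron's output as a weighted sum over its synapses, passed through a selectable range function. The function names, weights, and parameter values here are illustrative, not drawn from any particular published model:

    import math

    def neuron_output(inputs, weights, bias=0.0, activation="sigmoid"):
        """Minimal point-neuron sketch: a weighted sum over synapses,
        mapped to an output by a chosen range function."""
        total = sum(x * w for x, w in zip(inputs, weights)) + bias

        if activation == "binary":      # threshold unit, perceptron-style
            return 1 if total > 0 else 0
        if activation == "linear":      # output proportional to summed input
            return total
        if activation == "sigmoid":     # smooth, bounded, non-linear output
            return 1.0 / (1.0 + math.exp(-total))
        raise ValueError("unknown range function")

    # Three synapses: two excitatory, one inhibitory (weights are illustrative)
    print(neuron_output([1.0, 0.5, 1.0], [0.8, 0.4, -0.6]))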

There are many additional complications, such as habituation and types of synapses beyond the vanilla inhibitory and excitatory kinds that can affect the way the cell operates. Then there is the question of whether dendritic complexity will be absorbed into the neuron model or externalized into the network model.

Network Models

There is also the question of whether the learning algorithm is part of the neuron model or part of the network model. It makes little sense to build a back-propagation network if the learning algorithm is internal to the neuron.

Network models include:

  * Feedforward models, where each layer feeds the next in succession
  * Feedback models, where at least a portion of a succeeding layer's output is fed back into the previous layer
  * Recurrent models, where even the same neuron can feed back into itself
  * Back-propagation models, where an "error" signal is fed back from the succeeding layer to "train" the previous layer

Hidden-layer models were conceived of as something of a sop to Marvin Minsky, a way of expanding the dynamics of a neural network without redesigning the neuron model. Essentially they capture the complexity of the dendritic mass in "hidden layers", since a branching of a dendrite is mathematically much the same as a soma, except for the output conditioning and the threshold to firing.
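
A minimal sketch of a feedforward network with one hidden layer, built from the same kind of weighted-sum neuron, might look like the following; the layer sizes and weight values are made up purely for illustration:

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def layer(inputs, weights):
        """Each row of 'weights' holds one neuron's synaptic weights."""
        return [sigmoid(sum(x * w for x, w in zip(inputs, row))) for row in weights]

    def feedforward(inputs, hidden_weights, output_weights):
        """Inputs feed a hidden layer, which in turn feeds the output layer."""
        hidden = layer(inputs, hidden_weights)
        return layer(hidden, output_weights)

    # Two inputs, two hidden neurons, one output neuron (weights are made up)
    print(feedforward([0.0, 1.0],
                      [[0.5, -0.4], [0.3, 0.8]],
                      [[0.7, -0.2]]))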

At first, network designs were static, but over time it became necessary to model networks that could grow new connections and prune existing ones to remove the excess.
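
One hedged way to picture growth and pruning is to periodically zero out connections whose weights have decayed to near nothing and to occasionally create new weak connections where none exist; the threshold and probability below are arbitrary placeholders, not values from any real model:

    import random

    def prune_and_grow(weights, prune_below=0.05, grow_prob=0.01):
        """Zero out connections that have decayed to near nothing, and
        occasionally grow a new weak connection where none exists."""
        result = []
        for row in weights:
            new_row = []
            for w in row:
                if abs(w) < prune_below:
                    w = 0.0                          # prune a weak connection
                if w == 0.0 and random.random() < grow_prob:
                    w = random.uniform(-0.1, 0.1)    # grow a new connection
                new_row.append(w)
            result.append(new_row)
        return result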

Learning Algorithms

The original learning rule, the Hebbian rule, was based on the idea that synapses get stronger the more they are used and weaken gradually when they are not.
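
A minimal sketch of such a rule, with illustrative learning and decay rates, strengthens a weight when pre- and post-synaptic activity coincide and lets it decay otherwise:

    def hebbian_update(weight, pre, post, rate=0.1, decay=0.01):
        """A synapse that is used (pre- and post-synaptic activity together)
        gets stronger; an unused synapse weakens gradually."""
        return weight + rate * pre * post - decay * weight

    w = 0.2
    for pre, post in [(1, 1), (1, 1), (0, 1), (0, 0)]:
        w = hebbian_update(w, pre, post)
        print(round(w, 3))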

A variation on this learning rule was the delta rule, which increased the rate of learning and responded to an "error" signal that had to be back-propagated in multi-layer networks. Because of its tendency to overlearn, delta-rule networks could only learn in supervised mode.
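
A sketch of the delta rule for a single linear neuron follows; the training patterns, learning rate, and number of passes are arbitrary, and the point is only that every update depends on a supplied target value, which is what makes the learning supervised:

    def delta_update(weights, inputs, target, output, rate=0.5):
        """Delta rule: each weight moves in proportion to the error
        (target - output) and the input arriving on that synapse."""
        error = target - output
        return [w + rate * error * x for w, x in zip(weights, inputs)]

    # Train a single linear neuron on two example patterns (supervised)
    weights = [0.0, 0.0]
    for _ in range(20):
        for inputs, target in [([0, 1], 0), ([1, 1], 1)]:
            output = sum(x * w for x, w in zip(inputs, weights))
            weights = delta_update(weights, inputs, target, output)
    print([round(w, 2) for w in weights])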

The latest cellular learning rule is the two-rule system, which involves a pre-synaptic rule and a post-synaptic rule that operate from within the neuron itself. Because it does not require back-propagation, it can be balanced so that it does not overlearn, and it can therefore be used for unsupervised learning.
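
The text does not spell out the two rules, so the following is only a stand-in: a local, Oja-style update in which a Hebbian pre/post term is balanced by a post-synaptic normalization term, keeping the weight bounded without any back-propagated error signal:

    def local_update(weight, pre, post, rate=0.05):
        """Oja-style local rule (a stand-in, not the model the text names):
        a Hebbian pre*post term minus a post-synaptic normalization term,
        so the weight stays bounded with no external error signal."""
        return weight + rate * (pre * post - post * post * weight)

    w = 0.3
    for _ in range(50):
        pre = 1.0
        post = pre * w            # post-synaptic activity from this synapse
        w = local_update(w, pre, post)
    print(round(w, 3))            # settles near 1.0 rather than growing forever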

In addition, however, we must take into account that new models of neural systems incorporate learning threads that operate in parallel and implement short-term, long-term, and perhaps even medium-term memories. Since some models of cellular long-term memory require the growth of physical connections, modeling that growth becomes part of the model.

Neural Groups, and Groups of Neurons

Neurons never come alone. Small networks of neurons connect to larger networks, forming a web throughout the whole body, with centers that process specific types of information. One of these centers is the brain, which connects literally billions of neurons into a network so complex that we are only now beginning to be able to think about modeling it.

If we are going to deal with a system as complex as the brain, we need new neural network models that can capture the variety in the structure of neurons, explain the functions of groups of different types of neurons, and explain why some similar neurons act as if they were a single solid group, in which only one or two neurons fire on behalf of the whole group.
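
One hedged sketch of such group behavior is a winner-take-all group, in which mutual inhibition ensures that only the most strongly driven neuron fires for the whole group; this illustrates the idea rather than any specific circuit:

    def winner_take_all(drives):
        """A group of similar neurons with strong mutual inhibition:
        only the most strongly driven neuron fires for the whole group."""
        winner = max(range(len(drives)), key=lambda i: drives[i])
        return [1 if i == winner else 0 for i in range(len(drives))]

    print(winner_take_all([0.2, 0.9, 0.4]))   # -> [0, 1, 0]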

Neural networks that model natural networks of neurons are coming back into vogue, if only because our understanding of neuroscience is expanding thanks to new tools that bring us closer than ever to understanding what neurons are doing. As we work towards understanding the brain, it becomes possible to imagine building a conscious machine sometime within the next twenty years.