Artificial Intelligence/Neural Networks/Introduction

From Wikibooks, open books for an open world

Artificial Intelligence has drawn a fair share of its inspiration from the field of neuroscience. Neuroscience is the study of the nervous system, particularly the brain. How the brain enables human beings to think remains a mystery to the present day, but significant leaps and bounds in the field have brought scientists closer to understanding the nature of the thought processes inside a brain.

Understanding the structure of the brain

Golgi-stained neurons in the somatosensory cortex of the macaque monkey.

The brain is a jelly-like structure of grey matter; it is not rigid and could not be dissected for examination under a microscope until 1873, when Camillo Golgi developed a staining technique that allowed the elements making up the brain to be observed. What observers came across while studying the brain was astonishing: unlike the cells that form the foundations of the body's other organs, the brain cell, or neuron, was unlike any other cell in the body.

Through the work of Santiago Ramón y Cajal, it was established that the basic unit of the nervous system is the neuron. Like many conventional procedures in programming, a neuron has three basic functions: it takes signals from other neurons as input, processes them, and sends a signal on to other neurons. Because the early scientists thought that the brain's thoughts emerge from a network of such neurons, they set about replicating the structure of this network. Thus was formed the first ever artificial neural network.
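The three functions described above can be sketched as a minimal threshold neuron. This is only an illustrative sketch: the function name, weights, and threshold value are assumptions for the example, not part of any historical model.

```python
def neuron(inputs, weights, threshold=0.5):
    """Take signals from other neurons, process them, send a signal onward."""
    # 1. take signals as input; 2. process them: here, a weighted sum
    activation = sum(x * w for x, w in zip(inputs, weights))
    # 3. send a signal to other neurons: fire (1) if the sum crosses the threshold
    return 1 if activation >= threshold else 0

print(neuron([1, 0, 1], [0.4, 0.9, 0.2]))  # 0.4 + 0.2 = 0.6 >= 0.5, so it fires: 1
```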

Artificial Neural Networks

Although little information on neural networks as such was available, Warren McCulloch and Walter Pitts sat down together in 1943 to try to explain the workings of the brain by demonstrating how individual neurons can communicate with others in a network. Largely based on the feedback theory of Norbert Wiener, their paper on this atomic level of psychology so enthralled Marvin Minsky and Dean Edmonds that in 1951 they built the first ever neural network out of three hundred vacuum tubes and a surplus automatic pilot from a B-24 bomber.[1]

In 1958 Professor Frank Rosenblatt of Cornell proposed the Perceptron. A little later, in 1969, Marvin Minsky and Seymour A. Papert released a book called Perceptrons, in which they pointed out the linear nature of perceptron calculations. This killed the interest that the perceptron had generated, and the first lull in neural network research was experienced.
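A toy sketch of the perceptron learning rule can make the "linear nature" criticism concrete: a single perceptron learns the linearly separable AND function, but no setting of its weights can represent XOR. The code below is an illustrative simplification, not Rosenblatt's original formulation.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single two-input perceptron on (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                  # error-correction rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
f = train_perceptron(AND)
print([f(x1, x2) for (x1, x2), _ in AND])  # learns AND: [0, 0, 0, 1]

# XOR is not linearly separable: no single perceptron can learn it,
# which is the limitation Minsky and Papert pointed out.
```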

Neural network research has gone through a number of these lulls, as new methods have been created, shown brief promise, been over-promoted, and then suffered some setback. Scientists have always come back to the technology, however, because despite the hype it is a genuine attempt to model neural mechanisms.

Neural networks can be loosely separated into neural models, network models, and learning rules. The earliest mathematical models of the neuron pre-date McCulloch and Pitts, who developed the first network models to explain how signals pass from one neuron to another within the network. When a network is described as a feed-forward or feedback network, the description refers to how the network connects neurons in one layer to neurons in the next. Wiener's work allowed McCulloch and Pitts to describe how these different connection types would affect the operation of the network.

In a feed-forward network the output of the network does not affect the operation of the layer that produces it. In a feedback network, however, the output of a later layer is fed back into an earlier layer and so can affect that earlier layer's output. Essentially the data loops through the two layers and back to the start again. This is important in control circuits, because it allows the result of a previous calculation to affect the operation of the next one: the second calculation can take the results of the first into account and be controlled by them. Wiener's work on cybernetics was based on the idea that feedback loops are a useful tool for control circuits. In fact, Wiener coined the term cybernetics[2] based on the Greek kybernutos, the metallic steersman of a fictional boat mentioned in the Iliad.
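The contrast between the two connection schemes can be sketched with a pair of toy "layers". These are plain functions with made-up arithmetic, chosen only to show the difference in data flow, not a real network model.

```python
def layer1(x):
    return 0.5 * x + 1.0   # toy computation standing in for the first layer

def layer2(x):
    return x - 0.2         # toy computation standing in for the second layer

def feed_forward(x):
    """Feed-forward: data flows one way; layer1 never sees layer2's output."""
    return layer2(layer1(x))

def feedback(x, steps=3):
    """Feedback: each pass re-enters the loop, so every calculation is
    controlled by the result of the previous one."""
    for _ in range(steps):
        x = layer2(layer1(x))   # the output loops back in as new input
    return x

print(feed_forward(1.0))   # one pass through both layers
print(feedback(1.0))       # the same pair of layers, iterated as a loop
```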

Neural models range from complex mathematical models with floating-point outputs to simple state machines with binary output. Depending on whether the neuron incorporates a learning mechanism, neural learning rules range from simply adding weight to a synapse each time it fires and gradually degrading those weights over time, as in the earliest learning rules; through delta rules, which accelerate learning by applying a delta value derived from an error function, as in a back-propagation network; to pre-synaptic/post-synaptic rules based on the biochemistry of the synapse and the firing process. Output signals can be calculated as binary, linear, non-linear, or spiking values.
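The simplest rule mentioned above — add weight to a synapse each time it fires, and gradually degrade all weights over time — can be sketched in a few lines. The boost and decay values are illustrative assumptions, not taken from any specific early model.

```python
def learning_step(weights, fired, boost=0.1, decay=0.99):
    """One update: strengthen firing synapses, then degrade every weight."""
    return [
        (w + boost if did_fire else w) * decay
        for w, did_fire in zip(weights, fired)
    ]

w = [0.5, 0.5]
for _ in range(10):
    w = learning_step(w, fired=[True, False])  # only the first synapse fires
print(w)  # the firing synapse's weight grows; the silent one decays
```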

Today there are literally hundreds of different models that all call themselves neural networks, even though some no longer model nerves, or no longer actually require networks to achieve similar effects. Because scientists have not yet fully described the structure of mammalian neural cells and nerves, we must accept that, for now, the definitive neural network will have to wait for the definitive nerve model. In the meantime this remains a rich area of research, because it has the potential to be both phenomenal and computational, and thus to capture perhaps a greater range of the operation of the brain than computational models have by themselves.

Notes

  1. Daniel Crevier (1993). AI: The Tumultuous History of the Search for Artificial Intelligence. New York: BasicBooks, HarperCollins Publishers Inc. ISBN 0465029973. pp. 29–35.
  2. Norbert Wiener (1948). Cybernetics: or, Control and Communication in the Animal and the Machine. 2nd edition, MIT Press (1965). ISBN 026273009X.