

Classical Neural Networks
The first logical neuron was developed by W. S. McCulloch and W. A. Pitts
in 1943 [2].
It described the fundamental function and structure of a neural cell,
reporting that a neuron will fire an impulse only if a threshold value
is exceeded.
Figure 1: McCulloch-Pitts neuron model.
Figure 1 shows the basic elements of the McCulloch-Pitts model:
$x = (x_1, \ldots, x_n)$ is the input vector,
$w = (w_1, \ldots, w_n)$ is a weight vector,
$y$ is the output,
$n$ is the number of elements in the input, and
$f$ is the activation function that determines the output value. A simple
choice for $f$ is the signal (sign) function $\mathrm{sgn}(\cdot)$.
In this case, the weights are used to calculate a weighted sum of the inputs.
If it exceeds the threshold $\theta$ the output is $y = 1$; otherwise the
value of $y$ is $-1$, that is:

$$y = \mathrm{sgn}\left(\sum_{i=1}^{n} w_i x_i - \theta\right) =
\begin{cases}
1, & \text{if } \sum_{i=1}^{n} w_i x_i \geq \theta, \\
-1, & \text{otherwise,}
\end{cases} \qquad (1)$$
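As a sketch, the thresholded output described above can be written in a few lines of Python; the $\pm 1$ output convention and the symbol names follow the text, while the function name itself is illustrative:

```python
import numpy as np

def mcp_neuron(x, w, theta):
    """McCulloch-Pitts neuron: output +1 if the weighted sum of the
    inputs reaches the threshold theta, otherwise -1."""
    return 1 if np.dot(w, x) >= theta else -1

# The neuron fires only when the threshold is reached:
print(mcp_neuron([1, 1], [0.5, 0.5], theta=0.8))  # weighted sum 1.0 -> +1
print(mcp_neuron([1, 0], [0.5, 0.5], theta=0.8))  # weighted sum 0.5 -> -1
```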
But the McCulloch-Pitts neuron did not have a mechanism for learning.
Based on biological evidence, D. O. Hebb suggested a rule to adapt the
weights, that is, a learning rule [2].
This biologically inspired procedure can be expressed in the following manner:

$$w_i^{new} = w_i^{old} + \eta \, (d - y) \, x_i, \qquad (2)$$
where $w_i^{new}$ and $w_i^{old}$ are the adapted and initial weights
respectively, $\eta$ is a real parameter that controls the rate of learning,
and $d$ is the desired (known) output. This learning rule, together with the
elements of Figure 1,
is called the perceptron model for a neuron. Learning typically occurs
through training, that is, exposure to a known set of input/output
data. The training algorithm iteratively adjusts the connection weights,
analogous to synapses in biological nervous systems. These connection weights
store the knowledge necessary to solve specific problems.
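A minimal sketch of this training procedure, assuming the weight update uses a $(d - y)$ error term as in the perceptron rule and a fixed threshold; the function name and the toy OR data set are illustrative:

```python
import numpy as np

def train_perceptron(X, d, eta=0.1, theta=0.0, epochs=20):
    """Iterate the weight-update rule over a labelled data set:
    w_new = w_old + eta * (d - y) * x, where y is the current
    thresholded output and d the desired (known) output."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            y = 1 if np.dot(w, x) >= theta else -1   # current output
            w = w + eta * (target - y) * x           # adapt the weights
    return w

# Toy example: learn logical OR. The first column is a constant
# bias input, so an effective threshold is learned as a weight.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
d = np.array([-1, 1, 1, 1])
w = train_perceptron(X, d)
predictions = [1 if np.dot(w, x) >= 0.0 else -1 for x in X]
print(predictions)  # matches d: the problem is linearly separable
```

Because OR is linearly separable, the iteration reaches weights that classify every training example correctly, after which no further updates occur.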
Gilson Giraldi 2002-07-02