Figure 1 shows the basic elements of the McCulloch-Pitts model: $\mathbf{x}$ is the input vector, $\mathbf{w}$ is the weights vector, $y$ is the output, $n$ is the number of elements in the input, and $f$ is the activation function that determines the output value. A simple choice for $f$ is the sign function $\operatorname{sgn}(\cdot)$. In this case, the weights are used to calculate a weighted sum of the inputs. If it exceeds the threshold $\theta$, the output is $1$; otherwise the value of $y$ is $-1$, that is:
$$
y = \operatorname{sgn}\!\left(\sum_{i=1}^{n} w_i x_i - \theta\right) =
\begin{cases}
1, & \text{if } \sum_{i=1}^{n} w_i x_i \ge \theta \\
-1, & \text{otherwise.}
\end{cases}
$$
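The thresholded weighted sum above can be sketched in a few lines. This is a minimal illustration, not code from the source; the function name and the AND-gate weights are assumptions chosen for the example.

```python
def mcculloch_pitts(x, w, theta):
    """Return 1 if the weighted sum of inputs reaches the threshold, else -1."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= theta else -1

# Illustrative weights implementing a logical AND over binary inputs:
# the sum reaches the threshold 2 only when both inputs are active.
print(mcculloch_pitts([1, 1], [1.0, 1.0], 2.0))  # -> 1
print(mcculloch_pitts([1, 0], [1.0, 1.0], 2.0))  # -> -1
```

Note that the weights and threshold are fixed by hand here; the model itself has no way to learn them, which is the limitation the next paragraph addresses.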
But the McCulloch-Pitts neuron did not have a mechanism for learning. Based on biological evidence, D.O. Hebb suggested a rule to adapt the weights, that is, a learning rule [2]. This biologically inspired procedure can be expressed in the following manner:
$$
\mathbf{w}^{\text{new}} = \mathbf{w}^{\text{old}} + \eta\, d\, \mathbf{x},
$$
where $\mathbf{w}^{\text{new}}$ and $\mathbf{w}^{\text{old}}$ are the adapted weights and initial weights respectively, $\eta$ is a real parameter that controls the rate of learning, and $d$ is the desired (known) output. This learning rule, together with the elements of Figure 1, is called the perceptron model for a neuron. Learning typically occurs through training, that is, exposure to a known set of input/output data. The training algorithm iteratively adjusts the connection weights, analogous to synapses in biological nervous systems. These connection weights store the knowledge necessary to solve specific problems.