Principles
From the point of view of artificial neurons and artificial neural networks, Hebb's principle can be described as a method of determining how to alter the weights between model neurons. The weight between two neurons increases if the two neurons activate simultaneously—and reduces if they activate separately. Nodes that tend to be either both positive or both negative at the same time have strong positive weights, while those that tend to be opposite have strong negative weights.
This original principle is perhaps the simplest form of weight selection. While this means it can be relatively easily coded into a computer program and used to update the weights for a network, it also limits the range of applications of Hebbian learning. Today, the term Hebbian learning generally refers to some form of mathematical abstraction of the original principle proposed by Hebb. In this sense, Hebbian learning involves adjusting the weights between learning nodes so that each weight better represents the relationship between the nodes. As such, many learning methods can be considered to be somewhat Hebbian in nature.
For example, we have heard the word 'Nokia' for many years and are accustomed to hearing it paired with 'mobile phone'; that is, the word 'Nokia' has become associated with the words 'mobile phone' in our minds. Every time we see a Nokia mobile phone, the association between the two words is strengthened. The association is so strong that if someone claimed Nokia was manufacturing cars and trucks, it would seem odd.
The following is a formulaic description of Hebbian learning (note that many other descriptions are possible):

w_ij = x_i x_j,

where w_ij is the weight of the connection from neuron j to neuron i, and x_i is the input for neuron i. Note that this is pattern learning (weights updated after every training example). In a Hopfield network, connections w_ij are set to zero if i = j (no reflexive connections allowed). With binary neurons (activations either 0 or 1), connections would be set to 1 if the connected neurons have the same activation for a pattern.
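Pattern learning of this kind can be sketched in a few lines of NumPy. The following is a minimal illustration, not a reference implementation: the function name, the learning-rate parameter, and the use of bipolar (±1) activations are all assumptions made for the example. The diagonal is zeroed after each update to forbid reflexive connections, as in a Hopfield network.

```python
import numpy as np

def hebbian_pattern_update(W, x, lr=1.0):
    """Apply one Hebbian update for a single pattern x.

    W: (n, n) weight matrix; x: (n,) activation vector.
    Implements w_ij += lr * x_i * x_j, then zeroes the
    diagonal (no reflexive connections).
    """
    W = W + lr * np.outer(x, x)     # w_ij += x_i * x_j for all pairs
    np.fill_diagonal(W, 0.0)        # forbid self-connections
    return W

# Weights are updated after every training example (pattern learning).
n = 4
W = np.zeros((n, n))
for pattern in [np.array([1, -1, 1, -1]), np.array([1, 1, -1, -1])]:
    W = hebbian_pattern_update(W, pattern)
```

Neurons that agree across the patterns (e.g. units 0 and 3 here, which always differ) accumulate correspondingly strong negative weights, matching the verbal description above.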
Another formulaic description is:

w_ij = (1/p) * Σ_{k=1}^{p} x_i^k x_j^k,

where w_ij is the weight of the connection from neuron j to neuron i, p is the number of training patterns, and x_i^k is the kth input for neuron i. This is learning by epoch (weights updated after all the training examples are presented). Again, in a Hopfield network, connections w_ij are set to zero if i = j (no reflexive connections).
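The epoch formula above amounts to averaging the outer products of all patterns, which can be written as a single matrix product. The sketch below assumes NumPy and stacks one pattern per row; the function name is illustrative.

```python
import numpy as np

def hebbian_epoch(patterns):
    """Compute Hebbian weights from all training patterns at once.

    patterns: (p, n) array, one pattern per row. Implements
    w_ij = (1/p) * sum_k x_i^k x_j^k, i.e. W = X^T X / p,
    with the diagonal zeroed (no reflexive connections).
    """
    X = np.asarray(patterns, dtype=float)
    p = X.shape[0]                  # number of training patterns
    W = X.T @ X / p                 # average of outer products
    np.fill_diagonal(W, 0.0)        # forbid self-connections
    return W

patterns = np.array([[1, -1, 1, -1],
                     [1, 1, -1, -1]])
W = hebbian_epoch(patterns)
```

Because all patterns are presented before any weight changes, this update is order-independent, unlike the pattern-learning rule, where the matrix changes after every example.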
A variation of Hebbian learning that accounts for blocking and many other neural learning phenomena is the mathematical model of Harry Klopf. Klopf's model reproduces a great many biological phenomena and is also simple to implement.