4.1.1 A demonstration of a Kohonen network

Below is an interactive demonstration of a Kohonen network. This model builds on the competitive learning model by allowing more than one unit to update its weights on a given pattern.

The network consists of an input layer with 49 units, shown below in a 7x7 grid, and a layer of 8 competing units, each fully connected to the input layer. Thus, each competing unit has 49 incoming weights; they are also graphically depicted in a 7x7 grid, to mirror the input configuration.
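This layer structure can be sketched as a weight matrix, assuming NumPy (the variable names here are illustrative, not taken from the demonstration itself):

```python
import numpy as np

n_inputs = 49   # the 7x7 input grid, flattened
n_units = 8     # competing units

rng = np.random.default_rng(0)
# One row per competing unit: its 49 incoming weights,
# which can be reshaped to 7x7 to mirror the input display.
weights = rng.random((n_units, n_inputs))
```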

Training patterns: The model is trained on a set of 8 patterns. The default patterns are a set of bars of different orientations. These patterns may be edited using the mouse.

Activation rule: When the network is presented with a training pattern, the units in the competitive layer compete to respond. The unit whose weights are closest (in Euclidean distance) to the current input pattern is deemed the winner and becomes activated. That unit, together with the other units in its "neighborhood" (described below), then adjusts its weights. The activation level of each unit is represented by its color; the distance of each unit's weights from the input is shown numerically.
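The winner-take-all step can be sketched as follows. This is a minimal NumPy sketch, not the applet's own code, and `find_winner` is an illustrative name:

```python
import numpy as np

def find_winner(weights, pattern):
    """Return the index of the unit whose incoming weights are closest
    (in Euclidean distance) to the input pattern, plus all distances."""
    distances = np.linalg.norm(weights - pattern, axis=1)
    return int(np.argmin(distances)), distances
```

The returned distances correspond to the numbers shown on each competing unit in the display.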

Neighborhood: For the network shown here, the neighborhood of the winning unit depends on two things: how far to the left or right a unit is from the winner, and the value of the neighborhood size parameter. The neighborhood size is initialized to a large value of 4 at the start of learning and is gradually shrunk to 0 by the end of learning. When the neighborhood size is 0, only the winner adapts its weights. When the neighborhood size is 1, the winner, the unit one position to its left, and the unit one position to its right adapt their weights. If the winner is at one end of the layer, the units at the opposite end are also considered its neighbors; that is, the neighborhood wraps around. When the neighborhood size is 2, all units that are immediate or next-to-nearest neighbors of the winner adapt their weights, and so on. Thus, when the neighborhood size is 4, every unit in the competing layer adapts its weights. The gradual shrinkage of the neighborhood size helps the network find a good ordering of patterns, so that neighboring units come to respond to similar patterns.
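The wraparound neighborhood described above can be expressed compactly with modular arithmetic (a sketch; the function name is illustrative):

```python
def neighborhood(winner, size, n_units=8):
    # All units within `size` positions of the winner, wrapping
    # around the ends of the competing layer.
    return {(winner + offset) % n_units for offset in range(-size, size + 1)}
```

For example, with 8 units, a winner at position 0 and size 1 gives the set {7, 0, 1}, and size 4 covers the entire layer.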

Note that in this network, the neighborhood is defined in terms of neighbors to the left and right; thus, it is a one-dimensional neighborhood. A two-dimensional neighborhood is also commonly used; this may be more like what's in the brain. Higher-dimensional neighborhood arrangements could also be simulated, though in practice, one rarely goes beyond a dimensionality of one or two.

Learning: The network learns by repeatedly presenting a pattern at the input layer, activating the unit in the competing layer that wins the competition, selecting all the units in the winner's neighborhood, and then adjusting each of those units' weights to be closer to the input vector. After each learning iteration, the neighborhood size is automatically shrunk by a small amount. To train the network, click on animate; learning continues until you click that same button again (which will now say stop) or until the total learning iterations limit (which you may adjust) is reached. The amount each weight changes on each learning iteration depends on the learning rate parameter, which may be clicked to change its value.
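The full training loop can be sketched as below, assuming NumPy. The linear shrink schedule, learning rate, and function name are illustrative assumptions, not necessarily what the demonstration uses:

```python
import numpy as np

def train(weights, patterns, n_iterations=500,
          learning_rate=0.1, initial_size=4):
    """Kohonen training sketch: pick a pattern, find the winning unit,
    and move the weights of every unit in the winner's (wraparound)
    neighborhood toward the input, while the neighborhood size shrinks
    from initial_size to 0 over the course of training."""
    n_units = weights.shape[0]
    rng = np.random.default_rng(1)
    for t in range(n_iterations):
        # Shrink the neighborhood size linearly over training.
        size = round(initial_size * (1 - t / n_iterations))
        pattern = patterns[rng.integers(len(patterns))]
        # Winner: the unit closest (Euclidean distance) to the input.
        winner = int(np.argmin(np.linalg.norm(weights - pattern, axis=1)))
        for unit in {(winner + o) % n_units for o in range(-size, size + 1)}:
            # Move this unit's weights a fraction of the way to the input.
            weights[unit] += learning_rate * (pattern - weights[unit])
    return weights
```

Each update moves a unit's weight vector a fixed fraction (the learning rate) of the remaining distance toward the input, so units in the shrinking neighborhood gradually specialize to nearby patterns.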



Please report any difficulties with this software to: becker@mcmaster.ca