Summary of "Self Organizing Feature Map Kohonen Maps Solved Example | Self Organizing Networks by Mahesh Huddar"
This video provides a detailed walkthrough of training a Self-Organizing Feature Map (SOFM), also known as a Kohonen map, using a simple solved example. The main focus is on how to assign input vectors to output units and update the weights iteratively until convergence.
Main Ideas and Concepts
- Self-Organizing Feature Maps (SOFM): An unsupervised learning technique used to classify input vectors by mapping them to discrete output units based on similarity.
- Network Setup:
- Input vectors: 4 training samples, each with 4 features.
- Output units: 2 units (Unit 1 and Unit 2).
- Initial weights provided for each output unit.
- Learning rate set to 0.6.
- Objective: Assign each input vector to one of the output units based on minimum Euclidean distance and update the weights accordingly.
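As a concrete illustration of the minimum-distance assignment, here is a minimal NumPy sketch of the competitive step for a single input vector. The weight and input values are taken from the worked example in the next section; the 0-based `winner` index is an implementation convenience, not the video's notation:

```python
import numpy as np

# Initial weights, one row per output unit (values from the worked example).
weights = np.array([[0.3, 0.5, 0.7, 0.2],   # Unit 1
                    [0.6, 0.5, 0.4, 0.2]])  # Unit 2
x = np.array([1.0, 0.0, 1.0, 0.0])          # input vector X1

# Squared Euclidean distance from x to each unit's weight vector.
d2 = ((weights - x) ** 2).sum(axis=1)
winner = int(d2.argmin())  # index of the closest ("winning") unit
print(d2, winner)          # d2 ≈ [0.87, 0.81], so the second unit wins here
```

Note that which unit wins depends entirely on the concrete numbers; the unit labels in a worked example can come out swapped relative to the accompanying text.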
Methodology / Step-by-Step Instructions
- Initialization:
- Given input vectors:
  - X1 = [1, 0, 1, 0]
  - X2 = [1, 3, 0, 0] (likely a transcription error; assumed [1, 0, 0, 0])
  - X3 = [1, 1, 1, 1]
  - X4 = [0, 1, 1, 0]
- Initial weights for Unit 1: [0.3, 0.5, 0.7, 0.2]
- Initial weights for Unit 2: [0.6, 0.5, 0.4, 0.2]
- Learning rate (α) = 0.6
- Calculate Euclidean distance:
- For each input vector, compute the squared Euclidean distance to each output unit's weight vector:
  d² = ∑(w_i - x_i)²
- The unit with the smallest distance is considered the "winner" for that input vector.
- Update Weights of Winning Unit:
- Update rule:
  w_j(t+1) = w_j(t) + α × (x - w_j(t))
  where:
  - w_j(t) = current weight vector of the winning unit
  - x = input vector
  - α = learning rate
- Iterate Over All Training Samples:
- Repeat the distance calculation and weight update for each input vector in sequence.
- This completes one epoch (one full pass over the training data).
- Repeat Epochs Until Convergence:
- Continue training for multiple epochs until the assignment of input vectors to output units stabilizes (no changes in class membership).
- This indicates that the network has converged.
- Final Mapping:
- After convergence, each input vector is permanently assigned to one output unit.
- In the example, after one epoch:
  - X1 and X2 assigned to Unit 1
  - X3 and X4 assigned to Unit 2
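The full epoch described above can be sketched as follows (assuming the corrected X2 = [1, 0, 0, 0]; `winners` holds the 0-based index of the unit each input was assigned to):

```python
import numpy as np

# Inputs as listed in the summary (X2 uses the assumed correction [1, 0, 0, 0]).
X = np.array([[1, 0, 1, 0],
              [1, 0, 0, 0],
              [1, 1, 1, 1],
              [0, 1, 1, 0]], dtype=float)
W = np.array([[0.3, 0.5, 0.7, 0.2],   # Unit 1
              [0.6, 0.5, 0.4, 0.2]])  # Unit 2
alpha = 0.6

winners = []
for x in X:                           # one epoch = one pass over all inputs
    d2 = ((W - x) ** 2).sum(axis=1)   # squared Euclidean distance to each unit
    j = int(d2.argmin())              # winning unit
    W[j] += alpha * (x - W[j])        # move the winner toward the input
    winners.append(j)

print(winners)  # -> [1, 1, 0, 0]
```

With the numbers as listed, the unit labels come out swapped relative to the text (X1 and X2 land on the second unit, X3 and X4 on the first), but the resulting two-cluster partition {X1, X2} vs {X3, X4} is the same; the discrepancy is consistent with the noted transcription issues in the input data.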
Key Lessons
- SOFM training involves iterative competition between output units to "win" input samples based on distance.
- Weight vectors are continuously adjusted towards winning inputs to better represent clusters.
- Convergence is achieved when input assignments no longer change across epochs.
- The process is fully unsupervised and useful for clustering and dimensionality reduction.
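The convergence criterion above can be expressed as a stopping condition on the winner assignments. Below is a hypothetical `train_som` helper (a sketch only; it keeps α fixed as in the example, whereas practical SOMs usually decay the learning rate per epoch and also update a neighborhood around the winner):

```python
import numpy as np

def train_som(X, W, alpha=0.6, max_epochs=100):
    """Run winner-take-all epochs until the assignments stop changing."""
    prev = None
    for _ in range(max_epochs):
        winners = []
        for x in X:
            j = int(((W - x) ** 2).sum(axis=1).argmin())  # winning unit
            W[j] += alpha * (x - W[j])                    # pull winner toward x
            winners.append(j)
        if winners == prev:   # class membership stable -> converged
            break
        prev = winners
    return W, winners
```

Applied to the data of the worked example (with the assumed X2), the assignments are already stable on the second pass, matching the summary's observation that convergence is detected when no input changes its unit.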
Speakers / Sources
- Mahesh Huddar — Presenter and instructor explaining the Self Organizing Feature Map concept and example.
Category
Educational