<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.sarg.dev/index.php?action=history&amp;feed=atom&amp;title=Learning_vector_quantization</id>
	<title>Learning vector quantization - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.sarg.dev/index.php?action=history&amp;feed=atom&amp;title=Learning_vector_quantization"/>
	<link rel="alternate" type="text/html" href="https://wiki.sarg.dev/index.php?title=Learning_vector_quantization&amp;action=history"/>
	<updated>2026-04-16T16:46:16Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.44.2</generator>
	<entry>
		<id>https://wiki.sarg.dev/index.php?title=Learning_vector_quantization&amp;diff=494855&amp;oldid=prev</id>
		<title>imported&gt;Dujo: Update URL</title>
		<link rel="alternate" type="text/html" href="https://wiki.sarg.dev/index.php?title=Learning_vector_quantization&amp;diff=494855&amp;oldid=prev"/>
		<updated>2025-09-09T21:06:56Z</updated>

		<summary type="html">&lt;p&gt;Update URL&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;In [[computer science]], &amp;#039;&amp;#039;&amp;#039;learning vector quantization&amp;#039;&amp;#039;&amp;#039; (&amp;#039;&amp;#039;&amp;#039;LVQ&amp;#039;&amp;#039;&amp;#039;) is a [[prototype|prototype-based]] [[supervised learning|supervised]] [[Statistical classification|classification]] [[algorithm]]. LVQ is the supervised counterpart of [[vector quantization]] systems. LVQ can be understood as a special case of an [[artificial neural network]], more precisely, it applies a [[winner-take-all (computing)|winner-take-all]] [[Hebbian learning]]-based approach. It is a precursor to [[self-organizing map]]s (SOM) and related to [[neural gas]] and the [[k-nearest neighbor algorithm]] (k-NN). LVQ was invented by [[Teuvo Kohonen]].&amp;lt;ref&amp;gt;T. Kohonen. Self-Organizing Maps. Springer, Berlin, 1997.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition ==&lt;br /&gt;
An LVQ system is represented by prototypes &amp;lt;math&amp;gt;W=(w(1),...,w(n))&amp;lt;/math&amp;gt; which are defined in the [[feature space]] of observed data. In winner-take-all training algorithms one determines, for each data point, the prototype which is closest to the input according to a given distance measure. The position of this so-called winner prototype is then adapted, i.e. the winner is moved closer if it correctly classifies the data point or moved away if it classifies the data point incorrectly.&lt;br /&gt;
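The winner-take-all step described above can be sketched as follows (a minimal illustration, not taken from the cited sources; the prototype and input values are made up):

```python
import numpy as np

# Hypothetical prototypes w(1), ..., w(n) in a 2-D feature space.
prototypes = np.array([[0.0, 0.0],
                       [1.0, 1.0],
                       [3.0, 0.5]])

def winner(x, W):
    """Return the index of the prototype closest to x (Euclidean distance)."""
    distances = np.linalg.norm(W - x, axis=1)
    return int(np.argmin(distances))

x = np.array([0.9, 1.2])
print(winner(x, prototypes))  # prints 1: the second prototype is nearest
```

Any distance measure could be substituted for the Euclidean norm here; the choice of measure is exactly the "key issue" discussed below.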
&lt;br /&gt;
An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain.&amp;lt;ref&amp;gt;{{citation|author=T. Kohonen|contribution=Learning vector quantization|editor=M.A. Arbib|title=The Handbook of Brain Theory and Neural Networks|pages=537–540|publisher=MIT Press|location=Cambridge, MA|year=1995}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
LVQ systems can be applied to [[multi-class classification]] problems in a natural way. &lt;br /&gt;
&lt;br /&gt;
A key issue in LVQ is the choice of an appropriate measure of distance or similarity for training and classification. Recently, techniques have been developed which adapt a parameterized distance measure in the course of training the system, see e.g. (Schneider, Biehl, and Hammer, 2009)&amp;lt;ref&amp;gt;{{cite journal|author1=P. Schneider |author2=B. Hammer |author3=M. Biehl |title=Adaptive Relevance Matrices in Learning Vector Quantization|journal= Neural Computation|volume=21|issue=10|pages=3532–3561|year=2009|doi=10.1162/neco.2009.10-08-892|pmid=19635012|citeseerx=10.1.1.216.1183|s2cid=17306078}}&amp;lt;/ref&amp;gt; and references therein.&lt;br /&gt;
&lt;br /&gt;
LVQ can also be applied to the classification of text documents.{{Citation needed|date=December 2019|reason=removed citation to predatory publisher content}}&lt;br /&gt;
&lt;br /&gt;
==Algorithm==&lt;br /&gt;
The algorithms below follow the presentation in Kohonen (2001).&amp;lt;ref&amp;gt;{{Citation |last=Kohonen |first=Teuvo |title=Learning Vector Quantization |date=2001 |work=Self-Organizing Maps |volume=30 |pages=245–261 |url=https://link.springer.com/chapter/10.1007/978-3-642-56927-2_6 |place=Berlin, Heidelberg |publisher=Springer Berlin Heidelberg |doi=10.1007/978-3-642-56927-2_6 |isbn=978-3-540-67921-9|url-access=subscription }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set up:&lt;br /&gt;
&lt;br /&gt;
* Let the data be denoted by &amp;lt;math&amp;gt;x_i \in \R^D&amp;lt;/math&amp;gt;, and their corresponding labels by &amp;lt;math&amp;gt;y_i \in \{1, 2, \dots, C\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
* The complete dataset is &amp;lt;math&amp;gt;\{(x_i, y_i)\}_{i=1}^N&amp;lt;/math&amp;gt;.&lt;br /&gt;
* The set of code vectors is &amp;lt;math&amp;gt;w_j \in \R^D&amp;lt;/math&amp;gt;.&lt;br /&gt;
* The learning rate at iteration step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is denoted by &amp;lt;math&amp;gt;\alpha_t&amp;lt;/math&amp;gt;.&lt;br /&gt;
* The hyperparameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon&amp;lt;/math&amp;gt; are used by LVQ2 and LVQ3. The original paper suggests &amp;lt;math&amp;gt;\epsilon \in [0.1, 0.5]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w \in [0.2, 0.3]&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== LVQ1 ===&lt;br /&gt;
Initialize several code vectors per label. Iterate until a convergence criterion is met.&lt;br /&gt;
&lt;br /&gt;
# Sample a datum &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt;, and find the code vector &amp;lt;math&amp;gt;w_j&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; falls within the [[Voronoi diagram|Voronoi cell]] of &amp;lt;math&amp;gt;w_j&amp;lt;/math&amp;gt;.&lt;br /&gt;
# If its label &amp;lt;math&amp;gt;y_i&amp;lt;/math&amp;gt; is the same as that of &amp;lt;math&amp;gt;w_j&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;w_j \leftarrow w_j + \alpha_t(x_i - w_j)&amp;lt;/math&amp;gt;, otherwise, &amp;lt;math&amp;gt;w_j \leftarrow w_j - \alpha_t(x_i - w_j)&amp;lt;/math&amp;gt;.&lt;br /&gt;
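The two steps above can be sketched in Python (an illustrative sketch only; the prototype values, labels, and the constant learning rate are assumptions, not part of the cited sources):

```python
import numpy as np

def lvq1_step(x, y, W, labels, alpha):
    """One LVQ1 update: move the winning code vector toward x if its
    label matches y, away from x otherwise."""
    # Winner: x falls within the Voronoi cell of the nearest code vector.
    j = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    sign = 1.0 if labels[j] == y else -1.0
    W[j] = W[j] + sign * alpha * (x - W[j])
    return W

# Two made-up prototypes with labels 0 and 1.
W = np.array([[0.0, 0.0], [2.0, 2.0]])
labels = [0, 1]
W = lvq1_step(np.array([0.5, 0.5]), 0, W, labels, alpha=0.1)
# The winner is prototype 0; its label matches, so it moves toward the datum.
```

In practice the learning rate &amp;lt;math&amp;gt;\alpha_t&amp;lt;/math&amp;gt; is usually decreased over the iterations rather than held constant as in this sketch.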
&lt;br /&gt;
=== LVQ2 ===&lt;br /&gt;
LVQ2 is the same as LVQ3, but with this sentence removed: &amp;quot;If &amp;lt;math&amp;gt;w_j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w_k&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; have the same class, then &amp;lt;math&amp;gt;w_j \leftarrow w_j + \epsilon\alpha_t(x_i - w_j)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w_k \leftarrow w_k + \epsilon\alpha_t(x_i - w_k)&amp;lt;/math&amp;gt;.&amp;quot; That is, in LVQ2, if &amp;lt;math&amp;gt;w_j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w_k&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; all have the same class, nothing happens.&lt;br /&gt;
&lt;br /&gt;
=== LVQ3 ===&lt;br /&gt;
[[File:Apollonian_circles.svg|thumb|Some Apollonian circles. Every blue circle intersects every red circle at a right angle. Every red circle passes through the two points &amp;#039;&amp;#039;{{mvar|C, D}}&amp;#039;&amp;#039;, and every blue circle separates the two points.]]&lt;br /&gt;
Initialize several code vectors per label. Iterate until a convergence criterion is met.&lt;br /&gt;
&lt;br /&gt;
# Sample a datum &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt;, and find the two code vectors &amp;lt;math&amp;gt;w_j, w_k&amp;lt;/math&amp;gt; closest to it.&lt;br /&gt;
# Let &amp;lt;math&amp;gt;d_j := \|x_i - w_j\|, d_k := \|x_i - w_k\|&amp;lt;/math&amp;gt;.&lt;br /&gt;
# If &amp;lt;math&amp;gt;\min \left(\frac{d_j}{d_k}, \frac{d_k}{d_j}\right)&amp;gt;s &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;s=\frac{1-w}{1+w}&amp;lt;/math&amp;gt;, then&lt;br /&gt;
#* If &amp;lt;math&amp;gt;w_j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; have the same class, and &amp;lt;math&amp;gt;w_k&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; have different classes, then &amp;lt;math&amp;gt;w_j \leftarrow w_j + \alpha_t(x_i - w_j)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w_k \leftarrow w_k - \alpha_t(x_i - w_k)&amp;lt;/math&amp;gt;.&lt;br /&gt;
#* If &amp;lt;math&amp;gt;w_k&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; have the same class, and &amp;lt;math&amp;gt;w_j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; have different classes, then &amp;lt;math&amp;gt;w_j \leftarrow w_j - \alpha_t(x_i - w_j)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w_k \leftarrow w_k + \alpha_t(x_i - w_k)&amp;lt;/math&amp;gt;.&lt;br /&gt;
#* If &amp;lt;math&amp;gt;w_j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w_k&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; have the same class, then &amp;lt;math&amp;gt;w_j \leftarrow w_j + \epsilon\alpha_t(x_i - w_j)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w_k \leftarrow w_k + \epsilon\alpha_t(x_i - w_k)&amp;lt;/math&amp;gt;.&lt;br /&gt;
#* If both &amp;lt;math&amp;gt;w_j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;w_k&amp;lt;/math&amp;gt; have a different class from &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt;, the original paper does not specify an update; presumably nothing happens.&lt;br /&gt;
# Otherwise, skip.&lt;br /&gt;
Note that the condition &amp;lt;math&amp;gt;\min \left(\frac{d_j}{d_k}, \frac{d_k}{d_j}\right)&amp;gt;s&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;s=\frac{1-w}{1+w}&amp;lt;/math&amp;gt;, precisely means that the point &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; falls between two [[Apollonian circles|Apollonian spheres]].&lt;br /&gt;
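One LVQ3 update step, including the window check, can be sketched as follows (an illustrative sketch: the same-class branch follows Kohonen's formulation, while the data, prototypes, and the particular values of the hyperparameters are assumptions chosen from the ranges suggested above):

```python
import numpy as np

def lvq3_step(x, y, W, labels, alpha, w=0.3, eps=0.2):
    """One LVQ3 update on the two code vectors nearest to x."""
    d = np.linalg.norm(W - x, axis=1)
    j, k = np.argsort(d)[:2]               # two closest code vectors
    s = (1 - w) / (1 + w)                  # window threshold
    if min(d[j] / d[k], d[k] / d[j]) > s:  # x falls inside the window
        same_j, same_k = labels[j] == y, labels[k] == y
        if same_j and not same_k:
            W[j] += alpha * (x - W[j])
            W[k] -= alpha * (x - W[k])
        elif same_k and not same_j:
            W[j] -= alpha * (x - W[j])
            W[k] += alpha * (x - W[k])
        elif same_j and same_k:            # the rule that LVQ2 omits
            W[j] += eps * alpha * (x - W[j])
            W[k] += eps * alpha * (x - W[k])
        # Both labels differ from y: no update (unspecified in the paper).
    return W

# Made-up example: x lies inside the window between two prototypes.
W = np.array([[0.0, 0.0], [1.0, 0.0]])
W = lvq3_step(np.array([0.45, 0.0]), 0, W, labels=[0, 1], alpha=0.1)
# Prototype 0 (same class) moves toward x; prototype 1 moves away.
```

Dropping the same-class branch turns this sketch into LVQ2, as described in the previous section.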
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Further reading ==&lt;br /&gt;
* {{cite journal |last1=Somervuo |first1=Panu |last2=Kohonen |first2=Teuvo |date=1999 |title=Self-organizing maps and learning vector quantization for feature sequences |journal=Neural Processing Letters |volume=10 |issue=2 |pages=151–159 |doi=10.1023/A:1018741720065}}&lt;br /&gt;
* {{Cite journal |last=Nova |first=David |last2=Estévez |first2=Pablo A. |date=2014-09-01 |title=A review of learning vector quantization classifiers |url=https://link.springer.com/article/10.1007/s00521-013-1535-3 |journal=Neural Computing and Applications |language=en |volume=25 |issue=3 |pages=511–524 |doi=10.1007/s00521-013-1535-3 |issn=1433-3058|arxiv=1509.07093 }}&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://www.cis.hut.fi/research/lvq_pak/ lvq_pak] official release (1996) by Kohonen and his team&lt;br /&gt;
&lt;br /&gt;
[[Category:Artificial neural networks]]&lt;br /&gt;
[[Category:Classification algorithms]]&lt;/div&gt;</summary>
		<author><name>imported&gt;Dujo</name></author>
	</entry>
</feed>