Instead of quantizing the input samples one-by-one, we take $n$ input samples: $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^n$
in other words: we quantize vectors, not scalars
the rate of the source code is $R = \frac{\log_2 K}{n}$ bits / source sample
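For instance (values chosen here purely for illustration): grouping samples into vectors of dimension $n = 2$ and using a codebook of $K = 256$ codewords gives

$$R = \frac{\log_2 256}{2} = 4 \ \text{bits / source sample}$$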
$\mathbb{R}^n$ is partitioned into $K$ regions, $\{R_i\}_{i=1}^{K}$, and $Q(X) = \hat{X}_i,\ \forall X \in R_i$
$\{R_i\}_{i=1}^{K}$ is the set of Voronoi regions
The Voronoi regions comprise the (Euclidean) space wherein a given vector $X$ is closer to a specific codeword / centroid $\hat{X}_i$ than to any other $\hat{X}_j$, $j \neq i$: $R_i = \{X \in \mathbb{R}^n \mid \|X - \hat{X}_i\| < \|X - \hat{X}_j\|,\ \text{for all } j \neq i\}$
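As a concrete illustration of the mapping $Q(X)$, here is a minimal NumPy sketch of nearest-neighbor encoding against a given codebook; the `codebook` array and the function name `quantize` are assumptions for illustration, not from the source.

```python
import numpy as np

def quantize(X, codebook):
    """Map each n-dim vector in X to its nearest codeword.

    X:        array of shape (num_vectors, n)
    codebook: array of shape (K, n), rows are the codewords X_hat_i
    Returns:  (indices, quantized) where quantized[m] = codebook[indices[m]]
    """
    # Squared Euclidean distance from every input vector to every codeword
    dists = np.sum((X[:, None, :] - codebook[None, :, :]) ** 2, axis=-1)
    indices = np.argmin(dists, axis=1)   # i = argmin_j ||X - X_hat_j||
    return indices, codebook[indices]    # Q(X) = X_hat_i
```

Transmitting `indices` instead of the vectors themselves costs $\lceil \log_2 K \rceil$ bits per vector, which is the rate $R$ above per source sample.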
Optimal VQ: $\hat{X}_i$ is the centroid of the Voronoi region $R_i$:
$\hat{X}_i := \mathbb{E}[X \mid X \in R_i]$
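In practice the conditional expectation is estimated from training data; a sketch, assuming a training set $\{x^{(m)}\}_{m=1}^{M}$ (notation introduced here, not in the source): the centroid of $R_i$ is the sample mean of the training vectors assigned to it,

$$\hat{X}_i \approx \frac{1}{\lvert \{m : x^{(m)} \in R_i\} \rvert} \sum_{m:\, x^{(m)} \in R_i} x^{(m)}$$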
A vector quantizer maps $n$-dimensional vectors $X \in \mathbb{R}^n$ to a finite set of vectors $\{\hat{X}_i\}_{i=1}^{K}$
Each such vector $\hat{X}_i$ is called a code vector or codeword, and the set of all codewords is called the codebook
Associated with each codeword, $\hat{X}_i$, is its nearest-neighbor Voronoi region $R_i$, as defined above
Can obtain codewords via, e.g., K-means clustering (Lloyd's algorithm)
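A minimal sketch of Lloyd's algorithm for codebook design, alternating the two optimality conditions above (nearest-neighbor assignment, then centroid update); the function name `lloyd_codebook`, the fixed iteration count, and the initialization are assumptions, not from the source.

```python
import numpy as np

def lloyd_codebook(train, K, iters=50, seed=0):
    """Design a K-codeword VQ codebook from training vectors (Lloyd / K-means).

    train: array of shape (M, n) of training vectors
    Returns: codebook of shape (K, n)
    """
    rng = np.random.default_rng(seed)
    # Initialize the codebook with K distinct training vectors (one common choice)
    codebook = train[rng.choice(len(train), size=K, replace=False)].astype(float)

    for _ in range(iters):
        # 1) Nearest-neighbor condition: assign each vector to its closest codeword
        dists = np.sum((train[:, None, :] - codebook[None, :, :]) ** 2, axis=-1)
        assign = np.argmin(dists, axis=1)

        # 2) Centroid condition: move each codeword to the mean of its Voronoi region
        for i in range(K):
            members = train[assign == i]
            if len(members) > 0:          # keep the old codeword if the region is empty
                codebook[i] = members.mean(axis=0)
    return codebook
```

Equivalently, an off-the-shelf implementation such as `sklearn.cluster.KMeans(n_clusters=K).fit(train).cluster_centers_` produces the same kind of codebook.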