Abstract
Vector quantization (VQ) is a well-known method for signal compression. One of the main problems that remains unsatisfactorily solved in a VQ compression system is its encoding speed, which seriously constrains the practical applications of the VQ method. The reason is that, in its encoding process, VQ must perform many expensive k-dimensional (k-D) Euclidean distance computations in order to determine the best-matched codeword in the codebook for an input vector by finding the minimum Euclidean distance. The most straightforward approach in a VQ framework is to treat a k-D vector as a whole. By first using the popular statistical features of the sum and the variance of a k-D vector to estimate the real Euclidean distance, the IEENNS method was proposed to reject most of the unlikely candidate codewords for a given input vector. Because both the sum and the variance are only approximate descriptions of a vector, and they represent a shorter vector more precisely, it is better to treat a k-D vector as two lower-dimensional subvectors and to construct partial sums and partial variances in place of the sum and the variance of the whole vector. Then, by equally dividing a k-D vector in half to generate its two corresponding (k/2)-D subvectors and applying the IEENNS method again to each of the subvectors, the SIEENNS method was proposed recently. The SIEENNS method is so far the most search-efficient subvector-based encoding method for VQ, but it still has large memory and computational redundancy. This paper aims at improving the state-of-the-art SIEENNS method by (1) introducing a new 3-level data structure to reduce the memory redundancy; (2) avoiding the use of the two partial variances of the two (k/2)-D subvectors to reduce the computational redundancy; and (3) combining the two partial sums of the two (k/2)-D subvectors to enhance the capability of the codeword rejection test.
Experimental results confirmed that the proposed method can reduce the total memory requirement for each k-D vector from (k + 6) to (k + 1) and, at the same time, remarkably improve the overall search efficiency to 72.3–81.1% compared to the SIEENNS method.
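The rejection principle underlying these methods can be illustrated with a short sketch. This is not the authors' SIEENNS implementation (which also uses a 3-level data structure and, in earlier variants, partial variances); it is a minimal Python illustration of the subvector partial-sum rejection test alone, using the standard Cauchy–Schwarz lower bound that, for any (k/2)-D subvector pair with element sums S_x and S_c, ||x − c||² ≥ (S_x − S_c)²/(k/2). The function and variable names are my own, not from the paper.

```python
def dist2(x, c):
    """Full squared Euclidean distance between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(x, c))

def half_sums(v):
    """Partial sums of the two (k/2)-D subvectors of v."""
    h = len(v) // 2
    return sum(v[:h]), sum(v[h:])

def nearest_codeword(x, codebook):
    """Best-matched codeword search with a partial-sum rejection test.

    For each half of the vector, Cauchy-Schwarz gives
        ||x_half - c_half||^2 >= (S_x - S_c)^2 / (k/2),
    so the two per-half bounds sum to a lower bound on the full k-D
    squared distance. Any codeword whose bound already meets or
    exceeds the current minimum distance is rejected without the
    expensive k-D distance computation.
    """
    k = len(x)
    h = k // 2
    part = [half_sums(c) for c in codebook]  # precomputable offline
    sx1, sx2 = half_sums(x)

    best_i, best_d2 = 0, dist2(x, codebook[0])
    for i in range(1, len(codebook)):
        s1, s2 = part[i]
        bound = (sx1 - s1) ** 2 / h + (sx2 - s2) ** 2 / h
        if bound >= best_d2:
            continue  # cheap rejection: full distance cannot be smaller
        d2 = dist2(x, codebook[i])
        if d2 < best_d2:
            best_i, best_d2 = i, d2
    return best_i
```

Because the bound never exceeds the true distance, the search is exact: it always returns the same codeword as an exhaustive full-search, only faster when many candidates are rejected by the cheap 1-D tests.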
Cite this article
Pan, Z., Kotani, K. & Ohmi, T. Subvector-Based Fast Encoding Method for Vector Quantization Without Using Two Partial Variances. OPT REV 13, 410–416 (2006). https://doi.org/10.1007/s10043-006-0410-1