On the Determination of Optimal Parameterized Prefix Codes for Adaptive Entropy Coding
Keyword(s): entropy coding; Golomb-Rice codes; data compression
Abstract: While complex data sources such as images and audio require sophisticated coding contexts and source modeling, the high cost of estimating conditional probabilities and computing optimal codes can commonly be avoided by storing sets of codewords and selecting the best code based on source estimates. Golomb-Rice codes are widely used because they are parameterized by a single integer, are easy to implement, and are optimal for sources with a geometric distribution. In this work, we consider the fact that Golomb-Rice codes are truly optimal only when the source's single parameter is known with certainty, which in practice is never the case. We investigate how these codes perform depending on how the source is estimated from previous samples. We then analyze possible changes and propose the class of unary-stem codes, which increases robustness while keeping the useful structural properties, by defining sets of codewords parameterized by several integers. While this is somewhat similar to other proposed codes, it is significantly more general, allowing a better evaluation of how source uncertainty affects the structure of the optimal codes. We show how the new codes can be designed by updating maximum-likelihood or Bayesian estimates, or optimized according to posterior probabilities. We also show how data for modeling the source uncertainty can be accurately computed, and develop algorithms that handle the infinite number of symbols to quickly find the optimal codes. Numerical results show that the optimal codes are, as expected, always better than Golomb-Rice codes, and that the difference can be quite significant. Furthermore, analysis of the numerical results shows the advantages of integrating mappings from source-sample data directly to code selection.
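To make the single-integer parameterization mentioned in the abstract concrete, the sketch below implements the Rice special case of Golomb codes (divisor 2^k): a codeword is the quotient in unary followed by the k-bit remainder. This is a minimal illustration of the general code family, not the unary-stem construction proposed in the paper; the function names are ours.

```python
def rice_encode(n: int, k: int) -> str:
    """Rice-encode a non-negative integer n with parameter k.

    The quotient n >> k is written in unary ('1' bits ended by a '0'),
    followed by the low k bits of n as the binary remainder.
    """
    q = n >> k
    code = "1" * q + "0"
    if k > 0:
        code += format(n & ((1 << k) - 1), f"0{k}b")
    return code


def rice_decode(bits: str, k: int) -> int:
    """Decode a single Rice codeword produced by rice_encode."""
    q = bits.index("0")                      # unary quotient length
    r = int(bits[q + 1:q + 1 + k], 2) if k > 0 else 0
    return (q << k) | r


# Example: 9 with k = 2 -> quotient 2 ('110'), remainder 1 ('01')
assert rice_encode(9, 2) == "11001"
assert rice_decode("11001", 2) == 9
```

In adaptive schemes, k is typically chosen from a running estimate of the source mean; the paper's point is that such estimates are uncertain, so the code that is optimal for the estimated parameter need not be optimal for the true source.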