Abstract
Solving inverse problems usually calls for adapted priors, such as a well-chosen representation of possible solutions. One family of approaches relies on learning redundant dictionaries for sparse representation. In image processing, dictionary learning is typically applied to sets of patches. Many methods work with a dictionary whose number of atoms is fixed in advance, and optimization methods often require prior knowledge of the noise level to tune regularization parameters. We propose a Bayesian nonparametric approach that learns a dictionary of adapted size: an Indian Buffet Process prior makes it possible to infer an adequate number of atoms. The noise level is also accurately estimated, so that nearly no parameter tuning is needed. We illustrate the relevance of the resulting dictionaries on numerical experiments.
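The key idea above is that the Indian Buffet Process lets the number of dictionary atoms be inferred rather than fixed. As a minimal sketch (not the authors' inference algorithm), the generative "restaurant" construction of the IBP can be simulated as follows; `sample_ibp` and the parameter names are illustrative, and the binary matrix Z plays the role of the atom-usage indicators:

```python
import numpy as np

def sample_ibp(n_customers, alpha, rng):
    """Draw a binary feature matrix Z from an Indian Buffet Process.

    Row i is customer i (a data item); columns are dishes (atoms).
    The number of columns is not fixed in advance: customer i takes
    each existing dish k with probability m_k / i (m_k = previous
    takers), then tries Poisson(alpha / i) new dishes.
    """
    dishes = []  # dishes[k] = list of customers who took dish k
    for i in range(1, n_customers + 1):
        # sample previously seen dishes in proportion to popularity
        for takers in dishes:
            if rng.random() < len(takers) / i:
                takers.append(i)
        # open a Poisson(alpha / i) number of brand-new dishes
        for _ in range(rng.poisson(alpha / i)):
            dishes.append([i])
    # assemble the binary customer-by-dish matrix
    Z = np.zeros((n_customers, len(dishes)), dtype=int)
    for k, takers in enumerate(dishes):
        Z[[t - 1 for t in takers], k] = 1
    return Z

rng = np.random.default_rng(0)
Z = sample_ibp(100, alpha=3.0, rng=rng)
print(Z.shape)  # the atom count grows roughly like alpha * log(n), not a preset K
```

In a dictionary-learning model, each column of Z would be paired with a learned atom, so the effective dictionary size adapts to the data through the posterior over Z.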
Notes
\(\mathcal {N}(\boldsymbol {\mu },\boldsymbol {\Sigma })\) : Gaussian distribution with expectation μ and covariance Σ.
\(\mathcal{G}(x;a,b) = b^{a} x^{a-1} \exp(-bx)/\Gamma(a)\) for \(x>0\): Gamma distribution with shape a and rate b.
Matlab code by R. Rubinstein is available at http://www.cs.technion.ac.il/ronrubin/software.html
Matlab code by M. Zhou is available at http://mingyuanzhou.github.io/Code.html
About this article
Cite this article
Dang, H.P., Chainais, P. Towards Dictionaries of Optimal Size: A Bayesian Non Parametric Approach. J Sign Process Syst 90, 221–232 (2018). https://doi.org/10.1007/s11265-016-1154-1