
Variational Bayes estimation of hierarchical Dirichlet-multinomial mixtures for text clustering

  • Original paper

Computational Statistics

Abstract

In this paper, we formulate a hierarchical Bayesian version of the Mixture of Unigrams model for text clustering and approach its posterior inference through variational inference. We compute the explicit expression of the variational objective function for our hierarchical model under a mean-field approximation. We then derive the update equations of a suitable algorithm based on coordinate ascent to find local maxima of the variational target, and estimate the model parameters through the optimized variational hyperparameters. The advantages of variational algorithms over traditional Markov Chain Monte Carlo methods based on iterative posterior sampling are also discussed in detail.


Data availability

Available on the websites referenced in the article.

Code availability

Upon request.

References


Acknowledgements

We wish to thank the Associate Editor for his help and support and the two anonymous referees for their careful and constructive reviews.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Author information

Authors and Affiliations

Authors

Contributions

The authors contributed to the manuscript equally.

Corresponding author

Correspondence to Massimo Bilancia.

Ethics declarations

Conflict of interest

The authors have no conflict of interest to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A The Dirichlet-multinomial distribution

The j-th class-conditional distribution of the proposed hierarchical model can be written in closed form by integrating out the Multinomial parameters (in what follows \(z_i = j\)):

$$\begin{aligned} p(y_i \vert z_i, \theta )&= \int p(y_i, \beta \vert z_i, \theta )\,d\beta = \int p(y_i\vert \beta , z_i)\, p(\beta \vert \theta )\, d\beta \\&= \int p(y_i\vert \beta _j) \prod _{s=1}^k p(\beta _s \vert \theta )\,d\beta _s \\&= \int p(y_i \vert \beta _j)\, p(\beta _j \vert \theta )\,d\beta _j \underbrace{\int \prod _{-j} p(\beta _{-j} \vert \theta )\,d\beta _{-j}}_{1} \\&= \int \left( {\begin{array}{c}y_{i+}\\ y_i\end{array}}\right) \prod _{\ell = 1}^p \beta _{j\ell }^{y_{i\ell }}\, \frac{\Gamma \left( p\theta \right) }{\Gamma (\theta )^p}\prod _{\ell =1}^p \beta _{j\ell }^{\theta - 1}\, d\beta _{j\ell } \\&= \left( {\begin{array}{c}y_{i+}\\ y_i\end{array}}\right) \frac{\Gamma \left( p\theta \right) }{\Gamma (\theta )^p}\, c^{-1} \underbrace{\int c \prod _{\ell =1}^p \beta _{j\ell }^{y_{i\ell } + \theta - 1}\, d\beta _{j\ell }}_{1}, \end{aligned}$$

where the inverse of the normalization constant c has expression:

$$\begin{aligned} c^{-1}=\frac{\prod _{\ell =1}^p \Gamma \left( y_{i\ell } +\theta \right) }{\Gamma \left( \sum _{\ell = 1}^p \left( y_{i\ell } +\theta \right) \right) }. \end{aligned}$$

Using the standard notation for the multivariate Beta function:

$$\begin{aligned} B(x)=B(x_1,x_2,\ldots ,x_p) = \frac{\prod _{\ell =1}^p \Gamma (x_\ell )}{\Gamma \left( \sum _{\ell = 1}^p x_\ell \right) }, \end{aligned}$$

the class-conditional likelihood can be rewritten as:

$$\begin{aligned} p(y_i \vert z_i,\theta ) = \left( {\begin{array}{c}y_{i+}\\ y_i\end{array}}\right) \frac{B(y_i+\theta )}{B(\theta )}. \end{aligned}$$

This probability mass function (pmf) defines the Dirichlet-Multinomial distribution. It was studied, among others, by Mosimann (1962), who showed that the variance of each marginal component of the j-th class-conditional distribution is given by:

$$\begin{aligned} {\textsf{Var}}(y_{i\ell } \vert z_i, \theta ) = y_{i+}{\textsf{E}}(\beta _{j\ell }) {\textsf{E}}(1 - \beta _{j\ell })\left( \frac{y_{i+} + p\theta }{1 + p\theta } \right) . \end{aligned}$$

Thus, each marginal component of the class-conditional distribution exhibits overdispersion with respect to the standard Multinomial distribution. The magnitude of this overdispersion, which reflects the semantic heterogeneity of the underlying documents, is controlled by the term \(p\theta\), with larger values corresponding to less overdispersion.
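The closed form above is straightforward to evaluate on the log scale. The following Python sketch (illustrative only, and not the code accompanying the paper, which is available upon request) computes the Dirichlet-Multinomial log-pmf through the multivariate Beta function and compares the Multinomial and Dirichlet-Multinomial marginal variances; all function names and the example values of \(p\), \(\theta\) and \(y_{i+}\) are assumptions made for the example.

```python
import numpy as np
from scipy.special import gammaln

def log_multivariate_beta(x):
    """log B(x) = sum_l log Gamma(x_l) - log Gamma(sum_l x_l)."""
    return np.sum(gammaln(x)) - gammaln(np.sum(x))

def log_dirichlet_multinomial_pmf(y, theta):
    """Log pmf of the Dirichlet-Multinomial with symmetric Dirichlet parameter theta.

    y     : integer term counts of length p (one document)
    theta : scalar symmetric Dirichlet hyperparameter
    """
    y = np.asarray(y, dtype=float)
    p = y.size
    y_plus = y.sum()
    # log Multinomial coefficient: log (y_+)! - sum_l log y_l!
    log_coef = gammaln(y_plus + 1.0) - np.sum(gammaln(y + 1.0))
    return log_coef + log_multivariate_beta(y + theta) - log_multivariate_beta(theta * np.ones(p))

# Overdispersion check: under the symmetric prior E(beta_jl) = 1/p, so
# Var(y_il) = y_+ (1/p)(1 - 1/p) (y_+ + p*theta)/(1 + p*theta).
p, theta, y_plus = 5, 0.5, 100
mult_var = y_plus * (1 / p) * (1 - 1 / p)
dm_var = mult_var * (y_plus + p * theta) / (1 + p * theta)
print(f"Multinomial variance: {mult_var:.2f}, Dirichlet-Multinomial variance: {dm_var:.2f}")
```

With these illustrative values the inflation factor \((y_{i+} + p\theta )/(1 + p\theta )\) is roughly 29, showing how a small \(p\theta\) translates into strong overdispersion.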

Appendix B Calculating the ELBO in explicit form

We begin by writing the joint distribution of the latent variables and model parameters that appears in the first term of the ELBO (11):

$$\begin{aligned} p(y,\beta ,z, \lambda \vert \theta , \alpha )=\prod _{i=1}^n p(y_i \vert \beta , z_i) \times \prod _{i=1}^n p(z_i \vert \lambda )\times \prod _{j=1}^k p(\beta _j \vert \theta ) \times p(\lambda \vert \alpha ) \end{aligned}$$

that is:

$$\begin{aligned} \log p(y,\beta ,z, \lambda \vert \theta , \alpha )&= \sum _{i=1}^n \log p(y_i \vert \beta , z_i) &\quad \boxed{\text {A1}} \\&\quad + \sum _{i=1}^n \log p(z_i \vert \lambda ) &\quad \boxed{\text {A2}}\\&\quad + \sum _{j=1}^k \log p(\beta _j \vert \theta ) &\quad \boxed{\text {A3}}\\&\quad + \log p(\lambda \vert \alpha ) &\quad \boxed{\text {A4}} \end{aligned}$$

We calculate the expected values of these quantities.

\(\boxed{\text{A1}}\) By definition, \(y_i \vert \beta , z_i \sim \textsf {Multinomial}_p(\beta _{s})\), where the index s corresponds to the index of the only component of the vector \(z_i\) that is equal to 1. It follows that:

$$\begin{aligned} \log p(y_i\vert \beta , z_i) \propto \log \prod _{\ell =1}^p \beta _{s\ell }^{y_{i\ell }}= \sum _{\ell = 1}^p y_{i\ell }\log \beta _{s\ell }, \end{aligned}$$

and that:

$$\begin{aligned} {\textsf{E}}_q \left[ \sum _{i=1}^n \log p(y_i \vert \beta , z_i) \right]&= {\textsf{E}}_q \left[ \sum _{i=1}^n \sum _{\ell = 1}^p y_{i\ell }\log \beta _{s\ell } \right] = \\&= \sum _{i=1}^n \sum _{\ell = 1}^p y_{i\ell } {\textsf{E}}_q \left[ \log \beta _{s\ell } \right] = \\&= \sum _{i=1}^n \sum _{\ell =1}^p \sum _{j=1}^k y_{i\ell } \gamma _{ij}{\textsf{E}}_q \left[ \log \beta _{j\ell } \right] , \end{aligned}$$

given (14), since the term \(\log \beta _{s\ell }\) is a function of the random variable \(z_i\) through the index s, so that its expectation under q averages over the mixture components with weights \(\gamma _{ij}\). We now observe that the variational distribution of \(\beta _j\) can be written as:

$$\begin{aligned} q(\beta _j \vert \phi _j) =\exp \left\{ \sum _{\ell =1}^p (\phi _{j\ell }-1) \log \beta _{j\ell }- \left[ \sum _{\ell =1}^p \log \Gamma (\phi _{j\ell }) - \log \Gamma \left( \sum _{\ell =1}^p \phi _{j\ell } \right) \right] \right\} , \end{aligned}$$

which is a multiparametric exponential family with:

  • \(\log \beta _{j\ell }\): minimal sufficient statistics for \(\ell =1,2,\ldots ,p\).

  • \(u_{j\ell } =\phi _{j\ell } -1\): natural (or canonical) parameters for \(\ell =1,2,\ldots ,p\).

By defining:

$$\begin{aligned} A(u_j) = \sum _{\ell =1}^p \log \Gamma (u_{j\ell } + 1) - \log \Gamma \left( \sum _{\ell =1}^p (u_{j\ell } + 1) \right) , \end{aligned}$$

it is well known that (in what follows \(\phi _j - 1 \equiv u_j\) componentwise):

$$\begin{aligned} {\textsf{E}}_q \left[ \log \beta _{j\ell } \right]&= \frac{\partial A(u_j)}{\partial u_{j\ell }} = \frac{\partial A(u_j)}{\partial \phi _{j\ell }} \, \frac{\partial \phi _{j\ell }}{\partial u_{j\ell }} = \frac{\partial A(\phi _j - 1)}{\partial \phi _{j\ell }} \frac{\partial (u_{j\ell } + 1)}{\partial u_{j\ell }} = \frac{\partial A(\phi _j - 1)}{\partial \phi _{j\ell }} \\&= \frac{\partial }{\partial \phi _{j\ell }} \left[ \sum _{h=1}^p \log \Gamma (\phi _{jh}) - \log \Gamma \left( \sum _{h=1}^p \phi _{jh} \right) \right] \\&= \frac{\partial }{\partial \phi _{j\ell }} \log \Gamma (\phi _{j\ell }) - \frac{\partial }{\partial \phi _{j\ell }} \log \Gamma \left( \sum _{h=1}^p \phi _{jh} \right) \\&= \Psi (\phi _{j\ell }) - \Psi \left( \sum _{h=1}^p \phi _{jh} \right) . \end{aligned}$$

Putting everything together, we get the summand (16) of ELBO. \(\square\)
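As a quick sanity check of the Digamma identity just derived, one can compare a Monte Carlo estimate of \({\textsf{E}}_q[\log \beta _{j\ell }]\) under a Dirichlet variational factor with the closed-form expression. The following sketch uses an arbitrary illustrative parameter vector and is not part of the paper's code.

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)

phi_j = np.array([2.0, 0.7, 5.3, 1.1])                  # illustrative variational parameters
beta_samples = rng.dirichlet(phi_j, size=200_000)        # draws from q(beta_j | phi_j)

mc_estimate = np.log(beta_samples).mean(axis=0)          # Monte Carlo E_q[log beta_jl]
exact = digamma(phi_j) - digamma(phi_j.sum())            # Psi(phi_jl) - Psi(sum_l phi_jl)

print(np.round(mc_estimate, 4))
print(np.round(exact, 4))
```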

\(\boxed {\text {A2}}\) Using the independence between the latent indicator variables \(z_i\) and \(\lambda\) under the variational distribution, and exploiting the representation of the variational distribution of \(\lambda\) as a multiparametric exponential family, we easily obtain the term (17):

$$\begin{aligned} {\textsf{E}}_q \left[ \sum _{i=1}^n \log p(z_i \vert \lambda )\right]&= {\textsf{E}}_q \left[ \sum _{i=1}^n \sum _{j=1}^k z_{ij} \log \lambda _j \right] = \\&= \sum _{i=1}^n \sum _{j=1}^k {\textsf{E}}_q \left[ z_{ij} \right] {\textsf{E}}_q \left[ \log \lambda _j \right] = \\&= \sum _{i=1}^n\sum _{j=1}^k \gamma _{ij}\left\{ \Psi (\eta _j) - \Psi \left( \sum _{j=1}^k \eta _j\right) \right\} . \end{aligned}$$

\(\square\)

\(\boxed {\text {A3}}\) From \(\beta _j \vert \theta \sim {\textsf{Dirichlet}}_p(\mathbbm {1}_p\theta )\) it readily follows that:

$$\begin{aligned} \log p(\beta _j \vert \theta ) = \log \Gamma (p\theta ) - p \log \Gamma (\theta ) + \sum _{\ell = 1}^p (\theta - 1) \log \beta _{j\ell }, \end{aligned}$$

and:

$$\begin{aligned}&{\textsf{E}}_q \left[ \sum _{j=1}^k \log p(\beta _j \vert \theta ) \right] = \\&\quad = {\textsf{E}}_q \left[ k\log \Gamma (p\theta ) - kp \log \Gamma (\theta ) + \sum _{j=1}^k\sum _{\ell = 1}^p (\theta - 1) \log \beta _{j\ell } \right] = \\&\quad = k \log \Gamma (p\theta ) - k p \log \Gamma (\theta ) + \sum _{j=1}^k\sum _{\ell = 1}^p (\theta - 1) {\textsf{E}}_q \left[ \log \beta _{j\ell } \right] =\\&\quad = k\log \Gamma (p\theta ) - kp \log \Gamma (\theta ) + \sum _{j=1}^k\sum _{\ell = 1}^p (\theta - 1) \left\{ \Psi (\phi _{j\ell }) - \Psi \left( \sum _{\ell =1}^p \phi _{j\ell } \right) \right\} , \end{aligned}$$

that is the expression in (18). \(\square\)

\(\boxed {\text {A4}}\) As in the previous point, from \(\lambda \vert \alpha \sim {\textsf {Dirichlet}}_k (\mathbbm {1}_k \alpha )\) we have:

$$\begin{aligned} \log p(\lambda \vert \alpha ) = \log \Gamma (k\alpha ) - k \log \Gamma (\alpha ) + \sum _{j=1}^k (\alpha - 1)\log \lambda _j, \end{aligned}$$

from which (19) follows:

$$\begin{aligned}&{\textsf{E}}_q \left[ \log p(\lambda \vert \alpha ) \right] = \\&\quad = \log \Gamma (k\alpha ) - k \log \Gamma (\alpha ) + \sum _{j=1}^k(\alpha - 1) {\textsf{E}}_q \left[ \log \lambda _{j} \right] = \\&\quad = \log \Gamma (k\alpha ) - k \log \Gamma (\alpha ) + \sum _{j=1}^k(\alpha - 1) \left\{ \Psi (\eta _j) - \Psi \left( \sum _{j=1}^k \eta _j\right) \right\} . \end{aligned}$$

\(\square\)

Turning to the second term of the ELBO, the variational distribution factorizes as:

$$\begin{aligned} q(\beta ,z,\lambda \vert \nu ) = \prod _{j=1}^k q(\beta _j \vert \phi _j) \times \prod _{i=1}^n q(z_i \vert \gamma _i) \times q(\lambda \vert \eta ), \end{aligned}$$

that is:

$$\begin{aligned} \log q(\beta ,z,\lambda \vert \nu )&= \sum _{j=1}^k \log q(\beta _j \vert \phi _j) &\quad \boxed{\text {B1}}\\&\quad + \sum _{i=1}^n \log q(z_i \vert \gamma _i) &\quad \boxed{\text {B2}}\\&\quad + \log q(\lambda \vert \eta ) &\quad \boxed{\text {B3}} \end{aligned}$$

If we compute the expected value of \(\log q(\beta ,z,\lambda \vert \nu )\) with respect to the variational distribution q, using simple algebra and the representation of the Dirichlet distribution as a multiparametric exponential family, which we have already seen, we find that the expected values with respect to q of \(\boxed {\text {B1}}, \boxed {\text {B2}} {\mbox{ and }} \boxed {\text {B3}}\) coincide with (20), (21) and (22) up to a change of sign, respectively.
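Collecting the expectations derived above, the ELBO can be evaluated numerically, up to the additive constant given by the log Multinomial coefficients, which does not depend on the variational parameters. The sketch below is illustrative only: array shapes, names and initialization are assumptions for the example, not the authors' released code.

```python
import numpy as np
from scipy.special import digamma, gammaln

def elbo(Y, gamma, eta, phi, theta, alpha):
    """ELBO of the hierarchical Mixture of Unigrams, up to an additive constant
    (the log Multinomial coefficients) that does not depend on the variational parameters.

    Y     : (n, p) matrix of term counts
    gamma : (n, k) variational responsibilities (rows sum to 1)
    eta   : (k,)   variational Dirichlet parameters for lambda
    phi   : (k, p) variational Dirichlet parameters for beta_1, ..., beta_k
    theta, alpha : scalar hyperparameters of the symmetric Dirichlet priors
    """
    n, p = Y.shape
    k = eta.size

    E_log_beta = digamma(phi) - digamma(phi.sum(axis=1, keepdims=True))  # (k, p)
    E_log_lambda = digamma(eta) - digamma(eta.sum())                     # (k,)

    A1 = np.sum(gamma * (Y @ E_log_beta.T))                    # E_q[log p(y | beta, z)]
    A2 = np.sum(gamma * E_log_lambda)                          # E_q[log p(z | lambda)]
    A3 = (k * gammaln(p * theta) - k * p * gammaln(theta)
          + (theta - 1.0) * E_log_beta.sum())                  # E_q[log p(beta | theta)]
    A4 = (gammaln(k * alpha) - k * gammaln(alpha)
          + (alpha - 1.0) * E_log_lambda.sum())                # E_q[log p(lambda | alpha)]

    B1 = (np.sum((phi - 1.0) * E_log_beta)
          - np.sum(gammaln(phi)) + np.sum(gammaln(phi.sum(axis=1))))   # E_q[log q(beta)]
    B2 = np.sum(gamma * np.log(np.clip(gamma, 1e-300, None)))          # E_q[log q(z)]
    B3 = (np.sum((eta - 1.0) * E_log_lambda)
          - np.sum(gammaln(eta)) + gammaln(eta.sum()))                 # E_q[log q(lambda)]

    return A1 + A2 + A3 + A4 - (B1 + B2 + B3)
```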

Appendix C Maximizing the ELBO

Since the coordinate ascent scheme maximizes the ELBO with respect to each variational parameter in turn, holding all the others fixed, we first isolate the terms of the ELBO that depend on the parameter being updated and then compute the maximum point.

\(\boxed {\gamma _{ij}}\) (\(i=1,2,\ldots ,n\), \(j=1,2,\ldots , k\)). This parameter appears in (16), (17), and (21). We isolate the terms containing \(\gamma _{ij}\) and add a Lagrange multiplier term to the objective function to enforce the constraint that these Multinomial parameters sum to 1 for fixed i:

$$\begin{aligned} {\mathcal {L}}_{[\gamma _{ij}]}&= \sum _{\ell =1}^p y_{i\ell }\gamma _{ij} \left\{ \Psi (\phi _{j\ell }) - \Psi \left( \sum _{\ell =1}^p \phi _{j\ell } \right) \right\} + \\&\quad + \gamma _{ij}\left\{ \Psi (\eta _{j}) - \Psi \left( \sum _{j=1}^k \eta _{j} \right) \right\} - \\&\quad - \gamma _{ij}\log \gamma _{ij} - L \left( \sum _{s=1}^k \gamma _{is} - 1 \right) . \end{aligned}$$

We take the partial derivative with respect to \(\gamma _{ij}\) and set it equal to zero:

$$\begin{aligned} \frac{\partial {\mathcal {L}}_{[\gamma _{ij}]}}{\partial \gamma _{ij}}&= \sum _{\ell =1}^p y_{i\ell } \left\{ \Psi (\phi _{j\ell }) - \Psi \left( \sum _{\ell =1}^p \phi _{j\ell } \right) \right\} + \\&\quad + \left\{ \Psi (\eta _{j}) - \Psi \left( \sum _{j=1}^k \eta _{j} \right) \right\} - \\&\quad - \log \gamma _{ij} - \gamma _{ij}\frac{1}{\gamma _{ij}} - L = 0, \end{aligned}$$

from which we obtain:

$$\begin{aligned} \log \gamma _{ij}&= -1-L + \sum _{\ell =1}^p y_{i\ell } \left\{ \Psi (\phi _{j\ell }) - \Psi \left( \sum _{\ell =1}^p \phi _{j\ell } \right) \right\} + \\&\quad + \left\{ \Psi (\eta _{j}) - \Psi \left( \sum _{j=1}^k \eta _{j} \right) \right\} , \end{aligned}$$

that is:

$$\begin{aligned} \gamma _{ij}&= \exp (-1-L) \;\times \\&\quad \times \exp \left\{ \sum _{\ell =1}^p y_{i\ell } \left[ \Psi (\phi _{j\ell }) - \Psi \left( \sum _{\ell =1}^p \phi _{j\ell } \right) \right] \right\} \,\times\\&\quad \times \exp \left\{ \Psi (\eta _{j}) - \Psi \left( \sum _{j=1}^k \eta _{j} \right) \right\} \propto\\&\quad \propto \exp \bigg \{ \sum _{\ell =1}^p y_{i\ell } {\textsf{E}}_q \left[ \log \beta _{j\ell } \right] \bigg \}\exp \bigg \{ {\textsf{E}}_q \left[ \log \lambda _{j} \right] \bigg \}, \end{aligned}$$

which must be normalized so that it sums to 1 over j for each fixed i, according to (26). \(\square\)
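In finite precision the unnormalized values of \(\gamma _{ij}\) can underflow, since the exponent involves sums over many term counts; a standard remedy is to carry out the normalization in (26) on the log scale. The sketch below (names and array shapes are illustrative assumptions) implements the update in this way.

```python
import numpy as np
from scipy.special import digamma

def update_gamma(Y, phi, eta):
    """Coordinate-ascent update of the responsibilities gamma_ij,
    normalized row-wise on the log scale for numerical stability.

    Y : (n, p), phi : (k, p), eta : (k,)  -- shapes assumed for illustration.
    """
    E_log_beta = digamma(phi) - digamma(phi.sum(axis=1, keepdims=True))  # (k, p)
    E_log_lambda = digamma(eta) - digamma(eta.sum())                     # (k,)

    log_gamma = Y @ E_log_beta.T + E_log_lambda        # (n, k) unnormalized log gamma_ij
    log_gamma -= log_gamma.max(axis=1, keepdims=True)  # log-sum-exp trick
    gamma = np.exp(log_gamma)
    return gamma / gamma.sum(axis=1, keepdims=True)
```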

\(\boxed {\eta _{j}}\) (\(j=1,2,\ldots , k\)). Isolating \(\eta _j\), which appears in (17), (19) and (22), we have:

$$\begin{aligned} {\mathcal {L}}_{[\eta _j]}&= \sum _{i=1}^n \gamma _{ij}\left\{ \Psi (\eta _j) - \Psi \left( \sum _{j=1}^k \eta _j\right) \right\} + \\&\quad + (\alpha -1) \left\{ \Psi (\eta _j) - \Psi \left( \sum _{j=1}^k \eta _j\right) \right\} - \\&\quad - \log \Gamma \left( \sum _{j=1}^k\eta _j\right) + \log \Gamma (\eta _j) \,-\\&\quad - (\eta _j-1)\left\{ \Psi (\eta _j) - \Psi \left( \sum _{j=1}^k \eta _j\right) \right\} = \\&= \left\{ \Psi (\eta _j) - \Psi \left( \sum _{j=1}^k \eta _j\right) \right\} \left[ \sum _{i=1}^n \gamma _{ij} + \alpha - \eta _j \right] - \\&\quad - \log \Gamma \left( \sum _{j=1}^k\eta _j\right) +\log \Gamma (\eta _j). \end{aligned}$$

As above, taking the partial derivative with respect to \(\eta _j\) and setting it to 0, we have:

$$\begin{aligned} \frac{\partial {\mathcal {L}}_{[\eta _{j}]}}{\partial \eta _j}&= \Psi '(\eta _j)\left[ \sum _{i=1}^n \gamma _{ij} + \alpha - \eta _j \right] + \Psi (\eta _j)(-1) \, -\\&\quad - \Psi '\left( \sum _{j=1}^k \eta _j \right) \left[ \sum _{i=1}^n \gamma _{ij} + \alpha - \eta _j \right] - \Psi \left( \sum _{j=1}^k \eta _j\right) (-1) \, - \\&\quad - \Psi \left( \sum _{j=1}^k \eta _j\right) + \Psi (\eta _j) \, =\\&= \Psi '(\eta _j)\left[ \sum _{i=1}^n \gamma _{ij} + \alpha - \eta _j \right] \, -\\&\quad - \Psi '\left( \sum _{j=1}^k \eta _j \right) \left[ \sum _{i=1}^n \gamma _{ij} + \alpha - \eta _j \right] = 0, \end{aligned}$$

which is equivalent to the following equation in \(\eta _j\):

$$\begin{aligned} \Psi '(\eta _j)\left[ \sum _{i=1}^n \gamma _{ij} + \alpha - \eta _j \right] = \Psi '\left( \sum _{j=1}^k \eta _j \right) \left[ \sum _{i=1}^n \gamma _{ij} + \alpha - \eta _j \right] . \end{aligned}$$

Since the Trigamma function \(\Psi '\) is strictly positive and strictly decreasing on the positive half-line, and \(\sum _{j=1}^k \eta _j > \eta _j\) whenever all the components are positive, \(\Psi '(\eta _j)\) and \(\Psi '\left( \sum _{j=1}^k \eta _j \right)\) can never be equal. Therefore, this equation admits a unique solution if and only if:

$$\begin{aligned} \sum _{i=1}^n \gamma _{ij} + \alpha - \eta _j = 0, \end{aligned}$$

that is if and only if:

$$\begin{aligned} \eta _j = \alpha + \sum _{i=1}^n \gamma _{ij}. \end{aligned}$$

\(\boxed {\phi _{j\ell }}\) (\(j=1,2,\ldots ,k\), \(\ell =1,2,\ldots , p\)). Isolating \(\phi _{j\ell }\) in (16), (18) and (20):

$$\begin{aligned} {\mathcal {L}}_{[\phi _{j\ell }]}&= \sum _{i=1}^n y_{i\ell }\gamma _{ij} \left\{ \Psi (\phi _{j\ell }) - \Psi \left( \sum _{\ell =1}^p \phi _{j\ell } \right) \right\} \, + \\&\quad +(\theta - 1) \left\{ \Psi (\phi _{j\ell }) - \Psi \left( \sum _{\ell =1}^p \phi _{j\ell } \right) \right\} \, - \\&\quad - \log \Gamma \left( \sum _{\ell =1}^p \phi _{j\ell } \right) + \log \Gamma (\phi _{j\ell })\, - \\&\quad - (\phi _{j\ell }- 1)\left\{ \Psi (\phi _{j\ell }) - \Psi \left( \sum _{\ell =1}^p \phi _{j\ell } \right) \right\} \, = \\&= \left\{ \Psi (\phi _{j\ell }) - \Psi \left( \sum _{\ell =1}^p \phi _{j\ell } \right) \right\} \left[ \sum _{i=1}^n y_{i\ell }\gamma _{ij}+\theta -\phi _{j\ell } \right] \, -\\&\quad - \log \Gamma \left( \sum _{\ell =1}^p \phi _{j\ell } \right) + \log \Gamma (\phi _{j\ell }). \end{aligned}$$

Taking the first derivative and setting it to 0:

$$\begin{aligned} \frac{\partial {\mathcal {L}}_{[\phi _{j\ell }]}}{\partial \phi _{j\ell }}&= \Psi '(\phi _{j\ell })\left[ \sum _{i=1}^n y_{i\ell }\gamma _{ij}+\theta -\phi _{j\ell } \right] \, + \\&\quad + \Psi (\phi _{j\ell })(-1) - \Psi '\left( \sum _{\ell =1}^p\phi _{j\ell }\right) \left[ \sum _{i=1}^n y_{i\ell }\gamma _{ij}+\theta -\phi _{j\ell } \right] \, - \\&\quad - \Psi \left( \sum _{\ell =1}^p\phi _{j\ell }\right) (-1) - \Psi \left( \sum _{\ell =1}^p\phi _{j\ell }\right) + \Psi (\phi _{j\ell }) \, = \\&= \Psi '(\phi _{j\ell })\left[ \sum _{i=1}^n y_{i\ell }\gamma _{ij}+\theta -\phi _{j\ell } \right] \, - \\&\quad - \Psi '\left( \sum _{\ell =1}^p\phi _{j\ell }\right) \left[ \sum _{i=1}^n y_{i\ell }\gamma _{ij}+\theta -\phi _{j\ell } \right] = 0, \end{aligned}$$

which, as in the previous case, has a unique solution in \(\phi _{j\ell }\) given by:

$$\begin{aligned} \phi _{j\ell }= \theta + \sum _{i=1}^n y_{i\ell }\gamma _{ij}. \end{aligned}$$
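Putting the three closed-form updates together, one coordinate-ascent sweep alternates the updates of \(\eta\), \(\phi\) and \(\gamma\) and monitors the ELBO, which increases monotonically. The following sketch reuses the illustrative update_gamma and elbo functions defined above; it is not the authors' implementation (which is available upon request), and the initialization, stopping rule and names are assumptions made for the example.

```python
import numpy as np

def cavi(Y, k, theta, alpha, max_iter=200, tol=1e-6, seed=0):
    """Coordinate-ascent variational inference for the hierarchical Mixture of Unigrams.
    Relies on update_gamma() and elbo() sketched above; shapes and initialization are illustrative.
    """
    n, p = Y.shape
    rng = np.random.default_rng(seed)

    gamma = rng.dirichlet(np.ones(k), size=n)      # random initial responsibilities
    elbo_old = -np.inf
    for it in range(max_iter):
        eta = alpha + gamma.sum(axis=0)            # eta_j  = alpha + sum_i gamma_ij
        phi = theta + gamma.T @ Y                  # phi_jl = theta + sum_i y_il gamma_ij
        gamma = update_gamma(Y, phi, eta)          # responsibilities given eta and phi

        elbo_new = elbo(Y, gamma, eta, phi, theta, alpha)
        if elbo_new - elbo_old < tol:              # monotone increase: stop at small gains
            break
        elbo_old = elbo_new
    return gamma, eta, phi
```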

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Bilancia, M., Di Nanni, M., Manca, F. et al. Variational Bayes estimation of hierarchical Dirichlet-multinomial mixtures for text clustering. Comput Stat 38, 2015–2051 (2023). https://doi.org/10.1007/s00180-023-01350-8
