Probability laws in multi-dimensional noisy HK-models

The classical Hegselmann-Krause (HK) opinion dynamics model is known for the simplicity of its interaction rules and for the striking complexity of the collective behavior observed in computer simulations. The nonlinearity of the dynamics makes an analytical study of the model a very hard task. Recently there has been growing interest in introducing random noise into the HK dynamics. In our study we focus on an exact analysis of the probability laws of a noisy HK model and give a detailed description of the distribution transformations related to such systems. Our results shed some light on the long-time behavior of the noisy HK model and open new possibilities for future analytical and numerical studies in this area.


Introduction
The HK model, introduced by Hegselmann and Krause in [1], is one of the most popular models of opinion dynamics. Usually, models of opinion dynamics consist of $N$ interacting particles $x_i(t)$, $i = 1, \ldots, N$, also known as agent opinions, where every $x_i(t)$ takes values in one common opinion space. In [1] the authors considered a discrete-time model and provided sufficient conditions for the consensus phenomenon, which means that all agent opinions converge to the same opinion as $t \to \infty$. Hegselmann and Krause took $\mathbb{R}$ as the opinion space and defined the update rule as follows: $x_i(t+1)$ is the average of the opinions $x_j(t)$ that are within the confidence interval ($\varepsilon$-neighbourhood) of $x_i(t)$. Such systems are called bounded confidence models [2][3][4].
We modify the classic HK model by incorporating noise into it. Similar models were considered by many authors [3,[5][6][7][8]. Most of these papers are devoted to homogeneous situations and are focused on properties of the final configurations (the consensus phenomenon, polarisation, distance between clusters, etc.). Here we study HK models with $\mathbb{R}^d$-valued agent opinions in the presence of heterogeneous noise, and we are mainly focused on their pre-limit behaviour. Our primary goal is to gain a deeper understanding of how the probability distribution of the process evolves in time.

Noisy HK Model
We consider a discrete-time stochastic model with $N$ interacting particles which belong to $\mathbb{R}^d$; thus $x(t) \in \mathbb{R}^{N \times d}$. Let us also call $x_i(t)$ the $i$-th agent opinion, $i$ an agent index, and $\mathbb{R}^d$ the opinion space. The update rule $x(t) \longrightarrow x(t+1)$ is the following:
$$x(t+1) = \mathrm{HK}[x(t)] + \xi(t),$$
where $\xi(t)$ are i.i.d. $\mathbb{R}^{N \times d}$-valued random variables (random noise)¹, $x_0$ is an arbitrary deterministic $\mathbb{R}^{N \times d}$-valued vector, and the map $\mathrm{HK}[\cdot]$ is defined as
$$\big(\mathrm{HK}[x(t)]\big)_i = \frac{1}{|N_i(x(t))|} \sum_{j \in N_i(x(t))} x_j(t). \qquad (1)$$
Here $N_i(x(t)) = \{\, j : \|x_i(t) - x_j(t)\| \le \varepsilon \,\}$ is the set of agent indices that are within the confidence interval of $x_i(t)$, and $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^d$. The stochastic process $x(t)$ is a discrete-time Markov process with state space $\mathbb{R}^{N \times d}$. Notice that for $d = 1$ and $\xi(t) \equiv 0$ we recover the usual classic HK model (see [1]).
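The update rule above can be sketched in a few lines of code. Gaussian noise and the concrete parameter values below are illustrative assumptions; the model itself only requires i.i.d. noise:

```python
import numpy as np

def hk_step(x, eps, sigma, rng):
    """One step of the noisy HK dynamics: x(t+1) = HK[x(t)] + xi(t).

    x     : (N, d) array of agent opinions
    eps   : confidence radius epsilon
    sigma : noise scale (Gaussian noise is an illustrative choice only)
    """
    # agent i moves to the mean of all opinions within eps of x_i ...
    new_x = np.array([x[np.linalg.norm(x - u_i, axis=1) <= eps].mean(axis=0)
                      for u_i in x])
    # ... and an independent noise draw is added
    return new_x + rng.normal(scale=sigma, size=x.shape)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(5, 2))   # N = 5 agents with opinions in R^2
for _ in range(20):
    x = hk_step(x, eps=0.3, sigma=0.01, rng=rng)
print(x)
```

With `sigma=0.0` and `d = 1` this reduces to the classic deterministic HK update.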

Piecewise linear structure of HK-transformation
Here we study the structure of the HK-transformation and provide results about how it changes the distribution of a random variable. Let $\mathcal{B} = \mathcal{B}(\mathbb{R}^{N \times d})$ denote the Borel subsets of $\mathbb{R}^{N \times d}$.
Note that one can represent the transformation $\mu_{x(t)} \longrightarrow \mu_{x(t+1)}$ as the composition
$$\mu_{x(t)} \longrightarrow \mu_{\mathrm{HK}[x(t)]} \longrightarrow \mu_{\mathrm{HK}[x(t)] + \xi(t)} = \mu_{x(t+1)}$$
and study each step separately. The second step can be represented as the convolution of the two independent random variables $\mathrm{HK}[x(t)]$ and $\xi(t)$ in the following way:
$$\mu_{x(t+1)}(B) = \int_{\mathbb{R}^{N \times d}} \mu_{\mathrm{HK}[x(t)]}(B - u)\, \mu_\xi(du)$$
for an arbitrary $B \in \mathcal{B}$. Here $\mu_\xi$ denotes the distribution of the random variables $\xi(t)$. Thus the main problem for us is to understand the structure of the first step. But we can ask a more general question: what is the structure of the transformation $\mu_\eta \longrightarrow \mu_{\mathrm{HK}[\eta]}$, where $\eta$ is an arbitrary $\mathbb{R}^{N \times d}$-valued random variable?
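The two-step decomposition can be checked by Monte-Carlo sampling: push samples of a law through $\mathrm{HK}[\cdot]$, then add an independent noise draw to realise the convolution. The uniform initial law, the Gaussian noise, and the test event $B$ below are illustrative assumptions:

```python
import numpy as np

def hk_map(x, eps):
    # the deterministic first step: agent i -> mean of opinions within eps
    return np.array([x[np.linalg.norm(x - u_i, axis=1) <= eps].mean(axis=0)
                     for u_i in x])

rng = np.random.default_rng(1)
n_samples, eps = 5000, 0.2

# step 1: push samples of the current law through HK[.]
eta = rng.uniform(0.0, 1.0, size=(n_samples, 3, 1))       # N = 3, d = 1
hk_eta = np.array([hk_map(u, eps) for u in eta])

# step 2: the convolution with mu_xi, realised by adding independent noise
x_next = hk_eta + rng.normal(scale=0.05, size=hk_eta.shape)

# Monte-Carlo estimate of mu_{x(t+1)}(B) for a sample event
# B = {all three opinions lie within 0.5 of each other}
B_hits = np.ptp(x_next[:, :, 0], axis=1) <= 0.5
print("estimated mu_{x(t+1)}(B):", B_hits.mean())
```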
For every $u \in \mathbb{R}^{N \times d}$ define the matrix $P_u$ by
$$(P_u)_{ij} = \begin{cases} 1/|N_i(u)|, & j \in N_i(u), \\ 0, & \text{otherwise}, \end{cases}$$
so that $\mathrm{HK}[u] = P_u u$. The function $u \mapsto P_u$ maps the vector $u$ to an $N \times N$ matrix. Let us show that the range of this function is finite. Indeed, the matrix $P_u$ is fully determined by the finite set of indicator functions
$$\{\, \mathbb{1}\{\|u_i - u_j\| \le \varepsilon\} : 1 \le i < j \le N \,\}.$$
The latter set has cardinality $N(N-1)/2$.

¹ Note that we do not require the distribution of $\xi(t)$ to be bounded, as is done in some other works (see [5]).
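A minimal sketch of this piecewise-linear representation, building $P_u$ from the pairwise indicators and checking $\mathrm{HK}[u] = P_u u$:

```python
import numpy as np

def hk_matrix(u, eps):
    """The matrix P_u with (P_u)_{ij} = 1/|N_i(u)| for j in N_i(u), else 0."""
    N = len(u)
    P = np.zeros((N, N))
    for i in range(N):
        # N_i(u): indices within the confidence interval of u_i
        neigh = np.linalg.norm(u - u[i], axis=1) <= eps
        P[i, neigh] = 1.0 / neigh.sum()
    return P

u = np.array([[0.0], [0.1], [1.0]])
P = hk_matrix(u, eps=0.2)
# HK[u] = P_u @ u, and P_u depends on u only through the
# N(N-1)/2 pairwise indicators 1{||u_i - u_j|| <= eps}
print(P @ u)
```

Since each indicator takes two values, the range of $u \mapsto P_u$ contains at most $2^{N(N-1)/2}$ distinct matrices.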

Numerical Experiments
It is reasonable to study the behaviour of $x(t)$ with respect to the partition $\{B_k\}$. That is why in this section we present several Monte-Carlo simulations of $\mu_{x(t)}(B_k)$; from the examples below one can see how the distribution flows from one $B_k$ to another. A more detailed study would be reasonable, but unfortunately we do not have enough space for it. Let us consider $N = 3$ agents with noise scales $(\sigma_1, \sigma_2, \sigma_3)$ taking 3 different values, as in figure 1. The partition $\mathbb{R}^{N \times d} = \bigcup_{k=1}^{K} B_k$ in the case of 3 agents consists of $K = 2^{N(N-1)/2} = 8$ sets $B_k$; that is why there are 8 plots in figure 1. Moreover, from the proof of theorem 1 one can derive that every $B_k$ corresponds bijectively to a zero-one matrix whose entry in row $i$ and column $j$ equals $\mathbb{1}\{\|x_i - x_j\| \le \varepsilon\}$. It is these matrices that are placed on each plot of figure 1. One can see that for $(\sigma_1, \sigma_2, \sigma_3) = (0.5, 0.5, 2)$ the probability of the event that all three opinions are far from each other tends to one (upper left plot). This is in fact one of the most common situations in our model. Note, however, that for $(\sigma_1, \sigma_2, \sigma_3) = (0.25, 0.25, 2)$ the convergence to one becomes much slower, and for $(\sigma_1, \sigma_2, \sigma_3) = (0.21, 0.21, 2)$ it is barely noticeable.
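A simulation in the spirit of figure 1 can be sketched as follows: each run is classified by its indicator pattern, i.e. by the cell $B_k$ containing $x(T)$. The Gaussian form of the noise, the common starting opinion, and the concrete parameter values are assumptions made for illustration:

```python
import numpy as np
from collections import Counter
from itertools import combinations

def hk_step(x, eps, sigma, rng):
    new_x = np.array([x[np.linalg.norm(x - u_i, axis=1) <= eps].mean(axis=0)
                      for u_i in x])
    return new_x + rng.normal(scale=sigma, size=x.shape)

def cell(x, eps):
    # the indicator pattern that identifies which B_k the configuration is in
    return tuple(int(np.linalg.norm(x[i] - x[j]) <= eps)
                 for i, j in combinations(range(len(x)), 2))

rng = np.random.default_rng(2)
eps, T, runs = 1.0, 50, 1000
sigma = np.array([[0.5], [0.5], [2.0]])   # heterogeneous per-agent noise scales

counts = Counter()
for _ in range(runs):
    x = np.zeros((3, 1))                  # all agents start from a common opinion
    for _ in range(T):
        x = hk_step(x, eps, sigma, rng)
    counts[cell(x, eps)] += 1

for pattern, c in sorted(counts.items()):
    print(pattern, c / runs)              # Monte-Carlo estimate of mu_x(T)(B_k)
```

The pattern `(0, 0, 0)` corresponds to the cell where all three opinions are pairwise far apart.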

Singular Value Decomposition
As proved in theorem 1, the distribution $\mu_{\mathrm{HK}[\eta]}$ can be represented as a sum of distributions $\mu_{A_k \eta}(B \cap B_k)$ of linearly transformed random variables. For this reason, in this section we study how an arbitrary linear transformation changes the distribution of a random variable.
Theorem 2 (Singular Value Decomposition [9]). Let $A$ be a given $N \times N$ matrix, and suppose that $\operatorname{rank} A = r$. There are orthogonal $N \times N$ matrices $V$ and $W$, and a diagonal matrix $\Sigma = \operatorname{diag}(\sigma_1, \ldots, \sigma_N)$ such that $\sigma_1 \ge \sigma_2 \ge \ldots \ge \sigma_r > 0 = \sigma_{r+1} = \ldots = \sigma_N$ and $A = V \Sigma W^T$.
Let $A$ be an $N \times N$ matrix and $\eta$ an $\mathbb{R}^{N \times d}$-valued random variable. One can derive the distribution $\mu_{A\eta}$ using the SVD theorem 2 by the following procedure:
$$\mu_\eta \longrightarrow \mu_{W^T \eta} \longrightarrow \mu_{\Sigma W^T \eta} \longrightarrow \mu_{V \Sigma W^T \eta} = \mu_{A\eta}.$$
The first and the third steps do not cause any problems: these are just orthogonal linear transforms, and one can derive the distribution $\mu_{C\eta}$ for an arbitrary orthogonal $C \in \mathrm{Mat}_{N \times N}(\mathbb{R})$ in the following way:
$$\mu_{C\eta}(B) = \mu_\eta(C^T B), \quad B \in \mathcal{B}.$$
So our main problem for now is to study the second step.
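The three-step procedure can be verified numerically: pushing samples through $W^T$, then $\Sigma$, then $V$ reproduces the direct application of $A$. The rank-deficient test matrix below is an arbitrary example (note that `numpy` names the factors $U, s, V^h$; we rename them to match the theorem's $A = V \Sigma W^T$):

```python
import numpy as np

rng = np.random.default_rng(3)
N, d, n = 4, 2, 1000
A = rng.normal(size=(N, N))
A[N - 1] = A[0] + A[1]                    # force rank A = N - 1

# numpy returns A = V @ diag(s) @ Wt; V is numpy's "U", Wt is numpy's "Vh"
V, s, Wt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))                # numerical rank

eta = rng.normal(size=(n, N, d))          # samples of an R^{N x d}-valued eta
step1 = np.einsum('ij,njd->nid', Wt, eta)    # u -> W^T u   (orthogonal)
step2 = s[None, :, None] * step1             # u -> Sigma u (the hard step)
step3 = np.einsum('ij,njd->nid', V, step2)   # u -> V u     (orthogonal)

direct = np.einsum('ij,njd->nid', A, eta)
print(r, np.max(np.abs(step3 - direct)))  # the two paths agree sample by sample
```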
Theorem 3. Let $\Sigma \in \mathrm{Mat}_{N \times N}(\mathbb{R})$ be a diagonal matrix of the form given in theorem 2, and let $\eta$ be an arbitrary $\mathbb{R}^{N \times d}$-valued random variable. Then
$$\mu_{\Sigma\eta}(du_1 \ldots du_N) = \mu(du_1 \ldots du_r) \times \delta_0(du_{r+1} \ldots du_N),$$
where $\mu(du_1 \ldots du_r)$ is a measure on the Borel subsets of $\mathbb{R}^{r \times d}$ and $\delta_0(du_{r+1} \ldots du_N)$ is the Dirac measure at the origin of $\mathbb{R}^{(N-r) \times d}$. Moreover, if $\eta$ is absolutely continuous with density $p_\eta$, then $\mu$ is absolutely continuous with density
$$p(u_1, \ldots, u_r) = \frac{1}{(\sigma_1 \cdots \sigma_r)^d} \int_{\mathbb{R}^{(N-r) \times d}} p_\eta\Big(\frac{u_1}{\sigma_1}, \ldots, \frac{u_r}{\sigma_r}, v_{r+1}, \ldots, v_N\Big)\, dv_{r+1} \ldots dv_N.$$
Note that for all $u \in \mathbb{R}^{N \times d}$ the vector $\Sigma u$ lies in $\langle e_1, \ldots, e_r \rangle^d$, i.e. the whole distribution $\mu_{\Sigma\eta}$ is concentrated on $\langle e_1, \ldots, e_r \rangle^d$. Thus $\mu_{\Sigma\eta}(du_1 \ldots du_N) = \mu(du_1 \ldots du_r) \times \delta_0(du_{r+1} \ldots du_N)$, and for $\mu$ we have
$$\mu(B) = \mu_{\Sigma\eta}\big(B \times \{0\}^{N-r}\big),$$
where $B$ is an arbitrary Borel set in $\mathbb{R}^{r \times d}$ and $0$ is the null vector in $\mathbb{R}^d$. From the exact form of $\Sigma$ one can derive that the affine space $\{u \mid \Sigma u = v\}$ for $v \in \langle e_1, \ldots, e_r \rangle^d$ has the form
$$\{u \mid \Sigma u = v\} = \Big\{\Big(\frac{v_1}{\sigma_1}, \ldots, \frac{v_r}{\sigma_r}, u_{r+1}, \ldots, u_N\Big) : u_{r+1}, \ldots, u_N \in \mathbb{R}^d\Big\}.$$
Integrating the density $p_\eta$ over this affine space and accounting for the Jacobian factor $(\sigma_1 \cdots \sigma_r)^d$, we derive the desired form of the p.d.f. $p(u_1, \ldots, u_r)$ of the measure $\mu(du_1 \ldots du_r)$.
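The concentration property used in the proof is easy to see on samples: multiplying by a rank-$r$ diagonal $\Sigma$ annihilates the last $N - r$ rows exactly, so the law of $\Sigma\eta$ factors as $\mu \times \delta_0$. The Gaussian $\eta$ and the particular $\sigma_i$ below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
N, d, r = 4, 2, 2
sigma = np.array([3.0, 1.5, 0.0, 0.0])     # diagonal of Sigma with rank r = 2

eta = rng.normal(size=(2000, N, d))
Sigma_eta = sigma[None, :, None] * eta     # u -> Sigma u, sample by sample

# rows r+1..N vanish identically, so mu_{Sigma eta} is concentrated on
# <e_1, ..., e_r>^d and factors as mu(du_1..du_r) x delta_0(du_{r+1}..du_N)
print("max |last rows|:", np.abs(Sigma_eta[:, r:, :]).max())
print("stds of first rows:", Sigma_eta[:, :r, :].std(axis=(0, 2)))
```

For standard Gaussian $\eta$ the empirical standard deviations of the first $r$ rows approach $\sigma_1, \ldots, \sigma_r$, matching the density rescaling in theorem 3.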

Spread of opinions
Let us call
$$V_\eta = \mathbb{E} \sum_{1 \le i < j \le N} \|\eta_i - \eta_j\|^2$$
the spread of a random variable $\eta$, where $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^d$.
Theorem 4. Let $\mathbb{E}\xi(t) = 0$ and $N = 2, 3$. Then for $t > 0$
$$V_{x(t)} \ge V_{x_0} + C_N t,$$
where the coefficient $C_N$ depends on the spread $V_\xi$ of the noise. Remark 1. One can consider $V$ as an analogue of a Lyapunov function in ODE theory. Indeed, by theorem 4, when the spread of the noise $V_\xi$ is large enough the coefficient $C_N > 0$; thus in this case $\lim_{t \to \infty} V_{x(t)} = \infty$, which means that the opinions $x_i(t)$ are on average drifting apart, or in other words, the process $x(t)$ becomes unstable.
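A quick experiment illustrates the instability described in remark 1: with a large noise spread, the averaged spread of opinions grows without bound. The sum-of-squared-pairwise-distances form of $V$, the Gaussian noise, and all parameter values below are assumptions made for illustration:

```python
import numpy as np
from itertools import combinations

def hk_step(x, eps, sigma, rng):
    new_x = np.array([x[np.linalg.norm(x - u_i, axis=1) <= eps].mean(axis=0)
                      for u_i in x])
    return new_x + rng.normal(scale=sigma, size=x.shape)

def spread(x):
    # V_x as the sum of squared pairwise distances (an assumed form of V)
    return sum(float(np.sum((x[i] - x[j]) ** 2))
               for i, j in combinations(range(len(x)), 2))

rng = np.random.default_rng(5)
eps, T, runs = 1.0, 40, 300
V_t = np.zeros(T + 1)
for _ in range(runs):
    x = np.zeros((3, 1))                 # common initial opinion, so V_{x_0} = 0
    for t in range(1, T + 1):
        x = hk_step(x, eps, sigma=2.0, rng=rng)
        V_t[t] += spread(x)
V_t /= runs
print(V_t[::10])   # the averaged spread keeps growing with t
```

With noise scale 2 against confidence radius 1, agents rarely stay within each other's confidence interval, and the empirical spread grows roughly linearly in $t$, consistent with the lower bound of theorem 4.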