The Secret Key Capacity of a Class of Noisy Channels with Correlated Sources

This paper investigates the problem of secret key generation over a wiretap channel when the terminals observe correlated sources. These sources are independent of the main channel, and the users overhear them before the transmission takes place. A novel outer bound is proposed and, employing a previously reported inner bound, the secret key capacity is derived under certain less-noisy conditions on the channel or source components. This result improves upon the existing literature, where the more stringent condition of degradedness is required. Furthermore, numerical evaluations of the achievable scheme and of previously reported results for a binary model are presented; a comparison of the numerical bounds provides insights on the benefit of the chosen scheme.


Introduction
The wiretap channel, introduced by Wyner [1], is the basic model for analyzing secrecy in wireless communications. In this model, the transmitter, named Alice, wants to communicate reliably with Bob while keeping the transmitted message, or part of it, secret from an eavesdropper, named Eve, who overhears the communication through another channel. Secrecy is characterized by the amount of information that is not leaked, which can be measured by the equivocation rate, i.e., the remaining uncertainty about the message at the eavesdropper. The secrecy capacity of the wiretap channel is thus defined as the maximum transmission rate that can be attained with zero leakage. In their influential paper [2], Csiszár and Körner determined the rate-equivocation region of a general broadcast channel with any arbitrary level of security, which also establishes the secrecy capacity of the wiretap channel. These schemes guarantee secrecy by exploiting an artificial random noise that saturates the eavesdropper's decoding capabilities.
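As a concrete numerical illustration (not part of this paper's model), the classical closed-form result for a degraded binary wiretap channel gives a feel for these quantities: when both the main and the eavesdropper links are binary symmetric channels with crossover probabilities p_main ≤ p_eve ≤ 1/2, the secrecy capacity is C_s = h2(p_eve) − h2(p_main), where h2 is the binary entropy function. A minimal sketch:

```python
from math import log2

def h2(p: float) -> float:
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_wiretap_secrecy_capacity(p_main: float, p_eve: float) -> float:
    """Secrecy capacity C_s = h2(p_eve) - h2(p_main) of a wiretap channel whose
    main and eavesdropper links are BSCs with p_main <= p_eve <= 1/2;
    clipped at zero when Eve's channel is at least as good as Bob's."""
    return max(0.0, h2(p_eve) - h2(p_main))

print(bsc_wiretap_secrecy_capacity(0.01, 0.05))  # positive: Bob's channel is better
```

The clipping at zero reflects that no positive secrecy rate is attainable without a channel advantage, which is precisely what motivates the key-agreement schemes discussed next.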
On the other hand, Shannon [3] showed that it is also possible to achieve a positive secrecy rate by means of a secret key. Alice and Bob can safely communicate over a noiseless public broadcast channel as long as they share a secret key. The rate of this key, however, must be at least as large as the rate of the message to attain zero leakage. The main question that arises in this scenario is therefore: how do the legitimate users safely share the secret key? The answer is that the users should not communicate the key itself, which would then be compromised. Instead, they should only convey enough information to allow themselves to agree upon a key without disclosing, at the same time, any information about the key to the eavesdropper.
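Shannon's construction is the one-time pad: ciphertext equals message XOR key, and a uniform key as long as the message leaks nothing to Eve. The few lines below sketch the textbook cipher (an illustration, not part of the paper's model):

```python
import secrets

def otp(message: bytes, key: bytes) -> bytes:
    """One-time pad: XOR the message with an equally long secret key.
    Encryption and decryption are the same operation."""
    assert len(key) == len(message)  # key rate must match the message rate
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))  # fresh uniform key, never reused
cipher = otp(msg, key)
assert otp(cipher, key) == msg       # Bob, holding the key, recovers the message
```

Since the ciphertext is uniformly distributed regardless of the message, Eve's observation is statistically independent of the plaintext, which is exactly the zero-leakage requirement described above.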

Related Work
Maurer [6] and Ahlswede and Csiszár [7] were among the first to study the use of correlated observations available at the legitimate users as a means to agree upon a key. In addition to the correlated observations, the terminals may communicate over a public broadcast channel of infinite capacity to which the eavesdropper also has access. Two models are proposed in [7]: the "source model", where the users observe correlated random sources controlled by nature, and the "channel model", where the users observe inputs and outputs of a noisy channel controlled by one of the users. In [8], Csiszár and Narayan studied the first model but assumed that the public broadcast channel has finite capacity and that there is a third "helper" node who is not interested in recovering the key but rather in helping Alice and Bob. The same authors also analyzed the channel model with only one [9] or with multiple channel inputs [10]. Capacity results are presented in [8][9][10] assuming that there is only one round of communication over the public channel. General inner and outer bounds for both source and channel models with interaction over the public channel were introduced by Gohari and Anantharam [11,12].
More recently, Khisti et al. [13] investigated the situation where there is no helper node, the users communicate over a wiretap channel, and a separate public discussion channel may or may not be available. The simultaneous transmission of a secret message along with a key generation scheme using correlated sources was analyzed by Prabhakaran et al. [14]. They obtained a simple expression that reveals the trade-off between the achievable secrecy rate and the achievable rate of the secret key. The corresponding Gaussian channel with correlated Gaussian sources, independent of the channel components, was recently studied in [15], where closed-form expressions for both secret key generation and secret message transmission are derived. On the other hand, Salimi et al. [16] considered simultaneous key generation of two independent users over a multiple access channel with feedback, where each user eavesdrops on the other. In addition, the receiver can actively send feedback, through a private noiseless (or noisy) link, to increase the size of the shared keys.
The authors of [13][14][15] did not assume interactive communication, i.e., there is only one round of communication. Salimi et al. [16], however, allowed the end user to respond once through the feedback link. Other authors have analyzed key generation schemes that rely on several rounds of transmissions. Tyagi [17] characterized the minimum communication rate required to generate a maximum-rate secret key with r rounds of interactive communication. He showed that this rate is equal to the interactive common information (a quantity he introduced) minus the secret key capacity. In his model, two users observe i.i.d. correlated sources and communicate over an error-free channel. Hayashi et al. [18] studied a similar problem but considered general (not necessarily i.i.d.) source sequences of finite length. Their proposed protocol attains the secret key capacity for general observations as well as the second-order asymptotic term of the maximum feasible secret key length for i.i.d. observations. They also proved that the standard one-way communication protocol fails to attain the aforementioned asymptotic result. Courtade and Halford [19] analyzed the related problem of how many rounds of public transmissions are required to generate a specific number of secret keys. Their model assumes that there are n terminals connected through an error-free public channel, where each terminal is provided with a number of messages before transmission that it uses to generate the keys. More recently, Boche et al. [20] investigated the computability of the secret key in the source model with only one forward communication. They showed that the corresponding secret key capacity is not Turing computable when the public communication is rate-limited, and consequently there is no algorithm that can simulate or compute the secret key capacity.
As previously mentioned, the focus of the present work is on sources that are independent of the main channel; nonetheless, some works have addressed the general situation of correlated sources and channels. Prior work on secrecy for channels with state includes Chen and Vinck's [21] and Liu and Chen's [22] analyses of the wiretap channel with state. These works employ Gelfand and Pinsker's scheme [23] to correlate the transmitted codeword with the channel state at the same time that it saturates the eavesdropper's decoding capabilities. A single-letter expression of the secrecy capacity for this model is still unknown, although a multi-letter bound was provided by Muramatsu [24] and a novel lower bound was recently reported in [25]. As a matter of fact, the complexity of this problem also lies in the derivation of an outer bound that can handle secrecy and channels with state simultaneously.
To the best of our knowledge, only a handful of works have studied the problem of key generation for channels with state. The previously mentioned result of Prabhakaran et al. [14] is one of these examples. Zibaeenejad [26] analyzed a similar scenario where there is also a public channel of finite capacity between the users, and he provided an inner and an outer bound for this model. Although the inner bound is developed for a channel with state, it is possible to apply it to the model used in the present work, i.e., sources independent of the main channel. However, some steps of the proof reported in [26] appear to be obscure and a constraint seems to be missing in the final expression; the resulting achievable rate was recently shown in [27] to be unachievable in certain cases. As a consequence, we decided not to compare our inner bound to this previously reported scheme.

Contributions and Organization of the Paper
In this work, we introduce a novel outer bound (Theorem 2) for the problem of secret key generation over a wiretap channel with correlated sources at each terminal. The correlated sources are assumed to be independent of the main channel and, thanks to a previously reported inner bound (Theorem 1), we obtain the capacity region (Propositions 1-3) whenever the channel and/or source components satisfy the specific less-noisy conditions described in Table 1. In contrast, the proposed schemes in [13][14][15][16] are optimal only when the stronger degradedness condition holds true for the channel and source components.
The results and tools introduced in this work have connections to those in a previous work of ours [28], where we studied both the secrecy capacity and the secret key capacity of the wiretap channel with generalized feedback. In [28], we determined some capacity regions for the problem dealt with here as a secondary result of the main problem. It is not surprising that, this problem being the main focus of the present work, the capacity results shown here are more general than those in [28]. Furthermore, we go deeper into the analysis of secret key agreement schemes and we show, in Section 4, the suboptimality of a previously published achievable scheme. This paper is organized as follows. Section 2 provides some definitions and the previously reported inner bound. In Section 3, we first present the outer bound for the problem of secret key agreement and then we enumerate the cases where we obtain the capacity region. Section 4 illustrates with a binary example the benefit of the present inner bound over a previously reported scheme. Finally, Section 5 summarizes and concludes the work, while some technical proofs are deferred to the appendices.

Notation and Conventions
Throughout this work, we use the standard notation of El Gamal and Kim [29]. Specifically, given two integers i and j, the expression [i : j] denotes the set {i, i + 1, . . . , j}, whereas for real values a and b, [a, b] denotes the closed interval between a and b. We use the notation x_i^j = (x_i, x_{i+1}, . . . , x_j) to denote the sequence of length j − i + 1, for 1 ≤ i ≤ j. If i = 1, we drop the subscript for succinctness, i.e., x^j = (x_1, x_2, . . . , x_j). Lowercase letters such as x and y are mainly used to represent constants or realizations of random variables, capital letters such as X and Y stand for the random variables themselves, and calligraphic letters such as X and Y are reserved for sets, codebooks, or special functions.
The set of nonnegative real numbers is denoted by R_+. The probability distribution (PD) of the random vector X^n, p_{X^n}(x^n), is succinctly written as p(x^n) without the subscript when it can be understood from the argument x^n. Given three random variables X, Y, and Z, if their joint PD can be decomposed as p(xyz) = p(x)p(y|x)p(z|y), then they form a Markov chain, denoted by X − − Y − − Z. We denote typical and conditional typical sets by T_δ^n(X) and T_δ^n(Y|x^n), respectively.
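As a loose numerical illustration of typicality (one common "robust" variant, not necessarily the exact definition adopted here), membership in T_δ^n(X) can be checked by comparing the empirical distribution of a sequence against the true PD:

```python
from collections import Counter

def is_typical(seq, pmf, delta):
    """Robust typicality check: the empirical frequency of every symbol a
    must lie within delta * p(a) of the true probability p(a)."""
    n = len(seq)
    freq = Counter(seq)
    return all(abs(freq.get(a, 0) / n - p) <= delta * p for a, p in pmf.items())

pmf = {0: 0.7, 1: 0.3}
print(is_typical([0] * 7 + [1] * 3, pmf, 0.1))  # True: exact empirical match
print(is_typical([0] * 10, pmf, 0.1))           # False: frequency of 0 is 1.0
```

For long i.i.d. sequences, almost all realizations pass this test, which is what makes covering and packing arguments with typical sets work.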

Problem Definition
Consider the wiretap channel with correlated sources at every node (A, B, E), as shown in Figure 1. The legitimate users (Alice and Bob) want to agree upon a secret key K ∈ K while an eavesdropper (Eve) overhears the communication. Let A, B, E, X, Y, and Z be six finite sets. Alice, Bob, and Eve observe the random sequences (sources) A^n, B^n, and E^n, respectively, drawn i.i.d. according to the joint distribution p(abe) on A × B × E. Alice communicates with Bob through m instances of a discrete memoryless channel with input X ∈ X and output Y ∈ Y. Eve listens to the communication through another channel with input X ∈ X and output Z ∈ Z. The channel is defined by its transition probability p(yz|x) and it is independent of the sources' distribution.
Definition 1 (Code). A (2^{nR_k}, n, m) secret key code c_n for this model consists of:
• a key set K_n = [1 : 2^{nR_k}], where R_k is the rate of the secret key;
• a source of local randomness R_r ∈ R_r at Alice;
• an encoding function ϕ : A^n × R_r → X^m;
• a key generation function ψ_a : A^n × R_r → K_n at Alice; and
• a key generation function ψ_b : B^n × Y^m → K_n at Bob.
The rate of such a code is defined as the number of channel uses per source symbol, i.e., m/n.
Given a code, let K = ψ_a(A^n, R_r) and X^m = ϕ(A^n, R_r); then, the performance of the (2^{nR_k}, n, m) secret key code c_n is measured in terms of its average probability of error, its information leakage, and the uniformity of the keys.

Definition 2 (Achievability). A tuple (η, R_k) ∈ R_+^2 is said to be achievable for this model if, for every ε > 0 and sufficiently large n, there exists a (2^{nR_k}, n, m) secret key code c_n that satisfies the aforementioned error, leakage, and uniformity constraints. The set of all achievable tuples is denoted by R and is referred to as the secret key region.
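The two secrecy figures of merit can be illustrated numerically on a toy joint distribution of the key K and Eve's observation Z. The helper below is a simplified sketch (the quantities I(K;Z) and log|K| − H(K) computed from an explicit PMF), not the paper's formal n-letter definitions:

```python
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

def key_metrics(p_kz, num_keys):
    """Toy leakage I(K; Z) and uniformity gap log|K| - H(K), computed from a
    joint PMF p_kz[(k, z)]. Both quantities vanish for an ideal key."""
    p_k, p_z = {}, {}
    for (k, z), p in p_kz.items():
        p_k[k] = p_k.get(k, 0.0) + p
        p_z[z] = p_z.get(z, 0.0) + p
    leakage = entropy(p_k.values()) + entropy(p_z.values()) - entropy(p_kz.values())
    uniformity_gap = log2(num_keys) - entropy(p_k.values())
    return leakage, uniformity_gap

# Ideal case: K uniform over two keys and independent of Eve's observation Z.
p = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(key_metrics(p, 2))  # both metrics are zero
```

A fully correlated pair, e.g. p(k, z) = 1/2 for k = z, instead yields one full bit of leakage, which is exactly what a secret key code must avoid.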

Inner Bound
The following theorem gives an inner bound on R , i.e., it defines the region R in ⊆ R .

Sketch of Proof.
Alice employs the two-layer description (U, V) to compress the source A and she transmits it through the two-layer channel codeword (Q, T). Each layer of the description must fit in the corresponding layer of the channel codeword according to Equation (6). In brief, the encoder randomly picks codewords u n (s 1 ) from T n δ (U) and, for each one, it randomly picks codewords v n (s 1 , s 2 ) from T n δ (V|u n (s 1 )). After observing the source sequence a n , the encoder selects the indices (ŝ 1 ,ŝ 2 ) of the codewords that are jointly typical with a n . The codewords u n (s 1 ) and v n (s 1 , s 2 ) are distributed in bins, i.e., u n (s 1 ) ∈ B 1 (r 1 ) and v n (s 1 , s 2 ) ∈B 2 (s 1 , r 2 , r p ), and it is the bin indices (r 1 ,r 2 ,r p ) which are transmitted through the noisy channel. The channel codewords q m (r 1 , r 2 ) are randomly picked from T m δ (Q) and, for each q m (r 1 , r 2 ), the codewords t m (r 1 , r 2 , r p , k 2 , r f ) are randomly picked from T m δ (T|q m (r 1 , r 2 )). In addition to the bin indices from the two-layer description of the source, the encoder uses the noisy channel to transmit a part of the secret key (k 2 ), which is protected using a wiretap code; the dummy index r f corresponds to the artificial noise used to exhaust the decoding capabilities of the eavesdropper. Once the decoder successfully decodes the channel and source codewords using its side information b n , it can obtain the other part of the key (k 1 ) from another bin index of the source codeword, i.e., v n (s 1 , s 2 ) ∈B 2 (s 1 , r 2 , k 1 ). We note that the achievable secret key rate in Equation (5) is a combination of the secret bits transmitted through the noisy channel in the manner of the wiretap channel and the secret bits obtained by the reconstruction of the source at Bob.
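The binning step described above can be caricatured with a toy simulation. The snippet below is a simplified, hypothetical illustration (random bins over all length-n binary sequences, side information modeled as a single bit flip, "joint typicality" replaced by a Hamming-ball search), not the actual code construction of the proof:

```python
import random

random.seed(0)
n = 12                    # source block length
rate_bits = 8             # bits used for the transmitted bin index
num_bins = 1 << rate_bits

# Random binning: every length-n binary sequence (encoded as an integer)
# is assigned independently and uniformly to one of the bins.
bin_of = [random.randrange(num_bins) for _ in range(1 << n)]

a = random.randrange(1 << n)   # Alice's source realization
b = a ^ (1 << 3)               # Bob's side information: a with one bit flipped

# Decoder: the candidates "jointly typical" with b are modeled here as all
# sequences within Hamming distance 1 of b; keep those lying in Alice's bin.
candidates = [b] + [b ^ (1 << i) for i in range(n)]
decoded = [c for c in candidates if bin_of[c] == bin_of[a]]

assert a in decoded  # the true sequence always survives the bin test
print("unique decoding:", decoded == [a])
```

Decoding succeeds when the true sequence is the only candidate in the announced bin; with enough bin-index bits relative to the size of the candidate list, this happens with high probability, mirroring the rate conditions of the proof.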
The inner bound in [30] is obtained using the weak secrecy and uniformity conditions, i.e., L(c_n) ≤ nε and U(c_n) ≤ nε. However, an improved proof of the inner bound is found in [31], which shows that the strong secrecy and uniformity conditions in Equation (4) also hold true. We refer the interested reader to [30,31] for a detailed proof of the inner bound.

Remark 1. By setting U = ∅, the region in Theorem 1 recovers the results in ([13], Theorems 1 and 4), when the eavesdropper has access to a correlated source, and ([14], Theorem 2), when there is no secret message to be transmitted. The advantage of having two layers of description is that Theorem 1 can potentially achieve higher secret key rates (see Section 4) and that it recovers the result of Csiszár and Narayan [8] (see Remark 6).

Remark 2.
The inner bound in Theorem 1 is a special case of the inner bound recently proposed in [27] for a more general system model.

Remark 3.
The region in Theorem 1 also recovers the result in ( [32], Theorem 1) which was published after the original submission of Bassi et al. [30]. In that work, Alice and Bob communicate over a public noiseless channel of rate R 1 and a secure noiseless channel of rate R 2 . The proposed achievable scheme in [32] sends the codeword Q through the public channel, i.e., I(Q; Y) = R 1 , and the codeword T through the secure channel, i.e., I(T; Y|Q) = R 2 and I(T; Z|Q) = 0. The reader may verify that, by using the aforementioned quantities and η = 1, both regions are equal.

Main Results
In this section, we first introduce an outer bound for the secret key region (Theorem 2). We then study some special cases where the inner bound of Theorem 1 turns out to achieve the (optimal) secret key region (Propositions 1-3).

Outer Bound
The following theorem gives an outer bound on R , i.e., it defines the region R out ⊇ R .
Proof. Refer to Appendix A for details.
Theorem 2 shows that the secret key generated between Alice and Bob has two components. The first two terms on the r.h.s. of Equation (7) represent the part of the key that is securely transmitted through the noisy channel (given by the random variable T) as in the wiretap channel. On the other hand, the last two terms on the r.h.s. of Equation (7) characterize the part of the key that is securely extracted from the correlated sources (given by the random variables U and V). Since the source and channel variables are independent in the model, it should not be surprising that the variable T is independent of (U, V). However, given that the users need to agree on common extracted bits from the source, the noisy channel imposes the restriction in Equation (8) on the amount of information exchanged during that agreement.

Remark 4.
The regions R_out and R_in do not coincide in general. This is due to the presence of the condition in Equation (6a) in the inner bound, and to the fact that the condition in Equation (8) in the outer bound is looser than the condition in Equation (6b). We present in Section 3.2 a few special cases where these differences disappear and both regions coincide.

Remark 5.
We note that, although the model defines source and channel sequences of potentially different lengths, the final bounds in Equations (7) and (8) are single-letter and they are calculated using the single-letter probability distribution in Equation (9). The difference in sequence length is captured by the coefficient η defined in Equation (4).

Optimal Characterization of the Secret Key Rate
The inner bound R in is optimal under certain less-noisy conditions on the channel and/or source components. These special cases are summarized in Table 1 and explained in the sequel.

Eve Has a Less Noisy Channel
If Eve has a less noisy channel than Bob, i.e., Z ⪰_X Y, the information transmitted over the channel is compromised. Therefore, the amount of secret key that can be generated only depends on the statistical differences between the sources.

Proposition 1. If Z ⪰_X Y, a tuple (η, R_k) ∈ R_+^2 is achievable if and only if there exist random variables U, V, and X on finite sets U, V, and X, respectively, with joint distribution p(uvabexyz) = p(u|v)p(v|a)p(abe)p(x)p(yz|x), which verify Equation (10).

Proof. Given the less-noisy condition on Eve's channel, the bound in Equation (7) is maximized with T = ∅. On the other hand, the region in Equation (10) is achievable by setting the auxiliary RVs Q = T = X in R_in.

Remark 6.
The secret key capacity of the wiretap channel with a public noiseless channel of rate R ([8], Theorem 2.6) turns out to be a special case of Proposition 1, where X = Y = Z and defining η H(X) = η log |X | ≡ R.

Eve Has a Less Noisy Source
If Eve has a less noisy source than Bob, i.e., E ⪰_A B, the amount of secret key that can be generated depends on the amount of secure information transmitted through the wiretap channel.
Proposition 2. If E ⪰_A B, a tuple (η, R_k) ∈ R_+^2 is achievable if and only if there exist random variables T and X on finite sets T and X, respectively, with joint distribution p(txyz) = p(tx)p(yz|x), which verify R_k ≤ η [I(T; Y) − I(T; Z)]. (11)

Proof. Given the less-noisy condition on Eve's source, i.e., I(V; B) ≤ I(V; E) for any RV V such that V − − A − − (BE), the bound in Equation (7) is maximized with U = V and independent of the sources. The region in Equation (11) is achievable by using the same auxiliary RVs in the inner bound as in the outer bound. We note that the bound in Equation (11) is equal to the secrecy capacity of the wiretap channel. Moreover, since Equation (11) becomes independent of the source sequences (A^n, B^n, E^n), we assume n = 0, and thus the rate η is finite.

Bob Has a Less Noisy Channel and Source
If Bob has a less noisy channel and source than Eve, i.e., Y ⪰_X Z and B ⪰_A E, the lower layers of the channel and source codewords are no longer needed.

Proposition 3.
If Y ⪰_X Z and B ⪰_A E, a tuple (η, R_k) ∈ R_+^2 is achievable if and only if there exist random variables V and X on finite sets V and X, respectively, with joint distribution p(vabexyz) = p(v|a)p(abe)p(x)p(yz|x), which verify Equation (12).

Proof. Given the less-noisy conditions on Bob's channel and source, the bound in Equation (7) is maximized with U = ∅ and T = X. The region in Equation (12) is achievable by also setting the auxiliary RVs U = Q = ∅ and T = X in the inner bound.

Secret Key Agreement over a Wiretap Channel with BEC/BSC Sources
As mentioned in Remark 1, the inner bound introduced in Section 2.2 employs two layers of description, and thus it is an improvement over previously reported results. In this section, we compare the performance of this achievable scheme with the scheme in [13] for the specific binary source and channel model depicted in Figure 2. The main channel consists of a noiseless link from Alice to Bob and a binary symmetric channel (BSC) with crossover probability ζ ∈ (0, 1/2) from Alice to Eve (see Figure 2a). Additionally, the three nodes have access to correlated sources; in particular, Alice observes a binary uniformly distributed source, i.e., A ∼ Bern(1/2), which is the input of two parallel channels, as shown in Figure 2b. Bob observes the output of a binary erasure channel (BEC) with erasure probability β ∈ [0, 1], and Eve, that of a BSC with crossover probability ε ∈ (0, 1/2). For simplicity, we assume η = 1 in the sequel. The less-noisy orderings between the source components B and E depend on the values of β and ε, and they may be characterized using the results in [33].
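This source model is small enough to write down numerically. The snippet below (example parameter values, an illustration rather than part of the paper's evaluation) builds the joint PMF p(abe) for A ∼ Bern(1/2), B the BEC(β) output, and E the BSC(ε) output, and checks the BEC identity H(A|B) = β:

```python
from math import log2

beta, eps = 0.5, 0.05   # example erasure / crossover probabilities

# A ~ Bern(1/2); B is the BEC(beta) output in {0, 1, 'e'}; E is the BSC(eps) output.
p_abe = {}
for a in (0, 1):
    for b, pb in ((a, 1 - beta), ('e', beta)):
        for e in (0, 1):
            pe = 1 - eps if e == a else eps
            p_abe[(a, b, e)] = 0.5 * pb * pe

# Marginals needed for H(A|B).
p_ab, p_b = {}, {}
for (a, b, e), p in p_abe.items():
    p_ab[(a, b)] = p_ab.get((a, b), 0.0) + p
    p_b[b] = p_b.get(b, 0.0) + p

# H(A|B) = sum_{a,b} p(a,b) log2( p(b) / p(a,b) ); for the BEC this equals beta.
h_a_given_b = sum(p * log2(p_b[b] / p) for (a, b), p in p_ab.items() if p > 0)
print(h_a_given_b)   # -> 0.5, i.e., beta
```

Any single-letter information quantity appearing in the bounds can be evaluated from this explicit PMF in the same way.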

Performance of the Coding Scheme
The following proposition provides a simple expression of the inner bound from Theorem 1. The expression is obtained by simplifying the maximization process of the input distribution, and thus it might not be optimal. However, this suffices to show the higher rates achieved by this scheme as we see later.
Proposition 4. The tuple (η = 1, R_k) ∈ R_in if there exist u, v, q ∈ [0, 1/2] such that Equation (13) holds.

Proof. The bound in Equation (13) is directly calculated from Equations (5) and (6a) with a particular choice of input random variables.

As previously mentioned, we next provide the inner bound presented in ([13], Theorem 4) as a means of comparison. This inner bound is similar to Theorem 1 but with only one layer of description for the source A; thus, its achievable region is denoted R_in-1L. We note that Theorem 4 from [13] is actually a capacity result assuming that A − − B − − E and X − − Y − − Z. In our present example, only the second Markov chain holds independently of the values of the parameters β and ε, but this does not invalidate the use of the inner bound.
Proof. See Appendix B.
Remark 11. Proposition 5 is a special case of Proposition 4 with u = q = 1/2, and v = 0 or v = 1/2. As mentioned in Remark 1, the inner bound ([13], Theorem 4) is a special case of Theorem 1 with U = ∅ (thus u = 1/2). Moreover, given that in this model the Markov chain X − − Y − − Z holds, the channel codebook of Proposition 5 only has one layer (thus q = 1/2). On the other hand, there are two layers of description in Proposition 4, and whenever U ≠ ∅ (i.e., u < 1/2), we have that Q ≠ ∅ (i.e., q < 1/2). This relationship is determined by Equation (13b).
We performed a numerical optimization of the bound in Equation (13) for different values of β while fixing ζ = 0.01 and ε = 0.05; the results are shown in Figure 3 along with the bound in Equation (14). The figure shows the advantage of having two layers of description for the source A. The proposed scheme in Proposition 4 attains higher secret key rates than the scheme with only one layer of description (Proposition 5) for intermediate values of β. It is in this regime, when the source B is no longer less noisy than E, that two layers of description are needed.
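The kind of grid search used for such evaluations can be sketched in a few lines. The snippet below is a simplified, hypothetical illustration: it optimizes only the source contribution I(V;B) − I(V;E) for the common binary auxiliary choice V = A ⊕ Bern(v) (an assumption for illustration, not necessarily the optimizer of Equation (13) and not the full bound):

```python
from math import log2

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def source_key_part(v, beta, eps):
    """I(V;B) - I(V;E) for the hypothetical choice V = A xor Bern(v),
    with A ~ Bern(1/2), B = BEC(beta)(A) and E = BSC(eps)(A)."""
    i_vb = 1 - ((1 - beta) * h2(v) + beta)        # B reveals A unless erased
    i_ve = 1 - h2(v * (1 - eps) + eps * (1 - v))  # V xor E ~ Bern(v conv eps)
    return i_vb - i_ve

beta, eps = 0.1, 0.05
best_rate, best_v = max((source_key_part(k / 1000, beta, eps), k / 1000)
                        for k in range(501))
print(best_rate, best_v)
```

Optimizing the actual bound in Equation (13) proceeds in the same fashion, with a grid (or finer search) over the triple (u, v, q).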

Summary and Concluding Remarks
In this work, we investigated the problem of secret key generation over a noisy channel in the presence of correlated sources (independent of the main channel) at all terminals. We introduced a novel outer bound for this channel model, which allowed us to show that a particular achievable scheme is optimal for the classes of less-noisy sources and channels considered here (Propositions 1-3). In Section 4, we further compared the performance of the aforementioned achievable scheme with a previously reported result for a simple binary model. Numerical computation of the corresponding bounds provided interesting insights on the regimes where the achievable scheme outperforms the previous one.
This work, however, does not address the scenario where the sources and the noisy channel are correlated. The extension of the previously mentioned result of Prabhakaran et al. [14] by using two description layers is a natural next step. Indeed, this extension, posterior to the short version of the present work in [30], has been recently addressed in [27]. By using two description layers, the proposed achievable scheme recovers the present inner bound for η = 1 provided that the sources are independent of the channel.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:
i.i.d. independent and identically distributed
r.h.s. right-hand side
w.r.t. with respect to

Appendix A. Proof of Theorem 2
The outer bound is derived by following similar steps to those in ([28], Theorem 4), which assumes η = 1. The derivation is reproduced here for completeness.
Let (η, R_k) be an achievable tuple according to Definition 2, and let ε > 0. Then, there exists a (2^{nR_k}, n, m) secret key code c_n with functions ϕ(·), ψ_a(·), and ψ_b(·) that verify the conditions in Equations (A1) and (A2), where we have dropped the conditioning on the codebook c_n from Equations (A2b)-(A2d) and all subsequent calculations for clarity. Before continuing, we present the following remark, which is useful to establish Markov chains between the random variables.
Remark A1. From the fact that the random variables A_i, B_i, E_i are independent across time and the channel X → (Y, Z) is memoryless and without feedback, the joint distribution of (K, A^n, B^n, E^n, X^m, Y^m, Z^m) can be written as follows. For each i ∈ [1 : n] and each j ∈ [1 : m], we have

p(k, a^n, b^n, e^n, x^m, y^m, z^m) = p(a^{i−1}, b^{i−1}, e^{i−1}) p(a_i, b_i, e_i) p(a^n_{i+1}, b^n_{i+1}, e^n_{i+1}) × p(k, x^m | a^n) p(y^{j−1}, z^{j−1} | x^{j−1}) p(y_j, z_j | x_j) p(y^m_{j+1}, z^m_{j+1} | x^m_{j+1}), (A3)

where P_ϕ(x^m | a^n) = ∑_{∀k} p(k, x^m | a^n) and P_{ψ_a}(k | a^n) = ∑_{∀x^m} p(k, x^m | a^n). We may now carry on with the derivation of the outer bound. First, consider Equation (A4), where
• Equation (A4a) stems from the uniformity of the keys in Equation (A2d).
• Equation (A4b) is due to the security condition in Equation (A2c).
We now study separately the "source" term R_s and the "channel" term R_c. For the former, Equation (A5) follows, where
• Equation (A5a) is due to the Csiszár sum identity.
• Equation (A5b) follows from the definition of the auxiliary RVs U_i = (Y^m B^{i−1} E^n_{i+1}) and V_i = (K U_i). This establishes the "source" term in Equation (A4d) with auxiliary RVs (U, V) that satisfy the Markov chain in Equation (A6). The first part of Equation (A6) is trivial given the definition V_i = (K U_i), whereas the second part follows from the i.i.d. nature of the sources and the fact that they are correlated to the main channel only through the encoder's input in Equation (A1a); see Equation (A3). The "channel" term R_c can be single-letterized similarly, where we first define the auxiliary RVs Q_i = (E^n Y^{i−1} Z^m_{i+1}) and T_i = (K Q_i), we then introduce the auxiliary RV L uniformly distributed over [1 : m], and we finally define Q = (Q_L, L), T = (T_L, L), Y = Y_L, and Z = Z_L. The auxiliary RVs in this term, i.e., (Q, T), satisfy a Markov chain whose nontrivial part is due to the memoryless property of the channel and Equation (A1b), provided the joint probability distribution satisfies Equation (A3). Since neither Q nor T appears in other parts of the outer bound, we may expand R_c accordingly. If we let (n, m) → ∞ and take an arbitrarily small ε, we obtain the bound in Equation (7). To obtain Equation (8), we use a Markov chain that is a consequence of Equation (A1a), provided the joint probability satisfies Equation (A3). Due to the data processing inequality, we then obtain a chain of inequalities where, in the last step, we use the memoryless property of the channel. Next, consider the bound that gives the condition in Equation (8) as we let (n, m) → ∞ and take an arbitrarily small ε. Although the definition of the auxiliary RVs (T, U, V) used in the proof makes them arbitrarily correlated, the bounds in Equations (7) and (8) only depend on the marginal PDs p(tx) and p(uv|a).