On Superposition Lattice Codes for the K-User Gaussian Interference Channel

In this study, we work with lattice Gaussian coding for a K-user Gaussian interference channel. Following the procedure of Etkin et al., in which the capacity is found to be within 1 bit/s/Hz of the capacity of a two-user Gaussian interference channel for each type of interference using random codes, we work with lattices to take advantage of their structure and potential for interference alignment. We mimic random codes using a Gaussian distribution over the lattice. Imposing constraints on the flatness factor of the lattices, the common and private message powers, and the channel coefficients, we find the conditions to obtain the same constant gap to the optimal rate for the two-user weak Gaussian interference channel and the generalized degrees of freedom as those obtained with random codes, as found by Etkin et al. Finally, we show how it is possible to extend these results to a K-user weak Gaussian interference channel using lattice alignment.


Introduction
Interference is one of the major issues in wireless communications. One important scenario is the interference channel, where each transmitter wishes to communicate with its corresponding receiver but, as all users share the wireless medium, there is interference between them. Interference is classified according to its level, from very strong to low. When interference is very strong, it has been demonstrated [1] that the capacity is the same as if there were no interference at all, because the interference can be decoded first. Interference is low when it falls below the noise level; in this case, there is no loss of data rate due to interference. The problem remains open for moderate or weak interference. In this case, the conventional technique consists of orthogonalizing the signals using frequency- or time-division multiple access schemes. Interference alignment has been proposed from the scope of information theory to align interference at each receiver, using only half of the signal space and leaving the other half for the intended signal, independent of the number of users in the channel. The sum capacity of the K-user interference channel was characterized in [2], where it was found that, at high signal-to-noise ratio (SNR), a factor of K/2 dominates the capacity. This factor represents the degrees of freedom (DoF).
One of the main achievements in finding the capacity of the two-user interference channel is the work of Han and Kobayashi [3], who found an inner bound for the two-user interference channel using superposition coding. The method consists of using private and common messages from each transmitter: the private message of the interferer is treated as noise, while both common messages and the desired private message are decoded at each receiver. Obtaining similar results when K users are considered is desirable. It has been shown in [4] that, by using lattice codes, the interference due to one interferer can be made the same as that caused by many interferers. At each receiver, the signals can be scaled in such a way that the interference signals lie in a lattice that can be distinguished from the lattice containing the desired signal. This was defined in [4] as lattice alignment. The signal-scale idea has been studied in [5][6][7][8] to obtain the DoF of different interference channel models. In [5], a deterministic channel approach was applied to an interference channel, where signals are represented in base Q. In [9], the generalized degrees of freedom (GDoF) were found for different types of interference according to the SNR and interference-to-noise ratio (INR) for a two-user interference channel. Following the ideas of [5,9], in [6], the GDoF was found for different levels of interference for the K-user interference channel; the signals are represented in base Q, and a detailed scheme was given for different types of interference. New approaches have since been made to find the GDoF of the K-user interference channel. In particular, in [10], the GDoF of a K-user interference channel was studied when treating interference as noise, which was found to be optimal depending on the relationship between the desired signal strength and the sum of the strengths of the strongest interference from and to this
user. In [11], the GDoF of a K-user interference channel was studied using a multi-layer interference alignment scheme with successive decoding. The optimal GDoF sum was characterized by the exponents of each of the channel strengths.
Recently, interference alignment has been applied to different scenarios, such as wireless interference channels for Smart Grids [12], unmanned aerial vehicles in heterogeneous networks [13], and space-air-ground integrated networks [14]. On the other hand, many of the lattice code techniques used in this paper have previously been considered for security. This is the case in [15,16], which studied the secrecy capacity of wiretap channels, and in [17], which studied the secure DoF of the K-user interference channel. However, to the best of our knowledge, few researchers have recently studied the GDoF or the constant gap to the optimal rate of the K-user interference channel using lattice alignment.
Following the ideas of [9], in [18], the GDoF of the two-user symmetric interference channel is found using a lattice Gaussian distribution. In this study, we propose extending these results to the K-user interference channel using additive white Gaussian noise (AWGN)-good lattices. First, we begin with a two-user Gaussian interference channel and work with lattice codes, as we want to use the potential of lattices to align interference in a K-user Gaussian interference channel. For this purpose, we propose a lattice Gaussian coding scheme with constraints on the powers of the messages and the flatness factor of the lattices. Using the intersection of two two-user multiple access channel rate regions, we find the conditions to obtain the same constant gap to the optimal rate and, thus, the same GDoF for a two-user weak interference channel as found in [9], with lattice Gaussian codes. Finally, we show how to apply these results to a K-user interference channel using lattice alignment, with a careful selection of the lattices for each user.

Roadmap
The remainder of this paper is organized as follows: In Section 2, the upper and inner bounds and the GDoF of the two-user interference channel obtained in [9] are shown, and important Lemmas and Theorems of lattice Gaussian coding [19] are explained. The main results of this work are stated in Theorems 3 and 4 in Section 4, which identify the channel coefficient conditions to obtain the same GDoF as in [9]. To prove this, we proceed as follows:
• In Section 3.1.1, we show that it is possible to obtain the HK rate region for a two-user interference channel as the intersection of two two-user multiple access channels.
• In Section 3.1.2, we express the HK rate region for a two-user Gaussian interference channel with a lattice distribution (Section 3.1.2 for a K-user Gaussian interference channel). For this, we introduce restrictions on the flatness factor of the lattices, given by Lemmas 3 and 4, as well as Theorem 2.
• Finally, in Section 4.1, in Lemma 9, we apply power constraints to the private and common messages of a two-user weak Gaussian interference channel (Lemma 10 for a K-user weak Gaussian interference channel). These constraints are then applied to obtain conditions on the channel coefficients (Theorem 3 for the two-user case and Theorem 4 for the K-user case), which finally lead to the constant gap to the optimal rate and the GDoF of the two-user interference channel obtained in [9].
In Section 5, we discuss and highlight the results obtained. Finally, the conclusions of this work are drawn in Section 6.

Preliminaries
A study by Etkin et al. [9] revealed the capacity of the two-user interference channel to within 1 bit/s/Hz. When the power of the interference is smaller than the power of the desired signal, a range of values in which the Han and Kobayashi achievable rate (hereafter, the HK rate) is contained can be found. The GDoF is found by normalizing this rate by the capacity of the point-to-point AWGN channel and taking the limit of this ratio as the SNR and INR → ∞. To do so, random Gaussian codes and a simple HK scheme are used. In this section, we show the results of [9] for a two-user weak interference channel and, later, we present the main results on lattice Gaussian coding [19], whose Lemmas and Theorems are used in our later results.

Outer and Inner Bounds for the Two-User Weak Gaussian Interference Channel [9]

The channel model given in [9] is expressed as follows, where i, j = 1, 2, x_j ∈ C is subject to a power constraint E[|x_j|²] = P_j, and the noise is z_i ∼ CN(0, N_0). The channel coefficients from transmitter i to receiver j are represented by h_ji. Let SNR_i = |h_ii|² P_i / N_0 be the SNR of user i, with the INR of user i defined analogously. The authors in [9] provide a new outer bound for the two-user weak and mixed Gaussian interference channel; here, we show their results for the weak interference case. Later, as presented in [3], superposition coding is considered. The private message of user i = 1, 2 is represented as u_i, while the common message is represented as w_i. User i transmits the signal x_i = u_i + w_i. The private codeword u_i is meant to be decoded only by user i, while it is treated as noise by the other user. Both w_1 and w_2 are decoded by both users. In [9], the codebooks are generated using i.i.d. random Gaussian variables, and the interference-to-noise ratio created by the private message, denoted INR_p, is chosen equal to 1. A simplified HK scheme is used in order to find the achievable region within 1 bit/s/Hz of the outer bound. To begin, in [20] (Section 3.2), a simplification of the HK rate region is found, which relies on the fact that many of the limits found in [3] are redundant; this has also been acknowledged by Han and Kobayashi in [21]. Consider the auxiliary variables given in [3], U_1, U_2, W_1, W_2 and Q, where U_i represents the private information from user i, W_i represents the common information from user i = 1, 2, and Q is a time-sharing parameter. Given the set Z, the HK capacity rate region R(Z) is the set of all simultaneously achievable rate pairs (R_1, R_2) that satisfy the bounds of [20] (Section 3.2). Later, in [9], the authors showed that a simple HK scheme can achieve within one bit of the capacity of the two-user interference channel, considering three cases. To complete this
study, we present their results within one bit of the capacity rate region of the Gaussian interference channel for the weak interference case. In Theorem 5 of [9], the authors proved that the achievable region R(min(1, I_2), min(1, I_1)) is within one bit of the capacity region of the two-user weak Gaussian interference channel. For this, note that both the outer bound rate region and the HK rate region are delimited by straight lines of slopes 0, −1/2, −1, −2, ∞, defined by the bounds on R_1, R_2, R_1 + R_2, 2R_1 + R_2 and R_1 + 2R_2. In [9], this outer bound is given by (2)-(8). Corresponding bounds are defined for the outer bound and HK regions, respectively, and the difference between them is considered. Thus, the following condition is sufficient for the achievable region to be within 1 bit/s/Hz [9]. This is achieved by dividing the proof into four cases [9], where the region contains all the rate pairs (R_1, R_2) satisfying the corresponding bounds. The capacity is defined in [9] by C(SNR_1, SNR_2, INR_1, INR_2) with the parameters SNR_1, SNR_2, INR_1 and INR_2. The GDoF is defined as in [9]. Using various approximations, the GDoF for the weak interference channel is given in [9].
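For concreteness, the symmetric-case GDoF characterization of [9] (the well-known "W curve", with α = log INR / log SNR) can be sketched numerically; the piecewise form below is the standard one from [9], restated here only for illustration:

```python
def gdof_symmetric(alpha: float) -> float:
    """Per-user GDoF of the symmetric two-user Gaussian interference
    channel, as characterized in [9], with alpha = log(INR)/log(SNR)."""
    if alpha <= 0.5:          # noisy regime: treating interference as noise
        return 1 - alpha
    if alpha <= 2 / 3:        # weak interference
        return alpha
    if alpha <= 1:            # moderately weak interference
        return 1 - alpha / 2
    if alpha <= 2:            # strong interference
        return alpha / 2
    return 1.0                # very strong: interference fully decodable

# sampling the curve traces the characteristic "W" shape
points = [(a / 6, gdof_symmetric(a / 6)) for a in range(13)]
```

Only the range α < 1 corresponds to the weak interference regime studied in this paper.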

Lattice Gaussian Coding
In this study, we use lattices due to their potential to align interference by means of lattice alignment for any number of users in an interference channel. Lattice codes also allow us to work in high dimensions, and certain lattices, termed AWGN-good, are well suited to AWGN channels. We also note that the randomness of the codewords is useful, particularly when part of the codeword has to be treated as noise. Furthermore, the 1 bit/s/Hz constant gap to capacity demonstrated in [9], as well as the GDoF, is based on Gaussian random codes. For these reasons, lattice Gaussian codes [19] are considered. In this section, we present the main results on this topic, which are applied in the following sections to the interference channel.
Definition 3 (AWGN-good [19]). Consider a sequence of lattices Λ^(n) of increasing dimension n and volume-to-noise ratio (VNR) defined as γ_Λ(σ) = V(Λ)^(2/n) / σ², where V(Λ) is the fundamental volume of the lattice Λ and σ² is the power of the i.i.d. Gaussian noise W^n. The sequence is AWGN-good if, for all P_e ∈ (0, 1), lim_{n→∞} γ_{Λ^(n)}(σ) = 2πe and if, for a fixed VNR greater than 2πe, P_e vanishes in n, where P_e = P{W^n ∉ V(Λ)} is the error probability of minimum-distance lattice decoding.
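As a numerical illustration of Definition 3, the VNR of a scaled integer lattice aZ^n can be computed directly (the parameter values below are arbitrary choices, not taken from the paper):

```python
import math

def vnr(volume: float, n: int, sigma2: float) -> float:
    """Volume-to-noise ratio gamma = V(Lambda)^(2/n) / sigma^2 (Definition 3)."""
    return volume ** (2.0 / n) / sigma2

# For the scaled integer lattice a*Z^n, V(Lambda) = a^n, so gamma = a^2 / sigma^2.
a, n, sigma2 = 2.0, 8, 0.1
gamma = vnr(a ** n, n, sigma2)          # 4 / 0.1 = 40
# AWGN-goodness requires the VNR to stay above 2*pi*e as n grows
print(gamma > 2 * math.pi * math.e)     # 40 > 17.08...
```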
Definition 4 (Discrete Gaussian distribution [19]). Define the discrete Gaussian distribution over Λ centered at c ∈ R^n as the discrete distribution taking values in λ ∈ Λ given below. We also consider the Λ-periodic function f_{σ,Λ}. Definition 5 (Discrete Gaussian distribution over a coset [19]). The discrete Gaussian distribution over a coset of Λ, that is, the shifted lattice Λ − c, is defined analogously; the two distributions are thus shifted versions of each other. Definition 6 (Flatness factor [23]). In [24], the notion of the flatness factor of a lattice Λ was introduced. An equivalent definition of the flatness factor is applied in [15,23]: the ratio between f_{σ,Λ}(w) and the uniform distribution over R(Λ) ⊂ R^n is within the range [1 − ϵ_Λ(σ), 1 + ϵ_Λ(σ)], where R(Λ) is a fundamental region of the lattice Λ.
The flatness factor of Λ is then given by [15]: ϵ_Λ(σ) = max_w |V(Λ) f_{σ,Λ}(w) − 1|. Theorem 1 ([19]). For all σ and all δ, there exists a sequence of mod-p lattices whose flatness factor vanishes exponentially for any fixed VNR γ. From [25], mod-p lattices are defined as Λ_C = pZ^n + C, where p is a prime and C is a linear code over Z_p, the ring of integers modulo p.
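The flatness factor of Definition 6 can be approximated numerically for the simplest case Λ = Z (where V(Λ) = 1) by truncating the Gaussian sum f_{σ,Z}; this is an illustrative sketch, not the mod-p construction of Theorem 1:

```python
import math

def f_sigma_Z(w: float, sigma: float, terms: int = 50) -> float:
    """Lambda-periodic Gaussian sum f_{sigma,Z}(w) = sum_k N(w - k; sigma^2)."""
    return sum(
        math.exp(-(w - k) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)
        for k in range(-terms, terms + 1)
    )

def flatness_factor_Z(sigma: float, grid: int = 200) -> float:
    """epsilon_Z(sigma) = max_w |V(Z) * f_{sigma,Z}(w) - 1|, with V(Z) = 1,
    maximized over one fundamental region [0, 1]."""
    return max(abs(f_sigma_Z(i / grid, sigma) - 1.0) for i in range(grid + 1))

# A wider Gaussian flattens out over the fundamental region, so epsilon shrinks:
assert flatness_factor_Z(2.0) < flatness_factor_Z(0.5) < flatness_factor_Z(0.25)
```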
The following Lemma shows that, when the flatness factor is small, the variance per dimension of the discrete Gaussian D_{Λ,σ,c} is close to that of the continuous Gaussian.
The next Lemma shows that the probability of a lattice Gaussian distribution falling outside a ball of radius larger than √n σ is exponentially small.
Therefore, x ∼ D_{Λ,σ,c} can be seen as semi-spherical noise. It is known that the sum of semi-spherical noise and AWGN is semi-spherical [26].
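This behavior can be illustrated empirically: adding a continuous Gaussian to samples of the discrete Gaussian D_{Z,σ0} yields an aggregate whose variance per dimension is close to σ0² + σ². This is a simulation sketch with arbitrary parameters, not a proof:

```python
import math, random

random.seed(1)

def sample_discrete_gaussian_Z(sigma0: float, support: int = 40) -> int:
    """Sample from D_{Z, sigma0} (centered at 0) by explicit normalization
    over a truncated support; fine for sigma0 << support."""
    points = list(range(-support, support + 1))
    weights = [math.exp(-k * k / (2 * sigma0 ** 2)) for k in points]
    return random.choices(points, weights=weights)[0]

sigma0, sigma = 3.0, 2.0
xs = [sample_discrete_gaussian_Z(sigma0) + random.gauss(0.0, sigma)
      for _ in range(20000)]

mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
# the mixture behaves like a (semi-spherical) Gaussian of variance sigma0^2 + sigma^2
assert abs(var - (sigma0 ** 2 + sigma ** 2)) < 0.5
```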
The following Lemma shows that if the flatness factor is small, the sum of the discrete Gaussian distribution and a continuous Gaussian distribution is very close to a continuous Gaussian distribution.

Lemma 4 ([19]). Given any vector c ∈ R^n and σ_0, consider the continuous distribution r on R^n obtained by adding a continuous Gaussian of variance σ² to a discrete Gaussian D_{Λ−c,σ_0}. As the distance between points is not uniform, decoding is performed using MAP decoding; it is demonstrated in [19] that MAP decoding is equivalent to MMSE lattice decoding. The following Lemma is given for the error performance of AWGN-good lattices.

Lemma 5 ([19]). If L is AWGN-good, the average error probability of the MAP decoder is bounded in terms of the Poltyrev exponent E_p(μ), where σ is defined in Lemma 4. It is shown in [19] that, in order to achieve this bound, the condition σ_0²/σ² > e must be fulfilled; that is, the SNR must be larger than e. Thus, the following theorem shows that, by using a lattice Gaussian codebook, we can achieve a rate arbitrarily close to the channel capacity while making the error probability vanish exponentially, as long as SNR > e.
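The Poltyrev exponent E_p(μ) referenced in Lemma 5 has the standard piecewise form, restated here from the literature for convenience:

```python
import math

def poltyrev_exponent(mu: float) -> float:
    """Standard Poltyrev exponent E_p(mu) for mu > 1 (see Lemma 5 / [19])."""
    if mu <= 1:
        return 0.0                          # no reliable decoding below mu = 1
    if mu <= 2:
        return 0.5 * ((mu - 1) - math.log(mu))
    if mu <= 4:
        return 0.5 * math.log(math.e * mu / 4)
    return mu / 8

# the exponent is continuous and positive for mu > 1, so the MAP error bound
# ~ exp(-n * E_p(mu)) vanishes exponentially in the dimension n
assert poltyrev_exponent(1.0) == 0.0
assert abs(poltyrev_exponent(2.0) - 0.5 * (1 - math.log(2))) < 1e-12
assert abs(poltyrev_exponent(4.0) - 0.5) < 1e-12
```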

Theorem 2 ([19]). Consider a lattice code whose codewords are drawn from the discrete Gaussian distribution D_{L−c,σ_s} for an AWGN-good lattice L. Assuming that ε_t and ε are as defined in Lemma 1, ε′ is as defined in Lemma 2, and for some small ε″ → 0, if SNR > e, then any rate (as defined in [19]) up to the channel capacity is achievable, while the error probability of MMSE lattice decoding vanishes exponentially fast, as in (39).
The development and proof of Theorem 2 can be found in [19].

Lattice Gaussian Coding for the Two-User Gaussian Interference Channel
In this section, we analyze the two-user weak Gaussian interference channel using lattice Gaussian codes. Consider the following channel model, where h_ii and h_ji are the real direct and indirect channel gains, respectively; x_i is the signal transmitted by transmitter i; x_j is the signal transmitted by transmitter j; and z_i is the additive white Gaussian noise with variance σ² and zero mean, i, j = 1, 2, i ≠ j. As in [3,9], the transmitted symbols are constructed using a common and a private message, given by w_i and u_i, respectively, for user i = 1, 2; thus, x_i = w_i + u_i. At receiver i, the common messages of both transmitters, h_ii w_i and h_ji w_j, and the desired private message h_ii u_i are decoded, while the interfering private message h_ji u_j is treated as noise, where j = 1, 2, j ≠ i. Define S_i as the signal-to-noise ratio of user i and I_i as the interference-to-noise ratio of user i. Furthermore, define, as presented in [9], S_ic and S_ip as the common and private signal-to-noise ratios of user i, respectively, and I_ic and I_ip as the common and private interference-to-noise ratios of user i, respectively. Thus, S_i = S_ic + S_ip and I_i = I_ic + I_ip, considering the weak interference case, where I_i < S_j.
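Under the convention of [9] adopted here, where the private message creates an INR of 1 at the unintended receiver, the split S_i = S_ic + S_ip can be sketched as follows (the numeric values are illustrative, and INR ≥ 1 is assumed):

```python
def power_split(snr: float, inr: float):
    """Split the common/private SNR and INR per the convention of [9]:
    the private message power is chosen so that the INR it creates at the
    unintended receiver equals 1 (weak-interference regime: 1 <= inr < snr).
    Returns (S_c, S_p, I_c, I_p)."""
    s_p = snr / inr        # private SNR at the intended receiver
    s_c = snr - s_p        # remaining power carries the common message
    i_p = 1.0              # private interference sits at the noise floor
    i_c = inr - 1.0        # common interference carries the rest of the INR
    return s_c, s_p, i_c, i_p

s_c, s_p, i_c, i_p = power_split(snr=100.0, inr=10.0)
# the split is consistent: S = S_c + S_p and I = I_c + I_p
assert abs((s_c + s_p) - 100.0) < 1e-9 and abs((i_c + i_p) - 10.0) < 1e-9
```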

Finding the Han-Kobayashi Rate Region with the Intersection of Two Two-User MACs
In [3], the best-known achievable rate region for a two-user interference channel was found using superposition coding. We will show that we can separate the problem into two multiple access channels (MACs), whose rate regions can be intersected to obtain the achievable rate region of [3]. Lemma 6. The extreme points of the achievable regions of MAC 1 and MAC 2, respectively, are given by (47)-(50) and (51)-(54) (see Figure 1). Proof: In order to find each of the MAC rate regions, we follow the procedure explained in [3], Appendix A. First, we notice that the MAC rate regions are delimited from above by only four straight lines, as opposed to the interference channel region, which is delimited by five. This is because each MAC user only needs to decode both common messages and its own private message. Thus, the only possible slopes for MAC 1 are 0, −1/2, −1, ∞, and for MAC 2, they are 0, −1, −2, ∞. Following the procedure explained in [3], Appendix A, it is straightforward to find (47)-(50). We find that point B is equal to C; therefore, we only have three slopes, given by 0, −1, ∞. It is also possible to find point H. We can follow a similar analysis for MAC 2, from (51) to (54). In this case, we find that point B′ is equal to A′; therefore, we again have only three slopes, given by 0, −1, ∞. It is also possible to find point H′. Lemma 7. The achievable rate region found in [3] (Theorem 4.1) for a two-user interference channel can be obtained by intersecting the achievable rate regions of two two-user multiple access channels. The proof of Lemma 7 can be found in Appendix A.
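The intersection in Lemma 7 can be illustrated computationally. A Gaussian MAC region is a polymatroid: every nonempty subset S of jointly decoded messages must satisfy sum_{k in S} R_k ≤ ½ log2(1 + sum_{k in S} P_k / N). The sketch below checks membership in the intersection of the two MAC regions; the received powers and noise floors are illustrative placeholders (channel gains absorbed into the powers), not values from the paper:

```python
import math
from itertools import combinations

def in_gaussian_mac(rates, powers, noise):
    """Check the polymatroid constraints of a Gaussian MAC: for every
    nonempty subset S, sum_S R_k <= 1/2 * log2(1 + sum_S P_k / N)."""
    idx = range(len(rates))
    for r in range(1, len(rates) + 1):
        for subset in combinations(idx, r):
            rate_sum = sum(rates[k] for k in subset)
            cap = 0.5 * math.log2(1 + sum(powers[k] for k in subset) / noise)
            if rate_sum > cap + 1e-12:
                return False
    return True

def in_hk_region(r_w1, r_w2, r_u1, r_u2, p_w1, p_w2, p_u1, p_u2, n0):
    """Receiver 1 decodes (W1, W2, U1) with U2's power on the noise floor;
    receiver 2 decodes (W1, W2, U2) with U1's power on the noise floor."""
    mac1 = in_gaussian_mac([r_w1, r_w2, r_u1], [p_w1, p_w2, p_u1], n0 + p_u2)
    mac2 = in_gaussian_mac([r_w1, r_w2, r_u2], [p_w1, p_w2, p_u2], n0 + p_u1)
    return mac1 and mac2
```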

Two-User Gaussian Interference Channel Using Lattice Gaussian Coding
We assume that h_ji w_j and h_ji u_j for any i, j = 1, 2 are lattice codes and, more importantly, follow a lattice Gaussian distribution. Let us define the lattices properly in the lattice Gaussian distribution. We define h_ii w_i ∼ D_{∆_i,δ_i}, where s ∼ D_{Λ,σ} indicates that s is distributed as the discrete lattice Gaussian distribution over Λ, centered at zero and with variance σ². Note that x_i is the superposition of two lattice Gaussians. Figure 2 illustrates an example of a lattice Gaussian distribution of the private and common messages of x_i.
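The superposition x_i of two lattice Gaussians can be illustrated with a one-dimensional sampler for D_{aZ,σ}; the lattice spacings and variances below are illustrative choices, not taken from the paper:

```python
import math, random

random.seed(7)

def sample_lattice_gaussian(a: float, sigma: float, support: int = 50) -> float:
    """Sample from the discrete Gaussian D_{aZ, sigma} centered at zero,
    by explicit normalization over a truncated support."""
    pts = [a * k for k in range(-support, support + 1)]
    w = [math.exp(-p * p / (2 * sigma ** 2)) for p in pts]
    return random.choices(pts, weights=w)[0]

# x_i = w_i + u_i: common message on a coarse lattice, private on a fine one
w_i = sample_lattice_gaussian(a=4.0, sigma=20.0)   # common: coarse, high power
u_i = sample_lattice_gaussian(a=1.0, sigma=2.0)    # private: fine, low power
x_i = w_i + u_i
```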
Based on the ideas of [3,9], at each receiver, both the common and private desired messages must be decoded, along with the common interference message, while the private interference message is treated as noise. To decode similarly to [3], we consider successive decoding; thus, while decoding one of the messages, the others are considered noise. This is not a problem when the codes used are Gaussian codes, as in [3,9]. Therefore, we work not only with lattice codes but with lattice Gaussian codes. The common messages are designed to be decodable at both receivers, while the private message must be decodable only at the desired receiver and treated as noise at the other receiver. Let us define the powers of the private and common messages of transmitter i as σ²_{u_i} and σ²_{w_i}, respectively, where i = 1, 2. From Lemma 4, it can be observed that, under the corresponding flatness factor condition, h_ji u_j + z_i is not far from a continuous distribution, and we can treat h_ji u_j as noise. Thus, the new noise z̃_i = h_ji u_j + z_i is AWGN with variance h_ji² σ²_{u_j} + σ². Lemma 3 is applied to h_ii w_i, h_ji w_j and h_ii u_i with the corresponding flatness factor conditions. One important result of separating the problem into two MAC regions is the visualization of the decoding strategy. As in [3], the decoding strategy is the following. For MAC 1, as can be observed from regions (47)-(50), we either decode W_2, then W_1 and finally U_1, or W_1, then U_1 and finally W_2, in both cases leaving the private interference message as noise. For MAC 2, the approach is similar: from regions (51)-(54), we either decode W_2, then U_2 and finally W_1, or W_1, then W_2 and finally U_2. This can be formally expressed as follows. Considering the system given by (42), we show the two possible decoding orders at receiver i, i = 1, 2: 1.
Decoding h_ii w_i, then h_ii u_i and finally h_ji w_j: If we decode the desired common message w_i first, to consider the rest of the messages as noise, we must apply Lemmas 4 and 3: Lemma 4 is applied to h_ji u_j, while Lemma 3 is applied to h_ji w_j and h_ii u_i. Thus, we decode w_i from y_i = h_ii w_i + ž_i, where ž_i = h_ji w_j + h_ii u_i + h_ji u_j + z_i is the new semi-spherical noise. This is valid from Lemma 4 and Lemma 3 with the corresponding flatness factor conditions. Theorem 2 then applies under its condition. Next, we decode the desired private message with a subset of the flatness factor conditions already defined in the first step. Thus, we decode h_ii u_i from y_i − h_ii ŵ_i = h_ii u_i + h_ji w_j + h_ji u_j + z_i, where ŵ_i is the estimate of w_i, considering (64) and (66), which are the flatness factor conditions that make h_ji u_j and h_ji w_j part of the noise. Utilizing Theorem 2, we obtain the corresponding rate. Finally, we can decode w_j using y_i − h_ii ŵ_i − h_ii û_i = h_ji w_j + (h_ji u_j + z_i), where ŵ_i and û_i are the estimates of w_i and u_i, respectively. Again, using Lemma 4, we can consider h_ji u_j as part of the noise, with its respective flatness factor condition (64), and we can apply Theorem 2 to obtain the corresponding rate. 2.
Decoding h_ji w_j, then h_ii w_i and finally h_ii u_i: If we start by decoding the interference common message h_ji w_j first, to consider the rest of the messages as noise, we apply Lemma 3 to h_ii w_i and h_ii u_i with the flatness factor conditions (65) and (67), and Lemma 4 to h_ji u_j with the flatness factor condition (64). Then, using Theorem 2, we obtain the corresponding rate. Next, we decode the desired common message w_i from y_i − h_ji ŵ_j = h_ii w_i + h_ii u_i + h_ji u_j + z_i, where ŵ_j is the estimate of w_j, considering, as previously mentioned, h_ii u_i and h_ji u_j as noise with the conditions (67) and (64). Using Theorem 2, we obtain the corresponding rate. Finally, once both common messages have been found, we can decode u_i using y_i − h_ii ŵ_i − h_ji ŵ_j = h_ii u_i + (h_ji u_j + z_i), where ŵ_i and ŵ_j are the estimates of w_i and w_j, respectively. Again, using Lemma 4, we can consider h_ji u_j as part of the noise, with its respective flatness factor condition (64), and we can apply Theorem 2 to obtain the corresponding rate.
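Both decoding orders above are instances of successive decoding, where each step treats all not-yet-decoded messages as additional noise. A sketch with illustrative received powers (not values from the paper):

```python
import math

def successive_decoding_rates(powers_in_order, noise_floor):
    """Achievable rate for each message decoded in the given order, treating
    all not-yet-decoded messages (plus the noise floor) as noise."""
    rates = []
    remaining = sum(powers_in_order)
    for p in powers_in_order:
        remaining -= p  # this message is decoded and subtracted
        rates.append(0.5 * math.log2(1 + p / (noise_floor + remaining)))
    return rates

# order 1 at receiver i: desired common, desired private, interfering common;
# the private interference power is folded into the noise floor (Lemma 4)
order1 = successive_decoding_rates([90.0, 9.0, 20.0], noise_floor=2.0)
# order 2: interfering common first, then desired common, then desired private
order2 = successive_decoding_rates([20.0, 90.0, 9.0], noise_floor=2.0)
```

The last message decoded in each order sees only the noise floor, which is why the two orders trade off the rates of the common and private messages.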

Lattice Gaussian Coding for the K-User Interference Channel
In this section, we demonstrate how to use the previous results for the K-user interference channel utilizing lattice Gaussian coding.
Consider a K-user interference channel model given by the following, where h_ii and h_ji are the real direct and indirect channel gains, respectively; x_i is the signal transmitted by transmitter i; x_j is the signal transmitted by transmitter j; and z_i is the additive white Gaussian noise with variance σ² and zero mean, where i, j = 1, …, K.

K-User Gaussian Interference Channel Using Lattice Gaussian Coding
The main idea of using lattice codes is to apply lattice alignment at the receivers so that the model mimics a two-user interference channel. Lemma 8. In a K-user Gaussian interference channel where lattice alignment is used, such that the channel resembles K two-user MACs, the number of lattice Gaussian codes needed to align interference is 2 + 4K.
Proof. To prove this, let us begin with a three-user interference channel example. Our goal is to mimic the idea of the two-user interference channel, where we can intersect two two-user MACs. For simplicity, let us consider the following channel model with only common messages. In order to mimic a two-user interference channel, we will say that each user sees only one interfering user, in the following way: for user i, we have user i and the interference user int_i, which is the sum of the two interferers (see Figure 3). Assigning the lattices when K ≥ 3 is more challenging than in the two-user case. Suppose we assign the lattices in the same way as in the two-user interference channel. In this case, we would have h_ii w_i ∈ ∆, ∑ h_ji w_j ∈ Θ, ∑ h_jl w_j ∈ Θ, ∑ h_ij w_i ∈ Θ, for i, j = 1, 2, 3 and i ≠ j. Then, we can see it is not possible to decode at int_i, as we cannot separate ∑_{j,l≠i,l≠j} h_lj w_j from ∑_{j≠i} h_ij w_i. Let us instead consider the strategy shown in Table 1.
In the first column, we show the perspective of each user. User 1 assigns h_ii w_i ∈ ∆ for i = 1, 2, 3, while it assigns h_j1 w_j ∈ Π and h_1j w_1 ∈ Π for j = 2, 3. User 2 assigns h_ii w_i ∈ ∆ for i = 1, 2, 3, while it assigns h_j2 w_j ∈ Θ and h_2j w_2 ∈ Θ for j = 1, 3. User 3 assigns h_ii w_i ∈ ∆ for i = 1, 2, 3, while it assigns h_j3 w_j ∈ Υ and h_3j w_3 ∈ Υ for j = 1, 2. The combination of all possible lattices for h_ji w_j, i, j = 1, 2, 3, i ≠ j, is given in the last line of Table 1. Note that, for example, the lattice ΠΘ is not necessarily a combination of the lattices Π and Θ; it simply symbolizes a lattice that is useful for both h_21 w_2 and h_12 w_1 for users 1 and 2. The same applies to h_31 w_3 and h_13 w_1 for users 1 and 3, and to h_23 w_2 and h_32 w_3 for users 2 and 3. Thus, we obtain the corresponding lattice assignments for user 1, user 2 and user 3. Let us now focus on decoding. In order to obtain the same decoding rates at the desired receiver as in the previous two-user case, we need to be able to decode: common interference messages, desired common messages and, finally, desired private messages; or desired common messages, desired private messages and, finally, common interference messages. For the interferer receiver, we need to be able to decode: common interferer messages, private interferer messages and, finally, common messages of user i; or common messages of user i, common interferer messages and, finally, private interferer messages. In our three-user example without private messages, this means the following:
• At receiver 1, we decode to lattice ∆ and then to lattice Π_1, where (ΠΘ + ΠΥ) ⊆ Π_1, or to lattice Π_1 and then to lattice ∆.
• At receiver int_1, we decode to lattice ∆_1, where (∆ + ΘΥ + ΘΥ + ∆) ⊆ ∆_1, and then to Π_1, or first to Π_1 and then to ∆_1.
The process is similar for the other users. Thus, for our three-user interference channel example using only common messages, we need seven lattices to be able to decode at the three receivers and the three interference receivers; this can be observed in Figure 4. Following the same strategy, we find that, for any K-user interference channel with common and private messages, we need 2 + 4K lattices. Note that, using this strategy, some lattices are reused at both user i and user int_i, which reduces the number of lattices needed to decode.
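The counting in Lemma 8, together with the seven-lattice common-only example from the proof, can be written down directly (lattice names follow the proof):

```python
def lattices_needed(K: int) -> int:
    """Number of lattice Gaussian codes needed for lattice alignment in a
    K-user interference channel with common and private messages (Lemma 8)."""
    return 2 + 4 * K

# Three-user, common-messages-only example from the proof: the shared direct
# lattice Delta, one cross lattice per user, and one shared lattice per user pair.
common_only_3user = ["Delta", "Pi", "Theta", "Upsilon",
                     "PiTheta", "PiUpsilon", "ThetaUpsilon"]
assert len(common_only_3user) == 7
assert lattices_needed(3) == 14
```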
The channel model is now given by the following, where i, j, l = 1, …, K. Let us properly define the lattices, where i, j, l = 1, …, K, j ≠ i, and where we assume the corresponding conditions hold. We can now define S_ic analogously to the two-user case. From (97) and (98), for each i = 1, …, K, we have two MAC regions with two possible rates, R_i and R_{int_i}; therefore, the interference channel rate region is given by their intersection. In this case, Lemmas 3 and 4 can still be fulfilled using the corresponding flatness factor conditions. As in the two-user case, we consider decoding in the following manner, from (97) and (98): 1. Decoding at receiver i: (a) Decoding h_ii w_i, then h_ii u_i and finally ∑ h_ji w_j: If we decode the desired common message h_ii w_i first, to consider the rest of the messages as noise, we apply Lemma 4 to ∑ h_ji u_j and Lemma 3 to ∑ h_ji w_j and h_ii u_i. Thus, we decode w_i from y_i = h_ii w_i + ž_i, where ž_i = ∑ h_ji w_j + h_ii u_i + ∑ h_ji u_j + z_i is the new semi-spherical noise. This is valid from Lemma 4 and Lemma 3 with the corresponding flatness factor conditions, and Theorem 2 applies under its condition. We now decode the desired private message with a subset of the flatness factor conditions already defined in the first step. Thus, we decode h_ii u_i from y_i − ŵ_i = h_ii u_i + ∑ h_ji w_j + ∑ h_ji u_j + z_i, where ŵ_i is the estimate of h_ii w_i, considering (104) and (106), which are the flatness factor conditions that make ∑ h_ji u_j and ∑ h_ji w_j part of the noise. Utilizing Theorem 2, we obtain the corresponding rate. Finally, we can decode ∑ h_ji w_j using y_i − ŵ_i − û_i = ∑ h_ji w_j + (∑ h_ji u_j + z_i), where ŵ_i and û_i are the estimates of h_ii w_i and h_ii u_i, respectively. Again, using Lemma 4, we can consider ∑ h_ji u_j as part of the noise, with its respective flatness factor condition (104), and we can apply Theorem 2 to obtain the corresponding rate. (b) Decoding ∑ h_ji w_j, then h_ii w_i and finally h_ii u_i: If we start by decoding the interference common message ∑ h_ji w_j first, to
consider the rest of the messages as noise, we apply Lemma 3 to h_ii w_i and h_ii u_i with the flatness factor conditions (105) and (108), and Lemma 4 to ∑ h_ji u_j with the flatness factor condition (104). Then, using Theorem 2, we obtain the corresponding rate. Next, we decode the desired common message h_ii w_i from y_i − ŵ_j = h_ii w_i + h_ii u_i + ∑ h_ji u_j + z_i, where ŵ_j is the estimate of ∑ h_ji w_j, considering, as previously mentioned, h_ii u_i and ∑ h_ji u_j as noise with the conditions (108) and (104). Using Theorem 2, we obtain the corresponding rate. Finally, once both common messages have been found, we can decode h_ii u_i using y_i − ŵ_i − ŵ_j, where ŵ_i and ŵ_j are the estimates of h_ii w_i and ∑ h_ji w_j, respectively. Again, using Lemma 4, we can consider ∑ h_ji u_j as part of the noise, with its respective flatness factor condition (104), and we can apply Theorem 2 to obtain the corresponding rate. 2. We now decode at receiver int_i: (a) Decoding ∑ h_jl w_j + ∑ h_jj w_j, then ∑ h_jl u_j + ∑ h_jj u_j and finally ∑ h_ij w_i: If we decode the desired common message ∑ h_jl w_j + ∑ h_jj w_j first, to consider the rest of the messages as noise, we must apply Lemmas 4 and 3.
Lemma 4 is applied to ∑ h_ij u_i, while Lemma 3 is applied to ∑ h_jl w_j + ∑ h_jj w_j and ∑ h_ij w_i. Thus, we decode ∑ h_jl w_j + ∑ h_jj w_j from y_{int_i} = ∑ h_jl w_j + ∑ h_jj w_j + ž_{int_i}, where ž_{int_i} = ∑ h_ij w_i + ∑ h_jl u_j + ∑ h_jj u_j + ∑ h_ij u_i + z_{int_i} is the new semi-spherical noise. This is valid from Lemma 4 with the flatness factor condition (104) and from Lemma 3 with the flatness factor conditions (109) and (106). Thus, Theorem 2 applies under its condition. Next, we decode the desired private message with a subset of the flatness factor conditions already defined in the first step. Thus, we decode ∑ h_jl u_j + ∑ h_jj u_j from y_{int_i} − ŵ_j = ∑ h_jl u_j + ∑ h_jj u_j + ∑ h_ij u_i + z_{int_i}, where ŵ_j is the estimate of ∑ h_jl w_j + ∑ h_jj w_j, considering (109) and (106), which are the flatness factor conditions that make ∑ h_ij u_i and ∑ h_ij w_i part of the noise. Using Theorem 2, we obtain the corresponding rate. Finally, we can decode ∑ h_ij w_i using y_{int_i} − ŵ_j − û_j, where ŵ_j and û_j are the estimates of ∑ h_jl w_j + ∑ h_jj w_j and ∑ h_jl u_j + ∑ h_jj u_j, respectively. Again, using Lemma 4, we can consider ∑ h_ij u_i as part of the noise, with its respective flatness factor condition (104), and we can apply Theorem 2 to obtain the corresponding rate. (b) Decoding ∑ h_ij w_i, then ∑ h_jl w_j + ∑ h_jj w_j and, finally, ∑ h_jl u_j + ∑ h_jj u_j: If we start by decoding the interference common message ∑ h_ij w_i first, to consider the rest of the messages as noise, we apply Lemma 3 to ∑ h_jl w_j + ∑ h_jj w_j and ∑ h_jl u_j + ∑ h_jj u_j with the flatness factor conditions (107) and (109), and Lemma 4 to ∑ h_ij u_i with the flatness factor condition (104). Then, utilizing Theorem 2, we obtain the corresponding rate. Next, we decode the desired common message ∑ h_jl w_j + ∑ h_jj w_j from y_{int_i} − ŵ_i = ∑ h_jl w_j + ∑ h_jj w_j + ∑ h_jl u_j + ∑ h_jj u_j + ∑ h_ij u_i + z_{int_i}, where ŵ_i is the estimate of ∑ h_ij w_i, considering, as previously mentioned, ∑ h_jl u_j + ∑ h_jj u_j and ∑ h_ij u_i as noise with the
conditions (109) and (104).Using Theorem 2, we obtain where Finally, once both common messages have been found, we can decode ∑ h jl u j + ∑ h jj u j by y int i − ŵi − ŵj = ∑ h jl u j + ∑ h jj u j + (h ij u i + z int i ), where ŵi and ŵj are the estimated ∑ h ij w i and ∑ h jl + ∑ h jj w j , respectively.Again, using Lemma 4, we can consider ∑ h ij u i as part of the noise, with its respective flatness factor condition (104), and we can apply Theorem 2 to obtain where

Results
Although some preliminary results, such as Lemmas 6-8, were already obtained in Section 3, this section presents the main results of this work.

The Power Constraints and GDoF of the Two-User Weak Gaussian Interference Channel with Lattice Gaussian Coding
Using the results from the previous section, we now find the power constraints for the private and common messages. These are stated in the next lemma.

Lemma 9. For any type of interference, we have the power constraints stated in (137), which involve the term e(e + 1)^2 h_ji^2 σ^2 and follow from (72), (74), (76), (78), (80) and (82), for i, j = 1, 2, j ≠ i, where we consider that h > e^2(e + 1)^2, which does not contradict the weak interference scenario.

In order to fulfill the restrictions on the flatness factors, we can apply the same approach as in [19], where, for mod-p lattices, a small flatness factor can be satisfied under a suitable volume condition; here, we consider a discrete lattice Gaussian distribution over L, centered at zero and with variance σ_s^2. Then, for each of the defined lattices, to satisfy each of the flatness factor conditions, we must satisfy the corresponding volume constraints, where we consider that the dimension n is the same for all lattices. From (43)-(46), we can express the rates obtained in Section 3.1.1, Lemma 7 (equivalently (71), (73), (75), (77), (79) and (81)), reducing the equations where possible. For the weak interference scenario, S_1 > I_2 and S_2 > I_1. As in [9], the aim is to prove the constant gap and, ultimately, to show that we can obtain the same GDoF as in [9].
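Since the rate expressions hinge on keeping each flatness factor small, a quick numerical illustration may help. The sketch below is hypothetical code, not part of the paper's proofs: it evaluates the flatness factor of the one-dimensional scaled integer lattice aZ via its Gaussian theta series, showing that shrinking the fundamental volume relative to the Gaussian parameter drives the factor toward zero, which is the regime the volume constraints above enforce.

```python
import math

def flatness_factor(volume: float, sigma: float, terms: int = 200) -> float:
    """Flatness factor of the 1-D lattice a*Z (fundamental volume a) at
    Gaussian parameter sigma: eps = V(L) * f_sigma(0) - 1, where f_sigma
    is the lattice-periodized Gaussian (Ling-Belfiore definition)."""
    a = volume
    s = sum(math.exp(-(a * k) ** 2 / (2 * sigma ** 2))
            for k in range(-terms, terms + 1))
    return a / math.sqrt(2 * math.pi * sigma ** 2) * s - 1.0

# The factor vanishes as the volume shrinks relative to sigma,
# which is what the volume constraints on each lattice enforce.
eps_small = flatness_factor(volume=0.5, sigma=1.0)
eps_large = flatness_factor(volume=5.0, sigma=1.0)
assert eps_small < eps_large
```

The same qualitative behavior is what the mod-p construction of [19] exploits in higher dimensions.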
In [9], the HK region R(I_1p, I_2p) is used, where I_ip is approximated by 1. The aim is to find the difference between the outer bound rate region and the HK rate region; in particular, a constant gap. In [9], the authors found that, in some cases, I_ip = 1 is not enough to reduce the gap between the outer bound and the HK rates R_1 and R_2; therefore, it is necessary to assign more power to the private interference. Thus, the achievable rate region is given in [9] as R(min(1, I_2), min(1, I_1)). This leads to four cases for reaching the constant gap. As in [20] (Section 3.2), we define k_i = I_jp, where k_i can take values from 1 to I_j, and S_ip = S_i I_jp / I_j. Let us define the difference between the outer bound rate region and the HK rate region, as presented in [9], and focus on ∆R_1 and ∆R_2. Depending on the values of k_1 and k_2, the left or right part of the term inside the min in (154) or (155) is active. It was found in [9,20] that reassigning the value of k_i and assigning more power to the private interference allows the gap between the outer bound and the HK rates R_1 and R_2 to be reduced. In [9], the authors also consider the case I_i < 1. In our case, this is not possible, since I_i > 1 by construction. Thus, the lowest gap in R_1 and R_2 is as presented in [9]. The above leads to the main Theorem of this section.
Theorem 3. The constant gap obtained in [9] for a two-user Gaussian interference channel using Gaussian codes is the same as that obtained using a lattice Gaussian distribution when h_ii^2 / h_ij^2 > 2e(e + 1) for i, j = 1, 2 and i ≠ j.
The proof of Theorem 3 can be found in Appendix C.
It is then straightforward to obtain the GDoF, which is the same as in (30)-(34).
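For reference, the symmetric GDoF curve of [9] that the scheme matches is the well-known "W curve". The sketch below is an illustrative transcription of that piecewise characterization (with alpha = log INR / log SNR), not code from this paper:

```python
def gdof_symmetric(alpha: float) -> float:
    """Per-user symmetric generalized degrees of freedom d(alpha) for the
    two-user Gaussian interference channel (the 'W curve' of Etkin,
    Tse and Wang), with alpha = log INR / log SNR."""
    if alpha <= 0.5:          # noisy/weak: treat interference as noise
        return 1 - alpha
    if alpha <= 2 / 3:        # weak: HK with private power at noise level
        return alpha
    if alpha <= 1:            # moderately weak interference
        return 1 - alpha / 2
    if alpha <= 2:            # strong: decode interference first
        return alpha / 2
    return 1.0                # very strong: interference-free

assert gdof_symmetric(0.0) == 1.0   # no interference penalty
assert gdof_symmetric(0.5) == 0.5   # bottom of the first "V"
assert gdof_symmetric(1.0) == 0.5   # bottom of the second "V"
assert gdof_symmetric(2.0) == 1.0   # very strong interference
```

The dips at alpha = 1/2 and alpha = 1 are the two valleys of the "W" that both random coding and the lattice scheme attain.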

The Power Constraints and GDoF of the K-User Weak Gaussian Interference Channel with Lattice Gaussian Coding
Here, let us consider the case of the K-user Gaussian interference channel, as presented in Section 3. Assume that σ_w_i = σ_w and σ_u_i = σ_u for i, j, l = 1, ..., K, i ≠ j ≠ l. We have the following lemma, which states the power constraints obtained for the private and common messages.

Lemma 10. For any type of interference, we have the power constraints that follow from (114), (116), (118), (120), (122), (124), (126), (128), (130), (132), (134) and (136).

The proof of Lemma 10 can be found in Appendix D. As for the two-user interference channel, we must satisfy the volume conditions (183) for each lattice. As in the two-user case, we can formally express the K-user interference channel rates with alignment; we can observe that the result is equivalent to that obtained in (145)-(151) for the two-user case. Thus, the procedure is the same as the one for (161)-(165).
The main result of this section can be stated in the following theorem.

Theorem 4. The constant gap obtained in [9] for a two-user weak Gaussian interference channel using Gaussian codes is the same as that obtained for a K-user weak Gaussian interference channel using a lattice Gaussian distribution, under the corresponding condition on the channel coefficients.

The proof of Theorem 4 can be found in Appendix E.

Discussion
In this section, we summarize and highlight the results of this research. First, in Section 3, to understand the achievable rate of the two-user interference channel presented in [3], we divide the problem into two two-user MAC regions. This allows us to visualize both the contribution of each user to the HK rate and the decoding order. Both are key for later designing the lattice Gaussian codes, particularly when extending to a K-user interference channel.
Second, also in Section 3, in order to apply the HK decoding method, we use a lattice Gaussian distribution. For this, we define the lattices and the constraints on each of the lattice distributions. We begin with the two-user interference channel, where we must invoke Lemmas 3 and 4 to treat the common and private messages as noise in each step of the decoding process. Next, we apply Theorem 2 to each of the rates found for the two-user interference channel, following the decoding order defined before.
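To make the decoding orders concrete, the following toy sketch (hypothetical message powers, and Gaussian-benchmark rates rather than the lattice bounds derived in Section 3) computes the rate of each step of a successive decoding chain in which every not-yet-decoded message is treated as noise:

```python
import math

def successive_rates(powers, noise=1.0):
    """Illustrative successive decoding: messages are decoded in the given
    order, each treating all later messages as noise, then subtracted.
    Returns the Gaussian rate log2(1 + SINR) of each step."""
    rates = []
    remaining = sum(powers)
    for p in powers:
        remaining -= p                      # later messages stay as noise
        rates.append(math.log2(1 + p / (noise + remaining)))
    return rates

# One HK order at a receiver: interferer's common message, own common
# message, own private message (received powers are hypothetical).
r = successive_rates([4.0, 8.0, 2.0])
assert abs(sum(r) - math.log2(1 + 14.0)) < 1e-9  # chain-rule sanity check
```

The telescoping sum in the final assertion reflects why the decoding order matters for the individual rates but not for the sum rate of the chain.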
From this, we demonstrate how to extend these results to the K-user interference channel, as explained in Section 3. We mimic a two-user interference channel using alignment, thus obtaining two effective users: i and int_i. The challenge lies in the strategy for choosing the lattices to decode at both user i and user int_i. This strategy (illustrated in the example of Table 1) shows that some lattices repeat for both user i and user int_i, allowing us to reduce the number of lattices used. We again apply Lemmas 3 and 4 and Theorem 2 to each of the rates obtained.
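The alignment step relies on a basic lattice property: a lattice is closed under addition, so interfering messages drawn from the same lattice sum to a single effective lattice point at receiver int_i. A minimal sketch with a toy scaled integer lattice (illustrative only; the paper's lattices are the mod-p constructions of Section 3):

```python
import random

def random_lattice_point(n, scale, spread=10):
    """Random point of the scaled integer lattice scale * Z^n, a toy
    stand-in for a common-message lattice."""
    return [scale * random.randint(-spread, spread) for _ in range(n)]

n, scale = 4, 3
aligned = [0] * n
for _ in range(5):            # five interferers using the same lattice
    x = random_lattice_point(n, scale)
    aligned = [a + xi for a, xi in zip(aligned, x)]

# The aggregate interference is still a single point of the same lattice,
# so it can be decoded (or rejected) as one effective message.
assert all(a % scale == 0 for a in aligned)
```

This closure property is what lets the K - 1 interference terms at each receiver collapse, so the K-user channel behaves like the two-user channel analyzed before.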
In Section 4, the main results are presented. First, for the two-user weak Gaussian interference channel, we obtain the common and private power constraints. From these, we verify that we can approximate I_ip to 1 for i = 1, 2 if the condition h_ii^2 / h_ij^2 > 2e(e + 1) holds (Theorem 3). This allows us to apply the same constraint as in [9], which naturally leads to the same constant gap and GDoF. We repeat the process for the K-user weak Gaussian interference channel, obtaining that we can also approximate I_ip to 1 if the corresponding channel conditions exceed e(e + 1) (Theorem 4). Note that, in this case, the conditions are weaker than for the two-user interference channel, but there is an extra penalty, given by ∑_{j≠i} h_ji^2 = ∑_{j≠i} h_ij^2.
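As a numeric reading of the Theorem 3 condition stated above, 2e(e + 1) ≈ 20.2, so the direct gain h_ii^2 must exceed the cross gain h_ij^2 by roughly 13 dB. A quick check (illustrative arithmetic only):

```python
import math

# Theorem 3's channel condition, as stated in the text:
#   h_ii^2 / h_ij^2 > 2e(e + 1)
threshold = 2 * math.e * (math.e + 1)   # ~20.21
gap_db = 10 * math.log10(threshold)     # required direct-to-cross margin

assert 20 < threshold < 21
assert 13 < gap_db < 13.1               # roughly a 13 dB margin
```

This is comfortably inside the weak interference regime, where the direct link is stronger than the cross link by assumption.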

Conclusions
In this paper, we presented a lattice Gaussian coding scheme for the K-user interference channel. We showed that we can obtain the same conditions leading to the constant gap to the optimal rate and to the GDoF for a two-user interference channel as those obtained with random coding in [9]. Herein, we used the HK scheme with private and common messages and lattice Gaussian coding to obtain randomness within the structure of the lattice. We proved that we can obtain the conditions to find the same constant gap and GDoF as with random coding for the weak interference scenario. This was achieved by using various properties of the flatness factor of the lattices, with some constraints on the common and private message powers as well as on the channel coefficients. We also showed how this can be extended to a K-user weak Gaussian interference channel, as the interference can be aligned at the receivers using lattice Gaussian coding.

The decoding orders can vary (e.g., at user 1, decode w_2, then u_2 and, finally, w_1; or at user 1, decode w_1, then u_1 and, finally, w_2, while at user 2, decode w_1, then w_2 and, finally, u_2; and so on). These cases are shown in Table A1.

Figure 1. Representation of MAC 1 and MAC 2 rate regions.

Figure 3. Representation of a three-user interference channel without (left) and with (right) the proposed alignment scheme, for i = 1, 2, 3.

Figure 4. Lattice codes as seen by each receiver for the example described.
(3) INR_1 ≥ 1 and INR_2 < 1. In this case, the achievable region R(INR_2, 1) is similar to the one before. (4) INR_1 < 1 and INR_2 < 1. In this case, the achievable region R(INR_2, INR_1) contains only the corresponding rate pairs.

Table 1. Example of a three-user interference channel lattice assignment.

Table A1. Common and private power messages obtained for each user, considering the decoding strategy presented in Section 3.1.2.