
Quantized Network Coding for correlated sources

Abstract

In this paper, we present a data gathering technique for sensor networks that exploits correlation between sensor data at different locations in the network. Contrary to distributed source coding, our method does not rely on knowledge of the source correlation model in each node although this knowledge is required at the decoder node. Similar to network coding, our proposed method (which we call Quantized Network Coding) propagates mixtures of packets through the network. The main conceptual difference between our technique and other existing methods is that Quantized Network Coding operates on the field of real numbers and not on a finite field. By exploiting principles borrowed from compressed sensing, we show that the proposed technique can achieve a good approximation of the network data at the sink node with only a few packets received and that this approximation gets progressively better as the number of received packets increases. We explain in the paper the theoretical foundation for the algorithm based on an analysis of the restricted isometry property of the corresponding measurement matrices. Extensive simulations comparing the proposed Quantized Network Coding to classic network coding and packet forwarding scenarios demonstrate its delay/distortion advantage.

1 Introduction

Flexible, low-cost, and long-lasting implementations of wireless sensor networks have made them an attractive alternative to conventional wired sensing infrastructures in a wide variety of applications, including medicine, transportation, and military [1]. As a relatively new technology, they raise more challenges in the networking aspects of communication than in the classic physical layer aspects [2]. One of these challenges is the gathering of sensed data at a central node of the network, where delivery delay, precision, and robustness to network changes are emerging issues.

Packet forwarding via routing is widely used in different implementations of sensor networks. While it achieves capacity rates in the case of multiple-session unicast in lossless networks [3], packet forwarding requires an appropriate routing protocol [4] to be run. Moreover, packet forwarding can lead to difficulties because of its slow adaptation to network changes caused by the deployment of new nodes or by link failures.

Network coding [3] has been proposed as an alternative to packet forwarding in sensor networks [5, 6]. Specifically, with network coding, intermediate nodes send functions of their incoming packets, as opposed to forwarding their original content. The use of random linear functions, also known as random linear network coding, has been proved to be sufficient in lossless networks [7, 8]. Moreover, theoretical analysis shows that when network coding is used for transmission, no queuing is required to achieve the capacity rates of the network [3]. Further, in the case of lossy networks, network coding offers a better error correction capability than packet forwarding, as a result of network diversity, and can result in improved achievable rate regions compared to packet forwarding [9, 10].

In the case of correlated sources, distributed source coding [11, 12] on top of packet forwarding is proved to be sufficient when dealing with networks of lossless links [13]. Similar to packet forwarding, network coding can be applied separately on top of distributed source coding for correlated sources [14, 15]. However, one has to perform joint source-network decoding in order to achieve the theoretical performance limits, which may not be feasible because of its computational complexity [15]. Different solutions have been proposed to tackle this practicality issue [16-18], using low-density codes and the sum-product algorithm [19] for decoding.

Distributed source coding requires the availability of appropriate marginal coding rates at each encoder node; similarly, the deployment of joint source network decoding requires some knowledge of the correlation model of the sources on the encoding side. The assumption of this knowledge might not be practical in all cases, even more so when the source characteristics change over time.

Motivated by this observation, we aim to develop a data gathering and transmission scheme that, similar to network coding, does not rely on routing but at the same time can intrinsically take advantage of the source correlation. Our approach models source correlation through a sparsity or compressibility assumption; combined with a specific data gathering scheme inspired by network coding but acting in the real field, this assumption allows us to develop recovery algorithms at the sink node which achieve approximate data recovery with low delay. Our recovery mechanism is based on ideas borrowed from compressed sensing [20, 21], in which the inter-node correlation of the messages, interpreted as near-sparsity in some transform domain, is exploited.

Recently, the idea of using compressed sensing and sparse recovery concepts in sensor networks has drawn a lot of attention [22-25]. Specifically, compression of inter-node correlated data without using their correlation model is achieved with the aid of compressed sensing concepts in [22, 23]. Moreover, in [26, 27], a theoretical discussion on sparse recovery of graph-constrained measurements, with an interest in network monitoring applications, is presented. Joint source, channel, and network coding was also proposed in [28], where random linear mixing is used to compress temporally and spatially correlated sources. In [29], the practical possibility of finite field network coding of highly correlated sources was investigated, with the aid of low-density codes and belief propagation-based decoding. However, a solid theoretical investigation of the feasibility of adopting sparse recovery in random linear network coding has not been done previously.

Real field network coding has shown interesting advantages over conventional finite field network coding [30]. In our earlier work [31], we combined the idea of real field network coding with concepts from compressed sensing and proposed a non-adaptive distributed compression scheme, called Quantized Network Coding (QNC), for exactly sparse sources. Furthermore, in [32], we initiated a discussion on the theoretical feasibility of compressed sensing-based network coding, using the restricted isometry property of random matrices. In this paper, we extend our previous work from [31, 32] in two specific ways: (i) we extend the network source model from exactly sparse to near-sparse signals, and (ii) we provide a detailed mathematical and numerical justification of the usage of sparse recovery algorithms (including a bound on the reconstruction error) for this source model. Finally, extensive computer simulations are used to compare the performance of the proposed QNC scenario with other network transmission scenarios. Specifically, our focus is to study the distributed compression capabilities of the proposed QNC scheme in a lossless scenario. The study of robust transmission in lossy cases is left for future work.

Although the idea of using compressed sensing in sensor networks was initially proposed in [22], its theoretical and practical possibilities have not been studied through a mathematical formulation. Additionally, we discuss the use of compressed sensing in a network coding-based scenario, which involves quantization and is therefore different from the work in [22].

As another contribution of our work, we discuss the satisfaction of the restricted isometry property (RIP) in a network coding scenario, which has not been addressed in other works. Specifically, in [25, 28], the authors do not discuss explicit conditions under which compressed sensing encoding (and decoding) works properly^a. In this work, we propose conditions on the network coding coefficients which ensure robust recovery of the messages, based on the restricted isometry property.

Finally, our QNC scenario is different from other proposed schemes in the sense that we perform quantization to cope with the limited capacity of the lossless links between the nodes, as opposed to using only analog network coding. Specifically, we study the behavior of the so-called tail probability [32] in our QNC scheme and show that its behavior is similar to that observed for the classic independently and identically distributed (i.i.d.) Gaussian measurement matrix [33, 34]. This leads us to conclude that our scheme requires a number of received measurements of the same order as that classic case (see Section 4).

The rest of this paper is organized as follows. A detailed description of the data gathering scenario studied in this paper, as well as some notation, is presented in Section 2. In Section 3, we introduce and formulate our proposed Quantized Network Coding algorithm, followed by a discussion of its theoretical feasibility, using the restricted isometry property, in Section 4. In Section 5, we present the decoding algorithm used to recover the messages from quantized network coded packets and derive a performance bound on the recovery error. Our simulation setup and results are presented in Section 6. Finally, in Section 7, we conclude the paper with a discussion of the proposed method and ongoing work.

2 Problem description and notation

In this paper, we limit our study to networks with lossless links of limited capacity. This model can also correspond to lossy networks in which appropriate channel coding has been applied. A more realistic lossy network model is left for future work.

2.1 Network

As shown in Figure 1, we represent the network by a directed graph, $\mathcal{G}=(\mathcal{V},\mathcal{E})$, where $\mathcal{V}$ and $\mathcal{E}$ are the sets of nodes (vertices) and directed edges (links), respectively. Each node, $v$, is from the finite sorted set $\mathcal{V}=\{1,\ldots,n\}$, and each edge, $e$, is from the finite sorted set $\mathcal{E}=\{1,\ldots,|\mathcal{E}|\}$. Each edge (link) can maintain a lossless transmission from $\mathrm{tail}(e)$ to $\mathrm{head}(e)$ at a maximum finite rate of $C_e$ bits per use. Transmission over each link is assumed to involve no interference from other links or nodes; one may modify the capacity of each link to reflect the effect of interference on it.

Figure 1. Directed graph representing a data gathering sensor network.

We define the sets of incoming and outgoing edges of node $v$, denoted by $\mathrm{In}(v)$ and $\mathrm{Out}(v)$, respectively, as follows:

$$\mathrm{In}(v) = \{e : e \in \mathcal{E},\ \mathrm{head}(e) = v\}, \qquad (1)$$

$$\mathrm{Out}(v) = \{e : e \in \mathcal{E},\ \mathrm{tail}(e) = v\}. \qquad (2)$$

The content of edge $e$ at time instant $t$ is represented by $Y_e(t)$, where $t$ is the discrete (integer) time index during which a block of $L$ channel symbols^b is transmitted. $Y_e(t)$ is from a finite alphabet of size $2^{\lfloor L C_e \rfloor}$, where $\lfloor\cdot\rfloor$ denotes rounding down to the nearest integer. In the rest of the paper, the realizations of all capital-letter random variables are denoted by the corresponding lowercase letters.
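For concreteness, the following minimal Python sketch (our illustration, not part of the original formulation; the edge list and capacities are hypothetical) shows one way to represent such a directed graph and derive the $\mathrm{In}(v)$ and $\mathrm{Out}(v)$ sets of (1) and (2):

```python
from collections import defaultdict

# Hypothetical directed graph: nodes 1..n, edges listed as (tail, head).
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]            # e = 1, ..., |E|
C = {e: 1 for e in range(1, len(edges) + 1)}        # capacities C_e (bits/use)

In, Out = defaultdict(list), defaultdict(list)
for e, (tail, head) in enumerate(edges, start=1):
    Out[tail].append(e)   # e is in Out(v) iff tail(e) = v, cf. (2)
    In[head].append(e)    # e is in In(v)  iff head(e) = v, cf. (1)

print(In[3], Out[3])      # incoming and outgoing edges of node 3
```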

2.2 Source signals

The nodes of the network are equipped with sensors; specifically, we model the sensed signal at each node $v$ as an information source, $X_v$, where $X_v \in \mathbb{R}$. To reflect the natural correlation between data sensed at different nodes, we assume that the set of signals $X_v$ is near-sparse in some transform domain.

More specifically, defining the sorted vector^c of $X_v$'s,

$$\underline{X} = [X_v : v \in \mathcal{V}], \qquad (3)$$

we assume that $\underline{X}$ is near-sparse in some orthonormal transform domain $\phi \in \mathbb{R}^{n \times n}$. Explicitly, for $\underline{S} = \phi^T \cdot \underline{X}$ and a small positive $\epsilon_k$, we have

$$\frac{\|\underline{S} - \underline{S}_k\|_1}{\|\underline{S}\|_1} \le \epsilon_k, \qquad (4)$$

where $\underline{S}_k$ is such that

$$\|\underline{S}_k\|_0 = k, \qquad (5)$$

i.e., $\underline{S}_k$ is $k$-sparse. An example of the sparsifying transform matrix, $\phi$, is the Karhunen-Loève transform of the messages.

Moreover, we assume that the messages, $X_v$'s, take their values in a bounded interval between $-q_{\max}$ and $+q_{\max}$. This is a reasonable assumption, as the sensing range of sensors is usually limited. The value of $q_{\max}$ can be chosen after a statistical study of realizations of $X_v$'s, e.g., as a confidence region in which most realizations of $X_v$'s lie. Note that the sparsity model used in this paper is different from the conventional joint sparse model (JSM) [35], in that our node source signals, or messages, are scalar random variables without correlation over time in each node. This is a valid assumption, as a local transform coding could be applied to the time samples to generate a set of samples with no temporal redundancy.

2.3 Data gathering

Having characterized the correlated information sources and the network, we study the transmission of the $X_v$'s to a single gateway node. The gateway or decoder node, denoted by $v_0$, $v_0 \in \mathcal{V}$, has high computational resources and is usually in charge of forwarding the information to a next-level network, e.g., a wired backbone network. The described (single session) incast of sources to the unique decoder node is referred to as data gathering.

3 Quantized Network Coding

3.1 Principle

Random linear network coding for multicast of independent sources has been proposed and studied in [8], where the algebraic operations are carried out in a finite field. Since our work is motivated by the concepts of compressed sensing, whose results are valid in the infinite field of real numbers, we have to use a real field alternative to conventional finite field network coding. At the same time, the finite capacity of the edges has to be coped with appropriately. As a result, we propose a method that we call Quantized Network Coding, which uses quantization to match the infinite alphabet of real field network coded packets to the limited capacity of the network links.

In [31], for each network node $v \in \mathcal{V}$ and each outgoing edge $e \in \mathrm{Out}(v)$, we defined QNC at node $v$ according to

$$Y_e(t) = Q_e\!\left( \sum_{e' \in \mathrm{In}(v)} \beta_{e,e'}(t) \cdot Y_{e'}(t-1) + \alpha_{e,v}(t) \cdot X_v \right), \qquad (6)$$

for $t > 1$ and with $Y_e(1) = 0,\ \forall e \in \mathcal{E}$, to ensure an initial rest condition in the network. This means that, at time $t$, the packet on any outgoing edge of a node is a quantized linear combination of the packets received by the node at the previous time instant and of the information $X_v$ measured by the node. The messages, $X_v$'s, are assumed to be constant until the transmission is complete, which is why they do not depend on $t$. The local network coding coefficients, $\beta_{e,e'}(t)$'s and $\alpha_{e,v}(t)$'s, are real-valued, and the determination of their values is discussed in Section 4. The quantizer operator, $Q_e(\cdot)$, corresponding to outgoing edge $e$, is designed based on the values of $C_e$ and $L$ and on the distribution of its input (i.e., of the random linear combinations). A simple diagram of QNC at node $v$ is shown in Figure 2, and a minimal code sketch of this update follows.

Figure 2. A simple diagram of Quantized Network Coding.
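To make the update rule (6) concrete, the following minimal Python sketch (ours, not the authors' reference implementation) performs one QNC step for a single outgoing edge; it assumes a uniform quantizer with $2^{\lfloor L C_e \rfloor}$ levels on $[-q_{\max}, +q_{\max}]$, consistent with Section 6.2, and all numeric values are hypothetical.

```python
import numpy as np

def uniform_quantize(u, q_max, n_levels):
    """Uniform quantizer on [-q_max, +q_max] with n_levels intervals."""
    delta = 2.0 * q_max / n_levels                     # step size
    idx = np.clip(np.floor((u + q_max) / delta), 0, n_levels - 1)
    return -q_max + (idx + 0.5) * delta                # midpoint reconstruction

def qnc_node_update(y_in, beta, alpha, x_v, q_max, L, C_e):
    """One QNC step (6) for one outgoing edge e of node v.

    y_in  : contents Y_{e'}(t-1) of the incoming edges of v
    beta  : coefficients beta_{e,e'}(t), one per incoming edge
    alpha : coefficient alpha_{e,v}(t)
    """
    mix = np.dot(beta, y_in) + alpha * x_v             # real field combination
    return uniform_quantize(mix, q_max, 2 ** int(L * C_e))

# Hypothetical example: node with 3 incoming edges, L = 9, C_e = 1.
y_out = qnc_node_update(np.array([0.2, -0.1, 0.4]),
                        np.array([0.3, -0.3, 0.1]), 0.25,
                        x_v=0.7, q_max=1.0, L=9, C_e=1)
print(y_out)
```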

3.2 End-to-end equations

Denoting the quantization noise of $Q_e(\cdot)$ at time $t$ by $N_e(t)$, we can reformulate (6) as follows:

$$Y_e(t) = \sum_{e' \in \mathrm{In}(v)} \beta_{e,e'}(t) \cdot Y_{e'}(t-1) + \alpha_{e,v}(t) \cdot X_v + N_e(t). \qquad (7)$$

We define the adjacency matrix, $[F(t)]_{|\mathcal{E}| \times |\mathcal{E}|}$, as well as the matrix $[A(t)]_{|\mathcal{E}| \times n}$, as

$$\{F(t)\}_{e,e'} = \begin{cases} \beta_{e,e'}(t), & \mathrm{tail}(e) = \mathrm{head}(e'), \\ 0, & \text{otherwise}, \end{cases} \qquad (8)$$

$$\{A(t)\}_{e,v} = \begin{cases} \alpha_{e,v}(t), & \mathrm{tail}(e) = v, \\ 0, & \text{otherwise}. \end{cases} \qquad (9)$$

We also define the vectors of edge contents, $\underline{Y}(t)$, and quantization noises, $\underline{N}(t)$, according to

$$\underline{Y}(t) = [Y_e(t) : e \in \mathcal{E}], \qquad (10)$$

$$\underline{N}(t) = [N_e(t) : e \in \mathcal{E}]. \qquad (11)$$

As a result, (7) can be rewritten in the following form:

$$\underline{Y}(t) = F(t) \cdot \underline{Y}(t-1) + A(t) \cdot \underline{X} + \underline{N}(t). \qquad (12)$$

Depending on the network deployment, the matrix $[B]_{|\mathrm{In}(v_0)| \times |\mathcal{E}|}$ defines the relation between the contents of the edges, $\underline{Y}(t)$, and the packets received at the decoder node $v_0$. Explicitly, we define the vector of received packets at time $t$ at the decoder:

$$\underline{Z}(t) = [Y_e(t) : e \in \mathrm{In}(v_0)] = B \cdot \underline{Y}(t), \qquad (13)$$

where

$$\{B\}_{i,e} = \begin{cases} 1, & i \text{ corresponds to } e \in \mathrm{In}(v_0), \\ 0, & \text{otherwise}. \end{cases} \qquad (14)$$

By considering (12) as the difference equation characterizing a linear system with $\underline{X}$ and the $\underline{N}(t)$'s as its inputs and $\underline{Z}(t)$ as its output, and using the results in [36], one gets

$$\underline{Z}(t) = \Psi(t) \cdot \underline{X} + \underline{N}_{\mathrm{eff}}(t), \qquad (15)$$

where the measurement matrix, $\Psi(t)$, and the effective noise vector, $\underline{N}_{\mathrm{eff}}(t)$, are calculated as follows:

$$\Psi(t) = B \cdot \sum_{t'=2}^{t} F_{\mathrm{prod}}(t'+1;\, t)\, A(t'), \qquad (16)$$

$$\underline{N}_{\mathrm{eff}}(t) = B \cdot \sum_{t'=2}^{t} F_{\mathrm{prod}}(t'+1;\, t)\, \underline{N}(t'). \qquad (17)$$

In (16) and (17), $F_{\mathrm{prod}}(\cdot\,;\cdot)$ is defined as

$$F_{\mathrm{prod}}(t_1; t_2) = \begin{cases} F(t_2) \cdot F(t_2-1) \cdots F(t_1), & t_2 \ge t_1, \\ I_{|\mathcal{E}| \times |\mathcal{E}|}, & \text{otherwise}, \end{cases} \qquad (18)$$

where $I$ denotes the identity matrix.

By storing the $\underline{Z}(t)$'s at the decoder, we build up the total measurement vector, $\underline{Z}_{\mathrm{tot}}(t)$, as follows:

$$\underline{Z}_{\mathrm{tot}}(t) = \begin{bmatrix} \underline{Z}(2) \\ \vdots \\ \underline{Z}(t) \end{bmatrix}_{m \times 1}, \qquad (19)$$

where $m = (t-1)\,|\mathrm{In}(v_0)|$. Therefore, the following can be established:

$$\underline{Z}_{\mathrm{tot}}(t) = \Psi_{\mathrm{tot}}(t) \cdot \underline{X} + \underline{N}_{\mathrm{eff,tot}}(t), \qquad (20)$$

where the $m \times n$ total measurement matrix, $\Psi_{\mathrm{tot}}(t)$, and the total effective noise vector, $\underline{N}_{\mathrm{eff,tot}}(t)$, are the concatenations of the measurement matrices, $\Psi(t')$'s, and of the effective noise vectors, $\underline{N}_{\mathrm{eff}}(t')$'s. Because transmission starts at $t = 1$, the measurements in $\underline{Z}(1)$ are not useful for decoding, and therefore

$$\Psi_{\mathrm{tot}}(t) = \begin{bmatrix} \Psi(2) \\ \vdots \\ \Psi(t) \end{bmatrix}, \qquad (21)$$

$$\underline{N}_{\mathrm{eff,tot}}(t) = \begin{bmatrix} \underline{N}_{\mathrm{eff}}(2) \\ \vdots \\ \underline{N}_{\mathrm{eff}}(t) \end{bmatrix}. \qquad (22)$$
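For illustration, the construction of $\Psi_{\mathrm{tot}}(t)$ in (16) and (21) can be expressed compactly in code. The Python sketch below (ours; the matrix inputs are placeholders) propagates the products $F_{\mathrm{prod}}(t''+1;t)\,A(t'')$ recursively rather than recomputing them:

```python
import numpy as np

def build_psi_tot(F, A, B):
    """Stack Psi(2), ..., Psi(t_max) per (16) and (21).

    F[t] and A[t] hold the matrices F(t) and A(t) of (8)-(9) for
    t = 2, ..., t_max (dicts keyed by time); B is the selection matrix (14).
    """
    t_max = max(F)
    psi_blocks = []
    M = {}   # M[t''] tracks F_prod(t''+1; t) A(t'') as t grows
    for t in range(2, t_max + 1):
        # F_prod(t''+1; t) = F(t) F_prod(t''+1; t-1) for t'' < t;
        # identity for t'' = t, cf. (18).
        M = {tpp: F[t] @ Mt for tpp, Mt in M.items()}
        M[t] = A[t]
        psi_blocks.append(B @ sum(M.values()))      # Psi(t), cf. (16)
    return np.vstack(psi_blocks)                    # Psi_tot(t_max), cf. (21)
```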

In conventional linear network coding, the total number of measurements, $m$ (see (19)), is at least equal to the number of data sources, $n$ (here, the number of nodes in the network). Typically, the total measurement matrix is of full column rank, and if there is no uncertainty caused by measurement noise, a solution can be found uniquely. In this paper, we are interested in investigating the feasibility of robust recovery of $\underline{X}$ when fewer measurements are received at the decoder than there are messages, i.e., when $m < n$.

The characteristic equation (20) describing the QNC scenario can be treated as a compressed sensing measurement equation. This gives us the opportunity to apply results from the compressed sensing and sparse recovery literature [20, 37] to our QNC scenario with near-sparse messages. However, one needs to examine the conditions required to guarantee sparse recovery in the proposed QNC scenario. In the following, we discuss the theoretical and practical feasibility of robust recovery from a compressed sensing perspective.

4 Restricted isometry property

One of the main advantages of the compressed sensing approach is that it relies on a simple model of correlation for the sources: if sparse reconstruction can be applied successfully to recover $\underline{X}$ from Equation 20 at a given time $t$, this is achieved without requiring the encoders (network nodes) to know much about the underlying signal correlation. This section discusses the design of the linear mixing coefficients $\alpha_{e,v}(t)$ and $\beta_{e,e'}(t)$ and the impact of this design on the ability to apply sparse reconstruction techniques at the sink node $v_0$ to approximately recover the $n$ source signals $\underline{X}$ from the $m$ measurements available at a given time $t$, where $m < n$.

4.1 The restricted isometry property

One of the properties that is widely used to characterize appropriate measurement matrices in the compressed sensing literature is the restricted isometry property (RIP) [33]. Roughly speaking, this property provides a measure of norm conservation under dimensionality reduction [34]. In compressed sensing, the RIP of the measurement matrix between the sparse domain and the measurement domain allows one to draw strong conclusions about the possibility of recovering the original data from a small set of measurements [33]. In our case, this means that the RIP should hold for the matrix $\Theta_{\mathrm{tot}}(t) = \Psi_{\mathrm{tot}}(t)\,\phi$.

An $m \times n$ matrix $\Theta_{\mathrm{tot}}(t)$ is said to satisfy the RIP of order $k$ with constant $\delta_k$ if, for all $k$-sparse vectors $\underline{s}_k \in \mathbb{R}^n$, we have

$$1 - \delta_k \le \frac{\|\Theta_{\mathrm{tot}}(t)\, \underline{s}_k\|_2^2}{\|\underline{s}_k\|_2^2} \le 1 + \delta_k. \qquad (23)$$

Random matrices with i.i.d. zero-mean Gaussian entries are known to be appropriate measurement matrices for compressed sensing. Explicitly, an $m \times n$ i.i.d. Gaussian random matrix, denoted $G$, with entries of variance $\frac{1}{m}$, satisfies the RIP of order $k$ and constant $\delta_k$ with a probability exceeding $1 - e^{-\kappa_1 m}$ (called overwhelming probability) if $m > \kappa_2\, k \log(\frac{n}{k})$, where $\kappa_1$ and $\kappa_2$ depend only on the value of $\delta_k$ (Theorem 5.2 in [38]).

Using the results above, it follows that an $m \times n$ i.i.d. Gaussian random matrix, $G$, satisfies the RIP of order $k$ with high probability when the number of measurements, $m$, is of order $k \log(n/k)$; formally,

$$m = O\!\left(k \log \frac{n}{k}\right), \qquad (24)$$

which is smaller than the order of $n$, the size of the data [38].

4.2 QNC design for RIP

We now turn to the design of QNC coefficients in Equation 6 so that the overall design satisfies RIP with high probability. We assemble here several results from the literature and additional simulations to motivate the proposed design.

In [31, 32], we proposed a design for the local network coding coefficients, $\beta_{e,e'}(t)$'s and $\alpha_{e,v}(t)$'s, which results in a total measurement matrix, $\Psi_{\mathrm{tot}}(t)$, that is appropriate in the compressed sensing framework.

Theorem 1 (Theorem 3.1 in [32]).

Consider a Quantized Network Coding scenario in which the network coding coefficients, $\alpha_{e,v}(t)$ and $\beta_{e,e'}(t)$, are such that:

  • $\alpha_{e,v}(t) = 0$, for all $t > 2$;

  • $\alpha_{e,v}(2)$'s are independent zero-mean Gaussian random variables;

  • $\beta_{e,e'}(t)$'s are deterministic.

For such a scenario, the entries of the resulting $\Psi_{\mathrm{tot}}(t)$ are zero-mean Gaussian random variables. Further, the entries of different columns of $\Psi_{\mathrm{tot}}(t)$ are mutually independent. ■

It is also numerically shown in [32] that a locally orthogonal set of $\beta_{e,e'}(t)$'s is a better choice than non-orthogonal sets^d. This choice of coefficients is defined, for each node $v$ and for all $e, e' \in \mathrm{Out}(v)$, as

$$\sum_{e'' \in \mathrm{In}(v)} \beta_{e,e''}(t) \cdot \beta_{e',e''}(t) = 0, \quad e \ne e', \qquad \sum_{e'' \in \mathrm{In}(v)} \beta_{e,e''}^2(t) = \frac{1}{|\mathrm{In}(v)|^2}. \qquad (25)$$

In cases where the number of outgoing edges is greater than the number of incoming edges, i.e., $|\mathrm{Out}(v)| > |\mathrm{In}(v)|$, some of the outgoing edges are randomly removed (not used for transmission) to ensure that the generated $\beta_{e,e'}(t)$'s are locally orthogonal. Furthermore, the second equation in (25) is a coefficient normalization which has no specific impact at this stage of the analysis but which will be important in the study of the bounds on sparse recovery performance in Section 5. Heuristically, such a choice of orthogonal set makes each outgoing packet (of each node) innovative; a sketch of one way to generate such coefficients follows.
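As an illustration (our sketch, not the authors' code), coefficients satisfying (25) can be obtained by orthonormalizing a random matrix and rescaling; the QR-based construction below is one such choice.

```python
import numpy as np

def local_betas(n_in, n_out, rng=np.random.default_rng(0)):
    """Draw beta_{e,e'}(t) rows for one node v satisfying (25).

    Returns an array of shape (min(n_out, n_in), n_in): one row per
    surviving outgoing edge; rows are mutually orthogonal and each has
    squared norm 1/n_in**2.  If n_out > n_in, the excess outgoing edges
    are dropped, as described above.
    """
    n_rows = min(n_out, n_in)
    # Orthonormal rows via QR decomposition of a Gaussian matrix:
    q, _ = np.linalg.qr(rng.standard_normal((n_in, n_in)))
    return q[:n_rows, :] / n_in   # rescale: sum of squares = 1/n_in**2

B_v = local_betas(n_in=4, n_out=3)
print(B_v @ B_v.T)  # (1/16) * identity, up to float error
```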

In [32], we established the relation between the satisfaction of the RIP and the so-called tail probability,

$$p_{\mathrm{tail}}(\Psi_{\mathrm{tot}}(t), \epsilon) = \max_{\underline{x}:\ \|\underline{x}\|_2 = 1} P\!\left( \left| \|\Psi_{\mathrm{tot}}(t)\, \underline{x}\|_2^2 - 1 \right| > \epsilon \right), \qquad (26)$$

by proving the following theorem.

Theorem 2 (Theorem 4.1 in [32]).

Consider $\Psi_{\mathrm{tot}}(t)$ with the tail probability as defined in (26) and an orthonormal transform matrix $\phi$. Then, $\Theta_{\mathrm{tot}}(t) = \Psi_{\mathrm{tot}}(t)\,\phi$ satisfies the RIP of order $k$ and constant $\delta_k$ with a probability exceeding

$$1 - \binom{n}{k} \left( \frac{42}{\delta_k} \right)^{k} p_{\mathrm{tail}}\!\left( \Psi_{\mathrm{tot}}(t),\ \epsilon = \frac{\delta_k}{2} \right). \qquad (27)$$

In [32], we derived a detailed expression of the tail probability (26). Our ultimate goal would be to use this expression to conclude directly that the number of necessary measurements $m$ in the QNC scenario is of the same order as for the well-known Gaussian measurement matrix defined above. However, the relationship between the network and QNC parameters on the one hand and the measurement matrix $\Psi_{\mathrm{tot}}(t)$ on the other hand is too complicated to easily draw conclusions from (see Equations 8, 9, and 16). We therefore resort to the following reasoning: we first show through simulations that the tail probabilities of the QNC and Gaussian measurement matrices are of the same order; we then conclude that the QNC and Gaussian measurement matrices behave similarly in terms of RIP satisfaction, and thus in terms of the required number of measurements. A sketch of how such tail probabilities can be estimated numerically is given below.
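As a hedged illustration of how the tail probability can be estimated numerically, the following Python sketch (ours) approximates the maximization in (26) by sampling unit-norm directions and re-drawing the random matrix; it is shown for the i.i.d. Gaussian baseline, for which the distribution of $\|G\underline{x}\|_2^2$ does not depend on the direction $\underline{x}$.

```python
import numpy as np

def tail_prob_mc(make_matrix, n, eps, n_dirs=20, n_trials=500,
                 rng=np.random.default_rng(0)):
    """Monte Carlo estimate of the tail probability (26).

    make_matrix() draws a fresh random measurement matrix; the maximum
    over unit-norm x in (26) is approximated by a max over sampled
    directions (exact for rotation-invariant ensembles such as i.i.d.
    Gaussian matrices).
    """
    worst = 0.0
    for _ in range(n_dirs):
        x = rng.standard_normal(n)
        x /= np.linalg.norm(x)                       # ||x||_2 = 1
        hits = sum(abs(np.linalg.norm(make_matrix() @ x) ** 2 - 1) > eps
                   for _ in range(n_trials))
        worst = max(worst, hits / n_trials)
    return worst

# i.i.d. Gaussian baseline: m x n entries of variance 1/m, n = 100 sources.
n, m = 100, 40
gauss = lambda: np.random.default_rng().standard_normal((m, n)) / np.sqrt(m)
print(tail_prob_mc(gauss, n, eps=0.3))
```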

In Figure 3, we present the numerical values of the tail probabilities (defined in (26)) for the QNC measurement matrix $\Psi_{\mathrm{tot}}(t)$, $p_{\mathrm{tail}}(\Psi_{\mathrm{tot}}(t), \epsilon)$, using the local network coding coefficients proposed in Theorem 1 with $\beta_{e,e'}(t)$'s satisfying the locally orthogonal condition^e of (25). These tail probabilities are compared with those of i.i.d. Gaussian matrices, $G$, versus the number of measurements, $m$, in each case.

Figure 3. Numerical values of tail probabilities: logarithmic tail probability versus logarithmic ratio of the minimum required number of measurements in our QNC scenario and for i.i.d. Gaussian measurement matrices, for n=100 nodes, different RIP constants, and different node degrees. The network deployments for which the tail probabilities are calculated are described in Section 6. (a) Average hops = 3.9 (d0=0.25, GWcorner). (b) Average hops = 2.3 (d0=0.25, GWcenter).

Our numerical evaluations in Figure 3 show that for the same value of tail probability, the QNC measurement matrix, Ψtot(t), and the i.i.d. Gaussian matrix, G, require a number of measurements m of the same order.

We can therefore also say, using Theorem 2, that the QNC measurement matrix, Ψtot(t), and the i.i.d. Gaussian matrix, G, have a similar behavior in terms of satisfying RIP as a function of m, so that they will typically require values of m of the same order to ensure sparse recovery.

In the following section, we extend our discussion to robust recovery in the QNC scenario, using the guarantees implied by the satisfaction of the RIP.

5 Decoding using sparse recovery

In this section, we explore the performance of decoding using sparse recovery, based on Equation 20 and the QNC design proposed in Theorem 1. It is well known that recovery of exactly sparse vectors from an under-determined set of linear measurements can be done with no error, using linear programming [39]. Specifically, theoretical works show that the NP-hard $\ell_0$ minimization can be replaced with $\ell_1$ minimization without any associated error when dealing with noiseless measurements [37, 39]. However, when dealing with noisy measurements, $\ell_1$-min recovery does not necessarily offer a minimum mean squared error solution, and much work is still being done to develop practical, near minimum mean squared error recovery algorithms for noisy cases. Sparse recovery from quantized measurements has recently been studied in a number of works [40-42]. For instance, the authors in [41] consider the estimation of sparse vectors from measurements that are quantized and corrupted by Gaussian noise. The main aspect that differentiates our model from that in [41] is that, in our QNC scenario, the resulting effective total measurement noises are non-linear functions of the quantization noises at the edges.

Along the lines of [20, 33], the compressed sensing-based decoder for the QNC scenario solves the following convex optimization:

$$\hat{\underline{X}}(t) = \phi \cdot \arg\min_{\underline{S}} \|\underline{S}\|_1, \quad \text{subject to } \|\underline{Z}_{\mathrm{tot}}(t) - \Psi_{\mathrm{tot}}(t)\,\phi\,\underline{S}\|_2 \le \epsilon_{\mathrm{rec}}(t), \qquad (28)$$

which can be solved by using linear programming [39]. The following theorems present our results on the recovery error of the $\ell_1$-min decoding of (28).
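For illustration, here is a minimal sketch of the $\ell_1$-min decoder (28) using the cvxpy disciplined convex programming package (our choice of tool; the paper's simulations use the open-source implementation of [43], and the problem instance below is a random placeholder rather than a QNC-generated one).

```python
import cvxpy as cp
import numpy as np

def l1_min_decode(z_tot, psi_tot, phi, eps_rec):
    """Solve (28): min ||s||_1  s.t.  ||z_tot - Psi_tot phi s||_2 <= eps_rec."""
    s = cp.Variable(phi.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm1(s)),
                      [cp.norm2(z_tot - psi_tot @ phi @ s) <= eps_rec])
    prob.solve()
    return phi @ s.value                  # x_hat = phi * s_hat

# Placeholder example: random sparse instance, not a QNC-generated one.
rng = np.random.default_rng(1)
n, m, k = 100, 40, 3
phi, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal transform
s_true = np.zeros(n); s_true[:k] = rng.uniform(-0.5, 0.5, k)
psi = rng.standard_normal((m, n)) / np.sqrt(m)
z = psi @ phi @ s_true + 1e-3 * rng.standard_normal(m)
x_hat = l1_min_decode(z, psi, phi, eps_rec=2e-3 * np.sqrt(m))
print(np.linalg.norm(x_hat - phi @ s_true))
```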

Theorem 3.

Consider the QNC scenario where the absolute values of the messages are bounded by $q_{\max}$ and the local network coding coefficients are such that:

  • $\alpha_{e,v}(t) = 0$, for all $t > 2$;

  • $\alpha_{e,v}(2)$'s are independent zero-mean Gaussian random variables with variance $\sigma_0^2$;

  • $\beta_{e,e'}(t)$'s are deterministic and locally orthogonal according to (25).

In such a scenario, overflow of the linear combinations (beyond the limit $q_{\max}$) within the nodes happens with a probability less than or equal to

$$2\,|\mathcal{E}|\, Q(\sigma_0^{-1}), \qquad (29)$$

where $Q(\cdot)$ is the tail probability of the standard normal distribution (i.e., the one-sided Q function).

Proof.

Using the Cauchy-Schwarz inequality, for $t \ge 3$, we have

$$\sum_{e' \in \mathrm{In}(v)} |\beta_{e,e'}(t)| \le \sqrt{\sum_{e' \in \mathrm{In}(v)} |\beta_{e,e'}(t)|^2}\ \cdot \sqrt{\sum_{e' \in \mathrm{In}(v)} 1^2} \qquad (30)$$

$$\le \sqrt{\frac{1}{|\mathrm{In}(v)|^2} \cdot |\mathrm{In}(v)|^2} = 1. \qquad (31)$$

As a result, since the $\alpha_{e,v}(t)$'s are zero for $t \ge 3$, and since the incoming edge contents are bounded by $q_{\max}$ while the combination weights sum (in absolute value) to at most one, the linear combination stays within $[-q_{\max}, +q_{\max}]$; hence, overflow cannot happen for $t \ge 3$.

For $t = 2$, since only the node message $X_v$ is available at each node, the values of the $\beta_{e,e'}(2)$'s have no effect. Hence, only the value of $\alpha_{e,v}(2)$ can cause overflow, and therefore $|\alpha_{e,v}(2)|$ should be less than or equal to one to prevent it. Moreover, because of the Gaussian distribution of the $\alpha_{e,v}(2)$'s, each $\alpha_{e,v}(2)$ has an absolute value greater than one with probability $2Q(\sigma_0^{-1})$. Therefore, by the union bound, the probability that there is at least one $\alpha_{e,v}(2)$ with $|\alpha_{e,v}(2)| > 1$ is upper bounded by $2\,|\mathcal{E}|\,Q(\sigma_0^{-1})$.

Theorem 4.

Consider a QNC scenario where, for all $v \in \mathcal{V}$, the network coding coefficients satisfy the conditions of Theorem 3 and where, based on the discussion in Section 4, the measurement matrix $\Theta_{\mathrm{tot}}(t) = \Psi_{\mathrm{tot}}(t)\,\phi$ satisfies the RIP of order $2k$ with constant $\delta_{2k} < \sqrt{2} - 1$. The edge quantizers, $Q_e(\cdot)$'s, are assumed to be uniform^f with step size $\Delta_e$. Then, with a probability exceeding

$$1 - 2\,|\mathcal{E}|\, Q(\sigma_0^{-1}), \qquad (32)$$

the $\ell_1$-min decoding of (28) satisfies

$$\|\hat{\underline{X}}(t) - \underline{X}\|_2 \le c_1\, \epsilon_{\mathrm{rec}}(t) + 2\, c_2\, \epsilon_k, \qquad (33)$$

where $\epsilon_{\mathrm{rec}}^2(t)$ is defined as

$$\epsilon_{\mathrm{rec}}^2(t) = \frac{1}{4} \sum_{t'=2}^{t}\ \sum_{e \in \mathrm{In}(v_0)} \left( \sum_{t''=2}^{t'} \sum_{e'=1}^{|\mathcal{E}|} \big| \{F_{\mathrm{prod}}(t''+1;\, t')\}_{e,e'} \big|\, \Delta_{e'} \right)^{2}, \qquad (34)$$

and the matrix product $F_{\mathrm{prod}}(\cdot\,;\cdot)$ in (34) is as defined in (18). The constants $c_1$ and $c_2$ are defined as follows:

$$c_1 = 4\, \frac{\sqrt{1 + \delta_{2k}}}{1 - (1 + \sqrt{2})\, \delta_{2k}}, \qquad (35)$$

$$c_2 = 2\, \frac{1 - (1 - \sqrt{2})\, \delta_{2k}}{1 - (1 + \sqrt{2})\, \delta_{2k}}. \qquad (36)$$
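As a quick numeric illustration (ours), the constants (35) and (36) can be evaluated for a sample RIP constant satisfying $\delta_{2k} < \sqrt{2} - 1$:

```python
import math

def recovery_constants(delta_2k):
    """Evaluate c1 and c2 of (35)-(36); requires delta_2k < sqrt(2) - 1."""
    assert delta_2k < math.sqrt(2) - 1
    denom = 1 - (1 + math.sqrt(2)) * delta_2k
    c1 = 4 * math.sqrt(1 + delta_2k) / denom
    c2 = 2 * (1 - (1 - math.sqrt(2)) * delta_2k) / denom
    return c1, c2

print(recovery_constants(0.2))  # delta_2k = 0.2 -> c1 ~ 8.5, c2 ~ 4.2
```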

Proof.

According to Theorem 3, the conditions on the local network coding coefficients ensure that overflow does not happen, with a probability exceeding $1 - 2|\mathcal{E}|\,Q(\sigma_0^{-1})$. Further, since the network is lossless, the only measurement noise is the one resulting from the quantization noise at the edges. For each uniform quantizer $Q_e(\cdot)$, $e \in \mathcal{E}$, we have

$$-\frac{\Delta_e}{2} \le N_e(t) \le +\frac{\Delta_e}{2}. \qquad (37)$$

This implies

$$\|\underline{N}_{\mathrm{eff,tot}}(t)\|_2^2 = \sum_{t'=2}^{t} \sum_{i=1}^{|\mathrm{In}(v_0)|} \left( \{\underline{N}_{\mathrm{eff}}(t')\}_i \right)^2 \qquad (38)$$

$$= \sum_{t'=2}^{t}\ \sum_{e \in \mathrm{In}(v_0)} \left( \sum_{t''=2}^{t'} \left\{ F_{\mathrm{prod}}(t''+1;\, t')\, \underline{N}(t'') \right\}_e \right)^2 \qquad (39)$$

$$= \sum_{t'=2}^{t}\ \sum_{e \in \mathrm{In}(v_0)} \left( \sum_{t''=2}^{t'} \sum_{e'=1}^{|\mathcal{E}|} \{F_{\mathrm{prod}}(t''+1;\, t')\}_{e,e'}\, N_{e'}(t'') \right)^2 \qquad (40)$$

$$\le \sum_{t'=2}^{t}\ \sum_{e \in \mathrm{In}(v_0)} \left( \sum_{t''=2}^{t'} \sum_{e'=1}^{|\mathcal{E}|} \big| \{F_{\mathrm{prod}}(t''+1;\, t')\}_{e,e'} \big|\, |N_{e'}(t'')| \right)^2 \qquad (41)$$

$$\le \frac{1}{4} \sum_{t'=2}^{t}\ \sum_{e \in \mathrm{In}(v_0)} \left( \sum_{t''=2}^{t'} \sum_{e'=1}^{|\mathcal{E}|} \big| \{F_{\mathrm{prod}}(t''+1;\, t')\}_{e,e'} \big|\, \Delta_{e'} \right)^2 \qquad (42)$$

$$= \epsilon_{\mathrm{rec}}^2(t), \qquad (43)$$

where (39) holds because of the one-to-one mapping structure of the $B$ matrix. This provides an upper bound on the $\ell_2$-norm of the measurement noise in our QNC scenario.

According to Theorem 4.2 in [21], when the measurement matrix satisfies the RIP of appropriate order and constant (as in the assumptions of Theorem 4) and the measurement noise is bounded, $\ell_1$-min recovery yields an estimate with bounded recovery error. Explicitly, the bound is as in (33), considering the near-sparsity model of the messages and the obtained bound on the measurement noise.

According to the preceding theorem, the upper bound contribution $c_1\,\epsilon_{\mathrm{rec}}(t)$ decreases when the quantization steps, $\Delta_e$'s, decrease. Since $\Delta_e = 2\,q_{\max} / 2^{\lfloor L C_e \rfloor}$, a smaller upper bound on the $\ell_2$ norm of the recovery error can be obtained by increasing the block length, $L$. Although this can be done in practice, it simultaneously increases the point-to-point transmission delays in the network, which may not be desirable. This creates a trade-off between reconstruction quality and delay, which is explored in detail in Section 6.

As discussed in Theorem 4, the local network coding coefficients proposed in (25) ensure that the normalization is respected and that overflow does not happen, with high probability. More precisely, an appropriate value of $\sigma_0$ should be picked for this purpose. For example, when the number of edges is of the order of 1,000, selecting $\sigma_0 = 0.25$ results in a low probability of overflow, as the following quick check illustrates.
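Evaluating the overflow bound (29) for these values (a quick check of ours):

```python
from scipy.stats import norm

E, sigma0 = 1000, 0.25                     # |E| edges, coefficient std sigma_0
p_overflow = 2 * E * norm.sf(1 / sigma0)   # (29): 2 |E| Q(1/sigma_0)
print(p_overflow)                          # ~0.063, i.e., overflow is unlikely
```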

It was also discussed in Section 4 that the resulting $\Theta_{\mathrm{tot}}(t) = \Psi_{\mathrm{tot}}(t)\,\phi$ satisfies the RIP condition with high probability, when the local network coding coefficients are generated according to the assumptions of Theorem 1, with a number of measurements $m$ of the same order as would be required for an i.i.d. Gaussian measurement matrix. Based on Theorem 4, if the resulting $\Psi_{\mathrm{tot}}(t)$ satisfies the RIP of appropriate order with high probability, then robust recovery is guaranteed with high probability.

Therefore, putting all these numerical and theoretical results together, QNC results in bounded-error recovery (33) with a number of measurements (i.e., of packets received at the decoder) of smaller order than the number of messages. This saving in the required number of received packets can be interpreted as an embedded distributed compression, achieved by Quantized Network Coding at the nodes: the more packets are received at the decoder, the larger $m$ is and the lower the reconstruction error is.

6 Simulation results

In this section, we evaluate the performance of Quantized Network Coding through different numerical simulations. The main motivation behind the proposed Quantized Network Coding technique is to allow reconstruction of the correlated source signals at the sink node, or decoder, from a limited number of measurements. To this end, we compare delay-distortion curves for different data gathering algorithms. Our performance analysis includes a statistical evaluation of the proposed QNC scenario versus packet forwarding and conventional finite field network coding schemes. The resulting analysis provides a comprehensive comparison between these transmission methods for different network deployments and source correlations.

6.1 Network deployment and message generation

To set up the simulations, we generate random deployments of networks with directed links, obtained from a transmission power loss model. Specifically, a certain number of nodes, $n$, are deployed in a unit-square two-dimensional region, according to a uniform distribution. One of the deployed nodes is randomly picked to be the gateway node, $v_0$, at which the messages are decoded. In our simulations, we examine two different probability models for picking the gateway node. In the first model, denoted by GWcorner, the gateway node is uniformly picked from the nodes within the corner regions of the unit square, as shown in Figure 4a. In the second model, denoted by GWcenter, the gateway node is uniformly picked from the nodes within the center region of the unit square, also shown in Figure 4a.

Figure 4. Network deployment with transmission power decay model. (a) Gateway node selection. (b) A deployed network in GWcorner mode with d0=0.15.

The asymmetric connectivity of two nodes (which is different from full-duplex transmission over links) is determined according to an exponential power decay model: denoting the distance between node $i$ and node $j$ by $d_{i,j}$, there is an edge (link) from $i$ to $j$ if

$$d_{i,j} \le d_0 \qquad (44)$$

and

$$P_{i,j} \le P_0, \qquad (45)$$

where $d_0$ is a threshold which determines the communication range of the sensor nodes, $P_{i,j}$ is a uniform random variable between 0 and 1, and $P_0$ ($0 < P_0 \le 1$) tunes the average percentage of nodes within communication range of a sensor toward which there is a link. We change the value of $d_0$ (and typically keep $P_0 = 0.9$) to generate networks with different numbers of edges and different maximum hop distances, as described later in this section. The different settings used for generating network deployments, the resulting average node degree $|\mathrm{In}(v)|$, and the resulting average hop distances of the nodes from the gateway node are presented in Table 1.

Table 1 Number of hops in each deployment setting with n=100 nodes

In our simulations, each communication link (edge) can maintain a lossless communication of 1 bit per use, i.e., $C_e = 1$ for all $e \in \mathcal{E}$. We also assume that there is no interference from transmissions of other nodes, which may be achieved by using a time multiplexing strategy. A sample network deployment is shown in Figure 4b, where the arrows represent the directed links between the nodes.
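The deployment model of (44) and (45) is summarized by the following Python sketch (ours; the seed and parameter values are illustrative):

```python
import numpy as np

def deploy_network(n=100, d0=0.25, P0=0.9, rng=np.random.default_rng(2)):
    """Random deployment on the unit square with directed links per (44)-(45)."""
    pos = rng.uniform(0.0, 1.0, size=(n, 2))       # node positions
    edges = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d_ij = np.linalg.norm(pos[i] - pos[j])
            # Directed link i -> j kept if within range d0 and the
            # independent uniform draw P_ij falls below P0:
            if d_ij <= d0 and rng.uniform() <= P0:
                edges.append((i, j))
    return pos, edges

pos, edges = deploy_network()
print(len(edges), "directed links")
```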

To generate a realization of the messages, $\underline{X}$, we first generate a $k$-sparse random vector, $\underline{S}_k$, whose non-zero components are uniformly distributed between $-\frac{1}{2}$ and $+\frac{1}{2}$. Then, a near-sparse vector, $\underline{s}$, is obtained such that the elements of $(\underline{s} - \underline{s}_k)$ are drawn from independent zero-mean uniform random variables and

$$\frac{\|\underline{s} - \underline{s}_k\|_1}{\|\underline{s}\|_1} = \epsilon_k. \qquad (46)$$

This is followed by the generation of an orthonormal random matrix, $\phi$, and the calculation of the random messages: $\underline{x} = \phi \cdot \underline{s}$. To ensure that the $x_j$'s are bounded, they are normalized between $-q_{\max}$ and $+q_{\max}$ (i.e., the $x_j$'s are multiplied by a constant value). The value of $q_{\max}$ used for the simulations does not affect the simulation results, since we use the average SNR as the measure of decoding quality. We study the performance of the different transmission scenarios by repeating our simulations for different values of the sparsity factor, $k/n$, and of the near-sparsity parameter, $\epsilon_k$; a sketch of this generation procedure follows.
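The message generation procedure can be sketched as follows (our illustration; the rescaling that enforces (46) is approximate for small $\epsilon_k$, as noted in the comments):

```python
import numpy as np

def generate_messages(n=100, k=5, eps_k=0.05, q_max=1.0,
                      rng=np.random.default_rng(3)):
    """Near-sparse message generation as in Section 6.1 (our sketch)."""
    s_k = np.zeros(n)
    support = rng.choice(n, k, replace=False)
    s_k[support] = rng.uniform(-0.5, 0.5, k)        # k-sparse part
    noise = rng.uniform(-0.5, 0.5, n)               # zero-mean uniform part
    # Scale the perturbation against ||s_k||_1 so that the ratio (46)
    # is approximately eps_k (exact only in the small-eps_k limit):
    c = eps_k * np.linalg.norm(s_k, 1) / np.linalg.norm(noise, 1)
    s = s_k + c * noise
    phi, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthonormal phi
    x = phi @ s
    return q_max * x / np.abs(x).max(), phi             # bound by q_max

x, phi = generate_messages()
```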

The average signal-to-noise ratio (SNR) is used as the quality measure in our numerical comparisons. Explicitly, for the decoded messages of a scheme, $\hat{\underline{x}}$, the average SNR is defined as

$$\mathrm{SNR} = 20 \log_{10} \frac{\overline{\|\underline{x}\|_2}}{\overline{\|\underline{x} - \hat{\underline{x}}\|_2}}, \qquad (47)$$

where $\overline{(\cdot)}$ stands for the average over different realizations of the network deployments. For each realization of the network deployment, we only generate one realization of the messages; therefore, taking the average over different network deployments is enough to obtain the average SNR values.
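In code, (47) amounts to averaging the two norms over deployment realizations before taking their ratio (our sketch):

```python
import numpy as np

def average_snr_db(x_list, x_hat_list):
    """Average SNR (47): 20 log10(mean ||x|| / mean ||x - x_hat||)."""
    sig = np.mean([np.linalg.norm(x) for x in x_list])
    err = np.mean([np.linalg.norm(x - xh)
                   for x, xh in zip(x_list, x_hat_list)])
    return 20 * np.log10(sig / err)
```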

The counterpart measure in our comparisons is the corresponding average delivery delay required to achieve the desired quality of service (average SNR). Explicitly, the delivery delay of a transmission which has terminated at time $t$ is equal to $(t-1)\,L$ in all transmission scenarios. In the case of packet forwarding, we do not count the learning period required to find the routes from each sensor node to the decoder node.

The simulation parameters used are listed in Table 2, where we also describe the different simulated transmission scenarios.

Table 2 The parameters of messages and the networks used in our simulations

6.2 Quantized Network Coding

For each generated random network deployment, we perform QNC with $\ell_1$-min decoding. The local network coding coefficients, $\alpha_{e,v}(t)$'s and $\beta_{e,e'}(t)$'s, are generated according to the conditions of Theorem 3, with $\sigma_0 = 0.25$. The edge quantizers, $Q_e(\cdot)$'s, have a uniform characteristic with range $[-q_{\max}, +q_{\max}]$ and $2^L$ intervals (since $C_e = 1$, $\forall e$). The random $\alpha_{e,v}(2)$'s and $\beta_{e,e'}(t)$'s can be generated in a pseudo-random way, so that only the generator seed needs to be transmitted to the decoder in a packet header.

At the decoder, the measurements received up to time $t$, $\underline{Z}_{\mathrm{tot}}(t)$, are used to recover the original messages. Specifically, for a realization of the messages, $\underline{X}$, we define $\hat{\underline{x}}_{\mathrm{QNC}}(t)$ to be the recovered messages, using $\ell_1$-min decoding according to (28). The convex optimization involved in (28) is solved by using the open-source implementation of disciplined convex programming [43]. Moreover, the network deployment is assumed to be known at the decoder in order to build up the $\Psi_{\mathrm{tot}}(t)$ matrices (the random generator seed is enough to regenerate the local network coding coefficients). Although the exact sparsity of the messages, $k$, does not need to be known for $\ell_1$-min decoding, the sparsifying transform, $\phi$, has to be known. The block length, $L$, also has to be known at the decoder to calculate the level of the effective measurement noise, i.e., the $\epsilon_{\mathrm{rec}}(t)$'s.

6.3 Quantization and packet forwarding

For each deployment, we also simulate routing-based packet forwarding and compare it with the results for QNC. To find the routes, we compute the shortest path from each node to the gateway node using Dijkstra's algorithm [44]. Further, the real-valued messages, $x_v$'s, are quantized at their corresponding source nodes, using uniform quantizers similar to those used in the QNC transmission. The system delivers all the $x_v$'s to the decoder node over a certain period of time and keeps track of the delivered messages over time, $t$, in the recovered vector of messages, $\hat{\underline{x}}_{\mathrm{PF}}(t)$. If a message, $x_v$, has not been delivered by time index $t$, zero is used as its recovered value:

$$\{\hat{\underline{x}}_{\mathrm{PF}}(t)\}_v = 0. \qquad (48)$$

6.4 Quantization and packet forwarding with CS decoding

The quantization and packet forwarding with CS decoding (QandPFwithCS) scenario is exactly the same as the quantization and packet forwarding (QandPF) scenario, except at the decoder side. Specifically, if the messages of some nodes have not yet been delivered, the decoder tries to recover them from the other received (quantized) messages, using compressed sensing decoding. Explicitly, we define $\Psi_{\mathrm{tot,PF}}(t)$ to be the mapping matrix from the messages to the received quantized messages, i.e.,

$$\{\Psi_{\mathrm{tot,PF}}(t)\}_{i,v} = \begin{cases} 1, & Q(x_v) \text{ delivered by } t \text{ and corresponding to the } i\text{th received packet}, \\ 0, & Q(x_v) \text{ not delivered by } t, \end{cases} \qquad (49)$$

and $\underline{z}_{\mathrm{tot,PF}}(t)$ to be the set of quantized messages received (delivered via PF) at the decoder. In this case, the following $\ell_1$ minimization is solved:

$$\hat{\underline{x}}_{\mathrm{PFCS,0}}(t) = \phi \cdot \arg\min_{\underline{s}} \|\underline{s}\|_1, \quad \text{subject to } \|\underline{z}_{\mathrm{tot,PF}}(t) - \Psi_{\mathrm{tot,PF}}(t)\,\phi\,\underline{s}\|_2 \le \epsilon_{\mathrm{rec,PF}}(t), \qquad (50)$$

where $\epsilon_{\mathrm{rec,PF}}(t)$ is the upper bound on the $\ell_2$-norm of the quantization noise of the delivered messages^g. Then, for each $v$ whose quantized message $Q(x_v)$ has still not been delivered, we use $\{\hat{\underline{x}}_{\mathrm{PFCS,0}}(t)\}_v$, meaning

$$\{\hat{\underline{x}}_{\mathrm{PFCS}}\}_v = \begin{cases} Q(x_v), & Q(x_v) \text{ delivered by } t, \\ \{\hat{\underline{x}}_{\mathrm{PFCS,0}}(t)\}_v, & Q(x_v) \text{ not delivered by } t. \end{cases} \qquad (51)$$

As can be expected, compressed sensing-based decoding finds an approximate estimate of the undelivered messages by exploiting the redundancy of the messages, and thereby improves the overall performance in terms of recovery error norm.

6.5 Quantization and network coding

Conventional finite field network coding is also simulated for the transmission of the messages to the decoder node. In this scenario, similar to packet forwarding, the messages are first quantized at their source nodes, using a uniform quantizer. The quantizers have a range between $-q_{\max}$ and $+q_{\max}$, and their step size depends on the transmission block length, $L$. The quantized messages are then transmitted to the decoder node by running a classical batch-based finite field network coding [7, 8]. The field size in network coding is determined by the value of $L$, and the network coding coefficients are picked randomly and uniformly from the field elements. At the decoder node, the received finite field packets are collected until $n$ of them are stored, at which point the transmission is stopped. If the finite field matrix which maps the messages to the packets received at the decoder node has full column rank, then the quantized messages can be reconstructed without any error. However, if the field size is not large enough and matrix inversion is not possible, then none of the messages can be decoded. In that case, we set the reconstructed (decoded) messages equal to their mean value (i.e., 0 in our simulations):

$$\{\hat{\underline{x}}_{\mathrm{QandNC}}(t)\}_v = 0, \quad \forall v. \qquad (52)$$

This is referred to as all-or-nothing decoding in the conventional network coding literature. Similar to the QNC scenario, the network deployment is assumed to be known at the decoder node, and the mapping matrix (from messages to received packets) can be built up from only the seed of the pseudo-random generators.

6.6 Analysis of simulation results

For a fixed block length, $L = 9$, the average SNR values versus the average delivery delay are depicted in Figure 5. In Figure 5a,b, the horizontal axis represents the product $(t-1)\,L$, which is the delivery delay for different values of $t \ge 1$. The vertical axis is the average SNR, calculated according to (47), for the QNC, QandPF (with and without compressed sensing decoding), and quantization and network coding (QandNC) scenarios.

Figure 5. Average SNR versus average delivery delay for the QNC, PF, and QandNC scenarios, when εk=0. (a) d0=0.15, GWcenter, average hops=5.3, L=9. (b) d0=0.35, GWcenter, average hops=1.7, L=12.

As shown in Figure 5a,b, when using the same block length, QNC achieves a significant improvement over PF for low values of delivery delay. These low delays correspond to the initial $t$'s of the transmission, at which only a small number of packets have been received at the decoder. As promised by the theory of compressed sensing, fewer measurements already enable message recovery, with an associated measurement noise. After enough packets are received at the decoder, QNC achieves its best performance (where the curve is almost flat). This best performance improves (i.e., the average SNR increases) when the correlation of the messages is higher (i.e., when the sparsity factor $k/n$ is lower).

The best performance of QandPF, QandPFwithCS, and QandNC is reached after a longer period of time than for QNC. As can be seen, this is the best achievable quality (SNR value), limited only by the quantization noise at the source nodes, for both the QandPF and QandNC scenarios. As expected, using compressed sensing decoding (as in the QandPFwithCS scenario) provides a better estimate of the messages before all the packets are delivered. Furthermore, as opposed to QandPF, which shows a progressive improvement in quality, QandNC has an all-or-nothing characteristic, as mentioned earlier. It is also interesting to note that the low-density adjacency matrices of networks with a small node degree result in (finite field) measurement matrices that are not of full rank in the QandNC scenario. Hence, as shown in Figure 5a, the QandNC scheme fails to work properly there.

The quantization noises and their propagation through the network do not allow QNC to achieve the same best performance as the PF and QandNC scenarios (where only source quantization noise is involved). However, as shown in the following, QNC outperforms the QandPF (with and without compressed sensing decoding) and QandNC scenarios over a wide range of delay values, when an appropriate block length is chosen.

After simulating the QNC, QandPF, QandPFwithCS, and QandNC scenarios for different block lengths and calculating the corresponding delays and recovery error norms, we find the best value of the block length for each specific average SNR value. The resulting L-optimized curves for each of these scenarios are shown in Figure 6.

Figure 6. Average SNR versus average delivery delay for εk=0. (a) d0=0.15, GWcenter, average degree=5.3, average hops=5.3. (b) d0=0.15, GWcorner, average degree=5.3, average hops=9.7. (c) d0=0.25, GWcenter, average degree=13.9, average hops=2.3. (d) d0=0.25, GWcorner, average degree=13.9, average hops=3.9. (e) d0=0.35, GWcenter, average degree=24.8, average hops=1.7. (f) d0=0.35, GWcorner, average degree=24.8, average hops=2.7.

It can be seen in Figure 6a,b,c,d that, when the network does not have too many links (i.e., when the average node degree is low), the proposed QNC scenario outperforms both routing-based packet forwarding (with and without compressed sensing decoding) and the conventional QandNC scenario. This holds over a wide range of average SNR values, up to around 35 dB, which is considered high quality in many applications. Moreover, as expected, the average SNR of the QNC scenario increases when the correlation of the messages increases (i.e., when the sparsity factor, $k/n$, decreases).

As shown in Figure 6e,f, when dealing with networks with a very high number of edges, which results in small average hop distances, the proposed QNC scenario cannot outperform the QandNC scenario at very high SNR values (explicitly, average SNR values higher than 40 dB). This may be a result of the propagation of quantization noise through the network during the QNC steps, which raises the effective measurement noise above the level that sparse recovery can compensate.

By comparing the figures in which only the location of the gateway node changes, i.e., from GWcenter to GWcorner (Figure 6a to Figure 6b and Figure 6c to Figure 6d), we observe that QNC behaves more robustly than the PF and QandNC schemes. In other words, QNC does not suffer from the complications (especially present in packet forwarding) caused by an asymmetric distribution of the network flow. Using compressed sensing decoding for packet forwarding, as in the QandPFwithCS scenario, improves the performance of packet forwarding in this situation, although it still cannot outperform the QNC scenario.

We have also studied the effect of the near-sparsity parameter, $\epsilon_k$, on the performance of our QNC scheme. The results are shown in Figure 7, where the average SNR is depicted versus the average delivery delay, for different network deployment settings and a fixed sparsity factor of $k/n = 0.01$. Increasing the near-sparsity parameter, $\epsilon_k$, means that the generated messages move further away from the sparsity model. As a result, the performance of QNC degrades as $\epsilon_k$ increases, as can be seen in Figure 7a,b,c,d,e,f. A more sophisticated correlation model, which would incorporate into the decoding procedure prior information about the messages other than sparsity alone, may improve the performance of the QNC scenario. We are currently studying this possibility, and our initial findings are reported in [45, 46].

Figure 7. Average SNR versus average delivery delay for k/n=0.01. (a) d0=0.15, GWcenter, average degree=5.3, average hops=5.3. (b) d0=0.15, GWcorner, average degree=5.3, average hops=9.7. (c) d0=0.25, GWcenter, average degree=13.9, average hops=2.3. (d) d0=0.25, GWcorner, average degree=13.9, average hops=3.9. (e) d0=0.35, GWcenter, average degree=24.8, average hops=1.7. (f) d0=0.35, GWcorner, average degree=24.8, average hops=2.7.

In the routing-based packet forwarding scenarios (with and without compressed sensing decoding), the intermediate (sensor) nodes have to go through route training and packet queuing. One of the main advantages of QNC is that the intermediate nodes only have to carry out simple linear combining and quantization, which reduces the required computational power of the intermediate sensor nodes (they still have to perform sensing and physical layer transmission). On the other hand, at the decoder side, QNC requires an $\ell_1$-min decoder, which is potentially more complex than the receiver required for packet forwarding. However, since the gateway node is usually capable of handling heavier computation, this may not be an issue in practical cases.

6.7 QNC in lossy networks

Although it is not the main focus of our paper, we have run some numerical simulations to assess the robustness of the QNC scenario in lossy networks. Specifically, we consider a network model similar to the one used in the lossless case, but with packet losses. More precisely, all the links are assumed to have a bit dropping rate of $p_{\mathrm{drop}}$, i.e., a bit (which corresponds to a symbol in the case of $C_e = 1$ considered in the simulations) is dropped (lost) during transmission with probability $p_{\mathrm{drop}}$. When dealing with packets of length $L$, a packet is considered dropped if one or more of its bits are lost. This applies to all the transmission schemes described in Section 6.1.
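The resulting packet loss probability follows directly from the bit dropping model (our quick check; $L = 9$ and $L = 12$ appear in Figure 5, the other value is illustrative):

```python
# A packet of L bits is dropped if at least one of its bits is lost.
for p_drop in (1e-5, 1e-4, 1e-2):
    for L in (3, 9, 12):
        p_pkt = 1 - (1 - p_drop) ** L
        print(f"p_drop={p_drop:g}, L={L}: packet loss prob = {p_pkt:.2e}")
```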

In packet forwarding, if a packet is not successfully transmitted over a link, it needs to be completely retransmitted. In the QandNC and QNC scenarios, where finite field network coding and Quantized Network Coding are adopted, the loss of a packet (transmitted over a link) is reflected by a zero value for the corresponding local network coding coefficient.

The simulation results for this lossy network scenario are shown in Figure 8. As in the lossless case, the curves are obtained by finding the appropriate packet length for each SNR value. We have simulated a wide range of bit loss rates and show results for a few representative values of $p_{\mathrm{drop}}$: low loss rates of $p_{\mathrm{drop}} = 10^{-5}$ and $10^{-4}$, and a high loss rate of $p_{\mathrm{drop}} = 10^{-2}$.

Figure 8. Average SNR versus average delivery delay in the presence of packet loss. (a) d0=0.15, GWcenter. (b) d0=0.25, GWcorner.

Since the low SNR values (low decoding quality) in the QNC scenario are obtained by using small packet lengths (small values of $L$), the probability of a bit drop within a packet is smaller than for a larger packet length (larger $L$). As a result, the performance curves do not differ much across loss rates: as shown in Figure 8a,b, only a small gap appears between the curves of different $p_{\mathrm{drop}}$ values at low SNR values.

Moreover, since the compressed sensing decoder exploits the correlation between the messages, it is able to reconstruct some messages even when their corresponding linear measurements are lost in transmission. This can also be seen in the QandPF scenario when compressed sensing decoding is adopted.

7 Conclusions

Joint source-network coding of correlated sources was studied from a sparse recovery perspective. In order to encode correlated sources without requiring the encoders to know the source correlation model, we proposed Quantized Network Coding, which combines real field network coding and quantization and takes advantage of decoding via linear programming. Building on the compressed sensing literature, we discussed theoretical guarantees that ensure efficient encoding and robust decoding of the messages. Moreover, we were able to make conclusive statements about the robust recovery of the messages when fewer packets than the number of source signals (messages) have been received at the decoder. Finally, our computer simulations verified the reduction in average delivery delay obtained by using Quantized Network Coding.

We are currently studying the feasibility of near minimum mean squared error decoding when other forms of prior information about the source are available. Specifically, we have suggested the use of belief propagation-based decoding [45] in a Bayesian scenario. However, more theoretical work is needed to derive mathematical guarantees for robust recovery. Studying the general case of lossy networks with interference between the links is also one of the proposed future directions.

Endnotes

a They only mention that dense networks satisfy restricted eigenvalue condition and do not prove it.

b Although the impact and value of L are not discussed at this point, it is an important design parameter, which will be extensively discussed in Section 6.

c In this paper, all the vectors are column-wise.

d This choice reduces the tail probabilities defined later on in Equation 26 and, as such, increases the probability of the measurement matrix satisfying the RIP.

e Explicitly, we use a predetermined set of orthogonal matrices as the $\beta_{e,e'}(t)$'s. Further, the variances of the $\alpha_{e,v}(2)$'s are picked identical, such that the mean of the 2-norms (defined in [32]) is equal to 1.

f Although a uniform quantizer may not be the best choice for some message distributions, it is still widely used in practice. It also allows us to simplify the mathematical analysis to provide a theoretical bound on the resulting recovery error. The study of the impact of different quantizer designs is left as a future work.

g This depends on the characteristics of the quantizers used at the source nodes to quantize each message before packet forwarding. Specifically, in our simulations, where we used uniform quantizers with step size $\Delta_Q$, $\epsilon_{\mathrm{rec,PF}}(t)$ is equal to the product of $\Delta_Q$ and the number of delivered quantized messages.

References

  1. Akyildiz I, Su W, Sankarasubramaniam Y, Cayirci E: A survey on sensor networks. IEEE Commun. Mag 2002, 40(8):102-114. 10.1109/MCOM.2002.1024422

    Article  Google Scholar 

  2. Chong C, Kumar S: Sensor networks: evolution, opportunities, and challenges. Proc. IEEE 2003, 91(8):1247-1256. 10.1109/JPROC.2003.814918

    Article  Google Scholar 

  3. Ahlswede R, Cai N, Li S-Y, Yeung R: Network information flow. IEEE Trans. Inf. Theory 2000, 46: 1204-1216. 10.1109/18.850663

    Article  MathSciNet  Google Scholar 

  4. Al-Karaki J, Kamal A: Routing techniques in wireless sensor networks: a survey. IEEE Wireless Commun 2004, 11(6):6-28. 10.1109/MWC.2004.1368893

    Article  Google Scholar 

  5. Ho T, Koetter R, Medard M, Karger D, Effros M: The benefits of coding over routing in a randomized setting. In IEEE International Symposium on Information Theory. IEEE, Piscataway; 2003:442-442.

    Google Scholar 

  6. Fragouli C: Network coding for sensor networks. In Handbook Array Processing Sensor Networks. Wiley Online Library; 2009:645-667.

    Google Scholar 

  7. Koetter R, Médard M: An algebraic approach to network coding. IEEE Trans. Netw 2003, 11(5):782-795. 10.1109/TNET.2003.818197

  8. Ho T, Medard M, Koetter R, Karger D, Effros M, Shi J, Leong B: A random linear network coding approach to multicast. IEEE Trans. Inf. Theory 2006, 52: 4413-4430.

  9. Lim S, Kim Y, El Gamal A, Chung S: Noisy network coding. IEEE Trans. Inf. Theory 2011, 57(5):3132-3152.

  10. Dana A, Gowaikar R, Palanki R, Hassibi B, Effros M: Capacity of wireless erasure networks. IEEE Trans. Inf. Theory 2006, 52: 789-804.

  11. Slepian D, Wolf J: Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19(4):471-480. 10.1109/TIT.1973.1055037

  12. Xiong Z, Liveris A, Cheng S: Distributed source coding for sensor networks. IEEE Signal Process. Mag 2004, 21(5):80-94. 10.1109/MSP.2004.1328091

  13. Han TS: Slepian-Wolf-Cover theorem for networks of channels. Inf. Control 1980, 47(1):67-83. 10.1016/S0019-9958(80)90284-3

  14. Ho T, Médard M, Effros M, Koetter R, Karger D: Network coding for correlated sources. In Proceedings of Conference on Information Sciences and Systems. CiteSeer; 2004.

  15. Ramamoorthy A, Jain K, Chou PA, Effros M: Separating distributed source coding from network coding. IEEE/ACM Trans. Netw 2006, 14: 2785-2795.

  16. Wu Y, Stankovic V, Xiong Z, Kung S: On practical design for joint distributed source and network coding. IEEE Trans. Inf. Theory 2009, 55(4):1709-1720.

  17. Maierbacher G, Barros J, Médard M: Practical source-network decoding. In 6th International Symposium on Wireless Communication Systems. IEEE, Piscataway; 2009:283-287.

  18. Cruz S, Maierbacher G, Barros J: Joint source-network coding for large-scale sensor networks. In IEEE International Symposium on Information Theory Proceedings. IEEE, Piscataway; 2011:420-424.

  19. Kschischang F, Frey B, Loeliger H: Factor graphs and the sum-product algorithm. IEEE Trans. Inf. Theory 2001, 47(2):498-519. 10.1109/18.910572

  20. Donoho D: Compressed sensing. IEEE Trans. Inf. Theory 2006, 52: 1289-1306.

  21. Baraniuk R, Davenport M, Duarte M, Hegde C: An Introduction to Compressive Sensing. Boston: Addison-Wesley; 2011.

  22. Haupt J, Bajwa W, Rabbat M, Nowak R: Compressed sensing for networked data. IEEE Signal Process. Mag 2008, 25: 92-101.

  23. Nguyen N, Jones D, Krishnamurthy S: Netcompress: coupling network coding and compressed sensing for efficient data communication in wireless sensor networks. In 2010 IEEE Workshop on Signal Processing Systems. IEEE, Piscataway; 2010:356-361.

  24. Luo C, Wu F, Sun J, Chen CW: Compressive data gathering for large-scale wireless sensor networks. In Proceedings of the 15th Annual International Conference on Mobile Computing and Networking. ACM, New York; 2009:145-156.

  25. Feizi S, Médard M, Effros M: Compressive sensing over networks. In 48th Annual Allerton Conference on Communication, Control, and Computing. IEEE, Piscataway; 2010:1129-1136.

  26. Xu W, Mallada E, Tang A: Compressive sensing over graphs. In IEEE International Conference on Computer Communications (INFOCOM). IEEE, Piscataway; 2011:2087-2095.

  27. Wang M, Xu W, Mallada E, Tang A: Sparse recovery with graph constraints: fundamental limits and measurement construction. In IEEE International Conference on Computer Communications (INFOCOM). IEEE, Piscataway; 2012:1871-1879.

  28. Feizi S, Medard M: A power efficient sensing/communication scheme: joint source-channel-network coding by using compressive sensing. In 49th Annual Allerton Conference on Communication, Control, and Computing. IEEE, Piscataway; 2011:1048-1054.

  29. Bassi F, Chao L, Iwaza L, Kieffer M: Compressive linear network coding for efficient data collection in wireless sensor networks. In Proceedings of the 2012 European Signal Processing Conference. IEEE, Piscataway; 2012:1-5.

  30. Dey B, Katti S, Jaggi S, Katabi D, Medard M, Shintre S: “Real” and “complex” network codes: promises and challenges. In Fourth Workshop on Network Coding, Theory and Applications (NetCod 2008). IEEE, Piscataway; 2008:1-6.

  31. Nabaee M, Labeau F: Quantized network coding for sparse messages. In IEEE Statistical Signal Processing Workshop. IEEE, Piscataway; 2012:832-835.

  32. Nabaee M, Labeau F: Restricted isometry property in quantized network coding of sparse messages. In IEEE Global Telecommunications Conference. IEEE, Piscataway; 2012.

  33. Candes EJ: The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique 2008, 346(9-10):589-592. 10.1016/j.crma.2008.03.014

  34. Baraniuk R: Compressive sensing. IEEE Signal Process. Mag 2007, 24(4):118-121.

  35. Duarte MF, Sarvotham S, Wakin MB, Baron D, Baraniuk RG: Joint sparsity models for distributed compressed sensing. In Proceedings of the Workshop on Signal Processing with Adaptative Sparse Structured Representations. IEEE, Piscataway; 2005.

  36. Kailath T: Linear Systems. Englewood Cliffs: Prentice-Hall; 1980.

  37. Candes E, Romberg J: Sparsity and incoherence in compressive sampling. Inverse Probl 2007, 23(3):969. 10.1088/0266-5611/23/3/008

  38. Baraniuk R, Davenport M, Devore R, Wakin M: A simple proof of the restricted isometry property for random matrices. Constr. Approx 2007, 28(3):253-263.

  39. Candes E, Tao T: Decoding by linear programming. IEEE Trans. Inf. Theory 2005, 51: 4203-4215. 10.1109/TIT.2005.858979

  40. Dai W, Pham HV, Milenkovic O: Distortion-rate functions for quantized compressive sensing. In IEEE Information Theory Workshop on Networking and Information Theory. IEEE, Piscataway; 2009:171-175.

  41. Zymnis A, Boyd S, Candes E: Compressed sensing with quantized measurements. IEEE Signal Process. Lett 2010, 17(2):149-152.

  42. Jacques L, Hammond DK, Fadili JM: Dequantizing compressed sensing: when oversampling and non-Gaussian constraints combine. IEEE Trans. Inf. Theory 2011, 57(1):559-571.

  43. Grant M, Boyd S: CVX: Matlab software for disciplined convex programming, version 1.21. 2012. http://cvxr.com/cvx. Accessed Aug 2012

  44. Dijkstra E: A note on two problems in connexion with graphs. Numerische Mathematik 1959, 1(1):269-271. 10.1007/BF01386390

  45. Nabaee M, Labeau F: Non-adaptive distributed compression in networks. In 2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE). IEEE, Piscataway; 2013:239-244.

  46. Nabaee M, Labeau F: Bayesian quantized network coding via generalized approximate message passing. In 2014 Wireless Telecommunications Symposium. IEEE, Piscataway; 2014.

Acknowledgements

This work was supported by Hydro-Québec, the Natural Sciences and Engineering Research Council of Canada, and McGill University in the framework of the NSERC/Hydro-Québec/McGill Industrial Research Chair in Interactive Information Infrastructure for the Power Grid.

Author information

Corresponding author

Correspondence to Mahdy Nabaee.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Nabaee, M., Labeau, F. Quantized Network Coding for correlated sources. J Wireless Com Network 2014, 40 (2014). https://doi.org/10.1186/1687-1499-2014-40
