Scalable Multiple-Description Image Coding Based on Embedded Quantization



INTRODUCTION
Compression is of paramount importance in modern multimedia systems in order to improve bandwidth usage and to reduce the costs associated with signal transport. Sending image and video data efficiently over an ideal (error-free) channel basically consists of removing the redundancy from the input signal. Additionally, in the context of browsing through large data sets and fast access to large images transmitted over low-bandwidth channels, employing a compression scheme with progressive transmission capabilities is of critical importance. Scalable image coding technologies [1] enable media providers to generate, in a single compression step, a unique bitstream from which appropriate subsets, producing different visual qualities, frame rates, and resolutions, can be extracted to meet the preferences and the bitrate requirements of a broad range of clients. Moreover, at the decoder side it is possible to refine the image quality as more data is received.
On the other hand, in data communications over unreliable channels (e.g., mobile wireless or best-effort networks), achieving overall performance optimization is not always equivalent to minimizing the redundancy within the input stream. Hence, streaming data over networks involves much more than taking the output of a standard coder and writing it to the socket, entirely justifying the need for a new paradigm called robust source coding. In this context, several multiple-description (MD) coding techniques (e.g., [2-6]) have been introduced to efficiently overcome the channel impairments over diversity-based systems, allowing the decoders to extract meaningful information from a subset of the transmitted data. MD coding systems generate more than one description of the source, such that (i) each description independently describes the source with a certain fidelity, and (ii) when more than one description is available at the decoder, these descriptions can be combined to enhance the quality of the decoded signal.
In light of the above, we may conclude that, on the one hand, modern image compression systems have to provide scalability in order to meet the heterogeneous nature of networks and clients in today's communication systems, and, on the other hand, efficient robust coding needs to be provided as well, in order to cope with the challenges posed by multimedia communication over error-prone channels. This paper addresses this combined problem and proposes a class of scalable MD image coding systems as a solution. The key component in these systems is embedded multiple-description scalar quantization (EMDSQ) [7-11]. EMDSQ belongs to the broad family of multiple-description coding approaches based on scalar quantization. The basic principle of MD coding based on scalar quantization (MDSQ) was first proposed by Vaishampayan in [3], while the optimal design of central and side quantizers has been extensively studied for the fixed-rate case in [12, 13]. The classical MDSQs are fixed-rate quantizers, which makes it impossible to design embedded coding schemes able to refine the image quality at the decoder side as more information is received. This problem was solved in [14], where multiple-description uniform scalar quantizers (MDUSQ) were proposed, simultaneously enabling multiple-description coding and a scalable encoding of the input source.
More recently, we proposed generic EMDSQ [7-11] producing double-deadzone central quantizers, known to be optimal at high rates and very nearly optimal at low rates [1]. This allows coding systems employing EMDSQ to provide state-of-the-art coding results in data communications under realistic network conditions and, due to the embedded nature of EMDSQ, also to support progressive transmission and fine-grain rate adaptation of the output stream.
In contrast to MDUSQ, the design of EMDSQ treats the generic case of an arbitrary number of descriptions. Another important feature of EMDSQ is their unique ability to control not only the overall redundancy between the descriptions but also the redundancy at each distinct quantization level.
In this paper, we propose a new scalable MD coding approach for images that couples EMDSQ with a customized version of our wavelet-based quadtree (QT) coding algorithm, originally proposed in [15, 16]. The system will be referred to as MD-QT.
In the proposed MD-QT approach, EMDSQ is used to produce a scalable MD representation of the wavelet coefficients, while QT coding is employed to encode, for each description, the localization information indicating the positions of the coefficients found to be significant at each quantization level. However, simply coupling EMDSQ with QT coding only provides a multiple-description representation of the quantization indices. A critical design aspect is to extend the MD paradigm and to produce multiple representations of the localization information as well. This idea is followed in our design of an enhanced MD-QT image coding system. The experimental results show that in the context of transmission over error-prone packet networks, the enhanced MD-QT system provides a substantial performance increase in comparison to the simple MD-QT approach.
The rest of the paper is structured as follows. Section 2 presents the EMDSQ, focusing on two main issues: Section 2.1 describes the way EMDSQs are constructed by recursive splitting of an index-assignment (IA) matrix, while Section 2.2 describes the mechanism that allows for a flexible redundancy allocation at each distinct quantization level. The proposed MD-QT algorithm is detailed in Section 3.1. Section 3.2 then presents an enhanced version of the algorithm, extending the MD coding paradigm to all types of encoded information. The experimental results are presented in Section 4, illustrating the rate-distortion performance of the proposed still-image codecs in data transmission over error-prone networks. Finally, Section 5 summarizes the conclusions of our work.

Embedded index assignment
An EMDSQ is an embedded scalar quantizer designed to operate in a diversity-based communication system. We define an EMDSQ as a set of embedded side quantizers $Q_m^p$ generating two descriptions, where $m$ denotes the description index ($m = 1, 2$) and $p$ the quantization level ($0 \leq p \leq P$). The corresponding set of embedded central quantizers is denoted by $Q^0, Q^1, \ldots, Q^P$. For any $p < P$, the partition cells of the quantizers $Q_m^p$ and $Q^p$ are embedded in the partition cells of the quantizers $Q_m^{p+1}$ and $Q^{p+1}$, respectively [1]. Consider an IA matrix $M$. The matrix $M$ is recursively split along each dimension $m = 1, 2$ for a number of $P$ levels. Then, for every level $p$, with $0 \leq p \leq P$, $M$ can be considered as a block matrix of the form $M = [B_{j_1 j_2}^p]$, $1 \leq j_m \leq J_m^p$. The corresponding side quantizer for dimension $m$ at level $p$ is $Q_m^p$, containing $J_m^p$ cells. In order to obtain balanced descriptions at each distinct quantization level $p$, the condition $J_1^p = J_2^p = J^p$ has to be satisfied for any $p$, $0 \leq p < P$. Further, considering an arbitrary block $B$, the indices of the lower-rate quantizers are obtained by discarding components of the higher-rate quantizer indices, similar to embedded scalar quantization [1]. Hence, the cells $C_{q_m^p}$ are embedded in the cells $C_{q_m^{p+1}}$ for any $p$ and $m$, and the side-quantizer indices are embedded. This shows that the recursive splitting of $M$ generates embedded side quantizers; consequently, the central quantizers are embedded as well. The relation between the number of cells contained in any $Q_m^p$ and the splitting factors $L^i$ is $J^p = \prod_{i=p}^{P} L^i$. One can conclude that by splitting $B_{j_1 j_2}^{p+1}$, a number $L^p \times L^p$ of blocks $B_{i_1 i_2}^p$ results at the lower level $p$. The recursive splitting of $M$ along each dimension is illustrated in Figure 1.
Furthermore, following the recursive splitting of a balanced embedded IA $M$ as described above, we define a strictly increasing sequence of natural numbers $b_p$ by $b_P = 1$ and $b_p = \prod_{i=p+1}^{P} L^i$ for $p < P$. Given $b_p$, any output index $n_m$ of $Q_m^0$ for a source sample $x \in \mathbb{R}$ can be uniquely represented in the Smarandache general numeration base [17] as $n_m = \sum_{p=0}^{P} q_m^p b_p$, where $0 \leq q_m^p \leq L^p - 1$. Hence, as described above, $\mathbf{q}_m^p = (q_m^P, q_m^{P-1}, \ldots, q_m^p)$ is an embedded quantizer index representing the output of the embedded side quantizer $Q_m^p$.
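The embedded index representation above amounts to a mixed-radix digit decomposition. The sketch below (a hypothetical helper, not code from the paper) decomposes a top-level quantizer index into its per-level digits $q_m^p$, assuming the per-level splitting factors $L^p$ are known:

```python
def digits(n, L):
    """Decompose index n into digits q_0, ..., q_P of the mixed-radix
    (general numeration) base defined by the per-level splitting factors.
    L is ordered from the finest level (p = 0) to the coarsest (p = P);
    each digit satisfies 0 <= q_p <= L_p - 1."""
    q = []
    for Lp in L:
        q.append(n % Lp)   # digit at the current level
        n //= Lp           # discarding it yields the embedded lower-rate index
    return q               # q[0] is the finest-level digit

# Example with L = [3, 3, 3]: index 17 = 2 + 2*3 + 1*9 -> digits [2, 2, 1].
# Keeping only the coarser digits (1, 2) gives the embedded index 1*3 + 2 = 5.
```

Discarding the leading (finest) digits is exactly the embedding property: the lower-rate side-quantizer index is a prefix of the higher-rate one.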
As previously indicated in the literature on scalar quantization, an entropy-constrained uniform quantizer is very nearly optimal for input sources with smooth probability density functions (PDFs) [18, 19]. For embedded quantization, a notable case in which all embedded quantizers can be optimal is the uniform case [1]. The conditions that an embedded IA $M = [B_{j_1 j_2}^p]_{1 \leq j_m \leq J^p}$ must satisfy in order to yield embedded uniform central quantizers are as follows: (1) the central quantizer corresponding to the highest rate ($Q^0$) has to be uniform; (2) for all $p$, $0 \leq p \leq P$, each block $B_{j_1 j_2}^p \neq [0]$ ($1 \leq j_m \leq J^p$) maps a constant number of consecutive nonzero indices (by $[0]$ we denote the zero matrix).
The proof of the theorem from which the above conditions are derived is given in [8].
We define the operator $\mathrm{nnz}(M)$, which returns the number of nonzero elements contained in a matrix $M$.

Redundancy control
Denote by $R_m$ the side rates and by $D_m(R_m)$ the corresponding side-description distortions. Also, denote by $D_0$ the central distortion. In single-description (SD) coding, one minimizes $D_0$ for a given rate $R_0$. The redundancy is the bitrate sacrificed by MD coding compared to SD coding in order to achieve the same central distortion $D_0$: $\rho = R_1 + R_2 - R_0$. In the embedded case, the overall rate is the cumulated value of the rates corresponding to each distinct quantization level, and it can be written analytically as $R_0 = \sum_{p=0}^{P} R_0^p$, where $R_0^p$ represents the rate at level $p$. In the same way, for embedded MD coding we can write for each of the side descriptions $R_m = \sum_{p=0}^{P} R_m^p$, where $R_m^p$ represents the side rate at level $p$. This results in the redundancy per quantization level $\rho_p = R_1^p + R_2^p - R_0^p$, from which the overall redundancy between the descriptions can be interpreted as the sum of the redundancies at each distinct quantization level: $\rho = \sum_{p=0}^{P} \rho_p$. In our case, $R_m = \log_2(\prod_{i=p}^{P} L^i)$ and $R_0 = \log_2(\prod_{i=p}^{P} N^i)$, which leads to $\rho_p = 2\log_2 L^p - \log_2 N^p = \log_2\!\left((L^p)^2/N^p\right)$. This simple formulation shows that for any EMDSQ instantiation, the redundancy is directly dependent on the level $p$.
In addition, the redundancy can be controlled at each distinct quantization level by changing the ratio between $N^p$ and $L^p$ (via the $N^p$ and $L^p$ parameters).
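As a numerical illustration of the per-level redundancy, the toy function below (our own sketch, not the paper's code) evaluates $\rho_p = 2\log_2 L^p - \log_2 N^p$ for a given choice of parameters:

```python
import math

def level_redundancy(L_p, N_p):
    """Redundancy (in bits) allocated at one quantization level:
    rho_p = 2*log2(L_p) - log2(N_p) = log2(L_p**2 / N_p)."""
    return 2 * math.log2(L_p) - math.log2(N_p)

# With L_p = 3 sub-cells per side quantizer: N_p = L_p (fully correlated
# refinements) gives log2(3) bits of redundancy at this level, while
# N_p = L_p**2 (independent refinements) gives zero redundancy.
```

The two extreme settings in the comment correspond to the full-redundancy and no-redundancy cases used later in the experiments.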
It is noticeable that this redundancy is rate-dependent, ranging in $[0, R_0]$. In order to obtain a rate-independent analytical expression of the redundancy (more appropriate for measurement purposes), we rely on the normalized redundancy $\rho / R_0$, lying in the range $[0, 1]$. In progressive coding relying on embedded quantization, the information within the stream is prioritized according to its impact on the overall rate-distortion performance. Hence, in order to improve the protection provided by an MD coding system, the redundancy in the layers corresponding to the coarser quantization levels should be higher than the redundancy corresponding to the finer levels. In other words, the redundancy has to increase with the level $p$.

Multiple-description quadtree image codec
In this section, we present a still-image MD coding system enabling progressive transmission over unreliable channels. The proposed MD quadtree (MD-QT) coding approach is a wavelet-based system derived from our single-description square-partitioning (SQP) codec proposed in [15, 16]. SQP employs successive approximation quantization (SAQ) of the wavelet coefficients, followed by quadtree coding of the significance maps and adaptive arithmetic entropy coding. Other quadtree-based embedded image coding techniques have been proposed in the past, including for instance the nested quadtree splitting (NQS) algorithm of [20], the wavelet-based quadtree (WQT) codec of [15], and the set-partitioned embedded block coding (SPECK) approach of [21]. A later improvement of SQP is the QT-L codec of [22, 23], providing competitive compression performance against state-of-the-art codecs such as JPEG2000 [1] or SPIHT [24]. The choice of the underlying single-description (SD) coding system is thus justified by the proven fact [22, 23] that intraband coding based on quadtree coding of the significance maps provides competitive compression performance against the state of the art.
In the proposed MD-QT approach, the classical SAQ is replaced by EMDSQ. This allows for producing more than one description of the input data. Additionally, EMDSQ retains the capability to provide fine-grain rate adaptation and progressive transmission of each description. Finally, the employed EMDSQ provides embedded deadzone quantization not only for the central quantizer, but for the side quantizers as well. Due to this feature, the proposed codec provides competitive rate-distortion characteristics for both central and side decoders.
At each quantization level $p$, $0 \leq p \leq P$, the proposed MD-QT coding algorithm performs the same coding passes as our SQP codec of [15]: the significance pass, in which the significant wavelet coefficients $w$, $T^p \leq |w| < T^{p+1}$, exceeding the significance threshold $T^p$ of the current level, are localized and quantized, and the refinement pass, during which the quantization accuracy of the already significant coefficients (i.e., those with $|w| \geq T^{p+1}$) is refined. With the exception of the full-redundancy case, a different side quantizer is employed for each side description. Hence, it is natural to allocate a different set of significance thresholds for each distinct description, denoted by $T_m^p$, with $m$ representing the description index ($m = 1, 2$ in the following). Once the positions and the corresponding symbols of the significant coefficients are encoded, $p$ is set to $P - 1$ and the significance pass is restarted to identify the new significant coefficients. At every quantization level $p < P$, only the significance of the previously nonsignificant coefficients is encoded, and the corresponding quantization step is applied.
In order to detail the algorithm, we begin by defining the QT partition rule used in the significance pass and by introducing the related notation. Consider the wavelet-transformed image $W$ as a matrix of dimension $V_1 \times V_2$, and denote by $w(\mathbf{l})$ a wavelet coefficient contained in the quadrant $QT(\mathbf{k}, \mathbf{v})$, where $v_1$ and $v_2$ represent the quadrant's width and height, respectively. For simplification, we assume identical power-of-two quadrant dimensions $v_1$ and $v_2$, that is, $v_1 = v_2 = 2^r$ for some $r \in \mathbb{N}$. Hence, the quadrant $QT(\mathbf{k}, \mathbf{v})$ can be considered as a square matrix containing the wavelet coefficients $w(\mathbf{l})$ whose indices $\mathbf{l}$ lie inside the quadrant. According to these notations, the wavelet-transformed image can be considered as a quadrant denoted by $W = QT(\mathbf{0}, \mathbf{V})$, where $\mathbf{0} = [0\ 0]^T$ and $\mathbf{V} = [V_1\ V_2]^T$ denotes the image size.
The significance of any quadrant $QT(\mathbf{k}, \mathbf{v}) \in W$ with respect to the applied threshold $T_m^p$ is determined via the significance operator $\sigma^p$, which equals 1 if the quadrant contains at least one coefficient with magnitude of at least $T_m^p$, and 0 otherwise. The binary matrix $QT_m^p(\mathbf{k}, \mathbf{v})$, which indicates the significance $\sigma^p(w(\mathbf{l}))$ of each coefficient $w(\mathbf{l}) \in QT(\mathbf{k}, \mathbf{v})$ with respect to the applied threshold $T_m^p$, is defined accordingly. Finally, we define a partitioning rule that divides a significant quadrant and the corresponding matrix $QT_m^p(\mathbf{k}, \mathbf{v})$ into four adjacent minors $QT_m^p(\mathbf{k} + \mathbf{v}\alpha/2, \mathbf{v}/2)$, where $\alpha = \mathrm{diag}(\alpha_1, \alpha_2)$ with $\alpha_i \in \{0, 1\}$ for $i = 1, 2$. We next describe, for the case of two descriptions, the significance and refinement passes performed by the encoder. Starting with the coarsest quantization level $p = P$, the significance pass is activated first and the significance of the wavelet image $W$ is determined with respect to the threshold $T_m^P$ as in (11). If $\sigma^P(W) = 1$, a significance symbol is emitted and the significance map $QT_m^p(\mathbf{0}, \mathbf{V})$ is split into four quadrants $QT_m^p(\mathbf{V}\alpha/2, \mathbf{V}/2)$ according to the partitioning rule (13). Then, following a depth-first technique [25], the descendant quadrants are further tested for significance and only the significant ones are iteratively split as in (13). The recursive process ends when all the $4 \times 4$ leaf nodes containing at least one significant coefficient are isolated and quantized by applying the proper EMDSQ side quantizer, and the output symbols are added to the stream.
Conversely, if $\sigma^P(W) = 0$, the coefficients belonging to nonsignificant quadrants need not be explicitly quantized, and a single nonsignificance symbol suffices to map all elements of the quadrant to the side-quantizer deadzone.
Thus, the significance pass (i) records the positions $\mathbf{l}$ of all the leaf nodes newly identified as significant, using a recursive tree structure of quadrants, and (ii) quantizes the values of the coefficients contained in the significant leaf nodes. The tree structure of matrices produced by the partitioning rule can be represented by the corresponding tree structure of significance and nonsignificance symbols. The employed depth-first scanning procedure allows the encoder to map the tree structure to a one-dimensional stream of symbols. Similarly, by inverting this mapping, the decoder reconstructs, at each quantization level, the significance matrices from the received stream.
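The depth-first significance pass described above can be sketched as a recursive procedure. The code below is a minimal illustration of the quadtree splitting and symbol emission only (the names, the use of a plain list of lists for the square power-of-two block, and the omission of EMDSQ index allocation and entropy coding are all our assumptions):

```python
def encode_significance(block, threshold, min_size=4):
    """Depth-first quadtree significance coding (sketch).
    Emits 1 for a significant quadrant (then recurses into its four
    sub-quadrants) and 0 for a nonsignificant one, whose coefficients
    are all mapped to the deadzone by that single symbol. Leaf quadrants
    of size min_size are where coefficient quantization would occur."""
    symbols = []

    def visit(r0, c0, size):
        quad = [row[c0:c0 + size] for row in block[r0:r0 + size]]
        significant = any(abs(w) >= threshold for row in quad for w in row)
        symbols.append(1 if significant else 0)
        if significant and size > min_size:
            half = size // 2
            for dr in (0, half):          # depth-first scan of the four minors
                for dc in (0, half):
                    visit(r0 + dr, c0 + dc, half)

    visit(0, 0, len(block))               # block assumed square, power of two
    return symbols

# An 8x8 block with a single large coefficient at (0, 0) yields one
# significance symbol for the root, then 1, 0, 0, 0 for its quadrants.
```

Inverting the procedure at the decoder is the mirror image: read one symbol per visited quadrant and recurse only on the significant ones.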
Note that for an individual wavelet coefficient we no longer apply the significance operator $\sigma^p$; instead, we use the quantizer-index allocation operator, denoted by $\delta(w(\mathbf{l}))$, which determines the partition cell in which the wavelet coefficient is contained. To do so, the wavelet coefficients are compared with the partition-cell boundary points. Consider that an arbitrary partition cell at level $p$ is divided into at most $L^{p-1}$ partition cells at level $p - 1$. In the index-allocation process, the operator $\delta$ determines the symbol associated with each quantized coefficient by identifying the sub-cell whose boundary points enclose the coefficient. All the coefficients contained in significant leaf nodes are stored in a significance list; we denote by $w^p(\mathbf{l})$ the coefficients contained in the significance list at level $p$.
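In effect, the operator $\delta$ reduces to locating a value among sorted partition-cell boundary points. A minimal sketch, assuming the interior boundaries of the current quantization level are available as a sorted list (the boundary values in the example are hypothetical):

```python
import bisect

def delta(w, boundaries):
    """Quantizer-index allocation (sketch): return the index of the
    partition cell containing the coefficient magnitude w, given the
    sorted interior cell-boundary points of the current level."""
    return bisect.bisect_right(boundaries, w)

# Boundaries [1, 4, 13] split [0, inf) into four cells:
# delta(0.5, [1, 4, 13]) -> 0 (deadzone), delta(6, [1, 4, 13]) -> 2.
```

Binary search keeps the allocation cost logarithmic in the number of cells, which matters only marginally here but keeps the sketch idiomatic.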
Subsequently, the significance pass is followed by the refinement pass. This pass is activated at every level $p < P$ in order to refine the quantization accuracy of the coefficients recorded in the significance list, which were already coded as significant at a previous quantization level $q$, $p < q \leq P$. The coefficients stored in the significance list are refined by applying the quantizer-index allocation operator $\delta$ corresponding to the current coding pass. Before applying the operator $\delta$, all the coefficients contained in the significance list must be rescaled according to a refinement threshold $T_r = T_m^p$. The rescaling is performed by subtracting the value of the significance threshold from the coefficient magnitude until the resulting value is smaller than the refinement threshold; the rescaled value $w^p(\mathbf{l})$ is obtained from $w^{p+1}(\mathbf{l})$ after the minimum number $n$ of such subtractions. Last, $p$ is decremented and the described procedure is repeated until the finest quantization accuracy is achieved, that is, $p = 0$, or the target bitrate is met.
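The rescaling rule can be sketched literally as repeated subtraction. The threshold values in the example are hypothetical, chosen only so that the loop terminates at a nonnegative remainder; this is a sketch of the stated rule, not the paper's implementation:

```python
def rescale(w, significance_threshold, refinement_threshold):
    """Rescale a coefficient before refinement (sketch): subtract the
    significance threshold from the magnitude until the result drops
    below the refinement threshold; n counts the subtractions."""
    w = abs(w)
    n = 0
    while w >= refinement_threshold:
        w -= significance_threshold
        n += 1
    return w, n

# E.g. rescale(20, 9, 6) performs two subtractions: 20 -> 11 -> 2.
```

The remainder is then fed to the operator $\delta$ of the current pass to produce the refinement symbol.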
Finally, we illustrate the manner in which the significance thresholds are computed, in the particular case of an EMDSQ instantiation [10] with $M = 2$, as depicted in Figure 2. For the coarsest quantization accuracy, $p = P$, the starting thresholds corresponding to the two channels are $T_1^P = 2T$ and $T_2^P = T$, respectively. Since it is not desirable for the quantizer to exhibit an overload region, the value of $T$ is related to the highest absolute magnitude $w_{\max}$ of the wavelet coefficients. Hence, the maximum number of quantization levels is $P = \lfloor \log_3(w_{\max}/3) \rfloor + 1$. In general, for $p < P$, the significance thresholds used for each channel $m = 1, 2$ are given by (16).
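The quantities stated above can be sketched directly; note that the exact expression relating $T$ to $w_{\max}$ and the per-level thresholds of (16) are not reproduced here, so the sketch computes only the level count and the two starting thresholds:

```python
import math

def num_levels(w_max):
    """Maximum number of quantization levels: P = floor(log3(w_max / 3)) + 1."""
    return math.floor(math.log(w_max / 3) / math.log(3)) + 1

def starting_thresholds(T):
    """Starting thresholds of the two channels at the coarsest level p = P:
    T_1^P = 2T and T_2^P = T."""
    return 2 * T, T
```

For instance, a maximum coefficient magnitude of 100 gives $P = 4$ quantization levels for this base-3 instantiation.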

Enhanced MD-QT image codec
This section introduces an improved version of the MD-QT approach presented in Section 3.1. To start with, it is important to observe that the MD-QT coder described above applies the MD coding paradigm only at the level of producing multiple quantized (and coded) representations of the wavelet coefficients. However, the output bitstreams are not composed solely of this type of information, but also of additional localization (or QT) information. Practically, the QT information is the data stored in the output stream generated by encoding the locations of the significant leaf nodes, as obtained by (11) for each applied threshold. Motivated by this observation, we extend in this section the MD paradigm to all types of information generated by the coder, thus providing an enhanced version of MD-QT coding. As demonstrated experimentally, in the case of erasures this leads to a systematically better rate-distortion performance in comparison to the MD-QT coder presented in Section 3.1.
To justify this design approach, let us analyze the types of information generated by the proposed MD-QT algorithm. The principle behind MD-QT is to generate descriptions that are robust over erasure channels by producing multiple descriptions of the input data. The MD capability is attained by employing EMDSQ. The level of redundancy between the side quantizers' outputs can be adjusted via the index assignment (IA) of the EMDSQ [26] and is related to the channel's probability of failure.
At every quantization level, each MD output stream is composed of the following encoded symbols: the quantized information, the sign information, and the QT information. The quantized and sign information are generated during the significance and refinement passes. For each quantization symbol resulting from the significance pass, there is a corresponding piece of QT information that serves to localize the position of the corresponding sample in the wavelet-transformed image.
The distribution of the symbols generated for a single coding step, corresponding to one quantization level $p$, is illustrated in Figure 3. Additionally, Table 1 gives the relative percentage of each symbol type within the total number of symbols for each of the first five coding steps, corresponding to the quantization levels $p$, $P - 4 \leq p \leq P$.
The numbers are calculated by averaging, for each coding step, the number of symbols obtained by encoding a set of ten images. Notice that for the first coding step there are no refinement symbols, and that the number of sign symbols equals the number of quantization symbols. This experiment reveals that at least half of the information contained in the output stream at each coding pass consists of QT symbols. Now, let us consider that each of the descriptions is separately packetized and transmitted over the network. In the case of an erasure at the level of the quantization symbols (both significance and refinement symbols), these symbols can be recovered with a certain fidelity from the received description at the decoder side. In the case of lost sign data in one description, the second description provides complete recovery, since the sign information is completely redundant in each description. However, the QT information needs to be protected against potential errors as well. In fact, compared to the impact of errors occurring on the quantization information, errors that occur at the level of the QT information have a greater impact on the decoded data. Indeed, if there is no mechanism that correlates the QT information among the different descriptions, then there is no way to recover QT data from a received description.
From the above discussion, the following conclusions can be drawn.
(i) Despite the fact that multiple descriptions are provided by the system of Section 3.1, the MD principles are limited to the quantization and sign information. The information represented by the QT symbols is present in all descriptions, but it is not correlated, thus making it impossible to recover such data in case of erasures. (ii) The QT symbols account for the largest share of the information generated in each coding pass. Therefore, if the channel is characterized by a uniformly distributed probability of failure, there is a greater likelihood that the QT information will be corrupted.
(Figure caption: an example with granular region ranging from 0 to 26; the significance-map coding is performed for each of the two descriptions with respect to the set of thresholds $(T_1^P, T_2^P), \ldots, (T_1^0, T_2^0)$.)
In the following, we present a new QT encoding approach that leads to redundant QT information among the resulting descriptions. The starting point in this design is the coding scheme described in Section 3.1. Similar to this scheme, each coding step is composed of significance and refinement passes, with the exception of the first step, corresponding to the coarsest quantization level, in which only the significance pass is performed.
It is important to observe that in order to produce similar QT (i.e., localization) information for all descriptions, we need to apply a common set of significance thresholds when constructing the descriptions. A simple solution to obtain such a set is to combine the different sets of significance thresholds used to generate each distinct description in the MD-QT scheme of Section 3.1. Consider, for example, the EMDSQ instantiation depicted in Figure 2. In this case, the first set of thresholds is $T_1^P, T_1^{P-1}, \ldots, T_1^0$ and the second set is $T_2^P, T_2^{P-1}, \ldots, T_2^0$, where $T_m^p$ is given by (16). The common set of thresholds is then of the form $(T_1^P, T_2^P), (T_1^{P-1}, T_2^{P-1}), \ldots, (T_1^0, T_2^0)$. Figure 4 depicts the same EMDSQ instantiation as Figure 2. It is noticeable that in the case of enhanced MD-QT, two thresholds correspond to each quantization level. This results in two significance passes and one refinement pass for each coding step. In general, by following this approach of merging the sets of significance thresholds, the algorithm performs $M$ significance passes and one refinement pass for every coding stage $p$, $p < P$.
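Merging the per-description threshold sets into the common set amounts to a level-by-level pairing. A minimal sketch (the threshold values shown are hypothetical, loosely following the factor-3 spacing of the instantiation above):

```python
def merged_threshold_sets(sets):
    """Combine per-description significance-threshold sets into the single
    common set applied to every description (enhanced MD-QT). Each element
    of `sets` lists one description's thresholds from coarsest to finest;
    the result pairs the thresholds level by level."""
    return list(zip(*sets))

# Two descriptions (values hypothetical): the common set pairs the
# thresholds of each level, yielding M = 2 significance passes per step.
t1 = [36, 12, 4]   # T_1^P, ..., T_1^0
t2 = [18, 6, 2]    # T_2^P, ..., T_2^0
common = merged_threshold_sets([t1, t2])
```

Each tuple in the common set then drives one coding step, with one significance pass per threshold in the tuple followed by a single refinement pass.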

EXPERIMENTAL RESULTS
In this section, we present experimental results testing different aspects of the proposed approach. In all the experiments presented in this section, the MD coding system provides two descriptions.
The first set of experiments focuses on (i) the redundancy control mechanism and (ii) the comparative performance assessment between instantiations of EMDSQ and the state-of-the-art MDUSQ of [14]. In order to demonstrate the redundancy control mechanism, three instantiations of EMDSQ at different overall redundancy levels are employed in the MD-QT coding system. The MD-QT yielding $\rho = 0.9$ employs an EMDSQ instantiation that corresponds to a two-diagonal embedded IA [26, 27]; notice that the EMDSQ cells are not disconnected in this case. The MD-QT yielding a total redundancy of $\rho = 0.4$ employs an EMDSQ instantiation that corresponds to a two-diagonal embedded IA for all quantization levels except the finest one ($p = 0$) [8, 11]. For the finest quantization level, we allocate no redundancy between the two descriptions; following the notations of Section 2.2, this can be written as $N^0 = (L^0)^2$. Notice that for this last quantization level, EMDSQ employs disconnected cells [8, 11]. Finally, for the third EMDSQ instantiation we allocate no redundancy at all for the final two quantization levels, that is, $N^0 = (L^0)^2$ and $N^1 = (L^1)^2$, resulting in a total redundancy of $\rho = 0.3$ [8, 11].
Figure 5 depicts the rate-distortion behavior of the MD-QT central decoder employing the above-mentioned EMDSQ instantiations. The experiments demonstrate that both the overall redundancy and the redundancy per quantization level can be controlled. Additionally, (9) suggests that one can adapt the EMDSQ in order to provide an unequal error protection scheme in which the redundancy is tuned according to the importance of the layer being encoded; for example, more redundancy can be allocated to the base layer (corresponding to the coarser quantization levels) and less to the enhancement layers, corresponding to the finer quantization levels.
Additionally, Figure 5 shows the rate-distortion results obtained with MD-QT incorporating MDUSQ at an overall redundancy of $\rho = 0.9$. It is important to notice that EMDSQ makes it possible to control the redundancy at each distinct quantization level, while MDUSQ does not feature this important control mechanism. These results also show that at the same redundancy level, EMDSQ outperforms the state of the art over the whole range of rates.
Finally, Figure 5 depicts the rate-distortion results of the corresponding SD coder in order to allow for a comparison with the corresponding MD coder in the case of error-free channels. It is noticeable that in applications requiring a lower protection level (reflected in less redundancy between the two descriptions), the rate-distortion gap between the SD and MD coders can be narrowed.
The goal of the second set of experiments is to evaluate the performance of the proposed MD-QT codecs operating in a realistic data communication scenario. Transmission systems increasingly rely on packetization techniques. Therefore, we assess how well our error-resilient MD-QT systems cope with packet losses in comparison to the equivalent single-description coding (SDC) system based on QT coding.
For this purpose, the output stream is divided into packets that are transmitted via the channel. We assume that the employed packetization method provides packets with a number of payload bytes representing the coded data, plus sufficient header information. The header information allows detecting the lost packets at the decoder side, so that the correct sequencing of the remaining packets can be maintained. In the following experiments, we chose a packet payload of 640 bytes. For each probability of loss, all possible erasure patterns are explored and the resulting PSNR values are averaged, as described next.
Let us consider a certain number of packets $N$ to be transmitted, and let $p_L$ be the average packet-loss probability. The average number of lost packets is $k = p_L \cdot N$, and there are $\binom{N}{k}$ ways to lose $k$ packets out of $N$. The average distortion is then calculated by measuring and averaging the MSE over all possible combinations. Three sets of experiments are performed on two standard images, the $512 \times 512$ gray-scale Lena and Barbara images, which have been compressed using (i) the simple and (ii) the enhanced MD-QT codecs employing EMDSQ, and (iii) the equivalent single-description SQP coder, employing successive approximation quantization.
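The averaging procedure can be sketched as an exhaustive enumeration of loss patterns; `decode_mse` stands in for the actual decoder and is a hypothetical callback, not part of the paper:

```python
from itertools import combinations
from math import comb

def average_mse(packets, p_loss, decode_mse):
    """Average distortion under an average packet-loss probability p_loss:
    k = round(p_loss * N) packets are lost, and the MSE returned by the
    caller-supplied decode_mse (a hypothetical stand-in for the decoder)
    is averaged over all C(N, k) loss patterns."""
    N = len(packets)
    k = round(p_loss * N)
    total = sum(decode_mse([p for i, p in enumerate(packets)
                            if i not in set(lost)])
                for lost in combinations(range(N), k))
    return total / comb(N, k)
```

Exhaustive enumeration is feasible for the packet counts used here (e.g., $\binom{14}{5} = 2002$ patterns); for larger $N$, random sampling of loss patterns would be the natural substitute.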
In a first round of experiments, we transmit $N = 14$ packets, corresponding to a coding rate of 0.27 bits per pixel (bpp). For this test, we use a channel model with a loss rate varying between 0% and 35%. Although 35% may seem high for current networks, such high loss rates commonly occur in wireless networks and on the Internet at peak times. The results given in Figure 6 show that in the error-free case, the SDC system (i.e., SQP) provides better performance. On the other hand, in the presence of errors, even with a small probability of failure, the SDC system experiences a large drop in performance. This justifies the need for MD coding and demonstrates the robustness of the proposed approach over a broad range of packet-loss rates.
These results also demonstrate that generating common QT information for both descriptions by using a common significance threshold set comes at practically no cost in the error-free case, and significantly improves the performance in the error-prone case. Finally, it is important to point out that these results are systematically observed on a broad set of test images [26].
The results for the second experiment are presented in Figure 7 and compare the progressive transmission capability (i.e., quality scalability) of MD-QT versus SQP in the context of communication over error-prone channels. The vertical axis represents the average PSNR obtained over all possible loss patterns, while the horizontal axis represents the number of received packets.
For this second experiment, we send a total of 26 packets to represent the Lena image (corresponding to a coding rate of 0.51 bpp) and 40 packets for the Barbara image (corresponding to 0.78 bpp). In Figure 7, a probability of failure of around 4% is considered. Several conclusions can be drawn from this experiment. First, we notice that the MD-QT coding system is a robust progressive transmission system, since it allows the image quality to increase with each packet received at the decoder side, even in the presence of erasures. Second, even for a small error rate and a small number of received packets, the MD-QT coder outperforms the SQP coder by a large margin. Finally, for MD-QT the distortion decreases monotonically with the number of received packets, while that of SQP is characterized by a plateau effect. This effect indicates that the average SDC performance can no longer be improved, as the probability of correctly receiving an increasing number of packets diminishes drastically.
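The plateau effect can be illustrated with a simple probabilistic sketch (an assumption for illustration, not a model from the paper): if packet losses are i.i.d. with rate p, the probability that an entire m-packet prefix arrives intact is (1 − p)^m, so a single-description decoder that depends on an intact prefix benefits less and less from sending more packets, whereas an MD decoder can exploit any received subset.

```python
# Hedged illustration of the plateau effect: under i.i.d. packet losses
# with rate p, the probability that the first m packets all arrive intact
# decays geometrically as (1 - p)**m.

def prob_intact_prefix(p_loss: float, m: int) -> float:
    """Probability that the first m packets all arrive (i.i.d. losses)."""
    return (1.0 - p_loss) ** m

# At the experiment's 4% loss rate, an intact 26-packet prefix is already
# unlikely: prob_intact_prefix(0.04, 26) is roughly one in three.
```

This is why, past a certain point, sending additional packets no longer improves the average SDC performance.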

DISCUSSION AND CONCLUSIONS
This paper presents a new class of scalable erasure-resilient image codecs. In the proposed approach, scalability and packet-erasure resilience are jointly provided via EMDSQ. A scalable multiple-description image coding system (MD-QT) relying on quadtree coding of the EMDSQ output is presented, along with an enhanced version of it. It is found experimentally that extending the MD paradigm by generating common localization information across descriptions comes at practically no cost in the error-free case, and significantly improves the performance in the error-prone case.
The advantages of both MD coding systems are demonstrated in the context of image transmission over packet-lossy networks. The experimental results demonstrate that both the overall redundancy and the redundancy per quantization level can be controlled. A comparative performance assessment between instantiations of EMDSQ and the state-of-the-art MDUSQ of [14], both incorporated in a common MD-QT coding system, is performed. The experimental results demonstrate that, at the same redundancy level, EMDSQ outperforms the state of the art over the whole range of rates.
Finally, we notice that even for a small error rate and a small number of transmitted packets, the MD-QT codec outperforms the single-description coding equivalent (SQP) by a large margin. Moreover, for the MD-QT codec, the distortion decreases monotonically with the number of received packets, while the SQP codec is characterized by a plateau effect. This justifies the need for MD coding and demonstrates the robustness of the proposed approach over a broad range of packet-loss rates.
These results show that, when transmitting over reliable links, the coding penalty associated with the proposed MD approaches versus single-description coding is controllable and can be reduced by reducing the overall redundancy. In other words, the "cost" of MDC can become negligible, while preserving significant benefits when transmitting over error-prone channels.
Pursuant to this definition and based on the conditions above, for any of the block matrices B^{p+1}_{j1,j2} = [B^p_{i1,i2}]_{1 ≤ i_m ≤ L_p}, the number of blocks B^p_{j1,j2} = [0] is constant and is given by N_p = nzb(B^{p+1}_{j1,j2}, p). Additionally, at level P, N_P = nzb(M, P) represents the number of blocks B^P_{j1,j2} = [0] within M. The total number of indices mapped in each block

Figure 2: Example of a three-level representation of Q^p_m for two-description EMDSQ (M = 2, 0 ≤ p ≤ 2) with granular region ranging from 0 to 26. The significance-map coding is performed with respect to the set of thresholds T^p_m, as defined by (16).

Figure 3: Distribution of the symbols generated in a coding pass of an MD-QT coding system.

Figure 5: Comparative central rate-distortion performance for SDC and for MD-QT incorporating EMDSQ at different overall redundancies and the MDUSQ of [7], applied to the (a) Lena and (b) Goldhill images.

Figure 6: Effect of packet loss on average PSNR over all loss patterns for 14 transmitted packets and a probability of loss varying between 0% and 35%, for (a) Lena and (b) Barbara. The employed coders are the simple MD-QT (MDC), the enhanced MD-QT (MDC-enh), and the corresponding SDC version, SQP.

Figure 7: Effect of 4% packet loss on average PSNR for progressive transmission of (a) Lena and (b) Barbara. The employed coders are the MD-QT (MDC) and the corresponding SDC version, SQP.

Table 1: Distribution of the coding symbols in a bitstream generated in the first five coding passes of a wavelet-based MD-QT coding system.