EURASIP Journal on Applied Signal Processing 2005:6, 784–794 © 2005 Hindawi Publishing Corporation

On Rate-Compatible Punctured Turbo Codes Design

We propose and compare several design criteria for finding good systematic rate-compatible punctured turbo code (RCPTC) families. The considerations presented by S. Benedetto et al. (1998) to find the "best" component encoders for turbo code construction are extended to find good rate-compatible puncturing patterns for a given interleaver length. This approach is shown to lead to codes that improve over previous ones, both in the maximum-likelihood sense (using transfer function bounds) and in the iterative decoding sense (through simulation results). To obtain simulation and analytical results, the coded bits are transmitted over an additive white Gaussian noise (AWGN) channel using antipodal binary modulation. The two main applications of this technique are its use in hybrid incremental ARQ/FEC schemes and its use to achieve unequal error protection of an information sequence.


INTRODUCTION
In this paper, we propose a new criterion for the choice of the puncturing patterns, based on the analytical technique introduced in [1], that leads to systematic rate-compatible codes improving over known ones with respect to both maximum-likelihood and iterative decoding criteria.
The concept of rate-compatible codes was first presented in [2], where a particular family of convolutional codes, called rate-compatible punctured convolutional codes, is obtained by adding a rate-compatibility restriction to the puncturing rule. This restriction requires that the rates be organized in a hierarchy, where all coded bits of a higher-rate code are used by all lower-rate codes; in other words, the high-rate codes are embedded into the lower-rate codes of the family. The concept of rate-compatible codes has been extended to turbo codes in [3, 4]. Design criteria for the puncturing patterns have subsequently appeared in [5, 6].
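The embedding condition that defines rate compatibility can be stated as a simple mechanical check. The sketch below (with an invented three-pattern family over six mother-code output positions, not taken from any table in the paper) verifies that every bit transmitted by a higher-rate code is also transmitted by all lower-rate codes:

```python
def is_rate_compatible(patterns):
    """Check the rate-compatibility restriction on a list of puncturing
    patterns, ordered from highest code rate to lowest.  Each pattern is
    a tuple of 0/1 flags over the mother-code output positions
    (1 = transmit, 0 = puncture).  Compatibility requires every bit
    transmitted by a higher-rate code to also be transmitted by every
    lower-rate code of the family."""
    for high, low in zip(patterns, patterns[1:]):
        if any(h == 1 and l == 0 for h, l in zip(high, low)):
            return False
    return True

# Hypothetical 3-pattern family over 6 mother-code output positions:
family = [
    (1, 0, 0, 1, 0, 0),   # highest rate: 2 of 6 bits sent
    (1, 1, 0, 1, 0, 0),   # adds one previously punctured bit
    (1, 1, 1, 1, 1, 1),   # mother code: everything sent
]
print(is_rate_compatible(family))   # -> True
```

Dropping a previously transmitted bit at a lower rate would make the function return False, since the embedding hierarchy would be broken.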
The two main applications of this technique are the following.

Modified type-II automatic repeat request/forward-error correction (ARQ/FEC) schemes
The principle of this hybrid ARQ/FEC scheme [7] is not to repeat information or parity bits when a transmission is unsuccessful, as in previous type-II ARQ/FEC schemes, but to transmit additional code bits of a lower-rate code, until the code is powerful enough to achieve error-free decoding. Namely, if the higher-rate codes are not sufficiently powerful to correct channel errors, only supplemental bits, which were previously punctured, have to be transmitted in order to upgrade the code. This implies several decoding attempts at the receiver.
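The incremental-redundancy protocol described above can be sketched as a simple transmitter-side control loop; the function and the toy `decode` callback below are illustrative, not part of any standard:

```python
def hybrid_arq_transmit(increments, decode):
    """Type-II hybrid ARQ/FEC with incremental redundancy, as a sketch.
    `increments` lists the code-bit blocks of the RCPTC family, from the
    highest-rate code downwards; `decode` is a callback that attempts
    decoding on all bits received so far and returns True on success.
    Returns the number of transmissions used, or None if even the
    mother code fails."""
    received = []
    for attempt, block in enumerate(increments, start=1):
        received.extend(block)   # send only new, previously punctured bits
        if decode(received):     # one decoding attempt per transmission
            return attempt
    return None

# Toy example: decoding "succeeds" once at least 4 bits are available.
used = hybrid_arq_transmit([[0, 1, 2], [3], [4]], lambda r: len(r) >= 4)
print(used)   # -> 2
```

The essential point is that each retransmission sends only the bits punctured by the current code, which is exactly what the rate-compatibility restriction makes possible.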

Unequal error protection (UEP)
Since the codes are compatible, the rate can be varied within a data frame to achieve unequal error protection: this is required when different levels of error protection (i.e., different code rates) are needed for different parts (or blocks) of an information sequence (see, e.g., [2] and the examples therein).
The paper is organized as follows. Section 2 presents an overview of RCPTCs. In Section 3, the design criteria for the search of good systematic RCPTC families are outlined. In Section 4, their performance is addressed. Finally, Section 5 summarizes the main results and gives some conclusions.

AN OVERVIEW OF RATE-COMPATIBLE PUNCTURED TURBO CODES
The first proposal of RCPTCs was introduced in [3] to achieve unequal error protection: parallel concatenated convolutional codes (PCCC) with two constituent encoders were described, with rate variable from 1/2 to 1/3 using the same mother encoder. The idea was extended in [4] to multidimensional turbo codes to be used in hybrid FEC/ARQ protocols, with rate variable from 1 to 1/M, where M − 1 is the number of constituent encoders. The rate variation is achieved by puncturing, with M puncturing matrices, an underlying rate 1/M turbo encoder, consisting of one rate 1/2 recursive systematic convolutional (RSC) encoder concatenated in parallel with M − 2 rate 1 RSCs. The puncturing scheme is periodic, but not limited to parity bits, so that both systematic and partially systematic RCPTCs can be obtained.
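The rate obtained from a given set of puncturing matrices follows directly from counting the surviving bits per puncturing period. A minimal sketch, with an invented period-4 example for a rate-1/3 mother code (M = 3):

```python
def punctured_rate(k, puncturing_matrices):
    """Code rate of a rate-1/M turbo code after puncturing.  `k` is the
    number of information bits per puncturing period; each matrix is a
    length-k tuple of 0/1 flags for one output stream (systematic or
    parity), so M streams in total.  The rate is k over the total
    number of transmitted (unpunctured) bits per period."""
    transmitted = sum(sum(stream) for stream in puncturing_matrices)
    return k / transmitted

# Rate-1/3 mother code (M = 3), period k = 4, illustrative matrices:
systematic = (1, 1, 1, 1)   # keep all information bits (systematic RCPTC)
parity1    = (1, 0, 1, 0)   # half of the first-encoder parity bits
parity2    = (0, 1, 0, 1)   # half of the second-encoder parity bits
print(punctured_rate(4, [systematic, parity1, parity2]))   # -> 0.5
```

With no puncturing at all, the same function returns the mother-code rate 1/3.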

Design criteria
The specification of an RCPTC consists in finding suitable mother encoder(s), the interleaver(s), and the puncturing patterns to obtain the desired code rate range. The first paper dealing with design criteria is [5], where both systematic and partially systematic RCPTCs were considered. The mother code was selected to be a rate 1/3 PCCC. The design method in [5] included the following three consecutive steps.
(1) First, the constituent encoders were selected among those yielding good performance at low signal-to-noise ratios, with particular attention to decoding complexity. The final choice was the optimum 4-state recursive systematic encoder [1].
(2) Next, the turbo-code interleaver was designed based on the codeword weight distribution and on the achievable performance on the additive white Gaussian noise (AWGN) channel, using a maximum-likelihood approach. The selected interleaver was based on Berrou's approach [8].
(3) Finally, the puncturing schemes were selected based again on both the weight distribution and the achievable performance on the AWGN channel. Both cases of systematic and partially systematic RCPTCs were addressed.
In [9], design criteria for rate R = k/(k + 1) (2 ≤ k ≤ 16) punctured turbo codes were given in detail, deriving high-rate codes by puncturing a basic rate 1/3 PCCC. To obtain a code rate of k/(k + 1), only one parity bit is transmitted for every k information bits presented to the encoder input. The rates of the two constituent encoders after puncturing are assumed to be the same, and the transmitted parity bits alternate between the two encoders. Therefore, for every 2k input bits, only two parity bits are transmitted by the puncturing scheme, one from each of the two constituent encoders (there are some exceptions to this rule, i.e., for some rates and memory sizes, puncturers with period other than 2k are needed). The design parameters are (1) the generator polynomials, (2) the interleaver I, (3) the puncturing pattern P.
Since weight-two and weight-three inputs and their multiplicities, N_2 and N_3, are assumed to dominate the performance, the design criterion is the maximization of d_2 and d_3 (i.e., the minimum turbo-codeword weight for weight-2 and weight-3 inputs, respectively) and the minimization of N_2 and N_3 over the above parameters. In the paper, the authors also suggested how to obtain a chain of RCPTCs with rates V = {1/3, 1/2, 2/3, 4/5, 8/9, 16/17}, starting from a puncturing period of 32 bits which is halved when passing from one rate to the next lower rate. In this operation, the parity bits surviving at one rate are kept for the following rates. With this technique, however, only rates of the kind k/(k + 1), k = 2^i, are possible.
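The period-halving construction of [9] yields the rate chain mechanically; a sketch (the function name and its interface are ours, and rates are returned as numerator/denominator pairs from highest to lowest):

```python
def rate_chain(max_k):
    """Chain of RCPTC rates k/(k + 1) obtained as in [9]: start from
    k = `max_k` (puncturing period 2k information bits with one parity
    bit transmitted per k bits), halve k down to 2, and finish with the
    rate-1/2 and rate-1/3 members.  Only k = 2**i is reachable."""
    rates = []
    k = max_k
    while k >= 2:
        rates.append((k, k + 1))   # rate k/(k + 1)
        k //= 2                    # halving the puncturing period
    rates.append((1, 2))           # rate 1/2
    rates.append((1, 3))           # rate 1/3 mother code
    return rates

print(rate_chain(16))
# -> [(16, 17), (8, 9), (4, 5), (2, 3), (1, 2), (1, 3)]
```

For max_k = 16, this reproduces exactly the rate set V quoted above, ordered from the highest rate to the mother code.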
In [6], the authors propose criteria for designing puncturing patterns applicable to multidimensional PCCCs with rate variable from 1 to 1/M, where M − 1 is the number of constituent encoders. Owing to the application they are interested in (hybrid ARQ techniques), the authors propose as the design criterion the minimization of the slope of the average distance spectrum limited to the first 30 codeword weights.
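The minimum slope criterion of [6] amounts to a least-squares fit on the (logarithmic) distance spectrum. A stdlib-only sketch, with invented exponential spectra standing in for real weight enumerating functions:

```python
import math

def spectrum_slope(multiplicities, n_terms=30):
    """Least-squares slope of log(multiplicity) versus codeword weight
    over the first `n_terms` nonzero spectrum terms -- a sketch of the
    minimum slope criterion of [6].  `multiplicities` is a list of
    (weight, multiplicity) pairs; a smaller slope means a more slowly
    growing distance spectrum."""
    pts = [(d, math.log(a)) for d, a in multiplicities[:n_terms] if a > 0]
    n = len(pts)
    sx = sum(d for d, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(d * d for d, _ in pts)
    sxy = sum(d * y for d, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Hypothetical spectra: code B's multiplicities grow more slowly.
spec_a = [(d, math.exp(0.40 * d)) for d in range(3, 40)]
spec_b = [(d, math.exp(0.25 * d)) for d in range(3, 40)]
print(spectrum_slope(spec_b) < spectrum_slope(spec_a))   # -> True
```

Under this criterion, the candidate pattern producing the spectrum with the smaller fitted slope would be preferred.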

THE NEW DESIGN CRITERION
The design of a turbo-like code using two constituent encoders and one interleaver involves the choice of the interleaver and the constituent encoders. The joint optimization, however, seems to lead to prohibitive complexity. The only practical way to achieve good results seems to be a decoupled design, in which one first designs the constituent encoders and then tailors the interleaver to their characteristics. To achieve this goal, a uniform interleaver approach has been proposed in [10], where the authors suggested replacing the actual interleaver with the average interleaver. Following this approach, the best constituent encoders for turbo code construction are found in [1]. In this paper, as in [6], we will base our design on the uniform interleaver approach.
For an RCPTC, the code choice consists essentially in finding the puncturing patterns satisfying some optimality criteria subject to the compatibility constraint. We discuss here the following design criteria for the puncturing patterns based on the input-output weight enumerating function (IOWEF) of the RCPTC employing a uniform interleaver [10].
Free-distance criterion. Select the candidate puncturing pattern yielding the largest free distance (defined as the minimum output weight of the RCPTC [11]).
Minimum slope criterion [6]. Fit a regression line to the first 30, or so, terms of the output weight enumerating function. The slope of this fitted line represents a measure of the rate of growth of the weight enumerating function (WEF) with the output distance d. Select the candidate puncturing pattern yielding the minimum slope.
Optimization of the sequence (d_w, N_w). Denote by d_w the minimum weight of codewords generated by input words of weight w, and by N_w the number of nearest neighbors (multiplicity) at weight d_w. Determine, as in [1], the pairs (d_w, N_w) for w = 2, . . . , w_max. Select the candidate yielding the optimum values for (d_w, N_w), that is, the one which sequentially optimizes the pairs (d_w, N_w) (first d_w is maximized and then N_w is minimized).
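The sequential optimization of the pairs (d_w, N_w) is a lexicographic comparison, which a sort key captures directly; the candidate spectra below are invented for illustration:

```python
def spectrum_key(pairs):
    """Sort key implementing the sequential optimization of the pairs
    (d_w, N_w) for w = 2, ..., w_max: weight by weight, first maximize
    d_w, then minimize N_w.  `pairs` lists the (d_w, N_w) pair for each
    input weight w.  A smaller key means a better candidate."""
    return tuple((-d_w, n_w) for d_w, n_w in pairs)

# Three hypothetical candidate puncturing patterns, pairs for w = 2, 3:
candidates = {
    "A": [(6, 4), (7, 2)],
    "B": [(8, 9), (7, 2)],   # larger d_2 wins despite larger N_2
    "C": [(8, 3), (7, 2)],   # same d_2 as B, smaller N_2: best
}
best = min(candidates, key=lambda name: spectrum_key(candidates[name]))
print(best)   # -> C
```

Because d_w is compared before N_w, a candidate with a larger d_2 always beats one with a smaller d_2, regardless of multiplicities, matching the criterion as stated.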
The third criterion, introduced in this work, is compared with the other two criteria, previously introduced in the literature (see, e.g., [6]). This analysis is done by comparing the residual bit error rates (BERs) and frame error rates (FERs) of the RCPTCs obtained by applying the three criteria. The third criterion is expected to give promising results, like those obtained in [1], where it was applied to find good constituent convolutional codes for the construction of turbo codes. Its advantage over the other two criteria is that it can also be applied separately to the IOWEF of the constituent encoders, by extending the considerations presented in [1] to the search for the "best" rate-compatible puncturing patterns, given the interleaver size N. This feature leads to a dramatic reduction of the computational complexity of the third criterion with respect to the complexity associated with the first two.
For each of the above-mentioned criteria, several assumptions can be made, and each of them should be discussed.
(1) Information bits may be punctured or not, leading to a partially systematic or to a systematic punctured code, respectively.
(2) The puncturing pattern may be periodic or not: in the second case, of course, the optimal puncturing pattern search is more general, even if computationally heavier.
(3) The puncturing pattern may be homogeneous or not.
Namely, there are two sets of parity bits: those at the output of the first constituent code (CC1) and those at the output of the second constituent code (CC2). When we perform a homogeneous puncturing, the punctured bits are spread evenly among CC1 and CC2 parity bits; namely, CC1 and CC2 parity bits are punctured with the same percentage. When a nonhomogeneous puncturing is performed, the punctured bits are not spread evenly among CC1 and CC2 parity bits; namely, CC1 and CC2 parity bits are punctured in different percentages. To obtain a partially systematic RCPTC, systematic bits are also punctured. In this case, when we perform a homogeneous (nonhomogeneous) puncturing, the punctured bits are (are not) spread evenly among systematic, CC1, and CC2 parity bits.
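A homogeneous pattern can be recognized by comparing the puncturing fractions of the two parity streams; a minimal sketch with illustrative length-8 patterns:

```python
def puncturing_fractions(pattern_cc1, pattern_cc2):
    """Fraction of punctured bits in each parity stream (CC1 and CC2),
    where 1 = transmit and 0 = puncture.  A puncturing is homogeneous
    when the two fractions are equal."""
    frac = lambda p: p.count(0) / len(p)
    return frac(pattern_cc1), frac(pattern_cc2)

# Illustrative length-8 parity patterns:
f1, f2 = puncturing_fractions([1, 0, 1, 0, 1, 0, 1, 0],
                              [0, 1, 0, 1, 0, 1, 0, 1])
print(f1 == f2)   # homogeneous: both streams 50% punctured -> True
```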
In this work, we focus on the search for good rate-compatible systematic punctured codes. Rate-compatible partially systematic punctured codes are left for future work. However, notice that when puncturing also applies to information bits, the invertibility of the RCPTC has to be guaranteed, that is, the existence of a one-to-one correspondence between information and encoded sequences. In fact, since one important application of this technique is its use in variable-redundancy hybrid ARQ schemes, it is desirable to split the encoded sequence into subsequences to be sent in successive transmissions. A basic requirement is that the first subsequence must permit the recovery of the original information in case of no errors [13].

RESULTS AND COMPARISONS AMONG DIFFERENT CRITERIA
A family of rate-compatible codes is usually described [2, 6] by the mother code of rate R = 1/M and the puncturing period P, which determines the range of available code rates, from P/(P + 1) down to 1/M. The rate-compatible codes are obtained from the mother code with puncturing matrices a(l) = (a_ij(l)), where a_ij ∈ {0, 1} and 0 means puncturing. The rate-compatibility restriction implies the following rule: if a_ij(l_0) = 1, then a_ij(l) = 1 for all l ≥ l_0 ≥ 1 or, equivalently, if a_ij(l_0) = 0, then a_ij(l) = 0 for all l ≤ l_0.

In this section, we compare through analysis and simulation the various design criteria previously described. They are applied to find a family of systematic RCPTCs based on a rate 1/3 mother PCCC, obtained by concatenating in parallel an 8-state rate 1/2 convolutional encoder and a rate 1 convolutional encoder. The resulting mother code uses the component codes specified in the WCDMA and CDMA2000 standards [14]. A general approach to the optimal puncturing search has been followed, including periodic and nonperiodic, as well as homogeneous and nonhomogeneous, patterns.
The algorithm to find a well-performing puncturing pattern (where "well-performing" is intended according to one of the three criteria mentioned in Section 3) works sequentially, puncturing one bit at a time in the optimal position (i.e., the position giving the best code performance from the point of view of the criterion applied), subject to the constraint of rate compatibility. This sequential puncturing starts from the lowest-rate code (i.e., the mother code) and ends at the highest possible rate.
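The sequential search can be sketched as a greedy loop; note that because a punctured bit is never restored, the intermediate patterns form a rate-compatible family by construction. The cost function below is a toy stand-in for the spectrum-based criteria of Section 3:

```python
def greedy_puncturing(n_bits, n_punctures, cost):
    """Greedy bit-by-bit puncturing search: starting from the
    unpunctured mother code, remove one bit at a time at the position
    that minimizes `cost(pattern)` (the chosen design criterion
    evaluated on the candidate pattern).  Since punctured bits are
    never restored, every prefix of the search yields a rate-compatible
    family by construction.  Returns the patterns from lowest to
    highest rate."""
    pattern = [1] * n_bits
    family = [tuple(pattern)]
    for _ in range(n_punctures):
        best_pos = min(
            (i for i in range(n_bits) if pattern[i] == 1),
            key=lambda i: cost(tuple(pattern[:i] + [0] + pattern[i + 1:])),
        )
        pattern[best_pos] = 0
        family.append(tuple(pattern))
    return family

def adjacency_cost(p):
    """Toy criterion standing in for a real spectrum-based cost:
    penalize adjacent punctured positions."""
    return sum(p[i] == 0 and p[i + 1] == 0 for i in range(len(p) - 1))

family = greedy_puncturing(6, 3, adjacency_cost)
print(family[-1])   # -> (0, 1, 0, 1, 0, 1)
```

In the paper's setting, `cost` would be the free-distance, minimum slope, or (d_w, N_w) evaluation of the candidate RCPTC, which is where all the computational effort lies.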
To compare the relative merits of the different design criteria, we have simulated the performance of the resulting RCPTCs using a random interleaver (with size 100 and, in one case, 1000) and 10 decoding iterations (curves with empty markers). The interleaver used in the simulation is selected randomly frame by frame: in this sense, it amounts to averaging simulation results over many randomly chosen interleavers. This choice has been made because the uniform interleaver approach has been applied to design the puncturing patterns: the optimality of a given pattern refers not to a particular interleaving scheme, but to the average distance spectrum of the code obtained by applying that pattern. Thus, the simulation is performed not with a specific interleaving scheme but with a random interleaver, which also brings the simulation results closer to the union bound [15]. Moreover, since we are interested in comparing the three criteria considered, a short interleaver length, such as 100, has been chosen. This is because, as the interleaver size increases, the computational complexity needed to apply the first two criteria becomes more and more prohibitive. In fact, only the new criterion introduced in this paper (the third one), if applied separately to the IOWEF of the constituent encoders, can easily be extended to longer interleaver lengths.
Finally, we have evaluated the analytical upper bounds on the bit error probability [16] based on maximum-likelihood soft decoding [11] (curves with filled markers). For both simulation and analytical results, the coded bits are transmitted over an additive white Gaussian noise channel using antipodal binary modulation [11].
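The simulation setup amounts to scaling the Gaussian noise according to the code rate: with unit symbol energy, E_s = R_c E_b, so the per-dimension noise variance is N_0/2 = 1/(2 R_c E_b/N_0). A sketch of the channel model (the helper name and interface are ours):

```python
import math
import random

def awgn_bpsk(bits, rate, ebn0_db, seed=1):
    """Transmit coded bits with antipodal (BPSK) modulation over AWGN.
    With unit symbol energy E_s = 1 and E_s = R_c * E_b, the noise
    standard deviation per dimension is
    sigma = sqrt(1 / (2 * R_c * Eb/N0))."""
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)            # dB -> linear
    sigma = math.sqrt(1 / (2 * rate * ebn0))
    return [(1.0 if b else -1.0) + rng.gauss(0, sigma) for b in bits]

bits = [1, 0] * 500
rx = awgn_bpsk(bits, 0.5, 12.0)
errors = sum((y > 0) != bool(b) for y, b in zip(rx, bits))
print(errors)   # very few hard-decision errors at 12 dB
```

A BER curve is then obtained by sweeping `ebn0_db` and counting post-decoder errors; here only the uncoded hard decisions are checked.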
A first set of simulation and bound results is shown in Figure 1, where we report the E_b/N_0 required to obtain a bit error rate (BER) of 10^−5 versus the RCPTC rate R_c, E_b/N_0 being the ratio of the received energy per bit (E_b) to the noise spectral density (N_0). The interleaver size is set to N = 100. Systematic RCPTCs are considered, that is, only parity check bits are punctured. The puncturing patterns are selected to be homogeneous.
The best performance is obtained by applying the optimization of the sequence (d_w, N_w) to the spectrum of the first component encoder, taken separately: the puncturing pattern for the whole turbo code scheme is obtained by applying the pattern found for the first constituent to the second-constituent check bits (dashed curves with "♦"). The best puncturing patterns for N = 100 are reported in Table 1 for some rates and are given, for each rate, in octal form for the first-constituent check bit positions going from 1 to N. Notice that the puncturing pattern search is performed bit by bit for the first-constituent check bits and then applied to the second-constituent check bits: thus, the pattern obtained is not only homogeneous but also symmetrical. Notice also that the mother code has an actual rate slightly lower than 1/3, since termination bits are included. Together with the puncturing patterns, we report, for each rate, the free distance d_free and its multiplicity N_free. We also report the effective distance d_f,eff, that is, the minimum Hamming weight of codewords generated by weight-2 information words, and its multiplicity N_f,eff.
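The octal notation used in the tables packs the pattern three positions per digit; the helper below is a sketch, and the grouping and padding conventions (left-to-right, zero-padding on the right) are our assumptions, not taken from the paper's tables:

```python
def pattern_to_octal(bits):
    """Render a puncturing pattern (sequence of 0/1 flags, position 1
    first) in octal form, three positions per digit.  NOTE: the
    left-to-right grouping and right zero-padding are assumed
    conventions for illustration."""
    bits = list(bits)
    while len(bits) % 3:           # pad on the right to a multiple of 3
        bits.append(0)
    return "".join(str(4 * a + 2 * b + c)
                   for a, b, c in zip(bits[0::3], bits[1::3], bits[2::3]))

print(pattern_to_octal([1, 1, 1, 1, 1, 1]))   # -> 77: no bit punctured
print(pattern_to_octal([1, 0, 1, 1, 1, 0]))   # -> 56
```

This explains why the unpunctured mother code appears in the tables as a run of 7s: every group of three transmitted positions encodes to the octal digit 7.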
An almost equivalent performance is obtained by applying the optimization of the sequence (d_w, N_w) to the spectrum of the whole parallel concatenated code, when puncturing homogeneously only the check bits (i.e., puncturing the first-constituent and the second-constituent check bits alternately). This performance is not shown in the figure, since the corresponding curve is almost superimposed on the best one.
We stress that the difference between applying the (d_w, N_w) sequence optimization to the spectrum of the first component encoder and applying it to the spectrum of the whole parallel concatenated code concerns not only the obtained performance, but also the computational complexity. Namely, since the best criterion found is based on the evaluation of the spectrum of the first component encoder only, its implementation requires a much lower computational complexity than when the spectrum of the whole parallel concatenated code has to be computed. Thus, this criterion is efficient not only in terms of performance but also computationally, and can easily be applied to longer interleaver lengths. The best puncturing patterns for N = 1000 are reported in Table 2 for some rates and are given, for each rate, in octal form for the first-constituent check bit positions going from 1 to N (grouped in the table in 100 positions per row).
On the other hand, in order to apply the other two criteria, that is, the free-distance and minimum slope criteria, the computation of the spectrum of the whole parallel concatenated code is compulsory, and this is computationally more cumbersome. However, if these two criteria are applied homogeneously to find good families of systematic rate-compatible codes, that is, puncturing the first-constituent and the second-constituent check bits alternately, the search for the optimal puncturing position at each step is simplified, since the number of bit positions to be analyzed is reduced. However, as shown in Figure 1, the homogeneous application of these two criteria leads to degraded performance with respect to the best one (see the dash-dotted and solid curves in the figure).
It should be noted that, since the optimization of the puncturing pattern is performed bit by bit (i.e., puncturing, at each step of the optimization algorithm, one bit at a time), the goal of finding the optimal puncturing pattern for each rate R_c RCPTC is reached through a series of steps. Of course, the choice of the optimal bit to be punctured, made at each step, affects the choices made afterwards. Thus, even if we reach an optimum value of the selected cost function at each step, the global puncturing pattern obtained after a given number of steps is not necessarily optimal, but could be suboptimal. In other words, even if we could reasonably expect a nonhomogeneous puncturing pattern to perform better than a homogeneous one (since no restrictions are imposed, at each step of the search procedure, on the choice of the optimal bit to be punctured), this prediction is not necessarily true for each RCPTC family. In fact, as can be easily observed from Figures 1 and 2, where the puncturing patterns are selected to be homogeneous and nonhomogeneous, respectively, the nonhomogeneous puncturing patterns (curves with " ") give better performance than the homogeneous ones only when the minimum slope criterion is used (compare solid curves with "♦" and " " in the two figures). On the other hand, the homogeneous puncturing patterns (curves with "♦") give better performance than the nonhomogeneous ones when the (d_w, N_w) sequence optimization criterion and the free-distance criterion are used (compare dashed and dash-dotted curves with "♦" and " ," respectively, in the two figures).
Thus, to summarize these results: as far as the minimum slope criterion is concerned, the best results are obtained applying this criterion nonhomogeneously to find systematic rate-compatible codes (solid curves with " " shown in Figure 2). The corresponding best puncturing patterns are reported in Table 3 for some rates and are given, for each rate, in octal form for the first-constituent (first line) and the second-constituent (second line) check bit positions going from 1 to N, with N = 100. As for the free-distance criterion, the best results are obtained applying this criterion homogeneously to find systematic rate-compatible codes (dash-dotted curves with "♦" shown in Figure 1). The corresponding best puncturing patterns are reported in Table 4 for some rates and are given, for each rate, in octal form for the first-constituent (first line) and the second-constituent (second line) check bit positions going from 1 to N, with N = 100.
Finally, as far as the criterion based on the optimization of the sequence (d_w, N_w) is concerned, the best results are obtained applying this criterion homogeneously and symmetrically, that is, performing the puncturing pattern search on the first-constituent check bits and then applying it to the second-constituent check bits. The corresponding best puncturing patterns for N = 100 and N = 1000 are reported in Tables 1 and 2, respectively.

Since, as shown in Figures 1 and 2, the gains achievable using the different puncturing search criteria vary with the rate R_c, in Figures 3, 4, and 5, we report BER results for the best rate 1/2, rate 2/3, and rate 4/5 systematic RCPTCs, respectively, obtained by applying the different criteria. The corresponding puncturing patterns are reported in Tables 1, 2, 3, and 4. Simulation results are obtained for 10 iterations of the decoding algorithm, using a random interleaver (empty markers). Transfer function bound results are reported for each case using filled markers. The curves with "♦" refer to homogeneous puncturing patterns, whereas those with " " refer to nonhomogeneous puncturing patterns.
Focusing, for instance, on Figure 4, a comparison between the different puncturing techniques leading to the best rate 2/3 systematic RCPTCs can be made. Applying the criterion based on the optimization of the sequence (d_w, N_w) homogeneously and symmetrically leads to a d_free = 2 rate 2/3 systematic RCPTC, as shown in Tables 1 and 2 for N = 100 and N = 1000, respectively (the dashed curves with "♦" and " " show the corresponding BER performance). Applying the minimum slope criterion nonhomogeneously leads to a d_free = 3 rate 2/3 systematic RCPTC, as shown in Table 3 (the solid curves with " " report the corresponding BER performance for N = 100). Applying the free-distance criterion homogeneously leads to a d_free = 2 rate 2/3 systematic RCPTC, as shown in Table 4 (the dash-dotted curves with "♦" report the corresponding BER performance for N = 100). The performance of the rate 2/3 code obtained applying the optimization of the sequence (d_w, N_w) homogeneously and symmetrically is the best one for 0 ≤ E_b/N_0 ≤ 10 dB, since this technique minimizes N_free, even though the resulting free distance is not maximized. The reduction in N_free is about 3 orders of magnitude (see Table 1 at rate 2/3) with respect to the multiplicity N_free obtained applying the minimum slope criterion nonhomogeneously (see Table 3 at rate 2/3).
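The role of N_free can be quantified with the dominant (free-distance) term of the union bound for BPSK over AWGN, BER ≈ (N_free · w̄ / N) · Q(√(2 R_c d_free E_b/N_0)), where w̄ is the average information weight of the N_free lowest-weight codewords. The sketch below uses illustrative numbers, not values from the tables:

```python
import math

def floor_ber(d_free, n_free, w_bar, n_info, rate, ebn0_db):
    """Dominant (free-distance) term of the union bound on BER for BPSK
    over AWGN: (N_free * w_bar / N) * Q(sqrt(2 * R_c * d_free * Eb/N0)).
    Uses Q(x) = 0.5 * erfc(x / sqrt(2)).  `w_bar` (average information
    weight of the minimum-distance codewords) is an assumed parameter."""
    ebn0 = 10 ** (ebn0_db / 10)
    q = 0.5 * math.erfc(math.sqrt(rate * d_free * ebn0))
    return n_free * w_bar / n_info * q

# Same d_free = 2, rate 2/3, N = 100; N_free differing by 3 orders of
# magnitude (illustrative values):
high = floor_ber(2, 1000, 1, 100, 2 / 3, 4.0)
low  = floor_ber(2,    1, 1, 100, 2 / 3, 4.0)
print(round(high / low))   # -> 1000
```

Since the term is linear in N_free, two codes with equal d_free have error floors separated by exactly the ratio of their multiplicities, which is why the (d_w, N_w) optimization wins here despite not maximizing d_free.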
As shown in Figures 3, 4, and 5, the application of the optimization of the sequence (d_w, N_w) and of the free-distance criterion leads, as expected, to very similar results at the different rates R_c (the curves showing the BER performance are parallel in the error floor region); however, the optimization of the sequence (d_w, N_w) always gives better results at the target error rates significant for the applications considered, since, although the codes obtained applying these two criteria have the same d_free at rates 1/2, 2/3, and 4/5, the optimization of the sequence (d_w, N_w) leads to a minimum N_free, as can be seen from Tables 1 and 4, respectively.
Finally, good periodic puncturing patterns have also been searched for using the methods described above: the resulting performance is, as expected, worse than that of the RCPTCs obtained using the corresponding nonperiodic puncturing patterns, since a heavy restriction is added to the search for the best puncturing positions.
CONCLUSIONS
The criteria for optimal puncturing design may be based on the spectrum of the whole parallel concatenated code, as proposed in [6], or on the spectrum of the component encoders, taken separately, as in [1]. To decouple the rate-compatible puncturing pattern design from the interleaver design, a uniform interleaver has been considered.
We have focused, in particular, on the search for good rate-compatible systematic punctured codes.
The best performance has been obtained by applying the optimization of the sequence (d_w, N_w) to the spectrum of the first component encoder, taken alone, and puncturing only its check bits: the puncturing pattern for the whole turbo code scheme is obtained by applying the pattern found for the first constituent to the second-constituent check bits. Since this best criterion is based on the evaluation of the spectrum of the first component encoder only, its application requires a much lower computational complexity. Thus, it is efficient not only in terms of performance but also computationally, and can easily be applied to longer interleaver lengths. In the paper, we have shown the results of its application for two interleaver lengths, N = 100 and N = 1000.
In order to apply the other two criteria under investigation, that is, the free-distance and the minimum slope criteria, the spectrum of the whole parallel concatenated code has to be computed, and this leads to a much higher computational complexity. Moreover, the codes obtained applying these two criteria have a worse performance with respect to those obtained applying the best criterion, thus rendering their application of little interest for the design of systematic RCPTC families.

Fulvio Babich received the Doctoral degree (Laurea), cum laude, in electrical engineering from the University of Trieste in July 1984. After graduation, he was with Telettra, working on optical system design. Then he was with Zeltron, working on communication protocols. In 1992, he joined the Department of Electrical Engineering (DEEI), University of Trieste, where he is an Associate Professor of digital communications. His current research interests are in the field of wireless networks and personal communications. He is involved in channel modeling, hybrid ARQ techniques, channel coding, cross-layer design, and multimedia transmission over heterogeneous networks. Fulvio Babich is a Senior Member of the IEEE.

[…] Communications Company, working on the innovative design and implementation of a third-generation WCDMA receiver. He is an author of more than 100 papers published in international journals and conference proceedings. His interests are in the area of channel coding and wireless communications, particularly in the analysis and design of concatenated coding schemes and the study of iterative decoding strategies.

Francesca Vatta received a Laurea degree in ingegneria elettronica in 1992 from the University of Trieste, Italy. From 1993 to 1994, she was with Iachello S.p.A., Olivetti Group, Milano, Italy, as a system engineer working on the design and implementation of computer-integrated building (CIB) architectures.
Since 1995, she has been with the Department of Electrical Engineering (DEEI), University of Trieste, where she received her Ph.D. degree in telecommunications in 1998, with a thesis concerning the study and design of source-matched channel coding schemes for mobile communications. In November 1999, she became an Assistant Professor at the University of Trieste. In 2002 and 2003, she spent two months as a Visiting Scholar at the University of Notre Dame, Notre Dame, Indiana, USA. She is an author of more than 50 papers published in international journals and conference proceedings. Her current research interests are in the area of channel coding, concerning, in particular, the analysis and design of concatenated coding schemes for wireless applications.