EURASIP Journal on Applied Signal Processing 2005:6, 795–807 © 2005 Hindawi Publishing Corporation

Convergence Analysis of Turbo Decoding of Serially Concatenated Block Codes and Product Codes

The geometric interpretation of turbo decoding has founded a framework and provided tools for the analysis of the decoding of parallel-concatenated codes. In this paper, we extend this analytical basis to the decoding of serially concatenated codes, and focus on serially concatenated product codes (SCPC), i.e., product codes with checks on checks. For this case, at least one of the component (i.e., row/column) decoders should calculate the extrinsic information not only for the information bits, but also for the check bits. We refer to such a component decoder as a serial decoding module (SDM). We extend the framework accordingly, derive the update equations for a general turbo decoder of SCPC, and derive the expressions for the main analysis tools: the Jacobian and stability matrices. We explore the stability of the SDM. Specifically, for high SNR, we prove that the maximal eigenvalue of the SDM's stability matrix approaches d − 1, where d is the minimum Hamming distance of the component code. Hence, for practical codes, the SDM is unstable. Further, we analyze the two turbo decoding schemes, proposed by Benedetto and Pyndiah, by deriving the corresponding update equations and by demonstrating the structure of their stability matrices for the repetition code and an SCPC code with 2 × 2 information bits. Simulation results for the Hamming and Golay codes are presented, analyzed, and compared to the theoretical results and to simulations of turbo decoding of parallel concatenation of the same codes.


INTRODUCTION
The turbo decoding algorithm is, basically, a suboptimal decoding algorithm for compound codes created by code concatenation. Most works on turbo codes focus on code construction, the establishment of a unified framework for the decoding of convolutional and block turbo codes [1], the adaptation of turbo coding schemes to specific channels, or the reduction of decoding complexity. A comprehensive framework for the analysis of turbo decoding, however, has yet to be found.
Richardson [2] presented a geometric interpretation of the turbo decoding process, creating analysis tools for parallel-concatenated codes (PCC). Based on this interpretation, [3] examined the convergence points and trajectories of PCCs and deduced practical stopping criteria, and [4, 5] analyzed the convergence of turbo decoding of parallel-concatenated product codes (PCPC).
In this paper, we extend the analysis to turbo decoding of serially concatenated codes (SCC), and focus our attention on turbo decoding of serially concatenated product codes (SCPC) (also known as product codes with checks on checks). For this case, at least one of the components (i.e., row/column) decoders should calculate the extrinsic information of not only the information bits (as in turbo decoding of parallel-concatenated codes), but also of the check bits. We refer to such a decoder as a serial decoding module (SDM). Hence, we begin by showing how Richardson's theory [2] can be extended to apply for this decoding scheme, and how the analysis tools can be adapted accordingly. We use these tools to investigate the convergence of several variants of the decoding algorithm.
In Section 2 we describe the serial concatenation scheme and the special case of SCPC. We review the Pyndiah [6], Fang et al. [7], and Benedetto et al. [8] variants of the iterative decoding algorithm. We then explain why the turbo decoder should include at least one SDM (which calculates the extrinsic information for the check bits as well) in order to take full advantage of the entire code.
In Section 3, we show how Richardson's theory can be extended for serial concatenation, and specifically for the product code case. We then show how the analysis tools are adapted. First, the new turbo decoding update equations are derived. Then we derive the expressions for the Jacobian and stability matrices, and investigate their special structure for several variants of the turbo decoding algorithm. Specifically, we show that these matrices can be viewed as a generalization of the corresponding matrices for the PCPC.
In Section 4 we analyze the SDM and prove that for high SNR, the maximal eigenvalue of the SDM's stability matrix approaches d − 1, where d is the minimum Hamming distance of the component code. Hence, for practical codes, the SDM is unstable (note that an unstable decoding process does not necessarily imply wrong decisions at the decoder's output).
In Section 5 we derive the update equations of Pyndiah's and Benedetto's decoding schemes. We then derive and analyze the corresponding stability matrices for two simple component codes: the repetition code and a code with 2×2 information bits. This demonstrates the structure of the stability matrices and the instability of the SDM.
In Section 6 we present simulation results, which support the theoretical analysis. The simulations are performed for the Hamming [(7, 4, 3)]^2 and Golay [(24, 12, 8)]^2 product codes, and compared to turbo decoding of parallel concatenation of the same codes.

SERIALLY CONCATENATED CODES
Serial concatenation of codes is a well-known method of improving coding performance. In this scheme, the output of one component code (the outer code) is interleaved and encoded by a second component code (the inner code). Product codes (with checks on checks) are an interesting case of serially concatenated block codes [9]. They are suitable for burst and packet communication systems [7], which require short encoding-decoding delays, since they provide reasonable SNR-to-BER performance for relatively short code lengths. Let C_R be an (n_R, k_R, d_R) linear code and C_C an (n_C, k_C, d_C) linear code. A linear (n_R n_C, k_R k_C) product code can be formed by arranging the information bits in a k_C × k_R rectangular array, and encoding each row and column using C_R and C_C, respectively, as in Figure 1 (where x stands for the information bits, y and z for the checks on rows and columns, respectively, and w for the checks on checks). An SCPC has a minimum Hamming distance of d = d_R d_C, compared to a PCPC, whose minimum Hamming distance is lower bounded by d ≥ d_R + d_C − 1. SCPC may therefore match applications requiring stronger codes (at least asymptotically, i.e., for very low BER) better than PCPC. We now review different decoding algorithms, which can be applied to general serial concatenation schemes (i.e., not only to product codes). Without loss of generality, we will treat the row code as the inner code, and the column code as the outer one.
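As a concrete illustration of this construction (a toy sketch of our own, not code from the paper; the helper names and the choice of (3, 2, 2) single-parity-check component codes are assumptions), the checks-on-checks encoding can be written as:

```python
# Toy sketch: building a product code with checks on checks, using
# (3, 2, 2) single-parity-check codes for both rows and columns.
# x: k_C x k_R information array; y: checks on rows; z: checks on columns;
# w: checks on checks.

def spc_encode(bits):
    """Append one even-parity check bit to a list of bits."""
    return bits + [sum(bits) % 2]

def product_encode(x):
    """Encode a k_C x k_R information array into an n_C x n_R codeword array."""
    rows = [spc_encode(r) for r in x]          # [x | y]: encode every row
    cols = [spc_encode([r[j] for r in rows])   # encode every column, including
            for j in range(len(rows[0]))]      # the check columns -> z and w
    return [[c[i] for c in cols] for i in range(len(cols[0]))]

cw = product_encode([[1, 0],
                     [1, 1]])
# every row and every column of the 3x3 array is a codeword (even parity)
assert all(sum(row) % 2 == 0 for row in cw)
assert all(sum(col) % 2 == 0 for col in zip(*cw))
```

Because both component codes are linear, encoding the columns of y yields the same w block as encoding the rows of z, so every row and every column of the resulting array is a codeword.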

Benedetto's decoding algorithm
Benedetto et al. [8] proposed the following algorithm. The first decoder decodes the rows. Its inputs are the likelihood ratios of the received code p(x̃|x), p(ỹ|y), p(z̃|z), p(w̃|w) and the extrinsic information (to be treated as a priori information) of the data rows (x) and their column check bits (z) gained from the outer decoder, q^(m−1)_{C,x}, q^(m−1)_{C,z} (initialized to 1 at the first iteration). The decoder calculates the extrinsic information for the rows of both the x block (the information rows) and the z block (which contains the checks on the columns of the x block, but serves as information for the [z, w] row code). We denote this extrinsic information by q^(m)_{R,x}, q^(m)_{R,z}. The outer decoder decodes the columns. It uses the extrinsic information from the row decoder (q^(m)_{R,x}, q^(m)_{R,z}) as the channel's likelihood ratios, and sets the a priori input to a constant 1. It then calculates the extrinsic information of the information bits q^(m)_{C,x}, as well as of the check bits of the column code q^(m)_{C,z}. This latter decoder output (q^(m)_{C,z}) distinguishes the SCPC decoder from PCPC decoding algorithms, since extrinsic information is calculated for the check bits as well.

Pyndiah's decoding algorithm
Pyndiah [6] and later Fang et al. [7] suggested other decoding algorithms for the serial code. While these algorithms differ in their implementation details, they are both derived from a common basic scheme. In this scheme both the inner and outer decoders calculate and exchange the extrinsic information for both the information and the check bits. In this paper we will focus on this basic generic decoding scheme and consider it when we refer to Pyndiah's scheme.
The following paragraph provides a detailed description of this scheme.
The inner decoder decodes the rows. Its inputs are the likelihood ratios of the received bits from the channel p(x̃|x), p(ỹ|y), p(z̃|z), p(w̃|w) and the extrinsic information of x, y, z, and w from the other decoding stage, denoted by q^(m−1)_{C,x}, q^(m−1)_{C,y}, q^(m−1)_{C,z}, q^(m−1)_{C,w} (treated as the a priori probability). This decoder calculates the extrinsic information of the information bits of the row code, q^(m)_{R,x}, q^(m)_{R,z}, as well as the extrinsic information of the check bits, q^(m)_{R,y} and q^(m)_{R,w}. The outer decoder then implements the same process along the column code axis. It combines the channel likelihood ratios p(x̃|x), p(ỹ|y), p(z̃|z), p(w̃|w) and the inner decoder's extrinsic information q^(m)_{R,x}, q^(m)_{R,y}, q^(m)_{R,z}, q^(m)_{R,w} as its inputs, and calculates the extrinsic information of the information bits of the column code, q^(m)_{C,x} and q^(m)_{C,y}, as well as that of the check bits, q^(m)_{C,z} and q^(m)_{C,w}. The optimal component decoder is the log-MAP decoder, and it is the decoder we consider in this work even though it is not the most computationally efficient. Both Pyndiah and Fang et al. proposed the use of more computationally efficient suboptimal decoders: a modified Chase algorithm and an augmented list decoding (similarly proposed in [10]), respectively. Pyndiah also multiplied the exchanged extrinsic information by a set of restraining factors, which we will introduce into our model as well.
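For intuition, the core of such a component decoder can be sketched as an exhaustive log-MAP marginalization over a codeword list (our own illustrative code, not the paper's implementation; the function name and the sign convention LLR = log p(b = 1)/p(b = 0) are assumptions). An SDM simply applies it to every one of the n code bits, check bits included:

```python
import math

def extrinsic_llrs(codewords, in_llrs):
    """Extrinsic LLR of every code bit: a-posteriori LLR minus input LLR.

    in_llrs combines the channel LLRs with the a priori (extrinsic) LLRs
    received from the other decoding stage.
    """
    n = len(in_llrs)
    out = []
    for i in range(n):
        # metric of codeword c is sum_j c_j * in_llrs[j]
        num = sum(math.exp(sum(l for b, l in zip(c, in_llrs) if b))
                  for c in codewords if c[i] == 1)
        den = sum(math.exp(sum(l for b, l in zip(c, in_llrs) if b))
                  for c in codewords if c[i] == 0)
        out.append(math.log(num) - math.log(den) - in_llrs[i])
    return out

# length-3 repetition code: the extrinsic LLR of bit i is exactly the sum
# of the other bits' input LLRs
ext = extrinsic_llrs([[0, 0, 0], [1, 1, 1]], [1.0, 2.0, 3.0])
assert all(abs(a - b) < 1e-9 for a, b in zip(ext, [5.0, 4.0, 3.0]))
```

This brute-force marginalization is only practical for short component codes; the suboptimal Chase and list decoders mentioned above approximate the same quantity.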

The reasoning behind SDM
The common attribute of all the SCPC decoding schemes we analyze is the computation of the extrinsic information not only of the information bits (as for parallel-concatenated codes) but also of the check bits, in at least one decoder. Of course, it is possible to decode without such a decoder, but here we explain why such a decoding scheme would not take full advantage of the entire code (this point was also made in [11]). We designate such a component decoder an SDM, and a decoding block that calculates the extrinsic information of only the information bits a parallel decoding module (PDM).
We consider applying the parallel decoding scheme to an SCPC code, using PDM blocks. We will use the PDM decoders to decode any part of the code they can decode (even if it is not part of a common parallel decoding scheme).
At the first iteration, the row decoder uses p(x̃|x), p(ỹ|y) to compute q^(1)_{R,x}, and may use p(z̃|z), p(w̃|w) to compute q^(1)_{R,z}. The column decoder uses p(x̃|x), p(z̃|z), q^(1)_{R,x}, q^(1)_{R,z} to compute q^(1)_{C,x}, and may use p(ỹ|y), p(w̃|w) to compute q^(1)_{C,y}. Note that we decoded all the rows and all the columns, rather than computing only q^(1)_{R,x} and q^(1)_{C,x} as in the classical parallel decoding scheme.

At the mth iteration, the row decoder uses p(x̃|x), p(ỹ|y), q^(m−1)_{C,x}, q^(m−1)_{C,y} to compute q^(m)_{R,x}, and p(z̃|z), p(w̃|w) to compute q^(m)_{R,z}. The column decoder uses p(x̃|x), p(z̃|z), q^(m)_{R,x}, q^(m)_{R,z} to compute q^(m)_{C,x}, and p(ỹ|y), p(w̃|w) to compute q^(m)_{C,y}. We conclude that the updates of q^(m)_{R,z} and q^(m)_{C,y} depend only on the channel probabilities, and are independent of q^(m)_{R,x}, q^(m)_{C,x}. Therefore, they remain constant: q^(m)_{R,z} = q^(1)_{R,z} and q^(m)_{C,y} = q^(1)_{C,y} for all m. Hence, the contributions of the checks-on-checks portion (i.e., the extrinsic information of the checks on the rows and of the checks on the columns) do not affect the iterative process, which makes such an algorithm degenerate. However, using a component decoder that computes the extrinsic information for all the code bits (i.e., including the check bits) would tie the updates of q^(m)_{R,z} and q^(m)_{C,y} to their values in the previous iteration and to q^(m)_{R,x}, q^(m)_{C,x}. We thus conclude that at least one of the component decoders should be an SDM.

ANALYSIS OF TURBO DECODING OF SERIALLY CONCATENATED PRODUCT CODES
Our analysis is based on the geometric representation of turbo codes, formulated by Richardson in [2], in which tools and conditions were developed for analyzing the stability of the fixed points of the algorithm, their uniqueness, and their proximity to maximum likelihood decoding. This framework addressed parallel concatenation of codes, and was used in the analysis of PCPC [4,5]. As was demonstrated in the previous section, the turbo decoding of SCPC requires the computation of an additional element, which is the extrinsic information of the check bits. Hence, we first show how Richardson's theory can be extended for this case.

Notations
We begin with the case of a PDM decoder. Consider the sequence of all possible k-bit combinations b̃_0, b̃_1, …, b̃_{2^k−1}. A density p assigns a nonnegative measure to each of the b̃_i's, proportional to its probability density. For convenience, we assume that densities are strictly positive. Densities p and q are equivalent [2] (and thus belong to the same equivalence class) if they determine the same probability density. Since turbo decoding (with maximum-likelihood component decoders) uses only the ratios between (probability) densities, it is invariant under equivalence. Therefore, we can choose a particular representative from each equivalence class. Richardson chose the density with p(b̃_0) = p(0, 0, …, 0) = 1. By taking the logarithm of the representative densities, we define Φ to be the set of log-densities P such that P(b̃_0) = 0 (in the sequel, uppercase letters denote log-densities, and lowercase letters denote densities). Given a linear systematic block code C(n, k, d), let H_i, i = 1, …, k, denote the set of all binary strings b whose ith bit is 1, and H̄_i the set of all strings whose ith bit is 0. Now, if we denote by Y the concatenation of the systematic code portion x and the checks portion y, Y = [x y], then for each log-density P we can calculate the bitwise log-likelihood values by using the map π_PDM(P), whose ith component is the log-likelihood ratio (LLR) of the ith bit. Richardson gives π_PDM(·) a geometric interpretation, as the intersection of the surface of all log-densities having the same bitwise marginal distributions with the space of bitwise-independent log-densities.
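The map can be made concrete with a short sketch (our own code, under the assumption that the 2^k strings are enumerated as the binary representations of 0, …, 2^k − 1; the paper's own enumeration may differ):

```python
import math

def bitwise_llr(P, k):
    """pi_PDM: log-density P over all 2^k strings -> k bitwise LLRs.

    Bit i of string m is taken as (m >> i) & 1; P[0] = P(b_0) = 0 is the
    equivalence-class representative.
    """
    llrs = []
    for i in range(k):
        on = sum(math.exp(P[m]) for m in range(2 ** k) if (m >> i) & 1)
        off = sum(math.exp(P[m]) for m in range(2 ** k) if not (m >> i) & 1)
        llrs.append(math.log(on) - math.log(off))
    return llrs

# a bitwise-independent log-density factorizes, so the map recovers its LLRs
L = [0.7, -1.2]
P = [sum(L[i] for i in range(2) if (m >> i) & 1) for m in range(4)]
assert all(abs(a - b) < 1e-9 for a, b in zip(bitwise_llr(P, 2), L))
```

The final assertion illustrates the geometric picture: on a bitwise-independent log-density, the map is the identity on the per-bit LLRs.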
The above definition of π_PDM(·) addresses the computation of the LLRs of the information bits only. As discussed in the previous section, an SCPC decoder should contain at least one SDM decoder, which also calculates the extrinsic information of the code's check bits. Hence, we now extend Richardson's theory to this case.
First, we extend the set of sequences b to include all possible n-bit sequences. We choose (without loss of generality) to number the code bits according to their arrangement by rows, where x_ri, y_ri, z_ri, w_ri denote the ith row of x, y, z, w, respectively. Let B denote the 2^n × n matrix containing all the sequences, B = (b_0, b_1, …, b_{2^n−1})^T, and let B_C denote the 2^k × n matrix containing all the codewords in the same order as B. Define B̄_C = 1_{2^k×n} − B_C (where 1_{2^k×n} denotes the all-ones matrix of size 2^k × n). Since some of the sequences now do not belong to the code, we define H^C_i, i = 1, …, n, as the set of binary strings b whose ith bit is 1 and which belong to the code C (and H̄^C_i as the set of all strings whose ith bit is 0 and which belong to C), where H is the n-dimensional hypercube (the set of binary vectors of length n), C is the set of all the codewords, and ≥ is meant componentwise (note that b_i is the sequence with 1 in the ith position and 0 in all other positions). Denote by Y the codewords of the row code, generated by concatenating the systematic code portion x and the checks portion y: Y = [x y]. For each log-density P, we can calculate the bitwise log-likelihood values by using the map π(·). This keeps the same definition as in [2], except that the calculation has been generalized to every code bit i = 1, …, n. Note that π(P), which is the vector (π(P)(b_1), …, π(P)(b_n))^T, is the vector of bitwise log-likelihood values associated with P.

Turbo decoding of SCPC
We now use the new definitions to build a new set of Richardson's update equations. The turbo decoder depends on the equivalence classes of p(x̃|x), p(ỹ|y), p(z̃|z), p(w̃|w). Let P_{x̃|x}, P_{ỹ|y}, P_{z̃|z}, P_{w̃|w} represent these equivalence classes in Φ. Hence, the probability of each codeword of the first k_C rows can be written as [P^{CR}_{x̃|x}; P^{CR}_{ỹ|y}]; P^{CR}_{z̃|z} and P^{CR}_{w̃|w} are defined similarly.
Let Q^(m)_{R,x}, Q^(m)_{R,y}, Q^(m)_{R,z}, and Q^(m)_{R,w} denote the extrinsic information of the x, y, z, and w blocks, respectively, extracted by the row decoder at the mth iteration. Let Q^(m)_{C,x}, Q^(m)_{C,y}, Q^(m)_{C,z}, and Q^(m)_{C,w} represent the outputs of the column decoder in the same manner. Each Q^(m)_{·,·} is defined similarly to (6); for example, Q^(m)_{R,x} is the extrinsic information of the information bits (x) extracted by the row decoder. The new update equations follow (refer to [2] for the PCPC case), as does the decision criterion for the data at the end of the iterative process (note that in practice, P and Q are represented by their bitwise marginals). Equation (7a) describes the decoding of [x; y] by the row decoder: to calculate the extrinsic information of the information bits and of the check bits, the mapping π(·) is applied and the intrinsic information is then removed. The other equations follow a similar process. Equations (7) provide a general structure; in various decoding algorithms, some of the Q's are set to zero and kept unupdated, while in other algorithms some Q's are multiplied by a set of restraining factors before they are used in the update equations.
For comparison, the update equations representing turbo decoding of PCPC (at the mth iteration) are given in [4, 5], using the extended notation. This means that in the PCPC case only the extrinsic information of the data bits (x) is computed and updated.

Stability of turbo decoding
The expressions for the stability matrices are developed based on their derivation in the PCPC case, as outlined in [2, 5]. Assume that given Q_C = [Q_{C,x}; Q_{C,y}; Q_{C,z}; Q_{C,w}], the extrinsic information calculated by the row decoder is Q_R = [Q_{R,x}; Q_{R,y}; Q_{R,z}; Q_{R,w}]. Then, perturbing Q_C to Q_C + δ_C, the decoder's output becomes Q_R + δ_R. A linear approximation for δ_R follows by denoting the Jacobian of π_CR(·) by J^R_P. This derivation gives an expression for S_R, the stability matrix of the row decoder, and its dependence on the Jacobian of π_CR(·). A similar expression can be derived for S_C, the stability matrix of the column decoder. The Jacobian matrix is the derivative of the mapping function π_C(·), (J^C_P)_{ij} = ∂u_i/∂v_j, and its size is n × n.
The derivation of an SDM Jacobian is almost identical to the derivation of the PCC turbo decoding Jacobian [2]. For a vector y, define the matrix M(y); then, from the definition of π(P), we get, for any density Q equivalent to P (the exponential taken componentwise), M(π_C(P)) e^Q = 0.
Now examine the neighborhood of the state point y = π_C(P). Perturbing (12) around this point, P → P + δ_P, and using the matrix form of the point equation (13), and then reassigning the point equation with M(y) replaced, we obtain an expression for the Jacobian, which can also be represented in an alternative form. Note that this may be viewed as a "natural" extension of the Jacobian expression in [2], in which 1 ≤ i, j ≤ k.
The last form of representing the Jacobian allows us to conclude that for SCPC the Jacobian can take a block-diagonal structure, similar to the PCPC form shown in [5], since each row can be decoded independently of the other rows:

J_R = diag(J^{R,1}_{x,z;y,w}, …, J^{R,n_C}_{x,z;y,w}),

where J^{R,i}_{x,z;y,w} is the Jacobian of the ith row.
It is also interesting to observe the Jacobian of a row decoder that calculates the extrinsic information of only the information bits (as done by a PCPC turbo decoder). Its structure is built from the elements j^{R,i(PCPC)}_{m,n}, the corresponding Jacobian elements of the PCPC decoder. Hence, the Jacobian (and stability) matrices of the SCPC turbo decoder are a generalization of the corresponding matrices of the PCPC decoder.

STABILITY OF SDM-TYPE DECODER AT ASYMPTOTICALLY HIGH SNR
In [3] it was shown that the fixed points of the PCPC turbo decoder are stable at high SNR. This section examines the stability of the SDM of SCPC at high SNR and shows that its fixed points are inherently unstable for practical codes. We prove the following claim.

Claim. At asymptotically high SNR, the maximal eigenvalue of the SDM's stability matrix approaches d − 1, where d is the minimum Hamming distance of the component code.

Proof. To prove the claim, examine the stability matrix at high SNR. Calculating the actual eigenvalues may be impractical for an arbitrary matrix, but the maximal eigenvalue has a well-known upper bound [12]: the maximal row sum of the absolute values of the matrix elements. This bound can be reevaluated in terms of the positive expressions A_i and B_i, where w_H(b) denotes the Hamming weight of the bit sequence b.
Without loss of generality, assume that the all-zero codeword was transmitted. At asymptotically high SNR, the error probability (for the AWGN channel) decreases exponentially with the number of errors: p(b) ∝ exp(−w_H(b)). Therefore, the above expression converges to a limit.
For A_i, the most dominant term(s) in both the numerator and the denominator are the codeword(s) with the lowest weight, that is, with the code's minimum Hamming distance d. For B_i, the all-zero codeword is the most probable, and it appears only in the denominator (since w_H(b) = 0 if b is the all-zero codeword). Substituting these limits into the expression for the stability matrix, we get that the sum of the elements along every row i converges to d − 1. Since, at the limit, the sum of the elements along every row of the matrix is the same constant, that constant becomes an eigenvalue (with eigenvector [1, …, 1]^T). Therefore, the stability matrix of the decoder is unstable at high SNR for any code with d ≥ 2. Equation (22) shows that this is the upper limit as well, and this proves the claim.
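The claim is easy to check numerically. The sketch below is our own construction, not the paper's exact setup: it assumes a systematic (7, 4, 3) Hamming generator, models high SNR by strong channel LLRs toward the all-zero word, builds the Jacobian of the extrinsic map at Q = 0 by finite differences, and estimates the maximal eigenvalue by power iteration. The result comes out close to d − 1 = 2:

```python
import math

# assumed systematic generator of a (7, 4, 3) Hamming code
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
codewords = []
for m in range(16):
    c = [0] * 7
    for i in range(4):
        if (m >> i) & 1:
            c = [(a + b) % 2 for a, b in zip(c, G[i])]
    codewords.append(c)

def extrinsic(L, Q):
    """Exhaustive log-MAP extrinsic LLRs for all 7 code bits."""
    out = []
    for i in range(7):
        num = sum(math.exp(sum((L[j] + Q[j]) * c[j] for j in range(7)))
                  for c in codewords if c[i] == 1)
        den = sum(math.exp(sum((L[j] + Q[j]) * c[j] for j in range(7)))
                  for c in codewords if c[i] == 0)
        out.append(math.log(num) - math.log(den) - (L[i] + Q[i]))
    return out

L = [-15.0] * 7            # high SNR: strong LLRs toward the all-zero word
eps = 1e-4
J = [[0.0] * 7 for _ in range(7)]
for j in range(7):         # central finite differences, column by column
    Qp = [0.0] * 7; Qp[j] = eps
    Qm = [0.0] * 7; Qm[j] = -eps
    fp, fm = extrinsic(L, Qp), extrinsic(L, Qm)
    for i in range(7):
        J[i][j] = (fp[i] - fm[i]) / (2 * eps)

# power iteration for the dominant eigenvalue (J is nonnegative here)
v = [1.0] * 7
lam = 0.0
for _ in range(100):
    v = [sum(J[i][j] * v[j] for j in range(7)) for i in range(7)]
    lam = max(abs(x) for x in v)
    v = [x / lam for x in v]
assert abs(lam - 2.0) < 1e-3
```

The all-ones vector is (approximately) the dominant eigenvector, matching the equal-row-sum argument in the proof.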
A PCPC decoder always has at least a single fixed point [2], and its stability matrix was derived in the context of that point. That is, assuming the decoder is in the fixed point's vicinity, the stability matrix indicates if, and how fast, the decoding will converge to the point. However, for an SDM-type decoder, we did not prove that a fixed point must exist. Hence, in the analysis of SDMs, the Jacobian and stability matrices are viewed mainly as the derivative of the update equations with respect to the extrinsic information. We conclude that the maximal eigenvalue of the stability matrix and its related eigenvector indicate that the decoding process drives the extrinsic information to infinity in the direction of the selected codeword. That is, the SDM increases the density in a direction supporting the most likely codeword.
Note that in [13] it was shown that, for serially concatenated codes at high SNR, the slope of the density-evolution curve as a function of SNR is related to the same value, d − 1. Here, a similar result is derived analytically. We believe that both results are connected and reflect similar phenomena.
In the case of turbo decoding of SCPC, each row (and column) has its own Jacobian and stability submatrices. Each of these stability submatrices has a maximal eigenvalue of d − 1, with a corresponding all-ones eigenvector, at high SNR. Hence, the stability matrix of the row (column) decoder will have n eigenvalues, all of which converge to the limit d − 1 at high SNR (the eigenvalues of a block-diagonal matrix are the union of the eigenvalues of all its submatrices).
The inherent instability of the SDM (demonstrated at high SNR) can be counteracted by other elements of the decoding process. One stabilizing approach is to multiply the extrinsic information by restraining factors in the update equations (as Pyndiah implemented in his decoding system [6]); note that knowing the eigenvalue's upper bound, one can ensure stability with this method. Another approach to stabilizing the resulting densities is to apply a (generally stable) decoder that calculates the extrinsic information of only the information bits for one component code, along with an SDM decoder for the other code, as proposed by Benedetto et al. [8].
It is important to note that an unstable decoding process, in the sense just shown, does not necessarily imply wrong decisions at the decoder's output. The instability merely increases the density values; it does not change the decisions made by the decoder. The extrinsic information Q is a log-likelihood ratio of the form log p(x = 1|data)/p(x = 0|data). If p(x = 1|data) → 1 (or p(x = 1|data) → 0), then Q → ∞ (or Q → −∞). Hence, the instability of Q actually means that the decoder becomes more confident that x = 1 (or x = 0), which is reasonable as the SNR improves. Indeed, many of our simulations show that the SDM increases the extrinsic values of the correct word, rather than letting them converge to some constant.
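This behavior can be reproduced with the length-3 repetition code, for which the log-MAP extrinsic map has the exact closed form q_i ← Σ_{j≠i}(L_j + q_j) (a toy sketch of our own; the channel LLR values are made up): the extrinsic magnitudes diverge while the hard decisions stay fixed.

```python
# channel LLRs favoring the all-zero word (LLR = log p(b=1)/p(b=0) < 0)
L = [-2.0, -1.5, -2.5]
q = [0.0, 0.0, 0.0]
for _ in range(6):
    # exact log-MAP extrinsic update for the length-3 repetition code
    q = [sum(L[j] + q[j] for j in range(3) if j != i) for i in range(3)]
assert all(x < 0 for x in q)          # hard decisions never change
assert min(abs(x) for x in q) > 100   # but the densities blow up
```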

STABILITY ANALYSIS OF SOME SCPC DECODING ALGORITHMS
In the previous section the stability of a single SDM was analyzed. A full decoding scheme has two decoding stages. For an SCPC decoding scheme, at least one of these decoders is an SDM. This section investigates the entire decoding process, using the formalized representation developed in the previous section. Specifically, we investigate the decoding algorithms proposed by Benedetto and Pyndiah by deriving the corresponding update equations. We then derive and analyze the stability matrices for two simple component codes: the repetition code and a code with 2 × 2 information bits. By this we demonstrate the structure of the stability matrices and the instability of the SDM.

Benedetto's decoding scheme
The update equations for the algorithm of Benedetto et al. [8] are based on the general structure described by (7), modified in accordance with Benedetto's decoding scheme. The first two equations, (27a) and (27b), express the first decoding stage: the row (inner) decoding of both the information (x) and the checks on the columns (z). This is a PDM decoder, and its output contains extrinsic information of its information bits only; hence both equations have the form of (9a). The third equation, (27c), expresses the second (outer) decoding stage: column decoding of the information rows. This equation would be identical to (7c), except that Benedetto's decoding scheme does not use the a priori density probabilities here. Note that this is an SDM decoder, whose output contains the extrinsic information of both its information and check bits.
The maximal eigenvalue of S is smaller than or equal to the product of the maximal eigenvalues of S_R and S_C. A sufficient condition for the stability of S is that this product be less than 1. By our previous analysis, at high SNR the eigenvalues of S_C are limited by d_C − 1, so a sufficient stability condition for S is that the eigenvalues of S_R be smaller than (d_C − 1)^{-1}. Since, under high-SNR conditions, the eigenvalues of the inner decoder (a PDM decoder) converge to 0 in probability [3], this condition is satisfied. Hence, Benedetto's decoding algorithm is stable at high SNR.
The row decoder has a stability matrix with n_C square submatrices J^{CR,i}_{x,y} of size k_R on the main diagonal. The second decoding stage has the same structure, except that it has k_R square submatrices J^{CC,i}_{x,y} of size n_C on its main diagonal.
Decoding stability of an SCPC with a repetition code

As a simple example, consider an SCPC with a repetition code as its component row and column codes. Assume the code has a single data bit, which is repeated d_R times in each row and d_C times in each column, so the generator matrix of each component code is a single all-ones row. We now examine the stability matrices. S_R has n_C = d_C square blocks with the structure of (18). Since k_R = 1, it can easily be shown that the submatrices are the 1 × 1 all-zero matrix. Thus S_R is the zero matrix and has zero as a multiple eigenvalue. As for S_C, it has only one square block (we decode a single column), of size n_C = d_C. Since there are only two codewords (all ones and all zeros), all the matrix elements equal 1, except for the all-zero main diagonal:

S_C = 1_{d_C × d_C} − I.

The maximal eigenvalue of S_C is d_C − 1; therefore S_C is unstable at any SNR. Yet the overall process is stable, due to the stability of S_R.
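A quick numerical confirmation of this example (our own sketch; d_C = 4 is an arbitrary choice): S_C is the all-ones matrix with a zeroed diagonal, and power iteration recovers its maximal eigenvalue d_C − 1 at any SNR.

```python
d_C = 4                                   # example repetition-code length
S_C = [[0.0 if i == j else 1.0 for j in range(d_C)] for i in range(d_C)]

# power iteration; the all-ones vector is the dominant eigenvector here
v = [1.0] * d_C
lam = 0.0
for _ in range(50):
    v = [sum(S_C[i][j] * v[j] for j in range(d_C)) for i in range(d_C)]
    lam = max(abs(x) for x in v)
    v = [x / lam for x in v]
assert abs(lam - (d_C - 1)) < 1e-9
```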
A code with 2 × 2 information bits

As a second example, consider a column (outer) encoder with two data bits and a single check bit (parity), and a row (inner) encoder with two data bits and an arbitrary number of check bits. The stability matrices can be written explicitly; S_R is stable for any row code and any SNR, as was proven in [5].
We have shown that the maximal eigenvalue of S_C converges to 1 (= d_C − 1) at high SNR, making the second stability matrix marginally stable. Thus the overall decoder is stable.

Pyndiah's decoding scheme
We now analyze the stability of Pyndiah's and Fang's decoding schemes. The scheme has SDM-type decoders for both the row and the column decoders. Pyndiah [6] also suggested the use of a set of restraining factors α(m), by which the extrinsic information is multiplied in each iteration. The set of factors begins with a value of zero for the first iteration and gradually increases to one. In our notation, the update equations of these schemes are similar to (7), with the restraining factor α(m) introduced (note that here we use the optimal MAP decoder, whereas Pyndiah and Fang used suboptimal decoders).
The Jacobian structure for both the row and column decoders will be of the form in (18). For example, the row Jacobian will have n_C square submatrices J^{CR,i}_{x,y} of size n_R on the main diagonal.
Applying the chain rule, it can be shown that multiplication by the set of restraining factors is equivalent to multiplying the Jacobian and stability matrices (and hence all their eigenvalues) by the same factors. Obviously, restraining factors smaller than 1 improve the stability of the decoding process.
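The effect of the restraining factors can be seen directly on the repetition-code stability matrix from the earlier example (our own numeric sketch; the values d = 8 and α = 0.1 are arbitrary): scaling the extrinsic information by α scales every eigenvalue by α, so choosing α < 1/(d − 1) restores stability.

```python
d = 8                                    # example component-code distance
S = [[0.0 if i == j else 1.0 for j in range(d)] for i in range(d)]

def max_eig(M):
    """Power iteration for the dominant eigenvalue of a nonnegative matrix."""
    n = len(M)
    v = [1.0] * n
    lam = 0.0
    for _ in range(200):
        v = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in v)
        v = [x / lam for x in v]
    return lam

alpha = 0.1                              # restraining factor
scaled = [[alpha * e for e in row] for row in S]
assert abs(max_eig(S) - (d - 1)) < 1e-9  # unrestrained: d - 1 = 7
assert max_eig(scaled) < 1.0             # alpha * (d - 1) = 0.7 < 1: stable
```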
For this decoding scheme, we now show that at asymptotically high SNR the maximal eigenvalue of S converges to the product of the maximal eigenvalues of the stability matrices of its component codes, λ_S → λ_{S_R} λ_{S_C}, where λ_S, λ_{S_R}, λ_{S_C} denote the maximal eigenvalues of S, S_R, and S_C, respectively.
At the limit, the maximal eigenvalues of both S_R and S_C have the same eigenvector. From [12], if two matrices share an eigenvector, their product has that same eigenvector, with the product of the respective eigenvalues as its associated eigenvalue.

Repetition code
To illustrate the above, we examine the same example codes. For the repetition code (each matrix indexed by rows or columns as is most convenient, and the restraining factor set to 1), both S_R and S_C are unstable regardless of the SNR, since they have maximal eigenvalues of d_R − 1 and d_C − 1, respectively. Therefore, the overall decoding process is unstable.

A code with 2 × 2 information bits
We now examine the second example and use a code with two information bits and a single check bit for both the row and column codes (note that this is a special case of the example shown for Benedetto's decoder). The column stability matrix, indexed by the columns, has the same form as the row matrix; when indexed by the rows (the order of the row decoder's Jacobian), it appears in permuted form. As explained before, both matrices are marginally stable at high SNR, and the stability of the process is determined by their product. Generally, for other codes, this decoding process will be unstable at high SNR, since practical codes have d ≥ 2. The restraining factors can be used to stabilize the iterative process during some of the iterations.

SIMULATION RESULTS
For a given SNR (AWGN channel), we simulated the transmission of encoded blocks. For each block we ran up to 10 decoding iterations, in which we computed the BER, the stability matrices S, S R , and S C , and their maximal eigenvalues.
As expected, due to the SDM's instability, we had to address out-of-bound numerical results in the decoding process, as the density of some bits overflowed. In these cases, we chose to stop the decoding and discard the results of the last iteration. Hence, the simulation data ensemble was significantly reduced for the last iterations. Note that for practical implementations, a different stopping criterion should be considered.

Figure 2 presents the results obtained for turbo decoding of the Golay PCPC and serves as a comparison reference for the decoding of SCPC. The figure shows the maximal eigenvalue of the stability matrix of the row and column decoders, as well as of the overall decoder. The simulations show that as the SNR grows, the maximal eigenvalue approaches zero. Also evident is that the maximal eigenvalue of the overall stability matrix is orders of magnitude smaller than the maximal eigenvalues of the row and column decoders (refer to [4, 5] for an explanation of this phenomenon).

Figure 3 shows the maximal eigenvalue of the stability matrices of the outer, inner, and overall decoders of Benedetto's scheme for the Golay code. The outer decoder (which is not an SDM-type decoder) is stable: its maximal eigenvalue converges to zero. As for the inner (SDM) decoder, its maximal eigenvalue approaches d − 1 = 7 (where d is the minimum Hamming distance of the Golay component code), as predicted by the theoretical results. The overall decoding process is again stable, due to the stability of the outer decoder (although the eigenvalues approach zero at a slower rate than those of the corresponding parallel-concatenation decoders, presented in Figure 2).

Figure 4 shows the maximal eigenvalue of the stability matrices of the outer, inner, and overall decoders in Pyndiah's scheme for the Hamming code. Here, both decoders are of SDM type, and their maximal eigenvalues approach d − 1 = 2.
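The stopping rule described above (halt on numerical overflow and discard the last iteration) can be sketched as follows. This is an illustrative skeleton only; `decode_iteration` is a hypothetical stand-in for one turbo-decoding iteration, here given deliberately unstable dynamics so that it eventually overflows.

```python
import numpy as np

MAX_ITER = 10  # up to 10 decoding iterations per block, as in the simulations

def decode_iteration(x):
    # Placeholder dynamics: an unstable update whose values blow up,
    # mimicking the overflowing bit densities of an unstable SDM.
    return x * x * 1e3

def run_decoder(x0):
    history = [x0]
    for _ in range(MAX_ITER):
        with np.errstate(over="ignore"):
            x_next = decode_iteration(history[-1])
        if not np.all(np.isfinite(x_next)):  # density overflowed
            break                            # discard the last iteration
        history.append(x_next)
    return history

hist = run_decoder(np.array([1e80]))
assert np.all(np.isfinite(hist[-1]))  # all kept iterations are finite
```

Only finite iterations are retained, which is why the data ensemble shrinks for the later iterations in the reported statistics.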
Moreover, the eigenvalues of the overall stability matrix approach (d R − 1)(d C − 1) = 4 (as explained before), and the decoder is unstable. Note that in the turbo decoding of PCPC the overall stability matrix had a much smaller maximal eigenvalue than those of the component decoders, whereas here the opposite occurs: S has larger eigenvalues than the component decoders.
The effect of applying restraining factors to Pyndiah's scheme is presented in Figure 5 for the Hamming code. We used the same set of restraining factors as in [6]: α(m) = [0, 0.2, 0.3, 0.5, 0.7, 0.9, 1, 1]. Note that these values were selected for the particular code used there; we did not try to optimize the factors, nor did we use them to force the decoder to converge. Thus, the effect of the restraining factors is limited in our simulation. The maximal eigenvalues of the first iterations are decreased by this multiplication and converge to α(m)(d − 1).
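Concretely, the limiting value α(m)(d − 1) for each iteration is just the schedule from [6] scaled by d − 1. A small sketch for the Hamming component code (d = 3, so d − 1 = 2):

```python
# Restraining-factor schedule from [6], one factor per iteration m.
alpha = [0, 0.2, 0.3, 0.5, 0.7, 0.9, 1, 1]
d = 3  # minimum Hamming distance of the Hamming component code

# Limiting maximal eigenvalue per iteration: alpha(m) * (d - 1).
limits = [a * (d - 1) for a in alpha]
print(limits)  # [0, 0.4, 0.6, 1.0, 1.4, 1.8, 2, 2]
```

Once the schedule reaches α(m) = 1, the limit returns to d − 1 = 2 and the inner decoder is again unstable, which matches the behavior seen in Figure 5.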
As can be seen, the simulation results support the theoretical analysis. The maximal eigenvalue of the SDM's stability matrix for the Hamming and Golay codes approaches d − 1, and the SDM-type decoder is indeed inherently unstable.

CONCLUSION
We extended the framework, established by Richardson for turbo decoding, to serially concatenated block codes and product codes. General update equations were derived for this case, and we showed how they are linked to the decoding algorithms of Benedetto and Pyndiah. The main difference, compared to the decoding of parallel-concatenated codes, is the incorporation of the SDM, in which the extrinsic information is calculated also for the code's check bits. We then investigated the stability of the SDM and of the overall decoder. For some simple codes, we demonstrated that the extrinsic information calculated by the SDM does not converge throughout the iterative process. Moreover, when the SNR is high, the decoder becomes overconfident in its decisions, and the extrinsic information approaches ±∞. Here, we showed a connection between the eigenvalues of the stability matrices and the minimum Hamming distance d of the code: we proved that the eigenvalues of the SDM's stability matrix approach d − 1, and, when two SDMs are incorporated, as in Pyndiah's scheme, they approach (d R − 1)(d C − 1). Finally, we provided a theoretical justification for the use of restraining factors in Pyndiah's algorithm.