Efficient Decoding of Turbo Codes with Nonbinary Belief Propagation

This paper presents a new approach to decoding turbo codes using a nonbinary belief propagation decoder. The proposed approach can be decomposed into two main steps. First, a nonbinary Tanner graph representation of the turbo code is derived by clustering the binary parity-check matrix of the turbo code. Then, a group belief propagation decoder runs several iterations on the obtained nonbinary Tanner graph. We show in particular that it is necessary to add a preprocessing step on the parity-check matrix of the turbo code in order to ensure good topological properties of the Tanner graph and hence good iterative decoding performance. Finally, by capitalizing on the diversity which comes from the existence of distinct efficient preprocessings, we propose a new decoding strategy, called decoder diversity, that takes benefit from this diversity through collaborative decoding schemes.


INTRODUCTION
Turbo codes and low-density parity-check (LDPC) codes have long been recognized to belong to the family of modern error-correcting codes. Although often opponents in standards and applications, these two classes of codes share common properties, the most important one being that they both have a sparse graph representation that allows them to be decoded efficiently and iteratively, with the maximum a posteriori (MAP) algorithm [1] for turbo codes or the belief propagation (BP) algorithm for LDPC codes [2], as well as their low-complexity iterative variants.
Moreover, LDPC and turbo codes are two coding candidates which are often options within the same system [3,4]. It is thus interesting to investigate a common architecture/algorithm at the receiver side to enable switching easily between them, while still maintaining reasonable cost and area size.
Even if turbo codes effectively exhibit a sparse factor graph representation for which the BP decoder is equivalent to the so-called turbo decoder [5,6], this factor graph representation is composed of different types of nodes, both for variable and for function nodes, which are not reduced to parity-check constraints (see [5] for more details). Later, some researchers tried to use a factor graph representation of the turbo code based only on parity-check equations [7]. In the rest of the paper, we will refer to a factor graph whose function nodes are only parity-check constraints (binary or not) as a Tanner graph [8].
The classical BP algorithm (sometimes called sum-product) applied to the Tanner graph of a turbo code does not perform well enough to compete with the turbo decoder [7]. This is mainly due to the inherent presence of many short cycles of length 4, which lead to poor convergence behavior and hence a loss of performance. In order to solve the problem of these short cycles, the authors of [9,10] propose to use special convolutional codes, called low-density convolutional codes, as components of the turbo code; an iterative decoder based on their Tanner graph experiences less statistical dependence and therefore exhibits very good performance.
Our approach is different from [10], since we aim at a generic BP decoder which performs close to the best known performance without imposing any constraint on the component code. In this paper, we present a new approach to decode parallel turbo codes (i.e., binary, duobinary, punctured or not, etc.) using a nonbinary belief propagation decoder. The generic structure of the proposed iterative decoder is illustrated in Figure 1. The general approach can be decomposed into two main steps: the first step consists in building a nonbinary Tanner graph of the turbo code using only parity-check nodes defined over a certain finite group, and symbol nodes representing groups of bits. The Tanner graph is obtained by a proper clustering of order p of the binary parity-check matrix of the turbo code, called the "binary image." However, the clustering of the commonly used binary representation of turbo codes appears not to be suitable for building a nonbinary Tanner graph representation that leads to good performance under iterative decoding. Thus, we will show in the paper that there exist suitable preprocessing functions of the parity-check matrix (first block of Figure 1) for which, after the bit clustering (second block of Figure 1), the corresponding nonbinary Tanner graphs have good topological properties. This preliminary two-round step is necessary to obtain good Tanner graph representations that outperform the classical representations of turbo codes under iterative decoding. The second step is then a BP-based decoding stage (last block in Figure 1), which consists in running several iterations of group belief propagation (group BP), as introduced in [11], on the nonbinary Tanner graph. Furthermore, we will show that the decoder can also fully benefit from the decoding diversity that inherently arises from concurrent extended Tanner graph representations, leading to the general concept of decoder diversity.
The proposed algorithms show very good performance, as opposed to the binary BP decoder, and serve as a first step toward viewing LDPC and turbo codes within a unified framework from the decoder point of view, which strengthens the idea of handling them with a common approach. The remainder of the paper is organized as follows. In Section 2, we describe how to decode turbo codes with a group BP decoder. To this end, we review how to derive the binary representation of the parity-check matrix H_tc of a parallel turbo code. Then, we explain how to build the nonbinary Tanner graph of a turbo code based on a clustering technique and describe the group BP decoding algorithm based on this representation. In Section 3, we discuss how to choose a posteriori good matrix representations and how to take advantage, in the decoding process, of the inherent diversity offered by concurrent preprocessings. To this end, we present some choices for the required preprocessing of the matrix H_tc before clustering in order to build a Tanner graph with good topological properties that performs well under group BP decoding. Then, we introduce in Section 4 the concept of decoder diversity and show how it can be used to further enhance performance. Finally, conclusions and perspectives are drawn in Section 5.

DECODING A TURBO CODE AS A NONBINARY LDPC CODE
In this section, we present the different key elements that enable decoding turbo codes as nonbinary LDPC codes defined over some extended binary groups. First, we briefly review how to derive the binary representation of the parity-check matrix H_tc of a parallel turbo code based on the parity-check matrix of a component code. Then, we explain how to build the nonbinary Tanner graph of a turbo code based on a clustering technique and describe how the group BP decoding algorithm can be used to efficiently decode turbo codes based on this extended representation.

Binary parity-check matrix of a turbo code
The first step in our approach consists in deriving a binary parity-check matrix representation of the turbo code. We will only focus in this paper on parallel turbo codes with identical component codes.

Parity-check matrix of convolutional codes
The binary image of the turbo code is essentially based on the binary representations of the parity-check matrices of its component codes. Following the derivations presented in [12], the parity-check matrix for both feedforward convolutional encoders and their equivalent recursive systematic form is generally derived using the Smith decomposition of the polynomial generator matrix G(D), where G(D) is a k × n matrix that gives the transfer of the k inputs into the n outputs of the convolutional encoder and D is the delay operator (please refer to [12] for more details about this decomposition). From this decomposition, the polynomial syndrome former matrix H^T(D) [12], of dimensions n × (n − k), can be derived, and it can be expanded as

    H^T(D) = H_0^T + H_1^T D + ... + H_{m_s}^T D^{m_s},    (2)

where each H_i^T, 0 ≤ i ≤ m_s, is a matrix of dimensions n × (n − k), and m_s is the maximum degree of the polynomials in H^T(D). For both feedforward convolutional encoders and their recursive systematic form, the binary image can be derived from the semi-infinite matrix H^T whose block row t contains H_i^T in block column t + i. Truncating it to dimensions N × (N − K), with the overhanging terms dropped, gives

    H^T = | H_0^T  H_1^T  ...  H_{m_s}^T                        |
          |        H_0^T  H_1^T  ...  H_{m_s}^T                 |
          |               ...                  ...              |
          |                              H_0^T  H_1^T           |
          |                                     H_0^T           |    (3)

where N and K are the codeword and information block lengths, respectively. Under some length restrictions for the recursive case [13,14], it is also possible to derive the binary image of the parity-check matrix of the tail-biting code H_tb from the parity-check matrix H [15], for feedforward convolutional encoders and their recursive systematic form. This can finally be represented using the so-called "wrap around" technique, in which the blocks overhanging the last block column are wrapped around to the first block columns; that is, block row t of H_tb^T contains H_i^T in block column (t + i) mod (N/n):

    H_tb^T = | H_0^T  H_1^T  ...  H_{m_s}^T                     |
             |        ...                   ...                 |
             |                      H_0^T   ...   H_{m_s}^T     |
             | H_{m_s}^T                    ...      H_0^T      |    (4)

with the wrapped coefficients filling the lower-left corner. Note that, in each case, both systematic and nonsystematic encoders give the same codewords and thus share the same parity-check matrix [12,16].
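As a concrete sketch of how the truncated binary image and its tail-biting variant are assembled from the block coefficients of the syndrome former, the following fragment (our own illustrative code, not the authors' implementation; the (7,5)_8 component code is our choice of example) builds both matrices:

```python
import numpy as np

def conv_parity_check(blocks, L, tail_biting=False):
    """Binary parity-check matrix of a convolutional code.

    blocks: list [H_0, ..., H_ms] of (n-k) x n binary arrays
            (coefficients of the syndrome former).
    L:      number of trellis sections (direct truncation), or the
            tail-biting block length.
    Block row t places H_i under the bits of time t - i; with tail
    biting, the time index wraps around modulo L.
    """
    nk, n = blocks[0].shape
    H = np.zeros((L * nk, L * n), dtype=int)
    for t in range(L):
        for i, Hi in enumerate(blocks):
            c = t - i
            if c < 0:
                if not tail_biting:
                    continue      # direct truncation: drop the term
                c %= L            # "wrap around" technique
            H[t*nk:(t+1)*nk, c*n:(c+1)*n] = Hi
    return H

# Rate-1/2 example (our choice): generators (7, 5)_8, i.e.
# H(D) = [1 + D^2, 1 + D + D^2], so H_0 = [1 1], H_1 = [0 1], H_2 = [1 1].
blocks = [np.array([[1, 1]]), np.array([[0, 1]]), np.array([[1, 1]])]
H = conv_parity_check(blocks, L=5)
```

Any codeword of the feedforward (7,5)_8 encoder, with the two output streams interleaved, then satisfies H·x = 0 mod 2; its recursive systematic form (1, 5_8/7_8) shares the same matrix, as noted above.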

Parity-check matrix of turbo codes
For recursive systematic convolutional codes of rate k/(k + 1), which mainly compose the classical turbo codes in the standards, the matrix H^T(D) is simply given by [12]

    H^T(D) = [ h_1^T(D)  h_2^T(D)  ...  h_k^T(D)  h_{k+1}^T(D) ]^T,    (5)

where h_i^T(D), 1 ≤ i ≤ k, are the feedforward polynomials and h_{k+1}^T(D) is the feedback polynomial defining the recursive systematic convolutional code. Then, for this kind of component code, the binary parity-check matrix can simply be derived using (2)-(4).
As the recursive component codes of turbo codes are systematic, the columns of the associated parity-check matrix H, of dimensions (N − K) × N, can be assigned either to information bits or to redundancy bits. Note that when using the preceding expressions of H, the output bits of the convolutional encoder are assumed to be ordered alternately within the codeword. After some column permutations, we can rewrite

    H = [ H_i  H_r ],

where H_i and H_r contain the columns of H relative to the information and redundancy bits, respectively. Using this notation, we can easily derive the parity-check matrix of a turbo code as follows, for the case of two component codes in parallel [17,18]:

    H_tc = | H_i       H_r   0   |
           | H_i Π^T   0     H_r |,    (6)

where Π^T is the transpose of the interleaver permutation matrix at the input of the second component encoder. In that case, H_tc has dimensions 2(N − K) × (2N − K). Of course, this technique can easily be generalized to more than two components.
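A minimal sketch of this parallel concatenation, with small block matrices of our own choosing for illustration (the paper provides no code), can be written as:

```python
import numpy as np

def turbo_parity_check(H_i, H_r, Pi):
    """Parity-check matrix of a two-component parallel turbo code.

    H_i, H_r: (N-K) x K and (N-K) x (N-K) column blocks of the
              component code matrix H = [H_i  H_r].
    Pi:       K x K interleaver permutation matrix.
    Columns are ordered [information | redundancy 1 | redundancy 2],
    giving a 2(N-K) x (2N-K) matrix.
    """
    NK, K = H_i.shape
    Z = np.zeros((NK, NK), dtype=int)
    top = np.hstack([H_i, H_r, Z])                 # checks of encoder 1
    bot = np.hstack([(H_i @ Pi.T) % 2, Z, H_r])    # checks of encoder 2
    return np.vstack([top, bot])
```

For instance, with K = 3 and N − K = 2, any vector [u | r1 | r2] satisfying both component codes' checks lies in the null space of the resulting 4 × 7 matrix over GF(2).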

Example
To illustrate this section, we consider an R = 1/3 turbo code with two rate one-half component codes with parameters in octal given by (

Clustering and preprocessing
Once the parity-check matrix H of a turbo code has been derived, we obtain a nonbinary Tanner graph by applying a clustering technique, which is essentially the same as the one described in [11].
The matrix H is decomposed into groups of p rows and p columns. Each group of p rows represents a generalized parity-check node in the Tanner graph, defined in the finite group G(2^p), and each group of columns represents a symbol node, built from the concatenation of p bits (p-tuples) defining elements in G(2^p).
A cluster is then defined as a p × p submatrix h_ij of H, and whenever a cluster contains nonzero values (ones in this case), an edge connecting the corresponding group of rows and group of columns is created in the Tanner graph. To each nonzero cluster is associated a linear function f_ij(·) from G(2^p) to G(2^p) which has h_ij as its matrix representation. Using this notation, the ith generalized parity-check equation defined over G(2^p) can be written as

    Σ_j f_ij(c_j) = 0    in G(2^p),

where c_j is the jth coordinate of a codeword whose symbols are defined in G(2^p), and the sum runs over the symbol nodes connected to the ith check node.
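The clustering step itself is mechanical; the sketch below (our own illustration) extracts the nonzero p × p clusters, which become the edges f_ij of the nonbinary Tanner graph, and computes their rank over GF(2), which determines whether f_ij acts as a permutation (rank p) or as a projection (rank r < p):

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    M = M.copy() % 2
    rank, rows = 0, M.shape[0]
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]      # bring pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                  # eliminate column c
        rank += 1
    return rank

def cluster_matrix(H, p):
    """Split H into p x p clusters; return {(i, j): cluster} for the
    nonzero clusters, i.e., the edges of the nonbinary Tanner graph."""
    assert H.shape[0] % p == 0 and H.shape[1] % p == 0
    clusters = {}
    for i in range(H.shape[0] // p):
        for j in range(H.shape[1] // p):
            sub = H[i*p:(i+1)*p, j*p:(j+1)*p]
            if sub.any():
                clusters[(i, j)] = sub
    return clusters
```

On a toy 4 × 4 matrix with p = 2, this yields the edge set of the clustered graph together with the rank of each cluster map.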
To illustrate the impact of clustering on the Tanner graph representation, and to give some insight into why it is worth extending the representation from the binary domain to a nonbinary one, we consider as a simple example the clustering of the recursive systematic convolutional code with polynomial representation in octal given by (1, 5_8/7_8). We assume that 12 information bits have been sent using direct truncation. Then, using the representation of (3), a 4 × 4 clustering is applied to the binary parity-check matrix. We are now able to associate with H a nonbinary Tanner graph representation whose generalized parity-check constraints now apply to 4-tuples of bits. The Tanner graph corresponding to our example is given in Figure 2(b), where it is compared with the Tanner graph associated with the binary image defined by H (Figure 2(a)).
Through this example, we can see that, for convolutional codes, when using the representation given in (3), we can still ensure a sparse graph condition and even reach a tree representation when increasing the order of the representation. In fact, for rate one-half codes, it has been observed that there exists a minimum value of p for which the graph becomes a tree. This implies that a BP-like decoder performs maximum a posteriori symbol decoding and, in that case, it has been verified that BP and the MAP algorithm have the same performance. Unfortunately, this tree condition no longer holds when we use the alternative representation H of the parity-check matrix of a convolutional code, as used in the turbo code parity-check matrix, as can be seen for the Tanner graph representation of our previous example in Figure 2(c). This representation introduces cycles even in the extended representation of the convolutional code obtained by bit clustering, and as a result, in the extended representation of the turbo code. Moreover, when tail biting is used, there is no possibility to ensure a tree condition, due to the nonzero elements in the right-hand corner of the tail-biting parity-check matrix of the component code. Thus, a remaining issue is how to derive a "good" extended Tanner graph representation. To this end, we will present in Section 3 how to overcome these problems and ensure good performance under BP decoding by applying an efficient preprocessing of the parity-check matrix of the turbo code.

Nonbinary group belief propagation decoding
The Tanner graph obtained by preprocessing and clustering the binary image does not correspond to a usual code defined over a finite field GF(q = 2^p) but can be defined on a finite group G(2^p) of the same order (see [11] for more details). We will refer to the belief propagation decoder on group codes as the group BP decoder. The group BP decoder is very similar in nature to regular BP over finite fields. The only difference is that the nonzero values of a parity-check equation are replaced with more general linear functions from G(2^p) to G(2^p), defined by the binary matrices which form the clusters. In particular, it is shown in [11] that group BP can be implemented in the Fourier domain with reasonable decoding complexity. We briefly review the main steps of the group BP decoder and its application to the nonbinary Tanner graph of a turbo code. The modified Tanner graph of an LDPC code over a finite group is depicted in Figure 3, in which we indicate the notations used for the vector messages. In addition to the classical variable and check nodes, we add function nodes to represent the effect of the linear transformations deduced from the clusters, as explained in the previous section.
The group BP decoder has four main steps which use q = 2 p dimensional probability messages.
(i) Data node update: the output extrinsic message is obtained from the term-by-term product of all input messages, including the channel-likelihood message, except the one carried on the same branch of the Tanner graph.

(ii) Function node update: the messages are updated through the function nodes f_ij(·). This message update reduces to a cyclic permutation in the case of a finite-field code, but for a more general linear function from G(2^p) to G(2^p), denoted β = f_ij(α), the update operation is

    U_pc[β] = Σ_{α : f_ij(α) = β} U_vp[α],

where U_vp denotes the message entering the function node from the variable node and U_pc the message leaving it toward the check node.

(iii) Check node update: this step is identical to the BP decoder over finite fields and can be efficiently implemented using a fast Fourier transform; see, for example, [11,19] for more details.

(iv) Inverse function node update: with the use of the function f_ij(·) backwards, that is, by identifying the values α which have the same image β, the update equation is

    U_pv[α] = U_cp[f_ij(α)],

where U_cp is the message entering the function node from the check node and U_pv the message leaving it toward the variable node.

These four steps define one decoding iteration of a general parity-check code on a finite group, which is the case of a clustered convolutional or turbo code as described previously. Note that the function node update is simply a reordering of the values both in the finite-field case and when the cluster defining the function f_ij(·) is full rank. When the cluster has deficient rank r < p, which is often the case when clustering a turbo code, only 2^r entries of the message U_pc are filled and the remaining entries are set to zero.
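Steps (ii) and (iv) can be sketched as follows; the message names and the bit-to-symbol mapping are conventions of our own, chosen for illustration:

```python
import numpy as np

def cluster_map(h):
    """Tabulate f(alpha) for a p x p binary cluster h; symbol alpha is
    read as the p-bit vector (alpha_0, ..., alpha_{p-1}), LSB first."""
    p = h.shape[0]
    f = []
    for alpha in range(2 ** p):
        bits = np.array([(alpha >> b) & 1 for b in range(p)])
        out = h.dot(bits) % 2
        f.append(int(sum(int(v) << b for b, v in enumerate(out))))
    return f

def function_node_forward(U, f):
    """Step (ii): probabilities of all symbols sharing the same image
    beta = f(alpha) are accumulated; for a rank-deficient cluster only
    2^r entries of the output message end up nonzero."""
    V = np.zeros_like(U)
    for alpha, beta in enumerate(f):
        V[beta] += U[alpha]
    return V

def function_node_backward(U, f):
    """Step (iv): every symbol alpha receives the value attached to its
    image f(alpha), i.e., symbols with the same image share a value."""
    return np.array([U[f[alpha]] for alpha in range(len(f))])
```

With a full-rank cluster, cluster_map is a permutation of {0, ..., 2^p − 1} and the forward update is a pure reordering; with a rank-1 cluster over p = 2, a uniform input message collapses onto two entries, as the text describes.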
Note that we do not discuss in this paper the decoding complexity issues, but we rather focus on the feasibility of the decoding using a BP decoder. Of course, a nonbinary BP decoder is naturally much more computationally intensive than a binary BP or a turbo decoder. However, reduced complexity nonbinary decoders have been recently proposed, which exhibit good complexity/performance tradeoff even compared to binary decoders [20]. The reduced complexity decoder can be easily adapted to codes on finite groups, since the function node update is not more complex in the group case than in the field case.

COMPARISON OF BINARY IMAGES OBTAINED WITH DIFFERENT PREPROCESSINGS
In this section, we discuss some relevant issues related to improving the performance of the group BP decoder. We show in particular that some preprocessing functions lead to interesting Tanner graph topologies and good performance under iterative decoding.

Selection of preprocessing for an efficient sparse graph representation
It should be noted that the performance of the group BP decoder depends strongly on the structure of the nonbinary Tanner graph. In our framework, it is possible to apply some specific transformations to the binary image H before the clustering operation, so that the Tanner graph has desirable properties. Indeed, any row linear transformation A and column permutation Π applied to H does not change the code space but changes the topology of the clustered Tanner graph. Let us denote by H′ = P_c(H) = A · H · Π the preprocessed binary parity-check matrix. We propose in this paper two preprocessing techniques that we found attractive in terms of Tanner graph properties; they are described below and depicted in Figure 4.
The first preprocessing, of type P_c1, is defined by alternating the information bits and the redundancy bits of the first convolutional code of the parallel turbo code. With this technique, we obtain two parts in the parity-check matrix. Each of them has an upper triangular form with a diagonal (or near diagonal for the rectangular part of H′), therefore reducing the number of nonzero clusters in the nonbinary Tanner graph deduced from H′. Note that a second preprocessing of this type can be considered by alternating the information bits and the redundancy bits of the second convolutional encoder.
The second preprocessing, of type P_c2, is obtained by column permutations with the aim of having the most concentrated diagonal in the parity-check matrix, that is, minimizing the number of clusters created on the diagonal. This is expected to be a good choice, since the clusters on the diagonal are the densest in the Tanner graph and are assumed to contribute the most to the performance degradation of the BP decoder when they participate in cycles. Indeed, we have verified by simulations on several turbo codes that the number of nonzero clusters of a given size on the diagonal is smaller with preprocessing P_c2 than with preprocessing P_c1. Note that by properly choosing the columns to be permuted, several images of this type can be created. Note also that the two proposed preprocessing techniques are restricted to column permutations, that is, to the special case A = Id, where Id is the identity transformation. This case is the simplest one; the transformation keeps the binary Tanner graph of the code unchanged, but the nonbinary clustered Tanner graph is modified after preprocessing. We will show through simulations that this has an important impact on the decoder performance. Although Figure 4 plots examples for rate R = 1/3 turbo codes, the exact same preprocessing strategies can be applied to any type of turbo code, that is, to any rate, punctured and/or multibinary turbo codes.
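As an illustration of the criterion that P_c2 optimizes, the following toy sketch scores a candidate column permutation by the number of nonzero clusters it leaves on the diagonal; the naive random search is our own stand-in, not the construction used in the paper:

```python
import numpy as np

def diagonal_cluster_count(H, p):
    """Number of nonzero p x p clusters sitting on the cluster diagonal."""
    nb = 0
    for i in range(min(H.shape[0] // p, H.shape[1] // p)):
        if H[i*p:(i+1)*p, i*p:(i+1)*p].any():
            nb += 1
    return nb

def random_search_pc2(H, p, trials=50, seed=0):
    """Toy stand-in for the P_c2 construction: sample random column
    permutations and keep the one minimizing the diagonal cluster count."""
    rng = np.random.default_rng(seed)
    best_perm = np.arange(H.shape[1])
    best = diagonal_cluster_count(H, p)
    for _ in range(trials):
        perm = rng.permutation(H.shape[1])
        score = diagonal_cluster_count(H[:, perm], p)
        if score < best:
            best, best_perm = score, perm
    return best_perm, best
```

A real construction would choose the permuted columns deliberately rather than at random, but the scoring function is the quantity the text describes.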

Simulation results with different preprocessings
In this section, we apply the different preprocessing techniques presented in the previous section to duobinary turbo codes with rate R = 0.5 and sizes N = {848, 3008} coded bits, taken from the DVB-RCS standard [21,22]. The frame sizes we used correspond to the ATM and MPEG frame sizes with K = {53, 188} information bytes, respectively. Note that these turbo codes have sizes which are not particularly well suited to clustering. A size of N = 864 would have been preferable for cluster size p = 8, to ensure a proper clustering of each part of the turbo code parity-check matrix corresponding to each component code, but we wanted to keep the frame sizes defined in the standard.

Figure 4: Three different binary representations of the same rate R = 1/3 turbo code. The first one is the natural representation (see (6)), the second one corresponds to the clustering P_c1, and the third one to the clustering P_c2.
In the following, we consider the additive white Gaussian noise (AWGN) channel for our simulations. For this channel, we compare the group BP decoder performance with various preprocessing functions, a clustering size of p = 8, and a floating-point implementation of the group BP decoder using shuffle scheduling [23]. As a reference, we simulated the turbo decoder based on MAP component decoders in floating-point precision, in order to have the best results that one can obtain with a turbo decoding strategy.
The curves plotted in Figure 5 correspond to the R = 1/2 turbo code with N = 848, for the natural representation of the code and two preprocessings (one of type P_c1 and one of type P_c2).
In order to illustrate the influence of the preprocessing on the nonbinary factor graph, we have counted the number of nonzero clusters, as well as the number of full-rank clusters, for the two codes simulated in this section and for the two types of preprocessing P_c1 and P_c2. The statistics are reported in Table 1. Remember that a nonzero cluster corresponds to an edge in the Tanner graph, and that a full-rank cluster corresponds to a permutation function, while a rank-deficient cluster corresponds to a projection. We can see that the number of nonzero clusters is much lower in the case of the proposed preprocessing, but also that there are no full-rank clusters. This indicates that the preprocessing P_c2 has concentrated the ones of the parity-check matrix H_b in a better way than P_c1. Our simulation results show that this better concentration has a direct influence on the error-correction capability of the group BP decoder. All group BP simulations used a maximum of 100 iterations, but the average number of iterations is as low as 3-4 for frame error rates below 10^-3. Simulations were run until at least 100 frames had been found in error. As expected, the preprocessing of type P_c2 is far better than the other preprocessings, which is explained by the fact that the corresponding Tanner graph has fewer nonzero clusters. It can be seen that with a good preprocessing function, a turbo code can be efficiently decoded using a BP decoder, and can even slightly beat the turbo decoder in the waterfall region. The turbo decoder remains better in the error floor region, which is due to the fact that the group BP decoder produces many more detected errors (due to decoder failures) in this region than the turbo decoder. Although we are aware that the group BP decoder is much more complex than the turbo decoder, this result is quite encouraging, since it was long thought that turbo codes could not be decoded using an LDPC-like decoder.
As a drastic example, we have also plotted the very poor performance of a binary BP decoder on the binary image of the turbo code, which does not converge at any of the SNRs under consideration.
We also simulated the same curves for a longer code with N = 3008 in order to show the robustness of our approach. The results are shown in Figure 6, and the same comments as for the N = 848 code apply with an even larger performance gain when using the best preprocessing function.

IMPROVING PERFORMANCE BY CAPITALIZING ON THE PREPROCESSING DIVERSITY
Since there is more than one way to build a nonbinary Tanner graph from the same code through different preprocessing functions, this raises the question of whether it is possible to improve the decoding performance by using this diversity of graph representations. Actually, we have noticed that, for the same noise realization, the group BP decoder on a specific Tanner graph can either (i) converge to the right codeword, (ii) converge to a wrong codeword (undetected error), or (iii) diverge after a fixed maximum number of iterations. If we accept some additional complexity, using several instances of iterative decoding based on several preprocessing functions, together with a proper strategy for merging the results, is likely to improve the error-correction performance.
In this paper, we will not address the problem of finding a good set of preprocessing functions, and we restrict ourselves to N_d = 5 different Tanner graphs obtained with preprocessing functions of type P_c2. There are various possible merging methods to combine the outputs of the decoders, with associated performance/complexity tradeoffs. Aside from the two natural merging strategies described below, one can think of more elaborate choices.

Serial merging
The N_d decoders are potentially used in a sequential manner. Assuming that we check the value of the syndrome at each iteration, when a decoder fails to converge to a codeword (either the right one or a wrong one) after a given number of iterations, we switch to another decoder; that is, another Tanner graph is computed with a different preprocessing, and we restart the decoder from scratch with the new graph and the permuted likelihoods. The process stops as soon as one decoder converges to a codeword (whether it is the sent codeword or not).
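A minimal sketch of the serial merging loop, written against a hypothetical decoder interface decode_on_graph of our own design:

```python
def serial_merging(channel_llr, graphs, decode_on_graph, max_iter=100):
    """Try the candidate Tanner graphs one after another, restarting the
    decoder from scratch, and stop as soon as one run converges to a
    codeword (syndrome check satisfied), whether right or wrong.

    decode_on_graph(graph, llr, max_iter) is assumed to return a pair
    (codeword, converged); this interface is ours, for illustration.
    """
    for graph in graphs:
        codeword, converged = decode_on_graph(graph, channel_llr, max_iter)
        if converged:
            return codeword, graph
    return None, None  # all N_d decoders failed: declare a decoder failure

# Toy stand-in decoder: only the graph labeled "g2" converges here.
def toy_decoder(graph, llr, max_iter):
    return ([0, 1, 1], True) if graph == "g2" else (None, False)

cw, used = serial_merging([0.3, -1.2, -0.8], ["g1", "g2", "g3"], toy_decoder)
```

Since later graphs are only tried on failures, the average extra work tracks the frame error rate of the first decoder, as discussed below.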

Parallel merging
The N_d decoders are used in parallel, and a maximum-likelihood (ML) decision is taken among the ones that have converged to a codeword. If nb, with nb ≤ N_d, is the number of decoders that have converged to a codeword in fewer than the maximum number of iterations, then the nb associated likelihoods are computed and the codeword with the maximum likelihood is selected. Note that the nb candidate codewords are not necessarily distinct.
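The ML selection step of the parallel merging can be sketched as follows, assuming BPSK signaling over the AWGN channel considered in the simulations:

```python
import numpy as np

def parallel_merging(received, candidates):
    """ML selection among the nb candidate codewords returned by the
    decoders that converged.  With BPSK mapping (0 -> +1, 1 -> -1) on
    an AWGN channel, maximizing the likelihood amounts to maximizing
    the correlation between the modulated codeword and the received
    samples."""
    if not candidates:
        return None  # no decoder converged: decoder failure
    scores = [float(np.dot(1 - 2 * np.asarray(c), received))
              for c in candidates]
    return candidates[int(np.argmax(scores))]
```

Identical candidates simply tie, so duplicates among the nb codewords are harmless.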

Lower bound on merging strategies
In order to study the potential of the decoder diversity approach regardless of the merging strategy, we define the following lower bound. Among the N_d decoders in the diversity set, we check whether at least one decoder converges to the right codeword. A decoder failure is declared if all N_d decoders have not converged after the maximum number of iterations. Note that this method does not exhibit any undetected errors. It is called a lower bound on merging strategies because it assumes that, if there exists at least one Tanner graph for which the decoder converges to the right codeword, one can devise a smart procedure to select this graph. This is of course not always possible, especially if the codeword sent is not the ML codeword. This lower bound also gives a possibly tight estimate for the parallel merging case, without having to simulate all N_d decoders.
The extra complexity induced by the serial merging is negligible, since the other Tanner graphs are used only when the first one fails to converge; that is, at FER = 10^-3 for the first decoder, the decoder diversity is used only 0.1% of the time. The parallel merging is much more complex since it uses N_d times more computations, but one can argue that it is simpler to parallelize on a chip. We did not simulate the parallel merging in this work. In the worst case, the extra latency of the serial merging obviously depends linearly on the number N_d of different Tanner graphs.
In Figures 7 and 8, we report simulation results on the AWGN channel for the two turbo codes studied in the previous section. Of course, the results with no diversity are similar to those observed in Figures 5 and 6 for the preprocessing of type P_c2, and we do not plot them in the new figures. If we focus on the maximum performance gain that one can hope for by looking at the lower bound curves, it is clear that using several decoders can improve the performance significantly, both in the waterfall and the error floor regions. For the small code as well as for the longer one, group BP decoding with decoder diversity can gain between 0.25 dB and 0.4 dB compared to the turbo decoder using MAP component decoders, which was until now considered the best decoder proposed for turbo codes. This result shows in particular that it is possible to build iterative decoders which are more powerful, and therefore closer to the maximum-likelihood decoder, than the classical turbo decoder.
Interestingly, the serial merging, which is the most obvious merging strategy and also requires the least additional complexity, achieves the full decoder diversity gain in the waterfall region, that is, above FER = 10^-3. This is particularly useful for wireless standards which use ARQ-based transmission and, therefore, hardly require error correction below FER = 10^-3. In the error floor region, though, we can see in both Figures 7 and 8 that more elaborate merging solutions should be used to achieve the full diversity gain and obtain a substantial gain compared with the turbo decoder. Note, however, that with the serial merging and for the N = 3008 turbo code, the results are better than those of the turbo decoder at all SNRs, even in the error floor region.

CONCLUSION
In this paper, we have proposed a new approach to efficiently decode turbo codes using a nonbinary belief propagation decoder. It has been shown that this generic method is fully efficient if a preprocessing step on the parity-check matrix of the code is added to the decoding process, in order to ensure good topological properties of the Tanner graph and hence good iterative decoding performance. Using this extended representation, we have shown that the proposed algorithm exhibits very good performance in both the waterfall and the error floor regions when compared to a classical turbo decoder. Moreover, using the inherent diversity induced by the existence of several concurrent extended Tanner graph representations, we have shown that the performance can be further improved, and we have introduced the concept of decoder diversity. This study shows that this decoding strategy (i.e., the joint use of preprocessing, group BP, and diversity decoding) is a key step toward considering LDPC and turbo codes within a unified framework from the decoder point of view.


Call for Papers
Technology advances and a growing field of applications have been a constant driving factor for embedded systems over the past years. However, the increasing complexity of embedded systems and the emerging trend to interconnections between them lead to new challenges. Intelligent solutions are necessary to solve these challenges and to provide reliable and secure systems to the customer under a strict time and financial budget.
Typically, intelligent solutions often come up with an orthogonal and interdisciplinary approach in contrast to traditional ways of engineering solutions. Many possible intelligent methods for embedded systems are biologically inspired, such as neural networks and genetic algorithms. Multiagent systems are also prospective for an application for nontime critical services of embedded systems. Another field is soft computing which allows a sophisticated modeling and processing of imprecise (sensory) data.
The goal of this special issue is to provide a forum for innovative smart solutions which have been applied in the embedded systems domain and which are likely useful to solve problems in other applications as well.
Original papers previously unpublished and not currently under review by another journal are solicited. They should cover one or more of the following topics: • Smart embedded (real-time) systems • Autonomous embedded systems • Sensor networks and sensor node hardware/software platforms • Software tools for embedded systems • Topology control and time synchronization • Error tolerance, security, and robustness • Network protocols and middleware for embedded systems • Standardization of embedded software components • Data gathering, aggregation, and dissemination • Prototypes, applications, case studies, and test beds Before submission authors should carefully read over the journal's Author Guidelines, which are located at http://www .hindawi.com/journals/es/guidelines.html. Authors should follow the EURASIP Journal on Embedded Systems manuscript format described at the journal's site http://www .hindawi.com/journals/es/. Prospective authors should submit an electronic copy of their complete manuscript through the journal's Manuscript Tracking System at http://mts .hindawi.com/, according to the following timetable:

Manuscript Due
August 1, 2008 First Round of Reviews November 1, 2008