Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms

A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code, in contrast to the existing genetic decoders in the literature, which use the code itself. This new approach therefore reduces the complexity of decoding high-rate codes. We simulated our algorithm over various transmission channels. The performance of this algorithm is investigated and compared with competing decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm yields large gains over the Chase-2 decoding algorithm and reaches the performance of OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to that of other algorithms.


Introduction
The current rapid development and deployment of wireless and digital communication encourage research activity in the domain of error-correcting codes. The latter are used to improve the reliability of data transmitted over communication channels susceptible to noise. Coding techniques create codewords by adding redundant information to the user information vectors. Decoding algorithms try to find the most likely transmitted codeword related to the received one, as depicted in Figure 1. Decoding algorithms are classified into two categories: hard-decision and soft-decision algorithms. Hard-decision algorithms work on a binary form of the received information. In contrast, soft-decision algorithms work directly on the received symbols [1].
Soft-decision decoding is an NP-hard problem and has been approached in different ways. Recently, artificial intelligence techniques were introduced to solve this problem. Among the related works are the decoding of linear block codes using the A* algorithm [2], the use of genetic algorithms for decoding linear block codes [3], and the use of neural networks to decode BCH codes [4].
Maini et al. [3] were, to our knowledge, the first to introduce genetic algorithms into the soft decoding of linear block codes. After that, Cardoso and Arantes [5] worked on the hard decoding of linear block codes using a GA, and Shakeel [6] worked on soft-decision decoding of block codes using a compact genetic algorithm. These GA-based decoders use the generator matrix of the code, which makes the decoding very complex for high-rate codes.
Genetic algorithms are search algorithms inspired by the mechanism of natural selection, where stronger individuals are likely the winners in a competing environment. They combine survival of the fittest among string structures with a structured yet randomized information exchange to form a search algorithm with some of the innovative flair of human search. In each generation, a new set of artificial creatures (chromosomes) is created using bits and pieces of the fittest of the old [7, 8].
The Dual Domain Decoding GA (DDGA) algorithm is a significant contribution to soft-decision decoding. Indeed, a comparison with other decoders that are currently among the most successful soft-decision decoding algorithms shows its efficiency. This new decoder can be applied to any binary linear block code, and particularly to codes without an algebraic decoder, unlike the Chase algorithm, which needs an algebraic hard-decision decoder. Further, it uses the dual code and works with the parity-check matrix; the latter makes it less complex for high-rate codes. In order to show the effectiveness of this decoder, we applied it to BCH and QR codes over two transmission channels.
The remainder of this paper is organized as follows: in Section 2, we introduce genetic algorithms. Section 3 expresses soft-decision decoding as a combinatorial optimisation problem. In Section 4, DDGA, our genetic algorithm for decoding, is described. Section 5 reports the simulation results and discussions. Finally, Section 6 presents the conclusion and future trends.

Genetic Algorithm
A genetic algorithm is an artificial-intelligence-based methodology for solving problems. It is a non-mathematical, non-deterministic, stochastic process for solving optimization problems. The concept of the genetic algorithm was introduced by John Holland [7] in 1975 with the aim of making computers do what nature does. He was concerned with algorithms that manipulate strings of binary digits to find a solution to a problem in such a way that the search exhibits the characteristics of natural evolution, that is, with developing an algorithm that is an abstraction of natural evolution. Holland's idea of the genetic algorithm stemmed from evolutionary theory.
GAs are excellent for tasks requiring optimization and are highly effective in any situation where many inputs (variables) interact to produce a large number of possible outputs (solutions). Some example situations are the following.
Optimization. Data fitting, clustering, trend spotting, path finding, and ordering.
Management. Distribution, scheduling, project management, courier routing, container packing, task assignment, and timetables.
Financial. Portfolio balancing, budgeting, forecasting, investment analysis, and payment scheduling.
The typical steps in the design of a genetic algorithm are described below and illustrated in Figure 2.
Step 1. Representation of the problem variable domain as a chromosome of a fixed length; the size of the chromosome population is chosen, as well as the crossover probability.
Step 2. Definition of the fitness function, which is used for measuring the quality of an individual chromosome in the problem domain. The fitness function establishes the basis for selecting chromosomes that will be mated during reproduction.
Step 3. Random generation of an initial population of chromosomes of a fixed size.
Step 4. Calculation of fitness function for each individual chromosome.
Step 5. Selection of pairs of chromosomes for mating from the current population. Parent chromosomes are selected with a probability related to their fitness. Highly fit chromosomes have a higher probability of being selected for mating.
Step 6. Application of the genetic operations (crossover and mutation) for creating pairs of offspring chromosomes.
Step 7. Placement of the created offspring chromosomes in the new population.
Step 8. Repetition of Step 5 through Step 7 until the size of the new chromosome population becomes equal to the size of the initial population.
Step 9. Replacement of the initial (previous) parent chromosome population with the new offspring population.
Step 10. Repeat Steps 4 through 9 until the termination criterion is satisfied.
The termination criterion could be any of the following: (1) attaining a known optimal or acceptable solution level; (2) reaching a maximum number of generations.
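The steps above can be sketched as a minimal, generic GA loop. This is an illustrative sketch only: the fitness function, selection scheme, and parameter values are placeholders, not the DDGA operators described later.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=20, p_c=0.9, p_m=0.03,
                      max_generations=100):
    """Generic GA loop following Steps 1-10: maximise `fitness` over bit strings."""
    # Step 3: random initial population of fixed-length chromosomes
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(max_generations):            # Step 10: termination criterion
        scored = sorted(pop, key=fitness, reverse=True)       # Step 4
        new_pop = []
        while len(new_pop) < pop_size:          # Steps 5-8
            # Step 5: fitness-biased selection (here: pick among the better half)
            p1, p2 = random.sample(scored[:pop_size // 2], 2)
            # Step 6: single-point crossover and bit-flip mutation
            child = list(p1)
            if random.random() < p_c:
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < p_m) for b in child]
            new_pop.append(child)               # Step 7
        pop = new_pop                           # Step 9
    return max(pop, key=fitness)

# Toy usage: maximise the number of ones ("one-max")
best = genetic_algorithm(sum, n_bits=16)
```

The toy "one-max" objective stands in for a real fitness function; any map from bit strings to a comparable score can be substituted.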

Soft Decision Decoding as an Optimisation Problem
The maximum-likelihood soft-decision decoding problem of linear block codes is an NP-hard problem and can be stated as follows.
Given the received vector r and the parity-check matrix H, let S = zH^T be the syndrome of z, where z is the hard decision of r and H^T is the transpose of H, and let E(S) be the set of all error patterns whose syndrome is S. Find the E ∈ E(S) which minimises the correlation discrepancy

f(E) = ∑_{i=1}^{n} e_i |r_i|,    (1)

where n is the code length. The optimisation problem (1) has n error variables, of which only (n − k) are independent, where k is the code dimension. Using the algebraic structure of the code, the remaining k variables can be expressed as a function of these (n − k) variables.
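The quantities in this formulation can be illustrated with a small sketch: hard decision, syndrome over GF(2), and the correlation discrepancy of a candidate error pattern. The parity-check matrix below is one common systematic form for a (7,4) code and the BPSK mapping (bit 0 sent as +1) is an assumption of the example, not taken from the text.

```python
import numpy as np

# Parity-check matrix of a (7,4) code, systematic form [P^T | I]
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def hard_decision(r):
    """Hard decision z: with BPSK mapping 0 -> +1, a negative symbol reads as bit 1."""
    return (np.asarray(r) < 0).astype(int)

def syndrome(z, H):
    return (z @ H.T) % 2          # S = z H^T over GF(2)

def discrepancy(E, r):
    """Correlation discrepancy of error pattern E w.r.t. received r:
    sum of |r_i| over the positions the pattern flips, as in (1)."""
    return float(np.sum(np.abs(r) * E))

r = np.array([0.9, -1.1, 0.2, 0.8, -0.7, 1.0, 0.4])   # noisy received sequence
z = hard_decision(r)
S = syndrome(z, H)
# Any E with syndrome S yields a codeword z + E; maximum-likelihood decoding
# picks, among those patterns, the one minimising discrepancy(E, r).
```

Note how flipping a low-reliability position (small |r_i|) costs little discrepancy, which is exactly what the optimisation exploits.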
Up till now, only a few authors have tried to solve the soft-decision decoding problem by viewing it as an optimisation problem and using artificial intelligence techniques.

DDGA Algorithm
Let C denote a (n, k, d) binary linear block code with parity-check matrix H, and let (r_i)_{1≤i≤n} be the sequence received over a communication channel with noise variance σ² = N_0/2, where N_0 is the noise power spectral density.
Let N i , N e , and N g denote, respectively, the population size, the number of elite members, and the number of generations.
Let p c and p m be the crossover and the mutation rates.

Decoding Algorithm.
The decoding-based genetic algorithm is depicted in Figure 3. The steps of the decoder are as follows.
Step 1. Sort the sequence r in such a way that |r_1| ≥ |r_2| ≥ · · · ≥ |r_n|. Further, permute the coordinates of r to ensure that the last (n − k) positions of r are the least reliable linearly independent positions. Call this vector r′ and let π be the permutation related to this reordering (r′ = π(r)).
Apply the permutation π to H to get a new parity-check matrix H′ = π(H).

Step 2. Generate an initial population of N_i binary vectors of k bits (an individual represents the systematic part of an error candidate).
Substep 2.1. The first member, E_1, of this population is the zero vector.
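The reliability reordering in Step 1 can be sketched as follows. This simplified version only sorts the positions by decreasing |r_i|; the full DDGA additionally ensures that the last (n − k) columns of the permuted H are linearly independent, a refinement omitted here.

```python
import numpy as np

def reliability_permute(r, H):
    """Simplified Step 1: order positions by decreasing reliability |r_i|.

    The linear-independence check on the last (n - k) columns of the
    permuted parity-check matrix is omitted in this sketch."""
    r = np.asarray(r, dtype=float)
    perm = np.argsort(-np.abs(r))          # most reliable positions first
    return r[perm], H[:, perm], perm

def inverse_permute(v, perm):
    """Undo the permutation (used for the final decision in Step 4)."""
    out = np.empty_like(v)
    out[perm] = v
    return out

r = [0.9, -1.1, 0.2, 0.8]
H = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0]])
r_p, H_p, perm = reliability_permute(r, H)
```

Keeping `perm` around is what allows the decoder decision to be mapped back to the original coordinate order at the end of the algorithm.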
Step 3. For i from 1 to N_g: let E′ be an individual, z be the quantization of r′, S be the syndrome of z such that S = zH′^T, S_1 be an (n − k)-tuple such that S_1 = E′A^T, where A is a submatrix of H′, and S_2 be an (n − k)-tuple such that S_2 = S + S_1.
We form the error pattern E such that E = (E′, E″), where E′ is the chosen individual and E″ = S_2. Then (z + E) is a codeword.
The fitness function is the correlation discrepancy between the permuted received word and the estimated error:

f(E) = ∑_{i=1}^{n} e_i |r′_i|.    (2)

Substep 3.2. The population is sorted in ascending order of the members' fitness defined by (2).

Subsubstep 3.4.1. Selection operation: a selection operation that uses the linear ranking method is applied in order to identify the best parents (E^(1), E^(2)) on which the reproduction operators are applied. For each individual j (1 ≤ j ≤ N_i), we assign a weight w_j that decreases linearly with the rank j, where w_j is the jth member's weight and w_max = 1.1 is the maximum weight, associated with the first member.
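The linear ranking selection of Subsubstep 3.4.1 can be sketched as follows. The exact weight formula did not survive extraction, so this sketch assumes the standard linear-ranking weights parameterised by the w_max = 1.1 given in the text (worst member then gets 2 − w_max = 0.9).

```python
import random

def linear_ranking_weights(n, w_max=1.1):
    """Standard linear ranking (an assumption): the best member gets w_max and
    weights decrease linearly so the worst gets 2 - w_max; they sum to n."""
    w_min = 2.0 - w_max
    return [w_max - (w_max - w_min) * j / (n - 1) for j in range(n)]

def select_parents(sorted_pop, w_max=1.1):
    """Pick two parents from a fitness-sorted population (best first)."""
    w = linear_ranking_weights(len(sorted_pop), w_max)
    return random.choices(sorted_pop, weights=w, k=2)

weights = linear_ranking_weights(5)
```

Because the weight spread is small (1.1 versus 0.9), ranking selection applies only mild pressure toward the fittest members, which helps preserve diversity across generations.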
Subsubstep 3.4.2. Crossover operator: create a new vector E_j, the "child", of k bits. Let Rand be a uniformly random value between 0 and 1, generated at each occurrence. The crossover operator is defined as follows: if Rand_1 < p_c, then the ith bit of the child (E_j)_{N_e+1≤j≤N_i} (1 ≤ i ≤ k) is inherited from the parents when they agree; when the parents' ith bits differ, it is set to 0 with probability 1/(1 + e^(−4|r′_i|/N_0)). Indeed, if the ith bits of the parents are different, then for larger values of |r′_i| the function 1/(1 + e^(−4|r′_i|/N_0)) converges to 1; hence, the ith bit of the child has a high probability of being equal to 0. Note that if Rand_1 ≥ p_c (no-crossover case), the child is simply a copy of a parent.

Subsubstep 3.4.3. Mutation operator: if the crossover operation is realized, the bits E_{j,i} are mutated with the mutation rate p_m.

Step 4. The decoder decision is V* = π^(−1)(E_best + z), where E_best is the best member of the last generation.
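The reliability-guided crossover and the mutation operator can be sketched as below. The original formula is garbled in this copy, so the exact rule (inherit agreeing bits; set disagreeing bits to 0 with probability 1/(1 + e^(−4|r_i|/N_0))) is our reconstruction from the surrounding description, and the no-crossover behaviour (copying a parent) is likewise an assumption.

```python
import math, random

def ddga_crossover(p1, p2, r, N0, p_c=0.97):
    """Reliability-guided crossover (reconstruction of Subsubstep 3.4.2).

    Where the parents agree, the child inherits the common bit.  Where they
    disagree, the child's error bit is 0 with probability
    1 / (1 + exp(-4*|r_i| / N0)): the more reliable the position, the less
    likely it is to be in error."""
    if random.random() >= p_c:                 # no-crossover case (assumed)
        return list(p1)                        # copy a parent
    child = []
    for b1, b2, ri in zip(p1, p2, r):
        if b1 == b2:
            child.append(b1)
        else:
            p_zero = 1.0 / (1.0 + math.exp(-4.0 * abs(ri) / N0))
            child.append(0 if random.random() < p_zero else 1)
    return child

def mutate(child, p_m=0.03):
    """Subsubstep 3.4.3: flip each bit independently with probability p_m."""
    return [b ^ (random.random() < p_m) for b in child]

child = ddga_crossover([0, 1, 0, 1], [0, 0, 1, 1], r=[0.9, -1.2, 0.1, 0.8], N0=1.0)
```

This is what makes the operator domain-specific: unlike uniform or two-point crossover, it consults the channel reliabilities rather than treating all positions alike.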
Remark 1. In Step 1 of DDGA, in order to obtain a lighter algorithm, we apply Gaussian elimination on the (n − k) independent columns corresponding to the least reliable positions, without the permutation π. This optimisation is not used in other similar works [3, 9, 10].
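The Gaussian elimination mentioned in Remark 1 operates over GF(2). A generic sketch (not the exact DDGA routine) that row-reduces a parity-check matrix and reports which columns are linearly independent:

```python
import numpy as np

def gf2_row_reduce(H):
    """Gaussian elimination over GF(2): return a reduced row-echelon copy of H
    together with the pivot column indices (linearly independent columns)."""
    H = H.copy() % 2
    rows, cols = H.shape
    pivots, r = [], 0
    for c in range(cols):
        if r >= rows:
            break
        pivot = next((i for i in range(r, rows) if H[i, c]), None)
        if pivot is None:
            continue                       # column depends on earlier pivots
        H[[r, pivot]] = H[[pivot, r]]      # swap the pivot row up
        for i in range(rows):              # XOR the pivot row into the others
            if i != r and H[i, c]:
                H[i] ^= H[r]
        pivots.append(c)
        r += 1
    return H, pivots

H = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 0, 1, 1]])
R, pivots = gf2_row_reduce(H)
```

In the decoder, running this on the columns ordered by reliability is what identifies the (n − k) least reliable linearly independent positions.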

Complexity Analysis.
Firstly, we show that our algorithm has a polynomial time complexity.
Let n be the code length, k be the code dimension (equal to the length of the individuals in the population), and N_i be the population size (the total number of individuals in the population).
At any given stage, we maintain a few sets of N_i × k arrays; therefore, the memory complexity of this algorithm is O(N_i k).
In order to get the complexity of our algorithm, we will compute the complexity of each step.
Step 2 has a time complexity of O(k²n) [2]. This complexity depends on the random number generator in use, but its cost is negligible compared to that of Step 3. Substeps 3.1 to 3.3 have a computational complexity polynomial in N_i, n, and log N_i. Subsubstep 3.4.1 to Step 4 have a per-member complexity that reduces to O(k), which is also the worst-case complexity. Hence, any iteration of the genetic algorithm part of DDGA has a total time complexity polynomial in k, n, N_i, and log N_i. Now, we compare our algorithm with four competitors. Table 1 shows the complexity of the five algorithms. The complexity of the Chase-2 algorithm increases exponentially with t, where t is the error-correction capability of the code: it is 2^t times the complexity of hard-in hard-out decoding [11]. Similarly, the complexity of the OSD algorithm of order m is exponential in m [9]. Moreover, decoding becomes more complex for codes with large code length.
For the Maini [3] and DDGA algorithms, the complexity is polynomial in k, n, N_g, N_i, and log N_i, making them less complex than the other algorithms.
For the Shakeel algorithm, the complexity is also polynomial in k, n, and T_c, where T_c represents the average number of generations.
The three decoders based on genetic algorithms have almost the same complexity, which is lower than that of the Chase-2 and OSD-m algorithms.

Simulation Results and Discussions
In order to show the effectiveness of DDGA, we carried out intensive simulations.
Unless stated otherwise, the simulations were made with the default parameters outlined in Table 2.
The performance is given in terms of the BER (bit error rate) as a function of the SNR (signal-to-noise ratio, E_b/N_0).
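The link between the SNR reported on the plots and the channel noise variance σ² = N_0/2 used in Section 4 can be made explicit. The sketch below assumes BPSK with unit symbol energy and a code of rate R, a standard simulation convention rather than something stated in the text.

```python
import math

def noise_sigma(ebn0_db, code_rate):
    """Noise standard deviation for BPSK with unit symbol energy:
    Eb/N0 = 10^(dB/10), Es = R * Eb, so sigma^2 = N0/2 = 1 / (2 * R * Eb/N0)."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return math.sqrt(1.0 / (2.0 * code_rate * ebn0))

sigma = noise_sigma(3.0, 51 / 63)   # e.g. a rate-51/63 code at Eb/N0 = 3 dB
```

Each simulated point then amounts to adding Gaussian noise of this standard deviation to the transmitted symbols and counting residual bit errors after decoding.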

Evolution of BER with GA Generations.
We illustrate the relation between the bit error probability and the number of genetic algorithm generations for some SNRs.
Figure 4 shows the evolution of the bit error probability with the number of generations. This figure shows that the bit error probability decreases as the number of generations increases.

Effect of Elitism Operator.
Figure 5 compares the performance of DDGA with and without elitism for the BCH(63,51,5) code. These simulation results reveal a slight superiority of DDGA with elitism.

Effect of Code Length.
Figure 6 compares the DDGA performance for four codes of rate 1/2. Except for the smallest code, all the codes have almost equal performance, possibly due to the parameters of DDGA; in particular, the number of individuals is fixed for all codes. Indeed, the BCH(31,16) code is small and gives poor performance compared to the other three codes (BCH and QR), which are comparable in length and performance.

Effect of Crossover Rate on Performance.
Figure 7 shows the effect of the crossover rate on the performance. From this figure, we note that increasing the crossover rate from 0.01 to 0.97 improves the performance of DDGA for the BCH(63,51,5) code by 1 dB at 10^−4.

Effect of Mutation Rate on Performances.
Figure 8 emphasizes the influence of the mutation rate on the performance of DDGA. Decreasing the mutation rate from 0.2 to 0.1, we gain 1 dB at 10^−4. If we decrease the mutation rate further, the gain becomes negligible, as shown in Figure 8. According to this result, we note that 0.03 is the optimum value of the mutation rate, which confirms the results in the literature.

Comparison between Various Crossover Methods in DDGA.
In Figure 9, we compare the results obtained using the proposed crossover (see Section 4), uniform crossover (UX), and two-point crossover (2-pt) in DDGA. Simulation results show that the proposed crossover is better than the UX and 2-pt ones. The gain of the proposed crossover over the two other crossovers is 1.5 dB at 10^−5. Besides, the three crossover methods have the same complexity, of order O(k).
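The two baseline operators compared in Figure 9 can be sketched generically; both touch each of the k bits at most a constant number of times, matching the O(k) complexity noted above.

```python
import random

def uniform_crossover(p1, p2):
    """UX: each child bit comes from either parent with probability 1/2."""
    return [b1 if random.random() < 0.5 else b2 for b1, b2 in zip(p1, p2)]

def two_point_crossover(p1, p2):
    """2-pt: copy p1, then splice in p2 between two random cut points."""
    k = len(p1)
    a, b = sorted(random.sample(range(k + 1), 2))
    return list(p1[:a]) + list(p2[a:b]) + list(p1[b:])

c1 = uniform_crossover([0] * 8, [1] * 8)
c2 = two_point_crossover([0] * 8, [1] * 8)
```

Neither operator uses the received reliabilities, which is the structural difference exploited by the proposed crossover.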

Comparison between Different Selection Methods in DDGA.
In Figure 10, we present a comparison between the results obtained using linear ranking selection, tournament selection, and random selection in DDGA. Simulation results show that linear ranking is better than tournament and random selection.

Comparison of DDGA versus Other Decoding Algorithms.
In this subsection, we compare the performance of DDGA with that of other decoders (the Chase-2, OSD-1, OSD-3, and Maini decoding algorithms). The performance of DDGA is better than that of the Chase-2 and OSD-1 algorithms, as shown in Figure 11. According to this figure, DDGA is comparable to the Maini algorithm.
The performance of the DDGA, Chase-2, OSD-1, and Maini algorithms for the BCH(31,21,5) code is shown in Figure 12. From the latter, we note that our algorithm is better than the Chase-2 algorithm and comparable with the Maini and OSD-1 algorithms for this code.
Figure 13 presents the performance of the different decoders for the BCH(31,26,3) code. The behavior of the four decoders for this code is similar to that for the BCH(31,21,5) code.
Figure 15 compares the performance of DDGA and the other decoders for the BCH(63,51,5) code. We notice the superiority of DDGA over the Chase-2 algorithm and its similarity to the others.

Figure 1: A simplified communication system model.

Figure 2: A simplified model of a genetic algorithm.

Substep 3.1.
Compute the fitness of each individual in the population. An individual is a set of k bits.

Substep 3.3.
The first (elite) N_e best members of this generation are inserted in the next one.

Substep 3.4.
The other N_i − N_e members of the next generation are generated as follows.

Figure 8: Effect of mutation rate on performances.

Figure 10: Comparison between different selection operators in DDGA.