Abstract

Two iterative decoding algorithms for 3D-product block codes (3D-PBC) based on genetic algorithms (GAs) are presented. The first algorithm uses the Chase-Pyndiah SISO, and the second one uses the list-based SISO decoding algorithm (LBDA) based on order-𝑖 reprocessing. We applied these algorithms over the AWGN channel to symmetric 3D-PBC constructed from BCH codes. The simulation results show that the first algorithm outperforms the Chase-Pyndiah one and is only 1.38 dB away from the Shannon capacity limit at a BER of 10⁻⁵ for BCH (31, 21, 5)3 and 1.4 dB for BCH (16, 11, 4)3. The simulations of the LBDA-based GA on BCH (16, 11, 4)3 show that it outperforms the first algorithm and is about 1.33 dB from the Shannon limit. Furthermore, these algorithms can be applied to any arbitrary 3D binary product block code, without the need for a hard-in hard-out decoder. We also show that the two proposed decoders are less complex than both the Chase-Pyndiah algorithm, for codes with large correction capacity, and the LBDA, for a large 𝑖 parameter. These features make the decoders based on genetic algorithms efficient and attractive.

1. Introduction

Among the codes proposed in the history of error correction, some perform very close to the Shannon limit, such as turbo codes [1] and LDPC codes [2]. Nevertheless, their remarkable reduction of the BER comes at the expense of decoder complexity. The current challenge for researchers in this field is to find a compromise between performance and decoding complexity. Thus, several works on the optimization of decoding algorithms have emerged, in particular those associated with product codes. These codes were first introduced in 1954 by Elias [3]. In 1981 and 1983, hard-in hard-out (HIHO) iterative decoding methods for these codes were described by Tanner [4] and by Lin and Costello [5], respectively. In 1994, a soft-in soft-out (SISO) iterative decoding of product block codes (PBC) was proposed by Pyndiah et al. [6], using the Chase algorithm as the elementary decoder [7]. This algorithm does not work alone but together with a HIHO decoder, which is not always easy to find for some codes, such as quadratic residue (QR) codes. Later, in 2004, an enhanced SISO iterative decoding algorithm for PBC, based on ordered reprocessing decoding, was developed by Martin et al. [8].

Recently, researchers in the field of channel coding have drawn on artificial intelligence techniques to develop very good decoders for linear block codes. Among the first works in this direction are the decoding of linear block codes using the A* algorithm [9], genetic algorithms [10], and neural networks [11].

In this work, we are interested in decoders based on genetic algorithms (GAD) [10] applied to the 3D-product block code (3D-PBC). It was shown in [12] that these decoders, applied to BCH codes, outperform the Chase-2 algorithm and have a lower complexity for BCH codes with large block lengths. We note that their performances can be improved further by optimizing parameters such as the population size and the number of generations.

In this paper, which is the continuation of the work [13], we introduce and study two iterative decoding algorithms, based on GAD, for an arbitrary 3D binary product block code. The extrinsic information is computed in the first proposed algorithm according to the Chase-Pyndiah formulas [6], and in the second one according to the list-based SISO decoding algorithm (LBDA) [8]. A comparison of the complexity of the proposed algorithms against the Chase-Pyndiah and LBDA algorithms is also made.

This paper is organized as follows. Section 2 defines the 3D-PBC code. Section 3 then explains the elementary decoding based on GAD. The presentation and complexity study of our iterative decoding algorithms using genetic algorithms (IGAD) are given in Section 4. Section 5 illustrates, through simulations, the IGAD performances and the effect of some parameters on them; it also compares the performances of the two proposed algorithms. Finally, Section 6 concludes and indicates how the performances of our decoders can be improved further.

2. 3D-Product Block Code (3D-PBC)

Product codes (or iterative codes) are a particular case of serially concatenated codes. They make it possible to construct codes of great length by concatenating two or more arbitrary block codes of short length. In our case, we consider two symmetric 3D-PBC, (16, 11, 4)3 and (31, 21, 5)3, each consisting of three identical BCH codes.

Let 𝐶(1)(𝑛1, 𝑘1, 𝑑1), 𝐶(2)(𝑛2, 𝑘2, 𝑑2), and 𝐶(3)(𝑛3, 𝑘3, 𝑑3) be three linear block codes. We encode an information block, using 3D-PBC = 𝐶(1) ⊗ 𝐶(2) ⊗ 𝐶(3) given in Figure 1, by:

(1) filling a cube of 𝑘2 rows, 𝑘1 columns, and depth 𝑘3 with 𝑘1 × 𝑘2 × 𝑘3 information bits;

(2) encoding the 𝑘2 × 𝑘3 rows using code 𝐶(1) (the cube contains 𝑘3 lateral planes of 𝑘2 rows each). The check bits are placed on the right, and we obtain a new cube of 𝑘2 × 𝑘3 × 𝑛1 bits;

(3) encoding the 𝑛1 × 𝑘3 columns of the cube obtained in the previous step using code 𝐶(2). This means that the check bits are themselves encoded (the previous cube contains 𝑛1 transverse planes of 𝑘3 columns each). The check bits are placed at the bottom of the cube obtained in step 2, and we get a new cube of 𝑛1 × 𝑘3 × 𝑛2 bits;

(4) finally, encoding the cube obtained in step 3 from front to back, that is, encoding the 𝑛1 × 𝑛2 depth vectors using code 𝐶(3) (the previous cube consists of 𝑛2 horizontal planes of 𝑛1 columns each). The check bits are placed at the back. The resulting cube of 𝑛1 × 𝑛2 × 𝑛3 bits is the codeword.
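The four encoding passes can be sketched as follows. Since no component encoder is given at this point, a toy systematic (4, 3) single-parity-check code stands in for each 𝐶(𝑖) — the dimensions K, N and the function enc() below are illustrative assumptions; any systematic encoder with the same interface would fit, and the "checks on checks" property follows from linearity.

```python
# Sketch of the four encoding passes of Section 2. A toy systematic
# (4, 3) single-parity-check code stands in for each component code.
K, N = 3, 4  # toy component-code dimension and length (assumptions)

def enc(bits):
    """Systematic encoding: information bits followed by the check bit."""
    return list(bits) + [sum(bits) % 2]  # one even-parity check bit

def encode_axis(cube, axis):
    """Encode with enc() every vector of `cube` running along `axis`."""
    dims = [len(cube), len(cube[0]), len(cube[0][0])]
    out_dims = dims[:]
    out_dims[axis] = dims[axis] + (N - K)
    out = [[[0] * out_dims[2] for _ in range(out_dims[1])]
           for _ in range(out_dims[0])]
    fixed = [a for a in range(3) if a != axis]  # the two axes held constant
    for i in range(dims[fixed[0]]):
        for j in range(dims[fixed[1]]):
            idx = [0, 0, 0]
            idx[fixed[0]], idx[fixed[1]] = i, j
            vec = []
            for t in range(dims[axis]):
                idx[axis] = t
                vec.append(cube[idx[0]][idx[1]][idx[2]])
            cw = enc(vec)  # codeword along this axis
            for t in range(out_dims[axis]):
                idx[axis] = t
                out[idx[0]][idx[1]][idx[2]] = cw[t]
    return out

# steps 1-4: fill a K x K x K cube, then encode along the three axes
info = [[[(x + y + z) % 2 for z in range(K)] for y in range(K)]
        for x in range(K)]
codeword = encode_axis(encode_axis(encode_axis(info, 2), 1), 0)  # N x N x N
```

Every line of the final cube, along any of the three axes, is a codeword of the component code, which is exactly the product-code structure described above.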

Figure 1: The 3D-product block code.

We can show, by reasoning similar to [14], that the parameters of the 3D-PBC are:

(i) length: 𝑛 = 𝑛1 × 𝑛2 × 𝑛3;
(ii) dimension: 𝑘 = 𝑘1 × 𝑘2 × 𝑘3;
(iii) minimum Hamming distance: 𝑑 = 𝑑1 × 𝑑2 × 𝑑3;
(iv) rate: 𝑅 = 𝑅1 × 𝑅2 × 𝑅3 = (𝑘1/𝑛1) × (𝑘2/𝑛2) × (𝑘3/𝑛3).

This shows one of the best advantages of product block codes: building very long block codes with large minimum Hamming distance by concatenating short codes with small minimum Hamming distance.
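As a quick numeric check of these parameter formulas, the values below are evaluated for the two symmetric codes used in this paper (the function name is illustrative):

```python
# Parameters of a symmetric 3D-PBC built from a single (n, k, d) code:
# length n^3, dimension k^3, minimum distance d^3, rate (k/n)^3.
def pbc3_params(n, k, d):
    return (n ** 3, k ** 3, d ** 3, (k / n) ** 3)

# (16, 11, 4)^3: length 4096, dimension 1331, distance 64, rate ~0.325
# (31, 21, 5)^3: length 29791, dimension 9261, distance 125, rate ~0.311
p16 = pbc3_params(16, 11, 4)
p31 = pbc3_params(31, 21, 5)
```

The two rates, roughly 0.32 and 0.31, are the ones compared in Section 5.1.5.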

3. Elementary Decoding of Linear Codes

Let 𝑅=(𝑅1,…,𝑅𝑛) be the received sequence at the decoder input of a binary linear block code 𝐶(𝑛,𝑘,𝑑) with a generator matrix 𝐺.

3.1. Hard-Input Soft-Output Decoder

Step 1. Sort the elements of the received vector 𝑅 in descending order of magnitude; since the channel is AWGN, this puts the most reliable elements first. The vector is then permuted so that its first 𝑘 positions correspond to linearly independent columns of 𝐺. We obtain a vector 𝑅′ = 𝜋(𝑅) = (𝑅′1, …, 𝑅′𝑛) such that |𝑅′1| ≥ |𝑅′2| ≥ ⋯ ≥ |𝑅′𝑛|. Let 𝐺′ be the permutation of 𝐺 by 𝜋, that is, 𝐺′ = 𝜋(𝐺).

Step 2. Quantize the first 𝑘 bits of 𝑅′ to obtain the vector 𝑟, and randomly generate (𝑁𝑖 − 1) information vectors of 𝑘 bits each. These vectors, together with 𝑟, form the initial population of 𝑁𝑖 individuals (𝐼1, …, 𝐼𝑁𝑖).

Step 3. Encode the individuals of the current population using 𝐺′ to obtain the codewords 𝐶𝑖 = 𝐼𝑖 ⋅ 𝐺′ (1 ≤ 𝑖 ≤ 𝑁𝑖). Then compute each individual's fitness, defined as the Euclidean distance between 𝐶𝑖 and 𝑅′, and sort the individuals in ascending order of fitness.

Step 4. Copy the first 𝑁𝑒 individuals (𝑁𝑒: elite number, 𝑁𝑒 ≤ 𝑁𝑖) into the next population, which is then completed by offspring generated using the reproduction operators. The two best individuals are selected as parents (𝑎, 𝑏) using the following linear ranking:

𝑊𝑖 = 𝑊max − 2(𝑖 − 1)(𝑊max − 1)/(𝑁𝑖 − 1), ∀𝑖 ∈ {1, …, 𝑁𝑖}, (1)

where 𝑊𝑖 is the weight of the 𝑖th individual, and the weight 𝑊max is assigned to the fittest (nearest) individual. The (𝑁𝑖 − 𝑁𝑒) remaining individuals of the next population are reproduced by crossover and mutation. Let 𝑝𝑐, 𝑝𝑚, and Rand be, respectively, the crossover probability, the mutation probability, and a uniformly distributed random value between 0 and 1, drawn anew at each use.

if Rand < 𝑝𝑐, then, for all 𝑖 ∈ {𝑁𝑒 + 1, …, 𝑁𝑖} and 𝑗 ∈ {1, …, 𝑘}:

𝐼𝑖𝑗 = 𝑎𝑗 if Rand < 1 − 𝑎𝑗 + 𝑎𝑗𝑏𝑗 + (𝑎𝑗 − 𝑏𝑗)/(1 + e^(−4𝑅′𝑗/𝑁0)), 𝐼𝑖𝑗 = 𝑏𝑗 otherwise; (2)

and then

𝐼𝑖𝑗 = 1 − 𝐼𝑖𝑗 if Rand < 𝑝𝑚; (3)

else

𝐼𝑖 = 𝑎 if Rand < 0.5, 𝐼𝑖 = 𝑏 otherwise; (4)

end if

Repeat steps 3 and 4 for 𝑁𝑔 generations.

Step 5. The first (fittest) individual 𝐷′ of the last generation is the nearest to 𝑅′. The decided codeword is therefore 𝐷 = 𝜋⁻¹(𝐷′).
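The steps above can be sketched as a short program. This is a hedged, minimal sketch, not the authors' exact decoder: the reliability permutation of Step 1 (and the Gaussian elimination needed to keep the first 𝑘 permuted columns of 𝐺 independent) is omitted, a (7, 4) Hamming generator stands in for the code, parents are simply the two fittest individuals instead of the linear ranking of (1), and crossover is uniform instead of the channel-dependent rule of (2).

```python
# Simplified sketch of the GAD elementary decoder (Steps 1-5).
# All simplifications noted in the lead-in are assumptions.
import random

G = [  # systematic generator matrix of a (7, 4) Hamming code
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 0, 1],
]
K, N = 4, 7

def encode(info):
    return [sum(info[i] * G[i][j] for i in range(K)) % 2 for j in range(N)]

def fitness(info, r):
    """Squared Euclidean distance between the BPSK image of the codeword
    (bit b mapped to 2b - 1) and the received sequence r."""
    c = encode(info)
    return sum((r[j] - (2 * c[j] - 1)) ** 2 for j in range(N))

def gad(r, n_i=20, n_g=15, n_e=2, p_c=0.97, p_m=0.03):
    # Step 2: hard-decide the first k soft values, fill the rest randomly
    seed = [1 if x > 0 else 0 for x in r[:K]]
    pop = [seed] + [[random.randint(0, 1) for _ in range(K)]
                    for _ in range(n_i - 1)]
    for _ in range(n_g):                                   # Steps 3-4
        pop.sort(key=lambda ind: fitness(ind, r))
        nxt = [list(p) for p in pop[:n_e]]                 # elitism
        a, b = pop[0], pop[1]                              # simplified selection
        while len(nxt) < n_i:
            if random.random() < p_c:                      # crossover
                child = [a[j] if random.random() < 0.5 else b[j]
                         for j in range(K)]
            else:                                          # copy a parent, cf. (4)
                child = list(a if random.random() < 0.5 else b)
            child = [1 - bit if random.random() < p_m else bit
                     for bit in child]                     # mutation, cf. (3)
            nxt.append(child)
        pop = nxt
    pop.sort(key=lambda ind: fitness(ind, r))
    return encode(pop[0])                                  # Step 5: decision
```

On a noiseless all-zero transmission (𝑅 = (−1, …, −1)) the sketch returns the all-zero codeword, since the hard-decision seed already has fitness zero and survives through elitism.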

3.2. Soft-Input Soft-Output Decoder

In this section, we present the SO_GAD decoders (soft-output GAD) used as the elementary decoder in our iterative decoding algorithms.

Let 𝐷 denote the GAD decision of the input sequence 𝑅 and 𝑤 the extrinsic information.

Let 𝐻(𝑗) be the competitor codeword of 𝐷 corresponding to the 𝑗th bit, defined by

‖𝐻(𝑗) − 𝑅‖ = min_{2 ≤ 𝑝 ≤ 𝑁𝑖, 𝑄𝑗(𝑝) ≠ 𝐷𝑗} ‖𝑄(𝑝) − 𝑅‖, (5)

where 𝑄(𝑝) is the 𝑝th codeword of the last generation, 𝑄𝑗(𝑝) and 𝐷𝑗 are the 𝑗th bits of 𝑄(𝑝) and 𝐷, and ‖·‖ is the Euclidean distance.
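Once the last generation is sorted by ascending distance to 𝑅, the competitor search of (5) reduces to a linear scan for the first codeword whose 𝑗th bit differs from the decision; a minimal sketch (names are illustrative):

```python
# Competitor search of eq. (5) over the sorted last generation.
def competitor(last_gen, decision, j):
    """last_gen: codewords sorted by ascending Euclidean distance to R;
    last_gen[0] is the decision itself."""
    for q in last_gen[1:]:
        if q[j] != decision[j]:
            return q
    return None  # no competitor for bit j -> fall back to eq. (7)
```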

Algorithm 1. (𝑤, 𝐷) = SO_GAD(𝑘, 𝑛, 𝑅, 𝑝𝑐, 𝑝𝑚, 𝑁𝑖, 𝑁𝑔, 𝛽). The algorithm SO_GAD accepts as input 𝑘, 𝑛, 𝑅, 𝑝𝑐, 𝑝𝑚, 𝑁𝑖, 𝑁𝑔, and the coefficient 𝛽, which is optimized according to the chosen code and SNR to enhance the algorithm performance.

For 𝑗 = 1 to 𝑛 do
if 𝐻(𝑗) exists, then

𝑤𝑗 = 𝐷̃𝑗 (‖𝐻(𝑗) − 𝑅‖² − ‖𝐷 − 𝑅‖²)/4 − 𝑅𝑗 = 𝐷̃𝑗 Σ_{𝑝=1, 𝑝≠𝑗, 𝐻𝑝(𝑗)≠𝐷𝑝}^{𝑛} 𝑅𝑝𝐷̃𝑝, (6)

else

𝑤𝑗 = 𝛽𝐷̃𝑗, (7)

where 𝐷̃𝑗 = 2𝐷𝑗 − 1.
end if
End for

Algorithm 2. (𝑤, 𝐷) = SO_GAD(𝑘, 𝑛, 𝑅, 𝑝𝑐, 𝑝𝑚, 𝑁𝑖, 𝑁𝑔, 𝑁𝑠). Let 𝑁𝑠 be the LBDA parameter (𝑁𝑠 ≤ 𝑘) enhancing the decoding performances [8]; it is usually chosen to be ⌈2𝑘/3⌉ or 𝑘. The algorithm SO_GAD accepts as input 𝑘, 𝑛, 𝑅, 𝑝𝑐, 𝑝𝑚, 𝑁𝑖, 𝑁𝑔, and 𝑁𝑠. Let Γ denote the set of positions 𝑗 ∈ {𝑘 − 𝑁𝑠 + 1, …, 𝑛} for which 𝐻(𝑗) exists, and let 𝐷̃𝑗 = 2𝐷𝑗 − 1.

For 𝑗 = 1 to 𝑘 − 𝑁𝑠 do

𝑤𝑗 = (𝐷̃𝑗/|Γ|) Σ_{𝑙=𝑘−𝑁𝑠+1, 𝑙∈Γ}^{𝑛} 𝐷̃𝑙𝑤𝑙 if Σ_{𝑙=𝑘−𝑁𝑠+1, 𝑙∈Γ}^{𝑛} 𝐷̃𝑙𝑤𝑙 ≥ 0, 𝑤𝑗 = 𝐷̃𝑗 min_{𝑙∈Γ, 𝐷̃𝑙𝑤𝑙>0} 𝐷̃𝑙𝑤𝑙 otherwise. (8)

End for
For 𝑗 = 𝑘 − 𝑁𝑠 + 1 to 𝑛 do
if 𝐻(𝑗) exists, then

𝑤𝑗 = (1/2) 𝐷̃𝑗 Σ_{𝑙=1}^{𝑛} (𝐷̃𝑙 − 𝐻̃𝑙(𝑗)) 𝑅𝑙 − 𝑅𝑗, (9)

else

𝑤𝑗 is given by the same rule as in (8). (10)

end if
End for
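The two rules of Algorithm 2 can be sketched as follows; w_with_competitor implements eq. (9), and w_fallback the averaging rule of eqs. (8)/(10). Names are illustrative, and the zero return for an empty positive set is an assumption of the sketch, not part of the original algorithm.

```python
# Sketch of the Algorithm 2 extrinsic computation (eqs. (8)-(10)).
def d_tilde(bit):
    """Map a bit to the +/-1 alphabet: D~ = 2D - 1."""
    return 2 * bit - 1

def w_with_competitor(decision, h_j, r, j):
    # eq. (9): soft output from the competitor, minus the input r_j
    return 0.5 * d_tilde(decision[j]) * sum(
        (d_tilde(decision[l]) - d_tilde(h_j[l])) * r[l]
        for l in range(len(decision))
    ) - r[j]

def w_fallback(decision, j, gamma, w):
    # eqs. (8)/(10): average of D~_l w_l over Gamma if non-negative,
    # otherwise the smallest positive term
    terms = [d_tilde(decision[l]) * w[l] for l in gamma]
    s = sum(terms)
    if s >= 0:
        return d_tilde(decision[j]) * s / len(gamma)
    pos = [t for t in terms if t > 0]
    return d_tilde(decision[j]) * min(pos) if pos else 0.0  # assumption
```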

3.2.1. Decoding

The SO_GAD algorithm uses GAD to decode the input sequence 𝑅. The decision codeword 𝐷 is the top of the 𝑁𝑔th generation sorted in ascending order of fitness, and the competitor codeword 𝐻(𝑗) corresponding to the 𝑗th bit of 𝐷, if it exists, is the first member of the last generation whose 𝑗th bit differs from that of 𝐷 (𝐻𝑗(𝑗) ≠ 𝐷𝑗).

3.2.2. Extrinsic Information

The decision codeword 𝐷 and the associated competitor codewords (𝐻(𝑗))1≤𝑗≤𝑛 are used to calculate the extrinsic information from formulas (6) and (7) for the first algorithm and (8)–(10) for the second one.

4. Iterative Decoding Algorithm and Complexity

In this section, we describe the iterative decoding algorithm of PBC based on GAD (IGAD); we then show that IGAD has a polynomial time complexity.

4.1. Iterative Decoding Algorithm

Let (𝑅𝑖𝑗𝑘)1≤𝑖≤𝑛2,1≤𝑗≤𝑛1,1≤𝑘≤𝑛3 be the received codeword. Figures 2 and 3 show the iterative decoding schemes of PBC based on GAD for the proposed algorithms. The following is an outline of IGADs.

Figure 2: The (⌊𝜃/3⌋+1)th iteration of IGAD1.

Figure 3: The (⌊𝜃/3⌋+1)th iteration of IGAD2.

Algorithm 3. IGAD(𝑘1, 𝑘2, 𝑘3, 𝑛1, 𝑛2, 𝑛3, 𝑅, 𝑝𝑐, 𝑝𝑚, 𝑁𝑖, 𝑁𝑔, 𝑁𝑖𝑡, 𝛼, {𝑁𝑠 | 𝛽}). The algorithm IGAD accepts as input 𝑘1, 𝑘2, 𝑘3, 𝑛1, 𝑛2, 𝑛3, 𝑅, 𝑝𝑐, 𝑝𝑚, 𝑁𝑖, 𝑁𝑔, the iterations number 𝑁𝑖𝑡, and the coefficients (𝛼(𝜃))0≤𝜃<3𝑁𝑖𝑡. The first algorithm additionally uses the coefficients (𝛽(𝜃))0≤𝜃<3𝑁𝑖𝑡, and the second the 𝑁𝑠 parameter. The 𝛼 and 𝛽 coefficients are optimized by simulation, step by step, for each code. For the second algorithm, we choose 𝛼 = 0.5.

Step 1. Extrinsic information initialization: 𝜃 = 0, Iteration = 1. Let 𝑤(𝜃)𝑖𝑗𝑘 be the extrinsic information given to the 𝜃th elementary decoder by the previous decoder:

𝑤(0)𝑖𝑗𝑘 = 0, 1 ≤ 𝑖 ≤ 𝑛2, 1 ≤ 𝑗 ≤ 𝑛1, 1 ≤ 𝑘 ≤ 𝑛3. (11)

Step 2. Row, column, and depth decoding: While (Iteration ≤ 𝑁𝑖𝑡) do

Step 2.1. Decode with SO_GAD each column vector 𝑠·𝑗· at the input of the elementary decoder, and estimate the extrinsic information 𝑤(𝜃+1)𝑖𝑗𝑘 using (6) and (7), where

𝑠(𝜃)𝑖𝑗𝑘 = 𝑅𝑖𝑗𝑘 + 𝛼(𝜃)𝑤(𝜃)𝑖𝑗𝑘, 1 ≤ 𝑖 ≤ 𝑛2, 1 ≤ 𝑘 ≤ 𝑛3. (12)

Steps 2.2 and 2.3. Repeat Step 2.1 to decode the rows and the depths and to estimate the corresponding extrinsic information. Let 𝐷(𝜃+3) and 𝑤(𝜃+3) be, respectively, the decision cube and the extrinsic information cube at the output of the depth elementary decoder.

Step 3. Iteration = Iteration + 1; 𝜃 = 𝜃 + 3.

End While.

Select the decided codeword 𝐷(3𝑁𝑖𝑡) at the 𝑁𝑖𝑡th iteration.
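The iteration structure above can be sketched as a short skeleton; `so_gad_axis` is a stand-in (an assumption of the sketch) for a SO_GAD sweep along one axis of the cube, returning the new extrinsic cube and decision cube, and `alpha` has 3𝑁𝑖𝑡 entries, one per elementary decoding.

```python
# Skeleton of the IGAD iteration (Steps 1-3): three elementary passes per
# iteration, each fed R plus the alpha-scaled extrinsic cube, cf. (11)-(12).
def igad(r_cube, so_gad_axis, alpha, n_it):
    n = len(r_cube)
    w = [[[0.0] * n for _ in range(n)] for _ in range(n)]  # eq. (11)
    decision = None
    theta = 0
    for _ in range(n_it):
        for axis in (0, 1, 2):  # columns, rows, depths
            s = [[[r_cube[i][j][k] + alpha[theta] * w[i][j][k]
                   for k in range(n)] for j in range(n)]
                 for i in range(n)]                        # eq. (12)
            w, decision = so_gad_axis(s, axis)
            theta += 1
    return decision
```

Any elementary decoder with the interface `(s, axis) -> (w, decision)` can be plugged in, which is how the same skeleton serves both IGAD1 and IGAD2.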

Stopping Criterion for the Second Algorithm.

Since the GAD decoder always decides a codeword, our second decoder does not need the NCB (nonconvergent block) processing proposed in [8], so its complexity is reduced.

4.2. Complexity Analysis

In this section, we present and compare the expressions of time complexities of the studied decoders.

4.2.1. IGADs Time Complexity

If we do not take into consideration the step of computing the extrinsic information, the two algorithms have the same time complexity. The GAD algorithm for a linear block code 𝐶(𝑛, 𝑘) has polynomial time complexity 𝑂(𝑓(𝑘, 𝑛, 𝑁𝑖, 𝑁𝑔)), where the function 𝑓 is given by [12]

𝑓(𝑘, 𝑛, 𝑁𝑖, 𝑁𝑔) = 𝑘²𝑛 + 𝑁𝑖𝑁𝑔(𝑘𝑛 + log 𝑁𝑖). (13)

Time Complexity of IGAD1. (i) Time complexity of the extrinsic information computation: for each decision (row, column, or depth) at the last generation of each iteration, the worst-case time complexity of the competitor search is 𝑂((𝑁𝑖 − 1)𝑛).

From (6), the worst-case time complexity of the extrinsic information calculation (when the competitor exists) at the last generation of each iteration is 𝑂(𝑛²). So the total time complexity of the extrinsic information computation is 𝑂(comp1(𝑁𝑖, 𝑛)), where

comp1(𝑁𝑖, 𝑛) = 𝑁𝑖𝑛 + 𝑛². (14)

(ii) Total time complexity:

At any iteration of IGAD1, the first elementary decoder has a time complexity of 𝑂(𝑘2𝑘3𝑓(𝑘1, 𝑛1, 𝑁𝑖, 𝑁𝑔)), the second decoder 𝑂(𝑛1𝑘3𝑓(𝑘2, 𝑛2, 𝑁𝑖, 𝑁𝑔)), and the third 𝑂(𝑛1𝑛2𝑓(𝑘3, 𝑛3, 𝑁𝑖, 𝑁𝑔)), so the total complexity is polynomial:

𝑂(𝑁𝑖𝑡[𝑘2𝑘3𝑔(𝑘1, 𝑛1, 𝑁𝑖, 𝑁𝑔) + 𝑛1𝑘3𝑔(𝑘2, 𝑛2, 𝑁𝑖, 𝑁𝑔) + 𝑛1𝑛2𝑔(𝑘3, 𝑛3, 𝑁𝑖, 𝑁𝑔)]), (15)

where

𝑔(𝑘, 𝑛, 𝑁𝑖, 𝑁𝑔) = 𝑓(𝑘, 𝑛, 𝑁𝑖, 𝑁𝑔) + comp1(𝑁𝑖, 𝑛). (16)

For a symmetric 3D-PBC, 𝑛1 = 𝑛2 = 𝑛3 = 𝑛 and 𝑘1 = 𝑘2 = 𝑘3 = 𝑘, and the IGAD1 time complexity becomes

𝑂(𝑁𝑖𝑡(𝑘² + 𝑛𝑘 + 𝑛²)[𝑘²𝑛 + 𝑁𝑖𝑁𝑔(𝑘𝑛 + log 𝑁𝑖) + 𝑛𝑁𝑖 + 𝑛²]). (17)

Time Complexity of IGAD2. (i) Time complexity of the extrinsic information computation: the maximal number of competitors of each decision is |Γ|max = 𝑛. So, at the last generation of each iteration, the worst-case time complexity of the first step, given by (8), is 𝑂((𝑘 − 𝑁𝑠) max(2𝑛 + 1, 2(𝑛 + 𝑁𝑠 − 𝑘) + 3)) = 𝑂(𝑛(𝑘 − 𝑁𝑠)).

From (9), the worst-case time complexity of the competitor search is 𝑂((𝑁𝑖 − 1)(𝑛 − 𝑘 + 𝑁𝑠)).

From (9) and (10), the worst-case time complexity of the second step of the extrinsic information calculation is

𝑂((𝑛 − 𝑘 + 𝑁𝑠) max(2𝑛 + 3, 2(𝑛 + 𝑁𝑠 − 𝑘) + 3, 2𝑛 + 1)) = 𝑂(𝑛(𝑛 − 𝑘 + 𝑁𝑠)). (18)

So the total time complexity of the extrinsic information computation is 𝑂(comp2(𝑁𝑖, 𝑛, 𝑁𝑠, 𝑘)), where

comp2(𝑁𝑖, 𝑛, 𝑁𝑠, 𝑘) = 𝑁𝑖(𝑛 − 𝑘 + 𝑁𝑠) + 𝑛². (19)

(ii) Total time complexity:

The total complexity in this case is given by (15), with (16) replaced by

𝑔(𝑘, 𝑛, 𝑁𝑖, 𝑁𝑔) = 𝑓(𝑘, 𝑛, 𝑁𝑖, 𝑁𝑔) + comp2(𝑁𝑖, 𝑛, 𝑁𝑠, 𝑘). (20)

For a symmetric 3D-PBC, 𝑛1 = 𝑛2 = 𝑛3 = 𝑛 and 𝑘1 = 𝑘2 = 𝑘3 = 𝑘, and the IGAD2 time complexity becomes

𝑂(𝑁𝑖𝑡(𝑘² + 𝑛𝑘 + 𝑛²)[𝑘²𝑛 + 𝑁𝑖𝑁𝑔(𝑘𝑛 + log 𝑁𝑖) + 𝑁𝑖(𝑛 − 𝑘 + 𝑁𝑠) + 𝑛²]). (21)
It is clear from (17) and (21) that IGAD2 is less complex than IGAD1, and their complexities are equal if 𝑁𝑠=𝑘.

4.2.2. Chase-Pyndiah and LBDA Algorithms Time Complexities

We show below that these algorithms have an exponential time complexity. Let 𝐶(𝑛, 𝑘, 𝑑) be a BCH code, and let 𝑀 be the number of test patterns used in both the Chase and OSD-𝑖 (ordered statistics decoding) algorithms. The algebraic decoding of the 𝑀 test patterns has complexity 𝑂(𝑀𝑛²log2𝑛).

Computing the Euclidean distance of each codeword has a computational complexity of 𝑂(𝑛). So, the total time complexity of decoding the 𝑀 test patterns and computing their fitness is 𝑂(𝑀𝑛²log2𝑛).

At any given decoding iteration of the Chase-Pyndiah algorithm, sorting the 𝑀 fitness values has a time complexity of 𝑂(𝑀log2𝑀), and the worst-case time complexity of the competitor search is 𝑂((𝑀 − 1)𝑛). Thus, the total time complexity of the Chase-Pyndiah algorithm is

𝑂(𝑁𝑖𝑡[𝑘2𝑘3𝐹(𝑛1, 𝑘1, 𝑡1) + 𝑘3𝑛1𝐹(𝑛2, 𝑘2, 𝑡2) + 𝑛1𝑛2𝐹(𝑛3, 𝑘3, 𝑡3)]), (22)

where

𝐹(𝑛, 𝑘, 𝑡) = 𝑀(𝑛²log2𝑛 + log2𝑀). (23)

Thus, in the case of 𝑛1 = 𝑛2 = 𝑛3 = 𝑛 and 𝑘1 = 𝑘2 = 𝑘3 = 𝑘, the exponential time complexity of the two algorithms is

𝑂(𝑁𝑖𝑡𝑀(𝑛² + 𝑘² + 𝑘𝑛)(log2𝑀 + 𝑛²log2𝑛)). (24)

Note that in the case of Chase-2 algorithm, 𝑀=2𝑡, where 𝑡=⌊(𝑑−1)/2⌋.

From (17) and (24), it follows that IGAD1 and IGAD2 are less complex than the Chase-Pyndiah and LBDA algorithms for codes with a large correction capacity 𝑡, for a large 𝑖 parameter, or for codes with great length and low rate.
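A rough numeric illustration of this comparison can be obtained by evaluating the leading terms of (17) and (24), dropping the constants hidden by the 𝑂(·) notation (so only orders of magnitude are meaningful). The sketch below uses the BCH (31, 21, 5) component code (𝑡 = 2, hence 𝑀 = 2^𝑡 = 4 for Chase-2), a hypothetical large-capacity case 𝑡 = 8 for contrast, and the GA parameters 𝑁𝑖 = 60 and 𝑁𝑔 = 18 optimized in Section 5.

```python
# Leading-order operation counts from (17) and (24); constants are ignored.
from math import log2

def igad1_ops(n, k, n_it, n_i, n_g):
    """Leading term of (17): (k^2 + nk + n^2) positions times g = f + comp1."""
    g = k * k * n + n_i * n_g * (k * n + log2(n_i)) + n_i * n + n * n
    return n_it * (k * k + n * k + n * n) * g

def chase_ops(n, k, t, n_it):
    """Leading term of (24) with the Chase-2 pattern count M = 2^t."""
    m = 2 ** t
    return n_it * m * (k * k + n * k + n * n) * (log2(m) + n * n * log2(n))

ga = igad1_ops(31, 21, 1, 60, 18)
small_t = chase_ops(31, 21, 2, 1)   # BCH(31,21,5): t = floor((5-1)/2) = 2
large_t = chase_ops(31, 21, 8, 1)   # hypothetical large correction capacity
# For the small t the Chase-Pyndiah count is lower; for the hypothetical
# large t, M = 2^t makes it exceed the (t-independent) GA-based count.
```

This matches the statement above: the GA-based decoders win once 𝑡 (and hence 𝑀 = 2^𝑡) grows, since their cost does not depend on 𝑡.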

5. Simulation Results

The figures in this section plot the bit error rate (BER) versus the energy per bit to noise power spectral density ratio 𝐸𝑏/𝑁0 for the symmetric 3D-PBC (16, 11, 4)3 and (31, 21, 5)3. The simulation parameters used in IGADs are given in Table 1.

Table 1: Simulation default parameters.

5.1. IGAD1 Performances

5.1.1. Scaling Factors Optimization for IGAD1

As the number of iterations increases, the extrinsic information gradually becomes more reliable. To take this effect into account, the scaling factors 𝛼 are used to reduce the impact of the turbo decoder input. It has been shown that these factors depend on the code and on GAD, so they are optimized step by step for each code. The optimized values of 𝛼 and 𝛽 for our algorithm are shown in Table 2. As the scaling factors 𝛼 and 𝛽 are moved away from the optimal values, the decoding performance of the IGAD1 decoder degrades. Figure 4 shows the gain obtained with the optimized values of 𝛼 for (16, 11, 4)3, compared to values taken randomly. The genetic parameters used are 𝑁𝑔 = 18, 𝑁𝑖 = 35, 𝑝𝑐 = 0.97, and 𝑝𝑚 = 0.03.

5.1.2. Effect of Evaluated Codewords Number

Generally, as the number of evaluated codewords 𝑁𝑖𝑁𝑔 increases, the probability of finding the codeword closest to the input sequence becomes higher, which improves the BER performance. The effect of increasing the number of evaluated codewords on the BER for code (16, 11, 4)3 at the 12th iteration is presented in Figures 5 and 6. The values 𝑁𝑔 = 18 and 𝑁𝑖 = 60 can be taken as the optimal values over a large 𝐸𝑏/𝑁0 range. The other genetic parameters are 𝑁𝑖 = 35, 𝑝𝑐 = 0.97, and 𝑝𝑚 = 0.03 for the first optimization, and 𝑁𝑔 = 18, 𝑝𝑐 = 0.97, and 𝑝𝑚 = 0.03 for the second.

Figure 5: Effect of the generation number for (16, 11, 4)3 at 12th iteration on IGAD1.

Figure 6: Effect of the population size for (16, 11, 4)3 at 12th iteration on IGAD1.

5.1.3. Cross-Over Rate Effect

Since the crossover rate is one of the important features of a genetic algorithm, an optimization of this probability is necessary. Figure 7 shows that the optimized value 𝑝𝑐 = 0.97 for the (16, 11, 4)3 3D-PBC improves the BER at rather high SNR at the 12th iteration. This value, close to 1, means that IGAD1 requires a broad exploration and an efficient exploitation, but it somewhat increases the algorithm complexity; indeed, when 𝑝𝑐 is close to 0, the crossover operation occurs rarely. For this simulation, we fixed the other parameters as follows: 𝑁𝑔 = 18, 𝑁𝑖 = 60, and 𝑝𝑚 = 0.03.

Figure 7: Effect of the crossover probability for (16, 11, 4)3 at 12th iteration on IGAD1.

5.1.4. Mutation Rate Effect

The effect of the mutation rate on IGAD1 for the BCH (16, 11, 4)3 3D-PBC is depicted in Figure 8. It is shown that 𝑝𝑚 = 0.05 is the optimal value for the BER at high SNR at the 12th iteration. One reason for this value being close to 0 may be the stability of members in the vicinity of optima for low mutation rates. The fixed values are 𝑁𝑔 = 18, 𝑁𝑖 = 60, and 𝑝𝑐 = 0.97.

Figure 8: Effect of the mutation rate for (16, 11, 4)33D-PBC at 12th iteration on IGAD1.

5.1.5. Code Rate Effect

Figure 9 shows the improvement/degradation of the BER performance of IGAD1, at the 12th and 15th iterations respectively, as the code dimension or code rate decreases/increases. The rate 0.31 of (31, 21, 5)3 is lower than that of (16, 11, 4)3, which equals 0.32; this explains the better performances of the first 3D-PBC code in the range 𝐸𝑏/𝑁0 ≥ 2.5 dB. In this simulation, we adopted the optimal values previously found: 𝑁𝑔 = 18, 𝑁𝑖 = 60, 𝑝𝑐 = 0.97, and 𝑝𝑚 = 0.03.

5.1.6. Comparison between IGAD1 and IGAD2

As the iteration number increases, the IGAD performances improve over approximately the whole 𝐸𝑏/𝑁0 range for all the 3D-PBC studied. The performances of the two IGAD decoders are depicted in Figure 10 for the BCH (16, 11, 4)3 3D-PBC; they can be further improved by increasing the total number of members, as shown in Figure 6. The IGAD1 and IGAD2 performances are, respectively, about 1.4 dB and 1.33 dB away from the Shannon capacity limit, which is 0.97 dB for this code. We used the following optimized parameters: 𝑁𝑔 = 18, 𝑁𝑖 = 60, 𝑝𝑐 = 0.97, and 𝑝𝑚 = 0.05.

6. Conclusion

In this paper, we have presented two iterative decoding algorithms, based on a genetic algorithm, that can be applied to any arbitrary 3D-product block code without the need for a hard-in hard-out decoder. Our theoretical results show that these algorithms reduce the decoding complexity for codes with a low rate and a large correction capacity 𝑡, or for a large 𝑖 parameter in the LBDA algorithm. Furthermore, the performances of these algorithms can be improved by using asymmetric 3D-PBC codes and by tuning parameters such as the selection method, the crossover/mutation rates, the population size, the number of generations, and the number of iterations. These algorithms can also be applied to multipath fading channels, in CDMA systems as well as in systems without spread spectrum. These features open broad prospects for decoders based on artificial intelligence.

M. Belkasmi, H. Berbia, and F. El Bouanani, “Iterative decoding of product block codes based on the genetic algorithms,” in Proceedings of the 7th International ITG Conference on Source and Channel Coding (SCC'08), 2008.