Cascade Structure of Digital Predistorter for Power Amplifier Linearization

In this paper, a cascade structure of a nonlinear digital predistorter (DPD) synthesized by a direct learning adaptive algorithm is presented. The DPD is used to linearize the power amplifier (PA) characteristics, i.e. to compensate the PA nonlinear distortion. The blocks of the cascade DPD are described by different models: the functional link artificial neural network (FLANN), the polynomial perceptron network (PPN) and the radially pruned Volterra model (RPVM). Synthesizing the DPD as a cascade makes it possible to overcome the ill-conditioning problem by reducing the dimension of the DPD nonlinear operator approximation. Results of compensating the nonlinear distortion in a Wiener-Hammerstein PA model driven by a four-carrier GSM signal are shown. The highest accuracy of PA linearization is produced by the cascade DPD containing the PPN and the RPVM.


Introduction
With the development of mobile communication, the requirements for transmitting systems containing power amplifiers (PAs) become increasingly stringent. A PA is a nonlinear device: the transmitted signal is distorted in it and its spectrum extends beyond the boundaries of the communication channel bandwidth. As a result, in multi-channel communication systems the distortion caused by the influence of adjacent channels is amplified and inter-channel interference emerges [1].
To prevent the spreading of the PA output signal spectrum while maintaining high PA energy efficiency, the PA characteristics are linearized. One universal linearization method is digital predistortion (precompensation), which is characterized by robustness, simplicity of predistorter hardware implementation and efficient cancellation of nonlinear distortion [3]. The purpose of a digital predistorter (DPD, precompensator) is to linearize the PA characteristics by introducing a predistortion that compensates the nonlinear PA distortion. In wideband communication channels, high-efficiency PAs are described by nonlinear dynamic models; therefore DPDs must have nonlinear dynamic models, too [1].
Both modifications of polynomial DPD models and DPD synthesis on the basis of neural networks are being developed [3]. Neural models can be much simpler than polynomial ones, which is important for DPD hardware implementation.
The present work describes the use of the functional link artificial neural network [2]-[4] and the polynomial perceptron network [4], [5] for the synthesis of a cascade DPD in order to improve the cancellation of nonlinear signal distortion in the PA. A comparative analysis of the DPD models is carried out using the example of compensating nonlinear distortion in the Wiener-Hammerstein PA model driven by a four-carrier GSM signal.

Cascade Structure of Adaptive DPD
The DPD introduces nonlinear predistortion in order to compensate the PA nonlinear distortion, i.e. to linearize the PA.
The structure of an adaptive DPD connected to the PA in accordance with the feedforward algorithm of DPD learning is shown in Fig. 1 [6]. Here the DPD is described by the nonlinear operator S composed of two nonlinear operators S1 and S2, which characterize the two cascade-connected blocks, respectively. The operators S1 and S2 enter the operational equations

zd(n) = S1[x(n)],  z(n) = S2[zd(n)],

where n is the normalized discrete time, x(n) and z(n) are the DPD input and output signals respectively, and zd(n) is the output signal of the block described by the operator S1 and shown in Fig. 1.
Models of the nonlinear operators S1 and S2 are constructed by solving the approximation problem

sum_{n=1}^{Nx} |y(z(n)) - x(n)|^2 -> min,  (1)

where y(z(n)) is the PA output signal and Nx is the number of samples of the input signal x(n). At first we build a model of the operator S1, followed by a model of the operator S2.
Polynomials and neural networks can be used as models of the nonlinear operators. The parameters of the cascade DPD nonlinear models are derived from the solution of the approximation problem (1) by an iterative procedure in which the linearization error at the k-th iteration is

e_k(n) = y(z_k(n)) - x(n),

where k is the iteration number and z_k(n) is the DPD output at the k-th iteration; at each iteration the DPD parameters are updated so as to reduce this error. For k = 1 let us assume that the PA is weakly nonlinear, so that its model can be represented as a convergent Volterra series. As a result the predistorter is weakly nonlinear as well. This condition ensures quick convergence of the iterative DPD synthesis procedure.
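As an illustration, the iterative procedure can be sketched numerically for a toy memoryless cubic PA. Everything here is an assumption for illustration (the PA model, the two-term basis, the step size mu), not the paper's operators S1, S2:

```python
import numpy as np

rng = np.random.default_rng(0)
# Complex baseband excitation x(n) (illustrative stand-in for the GSM signal)
x = rng.normal(0, 0.3, 2048) + 1j * rng.normal(0, 0.3, 2048)

def pa(u):
    # Toy memoryless PA with mild cubic compression (assumed, for illustration)
    return u - 0.1 * u * np.abs(u) ** 2

# DPD basis: linear and cubic terms of the input signal
Phi = np.column_stack([x, x * np.abs(x) ** 2])
w = np.array([1.0 + 0j, 0.0 + 0j])   # start from the identity predistorter
mu = 0.5                             # assumed learning rate

for k in range(20):
    z = Phi @ w                      # DPD output z_k(n)
    e = pa(z) - x                    # linearization error e_k(n)
    # Project the error onto the basis and step the weights against it
    w = w - mu * np.linalg.lstsq(Phi, e, rcond=None)[0]

nmse = 10 * np.log10(np.sum(np.abs(pa(Phi @ w) - x) ** 2)
                     / np.sum(np.abs(x) ** 2))
```

Starting from a weakly nonlinear predistorter (w = [1, 0]) mirrors the k = 1 assumption in the text; the cubic weight grows toward the pre-inverse of the PA compression and the residual NMSE drops accordingly.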
In the cascade DPD synthesis the approximation problem (1) is decomposed: the high-dimensional problem (1) is divided into two approximation problems of lower dimension that are solved in the DPD blocks consecutively. Under this approach the ill-conditioning of the approximation problem (1) is resolved.
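A small numerical illustration of why the decomposition helps (a sketch with an assumed plain power basis, not the paper's models): the condition number of the regression matrix grows rapidly with polynomial degree, so two lower-degree fits are better conditioned than one high-degree fit.

```python
import numpy as np

x = np.linspace(-1, 1, 200)

# One high-degree approximation: power basis of degrees 1..9
full = np.column_stack([x ** i for i in range(1, 10)])
# One of two lower-degree sub-problems: degrees 1..4
half = np.column_stack([x ** i for i in range(1, 5)])

c_full = np.linalg.cond(full)
c_half = np.linalg.cond(half)
# c_half is much smaller than c_full, i.e. each sub-problem is
# numerically better behaved than the original joint problem.
```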

Functional Link Artificial Neural Network
FLANN is a single-layer network [2]-[4]. Therefore its learning algorithm converges to the solution of the approximation problem more rapidly and is simpler than the learning algorithms of traditional multilayer neural networks.
The FLANN model is expressed as

y(n) = f(W^T Phi(X(n))),  (2)

where f is the nonlinear activation function of the network, X(n) is the vector of input signals and W is the weight vector. The functional link 1) transforms the input signals into basis functions built, for example, from trigonometric, Legendre or Chebyshev polynomials, and 2) carries out a multidimensional transformation of the obtained basis functions. The basis functions are formed to reduce the condition number when solving a highly nonlinear approximation problem.
The FLANN structure is shown in Fig. 2. For the DPD synthesis let us take a linear activation function f in the model (2) and Chebyshev polynomials T_i(X(n)) of degree i, i = 1, 2, ..., P, as the basis functions [2], [7]. From (2) we infer

y(n) = W^T Phi(X(n)),  (3)

where Phi(X(n)) is the vector of Chebyshev basis functions. In the considered case the block named "Functional link" (Fig. 2) has the structure shown in Fig. 3. The "Functional link" block consists of two blocks: in the first block Chebyshev polynomials of different degrees are formed from the set of input signals, and the second block performs the multidimensional transformation.
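The Chebyshev part of the functional link can be sketched with the three-term recurrence T_{i+1}(x) = 2x T_i(x) - T_{i-1}(x); the function name and interface below are illustrative, not from the paper:

```python
import numpy as np

def chebyshev_basis(x, P):
    """Return the columns [T_1(x), ..., T_P(x)] for samples x in [-1, 1]."""
    T = [np.ones_like(x), x]                # T_0(x) = 1, T_1(x) = x
    for _ in range(2, P + 1):
        T.append(2 * x * T[-1] - T[-2])     # three-term recurrence
    return np.column_stack(T[1:])           # drop T_0, keep degrees 1..P

x = np.linspace(-1, 1, 5)
B = chebyshev_basis(x, 3)
# The last column is T_3(x) = 4x^3 - 3x
```

The multidimensional transformation (products of delayed basis terms) would then be applied to these columns before the weighting by W^T in (3).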
The FLANN model with Chebyshev basis functions is called the Chebyshev functional link artificial neural network (CFLANN). CFLANN corresponds to a two-layer perceptron neural network [7].

Tab. 1. Multipliers of CFLANN members of the 3rd degree.

Tab. 2. Multipliers of CFLANN members of the 5th degree.

Tab. 3. Multipliers of CFLANN members of the 7th degree.
The CFLANN of a predistorter should form an output signal whose spectrum lies within the PA bandwidth at the input signal frequencies and at the frequencies of the intermodulation products generated by the nonlinear PA [8].
The CFLANN model terms of odd degrees (from the 1st to the 9th) are produced by multiplying every component of the top columns of Tab. 1-4 by every component of the corresponding bottom columns. In Tab. 1-4 the sign * denotes complex conjugation and i is the signal delay.
This multiplication produces the vector Phi(X(n)), which is then multiplied by the vector W^T in (3). As a result we form the CFLANN model of odd degree.

Polynomial Perceptron Network
PPN is a single-layer network [4], [5]. It is characterized by the simplicity of its learning algorithm and a high speed of convergence to the solution of the approximation problem.
The PPN model is described by the expression [4], [5]

y(n) = f(W^T F(X(n))),  (4)

where f is the nonlinear activation function of the network, X(n) is the vector of input signals, F is the multidimensional polynomial function-vector, P is the degree of a function-vector element and y(n) is the output signal of the model (4).
The PPN structure is shown in Fig. 4. A comparison of PPN and FLANN shows that FLANN (Fig. 2) is obtained from PPN (Fig. 4) by additionally transforming the input signals into basis functions using Legendre polynomials, Chebyshev polynomials, etc.
For the DPD synthesis let us assume that the activation function in the model (4) is linear. Then (4) can be rewritten as

y(n) = W^T F(X(n)).  (5)

The input signal vector X(n) is formed on the basis of a delay line. The multidimensional transformation F in the model (5) is performed under the condition of constructing the intermodulation spectral components of the DPD output [8], [9].
As a result we obtain the PPN model

y(n) = sum_{m=0}^{M} sum_{p=0}^{(P-1)/2} w_{p,m} [x(n-m)]^{p+1} [x*(n-m)]^{p},  (6)

where * is the sign of complex conjugation, M is the memory length and P is the odd degree of the polynomial. The expression (7) describes the radially pruned Volterra model (RPVM). RPVM is a regressive form of the truncated Volterra series: the Volterra kernels in RPVM are built on a hypercube grid and the radial directions are selected on the basis of the 3rd-order kernel [9], [10].
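The diagonal (memory-polynomial) part of such a model can be sketched as a regression matrix plus a least-squares fit. This is only the diagonal-kernel core; the RPVM's pruned cross terms are omitted, and the column layout, helper name and toy target below are assumptions:

```python
import numpy as np

def mp_basis(x, M, P):
    """Regression matrix of an odd memory polynomial: columns
    x(n-m) |x(n-m)|^(2p) for m = 0..M, p = 0..(P-1)/2."""
    N = len(x)
    cols = []
    for m in range(M + 1):
        # Delayed signal x(n-m), zero-padded at the start
        xm = np.concatenate([np.zeros(m, dtype=x.dtype), x[:N - m]])
        for p in range((P - 1) // 2 + 1):
            cols.append(xm * np.abs(xm) ** (2 * p))
    return np.column_stack(cols)

rng = np.random.default_rng(1)
x = rng.normal(0, 0.3, 1000) + 1j * rng.normal(0, 0.3, 1000)
y = 0.9 * x - 0.05 * x * np.abs(x) ** 2   # toy target inside the model class

Phi = mp_basis(x, M=2, P=5)               # (M+1)*((P-1)/2 + 1) = 9 columns
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
rel_err = np.linalg.norm(Phi @ w - y) / np.linalg.norm(y)
```

Since the toy target lies in the span of the basis, the least-squares residual is at machine-precision level.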

Compensation of Nonlinear Distortion in PA Wiener-Hammerstein Model
In practice a nonlinear PA exhibits memory effects; therefore a nonlinear predistorter with memory should be used to compensate the PA nonlinearity.
In the presented example, the PA is described by a Wiener-Hammerstein model composed of a linear time-invariant (LTI) system, followed by a memoryless nonlinearity, which is in turn followed by another LTI system [9], [11]. The LTI blocks before and after the memoryless nonlinearity have the system functions H1(z) and H2(z), respectively. The input signal of the PA model is the complex envelope of a GSM signal with four carriers in the frequency baseband with a bandwidth of 20 MHz. The sampling frequency of the GSM-signal complex envelope is 184.32 MHz.
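The LTI-nonlinearity-LTI cascade can be sketched as follows. The FIR coefficients h1, h2 and the cubic nonlinearity are illustrative assumptions; the paper's H1(z) and H2(z) are not reproduced here:

```python
import numpy as np

h1 = np.array([1.0, 0.3])    # assumed first LTI block (FIR)
h2 = np.array([1.0, -0.2])   # assumed second LTI block (FIR)

def wiener_hammerstein(x):
    """Toy Wiener-Hammerstein PA: LTI -> memoryless nonlinearity -> LTI."""
    v = np.convolve(x, h1)[: len(x)]       # first LTI system
    u = v - 0.1 * v * np.abs(v) ** 2       # memoryless cubic nonlinearity
    return np.convolve(u, h2)[: len(x)]    # second LTI system

rng = np.random.default_rng(2)
x = rng.normal(0, 0.2, 512) + 1j * rng.normal(0, 0.2, 512)
y = wiener_hammerstein(x)
```

Because the memory sits on both sides of the nonlinearity, the model cannot be inverted by a memoryless predistorter, which motivates the memory models of the previous sections.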
The dependences of the normalized magnitude and the phase change of the PA output signal on the normalized magnitude of the PA input signal are depicted in Fig. 5 (a), (b). The adaptive DPDs based on the CFLANN of the 9th degree, the RPVM (7) with P = 7, the PPN (6) with P = 7 and the cascade connections of these models are constructed to linearize the above-mentioned PA model. It should be noted that all the upper summation indexes in expressions (7) and (8) are (P - 1)/2. The memory length of the investigated models is 4 (M = 4 in (7), (8)).
The PA linearization error is estimated by the normalized mean-square error (NMSE), calculated as

NMSE = 10 log10 ( sum_{n=1}^{Nx} |y(z(n)) - x(n)|^2 / sum_{n=1}^{Nx} |x(n)|^2 ),

where x(n) is the complex-valued input signal of the cascade connection of the DPD and PA shown in Fig. 1, Nx = 106339 is the number of samples in the input signal x(n), and y(z(n)) is the complex-valued PA output signal. As can be seen from Tab. 5, the PPN model provides higher PA linearization accuracy than the RPVM and the CFLANN. The use of the cascade DPD structure further increases the PA linearization accuracy; the highest accuracy is achieved by the cascade DPD with the PPN and RPVM.
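The NMSE metric itself is straightforward to compute. A minimal sketch (the gain/delay alignment that a real measurement needs is omitted):

```python
import numpy as np

def nmse_db(x, y):
    """NMSE in dB between the reference x(n) and the PA output y(n)."""
    return 10 * np.log10(np.sum(np.abs(y - x) ** 2) / np.sum(np.abs(x) ** 2))

x = np.array([1 + 0j, 0.5j, -0.25])
y = 1.01 * x                 # a 1 % amplitude error everywhere
# nmse_db(x, y) = 20*log10(0.01) = -40 dB
```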
For the cascade DPD with the PPN and RPVM models, the dependences of the normalized magnitude and the phase change of the PA output signal on the normalized magnitude of the DPD input signal are shown in Fig. 6 (a), (b). The PSD of the PA output signal with this cascade DPD is depicted in Fig. 6 (c).
A comparison of Fig. 6 with Fig. 5 shows that the cascade DPD consisting of the PPN and RPVM models provides high-quality linearization of the PA Wiener-Hammerstein model.

Conclusions
On the basis of the decomposition of the DPD nonlinear operator approximation problem into two sub-problems solved consecutively, the ill-conditioning of the DPD nonlinear operator approximation can be removed, owing to the reduced dimension of each approximation problem. This decomposition is realized in the cascade DPD synthesis. The DPD block models are built as the functional link artificial neural network, the polynomial perceptron network and the radially pruned Volterra model. The DPD is synthesized to compensate the nonlinear distortion in the PA Wiener-Hammerstein model driven by the complex envelope of a four-carrier GSM signal. The predistortion results show that the cascade DPD composed of the PPN and RPVM linearizes the PA with the highest accuracy, while the cascade DPD based on the CFLANN yields the lowest accuracy of PA linearization.

Fig. 1. The structure of the DPD feedforward learning algorithm.
The signal entering the memoryless nonlinearity is the output of the LTI system with the system function H1(z). In Fig. 5 (c) the power spectral densities (PSD) of the described PA model input and output signals without DPD are shown by the dotted (1) and solid (2) lines, respectively.

Tab. 5.

Fig. 6. PA characteristics and signal PSD with the cascade DPD including the PPN and RPVM models.