A Geometric Approach to Group Delay Network Synthesis



Introduction
The ever-increasing performance requirements of modern high-speed communication systems favor analog solutions for real-time signal processing applications. At high frequencies (such as millimeter-wave frequencies), analog devices outperform their digital counterparts in terms of cost, power consumption and the maximum attainable bandwidth [1]. Furthermore, the digitization of future wideband channels (such as the International Telecommunication Union 71-76 and 81-86 GHz bands) requires sampling speeds of at least 10 GSPS, currently achievable only at the cost of sacrificing resolution and dynamic range [2]. The fundamental building block of any analog signal processor is a delay structure [3]-[5] of prescribed response. This quasi-arbitrary group delay function can be synthesized by cascading all-pass sections to obtain a delay function approximating the required response to within a constant. The synthesized network may then either be inserted stand-alone into a system to perform an analog signal processing function, or cascaded with an existing network to reduce the group delay variation of the resulting combined system. This approach is compatible with several existing techniques [1], [6]-[12] for the synthesis of group delay networks. Initial methods for synthesizing continuous group delay functions (using analog all-pass networks) were aimed at reducing the group delay variation of color television receivers [11]. These methods typically take the form of design tables and graphs, as well as trial-and-error design approaches, and are therefore limited to particular applications.
In an effort to address these limitations, analytical techniques for synthesizing the group delay of an electrical network have been developed [3], [4], [10], [13]-[15]. Of these, only one presents a rigorous theoretical treatment of finding minimal-order solutions to the synthesis problem [13]. This is done by approximating the original system's group delay characteristic, to within a specified error, using a real Fourier series with an optimal number of all-pass sections. In this method, however, convergence of the system of analytical equations is sensitive to both the original group delay function and the choice of initial solutions, often failing to converge for practical cases [13]. The method is also limited to the low-pass case and converges poorly if the bandpass signal is treated as an extended low-pass spectrum [7], [8]. The approach presented in [3], [4] solves the approximation problem by generating a Hurwitz polynomial with the desired phase response at specified frequencies, which are chosen a priori. No rigorous method of choosing these frequency points, such that the resulting solution is minimal-order, is presented.
Due to these problems with state-of-the-art analytical synthesis methods, numerical approaches relying on optimization algorithms have been widely sought in the modern group-delay synthesis literature [1], [5], [6], [14]-[17]. The most popular approach [14] is based on approximating a desired group delay curve with a summation of second-order all-pass networks, to within an arbitrary additive constant. The theoretical approximation is expressed in terms of nonlinear constraints at specific points that are then perturbed to achieve quasi-equi-ripple convergence. Convergence of this method is sensitive to the chosen initial value set, though this shortcoming is partially alleviated by a trial-and-error approach. This method also fails to account for deviations of practical realizations from theoretical models, which are of particular importance in the design of high-frequency systems [18]. A similar approach is presented in [15], with the exception that the imaginary components of the all-pass poles and zeros are assumed to be the frequency locations of the local maxima of the resulting delay curve. As with the method in [14], it is impossible to analytically determine an initial solution set which guarantees convergence. Other approaches using differential evolution and genetic algorithms [1], [6], [16] have also been proposed. Such approaches tend to be computationally intensive and are prone to converge to local minima, as opposed to a global optimum.
In this work, we present a new numerical synthesis procedure for minimum-order series-cascaded all-pass networks having a quasi-arbitrary group delay response, making the following contributions to the state of the art:
1) Our method does not require an initial value set. Convergence is therefore not dependent on an appropriate selection.
2) The resulting group delay function approximates the required response to within any arbitrarily specified maximum delay variation across the passband, extending methods in the literature [3]-[6], [10], [13], [14] where the network's order is chosen, rather than, explicitly, the resulting maximum delay variation.
3) The method is implementation-abstracted, in that any theoretical, circuit, or parametrized numerical model description of a second-order all-pass section can be applied in the algorithm.
4) Due to the underlying analytical nature of the approach, rapid convergence is achieved, typically within 20 iterations for practical cases (as will be shown in the examples). This is an order of magnitude fewer iterations than required by more general state-of-the-art optimization methods (the genetic algorithm and simulated annealing), as is shown in comparative examples.
To maintain generality, this paper only considers second-order all-pass sections with complex poles and zeros, of which a special case is the first-order network centered around zero frequency. This paper is organized as follows. First, the problem of synthesizing a quasi-arbitrary group delay function is formulated geometrically on the complex s-plane and a derivation of the algorithm is presented. The proposed method is then demonstrated by synthesizing a linear group delay function with a sixth-order theoretical all-pass network. Next, in order to demonstrate the flexibility of the algorithm, Gaussian and higher-order delay functions are synthesized, followed by an example synthesis of a network to reduce the variation in the group delay response of a physically measured system (in this case, a fifth-order hairpin resonator BPF). Lastly, the proposed method is compared to existing techniques in the literature and the improvements are demonstrated. The synthesized all-pass networks are implemented in lumped-element [9] form. In all cases, National Instruments AWR Microwave Office 10 is used as the circuit simulator.

Approach
The group delay, as a function of frequency, of an ideal second-order all-pass network may be expressed in terms of the quadrature all-pass pole/zero pair locations by the relationship [13]:

τ_quad,x(ω) = 2σ_x / (σ_x² + (ω − ω_x)²) + 2σ_x / (σ_x² + (ω + ω_x)²),   (1)

where (−σ_x, ω_x) is the location of the x-th quadrant-II pole. The synthesis problem is then stated as the minimization

min { Δ_max [ τ_c(ω) + Σ_{x=1…N} τ_quad,x(ω) ] },   (2)

where the order of the all-pass network is 2N, Δ_max represents the maximum variation in the passband of interest and min is a numerical minimization algorithm. This is further illustrated in Fig. 1. The delay cost function τ_c can either represent the negative of the desired function (by reducing the variation Δ_max, an approximation to the desired function is then synthesized) or a practical system that is to be equalized.
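The cascade delay of (1) and the variation measure bounded in (2) are easy to evaluate numerically. The sketch below assumes the standard biquad all-pass delay form discussed above; the function names and the test band are ours, not the paper's.

```python
import numpy as np

def allpass_group_delay(omega, poles):
    """Group delay (s) of a cascade of ideal second-order all-pass
    sections, in the standard biquad form assumed here for (1).
    Each entry of `poles` is (sigma_x, omega_x): the quadrant-II pole
    at s = -sigma_x + j*omega_x plus its quadrature mirror at -omega_x."""
    omega = np.asarray(omega, dtype=float)
    tau = np.zeros_like(omega)
    for sigma_x, omega_x in poles:
        tau += 2 * sigma_x / (sigma_x**2 + (omega - omega_x)**2)
        tau += 2 * sigma_x / (sigma_x**2 + (omega + omega_x)**2)
    return tau

def delay_variation(tau):
    """Peak-to-peak variation: the quantity bounded by Delta_max in (2)."""
    return tau.max() - tau.min()

w = np.linspace(0.5e10, 1.5e10, 101)            # rad/s, illustrative band
tau = allpass_group_delay(w, [(1.0e9, 1.0e10)])
```

For a single section the delay peaks near ω = ω_x with value close to 2/σ_x, which is why narrow, tall delay bumps call for poles close to the jω-axis.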
As per the introductory discussion, a theoretical basis is desired for finding the N initial solution sets (σ_x, ω_x) which ensure that (2) converges to a global optimum.
Our approach in this paper simplifies the complexity of the minimization problem described in (2) by using a novel geometrical approximation to the ideal all-pass network of (1), subsequently leading to an approximate analytical solution to (2) in the form of initial all-pass pole/zero solution intervals. These intervals are found in such a way that (2) is necessarily monotonic over each interval and, as a result, any gradient-based optimization will always converge to a solution. The end result is either an equi-ripple solution (if at all possible for the given delay function and desired maximum variation) or a solution with the fewest local minima and maxima, both of which are optimal results [14].
After an initial solution is found using the approximation to (1), it is replaced with the more accurate all-pass network description of (1) and subsequently numerically optimized, in the knowledge that, by the preceding step, a globally optimal solution will be found. This results in an all-pass network composed of ideal all-pass quadrature pairs. The ideal all-pass quadrature pairs are then individually implemented in the passband of interest by using theoretical, circuit-simulated or otherwise parameterized numerical descriptions of practical all-pass networks [9]. The individual second-order circuit blocks are then optimized to achieve 1:1 equivalence with the theoretical all-pass sections they represent in the circuit, and are then cascaded with the original network without further optimization.

Theoretical Derivation
We will first find a simplification to (1), as mentioned above. Equation (1) may be simplified by assuming that ω_x >> BW/2 and by removing the negative-frequency pole/zero pair, without appreciably affecting the group delay curve at positive frequencies, where BW is the bandwidth of interest. Next, using the additive property of differentiation and the symmetry of the pole/zero pair about the Im jω-axis, it can be shown that the group delay of a single pole and that of a pole/zero pair are equivalent, provided that the pole and zero locations satisfy the relation given in (3), where p_x and z_x are the locations of the x-th pole and zero pair, respectively. An approximation to (1) can therefore be found by approximating the group delay of a single pole in the II quadrant of the s-plane, as described by Theorem 1 below. The derivation is presented in Appendix I.
Theorem 1. Assume that a single pole exists in the II quadrant of the complex s-plane at point P₁. Further assume that τ is the group delay caused by the pole at P₁ at a frequency ω = P₂, where P₂ is any point on the positive Im jω-axis of the s-plane. Then τ is given by the geometric relation of the theorem, where the error ε of the approximation satisfies lim ε = 0 as ω approaches Im(P₁). Theorem 1 is demonstrated graphically in Fig. 2, where the error of approximation approaches zero as τ approaches its maximum. This relationship forms the basis of the synthesis algorithm presented in this paper.
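The standard pole-zero statement behind Theorem 1 can be checked directly: a lone pole at s = −σ + jω_p contributes τ(ω) = σ/(σ² + (ω − ω_p)²), which is maximal (equal to 1/σ) exactly at ω = ω_p, consistent with the error vanishing where τ peaks. A brief sketch, with illustrative pole values of our choosing:

```python
import numpy as np

def single_pole_delay(omega, sigma, omega_p):
    """Delay contributed by a lone pole at s = -sigma + j*omega_p
    (standard result; Theorem 1 approximates this geometrically)."""
    return sigma / (sigma**2 + (omega - omega_p)**2)

sigma, omega_p = 1.0e9, 1.6e10
w = np.linspace(1.0e10, 2.2e10, 1201)
tau = single_pole_delay(w, sigma, omega_p)
peak_w = w[int(tau.argmax())]   # peak sits at the pole's Im component
```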
Having derived an approximation to (1), the problem of (2) can be solved. To realize a final cascaded system τ_e (Fig. 1) with equi-ripple variation in group delay, appropriate selections for τ_quad,x must be made. Since τ_c is assumed to be a piecewise smooth function, the introduction of τ_quad,x to satisfy (2) necessarily implies the introduction of new local minima and maxima to τ_e. Furthermore, by the equi-ripple requirement, a local minimum must be followed by a local maximum. The minimization problem can therefore be restated as: finding optimal locations for new local minima in τ_e, while simultaneously controlling the peaks of the resulting subsequent local maxima, such that the desired maximum peak-to-peak delay variation of τ_e is obtained.
An algorithm is developed for introducing such local minima and maxima with second-order all-pass networks, whereby a sequence of iterations successively introduces new poles p_x in the II quadrant of the s-plane until the desired peak group delay variation in (2) is obtained. These poles are later replaced by quadrature all-pass pairs, as per the transformation of (3), to preserve the initial system's magnitude response.
To introduce a new local maximum or minimum at a specific frequency of the group delay function τ_e, a pole must be placed on a semicircular curve on the s-plane, as described by Theorem 2, which is derived in Appendix I. The theorem also gives the location of p_x expressed equivalently in rectangular coordinates, in a right-handed coordinate system. Theorem 2 is illustrated in Fig. 3.
As β_x is traversed from π/2 to 0, the maximum peak delay value introduced by p_x first decreases (since the perpendicular distance from p_x to the Im jω-axis increases) and then increases again (after the apex of the semicircle is traversed), as per Theorem 1. When β_x > β_cx, ω_mx is a local maximum, and as β_x decreases, ω_mx transforms to a local minimum exactly once (see Appendix I for the derivation), where β_cx can be found from (8). The foregoing discussion is further illustrated in Fig. 4. The next step involves selecting one pole from the infinite set of allowed values for p_x, with β_x < β_cx, such that the local maxima preceding (at ω = ω_lx) and succeeding (at ω = ω_rx) the newly introduced local minimum are both equal in magnitude and differ from the local minimum by no more than the maximum allowed variation value Δ_max, as further illustrated in Fig. 5. Before solving (9) and (10) for ω_mx and β_x, the behavior of the group delay at any point other than ω_mx, as β_x changes in Fig. 3, is first established.
Since (12) has an impractically lengthy analytical solution, and since τ_e may be numerical in nature, a numerical solution is justified. An interval for ω_mx in which a unique solution can exist is found, given in (13), thereby systematizing the numerical approach; its upper bound ω_mx(max) is the first zero crossing of c₁ (itself a function of ω_mx) greater than ω_lx. A simple numerical root-finding algorithm is used to solve for ω_mx in the interval of (13). The next step is finding β_x such that (10) is satisfied. Similarly to the previous discussion, (11) is substituted into (10) and β_x is found using numerical root-finding methods in the interval of (14). This step concludes solving (9) and (10) for the x-th pole, p_x. Through a sequence of iterations, τ_c is traversed over the entire bandwidth, starting from the band edge with the larger group delay, resulting in the desired variation over the entire band of interest. Each new p_x reduces the delay variation to the specified maximum value over a portion of the bandwidth, ω ∈ [ω_lx : ω_rx]. This procedure is illustrated in Fig. 5, where two poles are introduced, starting at the lower frequency band edge.
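Because each interval brackets exactly one crossing, even the simplest bracketed method suffices. A minimal sketch of such a root finder, with a hypothetical stand-in residual in place of the paper's actual ω_mx condition:

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Bracketed bisection, standing in for the 'simple numerical
    root-finding algorithm' applied on the intervals of (13) and (14).
    Requires f(a) and f(b) to have opposite signs, i.e. the interval
    brackets a crossing, as the derivation guarantees."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("interval does not bracket a root")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if abs(fm) < tol or (b - a) < tol:
            return m
        if fa * fm <= 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# Hypothetical residual standing in for the omega_mx condition:
root = bisect(lambda x: x**3 - 2.0, 0.0, 2.0)
```

Bisection trades speed for robustness; on a monotonic interval it cannot escape the bracket, which is precisely the property the interval construction of (13) and (14) is designed to exploit.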
We summarize the synthesis unit step in the flow diagram of Fig. 6. Vector p̄ is the collection of all n poles p_x calculated thus far in the progression of the algorithm. For brevity, only traversal starting at the lower frequency band edge is described. The unit step is used iteratively by the main algorithm of Fig. 8, both to introduce new poles and also to resynthesize previously calculated poles. To simplify notation, the unit step is represented by the functional block of Fig. 7. In Fig. 8, q is set as the maximum allowed fractional variation between iterations of p_x, allowing for a successful termination of the algorithm.
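The outer loop of Fig. 8 can be sketched as follows. The interfaces here are hypothetical (the real unit step solves (9) and (10) as described above); only the termination logic, stopping once no pole moves by more than the fractional threshold q, follows the text.

```python
def synthesize(unit_step, n_max=50, q=0.03):
    """Skeleton of the main loop of Fig. 8 (interfaces are hypothetical).
    `unit_step(poles)` returns an updated pole list; iteration stops once
    no pole moves by more than the fractional threshold q between passes,
    mirroring the paper's termination criterion."""
    poles = []
    for _ in range(n_max):
        new_poles = unit_step(list(poles))
        if poles and len(new_poles) == len(poles):
            moved = max(abs(n - p) / abs(p) for n, p in zip(new_poles, poles))
            if moved < q:
                return new_poles
        poles = new_poles
    return poles

# Toy unit step (hypothetical) with a fixed point at 2.0:
def toy_step(poles):
    return [1.0] if not poles else [0.5 * p + 1.0 for p in poles]

result = synthesize(toy_step)
```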
If the left- and right-most peaks of the group delay are approximately equal, the equations for finding ω_mx and β_x have, in certain cases, no solution (for the desired maximum variation). This problem can be averted by simply raising/pre-distorting one band edge with an additional pole before proceeding with the algorithm of Fig. 8.
Finally, each individual τ_quad,x may be replaced by a more accurate parametric or numerical description of the delay element, provided that the delay function can be approximated by (1).

Example 1: Synthesis of a Linear Group Delay Function
We now demonstrate the proposed algorithm of Fig. 8 by synthesizing a network with a linear increase in group delay from 2 ns to 4 ns, as may be required when implementing a Fourier transform [1].This can be done by reducing the variation of the negative of the desired response (cost function) over the desired passband, as illustrated in Fig. 9.
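Constructing the cost function for this example is straightforward: negate the desired linear response. A sketch (the lower band edge matches the ω_l1 value quoted in this example; the upper edge and grid density are our assumptions):

```python
import numpy as np

# Desired response: delay rising linearly from 2 ns to 4 ns across the
# passband (upper band edge and grid are assumed for illustration).
w = np.linspace(1.256e10, 2.5e10, 401)                    # rad/s
tau_desired = np.interp(w, [w[0], w[-1]], [2e-9, 4e-9])   # seconds

# Cost function: the negative of the desired response; the algorithm
# then reduces its peak-to-peak variation toward the 300 ps target.
tau_c = -tau_desired
initial_variation = tau_c.max() - tau_c.min()             # 2000 ps
```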
The initial peak group delay variation of τ_e is 2000 ps and the desired final variation is set to 300 ps. From Fig. 9, ω_l1 = 1.256 × 10¹⁰ rad/s, and by using (12) and (13) it is found that ω_m1 = 1.618 × 10¹⁰ rad/s with β_c1 = 0.9553 rad.
Then (10), (11) and (14) are used to calculate β₁ = 0.4286 rad. The initial starting location for p₁ is therefore (−σ₁, ω₁) = (0.994, 17.790) × 10⁹. This value is then optimized numerically using a simple monotonic gradient-based optimizer (no longer using the approximation of Theorem 1 in (2)) and found to be (−σ₁, ω₁) = (0.995, 18.397) × 10⁹. The aforementioned results are summarized in Tab. 1. The first unit step is concluded with the desired variation satisfied over the interval ω ∈ [ω_l1 : ω_r1] = [1.26 : 1.82] × 10¹⁰ rad/s, as shown in Fig. 10. We repeat the unit step procedure as per the algorithm of Fig. 8. The second unit step is concluded with the desired delay variation satisfied over the interval ω ∈ [1.83 : 2.14] × 10¹⁰ rad/s, as shown in Fig. 11. The next step involves removing a previously synthesized pole and recalculating its position as per the modification τ_e → τ_e − τ_quad,x shown in Fig. 6. Figure 12 shows the group delay profile after this subtraction is performed. We now repeat the unit step procedure and re-compute the erased pole, as shown in Tab. 1 under iteration three. A comparison of iterations one and three reveals the improvement in the approximation of the location of pole p₁. The group delay synthesis algorithm repeats in this manner until the values of p₁, p₂ and p₃ converge (the threshold is set at q = 3%, which is arbitrarily chosen). The resulting error function variation is 278 ps over the entire bandwidth, whereas the specified variation was 300 ps. Agreement can be further improved by specifying a threshold criterion more stringent than q = 3%.
In order to synthesize the desired linear all-pass network, the obtained poles (as shown in Fig. 16) can be used to calculate the all-pass quadrature pair locations by using the transformation in (3).

Example 2: Synthesis of Gaussian and Quadratic Delay Functions
We now proceed to synthesize more complicated examples, namely Gaussian and quadratic delay functions, with peak delays of 2 ns and 1 ns respectively, as shown in Fig. 17(a) and Fig. 18(a). The cost functions τ_c are constructed (the desired variation of the error function is set to 130 ps and 80 ps respectively), resulting in the equi-ripple error curves τ_e shown in Fig. 17(b) and Fig. 18(b). Synthesized pole locations are shown in Fig. 17(c) and Fig. 18(c), while their convergence to the final solution is further illustrated in Fig. 17(d) and Fig. 18(d). Finally, the resulting Gaussian delay approximation is shown in Fig. 17(a) and the quadratic approximation in Fig. 18(a).
To illustrate the numerical process of synthesizing the Gaussian delay response, the iterations that produce new poles are shown in Fig. 19. This illustrates the systematic approach of the proposed algorithm, where the error delay curve is effectively "stitched up" into its final form by the inclusion of each subsequent second-order all-pass section. The accuracy of the approximation of Theorem 1 is confirmed by the correspondence of the dotted group delay curves (representing the approximate group delay obtained using Theorem 1) to the solid curves.

Example 3: Equalization of Measured BPF Group Delay Response with Circuit Co-simulation
The proposed algorithm is now demonstrated by reducing the delay variation of physical S-parameter measurements, in this case of a fifth-order coupled hairpin resonator BPF with a fractional bandwidth of 6.8% and magnitude and group delay responses as shown in Fig. 21(a-b). The generated ideal equalization poles are implemented separately in a circuit solver, using lumped-element second-order all-pass sections [9], as shown in Fig. 20.
The initial delay variation is measured as 4026 ps and a desired maximum variation of 1100 ps is set for the error function τ_e. This requirement results in a three-section equalizing all-pass network with parameter values as summarized in Tab. 2. A finite Q-factor of 400 is assumed for the inductors and capacitors. Values for the circuit elements are calculated directly from the theoretical second-order all-pass sections [9], [12], [18]. All of the individually tuned all-pass sections are cascaded in series (without further optimization) to obtain the final all-pass network, as shown in Fig. 1. The equalized curves are shown in Fig. 21(a-b).
The size and accuracy of the component values required in Tab. 2 rule out implementation with discrete surface-mount devices. This is true, in general, for microwave filters operating at C-band frequencies [20]. However, the required values are feasible on-chip using MIM capacitors and spiral inductors [21]. Furthermore, in the microelectronic realization, the Q-factors of the inductors may be achieved by either active enhancement of on-chip coil inductors [22] or implementation of active inductors with gm-C type impedance inversion of fixed MIM capacitors [23], [24]. In both cases, careful control of process tolerances (on-chip MIM capacitors have absolute and relative tolerances of ±25% and ±0.1%, respectively) is necessary to ensure the desired group delay response [21]. One approach is by means of post-production tunable CMOS varactors.
A resulting ripple of 1058 ps is obtained in Fig. 21(b). This is equivalent to a group delay variation reduction of 74% and 72%, respectively.
The reduced group delay is achieved at the cost of a 2.3 dB insertion loss increase. This increase is, however, constant across the band, which is characteristic of dissipative resistive losses attributed to the finite Q-factor of the lumped elements. The effect of a finite Q-factor on the insertion loss and resulting group delay variation of the cascaded system is further illustrated in Fig. 23. The sixth-order equalizing all-pass network is synthesized in only 10 iterations, due to the natural reduction in group delay variation experienced in practical filters because of finite resonator Q-factors.

Comparison of the Proposed Method with Existing Approaches
The Gaussian delay function synthesis (Fig. 17(b)) and practical BPF delay equalization (Fig. 21(b)) examples described above are repeated here, this time using existing approaches from the literature, in order to illustrate the shortcomings stated in the introduction.
The methods in [1], [5], [6], [13], [16] require as input the desired number of poles as well as their starting locations. Here, the starting values of the ω_x components are evenly sub-divided across the bandwidth of interest, while the starting σ_x components are all set so as to create a peak delay equal to the maximum initial delay variation of the cost function. This choice can be justified by observing the pole locations synthesized in the earlier examples.
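This placement heuristic can be sketched as follows. The 2/σ peak-delay expression used to pick σ is our reading of the criterion (it is the approximate peak delay of an ideal quadrature pair), not a formula quoted from the paper.

```python
import numpy as np

def initial_poles(w_lo, w_hi, n_poles, delta_max):
    """Starting values as described for the comparison methods: omega
    components evenly spaced over the band (interior points only), and a
    common sigma chosen so each ideal quadrature pair's peak delay,
    approximately 2/sigma, equals the initial variation delta_max."""
    omegas = np.linspace(w_lo, w_hi, n_poles + 2)[1:-1]
    sigma = 2.0 / delta_max
    return [(sigma, float(wx)) for wx in omegas]

# Illustrative band and initial-variation values (not from the paper):
poles = initial_poles(1.0e10, 2.0e10, 3, 2e-9)
```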
Using this initial pole placement, the theoretical treatment presented in [13] is first applied to find an all-pass network for the Gaussian cost function (Fig. 17(b)). No convergence could be obtained for any number of poles without reducing the bandwidth by 16.6% (the right-hand edge is removed). Fig. 24(a) shows the resulting group delay when five all-pass poles are used over this reduced bandwidth. A ripple of 786 ps is achieved over the originally specified passband, as opposed to the 126.6 ps demonstrated in Sec. 3.2 using our proposed method.
In a similar manner an equalizing network is synthesized for the BPF delay response (Fig. 21 (b)), using two all-pass poles (no convergence could be obtained for three poles) as shown in Fig. 24(b).
The above two examples illustrate the following shortcomings associated with the approach in [13]: 1) There is no explicit control over the resulting maximum error delay variation.
2) Often the method does not converge to a solution (depending on the delay cost function and the specified number of all-pass sections). In the preceding examples, this prevented the synthesis of the Gaussian delay to within the desired error ripple of 130 ps and the equalization of the BPF to within the specified ripple of 1100 ps, since the bandwidth and number of all-pass sections had to be adjusted to ensure convergence.
Numerical approaches aimed at finding optimal solutions in a large search space have gained prevalence in the modern group-delay synthesis literature [1], [5], [6], [14]-[16]. Here we investigate two such approaches which are well suited to the synthesis problem, namely the genetic algorithm and the simulated annealing technique. The maximum ripple of the error function is plotted versus the genetic algorithm generation in Fig. 24(c) and versus the simulated annealing iteration in Fig. 24(d), for a single run. A limit of 1000 generations and 2000 iterations, respectively, is imposed on the two numerical algorithms. A solution is found to the BPF delay equalization problem after 100 generations (genetic algorithm) and 150 iterations (simulated annealing). On the other hand, an optimal Gaussian group delay response is not successfully synthesized: no improvement occurs after the 300th generation and the 400th iteration, respectively. It is important to note that separate runs converge to different end results, due to the random initial seed values intrinsic to the aforementioned algorithms. Probability distributions of the maximum error ripples for these synthesis problems, shown in Fig. 24(e) and Fig. 24(f), are computed from a sample of 500 separate runs. The standard deviation is shown for each bin.
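The seed dependence referred to above is intrinsic to stochastic search. A generic simulated annealing sketch (not the paper's exact setup; the objective below is a toy stand-in for the delay-ripple cost) makes the mechanism concrete: the same seed reproduces the same end result, while different seeds may settle in different local minima.

```python
import math, random

def anneal(f, x0, steps=2000, t0=1.0, step=0.1, cool=0.995, seed=0):
    """Generic simulated annealing sketch: Gaussian perturbations,
    Metropolis acceptance, geometric cooling.  Distinct seeds may settle
    in distinct local minima, mirroring the run-to-run spread of
    Fig. 24(e-f)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        xn = x + rng.gauss(0.0, step)
        fn = f(xn)
        if fn < fx or rng.random() < math.exp((fx - fn) / max(t, 1e-12)):
            x, fx = xn, fn
            if fx < fbest:
                best, fbest = x, fx
        t *= cool
    return best, fbest

# Toy multimodal objective standing in for the delay-ripple cost:
def ripple_proxy(x):
    return (x * x - 1.0) ** 2 + 0.1 * math.sin(8.0 * x)

best, fbest = anneal(ripple_proxy, 3.0, seed=1)
```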
These results illustrate that convergence to a global optimum is a matter of finite probability and cannot be guaranteed (Fig. 24(e) and 24(f) show the large number of local minima that typically exist in the search space). For example, in the Gaussian synthesis problem, 10 parameters are optimized. If each parameter is assigned a subset of 100 points about the initial starting value, then 10²⁰ function evaluations are required to cover the entire search space. A simulation of 1000 generations requires roughly 20000 function evaluations, which covers only 2 × 10⁻¹⁶ of the search space. Further, the search-space partitioning could be too coarse and fail to find the solution altogether.
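The coverage arithmetic above is worth spelling out:

```python
# Bookkeeping behind the coverage estimate quoted in the text:
params = 10                  # parameters optimized (Gaussian problem)
points_per_param = 100       # candidate points per parameter
search_space = points_per_param ** params   # evaluations for full coverage
evaluations = 20_000         # roughly 1000 generations of a small population
fraction_covered = evaluations / search_space
```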
Only 1.8% of the genetic algorithm solutions for the Gaussian synthesis problem are optimal.On the other hand, an optimum solution is found to the simpler BPF equalization problem 76% of the time.This might still be insufficient for certain applications, such as the adaptive delay equalization of a practical system.
The proposed method of this paper overcomes the aforementioned limitations by, in each case, converging to an optimal solution with an order of magnitude fewer iterations.

Conclusion
An analytically based numerical method for synthesizing quasi-arbitrary group delay functions using minimum-order series-cascaded all-pass networks is presented. The method is compatible with any physical implementation of an all-pass delay network and is shown to converge to a globally optimal, equi-ripple solution, requiring an order of magnitude fewer iterations than state-of-the-art methods. It is also shown that current methods relying on the genetic and simulated annealing algorithms do not always converge to a global optimum (as the proposed method does). As proof of concept, linear, quadratic and Gaussian delay functions are synthesized to within arbitrarily specified maximum errors of approximation. The proposed method is further demonstrated by reducing the group delay variation of a physically measured fifth-order hairpin resonator BPF with 6.8% relative bandwidth by 72%.

Appendix I

Derivation of Theorem 1
Consider a single pole in the s-plane as shown in Fig. 25. Point P₁ represents the location of the pole of interest. Points P₂ and P₄ are arbitrary locations on the Im jω-axis separated by a differential distance Δω. The group delay of a linear system described by an S-parameter matrix can be calculated from the S₂₁ parameter as

τ(ω) = −d∠S₂₁(ω)/dω.   (15)

Equation (15) can be expressed in differential form as (16), allowing the relationship between τ and the geometrical representation of Fig. 25 to be established.
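Equation (15) is also how measured S-parameter data would be processed in practice: unwrap the S₂₁ phase and differentiate numerically. A sketch, checked against an ideal first-order all-pass section (section parameters are illustrative):

```python
import numpy as np

def group_delay_from_s21(omega, s21):
    """tau(omega) = -d(angle S21)/d(omega), evaluated numerically from
    unwrapped phase, as one would process measured S-parameter data."""
    phase = np.unwrap(np.angle(s21))
    return -np.gradient(phase, omega)

# Check against an ideal first-order all-pass section (s - a)/(s + a),
# whose analytic group delay is 2a / (a^2 + omega^2):
a = 1.0e9
w = np.linspace(1e8, 5e9, 2001)
s21 = (1j * w - a) / (1j * w + a)
tau_num = group_delay_from_s21(w, s21)
tau_exact = 2 * a / (a**2 + w**2)
```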
As ω is traversed from P₂ to P₄ by the differential Δω, the angle ∠S₂₁ increases by a corresponding differential amount. The ratio of these two differential changes is the group delay of S₂₁. The distance of P₁ from P₂ determines the resulting differential ratio, or group delay. With the aid of Fig. 26 and using the approximation presented in (16), we first find the relationship between Δφ and the distance P₂P₃. In order to impose the required simplicity of this approximation, it is assumed that a ≈ π/2 rad. This implies that the approximation is only valid for ω ≈ ω_px (where ω_px is the imaginary component of the pole p_x at point P₁ in Fig. 26). The validity of the approximation is only important near this point, as justified by the requirements of (9) and (10). Therefore we may now write the result stated in Theorem 1.

Derivation of Theorem 2
Let ω = ω_mx be the location of the x-th desired local minimum of τ_e on the jω-axis. The frequency ω = ω_mx is a local minimum or maximum if and only if the first-derivative condition (20) holds. Using Theorem 1, we can replace the right-hand side of (20) with its geometric approximation, and the resulting equation can then be rewritten accordingly. In order to simplify this result, polar coordinates are introduced, where β_x is defined as shown in Fig. 27. Equation (24) may then be written in the form stated in Theorem 2.

Derivation of (8)
A turning point is a local minimum if and only if τ''(ω) is continuous at ω_mx and both conditions of the second derivative test are met. Since the first condition has already been satisfied (in Theorem 2), a further restriction must be applied to ensure that the second condition is met. This restriction may be rewritten and simplified by using the polar coordinate representation, yielding (8).

Theorem 2.
Assume that τ_e(ω) represents the group delay of some system of interest. Suppose that a local maximum or minimum is required at ω = ω_mx. Then a new pole p_x must be placed in the s-plane anywhere on the semicircular curve r_px(β_x), where β_x is the angle with the +jω-axis and σ_x and ω_x represent the values of the Re and Im components of p_x.

Fig. 5. Graphical illustration of the group delay synthesis algorithm.

For brevity, not all iterations are shown in detail. The algorithm ends after 9 iterations, with the final cascaded error group delay shown in Fig. 13. By removing the cost function response τ_c, the resulting synthesized linear group delay function τ_s is shown in Fig. 14. Convergence of the three poles is shown in Fig. 15.

Fig. 15. Convergence of the pole components, measured as the distance of each respective pole from the origin of the s-plane.

Fig. 23. Percentage delay variation reduction and insertion loss increase of the cascaded system as a function of the Q-factor.

Fig. 24. (a) Cost and error group delay functions (Gaussian). (b) Initial and equalized group delay responses (practical BPF). (c) Genetic algorithm applied to the minimization of the Gaussian and BPF delay error functions. (d) Simulated annealing applied to the minimization of the Gaussian and BPF delay error functions. (e) Probability distribution of the deviation from the specified ripple for the Gaussian synthesis problem. (f) Probability distribution of the deviation from the specified passband ripple for the BPF delay synthesis problem.

Fig. 25. Geometry of the group delay response of a single pole.

Fig. 26. A geometrical representation of the group delay caused by a single pole.