STUDY OF RESONANT MICROSTRIP ANTENNAS ON ARTIFICIAL NEURAL NETWORKS

This paper presents a new model based on the backpropagation multilayered perceptron network to find accurately the bandwidth of both electrically thin and thick rectangular microstrip antennas. The proposed neural model does not require the complicated Green's function methods and integral transformation techniques. The method can be used for a wide range of substrate thicknesses and permittivities, and is useful for the computer-aided design of microstrip antennas. The results obtained by using this new method are in conformity with those reported elsewhere. This method may find wide applications in high-frequency printed antennas, especially in the millimeter-wave frequency range.


INTRODUCTION
Microstrip antennas have sparked interest among researchers because of their many advantages over conventional antennas, such as low cost, light weight, conformal structure, low profile, reproducibility, reliability, and ease of fabrication and integration with solid-state devices. Recently, interest has developed in radiators etched on electrically thick substrates. The need for theoretical and experimental studies of microstrip antennas with electrically thick substrates is motivated by several factors, chief among them the need for greater bandwidth.
One established way of calculating the bandwidth of a rectangular microstrip antenna involves the evaluation of a double integral. Among others, this approach has been introduced by Perlmutter et al. [1]; it resembles the method used by Uzunoglu et al. [2] and Van der Paw [3] for various other problems.
Artificial neural networks are known to provide simpler and faster solutions than these complicated methods and techniques. Features such as the ability to learn and adapt, generalisation, modest information requirements, fast real-time operation, and ease of implementation have made them popular in recent years [4]. An artificial neural network is a system designed to model the way in which the brain performs a particular task or function of interest. Although the precise operational details of artificial neural networks are quite different from those of human brains, they are similar in three respects: they consist of a very large number of neurons; each neuron is connected to a large number of other neurons; and the functionality of the network is determined by modifying the strengths of the connections during a learning phase.
This paper presents a new model based on the backpropagation multilayered perceptron network to find accurately the bandwidth of both electrically thin and thick rectangular microstrip antennas. The results obtained from this model are in excellent agreement with the results available in the literature.

BANDWIDTH OF RECTANGULAR MICROSTRIP ANTENNAS
Fig. 1 shows a rectangular microstrip patch antenna. The input impedance of a rectangular microstrip antenna, which can be modeled by a simple parallel-resonant RLC circuit, can be expressed as

Z_in = R / (1 + jQν)  (1)

with

ν = f/f_r − f_r/f  (2)

where R is the resonant resistance, Q is the quality factor, f is the frequency variable, and f_r is the resonant frequency. In the series-resonant case, the input impedance is given by

Z_in = R(1 + jQν)  (3)

The input VSWR can be written as

VSWR = (1 + |Γ|) / (1 − |Γ|),  Γ = (Z_in − Z_0)/(Z_in + Z_0)  (4)

where Z_0 is the characteristic impedance of the feed line. If the bandwidth criterion is taken to be VSWR ≤ s, and f_1 and f_2 are the lower and upper band-edge frequencies, respectively, so that VSWR(f_1) = VSWR(f_2) = s, the bandwidth is given as

BW = (f_2 − f_1)/f_r  (5)

From (1)-(5), the following equation is obtained:

BW = (1/Q) √[(Ts − 1)(s − T)/s]  (6)

where T = Z_0/R in the series-resonant case, and T = R/Z_0 in the parallel-resonant case.
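The bandwidth expression above, BW = (1/Q)√[(Ts − 1)(s − T)/s], is simple to evaluate directly. The following Python sketch computes it; the Q, T, and s values in the usage line are illustrative and not taken from the paper:

```python
import math

def vswr_bandwidth(Q, T, s=2.0):
    """Fractional bandwidth of a resonant patch for the criterion VSWR <= s.

    T = Z0/R in the series-resonant case, T = R/Z0 in the parallel case.
    The formula is real-valued only when 1/s < T < s, i.e. when the
    antenna is not mismatched beyond VSWR = s already at resonance.
    """
    if not (1.0 / s) < T < s:
        raise ValueError("mismatch at resonance already exceeds VSWR = s")
    return math.sqrt((T * s - 1.0) * (s - T) / s) / Q

# For a matched feed (T = 1) this reduces to the familiar (s - 1)/(Q*sqrt(s)).
bw = vswr_bandwidth(Q=50.0, T=1.0, s=2.0)
```

Note that a higher Q narrows the bandwidth, which is why electrically thick (lower-Q) substrates are of interest.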
As we are interested in resonant antennas, the physical length L of the patch is not of importance; it is determined by the resonance condition together with the fringing-field edge extensions at the radiating edges.

BACK PROPAGATION MULTI LAYERED PERCEPTRON NETWORKS
Multilayered perceptrons [4], which are among the simplest and therefore most commonly used network structures, have been adopted for many applications. Fig. 2 shows an MLP with three layers: an input layer, an output layer, and an intermediate or hidden layer. The circles and the connection lines in the figure represent neurons and weights, respectively. The biases are not shown in the figure. Each layer consists of a number of neurons. All the neurons in a layer are fully connected to the neurons in the adjacent layers, but there are no connections between neurons within the same layer.
Neurons in the input layer act only as buffers, distributing the input signals x_i to the neurons in the hidden layer. Each neuron j in the hidden layer sums its input signals x_i after weighting them with the strengths of the respective weight connections w_ji from the input layer, and computes its output y_j as a function f of the sum:

y_j = f( Σ_i w_ji x_i + θ_j )

where f is a transfer or activation function, which can be a sigmoid or a hyperbolic tangent function, and θ_j is a variable bias with a function similar to a threshold; the same expression gives the output of the jth neuron in the output layer. The transfer function has the feature of being nondecreasing and differentiable, and the range of y_j is between −1.0 and 1.0. The transfer function used in the hidden layers is the hyperbolic tangent,

f(x) = tanh(x) = (e^x − e^{−x}) / (e^x + e^{−x})

Note that y_j can be defined recursively in terms of its inputs; the computation continues until the output of the network is found. After computing the output, the training process starts in accordance with the learning algorithm used.
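The forward computation described above can be sketched in NumPy as follows. The layer sizes, the linear output layer, and the [−0.1, +0.1] initial weights are assumptions based on the model and training setup described later in the paper:

```python
import numpy as np

def forward(x, W_h, b_h, W_o, b_o):
    """One forward pass through a three-layer MLP.

    Hidden neurons compute y_j = tanh(sum_i w_ji x_i + theta_j);
    the output layer here is taken to be linear.
    """
    hidden = np.tanh(W_h @ x + b_h)   # tanh keeps hidden outputs in (-1, 1)
    return W_o @ hidden + b_o, hidden

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=3)             # e.g. scaled (h, W/lambda0, eps_r)
W_h = rng.uniform(-0.1, 0.1, size=(5, 3))  # small random initial weights
b_h = np.zeros(5)
W_o = rng.uniform(-0.1, 0.1, size=(1, 5))
b_o = np.zeros(1)
y, _ = forward(x, W_h, b_h, W_o, b_o)      # single scalar output (the bandwidth)
```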
It is an iterative training process in which an output error is propagated back through the layers and used to modify the weights. The error E is defined by

E = (1/2) Σ_j (t_pj − y_pj)²

where t_pj is the desired or target value of output for a given input pattern p, and the summation is performed over all output neurons j. Once the outputs from the hidden layers and the output layer have been calculated for each input pattern p, the direction of steepest descent in parameter space is determined by the partial derivatives of E with respect to the weights and biases; the analysis proceeds from the output layer back to the preceding layers. The quantities δ_pj can be calculated in parallel for all output neurons j as

δ_pj = (t_pj − y_pj)(1 − y_pj)(1 + y_pj)  (21)

The corresponding quantities δ for the hidden layers can then be written as

δ_pj = (1 − y_pj)(1 + y_pj) Σ_k δ_pk w_kj  (22)

where j refers to a neuron in one of the hidden layers, and the summation is over all neurons k that receive signals from neuron j.
Substituting (21) and (22) into the expressions for the partial derivatives, the steepest-descent direction from the current weight-bias configuration is obtained. The weights w_ji and biases θ_j are then changed according to

w_ji(t+1) = w_ji(t) + η δ_pj x_i + α Δw_ji(t)
θ_j(t+1) = θ_j(t) + η δ_pj

where t indexes the number of training iterations of the neural model, η is the learning coefficient, and α is the momentum coefficient, which determines the effect of past weight changes on the current direction of movement in the weight surface.
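One training step of this update rule can be sketched as follows, assuming a single hidden layer with tanh activation and a linear output neuron. The coefficient values η = 0.1 and α = 0.9 are illustrative only (the paper does not report its settings), and in this sketch momentum is applied to the weights but not to the biases:

```python
import numpy as np

def backprop_step(x, t, W_h, b_h, W_o, b_o, state, eta=0.1, alpha=0.9):
    """One steepest-descent step with momentum for a 1-hidden-layer MLP.

    Output delta for a linear output neuron: d_out = (t - y).
    Hidden delta, using the tanh derivative (1 - y)(1 + y):
    d_hid = (1 - y_h)(1 + y_h) * (W_o^T d_out).
    `state` holds the previous weight changes for the momentum term.
    """
    y_h = np.tanh(W_h @ x + b_h)
    y = W_o @ y_h + b_o
    d_out = t - y
    d_hid = (1 - y_h) * (1 + y_h) * (W_o.T @ d_out)
    # weight change = eta * delta * input + alpha * previous change
    dW_o = eta * np.outer(d_out, y_h) + alpha * state["dW_o"]
    dW_h = eta * np.outer(d_hid, x) + alpha * state["dW_h"]
    W_o += dW_o
    b_o += eta * d_out
    W_h += dW_h
    b_h += eta * d_hid
    state["dW_o"], state["dW_h"] = dW_o, dW_h
    return 0.5 * float(np.sum((t - y) ** 2))  # error E for this pattern
```

Repeated calls on the training tuples drive the error E toward the stopping threshold.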

NEURAL COMPUTING OF THE BANDWIDTH, SIMULATION RESULTS AND DISCUSSIONS
The neural model for calculating BW is shown in Fig. 3. In the figure, LF and TF represent the linear activation function and the hyperbolic tangent function used in the MLP structure, respectively.
A set of random values distributed uniformly between -0.1 and +0.1 was used to initialise the weights of the networks.The tuples were scaled between -1.0 and +1.0 before training.
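These initialisation and scaling steps can be sketched as below; the column-wise min-max mapping is an assumption, since the paper does not state how the tuples were scaled into [−1, +1]:

```python
import numpy as np

def scale_to_unit(data):
    """Linearly map each column of the training tuples into [-1, +1].

    Returns the scaled data plus the per-column (min, max) needed to
    scale unseen test tuples identically. Assumes no constant columns.
    """
    lo, hi = data.min(axis=0), data.max(axis=0)
    scaled = 2.0 * (data - lo) / (hi - lo) - 1.0
    return scaled, lo, hi

def init_weights(shape, rng):
    """Random initial weights distributed uniformly in [-0.1, +0.1]."""
    return rng.uniform(-0.1, 0.1, size=shape)
```

Scaling the unseen test set with the training minima and maxima keeps the two sets consistent.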
The adaptation in this study was carried out after the presentation of each input tuple (h, W/λ0, εr, and BW) until the rms error of the learning process was less than 0.009. The rms errors obtained were 0.009 for training and 0.012 for testing.
In order to demonstrate the validity of the neural model, a data set unseen by the network, obtained from moment methods [5] for W/λ0 = 0.3 and εr = 2.55, was also used to test the performance of the network. The test results of the model are shown in Figs. 4 to 7.
Both sets of test results illustrate that the performance of the network is quite robust and precise. Thus, the neural model calculates the bandwidth of a resonant rectangular microstrip patch antenna with very good agreement.

CONCLUSION
A new method based on artificial neural networks trained with the backpropagation algorithm for calculating the bandwidth of both electrically thin and thick rectangular microstrip antennas has been presented. As can be seen from Fig. 4, there is an excellent agreement with the data from the Green's function methods, which supports the validity of the neural models.
Since the neural model presented in this work has high accuracy in the ranges 1.1 ≤ εr ≤ 10.0 and 0 ≤ h/λd ≤ 0.15 and requires no complicated mathematical functions, it can be very useful for the development of fast CAD algorithms. This CAD model, capable of accurately predicting the bandwidths of rectangular microstrip antennas, is also very useful to antenna engineers. Using this model, one can calculate the bandwidth of rectangular patch antennas accurately on a personal computer (PC), without possessing any background knowledge of microstrip antennas. Although training takes less than ten minutes, after training the bandwidth is produced in under a hundred microseconds on a Pentium/100 MHz PC in real-time calculation. Thus, the neural model is very fast after training.
Finally, we expect that the neural models will find wide applications in CAD of microstrip antennas and microwave integrated circuits.

Figure 3: A neural model for bandwidth calculation of a rectangular microstrip antenna.


Figure 4: The training results of the bandwidth of a rectangular microstrip antenna as a function of dielectric thickness for εr = 1.1.

Figure 5: The training results of the bandwidth of a rectangular microstrip antenna as a function of dielectric thickness for εr = 1.1.
Figure 6.

Figure 7: The training results of the bandwidth of a rectangular microstrip antenna as a function of dielectric thickness for εr = 2.2.