Using Artificial Neural Networks in Solving Heat Conduction Problems

This paper solves an important class of inverse heat conduction problems by using a back-propagation neural network (BPN) to identify the unknown boundary conditions. The feasibility of the proposed method is examined in a series of numerical simulations. The results show that the proposed neural network method successfully predicts the unknown parameters with acceptable error.


INTRODUCTION
In forward heat conduction problems, the heating characteristics, the boundary conditions and the initial conditions of a body are known and are used to establish the internal temperature field. Conversely, in inverse heat conduction problems (IHCPs), experimental temperature measurements are taken at various points in the interior of a body and are used to estimate the unknown boundary conditions existing at the external surface. IHCPs are mathematically ill-posed in the sense that the existence, uniqueness and stability of their solutions cannot be assured (Beak, 1985). IHCPs are generally solved using some form of numerical technique. Classical approaches include space marching (Shin, 2000), the single future time step method, the function specification method, the regularization method and the trial function method (Beak, 1985). Since the 1970s, computer science and technology have advanced rapidly, and hence contemporary researchers generally solve IHCPs using numerical methods such as the finite element method, the finite difference method (FDM) and genetic algorithms (Raudensk, 1995).
The rapid development of artificial neural network technology in recent years has led to an entirely new approach to the solution of IHCPs (Raudensk, 1995). Neural networks are artificial intelligence systems that mimic the biological processes of the human brain by using non-linear processing units to simulate the functions of biological neurons.

DESCRIPTION OF FEED-FORWARD NEURAL NETWORKS
Artificial neural networks are modeled on the human brain and nervous system and are suitable tools for solving large-scale problems. There are many references on the theory, applications, modeling, algorithms, design, architecture and mathematics of neural networks. Artificial neural networks consist of calculation units called neurons. Every neuron has some real-valued inputs. Each input is multiplied by the corresponding synaptic (neural) coefficient, and the sum of all these products is added to a value called the bias. Finally, an activation function is applied to this sum and determines the real-valued output of the neuron, which is passed forward. In most applications, the activation function is the hyperbolic tangent or the logistic function 1/(1+e^(-x)).

Learning is the process of determining the optimal values for the synaptic weights. If this procedure is based on minimizing the error between the computed and the known desired output values, it is called supervised learning. In contrast, there are no desired output values in unsupervised learning.

The main power of neural computing arises from the many neurons that connect and adapt in order to form networks. The simplest topology for these networks is a group of neurons organized in one layer, called a single-layer network or perceptron. The architecture of multilayer neural networks is formed by stacking such one-layer networks. In multi-input, single-output feed-forward multilayer neural networks there is no recursion (feedback loop), and the outputs of the neurons form the inputs of the next layer. Such networks are called multilayer perceptrons (McClelland, 1986), which contain at least three layers: an input layer, an output layer and inner (hidden) layers (Fig. 1).
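The neuron computation described above (a weighted sum of the inputs plus a bias, passed through a logistic activation) can be sketched in Python with NumPy. The weights, biases and inputs below are illustrative values only, not taken from the paper:

```python
import numpy as np

def logistic(x):
    """Logistic activation 1 / (1 + e^(-x)), as described in the text."""
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(inputs, weights, bias, activation=logistic):
    """One neuron: weighted sum of the inputs plus the bias, then the activation."""
    return activation(np.dot(weights, inputs) + bias)

def layer_output(inputs, weight_matrix, biases, activation=logistic):
    """A fully connected layer: each row of weight_matrix holds one neuron's weights."""
    return activation(weight_matrix @ inputs + biases)

# A tiny 2-input, 3-hidden-unit, 1-output feed-forward pass (illustrative values).
x = np.array([0.5, -1.0])
W1 = np.array([[0.1, 0.4], [-0.3, 0.2], [0.5, -0.5]])
b1 = np.array([0.0, 0.1, -0.1])
W2 = np.array([[0.7, -0.2, 0.3]])
b2 = np.array([0.05])

hidden = layer_output(x, W1, b1)   # outputs of the single hidden layer
y = layer_output(hidden, W2, b2)   # network output
```

Because the logistic function maps every weighted sum into (0, 1), each layer's outputs stay bounded regardless of the weights.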
Accordingly, artificial neural networks can realize a nonlinear mapping from the inputs to the outputs of the corresponding system (Hornik, 1990). This makes them suitable for analyzing systems described by initial-boundary value problems that have no analytical solutions, or whose analytical solutions are not easily computable.
One of the applications of multilayer perceptrons is the ability to globally approximate real-valued multi-variable functions in a closed analytical form.

FORMULATION FOR INVERSE PROBLEMS
In this section, an inverse heat conduction problem of the following form is considered, where k denotes the heat conduction coefficient, g is the source term, and ρ and c represent the heat capacity coefficients. All these functions and parameters are considered to be known. The goal of this work is the determination of the boundary function q(t) using a neural network approach.
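The governing equations did not survive in this copy of the text. The following is a representative 1-D formulation, consistent with the symbols defined above and with the exact solutions in the examples below (in which q(t) coincides with the boundary temperature T(1, t)); the initial and left-boundary data T_0(x) and f(t) are assumed names, not taken from the original:

```latex
\rho c\,\frac{\partial T}{\partial t}
  = \frac{\partial}{\partial x}\!\left(k\,\frac{\partial T}{\partial x}\right) + g(x,t),
  \qquad 0 < x < 1,\; t > 0,
```

```latex
T(x,0) = T_0(x), \qquad T(0,t) = f(t), \qquad T(1,t) = q(t)\ \text{(unknown)},
```

with additional temperature measurements at an interior point supplied as the extra condition needed to recover q(t).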

MAPPING NEURAL NETWORK ARCHITECTURE
A feed-forward network with n input units and m output units can perform a mapping from an n-dimensional cube $[0,1]^n$ to an m-dimensional cube $[0,1]^m$. According to the original theorem proposed by Kolmogorov (Patterson, 1998): for all $n \ge 2$ and for any continuous real function $g$ of $n$ variables on the domain $[0,1]^n$, $g : [0,1]^n \to \mathbb{R}$, there exist $2n+1$ continuous, monotonically increasing one-variable functions $\phi_{ij}$ on $[0,1]$ by which $g$ can be reconstructed according to the following equation:

$$g(x_1, \dots, x_n) = \sum_{j=1}^{2n+1} \Psi_j\!\left( \sum_{i=1}^{n} \phi_{ij}(x_i) \right), \qquad (6)$$

where the $\Psi_j$ are continuous functions of one variable. Many authors have attempted to improve this theorem since its original formulation. Eq. (6) corresponds to a three-layered feed-forward network architecture with a single output. Based on the theoretical foundation above, and using a more complex proof process (Antognetti, 1991), the following theorem can be obtained: let $\varphi(u)$ be a non-constant, bounded and monotonically increasing continuous function. There exist an integer $k$ and sets of real constants $c_i$, $\theta_i$ and $w_{ij}$ ($i = 1, 2, \dots, k$; $j = 1, 2, \dots, n$), such that the expression

$$f(x_1, \dots, x_n) = \sum_{i=1}^{k} c_i \, \varphi\!\left( \sum_{j=1}^{n} w_{ij} x_j - \theta_i \right) \qquad (7)$$

can be defined to meet

$$\left| f(x_1, \dots, x_n) - g(x_1, \dots, x_n) \right| < \varepsilon, \qquad (8)$$

where $w_{ij}$ corresponds to the neuron weights. From the theoretical description above, for any $\varepsilon > 0$ there exists a three-layered feed-forward network in which the activation function of the hidden layer is $\varphi(u)$, the activation function of the input layer is non-linear, and that of the output layer is linear, and the three-layered feed-forward network given in Eq. (7) satisfies Eq. (8). The theorem presented above provides the foundation for applying the BPN network to the solution of IHCPs.
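Eq. (7) is simply a weighted sum of activated inner products and can be evaluated directly. The sketch below uses tanh as the non-constant, bounded, monotonically increasing function $\varphi$, with randomly chosen constants $c_i$, $w_{ij}$, $\theta_i$ for illustration only:

```python
import numpy as np

def phi(u):
    """A non-constant, bounded, monotonically increasing activation (here tanh)."""
    return np.tanh(u)

def three_layer_approx(x, c, w, theta):
    """f(x) = sum_i c_i * phi( sum_j w_ij x_j - theta_i ), the form of Eq. (7)."""
    return np.sum(c * phi(w @ x - theta))

# Illustrative constants for n = 2 inputs and k = 4 hidden units.
rng = np.random.default_rng(0)
c = rng.normal(size=4)
w = rng.normal(size=(4, 2))
theta = rng.normal(size=4)

x = np.array([0.3, 0.7])
fx = three_layer_approx(x, c, w, theta)
```

Since $|\tanh(u)| \le 1$, the output is always bounded by the sum of the absolute output weights, reflecting the boundedness required of $\varphi$ in the theorem.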

BACK PROPAGATION NEURAL NETWORK
Fig. 2 illustrates a general BPN network. As shown, this network is a feed-forward, fully connected, hierarchical M-layered network consisting of an input layer, M-2 hidden layers and an output layer. If the kth unit in the Mth layer is denoted by (M, k), the state variable $u_k^M$ for this unit and its output signal $y_k^M$ to the units in the next layer can be written as follows:

$$u_k^M = \sum_j w_{kj}^M \, y_j^{M-1} - \theta_k^M, \qquad y_k^M = \varphi\!\left(u_k^M\right).$$
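A minimal sketch of back-propagation training for such a network, assuming NumPy, a single hidden layer with logistic activations, a linear output layer and a toy target function; none of these specific choices (sizes, learning rate, target) are taken from the paper:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy supervised-learning task: fit a smooth function of two inputs.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(64, 2))     # training inputs
y = (X[:, :1] + X[:, 1:]) / 2.0          # known desired outputs

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(2000):
    # Forward pass through the hidden and output layers.
    h = logistic(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    # Backward pass: propagate the output error back through the layers.
    grad_out = 2 * err / len(X)                 # gradient of the mean squared error
    gW2 = h.T @ grad_out; gb2 = grad_out.sum(0)
    grad_h = grad_out @ W2.T * h * (1 - h)      # logistic derivative h(1-h)
    gW1 = X.T @ grad_h; gb1 = grad_h.sum(0)
    # Gradient-descent weight updates.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((logistic(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

This is the supervised-learning loop described earlier: the error between computed and desired outputs is minimized by adjusting the synaptic weights layer by layer.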

Example 1:
The exact solution is $T(x,t) = x\,e^{-t}$, $q(t) = e^{-t}$. Figure 3 demonstrates a comparison between the exact and approximate solutions for $q(t)$.

Example 2:
The goal is to estimate the boundary condition $q(t)$ by using an extra measurement condition at an interior point. The exact solution is $T(x,t) = x^2 + 2t$, $q(t) = 1 + 2t$. Figure 4 demonstrates a comparison between the exact and approximate solutions for $q(t)$.
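The inverse setup of this example can be sketched as follows: synthetic interior "measurements" are generated from the exact solution at an assumed sensor location $x^* = 0.5$ (the actual measurement point is not recoverable from this copy), and a single linear neuron trained by gradient descent, a minimal stand-in for the full BPN, recovers $q(t)$:

```python
import numpy as np

# Example 2 exact fields: T(x, t) = x**2 + 2t and q(t) = T(1, t) = 1 + 2t.
x_star = 0.5                        # assumed interior sensor location
t = np.linspace(0.0, 1.0, 50)
T_meas = x_star**2 + 2 * t          # interior temperature data (network input)
q_true = 1 + 2 * t                  # unknown boundary condition (network target)

w, b = 0.0, 0.0                     # neuron weight and bias
lr = 0.05
for _ in range(5000):
    q_pred = w * T_meas + b
    err = q_pred - q_true
    w -= lr * np.mean(err * T_meas) # gradient of the mean squared error w.r.t. w
    b -= lr * np.mean(err)          # gradient w.r.t. b

q_pred = w * T_meas + b
max_err = float(np.max(np.abs(q_pred - q_true)))
```

Here the exact relation is linear, $q(t) = T(x^*, t) + (1 - x^{*2})$, so even this one-neuron model can recover the boundary condition to high accuracy; the full BPN of the paper handles the general nonlinear case.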

CONCLUSION
In this work, feed-forward neural networks are used to solve a class of inverse heat conduction problems. The neural network approach successfully estimates the unknown boundary conditions with acceptable error, as demonstrated by the numerical examples.

Figure 1. Structure of the general feed-forward single-hidden-layer perceptron.

Such neural networks are universal approximators: they can be trained to approximate any Borel measurable function defined on a hypercube with any desired precision. The minimal conditions for the activation functions in the inner layers are non-linearity and integrability. The approximation precision is related to the number of neurons in the hidden layers and not to the number of hidden layers (Shekari, 2009).

Here, $\theta_k^M$ and $\varphi(\cdot)$ are the bias and activation function of unit $(M, k)$, respectively. The output signal $y_k^M$ is transmitted to all units in the next, $(M+1)$th, layer. In the current study, the BPN network is used to solve various IHCPs. The inputs of the network are the temperature data at specified points in the interior of the object of interest, while the outputs of the network are parameters relating to the boundary conditions (Deng, 2005).

Figure 4: Comparison of the exact and ANN solutions for Example 2.