THOR 50th Dummy Neck Calibration Analysis Based on Bi-LSTM Neural Network

In this paper, a neural network model is established from the neck calibration data of the THOR 50th crash test dummy using a bidirectional long short-term memory (Bi-LSTM) neural network algorithm. The model inputs are the factors affecting the neck calibration test, and the output is the peak value of My in the neck calibration test. The accuracy of the model is assessed by comparing its predictions with actual calibration data, and the accuracy and suitability of the Bi-LSTM model are further verified by comparison with a radial basis function (RBF) neural network algorithm.


Introduction
As the proportion of THOR 50M dummies among crash test dummies has significantly increased in current passive safety development, the requirements on dummy performance have increased as well, and multiple assessment protocols have been added to the calibration procedure. Neck calibration is the most complicated task, and it is similar to the calibration process of the Hybrid III 50th dummy. The calibration process of the THOR 50M neck can be divided into six parts: Neck Frontal Flexion, Neck Extension, Neck Lateral Left/Right Flexion, and Dynamic Response Left/Right Torsion. According to calibration test data, Neck Frontal Flexion and Neck Extension show the lowest completion rates, and Peak Upper Neck My is the most likely of the eight assessment criteria to fail; this criterion is therefore selected as the research subject of this study. In calibration tests, besides the peak value of each test item that enters the overall evaluation, the test curve should also be considered. It is therefore more accurate to treat the neck calibration test results as time-series data. However, the high sampling frequency of the calibration test produces repetitive, uninformative data, so selected points can be used instead of the entire time series for model fitting. The Bi-LSTM neural network is well suited to time-series data processing and is therefore selected for fitting in this study, and a conventional RBF neural network is introduced as a control to compare the fitting results of the two methods.

Bi-LSTM neural network
Bidirectional long short-term memory (Bi-LSTM) makes a neural network carry sequence information in both directions, backward (future to past) and forward (past to future). In a Bi-LSTM the input flows in two directions, which distinguishes it from a regular LSTM, where the input flows in only one direction; the bidirectional structure preserves both past and future information. A recurrent neural network (RNN) is a class of artificial neural networks in which connections between nodes form a directed graph along a temporal sequence. The model structure is shown below and includes an input layer x, a hidden layer s, and an output layer o, with a loop operation on the hidden layer s. The linear relationship parameters U, W, and V are shared by the RNN at all time steps, which greatly reduces the number of parameters to train. Long short-term memory is a kind of recurrent neural network proposed to solve problems of the RNN, such as vanishing and exploding gradients and the inability to keep track of long-term dependencies. The main structure of the LSTM is similar to that of the RNN; the main difference is that the LSTM protects its hidden activation using gates at each of its transaction points with the rest of the layer. The protected hidden activation is called the cell state, and it is guarded by the three LSTM gates: the forget, input, and output gates. The principle of the LSTM hidden activation structure is shown below, where f(t), i(t), and o(t) denote the values of the forget gate, input gate, and output gate at time t, respectively, and a(t) denotes the initial feature extraction from h(t-1) and x(t) at time t [1].

Fig. 2 Structure of LSTM neural network
The LSTM model can be described by the following equations:

f(t) = σ(W_f x(t) + U_f h(t-1) + b_f)
i(t) = σ(W_i x(t) + U_i h(t-1) + b_i)
o(t) = σ(W_o x(t) + U_o h(t-1) + b_o)
a(t) = tanh(W_a x(t) + U_a h(t-1) + b_a)

where x(t) is the input at the current timestamp, h(t-1) is the output from the previous LSTM block (at timestamp t-1), W is the input weight matrix for the respective gate, U is the weight matrix applied to the previous output, b is the bias for the respective gate, and σ is the sigmoid function. The results of the forget gate and input gate act on c(t-1), giving the cell state c(t) at timestamp t:

c(t) = f(t) ⊙ c(t-1) + i(t) ⊙ a(t)

where ⊙ is the Hadamard (element-wise) product. The Bi-LSTM network has two independent LSTMs. The input sequence is fed to the two LSTM networks in forward and reverse order for feature extraction, and the feature vector formed by splicing the two output vectors is used as the final feature expression of the instance [2].
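The gate equations above can be sketched directly in NumPy. The following is a minimal illustration, not the paper's implementation: the weight shapes, random initialization, and toy input sequence are all assumptions, and the two directions use independent parameters, as described for the Bi-LSTM.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step; W, U, b each hold the four gate parameter sets."""
    Wf, Wi, Wo, Wa = W
    Uf, Ui, Uo, Ua = U
    bf, bi, bo, ba = b
    f_t = sigmoid(Wf @ x_t + Uf @ h_prev + bf)  # forget gate
    i_t = sigmoid(Wi @ x_t + Ui @ h_prev + bi)  # input gate
    o_t = sigmoid(Wo @ x_t + Uo @ h_prev + bo)  # output gate
    a_t = np.tanh(Wa @ x_t + Ua @ h_prev + ba)  # candidate feature extraction
    c_t = f_t * c_prev + i_t * a_t              # cell state update (Hadamard products)
    h_t = o_t * np.tanh(c_t)                    # hidden output
    return h_t, c_t

def run_lstm(xs, n_hidden, params):
    """Run an LSTM over a sequence and return the final hidden output."""
    h, c = np.zeros(n_hidden), np.zeros(n_hidden)
    for x_t in xs:
        h, c = lstm_step(x_t, h, c, *params)
    return h

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5  # illustrative sizes

def init_params():
    W = tuple(rng.normal(scale=0.1, size=(n_hidden, n_in)) for _ in range(4))
    U = tuple(rng.normal(scale=0.1, size=(n_hidden, n_hidden)) for _ in range(4))
    b = tuple(np.zeros(n_hidden) for _ in range(4))
    return W, U, b

xs = [rng.normal(size=n_in) for _ in range(6)]               # toy 6-step sequence
fwd = run_lstm(xs, n_hidden, init_params())                  # forward-order pass
bwd = run_lstm(list(reversed(xs)), n_hidden, init_params())  # reverse-order pass
feature = np.concatenate([fwd, bwd])  # spliced Bi-LSTM feature vector
```

The final feature vector has twice the hidden size, since the forward and backward outputs are concatenated.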

Bi-LSTM neural network in neck calibration test
In the calibration process of neck frontal flexion and neck extension, the main factors affecting peak upper neck My are the number of honeycomb aluminum holes, the front cable force, the rear cable force, the neck condition, and the temperature. Given the instability of these five factors during the calibration process, a traditional data-based fitting approach uses too little information to learn the features of the THOR 50th dummy neck properly. Moreover, no significant correlation can be observed between the five factors, and their effect on peak upper neck My is nonlinear; a Bi-LSTM neural network is therefore selected for this study. This network is a deep recurrent neural network that can form a good nonlinear approximation using both forward and backward data information. In addition, the sensor data collected from the calibration test are treated as time-series data and, for the purpose of this study, were simplified into three stage values, which were entered into the model simultaneously with the five influencing factors described above to predict the neck calibration results. The neck of the THOR dummy is constructed from rubber and metal, and the rubber has the greatest impact on the overall performance of the neck. However, the rubber part of the neck is difficult to evaluate directly, and the neck assembly is not a disposable part, so the overall performance of the neck in the calibration scenario is irregular and cannot be assessed intuitively. A subjective evaluation of the neck is therefore conducted based on the neck calibration test results: velocity, peak upper neck My, peak upper neck Fz, peak head angular rate, and peak head angle are chosen as the criteria to assess the current state of a specific neck assembly and to conduct the overall evaluation. Each criterion has an upper and a lower limit, and the neck assembly can be considered to be in good condition when the calibration test data are evenly distributed between the limits of each criterion. It is therefore necessary to calculate, for each criterion, the proportion of the difference between the actual value and the midpoint of the upper and lower limits relative to the length of the limit interval (expressed as a decimal), as shown in the table below.
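The per-criterion value described above can be sketched as follows. All corridor limits and measured values here are hypothetical placeholders, not figures from the paper, and the way the five scores are aggregated into a single neck condition value is left open.

```python
def criterion_score(value, lower, upper):
    """Signed deviation of a measurement from the corridor midpoint,
    expressed as a fraction of the corridor length (0 = exactly centered)."""
    mid = (lower + upper) / 2.0
    return (value - mid) / (upper - lower)

# Hypothetical corridors (lower, upper) and one test's measured values.
corridors = {
    "velocity":            (3.4, 3.6),
    "peak_upper_neck_My":  (60.0, 80.0),
    "peak_upper_neck_Fz":  (1000.0, 1400.0),
    "peak_head_ang_rate":  (25.0, 35.0),
    "peak_head_angle":     (55.0, 70.0),
}
measured = {
    "velocity": 3.52,
    "peak_upper_neck_My": 72.0,
    "peak_upper_neck_Fz": 1180.0,
    "peak_head_ang_rate": 29.5,
    "peak_head_angle": 63.0,
}
scores = {name: criterion_score(measured[name], *corridors[name])
          for name in corridors}
```

A value inside the corridor always yields a score between -0.5 and 0.5, with scores near zero indicating a well-centered result.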

Normalization process
To improve the stability of the calculation, the data are normalized so that the values fall in the range [0, 1], using the formula below.

Y = (X − Xmin) / (Xmax − Xmin)
where X is the actual value of a variable, Xmax is its maximum value, Xmin is its minimum value, and Y is the normalized value. After the model makes its predictions, the predicted values need to be denormalized so that error analysis can be performed against the true values, using the formula:

X = Y (Xmax − Xmin) + Xmin
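A minimal sketch of the normalization and denormalization formulas, applied to toy values rather than calibration data:

```python
def normalize(x, x_min, x_max):
    """Min-max scaling into [0, 1]: Y = (X - Xmin) / (Xmax - Xmin)."""
    return (x - x_min) / (x_max - x_min)

def denormalize(y, x_min, x_max):
    """Inverse mapping back to the original scale: X = Y*(Xmax - Xmin) + Xmin."""
    return y * (x_max - x_min) + x_min

values = [12.0, 15.5, 20.0]   # toy data
lo, hi = min(values), max(values)
norm = [normalize(v, lo, hi) for v in values]   # all in [0, 1]
back = [denormalize(y, lo, hi) for y in norm]   # round-trips to the originals
```

The two functions are exact inverses, so predictions made in normalized space can be compared with true values on the original scale.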

Parameter settings
In general, the parameters to be set include the network structure, the number of units, the learning rate, the training algorithm, and the number of training iterations. The model has two layers, with a dropout layer in the middle to prevent overfitting and a fully connected layer at the end. It is usually adequate to use the same number of neurons for all hidden layers. For some datasets, a large first layer followed by smaller layers leads to better performance, because the first layer can learn many lower-level features that feed into a few higher-order features in the subsequent layers.
In addition, a performance boost is more likely to come from adding more layers than from adding more neurons to each layer; adding too many neurons to a single hidden layer is therefore not recommended [3].
A common rule of thumb for the number of hidden neurons N_h is

N_h = N_s / (a (N_i + N_o))

where N_i is the number of neurons in the input layer, N_o is the number of neurons in the output layer, N_s is the number of samples in the training set, and a is an arbitrary value, usually in the range of 2 to 10. Other rules of thumb state that the number of hidden neurons should be between the size of the input layer and the size of the output layer, should be 2/3 of the size of the input layer plus 2/3 of the size of the output layer, and should be less than twice the size of the input layer [4]. In practice, the optimal number of neurons in the hidden layer should be obtained through experiment: start with a small configuration, such as 1 to 5 layers and 1 to 100 neurons, then slowly add layers and neurons if the model underfits, and reduce them if it overfits. Batch normalization, dropout, regularization, and other methods can also be introduced to reduce overfitting. The two neck calibrations in this study are influenced by the same factors, and the number of hidden neurons was calculated from the data to lie between 2 and 20. The appropriate number of hidden layer neurons was found by sampling that range at steps of 3, i.e., [2, 5, 8, 11, 14, 17, 20]; the results were as follows, and the number of hidden layer neurons was chosen as 5.
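The rules of thumb above can be checked with a few lines of arithmetic. The layer sizes and sample count below are illustrative assumptions, not the study's actual values:

```python
# Illustrative sizes: input neurons, output neurons, training samples.
n_i, n_o, n_s = 8, 1, 120

# Rule 1: N_h = N_s / (a * (N_i + N_o)), with a typically between 2 and 10,
# giving a plausible range for the hidden layer size.
nh_range = (n_s / (10 * (n_i + n_o)), n_s / (2 * (n_i + n_o)))

# Rule 2: 2/3 of the input layer size plus 2/3 of the output layer size.
nh_two_thirds = round(2 * n_i / 3 + 2 * n_o / 3)

# Rule 3: fewer than twice the input layer size.
nh_cap = 2 * n_i

print(nh_range, nh_two_thirds, nh_cap)
```

The heuristics only bracket a search range; the final count still has to be confirmed experimentally, as the study does by scanning candidate values.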

Neck frontal flexion test
The prediction results of the neck frontal flexion tests are as follows. The prediction error of the Bi-LSTM neural network ranged from -0.25% to 0.2%, a good result, confirming that the selected factors are indeed key factors affecting the neck test. In contrast, the RBF neural network produced deviations from -10% to 6%, indicating significantly lower suitability than the Bi-LSTM neural network.

Neck extension test
The prediction results of the neck extension tests are as follows. The prediction error of the Bi-LSTM neural network ranged from -0.3% to 0.4%, a good result, confirming that the selected factors are indeed key factors affecting the neck test. In contrast, the RBF neural network produced deviations from -3% to 3%, indicating significantly lower suitability than the Bi-LSTM neural network.
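The deviation ranges reported above can be reproduced in form (not in value) by computing the relative error of each prediction; the predicted/actual pairs below are made up for illustration:

```python
def rel_error_pct(predicted, actual):
    """Relative prediction error in percent."""
    return (predicted - actual) / actual * 100.0

# Made-up predicted/actual peak My pairs, not the paper's data.
actual    = [70.2, 68.9, 71.5]
predicted = [70.1, 69.0, 71.4]
errors = [rel_error_pct(p, a) for p, a in zip(predicted, actual)]
error_range = (min(errors), max(errors))  # reported as "from min% to max%"
```

The quoted ranges, such as -0.3% to 0.4%, are the minimum and maximum of such per-sample errors over the test set.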

Fig. 5 Deviation for different numbers of hidden layer neurons on Neck Extension

Fig. 7 Neck frontal flexion predicted values vs. actual values using Bi-LSTM neural network

Fig. 11 Neck extension predicted values vs. actual values using Bi-LSTM neural network

The number of honeycomb aluminum holes, the front cable force, the rear cable force, the neck condition, the temperature, and the three stage values described above were entered into the model to analyze the neck calibration test with the following model.
In that table, the closer the value is to zero, the better the neck condition. The neck condition value of a test is referred to as the Neck Goal, shortened to NG in this study.