Tumor Detection Neural Network Using Texture Features

Abstract: Magnetic Resonance Imaging (MRI) is a medical imaging technique that provides information about human tissue anatomy. MRI images have high tissue contrast, which helps the radiologist visualize the finer details of the most delicate parts of the body, such as the brain. Because of the variety of tumor shapes, locations and image intensities within an MRI, diagnosing a brain tumor correctly at an early stage is a challenging task for radiologists. In this paper, a Tumor Detection Neural (TDN) network has been developed that detects whether a test MRI contains a benign or malignant tumor. First, the tumor region is extracted using image processing techniques that enhance the image and remove the skull region. Then the texture features of the image are extracted using the Gray Level Co-occurrence Matrix (GLCM), with which the TDN network is trained. On the basis of this training, the TDN network displays a message indicating whether the test image contains a benign or malignant tumor.


INTRODUCTION
A tumor is a mass of tissue that grows out of control of the normal forces that regulate growth. A tumor not only destroys healthy tissue but also produces adverse effects such as inflammation, brain swelling and pressure within the skull. Human brain tumors are complex, and accurate diagnosis of these tumors is essential in order to provide a prognosis of tumor development [1].
Brain tumors can be categorized into two main types: benign and malignant. Benign tumors have cells that do not invade the tissues around them and do not spread to other parts of the body. Benign tumors have clear borders or edges, but can sometimes still be life threatening. Malignant tumors are usually more serious; they grow more rapidly and invade the surrounding healthy brain tissue. Depending on the symptoms, doctors ask the patient to undergo several tests to identify internal abnormalities. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are most commonly used to locate brain tumors, and the information obtained from the MRI images decides the type of treatment a patient receives [2]. MRI is perhaps the most widely used clinical diagnostic and research technique. It is an efficient medical imaging tool based on the principle of nuclear magnetic resonance (NMR).
The MRI images may contain noise and sometimes text, so these images need pre-processing to enhance them; image enhancement algorithms such as histogram equalization are used. To extract the tumor region, the skull border of the MRI is removed. Feature extraction obtains information about the image in the form of numerical data that describes its texture. Texture features are obtained with the Gray Level Co-occurrence Matrix (GLCM). The GLCM is a tabulation of how often different combinations of pixel brightness values, or grey levels, occur in an image. The texture features extracted are contrast, correlation, energy, homogeneity, dissimilarity, entropy and maximum probability. These seven features serve as inputs to the neural network.
An Artificial Neural Network (ANN) is a highly interconnected network of a large number of processing elements called neurons. This network is inspired by the brain. ANNs exhibit mapping capabilities, or pattern association, and learn by example. Backpropagation is a systematic method of training multilayer artificial networks; it is built on solid mathematical foundations and has very good application potential [8].
The aim of this paper is to extract the tumor region and determine its texture features, with the help of which the TDN network determines the type of the tumor. MRI images of different types of brain tumor form the database, which is stored in the knowledge base for this project.

METHODOLOGY
The TDN network developed for tumor detection is a feed-forward backpropagation neural network. The knowledge base is made up of 50 MRI images having benign or malignant tumors. The block diagram of the TDN network is shown in figure 1.
In the testing phase, an unknown MRI image with a tumor is given for testing. This test image is pre-processed to extract the tumor region, and its texture features are extracted using the GLCM. The TDN network compares these features with the features of the images in the knowledge base. Based on the results of this comparison, the network displays a message indicating whether the input image has a benign or malignant tumor.

IMPLEMENTATION RESULTS
The implementation is done in four steps: image pre-processing, extraction of the tumor region, texture feature extraction and, finally, the neural network. The following sections show the results obtained while implementing these steps.

Image Pre-processing
The TDN network is trained on 50 MRI images, of which 25 have a benign tumor and the other 25 a malignant tumor. One of the images with a malignant tumor is shown in figure 3. The MRI is converted to grayscale if it is colored. Then it is resized to 210 x 210 so that all images have the same size. The contrast of the image is enhanced with histogram equalization; the result of this operation is shown in figure 4. The intensity adjustment of the image is done using an intensity mapping operation. The gamma value is kept greater than 1 so that the intensity mapping of the image is done towards the darker side. This is shown in figure 5. The skull is removed so that the tumor region can be extracted; figure 6 shows the image after skull removal. For further operations this image needs to be converted into a binary image. The threshold for binarisation is decided using Otsu's thresholding method. The resulting binary image is shown in figure 7.
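The pre-processing pipeline described above can be sketched in Python with scikit-image. This is an illustrative reconstruction, not the paper's original code: the exact gamma value is an assumption (the paper only states gamma > 1), and the skull-removal step, whose method the paper does not detail, is left as a placeholder.

```python
import numpy as np
from skimage import color, exposure, filters, transform

def preprocess(mri, gamma=1.5):
    """Grayscale conversion, resizing to 210 x 210, histogram
    equalization, gamma adjustment (gamma > 1 maps intensities
    towards the darker side) and Otsu binarisation."""
    if mri.ndim == 3:                          # colored -> grayscale
        mri = color.rgb2gray(mri)
    mri = transform.resize(mri, (210, 210), anti_aliasing=True)
    mri = exposure.equalize_hist(mri)          # contrast enhancement
    mri = exposure.adjust_gamma(mri, gamma)    # darken via gamma > 1
    # (skull removal, not detailed in the paper, would happen here)
    return mri > filters.threshold_otsu(mri)   # binary image
```

The function accepts either a color or a grayscale slice and returns a boolean image ready for connected-component labeling.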

Extraction of tumor region by labeling the connected components
The connected components of the binary image are labeled using 8-connectivity. Then the centers of mass of the labeled regions are marked to determine the positions of the labeled components.
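This labeling step can be sketched with SciPy's ndimage module (a minimal sketch, not the paper's implementation; a 3 x 3 all-ones structuring element gives the 8-connectivity the paper specifies):

```python
import numpy as np
from scipy import ndimage

def label_components(binary):
    """Label connected components with 8-connectivity and compute
    each labeled region's center of mass."""
    eight = np.ones((3, 3), dtype=int)         # 8-connectivity structure
    labels, n = ndimage.label(binary, structure=eight)
    centers = ndimage.center_of_mass(binary, labels, range(1, n + 1))
    return labels, centers
```

With 8-connectivity, diagonally touching pixels belong to the same component, which matters for thin or irregular tumor boundaries.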

Texture features extraction
The texture features of the grayscale image are obtained using the gray level co-occurrence matrix (GLCM). The GLCM functions characterize the texture of the image with the tumor region. The features extracted are contrast, correlation, energy, homogeneity, dissimilarity, entropy and maximum probability.

Artificial neural network
A feed-forward backpropagation network is built and named the Tumor Detection Neural (TDN) network. The inputs to this network are the GLCM features of the 50 images that form the knowledge base. The TDN network has seven inputs (the GLCM features) and two hidden layers: the first hidden layer consists of 21 neurons and the second hidden layer consists of 2 neurons. Based on the texture features extracted from the knowledge base, the TDN is trained to decide whether an input test image has a benign or malignant tumor. As shown in figure 14, the detection made by the TDN network is displayed in the form of a message box.
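The TDN architecture can be sketched with scikit-learn's `MLPClassifier` as a stand-in for the paper's network. The feature values and labels below are synthetic placeholders (the paper trains on real GLCM features from 50 MRI images), and a single integer label replaces the paper's two-unit [0 1]/[1 0] target encoding, which `MLPClassifier` handles internally.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative stand-in for the 50-image knowledge base: 7 GLCM
# features per image with synthetic labels (0 = benign, 1 = malignant).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 7))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Feed-forward network trained by backpropagation, matching the TDN
# shape: 7 inputs, hidden layers of 21 and 2 neurons.
tdn = MLPClassifier(hidden_layer_sizes=(21, 2), max_iter=3000,
                    random_state=0)
tdn.fit(X, y)

def diagnose(features):
    """Decision mirroring the TDN message box."""
    return 'benign' if tdn.predict([features])[0] == 0 else 'malignant'
```

In the real system the rows of `X` would be the seven GLCM features stored in the training spreadsheet.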

CONCLUSION AND FUTURE SCOPE
The TDN network has two hidden layers. The number of neurons in the first layer is 21, and in the second layer the number of neurons (H) is 2, for the two outputs 0 and 1. It is observed that the number of neurons has to be greater than the number of inputs; for the TDN network the number of inputs, i.e. the texture features, is 7, and the numbers of neurons are multiples of the number of inputs. It can be concluded that by selecting the number of neurons as approximately three times the number of texture features, with a training sample size (the number of MRI images in the knowledge base) of 50, a good classification accuracy of 87.5% is achieved.
In this project, the area of the extracted tumor region has been calculated. The network can be made more efficient if both the area and the volume of the tumor are obtained. This requires all the scanned MRI slices of the whole brain and software that produces a three-dimensional view of the tumor present in the brain. In the testing phase, every time a new MRI image is tested, its features can be added to the knowledge base. This would result in a wider variety of tumors being included in the knowledge base for detection. It would also increase the size of the training sample used to train the neural network and thus increase the accuracy of the network.

Figure 9 :
Figure 9: White image. A white image is taken and the labeled regions are made black in it. Then this white image and the binary image are logically ANDed so that the unwanted regions other than the tumor are removed. This step is repeated until the image is left with only the tumor region. Figure 9 is the white image with the labeled regions made black, and figure 10 is the result of ANDing the white image and the binary image.

Figure 10:
Figure 10: Result of ANDing the white image and the binary image

Figure 11:

Figure 12 :
Figure 12: The Excel sheet for training the network. The features extracted from the image of figure 11 are stored in an Excel sheet, shown in figure 12. The first seven columns of the sheet are the seven GLCM features, and the last two columns are the targets. The targets are [0 1] if the image has a benign tumor and [1 0] if it has a malignant tumor.

Figure 14 :
Figure 14: The message displayed by the TDN network

RESULTS FOR THE TESTING PHASE
After the training phase, the TDN network is given other MRI images with tumors for testing. In the testing phase, the network compares the features of the test image with the features of the images in the knowledge base. Based on the results of this comparison, the network displays a message indicating whether the input image has a benign or malignant tumor. The results are given below in table 1. The results in the table are for four images taken from the Harvard Medical School website, which provides MRI images of brain diseases. In this way, more MRI images can be tested using the TDN network.

Figure 15 :
Figure 15: GUI displaying the test results

Table 1:
Results for the test images

(This work is licensed under a Creative Commons Attribution 4.0 International License.)