An Integrated System for Feature Evaluation of 3D Images of a Tissue Specimen

In this article we propose an integrated system for the measurement of important features from 3D tissue images. We describe a segmentation technique in which several methods are combined to achieve a good degree of automation. Important histological and cytological three‐dimensional features, and strategies to measure them, are described. Figures can be viewed in colour at http://www.esacp.org/acp/2002/24‐23/adiga.htm


Introduction
Visual inspection of a tissue specimen is adequate to identify different diseases. The same cannot be said for evaluating different sub-classes of the same disease [1]. Quantitative study of tissue images is essential to determine the sub-classes of a disease. A key factor in this is the availability of techniques for segmentation of individual cell nuclei and quantification of their features. Segmentation of tissue images has received wide attention, as can be seen from the several techniques proposed in the literature [2][3][4]. However, none of these methods is global in nature; many of them tend to fail when applied to images acquired with a different set-up. We propose a combination of some of the robust techniques into one method, making it more robust and global in nature. The importance of segmentation lies in the fact that it makes it possible to measure implicit and explicit features of the cells/tissue. In earlier work on 2D histopathological image analysis by Choi [5] and Nordin [6], lists of features based on contours, textures and regions were proposed. Though they are described with reference to 2D images, most of them can be directly extended to multi-dimensional images, with consideration of a correction factor for the anisotropy of voxels. In the later part of this article we propose methods to measure some of the important features that can be related to different subclasses of a disease.
The general principle of combining different segmentation methods to obtain better results was presented earlier in the works of Grinekar [7], Anderson et al. [8] and Pavlidis and Liow [9]. Our earlier work on an integrated approach to segmentation was presented in [10]. In most histo-pathological images, cells are compactly arranged. This requires a priori information about the possible locations of the cell surfaces where cells appear to touch or overlap one another. This can be obtained by initializing an approximate cell surface model around the cell and allowing it to deform in shape so that it converges to the actual cell surface. This is a 3D extension of the active contour model proposed by Kass et al. [11]. The main disadvantage of deformable models is the need for interactive initialization very close to the cell surface, which is hard to realize in complex 3D images of tissue specimens. One solution to this problem is to use another segmentation technique to approximately isolate the cell regions; the surface of the isolated regions can then serve as the initial surface model for the deformable model technique. This method is explored in this paper.

Material
Images were obtained using a Confocal Laser Scanning Microscope (CLSM) by pathologists at GSF, Munich, Germany for the purpose of Fluorescence in situ Hybridization (FISH) signal evaluation in prostate tumor. The results of those experiments and details of the specimen preparation protocols are published elsewhere [12,13]. The image data specification is as follows. Fluorescent images were scanned using a Confocal Laser Scanning Microscope (CLSM) Zeiss LSM 410, lens Zeiss PNF 100×, numerical aperture 1.3, zoom = 2, realized by the scanning unit. The scanned field of 62.5 µm × 62.5 µm was sampled to 256 × 256 pixels, giving a pixel size of 0.25 µm in the x and y directions and an axial (z) step of 0.5 µm. The image data are stored on disk in Tagged Image File (TIF) format. The number of sections scanned depends on the specimen thickness; typically, 22 to 28 optical sections are present in an image stack. The methodology presented in this paper is not specific to the above image acquisition set-up and can be used to analyze most histo-pathological images acquired using a confocal microscope.

Methods
In general, an image analysis system consists of noise reduction and feature enhancement methods, segmentation methods, and various feature measurement techniques. In the present system, segmentation is achieved in three stages: in the first, an approximate segregation of the foreground is obtained; in the second, regions of interest and regions belonging to different objects/nuclei are isolated; the third is a refinement stage, where the surface of the nuclei regions obtained in the second stage is fine-tuned for better precision.

Stage I: Preliminary segmentation
Segmentation is the process of assigning voxels to recognizable components, such as cellular material. In the first stage, we classify the voxels as foreground or background by suitably thresholding the image. Visual selection of the threshold is an obstacle to automation of quantitative image analysis. Several thresholding techniques have been proposed in the literature [14]. Here we use an empirical method which is found to provide a suitable threshold in most cases. The threshold T is chosen automatically as (µ ± k · σ), where µ is the average voxel intensity within the local window (default window size is one optical section) and σ is the standard deviation of the voxel intensity. The tuning constant k is determined as follows.
Let N_k be the number of voxels in the foreground when the image is thresholded at some value of k, and let ∆ = |N_i − N_{i+1}| be the change in the foreground count between consecutive values of the constant. Starting from an initial value of i: if ∆ ≤ (10/100) · N_i then k = i; otherwise set i = i + 1 and repeat until the condition holds.
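This stopping rule can be sketched as follows; the interpretation of ∆ as the change in foreground count between consecutive values of k, and the helper name `select_threshold`, are our assumptions rather than the authors' implementation:

```python
import numpy as np

def select_threshold(image, k_max=10, tol=0.10):
    """Empirical threshold T = mu + k*sigma. k is increased until the
    foreground count changes by no more than tol (10%) between consecutive
    steps -- our reading of the stopping rule in the text."""
    mu, sigma = image.mean(), image.std()

    def fg_count(k):
        return int((image > mu + k * sigma).sum())

    k = 1
    while k < k_max:
        n_k, n_next = fg_count(k), fg_count(k + 1)
        if abs(n_k - n_next) <= tol * n_k:  # foreground count has stabilized
            break
        k += 1
    return mu + k * sigma
```

With one optical section as the local window, the same function can be applied per section to obtain a locally adapted threshold.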
Another approach which also gives good thresholding is a multi-step approach. In the first step, the image is thresholded at its mean intensity value. The foreground is then labeled, and each component is subjected to a second stage of threshold calculation, in which the mean intensity of each component is computed. If µ_2i is the mean intensity of component i, then k_1 · µ_2i is used as the threshold for that particular component, where k_1 is an experimentally determined constant. The threshold for each component is calculated separately, and each component is thresholded at its corresponding value. Artifacts in the binary image, such as spots and island-like structures, are removed from the foreground based on their size and shape features. All objects having no signature in the third dimension are also removed. Similarly, holes surrounded by foreground voxels are restored to the foreground gray value.
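A minimal 2D sketch of this multi-step approach; the value k_1 = 0.8 and the function name are illustrative, as the authors determine the constant experimentally:

```python
import numpy as np
from scipy import ndimage

def componentwise_threshold(image, k1=0.8):
    """Two-step thresholding: a global threshold at the mean intensity,
    then a separate threshold k1 * mean(component) for each labelled
    foreground component."""
    fg = image > image.mean()          # step 1: global mean threshold
    labels, n = ndimage.label(fg)      # label connected components
    out = np.zeros_like(fg)
    for i in range(1, n + 1):
        comp = labels == i
        mu_i = image[comp].mean()      # per-component mean intensity
        out |= comp & (image > k1 * mu_i)
    return out
```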

Stage II: Secondary segmentation
We have extended the watershed algorithm to 3D images for isolating regions belonging to different nuclei. Application of the watershed to segmentation of 2D histological images has been reported earlier in the literature [15,16]. The success of watershed segmentation depends on regional markers. A regional marker is a group of voxels approximately located at the centre of the nucleus; every nucleus has one regional marker in it, and the region of influence of the regional marker covers all the voxels of the nucleus.
To reduce fragmentation of the objects in the watershed process, the grey level of each component in the foreground is reconstructed. Using the two-tone image obtained by thresholding, the grey level of the foreground is reconstructed in such a way that it varies uniformly from the centre of each object towards its surface. We have used the path generated distance transforms (PGDT) proposed by Borgefors [17] for this purpose. The distance map is considered as the reconstructed grey scale image. Let dist(.) represent the distance value of voxels in the distance map.
Step 1: All the connected groups of voxels having the maximum distance value in the image domain are considered as markers. A marker may be a single voxel, a group of connected voxels, or several groups of connected voxels. The markers are labelled using a connected component labelling algorithm. Let d_max be the maximum distance in the image domain and d_min the minimum distance value.
Step 2: All the voxels having a distance value (d max − 1) and located in the neighbourhood of the labelled regional markers are merged with their nearest regional marker. The isolated voxel or group of connected voxels with distance (d max − 1) and not having a labelled regional marker in their immediate neighbourhood are considered as new markers and given a new label.
Step 3: Decrease the current distance level by one. Step 4: If the current distance level has not reached d_min, steps 2 and 3 are repeated.
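The level-by-level growing of Steps 1-4 can be sketched as follows, here in 2D for brevity and with the Euclidean distance transform standing in for the path generated distance transform of Borgefors; the 3D case only changes the neighbourhood:

```python
import numpy as np
from scipy import ndimage

def distance_marker_labels(binary):
    """Seed markers at the global distance maxima, then at each lower
    distance level attach voxels to an adjacent labelled marker, or start
    a new marker where no labelled neighbour exists (Steps 1-4)."""
    dist = ndimage.distance_transform_edt(binary).astype(int)
    labels = np.zeros_like(dist)
    next_label = 1
    for level in range(dist.max(), 0, -1):
        layer = dist == level
        # grow existing labels into this level until nothing changes
        changed = True
        while changed:
            grown = ndimage.grey_dilation(labels, size=(3, 3))
            new = layer & (labels == 0) & (grown > 0)
            changed = bool(new.any())
            labels[new] = grown[new]
        # remaining voxels at this level become new markers
        rest, n = ndimage.label(layer & (labels == 0))
        for i in range(1, n + 1):
            labels[rest == i] = next_label + i - 1
        next_label += n
    return labels
```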
The result of watershed segmentation of histological images is normally a fragmented cell nucleus. This may be due to holes present in the objects of the two-tone image, leading to errors in the distance map. Barb-like structures at the cell surface and irregular concavities in the cell surface also lead to fragmentation of the cell by the watershed algorithm.
Some of the methods proposed to overcome the over-segmentation problem are geodesic reconstruction [15], hierarchical segmentation [16], hierarchical segmentation using dynamics of contours [18], hysteresis thresholding, etc. We have implemented an extension to the 3D watershed technique which identifies the over-segmented objects based on simple size and shape features and merges them with their parent cell.
Merging: Let N be the total number of segmented objects resulting from the application of the classical 3D watershed segmentation method. Let T_size be the size threshold for a cell, i.e., a cell should have a minimum size of T_size. This threshold is set experimentally.
All tiny fragments of cells whose sizes are below T_size are considered noisy and are merged with the corresponding parent cell. We assign a fragment 'a' to a parent cell 'A' if the following heuristic conditions hold.
(1) 'A' and 'a' should be touching neighbours, with size(A) > size(a) and size(a) ≤ T_size. (2) If 'a' shares its boundary with more than one large cell fragment, then the length of the boundary it shares with 'A' should be larger than the length of the boundary it shares with any other touching large object.
If these conditions are satisfied then fragment 'a' is merged with the parent cell 'A'. To reduce possible errors due to groups of tiny fragments being sandwiched between two large objects, we first merge those fragments which are touching neighbours of large objects. Also, after merging one fragment into a large object, merging of any other small object into the same large object is considered only after all the other large objects have been checked for possible merges.
When a fragment 'a' does not have any large touching neighbour but has many small fragments as its neighbours, there is a possibility that a single cell has been over-segmented to such a level that no fragment of that cell has a size above the threshold T_size. In such cases, all the tiny fragments that touch each other and are not connected to a larger object are merged to form a single object. If this merged object is above the threshold T_size it is considered a cell; otherwise it is discarded as a noise artefact. The merging process stops when all the tiny fragments are either merged with a larger object or discarded as artefacts. Figure 1(a) shows a sequence of image slices of a 3D image stack of the prostate tumour. Figure 1(b) shows the result of the modified watershed algorithm extended to 3D. Figure 1(c) shows the result of automatic rule-based merging of the small objects in the over-segmented image volume. Cells which are not completely represented in the image stack are recognized by measuring the size of the nucleus signature in the extreme rows and columns, as well as the extreme optical sections, of the image volume. Such cells are automatically rejected from further processing. This step ends the coarse segmentation. Figure 2(b) shows the final result of coarse segmentation.
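A simplified 2D sketch of the merging rules, covering conditions (1) and (2) for a single pass; the full method also handles chains of tiny fragments and the ordering constraints described above:

```python
import numpy as np
from scipy import ndimage

def merge_fragments(labels, t_size):
    """Merge each fragment smaller than t_size into the touching neighbour
    with which it shares the longest boundary, provided that neighbour is
    itself large enough (conditions (1) and (2))."""
    labels = labels.copy()
    sizes = {l: int((labels == l).sum()) for l in np.unique(labels) if l != 0}
    small = [l for l, s in sizes.items() if s < t_size]
    for frag in small:
        # voxels of other objects adjacent to this fragment
        dilated = ndimage.binary_dilation(labels == frag)
        border = dilated & (labels != frag) & (labels != 0)
        neighbours = labels[border]
        # count shared-boundary voxels with each large neighbour
        counts = {}
        for n in np.unique(neighbours):
            if sizes.get(n, 0) >= t_size:
                counts[n] = int((neighbours == n).sum())
        if counts:
            parent = max(counts, key=counts.get)  # longest shared boundary
            labels[labels == frag] = parent
    return labels
```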

Stage III: Deformable models
Once the approximate regions of the nuclei are marked, it is easier to search for the location of the nucleus surface in a small region around the boundary of the isolated objects detected by coarse segmentation. We perform this refinement using an active surface model optimization technique [3,19,20]. There is a huge body of literature on applying active surface models to find the precise location of object surfaces, but only a little in the field of 3D tissue image segmentation. An active surface model can be defined as an ordered set of n points known as control points. The boundary/surface of the isolated cell regions obtained by the second stage of segmentation is used as the initial active surface model. One can sample the surface voxels to act as control points of the active model; in the present experiment we have chosen all the surface voxels as control points, since this has a smoothing effect on the reconstructed surface.
Let v(s, t) be the initial surface with parameters s (spatial interval between control points) and t (time), defined on given open intervals Ω and T, respectively. The active model is a function of the spatial co-ordinates x, y and z with the same parameterization, v(s, t) = (x(s, t), y(s, t), z(s, t)). The total potential energy of the active model at any given time t can be written as E_total = ∫_Ω [E_int(v(s, t)) + E_ext(v(s, t))] ds. The total internal energy of the active model consists of elastic and bending energy and is given as E_int = ω_1(s) |∂v(s, t)/∂s|^2 + ω_2(s) |∂^2 v(s, t)/∂s^2|^2. The weight ω_1(s) regulates the tension of the active model surface and the weight ω_2(s) regulates its rigidity. To keep the computation as simple as possible, we have set ω_1(s) and ω_2(s) to small constants, usually much below 1. The first term in the above equation represents the elastic energy and the second term the bending energy. The elastic energy shrinks the active model towards its centre, and the bending energy resists any sharp bending in the surface. The internal energy keeps the initial model smooth and continuous while deforming. The external energy is calculated from the image intensity gradient; it forces the initial model to deform and converge to the nucleus surface. A scaled 3D gradient image is added to the original image to make the cell surface a dominant feature in the image. From this enhanced image stack, a diffused intensity gradient image is built. The diffused gradient surface ensures that there is no homogeneous region within the search region around the initial active model surface that might stagnate the deformation.
For calculating the external forces, the search region around the initial model is divided into smaller regions of size 3 × 1 voxels along the normal direction at the control points of the active model (i.e., the initial surface voxels). The external energy at control point i is chosen as the minimum gradient value within the search region Θ_i, i.e., E_ext(i) = min_j g(v_ij(s)), where g(v_ij(s)) is the gradient value associated with control point i and j = {−m/2, 0, m/2}, with m the size of the search region along the normal vector. In our experiment, m = 3.
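The minimum-gradient search within the region Θ_i can be sketched as below. The helper signature is our assumption: `gradient` is a precomputed array of gradient magnitudes, and `point` and `normal` give the control point and its surface normal in voxel coordinates:

```python
import numpy as np

def external_energy(gradient, point, normal, m=3):
    """Minimum gradient value within a search region of m samples along
    the surface normal at a control point (m = 3 gives the 3 x 1 region
    described in the text). Samples are clipped to the image bounds."""
    offsets = np.linspace(-(m // 2), m // 2, m)
    samples = []
    for o in offsets:
        idx = tuple(np.clip(np.round(point + o * normal).astype(int),
                            0, np.array(gradient.shape) - 1))
        samples.append(gradient[idx])
    return min(samples)
```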
The major edge/surface voxels, as given by the first stage of the Canny edge operator [21], are considered as penalty terms imposed on the deformation of the active model. The energy due to these penalty terms is considered as the constraint energy E_con. The corresponding control points in the active model are forced to converge to the locations marked by the major edge points [22].
The result of the active model optimization is considered to indicate the precise location of the surface voxels of the cell. Figure 2(c) shows the result of active model optimization. The boundary or surface obtained by the active model optimization is superposed on the original image stack to isolate the touching and overlapping cells. The resulting image is labeled and considered as the final segmentation. The features, both implicit and explicit, are then measured for quantitative analysis.

Feature selection and measurement
An effective and meaningful feature can be defined as one which shows up the difference between sub-classes of a disease or the deterioration of the tissue specimen. Here a subclass is a group of tissue specimens having the same pathological/histological characteristics, such as the grade of a tumor; a subclass or grade may also indicate the extent to which the particular disease has progressed. The features selected for measurement and quantitative study should concur with pathologists' views on what is an important feature. Features that clearly differentiate subclasses of the same disease, are easy to extract, show little dependency on the accuracy of segmentation, and can be found in all data sets of the same class of tissue are also considered useful. While dealing with automatic analysis of a particular disease, it is very important to maintain the same image acquisition standards and specimen preparation protocol for all the data sets in an experiment. The following are some of the essential setup features which should be kept undisturbed within a particular experiment/project: 1. Microscope control settings; 2. Voxel size; 3. Image resolution; 4. Size/area of the specimen to be scanned; 5. Scan pattern and datum position; 6. Specimen image enhancement/rejection criteria; 7. Cell segmentation rules, parameters and methods; 8. Feature extraction methods.
The basic object features in an image can be classified as shape features, texture features, spatial features, transform features and moment-based features. The spatial features include histogram and amplitude features. The shape features include regenerative features, such as boundaries/surfaces, regions and moments, and measurement features, such as volume, surface area and moment-based measures. Other important features include variation in the tissue architecture, cell spread pattern, variation in the dominant direction, variation in tissue texture properties, etc.
Cell shape can be expressed in many ways; combinations of different radii, size, surface area, etc., can give a useful shape value. The following are some of the parameters that define shape in terms of directly measurable object features.
Size: The size of a cell is determined by the total number of voxels present in the cell. The number of voxels multiplied by the voxel size gives the size of the cell in standard units.
Surface area: The surface area of a 3D cell can be approximated as the number of voxels belonging to the cell that have at least one background voxel, or a voxel belonging to another cell, in their immediate spatial neighborhood. This may not conform to the true surface area of the object, as the surface of a digital object is fractal in nature and there can be quite significant variation in its estimation. The error can be reduced by finding all the surface voxels and classifying them as 6-connected, 12-connected and 8-connected voxels in 3D space. These three classes of connectivity contribute differently to the total surface area: for example, a 6-connected voxel contributes 1 square unit, a 12-connected voxel √2 square units and an 8-connected voxel approximately 2 square units. For general practical purposes, however, we can take the total number of surface voxels as the surface area of the cell.
Shape factor: If A is the surface area of the cell and V is the volume of the cell, then the shape factor γ is defined as γ = A^3/(36 · π · V^2). For a perfect spherical shape, γ = 1.
Eccentricity: Also known as the aspect ratio or elongation of the cell. If R_max is the maximum distance between the cell centroid and the cell surface and R_min the minimum such distance, then the ratio R_max/R_min gives the eccentricity or aspect ratio of the cell.

Roundness: The roundness of the cell is defined as the ratio of the total surface area of the cell to the maximum diameter of the cell, i.e., ω = A/D, where D is the maximum diameter of the cell.
Convexity: Convexity is defined as the ratio between the surface area of a convex bounding body constructed around the cell and the surface area of the cell, i.e., convex surface/surface. A simple method to calculate the convex surface (the surface area of the convex hull) is as follows. The convex hull of a set of points S in n-dimensional space is the intersection of all convex sets containing S. For N points p_1, p_2, . . . , p_N, the convex hull C is given by C = {Σ_{j=1}^{N} λ_j p_j : λ_j ≥ 0 for all j and Σ_{j=1}^{N} λ_j = 1}. Each object is individually rotated by constant steps in 3D space, and the spatial coordinates of the extreme points of the object at each rotation step are noted. In 3D space we obtain six extreme values for each rotation step, i.e., X_min, Y_min, Z_min, X_max, Y_max and Z_max. The area of the convex surface can then be calculated approximately as the total area of the Voronoi diagram constructed using these spatial points.
Solidity: Solidity is the ratio of the convex volume of the cell to the actual cell volume, i.e., convex volume/volume. The convex volume is measured as the volume bound by the convex surface. If V_1, V_2, . . . , V_n are the volumes of the rectangular bounding boxes constructed by connecting the extreme coordinates of the object at the different rotation steps, then {V_1 ∩ V_2 ∩ · · · ∩ V_n} gives the convex volume of the object. Smaller rotation steps increase the total computation but also improve the precision of the convex volume measurement.
Compactness: Compactness is another way of relating the volume of the cell to R_max. The compactness ς can be expressed as ς = 3V/(4 · π · R_max^3). The compactness of a symmetrical sphere is 1.
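The voxel-counting shape descriptors above can be sketched as follows. This is an illustrative helper, not the authors' code; the shape factor uses the normalization A^3/(36πV^2), which equals 1 for a perfect sphere:

```python
import numpy as np

def shape_features(mask, voxel_volume=1.0):
    """Volume, surface-voxel count (used as surface area), shape factor
    and eccentricity (R_max / R_min from centroid-to-surface distances)
    of a binary 3D nucleus mask."""
    coords = np.argwhere(mask)
    volume = len(coords) * voxel_volume
    # surface voxels: at least one 6-connected background neighbour
    surface = []
    steps = np.vstack([np.eye(3, dtype=int), -np.eye(3, dtype=int)])
    for c in coords:
        for d in steps:
            n = c + d
            if (n < 0).any() or (n >= np.array(mask.shape)).any() \
                    or not mask[tuple(n)]:
                surface.append(c)
                break
    surface = np.array(surface)
    area = len(surface)
    gamma = area ** 3 / (36.0 * np.pi * volume ** 2)
    centroid = coords.mean(axis=0)
    r = np.linalg.norm(surface - centroid, axis=1)
    eccentricity = r.max() / max(r.min(), 1e-9)
    return volume, area, gamma, eccentricity
```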
One of the most important sets of factors guiding the quantitative evaluation of the status of the tissue is the set of histological features. These include the orientation or directionality of the cells in the tissue, the most dominant direction, variation in directionality, cell density, clustering, inter-nuclear distances, disruption of tissue architecture, etc. Many of these features cannot be quantified with mathematical precision, but it is possible to develop heuristic algorithms that approximately emulate how human vision quantifies them. In the following we explain some of these features and heuristic methods to measure them in 3D images.
Spatial orientation of the cells: For many organs in the body, healthy tissue consists of cells having the same spatial orientation or directionality. In tumors, the cells lose their directionality and start floating around. The moment axis is the axis that best fits all the voxels in the object: the sum of the squared distances of the individual voxels from the moment axis is minimal. The orientation of the moment axis gives the orientation of the nucleus in 3D space.
Let (x_i, y_i, z_i) be the co-ordinates of the voxels belonging to the object, taken over all the image slices in the stack. From these, a set of co-ordinate summations (the second-order sums Σx_i^2, Σy_i^2, Σz_i^2 and the cross sums Σx_i y_i, Σy_i z_i, Σx_i z_i, computed about the centroid) is derived. The net moments about the X, Y and Z axes are calculated from these summations, and from the moments the angles of the minimum-moment axis with respect to the XY plane and the XZ plane are obtained. Let Θ_i and Φ_i be the angles of orientation of cell nucleus i in 3D space. The average of these angles over all nuclei gives the average angle of orientation, or the most dominant direction of the cell nuclei in the data; the variation in the spatial direction of the cell nuclei provides an important clue about the loss of directionality. If the gray level variation of the cell nuclei indicates degradation in the tissue rather than imaging problems, it may be proper to calculate a weighted spatial orientation, i.e., each voxel is weighted by its intensity value while calculating the parameters for finding the orientation.
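The moment axis can be computed as the eigenvector, with the largest eigenvalue, of the voxel-coordinate covariance matrix; this eigenvector minimises the summed squared distances of the voxels to the axis. A sketch with the optional intensity weighting suggested above:

```python
import numpy as np

def nucleus_orientation(mask, intensities=None):
    """Principal (moment) axis of a segmented nucleus as a unit vector.
    If an intensity volume is supplied, each voxel is weighted by its
    intensity when forming the covariance matrix."""
    coords = np.argwhere(mask).astype(float)
    w = np.ones(len(coords)) if intensities is None \
        else intensities[mask].astype(float)
    centroid = np.average(coords, axis=0, weights=w)
    centred = coords - centroid
    cov = (centred * w[:, None]).T @ centred / w.sum()
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    return vecs[:, np.argmax(vals)]        # axis of largest variance
```

The angles Θ and Φ follow from the components of the returned unit vector via the arctangent.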

Disruption of directionality: The spatial orientation of each nucleus in the image domain is calculated. The variance of the angles of orientation is directly proportional to the disruption in cell directionality; the most dominant direction is the mean spatial angle.
Cell density: Cell density indicates the number of cells present in the image and how closely they are arranged. To make a reliable measure of the cell density, a large number of data sets should be used. Cell density is approximately proportional to the number of cells in the tissue specimen image under test.
Inter-nuclear distance: This is one of the more precisely measurable features useful in quantifying tissue architecture. The nearest-neighbor distance, or nearest cell centroid distance, is calculated for all the cells in the image domain; the average of these nearest-neighbor distances gives the average distance between the cells. This feature, in combination with cell density, is helpful in quantifying the cell distribution in the image space. The variance of the nearest-distance values is useful in empirical measurement of the cell/nuclei distribution in the image.
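The inter-nuclear distance statistics can be sketched as:

```python
import numpy as np

def nearest_neighbour_stats(centroids):
    """Mean and variance of nearest-neighbour centroid distances for an
    (n, 3) array of nucleus centroids."""
    c = np.asarray(centroids, dtype=float)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)       # exclude self-distances
    nearest = d.min(axis=1)
    return nearest.mean(), nearest.var()
```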
Variation from the ideal tissue architecture: This is one of the most important features on which pathologists and biologists base their decision about the deterioration of a tissue specimen due to disease or otherwise. During visual categorization of tumor specimens, pathologists look at the specimen through a microscope and visually compare the architecture of the tissue under investigation with the architecture of healthy tissue. The amount of variation or disruption in the tissue architecture is then used to determine the grade of abnormality. This is a qualitative approach; the result may not be reproducible, and it varies from one pathologist to another based on his/her experience. A macro-feature formed as a linear combination of different histological features, such as number of neighbors, cell cluster density, inter-nuclear distance and the connecting-line pattern, may be used as a unique indication of tissue architecture. If a model constructed on the basis of this macro-feature shows visual similarity with the tissue image, then we can consider the macro-feature to represent the architecture of the tissue.
In the following, we describe a method to rank the disruption in the glandular arrangement of the cells in prostate tissue. This is a heuristic approach, but the result is found to be very supportive of further research in this direction. At this stage, we propose this method for two-dimensional images; to form a similar approach for 3D images, more research and observation of glandular structures in their 3D form is needed. In a secretion gland tissue such as the prostate, cells are arranged in a ring-like pattern around a central lumen. This is called a glandular or acinus structure. A disruption in the glandular architecture may indicate a pathological disorder, and the amount of disruption can be used to sub-classify the disease.
A virtual line is drawn in the image space passing through the centroids of all the cells in a cluster. The virtual line is constrained by the following conditions, which make it produce an approximately convex shape.
(1) It should pass through the centroids of the two closest neighboring cells. (2) It should pass through the surface voxels of the neighboring cells it connects, such that the connecting line between the surface voxels of the two neighboring cells defines the minimum distance between the two cells.
(3) A cell is connected at most to two nearest neighboring cells.
If this connecting line does not pass through the background region, then the cells can be considered as touching and closely arranged; this also indicates that an approximate ring-like structure is present. The more often the line passes through the background, the more breaks there are in the ring-like arrangement of the cells. This is shown diagrammatically in Fig. 3. Thus the number of times the line crosses a cell boundary into the background region can be related to the number of breaks in the ring-like structure. The length of the connecting line in the background region, as well as the angle the connecting line makes at the cell centroid, can also be used to measure the disruption of the glandular architecture. Here the ideal glandular architecture is taken to be an irregular polygonal shape in which each vertex represents a cell. More research is needed to precisely quantify this feature and to directly relate the feature values to the actual disruption in the glandular architecture of the cells/nuclei. Depending on the organ/tissue, one can derive further heuristic methods for this measurement.
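A sketch of the background-crossing count for the connecting line. The centroids are assumed to be given already in ring order, and each segment is sampled at discrete points; both are simplifications of the construction above:

```python
import numpy as np

def count_ring_breaks(centroids, foreground, samples=50):
    """Count how many centroid-to-centroid connecting segments pass
    through background -- an approximation of the 'breaks in the ring'
    measure for glandular (acinus) structure. foreground is the 2D
    binary segmentation."""
    breaks = 0
    n = len(centroids)
    for i in range(n):
        a = np.asarray(centroids[i], float)
        b = np.asarray(centroids[(i + 1) % n], float)
        ts = np.linspace(0.0, 1.0, samples)
        pts = np.round(a + ts[:, None] * (b - a)).astype(int)
        # a segment counts as a break if any sample falls on background
        if not foreground[pts[:, 0], pts[:, 1]].all():
            breaks += 1
    return breaks
```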
Another useful set of features comprises the various measurements made on the gray levels of the voxels belonging to the objects. These include histogram features, amplitude features, texture features, etc. There is a huge body of literature dealing with the measurement of these features [5,6,23,24]. Depending on the experiment or the disease under investigation, one can make use of these features to distinguish between sub-classes of the disease. The best set of sub-features can be selected either by visual observation of the sub-classification of the data by each feature or by the application of Principal Component Analysis (PCA) [25].
We have compared some features of the cells segmented by the 3D watershed with the features measured after segmentation by the integrated approach. Simple features such as cell density and shape factor are considered for this purpose. If V_man and V_auto are the sizes of the manually and automatically segmented cell regions respectively, then the percentage relative difference (R.Diff) in volume between the cells in the manually segmented image and in the image segmented by the integrated approach is given as R.Diff = ((V_man − V_auto)/V_man) × 100%. Here manual segmentation is considered the gold standard for feature measurement. Table 1 gives a quantitative comparison of the number of cells segmented by the 3D watershed technique and by the integrated approach, compared to the manual segmentation result. Tables 2 and 3 give comparative results for R.Diff and the shape factor values of a few selected cells segmented by the watershed technique and by the integrated approach.
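The relative difference formula reads directly as:

```python
def relative_volume_difference(v_man, v_auto):
    """Percentage relative difference in volume against the manual
    (gold standard) segmentation: (V_man - V_auto) / V_man * 100."""
    return (v_man - v_auto) / v_man * 100.0
```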
A Silicon Graphics INDY workstation running IRIX 5.3 was used as the platform to implement the technique; the programs were written in IDL and C. Results of segmentation are shown in Figs 2 and 3. An integrated approach for segmenting the cells has been proposed. It combines several image analysis techniques to isolate different object regions in the image and to mark the boundary/surface of the objects as precisely as possible. The active surface model also removes fragmentation of the nucleus that may still be present after the merging process: such a fragmenting boundary is not due to underlying image intensity gradient peaks but due to errors in gray scale reconstruction during the watershed segmentation, and an active surface model with sufficient flexibility to deform reduces such noisy fragments.
Feature measurements of one hundred volumetric data sets are considered. Table 3 gives a brief quantitative study of the utility of the heuristic method for quantifying the breakdown in glandular structure, together with the comparative viewpoint of a pathologist grading the gland disruption in prostate tumor tissue. We had to use a more heuristic characterization of the tissue structure, as most of the data sets used showed only partial glandular structures. (Table 2: comparative study of different region-based segmentation methods with respect to their ability to measure simple features. Table 3: comparative study of disruption in acinus structure, giving the pathologist's ranking against the ranking by the heuristic algorithm, first when only the heuristic characterization of the disruption in acinus structure is considered and then when additional features such as cell density and variation in inter-nuclear distance are included. Low: comparable to a healthy tissue structure; Middle: considerable disruption in architecture compared to healthy tissue; High: no proper glandular structure observed.) When only the breakdown in acinus architecture is considered, as explained in Section 4, the grading of the prostate tumor tissue was found to be 40-45% erroneous (against one expert observation). When considered along with other features, such as disruption in directionality, inter-cellular distance, variation in size and shape features, and cell density, the error was reduced to 20-30%. It should also be noted that the data we used were acquired for the purpose of Fluorescence In Situ Hybridization (FISH) signal evaluation; data acquired specifically for grading tumor tissue may give better automatic analysis results. At present we are acquiring large sections of breast cancer tissue that clearly show the architecture and its disruption; results of automatic grading of the tissue based on the features described above will be communicated in the near future. This article introduces various image-derived features that can be directly related to different grades or sub-classes of a pathological disorder. Many of these features are difficult to measure visually. Based on the type of tissue specimen used to acquire the image, as well as the instrumentation, one can derive many other important features with a proven influence on automatic sub-classification. The initiative to correlate quantifiable 3D image features with the visual perception of the tissue architecture is considered the contribution of this paper.