How to optimize OCT images

Quantization, which maps real values of raw data to a series of fixed gray levels, is an inevitable step in Optical Coherence Tomography (OCT) image formation. Three new quantization methods, Minimum Distortion, Information Expansion and Maximum Entropy, are applied to this specific problem. Quantization results of a capillary filled with milk and of the femoralis of rabbit are shown in this paper. Comparisons with the present log-based methods show that a suitable quantization method significantly increases the contrast, SNR and visual fineness of the final image and effectively reduces the quantization error. The applicability of the different quantization methods is also discussed.

© 2001 Optical Society of America
OCIS codes: (110.4500) Optical coherence tomography; (100.0100) Image processing; (100.2980) Image enhancement

(C) 2001 OSA, 2 July 2001 / Vol. 9, No. 1 / OPTICS EXPRESS. Received April 10, 2001; revised June 27, 2001.

References and links
1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, "Optical coherence tomography," Science 254, 1178 (1991).
2. J. M. Schmitt, "Optical coherence tomography (OCT): a review," IEEE J. Sel. Top. Quantum Electron. 5, 1205 (1999).
3. K. R. Castleman, Digital Image Processing (Prentice Hall, 1996).
4. J. S. Lim, Two-Dimensional Signal and Image Processing (Prentice Hall, Englewood Cliffs, NJ, 1990).
5. R. M. Haralick and L. G. Shapiro, Computer and Robot Vision (Addison-Wesley, Reading, MA, 1993).
6. J. P. Dunkers, R. S. Parnas, C. G. Zimba, R. C. Peterson, K. M. Flynn, J. G. Fujimoto, and B. E. Bouma, "Optical coherence tomography of glass reinforced polymer composites," Compos. Part A: Appl. Sci. Manuf. 30, 139 (1999).
7. W. Frei, "Image enhancement by histogram hyperbolization," Comput. Graph. Image Process. 6, 286 (1977).
8. Y. Tao, Experimental Research of OCT System, Master's thesis, Tsinghua University (1998).
9. H. Ishikawa, R. Gurses-Ozden, S. T. Hoh, H. L. Dou, J. M. Liebmann, and R. Ritch, "Grayscale and proportion-corrected optical coherence tomography images," Ophthalmic Surg. Lasers 31, 223 (2000).
10. J. C. A. van der Lubbe, Information Theory (Cambridge University Press, 1997).
11. J. Max, "Quantizing for minimum distortion," IRE Trans. Inf. Theory IT-6, 7 (1960).
12. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis (Wiley, New York, 1973).
13. M. Friedman and A. Kandel, Introduction to Pattern Recognition: Statistical, Structural, Neural, and Fuzzy Logic Approaches (World Scientific, River Edge, NJ, 1999).


Introduction
Optical coherence tomography (OCT) is a novel noninvasive tomographic imaging technique with micron-scale resolution [1]. It has been widely used in many fields such as biology, medicine and material science [2]. However, the quality of OCT images and the accuracy of the micrometer-level information they carry have not been validated. When OCT became a research hotspot, most researchers were interested in its physical mechanism, instrumentation and practical applications. As the research progresses, more people are turning to image processing to solve real-world problems, realizing that some of these problems may not be easily solved by physical methods.
Quantization, which maps real values of raw data to a series of fixed gray levels, is an inevitable step in OCT image formation. Image quantization is usually used for three purposes. The first is image compression, transmission, storage, etc. [3]. The second is to enhance images by adapting them to the visual properties of the human eye [3,4].
In this situation, visual effect is more important than absolute distortion. For example, for an image that has few gray levels, the dithering technique can make the image look smooth by adding random noise without changing the number of gray levels [3]. This kind of quantization does not concern any real information in the image, only the human psychological visual impression; it is in effect a "visual perceptual deceit". The third purpose is data visualization or pixel-level transformation [5]. For example, quantization methods that map raw data to image scale levels are employed to obtain images from FFT transformation, X-ray, MRI, ultrasound and OCT. In this case, a distortion function related to the real information of the raw data should be kept to a minimum. Researchers have paid more attention to the first two purposes. For the third purpose, researchers in the area of digital image processing have proposed some quantization methods for pixel-level transformation [5]. The raw data usually have a significantly different distribution from that of the image gray levels, and the real values cannot be displayed on screen directly. Therefore it is necessary to quantize the raw data, and the quantization is scalar quantization. The procedure inevitably introduces quantization error that affects the image quality and may misinterpret some detail information hidden in the raw data.
OCT images are usually pseudo-color, and pseudo-color may conceal the low-contrast nature and detailed structure information of the images. We investigate standard 8-bit gray images in this paper. All methods can be easily generalized to images with other numbers of gray levels or to pseudo-color images. In our experiments, a 1mm-diameter capillary filled with milk and the femoralis of rabbit were used as samples. The cross-section of each sample was scanned at a fixed angle. The resulting images should show the samples and the shadows due to the sideband spectral distribution of the light source; all of these reveal the detail information of the raw data. Although thresholding may reduce noise, it also degrades image quality, and the determination of the threshold is not automatic. The main reasons for choosing a logarithm-based algorithm are to compress the dynamic range and to agree with the exponential law by which light attenuates in scattering materials. However, this explanation does not take into account the quantization error, nor the phenomenon that log-based methods often result in poor contrast or loss of detail information.

Applying new quantization methods in OCT image formation
Functions other than the logarithm for compression of the dynamic range have been reported [9]. However, no detailed explanation is provided and the quality of the resulting images is still not good. None of those methods considers minimizing a distortion function of the raw data. Quantization introduces quantization noise, which has a great impact on subsequent image processing and therefore should be investigated thoroughly. We propose three new quantization methods here.

Minimum Distortion (MD) and Truncation MD (TMD) Methods
The Minimum Distortion (MD) method is based on the minimum distortion principle that has been thoroughly discussed in rate distortion theory [10]. If a mean-square measure is used as the measure of distortion, the MD method becomes a minimum-variance method. In this paper we consider only the mean-square measure. Under the mean-square error measure, for an input signal x with probability density function p(x), the optimal quantization output levels q_1, ..., q_N and the internal breakpoints Z_1, ..., Z_{N+1} of minimum distortion satisfy the following conditions [11]:

$$ q_k = \frac{\int_{Z_k}^{Z_{k+1}} x\,p(x)\,dx}{\int_{Z_k}^{Z_{k+1}} p(x)\,dx}, \quad k = 1, \dots, N; \qquad Z_k = \frac{q_{k-1} + q_k}{2}, \quad k = 2, \dots, N $$

where N is the number of output levels. Typically, the endpoints Z_1 and Z_{N+1} are known a priori.
For quantization in OCT image formation, N usually equals 256. Regardless of the real value q_k, each output level is mapped to a fixed gray level sequentially after quantization, i.e., the smallest output level is mapped to gray level 0, the second smallest to gray level 1, and so on. This differs from the common quantization procedure and is a particularity of OCT data quantization.
An iterative method is presented to compute the exact quantizer parameters [11].
Because of the sensitivity of the iterative method to initial conditions and its computational complexity, we used a clustering method instead.
Let a_i and a_{i+1} be the ith and the (i+1)th internal breakpoints of the raw data. The number of output levels is 256 and i runs from 0 to 255. The ith output level d_i and the distortion function J_e are defined as follows:

$$ d_i = \frac{\sum_{a_i \le y < a_{i+1}} y\, n(y)}{\sum_{a_i \le y < a_{i+1}} n(y)}, \qquad J_e = \sum_{i=0}^{255} \sum_{a_i \le y < a_{i+1}} n(y)\,(y - d_i)^2 $$

where y is a value of the raw data, n(y) is the number of raw data with value y, and a_0 and a_{256} - 1 are the minimum and maximum values of the raw data, respectively.
All d_i can be determined by minimizing the distortion function, which is similar to the c-means clustering method in pattern recognition [12]. Since y is scalar, it is not necessary to examine all clusters to decide whether J_e is reduced; a comparison between adjacent clusters is sufficient. All data with the same value y should be moved between clusters simultaneously. Therefore the common c-means algorithm can be modified and employed to execute the MD method as follows [13]:
i. Set the initial clusters using simple quantization methods such as logarithm-based or linear methods.
ii. Suppose the samples with value y are in γ_i, the ith cluster, whose data will all be mapped to the ith image gray level (i = 0, ..., 255). For each adjacent cluster j, calculate ρ_j as follows:

$$ \rho_j = \frac{N_j\, n(y)}{N_j + n(y)} (y - m_j)^2, \qquad \rho_i = \frac{N_i\, n(y)}{N_i - n(y)} (y - m_i)^2 $$

where m_j is the center of the jth cluster, n(y) is the number of samples whose value is y, and N_j is the total number of samples in the jth cluster.
iii. If ρ_j < ρ_i, move all samples with value y from γ_i to γ_j.
iv. Calculate the new m_i, m_j and J_e.
v. Go back to step ii and repeat the above procedure until J_e is small enough or remains unchanged.
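The clustering procedure above can be sketched in Python. This is a minimal illustration, not the authors' implementation: for simplicity it alternates the two Lloyd-Max conditions (centroid update and midpoint breakpoints) rather than moving individual values between adjacent clusters; both approaches seek the same kind of minimum-distortion quantizer, and all names below are illustrative.

```python
import numpy as np

def md_quantize(raw, n_levels=256, n_iter=50):
    """Minimum-Distortion quantization of scalar raw data (a sketch).

    Alternates the two Lloyd-Max conditions: each cluster center is the
    mean of its members, and breakpoints lie midway between adjacent
    centers. Output level i is then mapped to gray level i, as the
    text describes for OCT data."""
    raw = np.asarray(raw, dtype=float).ravel()
    # Linear initialization (the text suggests a linear or log-based start).
    centers = np.linspace(raw.min(), raw.max(), n_levels)
    for _ in range(n_iter):
        edges = (centers[:-1] + centers[1:]) / 2.0   # midpoint breakpoints
        labels = np.searchsorted(edges, raw)          # assign data to clusters
        for i in range(n_levels):                     # centroid update
            members = raw[labels == i]
            if members.size:                          # keep empty clusters as-is
                centers[i] = members.mean()
    edges = (centers[:-1] + centers[1:]) / 2.0
    return np.searchsorted(edges, raw).astype(np.uint8)  # gray levels 0..255
```

Because the data are scalar, assignment is a sorted search rather than a full distance computation, which keeps each iteration cheap.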
To reduce the effect of raw data with very large values, a Truncation Minimum Distortion (TMD) method can be used. In this method, all data are sorted, and the values of a predetermined percentage of the largest data are set to the value of the largest remaining datum before the common clustering procedure is applied.
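The truncation step can be sketched as follows; using a percentile cutoff (and a 1% default) is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def tmd_truncate(raw, top_percent=1.0):
    """TMD preprocessing (a sketch): clamp the largest `top_percent` of
    the data to the value at the truncation percentile, then the common
    clustering procedure (e.g. md_quantize) can be applied."""
    raw = np.asarray(raw, dtype=float)
    cutoff = np.percentile(raw, 100.0 - top_percent)
    return np.minimum(raw, cutoff)
```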

Information Expansion (IE) Method
The Information Expansion (IE) method takes into account the phenomenon that the probability density function of OCT raw data usually has sharp peaks. Although the sharpness of the peaks depends on the sample, it is common that the raw data have concentrated densities and that OCT images have inferior contrast [9]. Therefore, in the IE method the raw data are quantized to the image gray values evenly. This is a close analogy to histogram equalization in image processing, which is also called the equal-probability method [4,5].
However, equalization of the raw data is different from equalization of the image gray levels. It can be proved that the entropy of the raw data remains unchanged during equalization, whereas equalization of image gray levels often reduces the image entropy. Equalization before quantization is better because it introduces no additional distortion error, while equalization after quantization does. The detailed algorithm of the IE method is as follows:
i. Count the number of levels of the raw data and the number of data in each level.
ii. Calculate the accumulative real-value histogram of the raw data.
iii. Map the raw data to the gray levels so that each gray level receives an approximately equal share of the accumulative histogram.
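These steps can be sketched with NumPy; the even mapping through the cumulative histogram is the standard equal-probability rule, and the function name is illustrative.

```python
import numpy as np

def ie_quantize(raw, n_levels=256):
    """Information-Expansion quantization (a sketch): equalize the raw
    data's value histogram so each gray level receives roughly the same
    number of data points. All data sharing one raw value get one level."""
    raw = np.asarray(raw).ravel()
    values, counts = np.unique(raw, return_counts=True)   # step i
    cdf = np.cumsum(counts) / counts.sum()                # step ii
    # step iii: spread the cumulative histogram evenly over 0..255
    level_of_value = np.minimum((cdf * n_levels).astype(int), n_levels - 1)
    return level_of_value[np.searchsorted(values, raw)].astype(np.uint8)
```

Note that the mapping is monotonic in the raw value, so the ordering of the data is preserved.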

Maximum Entropy (ME) Method
The Maximum Entropy (ME) method is concerned with preserving the information hidden in the raw data. From the point of view of information theory, OCT quantization transfers the structure information of a sample from raw data to a digital image and can be viewed as an information channel. The information is the uncertainty of the data. Preserving more information should be the essential purpose of quantization in OCT image formation.
According to information theory [10], when the mutual information of the data before and after quantization reaches its maximum, the loss of information is reduced to a minimum. Since the quantization function is deterministic, maximizing the mutual information is equivalent to maximizing the entropy of the image data after quantization.
In view of the property of entropy, the entropy of the image data reaches its maximum if and only if the probability of each image gray level is identical. Thus the method should make the probabilities of the gray levels essentially identical: the raw data are mapped to gray levels from small to large, and the number of data in each gray level is made as close as possible to the average number. The detailed algorithm is as follows:
i. Count and sort the raw data.
ii. Calculate the average number of data per image gray level.
iii. Map the raw data to gray levels (0-255) from small values to large, making the number of data in each gray level as close to the average number as possible.
Raw data with the same value must be set into the same gray level.
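A greedy sketch of this mapping, assuming that a gray level is closed once adding the next raw value would move its count further from the per-level average; this is one reasonable reading of step iii, not the paper's exact rule.

```python
import numpy as np

def me_quantize(raw, n_levels=256):
    """Maximum-Entropy quantization (a sketch): walk the sorted unique
    values and fill gray levels with counts as close to the per-level
    average as possible; equal raw values always share one level."""
    raw = np.asarray(raw).ravel()
    values, counts = np.unique(raw, return_counts=True)   # step i
    avg = raw.size / n_levels                             # step ii
    out_level = np.zeros(len(values), dtype=int)
    level, filled = 0, 0
    for i, c in enumerate(counts):                        # step iii
        # Open a new level when the current one is already closer to the
        # average than it would be after adding this value's count.
        if filled and abs(filled + c - avg) > abs(filled - avg) \
                and level < n_levels - 1:
            level += 1
            filled = 0
        out_level[i] = level
        filled += c
    return out_level[np.searchsorted(values, raw)].astype(np.uint8)
```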

Experiments and Results
Two samples were used in the experiments: a 1mm-diameter capillary filled with milk and the femoralis of rabbit. We scanned one cross-section of each sample at a fixed angle. Besides the methods proposed above, we also tested the equal interval (EI) method, i.e., a simple linear transformation, for comparison [5]. Using the different quantization methods, eight images of the same cross-section of each sample were constructed from the raw data, as shown in Fig. 1 and Fig. 2.
Since the final outputs of an OCT system are images, we use criteria defined on images. To evaluate the image quality of the different quantization methods, we used three objective criteria and a subjective visual criterion, listed in Table 1. N_o and N_b are the numbers of pixels in the object region and the background region, respectively, and pixel(i, j) is the gray value of the point (i, j). In our experiments, the object and background regions were determined manually, since the sample shapes are known in advance.
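Since the table of criteria is not reproduced here, the sketch below assumes typical definitions consistent with the quantities the text names (means μ_o and μ_b, noise deviation σ_n, mean energies e_s and e_n); the exact formulas in the paper's Table 1 may differ.

```python
import numpy as np

def image_quality(img, obj_mask, bg_mask):
    """Objective image-quality criteria (a sketch with assumed formulas):
    contrast from the region means, CNR against background noise, and
    SNR in dB from the mean energies of object and background."""
    obj = img[obj_mask].astype(float)
    bg = img[bg_mask].astype(float)
    mu_o, mu_b = obj.mean(), bg.mean()    # region means
    sigma_n = bg.std()                    # background noise deviation
    e_s, e_n = (obj ** 2).mean(), (bg ** 2).mean()  # mean energies
    return {
        "contrast": (mu_o - mu_b) / (mu_o + mu_b),
        "CNR": (mu_o - mu_b) / sigma_n,
        "SNR_dB": 10.0 * np.log10(e_s / e_n),
    }
```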
The resulting images are shown in Fig. 1 and Fig. 2, and all comparisons of the different quantization methods are listed in Table 2 and Table 3. It can be seen from the figures and tables that the methods based on the logarithm give relatively inferior contrast, CNR and SNR. DL obtains better contrast, CNR and SNR than TL, but loses more detail. There is a tradeoff between better detail preservation and higher contrast or SNR, which implies that it is hard to get a comprehensively good result.
MD and TMD methods both reduce noise significantly and obtain high contrast and SNR.
MD loses most details of the samples, while TMD reveals some details. If the truncation threshold is determined appropriately, TMD can reveal most details without loss of contrast.
The IE method gives the most abundant details at the cost of low contrast and high noise.
Since it reveals the most details, it can be used as a detail-preserving criterion. Without any modification of the OCT system, the quality of the final images can be improved greatly. Log-based methods are usually faster than methods using sorting, iteration, etc.
However, if data sorting is performed line by line while scanning, methods using sorting can also be very fast.

Conclusion and Discussion
OCT image degradation is caused mainly by three kinds of reasons: physical reasons such as multiple scattering, equipment reasons such as electronic noise, and data (image) processing reasons such as quantization and filtering. Improvements of quantization methods mainly address the third. The goal is to preserve the most primitive information hidden in the raw data and to minimize the quantization error. Quantization deals only with the value of each pixel and is independent of the pixel's position. Since some kinds of noise are position-dependent, a good quantization method may not eliminate all kinds of noise.
However, a good method preserves more structure information of the sample and introduces less quantization noise, which is very important for subsequent image processing. Appropriate selection of the quantization method can also reduce the impact of low-precision equipment to some extent.
Experiments show that log-based methods are not the best quantization methods: they often lose structure information or lead to poor contrast, although they have the advantage of low computational complexity. The MD method is especially good for improving contrast and reducing quantization noise. The IE method is extremely useful for revealing detail information. The IHE method improves on the IE method to some extent by both raising contrast and SNR and preserving abundant details, but this improvement is restricted and model-dependent.
The TMD and the ME methods are compromises between the IE and MD methods. They can both obtain satisfactory contrast and details. The TMD method needs a predetermined truncation percentage, while the ME method can run automatically.
The MD and TMD methods take a long time for quantization, and the selection of the truncation threshold is not automatic. Therefore they are not suitable for real-time imaging.
However, most researchers perform physical data visualization by using the logarithm to compress the dynamic range and to convert the raw data to image scale levels [6]. Few papers have been published describing other quantization methods in the physics domain. At the heart of the OCT system is a Michelson interferometer illuminated by a broadband light source. A photodiode detects the interference signal, which occurs only when the optical length difference between the two beams of light reflected by the sample and the reference mirror is within the coherence length of the light source. Using the heterodyne detection method, the signal is amplified at the modulation frequency by a band-pass filter and an amplifier. An A/D converter transforms the analog signals to digital data. Using a quantization method, each raw datum is mapped to an image scale level. A sample's cross-section information is obtained by performing repeated axial measurements at different transverse positions as the optical beam is scanned across the sample. The signals constitute a two-dimensional map of the backscattering or reflectance from the internal structure of the sample. After the interference signal of each position is converted to raw data and then transformed to an image scale value, an OCT image is formed.
In the table, μ_o is the mean value of the object region, μ_b is the mean value of the background region, and σ_n is the standard deviation of the noise in the background region. e_s is the mean energy of the object region and e_n is the mean energy of the background region.

Quantization methods commonly used in digital image processing and in OCT image formation
One common enhancement approach transforms the image data by applying a model function of the human visual system to it [5]. Models of the nonlinear characteristic of the human visual system are usually chosen to fit data from psychophysics experiments that attempt to measure the relative sensitivity of subjective brightness to image luminance. A typical model can be found in [7]. Logarithm-based methods are currently the most common in OCT image formation. The simplest is the Direct Logarithm (DL) method: the logarithm of the raw data is calculated directly and then converted to 0-255 using a linear function [6]. Some researchers employ a Truncation Logarithm (TL) method that considers both dynamic range determination and noise reduction [8]. In this method, an appropriate threshold is chosen to eliminate noise and obtain a predetermined dynamic range. The detailed procedure is:
i. Normalize the raw data to [0, 1].
ii. Set a threshold t based on a predetermined dynamic range; all values less than t are set to t.
iii. Calculate the logarithm of all values in [t, 1] and then convert them to 256 gray levels linearly.
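The TL procedure can be sketched as follows. Converting a dynamic range in dB to the threshold t via a factor of 20 assumes amplitude (not intensity) data; the range value and function name are illustrative.

```python
import numpy as np

def tl_quantize(raw, dynamic_range_db=40.0):
    """Truncation-Logarithm (TL) quantization (a sketch): normalize to
    [0, 1], clamp values below a threshold t set by the desired dynamic
    range, then map log(values in [t, 1]) linearly onto 0..255."""
    raw = np.asarray(raw, dtype=float).ravel()
    x = raw / raw.max()                       # i. normalize to [0, 1]
    t = 10.0 ** (-dynamic_range_db / 20.0)    # ii. threshold from the range
    x = np.maximum(x, t)                      #     values below t are set to t
    g = (np.log(x) - np.log(t)) / (0.0 - np.log(t))  # iii. log, then linear
    return np.round(g * 255.0).astype(np.uint8)
```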
After the histogram equalization of the raw data is done, an inverse of a model function is applied to the equalized raw data. Details about the model and the operating procedure can be found in [7].

Table 2 .
Image quality of capillary with milk

Table 3 .
Image quality of the femoralis of rabbit

The IHE method modifies the raw data histogram rather than equalizing it. However, the contrast and SNR of IHE are still not high enough, and the improvement may depend on the model selection. The ME method yields high contrast and sufficiently low noise, and it also preserves most details. Moreover, there is no parameter to choose, which makes it more convenient than the TMD method. Although the EI method obtains considerable contrast and SNR, it cannot reveal any details; thus it is of no use in OCT image formation. All experiments, including those using other samples not listed here, such as mouse brains, yielded similar results. The log-based quantization methods are not the best quantization methods in OCT image formation: they often result in low contrast, low SNR or detail loss and cannot produce overall good OCT images. The other quantization methods are better than the logarithm-based methods to some extent. By applying a suitable quantization method, the quality of the final images can be improved greatly.