Texture Filtering With Filtering Scale Map

In this paper, we propose a novel texture filtering method. Starting with texture boundary extraction, we estimate the possibility that each pixel lies on a texture boundary from statistics of the proportions of pixels of different colors; this possibility is obtained by calculating the Bhattacharyya distance between the color proportions on the two sides of each pixel. We then build a filtering scale map, based on the texture boundaries, to guide the parameters of the filter. Finally, to obtain the texture filtering result, we design a simple and effective adaptive shape edge-preserving filter: by counting the color information in every pixel's neighborhood, the filter selects only pixels of similar color for averaging. Experiments are performed on different color-texture images, and the results show that the proposed method performs much better than state-of-the-art texture filtering methods.


I. INTRODUCTION
Texture filtering smooths away the texture information of an image while preserving its main structure. It can be used as a preprocessing step for many image processing and analysis tasks and has a wide range of applications in detail enhancement [1], visual abstraction [2], image segmentation [3], edge detection [4], tone mapping [5], optical flow estimation [6], and illumination estimation [7], etc. A good texture filtering method should preserve the clear edges and details of the image's main structure while filtering out the texture. Since the scale of the texture units and the scale of the main-structure details are unknown for an arbitrary image, achieving a good texture filtering effect remains a challenging task.
In recent years, many methods for texture filtering have been proposed, such as the bilateral filter [8], [9], guided filter [10]-[12], weighted least squares filter [13], iterative global optimization filter [2], edge-selective joint filter [14], Bayesian model averaging [15], patch geodesic paths filter [16], discrete cosine transform based filter [17], non-local means filter [18], weighted median filter [19], local activity-driven structure-preserving filter [20], and iterative range-domain weighted filter [21], etc. Some of these methods are based on bilateral filtering and can only preserve edges; they cannot filter out textures with large internal contrast. Other methods achieve texture filtering by first smoothing the entire image and then sharpening the edges; these methods often severely damage the details of the image's main structure.

(The associate editor coordinating the review of this manuscript and approving it for publication was Wenming Cao.)
To perform texture filtering well, it is not enough to process only local information; we need accurate knowledge of how the different textures are distributed over the image. Only by knowing the exact distribution of the main structure and texture content can the image be filtered in a targeted way. We therefore propose a scale map that guides the filter to use appropriate parameters at different locations. The filtering process is as follows. First, we use a method based on histogram statistics to obtain an accurate texture edge possibility, which globally represents the distribution of textures in the image. Then we construct a scale map based on the texture edge possibility to serve as the guide image for the subsequent texture filtering. Finally, we design an adaptive shape edge-preserving filter. Guided by the scale map, the proposed filter achieves texture filtering of the image. Different from other guided filtering methods, our guide map accurately controls the size of the filter. We conduct extensive comparative experiments, and the results show that the proposed method is superior to state-of-the-art methods.

II. RELATED WORK
The existing texture filtering methods can be roughly divided into local texture filtering and global texture filtering.
Bilateral filtering [8], [22], [23] is a classical local texture filtering method. To achieve edge preservation while denoising, it introduces a color kernel on top of Gaussian filtering and takes both spatial information and color similarity into account, but it cannot filter out strong-gradient textures. The method proposed by Zhang et al. [24] continuously updates the guidance information through iterative bilateral filtering to restore the main structure of the image, but as the number of iterations increases, the structure becomes blurred and color cast, and even passivation, appears. Cho et al. [5] proposed the idea of patch offset within the bilateral filtering framework and generated a smooth image based on patch offset as the guidance image, but it cannot suppress textures with stronger gradients. Karacan et al. [25] proposed a patch-similarity texture filtering method based on region covariance. Gastal and Oliveira [26] proposed the idea of domain transformation, achieving a certain improvement in smoothing results. Li et al. [27] use Gaussian pyramid mixing and smoothing to improve structure preservation. Hua et al. [28] design a filtering framework for local diffusion in the gradient domain to preserve structures.
Global filtering methods are based on the idea of global optimization, usually defining an objective function that includes a data term and a smoothing term. The data term requires the difference between the filtering result and the original image to be as small as possible, while the smoothing term requires the texture regions to be smoothed. The total variation method proposed by Rudin et al. [29] is a classical global filtering model that uses the image gradient as a smoothing constraint, but it is only effective on textures and noise with small gradients. Farbman et al. [13] propose a weighted least squares (WLS) method to process images with multi-scale textures, but this method cannot suppress strong-gradient textures, and color rendering problems may occur. Zang et al. [30] propose a direction-adaptive image smoothing method based on anisotropic structure measurement. Ham et al. [31] design an iterative method combining dynamic and static guidance; although good results can be achieved, the termination conditions are difficult to set. Xu et al. [32] propose L0 gradient minimization, which obtains a globally optimized filtering result by controlling the number of nonzero gradients. Xu et al. [4] propose the RTV method, which improves the total variation model to further improve filtering quality, but its parameters are difficult to set and textures with strong gradients cannot be filtered out. Magnier et al. [33] propose a smoothed rotating filter that can distinguish texels and combine it with anisotropic diffusion to obtain structure-preserving texture filtering results, but this method is not suitable for images with strong-gradient textures.

III. PROPOSED METHOD
In this section, the proposed texture filtering method is introduced in detail. It includes three parts: (1) texture boundary extraction, (2) generation of the filtering scale map, and (3) adaptive shape filtering.

A. TEXTURE BOUNDARIES EXTRACTION
Within a region consisting of a single texture, the neighborhoods of interior pixels should share the same features. In many cases the textons that make up a texture are neither regularly arranged nor uniform in size. It is therefore difficult to decide accurately in the frequency domain whether two regions contain the same texture, which limits the applicability of wavelet-based texture detectors such as the Gabor operator. However, regardless of how the texton scale changes or how the textons are arranged, in a single texture region containing multiple textons the proportion of pixels falling into each color interval is stable. The ratio of the number of pixels in each color interval can therefore be used as a texture feature. Within a single texture region, as long as the neighborhood is larger than one texton, the features of neighborhoods of different scales around a pixel should be similar. When a pixel lies on a texture boundary, the features of the neighborhoods on its two sides differ greatly, while the features on each side remain stable as the neighborhood scale increases. Based on this principle, a texture boundary detector can be constructed.
Before detecting the texture edges of the image, a color quantization of the original image is carried out so that the information in each color interval is easier to count. First, the pixel colors of the original image are clustered with the mean shift algorithm, producing a set of color clusters C. The clustered image is thus composed of K colors, and the proportion of each color in a neighborhood is counted to obtain a K-dimensional vector, which is the texture feature vector of that neighborhood.
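Under this scheme, the texture feature vector is simply a normalized color histogram over the quantized labels. A minimal sketch (the function name and toy data below are ours, not from the paper):

```python
import numpy as np

def color_proportions(labels, K):
    """K-dimensional texture feature vector of a pixel set.

    labels: 1-D array of quantized color indices (0..K-1) gathered from a
    neighborhood; returns the proportion of each of the K colors."""
    counts = np.bincount(labels, minlength=K).astype(float)
    return counts / counts.sum()

# toy neighborhood with 3 quantized colors: two 0s, one 1, three 2s
patch = np.array([0, 0, 1, 2, 2, 2])
f = color_proportions(patch, K=3)
```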
To highlight the information at the texture boundaries and make the detection response there pronounced, we use the Bhattacharyya distance between the texture feature vectors of the neighborhoods on the two sides of the current pixel to express the likelihood that the pixel lies on a texture boundary. The Bhattacharyya distance represents the similarity of two probability distributions as

D_B(p, q) = −ln BC(p, q),  BC(p, q) = Σ_{c ∈ C} sqrt(p(c) q(c)),

where D_B denotes the Bhattacharyya distance between regions p and q, BC denotes the Bhattacharyya coefficient, c is a color bin among all color categories C of the quantized image, and p(c) and q(c) denote the proportions of color c in regions p and q. If a pixel lies on a texture boundary and the two side neighborhoods fall exactly inside the two texture regions, the texture feature vectors computed from the two neighborhoods remain stable as the neighborhood scale changes within a certain range, and so does the Bhattacharyya distance computed from them; when the two feature vectors belong to different textures, the Bhattacharyya distance is large. Detections are therefore carried out over neighborhoods of different scales and directions, and the results are multiplied so that the values at the texture boundaries stand out from other regions. Based on this principle, the texture boundary detector is designed as shown in Fig. 1.
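The Bhattacharyya coefficient and distance between two such proportion vectors can be computed directly; the small epsilon guarding the logarithm below is our addition, not part of the paper:

```python
import numpy as np

def bhattacharyya(p, q, eps=1e-12):
    """Return (distance, coefficient) for two proportion vectors p and q."""
    bc = np.sum(np.sqrt(p * q))       # Bhattacharyya coefficient, in [0, 1]
    return -np.log(bc + eps), bc      # distance is ~0 for identical histograms

# identical distributions -> distance near 0
d_same, _ = bhattacharyya(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
# disjoint distributions -> coefficient 0, large distance
d_diff, bc = bhattacharyya(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```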
The texture boundary detector is composed of two semicircular regions p and q, shown as the blue and orange regions in Fig. 1, which accumulate the texture feature vectors within their coverage. The angle of the line dividing the detector into two semicircles is θ. For the color-quantized image, the number of pixels of each color in each semicircle is counted and normalized to obtain the texture feature vector. The detector then computes the Bhattacharyya distance between the two feature vectors and outputs it as the detection result for the current pixel. The final texture detection result is obtained by multiplying the responses across scales and superimposing them across directions. The boundary possibility of pixel (x, y) is computed as

Tbp(x, y) = Σ_θ Π_r D_B(p_θ, q_θ),

where r denotes the radius of p_θ and q_θ, and Tbp is the possibility of a texture boundary.

B. GENERATION OF THE FILTERING SCALE MAP
After obtaining the texture boundary possibility of the image, the filtering scale map can be constructed from it. Through the filtering scale map, the texture filter is guided to use appropriate parameters in different regions: inside a texture region the edge-preserving effect of the filter is reduced to obtain smooth filtering results, while at texture boundaries the edge-preserving ability of the filter is enhanced to retain sharp edges.
The exact texture boundary can be obtained from the texture boundary possibility. First, the Euclidean distance between adjacent pixels of the image I is calculated according to Equation (5) as the color boundary possibility of the image. As shown in Fig. 2, the color boundary possibility describes the position of boundaries more accurately, but many spurious responses are detected inside the textures. Conversely, the texture boundary possibility detects the texture edges and suppresses responses inside the textures, but the spread of its detection response cannot indicate the exact position of the texture boundary. By taking the Hadamard product of the color boundary possibility and the texture boundary possibility, the accurate texture boundary position Tb (Fig. 2(d)) is obtained.
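A sketch of how the two cues could be combined, assuming Equation (5) measures the color distance to the immediate right and bottom neighbors (the exact form of Equation (5) is not reproduced here, so this reading, and the function name, are ours):

```python
import numpy as np

def color_boundary(img):
    """Per-pixel color boundary possibility: Euclidean color distance to the
    right and bottom neighbors, combined by taking the maximum."""
    H, W = img.shape[:2]
    dx = np.zeros((H, W))
    dy = np.zeros((H, W))
    dx[:, :-1] = np.sqrt(((img[:, 1:] - img[:, :-1]) ** 2).sum(axis=-1))
    dy[:-1, :] = np.sqrt(((img[1:] - img[:-1]) ** 2).sum(axis=-1))
    return np.maximum(dx, dy)

# float image (avoids uint8 wrap-around): left column black, right column white
img = np.zeros((2, 2, 3))
img[:, 1] = 1.0
cb = color_boundary(img)
# precise texture boundary would then be the Hadamard product: Tb = cb * Tbp
```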
The texture boundary Tb is binarized and its skeleton Ts is extracted; the filtering scale map is then built from Ts. A variable λ controls the strength of the edge-preserving filter: the larger λ is, the smoother the filtering result. Inside a texture region λ should be large, so that the result there is smooth; at a texture boundary λ should be smallest, so that the edge is preserved. In this paper, dilation is used to construct the filtering scale map; the process of constructing a map with scale resolution n is shown in Fig. 3.
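The dilation-based construction can be sketched as follows: pixels on the skeleton Ts get scale 0, and each dilation step assigns the next scale value to the newly reached pixels, up to the resolution n. The 3x3 structuring element and the function names are our assumptions:

```python
import numpy as np

def dilate3x3(mask):
    """Binary dilation with a 3x3 structuring element, via padded shifts."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
    return out

def filtering_scale_map(Ts, n):
    """Scale map: 0 on the skeleton Ts, growing with distance, capped at n."""
    scale = np.full(Ts.shape, n, dtype=int)
    reached = Ts.astype(bool)
    scale[reached] = 0
    for i in range(1, n):
        grown = dilate3x3(reached)
        scale[grown & ~reached] = i   # newly reached pixels get scale i
        reached = grown
    return scale

# single skeleton pixel in the center of a 5x5 map, resolution n = 3
Ts = np.zeros((5, 5), dtype=bool)
Ts[2, 2] = True
sm = filtering_scale_map(Ts, n=3)
```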
An example of a filtering scale map is shown in Fig. 4. λ can be set as a variable with a non-linear relationship to the value of the filtering scale map.

C. ADAPTIVE SHAPE FILTERING
To obtain the final filtering result, an edge-preserving filter is designed in this paper. Its principle is simple. Taking the point I(x, y) on the image I as an example, the Euclidean distances between I(x, y) and all pixels in its neighborhood are computed, and the average of all pixels whose distance is less than the threshold λ is taken as the filtering result:

J(x, y) = (1 / |S(x, y)|) Σ_{(u, v) ∈ S(x, y)} I(u, v),  S(x, y) = {(u, v) ∈ N_w(x, y) : ||I(u, v) − I(x, y)||_2 < λ},

This averages only pixels of similar color while leaving out pixels that differ too much, thus achieving edge-preserving filtering.
where w denotes the scale factor of the neighborhood, whose size is (2w + 1)^2. Since computing the Euclidean distance between every point of the neighborhood and the central point pixel by pixel is time consuming, this paper designs an optimization method that replaces the per-pixel neighborhood scan of an image of size row × col with n whole-image operations. First, the original image is translated horizontally from −w to w, with the blank area filled with 0. For each horizontal translation, a vertical translation from −w to w is performed, so that (2w + 1)^2 translated images T_i are obtained; set n equal to (2w + 1)^2. Then the Euclidean distance between each translated image and the original image is computed, yielding n Euclidean distance matrices Do_i (i = 1, 2, ..., n). Each Do_i is thresholded as

d_i = 1 where Do_i < λ, and d_i = 0 elsewhere,

producing n binary matrices d_i (i = 1, 2, ..., n). The filtering result is then calculated by the Hadamard product as

J = (Σ_{i=1}^{n} d_i ∘ T_i) ⊘ (Σ_{i=1}^{n} d_i),

where ∘ denotes the Hadamard product and ⊘ element-wise division.
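The translation-based computation can be sketched as below; the border handling by zero padding and the function name are our choices (a production version would mask the padded area):

```python
import numpy as np

def adaptive_shape_filter(img, w, lam):
    """Translation-based adaptive shape mean filter.

    For every pixel, averages the neighbors in the (2w+1)^2 window whose
    color Euclidean distance to the center is below lam, using (2w+1)^2
    shifted copies of the image instead of a per-pixel loop."""
    H, W = img.shape[:2]
    img = img.astype(float)
    pad = np.pad(img, ((w, w), (w, w), (0, 0)))   # blank area filled with 0
    num = np.zeros_like(img)
    den = np.zeros((H, W, 1))
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            shifted = pad[w + dy:w + dy + H, w + dx:w + dx + W]  # T_i
            dist = np.sqrt(((shifted - img) ** 2).sum(-1, keepdims=True))
            d = (dist < lam).astype(float)    # binary matrix d_i
            num += d * shifted                # Hadamard product with T_i
            den += d                          # d_i includes the center (dist 0)
    return num / den

# piecewise-constant image: an ideal step edge should survive unchanged
img = np.zeros((8, 8, 3))
img[:, 4:] = 1.0
out = adaptive_shape_filter(img, w=2, lam=0.5)
```

Because the center pixel always satisfies the threshold, the denominator is never zero.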

IV. EXPERIMENTS
In this section, we compare the proposed method with other texture filtering methods and analyze the results of the experiments.The images used in the experiments are shown in Fig. 5.
Since the results of texture filtering are difficult to quantify, we examine the details of the filtering results of the different methods to evaluate the filtering effect.

A. PARAMETER SETTING
Different parameter choices in the proposed method lead to different filtering results. To make the experimental results stable and the comparison fair, we used a fixed set of parameters for all images.
For the texture boundary detector, we set 4 detector scales, 20 filtering scales, 20 color bins, and 8 angles, with detector radii of 3, 4, 5, and 6. The scale variable of the filtering neighborhood w is set to 5. The filtering radii R_i are set from 1 to 20. The threshold λ in Equations (6) and (8) is calculated by Equation (10).
Equation (10) is an empirical formula and can be replaced by other equations to achieve different filtering effects; i is the index of the filtering scale.
For the fairness of the experiment, the main parameter settings of the competing methods are given below. The parameters of RGF are the same as in the experiments of their paper: the number of iterations is 5, the standard deviation σs of the filter is set to 3, and the range weight σr is set to 0.1. RTV has two parameters; for the images used in their paper we keep their settings, and for the new images we set σ to 6 and λ to 0.015. For STF, we set σ to 0.2 and k to 9, the values used for comparison in their paper. For LADF, we set λ to 0.01 and maxIter to 5. For IRWF, the filtering window radius is set to 20.

B. EXPERIMENT ON IMAGES
The texture filtering task is tested with the images shown in Fig. 5. Fig. 6 shows the comparisons of the texture filtering results among our method, RGF, RTV, STF, LADF, and IRWF.
It can be seen that our method has advantages in detail retention and background color reproduction. RGF blurs the tiny details of the image. The filtering result of RTV is too flat, and the background shadow information is lost. STF loses too much information, such as the fish teeth. IRWF cannot filter out textures with high contrast. LADF performs well, but its processing of the blue background in the first image is not ideal.

C. DISCUSSION
The experimental results show that the proposed method filters large-gradient textures well. This is because we mainly use mean filtering for the texture regions, which avoids the problem that bilateral-filtering-based methods cannot produce smooth results in large-gradient texture regions. Moreover, owing to the scale map and the adaptive shape filtering, the proposed method also better retains the details of the main structure of the original image.
Although our method performs very well, its computation time is somewhat long because the adaptive shape filtering requires a large window size.

V. CONCLUSION
In this paper, we propose a texture filtering method. By proportion statistics on pixel colors, the proposed method obtains accurate texture boundaries. The filtering scale map then guides the filter to use appropriate parameters in different regions. Afterward, a novel edge-preserving filter is proposed. We perform texture filtering on a series of images and compare with other methods; the results show that our method preserves details better.

FIGURE 2. (a) The original image. (b) The possibility of texture boundary. (c) The possibility of color boundary. (d) The texture boundary.

FIGURE 3. The process of constructing a filtering scale map.

FIGURE 5. The images for the experiments.

FIGURE 6. Comparison of texture filtering results.