Curves Extraction in Images

We present a methodology for extracting processes of curves in images, using a statistical summary of the directional information given by measures of location, curvature and direction associated with the pixels that compose each curve. The main purpose is to obtain measures that serve as input for the reconstruction, in vector format, of a process of curves of interest, so that the extracted curves can be easily stored and reconstructed from few parameters while conserving representative information about their curvature at each pixel. As a starting point, we use the directional information obtained from a methodology of consistent curve detection, which includes the decomposition of the image into a directional domain contained in R^k, with k ∈ N. Basic summary measure criteria are proposed for this type of data, and an application to the extraction of sections of rivers in four satellite images is shown.


Introduction
The description of processes of interest located on objects in images is a task of great importance for applications in different disciplines, from the study of satellite images, printed text processing and the representation of events in seismic traces, to medical imaging by x-ray and tomography (Gonzalez & Woods 2002, Cheriet, Kharma, Liu & Suen 2007). Describing the properties of the objects contained in images allows us to obtain information about the phenomena under study associated with those objects. An image is made up of a number of pixels or points in N^2, each associated with a gray intensity value (in the case of a monochrome image). Commonly, detection methodologies based on some property of the intensity of the pixels are applied to images, which allows determining which pixels belong to a process described by the object and which should be separated from it and dismissed as part of the background. An extra step, which allows an understanding of the visual appearance of the detected object, is the recognition of the object, carried out through the extraction of some of its features. We call this procedure "object extraction"; in our particular case, "curve extraction", which consists not only of identifying the pixels of the image that form part of a curve, but also of associating to each pixel a vector of properties that describe the region of the curve to which it belongs, and that will be used as input for a simplified reconstruction of the curves.
In this paper a procedure of curve extraction is carried out by obtaining features based on the decomposition of the studied image into directional channels. In a first step, the image is processed using a method of curve detection in order to obtain information from its directional decomposition (Do 2001, Martínez & Ludeña 2011). Thereafter, a set of summary measures of the directional information is presented and applied, chosen to represent the extracted features (Nixon & Aguado 2008, Chang & Coghill 2000): location, curvature and direction of the curve at each pixel. Also, a preliminary curve reduction algorithm is presented, whose purpose is to provide an alternative to classic thinning and object tracking algorithms (Cheriet et al. 2007, Myler & Weeks 1993) through simple operations linked to the directional information of the process, which will facilitate the reconstruction of the extracted curves. This article is organized as follows. In section 2 we expose the methodology in detail, from the filters used for curve detection to obtain the directional information, to the design of the summary measures and the reduction algorithm. In section 3 an application to satellite images of rivers is shown, with a brief presentation of the results of the proposed methodology. Finally, conclusions about the results obtained are exposed.

Methodology
This section describes the methodology used for the extraction and reduction of curves. We observe the image I(x, y), with (x, y) ∈ P ⊂ N^2, P a finite set. I is a function which associates to each pixel (x, y) a gray intensity value u ∈ R. The relevant information in the image consists of the curve processes. So, the image can be described by the model

I(x, y) = C(x, y) + b(x, y),

where C(x, y) represents the curves and b(x, y) is the background, composed of objects which are not of interest, and we want to discard this background. This procedure is called object detection. Because a known property of curve-like objects is that their pixels spread energy over several simultaneous directions (Chang & Coghill 2000, Do 2001, Candès & Donoho 2004), a detection procedure using directional filters is applied.
So, as an initial step, a decomposition of the image I into nonoverlapping directional channels is applied, containing the directional energy produced by the convolution between the image and a tight frame of curvelets. This decomposition covers directions from 0° to 180° at a given scale j, and for each pixel a total of 2^{D(j)} directional channels are obtained.
The problem is that after this application, too much directional information is obtained for each pixel. If the refinement scale j increases, so does the number of directions in which it is possible to track curves. This hinders an effective description of the behavior of the curves. In this type of problem, called the high dimensionality problem (Nixon & Aguado 2008), the solution lies in transforming the information through some summary technique that conveniently describes the studied objects with few parameters. As part of the feature extraction procedure, a summary of the information based on the directional energy distribution is proposed. We will use the maximum and the median of the distribution of the percentage of directional energy as measures: first, to assign a unique label to each pixel that identifies the predominant direction to which it belongs; second, to assign a unique magnitude of directional energy. The measures to be obtained are: the number of directional channels, directional labels and a summary of directional energy. The first two measures will be used as representatives of curvature and direction of each pixel, respectively. The last measure will be used later as information about the morphology of the curve.
During the detection, a threshold method is applied to enhance the information of interest, which is usually hidden by high-energy events.
In a final step, a proposal of reduction is presented as an alternative thinning procedure (Cheriet et al. 2007, Myler & Weeks 1993) to obtain a simplification close to the medial axis of the extracted curves and facilitate their representation.It will be based on the information given by the directional features.
In the following subsections we briefly describe the theoretical and application details of these steps to achieve the extraction of curve processes in images.
Curvelet tight frame elements are denoted by ρ_{j,k,x,y}, where j represents the scale at which the curvelet is calculated, k the direction of the base of the curvelet, and (x, y) the position of the pixel p_{x,y}. The appendix provides a brief description of the expression used for the calculation of ρ_{j,k,x,y}. The relationship between width and length of each of these elements is anisotropic and is given by w = l^2, where w represents the width and l the length of the base of the curvelet. In the work of Candès & Donoho (2004) its suitability and efficiency for the representation of objects with curvature was proven.
A directional channel is obtained from the coefficients given by the convolution of an element ρ_{j,k,x,y} of the curvelet frame with the image I. The coefficients of the channel have more energy for pixels in regions whose direction matches the base of the curvelet used to produce it.
Let n = n_1 × n_2 be the dimension of the treated image; after the application of the 2^{D(j)} curvelets, an output matrix with dimension 2^{D(j)} × n is obtained. Each of the n pixels that compose the image will have 2^{D(j)} associated channels, through which the directional energy is distributed. One of the properties of this energy distribution is related to the curvature of the curve: the greater the curvature, the greater the directional energy dispersion; i.e., the energy occupies more directional channels, but at the same time the magnitude of this energy is lower in each of them. Conversely, if the curvature is smaller, fewer channels are occupied, but the magnitude of the energy in them increases.
The directional channel α_{j,k} is obtained by the convolution of the curvelet matrix ρ_{j,k} with the image I; therefore α_{j,k} = ⟨I, ρ_{j,k}⟩ is a matrix where each entry is linked to a pixel p_{x,y} at coordinates (x, y) and contains the directional energy value α_{j,k}(x, y). We assign intensity values as follows: zero to the background, greater than zero to objects.
From the previous development, at a scale j there are d = 2^{D(j)} directional channels in the implementation of the curvelet frame, with j ∈ N and D(j) = j + 2. This means that when the value of the scale increases, the scale is finer and more directional channels are obtained. Each directional channel is assigned a number k as a label, with k = 1, ..., d, representing a range of directions for the angle θ with increments of s = 180°/d. Thus, for the scale j = 3, used in the application in section 3, the directional domain between 0° and 180° is subdivided into d = 32 channels, with increments of s = 5.625°, where (k − 1)s ≤ θ < ks, with k = 1, ..., 32. For example, the channel labeled k = 7 contains information on directions from 33.75° to 39.375°.
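As an illustration, the channel labeling above can be sketched in a few lines. This is a minimal Python sketch (the paper's implementation is in MATLAB); the function name `channel_range` and its signature are ours, for illustration only.

```python
def channel_range(k, j=3):
    """Angle range (degrees) covered by directional channel k at scale j.

    Channels are labeled k = 1, ..., d with d = 2^(j+2) directional
    channels and increments s = 180/d, so channel k covers [(k-1)s, ks).
    """
    d = 2 ** (j + 2)
    s = 180.0 / d
    return (k - 1) * s, k * s
```

For j = 3 this gives d = 32 channels of width s = 5.625°; channel k = 7 covers 33.75° to 39.375°, as in the example above.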

Thresholding
Enhancement by intensity transformation is an ideal approach for problems where part of the image may contain hidden features of interest. In our particular case, we need to enhance zones of the curve where the amount of energy spreading is too subtle in relation to other events, such as intersections between curves, since these produce an amount of energy greater than that of the rest of the pixels on almost all directional channels. So, after the directional decomposition, to enhance the information of interest and select the pixels to be labeled as curve, a global threshold with r = 3 levels is used (Gonzalez, Woods & Eddins 2004).
The definition of the threshold is the following:

T(u) = 0 if u < u_0;  T(u) = a·u if u_0 ≤ u < u_1;  T(u) = b·u if u_1 ≤ u < u_2;  T(u) = u if u ≥ u_2.

Here, a slicing of the image is produced, where values lower than u_0 are discarded (set to zero), values higher than or equal to u_2 remain unaltered, and the rest of the intensities are transformed into two slices which only preserve proportions a and b of the intensities, respectively.
Due to the fact that objects in an image occupy different ranges in the gray scale, to choose appropriates values to thresholds, histogram shape-based methods are commonly used, where the peaks, valleys and curvatures of the histogram of the image are analyzed (Sezgin & Sankur 2004).In our particular case, intervals of intensities of gray where concavities and valleys are produced will be the guide to select the values u 0 , u 1 and u 2 .
The values a = 0.8 and b = 0.95 were used, preserving only 80% and 95% of the original intensity of the pixel, respectively. These values are kept fixed for all the presented examples.
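The slicing described above can be sketched as follows; this is a hedged Python illustration (the paper works in MATLAB), with the pixel-wise rule applied to a single intensity for clarity, and with the name `slice_threshold` being ours.

```python
def slice_threshold(u, u0=100, u1=150, u2=240, a=0.8, b=0.95):
    """Three-level global threshold: discard intensities below u0,
    scale the two middle slices by a and b, and keep intensities
    greater than or equal to u2 unaltered."""
    if u < u0:
        return 0.0
    if u < u1:
        return a * u
    if u < u2:
        return b * u
    return float(u)
```

Applying it to a whole image is then a pixel-wise map; the default values are the ones used for rivers 1 to 3 in section 3.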

Revision of Directional Channels
We present the following definitions:

Adjacent Channels: the channels corresponding to the directions closest to the range of angles of the studied channel. Each channel has two adjacent channels. For example, the channel including directions from 33.75° to 39.375° has as adjacent channels the one from 28.125° to 33.75° and the one from 39.375° to 45°.
Significant Channels: channels where the energy is greater than a certain threshold; in this case, this means that the energy is greater than zero.

Significant Adjacent Channels: a string of ordered significant channels, each of which has at least one adjacent channel that is also significant.

Isolated Channels: channels whose adjacent channels are not significant.
As a first treatment of the energy information, we apply a revision to the directional channels associated with each pixel, removing isolated channels and keeping only the significant adjacent channels that form the chain of greatest length; i.e., in the case that more than one string of adjacent channels occurs, only the longest string is preserved. Then, we adjust the energy in the remaining channels by scaling. In this way, each pixel has energy in a unique and connected string of ordered directional channels, which eliminates any ambiguity in the directional information of the pixel.
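The revision step can be sketched as a search for the longest run of nonzero entries in a pixel's energy vector. This is a minimal Python sketch under the assumption that chains do not wrap around the 0°/180° boundary; the function name is ours.

```python
def revise_channels(energies):
    """Keep only the longest run of consecutive significant (nonzero)
    channels; isolated channels and shorter runs are set to zero."""
    runs, start = [], None
    for i, e in enumerate(energies):
        if e > 0 and start is None:
            start = i
        if (e <= 0 or i == len(energies) - 1) and start is not None:
            end = i if e <= 0 else i + 1   # run covers [start, end)
            runs.append((start, end))
            start = None
    if not runs:
        return [0.0] * len(energies)
    s, e = max(runs, key=lambda r: r[1] - r[0])
    return [v if s <= i < e else 0.0 for i, v in enumerate(energies)]
```

After this revision, the number of significant adjacent channels l(x, y) is simply the count of nonzero entries of the returned vector.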

Number of Significant Adjacent Channels
We use the number of significant adjacent channels that were not eliminated during the revision, denoted by l(x, y). In Martínez (2011) the total number of channels occupied by the directional energy was used as part of a functional which allowed objects in the image to be separated consistently into two groups: "curves", or objects with non-zero curvature, and "straight lines", or objects with zero curvature. Here, the revised version of this quantity is proposed as a representative value of the curvature feature of the curves.

Directional Label
We establish a unique label that represents, among all significant adjacent channels, the summary of the directional information of each pixel. As summary criteria, the median position and the position of the maximum of the significant adjacent channels were taken; they are described as follows.
Middle Channel Label. For each pixel, the central channel of the significant adjacent channels is identified, and its directional label is assigned to the pixel. Thus, for the image I, if the pixel p = (x, y) ∈ I has a chain of significant adjacent channels of length l_{x,y}, with labels k_1, ..., k_{l_{x,y}} and energies α_1, α_2, ..., α_{l_{x,y}}, then

l_med(x, y) = k_⌈l_{x,y}/2⌉ (3)

is the middle channel label for pixel p.
Maximum's Channel Label. For each pixel, we identify the channel with the greatest directional energy among the significant adjacent channels, and its directional label is assigned to the pixel. Thus, for the image I, if the pixel p = (x, y) ∈ I has a chain of significant adjacent channels of length l_{x,y}, with labels k_1, ..., k_{l_{x,y}} and energies α_1, α_2, ..., α_{l_{x,y}}, then

l_max(x, y) = k_m, with m = arg max_{1≤i≤l_{x,y}} α_i, (4)

is the maximum's channel label for pixel p.
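Both labels can be sketched together, assuming the pixel's energy vector has already passed the revision step above (one connected chain of nonzero entries); the names and the even-length median convention are ours.

```python
def directional_labels(energies):
    """Middle channel label and maximum's channel label for a revised
    energy vector; labels are 1-based channel indices, or None when
    no channel is significant."""
    chain = [i + 1 for i, e in enumerate(energies) if e > 0]
    if not chain:
        return None, None
    mid = chain[(len(chain) - 1) // 2]              # median position
    mx = max(chain, key=lambda k: energies[k - 1])  # highest energy
    return mid, mx
```

For an even-length chain this takes the lower of the two central channels; the paper does not fix this convention.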

Summary of the Energy from Significant Adjacent Channels
Here, we take into account the relationship between the magnitude of the summarized directional energy and the morphology of the curve. This requires a unique value per pixel representing the directional energy. The median and the maximum are again used to obtain this value. The summarized energy values will be compared among groups of pixels to decide which are discarded and which are retained to carry out the tracking of the trajectory of the curves previously detected and extracted.
Middle Channel Energy. For each pixel we identify the central channel of the significant adjacent channels, and its energy is assigned as the representative of all the energies.

Thus, for the image I, if the pixel p = (x, y) ∈ I has a chain of significant adjacent channels of length l_{x,y}, given by the vector of energies α_1, α_2, ..., α_{l_{x,y}}, and l_med(x, y) is the middle channel label, then the middle channel energy is given by

G_med(x, y) = α_⌈l_{x,y}/2⌉. (5)

Maximum Energy of the Channels. For each pixel we identify the channel with maximum energy among the significant adjacent channels, and its energy is assigned as the representative of all the energies.
Thus, for the image I, if the pixel p = (x, y) ∈ I has a chain of significant adjacent channels of length l_{x,y}, given by the vector of energies α_1, α_2, ..., α_{l_{x,y}}, and l_max(x, y) is the maximum's channel label, then the maximum energy of the channels is given by

G_max(x, y) = max_{1≤i≤l_{x,y}} α_i. (6)

For both measures, it will be shown in section 3 that the energy tends to be higher for the internal pixels of the curves. This directional information can therefore be useful for the reduction method, which will be used to locate pixels representative of the medial axis of the curves.
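The two energy summaries mirror the two labels; a minimal sketch on the chain of energies (the function name is ours):

```python
def energy_summary(energies):
    """Middle channel energy and maximum energy of the channels for a
    revised energy vector (one chain of nonzero entries)."""
    chain = [e for e in energies if e > 0]
    if not chain:
        return 0.0, 0.0
    g_med = chain[(len(chain) - 1) // 2]   # energy at the middle channel
    g_max = max(chain)                     # energy at the maximum channel
    return g_med, g_max
```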

Curves Reduction
We present the following definitions:

Region: pixels belonging to pieces of curves.

Directional Region: all the pixels labeled under the same direction. There will be a total of d directional regions produced in the decomposition of the image, which we denote by R_k, with k = 1, ..., d.

Directional Subregion: a subset of pixels belonging to a directional region which composes a connected region; i.e., each pixel has at least one neighboring pixel in the same directional region. It is denoted by S_k^i, with k = 1, ..., d, i = 1, ..., r(k), where r(k) represents the total number of directional subregions of the directional region R_k.
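Directional subregions are the connected components of each directional region. A sketch using breadth-first search with 8-connectivity follows; the neighborhood choice is our assumption, since the text only requires "at least one neighbor pixel".

```python
from collections import deque

def subregions(pixels):
    """Split a directional region (iterable of (x, y) pixels sharing
    one label) into connected subregions using 8-connectivity."""
    pixels = set(pixels)
    out = []
    while pixels:
        seed = pixels.pop()
        comp, queue = {seed}, deque([seed])
        while queue:
            x, y = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    n = (x + dx, y + dy)
                    if n in pixels:
                        pixels.remove(n)
                        comp.add(n)
                        queue.append(n)
        out.append(comp)
    return out
```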
To keep track of the curves, a reduction is carried out taking advantage of the directional information summarized by the location, direction and curvature features obtained in the extraction. This procedure is done per directional subregion. We take into account only the subregions S_k^i that have more than q pixels, where the value q can be changed interactively.
Two approaches are proposed to deal with this part of the problem. The first approach is based on the measures of middle channel energy and maximum energy of the channels: each region is reduced by a threshold that depends not only on the value of the energy, but also on the number of pixels in the subregion. The threshold retains only a proportion p_e ∈ (0, 1) of the pixels with the highest energy values.
Let S_i be the subregion composed of n_i pixels with energy coefficients G_i^m, 1 ≤ m ≤ n_i, and let G_i^{(m)} be its sorted version, where (m) refers to the index of the m-th value of energy sorted in ascending order. Let p_e be the proportion of pixels we want to retain per subregion; then, the number of pixels to be rejected is calculated by

n_0 = ⌈(1 − p_e) n_i⌉.

So, the threshold limit will be given by G_i^{(n_0)}. Thus, only those pixels such that G_i^m > G_i^{(n_0)} will be kept.
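This first reduction can be sketched as an order statistic on the subregion's energies; the rounding convention for the number of rejected pixels, and the function name, are our assumptions.

```python
def reduce_by_energy(energies, p_e):
    """Keep roughly the proportion p_e of pixels with the highest
    summary energy in one subregion; returns the kept indices."""
    n = len(energies)
    n0 = n - int(round(p_e * n))                   # pixels rejected
    order = sorted(range(n), key=lambda i: energies[i])
    return sorted(order[n0:])                      # indices above the cut
```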
In the second approach, for the subregion S_i the pixel with the highest energy value is used as the starting point for the reduction of the subregion. From this pixel, a straight line is drawn with the slope given by the directional label previously assigned to the subregion. Only the pixels in the subregion that match a discretized version of this straight line are kept.
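A sketch of this second approach, rasterizing the line by rounding and keeping the subregion pixels that fall on it; the line length and passing the energies as a dict are illustrative assumptions of ours, not details fixed by the paper.

```python
import math

def reduce_by_line(pixels, energies, angle_deg, length=50):
    """Keep the pixels of a subregion that fall on a discretized
    straight line through its maximum-energy pixel, with the slope
    given by the subregion's directional label."""
    x0, y0 = max(pixels, key=lambda p: energies[p])
    t = math.radians(angle_deg)
    line = set()
    for step in range(-length, length + 1):
        line.add((x0 + round(step * math.cos(t)),
                  y0 + round(step * math.sin(t))))
    return sorted(p for p in pixels if p in line)
```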
Finally, we complete the tracking of the trajectory of the curves by associating subregions which have neighboring pixels with directional labels close to the directional label of the treated subregion.

Implementation with MATLAB
Routines implementing the methodology outlined in section 2 were programmed in MATLAB. The scale j = 3 was used in the application, for a total of d = 32 directional channels. Images are presented in a gray-level system with 256 values in the range from 0 to 255.

Case Studies
We consider an application to four satellite images from Google Earth, which contain roads, built-up or cultivated areas, and sections of rivers. The images are presented in figure 1. The goal is to achieve the extraction of the rivers as a curve process by obtaining the following features: location, curvature and direction for each pixel of the curve. Next, a reduction of the extracted curves is applied.

Directional Channels
Figure 2 shows some examples of curvelet frame elements at the scales j = 1, 2, 3, for two different directional channels. In these images the differences between scales can be appreciated: at a finer scale (larger j), the elements are smaller and thinner. In the decomposition and extraction of curves from the image, only the finest scale, j = 3, will be used.

Thresholding
Figure 3 shows histograms of directional channels for each image in figure 1, with vertical lines set at the gray intensities where the values u_0, u_1 and u_2 were selected. In the case of the river images, the values u_0, u_1 and u_2 are selected taking into account the concavity and the valley, or low values, presented in the tail of the histograms. For the river 1 image we decided, based on its histogram, to use two values near the concavity of the histogram and another at the end of the tail, given by u_0 = 100, u_1 = 150 and u_2 = 240. Due to the similar behavior of the histograms of the other images, we used the same values, except in the case of river 4, where the tail presents a substantial decay; in this case the value u_2 = 200 was selected. Table 1 summarizes the selected values.

Number of Significant Adjacent Channels
To facilitate their interpretation, figure 4 shows the number of significant adjacent channels l(x, y) encoded in grouped labels. In figure 5, zones from river 1 and river 4 which include several regions are shown. We observe that in regions where there is a greater change of curvature, or where there are changes in concavity, the number of significant adjacent channels per pixel is notably higher than in regions where the curvature changes are few and do not produce changes in concavity. For example, in figure 5(a) the pixels with 18 to 22 significant adjacent channels approximately correspond to a change in concavity. As pixels move away from the zone of the change in concavity, the number of significant adjacent channels decreases; in this case, to the group with 13 to 17 channels, followed by the groups with fewer than 13 channels. In case 5(b) the inflection point of the curve, scarcely noticeable at first sight, is clearly marked with a high value of l(x, y), from 23 to 32 channels. In cases 5(c-d), pixels with several changes in concavity are marked with the 32-channel label. In figure 6 these details are highlighted for river 1.
In summary, with this measure we can observe that zones with greater curvature correspond to a larger number of channels, while zones with less curvature correspond to a smaller number of channels, and changes in concavity can be determined. A result to take into account is that some regions with 30 or more significant adjacent channels may belong to crossings, regions composed of circular structures, or regions with non-identifiable directionality at the current scale. In figure 7, examples from river 1 and river 3 are shown, in which pixels with 32 significant adjacent channels were highlighted; i.e., all possible channels have non-zero energy. In case 7(a) the highlighted pixels belong to a crossing of the river with a very faint object. In cases 7(b-c) one does not get a clear identification of the curvature changes apparent at first glance in the original image. In these cases, an initial proposal is to work at a coarser scale for the analysis, since at the current scale the bases of the frame elements do not produce coefficients equal to zero even when the base does not match the direction of the curve. For example, case 7(b) is treated at the scale j = 2 in figure 8, which shows three of the d = 16 possible channels at this scale. In this case, of the total number of channels only eight are significant, and only five of them are adjacent; most channels lie between 0° and 45°. Therefore, this scale does not present the lack of definition present at the finer scale. However, in the following sections we will see that one of the proposed measures is also useful for handling this problem.

Directional Labels
As a summary of all the directional channels, we associate a single value or label to each pixel among the d = 32 possible directions. The encoding of directional channels into labels was detailed in subsections 2.1.1 and 2.2.3. The results for the middle channel label and the maximum's channel label were calculated using (3) and (4), respectively. In the graphical representation, to simplify, the labels are grouped in three classes: labels from 1 to 12, associated with directions 90° to 157.5°; labels from 13 to 19, associated with directions 157.5° to 180° and 0° to 16.875°; and labels from 20 to 32, associated with directions 16.875° to 90°.
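The grouping into three display classes is a simple lookup; a sketch with class names of our own choosing, following the slope behavior described below.

```python
def label_class(k):
    """Group a directional label k in 1..32 into the three classes
    used in the figures: 1-12, 13-19 and 20-32."""
    if not 1 <= k <= 32:
        raise ValueError("label out of range")
    if k <= 12:
        return "left"        # subregions sloping towards the left
    if k <= 19:
        return "horizontal"  # near-horizontal subregions
    return "right"           # subregions sloping towards the right
```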
As can be seen on the right side of figure 9, which represents middle channel labels, subregions labeled from 1 to 12 show the notable property of sloping towards the left in most cases, while subregions labeled from 20 to 32 slope towards the right in most cases, and subregions labeled from 13 to 19 lie in a horizontal position. Few of the latter cases presented problems of identification. In figure 10 we show examples of particular zones.
In figure 11, we show a couple of examples of zones with the middle channel label assigned to each pixel, represented by straight line segments whose slope is associated with the label. The gradual change in the label can be observed as the direction of the curve changes.
In cases with ambiguities in the number of significant adjacent channels, the information given by this measure is not clear, as seen in a few regions of rivers 1 and 4 and in long regions of rivers 2 and 3. Thus, the middle channel label is a suitable candidate to represent the directional information of each pixel as a unique value, but problems remain for some types of regions.

Figure 9: Images of middle channel labels for rivers 1 and 3. On the left side, the representation in each of the 32 possible labels is shown. On the right side, the labels are grouped in three gray levels.
In the case of the maximum's channel labels, a behavior similar to that of the middle channel labels can be observed in figures 12 and 13. We emphasize, in particular, the behavior of the maximum in the zones with identification problems, represented in cases like 13(c) compared to 10(c), and in the entire image of river 3 in figure 12 compared to figure 9. In these cases, it was possible to obtain an assignment of a label for subregions consistent with the behavior of the river direction in a zone where the assignment given by the number of significant adjacent channels or the middle channel labels failed. Thus, we propose this label as the better provider of information for the final extraction of the curves in the image, in comparison with the middle channel label. In future work we will combine this label information with that found at the coarser scale in subsection 3.5, for a better treatment of curvature in cases with ambiguity. With these labels we have a single value per pixel serving as the representative of the curve direction.
Finally, for the case under study, we obtained each of the desired features of the curves, and their usefulness has been proved by comparing their behavior against the behavior of the curves in the image:

• Location: the pixel coordinates themselves; a pixel with non-zero energy is identified as part of a curve.

• Curvature: the number of significant adjacent channels is a measure of the degree of curvature of the curve.

• Direction: the middle channel label is a proper representative of the direction the curve follows, but the use of the maximum's channel label is preferred since it allows a better directional description, even in cases where the middle channel label fails.

So, we assign to the pixel p_{x,y} the features vector

c_{x,y} = ((x, y); l(x, y); l_max(x, y)).

The set of all these vectors composes the group of extracted curves.

Summary of the Energy
It is natural to expect the energy produced by the convolution between the image and a given curvelet to adjust to the shape of the curve in the direction of the base of the curvelet, such that the highest values of directional energy are concentrated in the inner part of the region, in the given direction. This is because the values that are closer to the background, and away from the direction of the base of the curvelet, incorporate more zeros than the rest. Our measure, however, is a summary of energy in which, among the 32 directional channels, a single channel is chosen whose energy will be the representative; this requires that the summary maintain the above-described property.
The measures used for selecting the energy are again the median and the maximum. So, we calculate the middle channel energy and the maximum energy of the channels by (5) and (6). Results are shown in figures 14 and 15, with examples of some zones studied in the previous sections. Here, it can be observed that the magnitude of the energy produced by the summary procedure continues to adjust to the shape of the curves through the entire image. These results motivate the reduction proposal presented in subsection 2.3.2 to find a representative of the medial axis of the curves within each subregion.

Curves Reduction
In order to simplify any reconstruction of the curve, the next goal is to find a reduced version of the extracted curves, performing the tracking of the curve trajectory by calculating a candidate for the medial axis of each subregion. We will use the summary of energy as input, due to the properties observed in section 3.7. The results presented in the images are based on the maximum energy of the channels. We only consider subregions composed of more than q = 5 pixels.
In the first case of reduction, figure 16, we set the value p_e = 1/3; i.e., only a third of the pixels of each subregion is preserved.
In the second case of reduction, the angle of the straight line segment is the lower limit of the range of angles that define the directional channel. The initial pixel is the one with the most energy over all pixels in the subregion S_i. An example of the application to one subregion is given in figure 17.
In figure 18 the original images of the rivers are shown together with the results of the second case of reduction. We carry out the tracking of the curve, joining only neighboring subregions whose directional labels have close values; in this case, no more than three neighboring channels of difference. Results are also shown on the right side of figure 18. When comparing the reduced results with the original images, the reduction follows much of the rivers' path. But, for example, at the beginning of river 1, shown in figure 5, some river sections are thinner and not captured by the detection procedure. In this case we propose working with a finer scale, j = 4, which would use d = 64 directional channels.
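The joining criterion can be sketched as a distance between labels; measuring the distance circularly (directions wrap at 180°) is our assumption, since the paper only states "not more than three neighbor channels of difference".

```python
def can_join(k1, k2, d=32, tol=3):
    """Decide whether two neighboring subregions may be joined:
    their directional labels must differ by at most tol channels,
    measured circularly over the d channels."""
    diff = abs(k1 - k2) % d
    return min(diff, d - diff) <= tol
```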
As for the remaining objects not belonging to the rivers, it can be observed that they are reduced to very simplified strokes and are also perfectly separated from the rivers.

Conclusions and Future Work
For practical examples on satellite river images, we determine that using curve detection and measures such as the number of significant adjacent channels and the middle channel and maximum's channel labels, we can effectively describe features such as location, curvature and direction at each pixel of the curves in the image. With this, we achieved the extraction of the curves with features that can be used as input for their reconstruction. As a preliminary step for reconstruction, we applied a reduction proposal for the extracted curves, based on the middle channel energy and the maximum energy of the channels, with two different approaches to thinning by reduction. The results obtained in the reduction show an effective tracking of the river trajectory. However, there is room for improvement, and suggestions for future work are presented in this regard:

• The inclusion of pixel curvature information to make a better decision on the representation of the reduction.
• The implementation of a finer scale j = 4 with d = 64 possible directions, which would include information on thinner sections of the river that were not detected.

Aside from this, we will consider other applications of the obtained results:

• The reconstruction in a vector file (svg) of curves extracted from an image.
• The measurement of objects in an image.
• The characterization of textures in an image.
Filters h_k^l are the inverse z-transform of the filters H_k^l.

Figure 1: Satellite images of rivers from Google Earth.

Figure 2: Images of some examples of curvelet frame elements at the scales j = 1, 2, 3.

Figure 3: Histograms of directional channels from the images in figure 1.

Figure 4: Images of the number of significant adjacent channels. The scale j = 3 has a total of d = 32 possible channels; here labels were grouped in six gray levels.

Figure 5: Images of the number of significant adjacent channels. Example zones from figure 4: (a-b) from river 1, (c-d) from river 4.

Figure 6: Highlighted details of the number of significant adjacent channels for river 1.

Figure 7: Images with highlighted details of zones with problems. (a) Crossing between the river and another object. (b-c) Regions with non-identifiable directionality. In all cases, the energy of the pixels occupies all d = 32 directional channels.

Figure 8: Images of channels at scale j = 2 for the zone from figure 7(b). At the coarser scale j = 2 the studied region appears in scarcely five of d = 16 significant adjacent channels, while at the scale j = 3 it appears in all d = 32 channels. Here, two channels that are significant for this zone and one in which the energy is zero are shown.

Figure 10: Images of example zones of river 1 from the right side of figure 9.

Figure 11: Images of the flow of directional labels. Gradual change in the labels follows changes in curvature.

Figure 12: Images of maximum's channel labels for all rivers. On the left side, the representation in each of the 32 possible labels is shown. On the right side, labels were grouped in three gray levels.

Figure 13: Images of example zones of maximum's channel labels.

Figure 14: Images of middle channel energy with zones where some of the pixels with higher energy are highlighted. (a-c) river 1, (d) river 2, (e) river 3 and (f) river 4.

Figure 15: Images of maximum energy of the channels with zones where some of the pixels with higher energy are highlighted. (a-c) river 1, (d) river 2, (e) river 3 and (f) river 4.

Figure 16: Image of the first case of reduction for river 1. Only a third of the pixels of each subregion is preserved.

Figure 17: Image of the second case of reduction, for a single subregion. The zone of the curve containing the subregion is in light gray, the subregion in dark gray, and the straight line segment from the reduction in black.

Figure 18: Images of the second case of reduction for all rivers. On the left side, the original images. In the center, the reduction. On the right side, the tracking of the trajectory from the reduction.

Table 1: Values for the threshold.

Image     u_0    u_1    u_2
River 1   100    150    240
River 2   100    150    240
River 3   100    150    240
River 4   100    150    200