Multi-Spot Tracking System for Free-Space Optical Communication

This article describes a multi-spot tracking system for free-space optical communication systems, suitable for tracking optical terminals observed within the field-of-view of a wide-angle telescope. The article explains the conversion of the terminal position to the position in the plane of the tracking sensor and the subsequent recognition and tracking of the terminals. In the first part of the article, camera image processing is described. Then, the basic projection of an optical terminal onto the tracking sensor plane (onto the active surface of a camera) is expressed using geometry and matrix optics. Afterwards, the angular resolution of optical terminals in the plane of the receiver is obtained using the Airy disk approximation. The basic projection is then extended to the calculation for the Earth-satellite system, followed by spot tracking, where the method for determining the optical terminal and the state machine are explained.


Introduction
With the current growth in the volume of transmitted data, the demand for throughput of data networks is growing globally. Without internet access, underserved areas can offer only limited development possibilities, especially for applications such as Industry 4.0. One of the possible technologies that can close these coverage gaps is optical feeder links. To ensure internet access in unserved and underserved areas, the data signal is sent from a ground station with a relatively good network connection through a high-capacity link to a satellite, and from the satellite the signal travels through multiple beams to these areas (to the users). Satellites differ in the orbit in which they are located, where the geosynchronous orbit (GEO) provides the greatest coverage. These satellites are based on radio-frequency (RF) technology with throughput in the hundreds of Gbit/s, where tens of gateways are required for this purpose. However, the most limiting features of this technology are bandwidth and interference with other communication systems [1][2][3].
Another case is the need to quickly restore communication during a disaster such as an earthquake. In this case, temporary free-space optical (FSO) lines may be set up between buildings to restore communication infrastructure.
FSO technologies offer throughput in the order of Tbit/s using only a single terminal, thanks to the relatively wide available spectrum (in the order of THz). Other advantages over conventional RF links are the license-free spectrum and immunity to interference.
A study conducted by the German Aerospace Center looked at the connection between the ground and a satellite in geostationary orbit, where the aim was to provide a connection with different optical terminals simultaneously, which allows maintaining a large connection capacity in the presence of clouds over some stations [4]. Five basic design principles of reflective wide-angle telescopes with corrections up to the 5th order were introduced in [4]. One of the basic telescopes is shown in Fig. 1. It can be seen how the rays generated by three sources at different angles are focused in the image plane. The telescope must be extended by an optical part supporting acquisition, pointing and tracking (APT) for aligning the optical axes of the transmitter and receiver and for active compensation of optical power deviations caused by vibrations, as described in [5] and [6]. For the implementation of these receivers, an optical part that covers the individual areas with communication terminals must be designed. Furthermore, the system must be able to recognize and track these terminals to prevent signal failure. In [7], the development of a pointing, acquisition, and tracking system for a nanosatellite laser communications module is described. The system considers a single optical ground terminal and a single satellite terminal at an altitude of 400 km. It uses multiple stages of sensors for system alignment. The coarse sensor uses a wider field of view (FOV) to acquire the uplink beacon, whereas the fine sensors have a narrow FOV for precision pointing. In the coarse stage, the satellite points to the ground station, where the positions of the ground station and the satellite must be known. Then a Kalman filter and a PD regulator are used for displacement compensation. In the fine stage, a centroid detector is used for angle compensation of the terminal optics. In this stage the tracking is triggered.
This article presents methods for image processing, a model for calculating the position of multiple terminals in the plane of the tracking sensor (for the basic case of 3D space and also for the Earth-satellite system) and introduces methods for recognizing and tracking these terminals.

Camera Image Processing
This section presents methods for spot acquisition and detection on the image. These methods are then compared.

Spot Detection
The laser spot on the receiver theoretically corresponds to a circular spot with a Gaussian intensity distribution. However, in the case of a ground-satellite system, the receiving aperture is small compared to the size of the beam (in the receiving plane). Therefore, a constant distribution of optical intensity and phase across the aperture is assumed. For this reason, the shape of the focused spot is similar to an Airy disc.
The contour of the laser spot can be described by the general equation of a circle
(x − x 0 )^2 + (y − y 0 )^2 = r^2, (1)
where (x 0 , y 0 ) is the center coordinate of the circle, e.g. in pixels, and r is its radius. The contour can also be described by the parametric equations x = x 0 + r cos φ and y = y 0 + r sin φ, where φ is an angle in the range of 0 to 360 degrees.
Basic methods for finding the optical spot in the camera image are introduced below.

Simple Summation of Pixels in Rows and Columns
This method is based on the summation of the pixel intensity magnitudes in rows and columns. The result is an approximate determination of the coordinates of the spots. In Fig. 2 we show a simulated image assuming a camera with 320 × 256 px and spots at spot1 [170, 60], spot2 [100, 150] and spot3 [230, 200]. On the top of the figure there is a column summation graph and on the right a row summation graph is placed. From these graphs a coordinate grid can be created, as it is for the peak amplitude in column 230 and the peaks for rows 60, 150 and 200. Two falsely detected spot coordinates, [230, 60] and [230, 150], can be seen there. These false spots may be easily filtered out by checking the intensity at these coordinates against a predetermined threshold value. A limitation of this method is its poor detectability of spots with small spacings in the diagonal directions. The approximate size of the spots must also be known in advance.
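As an illustration, the summation method can be sketched in Python. This is a minimal sketch, not the article's implementation: the synthetic image follows the spot positions of Fig. 2, but the simple peak-picking rule and the threshold value are assumptions.

```python
import numpy as np

def detect_spots_rowcol(img, threshold):
    """Locate candidate spots by summing pixel intensities along rows
    and columns; every (peak column, peak row) crossing is a candidate,
    and false crossings are filtered by checking the actual intensity."""
    col_sum = img.sum(axis=0)          # one value per column
    row_sum = img.sum(axis=1)          # one value per row

    def peaks(profile):
        # Simple peak picking: local maxima above the mean profile level
        p = []
        for i in range(1, len(profile) - 1):
            if profile[i] > profile[i - 1] and profile[i] >= profile[i + 1] \
               and profile[i] > profile.mean():
                p.append(i)
        return p

    cols, rows = peaks(col_sum), peaks(row_sum)
    return [(c, r) for c in cols for r in rows if img[r, c] >= threshold]

# Synthetic 320 x 256 image with three spots as in Fig. 2
img = np.zeros((256, 320))
for (x, y) in [(170, 60), (100, 150), (230, 200)]:
    img[y - 1:y + 2, x - 1:x + 2] = 100.0   # 3 x 3 bright blocks

spots = detect_spots_rowcol(img, threshold=50)
```

The intensity check at each grid crossing is what removes the false candidates such as [230, 60] described above.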

Hough Algorithm
The Hough algorithm is used to detect circles with a predefined radius in an image. The detection process starts with edge detection. Afterwards, for every detected edge pixel, a circle with the predefined (desired) radius centered at that pixel is drawn in an accumulator. The center of the detected circle in the image is then highlighted as a peak in the accumulator [8], [9]. To detect circles with different radii, multiple accumulators have to be constructed. In Fig. 3(a) an image of size 200 × 200 px with a spot centered at [100, 100] is shown. Figure 3(b) shows the detected edges in the image and Figure 3(c) shows the accumulator with the peak value at the center of the circle. The Hough algorithm can detect many spots in an image. However, it requires higher computation performance and more memory [9]. Another disadvantage may be a more complex algorithm setup.
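The accumulator voting described above can be sketched as follows. This is an illustrative sketch under the setup of Fig. 3 (a 200 × 200 px image, circle centered at [100, 100]); the angular step of the voting loop is an assumption.

```python
import numpy as np

def hough_circle(edges, radius):
    """Vote for circle centers of a fixed radius: for every edge pixel,
    draw a circle of that radius in the accumulator; the true center
    collects the most votes."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.deg2rad(np.arange(0, 360, 2))
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Edge image: a circle of radius 20 centered at (100, 100)
edges = np.zeros((200, 200), dtype=np.uint8)
ang = np.linspace(0, 2 * np.pi, 360, endpoint=False)
edges[np.round(100 + 20 * np.sin(ang)).astype(int),
      np.round(100 + 20 * np.cos(ang)).astype(int)] = 1

acc = hough_circle(edges, radius=20)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
```

Detecting circles of unknown radius would require one such accumulator per candidate radius, which is the source of the memory cost mentioned above.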

Least Squares Method
The least squares method is used to find the coordinates of the center of a circle and its radius. This method minimizes the sum of the squares of the residues from the prescribed circle equation (1). The residue is given by
d i = (x i − x 0 )^2 + (y i − y 0 )^2 − r^2,
where x i and y i are the point (sample) coordinates, i is the point (sample) number, x 0 and y 0 are the circle center coordinates and r is the radius of the circle. The sum of the squared residues is given by [10]
S = Σ i d i ^2.
By applying the least squares principle, i.e. setting the partial derivatives of S with respect to x 0 , y 0 and r to zero, the coordinates of the circle center and the circle radius are obtained. The least squares method allows relatively accurate determination of the center of a spot which is disturbed by diffraction [10]. This algorithm, however, can detect only one spot in an image. It follows that if there is more than one spot in the image, the image must first be divided and the algorithm applied to each part. A comparison of these methods is summarized in Tab. 1.
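One standard formulation of this least-squares circle fit is the algebraic (Kåsa) variant, which linearizes the circle equation and solves a small linear system; the sketch below uses that variant (the article's exact closed-form equations are not reproduced here), with synthetic noisy samples as an assumption.

```python
import numpy as np

def fit_circle_lsq(x, y):
    """Algebraic (Kasa) least-squares circle fit: rewrite
    (x - a)^2 + (y - b)^2 = r^2 as 2a*x + 2b*y + (r^2 - a^2 - b^2) = x^2 + y^2
    and solve the resulting overdetermined linear system for a, b and r."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r

# Noisy samples from a circle centered at (120, 80) with radius 15
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
x = 120 + 15 * np.cos(t) + rng.normal(0, 0.2, t.size)
y = 80 + 15 * np.sin(t) + rng.normal(0, 0.2, t.size)

a, b, r = fit_circle_lsq(x, y)
```

In a multi-spot image, this fit would be applied separately to each image segment containing a single spot, as noted above.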

Edge Detection
As already mentioned, some spot detection techniques require edge detection. Edge detection is a technique for finding boundaries of objects in an image by detecting discontinuities in intensity (brightness). In this section, Robert's algorithm, Sobel edge detection and Canny edge detection will be presented.

Robert's Algorithm
Robert's algorithm uses the 2 × 2 kernels
K x = [1 0; 0 −1], K y = [0 1; −1 0]
to detect edges running at ±45 degrees to the pixel grid. These kernels are applied separately and produce a gradient component in each orientation. After this, the magnitude G and slope Θ are calculated using
G = √(G x ^2 + G y ^2), (8)
Θ = arctan(G y / G x ), (9)
where the values of G x and G y are obtained as a 2-dimensional convolution of the matrices K x and K y with the source image.

Sobel Edge Detection
The principle of Sobel's edge detection is similar to Robert's algorithm. It applies two matrices K x and K y , defined as
K x = [−1 0 1; −2 0 2; −1 0 1], K y = [−1 −2 −1; 0 0 0; 1 2 1],
to the image. The application of matrix K x results in the detection of edges along the x-axis and the application of matrix K y yields edges along the y-axis. From these derivatives in the horizontal and vertical directions, G x and G y are obtained. Finally, the magnitude G and slope Θ are calculated using (8) and (9) [11].
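Both kernel pairs can be applied with a plain 2-D convolution, as sketched below; the step-edge test image is an assumption for illustration.

```python
import numpy as np

def convolve2d(img, k):
    """Minimal 'valid' 2-D sliding-window filter for small kernels."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# Roberts cross kernels (2 x 2, diagonal differences)
ROBERTS_KX = np.array([[1, 0], [0, -1]], dtype=float)
ROBERTS_KY = np.array([[0, 1], [-1, 0]], dtype=float)
# Sobel kernels (3 x 3, weighted horizontal/vertical differences)
SOBEL_KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_KY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def gradient(img, kx, ky):
    gx, gy = convolve2d(img, kx), convolve2d(img, ky)
    mag = np.hypot(gx, gy)          # magnitude G, eq. (8)
    theta = np.arctan2(gy, gx)      # slope, eq. (9)
    return mag, theta

# A vertical step edge: left half dark, right half bright
img = np.zeros((8, 8))
img[:, 4:] = 1.0

mag_s, _ = gradient(img, SOBEL_KX, SOBEL_KY)
```

On the step edge, the Sobel magnitude peaks in the columns adjacent to the intensity jump, illustrating the wider (two-pixel) edge response mentioned in the comparison below.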

Canny Edge Detection
The Canny edge detection is a multi-stage edge detection algorithm. The first stage is noise reduction, because edge detection algorithms are highly sensitive to image noise. To reduce this noise, Gaussian blur is used. This is done with a Gaussian kernel of size 3 × 3, 5 × 5, 7 × 7 etc., where a small kernel causes little visible blur. After this, the gradient is calculated using the Sobel edge detection algorithm. As a result of this step, the detected edges are of varying thickness. For this reason, non-maximum suppression is applied. It goes through all points of the gradient intensity matrix and keeps only the pixels with the maximum value in the edge direction. After this step, the image can still contain edge pixels caused by noise and color variation. To remove them, a double threshold is used. It marks pixels as strong when their intensity is higher than the high threshold, and as weak when their intensity lies between the high and the low threshold. Pixels with intensity lower than the low threshold are considered non-relevant for the edge. Finally, edge tracking by hysteresis transforms weak pixels into strong ones if and only if at least one of the surrounding pixels is marked as strong [12], [13].
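The last two stages (double threshold and hysteresis) can be sketched as follows; the threshold values and the toy gradient row are assumptions for illustration, and the hysteresis step is realized here by iterative 8-neighbour growing of the strong set.

```python
import numpy as np

def double_threshold_hysteresis(grad, low, high):
    """Classify gradient pixels as strong (> high) or weak (between low
    and high), then keep a weak pixel only if it is connected to a
    strong one (edge tracking by hysteresis)."""
    strong = grad > high
    weak = (grad >= low) & ~strong
    edges = strong.copy()
    changed = True
    while changed:
        # Grow the current edge set into adjacent weak pixels (8-neighbourhood)
        grown = edges.copy()
        grown[1:, :] |= edges[:-1, :]
        grown[:-1, :] |= edges[1:, :]
        grown[:, 1:] |= edges[:, :-1]
        grown[:, :-1] |= edges[:, 1:]
        grown[1:, 1:] |= edges[:-1, :-1]
        grown[:-1, :-1] |= edges[1:, 1:]
        grown[1:, :-1] |= edges[:-1, 1:]
        grown[:-1, 1:] |= edges[1:, :-1]
        new_edges = edges | (grown & weak)
        changed = bool((new_edges != edges).any())
        edges = new_edges
    return edges

# Gradient row: a weak run attached to a strong pixel survives,
# an isolated weak pixel does not
grad = np.array([[0, 30, 30, 90, 0, 30, 0]], dtype=float)
edges = double_threshold_hysteresis(grad, low=20, high=60)
```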
Robert's algorithm uses only 2 × 2 kernels to compute the difference between adjacent pixels along the diagonal orientations. This algorithm is fast and has high precision in the horizontal and vertical directions, but it is sensitive to noise and some edges can be lost. The Sobel edge detection algorithm uses 3 × 3 kernels with weighting of samples in the horizontal and vertical directions. Its noise immunity is better than that of Robert's algorithm, at the cost of slightly higher computational load. The result of this process may contain false edges and also more pixels per edge; the accuracy of this method is therefore relatively low. As described, Canny edge detection is a multi-stage algorithm. It follows that it is more demanding on computing time. However, it offers accurate edge detection, good immunity to noise and a small number of pixels per edge [14].
From this, the Canny algorithm should be used in applications where the spot size is at least in the tens of pixels and high edge accuracy is required. For cases where the size of the spot is in the order of a few pixels and a quick response is required, Robert's or Sobel's algorithm satisfies the requirements. The comparison is summarized in Tab.

Projection of Transmitters from 3D Space to 2D Space of the Receiver (Camera)
To calculate an approximate position of the transmitters in the camera image, a projection from 3D to 2D space is considered. In Fig. 4 the layout of an example consisting of one receiver (R = [0, 0, 2]) and two transmitters (T 1 = [1, 2, 2] and T 2 ) is shown. In Fig. 5 the geometric optics for the presented case is shown. This is used for calculating the coordinates of a focused beam on the tracking sensor plane. The beam at the system input is given by a distance y 0 from the optic axis and an angle Θ 0 , which characterize the input beam entering the system. The output of the system is again a distance y 2 and an angle Θ 2 which characterize the beam leaving the system. The geometric system is divided into three parts. The first part corresponds to the distance L 1 from the optic transmitter to the receiver and is characterized by the matrix
M 1 = [1 L 1 ; 0 1].
The second part represents the optical lens for beam focusing, described by the matrix
M 2 = [1 0; −1/f 1 1],
defined by the focal length f 1 . The last part determines the distance L 2 from the optical lens to the tracking sensor plane and it is defined by the matrix
M 3 = [1 L 2 ; 0 1].
These matrices form the transfer matrix given by
M = M 3 M 2 M 1 .
For calculating the projection of a transmitter onto the tracking sensor, Δx (respectively Δz) is substituted for y 0 , Θ x (respectively Θ z ) for Θ 0 and L x (respectively L z ) for L 1 [15].
The dependence of the output beam (y 2 and Θ 2 ) on the input beam (y 0 and Θ 0 ) is given by
[y 2 , Θ 2 ]^T = M [y 0 , Θ 0 ]^T.
After substitution, where the distance L 2 of the tracking sensor from the lens is equal to the focal length f 1 (L 2 = f 1 ), and simplification, the distance y 2 from the main optic axis in the tracking sensor plane and the angle of beam propagation Θ 2 are obtained as
y 2 = f 1 Θ 0 , (19)
Θ 2 = −y 0 /f 1 + (1 − L 1 /f 1 ) Θ 0 . (20)
The minimal separation distance of two spots in the camera image can be obtained by using the Airy disk equation
r min = 1.22 λ beam f 1 / D aperture , (21)
where λ beam stands for the wavelength of the beam and f 1 and D aperture are the focal length and diameter of the lens. From (21) the minimum resolution angle between two stations is obtained as
Θ min = 1.22 λ beam / D aperture . (22)
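The transfer-matrix calculation above can be sketched numerically; the focal length, distance and input ray below are assumed example values, and the sketch only verifies that with L 2 = f 1 the spot position depends on the input angle alone.

```python
import numpy as np

f1 = 0.1          # focal length of the lens (m), assumed value
L1 = 1000.0       # transmitter-to-lens distance (m), assumed value
L2 = f1           # tracking sensor placed in the focal plane

# Ray-transfer (ABCD) matrices for the three parts of the system
M1 = np.array([[1.0, L1], [0.0, 1.0]])         # free space to the lens
M2 = np.array([[1.0, 0.0], [-1.0 / f1, 1.0]])  # thin lens of focal length f1
M3 = np.array([[1.0, L2], [0.0, 1.0]])         # lens to tracking sensor
M = M3 @ M2 @ M1                               # overall transfer matrix

# Input ray: offset y0 from the optical axis, propagation angle theta0
y0, theta0 = 0.0, 1e-4                         # m, rad
y2, theta2 = M @ np.array([y0, theta0])
```

With the sensor in the focal plane, the element M[0, 0] vanishes, so y 2 = f 1 Θ 0 as in (19): all parallel rays from one terminal land on the same sensor coordinate regardless of their lateral offset.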

Projection of Earth-Satellite System into Camera Image Plane
In order to calculate (predict) the position of the optical terminal (OT) in the camera imaging plane of the tracking sensor, a model for the Earth-satellite system is introduced in this section. The model works with a spherical approximation of the Earth of radius r. For the calculation it is necessary to know the location of the OT (point O = (x O , y O , z O )), the center of the sphere (point C), the point above which the satellite hovers -the nadir (point N = (x N , y N , z N )), point T (defined by a perpendicular to the line |CN| of length r, corresponding to the highest top point on the sphere) and point R = (x R , y R , z R ) (defined by a perpendicular to the lines |CS| and |CH| of length r). The geometry of these points is shown in Fig. 6. For this case, where the satellite is above the equator, point T corresponds to the north pole. For the projection calculation the simplification r O = r N = r R = r T = r is used. The line |CS| defines the optical axis used for the matrix optics. To project the OT into the camera image, the distance |PS|, the distance S and the angle φ, shown in Fig. 7, have to be calculated. First, the angles α (∠SCO), β (∠TCO) and γ (∠RCO) are calculated using the great circle formula. The perpendicular distance of the OT from the optical axis is given by
d = r sin α. (26)
The distance of the projection plane from the center of the sphere is defined as
|CP| = r cos α (27)
and from this it is possible to calculate the distance of the projection plane (at point P) from the satellite,
|PS| = r + h − r cos α, (28)
where h is the altitude of the satellite. By using (26) and (28), the angle at which the beam propagates to the satellite is defined as
Θ 0 = arctan(r sin α / (r + h − r cos α)) (29)
and is used in (19) to calculate the projected distance of the OT in the camera image. Since this distance carries only information about the distance from the optical axis, it is necessary to calculate the angle of rotation from the line |CT|. For that, first the distance between point T and the optical terminal is calculated from the angle β.
Fig. 7. Geometry of the Earth-satellite system for projection of the OT into the camera image.
After that, the distance between point T and the projection point of the OT is obtained in the same way. In the next step, the angle between these points is determined and, together with S (the distance between point P and the projected point of the OT), the vertical distance of the optical terminal S y is defined. From this, the horizontal distance S x and the angle of rotation φ from the line of the horizon (|CT|) are given as φ = arccos(S y / S).
Because the great circle equation returns only a positive angle, it is necessary to determine the signs of the variables S x and S y . The model describes the projection of an optical terminal placed on Earth onto a satellite, but it can also be used for projecting/tracking optical terminals placed in different Earth orbits. The FSO communication duration in this case will vary according to the maximal elevation of the satellite's path over the ground station and the size of the horizon plane, as described in [16], where the results were validated with the AGI Systems Tool Kit software.
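The beam-angle relation of this section can be sketched as follows; the spherical Earth radius and the GEO altitude are assumed round values for illustration.

```python
import math

R_E = 6371e3        # spherical-Earth radius (m), assumed round value
h = 35786e3         # GEO altitude (m), assumed round value

def beam_angle(alpha):
    """Angle at which the beam from an OT at great-circle angle alpha
    (rad) from the sub-satellite point propagates to the satellite:
    perpendicular offset r*sin(alpha) over the distance
    r + h - r*cos(alpha) from the satellite to the projection plane."""
    d = R_E * math.sin(alpha)               # offset from the optical axis
    L = R_E + h - R_E * math.cos(alpha)     # satellite-to-plane distance
    return math.atan2(d, L)

theta = beam_angle(math.radians(5.0))       # OT 5 deg from the nadir point
```

An OT directly below the satellite (alpha = 0) maps onto the optical axis, and this angle is the input substituted into (19) to obtain the spot position on the sensor.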

Proposed Method for Multi-Point Tracking System
In this section the proposed method for a multi-point tracking system is described. The method takes into account only static optical terminals that do not move relative to each other. It consists of image processing, optical ground station (OGS) recognition, assignment and OGS tracking. The system uses images obtained by a SWIR camera with a temperature-stabilized InGaAs detector, a 320 × 256 pixel resolution and 16-bit sample values. The spectral range of the camera is 900-1700 nm and the read noise is 110 electrons. The obtained images are extended by simulated optical spots. The core of the proposed system is a state machine (SM), which is shown in Fig. 8. The SM consists of the following states: -Init and config -for initialization of the system and camera; it can also be used for configuration of the system.
-Dark img -for obtaining dark current images which are used for image correction. Dark current images are sampled for different exposure times when the camera lens is obscured.
-Idle -default state of the SM.
-Spot search -this state is used for the initial spot detection in image.
-OGS assignment -here the spots from the Spot search state are recognized and assigned to the corresponding optical ground stations.
-Coarse tracking -this state is used to direct spots into segments where Fine tracking can be executed.
-Fine tracking -in this state the optic beams are in the sectors, where for example piezoelectric mirrors are located, then fine tracking and active communication with the OGS can be performed.
-Error -the SM enters this state when an error occurs, e.g. when it is unable to capture a new image from the camera.
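The states above can be sketched as a small state machine; the transition graph used below is an assumption for illustration (the actual graph is given by Fig. 8 in the article).

```python
from enum import Enum, auto

class State(Enum):
    INIT_AND_CONFIG = auto()
    DARK_IMG = auto()
    IDLE = auto()
    SPOT_SEARCH = auto()
    OGS_ASSIGNMENT = auto()
    COARSE_TRACKING = auto()
    FINE_TRACKING = auto()
    ERROR = auto()

# Allowed transitions (assumed; the exact graph is defined by Fig. 8)
TRANSITIONS = {
    State.INIT_AND_CONFIG: {State.DARK_IMG, State.ERROR},
    State.DARK_IMG: {State.IDLE, State.ERROR},
    State.IDLE: {State.SPOT_SEARCH, State.ERROR},
    State.SPOT_SEARCH: {State.OGS_ASSIGNMENT, State.ERROR},
    State.OGS_ASSIGNMENT: {State.COARSE_TRACKING, State.ERROR},
    State.COARSE_TRACKING: {State.FINE_TRACKING, State.SPOT_SEARCH, State.ERROR},
    State.FINE_TRACKING: {State.COARSE_TRACKING, State.ERROR},
    State.ERROR: {State.INIT_AND_CONFIG},
}

class TrackingSM:
    def __init__(self):
        self.state = State.INIT_AND_CONFIG

    def step(self, next_state):
        # Reject transitions that are not part of the graph
        if next_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {next_state}")
        self.state = next_state

sm = TrackingSM()
for s in (State.DARK_IMG, State.IDLE, State.SPOT_SEARCH,
          State.OGS_ASSIGNMENT, State.COARSE_TRACKING, State.FINE_TRACKING):
    sm.step(s)
```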

Image Preparation for Further Processing
In Spot search, Coarse tracking and Fine tracking states, correction of the obtained image is required before calculating spot coordinates. In Fig. 9 we show the results of the individual steps of the process. The obtained camera image in Fig. 9(a) is corrected by subtracting the dark current image in Fig. 9(b) (obtained by obscuring the camera lens). Afterwards, Gaussian blurring is applied to reduce noise in the image. The corrected image is shown in Fig. 9(c). Finally, a binary mask from the corrected image is created by using the intensity threshold. The result of the binary operation is shown in Fig. 9(d).
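The correction steps of Fig. 9 can be sketched as follows; the image size, kernel size, sigma and threshold are assumed illustration values.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def prepare_image(raw, dark, blur_sigma=1.0, threshold=50):
    """Correction before spot-coordinate calculation: 1) subtract the
    dark-current image, 2) Gaussian blur to reduce noise, 3) binary
    mask via an intensity threshold."""
    corrected = np.clip(raw.astype(float) - dark, 0, None)
    k = gaussian_kernel(5, blur_sigma)
    pad = np.pad(corrected, 2)                  # zero padding, 'same' size
    blurred = np.zeros_like(corrected)
    for i in range(corrected.shape[0]):
        for j in range(corrected.shape[1]):
            blurred[i, j] = np.sum(pad[i:i + 5, j:j + 5] * k)
    return blurred > threshold

raw = np.full((32, 32), 110.0)         # uniform background + camera offset
raw[15:18, 15:18] = 900.0              # one bright spot
dark = np.full((32, 32), 110.0)        # dark-current level
mask = prepare_image(raw, dark)
```

The resulting binary mask corresponds to Fig. 9(d): the background is suppressed by the dark-frame subtraction, and only a small blob around the spot survives the threshold.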

Spot Search and OGS Assignment
In the Spot search state, the simple summation of pixels in rows and columns is used to obtain the spot coordinates. After successfully finding at least one spot, the SM passes into the OGS assignment state. The assignment of spots to the corresponding OGS is performed by comparing spot distances with the reference ones. The reference distance between two points -Reference spot 1 (defined by the coordinate [x RS1 , y RS1 ]) and Reference spot 2 (defined by the coordinate [x RS2 , y RS2 ]) -is calculated as
d ref = √((x RS1 − x RS2 )^2 + (y RS1 − y RS2 )^2),
which is immune to rotation of the whole image. The principle of this method is shown in Fig. 10, where in the left part the reference distances of theoretically determined spots (spot 1, spot 2 and spot 3) are calculated. In the right part, the found spots (spot A, spot B and spot C) are shown. The distances between each of these spots are calculated and compared to the reference ones. As can be seen from this figure, the distances between spot C and spot B correspond to the distances between spot 1 and spot 3. For this method to be applicable, at least three spots must be available and the reference distances between the spots must not all be the same.
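The distance-based assignment can be sketched as a match between pairwise-distance matrices. The brute-force permutation search and the tolerance below are assumptions, suitable only for a small number of stations; the constellation coordinates are illustration values.

```python
import numpy as np
from itertools import permutations

def pairwise_distances(points):
    """Euclidean distances between every pair of spots; invariant to
    rotation (and translation) of the whole image."""
    pts = np.asarray(points, dtype=float)
    return np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

def assign_ogs(reference, found, tol=2.0):
    """Find the ordering of found spots whose pairwise-distance matrix
    best matches the reference one (brute force over permutations)."""
    ref_d = pairwise_distances(reference)
    best, best_err = None, np.inf
    for perm in permutations(range(len(found))):
        d = pairwise_distances([found[i] for i in perm])
        err = np.abs(d - ref_d).max()
        if err < best_err:
            best, best_err = perm, err
    return best if best_err <= tol else None

reference = [(50.0, 40.0), (120.0, 40.0), (90.0, 110.0)]   # spot 1..3
# Same constellation rotated by 30 degrees, shifted and shuffled
ang = np.deg2rad(30)
Rm = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
rotated = np.asarray(reference) @ Rm.T + np.array([15.0, -5.0])
found = [tuple(p) for p in rotated[[2, 0, 1]]]             # shuffled order
order = assign_ogs(reference, found)
```

Because the three reference distances are mutually different, the match is unique, which is exactly the applicability condition stated above.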

Coarse and Fine Tracking
To perform coarse and fine tracking, the reference coordinates of the optical terminals and the segments where fine tracking can be performed must be determined. These coordinates (green) and segments (red) are shown in Fig. 11. The new spot position is always calculated by the least squares method applied to the segment around the spot coordinate obtained from the previous sample. The intensity of each spot is recorded. From this, the scintillation index is calculated and used to determine the most stable spots/OGS. The scintillation index is defined as
σ I ^2 = ⟨I^2⟩ / ⟨I⟩^2 − 1,
where I is the received optical irradiance and ⟨·⟩ indicates averaging [17]. The most stable spots are then used to compensate for the deflection and tilt of the receiving optics.
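The scintillation-index computation can be sketched directly from its definition; the two synthetic irradiance series below are assumptions used only to show that the steadier spot yields the smaller index.

```python
import numpy as np

def scintillation_index(irradiance):
    """Scintillation index sigma_I^2 = <I^2>/<I>^2 - 1, where I is the
    received optical irradiance and <.> denotes time averaging."""
    I = np.asarray(irradiance, dtype=float)
    return np.mean(I**2) / np.mean(I)**2 - 1.0

# Two simulated spot-intensity records: a steadier one and a noisier one
rng = np.random.default_rng(1)
stable = 100 + rng.normal(0, 2, 5000)     # weak fluctuations
noisy = 100 + rng.normal(0, 20, 5000)     # strong fluctuations

si_stable = scintillation_index(stable)
si_noisy = scintillation_index(noisy)
```

Ranking the spots by this index selects the most stable OGS, which the system then uses to compensate the deflection and tilt of the receiving optics.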
In Fig. 12 a simulation of tracking for 15 optical terminals is shown. Red squares characterize segments where fine tracking and communication can be triggered. Green squares define segments where the least squares method for spot detection is used. The figure also shows the state of the SM, the spot/OGS with the best scintillation index and the calculated offset from the correct position.

Conclusion
In this article, methods for spot detection and edge detection were described and compared. Based on these comparisons, simple summation, the least squares method and Canny edge detection were used in the part dealing with the finite state machine for the multi-spot tracking system. Furthermore, a model for the projection of optical terminals onto the active surface of a detector on a satellite was presented and the minimum resolution angle was determined. The model is used for ground-satellite systems but can be extended to satellite-satellite systems. Next, the state machine for the multi-spot tracking system was presented and the individual states and image processing were described. Finally, a simulation for 15 spots was shown. Future work is to determine a model for spot movement in different orbits and to design and implement an algorithm for spot detection using the least squares method for more than one spot at a time.