Distributed control of mobile robots in an environment with static obstacles

Konstantinos Giannousakis, Electrical & Computer Engineering Department, University of Patras, Rio 26500, Greece. Email: giannousakis@ece.upatras.gr

Abstract: This study addresses the problem of deploying a group of mobile robots over a non-convex region with obstacles. Assuming that the robots are equipped with omnidirectional range sensors of a common radius, disjoint subsets of the sensed area are assigned to the robots. These proximity-based subsets are calculated using the visibility notion, where the cell of each robot is treated as an opaque obstacle for the other robots. Based on that, optimal spatially distributed coordination algorithms are derived for the area coverage problem and for the homing problem, where the swarm needs to move to specific locations. Experimental studies demonstrate the results.


| INTRODUCTION
The coordination of autonomous vehicles is a field that has seen enormous growth over the last few years. This is no surprise, as the idea of deploying a decentralised group of robots, working in collaboration by distributing information, is very enticing and has many applications. Some of the most popular tasks for swarm robotics are: moving while preserving a formation [1][2][3], exploration and mapping of an unknown region [4][5][6][7], and foraging [8][9][10], where the robots have to find and transport objects to specific locations.
One of the most basic and interesting tasks in collaborative robotic networks is area coverage. In area coverage, the robots are equipped with sensors and are assigned to sweep across a region of interest in a way that maximises their monitoring ability. Extensive work exists in the literature considering convex regions [11][12][13] as well as non-convex regions [14][15][16][17]. The definitive step in most of these methods is to determine which parts of the region are assigned to each robot, while there are also methods, such as the Virtual Forces [18], that rely only on virtual interactions between the nodes to evenly distribute them. In convex domains, the commonly used method for determining the territory of each robot is the Voronoi tessellation [11,12], whereas for non-convex domains, various methods have emerged.
One of these methods is the geodesic diagram, which uses the geodesic distance to create cells that follow the Voronoi or power diagram notion. The authors in [17,19] use the limited geodesic diagram based on the positions of the robots, to build a distributed control law that maximises the total assigned area. On the other hand, the authors in [20] use static points to generate the geodesic power diagram of the whole region and modify the weights depending on the capabilities of the robots to make the partitions equitable.
Another way to deal with the non-convexity of the region is to discretise it and use graph representation to denote the connections between the neighbouring discrete points. The authors in [21] use this concept to create a generalised partitioning method, while ensuring that the partitions are connected. They determine that the optimal partitioning is also a centroidal Voronoi partition. Another method of partitioning the whole discretised region is presented in [22], where the main concept is to divide the region into cells of equal area.
The method presented in this article uses the visibility notion to assign cells to the robots. The visibility-based approach is more realistic, as the responsibility of the robots is determined by the area that they can sense. Additionally, a priori knowledge of the region is not necessary, making the approach suitable for exploration as well. Naturally, using only the visible area means that the assigned cells are not a partition of the total region and, depending on the requirements, they may not even utilise the total sensed area. The authors in [23] tried to exploit the whole sensed area using a strictly proximity-based assignment of the visible region, with the drawback that the resulting cells consist of disjoint parts. Apart from violating the conditions of the extended Leibniz integral rule [24], which is required to determine the distributed control law, allowing a robot to move towards these disjoint parts means that the robot may step into the territory of another robot. This is at odds with the notion of cell assignment and does not guarantee collision avoidance between the agents. To avoid that, the authors in [16] added the restriction of connected cells by completely discarding the disjoint parts, without trying to assign them to a different node. Here, a novel visibility-based approach is utilised, which ensures connected cells while trying to make use of as much of the sensed area as possible.

(This work was not supported by any organization. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.)
Apart from area coverage, another useful task is the homing of the swarm, where the team of robots should acquire specific locations, and possibly orientations, in the region of interest. Homing is one of the subroutines in foraging [10], and it has also been examined for single robots [25,26], but not as an explicit task that considers the issues that surface in a multi-robot environment. These issues concern the optimal target assignment, the prevention of robot collisions, and the guarantee that no robot will block another. The scheme proposed in this article addresses these issues in a simple collaborative manner.
The focus of this article lies in the realistic application of the existing theory in practice. The region of interest is a general, non-convex, connected polygon with obstacles; thus, a visibility-based approach has been utilised to determine which parts correspond to the vicinity of each robot. This proximity-based cell assignment maintains connected cells that respect the visibility property, by treating the neighbouring cells as obstacles, while trying to maximise the total assigned area. The advantage of these cells is that every location of their interior is reachable in a straight line from the respective robot. This allows the robots to move freely in their cell and, at the same time, prevents collisions with other robots. Additionally, while most of the works in the literature consider dimensionless robots with simplified dynamics [13-15, 17, 19], this cell assignment enables us to directly utilise the motion controller of our previous work [27], to take into account the dimensions and the full model of the differential drive robot.
Furthermore, the notion of objective functions is employed to determine the optimal control inputs for the robots. An aggregate objective function [11] is defined for each problem, in such a way that maximising it implies optimal placement of the robotic agents. The distributed nature of the problem lies in dividing this aggregate objective function into individual subproblems, one for each robot; after iteratively optimising them, the aggregate objective function converges to a local maximum. Additionally, the distributed concept relies on requiring only local knowledge, such as the locations of spatial neighbours, to form the individual objective functions, so that the robots can move autonomously with minimal communication requirements.
For the area coverage problem, the same procedure found in the literature is followed, while utilising the aforementioned visibility-based cell assignment and relying on our previous work [27] for obstacle avoidance. The resulting optimal control scheme is then evaluated by experimental studies. Subsequently, the homing problem is examined for a swarm of robots, where a simple target assignment method is devised and used in conjunction with the visibility cells. After assigning the targets, the optimal movement direction for each robot is determined by minimising the distance to its target. This scheme is tested in experimental studies as well.
The rest of the article has the following structure. In Section 2, the main assumptions are defined along with the model of the robots and their motion controller. Additionally, the problems of area coverage and homing are introduced. Section 3 defines how the region of interest is split into visibility-based cells, which are then assigned to the robots. In Section 4, the area coverage problem is analysed and the optimal movement direction for the robots is formulated for the general case and then for some specific cases. In Section 5, the homing problem is examined, where the target assignment and movement direction of the robots are determined. Finally, experimental results are presented in Section 6, followed by some concluding remarks in Section 7. The most important symbols of the article are summarised in the Appendix.

| PROBLEM FORMULATION
Let Ω ⊂ R² be a compact and connected non-convex polygonal region with holes, which will be our domain of interest. In the region there are n ∈ Z₊ robotic agents, whose positions will be denoted as p_i ∈ Ω, i ∈ I_n, with I_n = {1, …, n}. Additionally, we define P = {p_1, …, p_n} as the set of all the distinct robot positions.
The agents are assumed to be non-holonomic mobile robots with differential drive, as shown in Figure 1. Concerning their dimensions, it is presumed that they are of a circular shape of radius R ∈ R*₊, centred at their rotation centre, and that they are small enough to traverse the narrowest parts of the region. Moreover, the robots are equipped with visibility-based omnidirectional range sensors, such as cameras or LIDARs, whose range is limited to a radius r_d > 2R around the centre of the robots. If we define the unlimited visibility space of each robot as

V_i = {q ∈ Ω : λq + (1 − λ)p_i ∈ Ω, ∀λ ∈ [0, 1]},

then the limited-range visibility space simply takes the form

V_{i,r} = V_i ∩ B(p_i, r_d),

where B(p_i, r_d) is the ball of radius r_d, centred at p_i.
Apart from the visibility space, we also need to define the distance of the visible obstacles around the robot, in terms of the visibility angle θ ∈ (−π, π]. To that end, we define the visibility field d_{f,i}(θ; W_i) as the distance ‖q_θ − p_i‖ from the robot to the boundary point q_θ ∈ ∂W_i in direction θ, where W_i ⊆ V_i is the part of the visibility space in which the robot is allowed to operate and ∂W_i is its boundary. The case ∄q_θ refers to the unobstructed visibility angles, where W_i is unbounded. Note that in the presence of obstacles, this distance is not continuous with respect to θ.

Even though the kinodynamic model of the robots is not required for the derivation of the control schemes, obstacle avoidance is employed implicitly by the motion controller of our previous work [27], which in turn relies on the model of the differential drive robot. Therefore, the position and the orientation of each robot are related to its linear and angular velocities through the kinodynamic equations

ṗ_i(t) = v_i(t) [cos θ_i(t)  sin θ_i(t)]^T,    θ̇_i(t) = ω_i(t),    (1)

where t ∈ R₊ is the time instance, p_i(t) is the location of its rotation centre, θ_i(t) ∈ (−π, π] is its orientation and v_i(t), ω_i(t) ∈ R are its linear and angular velocities, respectively. The kinematic constraints bound the velocities as v_min ≤ v_i(t) ≤ v_max and |ω_i(t)| ≤ ω_max, while the accelerations are bounded by the deceleration limits a_d, α_d for v_i(t)a_i(t) < 0 and ω_i(t)α_i(t) < 0, respectively, and by the acceleration limits a_a, α_a otherwise. Here, v_min ∈ R*₋, v_max ∈ R*₊ and ω_max ∈ R*₊ are the linear and angular velocity bounds, a_i(t), α_i(t) ∈ R are the linear and angular accelerations of each robot, while a_d, a_a ∈ R*₊ and α_d, α_a ∈ R*₊ are the linear and angular deceleration and acceleration limits. Finally, if v_lim ∈ R*₊ is the maximum wheel velocity supported by the motors and L ∈ R*₊ is the wheel-to-wheel distance, then the differential drive robot has the additional constraint |ω_i| ≤ 2(v_lim − |v_i|)/L between its velocities.
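To make the model concrete, the following minimal Python sketch implements an Euler step of the differential-drive kinematics in Eq. (1) and the wheel-coupling constraint; the function names and the discretisation are our own illustration, not part of the paper's controller.

```python
import math

def step_unicycle(x, y, theta, v, omega, dt):
    """One Euler step of the differential-drive (unicycle) kinematics:
    x' = v cos(theta), y' = v sin(theta), theta' = omega."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta = (theta + omega * dt + math.pi) % (2.0 * math.pi) - math.pi  # wrap angle
    return x, y, theta

def clamp_wheel_coupling(v, omega, v_lim, L):
    """Enforce the coupling |omega| <= 2 (v_lim - |v|) / L by clamping v to
    the wheel limit and scaling omega down when the pair is infeasible."""
    v = max(-v_lim, min(v_lim, v))
    omega_max = 2.0 * (v_lim - abs(v)) / L
    if abs(omega) > omega_max:
        omega = math.copysign(omega_max, omega)
    return v, omega
```

In practice the reference velocities would be passed through such a feasibility clamp before being sent to the internal velocity controller.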
The control input of each robot is assumed to be its reference linear and angular velocities. Thus, each robot has an internal velocity controller C_r that drives the motors, which naturally depends on all the robot constraints. The only elements that concern us about the internal velocity controller C_r are, firstly, that our motion controller should provide the reference velocities v_{i,r}(t), ω_{i,r}(t) ∈ R, and secondly, that the actual velocities are part of the state vector of the robot. Consequently, our motion controller, which runs with a time interval T_s ∈ R₊, is assumed to have access to the full state of the robot at each iteration k ∈ N₊. In combination with Eq. (1), the full state of the robot is defined as s_i = [p_i^T θ_i v_i ω_i]^T, where the time parameter t = kT_s is omitted for simplicity, and will also be omitted wherever it is not necessary. Here, a position error is allowed, but should be bounded in a disk of radius r_u ∈ R*₊. As aforementioned, the obstacle avoidance of the robots is handled by the motion controller of our previous work [27]. We use this controller, with slight modifications, to perform direction-based control of the robots, while considering their dimensions and the obstacles around them, along with their dynamics and constraints. The controller here takes the form

[v_{i,r}  ω_{i,r}]^T = M(s_i, u_i, W_i),    (2)

where W_i is the region in which the robot is allowed to move and u_i is the desired movement direction. Treating the bounds of W_i as the actual visible obstacles, the controller transforms the desired movement direction into the proper velocity inputs for the robot, v_{i,r}, ω_{i,r}. The implementation modifications are to select p_{i,t} = p_i + u_i as the final target, and to completely skip the computation of the shortest path, selecting instead the furthest allowed point in the given direction. Both the area coverage problem and the homing problem will be expressed in the form of an aggregate objective function.
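The controller M of [27] is not reproduced here; purely as a loose illustration of a direction-based velocity interface, the sketch below turns a desired direction u_i into reference velocities by steering toward u_i and scaling the forward speed with both the heading alignment and the remaining magnitude ‖u_i‖. The gain k_w and the saturation scheme are hypothetical.

```python
import math

def direction_to_velocities(theta_i, u, v_max, omega_max, k_w=2.0):
    """Turn a desired movement direction u (with meaningful magnitude) into
    reference velocities: rotate toward u, move forward only when roughly
    aligned, and slow down as the remaining distance ||u|| shrinks."""
    if u[0] == 0.0 and u[1] == 0.0:
        return 0.0, 0.0  # nothing to do
    err = math.atan2(u[1], u[0]) - theta_i
    err = (err + math.pi) % (2.0 * math.pi) - math.pi  # wrap heading error
    omega_r = max(-omega_max, min(omega_max, k_w * err))
    dist = math.hypot(u[0], u[1])
    v_r = v_max * max(0.0, math.cos(err)) * min(1.0, dist)
    return v_r, omega_r
```

Unlike this sketch, the actual controller additionally deforms the commanded direction around the visible obstacles that bound W_i.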
This function will be denoted by H(P(t)), as its value depends on the positions of the swarm. Its maximisation implies the optimal placement of the robotic agents; thus, the optimal control law for the robots is to simply select u_i = ∂H(P)/∂p_i. Therefore, the aim of the following sections is to determine this quantity for each robot independently.
In the area coverage problem, the objective function is formulated based on two components [12]: a density function ϕ : Ω → R₊, which is a measurable function representing the importance of the points in Ω, and a performance function f : R₊ → R, which is a non-increasing and piecewise differentiable function describing the sensing performance of the sensors. Naturally, the latter goes hand in hand with the selection of the distance function d : Ω × Ω → R₊, which expresses the distance between the points of the region and the robots, with respect to the capabilities of the sensors. Thus, the aggregate objective function to be maximised takes the form

H(P) = ∫_Ω max_{i∈I_n} f(d(q, p_i)) ϕ(q) dA,    (3)

where Ω ∋ q = [x y]^T are the points of the region under integration and dA = dx dy is the infinitesimal area at q.
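A brute-force way to evaluate such an objective numerically is to sample the region on a grid. The sketch below does this for an obstacle-free rectangle with ϕ = 1 and the Euclidean distance (a simplification of the paper's non-convex setting), with a user-chosen performance function f.

```python
import math

def coverage_objective(robots, width, height, f, step=0.05):
    """Midpoint-rule evaluation of H(P) = integral of max_i f(||q - p_i||)
    over an obstacle-free width x height rectangle, with phi = 1."""
    dA = step * step
    nx, ny = round(width / step), round(height / step)
    H = 0.0
    for ix in range(nx):
        for iy in range(ny):
            q = ((ix + 0.5) * step, (iy + 0.5) * step)  # cell midpoint
            H += max(f(math.hypot(q[0] - p[0], q[1] - p[1]))
                     for p in robots) * dA
    return H
```

For f(x) = −x² and a single robot, H is maximised at the centroid of the rectangle, matching the intuition behind the distance-based case discussed later.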
A problem that arose in the experimentation with multiple robots is the initialisation of their positions and orientations in the region of interest. To evaluate the behaviour of a distributed control scheme, such as the one for the area coverage problem, multiple experiments should be performed using the same initial configuration. Even though we examined position control of a single robot inside a region with obstacles in our previous work [27], the simultaneous positioning of multiple robots proved to be challenging. Thus, we examine it as a separate problem, which we refer to as the homing problem.
We define the homing problem for a group of robots, where the robots need to acquire specific locations in the region, but with no predefined assignment. Let z_j ∈ Ω, with j ∈ I_n, be the target locations and Z = {z_1, …, z_n} be the set of all the distinct target points. Even though it is not required, for simplicity we assume that these points are reachable by the robots, that is, their distance from the boundary of the region is at least R. Let g : P → Z be a bijective function expressing the robot-to-target assignment; then the objective function under maximisation is chosen as

H(P) = −∑_{i∈I_n} d_s(p_i, g(p_i)),    (4)

where d_s : Ω × Ω → R₊ is the length of the shortest path between two points in the region.
Even though in the area coverage problem the knowledge of the sensed area is sufficient for the robots to move, in the homing problem it is not enough. The computation of the shortest path [28] between the robots and their targets assumes a more comprehensive knowledge of the region. Clearly, a solution to that matter would be to use a Simultaneous Localisation and Mapping algorithm [29] to map the region and assume that all unexplored area is free, but this is out of the scope of this article, which emphasises the assignment of the targets and the optimal control law to solve the aggregate problem. For that reason, the region is presumed to be known and, additionally, the visibility of the robots is not limited, or equivalently, r_d = ∞.

| VISIBILITY-BASED CELL ASSIGNMENT
A visibility-based approach is taken to properly deal with the proximity of the region points. First of all, we define the visibility distance function as

d_V(q, p_i) = ‖q − p_i‖ if q ∈ V_i, and d_V(q, p_i) = ∞ otherwise.

Based on that, we define the relative visibility distance function as

d_RV(q; p_i, P′) = ‖q − p_i‖ if d_V(q_λ, p_i) ≤ d_RV(q_λ; p_j, P′\{p_i}), ∀λ ∈ [0, 1], ∀p_j ∈ P′\{p_i}, and ∞ otherwise,

where P′ ⊆ P and q_λ = λq + (1 − λ)p_i. The corresponding relative visibility cell of robot i is V_{R,i}(P) = {q ∈ Ω : d_RV(q; p_i, P) < ∞}. The distance d_RV(q; p_i, P) and the cell V_{R,i}(P) are inseparable by definition and they are characterised by a recursion. For a point q ∈ Ω to be part of V_{R,i}(P), firstly it has to be part of V_i, or equivalently, q_λ ∈ Ω, ∀λ ∈ [0, 1]. Moreover, d_V(q_λ, p_i) ≤ d_RV(q_λ; p_j, P\{p_i}) requires that all points in the line segment from p_i to q are closer to robot i. This will filter out any disjoint parts, ensuring that the visibility property is satisfied for the cell of each robot, where a line segment from the robot to the boundary of the cell should be part of its interior. Furthermore, the recursion d_RV(q_λ; p_j, P\{p_i}) can be perceived as allowing the other nodes to form their cells in the absence of node p_i and then determining which parts of these cells actually belong to node p_i. This allows the selection of points filtered out from other nodes, where the simple inequality d_V(q, p_i) ≤ d_V(q, p_j) does not hold, as the cell of node p_j may already be limited by the presence of another node.
An example of the relative visibility cell assignment is shown in Figure 2b. By comparing it with the visibility shown in Figure 2a and the simple Euclidean Voronoi diagram shown in Figure 2c, one can see the existence of points in V_{R,1} where d_V(q, p_1) > d_V(q, p_3). Additionally, it is clear that, in general, this cell assignment is not a partition of the total visible space. All the parts that would violate the visibility property have been discarded, forming a cell where all points in the line segment from its boundary to the robot are part of its interior. This cell is ideal for our motion controller, M, which utilises the visibility property to incorporate obstacle avoidance. Moreover, it is worth mentioning that for convex regions, this cell assignment is identical to the Voronoi partitioning. Regarding the region, an interesting peculiarity that arises from the existence of holes is that two nodes may be neighbours through more than one path. This imposes a difficulty in expressing the neighbouring information with graph theory [12].
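The recursive d_RV is awkward to state compactly in code; the grid sketch below implements only its non-recursive core, with a single axis-aligned box obstacle and sampled line-of-sight checks, and uses the plain condition d_V(q_λ, p_i) ≤ d_V(q_λ, p_j) in place of the recursion. Unlike the paper's d_RV, this simplification cannot reclaim points that other cells have already discarded.

```python
import math

def visible(p, q, box):
    """Sampled line-of-sight check against one axis-aligned box obstacle
    given as (xmin, ymin, xmax, ymax); crude, but enough for a grid sketch."""
    for k in range(51):
        t = k / 50
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if box[0] < x < box[2] and box[1] < y < box[3]:
            return False
    return True

def cell_label(q, robots, box):
    """Assign point q to robot i only if q is visible from p_i and every
    sampled point q_lambda of the segment p_i -> q is weakly closest to p_i."""
    def d(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    for i, p in enumerate(robots):
        if not visible(p, q, box):
            continue  # q is not in the visibility space V_i
        ok = True
        for k in range(51):
            t = k / 50
            ql = (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
            # another robot that sees q_lambda and is strictly closer wins
            if any(j != i and visible(pj, ql, box) and d(ql, pj) < d(ql, p)
                   for j, pj in enumerate(robots)):
                ok = False
                break
        if ok:
            return i
    return None  # occluded or discarded: the point belongs to no cell
```

Evaluating cell_label over a grid reproduces the qualitative picture of Figure 2b: cells are connected, star-shaped with respect to their robot, and points behind the obstacle relative to every robot remain unassigned.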

| AREA COVERAGE
To exploit the relative visibility cells in the coverage problem, we will utilise the respective distance function d_LRV in the objective function (3). Given that f is non-increasing, and by the definition of V_{R,i}(P) and thus V_{R,i,r}(P), the aggregate objective function can be decoupled [11] and assumes the form

H(P) = ∑_{i∈I_n} ∫_{V_{R,i,r}(P)} f(‖q − p_i‖) ϕ(q) dA.

Since V_{R,i,r}(P) is a connected region, we can apply the extended Leibniz integral rule [24,30] to find out how the objective function is modified with respect to changes of the p_i coordinates:

∂H/∂p_i = ∫_{V_{R,i,r}(P)} (∂/∂p_i) f(‖q − p_i‖) ϕ(q) dA + ∮_{∂V_{R,i,r}(P)} (∂q/∂p_i · n̂(q)) f(‖q − p_i‖) ϕ(q) dl.

The notation ∂V_{R,i,r} refers to the boundary of the cell, dl is the infinitesimal length and n̂(q) is the outward normal vector to the curve at the point under integration q, while ∂q/∂p_i · n̂(q) is the velocity of the curve at that point in the direction n̂(q). Finally, C^g_{i,j} = ∂V_{R,i}(P) ∩ ∂V_{R,j}(P) are the points where the cell of node j meets the cell of node i.
To compute the aforementioned equation, it is required to express the velocity of the curves at their various parts. For that purpose, the boundary of each cell is broken down into four groups of disjoint curves, which are analysed separately:

• C^o_i = ∂V_{R,i} ∩ ∂Ω: the visible boundary of the actual obstacles. These points are static, thus their velocity is zero.
• C^a_i: the free arcs, that is, the parts of the sensing circle of radius r_d that bound the cell and move together with the robot.
• C^g_{i,j}: the common line segments between the cells of nodes i, j ∈ I_n, i ≠ j. Obviously, C^g_{i,j} = ∅ when the nodes are not neighbours. Also note that two nodes may share more than one bounding line segment, due to the presence of holes.
• C^ℓ_i: line segments of obscured areas, best described as discontinuities in the field of view. The n_{ℓ,i} ∈ Z₊ critical field-of-view angles are the θ_k ∈ (−π, π] with d⁻_{f,i}(θ_k; V_{R,i,r}(P)) ≠ d⁺_{f,i}(θ_k; V_{R,i,r}(P)), k ∈ {1, …, n_{ℓ,i}}. Each such segment can be parameterised between its endpoints a_{i,k} and b_{i,k} as q = a_{i,k} + λ(b_{i,k} − a_{i,k}), λ ∈ [0, 1], in which case [15] we have dl = dλ‖b_{i,k} − a_{i,k}‖.

Now it is possible to examine specific cases based on the selection of the performance function f.
Area problem, f_a: in this case, the objective function is maximised when the area of the cells is maximised, and the gradient follows from Eqs. (7) and (8). Distance problem, f_d: this case normally goes along with r_d = ∞, and the resulting control law is based on Lloyd's algorithm [31], where the agents are inclined to become evenly spaced by moving towards the centroid of their cell. Here we assume that f_d(x) is not defined for x = ∞; thus, the robot cannot assume responsibility for these points. Mixed problem, f_m: combining the above results [11] yields the control law for the mixed problem.
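For the distance case on a convex region, where the cells reduce to Voronoi cells, one Lloyd iteration can be sketched on a discretised, obstacle-free rectangle as follows; the grid resolution and the gain are illustrative parameters, not values from the paper.

```python
import math

def lloyd_step(robots, width, height, step=0.05, gain=1.0):
    """One discretised Lloyd iteration with phi = 1: each robot moves
    (scaled by `gain`) toward the centroid of its Voronoi cell."""
    sums = [[0.0, 0.0, 0] for _ in robots]
    nx, ny = round(width / step), round(height / step)
    for ix in range(nx):
        for iy in range(ny):
            q = ((ix + 0.5) * step, (iy + 0.5) * step)
            # the nearest robot owns this grid cell (Voronoi assignment)
            i = min(range(len(robots)),
                    key=lambda k: math.hypot(q[0] - robots[k][0],
                                             q[1] - robots[k][1]))
            sums[i][0] += q[0]
            sums[i][1] += q[1]
            sums[i][2] += 1
    out = []
    for (sx, sy, n), p in zip(sums, robots):
        if n == 0:
            out.append(p)  # empty cell: stay put
        else:
            cx, cy = sx / n, sy / n  # cell centroid
            out.append((p[0] + gain * (cx - p[0]), p[1] + gain * (cy - p[1])))
    return out
```

Iterating lloyd_step spreads the agents toward a centroidal Voronoi configuration, which is the behaviour the f_d term contributes in the mixed problem.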

GIANNOUSAKIS AND TZES
In the experimental studies, we will examine both the area and the mixed problems. While the results are very close in terms of the covered area, using f_a alone may result in a swarm that is not uniformly swept across the region. This may appear as unbalanced monitoring; thus, it is not preferable. In the mixed problem, this issue is countered by the f_d term, while at the same time, the f_a term ensures that enough importance is given to the increase of the covered area.

| HOMING
The objective function for the homing problem, Eq. (4), seems already decoupled as long as the assignment function g is defined. But before jumping to the definition, there are some things to be considered. First of all, the robots cannot blindly head to their assigned targets; their movements should be restricted to avoid collisions with other robots. Additionally, the target assignment cannot be random, especially in regions with narrow paths. As shown in Figure 3a, a robot that has already reached its target may block the path of another.
To deal with collision avoidance, we will use the relative visibility cell assignment. Limiting the movements of each robot to its respective cell, V_{R,i}, eliminates any possibility of collision. Considering the assignment of the targets, to avoid the aforementioned problem of a robot blocking another, it should be carried out in favour of the robots that are farther from the targets. Additionally, targets that are already in the respective cell of a robot should be considered better candidates, to avoid decision fluctuations, where the target assignment would change from one time instance to the next.
Firstly, we define the shortest path [28] connecting the points q_s, q_e ∈ Ω as the series of points S(q_s, q_e) = [q_0, …, q_{n_s}], n_s ≥ 1. The individual points of the path will be referred to as S_k(q_s, q_e), with k ∈ [0, n_s]. Considering these points, we have S_0(q_s, q_e) = q_s and S_{n_s}(q_s, q_e) = q_e, whereas all the other points S_k(q_s, q_e) ∈ ∂Ω, k ∈ [1, n_s − 1], are reflex vertices. Subsequently, the distance of the shortest path can be written as

d_s(q_s, q_e) = ∑_{k=1}^{n_s} ‖S_k(q_s, q_e) − S_{k−1}(q_s, q_e)‖.

Considering the requirements explained above, the distance of the shortest path does not suffice as an assignment criterion; we need a distance that also expresses the proximity of the target to the cell of the robot. Thus, we choose a distance function d_g, which simply expresses the length of the shortest path minus the part of the path that lies in the cell of the robot. The proposed assignment function is given in Algorithm 1.
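Given a shortest path as a list of points [q_0, …, q_{n_s}], the distances d_s and d_g can be sketched directly from the definitions above; here the in-cell portion of the path is assumed to be precomputed, since its exact computation depends on the cell geometry.

```python
import math

def path_length(path):
    """d_s: total length of a shortest path given as [q_0, ..., q_ns]."""
    return sum(math.hypot(b[0] - a[0], b[1] - a[1])
               for a, b in zip(path, path[1:]))

def assignment_distance(path, in_cell_length):
    """d_g sketch: path length minus the (precomputed) portion of the path
    that lies inside the robot's relative-visibility cell."""
    return path_length(path) - in_cell_length
```

A target already inside the cell thus scores a smaller d_g than an equally distant target outside it, which is exactly the bias the assignment criterion requires.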
As is also shown in the respective experiment, the assignment may change later, depending on the actual movement of the robots. This is due to the fact that the physical robots move independently and do not have a constant velocity; additionally, they should keep a distance from the obstacles. Thus, the distances from their targets change at different rates. In either case, the fluctuations are minimised by the subtraction of the in-cell part of the path in d_g. In our experimentation, the robots always reached the desired configuration, with no significant changes from the initial assignment.
To explain the methodology, we will use the simple example of Figure 3b. After the cell assignment, the distance function d_g is evaluated for all combinations of robots and targets, as summarised in Table 1, in arbitrary units.
Since robot p_1 is the farthest from all the targets, it takes priority in the assignment and is the first to choose its closest target in terms of d_g, so g(p_1) = z_1. In the next iteration, p_1 and z_1 are not considered at all and the same procedure is followed with the remaining robots and targets. In our case, the iteration simply terminates with g(p_2) = z_2. Another example is given in Figure 3c, where the assigned targets are marked with the colour of the respective robot.
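The iteration described above, where the robot that is worst off picks first and takes its closest remaining target, can be sketched greedily from a precomputed matrix of d_g values. This is an illustration of the described procedure, not a verbatim transcription of Algorithm 1.

```python
def assign_targets(dist):
    """Greedy homing assignment: each round, the unassigned robot whose
    closest remaining target is farthest away picks first and takes that
    target. dist[i][j] holds the d_g value between robot i and target j."""
    robots = set(range(len(dist)))
    targets = set(range(len(dist)))
    g = {}
    while robots:
        # priority to the robot that is worst off
        i = max(robots, key=lambda r: min(dist[r][t] for t in targets))
        j = min(targets, key=lambda t: dist[i][t])  # its closest target
        g[i] = j
        robots.remove(i)
        targets.remove(j)
    return g
```

For the two-robot example of Figure 3b, the robot with the larger minimum distance is served first, reproducing g(p_1) = z_1 and g(p_2) = z_2.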
Having solved the target assignment problem, it is now possible to determine the control inputs for the motion controller. Even though each robot should re-evaluate its assigned target at each iteration, for simplicity we assume that g(p_i) = z_j; therefore, using Eq. (4), the optimal movement direction points towards the next point of the shortest path, S_1(p_i, z_j). Hence, as expected, the optimal way for a robot to minimise the distance to its target is to head towards the next point of the shortest path. An important aspect achieved by the selection of the objective function (4) is that ‖u_i‖ = d_s(p_i, z_j). This is desirable because it informs the controller, M, that the robot should not stop until this distance is minimised. Here we exploit the capability of the utilised motion controller, where the desired movement direction will be adapted to achieve obstacle avoidance, depending on how much further the robot must go. This results in a smoother and faster trajectory than instructing the robot to explicitly approach the next point of the path by using (S_1(p_i, z_j) − p_i) as the desired direction. In this way, the robot will properly traverse around the intermediate points of the path and will not stop until the final target has been reached.
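The resulting control input can be sketched as a vector pointing to the next path point S_1, scaled to the full remaining path length d_s; the sketch assumes the path list starts at the robot's current position.

```python
import math

def homing_direction(p, path):
    """Desired input u_i: direction of the next path point S_1, scaled to
    the full remaining path length d_s so the controller knows how far
    the robot still has to travel."""
    d_s = sum(math.hypot(b[0] - a[0], b[1] - a[1])
              for a, b in zip(path, path[1:]))
    s1 = path[1]  # next point of the shortest path
    dx, dy = s1[0] - p[0], s1[1] - p[1]
    norm = math.hypot(dx, dy)
    if norm == 0.0 or d_s == 0.0:
        return (0.0, 0.0)  # already at the target
    return (d_s * dx / norm, d_s * dy / norm)
```

Because ‖u_i‖ equals the remaining path length rather than the distance to S_1, the arrival test ‖u_i‖ ≤ 2r_u only triggers at the final target, not at the intermediate reflex vertices.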
For completeness, and without any changes to the methodology in our experiments, we will incorporate a desirable orientation θ_j that accompanies each target point z_j. This is a feature of the utilised motion controller and can be expressed as an extra parameter in Eq. (2). As a result, when the controller assumes that the robot has reached its destination, or equivalently ‖u_i‖ ≤ 2r_u, it switches to orientation control, trying to match the desired orientation θ_j.

Algorithm 1: Assignment function for the homing problem.

| EXPERIMENTAL RESULTS
The configuration and the region of our experiments are the same as in [27]. All experiments are performed by deploying n = 6 mobile robots; the robots are AmigoBots from ActivMedia, as the one shown in Figure 4. Considering their dimensions, they can be bounded in a radius R = 0.17 m around their rotation centre and have a wheel-to-wheel distance L = 0.279 m, with the maximum wheel velocity supported by the motors bounded by v_lim.

TABLE 1: Evaluation of the distance function for the example of Figure 3b.

Considering localisation, it was implemented by utilising both an external camera system and the encoders at the wheels of the robots. The external camera system consists of two cameras on the ceiling, whose image frames are processed by the augmented-reality library ArUco [32] to retrieve the position and orientation of the robots in the known region of interest. Additionally, the robots calculate their wheel velocities and their displacement using their equipped encoders, which are then combined with the camera measurements for a more complete and accurate localisation. The localisation error is bounded in a radius r_u ≈ 0.015 m, while the sampling time interval in our experiments is T_s ≈ 160 ms. Furthermore, we selected l_s = R/4 as the safe distance that the robots should maintain from the boundary when possible, which is an extra parameter of the utilised motion controller.

| Case studies on the area coverage problem
The area coverage problem was examined in three cases. In all cases, a uniform importance density ϕ = 1 was assumed for the region, and the initial position and orientation of the robots were the same. The first case utilises the simplest performance function, f_a, which is the one expressing the area problem. Additionally, a sensing radius of r_d = 0.8 m was assumed. The results of the experiment are depicted in Figure 5, whereas Figure 6 shows the motion of the robots throughout the experiment. The total area that the swarm can theoretically cover is nπr_d² ≈ 12 m², which is larger than the area of the region, area(Ω) ≈ 8.7 m². Naturally, it is impossible to cover the entire region, due to its non-convexity. Therefore, the swarm reached a local maximum of H_a, where the assigned area was 5.4 m², approximately 62.14% of the total region, in almost 40 s, as shown in the coverage progress in Figure 7. The second case demonstrates the mixed problem, using the performance function f_m, and again a sensing radius of r_d = 0.8 m. The results of the experiment are shown in Figure 8 and the robot motion in Figure 9. The final configuration, the attained local maximum of H_m, was also reached in almost 40 s, and the assigned area was again 5.4 m², or 62.14% of the region. As can be seen in Figure 7, the behaviour is very close to that of the previous case, but a little slower. On the other hand, comparing the final configurations, the robots in this case are better swept across the region. It is also interesting to notice that there is a discrepancy in which robot reached each part of the region. This happened because even robots with restricted cells are inclined to move towards the centroid of their cell, while in the previous case the robots are reluctant to move until they have unobstructed visibility parts in their assigned cell, given that the gradient of the free arcs, G^c_{a,i}, is the decisive factor of the control law.
Finally, the last case shows the mixed problem again, but this time with a smaller sensing radius of r_d = 0.6 m. The results are illustrated in Figure 10 and the robot motion in Figure 11. In this case, the total area that the swarm can theoretically cover is nπr_d² ≈ 6.8 m², which is approximately 78% of the area of the region. The team achieved a coverage of 40% in 40 s and stabilised at almost 50% coverage of the region in another 30 s. Comparing the final configuration with those of the previous cases, it is interesting to notice that it is more similar to that of the area problem, f_a, rather than that of the mixed problem with the larger sensing radius. The reason is that a smaller sensing radius means smaller cells, which in turn lead to slower movements and behaviour similar to the restricted robots in the area problem.

| Case study on the homing problem
In the homing problem, the knowledge of the region is taken for granted; thus, the notion of a sensing radius has no significant meaning, and so we assume r_d = ∞. In all the previous experiments, the homing algorithm was performed for the swarm to acquire its initial configuration. Therefore, the experiment presented here uses as target positions and orientations for the swarm the initial configuration of the coverage experiments, and as a starting point the final configuration of the experiment with f_m and r_d = 0.8 m.
The results are shown in Figure 12 and the motion of the robots throughout the experiment in Figure 13. The swarm reached the desired configuration in 39 s. The trajectories were, naturally, much smoother than in the coverage experiments, as the robots had a specific final location to reach and not just a varying direction. This fact is also the reason why certain robots reached different positions than the ones of the initial configuration of the coverage scheme. An interesting turn of events, barely observable in the trajectories, is that robot 1 initially settled on target z_2, but when robot 2 approached, avoiding the obstacles on its way to target z_1, the targets swapped, leading to the presented final configuration.

| CONCLUSIONS
This article studied the collaborative control of mobile robots in a non-convex region with obstacles. In contrast to other works, the dimensions and the full model of the differential drive robot were taken into account by utilising the motion controller of our previous work. Assuming line-of-sight sensors equipped on the robots, a visibility-based approach, slightly different from what is found in the literature, was devised to assign parts of the region to the robots. The problems of area coverage and homing were properly expressed by aggregate objective functions, and the optimal movement direction for each robot was successfully determined in a distributed manner.