Visual application of navigation framework in cyber-physical system for mobile robot to prevent disease

In this article, we propose the visual application of a navigation framework for a wheeled robot to disinfect surfaces. Since dynamic environments are complicated, advanced sensors are integrated into the hardware platform to enhance the navigation task. The 2D lidar UTM-30LX from Hokuyo attached to the front of the robot can cover a wide scanning area. To provide better results in laser scan matching, an inertial measurement unit was integrated into the robot’s body. The output of this combination feeds into a global costmap for monitoring and navigation. Additionally, incremental encoders that obtain high-resolution position data are connected to the rear wheels. The role of the positioning sensor is to identify the current location of the robot in a local costmap. To detect the appearance of a human, a Kinect digital camera is fixed to the top of the robot. All feedback signals are combined in the host computer to navigate the autonomous robot. For disinfection missions, the robot must carry several ultraviolet lamps to autonomously patrol unknown environments. To visualize the robot’s effectiveness, our approach was validated using both a virtual simulation and an experimental test. The contributions of this work are summarized as follows: (i) a structure for ultraviolet-based hardware was first established; (ii) theoretical computations for the robot’s localization in the 3D workspace were derived, laying a foundation for further developments; and (iii) data fusion from advanced sensing devices was integrated to enable navigation in uncertain environments.


Introduction
In recent times, infectious disease has impacted every place on Earth, including Asia, Europe, and Africa. Most regular activities, such as commercial exchange in increasingly interconnected economies, have been interrupted globally. Although there have been significant efforts to battle this virus, such as breaking the chains of virus transmission and reducing the infection rate, to date we neither fully understand the original root of the virus nor the mechanisms of its mutation.
Hence, to protect public health, direct interactions between people should be restricted as much as possible. However, it is impossible to prohibit social relations in a modern society. As a result, an intermediate smart robot represents an excellent solution for delivering medications and food, measuring human health, and supporting people's psychological lives. In waves of infectious disease, the potential applications of robotic systems widen considerably.
In real-world applications, the presence of robots around us is not strange due to Industrial Revolution 4.0, which involves smart sensing devices, wide-band communication, connected multi-agents, and visual computation in complicated systems. In most relevant studies, smart sensors and smart manufacturing processes were able to facilitate monitoring and supervision of the entire production line, in addition to making their own decisions. Nevertheless, the problems are completely different in medical applications. To adapt to rapid infection, it is necessary to establish an intelligent supply chain for medical disposables and equipment so that patients can access essential medical items in time. 1,2 The pandemic has also affected manufacturing and the economy throughout the world. This reality emphasizes the need for more research into remotely operated machines and autonomous systems that can operate far from operators and make decisions by themselves. To respond to these requirements, there are extensive developments and opportunities to be explored and integrated in robotics. In the case of clinical care, some fields of specific importance, including disease prevention, diagnosis and screening, patient care, and disease management, need greater investment.
In this context, an autonomous robot must navigate by itself in an unknown environment. For several decades, various navigation strategies have been commonplace. These strategies can be classified into two subcategories: classical approaches and reactive ones. During the period when artificial intelligence schemes were not commonly studied, classical methods were very popular for solving navigation problems for mobile robots. These methods included cell decomposition, 3,4 roadmaps, [5][6][7] and artificial potential fields. 8,9 Their major disadvantages are their high computational cost and their difficulty adapting to unexpected occurrences in the working environment, which make them difficult to implement in real-time applications. In the second group, advanced algorithms such as particle swarm optimization, 10 artificial bee colony, 11 cuckoo search, 12 the firefly algorithm, 13 and data fusion 14 have been suggested for mobile robot navigation over conventional methods. These methods have the potential to deal with uncertainty when the robot moves alone. Today, most researchers prefer reactive approaches because of their powerful ability to fuse various strategies or data to enhance autonomous characteristics. Requiring less computational effort, these approaches promise further development in future studies.

Literature reviews
Generally speaking, there is still room to develop robotic systems to combat COVID-19 in areas such as telesurgery robots, diagnostic testing of COVID symptoms, personal care robots, and disease prevention. For the first kind of robot, teleoperation is a mature technology that can be used for both telemedicine and telecommuting. This technology can be used regardless of the patient's known or suspected severe acute respiratory syndrome coronavirus status, and surgery can be performed even in an epicenter of the COVID-19 pandemic. Robot-assisted surgery 15 could help decrease the length of a patient's hospital stay and protect the surgical team at the patient's bedside. This technology potentially lessens not only contamination of the surgical area with body fluids and surgical gasses but also the number of directly exposed medical staff. However, some questions remain, as surgery robots may not be sufficient for all cases of COVID-19 or for unknown future diseases that also become widespread. Furthermore, the obligatory constraints on facilities and the operating skills required for a surgical robot could become issues if such a robot is deployed in poor nations or developing countries. In the near future, with considerable enhancements in 5G bandwidth, remote communication with surgery robots 16,17 will become faster and more stable. In the areas of medicine delivery, health-care services, and daily consultation, robot assistant care 18 is an excellent method for maintaining medical order in a hazardous environment. In this way, the health and wellness of hospital staff can be preserved in the fight against COVID-19. During a long period of treatment, the absence of social interaction for the patient and the intelligence of the machinery should be discussed more thoroughly. For the initial diagnostic testing of COVID-19, most experts suggest gathering and examining nasopharyngeal and oropharyngeal swabs.
When an outbreak occurs, a vital problem is a lack of qualified medical staff to swab patients and process test samples. The greatest values of robots in clinical application 19,20 are their ability to provide noncontact detection and remote sampling in order to minimize risk. However, there is no discussion on isolating mutual infections among a large number of suspected patients, and medical assistant persons are still needed to detect pathogens in isolation sleeves.
For robots classified under diagnosis and screening, the use of a wheeled platform to measure temperature or recognize disease symptoms could be a practical application in public places, such as at the entryways of buildings and offices. Commonly, an automated camera system is utilized to screen multiple people in a large area. In the literature, 21,22 several developers introduced models for surveillance robots in order to promote social distancing in complex urban areas and monitor the body temperatures of people in crowded groups. Incorporating thermal sensors and visual computation schemes into distant mobile robots could increase the efficiency and coverage of screening. Nonetheless, the designs of these robots are still very simple while real-world scenarios are complex. Intelligent algorithms must be integrated into such robots to predict human intentions. Currently, although robot models are small, they can still cause confusion or unease when approaching humans.
For practical implementation, a friendly service robot is one of the best ideas for human-oriented design. Given the issues of an ageing population and the busy lifestyles of youth in many countries, there is a need to utilize intelligent robotic systems and autonomous machines for this purpose. Most notably, elderly people need their mental health taken care of if they remain in isolation. In the literature, 23-25 a companion robot was shown to have the potential to mitigate feelings of loneliness by building different types of supportive relationships. Initially, the operationalization and measurement of loneliness and an impact analysis of the companion robot were undertaken. However, existing limitations include the significant need to improve the robot. Moreover, the level of autonomy and the proactive interaction model were investigated only superficially. To enhance its interactive effects, the robot needs to converse deeply with elderly patients by integrating artificial intelligence algorithms. For the well-being of humans, the socially assistive robot plays a role in addressing the secondary impacts of the global pandemic. 26 Usually, most researchers focus on the primary aspects of robotic applications, such as monitoring and reducing loneliness when a human remains in isolation. On the other hand, the secondary influences on distance learning, job searches, and vocational training need to be explored. Interdisciplinary robotics investigations should be considered to establish a foundation for fighting against COVID-19.
The most important factor in evaluating the quality of a hotel is human resources. When making travel decisions, people often compare several options for accommodations with different attributes. In the tourism industry, the attitude of staff can exert an influence on a customer's pleasure and help avoid inducing any distress or anxiety. 27,28 A change in customers' consciousness toward accepting the robot's presence would considerably encourage many industrial developers. For disease prevention, robot-controlled touchless ultraviolet (UV) light is being employed for disinfection because COVID-19 can exist on contaminated surfaces due to respiratory droplet transfer. 29 Before the outbreak, disinfection by UV lamp was already considered an efficient touchless solution for the terminal disinfection of rooms. 30 During the global pandemic, COVID-19 can remain on inanimate surfaces, including metal, glass, and plastic, for days. The use of a wheeled robot with a UV light device has thus become increasingly common to reduce contamination on high-touch surfaces in offices, hotels, public places, and hospitals. 31 Instead of manual disinfection, which involves many operators and increases exposure risk, autonomous wall-following disinfection robots represent a cost-efficient, rapid, and highly effective method. 32 However, in uncertain environments that might contain unexpected obstacles, wall-less working spaces, or unseen areas, more studies are needed to develop autonomous UV-based disinfection robots. Table 1 summarizes studies on robotic applications related to the global pandemic. The contributions of our research are as follows. First, the development of a hardware platform for UV-based application is outlined. Second, a mathematical expression for theoretical localization in a three-dimensional workspace is described to compute the location of the robot. Third, a data fusion technique is developed to autonomously drive the robot in an unknown environment.

Proposed approach
Disinfectant robots are a recent technology used to deactivate micro-organisms but require mastery of a set of techniques including mechanics, electronics, navigation, and programming, of which vision-based driving is the most significant application. Since this technology works in an unknown environment, the autonomous disinfection robot must first build a global map. Then, the robot can robustly navigate and orient itself to find its final destination without any collisions.

Investigation of UV power sources
Previous studies indicated that an ultraviolet beam (UVB) can effectively destroy various micro-organisms, including COVID-19. UV disinfection technology uses either mercury bulb devices or pulsed xenon bulb devices. 29 In both cases, objects and surfaces in the direct line of sight are decontaminated by UVB more successfully than objects in other areas. It is widely recognized that UVB can be modeled as a set of continuously radiant point sources, as shown in Figure 1(a). Each point source spreads a radiant power P_i, computed as the ratio of the radiant power of the lamp to the total number of point sources. The radiation of point source P_i is a scalar value. Hence, the intensity of UVB at any point A at a relative distance r from the point source represents a point of UV energy on the sphere centered on the point source, as shown in Figure 1(b). The intensity E_A produced by P_i at point A is computed as

E_A = \frac{P_i}{4\pi r^2}\, T \quad (1)

where P_i is the radiant power of point source i, T is the coefficient of radiation absorption of light through matter, and r is the distance of the radiant light from the point source to the receiving point. The UV intensity around the point source can thus be precisely determined from the distance between the point source and the receiving point and from the absorption coefficient of the UV transmission medium. As shown in Figure 2(a), the intensity at any receiving point in the radiation region is the sum of the intensity contributions from all point sources in the system:

E = \sum_{i=1}^{n} \frac{P_i}{4\pi r_i^2}\, T \quad (2)

where r_i is the traveling distance radiating from the ith point source to the receiving point, and n is the total number of point sources. If many UV lamps are used, the UV intensity at any one point equals the sum of the intensities of each lamp at that point. From the results of surveying UV intensity at any point using the multi-point source method, we can determine the UV intensity when using multiple lamps:

E = \sum_{i=1}^{n} \frac{N P}{4\pi r_i^2}\, T \quad (3)

where P is the output power of one UV lamp and N is the number of lamps.

For the UV disinfection system shown in Figure 2(b), the UV lamp is situated in a crystal tube. The total coefficient of radiation absorption 33,34 is then calculated as

T = e^{-\sigma_q t_q} \quad (4)

where \sigma_q is the absorption coefficient of the crystal material and t_q is the thickness of the crystal tube. Consequently, equation (1) becomes

E = \sum_{i=1}^{n} \frac{P_i}{4\pi r_i^2}\, e^{-\sigma_q t_q} \quad (5)

Combining this intensity with the exposure duration, we obtain the delivered UV dose

D = E\, t = \sum_{i=1}^{n} \frac{P_i}{4\pi r_i^2}\, e^{-\sigma_q t_q}\, t \quad (6)

where t is the sterilization time.

Table 1. Summary of robotic applications related to the global pandemic.

Robot assistant care (RAC). 18
Description: Instead of nurses, doctors, or other health-care workers, RAC delivers medicine and provides daily consultation.
Advantages: By facilitating social distancing and the ability to work in hazardous environments, the health and wellness of medical staff are protected.
Limitations: Lack of social interactions and poor intelligence of the machinery systems used in RAC.

Remote clinical application. Li et al. 19
Description: Using a safe and effective robotic sampling system, oropharyngeal swabs can be acquired and evaluated.
Advantages: With a high success rate in sampling and automatic disinfection, this system has advantages in the quality control of its swab sampling and in minimizing related risks.
Limitations: The detection rate of pathogens is not inferior to that of manual collection, but the system still requires medical assistants to replace isolation sleeves.

Remote clinical application (Lio).
Description: The friendly appearance of this robot, named Lio, resulted in the robot being well accepted by health-care staff and patients. It provides highly manipulable capabilities in its large arm and gripper, which allow the robot to open doors, grasp objects, or carry and operate a large variety of tools.
Limitations: The level of autonomy in, e.g., multi-floor mapping and elevator use needs to be increased. Moreover, the robot should be made more proactive in its interactions, such as finding specific people around the facility or actively approaching them.

Friendly care robot. Gaby et al. 24
Description: An integrative framework and research agenda on the role of a companion robot were developed to mitigate feelings of loneliness.
Advantages: This robot is able to enhance well-being status and contribute to transformative service by building different types of supportive relationships.
Limitations: There is no measurement of loneliness or classification based on age groups for the different social supports and design features that drive adoption of the robot.

Social robot.
Description: This social robot is utilized to take care of the psychological well-being of two particularly vulnerable consumer groups.
Advantages: This study delivers an overview of the conceptual integration of the robot typology into the transformation of hedonic and eudaimonic well-being in the post-COVID-19 reality.
Limitations: This study lacks an impact analysis for creating uplifting changes in well-being for customers at large if the pandemic is widespread.

Socially assistive robots. Scassellati et al. 26
Description: In addressing the secondary impacts of infectious disease outbreaks, this robot not only improves mental health but also supports education.
Advantages: The authors suggested substantial investments in robotics and human-robot interactions to help people combat the consequences of infectious diseases, with two key factors: the robot's behaviors in dynamic human environments and maximizing positive social impacts on societies.
Limitations: Frequent contact with the screen can cause some negative mental states and stress. Moreover, efforts in economic recovery could expose personal information.

Robot service. Kim et al. 27; Jiang et al. 28
Description: Customers had a more positive attitude toward robot-staffed hotels during a health crisis.
Advantages: Clients' awareness of the robot employee outweighed their feelings of fear; thus, robotic applications in hotel services could become widespread in the future.
Limitations: The sophistication level of the robot service depends on the development of artificial intelligence. In addition, our understanding of technology acceptance in human-robot interactions needs to be improved.

Ultraviolet light robot. Fleming et al. 30
Description: A touchless solution for the terminal disinfection of rooms using ultraviolet light could effectively decrease the bioburden of epidemiologically relevant pathogens.
Advantages: Deployment in a multidisciplinary practice was performed with structured education.
Limitations: This study was done in a single center; thus, the results may not be generalizable.

Ultraviolet light robot. Guettari et al. 31
Description: The findings show that a wheeled robot is the most efficient device to inactivate micro-organisms.
Advantages: A robot-aided solution to kill bacteria through ultraviolet lamps flexibly provides an excellent method to lessen the spread of infectious diseases.
Limitations: The self-governing function, a key factor in autonomous systems since it can drive robots in unknown environments, was not addressed.

Ultraviolet light robot. Muthugala et al. 32
Description: A novel method to enable wall-following behavior for a wall disinfection robot was invented using fuzzy logic.
Advantages: Various disinfectant agents, such as sprayable liquid, gas, and ultraviolet light, could be applied on the wall.
Limitations: This robot cannot be utilized in a wall-less environment, and there is a lack of practical verification to show its superior performance.
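As a rough numerical sketch of the point-source intensity and dose model described above, the following Python helpers (illustrative names; not part of the robot's actual software) sum the attenuated inverse-square contributions of each point source and multiply the intensity by the sterilization time:

```python
import math

def uv_intensity(points, receiver, power_per_point, sigma_q=0.0, t_q=0.0):
    """Sum the UV intensity at `receiver` over all point sources.

    Each point source radiates `power_per_point` over a sphere of radius r,
    attenuated by the crystal-tube absorption factor exp(-sigma_q * t_q).
    """
    T = math.exp(-sigma_q * t_q)  # transmission through the crystal tube
    E = 0.0
    for p in points:
        r = math.dist(p, receiver)  # distance from point source to receiver
        E += power_per_point / (4.0 * math.pi * r**2) * T
    return E

def uv_dose(intensity, sterilization_time):
    """Delivered dose: intensity integrated over the exposure time."""
    return intensity * sterilization_time
```

For instance, a single source of power 4π at unit distance with no tube absorption yields unit intensity, which makes the inverse-square scaling easy to check.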
The authors in the literature 35,36 discussed which UV-based technologies are appropriate for autonomous robots in the current situation. First, it is necessary to locate the adapted UV light on top of the mobile platform, as shown in Figure 3. Given the new restrictions placed on daily life by social distancing requirements, advanced techniques could be utilized for navigation and detection in this scenario, such as lidar, digital cameras, and ultrasound sensors. The robotic UV platform fuses integrated sensors to perform simultaneous localization and mapping (SLAM).
To stabilize the whole system, including both the mobile platform and UV lamp, the system's center of gravity (CoG) must be adapted to different system states. The robot model is theoretically simulated in Figure 4(a). To avoid collapse of the system, the moment generated by the gravity force around the center of rotation C must be greater than the moment of the centrifugal force F_lt:

P \cdot \frac{b}{2} \geq F_{lt} \cdot h, \qquad F_{lt} = \frac{m v^2}{R}

Since P = mg, the mass cancels and the CoG height is bounded by

h \leq \frac{g b R}{2 v^2} \quad (7)

where b is the distance between the two active wheels, h is the height of the CoG, P is the gravity force, v is the velocity of the robot, m is the total weight of the system, g is gravitational acceleration, and R is the radius of the turning motion.
To validate the mathematical computations, the theoretical result was simulated on a computer, as shown in Figure 4. The benefits of adaptive height provide a working space with a wide area for the UV lamp to disinfect and prevent accidental collapse. 37,38 In terms of adjusting system height, the robot shrinks to its minimum size when turning in order to ensure stable movement. In a linear trajectory, the system height can reach the boundary conditions shown in equation (7).
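The moment-balance condition above can be checked numerically. The sketch below assumes the balance P·b/2 ≥ F_lt·h with F_lt = mv²/R, in which the mass cancels; the function names are hypothetical:

```python
def max_stable_height(b, v, R, g=9.81):
    """Upper bound on CoG height h so that the restoring moment m*g*b/2
    exceeds the centrifugal moment (m*v**2/R)*h during a turn.
    Mass cancels: h_max = g*b*R / (2*v**2)."""
    return g * b * R / (2.0 * v**2)

def is_stable(h, b, v, R, g=9.81):
    """True if the current CoG height keeps the robot from tipping."""
    return h <= max_stable_height(b, v, R, g)
```

This mirrors the behavior described in the text: the faster or tighter the turn, the lower the lamp must be retracted to keep the system from tipping.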

Autonomous navigation in an unknown environment
In SLAM, the Robot Operating System (ROS) is an essential tool for a robotic system functioning in uncertain circumstances. The autonomous navigation function consists of one master node and several other nodes, among which the move_base node plays a crucial role. 39 This node plans the desired trajectory and commands the driving motors via linear and angular velocities. The inputs to the move_base node include the scanning data from the sensing devices, odometry information, and translational offset values. Within the move_base node, two costmaps store environmental data: the global costmap plans the motion trajectory on the universal map, while the local costmap locally generates an obstacle-avoidance map.
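To illustrate what a local costmap does with lidar data, the following self-contained Python sketch (not actual move_base code; the grid layout and the lethal cost value of 100 are simplifying assumptions) marks lidar returns as obstacle cells in a small occupancy grid:

```python
import math

def mark_scan(costmap, resolution, origin, pose, ranges, angle_min, angle_inc, max_range):
    """Mark lidar returns as lethal cells (cost 100) in a 2D occupancy grid.

    costmap:    list of rows of cost values
    resolution: cell size in meters
    origin:     (x, y) of the grid's lower-left corner in map coordinates
    pose:       (x, y, theta) of the robot in map coordinates
    """
    x, y, theta = pose
    for i, r in enumerate(ranges):
        if r >= max_range:          # no return along this beam
            continue
        a = theta + angle_min + i * angle_inc
        hx, hy = x + r * math.cos(a), y + r * math.sin(a)  # hit point
        col = int((hx - origin[0]) / resolution)
        row = int((hy - origin[1]) / resolution)
        if 0 <= row < len(costmap) and 0 <= col < len(costmap[0]):
            costmap[row][col] = 100
    return costmap
```

A real costmap additionally inflates obstacles by the robot's footprint and decays stale observations, but the core update is this ray-endpoint marking.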
In this research, we propose a collision-avoiding navigation framework for a mobile platform carrying UV lamps (see Figure 5). Data from the lidar sensor and the inertial measurement unit (IMU) are fused to produce the estimated values used as input for the model of observation. To improve navigation accuracy and lessen the computational burden, weight estimation and resampling are performed before the global map of the workspace is created. For localization of the robot, two positioning sensors are required. Based on their signals, a model of motion is established to build the robot's trajectories. The Monte Carlo algorithm, which represents the distribution of possible states for the robot as it moves and senses the environment, is embedded in the position sampling. A local costmap is then formed to indicate the current location of the robot. Based on these investigations, autonomous navigation via data fusion was successfully deployed on the ROS platform.

Theoretical estimation in a 3D workspace
Here, we consider a nonholonomic wheeled mobile robot without wheel slip. A list of mathematical symbols is briefly outlined in Table 2. It is assumed that R_0(O_0; \mathbf{i}_0, \mathbf{j}_0, \mathbf{k}_0) and R_1(O_1; \mathbf{i}_1, \mathbf{j}_1, \mathbf{k}_1) are two separate coordinate frames. Vector \mathbf{u}, shown in Figure 6, has coordinates (x_0, y_0, z_0) and (x_1, y_1, z_1) in R_0 and R_1, respectively.

Any vector \mathbf{v} can be expanded on the basis (\mathbf{i}_0, \mathbf{j}_0, \mathbf{k}_0). Since (\mathbf{i}_0, \mathbf{j}_0, \mathbf{k}_0) are orthonormal, the coordinates in the two frames are related by a rotation matrix R(\varphi, \theta, \psi) composed of rotations about the x-, y-, and z-axes, and the rotation matrix satisfies

R \cdot R^T = I

A matrix S_{n \times n} is skew-symmetric if and only if S + S^T = 0; hence s_{ij} + s_{ji} = 0 for i = 1, \ldots, n and j = 1, \ldots, n. From these properties, the overall rotation decomposes as

R(\varphi, \theta, \psi) = R_\varphi R_\theta R_\psi

where R_\varphi, R_\theta, R_\psi are the three elementary rotation matrices about the x-, y-, and z-axes with angles \varphi, \theta, \psi, respectively.

At time step k, the location of the robot in the working space is defined as p_k = (x_k, y_k, z_k, \varphi_k, \theta_k, \psi_k)^T, and its velocity is denoted {}^m v_k. The variation of the robot's location between time steps k and k+1 can be estimated as

\Delta^m p_k = {}^m v_k \, \Delta t

where \Delta t represents the time duration between k and k+1.

The model of motion for the mobile platform is computed as

p_{k+1} = p_k + R_{1k} \, \Delta^m p_k, \qquad R_{1k} = R(\varphi_k, \theta_k, \psi_k)

The angular values \varphi_k, \theta_k, \psi_k are measured by the IMU sensor at time step k. In the scope of this research, the autonomous robot only rotates around the z-axis and does not travel along the vertical z-axis, so the motion model simplifies to

x_{k+1} = x_k + \Delta^m x_k \cos\psi_k - \Delta^m y_k \sin\psi_k
y_{k+1} = y_k + \Delta^m x_k \sin\psi_k + \Delta^m y_k \cos\psi_k
\psi_{k+1} = \psi_k + \Delta\psi_k

where \Delta^m x_k and \Delta^m y_k are the displacements of the robot along the x- and y-axes between times t and t + \Delta t. The displacements of the left and right wheels, \Delta S_{L,k} and \Delta S_{R,k}, are measured by the positioning sensors, and \Delta^m x_k and \Delta^m y_k are computed from them. The speed data from the IMU sensor could also be used to estimate these displacements, but the IMU is subject to external noise; consequently, the positioning-sensor data are used instead.
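The wheel-increment update discussed above can be sketched with the standard differential-drive dead-reckoning model. This is an illustrative simplification with hypothetical names, not the robot's actual firmware:

```python
import math

def update_pose(x, y, theta, dS_L, dS_R, b):
    """Dead-reckoning pose update from left/right wheel increments.

    b is the distance between the two active wheels. The heading change
    and traveled arc length follow the differential-drive model:
        d_theta = (dS_R - dS_L) / b,  d_s = (dS_R + dS_L) / 2
    """
    d_theta = (dS_R - dS_L) / b
    d_s = (dS_R + dS_L) / 2.0
    # advance along the mean heading of the step (midpoint approximation)
    x += d_s * math.cos(theta + d_theta / 2.0)
    y += d_s * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```

Equal increments give straight-line motion, while opposite increments rotate the robot in place, which matches the planar, z-rotation-only motion assumed in the text.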
Model of motion. Generally speaking, the system state involves the three linear variables x, y, z and the angular variables \varphi, \theta, \psi. Because the robot only works in the plane Oxy, its state reduces to x_t = (x, y, \psi)^T. Due to uncertain factors and environmental disturbances, the system state at time t cannot be represented by a single vector; instead, it is described by a probability distribution. In Figure 8, the autonomous robot is initially located at x_{t-1}; the robot then receives control signal u_t in order to reach a new location x_t^{(i)}, signified by one of the green points. However, the robot only reads the positioning values from the encoders in order to interpolate the red point. Each potential location x_t^{(i)} is considered one particle. There are two kinds of motion models. In the first, the control signal u_t is the commanded driving speed of the motors; this model is better for obstacle avoidance, since the motion is estimated before u_t is transmitted to the motors, and it is effective when the difference between the commanded and actual speeds is small enough. In the second model, the control signal u_t is taken from the feedback values of the positioning sensors. Due to its higher precision, the second model is used to predict the system state.

Model of observation. The model of observation, which describes data processing from the laser under external disturbances, is defined as the conditional probability distribution p(z_t | x_t, m). In this function, z_t is the sensor measurement, x_t is the robot's location at time t, and m is the map of the surrounding workspace. The measurement z_t comprises an array of k elements z_t^{(i)}, with 0 \leq i \leq k. Assuming the individual beam measurements are independent, we can approximate

p(z_t | x_t, m) = \prod_{i=1}^{k} p(z_t^{(i)} | x_t, m)

Map m is a grid map divided into several cells; each cell stores its coordinates (x, y) together with whether it contains an obstacle, so map m is represented as a set of cells m_i.

Monte Carlo localization. Monte Carlo localization is a type of particle filter used to identify the robot's position in a given workspace. This method uses a finite number of samples to represent a probability distribution. 40,41 Because the number of samples is limited, the scheme is approximate. The distribution bel(x_t) represents the robot's belief about its location in the workspace, and the main idea is to characterize bel(x_t) with a set of M random samples. Each particle x_t^{[i]} with 1 \leq i \leq M is a position in the practical environment where the robot could possibly be located. In the initial stage, the mobile robot is not sure of its starting location because it has not yet observed the surrounding area; each particle is therefore randomly and uniformly sampled over the global map m. Given the sensor measurement z_t, the Monte Carlo algorithm computes the weights of the sampled particles. Whenever the robot moves, a new set of particles is re-sampled; simultaneously, the weights of the particles are estimated again via the measurement z_t. The computation of bel(x_t) is then repeated, and the particles with small weights are rejected. This process is iterated, and after several cycles the set of particles converges around the neighborhood with the highest probability of containing the robot's location. The Monte Carlo localization method is regularly deployed in ground autonomous systems. 42
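A minimal one-dimensional sketch of the predict-weight-resample cycle described above may clarify the idea. The Gaussian sensor model and all names here are illustrative assumptions, not the implementation used on the robot:

```python
import math
import random

def monte_carlo_step(particles, control, measurement, motion_noise, expected, meas_sigma):
    """One predict-weight-resample cycle of Monte Carlo localization (1D sketch).

    particles:   list of candidate positions x^[i]
    control:     commanded displacement u_t
    expected(x): predicted range reading if the robot were at position x
    """
    # 1) predict: apply the motion model with noise to every particle
    moved = [x + control + random.gauss(0.0, motion_noise) for x in particles]
    # 2) weight: likelihood of the measurement under a Gaussian sensor model
    weights = [math.exp(-(measurement - expected(x)) ** 2 / (2 * meas_sigma ** 2))
               for x in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3) resample: low-weight particles are dropped, high-weight ones duplicated
    return random.choices(moved, weights=weights, k=len(moved))
```

For example, with a wall at 10 m and a range model expected(x) = 10 - x, particles starting at 0 that are commanded to move 1 m forward and then observe a range of 9 m cluster around x = 1 after one cycle.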

Results of simulation and experiments
In this section, we verify the effectiveness of the proposed approach on the ROS platform. The model of the autonomous surveillance robot was established as shown in Figure 9(a), with the exact design in a real scene shown in Figure 9(b). The overall process for manipulating the robot is as follows. There were several target rooms, and the robot initially moved from the walking corridor to each room. When arriving in a room, the robot would patrol around the tables and chairs for several minutes. At that time, a UV lamp was vertically elevated to expand the laser scanning range. The wheeled robot utilized data fusion from various sensors to avoid collisions, plan its trajectory, and check whether it had completed its patrol round. To further assess the performance of the surveillance robot in disinfecting, we conducted some real-world experiments. The proposed framework was embedded into the microprocessor, which executed the proper behavior. Figure 10 shows the simulation results for the mobile robot using the proposed framework. The blue color indicates the zone that the laser beam can reach. The autonomous robot accomplishes its disinfection task in two stages. In the first stage, due to the unknown nature of the environment, the global map is empty. The robot must scan around its location to acquire the surrounding data. After receiving feedback signals, the surveillance robot is able to recognize whether there is an obstacle. The robot then approaches subsequent positions without any collision. The robot's movement is sensed by two rear positioning sensors that exactly reflect its present location on the map. After entering each room, the robot patrols around it. This process is repeated many times until the robot has completely created the visible map. With this knowledge, the robot can autonomously control the UV lamp in the second stage. If an obstacle suddenly appears, the robot updates the status of the global map.
It can be seen clearly that the proposed navigation framework performed well in the virtual tests.
We conducted the experimental validation under the same conditions. Our research and development center, which includes several meeting rooms, working rooms, and walking corridors, was used as the test area for the proposed framework. The autonomous surveillance robot spent the first period exploring the unknown environment by visiting and scanning each room. After a period of time, the global map in the host personal computer was successfully established, as shown in Figure 11.
The result of the real-world verification of our approach is shown in Figure 12. Ignoring the slipping phenomenon, the autonomous robot accurately tracked the desired trajectory and avoided obstacles. The mixed control, using both self-governing navigation and a UV lamp, flexibly enabled a series of complex actions. Therefore, our proposed framework can be employed for the navigation of UV-based disinfection robots and promises to be a highly applicable technique for disease prevention in public zones. To highlight its competitive performance, Table 3 compares the structure and techniques of the proposed method with those of other works.

Conclusions
In this article, the visual application of a navigation framework for an autonomous system in an unknown environment was presented. The mobile platform, integrated with a UV lamp, patrols living areas to eliminate bacteria. Based on this idea, the proposed framework, involving data fusion from different sensing devices, navigates the whole system while avoiding obstacles. Several simulation tests and experiments were conducted to demonstrate its effectiveness. We believe that our method is entirely capable of enabling the self-governing navigation of UV-based disinfection robots in public workspaces.
Future work in this field remains necessary. Advanced algorithms should be investigated to enhance the robot's visual capabilities. The social awareness and optimized energy consumption of surveillance robots still need to be implemented on arbitrary terrain. Moreover, humans or groups of humans might appear in front of the robot. 46,47 Accordingly, socially interactive models and context-based learning techniques for improving the robot's behavior represent promising research directions.

Table 3. List of comparative specifications among related studies (Reque et al., 43 Conte et al., 44 and the proposed method).