With the continuous progress of robotics, mobile robots have been widely adopted in many fields. In power grids, mobile robots are used to inspect electrical equipment, which greatly reduces the investment of manpower and material resources. However, mobile robots must often operate in constantly changing, complex environments, and because they cannot obtain environmental information in time, path planning becomes difficult. To address this problem, this paper proposes a path planning method for mobile robots based on improved reinforcement learning. The method establishes a grid environment model and defines the reward in terms of the number of steps the robot takes. It then introduces a changing action-selection strategy to balance the robot's exploration and exploitation of the environment, so that the exploration factor changes dynamically as the robot's knowledge of the environment grows, thereby accelerating the convergence of the learning algorithm. Simulation results show that the method achieves autonomous navigation and path planning for mobile robots in complex environments and, compared with traditional algorithms, greatly reduces the number of iterations required.
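The abstract does not give the algorithm's details, but the ingredients it names (a grid environment model, a step-based reward, and an exploration factor that decays as the robot learns) correspond to tabular Q-learning with a decaying epsilon-greedy policy. The sketch below illustrates that general scheme under stated assumptions; the grid layout, reward values, and the linear decay schedule are illustrative choices, not the paper's exact method.

```python
import random

def train_q_learning(grid, start, goal, episodes=500, alpha=0.5, gamma=0.9,
                     eps_start=1.0, eps_end=0.05):
    """Tabular Q-learning on a grid (0 = free cell, 1 = obstacle).

    The exploration rate eps decays linearly from eps_start to eps_end over
    the episodes -- a simple stand-in for the paper's dynamic exploration
    factor, which shifts the agent from exploration toward exploitation.
    """
    rows, cols = len(grid), len(grid[0])
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    Q = {}  # (state, action_index) -> estimated value

    def step(state, a):
        r, c = state
        nr, nc = r + actions[a][0], c + actions[a][1]
        # Moving into a wall or obstacle keeps the robot in place (penalty).
        if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] == 1:
            return state, -1.0, False
        if (nr, nc) == goal:
            return (nr, nc), 10.0, True
        return (nr, nc), -0.1, False  # small per-step cost favors short paths

    for ep in range(episodes):
        eps = eps_start + (eps_end - eps_start) * ep / max(episodes - 1, 1)
        state, done = start, False
        for _ in range(rows * cols * 4):  # cap episode length
            if done:
                break
            if random.random() < eps:
                a = random.randrange(4)  # explore
            else:
                a = max(range(4), key=lambda x: Q.get((state, x), 0.0))
            nxt, reward, done = step(state, a)
            best_next = max(Q.get((nxt, x), 0.0) for x in range(4))
            old = Q.get((state, a), 0.0)
            Q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return Q

def greedy_path(Q, grid, start, goal, limit=100):
    """Follow the learned greedy policy and return the visited states."""
    rows, cols = len(grid), len(grid[0])
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    path, state = [start], start
    for _ in range(limit):
        if state == goal:
            break
        a = max(range(4), key=lambda x: Q.get((state, x), 0.0))
        r, c = state
        nr, nc = r + actions[a][0], c + actions[a][1]
        if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] == 1:
            break  # greedy policy stuck; training was insufficient
        state = (nr, nc)
        path.append(state)
    return path
```

On a small grid with obstacles, training and then following the greedy policy yields an obstacle-free route from start to goal; decaying the exploration factor is what lets the later episodes exploit what the earlier, more exploratory episodes discovered.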