Abstract
Using only depth information, this paper introduces a novel two-stage planning approach that improves the computational efficiency and planning performance of memoryless local planners. First, a depth-based sampling technique is proposed to identify and eliminate a specific class of in-collision trajectories among the sampled candidates. Specifically, all trajectories with occluded endpoints are found by querying the depth values and are then excluded from the sampled set, which significantly reduces the computational workload of collision checking. Second, we apply a tailored local planning algorithm that employs a direction cost function and a depth-based steering mechanism to prevent the robot from becoming trapped in local minima. The planning algorithm is theoretically proven to be complete in convex-obstacle scenarios. To validate the effectiveness of the DEpth-based Sampling and Steering (DESS) approach, we conducted experiments in simulated environments in which a quadrotor flew through cluttered regions containing multiple variously sized obstacles. The experimental results show that DESS significantly reduces local-planning computation time compared to the uniform sampling method and yields planned trajectories with a lower minimized cost. More importantly, the success rates for navigation to different destinations in the test scenarios improve considerably over the fixed-yawing approach.
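As an illustrative sketch of the depth-based sampling idea described above, a candidate trajectory's endpoint can be culled by projecting it into the depth image and checking whether the sensor reports a closer surface along that ray. The function name, intrinsics, and pinhole-projection details below are assumptions for illustration, not taken from the paper's implementation:

```python
import numpy as np

def endpoint_occluded(endpoint_cam, depth_image, fx, fy, cx, cy, margin=0.0):
    """Return True if a trajectory endpoint (camera coordinates, z forward)
    is occluded, i.e. the depth image reports a closer surface along its ray.

    A pinhole camera model is assumed; fx, fy, cx, cy are intrinsics.
    """
    x, y, z = endpoint_cam
    if z <= 0.0:
        return True  # behind the camera: treat as invalid
    # Project the 3D endpoint onto the image plane.
    u = int(round(fx * x / z + cx))
    v = int(round(fy * y / z + cy))
    h, w = depth_image.shape
    if not (0 <= u < w and 0 <= v < h):
        return True  # outside the field of view: cannot be verified as free
    measured = depth_image[v, u]
    # Occluded if the sensor sees a surface closer than the endpoint.
    return measured + margin < z

# Culling a sampled candidate set before full collision checking:
depth = np.full((4, 4), 5.0)   # toy 4x4 depth image, everything 5 m away
depth[1, 1] = 1.0              # one pixel with a close obstacle
candidates = [np.array([0.0, 0.0, 3.0]), np.array([-1.5, -1.5, 3.0])]
survivors = [p for p in candidates
             if not endpoint_occluded(p, depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)]
```

In this toy example the second endpoint projects onto the close-obstacle pixel and is discarded, so only the first candidate proceeds to full collision checking.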
Data and/or Code availability
All generated data and implementation code for the simulations are available and maintained in the author's online repository at https://github.com/thethaibinh. Readers with further questions are welcome to contact the corresponding author.
Acknowledgements
This research was supported by the Henry Sutton scholarship from Federation University Australia.
Funding
Open Access funding enabled and organized by CAUL and its Member Institutions. Federation University Australia funded the research via the Henry Sutton scholarship (Application ID: 3056759).
Author information
Authors and Affiliations
Contributions
Thai Binh Nguyen conceived the original ideas, conducted the research, coding and simulations, and prepared the manuscript. Linh Nguyen primarily supervised the research; he participated in the literature review, discussed the idea, method and results, and edited the manuscript. Tanveer Choudhury is an associate supervisor; he commented on the research and all versions of the manuscript. Manzur Murshed is a co-supervisor; he commented on the research and all versions of the manuscript. Kathleen Keogh is an associate supervisor; she commented on the research and all versions of the manuscript.
Corresponding author
Ethics declarations
Ethics approval
No ethics approval was required for this research.
Consent to participate
Not applicable
Consent for publication
This paper does not require any consent for publication.
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
Proof of Lemma 1
All convex obstacles can be represented by wall and spheroid shapes. When the vehicle hits the boundary of a wall while moving toward the goal, the planner cannot generate any feasible normal trajectory, since all sampled candidates lead to collisions. The robot therefore needs to steer so that its camera gains a more open FoV. Assumption 2 implies that after this steering, the first feasible trajectory's direction is parallel to the current side of the wall, and the subsequent consecutive best-cost trajectories form a line parallel to the wall's periphery until they end at a leaving point \(q_{L}\) (as illustrated in Fig. 7a). Indeed, these trajectories have the best direction costs, since the cost is the negative dot product of the unit vectors of the m-line and the trajectory direction, as formulated in Eq. 4. Spheroids and other convex obstacle scenarios can be proved by following the same geometric approach, as visualised in Fig. 7b and c. Thus, when the vehicle faces a convex obstacle, it reaches a leaving point \(q_{L}\) after only the first steer.\(\square \)
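The direction cost invoked in the argument above, the negative dot product of the unit m-line and trajectory-direction vectors per Eq. 4, can be sketched as follows. Variable and function names are illustrative assumptions, not taken from the paper's implementation:

```python
import numpy as np

def direction_cost(m_line_dir, traj_dir):
    """Negative dot product of the unit m-line direction and the unit
    trajectory direction (cf. Eq. 4): minimised (-1) when the trajectory
    points along the m-line, maximised (+1) when it points opposite.
    """
    m = np.asarray(m_line_dir, dtype=float)
    t = np.asarray(traj_dir, dtype=float)
    m = m / np.linalg.norm(m)
    t = t / np.linalg.norm(t)
    return -float(np.dot(m, t))

# A trajectory along the m-line has the lowest possible cost; when a wall
# blocks that direction, a candidate parallel to the wall side (here
# perpendicular to the m-line) is the best-cost feasible choice.
along = direction_cost([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])             # -1.0
parallel_to_wall = direction_cost([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])  #  0.0
```

Because the cost varies monotonically with the angle between the two unit vectors, consecutive best-cost candidates along a blocking wall stay aligned with its side, matching the geometric argument in the proof.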
Proof of Theorem 1
Assumption 2 shows that the robot reaches \(q_{S}\) from the start via a straight-line path. Once the robot reaches \(q_{S}\), Lemma 1 ensures that it can reach a \(q_{L}\) of the current obstacle after only one steer, from which it can proceed either directly to the visible goal or to the next obstacle's \(q_{S}\), as illustrated in Fig. 7d. Assumptions 1 and 2 indicate that once the robot reaches the \(q_{L}\) of an obstacle, it never faces that obstacle again on its way to the goal. Since the environment contains finitely many obstacles, the robot needs only a limited number of steers to reach a \(q_{L}\) from which \(q_{G}\) is visible. Assumption 2 then allows the robot to travel directly to \(q_{G}\). Therefore, the proposed algorithm can always find a feasible path to the goal.\(\square \)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Nguyen, B.T., Nguyen, L., Choudhury, T.A. et al. Depth-based Sampling and Steering Constraints for Memoryless Local Planners. J Intell Robot Syst 109, 46 (2023). https://doi.org/10.1007/s10846-023-01971-7