Abstract
This paper presents KGGPT, a system that couples a knowledge graph with ChatGPT for robotics applications. Traditional planning methods for robot tasks, such as ROSPlan, rely on structured data and sequential actions, and suffer from a limited data range and little flexibility to modify behaviors in response to user feedback. Recent research has combined AI planning with large language models (LLMs) to overcome these limitations, but the generated text is not always consistent with real-world physics or with the physical skills the robot can actually execute. To address these challenges, we propose KGGPT, which incorporates prior knowledge to enable ChatGPT to handle a variety of robotic tasks. KGGPT extracts relevant knowledge from the knowledge graph, generates a semantic description of that knowledge, and feeds it to ChatGPT. The gap between ChatGPT's knowledge and the actual service environment is bridged by using the knowledge graph to model robot skills, task rules, and environmental constraints. The output is a behavior tree composed of robot skills. We evaluate our method in an office setting and show that it outperforms traditional PDDL planning and a ChatGPT-only planning scheme. Additionally, our system reduces the programming effort required when new task requirements arise. This research has the potential to significantly advance the field of robotics.
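The pipeline described in the abstract (retrieve KG facts, verbalize them, prompt the LLM, and ground the resulting behavior tree in robot skills) can be sketched as follows. This is an illustrative toy, not the paper's implementation: the triple store, the skill names, and the stubbed `plan_with_llm` function are all assumptions, and a real system would call the ChatGPT API where the stub stands.

```python
# Illustrative sketch of a KGGPT-style pipeline; all names are hypothetical.

# Toy knowledge graph: (subject, relation, object) triples modelling
# robot skills, object properties, and environmental constraints.
KG = [
    ("robot", "has_skill", "move_to"),
    ("robot", "has_skill", "pick"),
    ("robot", "has_skill", "place"),
    ("cup", "located_in", "kitchen"),
    ("cup", "graspable", "true"),
]

def extract_knowledge(kg, entity):
    """Step 1: retrieve the triples that mention a task-relevant entity."""
    return [t for t in kg if entity in (t[0], t[2])]

def describe(triples):
    """Step 2: render triples as a natural-language context for the prompt."""
    return ". ".join(f"{s} {r.replace('_', ' ')} {o}" for s, r, o in triples)

def plan_with_llm(prompt):
    """Step 3: stub for the ChatGPT call; a real system queries the API here
    and parses the reply into a behavior tree (node type, leaf actions)."""
    return ("sequence", ["move_to", "pick", "move_to", "place"])

def grounded(tree, kg):
    """Step 4: check every leaf action against the robot skills in the KG,
    so the plan stays consistent with what the robot can physically do."""
    skills = {o for s, r, o in kg if r == "has_skill"}
    return all(action in skills for action in tree[1])

context = describe(extract_knowledge(KG, "cup") + extract_knowledge(KG, "robot"))
tree = plan_with_llm(f"Context: {context}\nTask: bring the cup from the kitchen.")
print(grounded(tree, KG))  # every leaf action is a known robot skill
```

The grounding check in step 4 is what separates this scheme from a ChatGPT-only planner: actions the LLM invents that have no matching skill in the knowledge graph are rejected rather than executed.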
Supported by the “Pioneer” and “Leading Goose” R&D Program of Zhejiang (2022C01130) and the Key Research Project of Zhejiang Lab (No. G2021NB0AL03).
Z. Mu and W. Zhao contributed equally to this work.
References
Brohan, A., et al.: Do as I can, not as I say: grounding language in robotic affordances. In: Conference on Robot Learning, pp. 287–318. PMLR (2023)
Cashmore, M., et al.: ROSPlan: planning in the robot operating system. In: Proceedings of the International Conference on Automated Planning and Scheduling, vol. 25, pp. 333–341 (2015)
Daruna, A., Nair, L., Liu, W., Chernova, S.: Towards robust one-shot task execution using knowledge graph embeddings. In: 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 11118–11124. IEEE (2021)
Hanheide, M., et al.: Robot task planning and explanation in open and uncertain worlds. Artif. Intell. 247, 119–150 (2017)
Huang, W., Abbeel, P., Pathak, D., Mordatch, I.: Language models as zero-shot planners: extracting actionable knowledge for embodied agents. In: International Conference on Machine Learning, pp. 9118–9147. PMLR (2022)
Kootbally, Z., Schlenoff, C., Lawler, C., Kramer, T., Gupta, S.K.: Towards robust assembly with knowledge representation for the planning domain definition language (PDDL). Robot. Comput.-Integr. Manuf. 33, 42–55 (2015)
Liang, J., et al.: Code as policies: language model programs for embodied control. arXiv preprint arXiv:2209.07753 (2022)
Lu, Y., et al.: Neuro-symbolic procedural planning with commonsense prompting. arXiv preprint arXiv:2206.02928 (2022)
Munawar, A., et al.: MaestROB: a robotics framework for integrated orchestration of low-level control and high-level reasoning. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 527–534. IEEE (2018)
Nyga, D., et al.: Grounding robot plans from natural language instructions with incomplete world knowledge. In: Conference on Robot Learning, pp. 714–723. PMLR (2018)
Puig, X., Ra, K., Boben, M., Li, J., Wang, T., Fidler, S., Torralba, A.: VirtualHome: simulating household activities via programs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8494–8502 (2018)
Saxena, A., et al.: RoboBrain: large-scale knowledge engine for robots. arXiv preprint arXiv:1412.0691 (2014)
Silver, T., Athalye, A., Tenenbaum, J.B., Lozano-Perez, T., Kaelbling, L.P.: Learning neuro-symbolic skills for bilevel planning. arXiv preprint arXiv:2206.10680 (2022)
Singh, I., et al.: ProgPrompt: generating situated robot task plans using large language models. arXiv preprint arXiv:2209.11302 (2022)
Tenorth, M., Beetz, M.: KnowRob: knowledge processing for autonomous personal robots. In: 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4261–4266. IEEE (2009)
Varadarajan, K.M., Vincze, M.: AfRob: the affordance network ontology for robots. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1343–1350. IEEE (2012)
Vemprala, S., Bonatti, R., Bucker, A., Kapoor, A.: ChatGPT for robotics: design principles and model abilities (2023)
Waibel, M., et al.: RoboEarth. IEEE Robot. Autom. Mag. 18(2), 69–82 (2011)
Xu, D., Mandlekar, A., Martín-Martín, R., Zhu, Y., Savarese, S., Fei-Fei, L.: Deep affordance foresight: planning through what can be done in the future. In: 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 6206–6213. IEEE (2021)
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Mu, Z., et al. (2023). KGGPT: Empowering Robots with OpenAI’s ChatGPT and Knowledge Graph. In: Yang, H., et al. (eds.) Intelligent Robotics and Applications. ICIRA 2023. Lecture Notes in Computer Science, vol. 14271. Springer, Singapore. https://doi.org/10.1007/978-981-99-6495-6_29
DOI: https://doi.org/10.1007/978-981-99-6495-6_29
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-6494-9
Online ISBN: 978-981-99-6495-6
eBook Packages: Computer Science, Computer Science (R0)