ReActIn: Infusing Human Feedback into Intermediate Prompting Steps of Large Language Models

Open Access
Article
Conference Proceedings
Authors: Manuel Delaflor, Carlos Toxtli, Claire Gendron, Wangfan Li, Cecilia Delgado Solorzano

Abstract: This paper introduces ReActIn, a framework designed to infuse human feedback into the intermediate prompting steps of large language models. The practicality and effectiveness of ReActIn are validated through experiments that apply four established prompting strategies, evaluated both with and without human feedback integration. The proposed architecture's performance is compared against traditional large language models across various tasks using four standard evaluation tests. Our findings reveal that the integration of human feedback has a direct impact on the reasoning, action prompting, and overall decision-making capabilities of the language models. This study underscores the potential of ReActIn to shape a future where sophisticated, context-aware AI systems, empowered by human feedback, can effectively navigate complex real-world scenarios.
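The abstract does not include implementation details, but the core idea, soliciting human feedback between the intermediate steps of a ReAct-style Thought/Action/Observation loop, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation; all function names (react_with_human_feedback, act, get_feedback) are hypothetical.

```python
# Hypothetical sketch of infusing human feedback into intermediate
# prompting steps of a ReAct-style loop. Names and structure are
# illustrative assumptions, not the paper's actual API.

from typing import Callable, List


def react_with_human_feedback(
    question: str,
    llm: Callable[[str], str],           # any text-in/text-out model call
    act: Callable[[str], str],           # executes an action, returns an observation
    get_feedback: Callable[[str], str],  # e.g. input() in an interactive session
    max_steps: int = 5,
) -> str:
    """Run a ReAct-style loop, appending human feedback after each step."""
    transcript: List[str] = [f"Question: {question}"]
    for _ in range(max_steps):
        # 1. Model proposes the next intermediate step (thought + action).
        step = llm("\n".join(transcript) + "\nThought:")
        transcript.append(f"Thought: {step}")
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()

        # 2. Execute the proposed action and record the observation.
        observation = act(step)
        transcript.append(f"Observation: {observation}")

        # 3. Infuse human feedback into the prompt before the next step,
        #    so it conditions the model's subsequent reasoning and actions.
        feedback = get_feedback("\n".join(transcript))
        if feedback:
            transcript.append(f"Human feedback: {feedback}")
    return transcript[-1]
```

In an interactive session, get_feedback could simply wrap input(), letting a human correct or redirect the model's reasoning before each subsequent step; passing a function that returns an empty string recovers the plain feedback-free loop the paper uses as its baseline condition.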

Keywords: Large Language Models, Artificial Intelligence

DOI: 10.54941/ahfe1004597

