An AI-assisted chatbot for radiation safety education in radiotherapy

Purpose. We created a virtual assistant chatbot to serve as a tool for radiation safety training of clinical staff, including radiation oncologists, radiotherapists and medical physicists, in cancer treatment. The Bot can also be used to test their knowledge of radiation safety. Methods. The Bot was constructed using IBM's Watson Assistant functionalities on the IBM cloud. A layered structure approach was used in the workflow of the Bot to interact with the user. By answering various questions concerning radiation safety in radiotherapy, users can learn the essential information needed when working in a cancer centre/hospital. Results. The user interface of the Bot was a front-end window operating on the Internet, which could easily be accessed from any Internet-of-Things device such as a smartphone, tablet or laptop. The Bot could communicate with the user for radiation safety Q&A. If the Bot could not identify what the user needed, it would provide a list of options as guidance. Using natural language processing in the communication, knowledge transfer from the Bot to the user could be carried out. Conclusion. The radiation safety chatbot worked as intended, utilizing all the tools provided by the IBM Watson Assistant. The Bot could provide radiation safety information to radiation staff effectively and be used for staff training in radiotherapy.


Introduction
Radiotherapy is the main nonsurgical method used to control and treat cancer, as approximately 50% of all cancer patients are treated with radiotherapy [1]. Radiation safety in radiotherapy is therefore a concern for radiation staff using ionizing radiation in cancer treatment in a hospital. The diversity of backgrounds, professions and certification requirements among different staff, such as radiation oncologists, radiotherapists and medical physicists, brings about challenges to radiation safety protection that need to be addressed [2]. The concept of radiation protection culture was first introduced by the International Radiation Protection Association in 2008 [3]. It describes the combination of beliefs, practices, attitudes and rules related to radiation protection amongst staff and professionals, as well as patients. The key to establishing a radiation protection culture is continuous education of the staff and professionals [4].
A chatbot is a software application that simulates human conversation by integrating natural language processing and computational algorithms [5,6]. According to a study, the key aspect of chatbots when it comes to learning is that they are well suited to a student-centered learning approach [7]. Students can cognitively obtain knowledge through experiences and interactions with the chatbot, rather than through memorization. Interaction is further increased if the chatbot can respond to questions from students and guide them to the answer in a step-by-step approach. Furthermore, chatbots may facilitate interactions between students and instructors, as well as between students and their peers. This in turn leads to a compounding effect, enhancing dialogue-based learning. In fact, when surveyed, 71% of physicians believed that chatbots would be most beneficial for tasks such as providing medical information. This observation was extrapolated to the objective of this note [8].
Machine learning plays a key role in creating a chatbot on a platform such as IBM's Watson Assistant [9,10]. The objective of machine learning in a chatbot is to analyze the user-inputted text to obtain rules and recognize patterns [11]. This means that the chatbot converts the user-inputted text into a structured format, one that follows a hierarchy [5]. Some of the tools used in this machine learning are natural language processing technologies [12,13], including pattern matching and linguistic analysis. Watson Assistant specifically recognizes keywords in the user input and weighs them to determine the intent of the sentence. This is then cross-referenced with the database of intents to determine the response that the chatbot should provide [14]. The objective of this study was to create a chatbot (Bot) that can serve as a tool for radiation safety training in radiotherapy.
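The keyword-weighting step described above can be illustrated with a minimal sketch. Watson Assistant's actual intent model is proprietary and far more sophisticated; the intent names, keywords and weights below are hypothetical, chosen only to show the general idea of scoring weighted keywords against a database of intents.

```python
# Illustrative sketch of keyword-weighted intent matching (not Watson
# Assistant's actual algorithm). Intent names and weights are hypothetical.

INTENTS = {
    "request_hint": {"hint": 1.0, "help": 0.9, "stuck": 0.7, "clue": 0.8},
    "answer_true": {"true": 1.0, "yes": 0.6, "correct": 0.7},
    "answer_false": {"false": 1.0, "no": 0.6, "incorrect": 0.7},
}

def classify_intent(text, threshold=0.5):
    """Score each intent by summing the weights of matched keywords."""
    tokens = text.lower().split()
    scores = {
        intent: sum(w for kw, w in keywords.items() if kw in tokens)
        for intent, keywords in INTENTS.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to "not recognized" when no intent is confident enough.
    return best if scores[best] >= threshold else None
```

When no intent scores above the threshold, the function returns `None`, mirroring the Bot's behaviour of asking the user to clarify an unrecognized input.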

Methods
The Bot is built on IBM's Watson Assistant platform because it provides a relatively easy-to-use interface, along with simple but powerful integration tools that allow the Bot to be deployed through different channels such as web chat on any Internet-of-Things device [15,16]. Watson Assistant also provides high-level control of the dialogue flow and of context variables that can be used throughout the conversation. Moreover, it is a free-to-use platform, which ensures that the Bot can be worked on and improved in the future.

Main workflow
The flowchart representing the overarching structure of the dialogue is shown in figure 1. The red nodes represent the output of the Bot, while the black lines with text represent the user input and the intent of that input. The blue rhombus node represents the main branch point of the dialogue, where the user can either go through examples or take the full quiz. This workflow, along with the tools and methods used, was vital in the creation of the Bot.
To make the Bot recognize the user's name and save it as a context variable for the rest of the conversation instance, data must be 'fed' to the Bot so that it can cross-reference and recognize the name. These data were imported from an Excel file containing the 6,784 most common names in America. The Bot is also set to take the user through radiation safety questions to test his/her knowledge and to provide a learning experience through a hint function. The questions are integrated individually into the dialogue nodes, each one being self-contained but leading on to the next question.
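The name-recognition step can be sketched as follows. The real Bot cross-references the user's reply against 6,784 common names imported from an Excel file; here a small hard-coded set stands in for that list, and the function and variable names are hypothetical.

```python
# Sketch of the name-recognition step: match the user's reply against a
# list of common names and store the match as a context variable. A small
# hard-coded set stands in for the Bot's 6,784-name Excel import.

COMMON_NAMES = {"james", "mary", "john", "patricia", "robert", "jennifer"}

def extract_name(user_input, context):
    """Save the first recognized name into the conversation context."""
    for token in user_input.replace(",", " ").split():
        if token.lower() in COMMON_NAMES:
            context["name"] = token.capitalize()
            return context["name"]
    return None  # name not recognized; the Bot would ask again

context = {}
extract_name("Hi, my name is John", context)
# context["name"] is now available for the rest of the conversation.
```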
To interact with the user through a set of radiation safety questions, the dialogue nodes are programmed on the platform using a tree approach. Each node has an option to allow a new node to be added either below or above it; this is used to create the general flow of the dialogue tree. Each node can also have child nodes, which is important because there are various ways a user can answer a question. The user input is processed by the Intent function, powered by machine learning and natural language processing on the platform, to determine the intent of the user. For example, when the user finds it difficult to answer a question or answers it incorrectly a couple of times, the Bot will note this and provide hints. Since different users respond differently to the Bot when having difficulty, machine learning and natural language processing algorithms are used to learn the intents from the user input. For example, some users might input 'hint' while others might input 'help' when asking for help. The Bot has to recognize the user's intent from these various inputs and take the appropriate action.
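The per-question behaviour described above, a node that tracks failed attempts and offers a hint either on request or after repeated wrong answers, can be sketched as a small class. The class, its method names and the reply strings are illustrative, not the Bot's actual node configuration.

```python
# Illustrative sketch of one question node in the dialogue tree: it counts
# wrong answers and offers a hint on request or after repeated failures.
# Class, attribute and reply text are hypothetical.

class QuestionNode:
    def __init__(self, question, correct_intent, hint, max_attempts=2):
        self.question = question
        self.correct_intent = correct_intent
        self.hint = hint
        self.max_attempts = max_attempts
        self.attempts = 0

    def respond(self, intent):
        """Return the Bot's reply for a recognized user intent."""
        if intent == "request_hint":
            return self.hint
        if intent == self.correct_intent:
            return "Correct!"
        self.attempts += 1
        if self.attempts >= self.max_attempts:
            return "Here is a hint: " + self.hint
        return "Not quite, please try again."

node = QuestionNode(
    question="Lead aprons attenuate scattered radiation. True or false?",
    correct_intent="answer_true",
    hint="Think about why staff wear lead aprons in fluoroscopy.",
)
```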

Design of bot
When designing the dialogue tree for the Bot, the Watson Assistant platform allows the use of logical arguments to ensure the proper flow of dialogue. Because this work provides radiation safety education using a question-and-answer method, the focus for logical arguments is on intents, which suit the dialogue intended for the Bot. For example, the logic behind a true-or-false question (Question 1) can be described as follows: (1) The Bot displays the question, and the user replies with an input.
(2) If the Bot recognizes the 'True' intent, it will initiate the 'Question 1-True' child node. (3) If the Bot recognizes the 'False' intent, it will initiate the 'Question 1-False' child node. (4) If the Bot recognizes the 'Hint' intent, it will provide a hint to the user. (5) If the Bot recognizes none of these intents, it will ask the user to clarify, as his/her answer is not recognized as appropriate.
Context variables are related to the logical arguments and can be set at predefined nodes or be extracted from the user's input. These context variables are extremely useful, as they can be accessed and displayed by the Bot at any point in the conversation. For example, once the Bot learns the user's name and saves it as a context variable, it has access to that name for the entirety of the conversation. This also proves very useful when displaying the results of the quiz. Moreover, to ensure that the user can communicate with the Bot easily and in a humanlike manner, responses such as "Please type 'x' or 'y'" are limited. This can, however, prove counter-productive, as giving the user more freedom can result in responses that are not valid. To find the balance between humanlike conversation and adequate guidance, the sequential function of response variance is used. This allows the Bot to maintain a humanlike manner in its initial response to an invalid input; with repeated visits to the invalid-response node, the Bot escalates its response and asks the user to provide a specific type of input. To further achieve humanlike communication, the Bot is programmed with variance in its responses. With over 60 unique nodes, each containing 2 or 3 variants, it is unlikely that the exact same conversation will repeat, even if the user provides exactly the same answers.
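The two mechanisms above, sequential escalation on repeated invalid input and randomized response variants, can be sketched briefly. The reply texts below are illustrative, not the Bot's actual wording.

```python
import random

# Sketch of sequential response escalation and response variance.
# The first invalid input gets a humanlike nudge; repeated invalid inputs
# escalate to explicit guidance. All reply texts are hypothetical.

INVALID_SEQUENCE = [
    "Sorry, I didn't quite catch that.",
    "I'm still not sure what you mean. Could you rephrase?",
    "Please type 'ready' to continue.",  # explicit guidance as last resort
]

GREETING_VARIANTS = [
    "Hello! Nice to meet you.",
    "Hi there, welcome!",
]

def invalid_response(visit_count):
    """Escalate through the sequence; repeat the last entry afterwards."""
    return INVALID_SEQUENCE[min(visit_count, len(INVALID_SEQUENCE) - 1)]

def greeting():
    """Pick one of several variants so conversations rarely repeat exactly."""
    return random.choice(GREETING_VARIANTS)
```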

Results and discussion
The Bot was built and embedded in a website at the following link: https://radiation-safety-chatbot.webnode.com. Figure 2 shows the front-end window of the radiation safety chatbot webpage.
At the beginning, the Bot displayed an introductory message to greet the user and ask for the user's name, as shown in figure 3(a). The user gave his name to the Bot, and the Bot would use this name in the remaining communication. The Bot then asked whether the user would like to answer some radiation safety questions or to go through some examples first (figure 3(b)). When the user was not able to follow the communication, the Bot would provide sequential guidance, as shown in figure 3(c). For example, if the user could not give the expected answer, guidance would be provided requesting the user to type in certain words (e.g. 'ready' in figure 3(c)) that were meaningful as a response. In that case, the Bot could continue the communication process.
When the user answered a question incorrectly, gave an indeterminate answer or was not able to answer the question, for example as shown in figure 4, he/she could request a hint from the Bot. A hint would then be given to help the user answer the question. Through learning from the hints and from the answers to the questions, radiation safety knowledge was transferred from the Bot to the user.
When the user finished answering all the radiation safety questions, the Bot would summarise all the results, as shown in figure 5(a). The user could review the results and decide whether he/she would like to retake the quiz. If not, the Bot would say goodbye and conclude the communication (figure 5(b)).
One limitation encountered in creating the Bot was the lack of a global ability to restart the conversation freely once the Bot is published on a website. When developing and testing the Bot, the developer tools provide a 'Clear' button that resets the conversation back to the first node. Unfortunately, this tool is not available once the Bot is published, and to implement this ability globally, every single node would need to be reworked manually: a new intent would be added, leading to an exit node and thus back to the first node. In fact, attempts to add the exit node to some key parts of the conversation led to unforeseen errors, as the initial design of the logic did not foresee the need for this functionality. Because this limitation was only noticed once the Bot was fully published on a website, it became too time-costly to go back and redo and error-test each node to implement this functionality. The simplest fix is to include the reset-conversation functionality in the logic of the chatbot from the start, when building the next version, so that the user does not need to refresh the website to restart the conversation. The actual logic is not complicated, but to make it global it needs to be added to every single node.
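The proposed fix can be sketched as a check for a restart intent performed before normal node processing, so that every node implicitly supports it without per-node rework. The function and state names here are hypothetical.

```python
# Sketch of a global restart: intercept a 'restart' intent before normal
# node dispatch, so every node supports it without manual rework.
# Function, intent and node names are hypothetical.

def handle_turn(intent, state):
    """Dispatch one conversation turn, with a global restart escape hatch."""
    if intent == "restart":
        state.clear()                 # wipe all context variables
        state["node"] = "welcome"     # jump back to the first node
        return "Restarting the conversation. Hello again!"
    return process_node(state["node"], intent, state)

def process_node(node, intent, state):
    # Placeholder for the normal dialogue-tree logic.
    return f"(node {node} handles intent {intent})"
```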
Another issue was the lack of ability to perform mathematical operations on the context variables. This was discovered when the Bot was originally designed to provide the user with either a letter grade ('A'-'F') or a percentage ('0'-'100%') result. The goal was to create two separate branches, one with a hint function and the other an exam-style branch in which the user's knowledge was tested. However, the tools do not allow the developer to add up the context variables 'Q_1'-'Q_15', in which '1' is recorded for a correct answer and '0' for an incorrect one. The values would be added up into the 'Result' variable, which would then be divided by 15 (the total number of questions) and multiplied by 100 to get a percentage grade. With mathematical operations implemented, the context variables in the Bot could be used in a smarter and more humanlike way. On the other hand, it should be noted that if a subset of questions were sampled, the outcome of repeated testing would likely differ, taking more repeats to achieve the 100% pass rate. This could potentially change some of the assertions stated.
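The intended grading arithmetic, summing the per-question flags 'Q_1'-'Q_15' and converting the sum to a percentage and letter grade, can be written out as follows. The letter-grade cutoffs are illustrative assumptions, as the note does not specify them.

```python
# Sketch of the grading logic the context variables could not express
# natively: sum the flags Q_1..Q_15 (1 = correct, 0 = incorrect) and
# convert to a percentage and a letter grade. Cutoffs are assumed.

def grade(context, n_questions=15):
    """Compute (percentage, letter grade) from per-question flags."""
    total = sum(context.get(f"Q_{i}", 0) for i in range(1, n_questions + 1))
    percent = total * 100 / n_questions
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if percent >= cutoff:
            return percent, letter
    return percent, "F"
```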
To implement the Bot, this work was presented to radiation staff, colleagues and students in the cancer centre and at a scientific meeting, so that potential users had pre-Bot experience baselines. The outcomes were encouraging, and much positive feedback on how to improve the Bot was received; the Bot was improved accordingly. As comments and feedback will be collected continuously, further improvement of the Bot will be carried out routinely. This will include improving the options to make the hints more robust, since the answers given to the Bot are from the set {a, b, c, d, t, f}. For example, 'not sure' or 'not sure I know' yielded the hint information, whereas 'apples' drew a request to "Please answer the questions with 'a', 'b', 'c' or 'd'". Moreover, variations of an answer with the same meaning (e.g. 'True', 'Not False', 'Not Not True', etc.) can be included as variations under the configuration of the 'Intents' of the Bot. The content and number of questions in the Bot can also be changed flexibly to adapt to requirements under different legislation. It can be seen that constructing a Bot is a continuous learning process for the developers.