CAMPUS MITHRA: Design and implementation of voice based attender robot

A robot is a machine programmed by a computer whose movements and functions are directed by an external or an embedded controller. Robots find uses in every domain of life. In a university setting, a robot can serve as an attender for passing circulars around, instead of multiple attenders doing the task manually, which is a costly and time-consuming process. Parents also often find it difficult to navigate an unfamiliar university. In this paper, we present a voice based attender robot with line following and speech recognition capabilities that can be used at universities for a variety of purposes, such as passing around circulars, interacting with parents, and helping them navigate the university through Spoken Natural Language. The main objectives of the proposed work are to reduce the attender's burden of passing circulars and calling a student/faculty member, and to design a robot competent enough to connect with humans through a spoken natural language such as English or Kannada, so that it can assist parents who are new to the institution and do not know whom to approach. Here, the focus is on two languages: English and Kannada. The system uses a voice recognition module to recognize human speech, and a voice playback module replies in either English or Kannada according to the user's command. It works in two modes: voice recognition mode, to answer user queries, and line following mode, to pass circulars and call a student/faculty member. In this way, the voice based attender robot finds its application in the university setting. It is, however, not limited to universities; it can also be deployed in places such as railway stations, bus stations, large factories and other similar surroundings.


INTRODUCTION
Attenders are used for passing circulars and calling students/faculty from classrooms, which is a repetitive and tedious task. Sometimes parents who are new to the college do not know whom to approach when they visit for the first time. For many people, the most effective form of communication is face-to-face, and it is important to be able to use one's mother tongue when dealing with interactive services. Some parents also cannot converse in English and prefer the regional language, Kannada, to convey their queries. The existing high dependence on attenders in educational institutions to convey even the smallest of messages is a matter of concern, as is the inability of parents to communicate and navigate through the college departments because of language barriers. This robot addresses these concerns and aims to provide solutions.
A robot is a machine that can be programmed, guided and controlled by an embedded or external device, with the ability to carry out complex actions automatically without human interference. Robots are used in all walks of life, mainly to automate processes that involve performing repeated tasks continuously [12]. A self-operating mobile machine that follows a white/black line drawn on the floor is known as a line follower robot. Nowadays, robots are designed to be more human friendly through speech recognition and various other capabilities, among which the ability to communicate in multiple languages is a salient feature. Spoken Natural Language has always been the main mode of communication between human beings, and individuals communicating with devices such as laptop computers or smartphones through speech are a common sight today. Thus, in the last few decades, speech-to-text analysis has gained increasing attention [1]. Such technology is the trend of the day and can be conveniently used in robots; indeed, it is inevitably used in humanoid robots. Speech recognition and conversion make the whole process of communicating with a robot a breeze: in applications like Google Assistant, for example, instead of typing text, people can simply speak. This is called automatic speech recognition, computer speech recognition or speech to text (STT) [16]. Systems that need prior "training", wherein one or more speakers read text or lexicons to the system so that it can analyse each individual's specific voice and use it to fine-tune recognition accuracy, are "speaker dependent" systems. Systems that do not require any training are called "speaker independent" systems [2]. To automate tasks like taking circulars and calling students/faculty from different classrooms, we have designed a robot that can be used in educational institutions on a daily basis.

LITERATURE SURVEY
A literature survey plays a pivotal role in the life cycle of any project and hence cannot be underestimated. The main objective of a literature survey is to arrive at a new solution by comprehending the failures and drawbacks of existing systems. The survey is carried out in the early stages of the project, when information is collected from various sources and analyzed against the project requirements; it includes the study of different technologies and the flaws of present ones, and involves comparing existing and proposed designs. The following inferences were drawn from various research papers for the substantial study and better implementation of our project. Some of the prominent papers that were relevant and helpful to this project are discussed briefly below.
The project in [1] focuses on voice recognition and speech-to-text conversion, enabling the system to interact with humans through Spoken Natural Language. The speech input is given through a microphone and processed by a voice recognition module, which sends a command message to the robot's microcontroller; appropriate actions are taken based on the received message. The robot's movement is controlled using voice commands, and a special feature is the use of multiple languages (Kannada and English). Its shortcomings are limited range, short battery life and an inability to recognize inputs in noisy areas [1]. The next paper describes 'WikiTalk', a humanoid robot capable of speaking in multiple languages. It acquires information from the Wikipedias of different languages and uses it to meet the linguistic needs of different users, switching languages on explicit request. However, the dependence on Wikipedia makes it bulky in terms of memory, or requires access to the internet [2].
The next paper tackles human-robot interaction in a multi-modal setting and emphasizes the Natural Language component. The authors aimed to ground the rich language terms used in the interaction into robotic actions for correct execution of commands. Constant interaction with the operator also teaches the robot about the environment, removing the need for an a priori domain description; the work thus concentrates on the collection and application of information about the environment. It used a wheeled robot employed in a home setting, with continuous learning through human-robot interaction as the major goal [3]. The following project uses a line follower robot based on the ATmega32A microcontroller and comprises three parts: mechanical, electronics, and software written in the C language. The mechanical part includes the robot frame, gearbox and wheels; the electronics include a line sensor circuit, microcontroller and power supply; and the software part is the C program that implements the workflow of the line follower robot. Its drawbacks include reduced speed and instability due to the thickness of the lines [4].
The next paper is about a voice controlled talking robot operated through a phone. Its movements are commanded by voice, and it responds by generating speech output. The robot is connected to the phone through a Bluetooth module for receiving voice commands, and gives appropriate output responses for the corresponding input commands. Its drawback is the use of expensive devices [5]. The final project is a robot controlled by voice commands. Input is given through a microphone and processed by a voice module, which then forwards it to the robot's microcontroller. The aim of the project is to develop a robot that can be operated using motors: the input is given through a transmitter, processed, converted to digital signals and passed to the robot. Its drawbacks include errors due to background noise and the possibility of unauthorized usage [6].

METHODOLOGY
The input is given in the form of voice, which is used to communicate with the robot. The robot uses a speech recognition module to understand the input. It then either addresses the user's query or passes circulars around. After completing the task, the robot returns to its original position.
The methodology of the robot is as follows:
1. The robot is given input in the form of voice.
2. The robot performs one of two tasks: announcing circulars / calling students or faculty, or addressing a user query.
3. On completion of the task, the robot returns to its original position if required.
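The two-mode control flow above can be sketched as a small hardware-independent decision function. The mode names, the dispatch() helper and the convention that a high switch level selects line following are illustrative assumptions, not the actual firmware.

```cpp
#include <cassert>
#include <string>

// Illustrative sketch of the top-level mode selection (an assumption,
// not the authors' actual firmware).
enum class Mode { VoiceRecognition, LineFollowing };

// The physical mode switch is read as a digital level; in this sketch
// a HIGH level selects line following (assumed polarity).
Mode modeFromSwitch(bool switchHigh) {
    return switchHigh ? Mode::LineFollowing : Mode::VoiceRecognition;
}

// Map the active mode to the task the robot performs in that mode.
std::string dispatch(Mode m) {
    switch (m) {
        case Mode::VoiceRecognition: return "answer-query";
        case Mode::LineFollowing:    return "deliver-circular";
    }
    return "idle";
}
```

In the real robot, the same selection would be made once per loop iteration by reading the mode switch pin.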

HARDWARE AND SOFTWARE
The main hardware component is the microcontroller; here we have used the Arduino Mega 2560 board, based on the ATmega2560. For voice recognition we have used the V3 speech recognition module, which supports up to 80 voice commands, with a maximum of 7 active at the same time; any sound or speech can be trained as a command. Once the speech is recognized, the robot responds by voice using the APR33A3 voice recording and playback module. For locomotion we have chosen a simple line following mechanism, using the TTCRT3000L 3-channel IR sensor for line sensing and tracking, and the L298N H-bridge motor driver for controlling the speed and direction of the motors. Apart from these main parts, we have also used a mode switch to change between line following and voice recognition, an LCD display to show the current mode, indicator LEDs, and a buzzer to indicate that the bot has arrived with a message. The software used to program the board is the Arduino IDE.
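The H-bridge driver sets each motor's direction from the logic levels on its two input pins, with speed set by a PWM duty cycle on the enable pin. The following sketch models that input encoding as plain data; the struct and helper names are hypothetical, and the pin-level convention (IN1 high / IN2 low for forward) is the usual H-bridge scheme rather than this project's documented wiring.

```cpp
#include <cassert>

// Model of the H-bridge motor driver inputs for one motor
// (hypothetical names; the usual two-input-plus-PWM scheme is assumed).
struct MotorPins {
    bool in1;  // direction input 1
    bool in2;  // direction input 2
    int  pwm;  // enable-pin duty cycle, 0-255
};

// Opposite levels on IN1/IN2 select the rotation direction.
MotorPins forward(int speed) { return {true,  false, speed}; }
MotorPins reverse(int speed) { return {false, true,  speed}; }
// Equal levels short the motor terminals, braking it.
MotorPins brake()            { return {true,  true,  0}; }
```

On the Arduino, these three fields would be written out with two digitalWrite() calls and one analogWrite() call per motor.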

IMPLEMENTATION
The V3 voice recognition module was trained to identify voice commands in English as well as the regional language Kannada, such as 'Where is the principal's cabin?', which can be translated to Kannada as 'praamshupaalara kotadi ellidhe'. Once the training was done, the module was able to recognize the queries, and characteristic outputs were communicated serially to the Arduino board, based on which the corresponding output line is driven high for the APR33A3 to play back the answer to that particular query. For the line following part, the IR sensors were programmed to track the black lines, and the L298N drove the motors accordingly so that the robot could move without any difficulty. The V3 speech recognition module was connected serially as an input to the Arduino Mega with the corresponding TXD and RXD connections, as shown in the schematic. The output of the IR sensor array was given as input to the Arduino Mega through digital I/O pins. The mode selection switch was also connected to the Mega board. The L298N was connected as an output of the Arduino Mega through digital I/O pins; the output from the board controls the driver, which in turn is connected to the DC geared motors for speed and direction control of the wheels. A digital output from the Arduino board controls the operation of the APR33A3 module. The LCD display, buzzer and other modules were connected as digital outputs to the Arduino Mega via GPIO pins.
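The line-tracking decision made from the 3-channel IR array can be written as a pure function of the three sensor readings. This is a minimal sketch: the polarity (true meaning the sensor sees the black line), the command names, and the use of an all-three-sensors reading as a stop marker are assumptions for illustration, not the project's exact logic.

```cpp
#include <cassert>
#include <string>

// Minimal 3-channel line-following decision logic (illustrative).
// Inputs: true = that IR sensor currently sees the black line.
std::string lineFollowStep(bool left, bool centre, bool right) {
    if (centre && !left && !right) return "forward";    // centred on the line
    if (left && centre && right)   return "stop";       // assumed stop/junction marker
    if (left  && !right)           return "turn-left";  // line has drifted left
    if (right && !left)            return "turn-right"; // line has drifted right
    return "search";                                    // line lost
}
```

Each returned command would then be translated into motor-driver pin states, e.g. slowing one wheel to steer back onto the line.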

RESULTS AND DISCUSSION
The project was implemented successfully and the expected results were achieved. Using Campus Mithra, the HOD or faculty in charge can easily assign the robot a task by giving a voice command; the robot carries out the task without any further human intervention and returns to its original position. When the user, be it the HOD or the Principal, wants to send a message to the classrooms, he or she records the message in the APR33A3 module, where one track has been exclusively assigned for circulars, and then puts the robot into line following mode. Once in line following mode, the robot automatically moves along the track and stops at each classroom, buzzes for 5 seconds to indicate that it has arrived with a circular, waits a further 5 seconds, automatically plays back the pre-recorded message, and then moves on to the next class. Also, when a student or parent visits the department for the first time, they can ask Campus Mithra basic queries such as the location of the Head of the Department's cabin or of classrooms. Some parents may not be able to read or converse in English, and that is when Campus Mithra comes to the rescue: they can converse with it in Kannada as well. The user speaks the query near the bot's microphone, and the bot recognizes the voice and gives a suitable answer, such as the location of the Head of the Department's cabin, in the user's language of choice (either Kannada or English).
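The per-classroom delivery routine described above can be written out as a timed step plan. The 5-second buzz and 5-second wait follow the text; the step names, the zero-duration placeholders for travel (which depends on the building layout), and the final return-home step are illustrative assumptions.

```cpp
#include <cassert>
#include <string>
#include <vector>

// One timed action in the delivery routine (illustrative model).
struct Step {
    std::string action;
    int seconds;  // fixed duration; 0 = depends on layout/track length
};

// Build the circular-delivery plan for a given number of classrooms.
std::vector<Step> classroomStops(int classrooms) {
    std::vector<Step> plan;
    for (int i = 0; i < classrooms; ++i) {
        plan.push_back({"follow-line", 0});    // travel time varies
        plan.push_back({"buzz", 5});           // announce arrival
        plan.push_back({"wait", 5});           // let the class settle
        plan.push_back({"play-circular", 0});  // APR33A3 playback track
    }
    plan.push_back({"return-home", 0});        // back to original position
    return plan;
}
```

In firmware, each fixed-duration step would map to a delay (or a millis()-based timer) between the corresponding output-pin writes.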
The ability to use spoken natural language to communicate with users makes the robot human friendly and minimizes the attender's burden of passing circulars and calling a student/faculty member. The system can be further extended to online and real-time environments, using the cloud to access real-time data and information. The robot can also be enhanced with capabilities to understand its environment and comprehend changes through AI and other technologies for better performance, and it can be fitted with cameras to capture visuals of the path it has travelled, which can be helpful in many applications beyond the reach of humans. There is also scope for converting speech or text to other languages such as German, Italian and many more.

CONCLUSION
We have designed and implemented a robot that can reduce the burden on attenders for repetitive tasks like passing circulars or calling students/faculty. It also helps and guides parents/students who are new to the college, in the language of their choice, and can address basic queries via its speech module. The time taken by the robot to cover a distance of 1 m is 4-5 seconds; the time per classroom depends entirely on the infrastructure and the distance between classrooms. By using this attender robot, the manpower needed for daily routine tasks such as circular passing and calling a student/faculty member in educational institutions can be reduced. A voice based self-assisting robot providing indoor navigation assistance is very important for individuals visiting a new environment, such as parents visiting the college for the first time; the proposed robot plays a significant role in such situations by taking voice commands and guiding people. It can also be implemented in places such as railway stations, bus stations, large factories and other similar surroundings.

ACKNOWLEDGEMENT
We would like to acknowledge and thank all those who have been instrumental in the successful completion of this project. We specially express our gratitude to our respected guide Prof. D J