ABSTRACT
The research of the brain and how it works is a hot topic and gaining popularity in recent years. Therefore, it is not surprising that more and more researchers, as well as young scientists, are interested in trying to connect different hardware with brain activity -- mind-controlled cars, planes, robots. In this presentation, a brief overview of how to record and proceed with the brain activity signals will be given. Special attention will be done on the developed brain activity management system that can successfully control "Baxter" [1] and "Nao v5" [2] robots. The system uses the Emotiv Epoc+ electroencephalograph [3] (EEG) to capture real-time brain activity. The hardware follows the International 10-20 Electrode Positioning System[4].
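The Emotiv Epoc+ exposes 14 EEG channels whose names correspond to their 10-20 scalp positions. As a minimal sketch (the helper function and its raw-sample format are illustrative assumptions, not part of the described system), one incoming sample can be paired with those channel names like this:

```python
# The 14 EEG channels of the Emotiv Epoc+, named by their 10-20 positions.
EPOC_CHANNELS = [
    "AF3", "F7", "F3", "FC5", "T7", "P7", "O1",
    "O2", "P8", "T8", "FC6", "F4", "F8", "AF4",
]

def sample_to_dict(raw_sample):
    """Pair one raw sample (14 values, in channel order) with channel names."""
    if len(raw_sample) != len(EPOC_CHANNELS):
        raise ValueError("expected one value per channel")
    return dict(zip(EPOC_CHANNELS, raw_sample))
```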
To achieve control with the mind, a user must first create a personal profile. This profile stores the user's own brain-activity data, because each person's brain activity is highly individual. First, a neutral level of brain activity [5] is recorded; then the various robot commands must be trained. The current version of the system distinguishes between five different commands. The first command is relatively easy to train, but with each additional successfully trained command it becomes harder to distinguish an individual command from the others. Training is done through a virtual cube, so over time a person can learn to control their brain activity. The cube can be moved in different directions, and a person with a highly developed imagination can even make it disappear.
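The profile workflow above can be sketched as a small data structure: a neutral baseline plus one averaged feature vector per trained command. The class, its names, and the averaging strategy are all illustrative assumptions; the actual system's profile format is not described in the abstract.

```python
import statistics

class UserProfile:
    """Hypothetical sketch of a per-user training profile: a neutral
    baseline plus one averaged feature vector ("signature") per command."""

    def __init__(self):
        self.neutral = None     # averaged neutral-state features
        self.signatures = {}    # command name -> averaged feature vector

    def record_neutral(self, trials):
        """Average several neutral-state trials into a baseline."""
        self.neutral = [statistics.mean(col) for col in zip(*trials)]

    def train_command(self, name, trials):
        """Average several trials of one mental command into its signature."""
        if self.neutral is None:
            raise RuntimeError("record the neutral baseline first")
        self.signatures[name] = [statistics.mean(col) for col in zip(*trials)]
```

Averaging repeated trials mirrors the idea that longer training yields a cleaner, more separable signature for each command.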
The EEG data [6] from the device is processed and compared with the data in the user's profile. When the live data matches the data already trained by the user, the corresponding command is sent to the robot in real time for execution. Specially written Python scripts [7] connect the Emotiv Epoc+ to the robots.
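The matching step can be sketched as nearest-signature classification with a rejection threshold: the live feature vector is compared with every trained signature, and a command fires only when one is close enough. The Euclidean distance metric and the threshold value are assumptions for illustration; the abstract does not specify the actual comparison method.

```python
import math

def classify(sample, signatures, threshold=1.0):
    """Return the best-matching command for a live feature vector,
    or None if no trained signature is close enough.

    `signatures` maps command names to trained feature vectors."""
    best_name, best_dist = None, float("inf")
    for name, sig in signatures.items():
        dist = math.dist(sample, sig)  # Euclidean distance (illustrative)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```

A rejection threshold matters here: without it, every stray neutral-state sample would be forced onto the nearest command and sent to the robot.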
Each robot can perform up to five different commands -- four movement commands and one command that returns the robot to a neutral position. The results depend mainly on how well and for how long the system is trained. With enough training time and appropriate training methods, all five commands can be distinguished and executed; in practice, most users manage to train up to three commands correctly.
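The five-command layout above can be sketched as a dispatch table handed to a robot-specific sender callback. The command names and motion vectors are illustrative assumptions, not the paper's actual labels, and the callback stands in for whatever Baxter or Nao wrapper the Python scripts use.

```python
# Four movement commands plus return-to-neutral (names are illustrative).
COMMANDS = {
    "forward":  (1, 0),
    "backward": (-1, 0),
    "left":     (0, -1),
    "right":    (0, 1),
    "neutral":  (0, 0),
}

def dispatch(command, send):
    """Map a recognized command to a motion vector and hand it to a
    robot-specific sender callback."""
    if command not in COMMANDS:
        raise ValueError(f"unknown command: {command}")
    send(COMMANDS[command])
```

Keeping the mapping in one table means the same recognition pipeline can drive either robot; only the `send` callback changes.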
- S. Cremer, L. Mastromoro and D. O. Popa, "On the performance of the Baxter research robot," 2016 IEEE International Symposium on Assembly and Manufacturing (ISAM), Fort Worth, TX, 2016, pp. 106--111.
- C. Jost, M. Grandgeorge, B. L. Pévédic and D. Duhaut, "Study of Nao's Impact on a Memory Game," 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Bielefeld, 2014, pp. 186--187.
- R. Zeng, A. Bandi and A. Fellah, "Designing a Brain Computer Interface Using EMOTIV Headset and Programming Languages," 2018 Second International Conference on Computing Methodologies and Communication (ICCMC), Erode, 2018, pp. 908--913.
- J. A. I. R. Silva, F. E. Suarez Burgos and S. Wu, "Interactive Visualization of the Cranio-Cerebral Correspondences for 10/20, 10/10 and 10/5 Systems," 2016 29th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Sao Paulo, 2016, pp. 424--431, doi: 10.1109/SIBGRAPI.2016.065.
- M. Pereira, A. Sobolewski and J. d. R. Millán, "Modulation of the inter-hemispheric asymmetry of motor-related brain activity using brain-computer interfaces," 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, 2015, pp. 2319--2322, doi: 10.1109/EMBC.2015.7318857.
- L. Gutierrez and M. Husain, "Design and Development of a Mobile EEG Data Analytics Framework," 2019 IEEE Fifth International Conference on Big Data Computing Service and Applications (BigDataService), Newark, CA, USA, 2019, pp. 333--339, doi: 10.1109/BigDataService.2019.00059.
- Di Liu and Pin Wang, "Use Python API to automate script based on Open Stack platform," 2015 12th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, 2015, pp. 465--468, doi: 10.1109/ICCWAMTIP.2015.7494032.
- B. Rebsamen et al., "A Brain Controlled Wheelchair to Navigate in Familiar Environments," in IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 18, no. 6, pp. 590--598, Dec. 2010, doi: 10.1109/TNSRE.2010.2049862.
- M. Sarabia, R. Ros and Y. Demiris, "Towards an open-source social middleware for humanoid robots," 2011 11th IEEE-RAS International Conference on Humanoid Robots, Bled, 2011, pp. 670--675, doi: 10.1109/Humanoids.2011.6100883.
- Q. Huang and P. Yin, "Research on multi-degree of freedom force loading system based on parallel mechanism," 2015 International Conference on Fluid Power and Mechatronics (FPM), Harbin, 2015, pp. 542--547, doi: 10.1109/FPM.2015.7337177.
- A. L. Kaczmarek, "Stereo camera upgraded to equal baseline multiple camera set (EBMCS)," 2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Copenhagen, 2017, pp. 1--4, doi: 10.1109/3DTV.2017.8280416.
- Y. Juan, X. Feng, L. Jia, A. Xudong and J. Yongqiang, "The reverberation suppression in wideband diver detection sonar," 2014 Oceans - St. John's, St. John's, NL, 2014, pp. 1--4, doi: 10.1109/OCEANS.2014.7003121.
- M. Matsumoto, H. Naono, H. Saitoh, K. Fujimura and Y. Yasuno, "Stereo zoom microphone for consumer video cameras," in IEEE Transactions on Consumer Electronics, vol. 35, no. 4, pp. 759--766, Nov. 1989, doi: 10.1109/30.106893.
- K. Žmolíková, M. Delcroix, K. Kinoshita, T. Higuchi, A. Ogawa and T. Nakatani, "Learning speaker representation for neural network based multichannel speaker extraction," 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Okinawa, 2017, pp. 8--15, doi: 10.1109/ASRU.2017.8268910.
- S. Yan et al., "Real-Time Ethernet to Software-Defined Sliceable Superchannel Transponder," in Journal of Lightwave Technology, vol. 33, no. 8, pp. 1571--1577, 15 April 2015, doi: 10.1109/JLT.2015.2391299.
- S. Li, M. Hedley, K. Bengston, D. Humphrey, M. Johnson and W. Ni, "Passive Localization of Standard WiFi Devices," in IEEE Systems Journal, vol. 13, no. 4, pp. 3929--3932, Dec. 2019, doi: 10.1109/JSYST.2019.2903278.
Index Terms
- Control of various robots through brain-activity signals