DOI: 10.1145/3407982.3407983 · CompSysTech conference proceedings · Research article

Control of various robots through signals from the brain activity

Published: 25 August 2020

ABSTRACT

Research into the brain and how it works is a hot topic that has gained popularity in recent years. It is therefore not surprising that more and more researchers, as well as young scientists, are interested in connecting different hardware to brain activity -- mind-controlled cars, planes, and robots. This presentation gives a brief overview of how brain activity signals are recorded and processed. Particular attention is paid to the developed brain activity management system, which can successfully control the "Baxter" [1] and "Nao v5" [2] robots. The system uses the Emotiv Epoc+ electroencephalograph (EEG) [3] to capture brain activity in real time. The hardware follows the International 10-20 Electrode Positioning System [4].
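As a rough illustration of the capture step, the sketch below simulates reading one frame of raw EEG data. The channel labels are the 10-20 electrode positions commonly documented for the Epoc+ headset (an assumption here, not taken from this paper), and `read_raw_frame` is a hypothetical stand-in for the vendor SDK:

```python
import random

# Electrode labels commonly listed for the Emotiv Epoc+ (assumed here),
# named after International 10-20 scalp positions.
EPOC_CHANNELS = ["AF3", "F7", "F3", "FC5", "T7", "P7", "O1",
                 "O2", "P8", "T8", "FC6", "F4", "F8", "AF4"]

def read_raw_frame():
    """Hypothetical stand-in for the vendor SDK: returns one sample of
    simulated raw EEG amplitude (microvolts) per electrode."""
    return {ch: random.gauss(4200.0, 15.0) for ch in EPOC_CHANNELS}

frame = read_raw_frame()
print(len(frame))  # one value per electrode
```

In a real pipeline the vendor SDK would stream such frames continuously; everything downstream only needs the per-channel values.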

To achieve control with the mind, a user must first create their own profile. This profile stores the user's personal brain activity data, because each person's brain activity is highly individual. First, a neutral level of brain activity [5] must be recorded; then the various robot commands must be trained. The current version of the system can distinguish between 5 different commands. The first command is relatively easy to train, but with each additional successfully trained command it becomes harder to distinguish the individual commands from one another. Training is done through a virtual cube, so that over time a person learns to control their brain activity. The cube can be moved in different directions, and a person with a highly developed imagination can even make it disappear.
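One way to picture the profile-building step is a minimal sketch, assuming the profile simply stores a mean feature vector (centroid) per trained state; the feature values and function names below are illustrative, not the authors' implementation:

```python
import statistics

def train_profile(recordings):
    """Build a per-user profile. `recordings` maps a state name
    ("neutral" first, then each trained command) to a list of feature
    vectors captured while the user holds that mental state. The
    profile stores the mean feature vector per state."""
    profile = {}
    for state, vectors in recordings.items():
        profile[state] = [statistics.fmean(col) for col in zip(*vectors)]
    return profile

# Toy features: two numbers per observation (e.g. band-power summaries).
recordings = {
    "neutral": [[0.1, 0.2], [0.2, 0.1]],
    "push":    [[0.9, 0.8], [1.0, 0.7]],
}
profile = train_profile(recordings)
```

Recording the neutral baseline first gives every later command a reference point to be distinguished from, which matches why each additional command is harder to train: the feature space gets more crowded.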

The EEG data [6] from the device is processed and compared with the data in the user's profile. When the data from the device matches the data already trained by the user, the corresponding command is sent to the robot in real time for execution. Specially written Python scripts [7] connect the Emotiv Epoc+ to the robots.
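The matching step can be sketched as a nearest-centroid comparison with a distance threshold, assuming the profile layout from a simple centroid-per-state model; the threshold value and names are assumptions for illustration only:

```python
import math

def classify(features, profile, threshold=0.5):
    """Compare a live feature vector against each trained centroid;
    return the closest state, or fall back to "neutral" when nothing
    is close enough (no command is sent to the robot)."""
    best_state, best_dist = "neutral", math.inf
    for state, centroid in profile.items():
        dist = math.dist(features, centroid)
        if dist < best_dist:
            best_state, best_dist = state, dist
    return best_state if best_dist <= threshold else "neutral"

profile = {"neutral": [0.15, 0.15], "push": [0.95, 0.75]}
print(classify([0.9, 0.8], profile))  # → "push"
```

Falling back to "neutral" on a weak match is a safe default: an ambiguous reading leaves the robot where it is rather than executing the wrong motion.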

Each robot can perform up to 5 different commands -- four movement commands and one command to return the robot to a neutral position. The results depend mainly on how well and how long the system is trained. With enough training time and appropriate training methods, all five commands can be distinguished and executed. For comparison, on average most users manage to train up to 3 commands correctly.
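The five-command layout (four motions plus neutral) could be wired up as a simple lookup table; the command names and motion labels below are hypothetical, since the paper does not list them:

```python
# Hypothetical mapping from the five mental commands to robot motions;
# the actual names used by the authors' system are not given in the text.
COMMANDS = {
    "push":    "move_forward",
    "pull":    "move_backward",
    "left":    "turn_left",
    "right":   "turn_right",
    "neutral": "return_to_neutral_pose",
}

def dispatch(command):
    """Look up the motion for a recognized command; anything
    unrecognized falls back to the safe neutral pose."""
    return COMMANDS.get(command, COMMANDS["neutral"])
```

Keeping neutral as both a command and the fallback means an untrained or misread state never triggers a movement.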

References

  1. S. Cremer, L. Mastromoro and D. O. Popa, "On the performance of the Baxter research robot," 2016 IEEE International Symposium on Assembly and Manufacturing (ISAM), Fort Worth, TX, 2016, pp. 106--111.
  2. C. Jost, M. Grandgeorge, B. L. Pévédic and D. Duhaut, "Study of Nao's Impact on a Memory Game," 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Bielefeld, 2014, pp. 186--187.
  3. R. Zeng, A. Bandi and A. Fellah, "Designing a Brain Computer Interface Using EMOTIV Headset and Programming Languages," 2018 Second International Conference on Computing Methodologies and Communication (ICCMC), Erode, 2018, pp. 908--913.
  4. J. A. I. R. Silva, F. E. Suarez Burgos and S. Wu, "Interactive Visualization of the Cranio-Cerebral Correspondences for 10/20, 10/10 and 10/5 Systems," 2016 29th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Sao Paulo, 2016, pp. 424--431, doi: 10.1109/SIBGRAPI.2016.065.
  5. M. Pereira, A. Sobolewski and J. d. R. Millán, "Modulation of the inter-hemispheric asymmetry of motor-related brain activity using brain-computer interfaces," 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, 2015, pp. 2319--2322, doi: 10.1109/EMBC.2015.7318857.
  6. L. Gutierrez and M. Husain, "Design and Development of a Mobile EEG Data Analytics Framework," 2019 IEEE Fifth International Conference on Big Data Computing Service and Applications (BigDataService), Newark, CA, USA, 2019, pp. 333--339, doi: 10.1109/BigDataService.2019.00059.
  7. D. Liu and P. Wang, "Use Python API to automate script based on OpenStack platform," 2015 12th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, 2015, pp. 465--468, doi: 10.1109/ICCWAMTIP.2015.7494032.
  8. B. Rebsamen et al., "A Brain Controlled Wheelchair to Navigate in Familiar Environments," in IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 18, no. 6, pp. 590--598, Dec. 2010, doi: 10.1109/TNSRE.2010.2049862.
  9. M. Sarabia, R. Ros and Y. Demiris, "Towards an open-source social middleware for humanoid robots," 2011 11th IEEE-RAS International Conference on Humanoid Robots, Bled, 2011, pp. 670--675, doi: 10.1109/Humanoids.2011.6100883.
  10. Q. Huang and P. Yin, "Research on multi-degree of freedom force loading system based on parallel mechanism," 2015 International Conference on Fluid Power and Mechatronics (FPM), Harbin, 2015, pp. 542--547, doi: 10.1109/FPM.2015.7337177.
  11. A. L. Kaczmarek, "Stereo camera upgraded to equal baseline multiple camera set (EBMCS)," 2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), Copenhagen, 2017, pp. 1--4, doi: 10.1109/3DTV.2017.8280416.
  12. Y. Juan, X. Feng, L. Jia, A. Xudong and J. Yongqiang, "The reverberation suppression in wideband diver detection sonar," 2014 Oceans - St. John's, St. John's, NL, 2014, pp. 1--4, doi: 10.1109/OCEANS.2014.7003121.
  13. M. Matsumoto, H. Naono, H. Saitoh, K. Fujimura and Y. Yasuno, "Stereo zoom microphone for consumer video cameras," in IEEE Transactions on Consumer Electronics, vol. 35, no. 4, pp. 759--766, Nov. 1989, doi: 10.1109/30.106893.
  14. K. Žmolíková, M. Delcroix, K. Kinoshita, T. Higuchi, A. Ogawa and T. Nakatani, "Learning speaker representation for neural network based multichannel speaker extraction," 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Okinawa, 2017, pp. 8--15, doi: 10.1109/ASRU.2017.8268910.
  15. S. Yan et al., "Real-Time Ethernet to Software-Defined Sliceable Superchannel Transponder," in Journal of Lightwave Technology, vol. 33, no. 8, pp. 1571--1577, April 15, 2015, doi: 10.1109/JLT.2015.2391299.
  16. S. Li, M. Hedley, K. Bengston, D. Humphrey, M. Johnson and W. Ni, "Passive Localization of Standard WiFi Devices," in IEEE Systems Journal, vol. 13, no. 4, pp. 3929--3932, Dec. 2019, doi: 10.1109/JSYST.2019.2903278.

Published in

CompSysTech '20: Proceedings of the 21st International Conference on Computer Systems and Technologies
June 2020, 343 pages
ISBN: 9781450377683
DOI: 10.1145/3407982
Copyright © 2020 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
