GESTURE INTERACTION IN THE LIRKIS CAVE BASED ON EMG AND KINECT SENSOR

The article deals with the possibilities of interaction in the virtual environment (VE) of the LIRKIS CAVE system. Commonly available peripherals are the keyboard, mouse or gamepad. However, these do not provide the level of immersion that hand gestures can offer. The MYO armband has proven to be one of the potential devices for improving interaction. Therefore, a system for editing object transformations in the VE was created using hand gestures and movements. Enhancing the system's natural user interface (NUI) with additional interaction capabilities has shown the device to be suitable for further development as well.


INTRODUCTION
The hand is an essential element of human gesticulation. However, there are situations in which hand activity is weakened, restricted (e.g. post-stroke) or interrupted (amputation). In such cases, rehabilitation exercises, physiotherapy or prostheses usually help. One of the possibilities for restoring hand functionality is to apply electromyography (EMG). These electrical biosignals determine the magnitude of muscle tension. By monitoring muscle activity, it is possible to adjust the patient's treatment effectively. However, the patient's cooperation with the physician or physiotherapist is also a significant factor. Positive patient motivation has been shown to speed up the treatment process. One of the motivating factors for training is an interactive game, either in a classic two-dimensional environment or in virtual reality. With such games, patient training becomes more engaging, more efficient, and more consistent.
Continuous development and improvement of VR technologies enable them to integrate better into society. Devices such as VR glasses, HMDs, or various mobile AR/VR applications are becoming more and more common, not only because of their affordability, but also because they are intuitive and straightforward to use. Thanks to smartphones, touch screen displays are the most familiar to people today. However, these devices do not provide a high level of virtual environment immersion. A fully surrounding virtual world can be achieved with, e.g., VR glasses, HMDs or CAVE systems. To increase immersion and interaction in a VE, it is essential to use a set of sensors or controllers. Some HMDs, such as the Microsoft HoloLens, have built-in sensors; this way, it is possible to control and interact with the environment using hand gestures only. The CAVE system is a specific type of VR environment. It is also less available because of its price category. Its advantage over other VR systems is that the virtual space can be shared by multiple users. Interaction in these VR interfaces is primarily achieved through haptic interfaces (e.g. mouse, gamepad, Oculus Touch) and/or sensory interfaces (e.g. OptiTrack, MYO, Kinect, Leap Motion) [1]. Considering that users see themselves as avatars in the VE, it is better to use a sensory interface. Sensors mainly affect the interaction between the user and the VE in scene editing, object manipulation, and user movement.
Using sensors such as the MYO armband in the CAVE system opens up additional options for rehabilitation training (e.g. after a stroke) to restore hand movement.

RELATED WORKS
Surface electromyography (sEMG) is widely used in clinical diagnosis, rehabilitation engineering, human-computer interaction and other fields. By collecting these signals, it is possible to detect gestures and use them to manipulate an interface. Using a BP neural network [2], it is possible to recognize 6-8 custom gestures with an accuracy of 93%. However, increasing the number of custom gestures reduces the accuracy of the recognition process [3]. There are several devices for monitoring sEMG (or just EMG) signals, and one of them is the MYO armband available in the LIRKIS laboratory. An advantage of the MYO is its low-power Bluetooth, which does not cause strong interference.
The MYO can be worn on the forearm or on the upper arm above the elbow. Therefore, it is possible to obtain EMG data from the biceps brachii and triceps brachii when a person stretches or bends the arm. With an appropriate algorithm and EMG signals, it is feasible to estimate the movement of the forearm [4]. This type of algorithm can be applied to robotic arm equipment attached to the upper arm of hand amputees. By performing the built-in MYO gestures, it is possible to achieve a total accuracy of between 50% and 97% (tested by three subjects without previous training) [5]. The contribution of EMG is also visible in transradial myoelectric prostheses [6]. The most advanced solutions controlled by sEMG include dexterous hand prostheses. However, control difficulties, comfort problems, and high costs are still the main limitations of some commercial devices [7].
Another use case for muscle activity detection is assistance with motor skills. Rehabilitation treatment is prescribed after surgery or a cerebrovascular disease such as stroke. The Lovett scale is a parameter commonly used by the doctor or therapist to determine the muscle strength of the patient's hands. By combining the MYO (EMG and accelerometer sensors) with a Unity 3D application, it is possible to display the data as a 3D visualization of hand movement. This can help the doctor to measure the Lovett scale and better adjust the treatment [8].
However, a conventional rehabilitation process consisting of repetitive exercises and similar gestures can cause boredom or depression. One proven method of increasing the attractiveness of rehabilitation and the motivation for training is an interactive game. The myoelectric gaming concept makes rehabilitation exciting and can improve the rate of treatment and thereby the patients' outcome scores [9]. A similar concept links an EMG sensor (not the MYO armband) and an EEG sensor with an HMD (Oculus Rift DK2) as a virtual interface. The results confirmed positive feedback even from patients without previous experience of virtual reality [10]. For this reason, systems such as the Rehabilitation Gaming System (RGS) [11], the Computer Assisted Rehabilitation Environment (CAREN) [12] or the Virtual Assisted Rehabilitation System (VARS, a low-cost solution) [13] have been developed. Most systems using highly immersive technologies such as VR have a positive effect on minimizing recovery time and on training progress [14].

METHODS
According to relevant studies, the MYO is classified as a high-performance device. The MYO represents a low-cost myoelectric control system used in gaming or VR rehabilitation concepts. The CAVE system is one such VR concept. Due to its specific level of immersion, it is appropriate for building an experimental system intended for further development in the rehabilitation area. The system will allow physicians to monitor muscle activity as well as the patient's level of motivation for rehabilitation. For this reason, it is necessary to prepare a virtual cave environment and EMG sensor(s) for muscle activity detection, and to create a system for monitoring and result evaluation.

Virtual environment
The LIRKIS CAVE at the Technical University of Košice is one of the unique CAVE installations in Central Europe [15]. It increases the level of immersion with various VR technologies. The main properties of the LIRKIS CAVE (see Fig. 1) are its cylindrical shape with 20 LCD screens, distributed cluster rendering [16], fully perspective stereoscopic VE and the OptiTrack motion tracking system.
For monitoring the patient's treatment, it is necessary to create an exciting and immersive scene. However, in the first phase, it is necessary to verify and evaluate the system response and the suitability of the EMG sensor in the LIRKIS CAVE on healthy subjects. Therefore, a scene with a single cube object reacting to the inputs sent by the EMG sensor was created. The received data are processed and applied to the cube's transformation. In this way, it is possible to monitor the system's response to changes of the cube's position, rotation and scale in the virtual cave environment. Based on the results, the next version with a more complex scene will be designed.

Communication with EMG peripheral(s)
In the LIRKIS CAVE, several VR interfaces were created to extend the natural user interface (NUI) and haptic commands (see Fig. 2). The main structure of the software equipment consists of several parts. The first part, called the Control Center (CC), provides the data flow with commands to all of the slave computers. The second part, named the Video Renderer, supports 3D VE rendering of the final scene to the LCD screens. The next part is the support of external I/O devices with external tools and applications implemented in C#. Their purpose is to read data from peripheral devices and send them to a socket through the CAVE's TCP/UDP interface. For the proper running of the system, all individual parts must be synchronized in real time. Therefore, a simple text format for the data sent/received between the EMG sensor and the LIRKIS CAVE was designed. Numerical coordinates are separated by a unique character and include information about the position (x, y, z), rotation (x, y, z) and scale (x, y, z).
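As an illustration, the exchange format described above can be sketched as follows. The actual tools are implemented in C#; this Python sketch, the ';' separator, the field order and the number formatting are assumptions for illustration only, since the paper does not specify the exact character.

```python
# Illustrative sketch of the transformation message exchanged with the
# LIRKIS CAVE. The separator ';' and 4-decimal formatting are assumed.

SEP = ";"

def encode_transform(position, rotation, scale):
    """Pack position, rotation and scale (x, y, z each) into one line."""
    values = list(position) + list(rotation) + list(scale)
    return SEP.join(f"{v:.4f}" for v in values)

def decode_transform(message):
    """Split a received line back into the three (x, y, z) triples."""
    v = [float(part) for part in message.split(SEP)]
    if len(v) != 9:
        raise ValueError("expected 9 coordinates (position, rotation, scale)")
    return v[0:3], v[3:6], v[6:9]
```

A single delimited line per update keeps parsing trivial on both the sensor host and the CAVE side, which matters for the real-time synchronization the system requires.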

Interaction system using EMG and Kinect sensor
As the studies show, the MYO is a suitable device for detecting EMG signals. The MYO has a simple calibration and can be used without a physical connection to another device. Communication is based on data transmission via Bluetooth and therefore restricts the user's motion to a minimum. This device is also available in the LIRKIS laboratory and will be used for the MyoGest system.
For the designed scene (section 3.1), it is necessary to allow transforming the object. Basic editing of objects usually consists of three transformations - position, rotation, and scale. For each of them, a particular manipulator is created [17].

One EMG sensor - one hand
The MyoGest prototype uses three pre-set MYO gestures to identify a specific transformation (fist - rotation, wave right - position, spread - scale) and a 'quit' gesture (wave left) for ending the manipulation mode (Fig. 3). The current transformation is saved before the 'quit' gesture is finished. When the hand moves forward, the MYO gyroscope does not register the motion. Another option would be to use the accelerometer; however, due to the complexity of the implementation and the increasing inaccuracy of the resulting values during continuous use of the sensor, this option was rejected. Therefore, the rotation of the MYO along its z-axis is used as an alternative for moving forward. Gesture recognition also includes the determination of the starting and ending points of the gesture to be recognized. After a specific gesture is performed, the system evaluates the type of transformation used and starts sending the processed data from the MYO armband to the LIRKIS CAVE (Fig. 4) in the predefined string format (section 3.2).
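The gesture-to-mode dispatch described above can be sketched as a small state machine. This is an illustrative Python sketch, not the MyoGest source (which is C#-based); the gesture identifiers follow the built-in MYO pose names, while the class and field names are assumptions.

```python
# Mapping of built-in MYO poses to edit modes, as described in the text.
GESTURE_MODES = {
    "fist": "rotation",
    "wave_right": "position",
    "fingers_spread": "scale",
}

class ManipulatorState:
    """Tracks the active transformation mode (illustrative sketch)."""

    def __init__(self):
        self.mode = None    # active transformation, or None outside edit mode
        self.saved = None   # transform committed on the last 'quit' gesture

    def on_gesture(self, gesture, current_transform=None):
        if gesture == "wave_left":          # 'quit' gesture
            self.saved = current_transform  # save before leaving edit mode
            self.mode = None
        elif gesture in GESTURE_MODES:
            self.mode = GESTURE_MODES[gesture]
        return self.mode
```

Keeping the mode in one place makes it easy to decide, for each incoming sensor sample, which component of the transformation string should be updated.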
The scene of this system applies the acquired data to the object transformation. Signals from the MYO can be processed using a simple Myo script (Lua) or the MyoSharp library based on the C# language. To synchronize data with the LIRKIS CAVE, it is necessary to work with files. Since Myo script does not have access to the file system [18], MyoSharp was chosen as the base of MyoGest.
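Since the synchronization relies on files, a minimal sketch of such a hand-off could look as follows. This is a Python illustration only; the real host is C#-based, and the file name and the atomic-rename strategy are our assumptions, not details taken from the system.

```python
import os
import tempfile

def publish_frame(line, path="myogest_frame.txt"):
    """Write one transformation line atomically (write to a temp file,
    then rename), so a reader never observes a half-written frame."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        f.write(line + "\n")
    os.replace(tmp, path)  # atomic rename on POSIX and Windows

def read_frame(path="myogest_frame.txt"):
    """Read the latest published transformation line."""
    with open(path) as f:
        return f.read().strip()
```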

Verification of the first prototype
To verify the suitability of the implemented system, MyoGest was tested in the LIRKIS CAVE. Participants used one MYO armband on either hand. They were introduced to the application's functionality before testing began. After completing the test, participants filled in the SUS questionnaire (Tab. 1).
The scenario was the following:
1. Move the object to the platform (Fig. 5) in the virtual scene.
2. Scale the object to the platform dimension.
3. Rotate the object by 90° in any direction.

SUS questions:
Q1 I think that I would like to use this system frequently.
Q2 I found the system unnecessarily complex.
Q3 I thought the system was easy to use.
Q4 I think that I would need the support of a technical person to be able to use this system.
Q5 I found the various functions in this system were well integrated.
Q6 I thought there was too much inconsistency in this system.
Q7 I would imagine that most people would learn to use this system very quickly.
Q8 I found the system very cumbersome to use.
Q9 I felt very confident using the system.
Q10 I needed to learn a lot of things before I could get going with this system.
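The answers to the ten questions above are combined into the standard SUS score: odd-numbered (positively worded) items contribute (response - 1), even-numbered (negatively worded) items contribute (5 - response), and the sum is multiplied by 2.5 to give a score in the 0-100 range. A minimal sketch of this standard computation:

```python
def sus_score(responses):
    """Compute the SUS score from ten answers (Q1..Q10) on the 1-5
    Likert scale, using the standard odd/even scoring rules."""
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, all-neutral answers (3 for every item) yield the midpoint score of 50.0, while the document's reported averages of 53.1 and 77.5 are means of such per-participant scores.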
The purpose of the first testing was to determine the initial user response to the application, as well as to test the accuracy of the editing using only one device.

Two EMG sensors - one for each hand
Testing confirmed the accuracy of data acquisition from the MYO armband (EMG sensor, gyroscope). Users perceived the response and accuracy of the object transformations as acceptable. However, usability testing showed that object editing with a single MYO armband is difficult. Performing the stop gesture on the same EMG sensor that supplies the data for processing proved impractical: performing the gesture itself moved the user's hand, so the processed data were incorrect.
Therefore, a second EMG sensor was used for the next prototype. This makes it possible to exit the edit mode with one hand and to easily manipulate the object with the other. In the settings, the user chooses which MYO armband will be used on which hand, or can choose to use only one armband.

Improving the interactivity using Kinect sensor
Another situation needs to be taken into account to improve the intuitiveness and naturalness of movements in the VE. After grabbing an object, the user may tend to move it by moving the entire body (by stepping), not only by moving the upper limb. In this case, the hand movement sensor (gyroscope) registers nothing, and it is necessary to add another sensor to detect human movement in space. The LIRKIS CAVE has an OptiTrack system, but it is only used to detect the position of the eyes in space, for the right viewing angle of the scene. Using this sensor could cause inaccuracies, for example when the trunk flexes. Therefore, the Kinect was chosen as a suitable sensor. With skeleton tracking, it can capture the center of the body (hip center) and eliminate inaccuracies such as those arising with the OptiTrack. By adding the Kinect, it is possible to move an object in the scene even with body movement that would be undetectable by the MYO armband.
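The combination of body and hand movement described above can be illustrated with a simple additive model (a Python sketch; the function names and the additive composition are assumptions, and the real system's coordinate handling may differ): the step the user takes, measured as the hip-center delta from the Kinect, is added to the hand-driven offset obtained from the MYO.

```python
def combine_motion(prev_hip, hip, hand_offset):
    """Add the user's step (hip-center delta from Kinect skeleton
    tracking, invisible to the MYO) to the hand-driven offset, giving
    the object's total displacement in the scene."""
    body_delta = tuple(h - p for h, p in zip(hip, prev_hip))
    return tuple(b + o for b, o in zip(body_delta, hand_offset))
```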
The location of the Kinect in the LIRKIS CAVE is also an important factor. The minimum Kinect distance is 40 cm in Near mode, with a tilt range of -27° to +27°. Due to the sufficiently wide range of adjustment angles, the device can be placed on the floor of the cave. Also, it is not necessary to place the device facing the user; the final location can be seen in Fig. 6 (top view).

Verification of the second prototype
In the second verification, both EMG sensors and the Kinect were used. The participants were the same as in the first testing and were again familiarized with the functionality of the application. After completing the testing, participants filled in the SUS questionnaire again (Tab. 2).
The scenario was the following:
1. Move the object to the platform (marked place) in the virtual scene (also using the Kinect).
The purpose of the second test was to determine whether repeated use of the application improves the user experience. Users' response to object editing with a combination of multiple sensors was also tested, along with how this combination affects the complexity of application usage and the accuracy of editing.

RESULTS
The result is the MyoGest system, which connects two EMG sensors (MYO armbands) with the Kinect in the LIRKIS CAVE environment. MyoGest is currently set up for a maximum of one user. The user can manipulate objects in the scene by moving the upper limb (MYO), using hand gestures (MYO), as well as by body movements (Kinect). Object manipulation includes 3-axis movement, rotation, scaling, and resetting the performed object transformation. The data obtained during object manipulation are processed and sent to the LIRKIS CAVE system.
Given the test results, moving the object forward using the rotation of the MYO armband proved to be non-intuitive. Instead, users tried to move the object by moving their upper limb forward. Detection of such movement is possible in several ways, for example by adding additional Microsoft Kinect devices along the sides of the LIRKIS CAVE, or by adding another MYO armband on the upper arm (above the elbow), from whose movement the forward movement of the forearm could be calculated. However, the resulting convenience of using three sensors should be considered.
Testing occasionally resulted in an unintentional reset of objects, especially during the execution of the 'quit' gesture. Sometimes the wrong edit mode was also activated. The problem was often a poor evaluation of the gesture by the Myo Connect system. Therefore, users should create and calibrate their own user profile for the connected armbands in Myo Connect before launching MyoGest.
Testing has shown that users need to become familiar with the application before using it for the first time. A participant who had no experience with the MYO armband had significant difficulties in making certain gestures, such as the wave gesture. As a result, this user was unable to successfully complete the first test. Also, participants who had no previous experience with the application had trouble remembering which edit mode corresponds to which gesture. Therefore, they had to constantly look outside the LIRKIS CAVE to see the appropriate gestures in the user interface. In the future, an icon with a gesture image could be added to the VE itself.
Moving an object using the Microsoft Kinect is intuitive and increases the accuracy of object editing. To be recognized, a person has to stand at least about 50 cm from the Kinect. At this distance, the Kinect can quickly detect a new person. Once the person is recognized, the device can track them in the whole area of the LIRKIS CAVE.
From the test results, it is clear that adding more sensors (EMG, Kinect) improves the user experience and the accuracy of measurement. Improvements have been demonstrated on all SUS items (Table 6: SUS Form Result). The average SUS result after the first test was 53.1, which is well below the average SUS value of 68.0. However, the addition of more sensors increased the average SUS result to 77.5 after the second test. The changes were most noticeable in the questions about the complexity of using MyoGest. It can therefore be concluded that adding another EMG sensor and the Kinect increases the usability of the MyoGest system.

CONCLUSION
The development of controls for VR systems enhances the communication and level of immersion between the user and the VE. As development progresses, innovative systems become more accessible to users, which contributes to the improvement of the technology. They are also available for testing and training in rehabilitation activities, e.g. those focused on motor skills. This paper describes the possibility of enhancing the interaction in the LIRKIS CAVE using MYO armband(s) and the Kinect sensor. For the CAVE, a presentation scene with a cube object was created. To edit this object, the MyoGest application was implemented. The application communicates with the sensors (MYO, Kinect) and the LIRKIS CAVE at the same time. Editing involves changing three basic transformations (position, rotation, and scale) of the object. The results show that the MYO device is suitable for simple editing of an object in a VE with hand gestures.
Further development can focus on optimizing the system for even more sensitive detection of motion and hand gestures within more complex virtual scenes. For more precise hand movement, it could be interesting to use two MYO armbands on one arm (upper arm and forearm). This type of interaction could be used, for example, in fine motor skills training or the rehabilitation of people with disabilities, in cooperation with the Institute of Measurement of the Slovak Academy of Sciences.