Augmented Reality for Personalized Learning Technique: Climbing Gym Case Study

Augmented Reality is a technology that expands traditional learning techniques by complementing the perception of, and interaction with, the real world: the student remains in the real environment while receiving additional, computer-generated information. However, applying this technology in the field of personalized education is not yet common practice. In this article, personalized education strategies are applied to the development of an application for teaching indoor climbing techniques. The application allows the trainer or climber to select the climbing holds that make up a route and display it with a projector, customizing the training program. The system includes an algorithm for detecting and recognizing climbing holds in real time and for visualizing the route to climb. Applications of this kind, built on emerging technologies and oriented toward personalized training, have enormous potential for efficient education.


Introduction
Nowadays, the use of IT to create immersive environments for learning specific topics has been implemented in a wide variety of disciplines. At the edge of technological advances, the education sector is going through a renovation in which technology is used as a tool, adding value to the learning process and to shared knowledge. Through IT, communication spaces can be established that adopt a new way of relating to the object of study and, in addition, allow the transmission of comprehensive knowledge between the student, the facilitator and the environment.
Education through virtual environments is more focused on the needs and pace of student learning. Virtual education therefore promotes connections not only with technology but also between the facilitator, the student and their behavior, thus allowing greater interconnectivity with the world and with the sources of information and promoting collaborative learning. The virtual education perspective includes certain categories that are appropriated in a very specific way, according to the learning criteria being worked on (Chen & Yang, 2014).
On a technological level, virtual environments are based on a human-computer interaction, where the user interacts with the elements of the system and the server makes the connection with the environment possible. The application of virtual environments has great potential in education, more specifically when talking about contexts where learning is immersive or exploratory (Zamora-Musa, 2016).
The main contributions of this work are the following:
• We present the conceptual framework of personalized learning as an educational approach and the principles of adaptive educational systems.
• We show that it is possible to implement a personalized training program through an emerging technology such as augmented reality.
• We develop a meaningful-learning framework for physical education, specifically for rock climbers, using an augmented reality application.
• The application recognizes the movements of the human body, as well as its interaction with virtual objects, using 3D sensors.
Recently, many solutions have been proposed that apply human-computer interaction technologies to real-world sports such as trampolining, climbing and mixed martial arts (Kajastila & Hämäläinen, 2015). For the sport of climbing, a number of projects and studies aim to support the training process, for example Strange Beta (Phillips, Becker & Bradley, 2012), which uses a mathematical model and machine learning for the design of indoor climbing routes. It is an assistant that configures climbing routes through machine learning: climbers were analyzed so that their movements along routes could be described and the system could learn the patterns. Due to its nature, the system is designed for experienced climbers and expert trainers. Using Strange Beta consists of defining one or more routes in a computer-readable descriptive language for climbing routes.
Another proposal consists of an automatic detection and classification system for climbing activities based on inertial measurement units (IMUs) placed on the wrists and feet of the climber, which record limb acceleration and angular velocity (Boulanger, Seifert, Herault & Coeurjolly, 2016). The project focuses on the climber's postural regulation and the free movement of the limbs along different routes. It builds on the behavior of climbers when they remain static due to fatigue; expert climbers, for example, tend to try at least three climbing holds before choosing the ideal one. The objective is therefore to detect and quantify some common climbing activities: immobility, postural regulation, hold exploration, etc. The resulting system requires manual corrections to track progress, and on this basis a statistical model is constructed for the norms of acceleration and angular velocity.
Another project, ClimbSense, is an automatic recognition system for climbing routes that, like the previous proposal, uses IMUs on the wrists, extracting the characteristics of a recorded climb and using them as training data for the recognition system (Kosmalla, Daiber & Krüger, 2015). This research likewise focuses on optimized route tracking: the system automatically records and recognizes the route a user climbed during a climbing session.
It should also be noted that interactive climbing walls already exist, mostly focused on the use of sensors and lights. One climbing wall enhanced with hardware and software combines computer games with sport climbing (Liljedahl, Lindberg & Berg, 2005). Each wall incorporates a printed circuit board (PCB) with a capacitive sensor and LEDs, and transmits sounds and music to convey a better gaming experience.
One of the first projects in the literature with some similarity to the present work is the one presented by Daiber: a system that provides an intuitive way to create and share routes. The paper presents BouldArt, a mobile augmented reality application for adjusting several parameters in climbing training (Daiber, Kosmalla & Krüger, 2013). This approach supports cooperative training and uses synthetically generated images of climbing walls, which are then used as traces for existing real walls. To establish the automated system, photographs of a large number of holds were taken, trimmed and stored in a database. Subsequently, an image of the wall is created by aligning the visualized holds on a predefined grid with the dimensions of the real ones.
As shown above, a market of novel products has grown up around offering technological experiences in combined real and virtual environments, for example games based on the practice of real-life sports. These games integrate new technologies for digital image processing and computer vision into learning and training, and are quickly becoming a new category for users who enjoy sports games with human-computer interaction (Kim, 2017). What distinguishes the present work is the combination of the principles of personalized education with their application to accelerate the acquisition of skills and abilities, turning climbing training into an efficient process that adapts to the needs of the climber. In our project, we are developing a new augmented reality climbing wall that combines analysis of the wall, graphics projected onto an artificial climbing wall, and body tracking through computer vision.

Methodology
Much discussion is dedicated to the radical change of the learning process and of almost all the parameters involved in learning: where we learn, when, how, with whom and from whom, and of course what and, especially, for what we learn (Collins & Halverson, 2010). Today, technology is advancing rapidly, providing benefits to users and demanding changes in the traditional educational process, in which the teacher or coach was the only carrier of the information being shared. The traditional education system, built to standardize the way of teaching, fails for the simple reason that two students learning the same subject do not necessarily learn at the same pace or follow the same pathway. Each person has different learning needs at different times to process information.
Modern society requires new, effective learning methods that pay attention to the process, because what is learned is as important as how it is learned. It is about guiding the learning process (how to learn) to develop the skills of learning for oneself: "learning to learn" and "learning to think". In the constructivist model, the teacher is a mediator of learning in two senses: first, guiding and structuring learning according to the student's needs and, second, building and offering meaningful material or creating meaningful content.
The growing importance of constantly updating professional skills and of continuous improvement is a reality people face throughout their lives. One way of accessing knowledge in the information society is the individual learning path. In recent years, the opportunities, resources and instruments for learning have diversified and ceased to be associated exclusively, or primarily, with a single context of activity, usually formal education; the focus of interest shifts to the learning experiences and processes that take place in the different contexts of activity through which people pass (Arnseth & Silseth, 2013).

Towards a personalized learning process
Learning styles are cognitive, affective and physiological traits that serve as relatively stable indicators of how students perceive, interact with and respond to their learning environments (Keefe, 1988). Cognitive traits have to do with the way students structure content, form and use concepts, interpret information, solve problems and select means of representation: visual, auditory, kinesthetic, etc. Affective traits are linked to the motivations and expectations that influence learning, while physiological traits are related to the student's biotype and biorhythm. The learning style is the way in which a learner begins to concentrate on new and difficult information, and then treats and retains it.
The following attributes describe the essential parts of the personalized learning model (Benson, 2013):
• Flexible learning environment: multiple instructional delivery approaches that continuously optimize available resources in support of student learning. Instructional materials allow students to solve practical tasks in different ways and at their own pace.
• Learner profiles: analyze the abilities of each participant and capture individual skills, gaps, strengths, weaknesses, interests and aspirations.
• Personal learning paths: each student has learning goals and objectives. Learning experiences are diverse and matched to the individual needs of students, who should have frequent opportunities to reflect on what they are learning, to apply knowledge in authentic and relevant contexts, and to reflect on their success in learning.
• Individual mastery: instruction is aligned to specific student needs and learning goals, and student progress is continuously assessed against clearly defined standards and goals. Students advance based on demonstrated mastery and targeted instruction.
So, each person develops and refines a certain strategy to reach meaningful knowledge. Some learn from reading, others from practice or group work, others from individual isolated work; in reality, every student combines traits of different learning styles in different proportions, and we consider these different styles while planning the application.
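As an illustration, the learner-profile attribute described above can be sketched as a small data structure. The field names and the 0-to-1 mastery scale are our own assumptions for this sketch, not part of any system described in the cited literature:

```python
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    """Captures the skills, gaps, interests and goals of one learner.

    The 0..1 mastery scale and the field names are illustrative assumptions.
    """
    name: str
    skills: dict = field(default_factory=dict)    # skill -> mastery in [0, 1]
    interests: list = field(default_factory=list)
    goals: list = field(default_factory=list)

    def gaps(self, required: dict) -> list:
        """Skills whose current mastery is below the level a task requires."""
        return [s for s, need in required.items()
                if self.skills.get(s, 0.0) < need]

profile = LearnerProfile("Ana", skills={"footwork": 0.8, "grip": 0.4})
print(profile.gaps({"footwork": 0.7, "grip": 0.7}))  # -> ['grip']
```

A trainer could use such a profile to pick routes that exercise exactly the skills listed as gaps.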

Meaningful learning
An important factor for meaningful learning is prior knowledge, prior experience or prior perception; in addition, the student must express a willingness to relate new knowledge non-arbitrarily to their cognitive structure (prior knowledge). Moreover, to achieve meaningful learning, the material or content should be potentially significant, meaning that it can be substantially related to some specific cognitive structure of the student and must have logical meaning. This meaning refers to the inherent characteristics and nature of the material or content, which allow it to be related intentionally and substantially to the corresponding relevant ideas available in the student's cognitive structure.
When the potential meaning becomes new, differentiated and idiosyncratic cognitive content within a particular individual as a result of meaningful learning, it can be said that it has acquired a psychological meaning. The emergence of psychological meaning thus depends not only on the student's representation of the logically significant material, but also on the student actually possessing the necessary ideational background in their cognitive structure (Ausubel, 1983). The fact that psychological meaning is individual does not exclude the possibility of meanings shared by different individuals; the meanings of concepts and propositions held by different individuals are homogeneous enough to enable communication and understanding between people.
When a student shows a disposition to relate new knowledge to their cognitive structure in a substantive, non-literal way, this indicates a disposition for meaningful learning. Thus, regardless of the potential meaning of the material to be learned, if the student intends to memorize it arbitrarily and literally, both the learning process and its results will be mechanical. Conversely, if the material is not potentially significant and cannot be related to the student's cognitive structure, then regardless of the student's disposition, neither the process nor the result will be meaningful.
The facilitation of meaningful learning according to Ausubel is the deliberate manipulation of relevant attributes of the cognitive structure for pedagogical purposes. It can be implemented in two ways (Ausubel, 1983): substantially and programmatically.
Substantially, for organizational and integrative purposes, this means using those unifying concepts and propositions of the content of the learning subject that have the greatest explanatory power, inclusiveness, generality and relatability.
Programmatically, it means using programmatic principles to order the subject of learning sequentially, respecting its internal organization and logic, and planning the implementation of practical activities.
In substantive terms, Ausubel postulates that to facilitate meaningful learning it is necessary to pay attention to both the content and the cognitive structure, trying to manipulate both (Ausubel, 1983). A conceptual analysis of the content is needed to identify the concepts, ideas and basic procedures and to concentrate the instructional effort on them. It is important not to overload the student with unnecessary information, which makes cognitive organization difficult. The most important aspects of the content of the specific subject must be explicitly related to the relevant aspects of the learner's own cognitive structure.
This relationship is essential for meaningful learning. In summary, a prior analysis of what is going to be taught is essential. Many times, the order in which the main concepts and ideas appear in educational materials and programs is not the most appropriate to facilitate interaction with the student's prior knowledge.
The critical analysis of the teaching subject must be done with the student in mind. In the case of the climbing discipline, we develop a program that records the mistakes made by the climber. These mistakes are then analyzed by the trainer together with the climber, and this moment is suitable for providing meaningful learning based on previous experience.

Augmented reality in educational process
Thanks to technological advances, there are many innovative solutions in different areas, such as technologies and interfaces for immersive environments.
Augmented reality (AR) in education supplements reality rather than replacing it, as virtual reality (VR) does, with digital information designed to be engaging and relevant to the activity learners are engaged in. Augmented reality provides an understandable and positive experience of the surrounding world only if the real and virtual scenarios are synchronized in space and in context. AR technology renders computer-generated content on the user's physical surroundings. Klopfer (2008) characterizes AR in terms of the amount of digital media provided to the learner, ranging from lightly augmented reality, where information comes primarily through the real-world environment, to heavily augmented reality, where most of the information is provided virtually by the device.
The architecture of any AR system is based on two critical elements: tracking and visualization. The degree of immersion and integration in mixed reality depends on them. The tracking system determines the exact position and orientation of real and virtual objects in the real world. The graphic (visualization) system, in addition to generating the virtual objects, combines all the elements of the scene, real and virtual, and shows them on the screen. Correct and effective visualization of these data using AR technology can reduce misunderstanding and misinterpretation in spatial and logical aspects. There are various significant applications of AR in education (Reinoso, 2012):
(1) Discovery-based learning.
(2) Development of professional skills. Vocational training is one of the main areas of application of AR, improving understanding in practical training activities and recreating real work situations.
(3) Books and learning materials with AR.
(4) Educational games with AR. These include games based on markers and codes, in which 3D elements interact; games based on gesture recognition, in which the user is part of the game interface; and games based on geolocation, which are played in a social and collaborative way and in which the physical space becomes the game scenario.
(5) Modeling 3D objects. Using object modeling tools and AR applications, the student can create and visualize 3D models and manipulate them: zoom in and out, rotate them, place them in specific locations or explore their physical properties.
The quantitative study of Redondo (Redondo, Fonseca, Sánchez & Navarro, 2014) on the advantages of AR applications in the educational process reports improvements both in the degree of motivation shown by students and in their final grades. Augmented reality, which provides new environments to explore, new challenges and new ways of teaching, can thus be adapted to different learning abilities.
Explanations of the differences in how people learn do not focus only on cognitive factors related to how information is received and processed, such as learning styles. Several areas of research point to the important effect of positive emotions on successful personalized learning. In this context, AR reinforces learning and increases the motivation to learn. This supports better design, because it addresses a more comprehensive set of psychological factors, such as the immersive experience of AR reaching a high level of enjoyment.

Case study: Development of the training solution for the Climbing gym
Nowadays there is a great variety of sports games that make use of new technologies. In relation to rock climbing, there are games played on artificial climbing walls, onto which a virtual game is projected. These games use Microsoft's native technology, Kinect. However, such games are usually expensive and hardly personalized, since most companies offering this type of product are international firms dedicated specifically to the development of interactive games and offering complete equipment for their installation. In contrast, the present work aims to cover the needs of local climbing schools by implementing the principles of personalized teaching and by developing an algorithm that includes procedures for digital image processing with computer vision methods for the detection and analysis of climbing holds. This supports a route projection system on climbing walls that works as a tool for improving indoor climbing training and facilitates the learning of skills. The focus group of this application is mainly beginners; however, the application can be adapted at any time to any installation and any climber level, encouraging the practice of climbing.

Recognition of climbing holds on climbing walls in real time
The design process involves the following key stages:

 Acquisition of images
To acquire images correctly, it is necessary to evaluate the different factors that directly affect the capture process: the hardware and software involved, as well as the environment and the positioning of the elements (lighting, climbing wall, position of the camera, etc.). The Kinect ONE v2 RGB video camera was used to acquire the scenes in real time, with a resolution of 1920x1080 at 30 fps. Another consideration is the programming language and the characteristics of the processing device. The algorithm was developed in C++ together with OpenCV, an open source library for artificial vision applications, and Android Studio was used for mobile development. As for the computational requirements for processing, important aspects such as the processor, hard disk, RAM and video card were considered.
Finally, the elements that define the environment and the way of placing them were identified: a concept of the experimental environment consists of a climbing wall, a multimedia projector, the Kinect camera and a computer.

 Image preprocessing
The starting point of any image processing is enhancement: improving the quality and the information content of the original data before processing. In our case, it consists of eliminating the noise produced by the camera and the exposure effects of the lighting that alter the images, which makes it possible to highlight the important aspects we need to analyze. The Gaussian filter, one of the best-known filters for noise elimination, is based on the mathematical operation of convolution: each point of the input matrix is convolved with a Gaussian kernel by traveling pixel by pixel over the image with a mask (kernel) of size NxN. The Gaussian kernel is defined as follows:

G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))   (1)

and the smoothed image is the convolution of the input with this mask:

f_smooth(x, y) = Σ_i Σ_j G(i, j) · f(x − i, y − j)   (2)

Then, for the enhancement of the climbing holds, image subtraction (restoration) was applied, a common arithmetic operation in computer vision. Background subtraction is widely used for object detection: it is the difference between a current pixel and a reference pixel, in our case from the background image. The climbing wall does not vary much with time; the camera is statically focused on it, and the wall only changes when new climbing holds are placed, which is when the detection procedure is performed. The areas where the difference is significant indicate the location of a new object. Background subtraction attempts to eliminate variations in color levels, first approximating them analytically with a background image and then subtracting this approximation from the original image. So, the new image is:

f_sub(x, y) = | f(x, y) − f_bg(x, y) |   (3)

where f(x, y) is the current frame and f_bg(x, y) is the background image.
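The Gaussian smoothing and background subtraction just described can be sketched in a few lines of plain Python. This is a didactic sketch only; the actual system uses the OpenCV implementations in C++:

```python
import math

def gaussian_kernel(n, sigma):
    """Build the n x n Gaussian mask, normalized so its weights sum to 1."""
    half = n // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

def convolve(img, kernel):
    """Slide the mask pixel by pixel over the image (borders clamped)."""
    h, w, n = len(img), len(img[0]), len(kernel)
    half = n // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for j in range(n):
                for i in range(n):
                    yy = min(max(y + j - half, 0), h - 1)
                    xx = min(max(x + i - half, 0), w - 1)
                    acc += img[yy][xx] * kernel[j][i]
            out[y][x] = acc
    return out

def subtract_background(frame, background):
    """Per-pixel absolute difference; large values mark newly placed holds."""
    return [[abs(f - b) for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]
```

In production one would call the equivalent OpenCV functions (GaussianBlur and absdiff), which are heavily optimized, rather than looping in Python.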

 Segmentation, Recognition and classification
Image segmentation is the process of partitioning a digital image f(x, y) into a set of discrete, non-overlapping, homogeneous regions with respect to some criterion common to the entire image. It is typically used to locate objects and boundaries in images, and a wide variety of techniques exist; the choice depends on the conditions of the problem to be solved. The objective of segmentation is to separate the objects of interest from the non-relevant rest, considered as background. To segment the climbing holds, the background was subtracted from the image in the RGB color space, and the result was transformed into the HSV color space. The HSV model is obtained by deforming the representative RGB cube into an inverted hexagonal pyramid. To threshold the image, a certain range is taken; in this case, the largest range characterizing the black color in the HSV model, that is, the background color obtained from the background subtraction. The following step creates an inverted binary mask to obtain the objects:

f_mask(x, y) = 0, if f_thres(x, y) > T; 255, otherwise   (4)

In the next step, we find the contours of the segmented image. Contours are curves joining the continuous points of an object that have the same color or intensity. They help us analyze shapes and therefore detect and recognize objects. Finding contours in a binary image is much simpler, since the objects are white and the background is black. For finding and drawing contours, OpenCV provides the functions findContours() and drawContours(). The first recovers the contours of a binary image using the algorithm of Suzuki (Suzuki, S. and others, 1985), which is based on a fundamental technique in binary image processing: border following.
In C++ the function is called as findContours(image, contours, hierarchy, mode, method), where: image is the binary input image with a single 8-bit channel; contours is an array of point vectors where the detected contours are stored; hierarchy is an optional output vector that stores information about the image topology; mode is the contour retrieval mode, for example RETR_EXTERNAL, which retrieves only the extreme outer contours; and method is the contour approximation method, for example CHAIN_APPROX_NONE, which stores all contour points.
We used drawContours() to draw the contours. Its first argument is the original image, the second is the array of contours, the third is the index of the contour to draw (used for drawing individual contours), and the remaining arguments are optional, such as color and thickness.
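The masking step of Eq. (4) is what OpenCV's inverted binary threshold (cv::threshold with THRESH_BINARY_INV) computes. A plain-Python sketch of that operation:

```python
def inverted_binary_mask(img, T):
    """Eq. (4): pixels whose value exceeds T become 0, all others 255."""
    return [[0 if v > T else 255 for v in row] for row in img]

# Tiny demo on a 2x2 "image": values above T = 40 are suppressed to 0,
# the remaining pixels become 255 and can be passed to contour finding.
mask = inverted_binary_mask([[200, 10], [30, 250]], T=40)
print(mask)  # -> [[0, 255], [255, 0]]
```

In the real pipeline this mask is the binary image handed to findContours().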

 Mobile application development
The Canvas class of Android represents a drawing surface on which lines, circles, text, etc. can be drawn through the variety of methods it provides. For the mobile application that shows the detected climbing holds, an HTTP connection is made from Android to a web server: each time objects are detected, they are sent to the server and stored in a .txt file for later use; in our case, they are drawn in the mobile application. The general functionality of the application is the following: (1) By monitoring the changes in the climbing wall, the server is notified every time a new climbing hold detection is made. In turn, the server sends a notification to the cellphone through the application.
(2) Upon receiving the notification, the mobile application makes a connection to the server to obtain the updated detected objects.
(3) These objects are drawn and represented as contours.
(4) The route is defined by selecting the climbing holds in the application.
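Steps (2)-(4) above can be sketched as follows. The one-hold-per-line "id,x,y" format is a hypothetical example; the paper does not specify the layout of the stored .txt file:

```python
def parse_holds(text):
    """Parse the detected holds fetched from the server.

    Assumes a hypothetical 'id,x,y' line format (not specified in the paper).
    """
    holds = {}
    for line in text.strip().splitlines():
        hold_id, x, y = line.split(",")
        holds[int(hold_id)] = (float(x), float(y))
    return holds

def define_route(holds, selected_ids):
    """Step (4): keep the positions of the holds the user tapped, in order."""
    return [holds[i] for i in selected_ids if i in holds]

data = "0,120.5,340.0\n1,200.0,310.5\n2,260.0,250.0"
route = define_route(parse_holds(data), [2, 0])
print(route)  # -> [(260.0, 250.0), (120.5, 340.0)]
```

The resulting list of coordinates is what the Canvas drawing code would iterate over.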

 Testing process
To obtain better results, a controlled environment was necessary, since a natural environment has unpredictable starting conditions. We used frontal lighting, where the light falls directly on the object, making it possible to distinguish the details of the objects as well as their shape. The prototype was tested on the MundoBloke climbing wall, located in Irapuato.

Interaction of the human body with virtual objects using 3D sensors
The next stage of the project involves the interaction of the climber with the created route. The system was developed in the Microsoft Visual Studio 2017 programming environment, using C++ as the programming language, the Kinect for Windows SDK 2.0 software development kit, the NtKinect library (Nitta and Murayama, 2018) and the open source computer vision library OpenCV. Figure 5 shows the general procedure for recognizing the movements of the human body and its interaction with virtual objects visualized in real time.

-User
Kinect 2 for Windows can track up to six people simultaneously within its field of view and can detect 25 joints for each of them. People can be detected while standing or sitting. The optimal distance for detecting the human body with Kinect is 0.5 m to 4.5 m, and it has a horizontal viewing angle of 70° and a vertical one of 60°.
To interact with the system, the user must be positioned in front of the Kinect device, either standing or sitting, at the distance mentioned above.

Figure 5. General process of the interaction of the human body with virtual objects.
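The tracking envelope just described (0.5-4.5 m, 70° horizontal field of view) can be checked with a small helper. This is only a geometric sketch; the sensor itself reports whether a body is actually tracked:

```python
import math

def in_tracking_range(x, z):
    """True when a point (x = lateral offset, z = forward distance, in metres)
    lies 0.5-4.5 m from the sensor and inside the 70-degree horizontal FOV."""
    dist = math.hypot(x, z)
    angle = math.degrees(math.atan2(abs(x), z))  # angle off the optical axis
    return 0.5 <= dist <= 4.5 and angle <= 35.0  # half of the 70-degree FOV

print(in_tracking_range(0.0, 2.0))  # -> True (straight ahead, 2 m away)
```

Such a check could be used to prompt the user to step back into the sensor's working volume.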

-Kinect
Kinect is a motion detection device created by Microsoft for Xbox console games and Windows personal computers. The versatility of Kinect allows it to track the movements of a complete human body, as well as detect small hand gestures.
The Kinect sensor provides color image frames from its RGB camera. It also has an infrared emitter that, together with a depth sensor, can measure the depth of the captured images at a millimeter resolution. It has a four-microphone array that transfers audio data to the SDK development kit.

-Image capture
The Kinect RGB camera can acquire images with a resolution of 1920x1080. Because the OpenCV library uses the BGR or BGRA format by default, we decided to use the NtKinect library, which provides functions that convert BGRA to RGB automatically, without the programmer having to write additional code. NtKinect uses the setRGB() function to obtain the RGB image from the Kinect camera and handles it through the rgbImage variable.
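The format conversion NtKinect performs is, at its core, a per-pixel channel reorder; sketched in Python on a list of pixel tuples:

```python
def bgra_to_rgb(pixels):
    """Reorder each (B, G, R, A) pixel to (R, G, B), dropping the alpha channel."""
    return [(r, g, b) for (b, g, r, a) in pixels]

frame = [(255, 0, 0, 255), (0, 0, 255, 255)]  # pure blue, pure red in BGRA
print(bgra_to_rgb(frame))  # -> [(0, 0, 255), (255, 0, 0)]
```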

-Human body detection
As mentioned above, the depth sensor can detect up to 6 people simultaneously, with 25 joints each when standing and 10 joints when sitting, at a distance of 0.5 m to 4.5 m.
To detect a human body, it is first necessary to obtain the positions of the joints using a structure defined by the Kinect for Windows SDK v2.0 called "Joint", which has the following member variables: • JointType: the type of joint.
• Position: 3D coordinates that represent the position of the joint.
• TrackingState: Value used to indicate the tracking status of the joint.
Using the NtKinect library, the joint information can be accessed through the setSkeleton() function.
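The Joint structure and a tracking-state check can be mirrored in a small sketch. The field names follow the SDK description above; the enum values and sample coordinates are illustrative assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class TrackingState(Enum):
    NOT_TRACKED = 0
    INFERRED = 1
    TRACKED = 2

@dataclass
class Joint:
    joint_type: str              # e.g. "HandLeft", "HandRight"
    position: tuple              # (x, y, z) in CameraSpace, metres
    tracking_state: TrackingState

skeleton = [
    Joint("HandLeft",  (0.20, 1.10, 1.80), TrackingState.TRACKED),
    Joint("HandRight", (0.45, 1.05, 1.85), TrackingState.INFERRED),
]
# Keep only hand joints whose position the sensor is confident about.
tracked_hands = [j for j in skeleton
                 if j.joint_type.startswith("Hand")
                 and j.tracking_state is TrackingState.TRACKED]
```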

-Obtaining the coordinates of the hands
To obtain the coordinates of the hands, it must be taken into account that Kinect v2.0 has three coordinate systems: ColorSpace, DepthSpace and CameraSpace. When information obtained from different Kinect sensors is used at the same time, the coordinates must be converted to match. For instance, the CameraSpace coordinate system is a 3D system (x, y, z), while the ColorSpace system is 2D (x, y). In this case, a coordinate mapping is performed using the Kinect for Windows SDK v2.0 ICoordinateMapper class, which converts from the 3D coordinate system to the 2D one.
Hand positions are obtained from the Joint entries whose JointType is JointType_HandLeft for the left hand and JointType_HandRight for the right hand. These positions are expressed in CameraSpace and must be mapped to ColorSpace to obtain their X, Y coordinates, and thus their corresponding position in the RGB image.
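The mapping from 3D camera space to 2D color pixels can be illustrated with a simplified pinhole projection. The real conversion is performed by the SDK's ICoordinateMapper, which uses the device's factory calibration; the focal length and principal point below are assumed values for the 1920x1080 color image, chosen only to make the sketch concrete.

```cpp
struct ColorPoint { float x; float y; };

// Simplified pinhole projection from 3D camera space (metres) to 2D color
// pixel coordinates. fx, fy, cx, cy are assumed intrinsics, not the
// calibration the SDK's ICoordinateMapper actually uses.
ColorPoint cameraToColor(float x, float y, float z,
                         float fx = 1081.4f, float fy = 1081.4f,
                         float cx = 960.0f, float cy = 540.0f) {
    // Perspective divide by depth, then shift to the image centre.
    // Image y grows downward, hence the minus sign on the y term.
    return { fx * x / z + cx, -fy * y / z + cy };
}
```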

-Virtual object display
Virtual objects are displayed on the monitor using OpenCV. Each object is drawn as a circle with the cv::circle() function of the OpenCV library.
One of the input parameters of cv::circle() is the X, Y coordinate of the point on the screen where the circle is displayed. The X, Y coordinates of the center of each circle are stored in a vector so that their positions are available for later comparison.

-Collision detection
A collision is the interaction or clash between two or more bodies where at least one of them is in motion. Therefore, to detect the interaction of the hands with the virtual objects, it is necessary to track the hands and obtain their positions at every moment in order to compare their coordinates with those of the circles. However, only the coordinates of the center of each circle are stored, while a collision must be detected from the moment the hand touches the circumference. It is therefore important to take the radius of the circles into account when comparing coordinates.

-Event Activation
This phase defines what happens after a collision between the user's hand and a virtual object has been detected. Two actions are given to the user: the virtual object displayed on the screen can be moved by closing the hand over it, and positioning the hand on the object changes its color.

-Show result to user
The result of the collision is visualized by changing the circle's color to green, indicating that the user has placed a hand on a virtual object. In addition, when the hand is closed over the object, the circle updates its X, Y coordinates and moves together with the hand; when the user opens the hand, the circle stops moving.
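The hover-and-drag behaviour described above can be sketched as a per-frame update function. The names are ours; in the real program the highlighted flag corresponds to drawing the circle in green with cv::circle, and the hand-closed state comes from the Kinect hand-state tracking.

```cpp
struct VirtualObject { float cx, cy, r; bool highlighted; };

// Per-frame update: hovering highlights the object (shown to the user as a
// green circle); closing the hand while hovering drags the object with the
// hand; opening the hand leaves it where it is.
void updateObject(VirtualObject& o, float handX, float handY, bool handClosed) {
    float dx = handX - o.cx, dy = handY - o.cy;
    bool over = dx * dx + dy * dy <= o.r * o.r;
    o.highlighted = over;     // colour-change feedback
    if (over && handClosed) { // closed hand over the object: follow the hand
        o.cx = handX;
        o.cy = handY;
    }
}
```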

-Summary of Results
In this project a basic computer system was developed to interact with virtual objects displayed on a computer screen. We found that the Kinect tends to be inaccurate when the user does not face the device at an appropriate distance, position and lighting, which produces false positives: another part of the body touching a virtual object is registered as a valid interaction even though the hands never touched it. To mitigate this, a timer was added to the contact detection: the hand must remain on the object for one second before the contact is considered a valid interaction or collision. This significantly reduced the incidence of false positives.
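The one-second dwell filter can be sketched as a small state machine. The class and its interface are our own illustration of the idea, not the project's code; time is passed in explicitly (in milliseconds) to keep the sketch testable without a real clock.

```cpp
// Dwell-time filter to suppress false positives: a contact is only accepted
// after it has lasted continuously for holdMs milliseconds (1000 ms = 1 s,
// the value used in the project).
class DwellFilter {
public:
    explicit DwellFilter(long long holdMs = 1000) : holdMs_(holdMs) {}

    // Call once per frame; returns true once contact has lasted holdMs.
    // Losing contact resets the timer.
    bool update(bool touching, long long nowMs) {
        if (!touching) { active_ = false; return false; }
        if (!active_) { active_ = true; startMs_ = nowMs; }
        return nowMs - startMs_ >= holdMs_;
    }

private:
    long long holdMs_;
    long long startMs_ = 0;
    bool active_ = false;
};
```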

Results
The project consists of a hardware-software solution for customizing rock-climbing training. The positions of the wall and the climbing holds are determined by computer-vision analysis, using the OpenCV library on the image acquired by the Kinect v2 camera. After the analysis, the climbing holds are classified and appear in their corresponding positions in the Android application. Using the application, the trainer selects a route, which is then illuminated on the wall by mapping with a multimedia projector. The correctness of the route is verified by recognizing the climber's movements with the 3D sensor of the Kinect v2. Based on the results of a series of routes, the trainer obtains error statistics and can individualize the training by modifying the complexity of the routes, the number of repetitions and the required speed.

Conclusions
Analyzing current trends in education, we confirm that recent years have shown that the way knowledge is communicated is changing with the advancement of technology. Virtual education has a strong connection with immersive environments. Debates about the future of education center on changing the learning process: by embracing technology in the classroom, the student obtains meaningful skills through efficient human-machine interaction and develops new potential.
In our project we implement the principles of meaningful and situated learning, and we facilitate communication between trainer and climber through an immersive experience. In its current state the project has several limitations, such as the artificial illumination required for precise climbing-hold recognition, uncertainty in hold classification, and the manual definition of training programs. However, future development, including advanced clustering algorithms, neural networks and self-learning training algorithms, should make it possible to overcome these problems and create a fully functional climbing-training product.