Road to Strengthening the Virtual Infrastructure and Security of Remote Laboratories at Trnava University in Trnava

Many organizations, both large and small, are investigating the potential of storage architectures for their companies. A few years ago we built our own virtualized cloud for REMLABNET, and we are still benefiting from this decision. This paper deals with using a cloud computing platform to provide remote laboratories, and shows how money can be saved when a centralized system serves many consumers. Every consumer can access the centralized portal in the cloud of the REMLABNET Consortium. The paper focuses on university environments where such a cloud already exists and can be used for remote labs. It presents practical knowledge and experience with the operation and management of the virtual platform, and outlines the next stage of its construction.
Keywords—Remote laboratories, cloud computing, REMLABNET, datacentre, data storage, disk array, snapshot.


Introduction
Education and research in the science disciplines are rather demanding activities. Teachers, scientists and researchers are confronted with reduced budgets for their work and with declining student interest. This harms experimental laboratories above all, which in turn reduces the level of science education. In this situation the major challenge is to build laboratories at lower expense. One way out, dating back to the last century, was the building of remote laboratories (RLs), shared by many clients outside formal hands-on laboratories. It subsequently turned out that simple remote experiments did not fulfil educators' expectations, and new, more sophisticated forms of remote experiments stepped in. Examples include extensive graphical support for virtual reality (VR) modeling, a rich scale of artificial intelligence (AI) outputs, etc.
The basic logical structure of a Remote Laboratory (RL) is shown in Figure 1: standard laboratory equipment (right), an interface for communication with a computer or server, and a video camera. Next comes the main communication computer (server), where data from the experiment is encoded into a standard web page and distributed to a client (left) [1]. Our laboratories, hosted at the Faculty of Applied Informatics (FAI), Tomas Bata University in Zlín, and at Trnava University in Trnava (TU), include a greater number of remote laboratories built on the Internet School Experimental System (ISES) [2], for example: Electrochemical cell, Energy in RLC, Incline, Electromagnetic induction, Radioactivity and Wave laboratory, described in [3,4,5,6]. All RLs are placed in the Remote Laboratory Management System (RLMS) REMLABNET, where their functioning is supervised and monitored [7]. Some of the mentioned RLs are equipped with embedded and synchronized simulations [8]. The block scheme of REMLABNET is depicted in Figure 2 with the following parts [9]:
• Data Warehouse (DW): the part of the system for data storage and analysis.
• Reservation and management server: part of the content management system (CMS); it generates a service enabling reservation of an individual remote experiment for a given time period.
• Communication server: the next part of the CMS, a system designed for the transmission of information and for real-time communication, interaction and collaboration in teaching and learning with the RE.
• Virtualized cloud: the virtualized DTC contains physical and virtual servers which provide a variety of services, including web services, file services, etc.
On top of this, the following servers were recently added:
• Diagnostic server of levels I and II [10],
• Embedded simulations server [11].
All these REMLABNET components were placed in the cloud of Trnava University in Trnava [12].

Fig. 2. Idea of the representation of the Remote Laboratory Management System REMLABNET, schematically embedded in a virtualized cloud (shaded area). Mind the "federalization" connection to the RLMS Go-Lab, also serving the Graasp interface [13].

The last three decades have seen the rise of DTC computing in practically every application domain. The move to the DTC has been powered by two parallel trends. Functionality and data usually associated with personal computing have moved into the DTC: users continuously interact with remote sites from local computers while running intrinsically online applications, such as email and chat, and while manipulating data that was traditionally stored locally, such as documents, spreadsheets, videos and photos.
In effect, modern architecture is converging towards virtualization and cloud computing (CC), a paradigm where all user activity is funneled into large DTCs via high-speed networks. Simply speaking, CC is a set of computers, services and infrastructure. The delivered services are meant to reduce the everyday work of consumers, as well as of service providers and IT specialists. CC broadens access to services, reduces infrastructure delivery time from weeks to hours, and charges only for the resources and services actually provided [14].
The main idea of our work and of this paper is to offer clients a new method of providing RLs. We were the first in the world to provide RLs via CC technology. The new concept of our CC is shown in Figure 3, where all the relevant parts of this idea can be seen.
First, consider the main parts of cloud computing. Each cloud is based on three primary service models [15]:
IaaS: Infrastructure as a Service, the standard service model for providing the whole infrastructure.
PaaS: Platform as a Service, the standard service model for providing VMs with operating systems.
SaaS: Software as a Service, the standard service model for providing software features to consumers.
A virtualized DTC contains physical and virtual servers, which provide a variety of services including web services, file services, etc. One advantage of the DTC is application isolation, so that malicious or greedy applications cannot impact other applications co-located on the same physical server. Perhaps the biggest advantage of employing virtualization is the ability to flexibly remap physical resources to virtual servers in order to handle workload dynamics.
Our further aims are: to construct a really stable and dynamically expandable CC for running RLs, and to create the VMs and the linkage of all parts in the cloud, which requires creating communication links, the internal virtual network of the cloud, and all other parts needed for the cloud computing concept. The goal of our work is the new and topical task of providing a new service for consumers: a completely functioning "Remote Laboratory as a Service" (RLaaS) [16].
This is very interesting for all clients of the remote laboratories, because they can find this cloud concept and every RL there. For this purpose we created a consortium named REMLABNET, composed of three universities: Trnava University in Trnava (Slovakia), Tomas Bata University in Zlin (Czech Republic) and Charles University in Prague (Czech Republic). The REMLABNET portal is at http://www.remlabnet.eu [17].

Existing Infrastructure of Remlabnet
Trnava University in Trnava is one of the oldest universities in Slovakia. Its history reaches back to 1635, when it was founded by Cardinal Péter Pázmány. At that time it had four faculties; today it has one more faculty and 1300 times as many students. With the growing development of the university, the requirements and expectations regarding the operational security of the common services needed for the functioning of the university, as well as of the research activities of which REMLABNET is part, were growing too.
As is the case in many organizations, the infrastructure at Trnava University in Trnava has grown in recent years in a quite diverse way, as it was built ad hoc. Servers, switches and also disk storage of various brands and manufacturers meant that the management of such a multi-vendor environment required a significantly differentiated approach for each part. Routine tasks, such as regular backups, operating system updates and server administration, became increasingly demanding and overly dependent on humans, and carried a high risk of service outages and, especially, of data loss.
In this heterogeneous environment we also operated REMLABNET and, what is worse, its cloud service called REMLABGRAB (RLG). From the point of view of the correct functioning of the CC, this was profoundly difficult. Basically, RLG provides easy access to all bidirectional data between the supervising client and the controlling server of the RL. In this respect it differs from a normal client, who has access only to selected data. The supervising client thus has access to all data relevant to the proper functioning of the RL, and can also change the functioning of the RL.
A simple scheme of RLG is shown in Figure 4, including the ISES RL blocks and the Measure Server (MS) unit (in orange). In principle, REMLABNET with all its RLs enters the RLG system, followed by the basic unit DATA Supply, whose function is to provide all cumulative RL data obtained by parsing the output of the REMLABNET communication server. The PHP CLI block serves for creating direct PHP communication commands. The WEB block sets up the communication web page for the supervising client, comprising the list of RLs (Figure 5a, detailed in Figure 5b) and items such as inputs, outputs, IP addresses and camera addresses of the RLs (Figure 6).

Fig. 4. RLG main idea
The RLG unit functions as follows:
1. The supervising client chooses the RL in question.
2. Communication with the RL is established in order to generate RL data.
3. The supervising client can now control the RL and make full use of its data and functioning.
The supervising client can thus read the data from the MS of the RL addressed by the given IP address and corresponding port (Figure 5). The traffic light visible in the figure indicates the availability of the RL. In general, the supervising client obtains all the data needed for work, comprising inputs, outputs, camera address and more. An example of such data is in Figure 6.

Fig. 6. Available information for Electromagnetic induction experiment
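The three steps above can be sketched as a minimal Python client. This is only an illustration of the flow, not the real REMLABNET protocol: the stand-in Measure Server, the port number and the payload format are all assumptions made for the example.

```python
import socket
import threading

def dummy_measure_server(host, port, payload, ready):
    """Stand-in for an ISES Measure Server: accepts one connection
    and sends the cumulative experiment data, then closes."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    ready.set()                      # signal that the server is listening
    conn, _ = srv.accept()
    conn.sendall(payload)
    conn.close()
    srv.close()

def read_rl_data(ip, port, timeout=5.0):
    """Steps 2 and 3: connect to the Measure Server of the chosen RL
    and read everything it provides until the connection closes."""
    with socket.create_connection((ip, port), timeout=timeout) as s:
        chunks = []
        while True:
            chunk = s.recv(4096)
            if not chunk:            # peer closed: all data received
                break
            chunks.append(chunk)
    return b"".join(chunks).decode()

if __name__ == "__main__":
    # Step 1: the supervising client chooses an RL from the list.
    # The address below is a local stand-in, not a real REMLABNET address.
    rl = {"name": "Electromagnetic induction", "ip": "127.0.0.1", "port": 5055}
    ready = threading.Event()
    t = threading.Thread(target=dummy_measure_server,
                         args=(rl["ip"], rl["port"], b"U=1.25 V; I=0.03 A", ready))
    t.start()
    ready.wait()
    print(read_rl_data(rl["ip"], rl["port"]))
    t.join()
```

In the real system the supervising client would additionally send control commands back over the same channel; the sketch only shows the read direction.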
The main asset of the new RLG interface is that it provides the new cloud service, Remote Laboratory as a Service (RLaaS) [12], to our clients: teachers, students and anyone interested in the RLs of science, via the Internet.

New Security Features of Remlabnet
Those responsible took a conscious approach to the matter and decided not to leave anything to chance. Trnava University in Trnava decided that, even with only partial investments in IT development, it would follow a concept of building a secure infrastructure that is as uninterrupted, self-service, universal and cost-effective as possible. We were looking for a secure and future-proof design that can be dynamically scaled according to the needs of individual applications and users, and which can, if necessary, bridge traffic across diverse environments, including the REMLABNET cloud concept.
When designing the development concept, it was necessary to take into account that in the initial state, data storage and management were implemented in several locations and on storage systems from various vendors. It was also not easy to predict how the technologies would evolve and what role cloud computing could play in REMLABNET's needs in the future. NetApp best matched our expectations and requirements with its solutions.
In the first phase, the new NetApp FAS2700 series data storage completely replaced three old storage systems from various vendors. Because it natively allows a combination of block and file protocols over a single disk space, it can be integrated into the infrastructure very flexibly. Thanks to Storage Virtual Machines (SVMs), we can provide every application with the needed protocol, a dedicated logical interface (LIF) and resources, and define its Quality of Service (QoS). The simple transfer of an SVM/LIF between individual controllers, or between several disk arrays within a cluster, subsequently allows us to carry out maintenance and future technological upgrades without interrupting the provision of services. Moving data to faster storage tiers is seamless and transparent to applications. The integrated Non-Volatile Memory Express (NVMe) cache on the controllers acts as an excellent accelerator of IO operations, which had been insufficient on the original storage systems and had slowed down the operation of applications.
Thanks to the new ONTAP® storage operating system, we can automatically create Snapshots with virtually no limitations and no negative impact on performance (up to 1023 Snapshots per volume).
A Snapshot copy is a point-in-time file system image. Low-overhead Snapshot copies are made possible by the unique features of the Write Anywhere File Layout (WAFL®) storage virtualization technology that is part of Data ONTAP®. Like a database, WAFL uses pointers to the actual data blocks on disk, but, unlike a database, WAFL does not rewrite existing blocks; it writes updated data to a new block and changes the pointer. A NetApp Snapshot copy simply manipulates block pointers, creating a "frozen" read-only view of a WAFL volume that lets applications access older versions of files, directory hierarchies, and/or logical unit numbers (LUNs) without special programming. Because actual data blocks aren't copied, Snapshot copies are extremely efficient both in the time needed to create them and in storage space [18].
A snapshot is taken in Figure 7a. In Figure 7b, changed data is written to a new block and the live pointer is updated, but the snapshot pointer still points to the old block, giving you both a live view and a historical view of the data. Another snapshot is taken in Figure 7c, and you now have access to three generations of your data without consuming the disk space that three unique copies would require: live, snapshot 2 and snapshot 1, in order of age.
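The pointer mechanism described above can be illustrated with a deliberately simplified toy model (a sketch of the idea, not NetApp's implementation): the live file system and each snapshot are just maps from file names to block indices, and a write always allocates a new block instead of rewriting an old one.

```python
class WaflStyleVolume:
    """Toy model of WAFL-style snapshots. Blocks are append-only and never
    rewritten; a snapshot is merely a frozen copy of the pointer map, so
    taking one copies no data blocks at all."""

    def __init__(self):
        self.blocks = []     # append-only block store
        self.live = {}       # file name -> index of its current block
        self.snapshots = []  # frozen pointer maps

    def write(self, name, data):
        # Write-anywhere: new data goes to a fresh block,
        # then only the live pointer is changed.
        self.blocks.append(data)
        self.live[name] = len(self.blocks) - 1

    def snapshot(self):
        # Copies pointers only; old blocks stay reachable through it.
        self.snapshots.append(dict(self.live))
        return len(self.snapshots) - 1

    def read(self, name, snap=None):
        table = self.live if snap is None else self.snapshots[snap]
        return self.blocks[table[name]]

vol = WaflStyleVolume()
vol.write("report.txt", "version 1")
s1 = vol.snapshot()                    # as in Figure 7a
vol.write("report.txt", "version 2")   # Figure 7b: new block, pointer moves
s2 = vol.snapshot()                    # Figure 7c: second snapshot
vol.write("report.txt", "version 3")
print(vol.read("report.txt"))          # live view: version 3
print(vol.read("report.txt", s2))      # snapshot 2: version 2
print(vol.read("report.txt", s1))      # snapshot 1: version 1
```

Three generations of the file coexist while only three small blocks and two pointer maps are stored, which is exactly why such snapshots are cheap in both time and space.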

Conclusion
Our idea of using cloud computing was attested and discussed with experts in this research field. Our work is heading in the right direction and has made substantial progress. We can provide the new service, Remote Laboratory as a Service (RLaaS), in our cloud system. Our clients are primarily teachers, students and researchers of universities and high schools, but access is possible for all consumers via the Internet. This shows how heavily the university network is loaded with communication and traffic, which demands that the network and all parts of the IT structure run without failure or latency, and that they be secured to protect management and research data.
Creating snapshots allows us to make effective backups, to roll back unwanted changes with extremely high granularity and, last but not least, to have a tool for easy recovery from a possible ransomware attack (Snapshots are ransomware resistant, because a NetApp Snapshot copy is a read-only, static and immutable copy).
Using the SnapManager tool, we can set up an automated backup cycle that runs while applications stay online, together with its retention policy. The high performance, scalability and stability of NetApp Snapshot technology make it an ideal online backup for user-driven recovery. Additional solutions allow backups to be copied to offline disk or to tape and archived.
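As an illustration only (scheduling and deletion are handled by the storage tooling itself, and the snapshot naming scheme below is hypothetical), a retention policy of the "keep the N most recent snapshots" kind reduces to a small selection function:

```python
from datetime import datetime, timedelta

def snapshots_to_prune(snapshots, keep):
    """Given a mapping of snapshot name -> creation time, return the names
    falling outside the retention window: the `keep` most recent snapshots
    are retained, everything older is a candidate for deletion."""
    ordered = sorted(snapshots, key=snapshots.get, reverse=True)
    return sorted(ordered[keep:])

# Hypothetical nightly snapshots, one per day for the last 10 days.
now = datetime(2021, 5, 1)
snaps = {f"nightly.{i}": now - timedelta(days=i) for i in range(10)}
print(snapshots_to_prune(snaps, keep=7))
# prints ['nightly.7', 'nightly.8', 'nightly.9'] - the three oldest
```

A real policy would typically combine several such windows (daily, weekly, monthly), but each window is evaluated the same way.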
By integrating this tool into the application environment, even an ordinary user of the university network can use it. For example, recovering an overwritten file is easy to do completely on one's own, without involving an IT administrator.
In this paper we presented our idea of constructing a cloud computing system, with important parts such as the use of snapshot technology on storage. Our work aims to save money in education and research, compared with everyone building their own remote laboratories. We have connected many laboratories from Tomas Bata University in Zlin, Trnava University, Charles University and others around the world. Our work is, in simple terms: "Bring Technology to Service!"