FogQSYM: An Industry 4.0 Analytical Model for Fog Applications

Abstract: Industry 4.0 refers to the fourth wave of technological development, which strives to connect people to various industries so that they achieve their expected outcomes efficiently. However, resource management in an Industry 4.0 network is complex and challenging. To manage and provide suitable resources for each service, we propose FogQSYM (Fog Queuing System), an analytical model for fog applications that divides an application into several layers and then shares resources effectively according to the availability of memory, bandwidth, and network services. It follows the Markovian queuing model, which helps identify the service rates of the devices, the availability of the system, and the number of jobs in Industry 4.0 systems, so that applications can process data within a reasonable response time. An experiment conducted with the Cloud Analyst simulator using multiple segments of datacenters in a fog application shows that the model efficiently assigns arriving requests to the appropriate services with a low response time. After implementing the proposed model with different sizes of fog services in Industry 4.0 applications, FogQSYM provides a lower response time than the existing Optimized Response Time Model (ORTM). It should also be noted that the average response time increases as the arrival rate increases.


Introduction
The term "Internet of Things (IoT)" was coined in 1999 [1]. A few years ago, IoT was adopted in Industry 4.0 under a new term, the Industrial Internet of Things (IIoT) [2]. IIoT enables the ubiquitous control of industrial processes implemented in the IoT network. IIoT is mainly used in manufacturing tasks that fall under the new Industry 4.0 [3]. Sensor networks, actuators, robotics, computers, equipment, business processes, and staff comprise the main components of Industry 4.0. The overall industrial processes are accomplished locally due to delay and security requirements. In the industrial environment, structured data needs to be communicated over the Internet to web services and the cloud with the assistance of supportive middleware [3,4]. In this sense, fog is a potential middleware that could be very useful for a variety of industrial applications. With appropriate latency, fog may provide local processing support for actuators and robots in the manufacturing industry [5]. The resources provided over the cloud can quickly adapt to customer requirements, while implementation and usage costs can be reduced. Fog computing has technological limitations that restrict its use in specific application domains. Industry 4.0 is one such domain, wherein advanced digitization is implemented within production sites directly associated with the IIoT by connecting different types of machinery, enabling more efficient production as well as new applications and business models [6]. Fig. 1 illustrates the concept of Industry 4.0 within fog applications. Fog computing is a recently evolved technology that acts as a middle layer between the cloud and IoT by extending the cloud to edge devices such as sensors, gateways, etc. Cisco introduced fog computing, or fogging, in 2014 [7]. It consists of multiple nodes placed one hop from the user.
Due to this placement, data can be processed locally instead of being transferred to the cloud, which is far away from the user. Fog computing is distributed locally with a pool of resources consisting of heterogeneous devices connected ubiquitously at the edge of the network [8]. Fog computing involves multiple devices like bridges, switches, routers, servers, base stations, gateways, smart gateways, etc. to handle IoT devices like sensors and actuators. As IoT consists of thousands of sensors and actuators, it produces millions of data points, which must be processed with low latency and high Quality of Service (QoS) [9]. Fog computing provides important functionalities like mobility and scalability to devices as well as real-time interactions between devices. As a result, deploying fog computing services would lead to low latency, low power consumption, and reduced expenses for IoT-based applications.
The number of IoT devices is projected to increase rapidly and is expected to surpass 50 billion by 2025, with the data generated by these devices growing accordingly [10]. While existing technologies like cloud computing enable the processing of large volumes of data, they fail to provide a fast response with low latency. Fog computing was introduced to solve this issue. Although fog computing succeeds in achieving lower response times and latency, it faces many challenges such as node discovery, resource allocation, estimation, and scheduling. Fog computing has been deployed in a number of applications like healthcare, military, agriculture, smart cities, home automation, smart transportation, etc. [11].

Fog Computing Data Management
Fog computing has emerged as a popular solution for managing the huge amounts of data generated by IoT applications. For example, Qin [12] stated that 2,500 petabytes of data were generated per day in 2012. Dastjerdi et al. [13] and Pramanik et al. [14] noted that the healthcare industry, which serves some 30 million users, generates about 25,000 records per second, and the amount of data will eventually grow to the zettabyte scale. The US smart grid and the US Library of Congress generate 1 exabyte and 2.4 petabytes of data per year, respectively [12,15]. Fog computing offers proper solutions for such scenarios, as it can process the data generated locally by IoT devices with reductions in end-to-end delay, processing time, and response time [16]. Given the incredibly large amount of data generated by massive networks of associated devices, data management is of prime significance in the Industry 4.0 and IoT domains [17]. In addition, a high percentage of the devices involved have limited resources and depend entirely on their battery life, processing power, and storage capacity. It is important to build novel and efficient methods to resolve these barriers and leverage data storage effectively. Fog computing therefore appears to be a well-suited solution for supporting such information networks.
Fog data management has many advantages, such as improved efficiency, privacy, data quality, and data dependability, as well as decreased cost and end-to-end latency, with the aim of effective resource provisioning. We incorporate these features in our proposed FogQSYM model to reduce the average response time for arriving requests.

Contributions
We summarize the main contributions of this paper as follows.
• To present the fundamental concepts of fog computing for Industry 4.0 and the scope of their relationship.
• To describe the role and develop the architecture of FogQSYM as fog middleware for Industry 4.0.
• To propose an analytical model for fog applications that divides each application into several layers.
• To compare our model with the existing ORTM model in terms of sharing resources effectively according to the availability of memory, bandwidth, and resources.

Organization of the Paper
The rest of this paper is organized as follows: Section 2 presents a survey of the existing works. Section 3 discusses the proposed work and addresses open issues in resource allocation based on the proposed FogQSYM model. Section 4 contains the experimental results and related discussions. Section 5 describes the significance of our proposed model. The paper finally concludes in Section 6 with future directions.

Related Works
This section discusses the existing methods used in fog computing for resource management. Industry 4.0 has revolutionized information and communication technology. The Cyber-Physical System (CPS) is one of the core components of Industry 4.0. O'Donovan et al. [18] analyze a CPS framework deployed on fog that implements a Predictive Model Markup Language machine learning model for day-to-day factory operations. This framework follows the design principles of Industry 4.0 with respect to security and fault tolerance. With the advent of the Industrial Internet of Things (IIoT), factory operations have become smarter and more efficient. Lin et al. [19] analyze an intelligent system that combines the power of the cloud, fog computing, and a wireless sensor network to assist in the effective working of factory logistics. Further, the authors also implement a metaheuristic algorithm that uses a discrete monkey algorithm to find the best possible solutions and a genetic algorithm to increase the computational efficacy. The widespread deployment of IoT devices has generated huge amounts of data, and there is a dire need to manage this data. Fog computing has the capability to process such data with a much smaller delay than that of a cloud environment. Jamil et al. [20] discuss various methods that can be used to schedule the requests of IoT devices in the fog. The authors also highlight the methods used to address the dynamic demands of these IoT devices as well as the effective resource utilization of every fog device. Using fog devices to process IoT data might reduce the latency, and it is also expected to aid in the smart decision process. Sadri et al. [21] survey different fog data management techniques used for decision-making processes. The authors also consolidate the challenges in fog computing as well as the road ahead for future research in fog data management. The major parameter for the performance of real-time IoT applications is response time.
The cloud computing environment does not achieve the required response time due to latency issues. Enguehard et al. [22] propose a request-aware admission control technique that lets fog nodes serve more requests by dynamically evaluating frequently accessed resources. The authors use a Least Recently Used (LRU) filter to estimate the resources. In recent years, there have been substantial advancements in technologies pertaining to the building of smart cities. These technologies are implemented under the cloud computing paradigm, but the latency issues have yet to be addressed. Fog computing has emerged to overcome the latency issues of real-time applications. Javadzadeh et al. [23] survey the various methods used in fog computing with a focus on latency factors in smart city deployment. Different QoS parameters have been considered for the deployment of fog computing in sensitive applications. Kashani et al. [24] analyze different QoS-aware techniques that are used in fog computing nodes, particularly for resource management. At present, one of the most urgent needs is for techniques that can be used to analyze patient health records for accurate diagnoses. This data needs to be accessed frequently and must be stored securely. Nandyala et al. [25] have implemented an architecture for a health care monitoring system using IoT technology employing fog computing services. In a similar analysis in [26], the authors propose a layered framework in fog for processing patient data. Fog computing has been extensively used for processing IoT device data, and fog nodes are deployed in the cloud specifically for this purpose. Marín-Tordera et al. [27] analyze the various methods involved in the use of fog services for computing IoT data. A summary of the existing works is shown in Tab. 1.

FogQSYM: An Analytical Model for Fog Applications
From the literature, we can infer the following: First, a sustainable architecture for fog resource management still needs to be developed and evaluated. Second, research into an interoperable architecture for fog resource management involves challenges such as Quality of Service (QoS), resource management, energy management, resource reusability, and so on.
Resource management in fog computing plays a vital role in providing a fog service with the appropriate resources to meet its QoS requirements. For example, assume a set of requests R with a set of QoS requirements Q. The solution can then be one-to-many or many-to-many: a request can be placed on one or more resources by dividing it into multiple segments, or a whole request can be placed on a single resource. For successful resource management in fog computing, the following taxonomy should be addressed: First, application deployment has a direct impact on resource utilization. Second, based on the service request, resource scheduling has an impact on resource management. Third, a service request may access resource-constrained devices with limitations such as low computing power, low battery power, or little storage; in that case, the request is outsourced to other devices, which return the result, and this technique is called offloading. The fourth category is load balancing, since services involving time-critical requests require the load to be balanced with a proper mechanism. Finally, resource allocation and resource provisioning have direct impacts on resource management. Fig. 2 below illustrates the taxonomy of resource management from the fog computing perspective.

Tab. 1 (continued): Summary of existing works.
Ref  | Approach                                      | Parameters                 | Limitation
…    | …                                             | …                          | Accuracy of the ML model not considered
[25] | Cloud-to-fog architecture                     | Location, reliability      | Delay from sensor nodes not considered
[26] | Layered framework for processing patient data | Reliability, availability  | Data management, response time, and processing delay
[27] | Fog framework for processing IoT data         | Reliability, network delay | Restrictions in the amount of data disclosed among providers
[28] | Fog architecture layered as tiers             | Response time              | Network costs like network uptime/downtime not considered

Critical applications like healthcare and smart cities use large numbers of powerful devices, sensors, and smart devices that collect huge amounts of data either continuously or periodically [29], and more smart devices are needed to process these huge amounts of data. The Internet of Things (IoT) plays a vital role in turning devices into smart devices capable of running these critical applications. Both the cloud and IoT offer greater computing resources with higher computing power for processes between smart devices and things. However, these computing paradigms require continuous connectivity to the internet. Fog computing can be used to process the data locally between things and devices without any internet connectivity.
Since substantial numbers of devices are connected to each other, there is a chance that some devices are in the idle state, which results in unused or underused resources [30]. To overcome this issue, FogQSYM is introduced to use the available resources effectively. In the proposed model, the application is broken down into n layers {S_0, S_1, ..., S_{n-1}}, where the first layer S_0 captures the data and sends the captured data to the remaining layers for processing. After suitable devices are identified (those with free memory, high processing power, resource availability, etc.), the data are transferred to the remaining layers as appropriate. Fig. 3 depicts the proposed model wherein the application is divided into n layers.
Each layer S_i (i = 0, ..., n-1) is a set of services, and the services in S_0 perform data capture. Each service in the remaining layers S_i (i = 1, ..., n-1) processes data coming from some service in S_{i-1} (composition). Fig. 4 illustrates the flow of resource prediction from the perspective of fog computing.
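As a concrete illustration, the layer decomposition above can be sketched as a simple pipeline. The names below (Service, run_pipeline) and the rule for picking a service within a layer are our own illustrative assumptions, not part of the paper's model.

```python
from typing import Callable, List

class Service:
    """One service in a layer; services in S_0 capture data, and services
    in later layers process the output of the previous layer."""
    def __init__(self, name: str, fn: Callable):
        self.name = name
        self.fn = fn

def run_pipeline(layers: List[List[Service]], capture_input=None):
    """Feed data through layers S_0 .. S_{n-1}: each service in layer i
    consumes the output of some service in layer i-1 (composition)."""
    data = capture_input
    for layer in layers:
        # For simplicity the first service in each layer handles the data;
        # FogQSYM would instead pick a device by free memory, processing
        # power, and resource availability.
        data = layer[0].fn(data)
    return data

layers = [
    [Service("capture", lambda _: [3, 1, 2])],   # S_0: data capture
    [Service("sort", sorted)],                   # S_1: processing
    [Service("max", lambda xs: xs[-1])],         # S_2: aggregation
]
print(run_pipeline(layers))  # -> 3
```

A real deployment would attach one queue per device to each layer, which is exactly what the queuing model in the next subsection formalizes.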
Effective resource management is required for fog computing devices to improve the Quality of Service. The system is modeled as a queuing network, i.e., a network of queues in which each queue consists of one or more servers. Here, servers refer to storage servers, networks, input/output devices, processors, routers, switches, etc. The queuing network receives a stream of requests from IoT devices and passes them to the fog server through a LAN or WAN. After a request reaches the fog server, it passes through a queue, and the request may be completed in the fog server, or additional resources may be needed to complete it. A queuing system consists of a single server or multiple servers with a finite or infinite buffer size. In the queuing system, a server serves only one customer at a time, so it is in one of two states, i.e., the idle or busy state. When the servers are busy and a new customer arrives, the new customer is buffered in the queue. The arrival rate of the system is given in Eq. (1), the arrival rate with k users in the system is given in Eq. (2), and the service time of the system with k users is given in Eq. (3).

Figure 4: Resource prediction flow
The steady-state probability of the system is given in Eq. (4), where p_k is the steady-state probability for k users, determined by the ratio between the arrival and service rates.
The probability that an arriving job joins the queue is given in Eq. (5); it is the ratio of the initial probability and the utilization factor.
The utilization factor ρ of the system is given in Eq. (6); it is the ratio of the arrival rate to the service rate with m servers, ρ = λ/(mμ).
The probabilities are normalized as in Eq. (7), i.e., Σ_{k=0}^{∞} p_k = 1.
The probability of an empty system, in which no user exists, is given in Eq. (8); it is obtained from the summation over the ratios of the utilization factor with k users and m servers.
The mean queue length of the system is given in Eq. (9); it is derived from the probability of k users in the system and the utilization factor.
The mean waiting time of the system is given in Eq. (10); it is the ratio of the mean queue length to the arrival rate.
The total number of jobs in the system is given in Eq. (11); it is the sum of the probability terms and the mean queue length.
The system response time is given in Eq. (12); it is the ratio of the number of jobs in the system to the arrival rate.
The service demand of a LAN for a request is given in Eq. (13), where AD_r is the amount of data processed for a request, LAN_bw is the available bandwidth in MB/sec, and LAN_rlat is the round-trip latency of the LAN in seconds.
The service demand of a WAN for a request is given in Eq. (14), where WAN_bw is the available bandwidth in MB/sec and WAN_rlat is the round-trip latency of the WAN in seconds.
The service demand of a request at the fog server is given in Eq. (15), where k1 (in seconds) and k2 (in MB/sec) are constants.
The total response time of the system, given in Eq. (16), is the sum of the total response times of the LAN, the WAN, and the fog server, each in seconds.
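Since the equation bodies themselves are not reproduced here, the following sketch computes the corresponding standard M/M/m quantities and the service demands described for Eqs. (13)-(15). The variable names (lam, mu, AD_r, k1, k2) are ours, and the M/M/m forms are the textbook results the prose appears to follow, not formulas taken verbatim from the paper.

```python
from math import factorial

def mmm_metrics(lam, mu, m):
    """Return (rho, p0, Lq, W) for an M/M/m queue with arrival rate lam,
    per-server service rate mu, and m servers (requires lam < m*mu)."""
    rho = lam / (m * mu)                   # utilization factor (Eq. (6))
    a = lam / mu                           # offered load
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(m))
                + a**m / (factorial(m) * (1 - rho)))       # empty system (Eq. (8))
    Lq = p0 * a**m * rho / (factorial(m) * (1 - rho)**2)   # mean queue length (Eq. (9))
    W = Lq / lam                           # mean waiting time (Eq. (10))
    return rho, p0, Lq, W

def service_demands(AD_r, lan_bw, lan_rlat, wan_bw, wan_rlat, k1, k2):
    """Service demands of Eqs. (13)-(15): data-transfer time plus round-trip
    latency on the LAN/WAN, and a linear cost k1 + AD_r/k2 at the fog server."""
    d_lan = AD_r / lan_bw + lan_rlat
    d_wan = AD_r / wan_bw + wan_rlat
    d_fog = k1 + AD_r / k2
    return d_lan, d_wan, d_fog

rho, p0, Lq, W = mmm_metrics(lam=8.0, mu=5.0, m=2)
print(round(rho, 3))  # -> 0.8 (utilization below 1, so the queue is stable)
```

With these helpers, the total response time of Eq. (16) is simply the sum of the per-segment response times computed from the LAN, WAN, and fog-server demands.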
Here, Algorithm 1 describes how FogQSYM assigns a request to the appropriate device and returns the response time. The algorithm starts from each individual layer, then computes the required computing load of a device as well as the service demands of the LAN, WAN, and fog server. Next, it computes the mean queue length and the number of jobs in the system. The request is then compared with the computing load to check whether it matches the mean queue length and the number of jobs in the system; if the request matches, it is added to the matched resource. Finally, the time taken to assign the request (the response time) is returned.
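A minimal sketch of the matching step of Algorithm 1 as described in prose above. The device fields (capacity, max_queue, max_jobs) and the exact matching rule are our assumptions, since the paper gives the algorithm only in words.

```python
import time

def assign_request(request_load, devices):
    """Try each device in turn; a request matches when the device can absorb
    its load and the device's mean queue length and jobs-in-system stay
    within its limits. Returns (matched_device_or_None, elapsed_seconds)."""
    start = time.perf_counter()
    for dev in devices:
        if (request_load <= dev["capacity"]
                and dev["Lq"] <= dev["max_queue"]
                and dev["jobs"] < dev["max_jobs"]):
            dev["jobs"] += 1  # add the request to the matched resource
            return dev, time.perf_counter() - start
    return None, time.perf_counter() - start

devices = [
    {"name": "edge-0", "capacity": 10, "Lq": 4.0, "jobs": 9, "max_queue": 3, "max_jobs": 12},
    {"name": "edge-1", "capacity": 25, "Lq": 1.5, "jobs": 2, "max_queue": 3, "max_jobs": 12},
]
dev, rt = assign_request(20, devices)
print(dev["name"])  # -> edge-1 (edge-0 cannot absorb the load)
```

In the full model, the Lq and jobs figures per device would come from the queuing formulas of Eqs. (9) and (11) rather than being fixed constants.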

Experimental Results and Analysis
The experiment is set up using the Cloud Analyst simulator to evaluate the proposed method. Cloud Analyst is used to simulate large-scale applications that are geographically distributed. The fog service contains three types of computing nodes, namely intermediate computing nodes, edge computing nodes, and physical machines in the cloud, as shown in Fig. 5. To evaluate the proposed method, a dataset with different sizes of fog services is used. For example, a dataset with 500 fog services contains 179 edge computing nodes, 167 intermediate computing nodes, and 154 physical machines in the cloud. In Cloud Analyst, two datacenters are simulated as intermediate and edge computing nodes, and the simulation run time is 60 minutes. The simulation configuration for the model is illustrated in Fig. 7. The datacenter is simulated as computing nodes and is configured based on the fog node, which contains one region (since Cloud Analyst is geographically distributed, it may be configured using one or more regions); the simulation runs on the Linux operating system with the X86 architecture. It contains ten physical hardware units and supports Xen as the virtual machine manager. In terms of the physical hardware configuration, the memory is 2048 MB, the storage is 100000 MB, the bandwidth is 1000 Mb/s, and the machine contains four processors with a speed of 3.20 GHz. Regarding the virtual machine (VM) configuration, the VM policy is time-shared, with an image size of 10000 MB, memory of 1024 MB, and bandwidth of 1000 Mb/s. The other required configuration is the service broker policy, which follows the existing Optimize Response Time algorithm [26], with a user grouping factor (simultaneous users for a single user base) of 1000 and a request grouping factor (simultaneous requests for a single application) of 100. The executable instruction length per request is 250 bytes, and the load-balancing policy is throttled.
Fig. 5 illustrates the different numbers of computing nodes for the different numbers of fog services. In the figure, "Pm in cloud" represents the number of physical machines (such as laptops, mobiles, etc.) in the cloud environment, and "Intermediate node" represents the devices located inside the cloud, like routers, switches, and hubs.
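For reference, the simulation configuration above can be collected into a single structure. The key names below are ours; the values are the ones stated in the text.

```python
# Cloud Analyst simulation configuration from the text, gathered in one place.
# Key names are illustrative; values are as reported above.
sim_config = {
    "regions": 1,
    "os": "Linux",
    "architecture": "X86",
    "vmm": "Xen",
    "physical_hardware_units": 10,
    "host": {"memory_mb": 2048, "storage_mb": 100_000,
             "bandwidth_mbps": 1000, "processors": 4, "speed_ghz": 3.20},
    "vm": {"policy": "time-shared", "image_size_mb": 10_000,
           "memory_mb": 1024, "bandwidth_mbps": 1000},
    "service_broker_policy": "Optimize Response Time",
    "user_grouping_factor": 1000,
    "request_grouping_factor": 100,
    "instruction_length_bytes_per_request": 250,
    "load_balancing_policy": "throttled",
    "simulation_runtime_min": 60,
}
print(sim_config["host"]["processors"])  # -> 4
```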
After configuring the simulation parameters, the simulation requires the input parameters listed in Tab. 2. In order, these are the constants k1 and k2, the LAN and WAN bandwidth in Mb/sec, and the LAN and WAN round-trip latency.
After substituting the input values into Eqs. (13)-(15), the computed values are provided in Tab. 3. Fig. 6 gives the overall response time, which is the total (arrival time + waiting time) of the request, and Fig. 7 shows the total datacenter processing time, which is the overall time taken to complete the request in the simulation.

Significance of our Proposed Work
The proposed FogQSYM model is compared with the ORTM to evaluate its performance.
• The proposed FogQSYM model helps assign resources efficiently to free datacenters using the Markovian queuing model.
• By fixing different sizes of datacenters in the fog layer, the service response time for each resource is gradually reduced.
• According to calculations based on different QoS metrics, the average response time increases when there is a high arrival rate of resources in the fog layer.

Conclusion
The increased numbers of IoT and smart devices in Industry 4.0 applications generate enormous amounts of data with minimal delay tolerance. This paper proposed FogQSYM, an analytical Industry 4.0 model for fog applications that divides an application into several layers and ensures that resources are shared efficiently according to the availability of memory, bandwidth, and resources. It follows the Markovian queuing model, which helps identify the service rate of Industry 4.0 devices, the availability of the system, and the number of jobs in the system, helping the application process data within its delay tolerance. This shows the feasibility of dividing an application into several layers to share resources according to their suitability, which results in a low response time. When implemented with differently sized fog services in Industry 4.0 applications, the FogQSYM model achieves a lower response time than the existing ORTM model. It can also be noted that the average response time increases as the arrival rate increases. Upon implementing the model in the simulator, it is found that, to decrease the response time, it is better to disconnect the slowest device with the lowest utilization.

Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study.