Article

Research on a Trustworthiness Measurement Method of Cloud Service Construction Processes Based on Information Entropy

1 School of Information, Yunnan University of Finance and Economics, Kunming 650221, China
2 Key Laboratory in Software Engineering of Yunnan Province, Kunming 650091, China
3 Big Data School, Yunnan Agricultural University, Kunming 650201, China
* Authors to whom correspondence should be addressed.
Entropy 2019, 21(5), 462; https://doi.org/10.3390/e21050462
Submission received: 14 March 2019 / Revised: 21 April 2019 / Accepted: 23 April 2019 / Published: 2 May 2019
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

The popularity of cloud computing has made cloud services gradually become the leading computing model. The trustworthiness of a cloud service depends mainly on its construction process, so measuring the trustworthiness of cloud service construction processes (CSCPs) is crucial for cloud service developers: it helps to find the causes of failures and to improve the development process, thereby ensuring the quality of the cloud service. Herein, firstly, a trustworthiness hierarchy model of the CSCP was proposed, and the influential factors of the processes were identified following ISO/IEC 12207, the international standard for the software life cycle process. Further, a measurement method was developed that combines information entropy theory with the concept of trustworthiness; it calculates the risk uncertainty and the risk loss expectation affecting trustworthiness, from which the trustworthiness of the cloud service and of its main construction processes is obtained. Finally, the feasibility of the measurement method was verified through a case study, and its advantages were demonstrated by comparison with the AHP and CMM/CMMI methods.

1. Introduction

With the development and rapid popularization of cloud computing, cloud services have become widely accepted as a new computing model. A cloud service is an emerging type of network service that relies on a cloud computing platform. Its outsourcing service model and the security risks of the cloud platform itself have aroused users’ concerns about its trustworthiness. Therefore, how to build secure and trustworthy cloud services has become one of the research hotspots of recent years [1]. Software process research holds that the process plays a decisive role in product quality [2]. Therefore, to improve the trustworthiness of cloud services, the trustworthiness problems in the construction process must be solved.
There are two main research directions on process trustworthiness: process measurement and process improvement methods. Reference [3] considered software process assessment (SPA) a foundational step for software process improvement. To improve and optimize a software process, the first step is to measure it objectively and identify the problematic sub-processes; otherwise, insufficient process improvement and optimization may cause the process to fail [4]. The main research results in process measurement are the CMM [5] and CMMI [6] models, developed by the Software Engineering Institute (SEI) of Carnegie Mellon University together with the United States Department of Defense and the National Defense Industry Association. The two models classify the development stages that software organizations pass through in practice when defining, implementing, measuring, controlling and improving their software products. However, they only put forward a framework, extracting no specific knowledge of each key process area and failing to quantify the quality of a specific process. Also, the two models are primarily used to evaluate the maturity of a development organization’s process management practices; they involve a great deal of content and are time-consuming and costly to apply. As a result, small and medium-sized software companies, and even some large ones, struggle to meet the relevant requirements and standards. What most software development organizations need is an objective, quantitative, realistic and easy-to-implement measurement method. Such a method should enable them to find the weak links in the cloud service construction process (CSCP) and then carry out subsequent improvement or reinforcement, thus reducing the probability of failure and improving product quality. Other primary software process measurement methods include Goal-driven Software Measurement (GSM) [7], Practical Software Measurement (PSM) [8,9], and Statistical Process Control (SPC) [10]. Based on the characteristics of the CMM and the GQM model [11], reference [12] established a software process framework supporting metrics and gave metrics for software process improvement, but it did not give exact measurement steps for the software development process. Reference [13] discussed the major problems in software process measurement and presented an active measurement model (AMM) to support software process improvement (SPI). It emphasized measuring the quality, maintainability and stability of the software products rather than the processes; to find the problems behind a product, one still needs to trace back through the development or maintenance processes to locate the processes responsible.
To this end, a trustworthiness measurement method for the CSCP based on information entropy is proposed here. Through in-depth study of the CSCP, the risk factors in the CSCP are analyzed and summarized. Then, according to the frequency of each risk factor and the degree of corresponding loss, the uncertainty of the primary construction processes for cloud services and the expectation of risk loss are calculated using information entropy. Finally, process trustworthiness is calculated based on the relevant concepts. Development, maintenance, and other processes are measured directly. By comparing the uncertainty and loss expectations of each process and of the whole process, an organization can find the weaknesses and influential factors in the CSCP. The present study thus provides a practical method to help organizations reduce risk and optimize and improve the development and construction process, thereby fundamentally enhancing the trustworthiness of cloud service products. The paper is structured as follows: Section 2 reviews related work; Section 3 presents the framework and hierarchical model of the CSCP; Section 4 describes the trustworthiness measurement method; Section 5 reports a case study, its analysis and comparisons with AHP and CMM/CMMI; Section 6 concludes the paper.

2. Related Work

2.1. Cloud Computing and Cloud Services

Cloud computing has become one of the research hotspots in the current computer field [14]; it is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources [15]. To study cloud computing in depth and reduce its complexity, NIST divided it into three levels according to service type: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) [15]. On this basis, Linthicum further proposed the concept of a “service stack” and summarized 11 service modes of cloud computing: storage as a service, database as a service, process as a service, information as a service, application as a service, platform as a service, integration as a service, security as a service, management as a service, testing as a service, and infrastructure as a service [16]. In a sense, cloud computing is cloud services, or service computing. Regarding the future of cloud services and service computing, Chen and Zheng [17] believed that there were two main development directions: one is to build large-scale underlying infrastructure closely integrated with applications, so that applications can expand to a large scale; the other is to build new cloud computing applications that provide a richer user experience on the network.
In terms of the analysis and design of SaaS service system models, Meng of Jinan University put forward a seven-layer model, extending the traditional five-level software development model to meet the needs of SaaS software development [18]; this was an early study of SaaS services at the level of the development model. Yuan, in “Research of online software system development solution based on SaaS” [19], studied the modeling methods, security handling and database design of SaaS software. At the database level, many studies have focused on the data storage methods of SaaS software and on ways of expanding data storage; for example, Zha [20] and Yu [21] both proposed solutions for the data architecture of SaaS services.
Cloud-native application design is a well-established cloud computing design pattern, and the pattern literature offers a blend of academic knowledge and practical experience contributed by a variety of authors [22]. It is generally accepted that cloud-native applications (CNA) are intentionally designed for the cloud [23]. Reference [24] described the key technologies of cloud-native design and identified dynamic scalability, extreme fault tolerance, seamless upgradeability and maintenance, and security as the basic properties of successful cloud applications. Reference [25] presented a reference model for cloud-native applications that relies only on a small subset of well-standardized IaaS services; it can be used to guide technology identification, classification, adoption, research and development processes for cloud-native applications, and to support vendor-lock-in-aware enterprise architecture engineering methodologies.
The existing research on cloud computing and cloud services mostly focuses on the cloud services themselves, and research on the quality of cloud services mainly starts from the environment, content, development method, structure and design pattern, ignoring the specific development environment and development process of cloud services. Osterweil [26] pointed out that software processes are software too, and that the software process is of the same importance as the software itself. Software process technology has since proved effective in supporting many business activities [27].
For the same method and structure, different development organizations and different development processes may produce products of different quality. Therefore, besides paying attention to the development method and structure of the service itself, measuring and optimizing its development and construction process is a crucial step in improving the quality of a specific service.

2.2. Trustworthy Cloud Service

In terms of the factors affecting trustworthiness, [28] summarized the non-functional requirements of reliable software proposed by relevant scholars and institutions since the 1985 Trusted Computer System Evaluation Criteria (TCSEC). It concluded that there was still no recognized standard for the non-functional requirements of reliable software. Thus, a non-functional requirements decomposition model for reliable software was proposed, and concerns and soft goals were obtained, based on the Software Trustworthiness Classification Specification (STC 1.0) and the software quality requirements defined by ISO/IEC 25010 [29]. Reference [30] presented a model targeting both quantitative and qualitative non-functional properties (NFPs) and developed two algorithms to obtain the NFPs. The research findings mentioned above laid a solid foundation for the present study in analyzing the factors influencing the trustworthiness of the CSCP. Reference [31] conducted an in-depth investigation of trustworthiness and risk in Web services, identifying the relationship between the two. Through the analysis and study of process risk factors, the calculation method for process trustworthiness in this paper was derived.
Regarding service measurement and selection, [32] showed that potential users’ concerns about the trustworthiness of cloud computing services hinder the development of cloud computing. Thus, a trustworthy cloud service attribute model was developed, and a trustworthiness evaluation method based on information entropy and Markov chains was put forward. Reference [33] also applied information entropy and the Markov chain method to the calculation of risks in cloud services, and highlighted the strong impact of risk uncertainty on cloud computing. Reference [34] proposed an information-entropy-based decision-making method for selecting cloud computing services. To help select trustworthy services, [35] developed a support vector machine (SVM)-based collaborative filtering (CF) service recommendation approach. In [36], starting from users’ privacy protection requirements, a cloud service evaluation model based on trustworthiness and privacy-awareness was put forward. Combining objective and subjective trustworthiness assessment, [37] designed an integrated trustworthiness evaluation method.
Other areas of research in trustworthy computing include trustworthy computing models in IoT [38], trustworthy computing problems in SOA [39], the influence of uncertainty on trustworthy computing [40], trustworthy computing problem in the social network [41,42,43,44], and so on.
The existing research on trustworthy cloud computing covers a wide range, but with distinct emphases. Some studies examined the factors affecting trustworthiness, including uncertainty; some explored methods of measuring and selecting trustworthy cloud services from the perspective of risk. Few trustworthiness studies have focused on the CSCP, which determines the trustworthiness of cloud services. Therefore, based on previous trustworthiness research, the processes of constructing cloud services were analyzed herein to establish a trustworthiness measurement hierarchy model for the cloud service construction process, and the trustworthiness of the construction process was then measured based on the theory of information entropy.

2.3. Information Entropy

Shannon introduced the concept of entropy from physics into information theory and defined a measure of the amount of information, named information entropy. Information entropy can effectively measure the amount of information carried by messages in communication. The greater the entropy, the higher the uncertainty. Despite various understandings of risk, it is commonly accepted that uncertainty is the essence of risk: the greater the uncertainty, the greater the risk. This essential feature of risk is consistent with the concept of information entropy. Thus, information entropy was selected here to measure the risk and trustworthiness of the CSCP. To put it simply, the greater the uncertainty, the greater the entropy, the greater the risk, and thus the lower the trustworthiness. The definition of entropy [45] is given below:
Definition 1 (Information entropy)
Let X be a discrete random variable with n possible values, so that X = {x_1, x_2, …, x_n}. For each x_i, its probability is P(x_i), and the discrete probability space is:

$$\begin{bmatrix} X \\ P(x) \end{bmatrix} = \begin{bmatrix} x_1 & \cdots & x_n \\ P(x_1) & \cdots & P(x_n) \end{bmatrix}$$

Then:

$$H(X) = -\sum_{i=1}^{n} P(x_i) \log P(x_i)$$

H(X) is called the information entropy of the discrete random variable X.
Information entropy describes the uncertainty of things. For the CSCP, the frequency with which each factor affects the trustworthiness of the service is uncertain. The higher the degree of uncertainty of the different influencing factors, the higher the risk, and the lower the trustworthiness. This is in line with the concept and characteristics of information entropy, which was hence chosen as the measure of uncertainty in the CSCP.
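To make the definition concrete, the following minimal Python sketch (not part of the original paper; the probabilities are illustrative) computes the information entropy of a discrete distribution:

```python
import math

def shannon_entropy(probs, base=2):
    """H(X) = -sum(P(x_i) * log P(x_i)) over the outcomes with non-zero probability."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A near-certain outcome gives low entropy (low uncertainty),
# while a uniform distribution gives maximal entropy (high uncertainty).
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))  # about 0.24 bits
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # exactly 2.0 bits
```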

3. Framework of CSCP

According to the ISO/IEC 12207 [46] standard for information technology and its improved applications [47], the software life cycle process is divided into three categories: the main process, the supporting process and the organizing process. Despite their service nature, cloud services are a new form of software in the new computing era.
SaaS services are characterized by repeatability, rapid scaling, Internet delivery, multi-tenancy, on-demand service, and other features. An increasing number of SaaS services are small in size and easy to iterate and combine [48]. It is therefore necessary to place more emphasis on the development process and to weaken the supporting and organizing processes, specifically by treating the latter as daily activities and distributing them across the relevant departments or development processes. Therefore, based on ISO/IEC 12207, the description of the specific processes was revised here according to the characteristics of cloud services, and the framework of the construction process for cloud services was designed. In the CSCP, there is no separate supporting process or organizing process; instead, their sub-processes are distributed to the appropriate main processes, and the corresponding sub-processes of the main process are modified. In this way, all the sub-processes of the three process categories work together to serve the main process and to facilitate the construction and management of web-delivered services. The CSCP structure is shown in Figure 1 and Figure 2.

3.1. Overall Structure

CSCP is mainly divided into the main process class, supporting process class and organizing process class. The main process is responsible for production, operation, and maintenance of cloud service products. The other two processes serve the main process. The results and data generated during the main process can also be used to improve the initial supporting and organizing processes. The supporting process runs with the main process, providing all kinds of assurance work needed in the main process. Management and staffing of the main process are all constructed and managed by the organizing process. Organizing process is mainly responsible for the construction of infrastructure, personnel composition, and related management rules, as well as the configuration of supporting processes. It starts before the main process and the support process. Figure 1 illustrates the relationship between the three processes.

3.2. Main Process, Supporting Process and Organizing Process

The main process is generally used by the participants in developing, running, and maintaining cloud services. A detailed description of each main process is listed in Table 1.
The supporting process provides support for the main process and helps cloud service projects to achieve success and guarantee product quality. It is an effective aid to the sub-processes in the main process category, including documentation process, configuration management process, quality assurance process, verification and validation process, joint review process, audit process, and problem-solving process.
Processes in the organizing process are used to build and implement infrastructure and to make continuous improvement. The infrastructure consists of several related process rules and personnel to improve the coordination between different processes, including management process, infrastructure process, improvement process, and training process.

3.3. CSCP Trustworthiness Measurement Hierarchical Model

The quality of cloud service products depends on the overall quality of CSCP. Thus, the activities in the main process have a direct impact on the trustworthiness of the cloud service product. Supporting process and organizing process act directly on the activities in the main process, and hence influence the quality of the final cloud service product. Here, the ISO model was reconstructed, and the sub-activities in supporting process and organizing process were considered as the influencing factors of the sub-activities in the main process. Then the hierarchical model of CSCP trustworthiness measurement was obtained as shown in Figure 2.
The CSCP consists of five main processes: the analysis process (β1), developing process (β2), receiving process (β3), running process (β4) and maintenance process (β5), as shown in Figure 2a. Their results determine the trustworthiness of the CSCP. The supporting process and organizing process, as components or influencing factors of the main process, impact the trustworthiness of each main process. Figure 2b–f provide hierarchical graphs of the five main processes and their influencing factors, covering a total of 33 risk factors: Defining requirements (α1), Bidding preparation (α2), Contract preparation (α3), Requirement review (α4), Proposal preparation (α5), Award of contract (α6), Project programming (α7), Joint review (α8), Product delivery (α9), Requirements analysis (α10), Structure design (α11), Detailed design (α12), Coding test (α13), System integration (α14), Software installation (α15), Operation plan (α16), Operational testing (α17), Operations management (α18), Change analysis (α19), Change implementation (α20), Maintain test (α21), Maintain acceptance (α22), Software transportation (α23), Software abandoned (α24), Documentation (α25), Configuration management (α26), Quality guarantee (α27), Verification and validation (α28), Audit process (α29), Problem solving (α30), Daily management process (α31), Infrastructure process (α32), and Training process (α33). Section 4 details the calculation of the trustworthiness of each main process and of the whole CSCP according to the hierarchical relationships in Figure 2.

4. CSCP Trustworthiness Measurement Method

4.1. Related Definitions on CSCP Trustworthiness

Definition 2 (Process trustworthiness)
According to the literature [49] and the definition of trustworthiness, the trustworthiness of non-functional requirements is mainly manifested in 11 indicators: functional applicability, risk prevention, reliability, security, accuracy, maintainability, performance, ease of use, compatibility, portability and privacy, which are consistent with the indicators of risk concern. Process trustworthiness is negatively correlated with both the loss caused by risk and the uncertainty of risk occurrence. Thus, trustworthiness is calculated by:
$$T = -\left( k_1 U(R) + k_2 L(R) \right)$$
where T is the trustworthiness; R denotes the specific risk items; U(R) is the uncertainty of risk occurrence; L(R) is the loss caused by risk; and k1 and k2 are constant trustworthiness coefficients whose values depend on the project or process being measured. The trustworthiness of the main processes in the hierarchical model is affected by many factors. To calculate the trustworthiness values T(γ) and T(βi), it is necessary to obtain the uncertainty functions U(γ) and U(βi) and the loss functions L(γ) and L(βi).
Definition 3 (Risk)
Software process risk includes two essential characteristics: uncertainty and loss impact. Risk can be defined as a triple R = (X, U, L), where X denotes the sub-process or the set of influencing factors that generate risk; U denotes the uncertainty function of risk occurrence; L denotes the risk loss function. The key to risk measurement is to quantify the degree of uncertainty and the degree of loss when there is any risk-related factor [50].
Definition 4 (Risk Uncertainty Function of Main Process)
The uncertainty of risk occurrence in a process is determined by the risk factors impacting process trustworthiness. In the CSCP trustworthiness measurement hierarchical model, for the main process βi, the risk uncertainty is determined by the threat frequency of each factor αj.
Let P(αj) be the threat frequency of the risk factor αj, and P(βi, αj) be the entropy weight coefficient of αj relative to βi. Assuming that there are k risk factors in the main process βi, the entropy weight coefficient can be calculated by:
$$P(\beta_i, \alpha_j) = \frac{P(\alpha_j)}{\sum_{j=1}^{k} P(\alpha_j)}$$
Substituting this into the information entropy formula (1), the following formula is obtained:
$$U(\beta_i) = -\frac{1}{\log_2 m} \sum_{j=1}^{m} P(\beta_i, \alpha_j) \log_2 P(\beta_i, \alpha_j)$$
U(βi) (0 ≤ U(βi) ≤ 1) is the risk uncertainty function, which denotes the degree of uncertainty of the main process βi. Let U(γ) be the uncertainty of the CSCP; then:
$$U(\gamma) = -\frac{1}{\log_2 n} \sum_{j=1}^{n} P(\gamma, \alpha_j) \log_2 P(\gamma, \alpha_j)$$
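As a minimal sketch of the two formulas above (our illustration, not the authors’ code), the following Python snippet normalizes the threat frequencies into entropy weight coefficients and evaluates the normalized uncertainty; the frequencies are hypothetical:

```python
import math

def entropy_weights(freqs):
    """Normalize threat frequencies P(alpha_j) into entropy weight coefficients P(beta_i, alpha_j)."""
    total = sum(freqs)
    return [f / total for f in freqs]

def uncertainty(freqs):
    """Normalized entropy in [0, 1]: -1/log2(m) * sum(w_j * log2(w_j)) over the m factors."""
    weights = entropy_weights(freqs)
    m = len(weights)
    return -sum(w * math.log2(w) for w in weights if w > 0) / math.log2(m)

# Hypothetical threat frequencies of the risk factors of one main process.
print(uncertainty([0.8, 0.3, 0.5, 0.1]))  # closer to 1 the more evenly the risk is spread
```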
Definition 5 (Risk loss expectation)
Risk loss expectation L(x) refers to the degree of loss caused by a risk factor. The greater the probability of risk occurrence, the higher the risk brought to the project; likewise, the greater the loss caused by a risk, the higher the risk brought to the project. Therefore, measuring the size of a risk depends not only on the probability of the risk factor but also on its impact on the software project. The risk loss expectation L(x) can be defined as the product of the occurrence probability P(x) and the degree of loss C(x):
$$L(x) = P(x) \times C(x)$$
In the CSCP trustworthiness measurement hierarchical model, according to formula (5), the risk loss L(αj) caused by a factor is calculated as the product of its probability of occurrence and its degree of loss, that is:
$$L(\alpha_j) = P(\alpha_j) \times C(\alpha_j)$$
The risk losses of the main process βi and of the CSCP γ, both of which are affected by multiple risk factors, are calculated as follows:
$$L(\beta_i) = \sum_{j=1}^{m} \left( P(\beta_i, \alpha_j) \times C(\alpha_j) \right)$$

$$L(\gamma) = \sum_{j=1}^{n} \left( P(\gamma, \alpha_j) \times C(\alpha_j) \right)$$
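A short continuation of the previous sketch (illustrative values only) computes the per-factor losses L(αj) and the weighted process loss L(βi):

```python
def loss_expectations(freqs, losses):
    """Per-factor losses L(alpha_j) = P(alpha_j) * C(alpha_j) and the process-level
    loss L(beta_i) = sum(P(beta_i, alpha_j) * C(alpha_j)) using the entropy weights."""
    per_factor = [p * c for p, c in zip(freqs, losses)]
    total = sum(freqs)
    weights = [p / total for p in freqs]  # entropy weight coefficients P(beta_i, alpha_j)
    process_loss = sum(w * c for w, c in zip(weights, losses))
    return per_factor, process_loss

# Hypothetical frequencies P(alpha_j) and loss degrees C(alpha_j) of one main process.
per_factor, process_loss = loss_expectations([0.8, 0.3, 0.5, 0.1], [0.6, 0.9, 0.4, 0.2])
print(per_factor, process_loss)
```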

4.2. CSCP Trustworthiness Measurement Method

A trustworthiness measurement method for CSCP was proposed based on information entropy. The calculation is detailed as follows:
Input: Probability of risk factors P(αj), the degree of loss caused by various factors C(αj).
Output: Trustworthiness of the main process T(βi), CSCP trustworthiness T(γ).
Step 1: Establish the evaluation tables shown in Table 2 and Table 3 using the Delphi method [51], and acquire the frequency data P(αj) and loss degree data C(αj) of the risk factors according to actual conditions.
Step 2: Sort the raw data acquired. Calculate the frequency and degree of loss of each risk factor in the third tier based on the weight values in Table 2 and Table 3.
Step 3: According to the division of the main processes in Figure 2, classify and normalize the third-level risk factors to obtain their entropy weight coefficients. Through formula (3), normalize the occurrence frequency P(αj) of the third-level risk factors to obtain the entropy weight coefficient P(βi, αj) relative to the risk of the main process βi. Substituting P(βi, αj) and C(αj) into formulas (4), (7) and (8) respectively, calculate the uncertainty degree U(βi) and the loss degrees L(αj) and L(βi) of the risk of each main process.
Step 4: Using formula (5), normalize the occurrence frequency P(αj) of all the third-level risk factors to obtain the entropy weight coefficient P(γ, αj) of the whole process relative to the CSCP. Substituting P(γ, αj) and C(αj) into formulas (5) and (9) respectively, calculate the uncertainty degree U(γ) and the loss degree L(γ).
Step 5: Substitute the uncertainty degree U(βi) and loss degree L(βi) of each main process into formula (2) to obtain the trustworthiness T(βi) of each main process. Substitute the uncertainty degree U(γ) and loss degree L(γ) of the CSCP into formula (2) to obtain the trustworthiness T(γ) of the CSCP.
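The sketch below strings the five steps together as a self-contained Python example. The grouping of factors into main processes, the input frequencies and loss degrees, and the coefficients k1 = k2 = 1 are illustrative assumptions (the real grouping follows Figure 2); the trustworthiness formula follows the negative-correlation reading of formula (2) given in Definition 2, and any monotonically decreasing combination of U and L would produce the same ranking of processes.

```python
import math

def entropy_weights(freqs):
    total = sum(freqs)
    return [f / total for f in freqs]

def uncertainty(freqs):
    w = entropy_weights(freqs)
    return -sum(x * math.log2(x) for x in w if x > 0) / math.log2(len(w))

def loss(freqs, losses):
    return sum(w * c for w, c in zip(entropy_weights(freqs), losses))

def trustworthiness(freqs, losses, k1=1.0, k2=1.0):
    # Higher uncertainty or higher expected loss lowers trustworthiness.
    return -(k1 * uncertainty(freqs) + k2 * loss(freqs, losses))

# Steps 1-2: questionnaire-derived P(alpha_j) and C(alpha_j), grouped by main process (hypothetical).
processes = {
    "analysis (beta1)":    ([0.7, 0.5, 0.6], [0.8, 0.7, 0.9]),
    "developing (beta2)":  ([0.6, 0.6, 0.5], [0.7, 0.8, 0.6]),
    "maintenance (beta5)": ([0.3, 0.2, 0.4], [0.4, 0.3, 0.5]),
}

# Steps 3 and 5: uncertainty, loss and trustworthiness of each main process.
for name, (p, c) in processes.items():
    print(name, round(uncertainty(p), 3), round(loss(p, c), 3), round(trustworthiness(p, c), 3))

# Steps 4 and 5: the whole CSCP (gamma) pools all risk factors.
all_p = [x for p, _ in processes.values() for x in p]
all_c = [x for _, c in processes.values() for x in c]
print("CSCP (gamma)", round(trustworthiness(all_p, all_c), 3))
```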

5. Case Study and Analysis

5.1. Case Study

Y is a small-to-medium-sized software company mainly engaged in mobile applications (apps) and SaaS services in a cloud computing environment. In different product development processes, it suffers occasional failures or low-quality products, which may be attributed to various causes. In the case study, the trustworthiness of the company’s service development process was calculated, providing a reference for its follow-up process improvement work. Because the service development documents are confidential, a questionnaire containing the 33 risk factors of the model in Figure 2 was designed instead. The frequency and extent of risk occurrence and loss included in the questionnaire were scored according to Table 2 and Table 3. A total of 15 employees of the company participated in the survey, covering a variety of roles including systems analysts, designers, developers, testers, maintainers, managers, and trainers. The participants graded anonymously, based on the analysis reports of failed products and their actual work experience, and the scores were used as the original data for the subsequent calculation.
The specific calculation steps are as follows:
Step 1: Collect the questionnaires and count the scoring results. The results are shown in Table 4.
Step 2: Calculate the risk frequency P(αj) and the loss degree weight C(αj) of each risk factor according to the following formula (applied to the frequency scores and to the loss scores respectively); the results are shown in Table 5:

$$P(\alpha_j) = \frac{\sum_{i=1}^{5} (\omega_i \times k_i)}{n}$$
where ωi is the weight value of an influencing factor in Table 2 and Table 3, ki is the number of people who chose that weight value for the factor, and n is the total number of people who participated in the questionnaire survey; here n = 15.
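As a small illustration (the level weights and respondent counts below are made up, since Tables 2 and 3 are not reproduced here), the weighted average can be computed as follows:

```python
def questionnaire_score(level_weights, counts, n):
    """Weighted average score: sum(w_i * k_i) / n, where k_i respondents chose level i."""
    assert sum(counts) == n, "every respondent must choose exactly one level"
    return sum(w * k for w, k in zip(level_weights, counts)) / n

# Hypothetical five-level weights and the choices of the 15 respondents.
print(questionnaire_score([0.1, 0.3, 0.5, 0.7, 0.9], [2, 4, 5, 3, 1], n=15))  # 0.46
```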
Step 3: According to Figure 2, calculate the entropy weight coefficient P(βi, αj) of each main process. Then substitute each P(βi, αj) into Equations (4), (7) and (8) respectively and calculate the risk uncertainty U(βi) and loss degree L(βi) of each main process. The results are shown in Table 6.
Step 4: Using Equation (5), normalize the occurrence frequency P(αj) of the risk factors and obtain the entropy weight coefficient P(γ, αj) of the CSCP. Then substitute P(γ, αj) and C(αj) into Equations (5) and (9) respectively, and calculate the risk uncertainty U(γ) and loss degree L(γ). The results are shown in Table 6.
Step 5: To simplify the calculation, and with the consent of Company Y, set the trustworthiness coefficients k1 and k2 to 1. Then substitute the uncertainty degree U(βi) and loss degree L(βi) of the risk of each main process into Equation (2) to obtain the trustworthiness T(βi) of each main process. Substituting the uncertainty degree U(γ) and loss degree L(γ) of the CSCP into Equation (2), the trustworthiness T(γ) of the CSCP is obtained. The results are shown in Table 7.

5.2. Analysis

From Table 6 and Table 7, the final calculation results were summarized, as shown in Figure 3. The proposed method is mainly designed to measure the trustworthiness of the software construction process within an organization, with the aim of helping the organization identify internal weaknesses and thereby improve and optimize the construction process. The construction process of Y Company was analyzed mainly from three dimensions: trustworthiness, uncertainty and risk expectation.
(1) Process trustworthiness analysis. Figure 3 shows that T(β2) < T(β1) < T(β5) < T(γ) < T(β3) < T(β4). In the running process and receiving process, the user acquires and uses the service. The trustworthiness of these two processes was higher than that of the whole process T(γ), showing better quality, stability and safety than the other three processes; hence, these two processes were not responsible for the failure of the projects. The maintenance process was not the cause of project failure either, since its trustworthiness is close to T(γ). All three of these processes take place after product development and delivery. It was therefore concluded that the processes of user acceptance, product operation and maintenance were not the causes of project failure; rather, the development process and analysis process, which had the lowest trustworthiness, were. In the future, Y Company should carefully analyze the existing problems in the development and analysis processes, strengthen process management and improve the professional competence of personnel in these two processes. The proposed method can then be used to conduct a more in-depth analysis of the specific problems.
(2) Process uncertainty analysis. The results show U(β1) < U(β3) < U(β4) < U(β5) < U(β2) < U(γ). The uncertainty of process risk refers to the probability distribution of risk factor occurrence and reflects the difficulty of risk control: the higher the uncertainty, the less obvious the cause of risk and the more difficult the maintenance and control of risk. Y Company exhibited low uncertainty in the analysis process and receiving process, indicating that the risk factors of these two processes were relatively obvious and well controlled; in other words, when a risk arises in these two processes, its cause can be found clearly and directly. By contrast, the risk factors in the maintenance process and development process remained unclear, with a relatively even distribution, so it was difficult to quickly identify the causes of problems in these two processes.
(3) Loss expectation analysis. It was observed that L(β4) < L(β3) < L(γ) < L(β5) < L(β2) < L(β1). In other words, risks in the running process and receiving process tended to cause the lowest loss. This can be explained by the fact that the main participants in these two processes are users, and most of the problems are caused by poor user operation and management, so these processes had only a small impact on project failure for Y Company. By contrast, problems in the development process and analysis process were more likely to cause large losses; in particular, a problem in the analysis process may lead to the failure of the whole project. Thus, more attention should be paid to the analysis process.
The company had hired professional evaluation agencies to assess its internal management and research and development levels, using different evaluation dimensions and methods. Their results were consistent and found major defects in the company’s requirement analysis and definition process and in its development process, which is in line with the results of this paper. In addition, during the questionnaire survey, communication with the company’s internal staff and relevant leaders revealed that the company did not pay enough attention to preliminary requirements research, and that its development team was very young and changed its members frequently. On the other hand, it maintained relatively standardized regulation of sales-related and other management, which is also consistent with the results of this paper. In terms of methodology, information entropy theory serves as a measurement tool, and research on measuring product quality and risk with it has achieved positive results. Therefore, applying information entropy theory and methods to process measurement in order to detect weaknesses and improve the construction process is feasible and of both theoretical and practical importance.
Based on the frequency of problems occurring in the sub-processes of the CSCP and the resulting degree of loss, this paper calculates the trustworthiness of the cloud service construction process. Any change of P(αj) and C(αj) in a sub-process changes the trustworthiness of its parent process and is reflected in the final trustworthiness results, which makes it convenient for users to discover problematic links and to measure the effects of improvement. However, this paper mainly calculates the relative trustworthiness of an enterprise’s internal processes, which suits enterprises that have localized problems but do not know where they are; the case company is a typical example, and such enterprises account for the vast majority of SMEs. For enterprises whose overall level is relatively balanced, or those whose process level needs to be rated on an absolute scale, additional data would have to be introduced to establish a measurement standard if our method were chosen.

5.3. Comparisons

5.3.1. Comparison with AHP

The Analytic Hierarchy Process (AHP) is an evaluation and decision-making method that combines qualitative and quantitative analyses and can be applied to multi-objective, multi-element, multi-level problems, which the CSCP trustworthiness measurement problem is. AHP can therefore be used to measure a specific CSCP. However, AHP first requires an entire evaluation system to be established: in order to find the cause of a problem, it is necessary to comprehensively identify the factors that affect it. In addition, the AHP method involves multi-person scoring and calculation with corresponding consistency checks, but the measurement results remain highly subjective because of the varied environments and statuses of the participants. Moreover, if multiple processes, or the process as a whole, are to be measured, each sub-process must be analyzed separately to find its influencing factors, and the relationships between sub-processes must also be considered. As the entire development process contains many sub-processes, comprehensive measurement requires a large amount of work, and incomplete or incorrect measurement compromises the trustworthiness of the results. AHP can be applied to other processes, but the resulting measurement models are poorly transferable between the construction, management, support and other processes of different organizations, so the influencing factors must be re-analyzed and re-selected each time. Further, as the participants are varied and highly subjective, the measurement results may be quite unstable.
In this paper, the CSCP framework is based on ISO/IEC 12207. After analyzing and summarizing many cloud service construction processes, the management and support processes of ISO/IEC 12207 were split, refined and distributed into each main process, so the framework basically covers the entire process of existing cloud service development. In specific applications, the required processes and sub-processes can be selected and measured within the CSCP framework; alternatively, the selection can be skipped and all the processes in the entire CSCP framework measured directly (irrelevant processes are simply scored zero), so the framework is highly compatible and adaptable. Furthermore, the method proposed in this paper requires only two input parameters (risk occurrence frequency and loss degree), which are few in number and easy to obtain. In this paper, the risk occurrence frequency and loss degree data could only be acquired through questionnaires because the underlying data are confidential; in practice, if the relevant data of previous projects are available, the two input parameters can be obtained by data mining and similar means, which further ensures the objectivity of the method.

5.3.2. Comparison with CMM/CMMI

CMM/CMMI is a set of models and methods for the management, improvement and evaluation of software processes. It stipulates the characteristics of the various levels of software development process capability and the goals for improvement. It can help software companies manage and improve software engineering processes and enhance their development and improvement capabilities, thereby developing high-quality software on time and within budget. CMM/CMMI evaluation requires the following: a dedicated evaluation team; an established CMM/CMMI evaluation system model to classify the development process into key processes and identify the main activities for later steps; maturity questionnaires designed around those processes and activities; on-site visits; generated survey lists; and evaluation and conclusions. Therefore, CMM/CMMI is a complex engineering system that requires comprehensive cooperation from all parts of the company and a large amount of capital, manpower, material resources and time. It is not suitable for small and medium-sized enterprises or for enterprises with weak management.
The small size of cloud services makes their development and management processes flexible, but CMM/CMMI is a large-scale measurement system that does not suit small-scale software services: it requires heavy investment and high cost, is highly complex, and adapts poorly to small-scale processes. CMM/CMMI is a comprehensive and complete method for judging whether an organization’s development, management and other processes meet the relevant benchmark requirements. Small and medium-sized enterprises (SMEs), especially cloud service development organizations, are more concerned about the weaknesses and risks in their own development and management processes. As factors like personnel and environment change, their development and management processes should be evaluated frequently. Therefore, SMEs need a measurement method that is simple, easy, low-cost and low-investment and that can be conducted frequently. As an enterprise grows, the CMM/CMMI method may be introduced according to actual needs; it can measure and evaluate the overall processes and thereby improve them comprehensively. For daily management and development, however, enterprises need a small-scale method for self-testing and self-evaluation, which is what the research in this paper targets.

5.3.3. Summary

All three methods can measure the quality of the cloud service construction process. However, they focus on different areas, with significant differences in usability, objectivity, versatility, functionality and cost. To facilitate the description, the method in this paper is abbreviated as the information entropy method (IEM).
Usability (U(x)) is mainly reflected in data acquisition difficulty (DAD), operational process complexity (OPC) and readability of results (RR). Based on the descriptions in Section 5.3.1 and Section 5.3.2, Table 8 is formed. In brief, the usability of a method is inversely correlated with its DAD and OPC. Thus, the comparison of the three methods in usability is: U(IEM) > U(AHP) > U(CMM/CMMI).
Objectivity (O(x)) is mainly determined by the objectivity of the data used for measurement, of the model and of the operational process. For the IEM method, the input data are the frequencies of risks in previous projects and the degrees of loss caused by risks in different processes, all of which can be summarized from historical data; at the same time, the model is based on an international standard and development practice, and its operational process is essentially a computational process, so it has the highest objectivity. CMM/CMMI is assessed by external evaluators and possesses both a mature measurement system and established methods for acquiring the necessary data, but the data acquisition and operational processes require the coordination of personnel throughout the whole process, so its objectivity is inferior to that of IEM. For AHP, the input data come entirely from grading by personnel; despite its consistency-checking mechanism, its objectivity is still greatly affected. Moreover, its process measurement model must be established ad hoc by the measurement personnel, and both the reference frame and the modeling process are easily affected by human factors, so its objectivity is lower than that of CMM/CMMI. The comparison of the three methods in objectivity is as follows: O(IEM) > O(CMM/CMMI) > O(AHP).
In terms of versatility (V(x)) and functionality (F(x)), CMM/CMMI is a mature international measurement standard: it can not only help find weak links and analyze their causes but also grade an organization’s capability maturity, so it suits a wide variety of software development institutions and organizations. The IEM method is a lightweight method aimed at small and medium-sized enterprises; it can discover the weak links in development and management but cannot determine capability maturity grades. AHP is a universal method, but in the field of process measurement a process model must first be established for each institution; because the development and management processes of different institutions differ greatly, the measurement model of one institution is unlikely to apply to others, so AHP’s versatility in process measurement is limited. AHP ultimately assigns grades to process trustworthiness, but what SMEs pay close attention to, in order to reduce risks and guarantee quality, is not which grade their management level reaches but where their weak links are. The comparison of the three methods in versatility and functionality is as follows: V(CMM/CMMI) > V(IEM) > V(AHP), F(CMM/CMMI) > F(IEM) > F(AHP).
In terms of cost (Cost(x)), CMM/CMMI requires professional measurement teams to participate in the whole process of development and management, which involves a large amount of capital, time and energy. The AHP method needs experts to establish, grade and compute the measurement model, which also requires a certain investment. For the IEM method, the input is based on historical data or experience, and measurement is realized through statistical analysis of the frequencies and loss degrees of previous risks; the later use of data mining can further reduce the cost and time of data collection. Thus, it is the method with the lowest cost. The comparison of the three methods in cost is as follows: Cost(CMM/CMMI) > Cost(AHP) > Cost(IEM).
Based on the comparisons above, we get the specific results which are shown in Table 9, Table 10 and Table 11.
SMEs are flexible in software development within the cloud computing environment, but they lack a set of effective methods for measuring process management. Traditional self-evaluation through project summary meetings fails to find the real process problems in most cases. CMM/CMMI, AHP and other methods may deliver good and comprehensive performance, but they are difficult to operate and most of them require additional cost, so they cannot be used frequently by SMEs. This paper aims to provide a simple, easy-to-use, low-cost process self-test method for cloud service and even general software service development organizations. Weaknesses are found through regular self-testing and self-evaluation, thereby continuously improving development and management processes. The method has significant advantages in terms of operation and cost for organizations with variable organizational structures.

6. Conclusions

With the advent and popularization of cloud computing, cloud services have gradually become the main computing mode. With increasingly more cloud services in the market, their trustworthiness becomes the key to product selection and largely depends on the process of constructing cloud services. Hence, the trustworthiness measurement of CSCP becomes crucial for the trustworthiness of cloud services. It can help cloud service developers identify the weaknesses in the development process and the main risk factors causing losses. Eventually, it benefits the organizations by improving and optimizing the development process of cloud services, which is conducive to enhancing the quality of cloud services.
The CMM and CMMI methods are costly, long-term and extremely complicated for small and medium-sized organizations. This paper presents an objective, quantitative, simple and effective method for CSCP trustworthiness measurement. Firstly, combined with the concepts and characteristics of ISO/IEC 12207 and the CSCP, the main processes of cloud service construction and their influencing factors were obtained, and a hierarchical model of trustworthiness measurement for the CSCP was established. Then the trustworthiness of the CSCP was defined, and the uncertainty and risk loss expectation of the CSCP were calculated; based on these, the trustworthiness of the CSCP was computed. Finally, through a case study, the trustworthiness of a company’s development process was calculated, which verified the feasibility and correctness of the proposed method.
A simple and feasible method was proposed here to measure the trustworthiness of the CSCP. However, it is only helpful for process measurement and improvement within a specific organization; because of differences in data sources and influencing factors, horizontal comparisons between different organizations require further research. Although the participants hold a variety of roles in the company, the objectivity of the data used in the calculation is still affected to a certain extent. This can be improved by obtaining the frequency of the risk factors and the degree of loss they cause from past analysis reports or operational data of the system.

Author Contributions

Conceptualization, G.T.; Formal analysis, G.T., L.T. and J.R.; Writing—original draft, G.T. and Y.M.; Writing—review & editing, L.T. and Y.M.

Funding

This work was supported by the National Natural Science Foundation of China (Nos. 61662085, 61763048), the Yunnan Science and Technology Innovation Team Project “Data-driven Software Engineering Science and Technology Innovation Team” (No. 2017HC012), the Innovation and Promotion of Education Foundation Project of the Science and Technology Development Center of the Ministry of Education (No. 2018A01042), the Science and Technology Foundation of Yunnan Province (No. 2017FB095), and the Yunnan Province Applied Basic Research Project (No. 2016FD060).

Acknowledgments

The authors would like to thank the anonymous reviewers and the editors for their suggestions.

Conflicts of Interest

The authors declare there is no conflict of interest regarding the publication of this paper.

References

  1. Ding, Y.; Wang, H.; Shi, P.; Wu, Q.; Dai, H.; Fu, H. Trusted cloud service. Chin. J. Comput. 2015, 38, 133–149. [Google Scholar]
  2. Münch, J.; Armbrust, O.; Kowalczyk, M.; Soto, M. Software Process Definition and Management; Springer: Berlin/Heidelberg, Germany, 2012; ISBN 9783642242908. [Google Scholar]
  3. Tarhan, A.; Giray, G. On the use of ontologies in software process assessment: A systematic literature review. In Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering, Karlskrona, Sweden, 15–16 June 2017. [Google Scholar]
  4. Li, B.; Wang, H.; Li, Z.; He, K.; Yu, D. Software complexity metrics based on complex networks. Acta Electron. Sin. 2006, 34, 2371–2375. [Google Scholar]
  5. Paulk, M.C.; Curtis, B.; Chrissis, M.B.; Weber, C.V. Capability maturity model, Version 1.1. IEEE Softw. 1993, 10, 18–27. [Google Scholar] [CrossRef]
  6. Regan, G.O. Capability Maturity Model Integration; Springer: Cham, Switzerland; Berlin, Germany, 2014; ISBN 978331906105. [Google Scholar]
  7. Solingen, R.V.; Basili, V.; Caldiera, G.; Rombach, H.D. Goal Question Metric (GQM) Approach; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2002; ISBN 9780471377375. [Google Scholar]
  8. Ramon, H.D. Practical software measurement: Objective information for decision makers. Softw. Qual. Prof. 2003, 3, 68–70. [Google Scholar]
  9. Statz, J. Practical software measurement. In Proceedings of the 21st international conference on Software engineering, Los Angeles, CA, USA, 16–22 May 1999; pp. 667–668. [Google Scholar]
  10. Florac, W.A.; Carleton, A.D. Measuring the Software Process: Statistical Process Control for Software Process Improvement; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1999. [Google Scholar]
  11. Caldiera, V.; Rombach, H.D. Goal Question Metric Paradigm, Encyclopedia of Software Engineering; Wiley: Hoboken, NJ, USA, 1994. [Google Scholar]
  12. Xing, D.; Liu, Z.; Li, X.; Xu, D.; Zhu, B. Metric-based software process improvement method. In Proceedings of the National Software and Applications Conference, Beijing, China, 1 November 2003; Mechanical Industry Press: Beijing, China, 2003; pp. 374–379. [Google Scholar]
  13. Wang, Q.; Li, M. Measuring and improving software process in China. In Proceedings of the 2005 International Symposium on Empirical Software Engineering, Noosa Heads, QLD, Australia, 17–18 November 2005. [Google Scholar]
  14. Hoffa, C.; Mehta, G.; Freeman, T.; Deelman, E.; Keahey, K.; Berriman, G.B.; Good, J. On the use of cloud computing for scientific workflows. In Proceedings of the Fourth International Conference on Escience, Indianapolis, IN, USA, 7–12 December 2008; IEEE Computer Society: Danvers, MA, USA, 2008; pp. 640–645. [Google Scholar]
  15. Mell, P.M.; Grance, T. Sp 800-145. The Nist Definition of Cloud Computing; Technical Report; National Institute of Standards & Technology: Gaithersburg, MD, USA, 2011. [Google Scholar]
  16. Linthicum, D.S. Cloud Computing and Soa Convergence in Your Enterprise: A Step-by-Step Guide; Addison-Wesley Professional: Boston, MA, USA, 2009; ISBN 9780136009221. [Google Scholar]
  17. Kang, C.; Wei-Min, Z. Cloud computing: System instances and current research. J. Softw. 2009, 20, 1337–1348. [Google Scholar]
Figure 1. Processes in CSCP.
Figure 2. CSCP trustworthiness measurement hierarchical model. (a) Main processes; (b) Analysis processes; (c) Developing processes; (d) Receiving processes; (e) Running processes; (f) Maintenance processes.
Figure 3. Results. (a) Uncertainty of CSCP; (b) Loss Expectancy of CSCP; (c) Trustworthiness of CSCP.
Table 1. Main processes.
Process | Description
Analysis process (β1) | The buyer's activities in acquiring systems, software products or software services, including requirement definition and analysis, tender preparation, contract preparation, and acceptance, etc.
Developing process (β2) | The developer's activities in defining and developing software products, including system requirements analysis, architectural design, detailed design, coding and testing, system integration, software installation, and acceptance, etc.
Receiving process (β3) | The activities of suppliers and acquirers in supplying systems, software or service products, including reviewing requirements, preparing bids, signing contracts, formulating and implementing project plans, conducting reviews and evaluations, and delivering products, etc.
Running process (β4) | The operator's activities in providing computer system services to users in the specified environment, including formulating and implementing operation plans, operational testing, system operation, and providing help and consultation to users, etc.
Maintenance process (β5) | The maintainer's activities in maintaining software and service products, including analysis of problems and changes, implementation of changes, maintenance review and acceptance, software migration, and software retirement, etc.
Table 2. The assignment table of risk frequency P(αj).
Weight | Level | Description
5 | Very high | The risk caused by this factor occurs very frequently and is almost inevitable in practice.
4 | High | The risk caused by this factor occurs frequently and will arise in most cases.
3 | Medium | The risk caused by this factor occurs with moderate frequency and may arise in some cases.
2 | Low | The risk caused by this factor occurs infrequently and will arise only in a few cases.
1 | Very low | The risk caused by this factor occurs very rarely and hardly ever arises in practice.
Table 3. The assignment table of risk loss C(αj).
Weight | Level | Description
5 | Very high | Once this risk occurs, it causes devastating losses.
4 | High | The impact of this risk is significant and the maintenance cost is high.
3 | Medium | The economic losses and impacts caused by this risk are moderate.
2 | Low | The impact of this risk is small and the maintenance cost is low.
1 | Very low | The impact of this risk is negligible and requires little maintenance.
Table 4. Statistical results of P(αj) and C(αj). Each entry is the number of the 15 expert ratings assigned to that weight (columns 1–5).
αj | P(αj): 1, 2, 3, 4, 5 | C(αj): 1, 2, 3, 4, 5
α1 | 0, 0, 1, 12, 2 | 0, 0, 7, 7, 1
α2 | 10, 3, 2, 0, 0 | 9, 6, 0, 0, 0
α3 | 11, 2, 2, 0, 0 | 1, 0, 1, 10, 3
α4 | 0, 6, 8, 1, 0 | 2, 11, 2, 0, 0
α5 | 9, 6, 0, 0, 0 | 6, 2, 0, 2, 5
α6 | 12, 3, 0, 0, 0 | 11, 3, 0, 0, 1
α7 | 0, 1, 5, 8, 1 | 0, 1, 1, 10, 3
α8 | 6, 7, 2, 0, 0 | 11, 4, 0, 0, 0
α9 | 13, 2, 0, 0, 0 | 11, 4, 0, 0, 0
α10 | 1, 1, 7, 5, 1 | 0, 0, 4, 8, 3
α11 | 9, 3, 3, 0, 0 | 0, 0, 2, 3, 10
α12 | 14, 1, 0, 0, 0 | 3, 6, 5, 1, 0
α13 | 3, 4, 7, 1, 0 | 9, 5, 1, 0, 0
α14 | 13, 2, 0, 0, 0 | 11, 3, 0, 0, 1
α15 | 13, 2, 0, 0, 0 | 1, 1, 0, 10, 3
α16 | 13, 2, 0, 0, 0 | 10, 4, 1, 0, 0
α17 | 10, 4, 1, 0, 0 | 11, 4, 0, 0, 0
α18 | 1, 2, 8, 4, 0 | 1, 1, 6, 6, 1
α19 | 6, 8, 1, 0, 0 | 1, 1, 5, 7, 1
α20 | 2, 2, 2, 2, 7 | 3, 5, 4, 2, 1
α21 | 10, 3, 1, 1, 0 | 8, 5, 0, 0, 2
α22 | 14, 1, 0, 0, 0 | 13, 1, 1, 0, 0
α23 | 2, 11, 2, 0, 0 | 0, 1, 9, 5, 0
α24 | 13, 2, 0, 0, 0 | 1, 0, 11, 2, 1
α25 | 13, 1, 1, 0, 0 | 1, 1, 6, 6, 1
α26 | 3, 4, 7, 1, 0 | 6, 1, 1, 6, 1
α27 | 9, 5, 1, 0, 0 | 0, 0, 1, 7, 7
α28 | 9, 5, 1, 0, 0 | 1, 2, 10, 2, 0
α29 | 2, 2, 7, 2, 2 | 10, 5, 0, 0, 0
α30 | 2, 5, 6, 1, 1 | 1, 1, 7, 5, 1
α31 | 6, 7, 1, 1, 0 | 4, 10, 1, 0, 0
α32 | 13, 1, 1, 0, 0 | 0, 0, 1, 1, 13
α33 | 13, 2, 0, 0, 0 | 11, 3, 1, 0, 0
Table 5. P(αj) and C(αj).
αj | P(αj) | C(αj)
α1 | 4.0667 | 3.6000
α2 | 1.4667 | 1.4000
α3 | 1.4000 | 3.9333
α4 | 2.6667 | 2.0000
α5 | 1.4000 | 2.8667
α6 | 1.2000 | 1.4667
α7 | 3.6000 | 4.0000
α8 | 1.7333 | 1.2667
α9 | 1.1333 | 1.2667
α10 | 3.2667 | 3.9333
α11 | 1.6000 | 4.5333
α12 | 1.0667 | 2.2667
α13 | 2.4000 | 1.4667
α14 | 1.1333 | 1.4667
α15 | 1.1333 | 3.8667
α16 | 1.1333 | 1.4000
α17 | 1.4000 | 1.2667
α18 | 3.0000 | 3.3333
α19 | 1.6667 | 3.4000
α20 | 3.6667 | 2.5333
α21 | 1.5333 | 1.8667
α22 | 1.0667 | 1.2000
α23 | 2.0000 | 3.2667
α24 | 1.1333 | 3.1333
α25 | 1.2000 | 3.3333
α26 | 2.4000 | 2.6667
α27 | 1.4667 | 4.4000
α28 | 1.4667 | 2.8667
α29 | 3.0000 | 1.3333
α30 | 2.6000 | 3.2667
α31 | 1.8000 | 1.8000
α32 | 1.2000 | 4.8000
α33 | 1.1333 | 1.3333
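As a cross-check, each value in Table 5 is the weighted average of the fifteen expert ratings recorded in Table 4, using the weights defined in Tables 2 and 3. The following Python sketch reproduces P(α1) and C(α1) from the Table 4 counts; the function name and layout are illustrative only:

    # Weighted average of expert ratings on the 1-5 scale (weights from Tables 2 and 3).
    def mean_rating(counts):
        """counts[k] = number of experts assigning weight k + 1, for k = 0..4."""
        total = sum(counts)                                   # 15 experts in this case study
        weighted = sum((k + 1) * n for k, n in enumerate(counts))
        return weighted / total

    p_counts_alpha1 = [0, 0, 1, 12, 2]    # P(α1) counts from Table 4
    c_counts_alpha1 = [0, 0, 7, 7, 1]     # C(α1) counts from Table 4

    print(round(mean_rating(p_counts_alpha1), 4))   # 4.0667, as P(α1) in Table 5
    print(round(mean_rating(c_counts_alpha1), 4))   # 3.6, matching C(α1) = 3.6000 in Table 5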
Table 6. U(βi), L(βi) and U(γ), L(γ).
Process | Uncertainty (U) | Loss Expectancy (L)
Analysis process β1 | 0.9434 | 3.1273
Developing process β2 | 0.9726 | 3.0945
Receiving process β3 | 0.9627 | 2.5051
Running process β4 | 0.9649 | 2.3534
Maintenance process β5 | 0.9719 | 2.7832
CSCP γ | 0.9727 | 2.7365
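For readers implementing the method, the uncertainty values U(βi) in Table 6 are derived from information entropy over the risk factors of each process, and the loss expectancy values L(βi) from the corresponding risk losses; the exact formulas are given in the methodology section and are not repeated here. The sketch below shows only the generic pattern of a normalized Shannon entropy over a group of risk-frequency scores; the grouping of factors α1–α7 into one process and the use of these particular P(αj) values are assumptions made for illustration, not the paper's exact computation:

    import math

    def normalized_entropy(scores):
        """Normalized Shannon entropy in [0, 1] of a set of positive scores,
        e.g. the P(αj) values of the factors belonging to one process."""
        total = sum(scores)
        probs = [s / total for s in scores if s > 0]
        h = -sum(p * math.log2(p) for p in probs)
        return h / math.log2(len(scores))     # divide by the maximum entropy log2(n)

    # Illustrative factor group (hypothetical assignment of α1-α7 to one process).
    print(round(normalized_entropy([4.0667, 1.4667, 1.4, 2.6667, 1.4, 1.2, 3.6]), 4))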
Table 7. T(βi) and T(γ).
Process | Trustworthiness (T)
Analysis process β1 | 1.3798
Developing process β2 | 1.3514
Receiving process β3 | 1.4380
Running process β4 | 1.4613
Maintenance process β5 | 1.3882
CSCP γ | 1.3935
Table 8. Comparison in Usability.
Comparison | Results
DAD | IEM < AHP < CMM/CMMI
OPC | IEM = AHP < CMM/CMMI
RR | IEM ≥ AHP > CMM/CMMI
Table 9. Comparison with AHP and CMM/CMMI (1).
Comparison | Results
Usability | IEM > AHP > CMM/CMMI
Objectivity | IEM > CMM/CMMI > AHP
Versatility | CMM/CMMI > IEM > AHP
Functionality | CMM/CMMI > IEM > AHP
Cost | CMM/CMMI > AHP > IEM
Table 10. Comparison with AHP and CMM/CMMI (2).
Method | Application Scenario
IEM | Self-assessment to find relative weaknesses for continuous improvement
AHP | Self-assessment to support decision making
CMM/CMMI | Third-party assessment to set a benchmark and judge whether the organization meets the standard
Table 11. Comparison with AHP and CMM/CMMI (3).
Method | Usability | Objectivity | Versatility | Functionality | Cost
IEM | High | High | Modest | Modest | Low
AHP | Modest | Low | Low | Low | Modest
CMM/CMMI | Low | Modest | High | High | High
