
1 Introduction

Security transparency commonly refers to the degree of visibility into security policy and operations that a Cloud Service Provider (CSP) offers to the Cloud Service Consumer (CSC) [25]. Recent research has highlighted the need for security transparency and mutual auditability as salient factors for sustaining the current momentum of cloud services [1–4]. This need stems mainly from the fact that the data and processes used for storing or treating the CSC's information are geographically removed from the CSC's premises. One consequence is the de facto devolution of security-related responsibilities to the CSP, whose capability to effectively safeguard data and processes may be weak or simply mistrusted by the CSC. In the context of outsourced services such as the cloud, the due diligence of the CSP in promptly informing its clients in the event of a security compromise on its infrastructure can be decisive for the CSC to minimize its exposure to risk, for instance by ceasing to use the service. From a CSP's perspective, however, reporting on security problems is not always welcome, as it can tarnish the provider's reputation. As a result, the fear of security shortcomings and the lack of visibility into a salient security matter they no longer fully oversee have been a hindrance to the broader adoption of cloud-based services, especially by companies handling security-critical data, such as those in the banking and financial sector.

While researchers in service-oriented architecture and network security, along with certification bodies, have been very active in proposing a wide range of initiatives in response to the problem (including virtual machine monitoring [5–9]; service level agreement specification and monitoring [11–21]; certification and audits [1, 15, 17]), similar efforts remain to be seen from the software engineering community, and from the secure system engineering community in particular. This contrasts with the leading role played by that community in the early 2000s on the salient issue of integrating security considerations early in the development stages of software to ensure their smooth and efficient integration into the future software system [26–29]. The creation of cloud services, private and public alike, is mainly undertaken using cloud development and management software such as OpenStack (https://www.openstack.org/) and OpenNebula (http://opennebula.org/). Unfortunately, none of those tools supports the integration of security transparency concepts. As such, most efforts devised in response to the issue have been ad hoc and primarily serve the purposes of the CSP. For instance, the terms of SLA clauses are often those the CSP is certain it can abide by, and fail to comprehensively encompass the CSC's specific expectations in terms of security transparency. Demonstrating that a cloud service meets given security transparency requirements can benefit both CSC and CSP: it offers the CSP a way to distinguish itself from competitors, while giving the CSC a baseline for an informed selection of a CSP for the handling of security-critical data and processes. In that vein, this paper argues that the cloud model could widen its appeal if concepts related to security transparency were integrated into the conceptualisation and development of cloud services. This would help capture, on the one hand, the CSC's requirements in terms of security transparency and, on the other hand, a design solution specifying the means by which the CSP would practically meet such requirements. In other words, the engineering of cloud services that integrates security transparency has to be interactive, allowing each prospective CSC to first specify its security transparency needs, and the CSP to then model the capabilities available to meet those expectations, before the CSC finally makes a decision on whether to adopt the service. Alternatively, the method and resulting platform could be a tool in the hands of a cloud broker tasked with selecting the CSPs with the most adequate capabilities once the CSC has specified its requirements.

This paper is organized as follows: Sect. 2 reviews existing efforts on security transparency and highlights their shortcomings as ad hoc rather than built-in initiatives. In Sect. 3 we provide some initial thoughts on a roadmap for fully integrating security transparency considerations into the engineering of cloud-based services. Sect. 4 concludes the paper.

2 Enabling Security Transparency in the Cloud

There are several works in the literature that focus on enhancing the trust relationship in the cloud environment. These works have mainly taken the form of virtual machine monitoring; certification; audits; and monitoring of Service Level Agreements (SLA). We performed a systematic literature review using these keywords, along with "audit" and "trusted cloud platform", to identify the relevant literature, research projects and industry practice. We used the search engines of the following sources to extract the literature: Google Scholar, Elsevier, IEEE Xplore, ACM Digital Library, EU projects, and Science Direct. Primary studies were selected based on the keywords and by reviewing abstracts and titles. The initial selection was then further refined based on two inclusion criteria, i.e., works that focus on transparency issues in the cloud, and studies that consider techniques, processes and tools for managing transparency and a trusted cloud environment. Our findings are given below.

2.1 The Usage of Trusted Cloud Computing Platforms and Monitoring of Virtual Machines

A myriad of initiatives focusing on the usage of Trusted Cloud Computing Platforms (TCCP) and the monitoring of virtual machines have emerged as potential solutions to the issue of security transparency in the cloud.

Santos et al. [5] proposed an architecture for a trusted platform called the Trusted Cloud Computing Platform (TCCP) that purports to ensure the confidentiality and integrity of the data and computation undertaken by the provider. Using a program associated with the TCCP, a customer may detect whether data or computation has been tampered with or accessed, even by the provider, and may subsequently decide to terminate a Virtual Machine (VM) upon noticing any abnormality. In particular, the TCCP needs to guarantee that the VM is launched on a trusted node and that the system administrator is unable to inspect or tamper with the initial VM state as it traverses the path between the user and the node hosting it. The TCCP approach builds upon a traditional trusted platform, such as TERRA [6], to ensure integrity and confidentiality in the context of multiple hosts. Humberg et al. [30] propose an ontology-driven approach to identify the regulations relevant to the compliance requirements of a trusted cloud-based system. The regulatory ontology is based on rules, rule elements, situations and constraints, where a constraint checks a specific situation using rule sets. The proposed process consists of three steps: identifying relevant rules, mapping business processes to rules, and finally verifying the rules. A tool is presented to demonstrate the execution of the process. Wenzel et al. [31] consider security and compliance analysis of outsourced services in the cloud computing context, focusing on risk analysis and compliance issues of business processes that are planned to be outsourced.
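To make the attestation idea concrete, the following is a minimal sketch of the kind of check a TCCP-style client could perform before trusting a VM launch, assuming a whitelist of trusted node measurements obtained out of band; the names, placeholder digests and data shapes are our own illustrative assumptions, not the actual TCCP protocol.

```python
import hashlib

# Hypothetical whitelist of node measurements (platform hashes) deemed
# trustworthy, e.g. distributed out of band by a trusted coordinator.
TRUSTED_NODE_MEASUREMENTS = {"a1b2c3...": "node-image-v1"}  # placeholder digest

def verify_vm_launch(attested_node_measurement: str,
                     initial_vm_state: bytes,
                     attested_vm_digest: str) -> bool:
    """Accept a VM launch only if (1) the hosting node attests a known
    trusted measurement and (2) the initial VM state matches the digest
    attested at launch time, i.e. it was not tampered with in transit."""
    if attested_node_measurement not in TRUSTED_NODE_MEASUREMENTS:
        return False  # VM was not launched on a trusted node
    return hashlib.sha256(initial_vm_state).hexdigest() == attested_vm_digest
```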

Another initiative that uses the concept of a trusted platform is the Private Virtual Infrastructure (PVI) proposed by Krautheim [7]. Krautheim suggests a means of enabling monitoring in the cloud by combining the Trusted Platform Module (TPM) with a Locator Bot that pre-measures the cloud for security properties, securely provisions the data-centre in the cloud, and provides situational awareness through continuous monitoring of cloud security. In this approach, security appears as a shared responsibility between the provider and the consumer. Thus, the SLA between the client and the provider is critical to defining the roles and responsibilities of all parties involved in using and providing the cloud service.

The authors in [8] argued that the dependability of cloud services may be attained through the quantification of security for compute-intensive workload clouds, so as to facilitate the provision of quality-of-service assurance. They subsequently defined seven security requirements: workload state integrity, guest OS integrity, zombie protection, denial-of-service attacks, malicious resource exhaustion, platform attacks, and backdoor protection. Unfortunately, the paper does not provide any evidence of the claimed effort towards the quantification of security. Moreover, it remains unclear how information relating to those security requirements may be conveyed to provider and consumer alike.

De Chaves et al. proposed an initiative for private cloud management and monitoring called PCMONS [9]. The authors argued that, despite the peculiarities of cloud services compared to traditional legacy systems, existing tools and methods for managing networks and distributed systems can be reused in cloud computing management. PCMONS is based on a centralised architecture with the following features [9]: (a) a Node Information Gatherer, responsible for gathering local information on a cloud node; (b) a Cluster Data Integrator, an agent that gathers and prepares the data for the next layer (the Monitoring Data Integrator); (c) a Monitoring Data Integrator that gathers and stores cloud data in the database for historical purposes and provides such data to the Configuration Generator; (d) a Virtual Machine (VM) Monitor that sends useful data from the VM to the monitoring system; (e) a Configuration Generator for retrieving information from the database; (f) a Monitoring Tool Server that receives monitoring data from different resources (e.g., the VM Monitor); and finally (g) a database storing the data needed by the Configuration Generator and the Monitoring Data Integrator. Given that PCMONS was developed to respond to the management needs of private clouds, the need to establish mutual trust between provider and consumer does not arise.
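The layered flow of data in such an architecture can be pictured with the following minimal sketch, which wires a Node Information Gatherer, a Cluster Data Integrator and a Monitoring Data Integrator together; the stubbed metrics, interfaces and in-memory "database" are simplifying assumptions of ours, not PCMONS code.

```python
from typing import Dict, List

class NodeInformationGatherer:
    """Gathers local information on a single cloud node."""
    def gather(self, node_id: str) -> Dict:
        return {"node": node_id, "cpu_load": 0.42, "running_vms": 3}  # stubbed metrics

class ClusterDataIntegrator:
    """Aggregates node-level reports and prepares them for the next layer."""
    def integrate(self, reports: List[Dict]) -> Dict:
        avg = sum(r["cpu_load"] for r in reports) / len(reports)
        return {"cluster_cpu_load": avg, "node_reports": reports}

class MonitoringDataIntegrator:
    """Stores cluster data for historical purposes and serves it to the
    Configuration Generator."""
    def __init__(self) -> None:
        self.history: List[Dict] = []  # stand-in for the database
    def store(self, cluster_data: Dict) -> None:
        self.history.append(cluster_data)

# Bottom-up wiring of the monitoring pipeline:
gatherer = NodeInformationGatherer()
integrator = ClusterDataIntegrator()
store = MonitoringDataIntegrator()
store.store(integrator.integrate([gatherer.gather(n) for n in ("node-1", "node-2")]))
```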

Shao et al. have introduced a runtime monitoring approach for the cloud, concentrating on Quality of Service (QoS) aspects [10]. Their model, RMCM (Runtime Model for Cloud Monitoring), uses multiple monitoring techniques to gather data from the cloud. However, their approach is generic, and security monitoring is not specifically discussed.

Overall, it can be said that the research community has moved from debating whether the cloud is mere hype to devising tangible initiatives for resolving one of its most salient issues, namely security. Unfortunately, current efforts on trusted cloud computing platforms and the monitoring of virtual machines have mainly been driven by the need to foster better security management for the CSP, rather than by the complexities of multi-party trust considerations (particularly those related to security) and the ensuing need for mutual auditability. In fact, the monitoring of VMs is meant to be conducted by and for the CSP.

2.2 Security Transparency Through SLA Management

For Rak et al., the mutual trust between a provider and a customer should be considered only in the context of SLA management [11]. Using a cloud-oriented API derived from the mOSAIC project (http://www.mosaic-project.eu/), the authors built an SLA-oriented cloud application that enables the management of security features related to user authentication and authorization for an IaaS cloud provider. This gives the customer the opportunity to select, from among a number of security requirement templates, the one appropriate to the nature of his/her application, before the provider sets up the configuration of the concerned node accordingly. As noted by the authors, the consideration of SLAs in the management of cloud security provides the consumer with formal documentation about what he/she will effectively obtain from the service. Meanwhile, from the provider's point of view, SLAs are a way to have a clear and formal definition of the requirements that the application must respect. However, the initiative by Rak et al. [11] does not go far enough to incorporate means for monitoring and reporting to the consumer on the fulfilment of such SLAs. An extension of the work of Rak et al. in the context of the EU FP7 project SPECS (http://specs-project.eu/) considered the provision of a platform for security services based on SLA management.
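The template-based interaction just described can be pictured with the following minimal sketch, in which the customer picks a security requirement template and the provider derives a node configuration from it; the template names and fields are purely illustrative assumptions, not the mOSAIC or SPECS APIs.

```python
# Hypothetical security requirement templates offered to the customer.
TEMPLATES = {
    "basic-web-app": {"authentication": "password", "tls": True},
    "sensitive-data": {"authentication": "two-factor", "tls": True,
                       "encryption_at_rest": True},
}

def configure_node(template_name: str) -> dict:
    """Derive a node security configuration from the template the
    customer selected, as the provider would before provisioning."""
    template = TEMPLATES[template_name]
    return {"security": dict(template), "audit_logging": True}

print(configure_node("sensitive-data"))
```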

The SLA@SOI project [12] also followed the path of SLA management in service-oriented architectures, which include cloud technology. Monitoring SLAs expressed in the SLA specification language of SLA@SOI requires translating them into operational monitoring specifications (i.e., specifications that can be checked by a low-level monitor plugged into the SLA@SOI framework). SLA monitoring in SLA@SOI relies on EVEREST+ [14], a general-purpose engine for monitoring the behavioural and quality properties of distributed systems, based on events captured from those systems at runtime. The properties that EVEREST can monitor are expressed in a language based on Event Calculus [15], called EC-Assertion. Similarly, Chazalet discusses SLA compliance checking in cloud environments, using JMX (Java Management Extensions) technology in a prototype implementation [16]. This checking approach separates concerns related to probes, information collection and monitoring, and contract compliance checking.
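To give a flavour of the properties such monitors check, the following is a simplified Event-Calculus-style rule (our own illustrative notation, not actual EC-Assertion syntax) stating that every client request must be answered within $d$ time units:

```latex
% Illustrative Event-Calculus-style SLA rule: every request event must be
% followed by a matching response within d time units.
\mathit{Happens}(\mathit{request}(c, op), t_1) \;\Rightarrow\;
  \exists t_2 .\; \mathit{Happens}(\mathit{response}(op, c), t_2)
  \;\wedge\; t_1 \le t_2 \le t_1 + d
```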

The negotiation of SLAs in the context of federated clouds has also been the focus of research initiatives. Such initiatives range from simulation frameworks purporting to help select the optimal combination of cloud services that best meets SLA requirements [22], to the optimal negotiation of SLAs using multi-objective genetic algorithms [23].

In a similar way, some recent work on accountability in the cloud has started to emerge through projects such as A4Cloud (http://www.a4cloud.eu), whereby researchers are striving to devise models that can help put in place the set of mechanisms that would ensure cloud providers are held accountable should there be a breach of SLA or a security incident that can be traced back to a lapse in their security. In the context of A4Cloud, the concept of transparency in the broader sense is treated as an attribute of accountability [17]. Readers interested in further comprehending the scope and diversity of existing efforts on SLA-based monitoring of cloud security can refer to the taxonomy of Petcu [21].

The major problem with the adoption of SLA management as a means to enhance security transparency lies primarily in its practicality. Indeed, the academic notion of an SLA appears to be far more extensive than its real-world counterpart. From our own experience in approaching CSPs on the issue, the content of such documents is most often restricted to aspects such as allocated bandwidth and storage capacity, while the only security aspect included often relates to service availability. Clearly, the items included in those specifications are the ones the companies were confident they could deliver on. Their argument regarding the most pressing and challenging issues, such as security, was that stringent and redundant mechanisms were in place to guarantee them, as witnessed by some of their security certifications.

2.3 Security Certification and Audits

In their effort to reduce the fears of CSCs and to distinguish themselves from competitors by promoting their service as secure, CSPs have often turned to certification as a way of swaying CSCs. Reasons for this include the lack of metrics, and sometimes of resources, on the CSC's side to adequately assess cloud services. As such, certification by a third-party organization has been hailed by proponents as the ultimate means of promoting trust and transparency in the cloud ecosystem, which is key to its wider acceptance [1]. For instance, certification to ISO/IEC 27001 is valued in the industry, as it provides a holistic framework for appreciating how well a company manages its information security. The standard emphasizes the need for organisations to have a clear means of understanding their security needs. Additionally, it is meant to assist them in implementing controls to address the risks facing their business, and in monitoring, reviewing and improving the performance and effectiveness of their Information Security Management System (ISMS). Importantly, the authors in [1] have also highlighted the need for certification schemes to be affordable, to avoid smaller companies having to carry those expenses in the price of their service delivery and thus ultimately becoming uncompetitive against their bigger rivals.

Following the argument that providers should rely on certification from a governing or standardisation institution stipulating that the provider has established adequate internal security controls that are operating effectively, the Cloud Security Alliance (CSA) has made a number of efforts towards providing clear guidelines for controlling security risks in the cloud [15]. The CSA guidance is made up of 99 control specifications covering such areas as compliance, governance, facility, human resource and information security, legal matters, operations, risk and release management, resiliency, and security architecture. The individual controls identified within the guidelines emanate from well-established standards and guidelines pertinent to both traditional information systems and the cloud, covering a wide range of domains including IT governance (COBIT), the banking and financial domain (PCI-DSS and BITS), government (NIST SP 800-53 and FedRAMP), health care (HIPAA), and the cross-domain standard for the management of information security systems (ISO/IEC 27001). Recently, the CSA has put forward the idea of a three-level certification scheme that relies on compliance with its set of security guidance and control objectives. According to the CSA, each level will provide an incremental level of trust in CSP operations and a higher level of assurance to the CSC. The first of these levels (which, it must be stressed, is a mere self-assessment exercise) requires each CSP to submit a report to the CSA asserting its level of compliance with the advocated best practices. The second level, referred to as the CSA STAR Certification, is meant to provide a third-party independent assessment conducted by an approved certification body under the supervision of the CSA and BSI. The third level will extend the STAR Certification with a view to providing continuous-monitoring-based certification.

Similarly, TÜV Rheinland runs a Certified Cloud Service certification scheme based on CSPs' compliance with the most essential information security standards, such as ISO 27001, the baseline protection standards issued by the German Federal Office for Information Security, and ITIL [17].

It is clear that standardisation and certification bodies are rushing to establish a footprint in the certification market for cloud-based services. Although the intention is to help make an informed judgment about the quality of a given CSP, companies interested in adopting the cloud could be swamped and confused by the sheer number of standards and their actual scopes. In anticipation of this, recent research conducted at the University of Cologne in Germany has suggested a taxonomy of cloud certification whereby commonly agreed structural characteristics of cloud service certifications could be adopted as a baseline for classifying certification schemes according to their core purpose [20].

The adoption of certification as a way of making a statement about the reliability of the security of one's service has reinforced the importance of audits for the cloud model. Audits are meant to provide a third-party, independent assessment of an organisation's security posture. Until autumn 2011, SAS 70 was the standard audit approach for service companies to use with their customers, instead of customers individually auditing the service companies [18]. The standard was primarily aimed at assessing the sufficiency and effectiveness of the security controls of the CSP. It was superseded by SSAE 16 (www.ssae16.com), which stands for Statement on Standards for Attestation Engagements No. 16. The rationale for the change was to align the reporting standard of US-based companies with the international standard ISAE 3402 (http://isae3402.com/). One of the core differences between the two standards is that the evaluated company is bound to provide a written statement about the accuracy of the description of its system and the time frame during which the assessment was made.

What becomes apparent after analysing the different audit standards available is that they rely in large part on the word and self-assessment of the CSP; such information cannot be guaranteed to be free of bias. For instance, the CloudAudit initiative from the CSA (http://cloudaudit.org/CloudAudit/Home.html) seeks to provide a common API for CSPs to specify their assertions, assessments and assurances. Such information is meant to be made readily available to CSCs, allowing them to compare potential providers on the basis of their security. Given that the CSP often has far greater control over security in the cloud, with very little visibility (if any) for the CSC, the frequency and independence of such audits is paramount, along with the appropriate reporting of the findings to the CSC. Thus, automated and continuous audits would be more appropriate, especially when considering the evolving nature of cloud infrastructure.
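A continuous audit could, in its simplest form, amount to periodically pulling the provider's published assertions and flagging any required control that is missing or failing, as in the sketch below; the endpoint URL, payload format and control names are assumptions of ours and not the actual CloudAudit namespace.

```python
import json
import time
import urllib.request

# Hypothetical endpoint where a CSP publishes machine-readable assertions.
ASSERTIONS_URL = "https://csp.example.com/audit/assertions.json"
REQUIRED_CONTROLS = {"encryption_at_rest", "access_logging"}

def poll_assertions(interval_seconds: int = 3600) -> None:
    """Periodically fetch the provider's assertions (a name -> status map)
    and alert on required controls that are absent or not passing."""
    while True:
        with urllib.request.urlopen(ASSERTIONS_URL) as resp:
            assertions = json.load(resp)
        passing = {name for name, status in assertions.items() if status == "pass"}
        missing = REQUIRED_CONTROLS - passing
        if missing:
            print(f"ALERT: controls not satisfied: {sorted(missing)}")
        time.sleep(interval_seconds)
```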

3 Requirements for Security Transparency Driven Cloud Service Engineering

As the analysis above shows, addressing security transparency in the cloud as an afterthought raises a number of issues. First, the full extent of the CSC's requirements in terms of security transparency cannot be captured and accounted for. Second, most of the clauses that underpin the usage of existing initiatives are primarily on the CSP's terms. Third, they do not allow the CSC to formally appreciate if and how the CSP will meet its transparency needs, and hence to make an informed decision prior to adopting the service.

From the secure software engineering domain, a methodology such as Secure Tropos [24] appears to lend itself to the engineering of security-transparency-aware cloud systems. However, this would first require some notable amendments. Such enhancements would have to account for the fact that any effort to devise methodologies and methods for the cloud that integrate security transparency concerns will have to consider at least two aspects: (i) bear some level of interactivity, allowing the security transparency requirements of the CSC to be captured and the strategy of the CSP to meet those requirements to be designed; (ii) be resource-oriented, like the cloud paradigm itself. The rationale for putting the emphasis on resources rather than on the goals of the consumer is that consumers considering the cloud usually have a good idea of what their needs are, and their intentions are known. What may elude them at this point are the security specificities (pros and cons) of the resources they would have to rent. After all, the cloud is known to be an abstraction over a pool of computing resources made available to the consumer upon request. It is therefore essential that the resources to be used take centre stage. Beyond the potential adaptation of an existing methodology such as Secure Tropos, two main capabilities have to be considered for security-transparency-aware cloud engineering: one allowing the CSC to profile the resources she seeks to rent, or Interactive Resource Profile Modelling, and one allowing the CSP to demonstrate the strategy at its disposal to meet the CSC's expectations, or Strategic Assurance Modelling.

3.1 Interactive Resource Profile Modelling

By seeking to outsource to a third party, the future cloud user has in mind a number of functional and non-functional requirements for its service. For each resource of interest to the consumer, security requirements should be specified. At this stage of the modelling, it must also be possible for the future CSC to select the set of security requirements that are the most critical to its activity and, as such, would require particular attention (through monitoring, for instance) during the usage of the resource. Such security requirements are referred to as security-transparency-relevant, or STA, requirements. Unlike the other standard security requirements associated with a cloud resource, tagging a security requirement as STA implies that further analysis is needed at a later stage of the methodology, and that a clear strategy must be put in place by the provider to ensure its continuous fulfilment and to inform the CSC should it be infringed. Once all the security requirements for a resource are known, the profile of the prospective service as envisaged by the CSC should be made available to the potential CSPs, who will then associate with each security requirement the set of security controls available to help fulfil it. This exercise, referred to as Coupling, determines the extent to which the listed security requirements can be met by the provider's controls. A final validation from the consumer's side, accepting or rejecting the capability of the provider's security resources to satisfactorily meet the underlying security requirements, is possible at this stage. The whole process of resorting to a given provider's service may be brought to a halt if some STA security requirements remain without any association to the provider's security mechanisms, or if the proposed controls are considered unsatisfactory by the consumer.
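A minimal sketch of the Coupling exercise follows, assuming simple data structures for requirements and controls; the names and matching criterion are our own illustrative choices, not a prescribed implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class SecurityRequirement:
    name: str
    sta: bool = False  # tagged as security-transparency relevant (STA)

@dataclass
class ResourceProfile:
    resource: str
    requirements: List[SecurityRequirement] = field(default_factory=list)

def couple(profile: ResourceProfile,
           controls: Dict[str, Set[str]]) -> Dict[str, Set[str]]:
    """Associate each requirement with the provider controls that address it;
    `controls` maps a control name to the requirement names it covers."""
    coupling: Dict[str, Set[str]] = {r.name: set() for r in profile.requirements}
    for control, covered in controls.items():
        for req in profile.requirements:
            if req.name in covered:
                coupling[req.name].add(control)
    return coupling

def can_proceed(profile: ResourceProfile, coupling: Dict[str, Set[str]]) -> bool:
    """Halt adoption if any STA requirement has no associated control."""
    return all(coupling[r.name] for r in profile.requirements if r.sta)
```

Under this sketch, an empty coupling set for any STA requirement brings the adoption process to a halt, mirroring the validation step described above.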

3.2 Strategic Assurance Modelling

While the Interactive Resource Profile Modelling phase primarily purports to support the CSC in making an informed decision on the adequacy of a CSP to meet her security transparency needs, Strategic Assurance Modelling mainly aims at providing a framework for the CSP to effectively design its strategy around the efficient use of security resources in implementing the security transparency needs of the CSC. Given that the need for security transparency will mainly translate into monitoring of, and reporting on, the status of the security resources and any potential security incident to the CSC, this step will mainly involve the elaboration of software agents tasked with continuously probing the security controls and other transparency-relevant components. To achieve this, the security controls provided by the provider are decomposed into the finer key properties that underpin their functionality. This enables the provider's security engineer to assign software components (agents, for instance) responsible for monitoring the correctness of those properties, which is essential for the security resources to be effective. The actual decision on whether to assign an individual agent to each property, or to the monitoring of a group of properties across different security mechanisms, is left to the discretion of the security engineer. In case of a dependency between two security controls, a correlator agent may be created between the respective aggregator agents of those security controls. The role of the correlator agent mainly consists in adjusting the status of the dependee security resource according to the status of the depender and the degree of correlation between the two. In other words, if a property of a security resource SR1 is known to be non-compliant with respect to the CSC's security transparency needs and that property also plays a role in the functionality of a security resource SR2, the correlator will be the component downgrading the status level of SR2 based on the information about the non-compliant property. Unlike the correlator, an aggregator agent is local to a given security resource and is entrusted with the task of combining the information gathered by all the probing agents associated with that resource.
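The agent hierarchy just described can be sketched as follows, with probing agents per property, an aggregator per security resource, and a correlator that downgrades a dependee's status according to the depender's status and a correlation weight; the numeric status scale and the downgrade formula are illustrative assumptions.

```python
from typing import Callable, List

class ProbingAgent:
    """Checks one key property of a security control; reduced here to a
    single boolean probe."""
    def __init__(self, prop: str, check: Callable[[], bool]):
        self.prop, self.check = prop, check

class AggregatorAgent:
    """Local to one security resource: combines the findings of all its
    probing agents into a compliance status in [0, 1]."""
    def __init__(self, resource: str, probes: List[ProbingAgent]):
        self.resource, self.probes = resource, probes
    def status(self) -> float:
        if not self.probes:
            return 1.0  # nothing to check
        return sum(p.check() for p in self.probes) / len(self.probes)

class CorrelatorAgent:
    """Bridges two dependent security resources: downgrades the dependee's
    status according to the depender's status and a correlation weight."""
    def __init__(self, depender: AggregatorAgent,
                 dependee: AggregatorAgent, weight: float):
        self.depender, self.dependee, self.weight = depender, dependee, weight
    def adjusted_dependee_status(self) -> float:
        penalty = self.weight * (1.0 - self.depender.status())
        return max(0.0, self.dependee.status() - penalty)
```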

4 Conclusion

This paper has provided an analysis of the set of initiatives seeking to address the salient issue of security transparency in the cloud. Our analysis reveals that addressing security transparency as an afterthought bears three main shortcomings: (i) the initiatives fail to fully capture the security transparency needs of the CSC; (ii) most of the clauses that underpin their usage are primarily on the CSP's terms; (iii) they do not allow the CSC to formally appreciate if and how the CSP will meet its transparency needs, and hence to make an informed decision prior to adopting the service. Consequently, the conclusion of our work is that both the CSP and the prospective CSC could benefit from the integration of security transparency concerns into the development cycle of cloud-based services. We thus provide a number of desiderata and an initial direction for security-transparency-driven cloud engineering. We believe such a roadmap can be a starting point for the secure system engineering community, which has so far overlooked the issue, to start taking an interest in the domain.