1 Introduction

Advances in technology have made mobile devices suitable tools for facilitating many kinds of health care, in what is known as M-health. M-health provides the opportunity to store and share a patient's health information on their devices in order to deliver more efficient services, from fitness advice to more physician-oriented tools. However, the security of this information, along with access control, is one of the contentious issues in this area [18]. Health care applications collect and share various types of information about users' physical activities and lifestyles in addition to their medical and physiological information [16]. This creates a privacy issue that could be better managed.

According to the National Committee for Vital and Health Statistics (NCVHS), health information privacy is “An individual’s right to control the acquisition, uses, or disclosures of his or her identifiable health data. Confidentiality, which is closely related, refers to the obligations of those who receive information to respect the privacy interests of those to whom the data relate. Security is altogether different. It refers to physical, technological, or administrative safeguards or tools used to protect identifiable health data from unwarranted access or disclosure” [6].

In 2013 more than 35,000 health apps existed for iOS and Android. Of the 600 most useful ones, only 183 (30.5 %) addressed mobile health privacy policies in some meaningful way [28]. Recently, both Google and Apple announced new platforms for health apps, such as HealthKit [2], ResearchKit and Google Fit [5], which make it possible to share information between health care applications in one place.

We consider Apple HealthKit as an example. Currently, each application is individually responsible for obtaining the trust of the user in order to get access to their health information. Health information is divided into different categories; however, once users share their information, they have no further control over it. There has been extensive research in this area, but it has focused on making policies or on traditional mechanisms such as data encryption. Given the importance of this information, design considerations beyond the usual privacy measures need to be met.

To address this, we propose a trust model intended to help the owner of the information decide whether to share their information, or part of it, with a specific application. To formalize the trust value for each purpose of use of the information, two components of the model must be calculated. The first component calculates the amount of trust each person needs in order to share a specific part of the information. The second component calculates the amount of trust that already exists between the device and the application asking for the information.

The paper is organized as follows. In Sect. 2 we examine related work, before presenting our proposed model in Sect. 3, and a worked analysis in Sect. 4. We conclude with future work in Sect. 5.

2 Background

In this section, we review existing information systems in the health and M-health area. For the sake of brevity, we look in detail only at HealthKit, Apple's health framework. In addition, we provide a summary of current trust-based models and architectures in this area.

2.1 Healthcare Information Systems

Health information systems use data processing, information and knowledge to deliver quality, efficient patient care in health care environments [12]. In recent years, there has been a great deal of movement from paper-based to computer-based systems in health care [14]. Computer-based systems make patient-centric, rather than location-constrained, systems possible [7]. Furthermore, the target users of these systems have also changed: computer-based systems originally targeted only health care professionals, but they have gradually come to involve patients and their relatives as well [11]. Developments in these information systems over recent decades have made it possible to use data for care planning and clinical research in addition to patient care [11]. Moreover, continuous health status monitoring using wearable devices such as sensors and smart watches further enhances the patient experience [17].

Expanding use of data and health information, in parallel with advances in technology, has contributed to the development of different architectures and information systems in this field. M-health is the use of mobile devices and their information in the health care area [26]. The special characteristics of mobile devices make them an excellent choice for this purpose, among them their mobility, their ability to access information, and their ubiquity [26]. Technologies such as text messaging for tracking, cameras for data collection and documentation, and cellular networks for internet connection enable mobile devices to act as an ideal platform for delivering health interventions [15]. Determining exact location through positioning technology is also helpful in emergency situations [26] and for device comfort purposes [19], where devices can determine how, when, and where to share relevant health information. The Poket Doktor System (PDS) is one of the earliest architectures in this area. It comprises an electronic patient device containing electronic health care records, a health care provider device, and a communication link between them [29].

One of the major uses of mobile devices in health care is for monitoring. The Intelligent Mobile Health Monitoring System (IMHMS) [25] introduces an architecture combining three main parts. Through a wearable body network, the system collects data and sends it to the patient's personal network. This network, based on the normal range of the index in question, decides whether or not to send the information to an intelligent medical server, which is monitored by a specialist. Due to the breadth of the field, different monitoring systems have been introduced for specific purposes.

Other architectures have been introduced to improve the privacy of health care in this area. Weerasinghe et al. [30] present a security capsule with a token management architecture to provide secure transmission and on-device data storage. Some models also apply access control to healthcare systems based on user behaviour [31]. In [33] the authors propose a role-based prorogate framework. Other architectures have been developed to reduce clinical errors; for example, [32] proposes a scenario-based diagnosis system which extracts relevant clinical information from electronic health records based on the most probable diagnostic hypothesis.

2.2 Information Platform Example: Apple HealthKit

The HealthKit framework, introduced by Apple in iOS 8, lets health and fitness applications, as well as smart devices, gather health information about a user in one location. The framework provides services for sharing data between health and fitness applications. Through HealthKit, different applications can access each other's data with the user's permission. Users can also view, add, delete and manage data, as well as edit sharing permissions, using the app [1]. The framework can automatically save data from compatible Bluetooth LE heart rate monitors and the M7 motion coprocessor into the HealthKit store [3].

All the data managed by HealthKit is linked through the HealthKit store. Each application needs to use the HealthKit store in order to request and obtain permission for reading and sharing health data [4].

Currently, each application is individually responsible for obtaining the trust of the user in order to get access to their health information. The user has control over the data and can decide whether or not to share it with the app. Users can also share some parts of the data while withholding permission for other parts [3].

In order to maintain the privacy of a user's data, any application using HealthKit must have a privacy policy. Personal health record models and HIPAA guidelines can be used to create these policies [3].

In addition, data from the HealthKit store cannot be sold. Data can be given to a third-party app for medical research with the owner's consent, and the use of the data must be disclosed to the user by the application [1].

2.3 Trust in Information Systems

Trust plays an important role in everyday human life. Trust can be studied from different perspectives, depending on who defines it and on the type of trust [20]. There is a wide literature exploring trust in fields such as evolutionary biology, sociology, social psychology, economics, history, philosophy and neurology.

The use of trust models in electronic healthcare can be classified into two groups: sharing information and electronic health records, and monitoring patients. Becker and Sewell introduced Cassandra, a trust management system whose language can be made more or less expressive by selecting an appropriate constraint domain. They also present the results of a case study, a security policy for a national Electronic Health Record system, demonstrating that Cassandra is expressive enough for large-scale real-world applications with highly complex policy requirements. The paper concludes by identifying implementation steps, including building a prototype, testing the EHR policy in a more realistic setting, and producing web-based EHR user interfaces [8].

Considering the importance of security in wireless data communication, [9] reviews the characteristics of a secure system and proposes a trust evaluation model. Data confidentiality, authentication, access control and privacy are examples of the security issues addressed. In this system, nodes represent the components of the system, and trust relationships between nodes are evaluated to determine the trustworthiness of each node. The main difference between this system and related work is that the trust value of each node is computed using increasing functions such as exponentials, whereas others use linear functions; this increases the impact of past behaviour on trust [9]. In [21] the authors develop a trust-based algorithm for a messaging system in which each node is assigned a trust value based on its behaviour. At the same time, each message is divided into four parts, and only nodes with a sufficiently high total trust value are able to read all parts of the message.

3 Our Trust Model

In this section, a trust model that considers both personal and environmental aspects is presented. The model is intended to be used by the owner of the information to make decisions about sharing their health (or indeed, any) information, or part of it, with a specific application.

To formalize the trust value for each purpose of use of the information, two components must be calculated. The first component of the model calculates the amount of trust which each person needs for sharing a specific part of the information. The second component calculates the amount of trust that already exists between the device and the application asking for the information. Finally, by comparing the two values, advice on sharing the information is given.

Table 1 summarizes the notation used in this paper:

Table 1. Explanation of notations

3.1 Personal Perspective

The personal-perspective layer of the model calculates the amount of trust that the user requires in order to share the information, or a specific part of it. This layer is based on the preferences of the owners of the information. To formalize the proposed system, this research considers a scenario in which a specific part of a user's health information has been requested by a specific application. People's behaviour towards information sharing varies with their personal characteristics and experiences [13]. Stone and Stone [27] explored links between individuals' personalities and information privacy issues, and Gefen et al. [10] determined that personality has an impact on trust in virtual environments.

In order to determine the privacy preferences of each user, various factors should be considered and specific trust values need to be assigned. In the following sections, these factors and the methodology of assigning the trust values are presented.

Sensitivity of Information: The sensitivity of information may differ between individuals [22–24]. To accommodate the subjectivity of the sensitivity of each piece of health information, we give the user the chance to make a decision for each piece of information. The most significant factors affecting the calculation of the trust value are the following.

Category of Information: (See Table 2) Some health information can change over its lifetime. In our model, we use the Apple HealthKit categories, which fall into two main groups. The first group, “Characteristics data”, refers to data which does not change over time, such as gender, blood type and date of birth. The second group is collected through the device and might change over time [1, 3].

Table 2. Information categories

In our model, \(C_j\) represents the different categories of information. For each category, the user assigns a comfort value for sharing that category of information. This value lies in the interval \((-1,+1)\).

Purpose of Use of Information: Different mobile applications use health information for various purposes. Considering existing health care applications, in parallel with the iOS health framework, the purpose of use of information can be categorized into at least one of several groups.

In our model we use \(A_i\) to represent the applications; thus, depending on its purpose, each application \(A_i\) would be an element of at least one of the sets \(P_k\):

$$\begin{aligned} A_i \in P_k \end{aligned}$$
(1)

in which:

$$\begin{aligned} k = {\left\{ \begin{array}{ll} \text {Research}\\ \text {Personal Monitoring}\\ \text {Public Health Monitoring}\\ \text {Commercial Usage}\\ \text {Governmental Usage} \end{array}\right. } \end{aligned}$$

Depending on their personality and priorities, users may be more or less interested in sharing information for each purpose. For each purpose, the user again assigns a comfort value for sharing the information.

We use a matrix to represent the relationships between the various categories and purposes. In this \({m_p} \times {n_c}\) matrix, columns represent categories and rows represent purposes. Each element of the matrix is the minimum of the values assigned (by the user, with some defaults) to the corresponding purpose and category:

$$\begin{aligned} S_{j,k}= min({C_j,P_k}) \end{aligned}$$
(2)

Through this matrix, the system is able to choose a specific part of the information for a specific purpose, instead of omitting a whole category of information.

If the purpose of the application asking for the information is unclear, the average of the assigned values for all purposes can be used as the trust value:

$$\begin{aligned} \frac{1}{m_p}\displaystyle \sum _{k=1}^{m_p} P_k \end{aligned}$$
(3)

At this point the trust value for a specific information item in a specific context (application) would be a function of the following variables:

\(\varvec{T_d}\): Delay Time: This factor is added in order to improve the privacy of the user. Users can decide to share part(s) of their information after a specific delay, which may reduce the sensitivity of the information for the user. Users have three options for sharing, representing different time periods before information is released. Depending on the user's preference, \(T_d\) takes one of the following values:

$$\begin{aligned} T_d= {\left\{ \begin{array}{ll} 1.5, & \text {if share immediately} \\ 1, & \text {if share after one week}\\ 0.5, & \text {if share after one month} \end{array}\right. } \end{aligned}$$
(4)

Then:

$$\begin{aligned} S=f(C,P,T_d)=T_d \cdot \begin{bmatrix} s_{1,1} & s_{1,2} & \cdots & s_{1,n} \\ s_{2,1} & s_{2,2} & \cdots & s_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ s_{m,1} & s_{m,2} & \cdots & s_{m,n} \end{bmatrix} \end{aligned}$$
(5)
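To make the personal-perspective computation concrete, the following sketch builds the sensitivity matrix of Eq. (2), the unknown-purpose average of Eq. (3) and the delay-scaled trust matrix of Eq. (5). The category names, purpose names and comfort values are hypothetical examples of our own choosing, not part of the model definition.

```python
# Illustrative sketch only; names and comfort values below are hypothetical.
import numpy as np

# Comfort values in (-1, +1) assigned by the user to each category C_j
# and each purpose of use P_k.
C = {"fitness": 0.5, "nutrition": 0.3, "vital_signs": 0.2}
P = {"research": 0.4, "personal_monitoring": 0.6, "public_health_monitoring": 0.1,
     "commercial_usage": -0.2, "governmental_usage": 0.0}

# Delay factor T_d, Eq. (4).
T_D = {"immediately": 1.5, "after_one_week": 1.0, "after_one_month": 0.5}

def sensitivity_matrix(C, P):
    """Eq. (2): each element is min(C_j, P_k); rows are purposes, columns categories."""
    return np.array([[min(C[j], P[k]) for j in C] for k in P])

def unknown_purpose_value(P):
    """Eq. (3): average of all purpose values, used when the requesting purpose is unclear."""
    return sum(P.values()) / len(P)

def trust_matrix(C, P, delay):
    """Eq. (5): the sensitivity matrix scaled by the delay factor T_d."""
    return T_D[delay] * sensitivity_matrix(C, P)

print(trust_matrix(C, P, delay="after_one_week"))
```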

3.2 Context Perspective

The second component of the model examines the environment of the user at the time of giving permission for sharing the information. It calculates the amount of trust that exists at any given time by considering the following:

  • Default Trust to the applications in question

  • The application’s reputation based on its current rating

  • Common friends in social networks using the application

  • The person who suggested installing the application (for example, a health care provider versus an old friend)

A higher amount of existing trust results in a lower threshold.

3.3 Default Trust to Each Category and Purpose

Since at the beginning there is no information about the application, the average of the trust values assigned by the user is used to fill the sensitivity matrix.

$$\begin{aligned}&C_0= \text { Default Trust value for all of the categories} \\&P_0= \text { Default Trust value for all of the purposes} \end{aligned}$$
$$\begin{aligned} S_{i_0,j_0}=min(\frac{1}{n_c}\displaystyle \sum _{j=1}^{n_c} C_j , \frac{1}{m_p}\displaystyle \sum _{k=1}^{m_p} P_k) \end{aligned}$$
(6)

then:

$$\begin{aligned} S_0=\begin{bmatrix} S_{i_0,j_0} & \cdots & S_{i_0,j_0} \\ \vdots & \ddots & \vdots \\ S_{i_0,j_0} & \cdots & S_{i_0,j_0} \end{bmatrix} \end{aligned}$$
(7)
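As a small illustration of this default case (again a sketch with hypothetical values), the default sensitivity \(S_{i_0,j_0}\) of Eq. (6) can be computed from the user's assigned values and broadcast into a constant matrix:

```python
# Sketch of the default trust values of Sect. 3.3; values are hypothetical.
import numpy as np

C = {"fitness": 0.5, "nutrition": 0.3, "vital_signs": 0.2}           # category values
P = {"research": 0.4, "personal_monitoring": 0.6, "public_health_monitoring": 0.1,
     "commercial_usage": -0.2, "governmental_usage": 0.0}            # purpose values

def default_sensitivity(C, P):
    """Eq. (6): minimum of the average category value and the average purpose value."""
    return min(sum(C.values()) / len(C), sum(P.values()) / len(P))

def default_matrix(C, P):
    """Constant matrix S_0 with every element equal to S_{i_0,j_0}."""
    return np.full((len(P), len(C)), default_sensitivity(C, P))

print(default_matrix(C, P))
```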

3.4 Application Rating (Public Social)

\(R_v\) represents the rating score of the application in our model, for a specific online rating v. Depending on the rating, \(R_v\) takes one of the following values:

$$\begin{aligned} R_v= {\left\{ \begin{array}{ll} 0.5, & \text {if } v \text { is more than average} \\ 1, & \text {if } v \text { is less than average}\\ 2, & \text {if } v \text { is negative} \end{array}\right. } \end{aligned}$$
(8)

3.5 Social Network Friends

Another factor which has an impact on the threshold is the number of friends in the user's social network who are using the application. \(SN_u\) represents the number of mutual friends who are using the same application. Depending on the number of friends in common, \(SN_u\) takes one of the following values:

$$\begin{aligned} SN_u= {\left\{ \begin{array}{ll} 0.5, & \text {if } u \text { = more than 5 friends} \\ 1.5, & \text {if } u \text { = less than 5 friends}\\ 1, & \text {if } u \text { = no mutual friends} \end{array}\right. } \end{aligned}$$
(9)

3.6 Installer of the Application

In health care information systems, the relationship between the person asking for the information and the owner of the information can have a crucial impact on the existing level of trust between them. For example, if a person involved in the patient's care suggests an application, that application is seen as potentially more reliable. In this model, three scenarios have been considered for installing an application. \(I_t\) represents the source suggesting the application. Depending on who suggested the application, \(I_t\) takes one of the following values:

$$\begin{aligned} I_t= {\left\{ \begin{array}{ll} 0.5, & \text {if } t \text { = healthcare provider suggests}\\ 0.75, & \text {if } t \text { = proposed by a sensor the user already uses}\\ 1, & \text {if } t \text { = randomly downloaded application} \end{array}\right. } \end{aligned}$$
(10)

3.7 Estimation of the Threshold

Considering all the factors, the second component of the model is:

$$\begin{aligned} TR=S_0\cdot R_v \cdot SN_u \cdot I_t \end{aligned}$$
(11)

Information would be shared if:

$$\begin{aligned} TR<T \end{aligned}$$
(12)
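The following sketch shows how the context factors combine into the threshold of Eq. (11) and how the sharing decision of Eq. (12) is made. The factor values are those of Eqs. (8)–(10); the helper and key names are ours, and the usage at the end is a hypothetical example.

```python
# Sketch of the context-perspective component, Sects. 3.4-3.7; names are ours.
R_V = {"more_than_average": 0.5, "less_than_average": 1.0, "negative": 2.0}   # Eq. (8)
SN_U = {"more_than_5_friends": 0.5, "less_than_5_friends": 1.5,
        "no_mutual_friend": 1.0}                                              # Eq. (9)
I_T = {"healthcare_provider": 0.5, "existing_sensor": 0.75,
       "random_download": 1.0}                                                # Eq. (10)

def threshold(s0, rating, friends, installer):
    """Eq. (11): TR = S_0 * R_v * SN_u * I_t (element-wise if s0 is a matrix)."""
    return s0 * R_V[rating] * SN_U[friends] * I_T[installer]

def share(trust_value, tr):
    """Eq. (12): share the corresponding item only if TR < T."""
    return tr < trust_value

# Hypothetical usage: default sensitivity 0.428, an app rated less than average,
# no mutual friends, suggested by a healthcare provider.
tr = threshold(0.428, "less_than_average", "no_mutual_friend", "healthcare_provider")
print(tr, share(0.46, tr))   # 0.214 -> the item with trust value 0.46 is shared
```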

4 Analysis

In this section, we examine our model using different scenarios as use-case examples, considering different user personalities and various applications.

4.1 Various Agents

Personality and individual characteristics have a crucial impact on people's decision making. To allow for this, in this experiment we divide user agents into three main categories: optimistic, pessimistic and realistic. Each category is described below:

Optimist. An optimist believes in the best outcome in all situations and expects the best results in everything [20]. In our examples, an optimist always selects the maximum trust value.

Pessimist. The pessimist, in contrast to the optimist, sees the worst possible result and expects the worst outcome in any situation. Therefore, the pessimist agent selects the worst trust value in all situations [20].

Realist. In reality, however, most people fall somewhere between the two extremes, and the same applies to agents. For the sake of simplicity, in this paper we randomly choose values from intervals within the four quartiles of the spectrum from optimist to pessimist, as sketched below.
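As a rough sketch of how such agents could be simulated (the quartile sampling is our reading of the description above; the function and parameter names are ours):

```python
# Hypothetical simulation of the three agent types in Sect. 4.1.
import random

def comfort_value(agent, low=-1.0, high=1.0):
    """Pick a comfort value in (low, high) according to the agent type."""
    if agent == "optimist":        # always the maximum trust value
        return high
    if agent == "pessimist":       # always the worst (minimum) trust value
        return low
    # realist: sample from a randomly chosen quartile of the spectrum
    width = (high - low) / 4
    q = random.randrange(4)
    return random.uniform(low + q * width, low + (q + 1) * width)

print(comfort_value("realist"))
```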

4.2 Pool of Applications

In healthcare environments, various applications with different characteristics exist. This section looks at examples of these applications.

Application \(\varvec{\alpha }\). \(\alpha \) has the following characteristics:

  • It needs to have access to nutrition information, fitness information and vital signs.

  • It uses information for commercial purposes, research purposes and also personal health monitoring.

  • It has been rated less than average.

Application \(\varvec{\beta }\). Application \(\beta \) has the following characteristics:

  • This app needs access to sleep analysis information and nutrition information.

  • It uses information for research purposes, personal health monitoring and public health.

  • It has been rated higher than average.

4.3 Various Situations

Although personality plays a significant role in decision making, other factors, including the user's experiences or current mental state, can affect their judgment. To address this, we test the model in two different scenarios.

Scenario 1 – Installing Random Applications. Tracy was browsing health care applications on the app store. One of the diet applications interested her and she installed it on her device. She has no prior information about this application, no one suggested it, and none of her friends uses it. The application needs access to her fitness, nutrition and weight information.

Scenario 2 – Various Rated Applications. Steve is a tech-savvy person. He reads reviews of applications and downloads many health apps onto his device. An application's rating is the most important factor in his decision to download it or not. Furthermore, he is willing to share his information for research purposes or for monitoring his own health, but he is not interested in sharing for commercial uses. Recently, he has been having sleeping problems, and in order to monitor himself he decides to install a sleep analysis application on his device.

4.4 Mathematical Analysis

In order to examine how the model works, in this section we briefly analyse the model in different situations.

Example 1. In the first scenario, we consider Tracy to be an optimist. She therefore assigns relatively high trust values for sharing information. Table 3 shows the trust values she assigned for each purpose and category.

Table 3. Trust values assigned by Tracy

In the sensitivity matrix we take the minimum of each category and purpose pair, therefore:

$$\begin{aligned} S=f( C,P)=\begin{bmatrix} 0.22 & 0.22 & 0.22 & 0.22 & 0.22 \\ 0.31 & 0.31 & 0.31 & 0.22 & 0.31 \\ 0.46 & 0.32 & 0.46 & 0.22 & 0.33 \\ 0.51 & 0.32 & 0.46 & 0.22 & 0.33 \\ 0.73 & 0.32 & 0.46 & 0.22 & 0.33 \end{bmatrix} \end{aligned}$$
(13)
$$\begin{aligned} T_d= 1.5 \end{aligned}$$
(14)

Then the trust matrix would be:

(15)

We consider application \(\alpha \) to be the application which Tracy has downloaded. Therefore, we have:

$$\begin{aligned} S_{i_0,j_0}=min(\frac{1}{5}\displaystyle \sum _{j=1}^{5} C_j , \frac{1}{5}\displaystyle \sum _{k=1}^{5} P_k) =min(0.428, 0.444)= 0.428 \end{aligned}$$
(16)
$$\begin{aligned} S=f( C,P)=\begin{bmatrix} 0.428 & 0.428 & 0.428 & 0.428 & 0.428 \\ 0.428 & 0.428 & 0.428 & 0.428 & 0.428\\ 0.428 & 0.428 & 0.428 & 0.428 & 0.428\\ 0.428 & 0.428 & 0.428 & 0.428 & 0.428\\ 0.428 & 0.428 & 0.428 & 0.428 & 0.428 \end{bmatrix} \end{aligned}$$
(17)

And:

$$\begin{aligned} R_v=1 \end{aligned}$$
(18)
$$\begin{aligned} SN_u= 1 \end{aligned}$$
(19)
$$\begin{aligned} I_t= 1 \end{aligned}$$
(20)

The threshold matrix would be:

$$\begin{aligned} TR=\begin{bmatrix} 0.428 & 0.428 & 0.428 & 0.428 & 0.428 \\ 0.428 & 0.428 & 0.428 & 0.428 & 0.428\\ 0.428 & 0.428 & 0.428 & 0.428 & 0.428\\ 0.428 & 0.428 & 0.428 & 0.428 & 0.428\\ 0.428 & 0.428 & 0.428 & 0.428 & 0.428 \end{bmatrix} \end{aligned}$$
(21)

A specific part of the information is expected to be shared for a specific purpose if the corresponding element of the sensitivity matrix is higher than the corresponding element of the threshold matrix. Therefore, fitness information won't be shared, since its trust value is less than the threshold. However, nutrition information and vital signs information will be shared, since for the purposes for which application \(\alpha \) uses that information, the trust value is higher than the threshold.

$$\begin{aligned} 0.33 < 0.428 \rightarrow Do\ not \ share \end{aligned}$$
(22)
$$\begin{aligned} 0.46> 0.428 \rightarrow Share\ vital\ signs \ information\ for\ personal\ monitoring \end{aligned}$$
(23)
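The comparison can be re-traced with a few lines of code (values copied from the text above; this is only a sanity check of the stated numbers, not an independent implementation):

```python
# Re-tracing Example 1 using the values stated in the text.
S0 = min(0.428, 0.444)        # Eq. (16): default sensitivity for application alpha
TR = S0 * 1 * 1 * 1           # Eq. (21): R_v = SN_u = I_t = 1

print(0.33 < TR)              # True -> do not share fitness information
print(0.46 > TR)              # True -> share vital signs for personal monitoring
```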

Example 2. In the second scenario, we consider Steve to be a pessimist. He does not give high trust values to the application, and assigns the trust values noted in Table 4.

Table 4. Trust values assigned by Steve

In the sensitivity matrix we take the minimum of each category and purpose pair, therefore:

$$\begin{aligned} S=f(C,P)=\begin{bmatrix} -0.31 & -0.68 & -0.22 & -0.46 & -0.33 \\ -0.31 & -0.68 & 0.11 & -0.46 & -0.33 \\ -0.47 & -0.68 & -0.47 & -0.47 & -0.47 \\ -0.86 & -0.86 & -0.86 & -0.86 & -0.86 \\ -0.59 & -0.68 & -0.59 & -0.59 & -0.59 \end{bmatrix} \end{aligned}$$
(24)

Steve decides to share his information after one week.

$$\begin{aligned} T_d= 1 \end{aligned}$$
(25)

Then the trust matrix would be:

(26)

We consider application \(\beta \) to be the application which Steve has installed. Therefore, we have:

$$\begin{aligned} S_{i_0,j_0}=min(\frac{1}{5}\displaystyle \sum _{j=1}^{5} C_j , \frac{1}{5}\displaystyle \sum _{k=1}^{5} P_k) =min(-0.31, -0.406)= -0.406 \end{aligned}$$
(27)
$$\begin{aligned} S=f( C,P)=\begin{bmatrix} -0.406 & -0.406 & -0.406 & -0.406 & -0.406 \\ -0.406 & -0.406 & -0.406 & -0.406 & -0.406 \\ -0.406 & -0.406 & -0.406 & -0.406 & -0.406 \\ -0.406 & -0.406 & -0.406 & -0.406 & -0.406 \\ -0.406 & -0.406 & -0.406 & -0.406 & -0.406 \end{bmatrix} \end{aligned}$$
(28)

And:

$$\begin{aligned} R_v=1 \end{aligned}$$
(29)
$$\begin{aligned} SN_u= 1 \end{aligned}$$
(30)
$$\begin{aligned} I_t= 1 \end{aligned}$$
(31)

The threshold matrix would be:

$$\begin{aligned} TR=\begin{bmatrix} -0.406 & -0.406 & -0.406 & -0.406 & -0.406 \\ -0.406 & -0.406 & -0.406 & -0.406 & -0.406 \\ -0.406 & -0.406 & -0.406 & -0.406 & -0.406 \\ -0.406 & -0.406 & -0.406 & -0.406 & -0.406 \\ -0.406 & -0.406 & -0.406 & -0.406 & -0.406 \end{bmatrix} \end{aligned}$$
(32)

Again, by comparing matrix elements, a recommended decision about information sharing can be made. In this case, sleep analysis information won't be shared. Nutrition information also won't be shared for public health purposes, since for that use of the information by application \(\beta \) the trust value falls below the threshold.

$$\begin{aligned} -0.68 < -0.406 \rightarrow { Do\ not\ share} \end{aligned}$$
(33)
$$\begin{aligned} -0.33> -0.406 \rightarrow { Share\ nutrition\ information\ for\ research} \end{aligned}$$
(34)
$$\begin{aligned} -0.33> -0.406 \rightarrow { Share\ nutrition\ information\ for\ personal\ monitoring} \end{aligned}$$
(35)
$$\begin{aligned} -0.47< -0.406 \rightarrow { Do \ not \ share\ nutrition\ information} \end{aligned}$$
(36)

5 Conclusions and Further Work

In this paper, we proposed a trust model which calculates the trust value required for information sharing between health care mobile applications, in addition to the amount of trust that already exists. By employing such a trust model, we believe we can be proactive and prevent the sharing of parts of the information that would put the user's privacy at risk. Moreover, by categorizing the information and its purposes of use, we aim to provide the opportunity for sharing at different levels. Going forward, we plan to implement the framework and the corresponding user interfaces, and a user study is in the planning stage.