TruBeRepec: a trust-behavior-based reputation and recommender system for mobile applications

Original Article · Personal and Ubiquitous Computing

Abstract

Mobile applications are software packages that can be installed and executed on a mobile device. Whether a mobile application is trustworthy enough for a user to purchase, download, install, execute, or recommend is a crucial issue that affects its eventual success. This paper proposes TruBeRepec, a trust-behavior-based reputation and recommender system for mobile applications. We explore a model of trust behavior for mobile applications based on the results of a large-scale user survey. We further develop a number of algorithms that evaluate an individual user’s trust in a mobile application through observation of trust behaviors, generate the application’s reputation by aggregating individual trust, and provide application recommendations based on the correlation of trust behaviors. We show the practical significance of TruBeRepec through simulations and analysis with regard to effectiveness, robustness, usability, and privacy.


References

  1. Yan Z (2007) Trust management for mobile computing platforms. Dissertation, Helsinki University of Technology

  2. Avizienis A, Laprie JC, Randell B, Landwehr C (2004) Basic concepts and taxonomy of dependable and secure computing. IEEE Trans Dependable Secur Comput 1(1):11–33

  3. Yan Z, Dong Y, Niemi V, Yu G (2009) Exploring trust of mobile applications based on user behaviors. InTrust 2009, LNCS, pp 212–226

  4. McKnight DH, Choudhury V, Kacmar C (2002) Developing and validating trust measures for e-commerce: an integrative typology. Inf Syst Res 13(3):334–359

  5. Marsh S (1994) Formalising trust as a computational concept. Dissertation, University of Stirling

  6. Yan Z, Holtmanns S (2008) Trust modeling and management: from social trust to digital trust. In: Subramanian R (ed) Computer security, privacy and politics: current issues, challenges and solutions. Idea Group Inc, USA, pp 290–323

  7. Yan Z, Prehofer C (2010) Autonomic trust management for a component-based software system. IEEE Trans Dependable Secur Comput. doi:10.1109/TDSC.2010.47

  8. Xiong L, Liu L (2004) PeerTrust: supporting reputation-based trust for peer-to-peer electronic communities. IEEE Tran Knowl Data Eng 16(7):843–857

  9. Song S, Hwang K, Zhou R, Kwok YK (2005) Trusted P2P transactions with fuzzy reputation aggregation. IEEE Internet Comput 9(6):24–34

  10. Theodorakopoulos G, Baras JS (2006) On trust models and trust evaluation metrics for ad hoc networks. IEEE J Sel Areas Commun 24(2):318–328

  11. Sun Y, Yu W, Han Z, Liu KJR (2006) Information theoretic framework of trust modeling and evaluation for ad hoc networks. IEEE J Sel Areas Commun 24(2):305–317

  12. Li X, Valacich JS, Hess TJ (2004) Predicting user trust in information systems: a comparison of competing trust models. In: Proceedings of 37th annual Hawaii international conference on system sciences, 10 pp

  13. Bigley GA, Pearce JL (1998) Straining for shared meaning in organization science: problems of trust and distrust. Acad Manag Rev 23(3):405–421

  14. Fishbein M, Ajzen I (1975) Beliefs, attitude, intention and behavior: an introduction to theory and research. Addison-Wesley, Reading

  15. Anderson JC, Narus JA (1990) A model of distributor firm and manufacturer firm working partnerships. J Marketing 54(1):42–58

  16. Fox A (1974) Beyond contract: work, power, and trust relations. Faber, London

  17. Deutsch M (1973) The resolution of conflict: constructive and destructive processes. Yale University Press, New Haven

  18. Sheppard BH, Hartwick J, Warshaw PR (1988) The theory of reasoned action: a meta-analysis of past research with recommendations for modifications in future research. J Consum Res 15(3):325–343

  19. Venkatesh V, Davis FD (2000) A theoretical extension of the technology acceptance model: four longitudinal field studies. Manag Sci 46(2):186–204

  20. Grabner-Kräuter S, Kaluscha EA (2003) Empirical research in on-line trust: a review and critical assessment. Int J Hum Comput Stud 58(6):783–812

  21. Muir BM (1994) Trust in automation part I: theoretical issues in the study of trust and human intervention in automated systems. Ergonomics 37(11):1905–1922

  22. Muir BM (1996) Trust in automation part II: experimental studies of trust and human intervention in a process control simulation. Ergonomics 39(3):429–469

  23. Lee J, Moray N (1992) Trust, control strategies and allocation of function in human-machine systems. Ergonomics 35(10):1243–1270

  24. Grandison T, Sloman M (2000) A survey of trust in internet applications. IEEE Commun Surv 3(4):2–16

  25. Yan Z (2010) Trust modeling and management in digital environments: from social concept to system development. IGI Global, pp 20–57

  26. Yan Z, Niemi V (2009) A methodology towards usable trust management. In: ATC09, LNCS, vol 5586, pp 179–193

  27. Yan Z, Chen Y (2010) AdContRep: a privacy enhanced reputation system for MANET content services. UIC 2010, LNCS, vol 6406, pp 414–429

  28. Aberer K, Despotovic Z (2001) Managing trust in a peer-to-peer information system. In: Proceedings of the ACM conference on information and knowledge management (CIKM), pp 310–317

  29. Resnick P, Varian HR (1997) Recommender systems. Commun ACM 40(3):56–58

  30. Hancock JT, Toma C, Ellison N (2007) The truth about lying in online dating profiles. In: Proceedings of the ACM conference on human factors in computing systems (CHI 2007), ACM, pp 449–452

  31. Su X, Khoshgoftaar TM (2009) A survey of collaborative filtering techniques. Adv Artif Intell. doi:10.1155/2009/421425

  32. O’Donovan J, Smyth B (2005) Trust in recommender systems, IUI’05, pp 167–174

  33. Jøsang A, Ismail R, Boyd C (2007) A survey of trust and reputation systems for online service provision. Decis Support Syst 43(2):618–644

  34. Resnick P, Zeckhauser R (2002) Trust among strangers in Internet transactions: empirical analysis of eBay’s reputation system. In: Baye M (ed) Advances in applied microeconomics: the economics of the internet and e-commerce, vol 11. Elsevier, Amsterdam, pp 127–157

  35. Resnick P, Kuwabara K, Zeckhauser R, Friedman E (2000) Reputation systems. Commun ACM 43(12):45–48

  36. Corritore CL, Kracher B, Wiedenbeck S (2003) On-line trust: concepts, evolving themes, a model. Int J Hum Comput Stud Trust Technol 58(6):737–758

  37. Yang Y, Sun Y, Kay S, Yang Q (2009) Defending online reputation systems against collaborative unfair raters through signal modeling and trust. In: SAC’09, pp 1308–1315

  38. Douceur JR (2002) The sybil attack. In: IPTPS’02, LNCS, vol 2429, pp 251–260

  39. Sun Y, Han Z, Liu KJR (2008) Defense of trust management vulnerabilities in distributed networks. IEEE Commun Mag 46(2):112–119

  40. Sun Y, Han Z, Yu W, Liu KJR (2006) A trust evaluation framework in distributed networks: vulnerability analysis and defense against attacks. In: IEEE INFOCOM, pp 1–13

  41. Fogg BJ, Tseng H (1999) The elements of computer credibility. In: Proceedings of the CHI’99, ACM Press, New York, pp 80–87

  42. Crocker L, Algina J (1986) Introduction to classical and modern test theory. Thomson Learning, Belmont

  43. TCG TPM Specification v1.2, http://www.trustedcomputinggroup.org/resources/tpm_main_specification. Accessed 8 Sep 2010

  44. Yan Z, Liu C, Niemi V, Yu G (2010) Effects of displaying trust information on mobile application usage. In: ATC’10, LNCS, vol 6407, pp 107–121

  45. Yan Z, Liu C, Niemi V, Yu G (2010) Trust information indication: effects of displaying trust information on mobile application usage. Technical Report NRC-TR-2009-004, Nokia Research Center. http://www.research.nokia.com/files/NRCTR2009004.pdf. Accessed 8 Sep 2010

  46. Yan Z, Yan R (2009) Formalizing trust based on usage behaviours for mobile applications. ATC09, LNCS 5586:194–208

  47. Schiffman J, Moyer T, Jaeger T, McDaniel P (2011) Network-based root of trust for installation. IEEE Secur Priv 9(1):40–48

  48. Wu J, Fang M, Yu P, Zhang X (2009) A secure software download framework based on mobile trusted computing. WCSE '09, pp 171–176

  49. Ahtiainen A, Kalliojarvi K, Kasslin M, Leppanen K, Richter A, Ruuska P, Wijting C (2009) Awareness networking in wireless environments: means of exchanging information. IEEE Veh Technol Mag 4(3):48–54

  50. Nokia SmartPhone 360 panel survey results: http://www.nwiki.nokia.com/Smartphone360/WebHome. Accessed 31 Jan 2009

Acknowledgments

The authors would like to thank Dr. Yan Dong, Prof. Rong Yan, Prof. Guoliang Yu, Dr. Valtteri Niemi, and Dr. N. Asokan for the user experiments, their comments, and their support of this work.

Author information

Corresponding author

Correspondence to Zheng Yan.

Appendix

A. Evaluation of individual trust calculation

From the Nokia SmartPhone 360 usage statistics [50], we can identify one usage model that changes periodically, e.g., mobile email usage. We model it in our simulation, with regard to usage frequency, by the function \( \left| \sin(\omega t) \right| \) (\( \omega = 1 \)). The second usage model is a logistic function, also known as Richards’ curve, which is widely used for growth modeling. We use a modified logistic function \( \left( 1 - e^{-\gamma t} \right) / \left( 1 + e^{-\gamma t} \right) \) (\( \gamma = 1/2 \)) in our simulation so that the growth starts from 0 at \( t = 0 \). The third usage model grows at the beginning and then declines to a stable level (possibly 0, which can be controlled by the function parameters). Here, we use a \( \Gamma(\alpha, \beta) \) distribution \( t^{\alpha - 1} e^{-\beta t} \) (\( \alpha = 2,\ \beta = 0.5 \)) to model it. We also propose a linear increase model \( \eta t \) (\( \eta = 0.1,\ \eta t < 1 \)) to roughly model, for example, the recommendation percentage, the elapsed usage time, and the number of usages. The above usage models can be applied to usage time, the number of usages, usage frequency, or the context index. The user-experienced feature \( \mathrm{EF}(i)/F(i) \) increases quickly and then gradually settles at a stable level; we use the logistic function to model it.
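The following Python sketch is an illustration only, not the authors' simulation code: it reproduces the four usage models with the parameter values quoted above, while the time grid and function names are assumptions introduced here.

```python
import numpy as np

# Parameter values quoted in the appendix text.
OMEGA, GAMMA, ALPHA, BETA, ETA = 1.0, 0.5, 2.0, 0.5, 0.1

def periodic_usage(t):
    """Periodically changing usage, e.g., mobile email: |sin(omega * t)|."""
    return np.abs(np.sin(OMEGA * t))

def logistic_growth(t):
    """Modified logistic (Richards-type) growth that starts from 0 at t = 0."""
    return (1 - np.exp(-GAMMA * t)) / (1 + np.exp(-GAMMA * t))

def grow_then_decline(t):
    """Gamma-shaped curve t^(alpha-1) * exp(-beta * t): growth, then decline."""
    return t ** (ALPHA - 1) * np.exp(-BETA * t)

def linear_increase(t):
    """Linear increase eta * t, valid while eta * t < 1 (capped at 1 here)."""
    return np.clip(ETA * t, 0.0, 1.0)

# Assumed time grid for plotting or feeding the trust-evaluation functions.
t = np.linspace(0.0, 20.0, 201)
usage_models = {f.__name__: f(t) for f in
                (periodic_usage, logistic_growth, grow_then_decline, linear_increase)}
```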

Figure 10a, b, and c show the simulation results of usage behavior formalization, reflection behavior formalization, and correlation behavior formalization, respectively. The usage models (or functions) applied in the simulations are listed in Table 3. For simplicity, we apply functions (2’) and (4) in our simulation. Figure 10d shows the aggregated trust value (\( T_{i}(t)_{0} = 0.5 \)) based on function (8) and the data of T(UB)_3, T(RB)_3, and T(CB)_1, T(CB)_2, and T(CB)_3 in Table 3, respectively.

Fig. 10 Individual trust value calculated based on usage behavior

Table 3 Usage models applied in simulations \( (\eta = 0.1,\ \gamma = 1/2,\ \alpha = 2,\ \beta = 0.5) \)

From the simulation, we can see that the individual trust value calculated with the proposed formalization reflects usage changes, no matter whether usage fluctuates periodically, increases, or decreases. It also reflects the context’s influence on trust. The trust value contributed by the correlation trust behavior indicates the impact of application similarity and usage difference on trust. To normalize the result, we apply a sigmoid function to map the final trust value into (0, 1). We can also use this function to map the different trust contributions into (0, 1) separately and then aggregate them. In this case, the general metric becomes:

$$ T_{i}(t) = f\left\{ T_{i}(t)_{0} + \rho\, f\left\{ T_{i}(t)_{\mathrm{UB}} \right\} + \vartheta\, f\left\{ T_{i}(t)_{\mathrm{RB}} \right\} + \varsigma\, f\left\{ T_{i}(t)_{\mathrm{CB}} \right\} \right\} \quad (\rho + \vartheta + \varsigma = 1) $$
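As a hedged sketch of this aggregation step: the code below uses a logistic sigmoid for \( f \) (the paper only requires a sigmoid, so this particular choice is an assumption), and the usage-, reflection-, and correlation-behavior contributions, which the paper computes with its functions (2’), (4), and (8), are passed in here as plain numbers.

```python
import math

def f(x: float) -> float:
    """Sigmoid mapping a raw trust contribution into (0, 1); the logistic
    form is an assumed choice of sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def aggregate_trust(t0, t_ub, t_rb, t_cb, rho=1/3, vartheta=1/3, varsigma=1/3):
    """Aggregate the initial trust value t0 with the usage (UB), reflection (RB),
    and correlation (CB) behavior contributions; weights must sum to 1."""
    assert abs(rho + vartheta + varsigma - 1.0) < 1e-9
    return f(t0 + rho * f(t_ub) + vartheta * f(t_rb) + varsigma * f(t_cb))

# Illustrative call: initial trust 0.5 (as in Fig. 10d) with made-up contributions.
print(aggregate_trust(0.5, t_ub=0.8, t_rb=0.6, t_cb=0.4))
```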

B. Random data generated for simulations in Sect. 6.5

The random data generated for simulations in Sect. 6.5 are provided in Tables 4 and 5.

Table 4 Simulated random data 1
Table 5 Simulated random data 2

About this article

Cite this article

Yan, Z., Zhang, P. & Deng, R.H. TruBeRepec: a trust-behavior-based reputation and recommender system for mobile applications. Pers Ubiquit Comput 16, 485–506 (2012). https://doi.org/10.1007/s00779-011-0420-2
