Research Article

Improving the Effectiveness of Testing Pervasive Software via Context Diversity

Published: 17 July 2014

Abstract

Context-aware pervasive software responds to various contexts and their changes. A faulty implementation of context-aware features may lead to unpredictable behavior with adverse effects. One of the most important research issues in software testing is to determine whether a test suite suffices to verify the software under test. Existing adequacy criteria for testing traditional software, however, have not explored the dimension of serial test inputs and do not consider context changes when constructing test suites. In this article, we define the concept of context diversity to capture the extent of context changes in serial inputs and propose three strategies to study how context diversity may improve the effectiveness of data-flow testing criteria. Our case study shows that the strategy using test cases with higher context diversity significantly improves the effectiveness of existing data-flow testing criteria for context-aware pervasive software. In addition, test suites with higher context diversity are found to execute significantly longer paths, which may help explain why context diversity contributes to the effectiveness of test suites.
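The abstract does not spell out how context diversity is computed; one plausible reading, offered here purely as an illustrative sketch, is to count the field-level differences (Hamming distance) between consecutive context readings in a serial input and sum them over the whole trace. The function names and the trace format below are hypothetical, not taken from the article.

```python
# Hypothetical sketch: quantifying "context diversity" of a serial test
# input as the total number of field-level changes (Hamming distance)
# between consecutive context readings. This is an assumed formulation,
# not the article's exact definition.

def hamming(a, b):
    """Number of positions at which two context tuples differ."""
    return sum(1 for x, y in zip(a, b) if x != y)

def context_diversity(trace):
    """Sum of Hamming distances between consecutive readings in a trace."""
    return sum(hamming(trace[i], trace[i + 1]) for i in range(len(trace) - 1))

# Example: a serial input of (location, temperature) context readings.
trace = [("room1", 20), ("room1", 21), ("room2", 21), ("room2", 21)]
print(context_diversity(trace))  # 2
```

Under this reading, a test case whose trace changes context more often scores higher, which is the property the article's selection strategies would exploit when building test suites.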




      • Published in

        ACM Transactions on Autonomous and Adaptive Systems, Volume 9, Issue 2
        July 2014
        146 pages
        ISSN: 1556-4665
        EISSN: 1556-4703
        DOI: 10.1145/2642710

        Copyright © 2014 ACM

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 17 July 2014
        • Accepted: 1 January 2014
        • Revised: 1 December 2013
        • Received: 1 October 2012
        Published in TAAS Volume 9, Issue 2


        Qualifiers

        • research-article
        • Research
        • Refereed
