Article

A Methodical Approach to Functional Exploratory Testing for Embedded Systems

Rafal Kimla and Robert Czerwinski

1 Department of Digital Systems, Silesian University of Technology, 44-100 Gliwice, Poland
2 Rockwell Automation, Intelligent Devices, Power Control Business, 39 Konduktorska Str., 40-155 Katowice, Poland
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 10016; https://doi.org/10.3390/app121910016
Submission received: 26 August 2022 / Revised: 29 September 2022 / Accepted: 1 October 2022 / Published: 5 October 2022
(This article belongs to the Special Issue Cyber-Physical and Digital Systems Design)

Abstract

Functional exploratory testing is often considered a time- and resource-consuming activity, especially within embedded systems testing. The purpose of this paper is to present a case study of functional exploratory testing that demonstrates it to be a highly valuable technique and shows that applying the proposed methodical approach can overcome its disadvantages. The paper also provides a step-by-step framework that aids the implementation of exploratory testing. The case study concerns a low-voltage, near-motor variable frequency drive product. The results confirm the effectiveness of the proposed approach.

1. Introduction

Testing is intended to verify and validate a product and is required from the early stages of manufacturing. The earlier a product (and its components) is tested, the cheaper the overall process is, including any changes required to achieve the expected functionality. Agile software development methods distribute the testing effort evenly across iterations and focus on unit testing and verification automation [1]. The complexity of the testing process means that there is a wide range of potential techniques to use. One such technique is functional testing, also known as black-box testing [2]. Generally, the tester does not know the internal structure of the program under test; tests are designed to detect errors in the implementation of the functionality described by the requirements specification. In some cases, testing is based on an abstract model, which is known as gray-box testing [2]. Test cases are generated by exploring the search space of the model, but creating the model requires extra effort. Functional testing requires a special set of metrics to be gathered in order to evaluate the outcome [3,4,5].
The main challenge in testing is the complexity of systems, and embedded systems are a prime example: they often perform safety-critical functions and require extra care during the testing process [6]. The difficulties are compounded by the relationship between producers and testers. Challenges related to the automotive industry were summarized in a taxonomy in [7]. Embedded software can be tested using a regression testing methodology [8,9,10]. This is a retesting activity performed to gain confidence that system changes have not introduced unwanted behavior and that the unchanged parts of the system still work as before. Regression testing demands a lot of extra work from testers, which makes it expensive. Regression testing includes [11,12]:
  • Test case selection.
  • Test case re-validation.
  • Test case execution.
  • Failure identification.
  • Fault identification and mitigation.
Regression testing is commonly applied within industry but has received comparatively little scientific attention; in general, the process of building test suites remains experimental [5,13].
An interesting related problem is running tests on the system during startup and shutdown, or periodically while the system is running [14,15]. This type of post-production testing serves a completely different purpose than classical functional testing.
Traditional software testing is based on predesigned test cases. Alternatively, an experience-based approach can be applied, known as exploratory testing (ET) [1,16,17]. A unique characteristic of this process is that it extracts the most out of human knowledge and intelligence [18,19] via simultaneous learning, test design, and product verification. It is often confused with ad hoc testing, where the tester has no specific objective apart from interacting with the system during testing. ET is usually very effective in finding defects that could escape any formal technique, that have significant severity but possibly a lower occurrence in the field, or that are simply harder to reproduce. It is hypothesized that exploratory testing is more efficient than test-case-based testing in recognizing functional failures. This is because testers can use their personal knowledge to design tests and recognize failures on the fly, even with low levels of testing experience [20].
The main contribution of the paper is to provide experience- and case-study-based evidence that ET can be strategically adopted within a complex embedded system during the functional testing process. A methodical approach that can be easily implemented and maintained within an organization is proposed. The conducted experiments prove the suitability of the proposed solution and framework.

2. Exploratory Testing

A common problem for many organizations during product development is creating and maintaining functional requirements. Some fail to do it properly because they lack a systematic approach, some suffer from low capacity while working on complex projects, and others pay less attention to documentation because of the framework or style of work they have chosen to operate within.
The term exploratory testing was coined by Cem Kaner in [16] and was then expanded into a teachable discipline by Cem Kaner, James Bach, and Bret Pettichord in [17]. Exploratory testing combines test design with test execution and focuses on learning about the system being tested [21]. This is extremely helpful when testing products without proper functional requirements specifications. Simultaneous learning, test design, and product verification are the essence of exploratory testing. Moreover, when working on projects that lack functional requirement specifications, exploratory testing may be used for additional coverage.
Although exploratory testing can be a beneficial supplement to traditional scripted testing, the software industry has long both praised and criticized its applications. It does not belong to any specific testing technology and is not constrained to a particular testing characteristic. It can be applied within any testing phase or test implementation and can be combined with corresponding test technologies [22]. ET is often contrasted with scripted testing; however, exploratory testing can apply varying degrees of exploration, from fully exploratory to fully scripted [23]. Therefore, the question is not whether to apply ET, but rather when to apply which level to achieve the desired outcome.
Bach et al. rightly claim that all testing is exploratory, at least to some degree, and that exploration is the natural mode of testing [24]. Both scripted and unscripted testing require a level of familiarity with the device being tested. However, ET adds a level of intangibility and depends on the test engineer’s experience, sometimes becoming an art of its own. In this view, exploratory testing is not only reliant on tacit knowledge, but is itself a process for developing tacit knowledge about a product that enables better testing.
Since ET relies on the knowledge of engineers, the question is whether a structured approach is possible, providing a model that supports the optimization of exploratory testing on a case-by-case basis. Based on many interviews with industry practitioners, models are being developed that provide systematic test improvements over time [21,25]. Other work aims to minimize the drawbacks of exploratory testing (the time and resources required for manual execution) by combining it with model-based testing techniques to automate the process [26]. The results show that this approach can be effective.
Exploratory testing techniques are also evolving toward the application of machine learning and the use of neural networks [27,28,29].
To present the difference between scripted and exploratory testing, we take a typical simplified process for creating and executing a traditional test case design, based on the functional requirement specification:
  • Requirements analysis.
  • Define the coverage area and test objective.
  • Create the description and test conditions.
  • Identify any preconditions.
  • Develop the test procedure (test steps), including test data and pass/fail criteria.
  • Perform the review.
  • Approve the procedure and include it within the test suite.
A test case created by following those steps can be automated, or automation can be part of the process from the start. Exploratory testing typically follows a less formalized approach, reduced to defining the test objective without prescribing the exact steps of the procedure. While it still results in comparable coverage of the desired area, it can also add more, including near-proximity features or the interfaces between them. The major disadvantage of the ET process is the possibility of producing outcomes that are random and unsustainable, hence the need for a more structured approach to ensure repeatable results.
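To make the contrast concrete, the sketch below models the two artifacts side by side. It is a minimal, hypothetical illustration in Python; the type names and fields are our assumptions for this example, not tooling described in the paper.

```python
from dataclasses import dataclass

@dataclass
class ScriptedTestCase:
    """Fully specified up front: the procedure and verdict criteria are fixed."""
    objective: str
    preconditions: list[str]
    steps: list[str]        # exact, reviewed procedure to follow
    test_data: dict
    pass_criteria: str      # unambiguous expected result

@dataclass
class ExploratoryCharter:
    """Only the goal is fixed; the path to it is chosen during the session."""
    objective: str
    summary: str                   # context and intent, no prescribed steps
    timebox_minutes: int
    charter_vs_opportunity: float  # share of time on objectives vs. free roam
```

The difference in weight is visible immediately: the charter omits the step list entirely, which is where most of the authoring and maintenance cost of scripted testing lies.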

3. Case Study

The proposed methodical approach was introduced as part of the functional testing campaign for a new low-voltage, near-motor variable frequency drive product. The simplified architecture of the product is presented in Figure 1. The approach was conceived as an answer to the following development process difficulties: an underinvested functional requirement specification subject to a wide range of late life-cycle changes, hardware availability issues, and a tight release schedule. Due to the completely new device architecture, no scripted test cases could be imported from the product’s predecessors. Under typical conditions, a set of system-level scenarios would be formed and executed, including, but not limited to:
  • Initial device configuration (startup wizard).
  • Automatic device configuration.
  • Start inhibition scenarios.
  • Fault and alarm handling.
  • Application-specific testing.
  • Handling different types of motor encoders.
These product features normally involve very extensive, time-consuming testing because of their complexity and criticality to customers. To be fully testable, they require a certain firmware and interface maturity level; otherwise, testing could be blocked, resulting in wasted time. For a mass-market product, there is a wide range of variables and dependencies to consider for the initial device configuration alone, and the features listed above can likewise be applied in many industries and scenarios. This makes product validation challenging, as it should go beyond the functional requirements. Exploratory testing can cope with most of the mentioned issues: it saves time on test script development, and by prioritizing validation it is less dependent on the state of the requirement specification.
A difference between the two types of testing is also visible in the way product features are approached, which in scripted testing tends to be more isolated. In this case study, the device under test has a modular structure, which the functional requirements mirror. This means that scripted testing could not ensure proper coverage of the interfaces and communication between modules. ET implies a more holistic approach, where a majority of the modules are exercised simultaneously in each scenario, which also places focus on the interactions between them.
This philosophy puts the test engineer in a more customer-centric role compared with scripted testing. During a timeboxed session, the engineer is expected to become familiar with the objective, conditions, and constraints that apply to a particular scenario. The engineer can then perform the testing the way an end user would use the product while recording the steps taken, e.g., using a special tool that aids exploratory testing. Interaction with the device during testing is handled through a physical interface or, if one exists, a software component that enables product configuration and operation. Some free roam is desired during the session, allowing it to reach areas of the product that could either be skipped in scripted testing or be difficult to verify because of ambiguous verification criteria.
The initial estimated effort and projected burndown (taking into account an increasing number of engaged test engineers) presented in Section 5 showed savings of two months of test team work from following the ET approach compared with the traditional model, which also created space to analyze the case scenarios provided by product management, enabling more contextual testing. The whole test campaign took over a year to fully execute, with completion of all scheduled testing as the exit criterion. Exploratory testing was introduced during the last two-month period; it was run on a mature firmware version by two experienced test engineers and was concluded within a single two-week sprint. In total, nine out of ten planned complex scenarios were executed, with the last one being blocked from execution by an external source. This covered most of the product features and typical case scenarios. Normally, using scripted testing for those scenarios, a requirement review would have to be performed, followed by creating a test design with a list of test steps. At this stage, a test case could be automated, which, depending on its complexity, could become a time-consuming activity. A set of reviews of the test case and test automation script would be conducted to ensure they comply with the inputs. ET makes this process less formal and focuses more on interaction with the device being tested. The major advantage comes from only requiring an outlined objective rather than formal test documentation, which can be added later based on the outcomes of the ET session.
Alternatively, a risk-based approach [30,31] could have been applied in this case, focusing first on risk analysis and then performing testing according to the assigned priorities. However, considering that this was the initial release of a product with a new architecture, with code written entirely from scratch, this strategy could have left some areas undertested. Risk-based testing would also be ineffective against the wide range of changing requirements: scripted tests would have to be regularly updated and rerun if not properly synchronized with the changes. Risk-based testing would be the desired approach for subsequent product releases, or for similar products with features being ported between one another.

4. Framework

To place ET within the organization’s processes and regulations while maximizing its benefits, a structured approach is suggested, as shown in Figure 2. This ensures the sustainability, maintainability, and portability of the implemented solution and facilitates the decision of when and where to introduce it.
The proposed framework, based on the authors’ experience and the case study presented above, consists of the following steps:
  • Determine the ET ratio/scale.
  • Define the desired coverage.
  • Collect inputs.
  • Identify test oracles.
  • Develop test charters.
    (a) Describe the objectives.
    (b) Provide an overall summary.
    (c) Determine test targets.
    (d) Set the timebox.
    (e) State the charter vs. opportunity ratio.
  • Execute scenarios (optionally, record the execution).
  • Collect and analyze the results.
First, it must be decided what portion of testing within the project should be exploratory. For similar or repetitive projects, the scripted approach is more suitable, as it enables large-scale regression testing automation, while unique endeavors could be tested largely in an exploratory fashion. It must also be remembered that scripted testing is susceptible to the pesticide paradox, which makes a portion of ET beneficial and rational in almost every context. With the scale defined, the coverage should be determined based on criteria such as familiarity with the device or feature under test, the possibility of introducing automation, existing documentation, and test engineer availability.
While being less formalized, ET does not exist in a void. Therefore, it is required to collect inputs in the form of existing documentation, requirements, case scenarios, or experience. This step is extremely important to create valuable test charters that clearly outline their objectives and consciously identify potential risks. To be able to unambiguously determine test results, test oracles must be identified. In the case of ET this can be more difficult than for scripted testing, where pass or fail criteria are usually much better defined.
After completing the preparation steps, a test charter can be created. It provides the test engineer with the information required to complete the ET session while remaining far briefer than a test script. The most important aspect is the set of objectives, which are the crucial element of a scenario. An overall summary or description can elaborate on session goals to provide more context and intent and to convey any crucial information. As with scripted testing, it is advisable to determine test targets that can support the charter. Lastly, it is good practice to set a timebox and clearly state the ratio between focusing on objectives and the so-called “free roam”. This ensures focus and discipline while still enabling the opportunity to pursue any behavior or phenomena that the test engineer may consider anomalous. No test steps are provided, a major difference from traditional scripted testing, leaving the test engineer free to follow their own path.
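As an illustration, a charter assembled from steps (a)–(e) might look like the following minimal Python sketch. The scenario content is hypothetical, composed from the scenario types listed in Section 3; it is not one of the charters used in the case study.

```python
# A hypothetical, filled-in test charter following steps (a)-(e) above.
charter = {
    "objective": "Verify fault and alarm handling during start inhibition",   # (a)
    "summary": ("Exercise start-inhibit conditions via the configuration "
                "software and observe how faults and alarms are raised, "
                "displayed, and cleared across modules."),                    # (b)
    "test_targets": ["fault log", "alarm indication", "module interfaces"],   # (c)
    "timebox_minutes": 120,                                                   # (d)
    "charter_vs_opportunity": 0.7,  # 70% on objectives, 30% free roam        # (e)
}
```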
The execution of an exploratory testing session can be supported by a tool that records the session and captures the test steps. This enables an ET session to be later transformed into a scripted test, significantly aids the reproduction of identified defects, and allows the same steps to be followed during regression testing. The final step should be collecting, analyzing, and reporting the results based on the previously determined test oracles or other defined verification or validation criteria.
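A minimal sketch of such recording support is shown below. The tooling is assumed for illustration; the class and its methods are hypothetical and do not describe the tool used in the case study.

```python
import json
from datetime import datetime

class SessionRecorder:
    """Hypothetical ET session recorder: captures each action so a session
    can later be replayed as a draft scripted (regression) test."""

    def __init__(self, charter_objective: str):
        self.objective = charter_objective
        self.steps: list[dict] = []

    def record(self, action: str, observation: str) -> None:
        # Capture one interaction with the device and its observed outcome.
        self.steps.append({
            "time": datetime.now().isoformat(timespec="seconds"),
            "action": action,
            "observation": observation,
        })

    def export_draft_procedure(self, path: str) -> None:
        # Dump the recorded steps as a draft scripted test case.
        with open(path, "w", encoding="utf-8") as f:
            json.dump({"objective": self.objective, "steps": self.steps},
                      f, indent=2)
```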
The main factor in the successful implementation of ET is the preparation, which drives the effort in the correct direction. In the absence of clear guidelines in the form of functional requirements, a proper level of analysis must be performed to answer the following questions: which aspects of the system should be the key points of focus, what are the pass and fail criteria for them, and where can this information be found? Misguided exploratory testing may take too much effort compared with the benefits, although it must be acknowledged that an element of luck is involved. Again, the potential waste is limited by the charter vs. opportunity ratio and the timebox.

5. Results

As a result of nine ET scenario executions, six firmware defects were found, compared with 198 defects found by 983 scripted test cases. The latter included 361 automated tests run in a continuous integration environment, which were responsible for finding five defects in total. An analysis of the defects found by exploratory testing concluded that five of the six could not have been captured by test automation or by scripted testing recorded with an ET support tool; at the same time, they had low occurrence and reproducibility. The severity of those defects met the “must be fixed” criteria; therefore, they had to be fixed within the same release in which they were found.
Based on the track record of executed exploratory testing charters, ten test procedures were created, with one of the charters being split into two test cases. Two of those test cases were introduced as part of the continuous testing scope for subsequent product releases and became automated. One test case was added to the final regression suite, which was executed on release candidate firmware.
Effort estimates were accurately met during the exploratory testing process, including preparation, execution, and results management. The lightweight approach offered by ET proved successful within a rigid schedule. The traditional scripted part of the process, by contrast, suffered from delays caused by underinvested functional requirements and ran over three weeks behind the estimates provided in Figure 3.
The summary of the achieved results is presented in Table 1.
While the exploratory testing scale is much lower than that for scripted testing, and data about the escaped defects are not available, it is still possible to calculate the Test Effectiveness Ratio (TER) coefficient [4] for both the exploratory (1) and scripted testing (2). In principle, they both correspond to the same total product coverage, but were executed in sequential order, with exploratory testing being run in the very late stage of product development.
$$\mathrm{TER}_{\mathrm{ET}} = \frac{\text{Defects Found in Test}}{\text{Total Defects Found}} \times 100\% = \frac{6}{204} \times 100\% \approx 3\% \qquad (1)$$

$$\mathrm{TER}_{\mathrm{ST}} = \frac{198}{204} \times 100\% \approx 97\% \qquad (2)$$
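As a quick sanity check on the arithmetic, a minimal Python sketch computing both ratios from the defect counts reported above:

```python
def test_effectiveness_ratio(defects_found: int, total_defects: int) -> float:
    """TER [4]: percentage of all found defects attributable to one testing type."""
    return defects_found / total_defects * 100.0

total = 6 + 198  # exploratory + scripted defects (Table 1)
print(f"TER_ET = {test_effectiveness_ratio(6, total):.1f}%")    # -> 2.9%
print(f"TER_ST = {test_effectiveness_ratio(198, total):.1f}%")  # -> 97.1%
```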
From the Test Effectiveness Ratio perspective, the contribution of ET may appear negligible. However, it must be considered that ET was executed at the last stage of the development life cycle, when most defects had already been found during scripted testing. It still managed to detect defects of significant customer severity that would otherwise have escaped the testing process. The recorded course of actions taken by the test engineers led to test procedures that expanded the portfolio of available test cases; some of these were added to the continuous integration environment or the regression testing suite. Furthermore, a test charter can be written so as to be agnostic to the device under test (DUT), making it easily portable between products that share similar features but differ in implementation. This portability and reusability of exploratory testing charters is a significant time-saving advantage.
In this context, it is extremely valuable to consider the number of tests executed and the time required for each testing type [4]. On this basis, exploratory testing provides comparable, though less detailed, test coverage much faster and with fewer test cases to develop and maintain. Due to timeboxing and clear objectives, exploratory testing effort can be estimated accurately, and the overall process management is not complicated. For large-scale, complex embedded systems, exploratory testing based on case scenarios can easily be blocked by clusters of smaller defects, because such scenarios usually require many elements of the system to interact with each other, whereas scripted testing tends to be more isolated. It is therefore advisable to execute most of the scenarios during the more mature stages of product development.

6. Conclusions

This paper provides experience- and case-study-based evidence that exploratory testing can be strategically adopted within the functional testing process for a complex embedded system. The proposed methodical approach can be easily implemented and maintained within an organization. ET is extremely successful in detecting defects that would escape scripted testing; it also enhances product validation and is sustainable across the full development life cycle. Exploratory testing can be a highly valuable activity, and by applying a methodical approach, its disadvantages can be overcome.
When deciding whether to include an exploratory testing technique, it must first be analyzed how much product knowledge is available and how much repeat testing is expected. If the product specification is detailed and clear, with significant regression testing expected, then exploratory testing can serve as a supplement to scripted testing and test automation. Artificial intelligence, machine learning, and neural networks may be used to aid the process, with the possibility of automation. For new architectures or unique products, exploratory testing helps the team gain familiarity with the system while simultaneously testing it; the time invested in this manual process directly increases quality.
Putting significant effort into test charters by completing the groundwork maximizes the value of exploratory testing. The proposed step-by-step approach to preparing them has proven effective. Combined with the observed outcomes, it resulted in expanding the adoption of the process within the organization in which it was introduced. It has the potential to be used in new product introduction initiatives while eliminating some of the residual defects that scripted and automated testing could not detect because of the “pesticide paradox”.
The suggested approach to ET still allows space for test engineers’ creativity and experience. It also serves as a lightweight framework that can be clearly defined and placed within the context of organization processes, becoming a part of the product test strategy. This enables it to be widely adopted within industry, having a positive impact on quality assurance.

Author Contributions

Conceptualization, R.K. and R.C.; methodology, R.K.; software, R.K.; validation, R.K.; formal analysis, R.K.; investigation, R.K. and R.C.; resources, R.K.; data curation, R.K.; writing—original draft preparation, R.K. and R.C.; supervision, R.K. and R.C.; funding acquisition, R.K. and R.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was co-financed by Rockwell Automation and the Ministry of Education and Science of Poland under grant no. DWD/4/21/2020-76/003.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ET	Exploratory Testing
TER	Test Effectiveness Ratio

References

  1. Gregory, J.; Crispin, L. More Agile Testing; Addison-Wesley: Boston, MA, USA, 2015.
  2. Banerjee, A.; Chattopadhyay, S.; Roychoudhury, A. Chapter Three–On Testing Embedded Software. In Advances in Computers; Elsevier: Amsterdam, The Netherlands, 2016; Volume 101, pp. 121–153.
  3. Needham, D.; Jones, S. A software fault tree key node metric. Evaluation and Assessment in Software Engineering. J. Syst. Softw. 2007, 80, 1530–1540.
  4. Kimla, R. Overview of Metrics Applicable in Embedded Systems Functional Testing. In Proceedings of the 17th International Conference of Computational Methods in Science and Engineering, ICCMSE 2021, Heraklion, Greece, 4–7 September 2021.
  5. Cieplucha, M. Metric-Driven Verification Methodology with Regression Management. J. Electron. Test. 2019, 35, 101–110.
  6. Arsie, I.; Betta, G.; Capriglione, D.; Pietrosanto, A.; Sommella, P. Functional testing of measurement-based control systems: An application to automotive. Measurement 2014, 54, 222–233.
  7. Juhnke, K.; Tichy, M.; Houdek, F. Challenges concerning test case specifications in automotive software testing: Assessment of frequency and criticality. Softw. Qual. J. 2021, 29, 39–100.
  8. Minhas, N.M.; Petersen, K.; Börstler, J.; Wnuk, K. Regression testing for large-scale embedded software development—Exploring the state of practice. Inf. Softw. Technol. 2020, 120, 106254.
  9. Gupta, A.; Mahapatra, R.P. Multifactor Algorithm for Test Case Selection and Ordering. Baghdad Sci. J. 2021, 18, 1056.
  10. Hasnain, M.; Ghani, I.; Pasha, M.F.; Jeong, S.R. Ontology-Based Regression Testing: A Systematic Literature Review. Appl. Sci. 2021, 11, 9709.
  11. Onoma, A.K.; Tsai, W.T.; Poonawala, M.; Suganuma, H. Regression Testing in an Industrial Environment. Commun. ACM 1998, 41, 81–86.
  12. Tsai, W.; Na, Y.; Paul, R.; Lu, F.; Saimi, A. Adaptive scenario-based object-oriented test frameworks for testing embedded systems. In Proceedings of the 26th Annual International Computer Software and Applications Conference, Oxford, UK, 26–29 August 2002.
  13. Afzal, W.; Ghazi, A.; Itkonen, J.; Torkar, R.; Andrews, A.; Bhatti, K. An experiment on the effectiveness and efficiency of exploratory testing. Empir. Softw. Eng. 2015, 20, 844–878.
  14. Mukherjee, N.; Tille, D.; Sapati, M.; Liu, Y.; Mayer, J.; Milewski, S.; Moghaddam, E.; Rajski, J.; Solecki, J.; Tyszer, J. Time and Area Optimized Testing of Automotive ICs. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2021, 29, 76–88.
  15. Akili, S.; Lorenz, F. Towards runtime verification of collaborative embedded systems. SICS Softw.-Intensiv. Cyber-Phys. Syst. 2019, 34, 225–236.
  16. Kaner, C.; Falk, J.L.; Nguyen, H.Q. Testing Computer Software, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1999.
  17. Kaner, C.; Bach, J.; Pettichord, B. Lessons Learned in Software Testing; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2001.
  18. Makondo, W.; Nallanthighal, R.; Mapanga, I.; Kadebu, P. Exploratory Test Oracle using Multi-Layer Perceptron Neural Network. In Proceedings of the 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Jaipur, India, 21–24 September 2016.
  19. De Oliveira Neves, V.; Delamaro, M.; Masiero, P. An Environment to Support Structural Testing of Autonomous Vehicles. In Proceedings of the Brazilian Symposium on Computing Systems Engineering, Manaus, Brazil, 3–7 November 2014.
  20. Itkonen, J.; Mäntylä, M.V.; Lassenius, C. The Role of the Tester’s Knowledge in Exploratory Software Testing. IEEE Trans. Softw. Eng. 2013, 39, 707–724.
  21. Mårtensson, T.; Ståhl, D.; Martini, A.; Bosch, J. Efficient and effective exploratory testing of large-scale software systems. J. Syst. Softw. 2021, 174, 110890.
  22. Yu, J.; Zhang, J.; Pan, L.; Chen, Y.; Wu, N.; Sun, W. Software Exploratory Testing: Present, Problem and Prospect. In Proceedings of the 3rd International Academic Exchange Conference on Science and Technology Innovation (IAECST), Guangzhou, China, 10–12 December 2021.
  23. Ghazi, A.N.; Petersen, K.; Bjarnason, E.; Runeson, P. Levels of Exploration in Exploratory Testing: From Freestyle to Fully Scripted. IEEE Access 2018, 6, 26416–26423.
  24. Bach, J. Exploratory Testing. Available online: https://www.satisfice.com/exploratory-testing (accessed on 17 August 2022).
  25. Mårtensson, T.; Ståhl, D.; Martini, A.; Bosch, J. The MaLET Model—Maturity Levels for Exploratory Testing. In Proceedings of the 2021 47th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Palermo, Italy, 1–3 September 2021; pp. 78–85.
  26. Schaefer, C.J.; Do, H. Model-Based Exploratory Testing: A Controlled Experiment. In Proceedings of the 2014 IEEE Seventh International Conference on Software Testing, Verification and Validation Workshops, Cleveland, OH, USA, 31 March–4 April 2014; pp. 284–293.
  27. Nishi, Y.; Shibasaki, Y. Boosted Exploratory Test Architecture: Coaching Test Engineers with Word Similarity. In Proceedings of the 2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), Porto de Galinhas, Brazil, 12–16 April 2021; pp. 173–174.
  28. Eidenbenz, R.; Franke, C.; Sivanthi, T.; Schoenborn, S. Boosting Exploratory Testing of Industrial Automation Systems with AI. In Proceedings of the 2021 14th IEEE Conference on Software Testing, Verification and Validation (ICST), Porto de Galinhas, Brazil, 12–16 April 2021; pp. 362–371.
  29. Fatima, S.; Mansoor, B.; Ovais, L.; Sadruddin, S.A.; Hashmi, S.A. Automated Testing with Machine Learning Frameworks: A Critical Analysis. Eng. Proc. 2022, 20, 12.
  30. Kloos, J.; Hussain, T.; Eschbach, R. Risk-Based Testing of Safety-Critical Embedded Systems Driven by Fault Tree Analysis. In Proceedings of the 2011 IEEE Fourth International Conference on Software Testing, Verification and Validation Workshops, Berlin, Germany, 21–25 March 2011; pp. 26–33.
  31. Souza, E.; Gusmão, C.; Venâncio, J. Risk-Based Testing: A Case Study. In Proceedings of the 2010 Seventh International Conference on Information Technology: New Generations, Las Vegas, NV, USA, 12–14 April 2010; pp. 1032–1037.
Figure 1. Simplified product architecture.
Figure 2. Exploratory testing framework steps.
Figure 3. Original project estimates.
Table 1. Summary of test results.

Type of Testing        Test Cases    Defects    Must Be Fixed    Weeks
Exploratory Testing             9          6                5        2
Scripted Testing              983        198               79       68
Total                         992        204               84       70
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
