Usability Evaluation of Business Process Modelling Tools through Software Quality Metrics

Due to the widening range of functionality, software is becoming more complex, and a number of problems arise related to its ease of use and, specifically, its usability. The concept of "usability" is associated with the ease, efficiency and satisfaction of using any item, including computer technology. Specialists in the development of user interfaces face the challenge of creating products based on user experience that combine aesthetics, functionality, ergonomics and the ability to accomplish tasks quickly, while also complying with the constraints imposed by the specifics of the activity for which the software applications are intended. It can be said that business process modelling software is characterized by medium and even high complexity due to the many functionalities it offers. In this regard, the purpose of this paper is to propose and implement a method for evaluating the usability of this type of product based on software quality metrics. As a supporting method for conducting the evaluation procedure we used the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). We evaluated several business process modelling tools to validate the suggested method.


Introduction
In recent years, more and more companies, purposefully or under the influence of modern trends, put end users at the center of their technological developments, aiming to influence their emotions and feelings. The main goal in designing the user interface of software products is to focus on the way of thinking, behavior, beliefs, perceptions, emotions and preferences, as well as the psychological and subsequent physical reactions, of the people consuming a particular product.
The efforts of specialists and scientists working in the field of human-computer interaction are aimed mainly at minimizing the barriers between people's mental models in terms of fulfilling their goals and technological support of users' tasks. In the paradigm of user-oriented design, the product is developed in accordance with the expectations of its potential users, i.e. it is created to be usable.
The term "usability" has a broad meaning. Generally, it is used as a collective term to denote the ease of using individual tools or other man-made objects for achieving a specific goal. The user interface of a system is the "portal" to its functionality: the elements of the hardware and/or software system that allow users to communicate with it. ISO/IEC 25010:2011 defines the term "usability" as a subset of quality in use consisting of effectiveness, efficiency and satisfaction, for consistency with its established meaning (ISO, 2011). According to ISO/DIS 9241-11.2 it is the "extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (ISO, 2018). Effectiveness is associated with the precision and completeness with which users achieve their goals in specific environments. Efficiency describes the resources needed to achieve these goals. Satisfaction is the comfort of users when working with a system. Jennifer Preece defines the term as being concerned with developing interactive products that are easy to learn, effective to use, and pleasant from the user's perspective; it also includes optimizing the interaction of people with interactive products in order to carry out specific activities in work and study environments, as well as in everyday life (Preece et al., 2002). Whitney Quesenbery describes the term as a commonly used phrase for products that work better for their users, noting that it is hard to pinpoint exactly what people actually mean by it (Quesenbery, 2001). Jakob Nielsen is of the opinion that "usability is a quality attribute that assesses how easy user interfaces are to use. The word 'usability' also refers to methods for improving ease-of-use during the design process" (Nielsen, 2012).
Based on the above, it should be summarized that usability is a qualitative criterion that refers not only to the user interface of a product, but also to its functionality, considered in a specific context of use. The definitions considered can be regarded as generally valid, i.e. applicable to any type of technology. Usability is also defined as a quality attribute of software (Tomer, 2019; Kurosu et al., 2015; Ormeño et al., 2013; Masip et al., 2011; ISO, 2011; Raza and Capretz, 2009). In that connection, the aim of this paper is to propose and implement a method for software usability evaluation based on software quality metrics. As a supporting method for conducting the evaluation procedure we used the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). We evaluated several business process modelling tools to validate the suggested method.

Usability evaluation methods
As part of the present study, we conducted a survey among software professionals and computer science researchers about the usability evaluation methods that they use. The participants are from Latvia, Bulgaria, Russia, Germany, the USA, the UK, Serbia and Poland. We received 48 valid answers, of which 48% were given by representatives of the academic community and the rest by industry. 50% of all respondents have between 4 and 25 years of experience as usability experts.
The study included the following questions:
- Is usability evaluation important for your company? According to 89% of the respondents, the usability of software products is important, and they relate it to the quality of the final product offered by the company or organization in which they work.
- Please write down why it is important or not. The answers received differ depending on the current position of the participants. The most frequently mentioned reasons are: increasing customer satisfaction; meeting users' expectations (developing user-centred software); offering software that is really easy to use and accessible; reducing support issues; receiving feedback about the UX of products; feedback about competition level and regulation testing; improving internal processes and providing the best products for clients.
- Which usability evaluation methods do you use? The picture of the methods actually used is somewhat different: 77% of respondents use Heuristic Evaluation, 25% Cognitive Walkthrough, 11% Pluralistic Walkthrough, 41% Feature Inspection and 20% Compliance with Standards. 2% of all participants do not conduct usability research themselves, and 2% do not conduct any usability research.
- Do you conduct usability testing with users? 69% of the respondents answer that they involve users in usability testing.
- If you conduct usability testing with users, how many users do you usually include? The number of involved users differs and depends on the needs of the organization or project: 45% of the respondents reported that they involve between 5 and 10 users, 10% involve 15 users, and the rest involve more than 15 users.
- Do you use any usability testing tools? 69% of the respondents answer that they do not use any usability testing tools.
- In which phase of the development of the software products do you apply usability evaluation methods? 38% of the respondents said they apply usability evaluation methods in all phases, 23% during Design and Coding, and the rest during the Testing phase.
- Please describe what other methods and tools you use for conducting usability evaluation. Respondents answer that they use surveys with Likert and Kansei scales, tasks and time measurements, the think-aloud protocol, and internal QA and performance analysis.
Based on the above-mentioned study we can conclude that the most used methods according to our participants are: Heuristic Evaluation, Cognitive Walkthrough, Feature Inspection and Compliance with Standards. The differences between them are related to the participants involved in the evaluation process: usability specialists, developers, designers, managers and end users. These methods are the subject of research in various papers (Wilson, 2013; Rubin and Chisnell, 2008; Lewis, 2006; Faulkner, 2003; Nielsen and Mack, 1994; Nielsen, 1993).
The Heuristic Evaluation method is based on Nielsen's heuristics. The advantage of the method is that it can be implemented quickly, as only usability specialists are involved in conducting it, without end users. The disadvantage is that it cannot detect all usability problems, but at most about 75% of them, as Jakob Nielsen (1994) summarizes in his study.
Cognitive Walkthrough is implemented by usability specialists, developers and designers. A group of evaluators simulates the behavior of potential users of the system, performing a scenario with typical tasks. The questions they ask themselves when performing each task are related to the expected behavior of users. The evaluators try to succeed at every step of the process; if they fail, they evaluate why a user might not be able to complete the task with the interface.
The focus of the Feature Inspection method is on the set of features of a product. Evaluators analyze the availability, comprehensibility and other usability aspects of each feature, which is usually exercised by running a specific scenario with the product. Applying it, usability specialists only establish compliance between the requirements set for the application and their implementation.
The Compliance with Standards method checks conformance with international standards and guidelines for the development of user interfaces. The method is applied only by usability specialists. The disadvantage is the requirement for excellent knowledge of the standards, which do not always fully cover the problems of software usability.
These usability evaluation methods do not follow a formal assessment process. Their disadvantage is that they do not report user satisfaction and provide no quantitative data on the basis of which more in-depth conclusions about the usability of the software could be drawn. Their implementation yields an essentially expert opinion, including that of designers, ergonomics specialists and others involved in the process of developing the evaluated system. We propose that usability assessment be a focused process that follows formally defined criteria related to software quality.

Usability evaluation through software quality metrics
To describe the usability evaluation process, we propose to apply a practice-proven process approach. It should help companies to create user-oriented interfaces in a coordinated way, with clearly defined input parameters, constraints and output artifacts. The formalization of the usability evaluation process comprises the following phases: Planning, Evaluation and Reporting.
The business process receives as input parameters data for the project and for users and a prototype of the software or final product, which will be evaluated by specialists.
Restrictions are related to the context of use of the evaluated product (external environment -light intensity, noise levels, etc., and technical environment -technical parameters and condition of the hardware device from which the system is currently used) and business requirements.
As output artifacts of the process it is expected to receive reports with the results of its implementation and recommendations for improving the software usability.
Each of the phases of the usability evaluation process represents its subprocess.
Planning is the setting of research tasks and deadlines for their implementation; this is, in effect, the schedule of the evaluation process. To achieve a quality end result, it is necessary to perform this first phase carefully. As is well known, a well-designed plan reduces the cost of implementing the next phases.
The next stage of the business process is the evaluation of the software. We propose that it be based on quality assessment metrics derived from international standards. The advantage of referring to standards, even if they are not detailed enough, is that they describe principles for assessing software quality or user interface design (Bevan, 2006).
One of the popular quality models is defined by ISO/IEC 25010:2011 Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE). It defines a quality in use model that addresses users' needs to achieve specific goals with effectiveness, efficiency, freedom from risk and satisfaction in specific contexts of use. ISO/IEC 25010:2011 specifies the following characteristics and subcharacteristics of quality in use: Effectiveness, Efficiency, Satisfaction (Usefulness, Trust, Pleasure, Comfort), Freedom from risk (Economic risk mitigation, Health and safety risk mitigation, Environmental risk mitigation), Context coverage (Context completeness, Flexibility). On the other hand, "the product quality model categorizes product quality properties into eight characteristics (functional suitability, reliability, performance efficiency, usability, security, compatibility, maintainability and portability)" (ISO, 2011).
Another standard targeted at software quality is IEEE Std 730-2014 Standard for Software Quality Assurance. It establishes requirements for initiating, planning, controlling, and executing the Software Quality Assurance (SQA) processes of a software development or maintenance project (IEEE Computer Society, 2014). The SQA processes are defined to specify the activities and tasks that enable software suppliers to produce, collect, and validate evidence that forms the basis for a justified statement of confidence that the software product conforms to its established requirements (IEEE Computer Society, 2014). A disadvantage from the point of view of the present study is the lack of precise criteria for assessing the quality of user interfaces.
Based on the above analysis of both standards, we propose that our evaluation method use the usability-related quality characteristics of ISO/IEC 25010:2011 as usability assessment criteria. They are well defined and suitable for the goal of our study: Appropriateness; Recognizability; Learnability; Operability; User error protection; User interface aesthetics; Accessibility.
In the context of the present research, we conducted a survey among researchers and practitioners about the most important attributes of modelling software. We propose nine attributes based on (Badreddin et al., 2018; Forward et al., 2010) and the results of our study. The usability-related quality criteria of ISO/IEC 25010:2011 and the suggested attributes can be combined to form our evaluation criteria. We suggest the following: Appropriateness; Recognizability; Learnability; Operability; User error protection; User interface aesthetics; Accessibility; Ease of use; Readability. For assessing the usability of business process modelling tools it is necessary to add "Ability to analyze", "Support of modelling notations", "Communication between teams" and "Ability to view different aspects of a model".
In order to evaluate the software, we propose to use the TOPSIS method, a multi-criteria decision analysis method originally developed by Ching-Lai Hwang and Kwangsun Yoon in 1981. It is based on the concept that the chosen alternative should have the shortest geometric distance from the positive ideal solution and the longest geometric distance from the negative ideal solution (Stecyk, 2019; Balioti et al., 2018; Vavrek et al., 2017; Assari et al., 2012). The positive ideal solution maximizes the benefit criteria and minimizes the cost criteria, whereas the negative ideal solution maximizes the cost criteria and minimizes the benefit criteria (Puthanpura et al., 2018; Radosavljevic and Andjelkovic, 2017; Panda et al., 2014; Kabir and Hasin, 2012). The method is conducted in the following steps (Krohling and Pacheco, 2015):
- Create the decision matrix D = (x_ij)_{m×n}, which consists of m alternatives and n criteria.
- Normalize the matrix to form R = (r_ij)_{m×n} using r_ij = x_ij / sqrt(Σ_{k=1}^{m} x_kj²), i = 1, …, m; j = 1, …, n.
- Calculate the weighted normalized decision matrix: p_ij = w_j · r_ij, i = 1, …, m; j = 1, …, n.
- Identify the positive ideal solution A⁺ = (p_1⁺, p_2⁺, …, p_n⁺) and the negative ideal solution A⁻ = (p_1⁻, p_2⁻, …, p_n⁻), where p_j⁺ and p_j⁻ are, respectively, the best and worst values of criterion j over all alternatives (the maximum and minimum for benefit criteria, and the reverse for cost criteria).
- Calculate the distances of each alternative A_i to the positive and negative ideal solutions: d_i⁺ = sqrt(Σ_{j=1}^{n} (p_ij − p_j⁺)²) and d_i⁻ = sqrt(Σ_{j=1}^{n} (p_ij − p_j⁻)²).
- Calculate the relative closeness of each alternative A_i to the positive ideal solution: s_i = d_i⁻ / (d_i⁺ + d_i⁻).
- Rank the alternatives according to their relative closeness.
We propose that the suggested usability evaluation criteria have equal TOPSIS weights.
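The TOPSIS steps above can be sketched in plain Python. This is a minimal illustration under our equal-weight assumption; the decision matrix values in the usage example are hypothetical, not the actual ratings collected in our study.

```python
import math

def topsis(matrix, weights, benefit):
    """Return the relative closeness s_i of each alternative (row of matrix)."""
    m, n = len(matrix), len(matrix[0])
    # Step 1: vector-normalize each column: r_ij = x_ij / sqrt(sum_k x_kj^2)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    R = [[row[j] / norms[j] for j in range(n)] for row in matrix]
    # Step 2: weighted normalized matrix: p_ij = w_j * r_ij
    P = [[weights[j] * R[i][j] for j in range(n)] for i in range(m)]
    # Step 3: positive/negative ideal per criterion (max/min for benefit criteria)
    cols = list(zip(*P))
    a_pos = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    a_neg = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    # Step 4: Euclidean distances to both ideal solutions
    d_pos = [math.sqrt(sum((P[i][j] - a_pos[j]) ** 2 for j in range(n))) for i in range(m)]
    d_neg = [math.sqrt(sum((P[i][j] - a_neg[j]) ** 2 for j in range(n))) for i in range(m)]
    # Step 5: relative closeness s_i = d_i- / (d_i+ + d_i-)
    return [d_neg[i] / (d_pos[i] + d_neg[i]) for i in range(m)]

# Hypothetical example: two alternatives rated on three benefit criteria,
# with equal weights, as proposed for our evaluation criteria.
scores = topsis([[4, 3, 5], [3, 4, 4]],
                weights=[1 / 3, 1 / 3, 1 / 3],
                benefit=[True, True, True])
```

Ranking the alternatives then amounts to sorting them by their `scores` values in descending order; the alternative with the highest relative closeness is preferred.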
The results of the evaluation are summarized and we move on to the last phase of the usability study process, Reporting. The reports include the final assessment of the software usability and recommendations for improving it. They are usually made in free form, as there is no single established format. It is advisable to refer to the plan of the evaluation process, indicating whether the project is completed on time, whether the risk is justified, whether the success criteria of the project are met, etc.

Results
Based on the results of the above-mentioned study we decided to evaluate the usability of the Altova and Visual Paradigm modelling software. Altova MissionKit is multipurpose software supporting complex functions (enterprise-class XML, SQL, and UML tools for information architects and application developers). MissionKit includes Altova XMLSpy, Umodel, MapForce and StyleVision; only Umodel is a UML modelling tool (Altova, 2020). According to the participants of our research, the Altova software is easy to use and supports communication between teams and developers. It offers the ability to analyze and to view different aspects of a model, and supports UML modelling notation. As the software package includes many products for various purposes, we focus on Altova Umodel, which can be used for business process modelling.
Visual Paradigm is multipurpose and flexible software (System Modelling, Enterprise Architecture, Project Management, Agile & Scrum support, User Experience Design) that supports complex functions (Visual Paradigm, 2020). According to the participants of our research it is not easy to use, but it supports communication between teams and developers. It offers the ability to analyze, the ability to view different aspects of a model, and different modelling notations (such as UML and BPMN).
We applied the TOPSIS method with the proposed criteria for assessing usability. As a result, Altova Umodel obtained the higher usability score of 0.5220, while Visual Paradigm obtained a score of 0.4780.
This result reflects the relatively simpler interface of Altova Umodel, although the analytical capabilities of Visual Paradigm are much better.

Conclusion
When creating user-oriented designs, it is necessary to follow some "good practices", which can be reduced to:
- basic rules/principles in design and/or standards, derived as universally valid when creating a human-machine interface;
- rules/principles and/or standards for creating software application interfaces that result from restrictions imposed by mobile devices;
- templates for designing the user interface of software applications.
We proposed that usability evaluation be interpreted as a business process conducted in the following phases: Planning, Evaluation and Reporting. During the evaluation phase we suggested using the TOPSIS method, one of the multi-criteria decision analysis methods. We used it to assess the usability of the business process modelling tools Visual Paradigm and Altova Umodel. The results showed that Altova Umodel scores higher on usability than Visual Paradigm.