Article

Dashboard for Evaluating the Quality of Open Learning Courses

1 Universidad Central del Ecuador, Quito 170129, Ecuador
2 Smart Learning Research Group, University of Alicante, 03690 Alicante, Spain
* Author to whom correspondence should be addressed.
Sustainability 2020, 12(9), 3941; https://doi.org/10.3390/su12093941
Submission received: 3 April 2020 / Revised: 1 May 2020 / Accepted: 4 May 2020 / Published: 11 May 2020
(This article belongs to the Special Issue Opportunities and Challenges for the Future of Open Education)

Abstract

Universities are developing a large number of Open Learning projects that must be subject to quality evaluation. However, these projects have special characteristics that the usual quality models do not fully address. A fundamental part of a quality model is a visual representation of the results (a dashboard) that can facilitate decision making. In this paper, we propose a complete model for evaluating the quality of Open Learning courses and the design of a dashboard to represent its results. The quality model is hierarchical, with four levels of abstraction: components, elements, attributes and indicators. An interesting contribution is the definition of the standards in the form of fulfillment levels, which are easier to interpret and allow a color code to be used to build a heat map that serves as a dashboard. The dashboard is a regular nonagon, divided into sectors and concentric rings, in which each color intensity represents the fulfillment level reached at each abstraction level. The resulting diagram is a compact and visually powerful representation, which allows the identification of the strengths and weaknesses of the Open Learning course. A case study of an Ecuadorian university is also presented to complete the description and draw new conclusions.

1. Introduction

According to the Cambridge dictionary [1], Open Learning (OL) is a way of studying that allows people to learn where and when they want, and to receive and send their materials through electronic means. For Caliskan [2], the term Open Learning is used to describe learning situations in which learners have the flexibility to choose from a variety of options in relation to the time, place, instructional methods, modes of access, and other factors related to their learning processes.
Although they are not exactly the same [3], the term Open Learning is closely related to other terms such as e-Learning, Online Learning, Technology-enhanced Learning, Flexible Learning, or Distance Learning [2]. All in all, universities are developing a large number of Open Learning projects based on Information and Communication Technologies (ICT), mainly e-Learning (Electronic or Online Learning) courses, to support their students in the teaching and learning process. Like any process developed in the university environment, OL or e-Learning must be subject to parameters that allow its quality to be evaluated. However, these systems have special characteristics that the usual quality systems do not fully address. For example, the high dependence on technology, which also entails the need to train teachers and students in that technology, or the need for different teaching and instructional methodologies, are differentiating features of OL systems. Although several proposals have been developed to evaluate the quality of OL, many of them are not transferable, are unstructured, are incomplete or do not present a formal description, as will be explained in the background section.
On the other hand, a fundamental part of quality models is the visual representation of the results, since their main objective is to support decision making based on the diagnosis they establish. The usual form of information representation for decision making is the dashboard, consisting of a graphic representation of a set of indicators and other relevant information for the user who makes the decisions [4].
The general purpose of this research is to provide a compact and easily interpretable visualization tool for decision making in the context of an OL course quality assessment model. The proposed tool has the desirable features of a useful dashboard.
The document is organized as follows: in Section 2, we review some previous concepts and work related to Open Learning, quality models for OL, and dashboard design and indicator selection. Section 3 is devoted to presenting the methodology used in this research. In Section 4 we present our proposal, explaining the quality model and its principles. The research instruments for data collection, including their design and validation, are presented in Section 5, while Section 6 is devoted to the design and construction of the dashboard. A case study of the application of the model to an Ecuadorian university is presented in Section 7, to illustrate the interpretation of the results. Finally, the conclusions are set out in Section 8.

2. Background

In this section we present the background of the research, focusing on three main issues that support the proposal: the concepts of Open Learning, e-Learning, b-Learning and other related ones; the existing models for quality assessment of OL systems; and the visualization of quality assessment results through dashboards as well as their desirable features.

2.1. Open Learning, e-Learning, b-Learning and Other Related Concepts

Open Learning is a term used to describe flexible learning experiences in which the time, place, instructional methods, modes of access, and other factors related to their learning can be chosen by the learners [2]. Bates [3] considers that Open Learning is primarily a goal, or an educational policy whose essential characteristic is the removal of barriers to learning.
The concepts of Open Learning, Distance Learning, Flexible Learning and e-Learning (and other terms), are related and frequently considered as equivalent, though some different nuances are reported. For instance, Bates [3] states that Distance Learning is less a philosophy and more a method, so that students can study in their own time, at the place of their choice (home, work or learning center), and without face-to-face contact with a teacher. About Flexible Learning, the same author also considers it more of a method than a philosophy, but he reports a nuance: the flexibility in aspects such as geographical, social and time constraints of individual learners, rather than those of the institution. Flexible Learning may include distance education, but it also may include delivering face-to-face training in the workplace or opening the campus longer hours or organizing weekend or summer schools.
Although Open Learning, Distance Learning and Flexible Learning can mean different things, they all have one feature in common: they provide alternative means of high quality education for those who either cannot take conventional, campus-based programs, or choose not to [3].
The term e-Learning is much more modern, born as a result of the emergence, explosion and generalization of information technologies and the Internet in particular, with its associated tools (e-mail, World Wide Web, videoconferencing, apps) and devices (computers, tablets, smartphones). It dates back to the late 1980s and was consolidated during the 1990s [5], up to its current omnipresence. Although there is no consensus on the definition of e-Learning, we have chosen that of Koper [6]: e-Learning can be defined as the use of ICT to facilitate and improve learning and teaching.
The term e-Learning has given rise to other related terms: mobile learning or m-Learning, ubiquitous learning or u-Learning, and blended learning or b-Learning. B-Learning is the mode of learning that combines classroom teaching with non-classroom technology [7]. In a b-Learning course, the methods and resources of both face-to-face and distance learning are mixed, giving students more responsibility in their individual study by providing them with skills for such studies. Moreover, b-Learning is an option for introducing information technologies among a reluctant teaching staff and it fosters innovation processes and improvement of teaching quality [7].
The philosophy of Open Learning has given rise to a set of derivatives with a slightly different nuance. The term open has come to be used in recent times as a synonym for freely accessible, public domain or open license. This is the sense in initiatives such as Open Educational Resources (OER), OpenCourseWare (OCW) or Massive Open Online Course (MOOC). Despite the diversity of these concepts and tools, and the arguments for or against each one, they all have in common one objective: to improve the way the contents are made available to the learners [8].
In this paper, it was decided to use the terms Open Learning, e-Learning and b-Learning, since the proposed model can be applied to all these cases. The first is used because of its tradition and because it is a particularly broad concept that includes all the others; the second because of its wide use, having become almost the standard term; and the third because our model takes into account classroom teaching in addition to virtual teaching.

2.2. Quality in Open Learning Systems

It is not possible to find a consensus on the concept of quality of education in a university, the definition of which varies greatly since quality has different perspectives. In this section, we are going to mention some contributions.
One of the consequences of Open Learning is the self-organization of learning by the students, i.e., the student can lead his or her own learning, which implies a radical change in the roles assumed by the instructors and the students themselves. The instructor becomes a learning guide or facilitator, while the learners abandon their passive role and become the main protagonists. Therefore, if teaching and learning are changing, we cannot expect that the definition of quality and the method used to assess it will not also change [9].
The culture of quality is already well established in the universities. However, when it comes to Open Learning, quality assessment is addressed in the literature from very different points of view and for very specific cases. As a result, proposals are often not very transferable and it is difficult to find standards that allow us to undertake the task of assessing the quality of an Open Learning system in a structured and formal way. However, there are some authors that are looking for alternatives to the definition of quality in the field of Open Learning, as explained next.
Ehlers [9] considers that, with technology transforming the higher education institutions, the concept of quality must be redefined. Quality is no longer an add-on to teaching and learning; quality is the constituting issue. Therefore, the question is not how quality can be assured for technology-enhanced learning systems, but rather how technology-enhanced learning can be provided in such a way that high-quality learning scenarios unfold.
Vagarinho and Llamas-Nistal [10] establish that the quality of e-Learning is understood as the adequate fulfillment of the objectives and needs of the people involved, as a result of a transparent and participatory negotiation process within an organizational framework. Furthermore, in the field of e-Learning, quality is related to processes, products and services for learning, education and training, supported by the use of information and communication technologies.
Martínez-Caro, Cegarra-Navarro and Cepeda-Carrión [11] identify some of the main factors that affect the quality of e-Learning: the design and management of the learning environment, and interaction. Peer interaction, assessment and cooperation, and student-teacher interactions contribute to establishing an environment that encourages students to better understand the content.
There have been efforts to evaluate the potential of other OL environments such as m-Learning, through the evaluation of learning activities [12] and through more complete analyses that try to develop an authentic learning-based evaluation method and design approach for m-Learning activities [13].
The ESVI-AL project [14] is about accessibility in e-Learning, but it makes an interesting analysis of the areas that must be studied to guarantee the quality of the e-Learning process:
  • Quality of the technology, from the technical point of view: availability, accessibility, security, etc.
  • Quality of the learning resources included in the platform: content and learning activities.
  • Quality of the instructional design of the learning experience: design of learning objectives, activities, timing, evaluation, etc.
  • Quality of the teacher and student training in the e-Learning system.
  • Quality of the services and support, help and technical and academic support offered to the users of the system.
A more exhaustive, systematic review of the literature on quality models for e-Learning/b-Learning can be found in a previous work by the authors [15]. It can be seen that the focus of a large number of publications is the technical quality of the technology that supports the e-Learning process [10,14,16,17,18,19,20,21,22,23,24,25]. The quality of services and support associated with e-Learning systems [11,26,27,28,29,30,31,32], learning resources [33,34] and instructional design of online courses [23,35,36,37] are also topics of interest, although there is less consensus among researchers, as studies are case-focused and results are not generalizable. As for the training of students and teachers in the skills of using the e-Learning system [31], this seems to be an interesting issue but few authors have addressed it. On the other hand, an important symptom of the weak formalization of quality assessment models in Open Learning is the lack of references to more formal and widespread quality models [11,16,22].
As a result of this systematic study, we detected that more effort is needed in empirical research on this topic and that current research seems to focus on five aspects: technology, instructional design, learning resources, training, and services and support. However, there is no consensus on the characteristics that make a quality Open Learning course. Furthermore, no single comprehensive quality scheme has been found that contains the five areas and defines meaningful and measurable indicators. There are also some transversal aspects that a quality evaluation system should consider: communication, personalization, teaching innovation, entrepreneurship, linkage with society and collaboration, among others.

2.3. Dashboards

A dashboard is a business tool that displays a set of indicators and other relevant information to a business user. The information is usually represented graphically and must include the indicators involved in achieving the business objectives.
All organizations need an information system that enables communication of key strategies and objectives and decision making. This is what Eckerson [38] calls the “organizational magnifying glass”. This author considers that the dashboard is the organizational magnifying glass that translates the organization’s strategy into objectives, metrics, initiatives and tasks.
Few [4] considers that “a dashboard is a visual display of the most important information needed to achieve one or more objectives; consolidated and arranged on a single screen so the information can be monitored at a glance”. In short, a dashboard should contain limited, understandable, visual, important and goal-oriented information.
The main objective of a dashboard is to transform data into information and turn it into knowledge for the business. More precisely, for Eckerson [38], the goals of a dashboard are:
  • Monitor critical business processes and activities using metrics of business performance that trigger alerts when potential problems arise.
  • Analyze the root cause of problems by exploring relevant and timely information from multiple perspectives and at various levels of detail.
  • Manage people and processes to improve decisions, optimize performance, and steer the organization in the right direction.
Few [4] describes an interesting set of characteristics for dashboards:
  • A dashboard is a visual presentation, a combination of text and graphics (diagrams, grids, indicators, maps...), but with an emphasis on graphics. An efficient and attractive graphical presentation can communicate more efficiently and meaningfully than just text.
  • A dashboard shows the information needed to achieve specific objectives, so its design requires complex, unstructured and tacit information from various sources. Information is often a set of Key Performance Indicators (KPI), but other information may also be needed.
  • A dashboard should fit on a single computer screen, so that everything can be seen at a glance. Scrolling or multiple screens should not be allowed.
  • A dashboard presents up-to-date information, so some indicators may require real-time updating, but others may need to be updated with other frequencies.
  • In a dashboard, data is abbreviated in the form of summaries or exceptions.
  • A dashboard has simple, concise, clear, and intuitive visualization mechanisms with a minimum of unnecessary distractions.
  • A dashboard must be customized, so that its information is adapted to different needs.
The information in a dashboard is a set of Key Performance Indicators. These indicators are usually high-level measurements of how well an organization is doing in achieving critical success factors. To determine which KPIs take part in the dashboard, the designer must consider the audience level in the management structure: details should be removed as the target audience moves up the management structure, to avoid information overload. Moreover, a typical dashboard is usually made up of a few KPIs, including only those that are strictly necessary (typically between 7 and 10).
Selecting the KPIs is not an easy task. In most cases this task is manual and specific to a particular case. However, there are some research projects that seek to formalize and automate all or part of the dashboard design process, from selecting KPIs to defining visualization tools.
Chowdhary, Palpanas, Pinel, Chen and Wu [39] propose an efficient and effective model-driven dashboard design technique. A model is a formal specification of the function, structure and behavior of a system from a specific point of view, represented by a combination of drawings and text. The models are used as the primary source for selecting the KPIs and designing, constructing and deploying the dashboards.
Kintz [40] presents a semantic dashboard description language used in a process-oriented dashboard design methodology to help overcome known challenges of business process monitoring, such as the difficulty of building appropriate dashboards from complex data sources to best monitor given goals.
As regards methodologies for building dashboards, Brath and Peters [41] advise following an iterative model of creating dashboards for better designs. The design iteration and the use of sketches and prototypes help identify the needs and requirements and refine vague design ideas into the best possible solution.
The selection of the KPIs must meet a number of constraints that we have already discussed: they must be directly related to the organization’s goals, they must focus on a few key metrics, and they must consider the state of the organization and be adapted to the business model and features. An interesting work is that of Keck and Ross [42], who investigate the selection of KPIs through machine learning techniques in the particular case of a call center. In this dynamic context, they consider the problem as one of multi-label classification, in which the most relevant KPIs are labeled and then selected.
Molina-Carmona, Llorens-Largo and Fernández-Martínez [43] propose using the values of a model’s own indicators to classify them and determine which of them are most suitable to be part of the dashboard, in what is called data-driven indicator classification and selection. This way, decision making takes place at two levels: on the one hand, the values of indicators and their evolution help the dashboard designer to classify and select the KPIs that will be part of the dashboard; on the other hand, the KPIs themselves help the top management to make their business decisions. Moreover, there is a second consequence of this proposal: the data themselves reveal how the different indicators evolve and, therefore, when the KPIs of the dashboard should be replaced by more significant ones. In short, more dynamic dashboards are obtained, adapted to the changing environment conditions.
The need to reduce information means that many data and indicators collected at universities do not end up as part of the dashboard. The selection of these KPIs has been the subject of numerous studies that highlight the complexity of this selection. Therefore, a good option is to try to represent the indicators at various levels, so as not to have to give up any of them in the final visualization, but to omit them in the overall view. One example is the Technological Ecosystem Maps (TEmaps) [44]. A TEmap is a polygonal representation of the main elements of a technological ecosystem [8]. It is divided into levels (levels of abstraction from which to study the ecosystem), facets (basic principles that guide the organization and are transferred to the technological ecosystem) and components (specific aspects that are affected by the technological ecosystem). Each component, at each facet, studied from each level, is evaluated according to its maturity level. To do so, a maturity model is required to measure how well each element of the ecosystem fulfills the required objectives. Each maturity level is represented by a color, so that the TEmap finally takes the form of a heat map.

3. Methodology

For this research, we propose a methodology based on five major stages and a total of ten intermediate steps (Figure 1). The stages and steps proposed are:
(1) Review: in this stage we have made a deep and systematic study of the literature, presented in the previous section. It consists of two steps:
  (1) Literature review on the quality of Open Learning courses (Section 2.2).
  (2) Literature review on dashboard design (Section 2.3).
(2) Model: the formulation of the model is key in our proposal. It is presented in Section 4 and is based on the literature review on course quality from the previous stage. The design of the model has been structured in three steps:
  (3) Components design: the first step is the structuring of the model into components, elements and attributes, inspired by the models found in the literature (Section 4.1).
  (4) Indicators design: the design of the indicators is presented in Section 4.2, which establishes the specific indicators that we have considered in our proposal.
  (5) Fulfillment levels: the last step of this stage is to establish the fulfillment levels we propose for the aggregation and comparison of the indicators (Section 4.3).
(3) Instruments: data collection is the objective of this stage (Section 5). It involves three steps:
  (6) Collection instruments: in this step the data collection instruments are designed, based on the indicator design of the previous stage (Section 5.1).
  (7) Instruments validation: to ensure that the data collection instruments are useful and valid, a validation by experts has been carried out, reported in Section 5.2.
  (8) Data collection: in this step the data of the specific institution are collected by applying the data collection instruments (Section 5.3).
(4) Dashboard: the design and construction of the dashboard takes up the fourth stage (Section 6), divided into two steps:
  (9) Dashboard design: the dashboard is designed on the basis of the desirable characteristics established in the dashboard literature review, and on the basis of the model design, particularly the fulfillment levels (Section 6.1).
  (10) Dashboard construction: finally, thanks to the data collected, it is possible to build the dashboard designed in the previous step (Section 6.2).
(5) Case study: the last stage is the analysis and interpretation of the results of the case study (Section 7).

4. Model

The proposal is a complete model for evaluating the quality of Open Learning courses based on the principles of quality, and is supported by different theoretical frameworks that allow it to be given a formal structure: process management and the principle of continuous improvement. As a starting point, we defined the following principles for our model:
  • It must be supported by previous studies, which is why it is based on a systematic review of the literature.
  • It must be integral, trying to include all aspects.
  • It must be open, which is why we use an iterative methodology that allows us to include new aspects in the future.
  • It must be adaptable, being able to be applied in any e-Learning course with few adaptations.
  • It must have a solid theoretical base, such as instructional design theories and process management.
From the systematic review of the literature, we obtain four key aspects for the definition of the model:
  • The literature describes five areas on which to study quality and which should appear in the model: learning resources, instructional design, user training and education, service and technology support, and learning management system (LMS).
  • Kirkpatrick’s model [23,30], which proposes the evaluation of training through four levels (reaction, learning, transfer and impact), should guide the design of our model.
  • The ADDIE instructional design model [36] (Analysis, Design, Development, Implementation and Evaluation), applied to the use of technology, should be taken into account.
  • We must take into account the generic quality models, among them, the Total Quality Management (TQM) model [11,37], the Sustainable Environment for the Evaluation of Quality in e-Learning (SEEQUEL) [22] and Benchmarking [16].

4.1. Learning as a PROCESS: Components and Elements

A process is a set of mutually related and interacting activities that uses inputs to provide an output [45]. The teaching-learning process supported by technology is a dynamic system that fits this definition, in which the input to the system (the student, with his or her previous knowledge and skills) undergoes a transformation involving different resources (human, technological and methodological) until an output is obtained (the student with new knowledge and skills). It is possible, therefore, to see learning as a process.
This view of learning as a process has some background that is worth noting. For example, Biggs [46] established the so-called 3Ps Model to explain the teaching-learning process, especially from the student’s point of view. To this end, he established three components that correspond to three moments in the process: (1) Presage, which characterizes the student and the context in which he or she is learning; (2) Process, which refers to the way in which learning tasks are carried out; and (3) Product, which focuses on learning outcomes. Biggs’ proposal has many points in common with ours, although in our case the point of view is not restricted to the student, so the elements involved in the process are extended.
The process (Figure 2), generated through the interaction between the student and the different resources, makes possible the transfer of knowledge from the teachers and the resources to the students. In the output, the student is transformed through a process of knowledge acquisition, where, according to Kirkpatrick, we have four levels of evaluation that we can measure: reaction, learning, knowledge transfer and impact. Finally, there is the feedback to the process, which includes the results, the levels of satisfaction, the errors, the possible improvements, etc. These improvement options should be included in the next version of the course, with the corresponding modifications.
As a result, we propose a hierarchical quality model, obtained from the previously described principles, key aspects and process. It is based on four levels of abstraction, so that the upper level represents the three major components of the model, and the lower level the indicators. As the levels of abstraction are lowered, the information becomes more concrete and detailed. These levels of abstraction are:
  • Components
  • Elements
  • Attributes
  • Indicators
The model is divided into three components, according to the nature of the participants in the process in Figure 2. These components are the human agents involved in the process, the resources they use in its development and the dynamic part of the process that includes the interactions that occur and the result itself.
The elements make up the second level. They are the concrete elements in the process in Figure 2. Each component in the previous level is divided into three elements. However, the teaching-learning process is itself divided into six sub-elements, which represent the interaction of students and teachers with the elements of the resource component (instructional design, LMS and helpdesk).
A summary of the components and elements is presented in Table 1. The attributes and indicators make up the third and fourth levels and depend on the features of the particular Open Learning course being assessed. They are analyzed in the next section.
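As an illustration of how this four-level hierarchy could be represented in software, the following minimal sketch uses Python dataclasses; the class and field names are our own assumptions and not part of the model specification, and the example instance is a small excerpt based on Appendix A.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Indicator:
    name: str
    fulfillment_level: int = 0          # 1..5, assigned after data collection

@dataclass
class Attribute:
    name: str
    indicators: List[Indicator] = field(default_factory=list)

@dataclass
class Element:
    name: str
    attributes: List[Attribute] = field(default_factory=list)

@dataclass
class Component:
    name: str                           # Human, Resources or Process
    elements: List[Element] = field(default_factory=list)

# Small excerpt of the Human component, with names taken from Appendix A
human = Component("Human", [
    Element("Student", [
        Attribute("Digital skills", [Indicator("Use of computer tools")]),
        Attribute("LMS training", [Indicator("Training")]),
    ]),
])
```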

4.2. Attributes and Indicators

The elements and sub-elements are divided, in turn, into attributes. The attributes represent characteristics of each element and are measurable by means of indicators. The indicators represent specific variables that can be evaluated in terms of reference levels or evaluation standards.
A set of 38 attributes and 99 indicators is proposed [15], adapted to most of the situations that an Open Learning course can present. However, following the same methodology, it is possible to adapt the proposal to other cases and create other attributes and indicators more in line with the situation of each institution.
For each indicator, the following information should be considered (a code sketch follows this list):
  • Name, clearly identifying the meaning of the indicator.
  • Type, indicating whether its value is quantitative or qualitative.
  • Definition, explaining completely the nature and aim of the indicator.
  • Evidence, indicating how to obtain its value (instrument).
  • Standard, which represents the desirable qualities for the indicator and allows it to be compared with a reference measurement.
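A possible record for this indicator metadata, complementing the structural sketch in Section 4.1, is shown below; the field names and the example values are illustrative assumptions drawn from Appendix A and Section 4.3.

```python
from dataclasses import dataclass
from enum import Enum

class IndicatorType(Enum):
    QUANTITATIVE = "quantitative"
    QUALITATIVE = "qualitative"

@dataclass
class IndicatorSpec:
    name: str            # clearly identifies the meaning of the indicator
    type: IndicatorType  # quantitative or qualitative
    definition: str      # nature and aim of the indicator
    evidence: str        # instrument used to obtain its value
    standard: str        # reference against which the value is compared

# Illustrative example based on Appendix A and the regulation cited in Section 4.3
phd_staff = IndicatorSpec(
    name="Level of studies",
    type=IndicatorType.QUANTITATIVE,
    definition="% teachers who are PhD.",
    evidence="Teacher survey and institutional records",
    standard="Level 5 when at least 70% of the teaching staff hold a doctorate",
)
```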
An example of the attributes and indicators of the Human component corresponding to the elements Student, Teacher and LMS manager is presented in Appendix A. The complete list can be found in the work of Mejía-Madrid [15].

4.3. Fulfillment Levels

An interesting contribution of our model is the definition of the standards in the form of fulfillment levels. We propose five fulfillment levels for each indicator (from level 1 to level 5), regardless of the type of indicator. The levels of an indicator are established as follows (a code sketch follows this list):
  • If there are associated regulations, the regulations are used to establish the levels. For example, Ecuadorian regulations establish that universities must aspire to have at least 70% of their teaching staff hold a doctorate, so the indicator “% of teaching staff with a doctorate” reaches level 5 if its value is at least 70%, and the rest of the levels are established by dividing the range from 0% to 70% into 4 intervals.
  • If there is no regulation that allows the establishment of reference points, the levels are defined by dividing the whole range into 5 parts, when the indicator is quantitative, or they correspond to the 5 levels of a Likert scale, when it is qualitative. For example, for the indicator “% of teachers using the virtual classroom as a means of communication with students”, the 5 fulfillment levels are established homogeneously, in steps of 20%. Another example is the qualitative indicator “level of satisfaction of students with the learning experience”, for which a Likert scale with 5 values is used, equivalent to the fulfillment levels.
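The following minimal sketch illustrates these two rules for a quantitative indicator; the function name, signature and boundary handling are our own assumptions rather than part of the model.

```python
def fulfillment_level(value, minimum=0.0, maximum=100.0, threshold=None):
    """Map a quantitative indicator value to a fulfillment level from 1 to 5.

    If a regulatory threshold is given (e.g., 70 for "% of teaching staff with
    a doctorate"), level 5 is reached at or above it and the range below the
    threshold is split into 4 equal intervals (levels 1 to 4). Otherwise the
    whole range [minimum, maximum] is split into 5 equal intervals.
    """
    if threshold is not None:
        if value >= threshold:
            return 5
        step = (threshold - minimum) / 4
        return min(4, 1 + int((value - minimum) // step))
    step = (maximum - minimum) / 5
    return min(5, 1 + int((value - minimum) // step))

# Illustrative values
print(fulfillment_level(72, threshold=70))   # regulated indicator -> level 5
print(fulfillment_level(30, threshold=70))   # regulated indicator -> level 2
print(fulfillment_level(55))                 # unregulated, 20% steps -> level 3
```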
Normalizing the value of the indicators through the fulfillment levels allows for the comparison of indicators and provides a homogeneous scale that has two fundamental advantages:
  • The model is hierarchical, so that attributes are evaluated according to their indicators, elements according to their attributes, and components according to their elements. The fulfillment level of a hierarchical layer is the average of those of the lower layer. However, it is possible to establish a weighted average, so that the weight of each part is different. The determination of the weights is very dependent on each particular case and could be a powerful strategic tool for the institution. In the standard model, though, a uniform weighting has been used (see the sketch after this list).
  • The simplification of the scale to 5 values is easier to interpret and allows us to establish a color code that facilitates the graphic representation we are looking for, as we will see in Section 6.
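A minimal sketch of this hierarchical aggregation, assuming the uniform weighting of the standard model and the rounding to the nearest integer described later in Section 6.2; the function name and the example levels are ours.

```python
def aggregate(levels, weights=None):
    """Average a list of fulfillment levels (1-5), optionally weighted,
    and round the result to the nearest integer level."""
    if weights is None:
        weights = [1.0] * len(levels)                   # uniform weighting by default
    average = sum(l * w for l, w in zip(levels, weights)) / sum(weights)
    return max(1, min(5, round(average)))

# Attribute from its indicators, element from its attributes, and so on
attribute_level = aggregate([5, 4, 2])                  # -> 4
element_level = aggregate([4, 1, 5, 1])                 # uniform weighting -> 3
weighted_level = aggregate([4, 1], weights=[0.7, 0.3])  # weighted variant -> 3
```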

5. Research Instruments

The instruments for collecting the data are mainly two: surveys for students and teachers, and interviews with the directors and managers of the units responsible for the management of the LMS and other technologies supporting the teaching-learning process. In this section we present the design of these instruments, their validation and the data collection using them.

5.1. Collection Instruments

The purpose of the data collection instruments is to collect from the different stakeholders the data that will allow the calculation of the indicators. We have considered two types of instruments: surveys and interviews.
In the case of the surveys, we have followed the methodology proposed by Kitchenham and Pfleeger [47], which indicates that these instruments are complex and that a series of well-defined activities must be carried out: establishing the objectives of the survey, designing the survey, developing the questionnaire, evaluating and validating the questionnaire, carrying out the survey by collecting the data, analyzing the data obtained and reporting the results.
When designing the instruments, a preliminary version was first made and then validated (Section 5.2). The resulting questionnaires are presented here.
The student survey aims to evaluate the quality of the virtual classroom and the LMS from the students’ point of view. The first part contains the informed consent and survey instructions. Participants are informed of the purpose of the research and the voluntary and anonymous nature of the survey. The survey consists of 12 parts and 61 questions. The 12 parts are:
  • Socio-demographic data (7 questions)
  • Digital skills (2 questions)
  • Use of the learning platform (1 question)
  • Use of the virtual classroom (9 questions)
  • Use of resources, learning activities, evaluation and collaborative work (6 questions)
  • Learning experience (7 questions)
  • Monitoring, feedback and training (7 questions)
  • Quality of the virtual classroom (11 questions)
  • Instructional design (1 question)
  • Teacher training and updating (3 questions)
  • Teaching and learning process supported by the virtual classroom (2 questions)
  • Technological services provided by the institution for the operation of the virtual classroom (5 questions)
As for the teacher survey, it aims to evaluate the quality of the virtual classroom and the LMS, but in this case from the teachers’ point of view. Like the other survey, it contains at the beginning the informed consent and the survey instructions. In this case the survey consists of 7 parts and 33 questions:
  • Socio-demographic data (2 questions)
  • Use of the LMS (1 question)
  • Quality of instructional design (20 questions)
  • Use of virtual classrooms (1 question)
  • Digital skills (4 questions)
  • Teacher training and updating (3 questions)
  • Technological services provided by the institution for the operation of the virtual classroom (1 question)
  • Recommendations (1 question)
In addition to the surveys, an interview was arranged with the Information Technology Directorate and the Institutional Development Directorate to obtain some data. The interview consisted of 5 parts, in which the following data were collected:
  • Training of LMS managers, collecting data on training hours.
  • Characteristics of the technological infrastructure, in order to know, among other issues, the availability of the platform, the bandwidth, the security policies, the accessibility of the platform, the software update policies or the contingency plans.
  • Training of teachers and students, to know the percentage of teachers and students trained in the use of learning support technologies and training programs.
  • User support, collecting data on resolved incidents and response time.
  • Use of the virtual classroom, to know the teachers and students who really use the LMS.
The complete data collection instruments can be consulted in the work of Mejía-Madrid [15].

5.2. Instruments Validation and Redesign

In this section we present how the initial data collection instruments were validated to give rise to the final instruments. This validation consisted of an initial pilot test for both students and teachers, plus a validation questionnaire completed by experts, in this case only for the teachers’ instrument. In the case of the interviews with the directors and managers of the units, no explicit validation was made, since the questions were obtained directly from the model and reviewed by the authors of this article.
As for the student data collection instrument, its validation was carried out with a pilot test with students of the Information Systems subject. The aim of the pilot test is to find possible shortcomings in language, writing, relevance or technical quality. In this way, a validation by students for students is carried out, which we consider indispensable because quality is focused on learning and, therefore, on the student as the main actor of the process. The students made some suggestions regarding form, which were incorporated to improve the research instrument.
The validation of the teacher instrument was carried out in two ways: with a pre-test and with expert validation. The pre-test validation consisted of a pilot study with 17 teachers from the Central University of Ecuador, whose observations were incorporated. Then, the resulting questionnaire was validated by seven experts, among whom were four researchers in educational innovation and technologies for learning and three university managers in the field of educational technologies and academic management. The expert validation was conducted using an instrument provided by the Directorate of Academic Development of the Central University of Ecuador, in which the following aspects were evaluated for each question:
  • Relevance: The correspondence between the objectives and the items in the instrument.
  • Technical quality and representativeness: The adequacy of the questions to the cultural, social and educational level of the population to which the instrument is directed.
  • Language quality and writing: Use of appropriate language, writing and spelling, and use of terms known to the respondent.
This instrument was given to each expert, who made their respective observations. Compiling the experts’ feedback, they requested few changes in form, language and wording, and asked for some questions to be unified because the survey was extensive. These observations were taken into account in the development of the final version of the survey.

5.3. Data Collection

Once the data collection instruments have been defined, it is necessary to carry out this collection. The following are some of the aspects that have been taken into account for this process. With regard to student and teacher surveys, three key aspects need to be defined for their implementation:
  • The population, i.e., the recipients of the survey. In this case the population is made up of the students and teachers of the university analysed, for each of the surveys carried out.
  • The chosen sample, that is, who from the entire population will answer the questionnaires. In our case it is a voluntary survey, so it is not possible to define a sample size a priori. This introduces the problem of the possible non-representativeness of the sample, either because of an insufficient size or because it is not a random sample.
  • The way to send them the questionnaire: on paper, by e-mail, by means of an online form...
All questionnaires are accompanied by instructions indicating the purpose, who is sending the questionnaire, why the recipient of the survey was selected and whether and how the results will be shared.
As for the interview with those responsible for the learning technology system, the selection of the participants is crucial. This is a key informant interview, that is, the individuals selected are considered unique because of their position or experience. To get the best results from the interview and to collect all the expected data, it is essential to have a well prepared interview guide.

6. Dashboard

A dashboard, as already noted, is a business tool that displays a set of indicators and other information needed to make decisions. It is important that it presents in a visual way, at a single glance, the most important data needed to achieve the business objectives. In this section we present our proposal for a dashboard.

6.1. Dashboard Design

Decision-oriented representation of results is one of the objectives of this research. To this end, we propose a heat map as a dashboard, in the form of a regular nonagon (because of the nine elements), divided into sectors and concentric rings, in which each color intensity represents the fulfillment level reached by each indicator, each attribute (as the average of the fulfillment levels of its indicators) and each element (as the average of the fulfillment levels of its attributes).
An example of this type of display is shown in Figure 3. The three components (each with an associated color) are represented, and their quality is shown as a fulfillment level (with a different intensity of the chosen color). The first, the human component (in red), includes students, teachers and LMS managers. The second, methodological and technological resources (in green), includes the instructional design, the LMS and the helpdesk. Finally, the third component is the dynamics of the process (in blue), which includes the process itself, the result and the feedback arising from the interaction between the elements. In each component, the elements (X_i), attributes (A_j) and indicators (a_k) are distributed in three concentric rings.
The concentric rings give us information of different levels of abstraction. The closer they are to the center, the more general the information is, and as the rings move away from the center the information becomes more specific. The resulting diagram is a compact and visually very powerful representation, which allows us to easily identify the strengths and weaknesses of the Open Learning course analyzed.
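As an illustration of this layout, the sketch below (not the authors’ implementation) draws the element and attribute rings with matplotlib, approximating the nonagon’s sectors with circular wedges; the indicator ring would be added in the same way. The levels shown, as well as the use of transparency to stand in for color intensity, are illustrative assumptions.

```python
import matplotlib.pyplot as plt
from matplotlib.colors import to_rgba
from matplotlib.patches import Wedge

BASE_COLOR = {"human": "red", "resources": "green", "process": "blue"}

# (component, element level, attribute levels) for the nine elements (made-up values)
elements = [
    ("human", 4, [4, 4]), ("human", 3, [1, 3, 5, 1]), ("human", 5, [5]),
    ("resources", 3, [3, 2, 4]), ("resources", 2, [2, 3]), ("resources", 1, [1, 1]),
    ("process", 3, [3, 4]), ("process", 4, [4]), ("process", 3, [2, 4]),
]

fig, ax = plt.subplots(figsize=(6, 6))
sector = 360 / len(elements)                       # 40 degrees per element
for i, (comp, elem_level, attr_levels) in enumerate(elements):
    start = 90 + i * sector                        # start at the top, counter-clockwise
    # Inner ring: the element, intensity proportional to its fulfillment level
    ax.add_patch(Wedge((0, 0), 0.5, start, start + sector, width=0.5,
                       facecolor=to_rgba(BASE_COLOR[comp], 0.2 * elem_level),
                       edgecolor="white"))
    # Middle ring: the element's attributes, splitting its sector evenly
    sub = sector / len(attr_levels)
    for j, attr_level in enumerate(attr_levels):
        ax.add_patch(Wedge((0, 0), 0.8, start + j * sub, start + (j + 1) * sub,
                           width=0.3,
                           facecolor=to_rgba(BASE_COLOR[comp], 0.2 * attr_level),
                           edgecolor="white"))
ax.set_xlim(-1, 1); ax.set_ylim(-1, 1)
ax.set_aspect("equal"); ax.axis("off")
plt.show()
```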
The proposed representation in the form of a heat map can be used as a dashboard since it mostly fulfills the characteristics of Few [4] for a dashboard:
  • It is an efficient and attractive visual presentation, combining text and graphics.
  • It shows information needed to achieve a specific objective (evaluating the quality of an Open Learning course), and draws on complex, unstructured and tacit information from various sources (the data collection tools). It shows a set of KPIs (the 9 elements), but also other additional information (the attributes and indicators).
  • It fits on a single computer screen.
  • It allows for updated information if required.
  • The information can be considered as an aggregated summary of the whole Open Learning quality assessment.
  • A heat map is a simple, concise, clear and intuitive display mechanism.
  • It could be customized, showing more or less rings depending on the needs.

6.2. Dashboard Construction

The construction of the dashboard is a direct process once the data has been collected and the indicators calculated. Specifically, the steps that have been followed for the construction are:
  • Collection of indicator data from surveys and interviews.
  • Calculation of the fulfillment levels, based on the value of the indicators and the established standards, as indicated in Section 4.3.
  • Calculation of the fulfillment levels of the attributes, as an average of the fulfillment levels of the indicators of that attribute, with rounding to the nearest integer value.
  • Calculation of the fulfillment levels of the elements (sub-elements), as an average of the fulfillment levels of the attributes of that element (sub-element), with rounding to the nearest integer value.
  • Calculation of the fulfillment levels of the components, as an average of the fulfillment levels of the elements of that component, with rounding to the nearest integer value.
  • Assignment of a color according to the fulfillment level (a code sketch follows this list). To do this:
    • The hue depends on the component to which each element, attribute or indicator belongs: red for the human component, green for the methodological and technological component, blue for the process component.
    • The saturation depends on the fulfillment level. Five levels of saturation are established, distributed in intervals of 20%, between 0% (white, minimum saturation) and 100% (maximum saturation).
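A small sketch of this color rule using Python’s standard colorsys module; the exact hue values for red, green and blue, and the mapping of level 1 to 20% saturation, are assumptions consistent with the description above.

```python
import colorsys

HUES = {"human": 0.0, "resources": 1 / 3, "process": 2 / 3}  # red, green, blue

def cell_color(component, fulfillment_level):
    """Return an (R, G, B) triple in [0, 1] for a heat map cell.

    The hue identifies the component, while the saturation encodes the
    fulfillment level in 20% steps (level 1 -> 20%, ..., level 5 -> 100%)
    on a white base (value fixed at 100%).
    """
    saturation = 0.2 * fulfillment_level
    return colorsys.hsv_to_rgb(HUES[component], saturation, 1.0)

print(cell_color("human", 5))    # fully saturated red: (1.0, 0.0, 0.0)
print(cell_color("process", 1))  # pale blue, approximately (0.8, 0.8, 1.0)
```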
The dashboard has been built manually, but it can easily be automated since the calculation procedure is perfectly defined.

7. Case Study

The application of the model to a case study allows the description to be completed and new conclusions to be drawn. The model has been applied at the Central University of Ecuador (UCE—Universidad Central del Ecuador). In this case, the 3 components and 9 elements of the model have been divided into the 38 attributes and 99 indicators proposed in Section 4.2 and presented in the work of Mejía-Madrid [15]. In order to collect data to calculate the indicators, two surveys have been designed for teachers and students, and a series of interviews have been carried out with those responsible for the LMS. The surveys were conducted with 111 teachers (out of a total of approximately 2300) from the different faculties and 677 students (out of a total of approximately 40,000), and the heads of the university’s information technology department were interviewed. Although the voluntary nature of the questionnaire does not make it possible to ensure the absence of bias, the sample can be considered sufficiently large for the population under study.
Based on the data collected, each indicator is assigned its fulfillment level, with values from 1 to 5, and the results are incorporated into the heat map that constitutes the dashboard (Figure 3). In this graphic representation, the elements, attributes and indicators can be seen with different color intensities, depending on the fulfillment level reached. The ultimate goal of the model is to provide a complete picture of the state of the institution and determine what improvement actions can be taken to increase fulfillment levels for each element. The heat map can be used as a dashboard for the quality of the institution’s Open Learning courses. Without being exhaustive, we present some interesting results of the application in the UCE.
In the innermost ring we have the high-level information concerning the nine elements, grouped into the three main components. In the example of the UCE, the “LMS manager” element within the human component stands out as the element with the highest fulfillment level (level 5), although the “Student” element of the same component also has a notable fulfillment level (level 4). However, the “Helpdesk” element within the methodology and technology component is the one with the lowest fulfillment level (level 1).
Looking only at the innermost ring of the heat map gives us an overview, but can lead to confusion if the attributes and corresponding indicators are not analyzed in detail, especially when the fulfillment level is intermediate. Calculating the fulfillment levels of a ring as an average of those of the immediately outer ring may mean that intermediate fulfillment levels are due to very different values contributing to that calculation. A clear case is that of the “Teacher” element of the human component. Its average fulfillment level is intermediate (level 3), but if we look at the outer rings we can see that at the level of attributes (middle ring) and indicators (outer ring) there are very different values. Thus, attributes A_{2,1} and A_{2,4} have a minimum fulfillment level (level 1), while attribute A_{2,3} has a maximum level (level 5). It is interesting to note that the attributes with the lowest values are A_{2,1}, “Educational level” (due to the low number of PhDs on the staff of the UCE), and A_{2,4}, “Teacher contribution to transparency” (since very few teachers publish data on their subjects on the institutional website, one of the indicators for this attribute). However, “Teacher training” is very good, according to attribute A_{2,3}, which has the highest level. The description of all the attributes and indicators can be consulted in the work of Mejía-Madrid [15].

8. Conclusions

In this paper, we presented two main contributions: a model for the evaluation of the quality of Open Learning courses and a visual tool for the representation of the results of the application of the model, useful as a dashboard for decision making.
The quality assessment model has a solid theoretical basis derived from a systematic review of the literature, and is comprehensive, open and adaptable. It is made up of 3 components and 9 elements, which are divided into 38 attributes and 99 indicators in the case of the Central University of Ecuador (consult the work of Mejía-Madrid [15] for the complete description of the model), organized in a hierarchical way and whose data are obtained through various data collection tools validated by experts. The greatest contribution is the formalization of data of different types on a single scale formed by five fulfillment levels, which allows for easy comparison.
A dashboard in the form of a heat map is proposed, which can be constructed thanks to the scale of fulfillment levels and is a compact and intuitive representation of the situation of the university with respect to the quality of Open Learning courses. It has several levels of abstraction, represented in the rings of the diagram, which allows the information to be analyzed at different levels of detail. Each color value represents a fulfillment level (between 1 and 5), so that the representation is aggregated, homogeneous and comparable.
The tool has an important potential for decision making, so we propose to continue advancing in this line of research in the future. Specifically, we are considering automating the process of obtaining the dashboard, in order to keep it updated in a simpler way. We also propose to develop a systematic diagnostic methodology based on the dashboard, to achieve the automatic definition of improvement actions in the weakest areas aligned with the institution’s strategy. Finally, the proposal must be validated by university policy makers. The authors have experience in positions of responsibility in university governance and management, but we consider it essential to gather the opinion of other university leaders to understand the usefulness of the proposed model and dashboard.

Author Contributions

Conceptualization, G.M.-M., F.L.-L. and R.M.-C.; methodology, G.M.-M., F.L.-L. and R.M.-C.; software, G.M.-M. and R.M.-C.; validation, F.L.-L. and R.M.-C.; formal analysis, G.M.-M., F.L.-L. and R.M.-C.; investigation, G.M.-M.; resources, G.M.-M. and R.M.-C.; data curation, G.M.-M.; writing—original draft preparation, G.M.-M. and R.M.-C.; writing—review and editing, F.L.-L. and R.M.-C.; visualization, F.L.-L. and R.M.-C.; supervision, R.M.-C.; project administration, R.M.-C.; funding acquisition, F.L.-L. and R.M.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Universidad Central del Ecuador, through the agreement with Universidad de Alicante for the direction of doctoral theses. The APC was funded by Cátedra Santander-UA de Transformación Digital and the Smart Learning Research Group, Universidad de Alicante.

Acknowledgments

The authors would like to acknowledge the experts who validated the research instruments, and the students, teachers, managers and directors who participated in the surveys and interviews, for their generous contribution.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Attributes and Indicators

The attributes and indicators of the Human component are presented here. They correspond to elements X_1, Student (Table A1); X_2, Teacher (Table A2); and X_3, LMS manager (Table A3). The complete list can be found in the work of Mejía-Madrid [15].
Table A1. Attributes and indicators for element X_1, Student.
Attribute | Indicator | Description
A_{1,1} Digital skills | a_{1,1,1} Use of computer tools | % students who regularly use computer tools in their learning activities.
A_{1,2} LMS training | a_{1,2,1} Training | % students who have been trained in the use of the LMS.
Table A2. Attributes and indicators for element X_2, Teacher.
Attribute | Indicator | Description
A_{2,1} Level of studies | a_{2,1,1} Level of studies | % teachers who are PhD.
A_{2,2} Digital skills | a_{2,2,1} Basic notions | Average degree of use of basic learning resources in teaching practice.
 | a_{2,2,2} Knowledge deepening | Average degree of use of knowledge deepening resources in teaching practice.
 | a_{2,2,3} Knowledge generation | Average degree of use of knowledge generation resources in teaching practice.
A_{2,3} LMS training | a_{2,3,1} Training | % teachers who have been trained in the use of the LMS.
 | a_{2,3,2} Training expectation | % teachers who would like to be trained in the use of digital tools and resources.
A_{2,4} Contribution to transparency | a_{2,4,1} Syllabus publication | % teachers who publish at least the subject syllabus on the institutional website.
Table A3. Attributes and indicators for element X_3, LMS manager.
Attribute | Indicator | Description
A_{3,1} Training in LMS management | a_{3,1,1} Training in LMS management | Average number of training hours of the LMS manager in the last year.

References

1. Anon. Cambridge Business English Dictionary; Cambridge University Press: Cambridge, UK, 2020.
2. Caliskan, H. Open Learning. In Encyclopedia of the Sciences of Learning; Seel, N.M., Ed.; Springer US: Boston, MA, USA, 2012; pp. 2516–2518.
3. Bates, T. Technology, e-Learning and Distance Education, 2nd ed.; RoutledgeFalmer Studies in Distance Education; Routledge: London, UK; New York, NY, USA, 2005.
4. Few, S. Information Dashboard Design: The Effective Visual Communication of Data, 1st ed.; O’Reilly: Beijing, China; Cambridge, MA, USA, 2006.
5. Moore, J.L.; Dickson-Deane, C.; Galyen, K. e-Learning, online learning, and distance learning environments: Are they the same? Internet High. Educ. 2011, 14, 129–135.
6. Koper, R. Open Source and Open Standards. In Handbook of Research on Educational Communications and Technology, 3rd ed.; Spector, J.M., Ed.; Lawrence Erlbaum Associates: New York, NY, USA, 2008; Volume 31.
7. Pina, A.B. Blended learning. Conceptos básicos. Pixel-Bit. Rev. de Medios y Educ. 2004, 23, 7–20.
8. Llorens-Largo, F.; Molina-Carmona, R.; Compañ-Rosique, P.; Satorre-Cuerda, R. Technological Ecosystem for Open Education. In Frontiers in Artificial Intelligence and Applications; IOS Press: Amsterdam, The Netherlands, 2014; pp. 706–715.
9. Ehlers, U.D. Open Learning Cultures: A Guide to Quality, Evaluation, and Assessment for Future Learning; Springer: New York, NY, USA, 2013.
10. Vagarinho, J.P.; Llamas-Nistal, M. Quality in e-learning processes: State of art. In Proceedings of the 2012 International Symposium on Computers in Education (SIIE), Andorra la Vella, Andorra, 29–31 October 2012; pp. 1–6.
11. Martínez-Caro, E.; Cegarra-Navarro, J.G.; Cepeda-Carrión, G. An application of the performance-evaluation model for e-learning quality in higher education. Total Qual. Manag. Bus. Excell. 2015, 26, 632–647.
12. Huang, Y.M.; Chiu, P.S. The effectiveness of a meaningful learning-based evaluation model for context-aware mobile learning: Context-aware mobile learning evaluation model. Br. J. Educ. Technol. 2015, 46, 437–447.
13. Chiu, P.S.; Pu, Y.H.; Kao, C.C.; Wu, T.T.; Huang, Y.M. An authentic learning based evaluation method for mobile learning in Higher Education. Innov. Educ. Teach. Int. 2018, 55, 336–347.
14. Camacho Condo, A. Modelo de Acreditación de Accesibilidad en la Educación Virtual; Deliverable E3.2.1, European Union—Project ESVI-AL; European Union: Brussels, Belgium, 2013.
15. Mejía-Madrid, G. El Proceso de Enseñanza Aprendizaje Apoyado en las Tecnologías de la Información: Modelo para Evaluar la Calidad de los Cursos b-Learning en las Universidades. Ph.D. Thesis, Universidad de Alicante, Alicante, Spain, 2019.
16. Martín Núñez, J.L. Aportes para la Evaluación y Mejora de la Calidad en la Enseñanza Universitaria Basada en e-Learning. Ph.D. Thesis, Universidad de Alcalá, Alcalá de Henares, Spain, 2016.
17. Santoveña Casal, S.M. Criterios de calidad para la evaluación de cursos virtuales. Rev. Científica Electrónica de Educ. y Comun. en la Soc. del Conoc. 2004, 2, 18–36.
18. Frydenberg, J. Quality Standards in eLearning: A matrix of analysis. Int. Rev. Res. Open Distrib. Learn. 2002, 3.
19. Mejía, J.F.; López, D. Modelo de Calidad de E-learning para Instituciones de Educación Superior en Colombia. Form. Univ. 2016, 9, 59–72.
20. Stefanovic, M.; Tadic, D.; Arsovski, S.; Arsovski, Z.; Aleksic, A. A Fuzzy Multicriteria Method for E-learning Quality Evaluation. Int. J. Eng. Educ. 2010, 26, 1200–1209.
21. Tahereh, M.; Maryam, T.M.; Mahdiyeh, M.; Mahmood, K. Multi dimensional framework for qualitative evaluation in e-learning. In Proceedings of the 4th International Conference on e-Learning and e-Teaching, Shiraz, Iran, 13–14 February 2013; pp. 69–75.
22. Militaru, T.L.; Suciu, G.; Todoran, G. The evaluation of the e-learning applications’ quality. In Proceedings of the 54th International Symposium ELMAR-2012, Zadar, Croatia, 12–14 September 2012; pp. 165–169.
23. Chatterjee, C. Measurement of e-learning quality. In Proceedings of the 2016 3rd International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 22–23 January 2016; pp. 1–4.
24. Grigoraş, G.; Dănciulescu, D.; Sitnikov, C. Assessment Criteria of E-learning Environments Quality. Procedia Econ. Financ. 2014, 16, 40–46.
25. Nanduri, S.; Babu, N.S.C.; Jain, S.; Sharma, V.; Garg, V.; Rajshekar, A.; Rangi, V. Quality Analytics Framework for E-learning Application Environment. In Proceedings of the 2012 IEEE Fourth International Conference on Technology for Education, Hyderabad, India, 18–20 July 2012; pp. 204–207.
26. Skalka, J.; Švec, P.; Drlík, M. E-learning and Quality: The Quality Evaluation Model for E-learning Courses. In Proceedings of the International Scientific Conference on Distance Learning in Applied Informatics, DIVAI 2012, Sturovo, Slovakia, 2–4 May 2012.
27. Zhang, W.; Cheng, Y.L. Quality assurance in e-learning: PDPP evaluation model and its application. Int. Rev. Res. Open Distrib. Learn. 2012, 13, 66–82.
28. Arora, R.; Chhabra, I. Extracting components and factors for quality evaluation of e-learning applications. In Proceedings of the 2014 Recent Advances in Engineering and Computational Sciences (RAECS), Chandigarh, India, 6–8 March 2014; pp. 1–5.
29. Casanova, D.; Moreira, A.; Costa, N. Technology Enhanced Learning in Higher Education: Results from the design of a quality evaluation framework. Procedia Soc. Behav. Sci. 2011, 29, 893–902.
30. Lim, K.C. Quality and Effectiveness of eLearning Courses—Some Experiences from Singapore. Spec. Issue Int. J. Comput. Internet Manag. 2010, 18, 11.1–11.6.
31. Friesenbichler, M. E-learning as an enabler for quality in higher education. In Proceedings of the 2011 14th International Conference on Interactive Collaborative Learning, Piestany, Slovakia, 21–23 September 2011; pp. 652–655.
32. D’Mello, D.A.; Achar, R.; Shruthi, M. A Quality of Service (QoS) model and learner centric selection mechanism for e-learning Web resources and services. In Proceedings of the 2012 World Congress on Information and Communication Technologies, Trivandrum, India, 30 October–2 November 2012; pp. 179–184.
33. Tinker, R. E-Learning Quality: The Concord Model for Learning from a Distance. NASSP Bull. 2001, 85, 36–46.
34. Alkhalaf, S.; Nguyen, A.T.A.; Drew, S.; Jones, V. Measuring the Information Quality of e-Learning Systems in KSA: Attitudes and Perceptions of Learners. In Robot Intelligence Technology and Applications 2012; Kim, J.H., Matson, E.T., Myung, H., Xu, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 208, pp. 787–791.
35. Marković, S.; Jovanović, N. Learning style as a factor which affects the quality of e-learning. Artif. Intell. Rev. 2012, 38, 303–312.
36. Aissaoui, K.; Azizi, M. Improvement of the quality of development process of E-learning and M-learning systems. Int. J. Appl. Eng. Res. 2016, 11, 2474–2477.
37. Hoffmann, M.H.W.; Bonnaud, O. Quality management for e-learning: Why must it be different from industrial and commercial quality management? In Proceedings of the 2012 International Conference on Information Technology Based Higher Education and Training (ITHET), Istanbul, Turkey, 21–23 June 2012; pp. 1–7.
38. Eckerson, W.W. Performance Dashboards: Measuring, Monitoring, and Managing Your Business, 2nd ed.; Wiley: New York, NY, USA, 2010.
39. Chowdhary, P.; Palpanas, T.; Pinel, F.; Chen, S.K.; Wu, F.Y. Model-Driven Dashboards for Business Performance Reporting. In Proceedings of the 2006 10th IEEE International Enterprise Distributed Object Computing Conference (EDOC’06), Hong Kong, China, 16–20 October 2006; pp. 374–386.
40. Kintz, M. A semantic dashboard description language for a process-oriented dashboard design methodology. In Proceedings of the 2nd International Workshop on Model-Based Interactive Ubiquitous Systems, Modiquitous 2012, Copenhagen, Denmark, 25 June 2012; pp. 31–36.
41. Brath, R.; Peters, M. Dashboard Design: Why Design is Important. DM Rev. 2004, 85, 1011285-1.
42. Keck, I.R.; Ross, R.J. Exploring customer specific KPI selection strategies for an adaptive time critical user interface. In Proceedings of the 19th International Conference on Intelligent User Interfaces, Haifa, Israel, 24–27 February 2014; pp. 341–346.
43. Molina-Carmona, R.; Llorens-Largo, F.; Fernández-Martínez, A. Data-Driven Indicator Classification and Selection for Dynamic Dashboards: The Case of Spanish Universities. In Proceedings of the EUNIS (European University Information Systems), Paris, France, 5–8 June 2018; p. 10.
44. Molina-Carmona, R.; Villagrá-Arnedo, C.; Compañ-Rosique, P.; Gallego-Duran, F.J.; Satorre-Cuerda, R.; Llorens-Largo, F. Technological Ecosystem Maps for IT Governance: Application to a Higher Education Institution. In Open Source Solutions for Knowledge Management and Technological Ecosystems; Garcia-Peñalvo, F.J., García-Holgado, A., Jennex, M., Eds.; Advances in Knowledge Acquisition, Transfer, and Management; IGI Global: Hershey, PA, USA, 2017; pp. 50–80.
45. ISO. ISO 9001:2015(es)—Sistemas de Gestión de la Calidad; International Organization for Standardization: Geneva, Switzerland, 2015.
46. Biggs, J. What do inventories of students’ learning processes really measure? A theoretical review and clarification. Br. J. Educ. Psychol. 1993, 63, 3–19.
47. Kitchenham, B.A.; Pfleeger, S.L. Personal Opinion Surveys. In Guide to Advanced Empirical Software Engineering; Shull, F., Singer, J., Sjøberg, D.I.K., Eds.; Springer: London, UK, 2010; Volume 3.
Figure 1. Methodology stages and steps.
Figure 2. The model seen as a process.
Figure 3. Dashboard for the UCE case.
Table 1. Model components and elements.

Component | Element | Description
Human (Red) | Student | Learning recipient and input to the system.
Human (Red) | Teacher | Who guides and creates the learning atmosphere using different methods and techniques.
Human (Red) | LMS manager | Who provides the management and administration services of the learning platform.
Methodology and Technology (Green) | Instructional design | Academic activity devoted to designing and planning resources and learning activities. The instructional design corresponds to the ADDIE model.
Methodology and Technology (Green) | LMS | Software platform that manages learning, where resources and activities are located.
Methodology and Technology (Green) | Helpdesk | Institutional service offered to students and teachers for the use, management and training of the LMS.
Process (Blue) | Process | Interaction process of students, teachers and managers with each resource. It has 6 sub-processes, resulting from combining each element of the human component with each element of the methodological and technological component.
Process (Blue) | Result | Output of the teaching-learning process in which a student with knowledge i ends up with knowledge j, where j > i. For evaluation, Kirkpatrick's four levels of assessment are used: reaction, learning, knowledge transfer and impact.
Process (Blue) | Feedback | Improvement actions that feed back into the system.
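Figure 3 presents the dashboard for the UCE case as a heat map with one sector per element of Table 1, grouped by component colour. As an illustration only, the sketch below shows one way such a nine-sector diagram could be rendered with matplotlib; the function nonagon_heatmap, the exact geometry (one concentric ring per value to display) and the random example data are our assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of a nine-sector heat map with
# polygonal (nonagon) edges, one sector per element and one ring per value.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.patches import Polygon

def nonagon_heatmap(fulfillment):
    """fulfillment: 9 x R array with values in [0, 1]; row i is the sector
    for element i in Table 1 order, column j is the j-th concentric ring."""
    fulfillment = np.asarray(fulfillment, dtype=float)
    n_sectors, n_rings = fulfillment.shape
    assert n_sectors == 9, "one sector per element (nine in total)"
    # One colour family per component, as in Table 1: Human (Red),
    # Methodology and Technology (Green), Process (Blue).
    cmaps = [cm.Reds] * 3 + [cm.Greens] * 3 + [cm.Blues] * 3
    radii = np.linspace(0.0, 1.0, n_rings + 1)                     # ring boundaries
    angles = np.linspace(0.0, 2.0 * np.pi, n_sectors + 1) + np.pi / 2
    fig, ax = plt.subplots(figsize=(6, 6))
    for i in range(n_sectors):
        t1, t2 = angles[i], angles[i + 1]
        for j in range(n_rings):
            r_in, r_out = radii[j], radii[j + 1]
            # Four straight edges per cell give the polygonal (nonagon) outline.
            verts = [(r_in * np.cos(t1), r_in * np.sin(t1)),
                     (r_in * np.cos(t2), r_in * np.sin(t2)),
                     (r_out * np.cos(t2), r_out * np.sin(t2)),
                     (r_out * np.cos(t1), r_out * np.sin(t1))]
            # Colour intensity encodes the fulfillment level of this cell.
            ax.add_patch(Polygon(verts, closed=True,
                                 facecolor=cmaps[i](fulfillment[i, j]),
                                 edgecolor="white", linewidth=1.0))
    ax.set_xlim(-1.05, 1.05)
    ax.set_ylim(-1.05, 1.05)
    ax.set_aspect("equal")
    ax.axis("off")
    return fig

# Example with random fulfillment levels for 9 elements and 3 rings.
nonagon_heatmap(np.random.rand(9, 3)).savefig("dashboard_sketch.png", dpi=150)
```

Replacing Polygon with matplotlib's Wedge would produce circular sectors instead of the polygonal outline; element and ring labels can be added with ax.text if needed.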
