1 Introduction

E-learning is a teaching model that provides access to information via digital platforms or media and relies mainly on internet technologies to share academic content. E-learning systems not only allow content to be created and shared but also support other aspects of teaching (tuition fees, grades) and channels of exchange between students (chats, forums, etc.) (Muilenburg & Berge, 2005). E-learning platforms are defined as systems through which students access online resources and curricula, communicate with instructors and receive the instructor's assessments (Bhuasiri et al., 2012). Today, e-learning platforms are preferred because they allow students to continue their education while fulfilling their other responsibilities. E-learning platforms have advantages as well as disadvantages: the flexibility they provide in terms of time and space can also become an obstacle between teacher and student (Lara et al., 2014).

The market share of e-learning platforms is increasing day by day. Many factors affect the success of a platform and customer satisfaction; the instructor, courses, technology, design and environment are among the factors affecting satisfaction (Sun et al., 2008). The COVID-19 epidemic, which has affected social life in many respects, has also profoundly affected the education preferences of society, and higher education institutions have needed to re-evaluate their teaching techniques. In this context, the assessment of e-learning portals has become a necessity (Ouajdouni et al., 2021). Evaluation of e-learning systems is multidisciplinary and involves its own difficulties (Roffe, 2002). A systematic approach should be used to determine the platforms' dimensions, features and critical criteria (Ouadoud et al., 2016). E-learning portal selection or comparison is a complex process and, at the same time, a multi-criteria decision making (MCDM) problem (Gong et al., 2021).

The features of e-learning platforms or learning management systems (LMS) have been studied only superficially in the literature, and it has been reported that interfaces and models should be improved for comparisons (Buendia & Hervas, 2006). COVID-19 is neither the first nor the last pandemic, so interest in e-learning platforms will increase in the coming years, driven by the possible risk of disease, the advantages offered by distance education, and the fact that the quality of education offered can match face-to-face education. The motivation of our study is to determine the weights of all critical success criteria and to offer a reliable method for evaluating e-learning platforms. This study consists of five sections: the literature review is given in the second section; the evaluation criteria, interval type-2 fuzzy sets and fuzzy AHP (F-AHP) are described in the third section; the findings and obtained weights are presented in the fourth section; and the results and future works are presented in the fifth section. This is the most comprehensive study considering all critical success factors of e-learning platforms as an MCDM problem, where the 11 criteria and 106 sub-criteria in Table 1 were defined, evaluated and prioritized. All these criteria and sub-criteria were determined and defined through a deep literature survey. The originality and value of this paper lie in defining all critical factors for e-learning platforms, ranking these factors, revealing the most important ones and using MCDM methods for the evaluation of effectiveness factors for the first time.

Table 1 Criteria and Sub-criteria

2 Literature review

Today, e-learning platforms attract more attention than ever before. The quality of education on these platforms is determined by their technical features and the pedagogical method used; for this reason, the educational strategies of the platforms should be considered (Begičević et al., 2007). E-learning should be based on pedagogy, and the behaviouristic, cognitive and constructivistic approaches have been used to compare e-learning platforms (Moedritscher, 2006). The increase in the usage rate of e-learning platforms depends on the richness of the portal's contents, and the creation and reusability of learning content are ensured by effective content libraries (Yigit et al., 2014). Increased multimedia use, high media update speed and short waiting times increase students' attention (Lin et al., 2011). The framework (portal) also increases learning effectiveness for all learning branches (Sahasrabudhe & Kanungo, 2014). Different multimedia content is used to support learning in e-learning platforms; a quality model has been proposed to determine the importance of factors such as media loss, video stream quality, download time, file size, user control, timeliness and user-friendliness that affect learning (Jeong & Yeo, 2014). It has also been shown that developing an e-learning platform according to students' needs using a student-oriented approach increases efficiency (Dominici & Palumbo, 2013).

Three important features of e-learning platforms have been defined as structure, media and communication capabilities, and the factors affecting the design of e-learning platforms are theoretical orientation, learning objectives, content, student characteristics and technological ability (Susan & Kenneth, 2000). Furthermore, there is a significant relationship between individual differences, academic achievement and the usability of an e-learning portal (Karahoca & Karahoca, 2009). The purpose of educational platforms is to ensure that students acquire the skills they need and increase their knowledge level. However, each student's capabilities and needs are different; thus, e-learning systems should be adaptable to various student needs and present information in different formats accordingly (Leka et al., 2016).

Multi-criteria decision-making (MCDM) methods have been used in the literature to compare and evaluate e-learning platforms. The main components of the platforms were used as criteria for comparisons; these components are digital libraries of educational resources and services, learning objects and virtual learning environments (Kurilovas & Dagiene, 2009). Availability of online discussions is one of the essential factors for benchmarking e-learning platforms (Ng & Murphy, 2005). Furthermore, learning effectiveness evaluation is vital in comparing learning platforms. A hybrid MCDM evaluation method, which combines the Analytic Hierarchy Process (AHP) and the fuzzy integral method, has been employed to simultaneously consider the interactive relationship between the criteria and the blurriness of subjective perception; the relationship between the criteria was examined by factor analysis and the Decision Making Trial and Evaluation Laboratory (DEMATEL) method (Tzeng et al., 2007).

Hwang et al. employed an integrated group decision approach combining fuzzy theory with grey system theory to evaluate e-learning platforms (Hwang et al., 2004). In addition, e-learning platforms were compared using consistent fuzzy preference relations (Chao & Chen, 2009). The Proximity Indexed Value (PIV), a multi-criteria decision-making method, has been developed to compare these platforms; the Vise Kriterijumska Optimizacija I Kompromisno Resenje (VIKOR) and Complex Proportional ASsessment (COPRAS) methods were also employed for platform evaluations (Khan et al., 2019). Moreover, a combination of AHP and Quality Function Deployment (QFD) was used to evaluate e-learning systems (Xu et al., 2009). A combination of the fuzzy logic-based Kirkpatrick model and the layered evaluation framework PeRSIVA has been proposed to evaluate e-learning methods (Chrysafiadi & Virvou, 2013). Other parameters used to compare e-learning platforms include adaptability, customization and extensibility; adaptation is defined as the student's adaptation to the flow of the course (Graf & List, 2005). In addition, self-learning evaluation has been recommended for comparing e-learning platforms using AHP, with learning behavior, cooperation and communication, resource use and learning effect considered as criteria (Chen & Yang, 2010; Mingli & Yihui, 2010). Comparison of educational platforms should focus on educational issues as well as technical issues (Martin et al., 2008). AHP and artificial neural networks were used to compare the quality and learning efficiency of e-learning platforms (Chen & Fu, 2010). A primitive cognitive network process, which considers multiple criteria and alternatives, has been proposed for selecting an e-learning platform, and this method was compared with AHP (Yuen, 2012).

The implementation of e-learning platforms has also been examined to compare their performance. Success factors at the implementation stage were investigated using AHP with group decision-making and F-AHP (Naveed et al., 2020). AHP diagrams were used to evaluate the learning effectiveness of web-based e-learning platforms (Murakoshi et al., 2001). Key factors affecting e-learning were examined with the help of AHP (Qin & Zhang, 2008). AHP was also used to study students' adoption of e-learning platforms, and 33 different factors were examined; factors such as cost, quality, agility, timing control, degree certification and personal demands were reported to have a significant impact on individuals' adoption of e-learning platforms (Zhang et al., 2010). A methodology has been developed to determine the difficulty levels of e-learning platform questions, in which a linear program (LP) was used to evaluate the difficulty level of the questions (Matsatsinis & Fortsas, 2005). Association rules with F-AHP were used to evaluate the application score and interactive learning process in e-learning platforms (Wang & Lin, 2012). Quality function deployment (QFD) and fuzzy linear regression were used for e-learning platform selection (Alptekin & Karsak, 2011). MCDM was used to determine the quality of learning material in e-learning platforms (Kurilovas & Dagienė, 2009). Factors affecting the successful implementation of e-learning platforms were identified by AHP (Lo et al., 2011). Fuzzy mathematics was used to determine the factors affecting the effectiveness of e-learning platforms (Bo et al., 2009). AHP and F-AHP have been widely used to set criteria priorities in comparing e-learning platforms (Alptekin & Karsak, 2011; Bo et al., 2009; Chen & Fu, 2010; Lo et al., 2011; Martin et al., 2008; Murakoshi et al., 2001; Naveed et al., 2020; Qin & Zhang, 2008; Wang & Lin, 2012; Yuen, 2012). Other methods used for prioritizing e-learning criteria include DEMATEL, PeRSIVA, LP, QFD, PIV and VIKOR (Chrysafiadi & Virvou, 2013; Kurilovas & Dagienė, 2009; Matsatsinis & Fortsas, 2005). Previous studies have examined the adaptation of the platform to the student, the success factors for the implementation of e-learning platforms, and the weights of education-related criteria (Graf & List, 2005; Naveed et al., 2020; Zhang et al., 2010).

In this study, 11 criteria (Adaptation, Framework, Function properties, Security, Content, Collaboration & Communication, Quality, Learning, Assessment and evaluation, Technical Specifications, Support) and 106 sub-criteria were defined to evaluate e-learning platforms. Main criteria and sub-criteria weights were determined by using Type-2 Fuzzy Sets AHP. This is the most comprehensive study performed on this issue with 11 criteria and 106 sub-criteria for evaluating e-learning platforms.

3 Material and method

The motivation of our study is to determine the weights of all critical success criteria and to offer a reliable method for evaluating e-learning platforms. The decision model was structured hierarchically, as shown in Fig. 1, to prioritize all critical success factors for e-learning platforms. At the first level, the goal of this study is stated as determining the priorities of critical success factors in e-learning systems. The criteria handled in this study are C1-adaptation, C2-framework, C3-function, C4-security, C5-content, C6-collaboration & communication, C7-quality, C8-learning, C9-assessment and evaluation, C10-technical specifications and C11-management support. The vagueness and subjectivity of trainee and trainer judgements are taken into account through linguistic parameters expressed as interval-valued trapezoidal fuzzy numbers.

Fig. 1

AHP Model

3.1 Evaluation Criteria

In this study, the 11 criteria and 106 sub-criteria in Table 1 for comparing e-learning platforms were defined, evaluated and prioritized. All these criteria and sub-criteria were determined and defined through a deep literature survey. The criteria are adaptation (C1), framework (C2), function (C3), security (C4), content (C5), cooperation and communication (C6), quality (C7), learning (C8), assessment and evaluation (C9), technical specifications (C10), and support (C11).

The C1-adaptation criterion is based on 4 sub-criteria, namely C11-compatibility, C12-extendibility, C13-customization and C14-adaptability. C11-compatibility means a high correlation between user needs and system features in terms of learning material and framework. C12-extendibility means that the architecture and content are designed with subsequent needs in mind. C13-customization means that the user can customize the platform according to his or her needs. C14-adaptability means that the platform can adapt to user needs.

The C2-framework criterion is based on 15 sub-criteria. These are C21-warning/message system, C22-ease of understanding, C23-clear navigation, C24-attractive interface, C25-graphics layout, C26-easy participation, C27-offline resource, C28-interface customization, C29-computer knowledge, C210-user friendly, C211-learning objectives, C212-usability, C213-easy data access, C214-color match, and C215-ergonomic. C21-warning/message system means that warning and error messages guide the user with clear and understandable precautions. C22-ease of understanding means that the labels and routing in the interface are clear and understandable. C23-clear navigation means that the menus and subpage hierarchy within the portal are clear and understandable. C24-attractive interface is a platform design that is engaging, simple and focused on learning material. C25-graphics layout refers to the placement of figures, tables and videos on the screen. C26-easy participation denotes that participation in the training is straightforward and there are no complicated links. C27-offline resource means that students can have offline access to pre-downloaded resources. C28-interface customization means that the interface can be customized according to the needs of the user. C29-computer knowledge is the level of computer usage knowledge that the user needs to use the portal. C210-user friendly means that the portal has an easy-to-understand and easy-to-use interface and a pleasant look, as well as a good user experience. C211-learning objectives is the ability of users to achieve the intended learning goal with the course and training. C212-usability implies that the portal does not have overlooked design-related errors or omissions. C213-easy data access means that educators can share additional data and documents with the course content, and students can easily access the content provided. C214-color match means that the portal design includes harmonious and well-combined colors. C215-ergonomic denotes that the portal is designed according to ergonomic criteria.

The C3-function criterion is based on 9 sub-criteria. These are C31-evaluation architecture, C32-user control, C33-search, C34-learning history, C35-progress control, C36-application architecture, C37-counselor, C38-discussion, and C39-additive content. C31-evaluation architecture means that a new evaluation process can be designed for training. C32-user control is the student's ability to organize learning activities. C33-search implies the ability to search for training/content on the portal with different parameters. C34-learning history means that the student can see completed training and share system certificates. C35-progress control denotes that students can see their progress in the training they have received. C36-application architecture is the student/trainer's ability to define a predecessor or successor relationship between trainings specifically for the student. C37-counselor means that the student can access the advisor online 24/7. C38-discussion indicates that technical tools allow students to open a new topic within the portal and share their ideas. C39-additive content shows that trainers can add teaching materials (media, pictures, documents) to the portal.

The C4-security criterion is based on 4 sub-criteria: C41-logging, C42-authorization based access, C43-password and C44-assessment & evaluation. C41-logging is the registration of user access information. C42-authorization based access means that users have different menu groups and data authorizations according to the authority they hold. C43-password shows that the portal has strong password management. C44-assessment & evaluation refers to ensuring assessment & evaluation (exam, test) security.

The C5-content criterion is based on 18 sub-criteria. These are C51-material, C52-content, C53-presentation, C54-engaging content, C55-practice/test, C56-interactive mode, C57-functional content, C58-current content, C59-right content, C510-applications, C511-guide material, C512-ease of use, C513-perspective, C514-instructional design, C515-pedagogical content, C516-transnational curriculum, C517-approved curriculum, and C518-material quantity. C51-material means that the portal has active and lively multimedia training material. C52-content means controlling the information contained on the portal and assuring its quality through standards. C53-presentation refers to supervising the presentation of information and meeting certain standards. C54-engaging content means attracting the student's attention with rich multimedia content and making a positive contribution to learning. C55-practice/test indicates that the portal has good practice and test material. C56-interactive mode denotes that the portal has a learning-based interactive mode. C57-functional content means that the same educational content can be used by different educational programs. C58-current content refers to keeping the training material up to date. C59-right content means checking the accuracy of the training material. C510-applications refers to including practical educational applications. C511-guide material indicates that there are guide materials on the platform. C512-ease of use indicates that the learning material is fit for use. C513-perspective means that learning activities have a systematic perspective. C514-instructional design shows that instructional management has a particular methodology and design. C515-pedagogical content means that pedagogical factors are taken into account in the teaching method. C516-transnational curriculum means including transnational curriculum topics. C517-approved curriculum means that the platform has an assessment tool and curriculum approved by the country's education authority. C518-material quantity means that the platform has an acceptable amount of content.

The C6-collaboration & communication criterion is based on 11 sub-criteria. These sub-criteria are C61-information sharing, C62-interoperability, C63-discussion, C64-announcement, C65-dialogue, C66-forum, C67-attendance, C68-collaboration, C69-mail/messages, C610-conversation, and C611-question sharing. C61-information sharing is the platform's provision of mutual communication opportunities. C62-interoperability is the possibility of working together through remote access. C63-discussion is the ability to open a discussion topic and share ideas within the platform. C64-announcement is the ability of trainers and managers to provide one-sided communication. C65-dialogue is the opportunity to have a conversation with the trainer during the training. C66-forum is the ability of trainees to express their views on a topic that concerns the learning community. C67-attendance is the ability to list meeting participants. C68-collaboration is the opportunity for students to work together on a particular learning topic. C69-mail/messages is the possibility of receiving personal mail and messages within the platform. C610-conversation is the possibility of text chat during training. C611-question sharing is the opportunity to share questions between trainers and trainees.

The C7-quality criterion is based on 9 sub-criteria. These are C71-integrity, C72-education quality, C73-instructor quality, C74-satisfaction measurement, C75-material and content, C76-reliability, C77-download resources, C78-documentation, and C79-standard. C71-integrity means training materials that complement each other. C72-education quality means that the platform has a quality internal control system. C73-instructor quality means that trainers undergo standardization training at certain intervals. C74-satisfaction measurement means measuring student/learning satisfaction. C75-material and content is the control of content quality. C76-reliability means that the number of unplanned failures of the system is at an acceptable level. C77-download resources means that the number of downloadable documents and their streaming speed are sufficient. C78-documentation is the system usage documentation of the platform. C79-standard means that the platform complies with sufficient national and international education standards.

The C8-learning criterion is based on 8 sub-criteria. These sub-criteria are C81-organization, C82-access, C83-source, C84-threshing, C85-re-access, C86-process control, C87-feedback, and C88-encourage. C81-organization means that the course objects are organized. C82-access means that the training content is accessible at different times and places. C83-source means that the platform provides access to libraries related to the subject. C84-threshing means that the training method is enriched with online training materials. C85-re-access is the possibility of regaining access to training material already studied/used. C86-process control is the portal's ability to track student development. C87-feedback is a timely response and feedback from the learning counselor. C88-encourage means that students are encouraged by the trainer to discuss and give feedback.

The C9-assessment and evaluation criterion is based on 10 sub-criteria. These sub-criteria are C91-different exam mode, C92-result record, C93-rating, C94-progress follow-up, C95-experience observation, C96-exam level, C97-progress control, C98-recording, C99-process control, and C910-transfer. C91-different exam mode means that measurement can be made with different techniques and methods. C92-result record is the regulation of recording the assessment score. C93-rating is the ability to rate trainees. C94-progress follow-up is the student's ability to follow his or her own development. C95-experience observation means that the trainer can observe the learners' experiences. C96-exam level refers to exams that assess the learning level. C97-progress control is the trainee's ability to control learning progress. C98-recording is the portal's ability to record learning performance. C99-process control is the learner's ability to control the learning process. C910-transfer is the transfer of knowledge gained in the process between the student and the teacher.

The C10-technical specifications criterion is explained through 11 sub-criteria. These are C101-payment, C102-style, C103-connection, C104-update, C105-hierarchical structure, C106-language support, C107-adaptation, C108-access speed, C109-compatibility, C1010-mobile support, and C1011-verification. C101-payment means that fees can be paid through alternative methods. C102-style means that the interface style is selectable. C103-connection means that the platform's internet connection is stable. C104-update means that the platform is maintained and updated regularly. C105-hierarchical structure means that the sections and subsections of the tutorials and pages are clearly defined. C106-language support means that the platform has multi-language support. C107-adaptation means that feedback and adaptation are possible during training. C108-access speed means that the speed of accessing the content is sufficient. C109-compatibility is the matching of metadata to content. C1010-mobile support means that the platform has mobile system support. C1011-verification means that the platform has data transmission verification.

The C11-support criterion is based on 7 sub-criteria. These sub-criteria are C111-budget support, C112-profile, C113-institutionalism, C114-reward system, C115-planning, C116-admin, and C117-equipment support. C111-budget support is the provision of the necessary financial support by senior management or access to financial resources. C112-profile is personal user profile and account management. C113-institutionalism means that the unit organizing the training is institutional. C114-reward system is a reward mechanism (a certificate, etc.) for learning activities. C115-planning is the alignment of platform plans and activities. C116-admin means that the system has a manager responsible for the maintenance and operation of the application. C117-equipment support means that the information technology equipment is sufficient.

3.2 Interval type-2 fuzzy sets

In this section, interval type-2 fuzzy sets are briefly explained and some definitions are given (Çalık and Paksoy, 2017; Kahraman et al., 2014).

Definition 1

A type-2 fuzzy set \(\stackrel{\sim }{\tilde{A }}\) in the universe of discourse X can be represented by a type-2 membership function \({\mu }_{\stackrel{\sim }{\tilde{A }}}\) shown as follows:

$$\stackrel{\sim }{\tilde{A }}=\left\{\left(\left(x,u\right),{\mu }_{\stackrel{\sim }{\tilde{A }}}\left(x,u\right)\right)\left|\forall \right.x\in X,\forall u\in {J}_{x}\subseteq \left[\mathrm{0,1}\right],0\le {\mu }_{\stackrel{\sim }{\tilde{A }}}(x,u)\le 1\right\},$$

where Jx denotes an interval [0,1]. The type-2 fuzzy set \(\stackrel{\sim }{\tilde{A }}\) may also be expressed as follows:

$$\stackrel{\sim }{\tilde{A }}={\int }_{x\in X} {\int }_{{u\in J}_{x}}{\mu }_{\stackrel{\sim }{\tilde{A }}}(x,u)/(x,u),$$

where \({J}_{x}\in [\mathrm{0,1}]\) and \(\int \int\) represents the combination of all reasonable (acceptable) values of x and u.

Definition 2

If all \({\mu }_{\stackrel{\sim }{\tilde{A }}}\left(x,u\right)=1\), the set \(\stackrel{\sim }{\tilde{A }}\) is called an interval type-2 fuzzy set. An interval type-2 fuzzy set is a specific instance of a type-2 fuzzy set and may be described as:

$$\stackrel{\sim }{\tilde{A }}={\int }_{x\in X} {\int }_{{u\in J}_{x}}1/(x,u)$$

where \({J}_{x}\in [\mathrm{0,1}]\).

Definition 3

The lower and upper membership functions of an interval type-2 fuzzy set are type-1 membership functions. Chen and Lee (2010) presented a new approach for employing interval type-2 fuzzy sets in solving fuzzy multi-criteria group decision-making problems. In this technique, the type-2 fuzzy sets are described by the reference points and the heights of the upper and lower membership functions. The following expression represents a trapezoidal interval type-2 fuzzy set.

$${\stackrel{\sim }{\tilde{A }}}_{i}=\left({\tilde{A }}_{i}^{U};{\tilde{A }}_{i}^{L}\right)=(({a}_{i1}^{U},{a}_{i2}^{U},{a}_{i3}^{U},{a}_{i4}^{U};{H}_{1}\left({\tilde{A }}_{i}^{U}\right),{H}_{2}\left({\tilde{A }}_{i}^{U}\right)),({a}_{i1}^{L},{a}_{i2}^{L},{a}_{i3}^{L},{a}_{i4}^{L};{H}_{1}\left({\tilde{A }}_{i}^{L}\right),{H}_{2}\left({\tilde{A }}_{i}^{L}\right)))$$

where \({\tilde{A }}_{i}^{U}\) and \({\tilde{A }}_{i}^{L}\) are type-1 fuzzy sets, and \({a}_{i1}^{U},{a}_{i2}^{U},{a}_{i3}^{U},{a}_{i4}^{U}\), \({a}_{i1}^{L},{a}_{i2}^{L},{a}_{i3}^{L},{a}_{i4}^{L}\) are the reference points of the interval type-2 fuzzy set \({\stackrel{\sim }{\tilde{A }}}_{i}\). \({H}_{j}({\tilde{A }}_{i}^{U})\) denotes the membership value of the element \({a}_{i\left(j+1\right)}^{U}\) in the upper trapezoidal membership function \({\tilde{A }}_{i}^{U}\), \(1\le j\le 2\); \({H}_{j}({\tilde{A }}_{i}^{L})\) denotes the membership value of the element \({a}_{i(j+1)}^{L}\) in the lower trapezoidal membership function \({\tilde{A }}_{i}^{L}\), \(1\le j\le 2\). \({H}_{1}\left({\tilde{A }}_{i}^{U}\right)\in [\mathrm{0,1}]\), \({H}_{2}\left({\tilde{A }}_{i}^{U}\right)\in [\mathrm{0,1}]\), \({H}_{1}\left({\tilde{A }}_{i}^{L}\right)\in [\mathrm{0,1}]\), \({H}_{2}\left({\tilde{A }}_{i}^{L}\right)\in [\mathrm{0,1}]\) and \(1\le i\le n\).
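For readers who prefer a computational view, a trapezoidal interval type-2 fuzzy number as in Definition 3 can be held in a small data structure. The sketch below is illustrative only; the class and field names (IT2TrapFN, upper, lower, h_upper, h_lower) are our own choices and are not part of the cited methods.

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class IT2TrapFN:
    """Trapezoidal interval type-2 fuzzy number (see Definition 3).

    upper / lower hold the reference points (a1, a2, a3, a4) of the upper and
    lower trapezoidal membership functions; h_upper / h_lower hold the heights
    (H1, H2) of those membership functions.
    """
    upper: Tuple[float, float, float, float]
    lower: Tuple[float, float, float, float]
    h_upper: Tuple[float, float] = (1.0, 1.0)
    h_lower: Tuple[float, float] = (1.0, 1.0)


# Example value, written in the paper's notation as
# ((5, 6, 8, 9; 1, 1), (5.2, 6.2, 7.8, 8.8; 0.8, 0.8)); the variable name is a
# placeholder, the actual linguistic labels are listed in Table 2.
example_term = IT2TrapFN(upper=(5, 6, 8, 9),
                         lower=(5.2, 6.2, 7.8, 8.8),
                         h_lower=(0.8, 0.8))
```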

Definition 4

The following equations show the addition operator for trapezoidal interval type-2 fuzzy sets:

$$\begin{array}{c}{\stackrel{\sim }{\tilde{A }}}_{1}=\left({\tilde{A }}_{1}^{U};{\tilde{A }}_{1}^{L}\right)=(({a}_{11}^{U},{a}_{12}^{U},{a}_{13}^{U},{a}_{14}^{U};{H}_{1}\left({\tilde{A }}_{1}^{U}\right),{H}_{2}\left({\tilde{A }}_{1}^{U}\right)),({a}_{11}^{L},{a}_{12}^{L},{a}_{13}^{L},{a}_{14}^{L};{H}_{1}\left({\tilde{A }}_{1}^{L}\right),{H}_{2}\left({\tilde{A }}_{1}^{L}\right)))\\ {\stackrel{\sim }{\tilde{A }}}_{2}=\left({\tilde{A }}_{2}^{U};{\tilde{A }}_{2}^{L}\right)=(({a}_{21}^{U},{a}_{22}^{U},{a}_{23}^{U},{a}_{24}^{U};{H}_{1}\left({\tilde{A }}_{2}^{U}\right),{H}_{2}\left({\tilde{A }}_{2}^{U}\right)),({a}_{21}^{L},{a}_{22}^{L},{a}_{23}^{L},{a}_{24}^{L};{H}_{1}\left({\tilde{A }}_{2}^{L}\right),{H}_{2}\left({\tilde{A }}_{2}^{L}\right)))\end{array}$$
$$\begin{array}{l}{\stackrel{\sim }{\tilde{A }}}_{1}\oplus {\stackrel{\sim }{\tilde{A }}}_{2}=\left({\tilde{A }}_{1}^{U},{\tilde{A }}_{1}^{L}\right)\oplus \left({\tilde{A }}_{2}^{U},{\tilde{A }}_{2}^{L}\right)=\\ (({a}_{11}^{U}+{a}_{21}^{U},{a}_{12}^{U}+{a}_{22}^{U},{a}_{13}^{U}+{a}_{23}^{U},{a}_{14}^{U}+{a}_{24}^{U};\mathrm{min}({H}_{1}({\tilde{A }}_{1}^{U}),{H}_{1}({\tilde{A }}_{2}^{U})),\mathrm{min}({H}_{2}({\tilde{A }}_{1}^{U}),{H}_{2}({\tilde{A }}_{2}^{U}))),\\ ({a}_{11}^{L}+{a}_{21}^{L},{a}_{12}^{L}+{a}_{22}^{L},{a}_{13}^{L}+{a}_{23}^{L},{a}_{14}^{L}+{a}_{24}^{L};\mathrm{min}({H}_{1}({\tilde{A }}_{1}^{L}),{H}_{1}({\tilde{A }}_{2}^{L})),\mathrm{min}({H}_{2}({\tilde{A }}_{1}^{L}),{H}_{2}({\tilde{A }}_{2}^{L}))))\end{array}$$

Definition 5

The following is the procedure for subtracting the trapezoidal interval type-2 fuzzy sets:

$$\begin{array}{l}{\stackrel{\sim }{\tilde{A }}}_{1}\ominus {\stackrel{\sim }{\tilde{A }}}_{2}=(({a}_{11}^{U}-{a}_{24}^{U},{a}_{12}^{U}-{a}_{23}^{U},{a}_{13}^{U}-{a}_{22}^{U},{a}_{14}^{U}-{a}_{21}^{U};\mathrm{min}({H}_{1}\left({\tilde{A }}_{1}^{U}\right),{H}_{1}({\tilde{A }}_{2}^{U})),\mathrm{min}({H}_{2}\left({\tilde{A }}_{1}^{U}\right),{H}_{2}\left({\tilde{A }}_{2}^{U}\right))),\\ ({a}_{11}^{L}-{a}_{24}^{L},{a}_{12}^{L}-{a}_{23}^{L},{a}_{13}^{L}-{a}_{22}^{L},{a}_{14}^{L}-{a}_{21}^{L};\mathrm{min}({H}_{1}\left({\tilde{A }}_{1}^{L}\right),{H}_{1}({\tilde{A }}_{2}^{L})),\mathrm{min}({H}_{2}\left({\tilde{A }}_{1}^{L}\right),{H}_{2}\left({\tilde{A }}_{2}^{L}\right))))\end{array}$$

Definition 6

The multiplication of trapezoidal interval type-2 fuzzy sets is defined as follows:

$$\begin{array}{l}{\widetilde{\widetilde A}}_1 \otimes {\widetilde{\widetilde A}}_2=(\widetilde A_1^U,\widetilde A_1^L) \otimes (\widetilde A_2^U,\widetilde A_2^L)=\\((a_{11}^U\ast a_{21}^U,a_{12}^U\ast a_{22}^U,a_{13}^U\ast a_{23}^U,a_{14}^U\ast a_{24}^U;\min(H_1\left(\widetilde A_1^U\right),H_1(\widetilde A_2^U)),\min(H_2\left(\widetilde A_1^U\right),H_2\left(\widetilde A_2^U\right))),\\(a_{11}^L\ast a_{21}^L,a_{12}^L\ast a_{22}^L,a_{13}^L\ast a_{23}^L,a_{14}^L\ast a_{24}^L;\min(H_1\left(\widetilde A_1^L\right),H_1(\widetilde A_2^L)),\min(H_2\left(\widetilde A_1^L\right),H_2\left(\widetilde A_2^L\right))))\end{array}$$

Definition 7

The followings are the arithmetic operations between trapezoidal interval type-2 fuzzy sets and scalar k:

$$\begin{array}{l}{\stackrel{\sim }{\tilde{A }}}_{1}=\left({\tilde{A }}_{1}^{U},{\tilde{A }}_{1}^{L}\right)=(({a}_{11}^{U},{a}_{12}^{U},{a}_{13}^{U},{a}_{14}^{U};{H}_{1}\left({\tilde{A }}_{1}^{U}\right),{H}_{2}({\tilde{A }}_{1}^{U})),({a}_{11}^{L},{a}_{12}^{L},{a}_{13}^{L},{a}_{14}^{L};{H}_{1}\left({\tilde{A }}_{1}^{L}\right),{H}_{2}({\tilde{A }}_{1}^{L})))\\ k*{\stackrel{\sim }{\tilde{A }}}_{1}=((k*{a}_{11}^{U},k*{a}_{12}^{U},k*{a}_{13}^{U},k*{a}_{14}^{U};{H}_{1}\left({\tilde{A }}_{1}^{U}\right),{H}_{2}({\tilde{A }}_{1}^{U})),(k*{a}_{11}^{L},k*{a}_{12}^{L},k*{a}_{13}^{L},k*{a}_{14}^{L};{H}_{1}\left({\tilde{A }}_{1}^{L}\right),{H}_{2}({\tilde{A }}_{1}^{L})))\\ \frac{{\stackrel{\sim }{\tilde{A }}}_{1}}{k}=((\frac{1}{k}*{a}_{11}^{U},\frac{1}{k}*{a}_{12}^{U},\frac{1}{k}*{a}_{13}^{U},\frac{1}{k}*{a}_{14}^{U};{H}_{1}\left({\tilde{A }}_{1}^{U}\right),{H}_{2}({\tilde{A }}_{1}^{U})),(\frac{1}{k}*{a}_{11}^{L},\frac{1}{k}*{a}_{12}^{L},\frac{1}{k}*{a}_{13}^{L},\frac{1}{k}*{a}_{14}^{L};{H}_{1}\left({\tilde{A }}_{1}^{L}\right),{H}_{2}({\tilde{A }}_{1}^{L})))\end{array}$$

where k>0.
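The operations in Definitions 4–7 reduce to component-wise arithmetic on the reference points, with the heights combined by the minimum. Below is a minimal sketch, reusing the hypothetical IT2TrapFN structure introduced after Definition 3 (the function names are ours, not from the cited papers):

```python
def _min_h(h1, h2):
    # Heights are combined with the minimum, as in Definitions 4-6.
    return (min(h1[0], h2[0]), min(h1[1], h2[1]))


def it2_add(a: IT2TrapFN, b: IT2TrapFN) -> IT2TrapFN:
    """Definition 4: component-wise addition of the reference points."""
    return IT2TrapFN(tuple(x + y for x, y in zip(a.upper, b.upper)),
                     tuple(x + y for x, y in zip(a.lower, b.lower)),
                     _min_h(a.h_upper, b.h_upper), _min_h(a.h_lower, b.h_lower))


def it2_sub(a: IT2TrapFN, b: IT2TrapFN) -> IT2TrapFN:
    """Definition 5: a1 - b4, a2 - b3, a3 - b2, a4 - b1."""
    return IT2TrapFN(tuple(x - y for x, y in zip(a.upper, reversed(b.upper))),
                     tuple(x - y for x, y in zip(a.lower, reversed(b.lower))),
                     _min_h(a.h_upper, b.h_upper), _min_h(a.h_lower, b.h_lower))


def it2_mul(a: IT2TrapFN, b: IT2TrapFN) -> IT2TrapFN:
    """Definition 6: component-wise multiplication of the reference points."""
    return IT2TrapFN(tuple(x * y for x, y in zip(a.upper, b.upper)),
                     tuple(x * y for x, y in zip(a.lower, b.lower)),
                     _min_h(a.h_upper, b.h_upper), _min_h(a.h_lower, b.h_lower))


def it2_scale(a: IT2TrapFN, k: float) -> IT2TrapFN:
    """Definition 7: multiplication by a scalar k > 0; division is it2_scale(a, 1/k)."""
    return IT2TrapFN(tuple(k * x for x in a.upper),
                     tuple(k * x for x in a.lower),
                     a.h_upper, a.h_lower)
```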

3.3 Interval Type-2 Fuzzy AHP

We employed interval type-2 fuzzy AHP to evaluate our criteria. Kahraman et al. (2014) extended Buckley's (1985) fuzzy AHP approach, which is based on type-1 fuzzy sets, to interval type-2 fuzzy sets (Buckley, 1985; Çalık & Paksoy, 2017; Kahraman et al., 2014). The AHP technique was developed by Saaty (Saaty, 1980), and it has been used to solve or support decision-making problems in various fields since the day it was developed, including risk assessment (Adem et al., 2018), green ergonomics (Adem et al., 2021), occupational health and safety (Adem et al., 2020), machine selection (Özceylan et al., 2016), site selection (Paul et al., 2021), vendor selection (Gernowo & Surarso, 2021), data intelligence implementation (Merhi, 2021), and green energy (Asadi and Pourhossein, 2021).

The evaluation process for e-learning platform factors with interval type-2 fuzzy sets is presented in Fig. 2. The steps of this method are outlined as follows (Kahraman et al., 2014; Çalık & Paksoy, 2017):

  • Step 1: Define the criteria, sub-criteria and alternatives of the decision-making problem. (Decision-making problems may contain all or some of the elements listed here.)

  • Step 2: Table 2 lists the linguistic variables and the associated interval type-2 fuzzy scales. Fuzzy pairwise comparison matrices are generated from the linguistic variables, as shown in Eq. (1).

    $$\stackrel{\sim }{\tilde{A }}=\left[\begin{array}{cccc}1& {\stackrel{\sim }{\tilde{a }}}_{12}& \cdots & {\stackrel{\sim }{\tilde{a }}}_{1n}\\ {\stackrel{\sim }{\tilde{a }}}_{21}& 1& \cdots & {\stackrel{\sim }{\tilde{a }}}_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ {\stackrel{\sim }{\tilde{a }}}_{n1}& {\stackrel{\sim }{\tilde{a }}}_{n2}& \cdots & 1\end{array}\right]=\left[\begin{array}{cccc}1& {\stackrel{\sim }{\tilde{a }}}_{12}& \cdots & {\stackrel{\sim }{\tilde{a }}}_{1n}\\ 1/{\stackrel{\sim }{\tilde{a }}}_{12}& 1& \cdots & {\stackrel{\sim }{\tilde{a }}}_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ 1/{\stackrel{\sim }{\tilde{a }}}_{1n}& 1/{\stackrel{\sim }{\tilde{a }}}_{2n}& \cdots & 1\end{array}\right]$$
    (1)

    where \(1/\stackrel{\sim }{\tilde{a }}=((\frac{1}{{a}_{4}^{U}},\frac{1}{{a}_{3}^{U}},\frac{1}{{a}_{2}^{U}},\frac{1}{{a}_{1}^{U}};{H}_{1}\left({\tilde{a }}^{U}\right),{H}_{2}\left({\tilde{a }}^{U}\right)),(\frac{1}{{a}_{4}^{L}},\frac{1}{{a}_{3}^{L}},\frac{1}{{a}_{2}^{L}},\frac{1}{{a}_{1}^{L}};{H}_{1}\left({\tilde{a }}^{L}\right),{H}_{2}\left({\tilde{a }}^{L}\right)))\).

  • Step 3: If there is more than one expert in the decision-making process, the experts' judgements need to be aggregated with the help of the geometric mean. The calculation details of the geometric mean are shown as follows:

    $${\widetilde{\widetilde r}}_i={\lbrack {\widetilde{\widetilde a}}_{i1} \otimes {\widetilde{\widetilde a}}_{i2} \otimes \dots \otimes {\widetilde{\widetilde a}}_{in}\rbrack}^{1/n}$$
    (2)

    where,

    $$\sqrt[n]{{\stackrel{\sim }{\tilde{a }}}_{ij}}=((\sqrt[n]{{a}_{ij1}^{U}},\sqrt[n]{{a}_{ij2}^{U}},\sqrt[n]{{a}_{ij3}^{U}},\sqrt[n]{{a}_{ij4}^{U}};{H}_{1}^{U}\left({a}_{ij}\right),{H}_{2}^{U}\left({a}_{ij}\right)),(\sqrt[n]{{a}_{ij1}^{L}},\sqrt[n]{{a}_{ij2}^{L}},\sqrt[n]{{a}_{ij3}^{L}},\sqrt[n]{{a}_{ij4}^{L}};{H}_{1}^{L}\left({a}_{ij}\right),{H}_{2}^{L}({a}_{ij})))$$
  • Step 4: The fuzzy weights of each criterion are calculated. First of all, \({\stackrel{\sim }{\tilde{r }}}_{i}\), the geometric mean of each row is computed. After that, the fuzzy weight of criterion (\({\stackrel{\sim }{\tilde{p }}}_{i}\)) is calculated as follows:

    $${\widetilde{\widetilde p}}_i={\widetilde{\widetilde r}}_i \otimes {\left[{\widetilde{\widetilde r}}_1 \oplus {\widetilde{\widetilde r}}_2 \oplus \dots \oplus {\widetilde{\widetilde r}}_n\right]}^{-1}$$
    (3)

    where

    $$\begin{array}{l}\frac{{\stackrel{\sim }{\tilde{a }}}_{ij}}{{\stackrel{\sim }{\tilde{b }}}_{ij}}=(\frac{{a}_{1}^{U}}{{b}_{4}^{U}},\frac{{a}_{2}^{U}}{{b}_{3}^{U}},\frac{{a}_{3}^{U}}{{b}_{2}^{U}},\frac{{a}_{4}^{U}}{{b}_{1}^{U}};\mathrm{min}({H}_{1}^{U}\left(a\right),{H}_{1}^{U}\left(b\right)),\mathrm{min}({H}_{2}^{U}\left(a\right),{H}_{2}^{U}\left(b\right))),\\ (\frac{{a}_{1}^{L}}{{b}_{4}^{L}},\frac{{a}_{2}^{L}}{{b}_{3}^{L}},\frac{{a}_{3}^{L}}{{b}_{2}^{L}},\frac{{a}_{4}^{L}}{{b}_{1}^{L}};\mathrm{min}({H}_{1}^{L}\left(a\right),{H}_{1}^{L}\left(b\right)),\mathrm{min}({H}_{2}^{L}\left(a\right),{H}_{2}^{L}\left(b\right)))\end{array}$$
  • Step 5: The computed weights must be defuzzified because they are in the form of interval type-2 fuzzy sets. The defuzzification procedure is based on the following formula (a small computational sketch of Steps 3–5 is given after Eq. (4)):

    $$DTraT=\frac{[\frac{\left({u}_{U}-{l}_{U}\right)+\left({\beta }_{U}.{m}_{1U}-{l}_{U}\right)+\left({\alpha }_{U}.{m}_{2U}-{l}_{U}\right)}{4}+{l}_{U}]+\frac{\left({u}_{L}-{l}_{L}\right)+\left({\beta }_{L}.{m}_{1L}-{l}_{L}\right)+\left({\alpha }_{L}.{m}_{2L}-{l}_{L}\right)}{4}+{l}_{L}}{2}$$
    (4)
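A compact sketch of Steps 3–5 is given below, reusing the hypothetical IT2TrapFN structure and arithmetic helpers from Section 3.2. The function names are ours, and the handling of the heights follows our reading of Eqs. (2)–(4); the sketch is an illustration rather than a reference implementation.

```python
def it2_row_geometric_mean(row):
    """Step 3 / Eq. (2): element-wise geometric mean of one row of the matrix."""
    n = len(row)
    prod = row[0]
    for a in row[1:]:
        prod = it2_mul(prod, a)
    return IT2TrapFN(tuple(x ** (1 / n) for x in prod.upper),
                     tuple(x ** (1 / n) for x in prod.lower),
                     prod.h_upper, prod.h_lower)


def it2_reciprocal(a):
    # Inverse used in Step 4: reverse the reference points and take reciprocals.
    return IT2TrapFN(tuple(1 / x for x in reversed(a.upper)),
                     tuple(1 / x for x in reversed(a.lower)),
                     a.h_upper, a.h_lower)


def it2_fuzzy_weights(rows):
    """Step 4 / Eq. (3): p_i = r_i (x) [r_1 (+) ... (+) r_n]^(-1)."""
    r = [it2_row_geometric_mean(row) for row in rows]
    total = r[0]
    for ri in r[1:]:
        total = it2_add(total, ri)
    inv_total = it2_reciprocal(total)
    return [it2_mul(ri, inv_total) for ri in r]


def dtrat(a):
    """Step 5 / Eq. (4): DTraT defuzzification of a trapezoidal interval type-2 fuzzy set."""
    def centre(points, heights):
        l, m1, m2, u = points
        beta, alpha = heights          # (H1, H2) read as (beta, alpha) in Eq. (4)
        return ((u - l) + (beta * m1 - l) + (alpha * m2 - l)) / 4 + l
    return (centre(a.upper, a.h_upper) + centre(a.lower, a.h_lower)) / 2


# Typical use: crisp, normalized weights of the main criteria.
# fuzzy_w = it2_fuzzy_weights(pairwise_matrix_rows)   # rows of IT2TrapFN values
# crisp = [dtrat(w) for w in fuzzy_w]
# normalized = [c / sum(crisp) for c in crisp]
```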
Fig. 2

E-learning platforms factors evaluation process with interval type-2 fuzzy sets

Table 2 Definition and interval type 2 fuzzy scale of the linguistic variables

4 Results and discussion

In this part of the paper, the criteria that affect the success of e-learning systems were prioritized using the interval-valued type-2 fuzzy AHP technique. The 11 criteria and 106 sub-criteria were determined through a deep literature survey and are shown in Table 1. The joint fuzzy evaluations of a team of experts and the calculation details of the 11 criteria weights are shown in Tables 3, 4, 5 and 6.

Table 3 The linguistic evaluations of the main criteria
Table 4 Geometric means of main criteria
Table 5 Fuzzy weights of criteria
Table 6 Defuzzied and Normalized weights of main criteria

Equation (2) was used to compute the geometric mean of each criterion. As an illustration, the geometric mean of the first criterion is calculated as follows:

$$\begin{array}{l}{\widetilde{\widetilde r}}_1={\lbrack{\widetilde{\widetilde a}}_{11} \otimes {\widetilde{\widetilde a}}_{12} \otimes {\widetilde{\widetilde a}}_{13} \otimes {\widetilde{\widetilde a}}_{14} \otimes {\widetilde{\widetilde a}}_{15} \otimes {\widetilde{\widetilde a}}_{16} \otimes {\widetilde{\widetilde a}}_{17} \otimes {\widetilde{\widetilde a}}_{18} \otimes {\widetilde{\widetilde a}}_{19} \otimes {\widetilde{\widetilde a}}_{110} \otimes {\widetilde{\widetilde a}}_{111}\rbrack}^{1/11}\\=\lbrack(1,1,1,1;1,1)(1,1,1,1;1,1) \otimes (0.20,0.25,0.50,1;1,1)(0.20,0.26,0.45,0.83;0.8,0.8)\\ \otimes (3,4,6,7;1,1)(3.2,4.2,5.8,6.8;0.8,0.8) \otimes (1,2,4,5;1,1)(1.2,2.2,3.8,4.8;0.8,0.8)\\ \otimes (1,2,4,5;1,1)(1.2,2.2,3.8,4.8;0.8,0.8) \otimes (5,6,8,9;1,1)(5.2,6.2,7.8,8.8;0.8,0.8)\\ \otimes (1,2,4,5;1,1)(1.2,2.2,3.8,4.8;0.8,0.8) \otimes (0.20,0.25,0.50,1;1,1)(0.20,0.26,0.45,0.83;0.8,0.8)\\ \otimes (0.20,0.25,0.50,1;1,1)(0.20,0.26,0.45,0.83;0.8,0.8)\\ \otimes (5,6,8,9;1,1)(5.2,6.2,7.8,8.8;0.8,0.8) \otimes (5,6,8,9;1,1)(5.2,6.2,7.8,8.8;0.8,0.8)\rbrack^{1/11}\\={\lbrack(3.00,108.0,24576.0,637875.0;1,1)(6.22,187.33,13762.59,293031.91;0.8,0.8)\rbrack}^{1/11}\\=\left(1.11,1.53,2.51,3.37;1,1\right)(1.18,1.61,2.38,3.14;0.8,0.8)\end{array}$$
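This calculation can be checked numerically with a few lines of code. The sketch below reproduces only the upper reference points of \({\widetilde{\widetilde r}}_{1}\) (the lower part is analogous), assuming the eleven comparison values read from the worked example above:

```python
from math import prod

# Upper reference points of the eleven pairwise comparisons in the first row,
# as read from the worked example above.
row1_upper = [
    (1, 1, 1, 1), (0.20, 0.25, 0.50, 1), (3, 4, 6, 7), (1, 2, 4, 5),
    (1, 2, 4, 5), (5, 6, 8, 9), (1, 2, 4, 5), (0.20, 0.25, 0.50, 1),
    (0.20, 0.25, 0.50, 1), (5, 6, 8, 9), (5, 6, 8, 9),
]

# Component-wise product of the eleven quadruples, then the 1/11 power.
r1_upper = tuple(round(prod(col) ** (1 / 11), 2) for col in zip(*row1_upper))
print(r1_upper)   # (1.11, 1.53, 2.51, 3.37)
```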

After computing the geometric means of all the main criteria, the fuzzy weights of each criterion can be computed. Table 4 shows the geometric mean of each criterion.

Equation (3) was employed to compute the fuzzy weights of the criteria. As an illustration, the fuzzy weight of the first criterion is calculated as follows:

$$\begin{array}{l}{\widetilde{\widetilde w}}_1={\widetilde{\widetilde r}}_1 \otimes {\lbrack{\widetilde{\widetilde r}}_1 \oplus {\widetilde{\widetilde r}}_2 \oplus \dots \oplus {\widetilde{\widetilde r}}_{11}\rbrack}^{-1}\\=\left(1.11,1.53,2.51,3.37;1,1\right)\left(1.18,1.61,2.38,3.14;0.8,0.8\right) \otimes \\ {\left[\left(8.229,11.39,18.34,24.13;1,1\right)\left(8.82,11.98,17.48,22.61;0.8,0.8\right)\right]}^{-1}\\=\left(1.11,1.53,2.51,3.37;1,1\right)\left(1.18,1.61,2.38,3.14;0.8,0.8\right) \otimes \\ \left(0.04,0.05,0.09,0.12;1,1\right)\left(0.04,0.06,0.08,0.11;0.8,0.8\right)\\=\left(0.05,0.08,0.22,0.41;1,1\right)\left(0.05,0.09,0.20,0.36;0.8,0.8\right)\end{array}$$
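The weight calculation can be checked in the same way; the sketch below reproduces the upper reference points of \({\widetilde{\widetilde w}}_{1}\) from \({\widetilde{\widetilde r}}_{1}\) and the column sum given above:

```python
r1_upper = (1.11, 1.53, 2.51, 3.37)        # geometric mean of the first row (upper part)
sum_upper = (8.229, 11.39, 18.34, 24.13)   # sum of the eleven geometric means (upper part)

# Inverse of the sum: reverse the reference points and take reciprocals.
inv_upper = tuple(1 / x for x in reversed(sum_upper))

w1_upper = tuple(round(a * b, 2) for a, b in zip(r1_upper, inv_upper))
print(w1_upper)   # (0.05, 0.08, 0.22, 0.41)
```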

Table 5 shows the fuzzy weights of the criteria. The computed weights must be defuzzified because they are in the form of interval type-2 fuzzy sets.

By utilizing Eq. (4), the computed fuzzy weights are defuzzified. Table 6 shows the defuzzified and normalized weights of the main criteria.

The same steps were repeated for all sub-criteria, and the local weights of the sub-criteria were calculated. Table 7 shows the global weights of the sub-criteria, obtained by multiplying the weights of the main criteria by the local weights of the sub-criteria.
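As a simple illustration of how the global weights in Table 7 are formed, a sub-criterion's global weight is its local weight multiplied by the weight of its parent criterion. The numbers below are placeholders, not values from the study:

```python
main_weight = 0.214    # weight of the parent criterion (C9 in Table 8)
local_weight = 0.15    # hypothetical local weight of one of its sub-criteria

global_weight = main_weight * local_weight
print(round(global_weight, 4))   # 0.0321
```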

Table 7 Weights and global weights of sub-criteria

According to the weight values, the first three main criteria are C9-assessment and evaluation, C1-adaptation and C4-security, with weight values of 0.214, 0.138 and 0.129, respectively. The ranked weights of the 11 criteria are presented in Table 8. According to the global weight values, the first ten sub-criteria are C44-assessment & evaluation security, C910-transfer, C11-compatibility, C91-different exam mode, C14-adaptability, C92-result record, C84-threshing, C41-logging, C12-extendibility and C13-customization. The ranked global weights of the 106 sub-criteria are presented in Table 9.

Table 8 Ranked weights of criteria
Table 9 The ranked global weights of sub-criteria

The most important criterion, C9-assessment and evaluation, indicates that because the level of knowledge and ability varies across students, their achievements in e-learning should be measured with sufficiently accurate measurement techniques at different stages of the learning progress. The second most important criterion, C1-adaptation, implies that the harmony between user needs, architecture and framework should be considered simultaneously. The third most important criterion, C4-security, indicates that user information, authorization-based access and the security of the measurement and evaluation system should be ensured.

The most important sub-criterion according to global weight, C44-assessment & evaluation, emphasizes performing complete and inclusive assessment & evaluation processes and taking anti-cheating precautions. The second most important sub-criterion, C910-transfer, means transferring knowledge gained in the process between students and teachers. The third, C11-compatibility, reflects the correlation between user needs and system features in terms of learning material and framework. The fourth, C91-different exam mode, stands for measuring the level of learning with different techniques and methods. The fifth, C14-adaptability, means that platforms should be customizable according to user needs.

5 Conclusion

E-learning is a teaching model that supports access to information via digital platforms or media, and distance education comprises learning and knowledge management activities carried out through internet technologies. Today, e-learning platforms attract more attention due to the COVID-19 pandemic, during which individuals have had the opportunity to experience the advantages of e-learning platforms for themselves. The need for e-learning platforms among users and educational institutions has increased, and different e-learning systems have been developed. The main problem in this area is that the assessment process involves uncertainty and qualitative judgements. Another problem is the need to examine the factors to be used in the evaluation of changing and developing platforms.

In this study, the criteria to be used to evaluate e-learning platforms were determined through a comprehensive literature review, and the interval type-2 fuzzy analytic hierarchy process was used to prioritize them. 11 criteria and 106 sub-criteria were determined to evaluate e-learning platforms. According to the weight values, the first three main criteria are C9-assessment and evaluation, C1-adaptation and C4-security, with weight values of 0.214, 0.138 and 0.129, respectively. According to the global weight values, the first ten sub-criteria are C44-assessment & evaluation security, C910-transfer, C11-compatibility, C91-different exam mode, C14-adaptability, C92-result record, C84-threshing, C41-logging, C12-extendibility and C13-customization.

This study provides an acceptable rationale for the evaluation of e-learning platforms. The results can be used in real-world performance evaluations of various e-learning platforms, whose effectiveness can be compared with respect to the factor weights obtained here.