1 Introduction

Case study research has become a widely used research method in Information Systems (IS) research (Palvia et al. 2015) that allows for a comprehensive analysis of a contemporary phenomenon in its real-world context (Dubé and Paré 2003). This research method is particularly useful due to its flexibility in covering complex phenomena with multiple contextual variables, different types of evidence, and a wide range of analytical options (Voss et al. 2002; Yin 2018). Although case study research is well suited to studying contemporary phenomena, some researchers feel that it lacks rigour, particularly in terms of the validity of findings (Lee and Hubona 2009). In response to these criticisms, Yin (2018) provides comprehensive methodological steps for conducting case studies rigorously. In addition, many other publications, some with a discipline-specific view on case study research, offer guidelines for achieving rigour in case study research, e.g., Benbasat et al. (1987), Dubé and Paré (2003), Pan and Tan (2011), or Voss et al. (2002). Most publications on case study methodology converge on four criteria for ensuring rigour in case study research: (1) construct validity, (2) internal validity, (3) external validity, and (4) reliability (Gibbert et al. 2008; Voss et al. 2002; Yin 2018).

A key element of rigour in case study research is to look at the unit of analysis of a case from multiple perspectives in order to draw informed conclusions (Dubois and Gadde 2002). Case study researchers refer to this as triangulation, for example, by using multiple sources of evidence per case to support findings (Benbasat et al. 1987; Yin 2018). However, in our own research experience, we have come across numerous IS publications with a limited number of sources of evidence per case, such as a single interview per case. Some researchers refer to these studies as mini case studies (e.g., McBride 2009; Weill and Olson 1989), while others refer to them as multiple mini cases (e.g., Eisenhardt 1989). We were unable to find a definition or conceptualisation of this type of case study. Therefore, we will refer to this type of case study as a multiple mini case study (MMCS). Interestingly, many researchers use these MMCSs to study emerging and innovative phenomena.

From a methodological perspective, multiple case study publications with limited sources of evidence, also known as MMCSs, may face criticism for their lack of rigour (Dubé and Paré 2003). Alternatively, they may be referred to as “marginal case studies” (Piekkari et al. 2009, p. 575) if they fail to establish a connection between theory and empirical evidence, provide only limited context, or merely offer illustrative aspects (Piekkari et al. 2009). IS scholars advocate conducting case study research in a mindful manner by balancing methodological blueprints and justified design choices (Keutel et al. 2014). Consequently, we propose MMCSs as a mindful approach with the potential for rigour, distinguishing them from marginal case studies. The following research question guides our study:

RQ: How can researchers rigorously conduct MMCSs in the IS discipline?

As shown in Fig. 1, we develop an analytical framework by synthesising methodological guidance on how to rigorously conduct multiple case study research. We then address three aspects of our research question: For aspect (1), we analyse published MMCSs in the IS discipline to derive a "Research in Practice" definition of MMCSs and research situations for MMCSs. For aspect (2), we use the analytical framework to analyse how researchers in the IS discipline ensure that existing MMCSs follow a rigorous methodology. For aspect (3), we discuss the methodological findings about rigorous MMCSs in order to derive methodological guidelines for MMCSs that researchers in the IS discipline can follow.

Fig. 1
figure 1

Overview of the research approach

We approach these aspects by introducing the conceptual foundation for case study research in Sect. 2. We define commonly accepted criteria for ensuring validity in case study research, introduce the concept of MMCSs, and distinguish them from other types of case studies. Furthermore, as a basis for analysis, we present an analytical framework of methodological steps and options for the rigorous conduct of multiple case study research. Section 3 presents our methodological approach to identifying published MMCSs in the IS discipline. In Sect. 4, we first define MMCSs from a “Research in Practice” perspective (Sect. 4.1). Second, we present an overview of methodological options for rigorous MMCSs based on our analytical framework (Sect. 4.2). In Sect. 5, we differentiate MMCSs from other research approaches, identify research situations for MMCSs (i.e., studying emerging and innovative phenomena), and provide guidance on how to ensure rigour in MMCSs. In our conclusion, we clarify the limitations of our study and provide an outlook for future research with MMCSs.

2 Conceptual foundation

2.1 Case study research

Case study research is about understanding phenomena by studying one or multiple cases in their context. Creswell and Poth (2016) define it as an “approach in which the investigator explores a bounded system (a case) or multiple bounded systems (cases) over time, through detailed, in-depth data collection” (p. 73). It is therefore suitable for complex topics about which little knowledge is available, which require in-depth investigation, or whose research subject is inseparable from its context (Paré 2004). Additionally, Yin (2018) states that case study research is useful if the research focuses on contemporary events where no control of behavioural events is required. Typically, this type of research is most suitable for how and why research questions (Yin 2018). Ultimately, the inferences from case study research are based on analytic or logical generalisation (Yin 2018). Instead of drawing conclusions from a representative statistical sample towards the population, case study research builds on analytical findings from the observed cases (Dubois and Gadde 2002; Eisenhardt and Graebner 2007). Case studies can be descriptive, exploratory, or explanatory (Dubé and Paré 2003).

The contribution of research to theory can be divided into the steps of theory building, development, and testing, which form a continuum (Ridder 2017; Welch et al. 2011), and case studies are useful at all stages (Ridder 2017). In theory building, there is no theory available to explain a phenomenon, and the researcher identifies new concepts, constructs, and relationships based on the data (Ridder 2017). In theory development, a tentative theory already exists that is extended or refined (e.g., by adding new antecedents, moderators, mediators, and outcomes) (Ridder 2017). In theory testing, an existing theory is challenged through empirical investigation (Ridder 2017).

In case study research, there are different paradigms for obtaining research results, either positivist or interpretivist (Dubé and Paré 2003; Orlikowski and Baroudi 1991). The positivist paradigm assumes that a set of variables and relationships can be objectively identified by the researcher (Orlikowski and Baroudi 1991). In contrast, the interpretivist paradigm assumes that the results are inherently rooted in the researcher’s worldview (Orlikowski and Baroudi 1991). Nowadays, researchers find roughly equal numbers of positivist and interpretivist case studies in the IS discipline, whereas almost 20 years ago positivist research was perceived as dominant (Keutel et al. 2014; Klein and Myers 1999). As we aim to understand how to conduct MMCSs rigorously, we focus on methodological guidance for positivist case study research.

The literature proposes a four-phased approach to conducting a case study: (1) the definition of the research design, (2) the data collection, (3) the data analysis, and (4) the composition (Yin 2018). Table 1 provides an overview and explanation of the four phases.

Table 1 Research phases for case study research (Yin 2018)

Case studies can be classified based on their depth and breadth, as shown in Fig. 2. We can distinguish five types of case studies: in-depth single case studies, marginal case studies, multiple case studies, MMCSs, and extensive in-depth multiple case studies. Each type has distinct characteristics, yet the boundaries between the different types of case studies are blurred. The shading in Fig. 2 visualises the different types of case studies. The italic references are well-established publications that define the respective type and provide methodological guidance; for marginal case studies, they refer to publications that conceptualise this type.

Fig. 2
figure 2

Simplistic conceptualisation of MMCS

In-depth single case studies focus on a single bounded system as a case (Creswell and Poth 2016; Paré 2004; Yin 2018). According to the literature, a single case study should only be used if a case meets one or more of the following five characteristics: it is a critical, unusual, common, revelatory, or longitudinal case (Benbasat et al. 1987; Yin 2018). Single case studies are more often used for descriptive research (Dubé and Paré 2003).

A second type of case study is the marginal case study, which generally has low depth (Keutel et al. 2014; Piekkari et al. 2009). Marginal case studies lack a clear link between theory and empirical evidence as well as a clear contextualisation of the case, and they are often used for illustration purposes (Keutel et al. 2014; Piekkari et al. 2009). Therefore, marginal case studies provide only marginal insights with a lack of generalisability.

In contrast, multiple case studies employ multiple cases to obtain a broader picture of the researched phenomenon from different perspectives (Creswell and Poth 2016; Paré 2004; Yin 2018). These multiple case studies are often considered to provide more robust results due to the multiplicity of their insights (Eisenhardt and Graebner 2007). However, often discussed criticisms of multiple case studies are high costs, difficult access to multiple sources of evidence for each case, and long duration (Dubé and Paré 2003; Meredith 1998; Voss et al. 2002). Eisenhardt (1989) considers four to ten in-depth cases as a suitable number of cases for multiple case study research. With fewer than four cases, the empirical grounding is less convincing, and with more than ten cases, researchers quickly get overwhelmed by the complexity and volume of data (Eisenhardt 1989). Therefore, methodological literature views extensive in-depth multiple case studies as almost infeasible due to their high complexity and resource demands, which can easily overwhelm the research team and the readers (Stake 2013). Hence, we could not find a methodological publication outlining the approach for this case study type.

To address the complexity and resource issues of multiple case studies, a new phenomenon has emerged: the MMCS. An MMCS is a special type of multiple case study that focuses on an investigation’s breadth by using a relatively high number of cases while having a somewhat limited depth per case. We characterise breadth not only by the number of cases but also by the variety of the cases. Even though there is no formal conceptualisation of the term, we understand MMCSs as a type of multiple case study research with few sources of evidence per case. Due to the limited depth per case, researchers can overcome the resource and complexity issues of classical multiple case studies. However, having only a few sources of evidence per case may be considered a threat to rigour. Therefore, in this publication, we provide suggestions on how to address these threats.

2.2 Rigour in case study research

Rigour is essential for case study research (Dubé and Paré 2003; Yin 2018) and, in the early 2000s, researchers criticised case study research for inadequate rigour (e.g., Dubé and Paré 2003; Gibbert et al. 2008). Based on this, various methodological publications provide guidance for rigorous case study research (e.g., Dubé and Paré 2003; Gibbert et al. 2008).

Methodological literature proposes four criteria to ensure rigour in case study research: Construct validity, internal validity, external validity, and reliability (Dubé and Paré 2003; Gibbert et al. 2008; Yin 2018). Table 2 outlines these criteria and states in which research phase they should be addressed (Yin 2018). Methodological literature agrees that all four criteria must be met for rigorous case study research (Dubé and Paré 2003).

Table 2 Criteria for rigour in case study research (Yin 2018, p. 42f)

The methodological literature discusses multiple options for achieving rigour in case study research (e.g., Benbasat et al. 1987; Dubé and Paré 2003; Eisenhardt 1989; Yin 2018). We aggregated guidance from multiple sources by conducting a cross-disciplinary literature review to build our analytical foundation (cf. Fig. 1). This literature review aims to identify the most relevant multiple case study methodology publications from a cross-disciplinary and IS-specific perspective. We focus on the most cited methodology publications, while being aware that this may over-represent disciplines with a higher number of case study publications. However, this approach helps to capture an implicit consensus among case study researchers on how to conduct multiple case studies rigorously. The literature review produced an analytical framework of methodological steps and options for conducting multiple case studies rigorously. Appendix A provides a detailed documentation of the literature review process. The analytical framework derived from the set of methodological publications is presented in Table 3. We identified required and optional steps for each research stage. The analytical framework is the basis for the further analysis of MMCSs, and an explanation of all methodological steps is provided in Appendix B.

Table 3 Methodological steps and options to ensure rigour in multiple case study research

3 Research methodology

For our research, we analysed published MMCSs in the IS discipline with the goal of understanding how these publications ensured rigour. This section outlines the methodology of how we identified our MMCS publications.

First, we searched bibliographic databases and citation indexing services (Vom Brocke et al. 2009; Vom Brocke et al. 2015) to retrieve IS-specific MMCSs (Hanelt et al. 2015). As shown in Fig. 3, we used two sets of keywords, the first set focusing on multiple case studies and the second set explicitly on mini case studies. We decided to follow this approach as many MMCSs are positioned as multiple case studies, avoiding the connotation “mini” or “short”. We restricted our search to completed research publications written in English from litbaskets.io size “S”, a set of 29 highly ranked IS journals (Boell and Wang 2019), and leading IS conference proceedings from AMCIS, ECIS, HICSS, ICIS, and PACIS (published until the end of June 2023). We focused on these outlets, as they can be taken as a representative sample of high-quality IS research (Gogan et al. 2014; Sørensen and Landau 2015).
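Purely as an illustration of how such a two-set keyword search can be assembled, the following sketch builds one boolean query string per keyword set. The terms shown are hypothetical placeholders; the actual search terms are documented in Fig. 3.

```python
# Hypothetical keyword sets; the actual search terms are documented in Fig. 3.
set_1 = ['"multiple case study"', '"multiple-case study"']  # focus on multiple case studies
set_2 = ['"mini case study"', '"short case study"']         # focus explicitly on mini case studies

def or_block(terms):
    """Combine one keyword set into a single OR-connected query block."""
    return "(" + " OR ".join(terms) + ")"

# One query per keyword set, to be run against each bibliographic database
queries = [or_block(set_1), or_block(set_2)]
for query in queries:
    print(query)
```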

Fig. 3
figure 3

The search process for published MMCSs in the IS discipline

Second, we screened the obtained set of IS publications to identify MMCSs. We only included publications with positivist multiple cases where the majority of cases was captured with only one primary source of evidence. Further, we excluded all publications which were interview studies rather than case studies (i.e., publications without a clearly defined case). In some instances, it was unclear from the full text whether a publication fulfilled these requirements. Therefore, we contacted the authors and clarified the research methodology with them. Eventually, our final set contained 50 publications using MMCSs.

For the qualitative data analysis, we employed axial coding (Recker 2012) based on the pre-defined analytical framework shown in Table 3. For the coding, we followed the explanations of the authors in the manuscripts. The coding was conducted and reviewed by two of the authors. We coded the first five publications of the set of IS MMCS publications together and discussed our decisions. After the initial coding was completed, we checked the reliability and validity by re-coding a sample of the other author’s set. In this sample, we achieved an inter-coder reliability of 91%, measured as percent agreement in the decisions made (Nili et al. 2020). Hence, we consider our coding to be highly consistent.
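To make the reliability measure transparent, the following minimal sketch shows how percent agreement between two coders can be computed; the coding decisions are hypothetical and serve only to illustrate the calculation, not our actual data.

```python
# Minimal sketch of inter-coder reliability as percent agreement (cf. Nili et al. 2020).
# The coding decisions below are hypothetical and only illustrate the calculation.

def percent_agreement(coder_a, coder_b):
    """Share of coding decisions on which both coders selected the same option."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must have coded the same set of decisions")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

coder_a = ["criterion-based", "literal replication", "pattern matching", "step not evident"]
coder_b = ["criterion-based", "step not evident", "pattern matching", "step not evident"]

print(f"{percent_agreement(coder_a, coder_b):.0%}")  # 75% for this toy example
```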

In the results section, we illustrate the chosen methodological steps for each MMCS type (descriptive, exploratory, and explanatory). For this purpose, we selected three publications based on two criteria: (1) journal publications only, as they provide more detail on their methodological steps, and (2) publications that applied most of the analytical framework’s methodological steps. This led to three exemplary IS MMCS publications: (1) McBride (2009) for descriptive MMCSs, (2) Baker and Niederman (2014) for exploratory MMCSs, and (3) van de Weerd et al. (2016) for explanatory MMCSs.

4 Results

4.1 MMCS from a “Research in Practice” perspective

In this section, we explain MMCSs from a "Research in Practice" perspective and identify different types based on our sample of 50 MMCS publications. As outlined in Sect. 2.1, an MMCS is a special type of multiple case study, which focuses on an investigation’s breadth by using a relatively high number of cases while having a limited depth per case. In the most extreme scenario, an MMCS only has one source of evidence per case. Moreover, breadth is not only characterised by the number of cases, but also by the variety of the cases. MMCSs have been widely used but hardly labelled as such, i.e., only 10 of our 50 analysed MMCS publications explicitly use the terms mini or short case in the manuscript. Multiple case study research distinguishes between descriptive, exploratory, and explanatory case studies (Dubé and Paré 2003). The MMCSs in our sample follow the same classification, with three descriptive, 40 exploratory, and seven explanatory MMCSs. Descriptive and exploratory MMCSs are used in the early stages of research, and exploratory and explanatory MMCSs are used to corroborate findings.

Descriptive MMCSs provide little information on the methodological steps for the design, data collection, analysis, and presentation of results. They are used to illustrate novel phenomena and create research questions, not solutions, and can be useful for developing research agendas (e.g., McBride 2009; Weill and Olson 1989). The descriptive MMCS publications analysed contained between four and six cases, with an average of 4.6 cases per publication. Of the descriptive MMCSs analysed, one did not state research questions, one answered a how question, and the third answered how and what questions. Descriptive MMCSs are illustrative and have a low depth per case, resulting in the highest risk of being considered a marginal case study.

Exploratory MMCSs are used to explore new phenomena quickly, generate initial research results, and corroborate findings. Most of the analysed exploratory MMCSs answer what and how questions or combinations thereof. However, six publications do not explicitly state a research question, and some MMCSs use why, which, or whether research questions. The analysed exploratory MMCSs have three to 27 cases, with an average of 10.2 cases per publication. An example of an exploratory MMCS is the study by Baker and Niederman (2014), who explore the impacts of strategic alignment during merger and acquisition (M&A) processes. They argue that previous research with multiple case studies (mostly with three cases) shows some commonalities, but much remains unclear due to the low number of cases. Moreover, they justify the limited depth of their research with the “proprietary and sensitive nature of the questions” (Baker and Niederman 2014, p. 123).

Explanatory MMCSs use an a priori framework with a relatively high number of cases to find groups of cases that share similar characteristics. Most explanatory MMCSs answer how questions, yet some publications answer what, why, or combinations of the three questions. The analysed explanatory MMCSs have three to 18 cases, with an average of 7.2 cases per publication. An example of an explanatory MMCS publication is van de Weerd et al. (2016), who researched the influence of organisational factors on the adoption of Software as a Service (SaaS) in Indonesia.

4.2 Applied MMCS methodology in IS publications

4.2.1 Overarching

In the following sections, we present the results of our analysis. For this purpose, we mapped our 50 IS MMCS publications to the methodological options (Table 3) and present one example per MMCS type. We extended some methodological steps with options from methodology-in-use. A full coding table can be found in Appendix D. Tables 4, 5, 6 and 7 are structured in the same way: they show the absolute number of occurrences of each methodological option in descriptive, exploratory, and explanatory IS MMCS publications and, in parentheses, the corresponding percentages. The percentages may not add up to 100% due to rounding. The bold numbers show the most common methodological option for each MMCS type and step. Most publications could be classified into previously identified options. Some IS MMCS publications lacked detail on methodological steps, so we classified them as "step not evident". Only 16% (8 out of 50) explained how they addressed validity and reliability threats.
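As an illustration of how such occurrence tables can be derived from a coding table, the following sketch tallies the absolute and rounded percentage occurrences per methodological option for one MMCS type and step; the coding records are hypothetical, not our actual data, and the rounding step is also why percentages may not add up to 100%.

```python
from collections import Counter

# Hypothetical coding records: (MMCS type, methodological step, chosen option), one per publication.
codings = [
    ("exploratory", "case selection logic", "criterion-based"),
    ("exploratory", "case selection logic", "criterion-based"),
    ("exploratory", "case selection logic", "maximum variation"),
    ("explanatory", "case selection logic", "snowball"),
]

def tally(codings, mmcs_type, step):
    """Absolute and rounded percentage occurrences of each option for one MMCS type and step."""
    counts = Counter(option for t, s, option in codings if t == mmcs_type and s == step)
    total = sum(counts.values())
    return {option: (n, round(100 * n / total)) for option, n in counts.items()}

print(tally(codings, "exploratory", "case selection logic"))
# {'criterion-based': (2, 67), 'maximum variation': (1, 33)}
```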

Table 4 Applied methodological steps in the research design
Table 5 Applied methodological steps in the data collection phase
Table 6 Applied methodological steps in the data analysis phase
Table 7 Applied methodological steps in the composition phase

4.2.2 Research design phase

There are six methodological steps in the research design phase, as shown in Table 4. Descriptive MMCSs usually define the research question (2 out of 3, 67%), clarify the unit of analysis (2 out of 3, 67%), bound the case (2 out of 3, 67%), or specify an a priori theoretical framework (2 out of 3, 67%). The case replication logic is mostly not evident (2 out of 3, 67%). Descriptive MMCSs use a criterion-based selection (1 out of 3, 33%), a maximum variation selection (1 out of 3, 33%), or do not specify the selection logic (1 out of 3, 33%). Descriptive MMCSs have a high risk of becoming a marginal case study due to their illustrative nature, and our chosen example is no different. McBride (2009) does not define the research question, does not have an a priori theoretical framework, nor does he justify the case replication and the case selection logic. However, he clarifies the unit of analysis and extensively bounds each case with significant context about the case organisation and its setup.

The majority of exploratory MMCSs define the research question (34 out of 40, 85%), clarify the unit of analysis (35 out of 40, 88%), and specify an a priori theoretical framework (33 out of 40, 83%). However, only a minority bound the case (6 out of 40, 15%) or justify the case replication logic (13 out of 40, 33%). The most used case selection logic is the criterion-based selection (23 out of 40, 58%), followed by step not evident (5 out of 40, 13%), other selection approaches (3 out of 40, 8%), maximum variation selection (3 out of 40, 8%), a combination of approaches (2 out of 40, 5%), snowball selection (2 out of 40, 5%), typical case selection (1 out of 40, 3%), and convenience-based selection (1 out of 40, 3%). Baker and Niederman (2014) build their exploratory MMCS on previous multiple case studies with three cases that showed ambiguous results. Hence, Baker and Niederman (2014) formulate three research objectives instead of defining a research question. They clearly define the unit of analysis (i.e., the integration of the IS function after M&A) but lack the bounding of the case. The authors use a rather complex a priori framework, leading to a high number of required cases. This a priori framework is also used for the “theoretical replication logic [to choose] conforming and disconfirming cases” (Baker and Niederman 2014, p. 116). A combination of maximum variation and snowball selection is used to select the cases (Baker and Niederman 2014). The maximum variation selection is chosen to get evidence for all elements of their rather complex a priori framework (i.e., the breadth), and the snowball sampling is chosen to get more details for each framework element.

All explanatory MMCSs define the research question, clarify the unit of analysis, and specify an a priori theoretical framework. However, only one (14%) bounds the case. The case replication logic is mostly a mixture of theoretical and literal replication (3 out of 7, 43%) and one (14%) MMCS does a literal replication. For 43% (3 out of 7) of the publications, the step is not evident. Most explanatory MMCSs use criterion-based selection (4 out of 7, 57%), followed by maximum variation selection (2 out of 7, 29%) and snowball selection (1 out of 7, 14%). In their publication, van de Weerd et al. (2016) define the research question and clarify the unit of analysis (i.e., the influence of organisational factors on SaaS adoption in Indonesian SMEs). Further, they specify an a priori framework (i.e., based on organisational size, organisational readiness, and top management support) to target the research (van de Weerd et al. 2016). A combination of theoretical (between the groups of cases) and literal (within the groups of cases) replication was used. To strengthen the findings, van de Weerd et al. (2016) find at least one other literally replicated case for each theoretically replicated case.

To summarise this phase, we see that in all three types of MMCSs, the majority of publications define the research question, clarify the unit of analysis, and specify an a priori theoretical framework. Moreover, descriptive MMCSs are more likely to bound the case than exploratory and explanatory MMCSs. However, only a minority across all MMCSs justify the case replication logic, whereas the majority does not. Most MMCSs justify the case selection logic, with criterion-based case selection being the most often applied methodological option.

4.2.3 Data collection phase

In the data collection phase, there are four methodological steps, as summarised in Table 5.

One descriptive MMCS applies triangulation via multiple sources, whereas for the majority (2 out of 3, 67%), the step is not evident. One (33%) of the analysed descriptive MMCSs creates a full chain of evidence, none creates a case study database, and one (33%) uses a case study protocol. McBride (2009) applies triangulation via multiple sources, as he followed “up practitioner talks delivered at several UK annual conferences” (McBride 2009, p. 237). Therefore, we view the follow-up interviews as the primary source of evidence per case, as dedicated questions to the unit of analysis can be asked per case. Triangulation via multiple sources was then conducted by combining practitioner talks and documents with follow-up interviews. McBride (2009) does not create a full chain of evidence, a case study database, nor a case study protocol. This design decision might be rooted in the objective of a descriptive MMCS to illustrate and open up new questions rather than find clear solutions (McBride 2009).

Most exploratory MMCSs triangulate via multiple sources (20 out of 40, 50%) or via multiple investigators (4 out of 40, 10%). Eight (20%) exploratory MMCSs apply multiple triangulation types, and for eight (20%), no triangulation is evident. At first glance, triangulation via multiple sources may seem to contradict the definition of MMCSs, yet it does not. MMCSs that triangulate via multiple sources have one source per case as the primary, detailed evidence (e.g., an interview), which is combined with easily available supplementary sources of evidence (e.g., public reports and documents (Baker and Niederman 2014), press articles (Hahn et al. 2015), or online data (Kunduru and Bandi 2019)). As this leads to multiple sources of evidence, we understand this as triangulation via multiple sources, albeit on a different level than triangulating via multiple in-depth interviews per case. Only a minority of exploratory MMCSs create a full chain of evidence (14 out of 40, 35%), whereas a majority use a case study database (23 out of 40, 58%) and half use a case study protocol (20 out of 40, 50%). Baker and Niederman (2014) triangulate with multiple sources (i.e., financial reports as supplementary sources) to increase the validity of their research. Further, the authors create a full chain of evidence from their research question through an identical interview protocol to the case study’s results. For every case, an individual case report is created and stored in the case study database (Baker and Niederman 2014).

All explanatory MMCSs triangulate during the data collection phase, either via multiple sources (2 out of 7, 29%) or a combination of multiple investigators and sources (5 out of 7, 71%). Interestingly, only three explanatory MMCSs (43%) create a full chain of evidence. All create a case study database (7 out of 7, 100%) and the majority creates a case study protocol (6 out of 7, 86%). In their explanatory MMCS, van de Weerd et al. (2016) use semi-structured interviews as the primary data collection method. The interview data is complemented “with field notes and (online) documentation” (van de Weerd et al. 2016, p. 919), e.g., data from corporate websites or annual reports. Moreover, a case study protocol and a case study database in NVivo are created to increase reliability.

To summarise the data collection phase, we see that most MMCSs (40 out of 50, 80%) apply some type of triangulation. However, only 36% (18 out of 50) of the analysed MMCSs create a full chain of evidence. Moreover, descriptive MMCSs are less likely to create a case study database (0 out of 3, 0%) or a case study protocol (1 out of 3, 33%). In contrast, most exploratory and explanatory MMCS publications create a case study database and a case study protocol.

4.2.4 Data analysis phase

There are three methodological steps (cf. Table 6) for the data analysis phase, each with multiple methodological options.

One descriptive MMCS (33%) corroborates findings through triangulation, and two (67%) do not. Further, one (33%) uses a rich description of findings as its other corroboration approach, whereas for the majority (2 out of 3, 67%), corroboration with other approaches is not evident. Descriptive MMCSs mostly do not define their within-case analysis strategy (2 out of 3, 67%). However, pre-defined patterns are used to conduct a cross-case analysis (2 out of 3, 67%). In the data analysis, McBride (2009) triangulates via multiple sources of evidence (i.e., talks at practitioner conferences and resulting follow-up interviews), but does not apply other corroboration approaches or provide methodological explanations for the within-case or cross-case analysis. This design decision might be rooted in the illustrative nature of his descriptive MMCS and the focus on analysing each case standalone.

Exploratory MMCSs mostly corroborate findings through a combination of triangulation via multiple investigators and sources (15 out of 40, 38%) or triangulation via multiple sources (9 out of 40, 23%). However, for ten (25%) exploratory MMCSs, this step is not evident. For the other corroboration approaches, a combination of approaches is mostly used (15 out of 40, 38%), followed by rich description of findings (11 out of 40, 28%), peer review (6 out of 40, 15%), and prolonged field visits (1 out of 40, 3%). For five (13%) publications, other corroboration approaches are not evident. Pattern matching (17 out of 40, 43%) and explanation building (5 out of 40, 13%) are the most used methodological options for the within-case analysis. To conduct a cross-case analysis, 11 (28%) MMCSs use a comparison of pairs or groups of cases, nine (23%) pre-defined patterns, and six (15%) structure their data along themes. Interestingly, for 14 (35%) exploratory MMCSs, no methodological step to conduct the cross-case analysis is evident. Baker and Niederman (2014) use a combination of triangulation via multiple investigators (“The interviews were coded by both researchers independently […], with a subsequent discussion to reach complete agreement” (Baker and Niederman 2014, p. 117)) and sources to increase internal validity. Moreover, the authors use a rich description of the findings. An explanation-building strategy is used for the within-case analysis, and the cross-case analysis is done based on pre-defined patterns (Baker and Niederman 2014). This decision for the cross-case analysis is justified by a citation of Dubé and Paré (2003, p. 619), who see it as “a form of pattern-matching in which the analysis of the case study is carried out by building a textual explanation of the case.”

Explanatory MMCSs corroborate findings through triangulation via multiple sources (4 out of 7, 57%) or a combination of multiple investigators and sources (3 out of 7, 43%). For the other corroboration approaches, a rich description of findings (3 out of 7, 43%), a combination of approaches (3 out of 7, 43%), or peer review (1 out of 7, 14%) is used. To conduct a within-case analysis, pattern matching (5 out of 7, 71%) or explanation building (1 out of 7, 14%) is used. For the cross-case analysis, pre-defined patterns (3 out of 7, 43%) and a comparison of pairs or groups of cases (2 out of 7, 29%) are used; yet, for two (29%) explanatory MMCSs, a cross-case analysis step is not evident. van de Weerd et al. (2016) corroborate their findings through triangulation via multiple sources and, as other corroboration approaches, a combination of rich description of findings and solicitation of participants’ views (“summarizing the interview results of each case company for feedback and approval” (van de Weerd et al. 2016, p. 920)). Moreover, for the within-case analysis, the authors “followed an explanation-building procedure to strengthen […] [the] internal validity” (van de Weerd et al. 2016, p. 920). For the cross-case analysis, the researchers compare groups of cases. They refer to this approach as an informal qualitative comparative analysis.

To summarise the results of the data analysis phase, we see that some type of triangulation is used by most MMCSs, with source triangulation (alone or in combination with another approach) being the most often used methodological option. For the within-case analysis, pattern matching (22 out of 50, 44%) is the most often used methodological option. For the cross-case analysis, pre-defined patterns are most often used (14 out of 50, 28%). However, depending on the type of MMCS, there are differences in the options used, and some methodological options are never used (e.g., time-series analysis and solicitation of participants’ views).

4.2.5 Composition phase

We can find two methodological steps for the composition phase, as summarised in Table 7.

Descriptive MMCSs do not apply triangulation in the composition phase (3 out of 3, 100%), nor do they let key informants review the draft of the case study report (3 out of 3, 100%). Likewise, the descriptive MMCS by McBride (2009) does not apply any of the methodological steps.

Exploratory MMCSs mostly use triangulation via multiple sources (25 out of 40, 63%), a combination of multiple sources and theories (2 out of 40, 5%), triangulation via multiple investigators (1 out of 40, 3%), and a combination of multiple sources and methods (1 out of 40, 3%). However, for 11 (28%) exploratory MMCS publications, no triangulation step is evident. Moreover, the majority (34 out of 40, 85%) do not let key informants review a draft of the case study report. Baker and Niederman (2014) neither use triangulation in the composition phase nor let key informants review the draft of the case study report. An example of an exploratory publication that applies both methodological steps is the publication by Kurnia et al. (2015). The authors triangulate via multiple sources and let key informants review their interview transcripts and the case study report to increase construct validity.

Explanatory MMCSs mostly use triangulation via multiple sources (5 out of 7, 71%), and for two (29%), the step is not evident. Furthermore, only two MMCS publications (29%) let key informants review the draft of the case study report, whereas the majority (5 out of 7, 71%) do not. In their publication, van de Weerd et al. (2016) use both methodological steps of the composition phase. The authors triangulate via multiple sources by presenting interview snippets from different cases for each result in the case study manuscript. Moreover, each case and the final case study report were shared with key informants for review and approval to reduce the risk of misinterpretations and increase construct validity.

To summarise, most exploratory and explanatory MMCSs use triangulation in the composition phase, whereas descriptive MMCSs do not. Moreover, only a small fraction of all MMCSs let key informants review a draft of the case study report (8 out of 50, 16%).

5 Discussion

5.1 MMCS from a “Research in Practice” perspective

5.1.1 Delineating MMCS from other research approaches

In this section, we delineate MMCSs from related research approaches. In the subsequent sections, we outline research situations for which MMCSs can be used and the benefits MMCSs provide.

Closely related research approaches from which we delineate MMCSs are multiple case studies, interviews, and vignettes. As shown in Fig. 2, MMCSs differ from multiple case studies in that they focus on breadth by using a high number of cases with limited depth per case. In the most extreme situation, an MMCS only has one primary source of evidence per case. Moreover, MMCSs can also consider a greater variety of cases. In contrast, multiple case studies have a high depth per case and multiple sources of evidence per case to allow for source triangulation (Benbasat et al. 1987; Yin 2018). Moreover, multiple case studies mainly focus on how and why research questions (Yin 2018), whereas MMCSs can additionally answer what, whether, and which research questions. The rationale why MMCSs are used for more types of research questions is their breadth, which allows them to also answer rather exploratory research questions.

Distinguishing MMCSs from interviews is more difficult. Yet, we see two differences. First, interview studies do not have a clear unit of analysis. Interview studies may choose interviewees based on expertise (expert interviews), whereas case study researchers select informants based on their ability to inform about the case (key informants) (Yin 2018). Most of the 50 analysed MMCSs (88%) specify their unit of analysis. Second, MMCSs can use multiple data collection methods (e.g., observations, interviews, documents), while interview studies only use one (the interview) (Lamnek and Krell 2010). An example showing these delineation difficulties between MMCSs and interviews is the publication of Demlehner and Laumer (2020). The authors claim to take “a multiple case study approach including 39 expert interviews” (Demlehner and Laumer 2020, p. 1). However, our criteria classify this as an interview study. Demlehner and Laumer (2020) contend that the interviewees were chosen using a “purposeful sampling strategy” (p. 5). However, case study research selects cases based on replication logic, not sampling (Yin 2018). Moreover, the results are not presented on a per-case basis (as usual for case studies); instead, the findings are presented on an aggregated level, similar to expert interviews. Therefore, we would not classify this publication as an MMCS but find that it is a very good example to discuss this delineation.

MMCSs differ from vignettes, which are used for (1) data collection, (2) data analysis, and (3) research communication (Klotz et al. 2022; Urquhart 2001). Researchers use vignettes for data collection as stimuli to which participants react (Klotz et al. 2022), i.e., a carefully constructed description of a person, object, or situation (Atzmüller and Steiner 2010; Hughes and Huby 2002). We can delineate MMCSs from vignettes for data collection based on this definition. First, MMCSs are not used as a stimulus to which participants react; in MMCSs, data is collected without such a stimulus. Furthermore, vignettes for data collection are carefully constructed, which contradicts the characteristics of MMCSs, which are based entirely on collected empirical data rather than constructed descriptions.

A data analysis vignette is used as a retrospective tool (Klotz et al. 2022) and is very short, which makes it difficult to analyse deeper relationships between constructs. MMCSs differ from vignettes for data analysis in two ways. First, MMCSs are a complete research methodology with four steps, whereas vignettes for data analysis cover only one step (the data analysis) (e.g., Zamani and Pouloudi 2020). Second, vignettes are too short to conduct a thorough analysis of relationships, whereas MMCSs foster a more comprehensive analysis, allowing for a deeper analysis of relationships.

Finally, a vignette used for research communication “(1) is bounded to a short time span, a location, a special situation, or one or a few key actors, (2) provides vivid, authentic, and evocative accounts of the events with a narrative flow, (3) is rather short, and (4) is rooted in empirical data, sometimes inspired by data or constructed.” (Klotz et al. 2022, p. 347). Based on these four elements of the vignette definition, we can delineate MMCSs from vignettes used for research communication. First, MMCSs are not necessarily bounded to a short time span, location, special situation, or key actors; instead, with MMCSs, a clearly defined case bounded in its context is researched. Second, the focus of MMCSs is not on the narrative flow; instead, the focus is on describing (cf. McBride (2009)), exploring (cf. Baker and Niederman (2014)), or explaining (cf. van de Weerd et al. (2016)) a phenomenon. Third, while MMCSs do not have the depth of multiple case studies, they are much more comprehensive than vignettes (e.g., the majority of analysed publications (42 out of 50, 84%) specify an a priori theoretical framework). Fourth, every MMCS must be based on empirical data, i.e., all of our 50 MMCSs collect data for their study and base their results on this data. This is a key difference from vignettes, which can be completely fictitious (Klotz et al. 2022).

5.1.2 MMCS research situations

The decision to use an MMCS as a research method depends on the research context. MMCSs can be used in the early stages of research (descriptive and exploratory MMCSs) and to corroborate findings (exploratory and explanatory MMCSs). Academic literature has yet to agree on a uniform categorisation of research questions. For instance, Marshall and Rossman (2016) distinguish between descriptive, exploratory, explanatory, and emancipatory research questions. In contrast, Yin (2018) distinguishes between who, what, where, how, and why questions and argues that the latter two are especially suitable for explanatory case study research. MMCSs can answer more types of research questions than Yin (2018) proposed. The reason for this is rooted in the higher breadth of MMCSs, which allows them to also answer rather exploratory what, whether, or which questions, besides the how and why questions suggested by Yin (2018).

For descriptive MMCSs, the main goal of the how and what questions is to describe the phenomenon. However, in our sample of analysed MMCSs, the analysis stops after the description of the phenomenon. The main goal of the five types of exploratory MMCS research questions is to investigate little-known aspects of a particular phenomenon. The how and why questions analyse operational links between different constructs (e.g., “How do different types of IS assets account for synergies between business units to create business value?” (Mandrella et al. 2016, p. 2)). Exploratory what questions can be answered by case study research and other research methods (e.g., surveys or archival analysis) (Yin 2018). Nevertheless, all whether and which MMCS research questions can also be re-formulated as exploratory what questions. The reason why many MMCSs answer what, whether, or which research questions lies in the breadth of MMCSs (i.e., the higher number and variety of cases), which allows them to answer these rather exploratory research questions to a satisfactory level. Finally, the research questions of the explanatory MMCSs aim to analyse operational links (i.e., how or why something is happening). This is also in line with the findings of Yin (2018) for multiple case study research. However, for MMCSs, this view must be extended, as explanatory MMCSs are also able to answer what questions. We explain this with the higher breadth of MMCSs.

To discuss an MMCS’s contribution to theory, we use the idea of the theory continuum proposed by Ridder (2017) (cf. Sect. 2.1). Although MMCSs are used in the early phases of research (descriptive and exploratory), we do not recommend using them to build theory. We argue that for theory building, data with “as much depth as […] feasible” (Eisenhardt 1989, p. 539) is required on a per-case basis. However, a key characteristic of MMCSs is the limited depth per case, which conflicts with the in-depth requirements of theory building. Moreover, a criterion for theory building is that there is no theory available which explains the phenomenon (Ridder 2017). Nevertheless, of our analysed MMCSs, 84% (42 out of 50) have an a priori theoretical framework. Furthermore, for theory building, the recommendation is to use between four and ten cases; with more, “it quickly becomes difficult to cope with the complexity and volume of the data” (Eisenhardt 1989, p. 545). However, a characteristic of MMCSs is to have a relatively high number of cases, i.e., the analysed MMCSs often have more than 20 cases, which is significantly above the recommendation for theory building.

The next phase in the theory continuum is theory development, where a tentative theory is extended or refined (Ridder 2017). MMCSs should be, and are, used for theory development, i.e., 84% (42 out of 50) of the analysed MMCS publications have an a priori theoretical framework that is extended and refined using the MMCS. An MMCS example of theory development is the research of Karunagaran et al. (2016), who use a combination of the diffusion of innovation theory and the technology-organisation-environment framework as tentative theories to research the adoption of cloud computing. As Ridder (2017) outlined, for theory development, literal replication and pattern matching should be used. Both methodological steps are used by Karunagaran et al. (2016) to identify the mechanisms of cloud adoption more precisely.

The next step in the theory continuum is theory testing, where existing theory is challenged by finding anomalies that it cannot explain (Ridder 2017). The boundaries between theory development and theory testing are often blurred (Ridder 2017). In theory testing, the phenomenon is understood, and the research strategy focuses on testing whether the theory also holds under different circumstances, i.e., hypotheses can be formed and tested based on existing theory (Ridder 2017). In multiple case study research, theory testing uses theoretical replication with pattern matching or addressing rival explanations (Ridder 2017). Among our MMCS publications, no publication addresses rival explanations, and only a few apply theoretical replication and pattern matching, yet not for theory testing. A few publications claim to test propositions derived from an a priori theoretical framework (e.g., Schäfferling et al. 2011; Spiegel and Lazic 2010; Wagner and Ettrich-Schmitt 2009). However, these publications either do not state their replication logic (e.g., Spiegel and Lazic 2010; Wagner and Ettrich-Schmitt 2009) or use a literal replication (e.g., Schäfferling et al. 2011), both of which weaken the value of their theory testing.

5.1.3 MMCS research benefits

MMCSs are beneficial in multiple research situations and can be an avenue to address the frequent criticism of multiple case study research of being time-consuming and costly (Voss et al. 2002; Yin 2018).

Firstly, MMCSs can be used for time-critical topics where it is beneficial to publish results quickly and discuss them instead of conducting in-depth multiple case studies (e.g., COVID-19 (e.g., dos Santos Tavares et al. 2021) or emergent technology adoption (e.g., Bremser 2017)). Especially during COVID-19, research was published significantly faster due to special journal issues and accelerated review processes. Further, given fast technological advancements, results obtained through time-consuming in-depth multiple case studies run a higher risk of being obsolete and of less practical use by the time they are published.

Secondly, MMCSs can be used in research situations in which it is challenging to gather in-depth data from multiple sources of evidence for each case due to the limited availability or limited accessibility of sources of evidence. When researching novel phenomena (e.g., the adoption of new technologies in organisations), managers and decision-makers are usually interviewed as sources of evidence. However, in most organisations, only one (or very few) decision-makers have the ability to inform and should be interviewed, limiting the potential sources of evidence per case. These decision-makers often have limited availability for multiple in-depth interviews. Furthermore, the sources of evidence are often difficult to access, as professional organisations have regulations that prevent them from sharing documents with researchers.

Thirdly, MMCSs can be beneficial when the research framework is complex and requires many cases for validation (e.g., Baker and Niederman (2014) validate their rather complex a priori framework with 22 cases) or when previous research has led to contradictory results. In both situations, a higher breadth of cases is required, for example to research combinatorial effects (e.g., van de Weerd et al. 2016). However, conducting an in-depth multiple case study would take considerable time and effort. Therefore, MMCSs can be a mindful way to collect many cases while remaining time- and cost-efficient.

5.2 MMCS research rigour

Table 8 outlines two types of methodological steps for MMCSs. The first type comprises steps for which MMCSs should follow the methodological guidance for multiple case studies (e.g., clarifying the unit of analysis), while the second type is unique to MMCSs due to their characteristics. This section focuses on the latter, exploring MMCS characteristics, problems, validity threats, and proposed solutions.

Table 8 Comparison of methodological steps for multiple case studies and MMCS

The characteristic of MMCSs of having only one primary source of evidence per case prevents MMCSs from using source triangulation, which is often applied in multiple case study research (Stake 2013; Voss et al. 2002; Yin 2018). With only one source of evidence, researchers can fail to develop a sufficient set of operational measures and instead rely on subjective judgements, which threatens construct validity (Yin 2018). These threats to construct validity must be addressed throughout the MMCS research process. To do so, we propose using easily accessible supplementary data or other triangulation approaches to increase construct validity in an MMCS. For the other triangulation approaches, we see that the majority of publications use supplementary data (e.g., publicly available documents) as further sources of evidence, multiple investigators, multiple methods (e.g., quantitative and qualitative), multiple theories, or combinations of these (cf. Tables 5, 6 and 7). Having one or, in the best case, all of them reduces the risk of reporting spurious relationships and subjective judgements of the researchers, as a phenomenon is analysed from multiple perspectives. Besides the above-mentioned types of triangulation, we propose applying a new type of triangulation, which is specific to MMCSs and triangulates findings across similar cases combined into groups instead of across multiple sources per case. We propose that all reported findings have to be found in more than one case in a group of cases. This is also in line with previous methodological guidelines, which suggest that findings should only be reported if they have at least three confirmations (Stake 2013). To triangulate across multiple cases in one group, researchers have to identify multiple similar cases by applying a literal case replication logic to reinforce similar results. One should also apply a theoretical replication to compare different groups of literally replicated cases (i.e., searching for contrary results). Therefore, researchers have to justify their case replication logic. However, in our sample of MMCSs, the majority (32 out of 50, 64%) do not justify their replication logic, whereas the remaining publications use either literal replication (8 out of 50, 16%), theoretical replication (6 out of 50, 12%), or a combination (4 out of 50, 8%). We encourage researchers to use a combination of literal and theoretical replication because it allows triangulation across different groups of cases. An exemplary MMCS that uses this approach is the publication of van de Weerd et al. (2016), who use theoretical replication to find cases with different outcomes (e.g., adoption and non-adoption) and literal replication to find cases with similar characteristics and form groups of them.

Two further methodological steps, which are not exclusive to MMCSs but recommended for increasing construct validity, are creating a chain of evidence and letting key informants review a draft of the case study report. Only 36% (18 out of 50) of the analysed MMCS publications establish a chain of evidence. One reason for this lower usage may be that the majority (35 out of 50, 70%) of the publications analysed are conference proceedings. While we understand that these publications face space limitations, we note that no publication offers a supplementary appendix with in-depth insights. However, we encourage researchers to create a full chain of evidence with as much transparency as possible. Online repositories for supplementary appendices could be a valuable addition here. As opposed to a few years ago, such repositories are now widely available, and using them for this purpose could become good research practice for qualitative research. Interestingly, only 16% (8 out of 50) of the analysed MMCS publications let key informants review the draft of the case study report. As MMCSs only have one source of evidence per case, misinterpretations and subjective judgement by the researcher have a significantly higher impact on the results compared to multiple case study research. Therefore, MMCS researchers should let key informants review the case study report before publishing.

MMCSs have only a few (often just one) sources of evidence per case, so the risk of focusing on spurious relationships is higher, threatening internal validity (Dubé and Paré 2003). This threat to internal validity must be addressed in the data analysis phase. In the context of MMCSs, researchers may aggregate fewer data points to obtain a within-case overview. Therefore, having a clear perspective on the existing data points and rigorously applying the within-case analysis methodological steps (e.g., pattern matching) is even more critical. However, due to the limited depth of data in MMCSs, the within-case analysis must be combined with an analysis across groups of cases (to allow triangulation via multiple groups of cases). For MMCSs, we propose not doing the cross-case analysis on a per-case basis. Instead, we propose building groups of similar cases across which researchers can conduct an analysis across groups of cases. This solidifies internal validity in case study research (Eisenhardt 1989) by viewing and synthesising insights from multiple perspectives (Paré 2004; Yin 2018).

Another risk of MMCSs is the relatively high number of cases (i.e., we found up to 27 for exploratory MMCSs), which exceeds Eisenhardt’s (1989) recommendation of at most ten cases in multiple case study research. With more than ten in-depth cases, researchers struggle to manage the complexity and data volume, resulting in models with low generalisability and reduced external validity (Eisenhardt 1989). We propose two methodological steps to address this threat to external validity.

First, in line with Yin’s (2018) recommendation to use theory in single case studies, we suggest an a priori theoretical framework for MMCSs. Of the analysed MMCS publications, 84% (42 out of 50) use such a framework. An a priori theoretical framework has two advantages: it simplifies the research by pre-defining constructs and relationships, and it enables analytical techniques such as pattern matching. Second, instead of conducting the within-case and then the cross-case analysis on a per-case basis, we propose for MMCSs first conducting the within-case analysis, then forming groups of similar cases, and finally performing the cross-case analysis on these groups. To form the case groups, the replication logic (literal and theoretical) must be chosen carefully. A cross-group analysis (with at least two cases per group) can increase the generalisability of the results.
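As a rough illustration of this grouping step, the following sketch (again in Python, with hypothetical case identifiers and outcome attributes that are not drawn from our sample) separates cases by expected outcome (theoretical replication) and flags groups that do not yet contain the minimum of two cases required for a cross-group analysis.

```python
# Minimal sketch of the proposed grouping step.
# Assumption: the within-case analysis has already assigned each case an
# outcome attribute; contrasting outcomes reflect theoretical replication,
# and similar cases within one outcome reflect literal replication.
from collections import defaultdict

cases = [
    # (case_id, outcome) -- illustrative attributes only
    ("case_01", "adoption"),
    ("case_02", "adoption"),
    ("case_03", "non_adoption"),
    ("case_04", "non_adoption"),
    ("case_05", "partial_adoption"),
]

MIN_CASES_PER_GROUP = 2  # each group must allow within-group triangulation

groups = defaultdict(list)
for case_id, outcome in cases:
    groups[outcome].append(case_id)

analysable = {g: ids for g, ids in groups.items() if len(ids) >= MIN_CASES_PER_GROUP}
too_small = {g: ids for g, ids in groups.items() if len(ids) < MIN_CASES_PER_GROUP}

print("groups ready for cross-group analysis:", analysable)
print("groups needing additional cases:", too_small)
```

In this example, the "partial_adoption" group would need at least one further literally replicated case before it could be included in the cross-group analysis.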

To increase the reliability of an MMCS, researchers should create a case study database and a case study protocol, as in multiple case study research. To further strengthen reliability, researchers should document their MMCS design decisions in more detail. As outlined in the results section, the documentation of why design decisions were taken is often relatively short and should be more detailed. This call for better documentation is not exclusive to MMCSs, as Benbasat et al. (1987) and Dubé and Paré (2003) raised the same criticism for multiple case study research.

To ensure rigour in MMCSs, we suggest following the steps for multiple case study research. However, MMCSs have unique characteristics, such as the inability to triangulate sources at the per-case level, a higher risk of producing marginal cases, and difficulty in managing a high number of cases. Therefore, for some methodological steps (cf. Table 8), we propose MMCS-specific methodological options. First, MMCSs should include supplementary data per case (to increase construct validity). Second, instead of conducting a per-case cross-case analysis, we propose forming groups of similar cases and focusing on the cross-group analysis (i.e., each group must contain at least two cases). Third, researchers should justify their case replication logic, i.e., combine theoretical replication (to form different groups) with literal replication (to find similar cases within groups) to allow for this cross-group analysis.

6 Conclusion

Our publication contributes to case study research in the IS discipline and beyond by making four methodological contributions. First, we provide a conceptual definition of MMCSs and distinguish them from other research approaches. Second, we provide a contemporary collection of exemplary MMCS publications and their methodological choices. Third, we outline methodological guidelines for rigorous MMCS research and provide examples of good practice. Fourth, we identify research situations for which MMCSs can be used as a pragmatic and rigorous approach.

Our findings have three implications for research practice. First, we found that MMCSs can be descriptive, exploratory, or explanatory and can be considered a type of multiple case study. Our set of IS MMCS publications shows that this pragmatic approach is advantageous in three situations: (1) for time-sensitive topics, where rapid discussion of results, especially in the early stages of research, is beneficial; (2) when it is difficult to collect comprehensive data from multiple sources for each case, either because of limited availability of or limited accessibility to the data source; and (3) when the research setting is complex, many cases are needed to validate effects (e.g., combinatorial effects), or previous research has produced conflicting results. It is important, however, that the pragmatism of MMCSs is not misunderstood as a lack of methodological rigour.

Second, we have provided guidelines that researchers can follow to conduct MMCSs rigorously. As we observe an increasing number of MMCSs being published, we encourage their authors to clarify their methodological approach by referring to our analytical MMCS framework. Our analytical framework helps researchers to justify their approach and to distinguish it from approaches that lack methodological rigour.

Third, while compiling our collection of MMCS publications, we contacted several authors to clarify their case study research methodology. In many cases, the publications lacked critical details that would be needed to classify them as MMCSs or as marginal case studies. Many researchers responded that some details were omitted due to space limitations. While we understand these constraints, we suggest that researchers still report these details, for example, in online appendices hosted in research repositories.

Our paper has five limitations that could be addressed by future research. First, we focus exclusively on methodological guidelines for positivist multiple case study research. Therefore, we have not explicitly covered methodological approaches from other research paradigms.

Second, we aggregated methodological guidance on multiple case study research from the most relevant publications selected by citation count only. As a result, we did not capture evidence from publications with far fewer citations or from publications that are relevant only in specific niches. However, this design choice is justified, as our aim was to identify established and widely accepted methodological strategies for ensuring rigour in case study research.

Third, the literature reviews were keyword-based. Publications that fall within our understanding of MMCSs but do not contain the keywords used for the literature search could therefore not be identified. However, given the different search terms and complementary search approaches, our search should have captured the most relevant contributions.

Fourth, to analyse how rigour is ensured in MMCSs in the IS discipline, we selected MMCS publications from highly ranked IS journals and the proceedings of leading IS conferences, thereby excluding all other research outlets. As with the limitations arising from the keyword-based search, we may have omitted IS MMCS publications that refer to short or mini case studies. However, this restriction is justified, as it ensures that all selected publications have undergone a substantial peer review process and qualify as a reference base in IS.

Fifth, we coded our variables based on the characteristics explicitly stated in each manuscript (i.e., if the authors position their MMCS as exploratory, we coded it as exploratory). However, for some variables, researchers do not share a consistent understanding (e.g., the discussion of what constitutes exploratory research; cf. Sarker et al. 2018). We therefore accepted the risk that MMCS authors may have different understandings of the coded variables.

Looking ahead, our manuscript on positivist MMCSs provides researchers with guidance for an emerging type of case study research. Based on our study, we can identify promising areas for future research. Because we limited ourselves to the most established strategies for ensuring rigour, we invite authors to enrich our methodological guidelines with other, less commonly used steps. In addition, future research could compare the use of MMCSs in IS with their use in other disciplines in order to solidify our findings.