In the following sections, we introduce the design principles (DPs) we posit. The DPs rest on an examination of the relevant literature, as well as a review of existing decision-making platforms. At the time of writing, research on the design and implementation of DDD platforms was lacking. Searches on Google Scholar, the ACM DL, and IEEE Xplore revealed research on platforms (Farshidi et al., 2020; Broekhuizen et al., 2021; Gröger, 2018; McAfee & Brynjolfsson, 2017; Tura et al., 2018); design principles (Hermann et al., 2015); and DDDs (Bean & Davenport, 2019; Elgendy et al., 2021; Kar & Dwivedi, 2020; Mandinach, 2012; Provost & Fawcett, 2013). However, none of this research provides the insights needed to design and implement a DDD platform.
The DPs in this paper rest on three kernel theoretical foundations: datafication, platformization, and contemporary decision theory. According to Markus et al. (2002), a kernel theory underlies a design theory (Göbel & Cronholm, 2016). Further, Kuechler & Vaishnavi (2008: p. 489) added that kernel theories “frequently are theories from other fields that intend to explain or predict a phenomena of interest”.
Our argument for inscribing knowledge from datafication rests on the sheer amount of data available nowadays for analysis and decision support (Elgendy & Elragal, 2016; Elragal & Klischewski, 2017). Additionally, the European data economy continues to grow and is estimated to reach EUR 829B by 2025 (OPENDEI, 2021). The argument for selecting platformization is motivated by the fact that big data and analytics algorithms require building blocks in the form of a platform to support effective data acquisition, pre-processing, sharing, and analytics (OPENDEI, 2021). Other research also focuses on specific platform features, e.g., blockchain platforms (Farshidi et al., 2020). Lastly, the argument for selecting contemporary decision theory stems from developments in decision theory that have accumulated research not only on decision-makers but also on analytics and data, shaping a new form of decision theory (Elgendy, 2021). In the following subsections, we briefly describe the kernel theoretical foundations and related work that have informed our DP design.
3.1 Datafication
A recent article by Gröger (2021) confirmed that there is no AI without data; data preparation and quality are crucial to analytics and digitalization efforts, including DDDs. We datafy many events to make better decisions and become more efficient. Datafication is the process by which subjects and objects are transformed into digital data. Influenced by the upsurge of digitalization and enabled by big data, datafication intensifies as further dimensions of social life move into the digital space. Datafication has enabled the digital world we live in. Baskerville et al. (2020) explained that an information system is traditionally understood to reflect and represent physical reality. However, this conventional view is becoming less and less relevant as digital technologies increasingly produce and affect physical reality, a shift known as the ontological reversal.
We take the notions of datafication and digital-first to DDDM by introducing the DDDM platform, which aims to utilize digital data and integrate both worlds, the digital and the physical, to enhance the quality of decisions. Datafication is revolutionizing the world in several ways, whereby big datasets are analyzed using advanced analytics tools to turn data into meaningful insights that, in turn, support decision-makers. The DDDM platform brings datafication to the decision world. When applied successfully, datafication turns organizations into data-driven enterprises.
3.2 Platformization
Platforms that support decision-making are created to store, process, integrate, manage, and analyze datasets in order to enable a data-driven environment. The aim is to develop a system, a platform, that is useful to a larger number of organizations and stakeholders, including decision-makers. The proliferation of platforms, i.e., platformization, came as a consequence of the big data era. Big data need to be stored and maintained under one integrated roof to generate value for business, and decision-makers accordingly need constant access to them via a DDDM platform. Business models based on such platforms enjoy a strong competitive advantage (Sharma & Kumar, 2023). Additionally, platforms accelerate communication and collaboration among decision-makers within and across organizations in a way that leads to better-quality decisions and paves the road for innovation. Platforms enable cross-company decisions coordinated at the macro, meso, or micro levels. DDDM platforms act as the underpinning of an ecosystem of data and insight sharing. Tura et al. (2018) asserted that the design choices related to the various aspects of a platform are critical to ensuring value creation. We therefore use this as motivation to address the design principles needed to design and implement a DDD platform.
The analytics paradigm spans many different roles and responsibilities, such as those of the data engineer, data analyst, and data scientist, and the world suffers from a shortage of all of them. Businesses would save time and money if staff with basic skills could use analytics technologies that require little or no code writing. Such technology now exists in the form of low-code (and no-code) platforms. Indeed, it would be beneficial if a person who knows the business requirements and objectives had access to a DDDM platform where they could perform basic data science operations without prior coding or data engineering knowledge. Alsahref et al. (2022) highlighted that building ML models requires domain knowledge and advanced ML programming skills, yet skilled ML experts are hard to find in the labor market. This gap is what no-code and low-code ML platforms address. Low-code and no-code approaches help rapidly build ML models, automate data pipelines, and visualize the results. On a low-code platform, decision-makers with basic analytics skills can use existing building blocks, e.g., libraries, while retaining the flexibility to customize the required task. No-code platforms, in contrast, are mainly meant for decision-makers with expertise in a field or function but minimal to no prior software development knowledge; they let users drag and drop process objects to perform analytics tasks with minimal effort and considerable flexibility to customize. While low-code ML platforms can be used by different personas, such as data scientists and ML developers, no-code services, e.g., AutoML, can be used by decision-makers with solid business or domain knowledge. Di Sipio et al. (2020) have underlined the importance of emerging low-code cloud platforms and their vital role in digitalization.
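The building-block idea behind low-code analytics can be illustrated with a minimal sketch: pre-built, chainable operations that a decision-maker configures rather than programs. The block names (`load`, `clean`, `aggregate`) and the sales records below are hypothetical, for illustration only, and do not reflect any specific platform's API:

```python
# Hypothetical low-code building blocks: each function is a reusable,
# configurable operation that a decision-maker chains without writing
# custom analytics code.

def load(records):
    """Ingestion block: materialize the raw records."""
    return list(records)

def clean(rows):
    """Pre-processing block: drop incomplete records (missing values)."""
    return [r for r in rows if None not in r.values()]

def aggregate(rows, by, value):
    """Analytics block: average `value` per group `by`."""
    groups = {}
    for r in rows:
        groups.setdefault(r[by], []).append(r[value])
    return {k: sum(v) / len(v) for k, v in groups.items()}

# A "pipeline" assembled from the blocks, mirroring drag-and-drop
# composition on a no-code canvas (hypothetical data).
data = [
    {"region": "north", "sales": 120},
    {"region": "north", "sales": None},  # incomplete record, filtered out
    {"region": "south", "sales": 80},
]
result = aggregate(clean(load(data)), by="region", value="sales")
# result: {"north": 120.0, "south": 80.0}
```

The design point is that the decision-maker only selects and parameterizes the blocks (`by="region"`, `value="sales"`); the implementation of each block is hidden, which is what lowers the skill threshold.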
3.3 Contemporary Decision Theory
The study of the choices involved in making decisions is known as decision theory (Elgendy et al., 2021). However, over many decades of multidisciplinary research, decisions and the theories surrounding them have been subject to a high degree of complexity and debate (Hansson, 1994). Decisions are by no means easy. A decision problem is a scenario in which a decision-maker selects an action from a range of options that are impacted by uncontrollable events and have varying consequences with either positive or negative payoffs (Peterson, 2011). As a result, decision theory typically concentrates on means-ends rationality, or the results of decisions as assessed by preset criteria (Hansson, 2011). In addition, decision theories are typically classified as descriptive or normative. According to Peterson (2011), normative decision theory provides guidelines for what decision-makers should or must do. Hence, a normative theory of decision-making focuses on the criteria that must be met to arrive at a rational decision (Hansson, 1994). Descriptive decision theory, an empirical field, seeks to describe and forecast how decisions are actually made (Peterson, 2011). Empirical experiments demonstrating how people's conduct defied normative theories were the impetus for developing descriptive decision theories. The field focuses on the reasons behind people's thoughts and behaviors rather than attempting to change, sway, or elevate them. Additionally, descriptive decision theory assumes that real-world decisions can be rational or non-rational (Bell et al., 1988). Accordingly, normative and descriptive decision theories are distinct disciplines with the potential for interaction, or the lack thereof (Peterson, 2011). Motivated by the rise of big data and artificial intelligence (AI), research has attempted to extend the concepts of game theory, information theory, decision theory, systems theory, etc., by applying them to intelligent machines and agents.
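The structure of a decision problem described above, an action chosen from a range of options whose payoffs depend on uncontrollable events, together with the normative rule of maximizing expected utility, can be sketched minimally as follows. All action names, payoffs, and state probabilities are hypothetical, chosen only to illustrate the structure:

```python
# Sketch of a normative decision problem: actions have payoffs that
# depend on uncontrollable states of the world, and the normative rule
# selects the action with the highest expected utility.
# All numbers and names are hypothetical.

def expected_utility(payoffs, probabilities):
    """Sum of payoff * probability over the possible states."""
    return sum(p * pr for p, pr in zip(payoffs, probabilities))

# Uncontrollable events: demand turns out low (0.6) or high (0.4).
state_probabilities = [0.6, 0.4]

# Payoffs per action under each state, positive or negative.
actions = {
    "expand": [-20, 100],  # loses if demand is low, pays off if high
    "hold":   [10, 30],
    "divest": [15, 5],
}

# The normative rule: choose the action maximizing expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a], state_probabilities))
# expand: -20*0.6 + 100*0.4 = 28; hold: 18; divest: 11 -> best = "expand"
```

Descriptive decision theory, by contrast, studies why real decision-makers often deviate from this maximizing rule rather than prescribing it.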
The emphasis has been on how machines make decisions and how to train them. According to Simon's (1977) theory of AI, information processing algorithms and human thinking are comparable: both look for patterns in data, memorize them, and then use them to draw conclusions or extrapolate. As such, several programs can mimic or even outperform human judgment or problem-solving skills (Frantz, 2003). However, more research is still needed to determine the degree of cooperation between the two and how it affects decision-making.
Furthermore, although traditional decision theories rely on a numerical depiction of the decision process, real-world situations may call for going beyond precise numerical terms (Grabos, 2004). While progressively developing, the instruments of conventional decision theory have not proven entirely sufficient to support attempts to automate AI decision-making, particularly in more complex and realistic scenarios involving unexpected preferences or decisions, or in scenarios where the underlying assumptions are subject to modification (Doyle & Thomason, 1999). This has spurred research on several AI frameworks and functions and directed attention toward qualitative decision theories, or qualitative theoretical foundations of decision-making (Grabos, 2004). By creating qualitative and hybrid representations and methods that enhance and supplement quantitative decision theory's capacity to handle the entire spectrum of decision-making activities, qualitative decision theories seek to enable automation (Doyle & Thomason, 1999).