Review

Configurable Intelligent Design Based on Hierarchical Imitation Models

1 Department of Mathematics, Ariel University, Ariel 40700, Israel
2 Independent Researcher, Haifa 3551909, Israel
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2023, 13(13), 7602; https://doi.org/10.3390/app13137602
Submission received: 10 May 2023 / Revised: 12 June 2023 / Accepted: 20 June 2023 / Published: 27 June 2023

Abstract:
The deterministic AI system under review is an alternative to neural-network-based machine learning. In its application fields (science, technology, engineering, and business), the implementation of rule-based AI systems brings benefits such as accuracy and correctness of design, and personalization of both the process and the results. The algorithmic AI suite is based on design and logical imitation models alone, without creating or using Big Data and knowledge bases. The excessive configuration complexity and high design resource requirements inherent in deterministic systems are balanced by a special methodology. A hierarchical modeling approach produces a quasi-dynamic network effect, symmetric to the analogous effect in neural networks. System performance is improved by deterministic reference training capable of modifying imitation models in online interaction with users. Such training, which serves as an alternative to neural machine learning, is implemented with partially empirical algorithms based on experimental design methods and system–user dialogues, which are used to build reference model libraries (portfolios). Estimated resources can be saved by using modified optimization techniques and by controlling the computational complexity of the algorithms. Since the proposed system in the considered layout has no analogues, and the relevant research and practical knowledge are extremely limited, special methods are required to implement this project. A gradual, phased implementation process involves the step-by-step formation of sets of algorithms with verification tests at each stage. Each test is performed iteratively and includes test, tweak, and modification cycles. Final testing should lead to the development of an AI algorithm package, including related methodological and working papers.

1. Current AI: Overview of Sources

This review is intended to substantiate the possible effective application of deterministic AI systems as an alternative to neural networks. The principles of structural and functional formation of full-fledged intelligent systems based on relatively simple model and algorithmic approaches, with significantly limited use of big data and knowledge bases, are considered. For this purpose, a consistent review of sources is carried out, starting with the most general epistemological aspects and ending with options for practical application. Consideration of each aspect of the problem ends with a brief summary and a justification for moving on to the next aspect. The result of the analytical review should be a set of foundations for designing special deterministic AI systems, including the principles and architecture of the project, design and methodology, general approaches to modeling, and the formation of algorithms. The structure and logic of the overview (including section links) are presented in Table 1.

1.1. Anthropomorphism

Historically, artificial intelligence has been contemplated in anthropomorphic terms [1,2], yet the desire to make algorithms human-like prevents an adequate comprehension of ethical problems related to emerging technologies.
An artificial neural network (ANN) is an anthropomorphic computation system designed to simulate information analysis and processing by the human brain. These systems form a foundation for AI and Machine Learning (ML) technologies. A research study [3] deals with basic knowledge and understanding in artificial neural networks. ANNs are very popular in ML studies, resulting in the rapid development of AI and ML systems for many tasks, such as text processing, speech recognition, and image processing. They are also important tools for finding patterns that are too complicated or numerous for a human developer to extract and encode for a machine.
There is a difference between a normative and a conceptual approach to AI issues [4]. It is important to formalize psychological and neurological terms in a quantitative language and to explain their role in intellectual behavior. The importance of AI for understanding the human brain is, however, limited by the fact that AI and the brain do not have isomorphic structures. Notwithstanding the above, there is still potential for collaboration between AI and neurobiology. AI has benefited, and apparently will continue to benefit, from neuroscience, yet compliance with biological plausibility cannot be imposed; for AI developers, biological plausibility is a roadmap rather than a mandatory requirement. Understanding AI through human mental patterns can reduce it to a sort of limited copy of human intelligence.
When using AI service agents, clients expect better performance from more anthropomorphic agents. While humans prefer very human-like AI service agents [5], they are more threatened when dealing with more anthropomorphic agents, and, therefore, prefer less human-like AI agents. This effect, however, is meaningful only in social scenarios, and offers a new understanding of previous inconsistent findings regarding the effect of anthropomorphic design on clients’ willingness to use AI service agents.
While dialogue AI agents (ICAs) are becoming an increasingly popular service tool for various enterprises, their successful implementation requires a good understanding of ICA acceptance factors. A study [6] proposes a collective model of ICA acceptance and usage, where acceptance mainly depends on the benefits of use, which, in turn, depend on agent and user characteristics. As emphasized in this study, the proposed model is context dependent because the relevant factors depend on usage parameters. Certain strategic implications for business are also mentioned, such as service design, personalization, and customer care management.
Artificial neural networks are operated by means of machine learning (ML). One such approach was proposed in [7], which combined imitation learning (IL) with several types of reinforcement learning (RL). The study examined the performance of a human teacher, who trains the agent to deal with environmental factors, and of an agent learner that pursues a specific goal.
Another study [8] examined the integration of algorithms into the social fabric of an organization, and the interplay between humans and machines in a human-in-the-loop configuration. Over time, humans and algorithms are configured and reconfigured in multiple ways, while the organization addresses algorithmic analysis. These new configurations call for new organizational roles and the redistribution of organizational knowledge, together with efforts to improve the algorithms and the data collection architecture. This study supports the strategic importance of a human-in-the-loop pattern in organizational efforts to ensure that the algorithm’s performance meets the organization’s requirements and is responsive to environmental changes.
The concept of AI relies on four elements: data, information, knowledge, and intelligence itself [9]. Data are raw facts, while information assigns meaning to them. Knowledge is an interpretation of information, and intelligence applies relevant knowledge to solve problems. It involves perception, judgment, rules, and expertise, leading to new knowledge. Developers use various knowledge types to create patterns in specific areas when developing intelligent software systems.
Another study [10] offers a philosophical understanding of the special nature and evolution of computer modeling. Computer knowledge in artificial intelligence (AI) is an object of modeling. This study deals with the correlation between the private, subjectified, and personalized knowledge of specific individuals and the non-personalized, objectified knowledge in computer modeling. Analysis of knowledge representation in terms of computer modeling shows the importance of the results obtained within the current technological possibilities of this type of modeling (i.e., the computer representation of knowledge). On the other hand, the results and performance of computer modeling provide new insights into philosophical and general scientific problems, such as the knowledge representation problem, and encourage a search for new solutions.
The status of the research methodology employed by studies on the application of AI techniques to problem solving in engineering design, analysis, and manufacturing is poor. There may be many reasons for this, including an unfortunate legacy from AI, poor educational systems, and researchers’ sloppiness. Understanding this status is a prerequisite for improvement. The study of research methodology can promote such understanding, but, most importantly, it can assist in improving the situation. A study [11] deals with the general methodological foundations of studying Artificial Intelligence for Engineering Design, Analysis and Manufacturing (AIEDAM). The urgency of this problem becomes more apparent in view of the great number of articles dealing with the myths, legends, and misconceptions related to certain AI issues (such as expert systems, fuzzy logic, and ML).
Visual recognition systems are now an integral part of modern computer vision. While interactive and embodied visual AI is not far from visual recognition [12], the crucial question remains as to the degree to which we can generalize models trained in simulation to real life. Creating such a generalizable ecosystem for studying simulated and real embodied AI remains challenging and costly, and entails the resolution of several issues including:
(a) The inherently interactive nature of the problem;
(b) The need to ensure that the simulated environment is closely aligned with the real world;
(c) Creating conditions that allow replication and repeatability of experiments.
RoboTHOR offers researchers a framework of simulated environments that can be used to address and resolve the challenge of achieving similar performance levels in real environments.
AI is increasingly being used in real-time data distribution (Big Data) to enhance education by enabling personalized, flexible, and inclusive learning experiences [13]. Governments, educational sectors, and organizations are exploring the implementation of AI tools and platforms to improve the efficiency and effectiveness of monitoring the educational system compared to current methods. AI is defined as the ability of digital computers or robots to perform tasks typically associated with humans. ML and data analysis techniques are receiving significant attention, as they allow for the acquisition, structuring, and analysis of large-scale datasets, enabling the identification of patterns, trends, associations, and predictions. An intelligent system is one that “learns” from data, using it to make informed decisions in specific cases, provided the data accurately represent the objects they describe.
AI clearly has a diverse, encompassing effect on education [14] on two important levels. First, AI-based innovations are being developed to optimize and enhance existing educational systems. AI developments for the field of education range, for example, from personalized systems that use virtual assistants, to systems that track student or teacher activities. Despite the promise of these innovations, their risks for users’ privacy and well-being are not insignificant. Second, AI has created broad social changes that require reforms of traditional educational systems. AI capabilities underscore the need for educational systems to train everyday users to understand the systems that are being developed. AI also highlights the need for education in arts and humanities, and the need to train critical thinkers to be able to ask questions and evaluate answers.
AI is a well-established scientific field with remarkable achievements over the years. Alongside this, the popularity of interactive computer games and virtual environments has grown, attracting millions of users. The study [15] provides valuable insights into AI agents in virtual worlds. Views on achieving human-level AI vary, with differing opinions and challenges. Developing human-level AI is complex, as it requires integrating various fundamental human capabilities. While some researchers believe it will eventually be accomplished with new approaches, even if it proves to be an insurmountable challenge, studying human-level AI enhances our understanding of human intelligence and has positive impacts on various scientific disciplines.
Inference. Artificial intelligence is mainly perceived as an anthropomorphic phenomenon in terms of simulation of the human brain and its functions. Philosophical, methodological, and social aspects of artificial intelligence, comparative analysis of human and non-human consciousness, cognitive and psychological problems, etc., are treated accordingly. In the meantime, the anthropomorphic nature of existing systems is largely overstated, and the benefits of this approach raise reasonable doubts. When combined, such features of anthropomorphic AI as sociability and independence can both attract and discourage users. Independence creates additional social and legal tension, which is attributed, inter alia, to uncertain moral and legal evaluations of AI’s potential implications and to the resulting liability. Better performance of artificial intelligence should be ensured by incorporating realistic considerations in solving specific problems, rather than by the desire to achieve “human likeness”. This being said, a reasonable balance between independence and sociability should always be sought.

1.2. Transparency

The field of information systems (IS) is currently undergoing a significant transition from rule-based decision support systems based on deterministic algorithms [16] to the use of probabilistic algorithms (e.g., deep learning). Based on patterns identified in data, probabilistic algorithms draw conclusions and develop predictions that apply to other data, under some uncertainty. Despite their enormous potential, research offers numerous examples of how probabilistic algorithms may have systemic biases integrated in them, as a result of which their implementation leads to systemic discrimination. In one example, decision support systems for credit loan applications disproportionately denied applications from women, and from individuals living in certain areas or from a specific ethnic background.
Transparency is a significant concern in AI systems. While these systems enhance automated decision-making capabilities, they often lack transparency in providing explanations for their recommendations or predictions. ML processes over large datasets drive these systems, but the underlying reasoning remains hidden. Users also struggle to access and understand potential biases embedded in algorithms or obscured in training data. A study by [17] proposes three rules for developing meaningful explanations for non-transparent “black box” AI/ML systems. These rules involve using logic, statistics, and causal interpretations, inferring local explanations for specific cases by auditing the black box near the target instance, and generalizing multiple local explanations into simple global ones. Their approach allows for diverse data sources, languages, learning problems, and auditing methods to be employed when generating explanations.
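As an illustration of the local-explanation logic summarized above, the following sketch (not taken from [17]; the black-box scoring function, sampling radius, and feature dimensions are assumptions) audits a black box in the neighborhood of a target instance by sampling perturbations and fitting a linear surrogate whose weights act as a local, human-readable explanation.

```python
# Illustrative sketch of a local explanation for an opaque model:
# perturb the input near a target instance, record the black box's outputs,
# and fit a linear surrogate whose coefficients serve as feature importances.
import numpy as np

def black_box(x):
    # Stand-in for an opaque model: an arbitrary nonlinear scoring function.
    return 1.0 / (1.0 + np.exp(-(2.0 * x[0] - 0.5 * x[1] ** 2)))

def local_explanation(instance, n_samples=500, radius=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # Audit the black box in a neighbourhood of the target instance.
    perturbations = instance + rng.normal(0.0, radius, size=(n_samples, instance.size))
    scores = np.array([black_box(p) for p in perturbations])
    # Fit a linear surrogate: scores ~ X @ w + b.
    X = np.hstack([perturbations, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return coef[:-1], coef[-1]   # per-feature local weights and offset

if __name__ == "__main__":
    weights, bias = local_explanation(np.array([0.2, 1.0]))
    print("local feature weights:", weights, "bias:", bias)
```

Generalizing many such local surrogates into a simple global description corresponds to the third rule mentioned above.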
Another article [18] offers a brief analytical review of the current situation in the explainability of AI, in the context of recent advances in ML and deep learning. AI and ML have demonstrated their potential to revolutionize industries, public services, and society, achieving or even surpassing human levels of performance in terms of accuracy for a range of problems, such as image and speech recognition and language translation. However, their most successful offering in terms of accuracy—deep learning (DL)—is often characterized as being “black box” and non-transparent. Using non-transparent “black box” models is especially problematic in highly sensitive areas, such as healthcare and other applications related to human life, rights, finances, and privacy. Since the applications of advanced AI and ML, including DL, are now growing rapidly, encompassing the digital health, legal, transport, finance, and defense sectors, the importance of transparency and explainability is increasingly recognized.
Technological innovations come with risks that require comprehensive management [19]. While AI-based advances like health IT, robotics, chatbots, and social media offer benefits, users often lack an understanding of how AI systems make decisions, especially deep learning neural networks. This lack of understanding undermines trust, hides biases, and leads to discrimination. Explainable AI (XAI) systems aim to provide visibility into the decision-making process, offering explanations that enhance trust and impact user actions. XAI enables developers to improve AI models and allows for the study of user behaviors related to privacy and personal information usage. Explanations can be tailored to meet the needs of developers, users, and stakeholders, varying in transparency and depth.
Explainable AI has been an active field since the 1980s, with early applications in expert systems. While recent advancements in neural networks have led to significant success in various domains, their lack of explainability remains a challenge. AI systems need to be able to provide human-understandable explanations for their decisions, especially in domains like biology, chemistry, medicine, and drug design where data can be represented as graphs [20]. Deep Tensor networks have been used to identify influential factors and construct interaction paths in these domains, incorporating knowledge from medical research. By annotating the interaction paths with supporting evidence from specific medical texts, the classification results can be explained to human users. Explanation is considered critical in all areas of AI, and approaches like combining logic-based systems with stochastic systems or employing transfer learning can enhance the interpretability and transferability of AI models.
The work [21] presents a preliminary analysis of the global desirability of different forms of openness in AI development, such as open source, science, data, security practices, opportunities, and goals. Increased short-term openness is generally seen as socially beneficial. However, the strategic implications in the medium and long term are complex. Assessing long-term impacts depends on whether the goal is to benefit the current generation or promote the aggregate welfare of future generations. Openness about security measures and goals is acceptable, but other forms of openness (open source, science, and opportunity) may lead to increased competition, potentially compromising safety and efficiency. The global desirability of openness in AI development involves intricate trade-offs. One concern is that openness can worsen race dynamics, as competitors may take higher existential risks in order to gain an advantage in developing advanced AI. Partial openness, allowing outsiders to contribute, is considered desirable.
The work [22] builds a neural network to classify X-ray images of a hand as fractured or intact. When the network cannot give a confident answer, the case can be referred to a doctor for review. Such a network can assist doctors who have to check large numbers of X-ray images.
Inference. Transparency is a key feature of AI systems’ operation, as the lack of transparency aggravates the relevant social, legal, and technological challenges. These challenges are more inherent in highly autonomous systems (e.g., neural networks, ML) that operate in a black box mode. Therefore, developing explainable AI (XAI) is an urgent problem. Such systems contain options that comprise chains of actions, judgements, operations, interim results, etc., and enable the transition from a black box to a glass box. An alternative approach to transparency lies in the use of inherently transparent deterministic systems (based on rules, knowledge, etc.).

1.3. Determinacy

Rule-based systems are widely used to develop AI applications and systems in various domains. This is the most successful approach to artificial intelligence that paves the way to a relatively easy development of complex large-scale applications [23]. Rule-based systems are a simplified form of artificial intelligence. This technology is based on facts and rules, yet is time consuming and difficult to implement. It ensures great performance in a specific domain, which is, however, limited by human-coded information. Rule-based systems help software developers and machines to tackle problems with multiple pattern nodes and solve tasks at a higher level of abstraction using human-like thinking and reasoning capabilities. Even though rule-based systems have several limitations, there is no doubt that with ever-evolving technology they will also evolve to be more flexible, effective, and suitable.
Software Variability Modeling (SVM) is a key challenge in software product lines (SPL), especially in configurable software product lines (CSPL) that require strict SVM; CSPL applications include dynamic SPLs (DSPL), service-oriented SPLs, and autonomous or comprehensive systems [24]. Knowledge-based configuration is an accepted variability modeling method that aims to enable automatic adjustment of physical products. Conceptual clarity of modeling (e.g., availability of both taxonomic and compositional relations) can be useful. A conceptual basis that ensures multiple representations (e.g., graphical and textual) is also important. Application of these ideas and the expertise embodied in them might promote the development of modeling support for software product lines.
Methods for the analysis and synthesis of integrated knowledge representations (KR) may use atomic models of knowledge representation, including both an elementary data structure and rules of knowledge processing stored as machine-readable data and/or program instructions. These methods are implemented in proprietary software products [25]. One or more knowledge processing rules are used to analyze a complex input representation, decomposing its concepts and/or conceptual relationships into elementary concepts and/or relationships for inclusion in the elementary data structure. One or more knowledge processing rules may also be applied to synthesize an output representation from the stored elementary data structures according to contextual information.
An expert system is a computer program that provides expert-level solutions to important problems and is heuristic, transparent, and flexible [26]. Rule-based systems demonstrate the state of building expert systems and illustrate the main issues. In a rule-based system, much of the knowledge is represented as rules, that is, as conditional sentences relating statements of facts with one another. An expert system requires a knowledge base and an inference procedure. The knowledge base is a set of rules and facts covering specialized knowledge of the subject, as well as some general knowledge about the relevant domain. The inference procedure is a large set of functions that control the interaction and update the current state of knowledge about the case at hand.
The problem of choosing a knowledge representation model and processing techniques can be formulated as follows: how to represent the knowledge structure based on sources such as professional literature and the knowledge of highly skilled professionals (i.e., how to choose a knowledge representation model), so that its automated processing can solve a problem in the specific area and achieve the desired results. Software developers often try to describe complex knowledge domains that deal with complex informative tasks using steady regular patterns that are user-friendly yet too primitive to represent the diversity of semantic aspects in each specific domain. An article [27] examines the set of requirements for a knowledge representation model in smart systems, offers extended semantic networks, and demonstrates that the proposed model meets the above set of requirements.
Ontologies have gained popularity and recognition in the semantic web due to their widespread use in Internet applications. Ontologies are often regarded as an excellent source of semantics and interoperability in all artificial intelligence systems. The exponential growth of unstructured data on the Internet has made automatic ontology construction from unstructured text the most prominent area of research. Methodologies [28] are expected to be developed in many fields (ML, text analysis, knowledge and reasoning representation, information retrieval, and natural language processing) to automate, to some degree, the process of deriving ontologies from unstructured text.
Knowledge-based design is an advanced form of automated design that enables informed decision-making and a fully digital representation of the product lifecycle [29]. It involves applying rules to inter-parametric relationships in order to configure products at a lower, parametric level. By utilizing indicators, rule-based configurations can construct finished products. Tools such as knowledge-based process models and product catalogs support this process. However, effective methods for routine work with computer knowledge are still lacking. Industrial applications of knowledge-based design offer various benefits.
Inductive empirical systems simulate the human brain and operate in a “black box” mode, while deductive analytical systems use transparent formalized models and algorithms to represent knowledge. Both systems solve intellectual problems, but the solutions may lead to the development of alternative artificial intelligence systems [30]. The authors propose principles for AI system development, including the exclusion of black box technologies, the use of data conversion systems, and direct mathematical modeling. The system consists of a simulator module, an ontological module that extracts structured functional links, and interfaces for generating custom knowledge representations. This approach forms the methodological basis of an AI e-learning platform.
Inference. In contrast to widely known model-based machine-learning neural networks, which handle complex redundant data and are easy to implement and use due to their highly autonomous nature, existing rule-based systems are methodologically simpler and more transparent, but are time-consuming and hard to implement. That being said, they are capable of addressing problems with multiple pattern nodes by using databases, knowledge bases, logical output models, etc., and of treating such problems at a higher abstraction level by drawing on human intellect through user interaction. Despite their limitations, these systems are rapidly becoming more flexible and efficient.

1.4. Configurability

Mass customization involves providing individually designed products and services through flexible and integrated processes [31]. It is seen as a competitive strategy adopted by many companies. This paper examines scenario-based rules and methods for supporting future-oriented system architectures in mass customization, initially developed for medical visualization equipment. These architectures must accommodate space variability (creating unique products for specific clients) and time variability (meeting new requirements). Two key considerations are predicting future client needs and ensuring efficient response to changes. Knowledge-based configuration systems, utilizing declarative knowledge representation and smart search methods, are widely used by major vendors for solving configuration problems. These systems offer benefits in terms of adaptable configuration rules and efficient problem-solving algorithms.
The development of big data and cyberphysical systems has increased the demand for product design. Digital product design [32] incorporates advanced technologies like geometric modeling, virtual reality, and multi-object optimization. Intelligent design methods include analyzing customer requirements, product family design, modular design, and design variations. Trends in intelligent user products involve developing smart products based on big data and specialized design tools. Intelligent custom design enables dynamic response resolution and intelligent customization of user requirements. Future customized equipment design relies on a fusion-mapping model and swarm intelligence to enhance design intelligence. Knowledge-based intelligent design utilizes feedback features and scene-based design. With cloud databases and event-condition-action rules, intelligent design becomes more requirement-centered, knowledge-diversified, and efficient.
Parametric design is essentially generative design [33] that can be created using a computer and mathematical interplay. Rather than changing the shape directly, a process-oriented method is used to modify the parameters of the components that comprise the shape, resulting in a new look. As the consequences of manipulations are immediately evident, the development of product families and new product options can be rapid. With the help of parametric design, customizing products and increasing customer satisfaction become more efficient, as several successful projects in the automotive industry, aircraft manufacturing, architecture, jewellery manufacturing, and other industries have already shown. Product customization is important for higher customer satisfaction. For example, Deloitte discovered that every fifth customer is ready to pay 20% more for a unique personalized product. That is why customization has become a business strategy in many fields, from footwear to the automotive industry. Parametric or generative design (also known as algorithmic design) has great future potential, because it enables designers to offer unique personalized products to customers. The results are algorithm-generated. In contrast to conventional design methods, all design processes are controllable, with the consequences of each manipulation step immediately evident. Such a new approach accelerates the release of multiple product versions.
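A minimal sketch of this parametric idea follows; the product, its parameters, and the volume formula are invented purely for illustration. Variants are produced by changing component parameters rather than by editing the geometry itself.

```python
# Toy parametric design sketch: product variants are generated purely by
# changing component parameters (product, dimensions, and formula are invented).
import math
from dataclasses import dataclass

@dataclass
class RingDesign:
    inner_diameter_mm: float
    band_width_mm: float
    stone_count: int

    def band_volume_mm3(self, thickness_mm=1.5):
        # Simplified band volume: circumference * width * thickness.
        return math.pi * self.inner_diameter_mm * self.band_width_mm * thickness_mm

def generate_variants(base, widths, stone_counts):
    # The shape itself is never edited; only its parameters vary.
    return [RingDesign(base.inner_diameter_mm, w, s)
            for w in widths for s in stone_counts]

base = RingDesign(inner_diameter_mm=17.0, band_width_mm=4.0, stone_count=1)
for variant in generate_variants(base, widths=[3.0, 4.0, 6.0], stone_counts=[1, 3]):
    print(variant, "band volume:", round(variant.band_volume_mm3(), 1))
```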
In the era of Big Data, mass customization (MC) systems are faced with the need to integrate mass customization and the social IoT in order to effectively connect customers with enterprises. This is necessary not only to allow customers to participate in the MC process from beginning to end, but also to give enterprises the ability to control all communications within the information system. The paper [34] describes the architecture of the proposed system from an organizational and technological point of view, and discusses the key problems faced by the combined mass customization and social IoT system. These include the following: (1) the system can make convenient information queries and clearly understand the user’s intentions; (2) the system can anticipate the changing relationship between different technical fields and help the scientific staff of the enterprise to find technical knowledge; and (3) the system can combine deep learning technology and digital redundancy technology to better maintain system health. Additional issues include data management, knowledge discovery, and human–computer interaction, such as data quality management, small data samples, lack of dynamic learning, time wasting, and task scheduling.
Knowledge-based configuration systems are industrially available [35]. Major vendors of configuration systems rely on a certain form of declarative knowledge representation and smart search methods to solve the major configuration problem, due to the inherent benefits of this technology. On the one hand, changes in the business logic (configuration rules) might be easier due to the declarative and modular nature of the knowledge base. On the other hand, highly optimized domain-agnostic problem-solving algorithms are available to build acceptable configurations. Development is still in progress due to the ever-emerging challenges of smart system configuration in our increasingly automated and interconnected world: web configurators are becoming available for large, diverse groups of users, representations of custom products require that companies be integrated into the delivery chain, and the configuration and reconfiguration of services is becoming an increasingly serious problem.
Mass customization is designed to meet individual requirements and is therefore a method for attracting and retaining customers, which is a key issue in the design industry [36]. The development of computer-aided design opens up new opportunities for high-speed custom-made product planning, which is equivalent to mass production. Automated design is based on the reuse of product and process knowledge. Ontologies have shown themselves to be an acceptable, highly aggregated representation of knowledge in the field of engineering design. While knowledge about products and processes at other stages of the life cycle is captured by different approaches, the product planning process and the product tailoring process are missing, leading to interruptions or additional iterations in computer-aided design. Therefore, a suitable representation of knowledge adapted for automation is still lacking.
Many companies offer websites that enable customers to design their own customized products, which the manufacturer then produces [37]. The economic value of products developed by customers using mass customization (MC) toolkits comes from two factors: a close fit to individual preferences and low design effort. The authors suggest a third factor, namely the creator’s involvement in the product design. Through their research, the authors obtained experimental evidence that the “I designed it myself” effect creates economic value for the consumer. Regardless of other factors, self-designed products generate a significantly higher willingness to pay. This effect is mediated by the sense of accomplishment elicited by the outcome of the process, as well as by the perceived contribution of the individual to the self-design process. These findings are important for MC companies; it is not enough to simply design MC toolkits in a way that maximizes customization and minimizes design effort. To fully realize the value of MC, the toolkits should also evoke the feeling of “I designed it myself”.
Current research on configurable product design focuses on how to transform a predefined set of components into a valid set of product structures. As configurable products increase in scale and complexity, there is an increasing interdependence between customer requirements and product structure. As a result, existing product structures cannot meet individual customer requirements, so there is a need for product variants. The purpose of the work [38] is to build a bridge between customer requirements and product structure in order to ensure rapid planning according to demand. First, multi-hierarchical design models of the configured product are created, including a customer requirements model, a technical requirements model, and a product structure model. Then, the mapping between the multi-hierarchical models is solved using a fuzzy analytic hierarchy process (FAHP) and a multi-level matching algorithm. Finally, the optimal structure based on the customer’s demands is obtained through calculations of the Euclidean distance and similarity to other cases.
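The final matching step can be illustrated with a small sketch; the attribute names and numeric values below are hypothetical, and the FAHP weighting of [38] is not reproduced. The candidate structure closest to the customer requirement vector, measured by Euclidean distance and a simple distance-based similarity score, is selected.

```python
# Hypothetical sketch of distance/similarity-based selection of a product structure.
import math

candidates = {
    "structure_A": [0.8, 0.3, 0.5],   # e.g., capacity, cost level, delivery time
    "structure_B": [0.6, 0.6, 0.4],
    "structure_C": [0.9, 0.2, 0.7],
}
customer_requirements = [0.85, 0.25, 0.6]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similarity(a, b):
    # Simple similarity derived from distance; 1.0 means identical.
    return 1.0 / (1.0 + euclidean(a, b))

best = max(candidates, key=lambda k: similarity(candidates[k], customer_requirements))
for name, vec in candidates.items():
    print(name, "similarity:", round(similarity(vec, customer_requirements), 3))
print("selected structure:", best)
```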
Companies need to adapt to changing market trends by providing customers with a diverse range of product and service offerings, covering various combinations of the two [39]. In order to achieve this, a shared knowledge model is proposed for customizing commercial offerings. This involves evaluating product and service configurations and developing a comprehensive model that encompasses the entire range of offerings. A knowledge-based model is then defined, demonstrating its relevance to use cases in the secondary and tertiary sectors.
Mass customization is a business strategy that aims to satisfy individual customers’ needs with near-mass-production efficiency. Mass customization information systems in business provide original and innovative research on IT systems for mass customization. It is a wide-ranging reference collection of chapters describing the solutions, tools, and concepts needed for successful realization of these systems. A knowledge-based configuration includes knowledge representation descriptions to capture sophisticated product models and reasoning methods in order to provide smart interactive interplay with the user. Dedicated research that provides a better understanding of the knowledge-based configuration offers a toolkit [40,41] that extends the boundaries available to configurators. State-of-the-art approaches to gaining configuration knowledge entail testing, adjustment, redundancy detection, and conflict management. Business processes, mass customized markets, product modeling, and supply chain management, required to produce customized products, are being explored. The commercial benefits of knowledge-based technologies can be applied to various business sectors, from services to industrial machines.
Some recent web application offerings are focused on providing advanced search services through virtual stores. In this context, [42] proposes an advanced type of software application that simulates a seller–buyer dialogue to dynamically customize a product to meet specific consumer needs. The proposed approach is based on a shared knowledge model that uses artificial intelligence and knowledge-based methods to simulate the customization process.
The trend of product diversification and personalization offers companies the opportunity to increase income through higher prices and market share. However, there is a risk of the “diversity paradox”, where overwhelming customization options lead to decreased sales. A sales configurator is an application aimed at helping customers choose the solution best suited to their needs. Research on the characteristics of effective configurators is limited, and proposed solutions lack empirical study and psychometric measurement. The study [43] identified key capabilities that sales configurators need in order to avoid the diversity paradox, including target navigation, fluid navigation, simple comparison, benefit and cost information, and user-friendly product descriptions. This framework can serve as a diagnostic and benchmarking tool for companies looking to assess and improve their sales configurators.
Inference. Configurable deterministic AI systems (functioning as a type of rule-based system) have emerged in response to the demand for mass customization, which requires a high flexibility and process integration. Such systems are widely used in industrial and commercial design and have proven highly effective due to, inter alia, their use of the intellectual potential of designers and other experts and professionals. It should be noted that the potential application of configurable systems in sectors such as science and education is unfairly ignored. However, such systems can be efficiently applied, for example, in the mass development and production of training content, experiment planning and verification of results, etc.

1.5. Modeling and Imitating

The paper [44] examines the role of artificial intelligence in modeling and simulation (M&S). M&S involves some fundamental and highly sophisticated problems, which can be solved by using AI methods and concepts. Several key problem issues (e.g., verification and validation, reuse and composability, distributed modeling and system integration, service-oriented architecture and the semantic web, ontologies, limited natural language, and genetic algorithms) have been explored. A special-purpose methodology is proposed to enable agent involvement in modeling and simulation based on their activity monitoring. The feasibility of endomorphic modeling, an extended capability that an agent must possess if it is to imitate all human abilities of M&S, has also been considered. Since this ability implies an infinite regress, with models indefinitely containing models of themselves, it is a limited capability. This being said, it can provide a critical understanding of the competitive coevolutionary behavior of humans or higher primates, in order to launch more intensive studies of model nesting depth. This is the extent to which an endomorphic agent is capable of mobilizing its mental resources, as necessary to create and use models of its own “brain” and of the “brains” of other agents. The mystery of such endomorphic agents is hindering further studies in AI, M&S, and relevant domains, such as cognitive science and philosophy.
With the rapid advancement of AI technologies, the field of modeling and simulation has also seen significant progress, particularly in the area of multi-agent modeling and simulation (MAMS). The paper [45] provides an overview of MAMS, including its concept, technical advantages, research steps, and current status. It discusses hybrid modeling and simulation that combines multi-agent and system dynamics, modeling and simulation of multi-agent reinforcement learning, and modeling and simulation of large-scale multi-agent systems. The study highlights the benefits of multi-agent simulation technology in terms of descriptive ability, emergence analysis, organizational framework, autonomic decision-making, interaction ability, distributed simulation, and model reuse. Furthermore, it presents the applications of MAMS in social, economic, and military fields. The study identifies several challenges in multi-agent reinforcement learning, such as convergence and stability, large state space, and poor knowledge transfer. The current status of large-scale MAMS is also summarized, including the solutions of reorganizing the model at the software level and utilizing distributed parallel computing to enhance computational efficiency.
Simulation studies are increasingly used in various scientific fields, including production engineering. Implementing computer-based solutions in production processes helps reduce costs associated with planning errors and streamlines the development of manufacturing plans for new products. This is particularly important for manufacturing companies aiming to optimize inventory levels while ensuring uninterrupted production. In the work [46], computer simulation models are proposed to study different production scenarios. The analysis reveals that increasing the batch sizes of input components leads to a decrease in production efficiency. Simulation modeling serves as a valuable tool for assessing process performance and visually representing various assumptions. The databases created for these simulation models can serve as a foundation for real process development. However, simulation tools do not replace decision-making by managers. Simulation experiments provide valuable data and information about processes, assisting in making informed decisions. In this study, computer simulation was used as a substitute for real experiments, tailored to the research requirements. The use of simulation models enables preliminary analysis of process development and validation of proposed changes under specified conditions. Simulations offer a means to explore specific scenarios and evaluate potential solutions without the risks associated with testing assumptions in real-world settings.
Process analysis provides valuable information on events stored in information systems. Analysis of event data can reveal many performance and compliance issues, as well as insights into how to improve productivity. Process analysis techniques tend to be backward-looking and do little to promote forward-looking approaches, as potential interventions are not evaluated. System dynamics complements this backward-looking analysis by identifying the relationships between different factors at a higher level of abstraction and using modeling to predict the outcomes of process improvement actions. The work [47] proposes a new approach to the development of system dynamics models using event data. This approach extracts various performance parameters from the current state of the process using historical performance data, and provides an interactive performance modeling platform in the form of system dynamics models that are able to answer “What if” questions. Experiments with event logs that include various parameter relationships show that this approach enables robust models and underlying relationships.
Another study [48] discusses the validation and verification of imitation models. It proposes four different approaches to defining whether or not a model is valid; presents two paradigms to associate validation and verification with the model development process; defines various validation techniques, discusses the validity of the conceptual model, model verification, operational validity, and data validity; provides a method to document the results; presents a recommended model verification procedure; and, finally, outlines accreditation. The study argues that despite the availability of literature on validation and verification, there is no set of special-purpose tests that might be easily applied to establish whether a model is “correct”. Moreover, there is no algorithm to define which methods or procedures should be used. Each new modeling project is a unique new challenge.
The paper [49] provides an overview of a Simulation Model Development Environment (SMDE). Building the environment suggests that a minimal toolkit should be developed, including a premodel manager, help manager, model generator, model analyzer, model translator, and model verifier. The model generator is the most important tool. The automation-based software paradigm has been achieved largely due to the development of the DOMINO-based visual simulation support environment (DOMINO is a multifaceted conceptual framework for visual imitation modeling). A comprehensive set of requirements for an SMDE is a major technological challenge in terms of independence from the modeling domain in the field of discrete-event modeling. Building the visual simulation support environment prototype implements the automation-based software paradigm and also enables the animation of the imitation model. The development of a conceptual basis for visual imitation modeling is one of the major challenges here.
The paper [50] introduces a flexible simulation model generator for discrete operating systems. It introduces two concepts of discrete system modeling, the operating network and operating equations. These tools are used to describe the structure of the simulated system. The model generator uses a batch input file containing a list of working equations and other system specifications to generate a simulation code.
The paper [51] presents the conceptual design and implementation of a knowledge-based interactive simulation model prototype. The system manages several components, including a model base, a knowledge base, and a database. Particular attention is paid to integrating the model base and the knowledge base. This combination of numerical and knowledge representation components is one of the main strengths of the system. The frame-based approach was chosen for the semantic representation of models. The system is designed to free the scientific expert from the details of computer science and to allow them to focus on real modeling tasks.
The work [52] suggests solving differential equations on the basis of stochastic AI neural network model theory. Multilayer neural networks with an appropriate number of layers are used to solve the differential equations.
Inference. Mathematical modeling and simulation (M&S) and AI are closely related. On the one hand, AI system architecture and algorithms are built on the basis of various models, from the simplest computational and logic models, through multi-agent modeling and simulation (including endomorphic models), to semantic, ontological, and other models. Moreover, imitation models can be used as data sources to validate and verify algorithms and systems. On the other hand, AI serves as a basis for building automated model development and validation environments (generators), including such dedicated tools as premodel managers, help managers, model generators, model analyzers, model translators, and model verifiers.

1.6. Complexity

The paper [53] proposes a hierarchical system framework that uses iHLBA for job scheduling in a grid environment. iHLBA assigns the most suitable resource to each specific job and compares cluster loads to an adaptive balance threshold in order to balance the system load. Local and global update rules are applied to obtain the updated status of resources and to define the balance threshold, thus making it possible to assign the next job to the most suitable resource. The local update rule is responsible for updating the status of the cluster and resource that were assigned the job. The job scheduler then uses the new status to assign the next action. The global update rule updates the status of each cluster and resource in the grid system after the resources complete their jobs. This provides the job scheduler with the most up-to-date information on all clusters and resources, thus making it possible to assign the next job to the most suitable resource. Experimental results show that iHLBA is capable of balancing the total system load and improving performance by choosing the best resource for each specific job based on the updated state of the system.
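The sketch below is a schematic illustration of this threshold-based assignment logic, not the published iHLBA algorithm: the adaptive balance threshold is approximated by the mean load, the local update touches only the assigned resource, and the global step recomputes the threshold before each assignment.

```python
# Schematic threshold-based load balancing (illustration only, not iHLBA).
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    load: float = 0.0
    jobs: list = field(default_factory=list)

def balance_threshold(resources):
    # Global update rule (simplified): adaptive threshold as the average load.
    return sum(r.load for r in resources) / len(resources)

def assign_job(resources, job_name, job_cost):
    threshold = balance_threshold(resources)
    # Prefer resources at or below the threshold; fall back to the least loaded one.
    eligible = [r for r in resources if r.load <= threshold] or resources
    target = min(eligible, key=lambda r: r.load)
    # Local update rule: refresh only the assigned resource's state.
    target.load += job_cost
    target.jobs.append(job_name)
    return target.name

if __name__ == "__main__":
    cluster = [Resource("node1"), Resource("node2"), Resource("node3")]
    for i, cost in enumerate([4, 2, 7, 1, 3, 5]):
        print(f"job{i} ->", assign_job(cluster, f"job{i}", cost))
    print([(r.name, r.load) for r in cluster])
```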
Service-oriented computing has created a new method of service delivery based on pay-as-you-go models in which users consume services based on their Quality of Service (QoS) requirements. In these pay-as-you-go models, users pay for services based on usage and compliance with QoS limits; processing time and cost are two common QoS requirements. Therefore, to create effective planning maps, it is necessary to take into account the prices of services when optimizing performance indicators. The work [54] proposed a heterogeneous constrained budget scheduling (HBCS) algorithm that guarantees execution cost within a budget specified by the user, and minimizes the execution time of the user application. Their results show that this algorithm provides faster execution, guaranteed application cost, and lower time complexity compared to other existing algorithms, subject to a limited budget. The improvements are especially important for more heterogeneous systems, where a 30% reduction in execution time was achieved without an increase in budget.
Effective application scheduling is critical to achieving high performance in heterogeneous computing environments. The application scheduling problem is NP-complete in the general case, as well as in some restricted cases. This important issue has been extensively studied, and the various algorithms proposed in the literature are mainly intended for systems with homogeneous processors. Although several algorithms for heterogeneous processors are described in the literature, they usually incur higher scheduling costs and do not offer reduced scheduling overhead. The article [55] presents two scheduling algorithms for a limited number of heterogeneous processors, heterogeneous earliest-finish-time (HEFT) and critical-path-on-a-processor (CPOP), which aim at high performance and fast scheduling. The HEFT algorithm selects the task with the highest upward rank value at each step and assigns the selected task to the processor that minimizes its earliest finish time, using an insertion-based approach. The CPOP algorithm uses the sum of upward and downward rank values to determine task priorities. In the processor selection phase, critical tasks are assigned to the processor that minimizes the overall execution time of the critical tasks. To provide a reliable and unbiased comparison with related work, a parametric graph generator was developed to generate weighted directed acyclic graphs with different characteristics.
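A compact sketch of the HEFT idea follows. The task graph, computation costs, and uniform communication delay are toy values, and the insertion-based slot search of the full algorithm is omitted for brevity, so this is an illustration of the ranking-and-assignment scheme rather than the published algorithm.

```python
# Minimal HEFT-style sketch: rank tasks by upward rank, then assign each task
# to the processor giving the earliest finish time (toy DAG and costs).
from functools import lru_cache

succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}          # task DAG
pred = {t: [p for p in succ if t in succ[p]] for t in succ}
cost = {"A": [3, 5], "B": [4, 4], "C": [6, 3], "D": [2, 4]}        # per-processor times
comm = 1.0                                                          # inter-processor delay

@lru_cache(maxsize=None)
def upward_rank(task):
    avg = sum(cost[task]) / len(cost[task])
    return avg + max((comm + upward_rank(s) for s in succ[task]), default=0.0)

def heft():
    schedule = {}                      # task -> (processor, finish time)
    proc_ready = [0.0, 0.0]            # when each processor becomes free
    for task in sorted(cost, key=upward_rank, reverse=True):
        best = None
        for p in range(len(proc_ready)):
            # Earliest start: processor is free and all predecessor data have arrived.
            ready = max([proc_ready[p]] +
                        [schedule[q][1] + (0.0 if schedule[q][0] == p else comm)
                         for q in pred[task]])
            finish = ready + cost[task][p]
            if best is None or finish < best[1]:
                best = (p, finish)
        schedule[task] = best
        proc_ready[best[0]] = best[1]
    return schedule

print(heft())
```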
The estimation of distribution algorithm (EDA) is a well-known stochastic optimization method. The average time complexity is an important criterion for measuring the performance of stochastic algorithms. Various types of EDA have been proposed in recent years, but theoretical studies on the time complexity of these algorithms are relatively rare. The work [56] analyzes the time complexity of two early versions of EDA, the univariate marginal distribution algorithm (UMDA) and the incremental UMDA (IUMDA). This was the first rigorous analysis of the mean first hitting times (FHT) of UMDA and IUMDA, covering both polynomial and exponential cases. The analysis shows that UMDA (IUMDA) exhibits O(n) behavior on a pseudo-modular function, and that IUMDA can spend an exponential number of generations to find the global optimum.
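The following toy sketch illustrates the UMDA scheme discussed in [56] on the standard OneMax function; the population sizes, probability margins, and stopping rule are arbitrary choices made here for illustration, not the settings analyzed in the cited work.

```python
# Toy UMDA on OneMax: each bit has an independent marginal probability,
# re-estimated from the best individuals of every generation.
import random

def umda_onemax(n=30, pop=100, selected=50, generations=60, seed=1):
    random.seed(seed)
    p = [0.5] * n                            # univariate marginal probabilities
    for _ in range(generations):
        population = [[1 if random.random() < p[i] else 0 for i in range(n)]
                      for _ in range(pop)]
        population.sort(key=sum, reverse=True)        # OneMax fitness = sum of bits
        elite = population[:selected]
        # Re-estimate each marginal from the selected individuals,
        # with margins to keep probabilities away from 0 and 1.
        p = [min(0.95, max(0.05, sum(ind[i] for ind in elite) / selected))
             for i in range(n)]
        if sum(population[0]) == n:                   # global optimum reached
            break
    return population[0]

print(sum(umda_onemax()), "ones found out of 30")
```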
The paper [57] proposes a method for determining conditions on goals that guarantee the goal is sufficiently coarse-grained to justify parallel evaluation. This method is powerful enough to reason about divide-and-conquer programs; in the case of fast sorting, for example, it concludes that a sorting goal has a time complexity greater than 64 resolution steps (the creation threshold) if the input list is 10 elements long or longer. The method has been proven correct, can be implemented directly, has been shown to be useful on a parallel machine, and, unlike much previous work on analyzing the time complexity of logic programs, does not require solving a complex problem.
In the field of optimization, the speed of convergence is a crucial measure of efficiency. Many accelerated schemes have been developed, but they often lack intuitive explanations and rely on complex arguments from areas like control theory or differential equations. However, a study [58] offers a preliminary explanation of optimization algorithms using integration method theory, providing a clear and theoretically grounded analysis. It shows that optimization schemes can be seen as a special case of integration methods for gradient flow integration. This study explains the origin of acceleration using standard arguments. Fast methods typically require additional parameters that are difficult to estimate and are specific to a particular problem setup. The study also discusses a new approach to acceleration using general arguments from numerical analysis, where sequences are accelerated by constructing another sequence with a higher convergence rate. These methods can be combined with iterative algorithms to speed up convergence in most cases. However, extrapolation schemes are not widely adopted in practice due to the lack of theoretical guarantees and instability concerns.
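As a very small illustration of the idea of accelerating convergence by constructing a faster-converging sequence, the sketch below applies Aitken's delta-squared transform to a linearly converging fixed-point iteration. This is a classical textbook example chosen for illustration, not one of the specific schemes analyzed in [58].

```python
# Aitken's delta-squared extrapolation: build a faster-converging sequence
# from three consecutive iterates of a slowly (linearly) converging one.
import math

def aitken(seq):
    accelerated = []
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        denom = c - 2 * b + a
        accelerated.append(c - (c - b) ** 2 / denom if denom != 0 else c)
    return accelerated

# Example: fixed-point iteration x_{k+1} = cos(x_k), converging to ~0.739085.
xs = [0.5]
for _ in range(8):
    xs.append(math.cos(xs[-1]))

print("plain iterates:    ", [round(v, 6) for v in xs[-3:]])
print("Aitken accelerated:", [round(v, 6) for v in aitken(xs)[-3:]])
```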
Another study [59] promotes a fast approximation of solutions to optimization problems that are constrained by iteratively solved traffic simulations. Given an objective function, a set of candidate decision variables, and a “black box” transport simulation that is solved by iteratively reaching a (deterministic or stochastic) equilibrium, the proposed method approximates the best decision variable from the set of candidates without having to run the transport simulation to convergence for each individual candidate. The method can be embedded in a broad class of optimization algorithms or search heuristics that implement the following logic: (1) generate variations of the given, currently best decision variable; (2) identify one of these variations as the new, currently best decision variable; and (3) repeat steps (1) and (2) until no further improvement is achieved. Probabilistic and asymptotic efficiency bounds are established and used to formulate efficient heuristics adapted to limited computational budgets. The effectiveness of the method was confirmed by a comprehensive simulation study of a non-trivial road pricing problem. The method is compatible with a wide range of simulators and requires minimal parameterization.
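The generate/select/repeat logic described above can be captured in a few lines. In the sketch below, an inexpensive analytic objective stands in for the iterated traffic simulation of [59], and the candidate count and step-shrinking rule are assumptions made for the example.

```python
# Generic generate/select/repeat search loop with a cheap stand-in objective.
import random

def objective(x):
    # Placeholder for "run the simulation and score the decision variable".
    return -(x - 3.7) ** 2

def search(initial, step=1.0, rounds=50, seed=0):
    random.seed(seed)
    best, best_val = initial, objective(initial)
    for _ in range(rounds):
        # (1) generate variations of the current best decision variable
        candidates = [best + random.uniform(-step, step) for _ in range(10)]
        # (2) identify the best variation
        challenger = max(candidates, key=objective)
        if objective(challenger) > best_val:
            best, best_val = challenger, objective(challenger)
        else:
            step *= 0.5          # (3) repeat, shrinking the step when no improvement
    return best

print(round(search(0.0), 3))
```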
Despite advances in computer capacity, the enormous computational cost of running complex engineering simulations makes it impractical to rely exclusively on simulation for the purpose of structural health monitoring. To cut costs, surrogate models, also known as metamodels, are constructed and then used in place of the actual simulation models. In the study [60], structural damage was detected using 10 popular metamodeling techniques, including Back-Propagation Neural Networks (BPNN), Least Square Support Vector Machines (LS-SVMs), Adaptive Neural-Fuzzy Inference System (ANFIS), Radial Basis Function Neural Network (RBFN), Large Margin Nearest Neighbors (LMNN), Extreme Learning Machine (ELM), Gaussian Process (GP), Multivariate Adaptive Regression Spline (MARS), Random Forests, and Kriging. The results indicate that Kriging and LS-SVM models have better performance in predicting the location/severity of damage compared to the other methods. A properly trained surrogate model can be used to efficiently reduce the computational cost of model updating during the optimization process.
Surrogate-assisted evolutionary computation uses efficient computational models, often referred to as surrogates or metamodels, to approximate the fitness function in evolutionary algorithms [61]. Although many of the proposed surrogate-assisted evolutionary algorithms have proven more efficient than their non-surrogate counterparts, rigorous comparative studies of such algorithms have not been conducted. This lacuna can be explained by two factors: first, there is no generally accepted efficiency measure for comparing surrogate-assisted evolutionary algorithms; second, there are no benchmark problems specifically designed for them, so empirical evaluations mostly rely on standard test functions or particular applications. Expensive optimization tasks (for example, aerodynamic structure optimization) are time consuming. In addition, simulations can be unstable, resulting in impractical isolated solutions. Finally, the design space is extremely large, and the geometric representation can be critical for efficient design optimization.
In the article [62], global optimization problems and their numerical solutions are investigated. These problems often involve computationally intensive tasks due to the presence of multi-extremal, non-differentiable objective functions typically provided as black boxes. The study employs a deterministic algorithm specifically designed for global extremum search, distinct from iterative or nature-inspired approaches. Computational rules for one-dimensional problems and a nested optimization scheme for multidimensional problems are presented. The complexity of solving global optimization problems primarily stems from numerous local extrema. To address this, ML methods are utilized to identify areas of attraction for local minima. By employing local optimization algorithms in these selected areas, the convergence of the global search is accelerated by reducing the number of attempts near local minima. Computational experiments conducted on several hundred global optimization problems of varying dimensions confirm the accelerated convergence achieved in terms of the number of search attempts required to attain a given accuracy.
In another study [63], a method is proposed to enhance the search process of evolutionary multi-objective optimization (EMO) through the use of an estimated point of convergence. The study presents an approach that identifies promising regions of Pareto solutions in both goal space and parameter space. These regions are utilized to construct a set of moving vectors, from which the non-dominated Pareto point is estimated. Various methods are employed to construct these moving vectors and facilitate the search for EMO. The proposed method proves effective in improving EMO search, particularly in cases where the Pareto improvement landscape exhibits a unimodal distribution or a randomly distributed multimodal characteristic in the parameter space. The approach not only enables the generation of a greater number of Pareto solutions compared to conventional methods like the non-dominated sorting genetic algorithm (NSGA-II) but also enhances the diversity of the obtained Pareto solutions.
Stochastic feedback control [64] accelerates the convergence of the annealing algorithm, using parallel processors to solve combinatorial optimization problems, in combination with a probability measure of quality (PMQ). PMQ is used to generate an error signal for use in a closed control loop. This signal contributes to the control of the search process to modulate the temperature parameter. Such a scheme increases the stationary probability of globally optimal solutions. Other aspects of control theory are also described, including the system gain and its influence on system performance.
Deep learning applications require global optimization of non-convex objective functions with multiple local minima. The same problem is often encountered in physical simulations and can be addressed with simulated-annealing Langevin dynamics, a well-established approach to minimizing multi-particle potentials. This analogy provides useful insight for non-convex stochastic optimization in ML: integrating the discretized Langevin equation yields a sequential update rule equivalent to the well-known momentum optimization algorithm. The study [65] shows that gradually decreasing the momentum coefficient from an initial value close to unity plays the role of the annealing (cooling) schedule in this analogy.
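As a rough illustration of this analogy (not the exact scheme of [65]), discretizing a Langevin-type equation of motion yields an update rule of the familiar momentum form; the coefficient values below are assumed placeholders.

```python
import numpy as np

def langevin_momentum_step(x, v, grad_f, mu=0.99, lr=0.01, temperature=0.0):
    """One step of a discretized Langevin-type equation, m*dv/dt = -grad f - gamma*v
    (+ thermal noise).  With mu = 1 - gamma*dt/m and lr = dt/m it reduces to the
    familiar momentum update; lowering mu (or the temperature) over the course of
    the run plays the role of an annealing schedule."""
    noise = np.sqrt(2.0 * temperature * lr) * np.random.randn(*np.shape(x))
    v = mu * v - lr * np.asarray(grad_f(x)) + noise   # velocity (momentum) update
    x = np.asarray(x) + v                             # position update
    return x, v
```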
Inference. The speed of the applied algorithms (coupled with the speed of the underlying hardware) is among the key performance factors of AI systems. This speed can be controlled by computation-aware balancing of system resources and/or by applying relevant acceleration techniques. Widely used acceleration techniques include modifications of local and global optimization models, surrogate- or metamodel-assisted evolutionary and stochastic methods, and trained neural networks that control dynamic calculation models.

1.7. Education and Science

In science, technology, engineering, art, and mathematics (STEAM) education, AI analytics is useful as an educational framework for developing students’ thinking skills based on AI-supported, human-centered learning to develop knowledge and competence. The paper [66] shows how STEAM students who are not computer science majors can use AI for predictive modeling. To help STEAM students understand how AI can support human-centered reasoning, two AI-based approaches are illustrated: a naive Bayesian approach for supervised ML on a dataset, and a semi-supervised Bayesian approach applied to the same dataset. These AI-based approaches enable controlled experiments in which selected parameters can be held constant while others are modified to simulate hypothetical “what-if” scenarios. By applying AI to discursive thinking, it is possible to develop AI thinking in STEAM students, thereby increasing their AI literacy, which, in turn, allows them to ask more accurate questions when solving problems.
Another study [67] aims to (1) develop a common framework for artificial intelligence in higher education (the AAI-HE model) and (2) assess the AAI-HE model. The research process is accordingly divided into two stages: developing the AAI-HE model and assessing it. The resulting system structure can be upgraded to an AAI-HE model that serves as a reference for researchers and instructors intending to explore and implement best practices as management support tools, so that managers are able to make plans and decisions more efficiently. The introduction of artificial intelligence has significant potential to direct higher education towards technological progress. Moreover, recent advances in AI, deep learning, and computing architectures promote the use of AI by all population groups. In terms of fundamental technologies (network systems and related Internet equipment), the AAI-HE model must be properly equipped.
The monograph [68] explores the methodological and technological issues of building the next generation of educational content by electronic means. It proposes an automated system implementing a new methodology, which contains content generators and means of introducing educational materials into the educational process in the form of specialized consulting. The use of an educational content synthesis system injects electronic educational resources into the educational process, thereby significantly reducing the labor and financial costs of their development. The research aims to create a next-generation automated educational content building system in the field of general and engineering subjects. A center for new educational technologies was established in the form of a technological platform to produce the learning materials.
Significant changes in modern education are associated with the introduction of artificial intelligence and robotic learning [69]. The transition from traditional knowledge bases to knowledge generators requires a methodology adequate to the content of education, which is based on a parametric simulation model of the educational project (e.g., course). Random and regular parameters of the model provide a set of parametric slices—specific models of learning tasks and theorems presented in forms for learning and control. Qualitative structuring of content by topics, complexity, graphic and numerical configurations contributes to the personalization of learning materials, initiates collaborative activities and stimulates competition. This knowledge generator methodology is consistent with didactic features and trends in educational systems.
Learning in the twenty-first century implies a capacity for e-learning, and training content management systems are integral components of e-education [70]. Existing content generators transform content rather than create original content, so creating a methodology and technology for generating original content is important and relevant. To develop adequate content-generating techniques, primarily for mathematical and related subjects, the problem of generating a triangle together with its multiple attributes, the simplest object of elementary mathematics, is considered. The authors created an imitation model that describes the properties of the object and applies modified optimization methods and relevant algorithms. The current algorithmic scheme illustrates the performance of the developed system.
Another study [71] examines the impact of automated content creation on AI-based e-learning trends. Widely used content generators do not actually create new content, but modify content stored in databases. The concept of primary content generation is based on the use of simulation models of the objects being studied. The methodology of primary content generation shows the possibility of implementing AI-based content management systems in e-education.
The study [72] proposes a universal educational platform that combines online content generation and the learning interface. It introduces a methodology using imitation models to generate educational content and a matrix interface for user actions. The system builds a reference operator base and incorporates user solutions to support student learning. A demonstrator prototype has been developed and tested, showcasing all methodological options. The platform includes management, learning, and control interfaces. The study concludes that this modeling-based approach creates an algorithmic platform for content generation and learning, providing a unique opportunity for self-learning through student interactions.
Developing effective learning systems is a well-known challenge. Paper [73] explores the use of computational models of student learning to create expert models across various domains. The paper introduces a learner–learner architecture that defines the components required for learner learning, resulting in decision tree and trestle models. These models leverage a small set of prior knowledge to develop expert models. Despite limited prior knowledge, the paper successfully demonstrates the creation of a new mentor for planning experiments and learning expert models across multiple domains (language, math, engineering, and science) and knowledge types (associations, categories, and skills). This work highlights the efficacy of student–learner models in creating tutors that are difficult to develop using traditional approaches and their applicability to various subject areas with minimal prior knowledge.
A report by the US Department of Energy [74] discusses meetings attended by scientists and engineers in 2019. They focused on the potential of AI, Big Data, and high-performance computing (HPC) in the coming decade. The report emphasizes the importance of infrastructure that allows researchers to utilize computational resources effectively, with AI playing a key role. This infrastructure would enable optimized and controlled tests using AI and detection techniques, channeling data and resources based on researchers’ needs and availability.
The rapid development of AI is changing our lives in many ways. One area of application is data science. New methods for automating the creation of AI, known as AutoAI or AutoML, aim to automate the work of data scientists [75]. AutoAI systems can autonomously collect and preprocess data, develop new features, and build and evaluate models based on performance targets (such as accuracy or runtime efficiency). Interviews with 20 data scientists who work for a large global technology company and analyze data in various business environments yielded rather mixed views: although the informants expressed concern about the trend towards automation of their work, they also strongly believed in its inevitability.
Recent advancements in neural networks have enabled the emulation of energy conservation laws in continuous-time differential equations, but discrete-time scenarios pose challenges, and previous neural network models have overlooked other physical laws. Works [76,77] propose a deep energy-based physical model with a specific differential geometric structure, incorporating energy and mass conservation laws. Researchers have developed AI-based models for phenomena lacking clear mechanisms or formulas, using observation data. This technology enables highly accurate and fast simulations by adhering to physical laws. It overcomes the limitations of previous prediction techniques, which struggled with digitized phenomena. By reproducing physics in the digital world, this technology enables simulations of complex phenomena like wave flow, fracture mechanics, and crystalline structure growth. Sufficient observation data are required for its application.
Automated scientific discovery has gained interest with the advent of artificial intelligence. The work [78] proposes a knowledge transfer approach that leverages interdisciplinary engineering knowledge to identify unknown concepts, methods, or laws in one discipline based on counterparts in other disciplines. Through software execution, they successfully replicated three recent discoveries in mechanics, showcasing the effectiveness of their approach. Their future aim is to uncover new knowledge in mechanics, electronics, and other engineering disciplines.
Inference. Educational and research problems can be addressed by alternative approaches. One is based on imitating human intellect using autonomous neural systems capable of data analysis, which provide users with methodological tools for object research. The alternative approach is simulation modeling of the object under investigation. The application of models with a high degree of similarity (up to axiomatic) makes it possible to generate and solve a wide range of problems, from general theorems to simple applied tasks. This ensures great diversity and an essentially unlimited number of generated problems.

1.8. Engineering and Business

A design framework for robust manufacturing systems is presented in [79]. It combines simulation modeling, neural networks, and knowledge-based expert system tools. An operation/cost-oriented cell design methodology is used to consider both the physical design and the control functions of the cell. Simulations estimate performance metrics based on input parameters and cell configurations. Expert knowledge is stored in a rule-based expert system, capturing the relationship between cell control complexity, cost, performance metrics, and configuration. Neural networks predict cell design and control complexity, trained using forward and backward datasets. This methodology has been successfully implemented, leading to the production of an automated cell in an industrial environment. It serves as an effective decision support system for cell designers and management.
The approach proposed in [80] aims to reduce the overall training costs by a factor of 15 while maintaining the same level of quality compared to current deployment approaches. This reduction in training costs is achieved by continuously deploying the model using real-time data input along with historical data, eliminating the need for frequent retraining of the model. The approach incorporates sampling techniques to include historical data, calculates online statistics, and dynamically materializes pre-processed features, all of which contribute to reducing training and preprocessing time. The authors also provide guidance on developing and deploying two pipelines and models to process real datasets.
The work [81] applies AI to stabilize the flight of a drone. There is a noticeable delay in delivering position and orientation information from the drone to the autopilot; nevertheless, it has been demonstrated that stable flight at a constant height in a vertical plane can be achieved. Several practices, including reliance on cloud services, a continuous integration (CI) and continuous delivery (CD) model, and investment in monitoring and observability, help make such ML projects successful.
ML enables computers to emulate human thinking by analyzing data and identifying patterns. Supervised ML models learn from labeled data, while unsupervised models find patterns in unlabeled data. Iterative modifications to the code and data are common in ML projects. However, accuracy is crucial, especially in domains like medicine or spam detection. Sometimes, improving the dataset can enhance the model’s performance. The study [82] explores a data-driven approach to improve ML model performance in real-world applications.
ML is now widely used in data-driven applications across organizations. However, there is a lack of quantitative data on the complexity and challenges faced by production ML pipelines. The work [83] analyzed 3000 production ML pipelines at Google, covering over 450,000 trained models, over a period of four months. The analysis revealed the characteristics, components, and typologies of typical industry ML pipelines. The authors introduced a data model, called model graphlets, for reasoning about the re-running of components in these pipelines and identified optimization opportunities using traditional data management ideas. By reducing unnecessary computation that does not contribute to model deployment, significant cost savings can be achieved without compromising deployment speed.
ML model deployment and MLOps pipeline implementation are challenging due to iterative development, time-consuming processes, and the need for diverse skills, experience, domain knowledge, and teamwork [84]. However, with proper planning and experimentation, the expected results can be achieved. ML models may fail due to various reasons, including misalignment with business needs, testing and validation issues, and lack of generalization. Several techniques can contribute to the success of ML projects:
(a)
Cloud dependency: utilizing cloud-based tools enables efficient communication, teamwork, and automation of testing, training, validation, and model development;
(b)
DevOps Approach: adopting a DevOps approach with continuous integration (CI) and continuous delivery (CD) enables seamless updates, changes, and iterations throughout the development process;
(c)
Monitoring and observability: continuous monitoring ensures data quality and model performance, mitigating risks in real-world ML scenarios and contributing to successful production models.
ML systems in e-commerce use AI technology to analyze user activity and provide personalized recommendations. By incorporating AI, market researchers can attract more consumers with customized offers based on visit frequency, browsing history, and previous purchases. Research on college students [85] shows that consumers are attracted to these AI-generated offers. This technology has shifted consumer preferences towards online shopping by offering convenience, special discounts, and a wide range of products. AI technology in e-commerce involves robots, sound, image recognition, and other instant-response technologies. AI and computer technologies progress together, with ML and interactive learning being the main focus.
In a method for producing parameters for product design [86], explicit or implicit preferences are received from customers, directly or indirectly, and constraints are received from at least one provider. The method includes mapping preferences and constraints to a space, where searches for an optimum are limited by the constraints. The parameters for product design are generated according to at least one optimum found. The method may be performed by a system that comprises at least one processor adapted to execute code and at least one memory storing a preference data structure constructed in accordance with that space.
AI’s strength lies in creating personalized customer experiences in e-commerce through product personalization and virtual personal shoppers. It plays a crucial role in decision support, proactive decision-making, and information delivery [87]. AI will impact e-commerce in three key ways: visual search capabilities for finding similar products, precise personalization tools across multiple channels, and interactive shopping experiences with virtual personal shoppers.
The article [88] introduces ADVISOR SUITE, a commercial system that creates intelligent personalized applications for sales consultants. This knowledge-based system simplifies development tasks by using conceptual models and declarative knowledge representations. It supports defining user models, recommendation rules, personalizing dialog flows, and creating user interfaces. The system includes user-friendly graphical tools that reduce development and maintenance costs.
Inference. Many types of AI systems—from autonomous networks to customizable knowledge-based systems—have become embedded in the business sphere, from the stage of initial design to sales of finished products. As regards customizable (configurable) design systems, AI systems prove highly efficient in all production sectors. They are used as mass customization tools in engineering design, e-commerce, and marketing, and have significant potential for further development. Given the scale of implementation of neural networks, the main problem lies more in developing ML production pipelines rather than improving AI models.

1.9. Brief Conclusion

There is an extremely wide range of problems in which the use of autonomous stochastic AI systems poses unacceptable risks due to insufficient accuracy, random errors, and inadequate data interpretation. In such problems, the use of configurable design, which provides high (sometimes absolute) accuracy, an extreme variety of virtual configurations, and the possibility of continuous intelligent interaction with users, shows great potential. The efficiency of configurable design systems could be significantly enhanced by reducing the reliance on databases/knowledge, decreasing the temporal (speed) complexity of algorithms, and finding alternatives to neural network-based machine learning.

2. Research Concepts and Approaches

The scientific background analysis provides a basis for considering the possibilities of a research project aimed at enhancing configurable intelligent design. Through analyzing sources and conducting preliminary research, fundamental approaches for implementing the project can be formulated, and its potential for practical performance in promising areas that demand high result accuracy can be assessed. Additionally, the project incorporates the benefits of continuous corrective interaction (dialogue) between an intelligent system and users.

2.1. Research Project Objectives

The global objective is to develop a trainable deterministic AI system for the education, science, engineering, and business spheres. The local objectives are to develop and implement the components of the algorithmic AI set methodology through the following steps: building the concept and architecture of the algorithmic system; developing imitation, representation, and partially empirical models; configuring ID methods and procedures and reference simulation training; and building and validating an AI algorithmic pilot set.

2.2. Expected Significance

A unique algorithmic set of AI should be created using only calculation and logical algorithmic models, without the formation and/or application of big data and knowledge bases. This is possible through hierarchical modeling to create a quasi-dynamic effect as an alternative to neural dynamics. Moreover, system performance is enhanced by reference learning, which modifies simulation models during online user interactions.

2.3. Working Hypotheses

The global hypothesis of this research is that when interacting with users, deterministic AI systems can exhibit universal intellectual capabilities, which are regarded as the ability to apply knowledge to specific problems in various domains. This hypothesis can be supported by proving the following basic secondary hypotheses:
(a)
A hierarchical approach enables the system to “understand” a user in the context of the problem being solved, and to provide various solutions to this problem along with comprehensive relevant insights;
(b)
A reference approach makes it possible to “train” the system and to enhance its features, similarly to ML;
(c)
Modification of optimization models and control of computational complexity enable efficient online use of the system;
(d)
A configured design model ensures the flexibility of the system, i.e., its applicability to various domains.

2.4. Base Principles

Non-anthropomorphism Principle. This project focuses on identifying and implementing the best non-anthropomorphic approach to the development of AI systems. Combining the potential benefits of neural and deterministic AI is intended to provide “the best pragmatic approach” rather than to imitate “human options”. Efficacy and learnability are achieved not by means of the system’s autonomy, but through its active interaction with users via intuitive interfaces.
Determinism Principle. The sphere and application of the AI systems under development require that the indeterminacy of results (solutions) be reduced to a minimum. This principle assumes the use of causal mathematical models based on design and logical operations, in lieu of common stochastic models. Greater reliability of results is thus achieved at the cost of a larger scope of calculations, which is balanced by the use of more sophisticated calculation techniques and algorithms.
Hierarchy Principle. It provides for an increasing multiplicity of fixed links due to imitation models’ hierarchy. A large-scale master model of a target mega-object, whose size and complexity are constrained by computational resources, lies at the top of such a hierarchy. When a user interacts with the system, variables and equations are substituted, thus changing both the configuration of the links and the model’s dimensions. In other words, models of the same or lower hierarchy are constructed. Such an approach creates highly diverse quasi-dynamic links, comparable to neural networks, and also reveals emerging links, which constitutes the subject of ML. These principles are behind the structural and functional features of the AI systems to be developed.

2.5. Structure and Functions

Configurable AI design is an automated dialogue system that aims to provide smart support to users in the development, production, and application of various types of products. Its main feature is the automated generation of product configurations subject to resource availability and other constraints. While configurations are defined by sets of variables, constraints are defined by sets of functionals forming general and particular imitation models. The configuration approach makes it possible to generate objects of various configurations while ensuring flexible, targeted user involvement in the generation of acceptable configurations of specific products. System interaction is supported by dedicated interfaces for users, administrators, and developers; smart interfaces imply the use of original graphical and matrix forms.
Configuration systems can be used in a wide range of applications. In the educational sector, for example, each configuration presents a completed (solved) problem, ranging from the simplest examples to general theorems. Configuration variables are the input, interim, and output data of a problem, with the imitation-model-forming functionals representing the computational and logical operations applied to solve this problem. In the research (mostly applied) sector, the main focus is on the hierarchical aspects of configuration simulations: the system automatically builds a great number of specific models, and their analysis reveals a significant number of previously unknown effects and features of the objects under review. Configuration design is most popular in the engineering and production branches. When using models based on technological, economic, and other rules, an engineer gains access to a huge volume of completed (classified) design proposals that comply with all norms and regulations, and designers can be involved in generating and choosing configurations of the object under development. The application of such a design approach in e-commerce provides a unique opportunity for mass customization of production and sales: when interacting with a smart interface, a user (buyer) can configure their personal product, an analogue of which is available as a finished product or is made to order.
Specific training of a deterministic AI system, through modification or building of new models for reference libraries (portfolios), can be an alternative to neural-network-based ML. Algorithmic generation of partially empirical models with experimentally defined, relatively stable variables (parameters) is one of the techniques used for smart building of novel imitation models. When coupled with experimental objects, such algorithms actually form virtual and natural suites that make it possible to plan experiments, use experimental data to build reference models, and perform comprehensive model research of the objects. Such suites can be used in educational and research laboratories, to test pilot and industrial installations, to perform statistical recording of marketing research to evaluate consumer preferences, and for other purposes. Another deterministic training technique automatically converts modified user models into reference (portfolio) models. A user can correct reference models by extending their variables and modifying functionals via a smart interface. An algorithm then verifies the modified model by correlating the same-named (intersecting) variables in the initial and modified models.
If the values are the same, the model is automatically added to the reference portfolio. Reference models can be further used both to build and study configurations, and to offer them to the users for smart support purposes. For example, whenever a student user enters a computational or logical formula when solving a problem online, the system identifies the relevant model [72] and offers (displays) suitable operators. The algorithmic system also offers supplementary features, such as image recognition, by correlating configuration variables. Moreover, system development entails an assessment of the computational complexity of all algorithms in order to adjust those to the available resources.

3. Research Design and Methods

The features of this research project involve the use of special structural and functional approaches and methods in combination with standard models and appropriate algorithms. Proceeding from this, the possibilities and procedures for creating an intelligent design system based on the above principles and approaches are considered from two perspectives: the integrated functioning of the system’s nodes and segments, and the selection of typical model and algorithm components and their adaptation to specific application conditions.

3.1. Design Composition

The features of the functioning of the system under consideration call for a modular hierarchical structure (Figure 1), which implies the presence of a central (general) complex that implements the main target design options, as well as local user complexes intended for the collection and primary registration of initial data.
The main node of the system management infrastructure, the general interface, is connected by data exchange channels with user interfaces, terminals, data logs, data sources, and the reference library. The operating modules of the system perform sequential data processing, implementing the actual intelligent design options. The system does not have its own databases; the alternative to these is a replenishable reference library of computational-logical functionals. Combinations of a limited set of reference functionals make it possible to form (and extract) an extremely large number of various simulation parametric models that are adequate to the incoming user data. To extract an adequate model from the library, each model combination of functionals is assigned a unique identifier, which is registered in the model ID log. The sources for the formation of initial (pilot) models can be both user developments and centralized (library) partially empirical models. Each original model goes through a validation process: first, it is subjected to algorithmic complexity (productivity) control, and then the system of equations making up the model is solved using optimization methods. The resulting solution, in the form of a vector set of equation roots, enters the user interface, where a vector of variables (parameters) adequate to the original model is additionally formed from the user data. All these data are sent to the identification module through the user registration log, where the integral characteristics of the discrepancy (deviation) between the calculated and user data (variables) are calculated. If the deviation is acceptable, the compilation module assigns a unique identifier to the model, after which it is included in the reference model library. Reference models are reapplied in the same order, but their identifiers change only if the models are corrected or modified.
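The following simplified sketch (all class and function names, as well as the deviation threshold, are illustrative assumptions rather than system specifications) shows how such a reference library might assign unique identifiers to functional combinations and admit a model only after its calculated variables agree with the user data.

```python
import hashlib
import numpy as np

class ReferenceLibrary:
    """Toy reference library: a model is a named combination of functionals, and
    its identifier is derived from that combination and logged on admission."""

    def __init__(self, tolerance=1e-3):
        self.models = {}       # identifier -> functional combination
        self.id_log = []       # registration log of admitted identifiers
        self.tolerance = tolerance

    @staticmethod
    def model_id(functional_names):
        """Unique identifier of a functional combination (order-insensitive)."""
        key = "|".join(sorted(functional_names))
        return hashlib.sha256(key.encode()).hexdigest()[:12]

    def validate_and_register(self, functional_names, calculated_vars, user_vars):
        """Admit the model only if calculated and user variables agree closely."""
        deviation = np.sqrt(np.mean((np.asarray(calculated_vars, dtype=float) -
                                     np.asarray(user_vars, dtype=float)) ** 2))
        if deviation > self.tolerance:
            return None                       # rejected: discrepancy too large
        mid = self.model_id(functional_names)
        self.models[mid] = list(functional_names)
        self.id_log.append(mid)
        return mid
```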
The special option of machine learning (training), depicted in Figure 2, plays a pivotal role in the system. Unlike stochastic neural network AI systems that rely on statistical processing of large data arrays for training, this system enables continuous training through user and/or expert administrator interactions. It offers users an interface to modify reference models, which are then subjected to algorithmic verification. Before being added to the library of reference models, the modified models undergo an additional moderation check in a dedicated module (journal). This step aims to identify and prevent any erroneous or dishonest user actions that could potentially compromise the integrity of the model library. Moderation can be conducted either automatically or manually, and hybrid checking schemes are also feasible.

3.2. Implementation Procedures

Each stage consists of the following actions: the development of relevant models and algorithms (simulation, representation, ID, partially empirical, and training); cluster enrichment by adding new models and algorithms; control of the computational complexity of algorithms and their clusters; validation tests of algorithms and their clusters; and correction and modification of algorithms and their suites. The literature review notably failed to reveal any similar complex approach to developing a deterministic AI system. Given the unique nature of the project and the shortage of practical knowledge and relevant information, a special methodology of step-by-step iterative modification is proposed here (Figure 3).
More particularly, in view of the assumed indeterminacy of dynamic characteristics and operation stability of the AI suite, each stage should include repeated testing by dedicated groups of professionals, experts, and students. Based on the results of each testing cycle, operating documents and algorithms will be updated until acceptable operational parameters are achieved. The final test cycle will result in a working pilot layout of an AI suite to ensure implementation of the project options of smart configuration design.

3.3. Imitation Models

The object under smart analysis is presented as a vector $\mathbf{x} = (x_1, x_2, \ldots, x_N)$ in N-dimensional configuration space. The configuration-defining vector coordinates are interconnected by means of functionals $f$ forming an equation system $f_n(\mathbf{x}) = 0$, $n = 1, \ldots, N$, which represents the master (general) model. Since local optimization techniques sometimes cause a partial loss of the roots of the equations [86], global optimization of the target function is proposed as the preferred method of solving the equation systems:
$$S_f = \sum_{n=1}^{N} f_n^2(\mathbf{x}),$$
where $S_f \to 0$. Models are hierarchically transformed by replacing initial (master) functionals $f_i(\mathbf{x}) = 0$ with functionals $x_j - x_{0j} = 0$ that fix the relevant variables at given values $x_{0j}$. This enables the creation of particular models of lower hierarchy. The following main deliverables are planned: standard (exemplary) models by field (training, applied research, engineering (pilot) design, and e-business); reference functionals (polynomial, exponential, harmonic, and combined); standard and reference imitation model generation; preliminary analysis and selection of optimization techniques and models; optimization model testing and modification; and building of a basic imitation model log.
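A minimal numerical sketch of this scheme is given below, assuming a made-up two-equation master model and using a general-purpose global optimizer from SciPy; the hierarchical transformation is shown by replacing one master functional with a variable-fixing functional.

```python
from scipy.optimize import differential_evolution

# Made-up master model: two functionals f_n(x) = 0 in a 2-D configuration space
master = [
    lambda x: x[0] ** 2 + x[1] ** 2 - 25.0,   # f_1: points on a circle of radius 5
    lambda x: x[0] - 2.0 * x[1],              # f_2: a linear relation between x_1 and x_2
]

def target(functionals):
    """Target function S = sum_n f_n(x)^2, driven towards zero by global optimization."""
    return lambda x: sum(f(x) ** 2 for f in functionals)

bounds = [(-10.0, 10.0), (-10.0, 10.0)]
sol_master = differential_evolution(target(master), bounds, seed=0)

# Hierarchical (particular) model: f_2 is replaced by the fixing functional
# x_2 - x_02 = 0, which pins the second variable and lowers the model's hierarchy
particular = [master[0], lambda x: x[1] - 2.0]
sol_particular = differential_evolution(target(particular), bounds, seed=0)
```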

3.4. Representation Models

Solutions and results (configurations) are presented as a set of dedicated variables forming an I-dimensional representation vector $\mathbf{r} = (r_1, r_2, \ldots, r_I)$. These variables define the presentation components, as well as their values, location, orientation, etc. The initial configuration vector $\mathbf{x}$ is converted into a representation vector by generating and solving the equation system $R_j(\mathbf{r}, \mathbf{x}) = 0$, $j = 1, \ldots, N + I$, which links the design variables of the generated configuration to its presentation features. The following main deliverables are planned: standard (exemplary) models by field (numeric matrices, special matrices (textual, logical, operator, and others), graphic charts, graphic drawings, etc.); reference user functionals; standard and reference representation model generation; preliminary analysis and selection of representation techniques and models; representation model testing and modification; and building of a basic representation model log.

3.5. Partially Empirical Models

Imitation equations can be formulated (especially for research purposes) both analytically (by the expert method) and as parametric equations with experimentally defined parameters. A combined partially empirical functional $F_n(\mathbf{p}, \mathbf{x})$ involves two vectors: the main variable vector $\mathbf{x}$ and the parametric vector $\mathbf{p} = (p_1, p_2, \ldots, p_K)$, where K is the number of empirical parameters. Since the dimension of the configuration parametric space is $N + K$, the main system of N equations $F_n(\mathbf{p}, \mathbf{x}) = 0$ is enriched with K equations of the form $x_k - x_{0k} = 0$, where $\mathbf{x}_K = (x_1, x_2, \ldots, x_K)$ is a given vector of experimental sample variables and $\mathbf{x}_{0K} = (x_{01}, x_{02}, \ldots, x_{0K})$ holds the relevant variable values. The number and composition of the M samples and the values of the variables are determined by experimental design techniques, either random or regular. The results of the experiments for sample $m = 1, \ldots, M$ are presented as a vector $\mathbf{x}_m = (x_{m1}, x_{m2}, \ldots, x_{mN})$, with the relevant empirical parameters $\mathbf{p}_m = (p_{m1}, p_{m2}, \ldots, p_{mK})$ calculated by M-fold solution of the $N + K$ equation system using the target function
$$S_F = \sum_{n=1}^{N} F_n^2(\mathbf{p}, \mathbf{x}) + \sum_{k=1}^{K} (x_k - x_{0k})^2,$$
where $S_F \to 0$. Finally, the adequacy and accuracy of this model are controlled by the smallness of the following criterion:
$$\delta_p = \frac{1}{M}\sum_{m=1}^{M}\left[\frac{1}{K}\sum_{k=1}^{K}\left(\frac{p_{mk} - p_{ak}}{p_{ak}}\right)^2 + \frac{1}{N}\sum_{n=1}^{N}\left(\frac{x_{mn} - x_{an}}{x_{an}}\right)^2\right],$$
where $p_{ak} = \frac{1}{M}\sum_{m=1}^{M} p_{mk}$ and $x_{an} = \frac{1}{M}\sum_{m=1}^{M} x_{mn}$ are the values of the parameters and variables, respectively, averaged over the number of experiments (samples). The following main deliverables are planned: standard partially empirical models based on standard experimental design problems, as applied in training, in research laboratories, and during pilot tests in practical engineering; reference (debugging) functionals (for all types of functions) created by adapting the functionals of the imitation models; adaptation of optimization techniques and experimental design models; testing and modification of partially empirical optimization models; and a basic log of partially empirical models.
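The sketch below illustrates, under simplifying assumptions (a generic functional list and a local minimizer standing in for the preferred global one), how empirical parameters could be fitted per experimental sample by minimizing $S_F$ and how the adequacy criterion $\delta_p$ could then be evaluated.

```python
import numpy as np
from scipy.optimize import minimize

def fit_sample(F_list, x_sample, x_guess, p_guess):
    """Fit the empirical parameters p for one experimental sample by minimizing
    S_F = sum_n F_n(p, x)^2 + sum_k (x_k - x_0k)^2 over the joint vector (x, p)."""
    N, K = len(x_guess), len(p_guess)

    def S_F(z):
        x, p = z[:N], z[N:]
        model_part = sum(F(p, x) ** 2 for F in F_list)
        anchor_part = sum((x[k] - x_sample[k]) ** 2 for k in range(K))
        return model_part + anchor_part

    res = minimize(S_F, np.concatenate([x_guess, p_guess]), method="Nelder-Mead")
    return res.x[:N], res.x[N:]                  # fitted variables and parameters

def delta_p(p_all, x_all):
    """Adequacy criterion: relative mean-square scatter of fitted parameters and
    variables around their sample averages, averaged over the M samples."""
    p_all, x_all = np.asarray(p_all), np.asarray(x_all)
    p_a, x_a = p_all.mean(axis=0), x_all.mean(axis=0)
    rel_p = (((p_all - p_a) / p_a) ** 2).mean(axis=1)
    rel_x = (((x_all - x_a) / x_a) ** 2).mean(axis=1)
    return float((rel_p + rel_x).mean())
```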

3.6. Configuration ID

Image (object) recognition in imitation systems is performed by matching the identified vector $\mathbf{x}$ with the reference vector $\mathbf{x}_0 = (x_{01}, x_{02}, \ldots, x_{0N})$. The mean-square deviation can serve as an ID criterion:
$$\sigma_x = \sqrt{\frac{\sum_{n=1}^{N} (x_n - x_{0n})^2}{N}}.$$
If the reference object configuration is presented in other, J-dimensional alternative variables $\mathbf{y}_0 = (y_{01}, y_{02}, \ldots, y_{0J})$, then the vector $\mathbf{y}_0$ is converted into $\mathbf{x}_0$. The conversion model contains $N + J$ equations of the form $C_i(\mathbf{x}_0, \mathbf{y}_0) = 0$, $i = 1, \ldots, N + J$, and is treated similarly to the master imitation model. The basic imitation model is used as the standard conversion model. Reference conversion functionals are generated by adapting the functionals of the imitation model. The basic imitation model is extended and adapted to the conversion and configuration ID objectives. Operating calculation modules for matrix and graphical data presentation are developed. Conversion and ID models are tested and modified, and their basic log is formed.
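A short illustrative sketch of the ID criterion follows; the reference set, labels, and acceptance threshold are assumptions introduced here for demonstration.

```python
import numpy as np

def id_criterion(x, x_ref):
    """Mean-square ID criterion sigma_x between identified and reference vectors."""
    x, x_ref = np.asarray(x, dtype=float), np.asarray(x_ref, dtype=float)
    return np.sqrt(np.mean((x - x_ref) ** 2))

def recognize(x, reference_set, threshold=0.05):
    """Return the label of the closest reference configuration, if close enough."""
    best_label, best_sigma = None, float("inf")
    for label, x_ref in reference_set.items():
        sigma = id_criterion(x, x_ref)
        if sigma < best_sigma:
            best_label, best_sigma = label, sigma
    return (best_label, best_sigma) if best_sigma <= threshold else (None, best_sigma)

# Example: match an observed configuration against two reference objects
refs = {"object_A": [1.0, 2.0, 3.0], "object_B": [3.0, 2.0, 1.0]}
label, sigma = recognize([1.02, 1.98, 3.01], refs)
```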

3.7. Algorithm Performance

The computational complexity of algorithms is adjusted and controlled in all development and validation stages. Complexity assessment makes it possible to rule out inefficient algorithms prior to implementation, and can be used to adjust complex algorithms without testing all possible options. Deterministic AI systems feature low memory volumes, and therefore space complexity parameters can be ignored, with the focus placed on the time complexity of the algorithms. The following alternative (complementary) standard methods of assessment have been considered: a method of counting elementary operations in dedicated reference algorithms, and an asymptotic method of determining the comparative growth rate of the complexity function regardless of the number of operations. The worst-case Big-O notation scenario is applied to assess computational complexity. To implement the counting method, a replenishable set of reference algorithms is created, a time complexity assessment algorithm is developed, a stand-alone algorithmic layout is made, and validation tests are performed.
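As an illustration of the operation-counting method, the following sketch uses bubble sort as a hypothetical reference algorithm and checks that the counted comparisons grow roughly fourfold when the input size doubles, consistent with a worst-case $O(n^2)$ estimate.

```python
def bubble_sort_ops(n):
    """Hypothetical reference algorithm: counts elementary comparisons performed
    by bubble sort on n items in the worst case (a reversed list)."""
    data = list(range(n, 0, -1))
    ops = 0
    for i in range(len(data)):
        for j in range(len(data) - 1 - i):
            ops += 1                              # one elementary comparison
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return ops

sizes = [100, 200, 400]
counts = [bubble_sort_ops(n) for n in sizes]
# Growth ratio per doubling of n; a value near 4 is consistent with O(n^2)
ratios = [c2 / c1 for c1, c2 in zip(counts, counts[1:])]
```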

3.8. Reference Imitation Training

The imitation system is trained by modifying the initial master model with the composition of variables preserved or extended (by adding new variables). Assume that a user adds a variable $z$ to the initial configuration $\mathbf{x}$. The dimension of the extended vector $\mathbf{x}_c = (x_{c1}, x_{c2}, \ldots, x_{cN}, z)$ is $N + 1$. To calculate the extended configuration, the user modifies the model by applying new functionals. The modified equations form the system $\phi_n(\mathbf{x}_c) = 0$, $n = 1, \ldots, N + 1$. The modified model must be validated to confirm its adequacy. Theoretically, adequacy can be verified by identical values of all N congruent variables: $x_n = x_{cn}$. Given an estimated error, we can use the mean-square deviation as a criterion
$$\sigma_c = \sqrt{\frac{\sum_{n=1}^{N} (x_n - x_{cn})^2}{N}},$$
where $\sigma_c \to 0$. Complex large-scale models require highly reliable validation. To this end, the modified model is repeatedly verified by matching the initial vector to a number of configurations of the extended vector $\mathbf{x}_{cj} = (x_{cj1}, x_{cj2}, \ldots, x_{cjN}, z_j)$, $j = 1, \ldots, J$, where J is the number of extended configurations. The extended mean-square deviation is used here as a criterion
$$\sigma_{cJ} = \sqrt{\frac{\sum_{j=1}^{J}\sum_{n=1}^{N} (x_n - x_{cjn})^2}{NJ}},$$
where $\sigma_{cJ} \to 0$. Upon passing the required validation, the modified model is added to the reference model library as a set of functionals together with the relevant values of the additional variables. Each such value (or set of values, in the case of repeated validation) serves as a unique reference model identifier. The system uses these identifiers to recognize the model and retrieve the relevant reference model, both for calculating extended configurations and for providing it to users as a smart support tool.
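A compact sketch of this validation-and-admission step is shown below; the tolerance value, the rounding of identifiers, and the dictionary-based library are illustrative assumptions.

```python
import numpy as np

def sigma_cJ(x_initial, x_extended_runs):
    """Extended mean-square deviation between the initial N-variable configuration
    and J configurations computed with the modified (N+1)-variable model."""
    x0 = np.asarray(x_initial, dtype=float)                        # shape (N,)
    runs = np.asarray(x_extended_runs, dtype=float)[:, :len(x0)]   # drop added variable z
    return np.sqrt(np.mean((runs - x0) ** 2))

def admit_to_portfolio(library, functionals, x_initial, x_extended_runs, z_values,
                       tolerance=1e-3):
    """Add the user-modified model to the reference portfolio only if it reproduces
    the initial configuration; the added-variable values serve as its identifier."""
    if sigma_cJ(x_initial, x_extended_runs) > tolerance:
        return None                                          # validation failed
    identifier = tuple(round(float(z), 6) for z in z_values)
    library[identifier] = functionals
    return identifier
```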

3.9. Algorithmic Cluster on Accrual Basis

The final result of the project is a pilot algorithmic cluster for intelligent design purposes. It is developed by stage-by-stage accrual of the source (basic) imitation module, with a relevant algorithmic cluster formed at each stage. Every such cluster is created by adding new algorithms (as extensions, add-ons, dedicated modules, etc.) to the previous cluster. The basic algorithms consist of the following: determining the type and composition of configuration variables; determining the type and composition of empirical parameters; determining the type and composition of input, intermediate, and output vectors and matrices; determining the type and composition of the functionals applied; a regular or random source data input algorithm; a source and modified functional input algorithm; a hierarchy adjustment and current dimension control algorithm; a universal optimization algorithm (based on modified standard algorithms); a reference model ID algorithm; an intermediate and output data building and presentation algorithm; and a computational complexity control algorithm. Each cluster is generated as an active SW-algorithmic layout. This layout is subject to computational complexity assessments and validation tests, as specified in the project program. Pilot tests are performed according to a special iteration methodology that specifies both the order of tests and the list of requirements. A professional team tests the suite at each stage and updates the SOW and specifications accordingly. Algorithms are revised, followed by repeated testing in the next stage. The tests are repeated for as long as needed to meet the requirements. The finished cluster (after debugging and testing) is formed as an active layout of the pilot algorithmic cluster. Based on the test results, the following working documents shall be prepared: configurable intelligent design system methodology; configurable intelligent design system—SOW for the production sample; configurable intelligent design system—specifications for the system components; configurable intelligent design system—technical guide for the sysadmin; and configurable intelligent design system—technical guide for users.

4. Preliminary Results

Below, specific implementations of the configurable design methodology in a number of practical application areas are considered and discussed.

4.1. AI Education

An online web search for the keywords “configurable design” primarily yields links to technical engineering resources. However, this does not imply that the demand for a configuration approach in other areas is less prevalent. For instance, searching for a “task generator” provides numerous links to educational sites, applications, and other resources that automatically generate multivariate learning tasks for users. This highlights the relevance and significance of the problem. Typically, these resources focus on generating simple mathematical problems for elementary school students. A popular and widely used resource in this domain is the Math Goodies website [89], which offers math help with interactive math lessons and math worksheets for free (Figure 4).
It should be noted that such resources usually do not produce new tasks but replicate versions of previously created and solved samples (templates) of tasks. An authentic educational content generator is built on the basis of simulation-ontological multi-parametric models of the object (subject) of the training course, section, or project. The models form a multidimensional configuration space in which certain random or regular combinations of parameters select sections of the space that represent specific task configurations [30]. Components of tasks in thermodynamics (polytropic processes), namely pressure–volume, heat flow–volume, work–volume, and temperature–entropy diagrams, are shown in Figure 5. The tasks are generated by random configurable design.
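For illustration, the following sketch generates one solved task configuration for a polytropic process by randomly sampling the independent parameters and computing the dependent ones; the parameter ranges and the restriction to expansion work are assumptions made here for brevity, not the models of [30].

```python
import random

def generate_polytropic_task(seed=None):
    """Sample the independent parameters of a polytropic process P*V^n = const and
    compute the dependent ones, yielding one solved task configuration."""
    rng = random.Random(seed)
    n = rng.choice([1.2, 1.3, 1.4])            # polytropic index
    p1 = rng.uniform(1.0e5, 5.0e5)             # initial pressure, Pa
    v1 = rng.uniform(0.01, 0.10)               # initial volume, m^3
    v2 = v1 * rng.uniform(1.5, 3.0)            # expansion to a larger volume
    p2 = p1 * (v1 / v2) ** n                   # from P1*V1^n = P2*V2^n
    work = (p1 * v1 - p2 * v2) / (n - 1.0)     # expansion work for n != 1, J
    return {"n": n, "P1": p1, "V1": v1, "V2": v2, "P2": p2, "W": work}

# Every call with a different seed yields a distinct, already solved task variant
task = generate_polytropic_task(seed=42)
```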
The configurable design methodology provides generated learning content with several significant advantages. The multidimensionality of the initial models ensures highly diverse content, enabling personalization and facilitating an adaptive learning approach. The multi-parametric approach enables the formation of a multi-level hierarchy of models and corresponding tasks, ranging from general to specific, facilitating the structuring of content based on various criteria, from thematic relevance to complexity. Moreover, the presence of common (parent) content models maintains methodological and technological uniformity amidst the content’s diversity and hierarchy. These content properties, illustrated in Figure 6, using problems in the course of theoretical mechanics (specifically the dynamics of a system of bodies), showcase how the generator structures tasks based on both complexity (vertical personalization) and individual options (horizontal personalization).
In this concept of configurable design, educational systems require essential components such as user and administrative interfaces, data and operation identification and verification modules, as well as user assistance and support tools. One of the most challenging aspects is the development of an alternative machine learning approach that is unconventional for deterministic AI systems. These options are implemented and tested using the current algorithmic layout of the learning platform (Figure 7), specifically on a simple example of the topic “Vector in planar coordinates”. The system utilizes a matrix interface that enables custom operations through the input of a sequence of standard operators. This approach enables the training and improvement of the system, providing methodological support to student users.
In the development of AI education systems, a crucial stage is the integration of content generators with intelligent online learning platforms. An example of such integration is the CLARITY platform developed by iTutorSoft Corporation. CLARITY is a cloud-based AI-powered learning engineering platform designed for scalable, cost-effective systemic adaptive instruction, tutoring, and blended learning. It eliminates the need for guesswork in instructional design and the impractical manual planning of a learning pathway for every individual learner. Instead, CLARITY leverages its unique features to automatically generate highly effective “Off-Road” learning experiences. Pilot tests of the beta version of the training content generator were conducted on this platform, as shown in Figure 8.

4.2. AI Science

Scientific research, particularly outside the IT industry, remains relatively inaccessible for the development and application of AI systems. There is growing interest in profile-informed neural network systems that utilize algorithms based on fundamental equations and axioms specific to particular industries or areas of knowledge. An example of this approach can be seen in the study of hydrodynamics involving the interaction of bodies with fluid or gas flow, where neural networks are utilized to create surrogate models in numerical experiments [92]. The study focuses on the two most common types of turbines, horizontal-axial (collinear) and vertical-axial (orthogonal) turbines with adjustable vane blades. The simulation models incorporate variables and parameters that determine the geometric features, operational characteristics, and airflow parameters of the turbines. Of particular relevance is the problem of determining the optimal sets of variable values (parameters) at which the turbine extracts maximum energy from the incoming airflow. Figure 9 illustrates a natural drawing example showcasing the optimal configurations of the impeller blade in a collinear wind turbine (fan-type turbine). Figure 10 presents the sequence of optimal orientations for an orthogonal turbine blade, highlighting the influence of blade curvature on orientation and applied forces. Moreover, this AI complex can assist in planning field tests for turbines, leading to significant time and resource savings.

4.3. AI Commerce

The commercial application of configurable design aims to address the significant contradiction between the need for product customization (individualization) and the demand for mass standardized production and sales. The objective is to provide businesses with an intelligent add-on tool for existing e-commerce platforms, enabling the generation of optimal offers online [86,95,96]. This add-on allows all participants in e-commerce to engage with the platforms, facilitating product profile construction, smart search, active marketing, and other functionalities.
Flexible and intelligent ordering capabilities are crucial advantages in today’s e-commerce landscape. Typically, an ordered product can be described by several independent basic characteristics, referred to as its own parameters. Based on these parameters, other order characteristics can be calculated, often subject to certain constraints. Users often prioritize these characteristics over their own parameters when placing an order. This introduces the challenge of translating the order from user-understandable characteristics into manufacturer-friendly terms. Complications arise due to the existence of dependencies and limitations among these characteristics. Additionally, not all user requirements for characteristics can be met by the manufacturer. As a compromise, manufacturers may offer users alternative ordering options with characteristics that closely align with their requested preferences.
Generators or constructors of commercial requests are built upon simulation models of products. For instance, the design of vessels (such as vases and utensils) for orders of finished products and/or 3D printing is considered [86,95]. A comprehensive model is employed, encompassing geometric and weight characteristics of the vessel. The vessel is formed using elements such as ellipsoids, truncated cones, and disks (refer to Figure 11). In addition to parameters determining the geometric contour, specifications for wall thickness, vessel material, specific gravity, and others are provided. The model comprises a total of 36 parameters categorized based on different criteria, including initial-calculated, basic-intermediate, geometric-physical, and technological-user. Configuration examples for product requests are depicted in Figure 12.
The imitational approach enables the optimization of design to ensure that the product meets the user’s requirements while considering the limitations imposed by typical geometric shapes and the manufacturing capabilities for producing ordered objects. The system aids in the classification of offers and orders, providing manufacturers with a better understanding of which technology to employ for production. Additionally, this concept allows designers to explore and develop their innovative ideas, as long as they fall within the system’s constraints, including technological, physical, and other limitations.
Another example of commercial configurable design is the tour query generator. This system enables the creation of numerous tour options tailored to each user. It also provides travel companies with valuable insights into customer demand, which can influence the offerings in the tourism market and improve the alignment between supply and demand. The tour model considered in this context is relatively straightforward, encompassing parameters such as tour timing and duration, seasonal weather characteristics, apartment size and location, and tour cost (Figure 13). Overall, the model consists of 28 parameters.
Figure 14 shows an AI-generated example of an optimized tourist route configuration for London.
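A minimal sketch of a tour query generator is given below; it keeps only a few of the 28 parameters (month, duration, apartment size, price), uses an assumed price table, and enumerates the configurations that satisfy a budget constraint, which is the kind of feasibility filtering described above rather than the actual generator from [95,96].

```python
# Enumerate feasible tour configurations from a small parameter grid and rank by cost.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class TourOption:
    month: str
    nights: int
    apartment_rooms: int
    price_per_night: float

    @property
    def total_cost(self) -> float:
        return self.nights * self.price_per_night

def generate_tours(months, nights_opts, rooms_opts, price_table, budget, min_rooms):
    """Enumerate tour configurations that satisfy the room and budget constraints."""
    feasible = []
    for month, nights, rooms in product(months, nights_opts, rooms_opts):
        option = TourOption(month, nights, rooms, price_table[(month, rooms)])
        if rooms >= min_rooms and option.total_cost <= budget:
            feasible.append(option)
    return sorted(feasible, key=lambda t: t.total_cost)

# Assumed (illustrative) seasonal price table, keyed by (month, apartment size).
price_table = {("June", 1): 90, ("June", 2): 140, ("September", 1): 70, ("September", 2): 110}
for tour in generate_tours(["June", "September"], [3, 5, 7], [1, 2],
                           price_table, budget=700, min_rooms=1)[:5]:
    print(tour, tour.total_cost)
```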

4.4. Validity and Significance

The validity of configurable design is founded on its inherent property of determinism. This property is achieved by excluding unverified external data (databases or knowledge bases) and instead employing working models with high levels of adequacy and reliability. The use of data arrays is minimized and is typically limited to system-generated data for specific purposes or to a restricted set of external data obtained through the system’s experiment-planning feature. The adequacy of the models is guaranteed by their fundamental nature, grounded in established scientific and specialized methodologies, relevant laws, axioms, theorems, and so forth. In addition, hierarchical modeling provides a quasi-dynamic effect as an alternative to neural dynamics, and system performance is enhanced through reference learning, which modifies the simulation models during online interaction with users.
The significance of configuring systems is determined both by their specific methodological and technological characteristics and by their practical relevance, as indicated by demand. The sphere of education, particularly in the natural and technical domains, is characterized by high quality and standardization of methods and procedures, built on a long history and extensive expertise; this provides a solid foundation for the development of deterministic AI educational systems. The positive reputation of online educational resources such as [89,91] can also be attributed to their use of educational materials directly aligned with US educational standards, which serve as a reference in many states and regions. Government bodies are also showing interest in supporting AI education systems, as evidenced by their funding of relevant projects and their implementation in schools and universities [68,91]. For instance, the development of an automated educational content generation system [68] resulted in a new university course on theoretical mechanics, complete with next-generation educational materials, and a specialized consulting group was established to support AI content generation in educational institutions.
The practical application of content generators [68,69] has revealed the significant impact of new content on didactics and learning outcomes. First, studying a particular theoretical topic in multiple distinct configurations, rather than in a single book format, greatly enhances the quality of perception: the material is effectively “recognized” not only in related fields of knowledge but also in unexpected practical applications. Second, the diverse range of educational tasks and theorems, characterized by methodological unity and even external similarity, fosters user interaction and collaboration, increasing engagement and the amount of information assimilated. Furthermore, through joint training activities, leaders emerge, and users can be grouped by level of training and intellectual capability, allowing training materials to be tailored across subjects and levels of complexity. Materials such as challenging tasks for independent work or unique theoretical content for training seminars are offered on a competitive basis, encouraging healthy competition among users.
The experience gained in AI education can be effectively transferred to scientific research, where rigorous verification and testing of fundamental theoretical models are required. By employing robust initial models with a high degree of generality, configurable design enables the construction of a networked hierarchical system of models that yields substantial amounts of valid specialized information. The potential of the configuration approach is exemplified by the study of wind turbines [93,94]. These investigations use comprehensive fundamental aerodynamic theory originally developed for the aviation industry and subsequently adapted to wind energy applications. The reliability and significance of this research are further supported by the patents filed on the basis of the intelligent design outcomes and documented in the relevant monographs [93,94].
The validity of commercial configurable design systems is closely tied to industry standards and specifications. These characteristics are embedded in the simulation-ontological models through appropriate parameters and boundary conditions [95,96], enabling working algorithms to generate suitable product configurations. The demands for accuracy in commercial configurations are less stringent than in the educational and research sectors, but the available methodological support is also comparatively lower. The significance of commercial AI systems is ultimately determined by their economic efficiency; however, during the initial stages of conceptual and methodological development, relevance and importance are usually evaluated by experts. In the case of the project discussed here, peer reviews played a crucial role in securing government funding for the development of an AI-based e-commerce platform by R. Yavich, S. Malev, and V. Rotkin. The project was carried out from 2018 to 2020 under Israel Innovation Authority grant No. 66975 (Israel) [95,96] and subsequently led to a US patent [95] based on the methodology and algorithms developed for the platform.

4.5. Features and Risks

Unlike neural networks, whose intellectual properties are formed from large volumes of specially adapted data, the configurable design systems under consideration are trained directly during operation, in the course of a working dialogue with users. The intellectual effect is achieved by correcting and modifying the reference models, not by independent processing and rewriting of models by the user. The hierarchical inter-level transformation of models performed by the system, accompanied by a change in their dimension, produces a jump-like effect in their configurations and opens new possible directions for transformation. Thus, in interaction with the user, the system forms a virtual network of design trajectories that encourages the user to search for acceptable (optimal) configurations. As reference configurations accumulate, this effect intensifies, increasing the intellectual capabilities of the system, including the possibility of its further training.
Considering the experience of preliminary studies, the main risks of this project are associated with the estimated resource intensity (complexity) of the algorithms. Both algorithmic and hardware methods of risk mitigation are possible. Mitigation requires continuous monitoring of the computational complexity of the algorithms and, if necessary, their replacement or modification, or a change in the system architecture. Hardware regulation of computational resources is also necessary.
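The following sketch, not the authors' implementation, illustrates this reference-training loop in miniature under assumed names: configurations accepted during the user dialogue are stored in a reference library, and later queries are first matched against this growing library before a new configuration would be generated from scratch.

```python
# Deterministic "reference training" in miniature: accepted configurations accumulate
# in a library, and new queries are matched against the stored references first.
from dataclasses import dataclass, field

@dataclass
class ReferenceLibrary:
    references: list = field(default_factory=list)   # accepted configurations (dicts)

    def add(self, config: dict) -> None:
        """Store a configuration the user accepted during the dialogue."""
        self.references.append(dict(config))

    def nearest(self, query: dict, tol: float = 0.1):
        """Return the stored reference closest to the query, if any lies within tolerance."""
        def distance(ref):
            keys = set(ref) & set(query)
            return max(abs(ref[k] - query[k]) / max(abs(query[k]), 1e-9) for k in keys)
        candidates = [r for r in self.references if set(r) & set(query)]
        if not candidates:
            return None
        best = min(candidates, key=distance)
        return best if distance(best) <= tol else None

library = ReferenceLibrary()
library.add({"capacity_l": 1.5, "height_cm": 20})        # accepted in an earlier dialogue
print(library.nearest({"capacity_l": 1.45, "height_cm": 21}))
```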

5. Conclusions

An anthropomorphic approach to machine-learnable neural networks is the mainstream of AI studies, and research and professional education focus on this trend. In this context, imitation of human consciousness, with learnability as a fundamental option, is regarded as the most efficient route. At the same time, neural networks based on stochastic algorithms imply a certain degree of indeterminacy, similar to the human brain: they can make and accumulate errors, which can (and does) have major negative impacts, including social implications. Deterministic rule-based AI systems, considered in this review as an alternative, guarantee highly accurate and determined solutions, but conventionally they lack training options and are therefore relatively cumbersome and difficult to develop, and they rely on Big Data and knowledge bases.
This review is based on the primary hypothesis that a deterministic training option (as an alternative to ML) significantly mitigates the drawbacks of rule-based systems, making them comparable to neural networks in terms of performance and resources. In addition, special options that combine all the hypotheses of this study contribute to the improvement of deterministic AI systems. Hierarchical imitation modeling creates a quasi-dynamic network effect as an alternative to neural network dynamics, improving the flexibility and adaptability of deterministic systems. A configuration hierarchy, together with computational and logical algorithms, enables fully intelligent operations while minimizing the use of Big Data and knowledge bases. Together, these options ensure the universal nature of deterministic AI systems and the ability to reconfigure and apply them in education, science, engineering, business, social applications, and other sectors.
The validity and significance of the project are supported by the initial results obtained from the development and testing of educational, research, and commercial configuration AI systems. These results are manifested in numerous applications, designs, and methods that have been protected by patents and documented in the authors’ monographs. All components of the methodology, the applied models and algorithms, and the approaches and technologies for their implementation are subjects of fundamental research; none of them have previously been considered in this context, either individually or collectively, and the results are expected to pave the way for a new direction in AI research. Given the positive experience in solving specific problems, it appears promising to develop the project further towards a universal AI configuration complex, while expanding the practical application areas of the methodology under consideration.

Author Contributions

R.Y., S.M., I.V. and V.R. contributed to all parts of the work and the preparation of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank Alexei Kanel-Belov for interesting and fruitful discussions regarding this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Proudfoot, D. Anthropomorphism and AI: Turing’s much misunderstood imitation game. Artif. Intell. 2011, 175, 950–957. [Google Scholar] [CrossRef] [Green Version]
  2. Watson, D. The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. Minds Mach. 2019, 29, 417–440. [Google Scholar] [CrossRef] [Green Version]
  3. Bhowmik, P. Research Study on basic Understanding of Artificial Neural Networks. Glob. J. Comput. Sci. Technol. Neural Artif. Intell. 2019, 19, 5–7. [Google Scholar]
  4. Arleen, S.; Kathinka, E.; Michele, F. Anthropomorphism in AI. AJOB Neurosci. 2020, 11, 88–95. [Google Scholar] [CrossRef]
  5. Yang, Y.; Yue, L.; Xingyang, L.; Jin, A.; Yifan, L. Anthropomorphism and customers’ willingness to use artificial intelligence service agents. J. Hosp. Mark. Manag. 2022, 31, 1–23. [Google Scholar] [CrossRef]
  6. Ling, E.C.; Tussyadiah, I.; Tuomi, A.; Stienmetz, J.; Ioannou, A. Factors influencing users’ adoption and use of conversational agents: A systematic review. Psychol. Mark. 2021, 38, 1031–1051. [Google Scholar] [CrossRef]
  7. Navidi, N.; Landry, R., Jr. New Approach in Human-AI Interaction by Reinforcement-Imitation Learning. Appl. Sci. 2021, 11, 3068. [Google Scholar] [CrossRef]
  8. Tor, G.; Margunn, A. Augmenting the algorithm: Emerging human-in-the-loop work configurations. J. Strateg. Inf. Syst. 2020, 29, 101614. [Google Scholar] [CrossRef]
  9. Vassev, E.; Hinchey, M. Knowledge Representation and Reasoning for Intelligent Software Systems. Computer 2011, 44, 96–99. [Google Scholar] [CrossRef]
  10. Inozemtsev, V.A. Computer modeling of knowledge in artificial intelligence. Izv. MGTU MAMI 2015, 9, 76–83. [Google Scholar] [CrossRef]
  11. Reich, Y. Layered models of research methodologies. Artif. Intell. Eng. Des. Anal. Manuf. 1994, 8, 263–274. [Google Scholar] [CrossRef] [Green Version]
  12. Matt, D.; Winson, H.; Alvaro, H.; Aniruddha, K.; Eric, K.; Roozbeh, M.; Jordi, S.; Dustin, S.; Eli, V.; Matthew, W.; et al. RoboTHOR: An Open Simulation-to-Real Embodied AI Platform. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 3164–3174. [Google Scholar] [CrossRef]
  13. Bettina, B.; Allison, L.; Mike, B. AI in Education: Learner Choice and Fundamental Rights. Learn. Media Technol. 2020, 5, 312–324. [Google Scholar] [CrossRef]
  14. Dignum, V. The role and challenges of education for responsible AI. Lond. Rev. Educ. 2021, 19, 1–11. [Google Scholar] [CrossRef]
  15. Petrović, V.M. Artificial Intelligence and Virtual Worlds—Toward Human-Level AI Agents. IEEE Access 2018, 6, 39976–39988. [Google Scholar] [CrossRef]
  16. Feuerriegel, S.; Dolata, M.; Schwabe, G. Fair AI. Bus. Inf. Syst. Eng. 2020, 62, 379–384. [Google Scholar] [CrossRef]
  17. Pedreschi, D.; Giannotti, F.; Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F. Meaningful Explanations of Black Box AI Decision Systems. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 9780–9784. [Google Scholar] [CrossRef] [Green Version]
  18. Angelov, P.P.; Soares, E.A.; Jiang, R.; Arnold, N.I.; Atkinson, P.M. Explainable artificial intelligence: An analytical review. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2021, 11, e1424. [Google Scholar] [CrossRef]
  19. Rai, A. Explainable AI: From black box to glass box. J. Acad. Mark. Sci. 2020, 48, 137–141. [Google Scholar] [CrossRef] [Green Version]
  20. Goebel, R.; Cher, A.; Holzinger, K.; Lecue, F.; Akata, Z.; Stumpf, S.; Holzinger, A. Explainable AI: The New 42? In Proceedings of the Machine Learning and Knowledge Extraction (CD-MAKE 2018), Hamburg, Germany, 27–30 August 2018; Holzinger, A., Kieseberg, P., Tjoa, A., Weippl, E., Eds.; Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2018; Volume 11015. [Google Scholar]
  21. Bostrom, N. Strategic Implications of Openness in AI Development. Glob. Policy 2017, 8, 135–148. [Google Scholar] [CrossRef] [Green Version]
  22. Volinsky, I.; Tal, L.; Meir, Z. X-ray image classification using neural network. Funct. Differ. Equations 2020, 27, 35–38. [Google Scholar] [CrossRef]
  23. Doe, J.; Professional AI. Rule-Based Systems. Available online: https://www.professional-ai.com/rule-based-systems.html (accessed on 23 March 2023).
  24. Tiihonen, J.; Raatikainen, M.; Myllärniemi, V.; Männistö, T. Carrying Ideas from Knowledge-Based Configuration to Software Product Lines. In Proceedings of the Software Reuse—Bridging with Social-Awareness (ICSR 2016), Limassol, Cyprus, 5–7 June 2016; Kapitsaki, G., Santana de Almeida, E., Eds.; Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2016; Volume 9679. [Google Scholar]
  25. Sweeney, P.J.; Ilyas, I.F.; Zhou, W.; Oldford, W. Knowledge Representation Systems and Methods Incorporating Inference Rules. U.S. Patent 0166373 A1, 28 June 2012. [Google Scholar]
  26. Buchanan, B.G.; Duda, R.O. Principles of Rule-Based Expert Systems. Adv. Comput. 1983, 22, 163–216. [Google Scholar] [CrossRef]
  27. Katalnikova, S.; Novickis, L. Choice of Knowledge Representation Model for Development of Knowledge Base: Possible Solutions. Int. J. Adv. Comput. Sci. Appl. (IJACSA) 2018, 9. [Google Scholar] [CrossRef] [Green Version]
  28. Nabeel, A.M.; Muhammad, W.; Muhammad, U.; Ghani, K.; Waqar, M.; Mahnoor, A.H. A survey of ontology learning techniques and applications. Database 2018, 2018, bay101. [Google Scholar] [CrossRef] [Green Version]
  29. Vajna, S. Approaches of Knowledge-Based Design. In Proceedings of the 29th Design Automation Conference—ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (Parts A and B), Chicago, IL, USA, 2–6 September 2003; Volume 2, pp. 375–382. [Google Scholar] [CrossRef]
  30. Vladimir, R.; Roman, Y.; Sergey, M. Concept of A.I. Based Knowledge Generator. J. Educ. e-Learn. Res. 2018, 5, 235–241. [Google Scholar] [CrossRef]
  31. Da Silveira, G.; Borenstein, D.; Fogliatto, F.S. Mass customization: Literature review and research directions. Int. J. Prod. Econ. 2001, 72, 1–13. [Google Scholar] [CrossRef]
  32. Shuyou, Z.; Jinghua, X.; Huawei, G.; Jianrong, T. A Research Review on the Key Technologies of Intelligent Design for Customized Products. Engineering 2017, 3, 631–640. [Google Scholar] [CrossRef]
  33. Trautmann, L. Product customization and generative design. Multidiszciplináris Tudományok 2021, 11, 87–95. [Google Scholar] [CrossRef]
  34. Dou, Z.; Sun, Y.; Wu, Z.; Wang, T.; Fan, S.; Zhang, Y. The Architecture of Mass Customization-Social Internet of Things System: Current Research Profile. ISPRS Int. J.-Geo Inf. 2021, 10, 653. [Google Scholar] [CrossRef]
  35. Jannach, D.; Felfernig, A.; Kreutler, G.; Zanker, M.; Friedrich, G. Research Issues in knowledge-based Configuration. In Mass Customization Information Systems in Business; Blecker, T., Friedrich, G., Eds.; Idea Group Inc.: Calgary, AB, Canada, 2007; pp. 221–236. [Google Scholar] [CrossRef]
  36. Fabian, D.; Patricia, K.; Benjamin, S.; Sandro, W. Model and Knowledge Representation for the Reuse of Design Process Knowledge Supporting Design Automation in Mass Customization. Appl. Sci. 2021, 11, 9825. [Google Scholar] [CrossRef]
  37. Nikolaus, F.; Martin, S.; Ulrike, K. The “I Designed It Myself” Effect in Mass Customization. Manag. Sci. 2009, 56, 125–140. [Google Scholar] [CrossRef] [Green Version]
  38. Ren, B.; Qiu, L.; Zhang, S.; Tan, J.; Cheng, J. Configurable product design considering the transition of multi-hierarchical models. Chin. J. Mech. Eng. 2013, 26, 217–224. [Google Scholar] [CrossRef]
  39. Guillon, D.; Ayachi, R.; Vareilles, É.; Aldanondo, M.; Villeneuve, É.; Merlo, C. Product service system configuration: A generic knowledge-based model for commercial offers. Int. J. Prod. Res. 2021, 59, 1021–1040. [Google Scholar] [CrossRef]
  40. Blecker, T.; Friedrich, G. (Eds.) Mass Customization Information Systems in Business; IGI Global: Hershey, PA, USA, 2007. [Google Scholar] [CrossRef]
  41. Lothar, H.; Alexander, F.; Andreas, G.; Juha, T. Knowledge-Based Configuration: From Research to Business Cases; Elsevier Inc.: Amsterdam, The Netherlands, 2014; 357p. [Google Scholar] [CrossRef]
  42. Molina, M. An Intelligent Sales Assistant for Configurable Products. In Web Intelligence: Research and Development; Zhong, N., Yao, Y., Liu, J., Ohsuga, S., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2001; Volume 2198. [Google Scholar]
  43. Alessio, T.; Elisa, P.; Cipriano, F. Sales configurator capabilities to avoid the product variety paradox: Construct development and validation. Comput. Ind. 2013, 64, 436–447. [Google Scholar] [CrossRef]
  44. Bernard, Z.; Alexandre, M.; Levent, Y. Artificial intelligence in modeling and simulation. In Encyclopedia of Complexity and System Science; Springer: New York, NY, USA, 2009. [Google Scholar]
  45. Fan, W.; Chen, P.; Shi, D.; Guo, X.; Kou, L. Multi-agent modeling and simulation in the AI age. Tsinghua Sci. Technol. 2021, 26, 608–624. [Google Scholar] [CrossRef]
  46. Kikolski, M. Study of Production Scenarios with the Use of Simulation Models. Procedia Eng. 2017, 182, 321–328. [Google Scholar] [CrossRef]
  47. Pourbafrani, M.; van Zelst, S.J.; van der Aalst, W.M.P. Supporting Automatic System Dynamics Model Generation for Simulation in the Context of Process Mining. In Business Information Systems (BIS 2020); Abramowicz, W., Klein, G., Eds.; Lecture Notes in Business Information Processing; Springer: Cham, Switzerland, 2020; Volume 389. [Google Scholar]
  48. Sargent, R.G. Validation and verification of simulation models. In Proceedings of the 2004 Winter Simulation Conference, Washington, DC, USA, 5–8 December 2004. [Google Scholar] [CrossRef]
  49. Balci, O.; Nance, R.E. The simulation model development environment: An overview. In Proceedings of the 24th Conference on Winter Simulation, Arlington, VA, USA, 13–16 December 1992; pp. 726–736. [Google Scholar] [CrossRef]
  50. Yuan, Y.; Dogan Can, A.; Viegelahn, G.L. A flexible simulation model generator. Comput. Ind. Eng. 1993, 24, 165–175. [Google Scholar] [CrossRef]
  51. Hitz, M.; Werthner, H.; Guariso, G. An intelligent simulation model generator. Simulation 1989, 53, 57–66. [Google Scholar] [CrossRef]
  52. Volinsky, I.; Ifrach, O.; Loewenstein, Y. From standard approach to ode neural networks. Funct. Differ. Equations 2020, 27, 29–34. [Google Scholar] [CrossRef]
  53. Yun-Han, L.; Seiven, L.; Ruay-Shiung, C. Improving job scheduling algorithms in a grid environment. Future Gener. Comput. Syst. 2011, 27, 991–998. [Google Scholar] [CrossRef]
  54. Arabnejad, H.; Barbosa, J.G. A Budget Constrained Scheduling Algorithm for Workflow Applications. J. Grid Comput. 2014, 12, 665–679. [Google Scholar] [CrossRef]
  55. Topcuoglu, H.; Hariri, S.; Wu, M.Y. Performance-effective and low-complexity task scheduling for heterogeneous computing. IEEE Trans. Parallel Distrib. Syst. 2002, 13, 260–274. [Google Scholar] [CrossRef] [Green Version]
  56. Chen, T.; Tang, K.; Chen, G.; Yao, X. On the analysis of average time complexity of estimation of distribution algorithms. In Proceedings of the IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 453–460. [Google Scholar] [CrossRef] [Green Version]
  57. King, A.; Shen, K.; Benoy, F. Lower-bound Time-Complexity Analysis of Logic Programs. In Proceedings of the 1996 International Symposium—Logic Programming, Zurich, Switzerland, 24–26 July 1997; Maluszynski, J., Ed.; MIT Press: Cambridge, MA, USA, 1997; pp. 261–276, ISBN 0-262-63180-6. [Google Scholar]
  58. Scieur, D. Acceleration in Optimization. Ph.D. Thesis, Université Paris Sciences et Lettres, Paris, France, 2018. [Google Scholar]
  59. Flötteröd, G. A search acceleration method for optimization problems with transport simulation constraints. Transp. Res. Part B Methodol. 2017, 98, 239–260. [Google Scholar] [CrossRef]
  60. Ramin, G.; Reza, G.M.; Mohammad, N. Comparative studies of metamodeling and AI-Based techniques in damage detection of structures. Adv. Eng. Softw. 2018, 125, 101–112. [Google Scholar] [CrossRef]
  61. Jin, Y. Surrogate-assisted evolutionary computation: Recent advances and future challenges. Swarm Evol. Comput. 2011, 1, 61–70. [Google Scholar] [CrossRef]
  62. Barkalov, K.; Lebedev, I.; Kozinov, E. Acceleration of Global Optimization Algorithm by Detecting Local Extrema Based on Machine Learning. Entropy 2021, 23, 1272. [Google Scholar] [CrossRef]
  63. Yan, P.; Yu, J.; Takagi, H. Search Acceleration of Evolutionary Multi-Objective Optimization Using an Estimated Convergence Point. Mathematics 2019, 7, 129. [Google Scholar] [CrossRef] [Green Version]
  64. Fleischer, M.A. Cybernetic optimization by simulated annealing: Accelerating convergence by parallel processing and probabilistic feedback control. J. Heuristics 1996, 1, 225–246. [Google Scholar] [CrossRef]
  65. Borysenko, O.; Byshkin, M. CoolMomentum: A method for stochastic optimization by Langevin dynamics with simulated annealing. Sci. Rep. 2021, 11, 10705. [Google Scholar] [CrossRef] [PubMed]
  66. How, M.L.; Hung, W.L.D. Educing AI-Thinking in Science, Technology, Engineering, Arts, and Mathematics (STEAM) Education. Educ. Sci. 2019, 9, 184. [Google Scholar] [CrossRef] [Green Version]
  67. Thiti, J.; Kitsadaporn, J.; Thada, J. A Common Framework for Artificial Intelligence in Higher Education (AAI-HE Model). J. Int. Educ. Stud. 2021, 14, 94–103. [Google Scholar] [CrossRef]
  68. Zvolinsky, V.P.; Rotkin, V.M.; Golovin, V.G.; Matveeva, N.I. Avtomatizirovannie Sistemi Formirovania Uchebnogo Kontenta [Automated Systems for the Formation of Educational Content]; Scientific Monograph; VSAU: Volgograd, Russia, 2017; 120p. (In Russian) [Google Scholar]
  69. Rotkin, V. Methodology of immanent learning content. Sci. Isr.-Technol. Advantages 2017, 19, 7. [Google Scholar]
  70. Roman, Y.; Sergey, M.; Vladimir, R. Triangle Generator for Online Mathematical E-learning. High. Educ. Stud. 2020, 10, 72–79. [Google Scholar] [CrossRef]
  71. Rotkin, V. Generation of training initial-generation content. Electrotech. Comput. Syst. 2020, 32, 66–73. [Google Scholar] [CrossRef]
  72. Rotkin, V. Trainable generator of educational content. Int. J. Adv. Appl. Sci. 2021, 10, 363–372. [Google Scholar] [CrossRef]
  73. MacLellan, C.J.; Koedinger, K.R. Domain General Tutor Authoring with Apprentice Learner Models. Int. J. Artif. Intell. Educ. 2022, 32, 76–117. [Google Scholar] [CrossRef]
  74. Stevens, R.; Taylor, V.; Nichols, J.; Maccabe, A.B.; Yelick, K.; Brown, D. AI for Science: Report on the Department of Energy (DOE) Town Halls on Artificial Intelligence (AI) for Science; Argonne National Lab (ANL): Argonne, IL, USA, 2020. [Google Scholar] [CrossRef]
  75. Wang, D.; Weisz, J.D.; Muller, M.; Ram, P.; Geyer, W.; Dugan, C.; Gray, A. Human-AI Collaboration in Data Science: Exploring Data Scientists’ Perceptions of Automated AI. In Proceedings of the ACM on Human–Computer Interaction, Paphos, Cyprus, 2–6 September 2019. [Google Scholar] [CrossRef] [Green Version]
  76. Kobe University; ScienceDaily. Artificial Intelligence That Can Run a Simulation Faithful to Physical Laws: Replicating Energy Conservation and Dissipation Using Digital Analysis. Available online: www.sciencedaily.com/releases/2020/12/201218094502.htm (accessed on 30 June 2022).
  77. Matsubara, T.; Ishihara, A.; Yaguchi, T. Deep Energy-based Modeling of Discrete-Time Physics. In Proceedings of the Advances in Neural Information Processing Systems, Online, 6–12 December 2020; Volume 33. [Google Scholar]
  78. Hakuk, Y.; Reich, Y. Automated discovery of scientific concepts: Replicating three recent discoveries in mechanics. Adv. Eng. Inform. 2020, 44, 101080. [Google Scholar] [CrossRef]
  79. Sagi, S.R.; Chen, F.F. A framework for intelligent design of manufacturing cells. J. Intell. Manuf. 1995, 6, 175–190. [Google Scholar] [CrossRef]
  80. Behrouz, D.; Rezaei, M.A.; Tilmann, R.; Volker, M. Continuous Deployment of Machine Learning Pipelines; DFKI GmbH Technische Universität Berlin: Berlin, Germany, 2019. [Google Scholar] [CrossRef]
  81. Avasker, S.; Domoshnitsky, A.; Kogan, M.; Kupervasser, O.; Kutomanov, H.; Rofsov, Y.; Volinsky, I.; Yavich, R. A method for stabilization of drone flight controlled by autopilot with time delay. SN Appl. Sci. 2020, 2, 1–12. [Google Scholar] [CrossRef] [Green Version]
  82. Bhowmik, P.; Partha, A.S. A Data-Centric Approach to Improve Machine Learning Model’s Performance in Production. Int. J. Eng. Adv. Technol. (IJEAT) 2021, 11, 240–243. [Google Scholar] [CrossRef]
  83. Doris, X.; Miao, A.; Parameswaran, N.; Polyzotis, P. Production Machine Learning Pipelines: Empirical Analysis and Optimization Opportunities. In Proceedings of the 2021 International Conference on Management of Data, Xi’an, China, 20–25 June 2021. [Google Scholar]
  84. Bhowmik, P. Machine Learning in Production: From Experimented ML Model to System. SciOpen Prepr. 2022. [Google Scholar] [CrossRef]
  85. Jadhav, A.P. Influence of artificial intelligence on youth: A perspective of e-commerce. IJCRT 2021, 9, 2320–2882. [Google Scholar]
  86. Yavich, R.; Rotkin, V.; Malev, S.; Zamir, E.; Kadrawi, O. Smart Content Creation for e-Commerce. U.S. Patent 20210264366A1, 26 August 2021. [Google Scholar]
  87. David, A.I. 5 Ways Artificial Intelligence Is Shaping the Future of e-Commerce. Entrepreneur, 8 November 2016. [Google Scholar]
  88. Jannach, D. ADVISOR SUITE—A Knowledge-Based Sales Advisory-System. In Proceedings of the 16th Eureopean Conference on Artificial Intelligence (ECAI’2004)—Including Prestigious Applicants of Intelligent Systems (PAIS 2004), Valencia, Spain, 22–27 August 2004. [Google Scholar]
  89. Math Goodies. Math Help with Interactive Math Lessons and Math Worksheets. Available online: https://www.mathgoodies.com/ (accessed on 23 March 2023).
  90. Rotkin, V. Intelligent generator of knowledge based on moderated machine learning. ResearchGate 2020. [Google Scholar] [CrossRef]
  91. Rotkin, V. Learning Course “Training Content Generator”—Adaptive Learning Engineering. Available online: http://www.itutorsoft.com/itutorsoft/ (accessed on 23 March 2023).
  92. Ang, E.H.; Wang, G.; Ng, B.F. Physics-Informed Neural Networks for Low Reynolds Number Flows over Cylinder. Energies 2023, 16, 4558. [Google Scholar] [CrossRef]
  93. Sokolovsky, Y.; Rotkin, V. Theoretical and Technical Basis for the Optimization of Wind Energy Plants; Lulu Press Inc.: Morrisville, NC, USA, 2017; 120p. [Google Scholar]
  94. Sokolovsky, Y.B.; Rotkin, V.M.; Limonov, L.G.; Zyryanov, V.M. Aktualnaia Vetroenergetika. Generatcia I Nakoplenie Energii [Up-to-Date Wind Power Engineering. Generation and Storage of Energy]; Monograph; NSTU: Novosibirsk, Russia, 2021; 211p. (In Russian) [Google Scholar]
  95. Roman, Y.; Vladimir, R.; Sergey, M.; Eliahu, Z.; Ohr, K. AI-based e-commerce platform development—First year. In Architecture, Methodology, Modelling, and Algorithms; Technical Report; Ariel University: Ariel, Israel, 2019; 38p. [Google Scholar] [CrossRef]
  96. Roman, Y.; Vladimir, R.; Sergey, M.; Eliahu, Z.; Ohr, K. AI-based e-commerce platform development—Second year. In Development of E-Commerce Platform for Tourism; Technical Report; Ariel University: Ariel, Israel, 2020; 34p. [Google Scholar] [CrossRef]
Figure 1. General structural and functional diagram of the imitation hierarchical design.
Figure 2. Custom configurable system design training.
Figure 3. Iterative modification of algorithms. Algorithmic cluster testing scheme.
Figure 4. Math Goodies worksheet example for geometry [89].
Figure 5. Elements of random configurations of tasks on the topic “Polytropic processes, Thermodynamics”. Interface screenshot.
Figure 6. The structuring of a learning task by levels of complexity, on the example of the theme “Dynamics of body systems”, Mechanics [69].
Figure 7. Intelligent task generator. Interface screenshot [72,90].
Figure 8. Training Content Generator, screenshot of the initial page [91].
Figure 9. Imitation of the optimal blade of a collinear wind turbine. Dependence of blade twist on rotation speed: numerical experiment. Interface screenshot [93,94].
Figure 10. Imitation of the optimal blade of an orthogonal wind turbine. Effect of blade orientation on the lift index: numerical experiment. Interface screenshot [93,94].
Figure 11. Scheme of formation of the vessel model [95].
Figure 12. Configuration elements. E-commerce products [86].
Figure 13. Tour generator layout. Interface screenshot [95].
Figure 14. Optimized tourist route configuration, case study of London. Interface screenshot [96].
Table 1. Research background: structural-logical navigation.

Designations | Brief Summary: Key Phrases | Themes Extension
Anthropomorphism | Imitation of human intelligence. Machine learning of neural networks. Social aspects of human interaction with AI. AI as a black box. Modeling methodology in AI. AI for education, science, engineering design and commerce. | Transparency and improvement of modeling as factors for the effectiveness of AI and compensation for social risks. Features of the use of AI in educational, research, engineering and commercial areas.
Transparency | AI problems that require increased accuracy and certainty of solutions. Immanently transparent deterministic AI. Methodology of explainable neural networks. | The potential of deterministic AI systems as an alternative to explainable AI with limitations on accuracy and transparency.
Determinacy | Deterministic rule-based AI systems; complex representation of knowledge; expert systems; ontology; configurable design. | Multi-parameter configurable design as a perspective for universal transformable AI systems.
Configurability | Configurable design as a means of mass customization (personalization). Multiparametric hierarchical approaches to generating configurations. Collective (swarm) machine-user intelligence. | A general methodology for the formation of universal (multi-profile) AI systems of configurable design with machine training options. Profile-based modeling in AI systems.
Modeling and Imitating | Methodologies and environments for multi-agent (hybrid) modeling and imitation in AI systems. Natural and surrogate models. Generation and validation of simulation models. | Criteria of time algorithmic complexity as basic characteristics of testing, selection and adaptation of models. Simulation modeling in AI design.
Complexity | Temporal and spatial algorithmic complexity as performance factors for AI systems. Balancing system resources to control the speed of calculations. Speed characteristics of AI algorithms. | Characteristics of temporal (speed) complexity as the main criteria for the application, adaptation, and modification of local and global optimization algorithms.
Education and Science | Neural network and subject-simulation approaches in the educational and research field. Profile-oriented knowledge generators. Methods for special training of specialized AI systems. | Methods of setting and solving educational and research problems. Development and testing of educational and research knowledge generators.
Engineering and Business | Stochastic and deterministic AI systems in engineering, production and commercial fields: comparative characteristics. Features of engineering and commercial systems of AI design. | Techniques for the formation of configurable design systems. Development and testing of industrial and commercial AI platforms (add-ons).