Cognitive Systems Research

The hypothesis-driven methodology is a cognitive activity used in expertise processes to solve problems with limited knowledge and understanding. Although some organizations have standardized this approach to guide humans in carrying out expertise in enterprises, it lacks appropriate tools to assist experts in carrying out this cognitive activity, tracking understanding, or capturing the reasoning steps and the knowledge produced during the process. To acquire, share, and reuse the experts' knowledge applied during expertise processes while assisting humans in bringing understanding to complex problems, this study introduces a human–machine collaborative framework that formalizes experts' knowledge from the hypothesis-driven methodology described in the French standard NF X50-110, ''Quality in expertise activities''. This framework utilizes Hypothesis Theory, extended with qualitative doubt, and a systematic reasoning process to generate a hypothesis exploratory graph (HEG). The proposed approach makes it easier to carry out expertise processes through human–machine collaboration, offers a means to share and reuse knowledge from expertise, and provides mechanisms to evaluate expertise processes. Furthermore, an experiment conducted on a use case of an expertise process verifies the feasibility and effectiveness of the approach.


Introduction
In general, problem-solving is a common practice for improving human life in an environment (Duris, 2018). Specifically, it is a privileged activity carried out by experts in organizations to address issues that impede their growth, limit their productivity or efficiency, and make them less competitive.
Organizations use Systematic Problem-Solving Processes (SPSP) such as Ishikawa diagrams or the 5 Whys to find the causes of, or understand, problems that are well defined, have a limited scope and clear, well-understood goals, and occur in stable environments; however, these methods cannot be applied to complex problems.
The inadequacy of SPSPs for complex problems is commonly due to increasingly convoluted environments with growing technological diversity or variability, highly interconnected components, and dynamic conditions (Taj & Zaman, 2022).
The complexity of problems and the difficulty of solving them are also heightened by unclear and sometimes conflicting objectives, the availability of multiple or unpredictable solutions, and sometimes unknown aspects. Furthermore, complex problems are characterized by many variables with a high degree of connectivity between them, multiple conflicting objectives, and uncertainty (Johnson, Albizri, Harfouche, & Fosso-Wamba, 2022; Nokes, Schunn, & Chi, 2010). Moreover, they often have a more extensive scope and multiple constraints, and humans usually need more knowledge to understand them (Dörner & Funke, 2017).
A means to unlock complex problems is through exploratory processes (Ossmy et al., 2022), carried out by humans to arrive at some level of uncertainty about one or more possible solutions. The expertise process is a similar methodology used to analyze complex problems in depth, looking for their most likely causes or understanding their origin in order to take corrective and preventive actions or make decisions. Searching for the most likely causes or understanding of complex problems is at the heart of these approaches because of limited knowledge about these problems, highly interconnected and evolving environments, or newly occurring events. It often requires experience acquired on other problem-solving tasks to identify highly likely causes or explanations of these complex problems.
In expertise processes, experts involved in enterprises' complex problem-solving put forward different hypotheses based on their background and knowledge learned from the past. These hypotheses are progressively confirmed or refuted with newly acquired general and domain knowledge in order to arrive at the most plausible conclusions. These processes are often carried out collaboratively between experts from different fields, who should be able to bring and share their vision of the problem and the hypotheses they consider relevant to its causes or explanation (Sounchio, Geneste, & Foguem, 2023b).

https://doi.org/10.1016/j.cogsys.2024.101255. Received 5 November 2021; Received in revised form 10 May 2024; Accepted 26 May 2024.

A normative reference frame for expertise approaches has been defined in France with the NF X50-110 standard (Huver, Loisel, Peyrouty, Pineau, Tuffery, Peyrouty, & Chanay, 2011), ''Quality in expertise activities'', and at the European level with the CNS EN 16775 standard, ''Expertise activities - General requirements for expertise services''. However, while these standards define principles and describe how to carry out expertise processes efficiently, they do not provide tools to assist humans in carrying out this activity or to formally represent, share, and reuse knowledge from these activities.
To overcome these shortcomings of expertise processes, this study elaborates a hypothesis-driven framework that combines human experts' cognitive abilities and skills with machine defeasible automatic reasoning and uncertainty management to carry out expertise. During the process, the framework permits the progressive construction of hypothesis exploratory graphs used to make decisions at the end of the expertise.
This study lays the foundations of a formal representation and reasoning mechanism for collaboration between human experts and machines in expertise processes. These foundations support a hypothesis-driven process while considering experts' doubts/uncertainty and using hypothesis theory for defeasible reasoning.
The human-machine collaboration combines the best of both parties in expertise processes. On the one hand, human experts, with their well-developed and coherent knowledge and strong understanding of their domain, express meaningful hypotheses and can quickly adapt their thoughts to events, observations, or tools regarding the problem (Jing, Draghi et al., 2023; Schmidt & Boshuizen, 1993; Van de Wiel, 2017). On the other hand, machines use automated defeasible reasoning and uncertainty management to compute, track, and trace the in-depth activity of expertise processes (Sounchio, Geneste, & Foguem, 2023a).
This contribution supports expertise processes on the following points: (1) it mitigates the cognitive load of human experts' hypothesis-driven reasoning; (2) it proposes an interpretable outcome for experts that embeds the knowledge of an expertise process and can be shared or reused; (3) it offers objective metrics to evaluate and compare expertise processes.
For illustration, this study exemplifies an expertise case based on the proposed framework to derive a conclusion for the problem of a manufacturing company that wants to understand why customers suddenly started returning one of its products. The illustration presents the steps followed by the experts and how the states and doubts of the hypotheses they expressed evolve from one step to the next.
The rest of this work is structured as follows: Section 2 is devoted to background knowledge on hypotheses and hypothetical reasoning, collaborative reasoning approaches, and a formal model for hypothetical reasoning. Section 3 describes the proposed framework to support expertise processes based on hypotheses, followed by a use case illustrating the framework in Section 4. Section 5 compares the proposed approach with existing methods and knowledge bases.

Background knowledge
This section first elaborates on background concepts related to hypotheses and how hypothetical reasoning has been used in solving problems. Secondly, it presents the practical utilization of the hypothetical reasoning methodology in some fields. Finally, it presents approaches for optimal collaborative reasoning and finishes with the formal reasoning of hypothesis logic.

Hypothesis definition
To explore a hypothesis-driven methodology for problem-solving, it is essential first to understand the concept of hypothesis.
Back in the past, the Greek philosopher Aristotle defined a hypothesis as ''a judgment, affirmative or negative, that is merely assumed without being certain, and thus it is a statement that can be used as the basis of inference only insofar as it is conceded, and so rests upon homologia'' (Rescher, 1968). Trochim and Donnelly (2001) consider a hypothesis to be a specific statement of prediction that describes in real terms (as opposed to theoretical terms) what someone expects will happen in a study, or believes to be the cause of a problem (Hurley, 2014). A more recent definition is given by Ashley (2007): a tentative assumption made in order to draw out its normative, logical, or empirical consequences. From these definitions, one can summarize that hypotheses are doubtful and predictive thoughts that can be used as bases for reasoning, problem-solving, or finding plausible relationships among several variables of a circumstance (Jing, Cimino et al., 2023).
Reasoning with hypotheses is well known and practiced in various fields and domains such as science (Kell & Oliver, 2004), healthcare, policy-making, and management consulting (Organization et al., 2023), yet it is still challenging to integrate it into systems (Aikins, 1979). It follows from the above concepts that hypotheses can be used to infer knowledge and solve problems. Their utilization in problem-solving methods is guided by verifying the expressed hypotheses against what is known of the problem environment.
When confronted with problems in most fields or enterprises, experts should be the privileged ones to express hypotheses because of the skills and deep domain knowledge they have acquired through training and experience. Compared to beginners or novices, experts have common wisdom, a superior perception of patterns, and the ability to learn quickly from the problem context (Patel, Shellum, Miksch, & Shortliffe, 2023). These qualities make experts more appropriate participants than non-experts in hypothesis-driven expertise in their domain.

Hypothetical reasoning in practice
Hypothesis reasoning is an intelligent methodology used in daily problem-solving and in several domains of activity (Amsel, 2011). In practice, hypothesis-driven and exploratory approaches are used to solve complex problems in various domains. For example, in the healthcare sector, hypotheses are foundations for research and diagnosis, as physicians collaboratively diagnose patients by generating, evaluating, sharing, eliciting, and negotiating hypotheses and evidence (Radkowitsch, Sailer, Fischer, Schmidmaier, & Fischer, 2022). In administration and policy-making, Casula, Rangarajan, and Shields (2021) use hypotheses to explore the question of sexual harassment training in an organization.
Another area where hypotheses are utilized in problem-solving is in the maritime domain, where experts applied hypotheses to examine declines in marine survival for coho and Chinook salmon in the sea (Sobocinski et al., 2021).
The approach is carried out by expressing hypotheses and reasoning over them (Hurley, 2014). Generally, it is a method used to reason under incomplete or lacking knowledge; it always tries to explain or justify the hypotheses expressed and to draw a plausible conclusion (Minutolo, Esposito, & De Pietro, 2016).
This method is generally used by medical practitioners (Hoffman, 2007) and is carried out by humans in the following steps:

Collection of filtered information
This can be done by simple observations, measurements, or experiments. This information supports the experts' suspicions about a particular case.

Hypotheses formulations
Hypotheses are proposed based on practitioners' knowledge and beliefs in response to the questions they ask in order to solve the problem.

Reasoning
Knowledge is computed from contextualized information or what is known of the domain, using an inference mechanism such as induction or deduction.

Hypotheses verification
After the reasoning step, hypotheses are confirmed or refuted, and their conclusions help to understand the main problem.
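The four steps above can be sketched as a simple loop. The observations, hypotheses, and inference rule below are toy stand-ins invented for illustration, not the paper's actual knowledge base:

```python
# A minimal sketch of the four-step hypothesis-driven cycle.
# All domain content here is invented for illustration.

def collect():
    """Step 1: gather filtered information (observations, measurements)."""
    return {"returns_spiked", "tool_calibration_overdue"}

def formulate(observations):
    """Step 2: propose hypotheses from the practitioner's knowledge."""
    hypotheses = set()
    if "tool_calibration_overdue" in observations:
        hypotheses.add("faulty_measurement_tools")
    return hypotheses

def reason(observations, hypotheses):
    """Step 3: infer a status for each hypothesis (deduction sketch)."""
    return {h: "supported" for h in hypotheses
            if "tool_calibration_overdue" in observations}

def verify(conclusions):
    """Step 4: keep the confirmed hypotheses."""
    return {h for h, status in conclusions.items() if status == "supported"}

obs = collect()
print(verify(reason(obs, formulate(obs))))  # {'faulty_measurement_tools'}
```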
In general, this method, regardless of the domain of application, can produce more refined or specialized knowledge, similar to what some authors (Bergmann, 2002; Ruiz, Foguem, & Grabot, 2014; Sun, 2004; Sun & Finnie, 2004) call experience.
It should be noted that hypothetical reasoning could be mistaken for argumentation, because both stem from human cognition and are used in scientific reasoning and daily life for solving problems (Dung, 1995). Moreover, both try to capture notions such as uncertainty when reasoning.
However, they are fundamentally different on the following points: (1) Argumentation is an approach to reasoning that uses arguments made up of premises (reasons) and conclusions (claims) over conflicting information, looking for the most acceptable argument (Besnard & Hunter, 2008; Morveli-Espinoza, Nieves, Possebom, Puyol-Gruart, & Tacla, 2019), while hypothetical reasoning is based on a process of collecting information and asking questions from which hypotheses are derived. (2) In terms of process, what generally matters in argumentation is not the rationality of a statement or its explanation but the fact that it argues successfully against an attacking argument (Amgoud & Ben-Naim, 2018; Dung, 1995), which is not the case for hypothetical reasoning.

Human-machine collaboration
Over time, humans have developed specific skills such as hypothetical reasoning, intuition, creativity, and induction to solve the problems around them. Nevertheless, these approaches are less efficient for reasoning tasks that use algorithms and require a large semantic memory (Grigsby, 2018); these, on the contrary, are tasks that machines perform well thanks to their high storage capacity, autonomy, and speed of execution. In addition, human intelligence is adaptable and can achieve acts like understanding, perceiving, responding to sensory inputs, and synthesizing and summarizing information (Stone et al., 2016), which is still very difficult to obtain within machine systems. Furthermore, humans can learn new things effortlessly, adapt and reason independently, and develop gregarious attitudes and dynamism (Pupkov, 2019).
These critical observations drew our attention to the necessity of a collaboration between human experts and machine systems to solve problems using a hypotheses-based methodology.

Collaboration
Collaboration can be defined as the ability of two or more parties/agents to work together to achieve the same goal, with shared responsibility for the result (Koch & Oulasvirta, 2018). It can be carried out through an organization of tasks such that both parties arrive at a better result and better knowledge acquisition (Cheng, Teevan, Iqbal, & Bernstein, 2015). When it comes to problem-solving in artificial intelligence, what counts is the ability to increase each party's capacity to solve a problem (Akata et al., 2020). This idea of capturing more knowledge through collaboration is also supported by Giannoulis, Kondylakis, and Marakakis (2019) among domain experts, as in the case of medical expertise (Doumbouya, Kamsu-Foguem, Kenfack, & Foguem, 2018). In addition, collaborative problem-solving, compared to individualistic problem-solving, helps divide the workload among the participating parties and draws on varied knowledge, experiences, and events, permitting interpersonal stimulation, which can lead to more creativity and better-quality solutions (OECD, 2017). The terminology hybrid or collaborative intelligence is sometimes used when the entities involved are humans and machines (Dellermann, Ebel, Söllner, & Leimeister, 2019) or when the entities taking part in the collaboration are not of the same type or family.

Collaborative intelligent approaches
The collaboration's main objective is to optimize the parties' knowledge and capability on the problem being solved. From the cognitive science point of view, collaboration can be achieved in the following ways:

1. Collaboration based on conversational grounding: according to Baker, Hansen, Joiner, and Traum (1999), good collaboration is possible if the collaborating entities can create a shared base of knowledge, beliefs, or assumptions surrounding their goal. The main challenge for this approach is the difficulty of grounding human language in a suitable representation of the real world that machines can fully understand and process (Chai, Fang, Liu, & She, 2016).
2. Collaboration based on theory of mind: this collaboration relies on self-understanding and the interpretation of others' understandings (Koch & Oulasvirta, 2018). This approach is related to the mental states of the entities involved in the collaboration, such as beliefs, intentions, knowledge, desires, emotions, or perspectives. One major drawback of this approach is humans' difficulty in understanding machines' mental state or mind (Fussell, Kiesler, Setlock, & Yew, 2008).
3. Collaboration based on sub-tasking: for Epstein (2015), collaborative intelligent systems should partner with humans by sharing sub-tasks between both parties based on their abilities to carry out these sub-tasks efficiently. This approach to collaborative intelligence was also supported by Pierre Lévy in 1995 when he stated that collaborative intelligence is ''a form of universally distributed intelligence, constantly enhanced, coordinated in real-time, and resulting in the effective mobilization of skills'' (Suran, Pattanaik, & Draheim, 2020).
From what has been stated on hypotheses and collaborative intelligence, this study defines a novel approach to the expertise process that relies on human experts and machine systems. Since it is a human-centered activity based on hypotheses that progresses with additional knowledge, it is essential to have a formal basis for hypotheses that considers human uncertainty and remains open to revising hypothesis states if new knowledge eventually supports them. The section below describes the mechanisms on which the proposed approach relies.

Hypothesis logic
Hypothesis logic, denoted HL, is a bimodal logic that uses the preferential-model approach to non-monotonic reasoning to represent an appropriate state of knowledge based on hypotheses. HL subsumes Default Logic and extends classical predicate logic with two modal operators: K, read ''known'', and H, read ''hypothesized''. In hypothesis logic, information can be represented in three forms: known, true, and hypothesis (Siegel, Doncescu, Risch, & Sené, 2017; Siegel, Risch, Sené, & Doncescu, n.d.; Siegel & Schwind, 1993).

Syntax and axioms
The language of hypothesis logic, similar to First Order Logic (FOL), consists of the following alphabet.
• The language of HL contains that of FOL: if f is a FOL formula, then Hf (f is hypothesized) and Kf (f is known) are formulas of HL, as are their negations.
• All rules and axioms of FOL are also rules and axioms of HL.
• Axioms. Let f and g be FOL formulas:
1. If f is a tautology, then f is known; however, facts that are merely true are not necessarily known.
2. If the hypothesis f is made, then ¬f is not known.
3. f ∧ g is known if and only if f is known and g is known: K(f ∧ g) ⟺ Kf ∧ Kg.
4. Making the hypothesis f does not mean knowing f.
5. Not making the hypothesis f does not mean knowing its negation.
6. If f is not hypothesized, then ¬Hf is known: ¬Hf → K¬Hf.
7. The H operator is not distributive: H(f ∨ g) ≠ Hf ∨ Hg.

Hypothesis theory reasoning mechanism
In hypothesis theory, reasoning is done by extension.
A hypothesis theory T = (F, H) is defined by a set F of formulas of HL and a set H of hypotheses.

Reasoning : Extension
The concept of extension is not based on fixed-point theory but on maximal consistent sets.
An extension E = (F, H′) of T = (F, H) is the largest subset H′ of the hypotheses H such that H′ is consistent with F.
Extensions can be classified into two groups:
• An extension E is a stable extension if it satisfies the coherence property: for all f, ¬Hf ∈ E → ¬f ∈ E. In other words, whenever it is forbidden to assume f, ¬f is proven.
• An extension E is a ghost extension if there exists f such that ¬Hf ∈ E and ¬f ∉ E. Such an extension illustrates the expression or use of some hypotheses and their negations at the same time.
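To make the notion of extension concrete, the following toy sketch enumerates the maximal subsets of hypotheses consistent with a set of facts. The literal-based representation, the forward-chaining closure, and the example rule are simplifications invented for illustration; the paper's mechanism operates on full hypothesis logic:

```python
from itertools import combinations

def closure(literals, rules):
    """Forward-chain implication rules (pre -> post) over a set of literals."""
    lits = set(literals)
    changed = True
    while changed:
        changed = False
        for pre, post in rules:
            if pre in lits and post not in lits:
                lits.add(post)
                changed = True
    return lits

def consistent(literals):
    """A literal set is consistent if it never contains both p and ~p."""
    return not any(("~" + l) in literals
                   for l in literals if not l.startswith("~"))

def extensions(facts, rules, hypotheses):
    """Maximal subsets of hypotheses consistent with the facts (extensions)."""
    maximal = []
    for k in range(len(hypotheses), -1, -1):
        for subset in combinations(hypotheses, k):
            if any(set(subset) <= set(m) for m in maximal):
                continue  # already covered by a larger extension
            if consistent(closure(set(facts) | set(subset), rules)):
                maximal.append(subset)
    return maximal

facts = ["measured_ok"]
rules = [("faulty_tool", "~measured_ok")]  # a faulty tool contradicts the measurement
hyps = ["faulty_tool", "bad_plan"]
print(extensions(facts, rules, hyps))  # [('bad_plan',)]
```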
Knowledge represented in hypothesis theory has a counterpart in default theory. Moreover, there is a specific translation from default theory to hypothesis theory through which the existence of an extension can be proven and preserved (Siegel & Schwind, 1993).

Expertise process framework
This section describes the proposed approach through the following main points: 1. the integration of hypothesis logic and linguistic term sets for uncertainty management; 2. the cognitive mechanism used as the foundation of the approach; 3. the human-machine collaboration activity; and 4. the knowledge representation structure.
In the subsequent sub-sections, these building blocks are presented with examples illustrating how they contribute to the overall solution.

Extended hypothesis theory with linguistic terms
The hypothesis theory presented above does not consider qualitative uncertainty; it will therefore be extended with human-language expressions of uncertainty. This qualitative uncertainty is significant because it is not easy for a human to express exact numerical values corresponding to their doubt about hypotheses in an expertise process.
Linguistic terms are suitable candidates for the proposed integration because they have successfully been used to express hesitance and precision and to make decisions; in addition, they offer aggregation operators (Lan, Chen, Ning, & Wang, 2015). For this purpose, the proposed approach uses linguistic term sets as an entry point for uncertainty management. Linguistic term sets have the advantage of being more realistic and human-friendly (Xu, 2012). The properties of this human-like uncertainty manipulation are: • Unbalanced terms: humans do not naturally quantify terms uniformly. • Symmetric information: each term has an opposite term of equal magnitude.

Linguistic Term Sets (LTS)
Let S = {s_α | α = −t, …, t} be a linguistic term set, where t > 0 is a positive integer and the cardinality |S| = 2t + 1. S is a finite and totally ordered discrete term set whose elements correspond to the possible values of a linguistic variable.
S supports comparison, negation, and aggregation operations (Xu, 2004). In detail, doubt is based on possibility values, which form a totally ordered set ranging from certainly true, corresponding to 1, to certainly false, corresponding to 0. This set defines a linguistic term set (LTS) that the experts will use to express how far they are certain about a given hypothesis.
Example 3.1 (Linguistic Term Set). This is an example of a linguistic term set that will be used in upcoming sections of this study. For convenience, the terms will sometimes be replaced by their initials.
S has the following characteristics:
• The negation operator: neg(s_i) = s_j such that i + j = 0.

Under this extended hypothesis theory, hypotheses are defined by h : s_i, where s_i ∈ S.

Rules: Let f : s_i and g : s_j be two hypotheses belonging to an extended hypothesis theory.

These rules can be applied to any operator, since they can be written with ∨ and ∧.
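As an illustration, here is a small Python sketch of such a linguistic term set with the i + j = 0 negation. The seven term labels and the min/max combination rules for ∧ and ∨ are assumptions chosen for the example (min/max aggregation is a common possibility-theory convention), not the paper's exact definitions:

```python
from dataclasses import dataclass

# Hypothetical 7-term LTS indexed -3..3 (t = 3, |S| = 2t + 1 = 7).
# The labels are illustrative, not the exact terms used in the paper.
TERMS = {
    -3: "certainly false", -2: "highly unlikely", -1: "unlikely",
     0: "uncertain", 1: "likely", 2: "highly likely", 3: "certainly true",
}

@dataclass(frozen=True)
class LinguisticTerm:
    index: int  # alpha in {-t, ..., t}

    def label(self):
        return TERMS[self.index]

    def neg(self):
        # Negation operator: neg(s_i) = s_j with i + j = 0.
        return LinguisticTerm(-self.index)

def conj(a, b):
    # Assumed min-based conjunction (possibility-theory convention).
    return LinguisticTerm(min(a.index, b.index))

def disj(a, b):
    # Assumed max-based disjunction.
    return LinguisticTerm(max(a.index, b.index))

h1 = LinguisticTerm(2)                        # "highly likely"
print(h1.neg().label())                       # highly unlikely
print(conj(h1, LinguisticTerm(-1)).label())   # unlikely
```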
With the extension reasoning mechanism offered by hypothesis theory, a given knowledge base can have multiple extensions. However, integrating linguistic uncertainty into the theory provides a means to select one: the preferred extension is the one with minimum uncertainty when considering the conjunction of all hypotheses in the set.
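Under the assumption that the doubt of a conjunction of hypotheses is the certainty of its weakest member (a min-based convention; the paper's exact rule may differ), the selection can be sketched as:

```python
# Each extension is a list of (hypothesis, certainty-index) pairs;
# higher index means more certain. Data below is invented for illustration.

def conjunction_certainty(extension):
    """Assumed rule: certainty of a conjunction = its weakest hypothesis."""
    return min(index for _, index in extension)

def preferred(extensions):
    """Preferred extension: maximal certainty, i.e. minimal uncertainty."""
    return max(extensions, key=conjunction_certainty)

e1 = [("faulty tools", 3), ("bad plan", -1)]
e2 = [("over-tightening", 2), ("bad plan", 1)]
print(preferred([e1, e2]))  # e2: its weakest hypothesis is more certain
```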

Hypotheses-driven expertise framework
The proposed approach is based, first, on experts' collective reasoning, which combines their diverse domains of knowledge and cognitive capacities, allowing them to act together and thereby increasing the probability of a better decision (Devadasan, Zhong, & Nof, 2013). Second, we defined the process of expert-machine collaboration, relying on sub-tasking. Finally, we elaborated a computational decision method to support experts' doubts about hypotheses.
In the elaboration of the proposed human-machine collaboration approach, the following points motivate our choice of the sub-tasking collaborative technique:
• In Section 2.3.2, three main collaborative intelligence approaches were presented: conversational grounding, theory of mind, and sub-tasking. The first two are still challenging to implement for complex problem-solving compared to sub-tasking, since they require developing a language that both humans and machines understand well, or require humans to understand the machine's mind and inversely (Joksimovic, Ifenthaler, Marrone, De Laat, & Siemens, 2023).
• Allocating some sub-tasks to human experts is suitable for collaboration because experts have cognitive capacities gained over time, the ability to interact efficiently with additional assets related to a problem, and access to the external environment of the problem: (i) experts often have extensive domain knowledge, (ii) they can reason with uncertainty easily from their experiences, (iii) they adapt to noisy environments, (iv) they adapt to time-varying situations, (v) they use additional tools to enhance knowledge, and (vi) they quickly interpret new events. These points are paramount when comparing human intelligence to ordinary AI techniques, which deal primarily with precision or certainty, or for hypothesis generation in problem-solving (Abraham & Nath, 2000; Jing et al., 2024).
• Compared to collaboration among human experts alone, this approach overcomes some limits, such as the lack of efficiency that comes from asking team members to work on tasks they do not like or for which they lack competencies. It also avoids goal distraction, since human experts and machines share the same goal (OECD, 2017).
Moreover, collaborative and exploratory expertise processes as described in NF X50-110 engage experts with an intelligent system in order to improve the overall reasoning activity by giving the difficult task of knowledge computation to the system. This brings our methodology to artificial intelligence in the loop of human intelligence (Clark, Ross, Tan, Ji, & Smith, 2018; Dellermann et al., 2019) in a reasoning process, a problem that has generally been tackled as one of human-machine interaction interfaces.
To sum up, the approach we propose is based on experts' collective reasoning on the one hand and expert-machine collaboration on the other, and uses an incremental process to explore all possible causes of a problem.

Collaborative hypotheses-driven reasoning
This section describes how human experts and machine systems collaborate using a hypotheses-driven approach.
In order to get the best out of human-machine collaboration, we designed a method that complies with the criteria for human-centered automation elaborated by T.B. Sheridan and R. Parasuraman (Bettoni et al., 2020):
(1) We assign suitable tasks to each party.
(2) Human experts are in the decision-and-control loop.
(3) Human experts have authority over the machine system.
(4) The human experts' task is made easier.
(5) Machine-system feedback on hypothesis status and doubt empowers human experts.
(6) The machine system uses an interpretable and trustworthy logic mechanism.
(7) The machine system avoids human error when experts are faced with large knowledge bases and complex problems.
(8) Human experts are supervisors of the system.
In addition, the proposed approach provides a cyclic process, as shown in Fig. 1, where human experts teach machine systems their understanding of a problem, based on their experience, intuitions, interaction with the context of the problem, and observations and interpretations of new events, by providing hypotheses and additional knowledge step by step. In return, the machine derives conclusions regarding the experts' hypotheses from the defeasible reasoning mechanism of hypothesis logic, and their uncertainty is computed from the proposed linguistic-terms extension. The knowledge base, formalized in first-order logic like the reasoning mechanism, stores the additional knowledge collected during the expertise process.
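This cyclic exchange can be sketched as follows. The expert steps and the machine's evaluation rule are toy stand-ins invented for illustration; the actual framework uses hypothesis logic, not string matching:

```python
# A minimal sketch of the cyclic collaboration of Fig. 1.
# Expert and machine behaviors are toy stand-ins.

def machine_evaluate(hypotheses, knowledge):
    """Defeasible-reasoning stand-in: refute a hypothesis whose negation is known."""
    return {h: ("refuted" if ("not " + h) in knowledge else "plausible")
            for h in hypotheses}

def expertise_cycle(steps, initial_knowledge):
    """Each expert step contributes hypotheses and possibly new knowledge."""
    knowledge = set(initial_knowledge)
    feedback = {}
    for hypotheses, new_knowledge in steps:
        knowledge |= set(new_knowledge)  # the knowledge base grows each iteration
        feedback.update(machine_evaluate(hypotheses, knowledge))
    return feedback

steps = [
    (["faulty measurement tools"], ["calibration log available"]),
    (["over-tightening of parts"], ["not over-tightening of parts"]),
]
result = expertise_cycle(steps, ["KW831 products returned"])
print(result)
# {'faulty measurement tools': 'plausible', 'over-tightening of parts': 'refuted'}
```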

Knowledge in hypothesis-driven approach
In hypothesis-driven problem-solving, two main types of knowledge are manipulated: • Hypotheses These are predictive thoughts arising from questions, reflecting someone's assumption and uncertainty about a particular phenomenon. This knowledge may answer questions such as what?, why?, where?, who?, when?, and how?, called the 5W1H, which are fundamental questions of understanding (Hart, 2020) and are widely used in understanding enterprise architecture from the Zachman perspective (Dumitriu & Popescu, 2020).

• Additional Knowledge or observations
This is any knowledge acquired at a given moment of the expertise process from external knowledge-base systems or from the problem environment. It is added consistently to the existing knowledge for better reasoning. Additional knowledge includes observations, which are knowledge elements derived from objective information coming from measuring equipment, sensors, or the general observation of direct phenomena perceived in the environment related to the problem (Marquis, Papini, & Prade, 2014).
At the beginning of an expertise process, the additional knowledge available before the first iteration is called initial knowledge and represents what is known within the reasoning system before the first hypotheses are expressed. This knowledge can be:
• Contextualized or domain-specific knowledge: corresponds to what is known of the problem.
• General knowledge: knowledge that is not specific to the problem being solved but is related to a more fundamental or general truth about the world to which the current issue belongs. These fundamental truths can cover time, space, natural laws, rules, and policies.
The use of additional knowledge reinforces the dynamism of the framework in terms of knowledge and increases its adaptation to unstable and changing environments.
In general, the flow of knowledge during the construction of hypothesis exploratory graphs (HEG) is bidirectional, as illustrated in Fig. 1. On the one hand, (i) experts teach machines their thoughtful doubt about the problem being solved, based on the context of the problem (such as events and new observations), their cognitive capacities (such as previous experiences and the knowledge they have), and their interactions with the problem at hand. On the other hand, (ii) they learn from the machine's conclusions about their hypotheses' validity and doubt based on the available knowledge regarding the problem at hand. Human experts' knowledge plays an important role in the overall construction of the HEG because it brings a dynamic property to the graph due to human nature, which is capable of deliberate thought, planning, learning from examples, problem-solving, scientific theory prediction, and moral reasoning (Stenning & Van Lambalgen, 2012).
As a result, each human expert's knowledge, domain-specific or not, contains her/his experiences, depending on what she/he has learned formally or informally from the world and past problems. This capability permits the expert to provide quality and relevant hypotheses, since Xu, Cui, Liu, and Chen (2007) and Xu and Du (2010) have shown that, in the case of software development, specifically test-driven and incremental development, expert hypotheses were of high quality compared to those of intermediate developers.

Hypothesis expertise process and knowledge representation
The hypothesis expertise process is an exploratory procedure that makes use of additional knowledge, hypotheses driven by questions, and an iterative reasoning activity that acts on the validity and doubt of these hypotheses. The process produces a graph that has hypotheses as nodes and questions (5W1H) as (hyper)relations between hypothesis nodes. We name this graph the Hypothesis Exploratory Graph (HEG).
During its construction, the HEG grows progressively: new nodes are added as long as questions are asked to understand the problem and hypotheses aiming to answer them are expressed. If knowledge is discovered during this process, it is added to the available knowledge at each iteration of the graph construction.
The HEG starts with a special node called the initial node, which contains the problem and its initial knowledge, coming either from the human experts or from a knowledge-based system, and represents what is known about the environment on which the expertise will be carried out.
From this initial node, the first step of the expertise process is defined by oriented edges linking the initial node (the problem) with other nodes. These edges from the initial node correspond to the first questions asked, and they are connected to the first hypotheses expressed. Fig. 2 shows an example of an industrial problem where experts are looking for the most likely causes of an article returned by a client. The question asked is "Why were KW831 products rejected by customers?" and the following hypotheses were expressed: "It is almost certainly true that it is due to faulty measurement tools", "It is highly likely that it is due to non-compliance with the manufacturing plan", and "It is highly likely that it is due to over-tightening of its parts". The HEG will grow from this first level of hypotheses until satisfaction is reached or the allotted time is over.
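As a rough illustration of this first level, the HEG can be sketched as a small data structure; the class and field names below are our own illustrative choices, not part of the framework or of the NF X50-110 standard:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    text: str
    doubt: str = "undecided"   # linguistic doubt term
    status: str = "unknown"    # "valid" or "unknown" (open-world assumption)

@dataclass
class HEG:
    # The initial node: the problem and its initial knowledge.
    problem: str
    hypotheses: dict = field(default_factory=dict)  # id -> Hypothesis
    questions: dict = field(default_factory=dict)   # question -> ids of answering hypotheses
    parent: dict = field(default_factory=dict)      # question -> id of upstream node

    def ask(self, upstream, question, candidates):
        """Attach a question (hyper)edge under `upstream` and add the
        hypotheses expressed to answer it."""
        ids = []
        for text, doubt in candidates:
            hid = f"h{len(self.hypotheses) + 1}"
            self.hypotheses[hid] = Hypothesis(text, doubt)
            ids.append(hid)
        self.questions[question] = ids
        self.parent[question] = upstream
        return ids

# First iteration of the running example (Fig. 2).
heg = HEG("KW831 products rejected by customers")
heg.ask("root", "Why were KW831 products rejected by customers?",
        [("faulty measurement tools", "almost certainly true"),
         ("non-compliance with the manufacturing plan", "highly likely"),
         ("over-tightening of its parts", "highly likely")])
```

Here `ask` plays the role of adding one question hyperedge and its candidate hypotheses; a full implementation would also track iterations and the knowledge base.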
An iteration always starts from the initial node, and hypotheses are verified all the way up to the last hypotheses expressed.
The hypothesis exploratory graph (HEG) grows based on new questions and hypotheses and, possibly, additional knowledge. Fig. 7 presents a continuation of the previous graph (Fig. 2) and illustrates how the knowledge base increases at each iteration.
From the above process, we can formalize the Hypothesis Exploratory Graph (HEG) as follows:

• Hypothesis Exploratory Graph (HEG) definition
An HEG is a directed acyclic hypergraph G = (Q, H), with Q the set of questions for edges and H the set of hypotheses for nodes, where:
- h0 ∈ H is the initial problem;
- the incidence function maps an edge to its incident vertices; in other words, it returns all hypotheses of a given edge q ∈ Q (question).
Since G = (Q, H) is a directed graph, two more functions are defined:
- one returns the upstream hypotheses from which a question is asked;
- one returns the downstream hypotheses expressed to answer it.
From this graph G, we can define an expertise process as En = (Gn, Kn), where:
- Gn = (Qn, Hn) is the nth iteration of the hypothesis exploratory graph;
- Kn is the consistent knowledge observed after the n iterations.
From the above definitions and reasoning process, the following characteristics can be drawn from the expertise process formalization:

• Semantics in a Hypothesis Exploratory Graph (HEG), as shown in Fig. 3, is given by:
- K: the additional knowledge;
- hi, hj: nodes of the graph are hypotheses;
- q: edges/hyperedges of the graph are questions, standing for the link between hi and hj;
- Semantics: ''Under the hypothesis that hi is Valid/Unknown and that we know K, is hj an answer to question q?''

• A directed path of an expertise process En = (Gn, Kn) is a finite sequence of distinct edges (questions) and vertices (hypotheses) from the initial node up to a node of a given iteration, with all its hypotheses being valid: P = ((q1, h1), ..., (qi, hi), ..., (qn, hn)).

• A decision on an expertise process En = (Gn, Kn) is based on micro-results computed on the HEG Gn = (Qn, Hn) after n iterations, where Kn is the consistent knowledge observed after the n iterations.

Hypothesis validity:
A hypothesis hi,j is valid if hi,j ∈ H′ ⊆ Hn, where H′ is an extension of the knowledge Kn at iteration n.
Considering the hypothesis theory T = (Kn, Hn), H′ is a subset of hypotheses such that T′ = (Kn, H′) holds, i.e., H′ is the set of hypotheses consistent with Kn.
Under an open-world assumption, a hypothesis that is not valid has an unknown status, because a lack of sufficient knowledge may cause invalidity; this allows the system to stay in alignment with the lack of knowledge when reasoning.
Hypotheses with unknown status at the nth iteration belong to Hn ∖ H′.
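The notion of an extension (the set of hypotheses consistent with the current knowledge) can be approximated with a simple sketch; the `conflicts` predicate and the string-based negation below are hypothetical stand-ins for a real hypothesis-theory consistency check:

```python
def extension(hypotheses, knowledge, conflicts):
    """Greedy approximation of the extension H': a large subset of the
    hypotheses consistent with the knowledge K_n. A real hypothesis-theory
    solver would search maximal consistent subsets; `conflicts(h, K)` is a
    caller-supplied predicate returning True when adding h to K is
    inconsistent."""
    h_prime, k = set(), set(knowledge)
    for h in hypotheses:
        if not conflicts(h, k):
            h_prime.add(h)
            k.add(h)  # accepted hypotheses extend the working knowledge
    return h_prime

# Hypothetical toy conflict check: a hypothesis clashes with its explicit negation.
conflicts = lambda h, k: ("not " + h) in k
valid = extension(["faulty tools", "over-tightening"], {"not faulty tools"}, conflicts)
unknown = {"faulty tools", "over-tightening"} - valid  # H_n \ H': unknown status
```

Under the open-world assumption, hypotheses outside the extension are left unknown rather than declared false.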

Iteration validity:
For an expertise process of n iterations En = (Gn, Kn), an iteration i is valid if it has at least one valid hypothesis.
Let qi be the question asked at iteration i ≤ n; iteration i is valid if at least one of the hypotheses expressed for qi is valid.

Expertise process validity:
An expertise process En = (Gn, Kn) of n iterations is valid if it has at least one directed path with valid hypotheses from the initial node to a node of the last iteration.

• Doubt update
Under the above reasoning mechanism, the doubt given to hypotheses is dynamic: it evolves (increases or decreases) based on each hypothesis's validity after reasoning steps (Durmaz, Demir, & Sezen, 2021). This dynamism is also reflected in the linguistic terms experts use to express their doubts. The linguistic and numeric doubts are updated as follows:
- Unknown validity and ignorance: If a hypothesis with unknown validity and undecided uncertainty stays unknown after reasoning, its doubt increases. If a hypothesis with unknown validity and undecided uncertainty becomes valid after reasoning, its doubt decreases.
Table 1 shows these changes of doubt during the expertise process on a hypothesis exploratory graph (HEG). In this table, the first column illustrates the unknown validity of a hypothesis, the second column shows its validity after reasoning, and the last column shows the corresponding explanation for this resulting validity.
- Other cases: If a hypothesis that is valid stays valid after reasoning, its doubt remains unchanged. If a hypothesis that is valid becomes unknown after reasoning, its doubt decreases. If a hypothesis with unknown validity and a non-undecided uncertainty becomes valid after reasoning, its doubt decreases. If a hypothesis with unknown validity and a non-undecided uncertainty stays unknown after reasoning, its doubt remains unchanged.
Table 2 summarizes these changes in doubt during an expertise process on a hypothesis as its validity changes after reasoning.
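These update rules can be condensed into one function; a minimal sketch assuming doubt is encoded as an integer level (higher meaning more doubt), which is our own encoding rather than the framework's linguistic term set:

```python
def update_doubt(level, old_status, new_status, undecided):
    """Doubt update after a reasoning step, following the rules of Tables 1
    and 2 as stated in the text. `level` is a numeric doubt (higher = more
    doubt); `undecided` flags the ignorance term (tau_0)."""
    if old_status == "unknown" and undecided:
        # Ignorance: doubt more if still unknown, doubt less if it became valid.
        return level + 1 if new_status == "unknown" else level - 1
    if old_status == "valid" and new_status == "unknown":
        return level - 1  # doubt decreases, as stated for Table 2
    if old_status == "unknown" and not undecided and new_status == "valid":
        return level - 1  # doubt decreases
    return level          # valid -> valid, and decided unknown -> unknown: unchanged
```

A full implementation would map these integer shifts back onto adjacent terms of the linguistic scale.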
Example 3.3 (Reasoning and Updates Illustration). This example illustrates how reasoning takes place and how hypotheses' status and validity are updated.
The problem at hand is to find out why a product manufactured by an enterprise is unexpectedly rejected by customers.
Knowledge Graphs are structured representations of facts, where nodes are real-world or abstract entities and edges the relations that exist between them, all backed by semantic descriptions (Chen, Jia, & Xiang, 2020; Ji, Pan, Cambria, Marttinen, & Yu, 2020). HEGs, by contrast, are structured representations whose nodes are hypotheses and whose links are 5W1H questions that provide semantics from a parent node to its child nodes. Another difference is that the HEG is dynamic by nature: the status of its nodes can change from valid to unknown, or the reverse, with new incoming knowledge or hypotheses. In addition, the uncertainty attached to a node evolves as its status changes at each iteration. Finally, the HEG grows in number of nodes (hypotheses) and edges (questions) at each iteration. This dynamism is due to the non-monotonicity of the hypothesis theory used in the reasoning process and to the iterative nature of the framework.

Hypothesis expertise process
The reasoning phase of the hypotheses-driven expertise system is an iterative process, which has a problem to solve as an entry point and involves experts working collaboratively with a machine system for a common goal.Each iteration is defined by questions, expressed hypotheses related to them, additional knowledge and reasoning.
The iterative process is described in Fig. 4 as follows: • Step A: Experts ask questions to understand the problem at hand.
• Step B: Experts express hypotheses related to questions of step A.
• Step C: Additional knowledge of the context of the problem being solved is considered. This can be general-purpose or domain-specific knowledge obtained at each iteration of the expertise process.
• Step D: Automatic reasoning is carried out with the hypothesis theory reasoning mechanism, using the hypotheses and the set of additional knowledge.
Each reasoning cycle modifies the hypothesis exploratory graph (HEG) in various ways: firstly, in terms of the number of nodes, since each new hypothesis introduced creates a new node on the graph; secondly, in terms of node states, since the status of its nodes can change depending on whether or not they are confirmed by the available additional knowledge. Fig. 4 shows this hypothesis expertise reasoning cycle.
This iterative process (see Fig. 4) grants a dynamic property to the HEG, and consequently to the expertise process, raising the question of when to stop.
We defined for this purpose two possibilities:

Time as indicator:
A length of time can be set as the endpoint for the expertise process. This is suitable in situations where the diagnosis could take more time than is available.

Satisfaction as indicator:
This is when the expertise process is stopped because one is satisfied with the solution produced.
In general, this expertise exploratory process can be described with Algorithm 1:

Algorithm 1: Hypothesis-driven process algorithm
ALGORITHM(Graph, Experts, KBS):
    if the stop-time criterion is given then
        set stop to time
    else
        set stop to satisfaction
    end
    while stop time not reached and not satisfied do
        Experts ask Questions;
        Experts express Hypotheses for these Questions;
        if additional knowledge then
            add additional knowledge to KBS;
        end
        Reason on Graph;
    end

At the end of the reasoning process, a solution to the problem being solved is any directed path containing distinct questions and hypotheses from the initial node up to a node at the last iteration. The hypotheses of this directed path must all be valid.
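A possible Python rendering of Algorithm 1, under the assumption that the experts' inputs, the graph API, and the reasoning step are supplied as callables (all names here are illustrative):

```python
import time

def hypothesis_driven_process(graph, experts, kbs, reason,
                              stop_after=None, satisfied=lambda g: False):
    """Iterate question asking, hypothesis expression, knowledge acquisition
    and reasoning until the time budget (seconds) runs out or the
    satisfaction test holds. `experts`, `reason` and the graph API are
    caller-supplied stand-ins, not part of the described framework."""
    deadline = time.time() + stop_after if stop_after else None
    while not ((deadline is not None and time.time() >= deadline)
               or satisfied(graph)):
        questions = experts.ask_questions(graph)            # Step A
        hypotheses = experts.express_hypotheses(questions)  # Step B
        graph.extend(questions, hypotheses)
        extra = experts.additional_knowledge(graph)         # Step C
        if extra:
            kbs.update(extra)
        reason(graph, kbs)                                  # Step D
    return graph
```

Passing `stop_after` realizes the time indicator; passing `satisfied` realizes the satisfaction indicator.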
In addition, it is possible to:
• Derive from the HEG a strict root-cause-analysis graph through a simple pruning process that removes from the HEG the branches whose hypotheses were not validated. This asset derived from the HEG is of great importance since it can help identify the causes of the problem being expertized (Ghatorha, Sharma, & Singh, 2020). This strict root-cause-analysis graph can be used to prevent subsequent problems.
• Classify hypotheses using the 5W1H questions of the semantic links between hypotheses as categories. This classification can be done in two groups: (1) validated hypotheses and (2) those with unknown status. It helps to understand the problem.
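The pruning step can be sketched as follows; the encoding of nodes and edges as dictionaries and triples is a hypothetical one chosen for illustration:

```python
def prune_to_root_causes(statuses, edges):
    """Derive the strict root-cause-analysis graph: keep only validated
    hypotheses and the question edges whose endpoints survive.
    `statuses` maps hypothesis id -> "valid"/"unknown"; `edges` holds
    (parent_id, question, child_id) triples."""
    valid = {h for h, s in statuses.items() if s == "valid"}
    kept = [(p, q, c) for p, q, c in edges
            if (p in valid or p == "root") and c in valid]
    return valid, kept

# Tiny example loosely based on the running case.
statuses = {"h13": "valid", "h11": "unknown", "h21": "valid"}
edges = [("root", "Why rejected?", "h13"),
         ("root", "Why rejected?", "h11"),
         ("h13", "Why not respected?", "h21")]
valid, kept = prune_to_root_causes(statuses, edges)
```

Edges leading to unvalidated hypotheses drop out, leaving only the plausible causal chain.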

Evaluation metrics
Evaluation metrics aim to objectively gauge aspects of the expertise process. For this purpose, they define measures to evaluate or compare expertise processes and to track the performance of experts with respect to additional knowledge and iterations. They also provide a way to compare expertise processes carried out by various groups of experts on the same problem.
The metrics are:
• The total number of hypotheses expressed: NH
This value shows how widely the problem was explored. It is a horizontal measure, showing how spread out the expertise process was.

• Valid Hypotheses: VH
The number of valid hypotheses during an expertise. The Valid Hypotheses Rate is VHR = VH / NH.
• Number of iterations: NI
The number of iterations carried out during the expertise process.
• Valid Iteration Rate: VIR
VIR is equal to the number of valid iterations over the total number of iterations carried out during the expertise process.
To better monitor the expertise process, a comparative bar chart like the one in Fig. 5 shows over iterations how consistent expressed hypotheses are with additional knowledge.A significant difference between valid and total expressed hypotheses at each iteration can be used as a hint indicating that hypotheses are far from the problem.
An ideal expertise process is one whose total-hypotheses curve grows or stays stable and whose valid-hypotheses curve closely tracks it.
These four metrics matter from two points of view: (1) the quality of an expertise, which relates to how good the experts were, and (2) how far they went in solving the problem. The first two metrics (NH and VHR) take the hypotheses' point of view, while the remaining two (NI and VIR) are global and show how deep the expertise was carried. They can be used as decision measures to either proceed with or stop the expertise, as well as comparison values between two or more expertises on the same problem.
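A sketch of how these four metrics could be computed, assuming each iteration is recorded as a list of validity flags (our own encoding); the sample data reproduces the totals of the illustration in Section 4 (6 hypotheses, 4 valid, 3 valid iterations), though the per-iteration split is assumed:

```python
def expertise_metrics(iterations):
    """Compute NH, VHR, NI and VIR from an expertise process.
    `iterations` is a list with one entry per iteration, each entry being a
    list of booleans (one flag per expressed hypothesis: True if valid
    after reasoning)."""
    flags = [v for it in iterations for v in it]
    n_h = len(flags)                         # NH: total hypotheses expressed
    n_i = len(iterations)                    # NI: number of iterations
    vhr = sum(flags) / n_h if n_h else 0.0   # VHR: valid hypotheses rate
    vir = (sum(1 for it in iterations if any(it)) / n_i) if n_i else 0.0
    return {"NH": n_h, "VHR": vhr, "NI": n_i, "VIR": vir}

# Assumed split matching the Section 4 totals: 6 hypotheses, 4 valid, 3 iterations.
m = expertise_metrics([[True, False, True], [True], [False, True]])
```

Comparing `m` across expert groups on the same problem gives the comparison mechanism described above.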

Illustration
The proposed approach is demonstrated in a real-world case of a manufacturing company to show how it works.For this illustration, experts were asked to use the proposed approach to look for explanations of why customers returned an article that an enterprise manufactures.
• Iteration 1:
- Question: Why were KW831 products rejected by customers?
- Hypotheses:
  * h1,1: It is almost certainly true that it is due to faulty measurement tools.
  * h1,2: It is highly likely that it is due to non-compliance with the manufacturing plan.
  * h1,3: It is highly likely that it is due to the over-tightening of its parts.
- Observation:
  * Some operators were not trained to use measurement tools, so some could not measure KW831 components well.
  * Measurement tools are new and were calibrated before usage.
- Reasoning:
  * h1,1: It is almost certainly true that it is due to faulty measurement tools.
  * h1,3: It is highly likely that it is due to the over-tightening of its parts. Status: Unknown.

Remarks:
For the reasoning process, hypotheses with Unknown status are those that were not supported by observations, whereas Valid hypotheses are those that are consistent with observations. This mechanism is used at every iteration.
-Observation: * Only recently manufactured KW831 are rejected by customers.
- Reasoning:
  * h1,3: It is highly likely that it is due to the over-tightening of its parts. Status: Unknown.
  * h2,1: It is probably true that it is because operators poorly did the work. Status: Unknown.

Valid hypotheses = {}
• Iteration 3:
- Question: Why were the dimensions of the KW831 parts not respected?
- Hypotheses:
  * h3,1: It is probably true that it is due to measuring errors.
- Question: Why are these newly recruited operators not good?
- Hypotheses:
  * h3,2: It is certainly true that operators may not have been well trained on the production line.
- Observation:
  * There are newly recruited operators, so they could poorly mount or measure KW831 components.
  * Operators worked under pressure in order to deliver KW831 products on time, so it is possible to have manufacturing errors.
  * Newly recruited operators are inexperienced workers.
- Reasoning:

Valid hypotheses
From this exploratory process, the hypotheses' statuses, doubts, and reasoning steps can be tracked. The graph in Fig. 6 shows how the statuses of hypotheses changed at each iteration, and the graph in Fig. 7 shows the final statuses of the HEG described in this illustration.
The above example illustrates how the hypothesis reasoning cycle is used in an expertise process. Two main behaviors are highlighted in this example: (1) the update of doubt, as elaborated in Section 3.4.2, using the linguistic term set; Table 3 shows how the doubts were updated at each iteration; (2) the changes in the hypotheses' validity status, as described in Section 3.4.2; Table 4 displays how the hypotheses' status changes at each iteration.
This illustration shows a valid expertise process at its 3rd iteration, since there is a directed path from the initial node to one of the last nodes (h1,3, h2,1, h3,2). The doubt in this expertise is probably true, which corresponds to the least certain linguistic term on this path.
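The weakest-link rule for the overall doubt of a valid path can be sketched as follows; the ordering of linguistic terms is an illustrative assumption, not the exact term set of the framework:

```python
# Illustrative ordering of the linguistic term set, from least to most certain.
CERTAINTY = ["probably true", "highly likely",
             "almost certainly true", "certainly true"]

def path_doubt(terms):
    """Overall doubt of a valid directed path: its weakest link, i.e. the
    least certain linguistic term among the hypotheses on the path."""
    return min(terms, key=CERTAINTY.index)

# The valid path of the illustration carries these three terms.
overall = path_doubt(["highly likely", "probably true", "certainly true"])
```

For the illustrative path, the least certain term is "probably true", matching the conclusion above.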

Computing metrics
From this example, we can compute the following metrics. Remark: valid expertise processes have a VIR = 1 because, for an expertise process to be valid, each iteration of its hypothesis exploratory graph has to be valid. However, the converse is not always true.
The hypotheses validity chart for this expertise process is in Fig. 8. It can be concluded that these experts expressed relevant hypotheses, with a VHR of 0.66 and a VIR of 1.
The uncertainty in this expertise process is probably true, which corresponds to the least certain linguistic term among the hypotheses of the directed path.

Discussion
Compared to existing systematic problem-solving processes (SPSPs) such as Ishikawa, 5 Whys, brainstorming, TRIZ, GANTT charts, audits, or A3 reports, the proposed approach is suitable for complex problems, whereas SPSPs are not. Its exploratory, dynamic, and knowledge-driven nature offers a great advantage over these SPSPs on such problems.
Compared to SPSPs, the following differences can be highlighted. Firstly, unlike SPSPs, the proposed framework is agile due to its iterative evolution. Furthermore, contrary to SPSPs, which may need some adaptation for particular domain problems, making them unsuitable for real-world problem-solving situations that are generally ill-structured, the hypothesis-driven approach is suitable for ill-structured problems as it manages doubt effectively. Secondly, SPSPs are human-only techniques, whereas this study proposes a human-machine collaborative methodology to reduce the cognitive load of problem-solving. Finally, for an SPSP to be successfully run, participants need a minimum understanding of, or training in, the SPSP methodology (Meister, Böing, Batz, & Metternich, 2018). The proposed approach, however, is easier to understand and apply.
Concerning other knowledge graphs such as Cyc, YAGO, DBpedia, or WordNet: they were designed to reason on general knowledge and cannot describe expertise processes, because they are made up of facts, real-world assertions, or entities and relations. In contrast, expertise processes use hypotheses from defeasible reasoning and incorporate doubt management (Yan, Wang, Cheng, Gao, & Zhou, 2018). Even if some of these graphs have common-sense reasoning engines, their rules are used to infer new knowledge, not to change entities' status, as is the case in the proposed approach.
Regarding text reports, generally used to present expertises and their conclusions, hypothesis exploratory graphs (HEGs) have the benefit of being semantically richer than text and easier to understand thanks to their graph structure. Text reports are limited by natural-language ambiguity and highly technical vocabulary, which can restrict their understanding. Furthermore, the hypothesis-driven approach generates a hypothesis exploratory graph (HEG) that embeds the experts' understanding of the problem and how they proceeded to conclude on it. In addition, the structure of the proposed approach offers means to derive the most likely causes of the problem by trimming hypotheses with unknown status from the overall graph of all plausible causes. The HEG tree structure also provides traceability on how the problem was handled, making it suitable for meeting legal requirements in critical domains like health (Doumbouya, Kamsu-Foguem, Kenfack, & Foguem, 2015). Finally, the hypothesis-driven framework allows a replay of the expertise process with different knowledge bases, and this replay possibility can be used to learn how knowledge influences the problem.
The particularity of the reasoning structure proposed is its ability to capture human knowledge, such as experience or intuition, through hypotheses and doubt.
Process planning is a pre-production cognitive activity that determines the sequence of operations needed to build a product (Sanctorum et al., 2022). The approach proposed in this study was not designed for process-planning problems and may need to be revised for them. However, for part design, experts can iteratively express hypotheses on product components to identify all possible designs consistent with the knowledge available about the product. Hypotheses can be expressed on cost, quality, material, dimensioning, and shapes to determine all possible idealizations of the product being designed. Following the proposed approach, a part design is valid only if the experts' hypotheses about its components are consistent with the additional knowledge concerning the product.
To sum up, the framework proposed in this study can be seen as a human teaching and learning activity: in the first place, human experts, from their experience, guide the machine to where the problem could be. In return, they learn from the machine reasoning system whether their intuitions were valid given the available additional knowledge of the problem.

Table 5
Comparison between SPSPs and the hypothesis-driven expertise process based on adaptability, the type of problems to solve, the type of collaboration, the supporting structure, human understanding, and the management of uncertainty.

The model presented in this study has the following advantages: (1) traceability of the expertise process, since it shows all the steps followed during the expertise; (2) knowledge sharing over the process, because every expert can see and learn from what others expressed as hypotheses and from the output of the machine system; (3) an expertise-understanding structure: compared to written reports and SPSPs, the HEG has a clearly defined semantics; (4) agility, due to its exploratory behavior, which means no predefined path but a step-by-step evolution depending on changes of the problem; and finally (5) expertise reproducibility, since it can easily be replicated with other knowledge bases. Table 5 presents significant differences between SPSPs and the hypothesis-driven approach from the points of view of adaptability, the types of problems, human-machine collaboration, the knowledge structure, human understanding, and the consideration of uncertainty. Based on these characteristics, the proposed hypothesis-driven approach offers outstanding advantages over SPSPs.

Conclusion
Motivated by the desire to bring together experts' knowledge, experience, and human intelligence capacities on the one hand and machine's storage capacities, computational speed, and automated reasoning on the other hand to the domain of complex problem-solving and expertise processes in particular, this study lays the foundations for acquisition, representation, and reasoning with hypotheses in a human-machine collaboration in expertise processes.
The study describes a human-machine hypotheses-driven collaboration framework by reviewing the hypothesis definition and elaborating on human-machine collaborative reasoning based on hypotheses.
After presenting relevant cognitive theories for collaborative intelligence, sub-tasking was adopted as the foundation of the proposed framework. The sub-tasking collaboration technique was complemented with the hypothesis theory for defeasible reasoning, extended with linguistic-term uncertainty.
For practical use, an iterative reasoning cycle was designed for the collaboration between human experts and the machine system.
To showcase the proposed framework, a use case from a manufacturing company whose product was rejected by customers was carried out in this study.
The proposed approach reduces the human cognitive load and aids the expertise processes by utilizing defeasible reasoning, supporting knowledge management, process traceability, and proposing evaluation metrics.
The methodology presented in this study takes advantage of human cognition capabilities and defeasible reasoning by applying the hypothesis theory and linguistic terms. As a result, it produces an exploratory graph of hypotheses (HEG), which embeds the knowledge of an expertise and describes the expertise process well while considering experts' doubts expressed with linguistic terms. In addition, this approach gives means, firstly, to objectively evaluate expertise processes through metrics (number of hypotheses, valid hypotheses rate, number of iterations, and valid iteration rate) and, secondly, to monitor the expertise process using the defined hypotheses-validity chart. These tools are relevant for decision-making in the context of the expertise process and offer a great advantage over textual reports.
Furthermore, the foundation of expertise processes based on hypotheses is not limited to a specific domain.It can be applied to problems in various fields such as railway, automotive, maritime, or construction industries.This versatility allows for experience sharing and, consequently, enhances efficiency in expertise while reducing errors and time.
As a result, the proposed approach allows a formal and explicit description of expertise processes while facilitating experts' involvement in a collaborative context in a dynamic environment. It can be used either for diagnosis, when, given effects, one looks for causes using hypotheses, or for prediction, when, given causes, one looks for effects using hypotheses. These two usages can be carried out from different expertises.
The perspectives of this study will focus on the design and construction of the proposed system as described in Fig. 9. Its main components are the user interface, to ease input entry and present outputs; the short-term memory, to store knowledge regarding ongoing expertise processes; the reasoning engine, to reason and compute hypotheses' validity and doubt; and the long-term memory, to store hypothesis exploratory graphs (HEGs). For this system to be effective, implementing a hypothesis logic solver is paramount to enable reasoning over hypotheses and additional knowledge. In addition, the user interface for expert-machine interaction should be based on semi-structured or natural-language templates that are easy for experts and simple for machines to process. It should also offer means to visualize and query hypothesis exploratory graphs to answer questions on an expertise.
Further work from this study is the reuse of hypothesis exploratory graphs (HEGs) in a case-based reasoning approach to build a hypothesis-generation system that assists experts in complex problem-solving.

Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 1 .
Fig. 1.Synoptic view of the human expert and machine collaboration approach showing the main forces of each participant in the collaboration and the flow of knowledge.

Fig. 2 .
Fig. 2. The first iteration in constructing the hypotheses exploratory graph of a case shows one question edge and three hypotheses nodes.

Fig. 3 .
Fig. 3. Hypothesis exploratory graph (HEG) section with an edge q and two nodes hi and hj.

Fig. 4 .
Fig. 4. Hypotheses expertise reasoning cycle illustrating the steps of the collaborative iteration.

Fig. 5 .
Fig. 5. Hypotheses validity chart illustrating the number of valid hypotheses versus the total number of expressed hypotheses.

Fig. 6 .
Fig. 6. The final representation of the HEG resulting from the process in Fig. 7 shows valid hypotheses in green squares and hypotheses with unknown status in gray squares.

• NH (total number of hypotheses) = 6. This value shows how broadly the experts searched for the causes of the problem.
• Valid hypotheses rate: VHR = VH / NH = 4/6 = 0.66. This ratio shows how experienced the experts were in expressing relevant hypotheses for the problem being solved.
• Number of iterations: NI = 3. This value illustrates how deep the expertise was carried.
• Valid Iteration Rate: VIR = VI / NI = 3/3 = 1. This ratio provides a measurement of how good the experts were at looking deep into the problem.

Fig. 8 .
Fig. 8. Hypotheses validity chart for the illustrative example in Section 4, showing for each iteration the number of hypotheses valid versus the total number of expressed hypotheses.

Fig. 9 .
Fig. 9. An overview architecture of the hypothesis reasoning system, consisting of four main components: user interface, short-term memory, reasoning engine, and long-term memory.
- Qn = Qn−1 ∪ {qn,j}j, where {qn,j}j are the questions asked at iteration n, with Q0 = ∅ if no questions have been asked before.
- Hn = Hn−1 ∪ {hn,j}j, where {hn,j}j are the hypotheses expressed at iteration n from the questions {qn,j}j, with H0 = {h0} if no hypotheses have been expressed before.
Remark: in the expertise process En = (Gn, Kn), Hn is made up of hypotheses: formulas with the hypothesis operator, carrying human uncertainty within a linguistic term set. Kn is made up of true and known formulas: formulas without operators and formulas with the knowledge operator. This study assumes formulas in Kn have no uncertainty related to them. However, if uncertainty has to be attached to formulas in Kn, these formulas must be considered when computing the overall uncertainty after reasoning.
• Expertise process reasoning
Let En = (Gn, Kn) be the expertise process obtained after n iterations. Reasoning on En = (Gn, Kn) consists in finding the largest subset of hypotheses H′ ⊆ Hn which is consistent with the available additional knowledge Kn. After reasoning, all hi,j ∈ H′ ⊆ Hn consistent with Kn are called valid hypotheses, and those belonging to Hn ∖ H′ have an unknown status.
- Kn, built from Kn−1 and {Δ}n, is a consistent knowledge base which, in case of conflicting pieces of knowledge, is the largest set of consistent knowledge that can be formed with a subset of {Δ}n and Kn−1; {Δ}n corresponds to the additional knowledge observed at iteration n.

It is almost certainly true that it is due to operators' measurement error.

Table 1
Doubt update of ignorance: update of hypotheses with unknown validity after reasoning, and corresponding explanations.
- h is unknown with doubt τ0 (τ0 stands for undecided, corresponding to ignorance); if h stays unknown after reasoning, then τ := τ−1 (doubt more if unknown ignorance stays unknown).
- h is unknown with doubt τ0 (τ0 stands for undecided, corresponding to ignorance); if h becomes valid after reasoning, then τ := τ+1 (doubt less if unknown ignorance becomes valid).

Table 2
Doubt update from iteration n to iteration (n + 1) after reasoning and corresponding explanations; τi, τj are linguistic terms of the term set.

Updates:
- Hypothesis Hf: Status/Validity: valid.
- h1,2: It is highly likely that it is due to non-compliance with the manufacturing plan. Status: Unknown.
- h1,3