Introduction

Over the last few decades, various methods of decision making under uncertainty have been considered. Among the suggested approaches, evidence theory, also called Dempster-Shafer theory (DST), provides a strong framework for representing and expressing incomplete knowledge. Different types of available information call for different methods to represent and propagate the associated uncertainty, and resorting to probability theory alone to address this issue is questionable (Baraldi et al. 2013). The use of evidence theory started with Dempster's work describing the calculus of upper and lower probabilities, and the mathematical theory of evidence was defined precisely by Shafer (1976). Bayesian inference (Bayes 1763) remains valid in its established applications, but in recent decades, Dempster-Shafer techniques for modeling under uncertainty have found many applications, and various perspectives on uncertainty management have been proposed. Buchanan and Shortcliffe (1975) proposed a model that manages uncertainty by means of certainty factors. Hence, when our knowledge is incomplete, approaches that handle uncertainty explicitly are more appropriate in application. Fedrizzi and Kacprzyk (1988) encouraged studies in the setting of fuzzy preference, using interval values to express experts' views and judgments through cumulative distribution functions. Each approach to uncertainty management has its own advantages and disadvantages (Lee et al. 1987). For example, Walley (1987) and Caselton and Luo (1992) discuss the problems of standard Bayesian analysis that arise from the unreliability of information, and Klir (1989) has criticized the probabilistic representation of uncertainty for knowledge inference. The theory of Dempster-Shafer (Dempster 1967), which combines information obtained from several different sources, has had many applications, and several justifications for its appropriateness for knowledge inference have been given.

Dempster-Shafer theory has been applied successfully in various domains such as face recognition (Ip and Ng 1994), and it has also had broad applications in diagnosis, statistical classification (Denoeux 1995), data fusion (Telmoudi and Chakhar 2004), environmental impact assessment (Wang et al. 2006), knowledge reduction (Wu et al. 2005), organizational self-assessment (Siow et al. 2001), regression analysis (Monney 2003), multi-criterion decision-making analysis (Bauer 1997; Beynon et al. 2001), pattern classification (Binaghi and Madella 1999; Binaghi et al. 2000), reasoning and logic (Benferaht et al. 2000), medical diagnosis (Yen 1989), safety analysis (Liu et al. 2004; Wang and Yang 2001), expert systems (Beynon et al. 2001; Biswas et al. 1988), target identification (Buede and Girardi 1997), and uncertainty modeling (Klir and Wierman 1998). In this study, Dempster-Shafer theory is likewise applied to account for the risk of tool failure in a production organization. When the failure time is unknown, loss of production will occur. Several methods can be used to avoid the risk of failure in factories; approaches such as reliability-centered maintenance (Knezevic 1997; Moubray 1991), risk-based inspection (Chang et al. 2005), risk-based maintenance (Khan and Haddara 2003), and risk-based decision making (Carazas and Souza 2010) are all risk management tools.
In the theoretical part, we have applied tools such as Dempster-Shafer theory and fuzzy theory, which provide significant patterns about risk and reliability that can be extracted from data originating in a factory. If not enough data are available, we can use fuzzy and precise numbers together to calculate the risk. This results in discovering new knowledge about the failure risk in the factory.

In real situations, what happens to production systems is unpredictable. Hence, in decision making, we always face risk, especially with incomplete information. Although various studies have addressed the use of Dempster-Shafer theory in system recognition, accounting, and decision making, problems remain in applying this theory to system risk assessment and to administrative decision making in real production systems. This was the main reason for conducting this research. The purpose of the research, therefore, was to present a composite approach for better recognition and assessment of applied tool risk; its operational evidence is provided by accounting for the tool risk of a production organization.

Uncertainty and information incompleteness

Various forms of uncertainty affect the work of engineers, scientists, and decision makers. Each group focuses on different types of uncertainty, so before any decision is made, it is necessary to identify and classify the type of uncertainty according to the amount and correctness of the available information. In previous decades, probability theory was the standard device for modeling uncertainty. Given the limitations of probability theory, approaches and methods for describing and quantifying uncertainty have since been developed, and theories other than probability theory have been proposed and used (Parsons 2001). Because of these limitations, the use of probability theory in risk assessment is not always appropriate, especially when lack of information causes the uncertainty. There are many approaches for accounting for uncertainty and deviation, for example, mathematical models and simulation tools. When we face a lack or incompleteness of information, other theories such as fuzzy set theory (Zimmermann 1991), possibility theory (Dubois and Prade 1988), Dempster-Shafer theory, and upper and lower probabilities (Dempster 1967) are more powerful.

In this research, the focus has been on the use of Dempster-Shafer theory in recognizing and accounting for uncertainty and in determining the system risk. In the real world, static mathematical models result from systems of a non-deterministic nature; their parameters carry non-unique uncertainties, and a chain of uncertainties propagates through the model. Among the available approaches and methods, Dempster-Shafer theory provides an appropriate recognition tool under uncertainty resulting from lack of information, because it uses all available data and quantifies its target precisely.

The main sources of uncertainty should be identified and analyzed. Uncertainty falls into two main groups: epistemic uncertainty and aleatory uncertainty. The two differ in several respects: aleatory uncertainty is usually called stochastic, inherent, or irreducible uncertainty, while epistemic uncertainty is usually called subjective or reducible uncertainty. When there is inherent variation in the physical system, we encounter aleatory uncertainty. Generally, inherent variation is due to the random nature of the input data, and it can be represented mathematically by a probability distribution if sufficient experimental data are accessible. In non-deterministic systems, factors such as ignorance, lack of knowledge, or incomplete information lead to epistemic uncertainty. Dempster-Shafer theory, also called evidence theory, provides an appropriate picture of epistemic uncertainty. The advantage of using evidence theory is its success in quantifying the degree of uncertainty when the amount of available information is small. Like most current theories of uncertainty, evidence theory provides two uncertainty measures, known as belief and plausibility.

Identification of the sources of uncertainty is the first step of any methodology for quantifying epistemic or aleatory uncertainty. Uncertainty can occur in every phase of modeling and simulation. Detailed, up-to-date descriptions of the various forms of uncertainty are given in the literature (Oberkampf et al. 2001). Quantifying uncertainty and variability is important, and researchers have proposed different mathematical models to represent them properly. Once enough data are at hand, every type of variability can be represented mathematically via probability distribution functions; there is broad agreement on this point.

In most cases, when maintenance managers try to determine the best policy for their systems, they consider cost the most important, and indeed the only, criterion to be taken into account. This is a very dangerous point of view (Faghihinia and Mollaverdi 2012). Accordingly, the most important focus of the current study is to quantify epistemic uncertainty in maintenance and repair engineering models, particularly for use in multidisciplinary systems and for design optimization.

Operational risk management

Operational risk management is the core of any management strategy related to production assets. Controlling operational risk depends on understanding it, measuring it, and knowing how to reduce it. Accordingly, the process of managing operational risk consists of the following stages: identification of possible failure scenarios (each failure scenario is characterized by a probability of failure and a degree of consequence); risk assessment; prioritizing risks according to their magnitude; and estimating and dealing with the total risk. Assessing and dealing with the degree of risk means managing it and selecting an appropriate risk response strategy: a tolerable risk is accepted, and if the estimated risk is not tolerable, it is reduced through appropriate risk reduction approaches.

To estimate the risk, we must assess it, and for the assessment, we need the probability of the expected failure and the degree of its consequences. Risk estimation then combines the failure probability with the amount of loss. According to a classical definition (Henley and Kumamoto 1981; Vose 2003), the risk of failure R_i is defined by

$$ R_i = P_f \times C, $$
(1)

where P_f is the probability of failure and C is the cost given failure for tool operation. Following the experimental definition of probability, suppose that N is the number of pieces of equipment and N_f is the number of pieces that fail before time a, since only breakdowns before time a are associated with losses. If the number of trials N is sufficiently large, then the entire loss caused by failures throughout the N trials is N_f × C. The anticipated loss is

$$ P_f = \lim_{N \to \infty} \frac{N_f}{N}, \qquad R_i = \frac{N_f}{N} \times C. $$
(2)

In Equation 2, N_f/N approximates the failure probability of the machine before time a, and R_i is the risk of the system. In traditional risk analysis, what matters in the assessment is having primary data to analyze and convert into the information that determines the risk. If these data are not available, or only scant and imprecise data are available, the analysis will not be realistic, and the outcome will probably not be reliable. Quantitative risk assessment is therefore an appropriate device only when such data exist. Risk management is a decision-making process in which lack of data leads to weak decisions. Therefore, whenever we cannot use these quantitative tools, it is appropriate to use semiquantitative or qualitative approaches.
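As a minimal numerical sketch of Equations 1 and 2 in Python (the counts and the cost below are hypothetical, chosen only for illustration):

N = 200        # total pieces of equipment observed (hypothetical)
N_f = 14       # pieces that failed before time a (hypothetical)
C = 50000      # loss per failure, in monetary units (hypothetical)

P_f = N_f / N  # empirical failure probability (Equation 2)
R_i = P_f * C  # risk = probability times consequence (Equation 1)
print(P_f, R_i)  # 0.07 3500.0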

Although the qualitative approach to risk estimation needs less effort to gather information, its subjectivity makes it less reliable. Under complete uncertainty, the qualitative approach to risk assessment is suitable and applicable. What is investigated in this research is the condition in which we have information about the past, but it is not sufficient for statistical analysis and decision making involving the risk. Our purpose is to use all the data in order to increase the precision of decisions. If data are available, they should not be ignored; when even small amounts of data are analyzed and converted into information appropriately, they provide a sound measure for decision making.

An overview of Dempster-Shafer theory

There are various methods of decision making under uncertain conditions. Among these methods, the Dempster-Shafer theory is a powerful one for representing the uncertainty of our incomplete knowledge. The theory of evidence allows one to combine evidence from different sources and arrive at a degree of belief that takes into account all the available evidence. The theory was first developed in Dempster's work explaining the principles of calculating the upper and lower probabilities (Dempster 1967), and its mathematical theory was then developed by Shafer (1976). This theory has had a wide range of applications as a model under uncertain conditions. Briefly, the Dempster-Shafer theory may be summarized as follows. The first step consists in defining a frame of discernment Θ, a finite set of mutually exclusive hypotheses. Then, we can define the function m, which assigns an evidential weight to each subset A of Θ. This function is also called the basic probability assignment.

The basic probability assignment (bpa or m) differs from the classical definition of probability. It is defined as a mapping into the interval [0, 1] in which the basic assignment of the null set, m(∅), is zero and the basic assignments over all subsets A of Θ sum to 1. Each set A for which m(A) ≠ 0 is called a focal element. This can be represented by

$$ m(A) \in [0, 1], \qquad m(\varnothing) = 0, \qquad \sum_{A \subseteq \Theta} m(A) = 1. $$
(3)

The lower and upper bounds of an interval can be determined from a basic probability assignment; the interval contains the probability of the set and is bounded by two non-additive measures, namely belief and plausibility. The lower bound, the belief of a given set A, is defined as the summation of all basic probability assignments of the subsets B of A. The general relation between bpa and belief can be represented by

$$ \mathrm{bel}(A) = \sum_{B \subseteq A} m(B), \qquad \mathrm{bel}(\varnothing) = 0, \qquad \mathrm{bel}(\Theta) = 1. $$
(4)

The upper bound is the plausibility, the summation of the basic probability assignments of all sets B that intersect A (i.e., B ∩ A ≠ ∅). The function pl is called the plausibility and can be written as

$$ \mathrm{pl}(A) = \sum_{B \cap A \neq \varnothing} m(B). $$
(5)

Moreover, the following relationships hold for the belief and plausibility functions under all circumstances:

$$ \mathrm{pl}(A) \geq \mathrm{bel}(A), \qquad \mathrm{pl}(\varnothing) = 0, \qquad \mathrm{pl}(\Theta) = 1, \qquad \mathrm{pl}(\neg A) = 1 - \mathrm{bel}(A). $$
(6)

The belief interval [bel(A), pl(A)] represents the range in which the true probability may lie; its width is obtained by subtracting belief from plausibility. A narrower uncertainty band represents a more precise probability. The probability is uniquely determined if bel(A) = pl(A); in classical probability theory, all probabilities are unique in this sense. If the interval U(A) is [0, 1], no information is available, whereas if the interval is [1, 1], A has been completely confirmed by m(A).
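As an illustration, the following Python sketch (ours, not part of the original analysis) implements Equations 3 to 6 directly: a bpa is stored as a dictionary mapping focal elements (frozensets) to masses, belief sums the masses of the subsets of A, and plausibility sums the masses of the sets intersecting A. The example bpa is hypothetical.

def belief(m, A):
    # bel(A) = sum of m(B) over all focal elements B that are subsets of A (Equation 4)
    return sum(mass for B, mass in m.items() if B <= A)

def plausibility(m, A):
    # pl(A) = sum of m(B) over all focal elements B that intersect A (Equation 5)
    return sum(mass for B, mass in m.items() if B & A)

# Hypothetical bpa on the frame {L, M, H}; the masses sum to 1 (Equation 3).
m = {frozenset({"L"}): 0.5,
     frozenset({"M"}): 0.2,
     frozenset({"L", "M", "H"}): 0.3}  # mass assigned to total ignorance

A = frozenset({"L", "M"})
print(belief(m, A), plausibility(m, A))  # 0.7 1.0 -> belief interval [0.7, 1.0]

The difference pl(A) − bel(A) is exactly the uncertainty interval U(A) discussed above.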

In reality, a decision maker can often access more than one information source when making decisions. Evidence theory constructs a set of hypotheses with known mass values from these information sources and then computes a new set of masses m that represents the combined evidence.

The part of DST that is of direct relevance here is Dempster's rule of combination (Zimmermann 1991). When data come from different sources, data fusion and combination let us summarize the results and simplify them for decision making. Consider now two pieces of evidence on the same frame Θ represented by two bpas, m1 and m2. The decision maker needs a rule of combination to generate a new bpa. This construction is called the Dempster-Shafer rule of combination for group aggregation and can be written as

$$ m_{12}(A) = \frac{\sum_{B \cap C = A} m_1(B)\, m_2(C)}{1 - k} \ \text{ for } A \neq \varnothing, \qquad m_{12}(\varnothing) = 0, \qquad \text{where } k = \sum_{B \cap C = \varnothing} m_1(B)\, m_2(C). $$
(7)

Here, k represents the basic probability mass associated with conflict among the sources of evidence and is often interpreted as a measure of conflict between the sources.
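A minimal sketch of Equation 7 under the same dictionary representation (the two input bpas below are hypothetical): every pair of focal elements is intersected, mass landing on the empty set accumulates into the conflict k, and the surviving masses are renormalized by 1 − k.

def combine(m1, m2):
    # Dempster's rule of combination (Equation 7) for two bpas on the same frame
    combined, k = {}, 0.0
    for B, b_mass in m1.items():
        for C, c_mass in m2.items():
            A = B & C
            if A:                                 # non-empty intersection
                combined[A] = combined.get(A, 0.0) + b_mass * c_mass
            else:                                 # conflicting evidence
                k += b_mass * c_mass
    if k >= 1.0:
        raise ValueError("total conflict: the sources cannot be combined")
    return {A: mass / (1.0 - k) for A, mass in combined.items()}, k

m1 = {frozenset({"L"}): 0.6, frozenset({"L", "M", "H"}): 0.4}  # hypothetical source 1
m2 = {frozenset({"M"}): 0.3, frozenset({"L", "M", "H"}): 0.7}  # hypothetical source 2
m12, k = combine(m1, m2)
print(k)    # 0.18, the conflict between the two sources
print(m12)  # combined bpa, renormalized so its masses sum to 1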

Real case study

In an automobile manufacturing supplier company, different machines are in operation. The company owns a unique 5,000-ton hydraulic press machine for which there is no replacement; in the company's stated strategy, possessing this machine is considered a competitive advantage and its failures a weak point, because any failure or interruption of this machine affects the key results of the company's operation. As this machine is essential, all the parts used in it are also considered essential.

Because it is impossible to buy all the spare parts or to build redundant or standby systems, owing to the machine's structural complexity, the reliability of the system is a major concern of the production management, and any interruption of operation leads to critical losses of income and standing for the company. Another limitation is the spare-part inventory, which should be kept at a proper level: buying and stocking expensive spare parts reduces the risk but increases costs, and these parts may remain unused for years. Management aims to assess the likelihood of failures across the whole press machine and, since a replacement for the machine is not possible, wishes to identify the reliability and risk of the whole press by determining the breakdown risk of its key parts and making decisions accordingly.

To begin the analysis, part of the fault tree has been drawn in Figure 1. In this case study, three parts of the fault tree were investigated: the computer, which controls the whole system; the PLC control, which connects the hardware and the software of the press; and the cushion pumps, which apply pressure to the parts being pressed. These are essential components of the production plant; they may fail through wear, and procuring replacements imposes costs on the organization. Over the past 13 years, only limited data about failures of the control computer have been recorded, so direct decisions through statistical methods are not possible. The managing director aims to decide, based on the previous data and the experts' opinions, how to respond to the risk arising from this three-part assembly.

Figure 1. Fault tree of the machine.

Risk-based analysis using Dempster-Shafer theory

Risk has two factors. The first is probability: the probability of occurrence of a risk in a definite period of time. Risks can also be classified according to their probability of occurrence. The classification of Table 1 shows a fuzzy categorization of the relative probability of an accident or breakdown occurring as a result of uncontrolled risk^a. This table helps us to understand the importance of an accident according to the probability of its occurrence. It is worth mentioning that in similar categorizations, the probability of accidents can be defined in a fuzzy way; in this mode, the data expert must gather the data quite precisely.

Table 1. Failure information from sources 1 and 2

With regard to the data, we classified the probability of failures into three levels of magnitude: L (Low), in which a breakdown may occur with low probability in a fixed period of time; M (Medium), in which a breakdown may occur with medium probability; and H (High), in which a breakdown may occur with high probability. These levels form the frame Θ = {L, M, H}, whose eight possible subsets are {φ}, {L}, {M}, {H}, {L,M}, {L,H}, {M,H}, and {L,M,H}. Figure 2 shows the basic probability assignment for the breakdown probability of the production system in a fuzzy-logic representation. On the other hand, risk is the probability of occurrence multiplied by the magnitude of damage (Equation 1). The researchers therefore define fuzzy severity categories for the consequences, shown in Figure 2, as follows: Negligible (N), a failure that, if it happens, is slight and removable; Main (M), a failure that causes a stoppage but is removable; and Critical (C), a failure whose consequence is high and which leads to a crisis if it happens. These events form the frame Θ = {N, M, C}, which likewise yields eight possible subsets: {φ}, {N}, {M}, {C}, {N,M}, {N,C}, {M,C}, and {N,M,C}. The eight subsets of each frame can be enumerated programmatically, as sketched below.
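For concreteness, a short Python sketch that enumerates the 2^3 = 8 subsets of such a frame (here for Θ = {L, M, H}):

from itertools import chain, combinations

theta = ("L", "M", "H")  # frame of discernment for the failure probability
subsets = list(chain.from_iterable(
    combinations(theta, r) for r in range(len(theta) + 1)))
print(subsets)
# [(), ('L',), ('M',), ('H',), ('L', 'M'), ('L', 'H'), ('M', 'H'), ('L', 'M', 'H')]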

Figure 2. Fuzzy categories in the problem.

The lower and upper bounds of an interval can be determined from the basic probability assignment; the bpa bounds the probability of a set by the two measures belief and plausibility. Table 2 shows the basic probability assignments for the breakdown probability of the production system.

Table 2. The D-S rule of combination applied to the information from sources 1 and 2

By applying the D-S rule of combination to the sources of information for the PLC (P), the following data are generated, as shown in Table 3.

Table 3. Results of the belief and plausibility functions for failures of PLC 'P'

The degree of conflict is k = 0.27; therefore, the normalization factor is (1 − k) = 0.73. In the same way, the belief and plausibility functions can be determined using the corresponding equations described earlier. According to Equation 7, the combined masses can be written as follows:

$$ m_{12,p}(\{L\}) = 0.63, \quad m_{12,p}(\{M\}) = 0.16, \quad m_{12,p}(\{H\}) = 0.04, \quad m_{12,p}(\{L,M\}) = 0.00, $$
$$ m_{12,p}(\{L,H\}) = 0.05, \quad m_{12,p}(\{M,H\}) = 0.05, \quad m_{12,p}(\Theta) = 0.05. $$

The method is similar for the other parts, namely the computer 'Co' and the cushion 'Cu', for which the combined masses can be written as follows:

$$ m_{12,co}(\{L\}) = 0.68, \quad m_{12,co}(\{M\}) = 0.17, \quad m_{12,co}(\{H\}) = 0.05, \quad m_{12,co}(\{L,M\}) = 0.00, $$
$$ m_{12,co}(\{L,H\}) = 0.00, \quad m_{12,co}(\{M,H\}) = 0.08, \quad m_{12,co}(\Theta) = 0.02, \quad k = 0.40; $$
$$ m_{12,cu}(\{L\}) = 0.34, \quad m_{12,cu}(\{M\}) = 0.35, \quad m_{12,cu}(\{H\}) = 0.23, \quad m_{12,cu}(\{L,M\}) = 0.06, $$
$$ m_{12,cu}(\{L,H\}) = 0.00, \quad m_{12,cu}(\{M,H\}) = 0.26, \quad m_{12,cu}(\Theta) = 0.06, \quad k = 0.35. $$

After finding the basic probability assignment of each device, the belief and plausibility can be calculated according to Equations 4 and 5.
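For example, feeding the combined PLC masses above into the belief and plausibility functions sketched earlier (restated here so the snippet is self-contained) gives the interval for the hypothesis {L, M}; with the rounded masses printed above, pl comes out near 0.94, while the paper's unrounded data give the 0.96 reported below.

def belief(m, A):
    return sum(mass for B, mass in m.items() if B <= A)

def plausibility(m, A):
    return sum(mass for B, mass in m.items() if B & A)

# Combined bpa for the PLC, taken from the rounded values above
# (the zero-mass focal element {L, M} is omitted).
m12_p = {frozenset({"L"}): 0.63, frozenset({"M"}): 0.16,
         frozenset({"H"}): 0.04, frozenset({"L", "H"}): 0.05,
         frozenset({"M", "H"}): 0.05, frozenset({"L", "M", "H"}): 0.05}

A = frozenset({"L", "M"})
print(belief(m12_p, A))        # ~0.79
print(plausibility(m12_p, A))  # ~0.94 from these rounded masses (reported: 0.96)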

Uncertainty interval for failure probability

The belief interval represents the range in which the true probability may lie. It is determined by subtracting belief from plausibility; a narrower uncertainty band signifies a more precise probability. If U(A) spans the interval [0, 1], no information is at hand, whereas if the interval is [1, 1], A has been entirely confirmed by m(A). The uncertainty interval for the failure probability of the PLC is presented in Table 4.

Table 4. Results of the uncertainty interval

From the above combination analysis, it was found that the failure probability of the PLC lies in the Low and Medium levels, supported by the belief interval [0.79, 0.96]; similarly, we can obtain the risk functions of the computer and the cushion pumps from their related data.

Uncertainty interval for consequences

Similarly, the lower and upper bounds of the consequence impact can be determined from the basic probability assignment; the summarized and simplified calculation is shown in Table 5.

Table 5. Consequence information from sources 1 and 2

According to the basic probability assignments, Equations 7, 4, and 5 can be used to combine the sources and find the belief and plausibility functions, in the same way as in the previous section; the result is shown in Table 6. The intervals shown in this table represent the degree to which each value is supported by the existing data. For example, regarding the cushion, the results show that the breakdown consequence of the machine is at the Main and Critical level; this is supported by the probability interval [0.81, 0.91].

Table 6. Calculated consequence of failure from sources 1 and 2

The failure consequence of each machine is determined from the table of belief and plausibility functions. Regarding the computer, the results show that the consequence impact is at the Negligible and Main level; this is supported by the probability interval [0.96, 0.96]. The interval with the greatest belief is chosen, since the belief function gives the lowest probability supported by the minimum available evidence. For example, the calculated consequence of failure can be shown as follows (see Table 7).

Table 7. Consequence impact of failure and uncertainty interval (U)

The risk assessment diagram

First, we draw the risk assessment diagram shown in Figure 3. These diagrams are drawn for two of the selected parts: the computer and the cushion pumps. In the risk assessment diagram, the x-axis is divided into five sections of consequence, from negligible loss up to a critical outcome, and the y-axis shows the probability of failure, from Low to High, in five sections. If enough data were available, we could use quantitative, precise numbers for the probability; in this case study, however, we are under uncertainty and our data are insufficient. Therefore, a non-deterministic area has been marked in the diagram: the lower bound of this region is determined by the belief function and the upper bound by the plausibility function. Within this region, the narrower the interval band, the more precise the probability. As an example, drawing the risk related to the probability of failure of the computer yields Figure 3. According to the resulting evidence, the failure probability of the computer is at the two levels Low and Medium and lies in the [0.2, 0.4] domain, and the failure consequence of the computer is at the two levels Negligible and Main; the risk therefore falls in the insignificant or minor area. This illustrates the risk area under uncertainty; risk assessment diagrams can be drawn for the other cases in the same way.

Figure 3. Square diagram of risk assessment for two essential parts in the fault tree.

Figure 3 shows that the risk of the cushion devices is located at the moderate and major level, while the risk of the computer is at the insignificant and minor level. There are two ways to use risk-based analysis to control and reduce the risk of this equipment: (1) decreasing the probability of failure and (2) decreasing the consequence of failure, either of which reduces the failure risk of the equipment. Here, decreasing the probability of failure of the cushion will not decrease its failure risk; therefore, we have to reduce the consequence impact of failure. Figure 3 also shows that under uncertainty, although a precise number cannot be determined for the risk of the system, the risk area can be obtained.

Analysis of the risk assessment diagram

To make decisions, measures must first be determined. However, under many conditions, the decision information about alternatives is uncertain or fuzzy, owing to the increasing complexity of the socio-economic environment and the vagueness inherent in the subjective nature of human thinking (Gui-Wu et al. 2013).

The uncertainty in the nature of fuzzy problems makes the decision makers (DMs) find a solution so that both feasibility and optimality conditions can be satisfied efficiently (Tabrizi and Razmi 2013).

The principles for classifying risk can differ according to the fuzzy numbers, their naming, and the purpose and scope of each class. The classification presented in Figure 3 delineates the fuzzy risk areas by assigning different classes to the system and its probable failures. If an appropriate probability distribution cannot be identified for a given situation, it becomes extremely difficult to draw reliable inferences about the domain under investigation (Sundaram and Ramya 2013). With such a classification, it is possible to assess the existing conditions better and, consequently, to prioritize the controlling actions.

Using the probability and consequence classification system, it is possible to assess and analyze the risk on the basis of the potential consequences and the probability of their occurrence. Integrating these two factors yields the hazard risk diagram, which combines the consequence and probability tables to provide a proper device for estimating the acceptable level of risk. With a two-attribute assessment system based on the consequence and the probability of the risk, risks can be classified and assessed according to their degree of acceptability; the result is called the hazard risk diagram. In Figure 3, this diagram has been determined for the two machines of this case study.
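As a sketch of how such a diagram can be encoded, the Python lookup below maps a probability level and a consequence level to a risk area. The five-by-five grid and the particular cell labels are illustrative assumptions, not the exact matrix used in this study.

# Hypothetical 5 x 5 risk matrix: rows are probability levels (low to high),
# columns are consequence levels (negligible to critical); the cell labels
# are illustrative assumptions, not the matrix of this case study.
PROBABILITY = ["very low", "low", "medium", "high", "very high"]
CONSEQUENCE = ["negligible", "minor", "main", "major", "critical"]
RISK = [["insignificant", "insignificant", "minor", "minor", "moderate"],
        ["insignificant", "minor", "minor", "moderate", "moderate"],
        ["minor", "minor", "moderate", "moderate", "major"],
        ["minor", "moderate", "moderate", "major", "major"],
        ["moderate", "moderate", "major", "major", "critical"]]

def risk_area(probability, consequence):
    return RISK[PROBABILITY.index(probability)][CONSEQUENCE.index(consequence)]

print(risk_area("low", "main"))  # minor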

From the risk diagram, we find that the failure risk of the cushion, along the two dimensions of the square, ranges between moderate and major risk. The other results, pertaining to the computer, are also shown in the diagram: the risk of the computer and the PLC lies in the insignificant and minor area and therefore does not need improvement. Sometimes we have to reduce the consequence impact of failure; for example, we can decrease the consequence impact by adding redundancy. For the cushion, we can only decrease the failure consequence, thereby reducing the failure risk to the moderate and minor level.

Conclusions

In risk management, we can make sound assessments only if we have a reasonable identification of the uncertainty. For this identification, it is necessary to know all the factors and activities and to understand all the relevant issues; analyzing the main topic into its factors, and then analyzing each factor, leads to the identification of all related issues. The internal causes of failure include poor management, lack of risk management planning, and failure to adopt a risk limit threshold (Ariful and Des 2012). Most of the time in industry, we lack the data needed to calculate reliability or make a decision. The main questions in any factory are how risk-based methods can be used for optimal planning of the future and what the best model is for estimating and forecasting it. In particular, theoretical frameworks, models, and algorithms based on probability theory alone cannot calculate the risk, because data are lacking in real situations. Reliability analyses should be risk based and linked with the losses from failures; deciding on reliability allocation or reallocation under uncertainty, on the basis of unknown data, is a new challenge. When the failure time is unknown, loss of production will occur.

In the theoretical part, we have applied tools such as the Dempster-Shafer theory. Dissimilarity assessment is a central problem in DST, where the difference in information content between several pieces of evidence must be quantified (Sarabi-Jamab et al. 2013).

Dempster-Shafer theory provides significant patterns about risk and reliability, which can be extracted from data originating in a factory. If not enough data are available, we can use qualitative and precise numbers together to calculate the risk, and this results in discovering new knowledge about the failure risk in the factory. In this paper, because of the lack of information, we have introduced a method that determines a range for the consequence impact and calculates the probability of failure, relating the risk to the reliability. A decision maker should reach relative certainty before committing to decisions. Uncertainty is classified into two main groups: aleatory uncertainty and epistemic uncertainty. The major concern of this study is epistemic uncertainty, which is due to the lack or incompleteness of correct data. One of the theories used for decision making in such conditions is the evidence theory, or Dempster-Shafer theory.

Evidence theory provides a proper tool when information is incomplete or imperfect. In this research, using evidence theory, the uncertainty range between belief and plausibility has been obtained, and this range provides a measure for decision making under the uncertainty that is due to incomplete information. Dempster-Shafer theory does not impose the limitations of a single probabilistic model and gives a logical expression of ignorance. Since the assessment of real risks in working environments faces the problem of ignorance, it is possible to reduce the degree of ambiguity and increase the reliability of the results by integrating the qualitative risk assessment diagram with the Dempster-Shafer theory. In this investigation, Dempster-Shafer theory was used for identification under uncertainty, and its findings were applied together with a risk diagram to specify the risk of a production system; indeed, its application in production organizations was examined. In the qualitative approach, there is no estimate of the probability of failure, but it can be ranked in fuzzy-logic categories; a computational intelligence technique provides a convenient way to represent linguistic variables, subjective probability, and ordinal categories. Linguistic variables are designed to describe imprecise facts about a system and a project, which differ from the frequencies of repeatable events. Hence, subjective probability is a better way to represent such risk than a quantitative objective probability of failure. Furthermore, fuzzy severity categories are more credible than numeric scores.

Endnote

^a MIL-STD-882B.