A landscape and implementation framework for probabilistic rough sets using PROBLOG

Probabilistic rough sets
Formal theories for approximate reasoning are numerous and highly varied, owing to the different types of approximate concepts they are intended to model, such as vagueness, imprecision and incompleteness. Rough set theory [2,7,19,25,38,47,48] is one such formal representation; it has been used to model incompleteness and imprecision using indiscernibility relations and approximations based on such relations. Classical rough set theory has been generalized to use base relations weaker than equivalence relations [7,8,38]. For instance, tolerance spaces [23,39] are defined in terms of a base relation that is only reflexive and symmetric. Rough sets are additionally characterized by three regions: a positive region containing all individuals that are members of a set, a negative region containing all individuals that are not in the set, and a boundary region containing elements that may or may not be in the set. For each rough set, these regions are determined by the base relation used in defining that particular set.
In [9], the authors show how Answer Set Programming (ASP) [10] can be used as a basis for representing and reasoning about classical rough relations and their generalizations, together with standard relations, in a nonmonotonic context. ASP is a knowledge representation framework based on the logic programming and nonmonotonic reasoning paradigms that uses an answer set/stable model semantics for logic programs. There are many other implementations of libraries and tools for rough sets in general (see [17,30,32,33] and references therein). However, to the best of our knowledge, no implementation of probabilistic approaches has been reported in the literature.
Early in the development of rough set theory and its applications, it was noticed that the probabilistic information implicit in rough set theory was not explicitly formalized and exploited [21]. A natural extension of the rough set method that uses a probabilistic model was proposed in [43]. Since then there have been many additional proposals and applications of probabilistic rough set methods [12,13,18,37,44–50].
In the standard rough set model, given a crisp set S, the lower and upper approximations of S are defined based on an underlying equivalence relation. If an equivalence class is a subset of S, then all individuals in that class are in the lower approximation of S. If an equivalence class intersects with the set S, then that equivalence class is part of the upper approximation of S.
The basic intuition underlying probabilistic rough set methods is based on the observation that, for those equivalence classes in the upper approximation of a set S, the proportion of individuals in a class that are in the set, in comparison to those that are not, offers a quantitative measure of any randomly chosen individual in the equivalence class being a member of S. One can then define a membership function μ(x) which returns this ratio for any individual. μ(x), together with the use of threshold values, can be used for more fine-grained assessments of which individuals are in the boundary region of a rough set.
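As a concrete illustration of this ratio (a minimal Python sketch, not the paper's PROBLOG encoding; the function and variable names are ours), μ(x) can be computed directly from an individual's equivalence class:

```python
# Rough membership: the fraction of x's equivalence class that lies
# inside the target set S. Names and data are illustrative only.

def mu(x, S, eq_class):
    """eq_class maps each individual to its equivalence class (a set)."""
    cls = eq_class[x]
    return len(cls & S) / len(cls)

# Toy domain: one equivalence class {1, 2, 3, 4}, target S = {1, 2, 3}.
cls = {1, 2, 3, 4}
eq = {x: cls for x in cls}
S = {1, 2, 3}
print(mu(1, S, eq))  # 0.75: a random member of the class is in S with probability 3/4
```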

Motivations and contributions
Given the widespread use of probabilistic rough set methods, the purpose of this paper is to show how PROBLOG [4–6], a well-known probabilistic logic programming language, can be used as a basis for representing and reasoning about probabilistic rough relations and their generalizations in a succinct and elegant manner. In fact, the paper will show how a majority of the more well-known probabilistic rough set methods can be generalized and defined using one parameterized definition (Definition 8.4) that provides the basis for representing and reasoning with any of these methods using PROBLOG in a straightforward manner. By using higher-order PROBLOG programming techniques, a generic set of PROBLOG relations can be defined and instantiated for specifying any of the probabilistic rough set methods considered in the paper. This set of relations can be defined in roughly half a page of PROBLOG (see Program 6).
The paper also proposes a number of new generalizations of existing probabilistic rough set methods not seen in the literature. Normally, rough set methods assume a target set S that is a crisp set not definable with the elementary equivalence classes associated with an approximation space. In the first generalization, the paper proposes the use of both crisp sets and probabilistic sets as input. In the second generalization, the paper proposes the use of probabilistic base relations as a basis for approximation spaces. These additional generalizations are also subsumed by the parameterized Definition 8.4 for generalized approximation operators. The paper uses a rich set of PROBLOG examples and reproducible case studies to show the efficacy of the approaches that are proposed.
Fig. 1 provides an incremental roadmap of the variety of rough set approximation operators considered in the paper. The lower lane begins with classical approximation spaces and operators and provides definitions of the various generalizations in the literature. These incremental generalizations lead to Definition 8.4, a definition for generalized probabilistic approximation spaces and operators that can be instantiated to generate the preceding incremental generalizations. The upper lane in the roadmap begins with classical tolerance-based approximation spaces and operators, which use tolerance spaces where the base relation is only reflexive and symmetric. This is generalized to the probabilistic case, which again leads to Definition 8.4, which subsumes this lane as well.
The following contributions are made in this paper:

1. The paper begins with a thorough summary of rough set methods in the literature and their generalization to probabilistic rough set methods. In doing this, the variety of choices used to define different rough set theories, both non-probabilistic and probabilistic, is brought to light. A number of definitions for existing and new probabilistic rough set methods are considered that incrementally generalize each other, as reflected in the definition roadmap in Fig. 1.
2. Definitions of generalized probabilistic approximation spaces and operators are provided. These parameterized definitions essentially subsume a majority of the probabilistic rough set methods considered in the literature. This in itself is not surprising since the methods are closely related, but it does enable compact, efficient and highly expressive specification mechanisms in PROBLOG.

3. As alluded to in the previous point, using the definitions for generalized probabilistic approximation spaces and operators as a basis, an efficient and compact encoding for a majority of probabilistic rough set methods encountered in the literature is proposed. It is based on parameterizing the formal definitions and then using a set of higher-order relational definitions in PROBLOG that correlate with the different parameterizations.

4. A number of new concepts are introduced that enhance the expressibility and use of existing probabilistic rough set methods.
The input to a majority of rough set methods is a crisp target set in a domain, generally not definable using the elementary classes in the quotient set generated by a base relation in the approximation space in question. One then proceeds to define its lower and upper approximations. The first generalization allows for partially specified, probabilistic target sets as input to probabilistic rough set methods.
The base relations used in a majority of rough set methods are non-probabilistic. The second generalization is to allow base relations to be defined as probabilistic relations. For both these generalizations to be useful, one has to define a probabilistic neighborhood relation based on the use of thresholds in order to generate useful equivalence classes or coverings for a generalized approximation space. This is an additional contribution.

5. Tolerance spaces provide a particularly interesting challenge in terms of generalizing such spaces to their probabilistic counterparts, because individuals in a domain can be in more than one neighborhood at the same time. The paper identifies why this is a problem for the probabilistic case and then provides a solution in terms of a modified definition for lower and upper probabilistic approximations.

6. A PROBLOG programming methodology is provided for specifying and reasoning about almost any type of probabilistic rough set method, based on the instantiation of a collection of generic higher-order PROBLOG relations described previously.
The resulting probabilistic rough relations can be combined with non-probabilistic and probabilistic relations defined in the conventional manner using PROBLOG's features.

7. A number of examples and case studies are provided throughout, showing the efficacy, expressivity and power of these concepts. The intent is that the PROBLOG-based tool proposed for specifying and reasoning about the variety of probabilistic rough set methods be easily usable and reproducible in a straightforward manner. To show this, out-of-the-box executable PROBLOG code is provided for a majority of the examples and case studies to guide the potential user of these techniques.
All the above points contain original results either by synthesizing concepts previously introduced and investigated by other authors (points 1, 2) or by reporting original developments (points 3-7).

Feasibility of the approach
The worst-case complexity of computing PROBLOG queries is exponential w.r.t. the number of variables/ground literals. The same applies to other probabilistic logic programming languages using the Sato distribution semantics [34]. Though still exponential in the worst case, the techniques used to compute probabilistic queries in PROBLOG exhibit much better performance in practice (see also the discussion in Remark 4.1). The PROBLOG solver essentially combines SLD-resolution with approximation algorithms and methods for representing Boolean formulas and computing their probabilities. As indicated in [6], PROBLOG's solver is able to deal with belief bases sometimes containing up to 100 000 formulas. In fact, PROBLOG has been used in a variety of real-world applications. For example, the PROBLOG web page [28] lists over 20 papers concerning larger-scale applications in biology, action theories, robotics and tracking, games, and natural language, as well as in other areas.
The specification framework developed in the current paper is built on top of the PROBLOG engine. It only adds polynomial time complexity, so it can be applied in the areas mentioned above, in addition to other areas. Our framework inherits the methodology and applications of probabilistic rough sets, so it can be particularly useful when objects can be clustered and gathered into equivalence classes or tolerance neighborhoods reflecting indiscernibilities/similarities among objects. This typically occurs, and is described, in the previously cited sources on probabilistic rough sets. It may also occur in the application areas indicated in [28]. In [6], similarity relations are used in link mining for large networks of biological concepts. A typical query considers whether a given gene is connected to a given disease. As indicated in [6], "probabilities of edges can be obtained from methods that predict their existence based on, e.g., co-occurrence frequencies or sequence similarities". The study shows that "the connection query could be solved [...] for graphs with up to 1 400 to 4 600 edges".
We also demonstrate the use of indiscernibility and similarity relations in the case study discussed in Section 9. There the case study is simplified for the purpose of presentation and pedagogic clarity. It is intended to serve as a showcase illustrating the underlying methodology and features described in the paper.

Paper structure
Section 1 provides an introduction and overview of the topic area and of the approach taken in the construction of a generic PROBLOG-based tool for probabilistic rough set methods. Section 2 provides an overview of classical rough set methods and a landscape of choices that can be made when defining the underlying base relations for rough sets. Section 3 provides an overview of probabilistic rough set methods in the literature. Additionally, it considers an example of the application of the most basic probabilistic rough set method, where the information necessary for defining rough sets is generated from a table of data. Section 4 provides a short summary of PROBLOG and its semantic theory, which is based on the distribution semantics. Section 5 provides a generic PROBLOG program structure and template for specifying probabilistic rough sets using informal but concise syntax. Section 6 provides some examples that use the generic program structure and also provides executable PROBLOG code, in the appendices, based on these examples. Section 7 considers the use of tolerance spaces and their generalization to the probabilistic case. Section 8 considers the generalizations of probabilistic rough set methods. It also shows how all probabilistic methods considered up to that point are subsumed by a new definition of generalized probabilistic approximation spaces and operators. Additionally, the section specifies a small, compact set of higher-order PROBLOG relations used to specify any of these methods through instantiation. Section 9 then considers a relatively complex case study of a toy recommendation system that uses many of the concepts considered previously in the paper. The case study is encoded using the generic PROBLOG relations defined in the previous section, and it also shows how different types of relations interact naturally in a PROBLOG program. Section 10 then summarizes the results and concludes the paper.

Approximations and approximate/rough sets
In the paper we will consider both classical rough set methods based on the use of indiscernibility relations (reflexive, symmetric and transitive), as well as their generalizations, where the requirements on the underlying base relation are relaxed. For example, rather than using indiscernibility as a basis for the base relation, one may require similarity (proximity, tolerance) among individuals instead. In this case, transitivity would be removed as a requirement and the focus would be on reflexive and symmetric base relations instead. These are just two of many choices.
In the rest of the paper we shall assume that 'dom' is a fixed finite domain, where x, y, and z (possibly subscripted) are used to denote objects in dom, and c and d are used to denote subsets of dom.
r ⊆ dom × dom, possibly with an index, is a base relation intended to represent indiscernibility. It is assumed that r(x) signifies the equivalence class that an individual x belongs to, i.e., the set of individuals equivalent to x; |r(x)| will be used to denote its cardinality. Later in the paper, the symbols s and t will be used for generalizations of the base relation r.
In the following definition we set no requirements on the base relation used to define approximations. The requirements used in subsequent parts of the paper are listed in Table 1.
The difference c⊕_r \ c+_r is called the boundary region of c w.r.t. r. The relation r is called the base relation for approximations. Note that arbitrary (n-argument, with n > 1) approximate relations can be modelled by assuming that the domain dom consists of n-tuples of elements of "more elementary" domains. In the rest of the paper we will deal with a language with arbitrary relations; Definition 2.1 applies to such relations as well. Given an approximation space AS = ⟨dom, r⟩, any set c ⊆ dom can be partitioned into three disjoint regions: c+_r, the positive region of c in AS; c⊕_r \ c+_r, the boundary region of c in AS; and dom \ c⊕_r, the negative region of c in AS.
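The partition into the three regions can be sketched in a few lines of Python (an illustrative rendering of these definitions, assuming r is an equivalence relation given by its classes; the names are ours, not the paper's):

```python
# Lower/upper approximations of c for a partition of dom into
# equivalence classes, and the resulting three regions.

def approximations(classes, c):
    lower, upper = set(), set()
    for k in classes:
        if k <= c:        # class entirely inside c -> lower approximation
            lower |= k
        if k & c:         # class overlapping c -> upper approximation
            upper |= k
    return lower, upper

dom = {1, 2, 3, 4, 5, 6}
classes = [{1, 2}, {3, 4}, {5, 6}]
c = {1, 2, 3}
lower, upper = approximations(classes, c)
positive, boundary, negative = lower, upper - lower, dom - upper
print(positive, boundary, negative)  # {1, 2} {3, 4} {5, 6}
```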

Correspondences between approximations and properties of base relations
The properties listed in Table 1 have been extensively investigated in the area of modal logics and correspondence theory, which studies the relation between modalities and Kripke accessibility relations [40]. Insights from these areas have been used to relate properties of approximations to properties of the underlying base relation r (see, e.g., [8,47]). In particular: D ensures that the lower approximation of a set is included in its upper approximation; T ensures that the lower approximation is included in the approximated set; B ensures that the approximated set is included in the lower approximation of the upper approximation of the set; 4 ensures that the lower approximation of a set is included in the lower approximation of its lower approximation (so that iterating lower approximations does not change the result of the first application); 5 ensures that the upper approximation of a set is included in the lower approximation of its upper approximation.
Remark 2.4. Note that requirement D is equivalent to the seriality of the base relation r. The inclusion of the lower approximation in the upper approximation is fundamental in approximate reasoning. However, D alone does not guarantee that an object is indistinguishable from/similar to itself. Therefore, in the methods considered in this paper we will assume (at least) T, the reflexivity of r. Of course, D is entailed by T.

Fig. 2 shows dependencies among properties of binary relations that are used as a basis for approximations. These dependencies are well known from modal logic and, later, rough set methods (see, e.g., [40,47]). Note that the bottom of the figure, D, defines serial relations, while the top, T5, defines equivalence relations. TB corresponds to tolerance relations. Note also that T5 = TB4 (in modal correspondence theory they both correspond to S5).
In the specification of rough set methods in PROBLOG, users will have the ability to represent selections of these properties in PROBLOG programs when specifying different approximation relations. This is shown in several sections of the paper.

Probabilistic rough set reasoning landscape
In this section, the focus will be on classical rough sets, where the base relation r is an equivalence relation. A brief description of the different ways to generalize classical rough set methods to probabilistic rough set methods is provided. There have been many investigations of probabilistic generalizations of classical rough set theory in the literature [12,13,18,37,44–50].
In the classical theory, for a rough set c and for any object x ∈ c+_r, its equivalence class r(x) is included in c. Similarly, for any object x ∈ c⊕_r, its equivalence class r(x) intersects with c and the overlap must be nonempty (r(x) ∩ c ≠ ∅). The starting point for probabilistic generalizations of rough sets is to take into account the degree of overlap that equivalence classes have with the target set c. These proportions can then be used as a basis for determining such quantities as the probabilities of individual objects relative to rough target sets, the degree of dependencies between rough sets, and the construction of probabilistic rough set approximations, among others. The most basic generalization begins with the concept of a rough membership function, introduced in [43] (see also [20]).
Let dom be a finite set of individuals, P : 2^dom → [0, 1] be a probability function on the powerset of dom, and r be a binary relation on dom that is an equivalence relation. The triple AS_P = ⟨dom, r, P⟩ is called a probabilistic approximation space [21].
Given AS_P = ⟨dom, r, P⟩, the function μ_c(x) provides a quantitative degree to which the object x belongs to the rough set c. It is defined as the conditional probability P(c | r(x)), which characterizes the probability that a randomly chosen individual in r(x) is a member of the rough set c. Assuming a finite domain, it is often computed as the proportion μ_c(x) = |c ∩ r(x)| / |r(x)|. In a similar manner, the membership function μ can be generalized to rough set inclusion [26], the degree to which a rough set d is a subset of another rough set c; this inclusion is denoted by ν(d|c). In Definition 3.2, the probabilistic lower and upper approximations are defined as c^{P+}_r = {x : μ_c(x) = 1} and c^{P⊕}_r = {x : μ_c(x) > 0}. The difference c^{P⊕}_r \ c^{P+}_r is called the probabilistic boundary region of c w.r.t. r. Given a probabilistic approximation space AS_P = ⟨dom, r, P⟩, any set c ⊆ dom can be partitioned into three disjoint regions, as in the non-probabilistic case. Note that the probabilistic approximation operators in Definition 3.2 are equivalent to the approximation operators in Definition 2.1 in the following sense: given AS = ⟨dom, r⟩ and AS_P = ⟨dom, r, P⟩, a non-probabilistic approximation space and a probabilistic approximation space, respectively, where the dom's are the same and the base relations r are the same, for any set c ⊆ dom, c+_r = c^{P+}_r and c⊕_r = c^{P⊕}_r. Consequently, probabilistic rough set models are a natural extension, and will be shown to be generalizations, of their classical counterparts.
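The claimed equivalence is easy to check mechanically. The following Python sketch (our names; a uniform distribution over a finite dom is assumed, so that μ_c reduces to the class-overlap ratio) compares the two pairs of operators on a toy partition:

```python
# Probabilistic approximations via the membership function mu_c,
# together with a check that thresholds "= 1" and "> 0" recover the
# classical lower and upper approximations.

def mu_c(cls, c):
    return len(cls & c) / len(cls)

def prob_approximations(classes, c):
    lower = set().union(*(k for k in classes if mu_c(k, c) == 1.0))
    upper = set().union(*(k for k in classes if mu_c(k, c) > 0.0))
    return lower, upper

def classical_approximations(classes, c):
    lower = set().union(*(k for k in classes if k <= c))
    upper = set().union(*(k for k in classes if k & c))
    return lower, upper

classes = [{1, 2}, {3, 4}, {5, 6}]
c = {1, 2, 3}
print(prob_approximations(classes, c) == classical_approximations(classes, c))  # True
```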
In Pawlak et al. [21], a rough probabilistic model is introduced to characterize statistical dependencies between sets of attributes in an information system, in particular dependencies between input (condition) attributes and target (action) attributes. This is useful in constructing decision rules that take as input a subset of instantiated input attributes and return an instantiation of the target attributes. Many such tables of data lack sufficient information to induce deterministic mappings; the authors call these tables non-deterministic. Deterministic tables can be handled using the classical rough set model, while non-deterministic tables require a generalization of the classical model that takes account of the available probabilistic information. The probabilistic rough set model proposed in [21] is defined below in terms of 0.5-probabilistic approximations.

Definition 3.3 (0.5-Probabilistic approximations). Given AS_{P(0.5)} = ⟨dom, r, P⟩ and c ⊆ dom, the 0.5-probabilistic lower and upper approximations of c w.r.t. r are defined by c^{0.5+}_r = {x : μ_c(x) > 0.5} and c^{0.5⊕}_r = {x : μ_c(x) ≥ 0.5}. Given a 0.5-probabilistic approximation space AS_{P(0.5)} = ⟨dom, r, P⟩, any set c ⊆ dom can be partitioned into three disjoint regions, as before. This model has the flavor of a majority-rule model: if more than 50% of the objects in an equivalence class r(x) are (or are not) in c, the idea is that one can be statistically confident that the object does (or does not) satisfy the properties of the concept c.
The remaining rough probabilistic models that will be considered in this paper can be viewed as generalizations of Definition 3.2 and Definition 3.3 (although see Remark 3.5) through the use of parameterization. This perspective is elegantly described in Yao [44].
Ziarko [49] proposed the variable precision rough set model, where one allows for variable levels of set inclusion in the definitions of the approximation operators. Recall from Definition 3.3 that for c^{0.5⊕}_r, μ_c(x) ≥ 0.5. Generalizing this, one uses a threshold α ∈ (0, 1]. For the lower approximation, the majority-rule constraint would constrain α to be greater than 0.5; consequently, α ∈ (0.5, 1.0]. Additionally, for the upper approximation, in order for it to retain the property of being a dual operator to the lower approximation, its parameter should be 1 − α. The pair of parameters (α, 1 − α) is referred to as the symmetric bounds. The use of symmetric bounds ensures that the lower approximation is always a subset of the upper approximation.

Definition 3.4 (Variable precision approximations, symmetric bounds). Let α ∈ (0.5, 1.0]. Given AS_{VP(α)} = ⟨dom, r, P, α⟩ and c ⊆ dom, the variable precision lower and upper approximations of c w.r.t. α and r are defined by c^{V+}_r = {x : μ_c(x) ≥ α} and c^{V⊕}_r = {x : μ_c(x) > 1 − α}. The difference c^{V⊕}_r \ c^{V+}_r is called the variable precision boundary region of c w.r.t. r.
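How the symmetric bounds act on μ_c can be sketched in Python (illustrative names; α is passed explicitly, and μ_c is again the class-overlap ratio for a uniform distribution):

```python
# Variable precision approximations with symmetric bounds (alpha, 1 - alpha).

def vp_approximations(classes, c, alpha):
    assert 0.5 < alpha <= 1.0
    mu = lambda k: len(k & c) / len(k)
    lower = set().union(*(k for k in classes if mu(k) >= alpha))
    upper = set().union(*(k for k in classes if mu(k) > 1.0 - alpha))
    return lower, upper

classes = [{1, 2, 3, 4}, {5, 6, 7, 8}]
c = {1, 2, 3, 5}                # mu = 0.75 and 0.25 for the two classes
lower, upper = vp_approximations(classes, c, alpha=0.75)
print(lower, upper)             # the first class enters both approximations
```

At α = 1.0 the lower approximation reduces to the classical one, which for this c would be empty; lowering α to 0.75 accepts the 75%-covered class into the lower approximation.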
Remark 3.5. Observe that the variable precision lower approximation differs from the 0.5-lower approximation by using '≥ α' rather than '> α' (with α = 0.5). However, in order to define the classical (non-probabilistic) lower approximation, one needs α = 1.0. In order to be able to subsume the 0.5-model, with ≥ as in Eq. (12), one has to use α = 0.5 + ε with ε sufficiently small (0 < ε < 1/|dom|). In a similar manner, the 0.5-upper approximation differs from the variable precision upper approximation (here > is used rather than ≥).

Given a variable precision approximation space AS_{VP(α)} = ⟨dom, r, α⟩, any set c ⊆ dom can be partitioned using the parameter α into three disjoint regions. Asymmetric bounds for variable precision rough set models were considered in Katzberg and Ziarko [14] (for more recent versions see [49,50]). Here one introduces two parameters, α and β, for the lower and upper approximations, respectively, where 0.0 ≤ β < α ≤ 1.0. This approach is equivalent to the most general case we consider, which derives probabilistic approximations using a decision-theoretic model [46].
In the decision-theoretic model, the lower approximation collects those objects x with μ_c(x) ≥ α and the upper approximation those with μ_c(x) > β. The difference c^{D⊕}_r \ c^{D+}_r is called the decision-theoretic boundary region of c w.r.t. r.
The distinction between the asymmetric-bounds variable precision approximation model and the decision-theoretic approximation model is that the latter proposes methods for specifying the parameters α and β in a theoretically principled manner using Bayesian decision theory. In this case, the risk of making a wrong choice for particular levels of set inclusion is quantified and reflected in the choice of α and β used in the definitions of the approximation operators. For the interested reader, techniques for determining α and β are considered in [46,47]. In the remainder of the paper, we will assume that these parameters are set appropriately by the users of our PROBLOG program specifications.
The following proposition is well known. It shows that we can focus on the decision-theoretic approximation model without loss of generality: instantiating it with β = 1.0 − α yields the variable precision approximation model (Definition 3.4).
An extension of the variable precision model, called Bayesian rough sets, has been introduced and discussed in [37,50]. The idea behind this approach, as expressed in [37], is that "in some applications, for example in stock market, medical diagnosis etc., the objective is to achieve some certainty prediction improvement rather than trying to produce rules satisfying preset certainty requirements."

Definition 3.8 (Bayesian approximations). Given AS_B = ⟨dom, r, P⟩, called the Bayesian approximation space, and c ⊆ dom, the Bayesian lower and upper approximations of c w.r.t. r are defined by comparing μ_c(x) with the prior probability P(c): c^{B+}_r = {x : μ_c(x) > P(c)} and c^{B⊕}_r = {x : μ_c(x) ≥ P(c)}. Given a Bayesian approximation space AS_B = ⟨dom, r, P⟩, any set c ⊆ dom can be partitioned into three disjoint regions. Before proceeding to a consideration of PROBLOG and how one would use it for reasoning with probabilistic approximations of rough sets, an example of how probabilistic rough sets can be derived from table data is provided.
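The Bayesian criterion can be sketched as follows (a Python illustration with our names; the prior is computed from a uniform distribution over dom): a class is positive evidence for c exactly when knowing the class raises the probability of c above its prior.

```python
# Bayesian rough set regions: compare mu_c per class with the prior P(c).

def bayesian_regions(dom, classes, c):
    prior = len(c) / len(dom)
    mu = lambda k: len(k & c) / len(k)
    positive = set().union(*(k for k in classes if mu(k) > prior))
    negative = set().union(*(k for k in classes if mu(k) < prior))
    boundary = dom - positive - negative
    return positive, boundary, negative

dom = {1, 2, 3, 4, 5, 6, 7, 8}
classes = [{1, 2, 3, 4}, {5, 6, 7, 8}]
c = {1, 2, 3, 5}                          # prior P(c) = 0.5
print(bayesian_regions(dom, classes, c))  # first class improves on the prior
```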
A common use of rough set reasoning is based on table data. Generally, one assumes a universe of individuals U, possibly infinite, but one only has access to a finite subset dom ⊂ U of these individuals. One then associates attribute/value pairs with these individuals, representing input attributes, and one is interested in understanding relationships between sets of individuals defined by these input attributes and sets of individuals defined by target attributes. It is often the case that these sets of individuals cannot be fully characterized in terms of the input attributes and so can be interpreted as rough sets relative to an indiscernibility relation implicit in the table of data. Given a table of data, each row can be viewed as a sample pertaining to an individual. In the following, reference will be made to Table 2, which is also found in Russell and Norvig [31].
Example 3.9. Suppose one is interested in the relationship between the input attributes Hun, Type and the target attribute WillWait. For instance, r_{Hun=no,Type=Burger} = {x3, x7, x9} and r_{Hun=no,Type=Italian} = {}. An equivalence class such as r_{Hun=yes,Type=Thai} can be characterized as a logical rule, where '←' is to be understood as an implication from right to left. The base (target) set of interest will be the set defined by all individuals with attribute WillWait = Yes. This set is denoted WW; WW = {x1, x3, x4, x6, x8, x12}, and a corresponding logical rule can be defined for WW. The interest then is in defining the rough set correlate of WW, consisting of the lower and upper approximations WW+_r and WW⊕_r. Recall that rough membership in a base set c is defined as μ_c(x) = |c ∩ r(x)| / |r(x)|, where the conditional probability provides the probability that an individual, randomly chosen from its equivalence class r(x), belongs to the rough set derived from c. A logical rule, (22), defines this membership relation for WW, and Table 3 provides the required conditional probabilities, derived from rule (22) and the sample data in Table 2. Further logical rules can then be used to determine membership in WW+_r and WW⊕_r, respectively, and in the three regions. For the purpose of basic reasoning with probabilistic rough sets, Example 3.9 exhibits most of the required functionality. ww_r(x), defined in (22), is a probabilistic predicate where the distribution is derived from the equivalence classes of r. Associated with ww_r(x) are a number of other properties defined using the probability model. The idea will be to embed probabilistic rough sets and their properties into PROBLOG programs. One can then combine probabilistic rough relations/properties such as ww_r(x) with standard probabilistic and non-probabilistic relations in defining probabilistic knowledge bases consisting of tables and relational rules. Before proceeding to more complex examples that do this, a short summary of PROBLOG is provided in the next section.
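Using only the figures quoted in the example (the class r_{Hun=no,Type=Burger} = {x3, x7, x9} and the target set WW; the full Table 2 is in Russell and Norvig), the membership computation can be sketched as:

```python
# Rough membership of the Hun=no, Type=Burger class in the target WW.

WW = {"x1", "x3", "x4", "x6", "x8", "x12"}
cls = {"x3", "x7", "x9"}          # r_{Hun=no, Type=Burger}

mu = len(cls & WW) / len(cls)
print(round(mu, 3))               # 0.333: only x3 of the three will wait
# The class lies in the upper approximation of WW (mu > 0) but not in
# the lower approximation (mu < 1).
print(mu > 0, mu == 1.0)          # True False
```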
PROBLOG is a probabilistic extension of PROLOG, one of the more popular logic programming languages due to its ease of use and its large user community. PROBLOG will be used in this paper as a basis for specifying the diverse collection of probabilistic rough set models considered here.
A probabilistic clause is syntactic sugar for a set of conditionalized probabilistic facts. For example, the rule in Line 6 encodes a set of probabilistic rules, one for each person. Clause (23) implicitly represents a probabilistic fact, 0.8 :: calls_aux(john), together with a clause. Given a probabilistic knowledge base, one can query it in various ways. In this example, one specifies two conditional probabilistic queries. Here the relation 'evidence(·)' is used to specify evidence facts, while 'query(·)' is used to specify the question to be asked. Note that 'query(·)' can be used without any 'evidence(·)'. PROBLOG is based on the distribution semantics [5,27,34]. In its very spirit, PROBLOG is like restricted PROLOG, except that one may additionally specify particular facts (or consequences of rules) as being probabilistic. This gives rise to many possible worlds. In each world, probabilistic literals are chosen to be true or false, and the probability of this choice contributes to the probability of the world. That is, in each world one obtains a pure PROLOG program, and the worlds differ in their sets of facts. The distribution semantics calculates probabilities using the resulting probability distribution over worlds.
The following summary of the semantics follows [15]. Let F be the maximal set of ground probabilistic facts that can be added to KB. The random variables corresponding to facts in F are mutually independent, so the program P defines a probability distribution over ground logic programs F′ ⊆ F:

P(F′ | P) = ∏_{f ∈ F′} p_f · ∏_{f ∈ F \ F′} (1 − p_f).    (25)

The probability that a ground query q is true given a program P is

P(q | P) = Σ_{F′ ⊆ F} P(q | F′) · P(F′ | P),    (26)

where P(q | F′) is 1.0 if there is a substitution θ such that F′ ∪ P ⊨ qθ, and 0.0 otherwise, and P(F′ | P) is as defined in Eq. (25). P(q | P) is called the success probability of q.
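The success probability of Eq. (26) can be computed by brute-force enumeration of worlds. The following Python sketch (illustrative only; the fact names and the alarm rule are ours, not from the paper) does exactly that for a two-fact program:

```python
# Distribution semantics by enumeration: each subset F' of the
# probabilistic facts is a world with probability prod(p_f) * prod(1 - p_f);
# the success probability of q sums the probabilities of worlds entailing q.
from itertools import product

facts = {"burglary": 0.1, "earthquake": 0.2}

def alarm(world):
    # deterministic rules: alarm :- burglary.  alarm :- earthquake.
    return world["burglary"] or world["earthquake"]

def success_probability(query):
    names = list(facts)
    total = 0.0
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        p = 1.0
        for f, chosen in world.items():
            p *= facts[f] if chosen else 1.0 - facts[f]
        if query(world):              # P(q | F') is 1 or 0, as in Eq. (26)
            total += p
    return total

print(success_probability(alarm))     # ≈ 0.28, i.e. 1 - 0.9 * 0.8
```

This direct enumeration is exponential in the number of facts, which is exactly the cost that PROBLOG's solver avoids in practice, as discussed in Remark 4.1.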
Remark 4.1. The complexity of computing probabilities of queries directly using (26) is exponential wrt the number of distinct probabilistic predicates occurring in the program. However, as indicated in [5], ''as fixing the set of facts yields an ordinary logic program, the entailment check can use any reasoning technique for such programs." Such techniques are typically much more efficient, but still exponential in the worst case. Better performance may be achieved by statistical sampling or by using bounds, where the set of possible worlds in which the given query is true is approximated by a subset and a superset [5]. ∎

PROBLOG extends classical PROLOG with stratified negation and uses the symbol \+ for negation. As mentioned before, the main difference between PROBLOG and PROLOG is that PROBLOG supports probabilistic facts. The following provides an incomplete list of those parts of PROBLOG (and PROLOG) used in the paper, in addition to a number of other useful features. For additional features the reader is referred to the PROBLOG on-line tutorial [29] and related literature, including [4,5], in addition to other references there.
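The exhaustive computation described in Remark 4.1 can be sketched directly: enumerate every possible world, i.e. every truth assignment to the probabilistic facts, and sum the probabilities of the worlds in which the query succeeds. The facts and the rule below are illustrative examples of ours, not taken from the paper.

```python
from itertools import product

# Probabilistic facts with their probabilities (illustrative).
prob_facts = {
    "burglary": 0.1,
    "earthquake": 0.2,
}

def alarm(world):
    # Deterministic rule: alarm :- burglary ; earthquake.
    return world["burglary"] or world["earthquake"]

def success_probability(query):
    """Eq. (26): sum the probabilities of worlds entailing the query."""
    total = 0.0
    names = list(prob_facts)
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        # P(F'|P): product of p for chosen facts, (1 - p) for omitted ones.
        p_world = 1.0
        for name in names:
            p = prob_facts[name]
            p_world *= p if world[name] else 1.0 - p
        if query(world):
            total += p_world
    return total

p = success_probability(alarm)   # 1 - 0.9*0.8, i.e. approximately 0.28
```

The loop visits 2^n worlds for n probabilistic facts, which makes the exponential complexity noted in the remark concrete.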
To specify probabilistic predicates, the operator '::' is used. If a is a tuple of constants, p is a relation symbol and r ∈ [0.0, 1.0] is a real number, then the specification r :: p(a). states that p(a) is true with probability r and false with probability (1 − r). Probabilistic predicates can be used to specify probabilistic facts and heads of probabilistic clauses. Queries are used to compute probabilities. The probability of p(a) is returned as the result of the query query(p(a)). Evidence is used to specify any observations on which one wants to condition a probability. Each piece of evidence is specified using a binary relation evidence, where the first argument indicates a fact and the second one indicates a classical truth value: evidence(p(a), true) and evidence(q(b), false) indicate that p(a) is true (respectively, that q(b) is false). Evidence facts are useful in computing (conditional) probabilities by restricting worlds to those where (in this case) p(a) is true (respectively, q(b) is false). Queries can also be used inside the bodies of rules by using the subquery predicate, which evaluates p(a) and returns P as the conditional probability of p(a) given the evidence listed in ListOfEvidence. ListOfEvidence may also be empty. Annotated disjunctions support choices. If r_1, ..., r_k ∈ [0.0, 1.0] are such that 0.0 ≤ r_1 + ... + r_k ≤ 1.0, a_1, ..., a_k are tuples of constants and p is a relation symbol, then an annotated disjunction is an expression of the form r_1 :: p(a_1); ...; r_k :: p(a_k). It states that at most one of the listed literals is selected. If r_1 + ... + r_k = 1.0 then exactly one of the listed literals is selected; otherwise there is an implicit null choice indicating that none of the options is taken, with probability 1 − (r_1 + ... + r_k). An annotated disjunction may occur as a fact or as the head of a rule. Negation is expressed by \+, where '\' stands for 'not' and '+' stands for 'provable'. That is, \+ represents negation as failure. For backward compatibility the connective 'not' can be used instead of \+. A number of standard PROLOG commands, available in PROBLOG, will also be used:

The structure of programs used in the paper
In [9] the authors developed a generic program structure for Answer Set programs which could be used as a tool for implementing (non-probabilistic) rough sets and their generalizations. Program 2, adapted from [9], shows a corresponding structure for PROBLOG programs that takes into account probabilistic rough sets and their generalizations. For the sake of readability, a number of program structuring keywords, not belonging to the syntax of PROBLOG, will be used. These are written in boldface font and should be treated as comments:

constants - specify global constants using one-argument relations, like alpha(0.9);

crisp/probabilistic set(s) - specify the domain 'dom' used, in addition to explicitly specifying or implicitly generating crisp or probabilistic sets. Specifications may use crisp and/or probabilistic predicates. For simplicity we assume that all constants occurring in a program have to belong to its domain 'dom';

crisp/probabilistic base relation(s) - specify explicitly or generate base relations. The relations may be crisp and/or probabilistic. The properties that may be considered (T, B, 4, 5) are listed in Lines 11-13 and should be selected or commented out in order to reflect the desired properties of relations. The property T is required in all rough set generalizations, as it corresponds to the requirement that any object is indistinguishable from/similar to itself;

approximations - specify or generate lower and/or upper approximations for concepts and relations. Additionally define other approximation operators;

knowledge base - specify a background knowledge base using PROBLOG rules and facts.
Note that we assume that for each crisp set c_i there is a base relation r_i(...) used for approximating c_i. For example, a base relation for similarity among company clients is different from one for similarity among items being offered by the company. For the same reason we allow for many domains.
Program 2: The structure of PROBLOG programs used in the paper.
Program 2 provides the basic structure of a program using informal syntax. In Line 1, the standard PROLOG library 'lists' is loaded. It provides built-in operations on lists.
As in [9], properties T, B, 4, 5, formulated in Lines 11-14, reflect the first-order conditions shown in Table 1. In Lines 2-3, the constants α and β are each bound to 0.5. Symmetric bounds are used for this example, so β = 1 − α.

Some examples
Lines 4-5 specify two domains: one for the individuals of Table 2 and one for restaurants. The rough membership relation computes |c ∩ r(X)| / |r(X)|, the probability that a randomly chosen individual in X's equivalence class r(X) belongs to the target set. For the sake of clarity, we denote this relation by rmb, since mu is reserved for a generic definition of μ, which will be provided in Program 6.
Lines 25-27 specify the generic region relations for a rough set, taking its crisp set correlate C as an input parameter. Line 28, in the knowledge base, specifies the probabilistic clause that defines the probability that an individual is in the rough set wwR(X), in terms of rmb(X, ww, P). Lines 29-30 define the instantiations of the generic relations lower(X, C) and upper(X, C) for the crisp relation ww. Lines 31-33 define instantiations of the generic relations pos(X, C), bnd(X, C) and neg(X, C), representing the positive, boundary and negative regions for ww, respectively. Lines 34-36 specify probabilistic clauses for the relation willLike(P, R), which provides the probability that a person P will like a restaurant R, depending on whether P is hungry and on the rating of R.
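The rough membership computation behind rmb, and the region relations built on top of it, can be sketched in a few lines. The domain, equivalence classes, target set and thresholds below are illustrative, not the paper's data.

```python
# mu_{c,r}(x) = |c ∩ r(x)| / |r(x)|, and the pos/bnd/neg regions
# obtained by thresholding it with alpha and beta (illustrative data).

eq_class = {            # x -> its equivalence class r(x)
    "x1": {"x1", "x2"},
    "x2": {"x1", "x2"},
    "x3": {"x3"},
}
ww = {"x1", "x3"}       # crisp target set

def rmb(x, c):
    """Rough membership of x in c wrt the equivalence classes above."""
    cls = eq_class[x]
    return len(cls & c) / len(cls)

def region(x, c, alpha, beta):
    """Three-way split: positive, negative, or boundary region."""
    p = rmb(x, c)
    if p >= alpha:
        return "pos"    # in the lower approximation
    if p <= beta:
        return "neg"
    return "bnd"
```

For instance, with alpha = 0.8 and beta = 0.2, x1 (membership 0.5) lands in the boundary region while x3 (membership 1.0) lands in the positive region.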

Remark 6.2.
It is important to emphasize the use of PROLOG meta-predicates in the definition of the rough membership relation rmb(X, C, P), which takes a relation C as an input parameter and uses the meta-predicates findall/3 and call/2. Since the main approximation relations, lower(X, C), upper(X, C) and bnd(X, C), are all defined in terms of rmb(X, C, P), this provides an elegant, compact, higher-order relational technique for dealing with multiple rough set specifications through simple instantiation. In our case, the crisp set of interest, C = ww, is used throughout. Later in the paper, even more use will be made of such meta-predicates and higher-order programming techniques. ∎

Given this specification, one can now combine crisp, probabilistic and rough probabilistic relations in additional rules to reason about individuals and their restaurant preferences. Here are some examples. The relation willmeet(X, Y, R) is defined in terms of a crisp relation type. The query query(willmeet(jim,Y,thai)) will return the information that jim will meet individuals x_2, x_4 and x_8 with probability 0.667, and will meet individual x_11 with probability 0.0. This makes sense, since the probability of jim meeting someone at a Thai restaurant is restricted to those individuals willing to wait to go to a Thai restaurant, where waiting is determined by the probabilistic rough relation wwR(Y). The next rule uses the subquery relation and conditionalizes the conclusion on the probability that Y's rough membership in wwR will be greater than 0.7.
In this case, the query is query(willmeet(jim,X,R)). Now suppose jim wants to meet a client x_12 at a Chinese restaurant. What choice of Chinese restaurant would result in the highest probability of them meeting, given the behavior of x_12? The query query(willmeet(jim,x12,R,chinese)) would result in the choices willmeet(jim,x12,r12,chinese) and willmeet(jim,x12,r14,chinese), with probabilities 0.6 and 0.25, respectively. Consequently, jim should choose restaurant r12. Note that these probabilities do not sum up to 1.0, since the probabilistic relation willLike(P, R) is involved.

Decision rules
A common application of rough set theory is rule induction via the generation of reducts. Rather than delve into rule induction in the general case, this section shows how one can define decision rules for a concept c, based on the three regions POS(c), NEG(c), and BND(c). Yao [45] considers a semantically sound means of constructing three-way decision rules for a concept c based on these regions. Recall from Example 6.1 that one constructed a variable precision probabilistic approximation space A_VP(α) = ⟨dom, r, α⟩, where r was derived from Table 2. The table can be described as an information system, which is a formal counterpart of a table, where dom is a finite nonempty set of objects, At is a set of attributes, V_a are the value domains for each attribute in At, and I_a : dom → V_a is an information function which maps an object to its attribute value, for each attribute a. The indiscernibility relation r in A_VP(α) was defined using the subset of attributes A = {Hun, Type} ⊆ At, where two objects are in the same equivalence class if they share the same values for the attributes in A.
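The indiscernibility relation of such an information system can be sketched directly: two objects are equivalent iff they agree on all attributes in A. The objects and attribute values below are illustrative, in the spirit of the Hun and Type attributes of Table 2.

```python
# Partition the objects of an information system by their values on a
# chosen attribute subset A, yielding the quotient set dom/r.

rows = {
    "x1": {"Hun": "yes", "Type": "french"},
    "x2": {"Hun": "yes", "Type": "thai"},
    "x3": {"Hun": "yes", "Type": "thai"},
    "x4": {"Hun": "no",  "Type": "french"},
}

def quotient_set(rows, attrs):
    """Group objects sharing the same values on attrs (dom/r)."""
    classes = {}
    for obj, vals in rows.items():
        key = tuple(vals[a] for a in attrs)
        classes.setdefault(key, set()).add(obj)
    return classes

parts = quotient_set(rows, ["Hun", "Type"])
```

Each key of `parts` is a conjunction of attribute/value pairs, i.e. exactly the logical description Des_r of the corresponding equivalence class discussed below.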
For each equivalence class in the quotient set dom/r associated with A_VP(α), one can provide a logical definition in terms of conjunctions of attribute/value pairs, for instance Des_r(Hun = yes, Type = French). A more fine-grained approach introduces the notion of relative certainty. There are many ways to do this, but for this discussion the following will be used, where P, N, and B stand for the positive, negative, and boundary region, respectively, and cf is a function that returns the certainty factor for a decision rule; cf can be understood as a confidence value for concluding the right-hand side of the rule. This results in three types of decision rules. In the first two cases, cf = 1.0 and cf = 0.0, respectively, so with absolute certainty one can conclude that for y ∈ [x], y ∈ c and y ∉ c, respectively. For the third rule, 0.0 < cf < 1.0, so for any y ∈ [x], one cannot say anything with certainty.
In the probabilistic case, the certainty factor for all three rule types would be in the interval 0.0 ≤ cf ≤ 1.0, so the question then becomes which rule type should be applied in which situation. The following approach (one of many) takes additional probabilistic information into account in determining which rule to use. The following are some instantiations of these decision rules for Example 3.9:⁵

⁵ For clarity, we write cf > α rather than cf ≥ α⁺, etc. (see Remark 3.5).
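The selection of a rule type by its certainty factor can be sketched as a simple three-way classifier: P rules when cf clears the upper threshold, N rules when it falls below the lower one, and B (abstain) in between. The thresholds below are illustrative defaults, not values fixed by the paper.

```python
# Three-way decision rule typing by certainty factor, following the
# P/N/B scheme above (illustrative thresholds).

def rule_type(cf, alpha=2/3, beta=1/3):
    """Classify a decision rule by its certainty factor cf."""
    if cf >= alpha:
        return "P"   # conclude y in c, with confidence cf
    if cf <= beta:
        return "N"   # conclude y not in c, with confidence 1 - cf
    return "B"       # abstain: the precondition lies in the boundary region
```

With these defaults, a rule with cf = 1.0 is a P rule, cf = 1/3 yields an N rule (correct two times out of three, as in the example below), and cf = 0.5 yields a B rule that should not be used.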

P. Doherty and A. Szałas
Information Sciences 593 (2022) 546-576

Des_r(Hun = yes, Type = French): The first rule is always safe to use, since cf = 1.0, and it will always be correct when used. The second rule is relatively safe since 1.0 − 1/3 = 2/3, so statistically it will be correct two times out of three. Recall that an N rule asserts that the conclusion is not true. The last rule should not be used due to lack of information: its precondition is in the boundary region of the concept WillWait.
These decision rules can be encoded in PROBLOG and then used to reason about the probabilistic rough set WW.

Generalization toward tolerance spaces
There are many commonsense scenarios where viewing similarity or indiscernibility between individuals as an equivalence relation does not apply, in particular with regard to transitivity. For example [23], in Fechner's weight-lifting sensitivity example, one is given a set of small weights x, y and z, weighing 10, 11 and 12 grams, respectively.
Let s be a tolerance relation on these weights. In other words, weights differing by 1 gram are indistinguishable from one another when held in the palms of both hands, while weights differing by 2 or more grams are distinguishable. Another example is resemblance [39]: just because Jim resembles Fred and Fred resembles Frank does not imply that Jim resembles Frank. So far, the approaches considered have focused on approximation spaces, where the base relation r is an equivalence relation. The generalization to tolerance spaces removes the constraint of transitivity placed on the base relation r, as considered in Section 2.2. The paper [36] provides a detailed discussion of theory and choices when using tolerance spaces in the rough set context.
Note that, given a tolerance space TS, the family of equivalence classes generated from r, which partitions dom in approximation spaces, is generalized to a family of neighborhoods generated from s. The set {n_s(x) : x ∈ dom} is a covering of dom. Generally, to derive s, one assumes an application-dependent allowed tolerance between individuals x, y in order to determine neighborhoods. For example, one may consider s(x, y) iff |φ(x) − φ(y)| ≤ ε, where φ is a measurement function associated with the domain of interest. One can assume a tolerance for each individual x_i, or a common tolerance ε for the whole domain dom. When appropriate, the notation TS_s = ⟨dom, s, φ, ε⟩ can be used to make the measurement function φ and the tolerance constant ε explicit. Given a neighborhood operator n_s, one can also derive its tolerance relation s: assuming a measurement function φ and a tolerance constant ε, for all x, y ∈ dom, s(x, y) iff y ∈ n_s(x). The difference c⊕_s \ c⁺_s is called the tolerance-based boundary region of c wrt s.
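The derivation of a tolerance relation from a measurement function and tolerance constant can be sketched on Fechner's weights above; the helper names are our own.

```python
# s(x, y) iff |phi(x) - phi(y)| <= eps, with the 10/11/12 gram weights
# from Fechner's example and a 1 gram tolerance.

phi = {"x": 10, "y": 11, "z": 12}   # measurement function (grams)
EPS = 1                              # common tolerance constant

def s(a, b, eps=EPS):
    """Tolerance relation: indistinguishable within eps."""
    return abs(phi[a] - phi[b]) <= eps

def neighborhood(a, eps=EPS):
    """n_s(a): all individuals within tolerance of a."""
    return {b for b in phi if s(a, b, eps)}

# s is reflexive and symmetric but not transitive:
# s(x, y) and s(y, z) hold, yet s(x, z) fails.
```

The failure of transitivity here is exactly why the neighborhoods n_s(x) form a covering rather than a partition.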
It is then relatively straightforward to generalize tolerance approximations to tolerance-based probabilistic approximations, with one qualification. As pointed out, since an individual can belong to more than one neighborhood in a given tolerance space, this has to be taken into account when defining what tolerance-based probabilistic approximations mean semantically.
Recall that the intuition behind the probabilistic rough set methods encountered so far is that the degree of overlap of an individual's neighborhood with a target set c determines whether that individual is in the lower or upper approximation of c. Conceptually, the focus is on neighborhoods in the boundary region of c. If a neighborhood N overlaps with degree greater than α, then it is defined as being in the lower approximation. Similarly, if a neighborhood N overlaps with degree greater than or equal to β, then it is defined as being in the upper approximation. Now suppose an individual belongs to two neighborhoods, N_1 and N_2. For N_1 the overlap is greater than α, but for N_2 the overlap is less than β. In this case, the individual would be defined as being in both the positive and the negative region of the target set c, which does not make sense and is undesirable.
In order to remedy this situation, the definitions of tolerance-based probabilistic approximation operators have to be constrained to rule out such situations. Let us focus on the definition of the lower approximation, since similar arguments apply to the upper approximation. Rather than defining membership in the lower approximation c⁺_s as {x | μ_{c,s}(x) ≥ α}, an additional constraint is added. By the symmetry of tolerance relations, x ∈ n_s(y) is equivalent to y ∈ n_s(x). Therefore, the additional constraint (40) states that for an individual x to be in the lower approximation of c, the degree of overlap of c with every neighborhood of which x is a member must be greater than α. A similar argument applies to the definition of the upper approximation. This additional constraint ensures that an individual can belong to only one region of a rough set.
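The constrained lower-approximation test just described can be sketched as follows: x qualifies only if the overlap of c with every neighborhood containing x clears the threshold. The neighborhoods and target set are illustrative.

```python
# Constrained tolerance-based lower approximation: x is accepted only if
# ALL neighborhoods containing x overlap c with degree >= alpha.

neighborhoods = {            # x -> n_s(x); neighborhoods may overlap
    "a": {"a", "b"},
    "b": {"a", "b", "c"},
    "c": {"b", "c"},
}

def overlap(nbhd, c):
    """Degree of overlap of a neighborhood with the target set c."""
    return len(nbhd & c) / len(nbhd)

def in_lower(x, c, alpha):
    # By symmetry, the neighborhoods containing x are n_s(y) for y in n_s(x).
    return all(overlap(neighborhoods[y], c) >= alpha
               for y in neighborhoods[x])
```

For c = {a, b} and alpha = 0.6, the individual a passes (every neighborhood containing it overlaps c sufficiently), while c fails because its own neighborhood overlaps c with degree only 0.5.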
The difference c^{D⊕}_s \ c^{D+}_s is called the tolerance-based probabilistic boundary region of c wrt s.

Some generalizations of probabilistic rough sets
In Section 7, the generalization of the equivalence relation r to a tolerance relation s was considered, and it was shown that defining tolerance-based probabilistic approximation operators is relatively straightforward. In essence, this approach works for any binary relation t ⊆ U × U that generates a covering of U. Since in our paper t is defined to satisfy at least reflexivity (T), it is serial, which guarantees that for any set c, c^{D+}_r ⊆ c^{D⊕}_r. T also guarantees that the relation t generates a covering of U: ∀y∃x t(x, y).
So far, similarity or tolerance relations t have been defined qualitatively. In the following generalization, this restriction is relaxed. Assume a binary tolerance relation s based on measuring the quantitative resemblance between two individual images in terms of probabilities; Table 5 provides an instance. The PROBLOG program shown as Program 5 shows how such tables can be generated in a straightforward manner.
Program 5: The PROBLOG program which can be used to compute probabilities in Table 5.
Given this technique, one can reason about probabilistic similarity or tolerance relations t using PROBLOG. Given that neighborhoods are defined in terms of t, Definition 7.1 also needs to be generalized, in the following manner.
Let us now show that μ_{c,t,γ}(x) can be understood as a probability distribution.
The following definition provides a general and abstract means of considering many probabilistic rough set methods and will serve as the basis for specifying such methods in PROBLOG.

Definition 8.4 (Generalized probabilistic approximation space and operators). Let 0.0 ≤ β < α ≤ 1.0 and 0.0 ≤ γ ≤ 1.0. Then GAS = ⟨dom, t, α, β, γ⟩ is a generalized probabilistic approximation space, where t ⊆ dom × dom is a (probabilistic) base relation constrained by T together with a subset of the properties {B, 4, 5}, and n_{t,γ}(x) are (probabilistic) neighborhoods derived using γ. Additionally, given a set c ⊆ dom, the generalized lower and upper approximation(s) wrt α, β, γ and t are defined by: The difference c^{G⊕}_{t,γ} \ c^{G+}_{t,γ} is called the generalized boundary region of c wrt t.

PROBLOG offers the opportunity to define second-order relations through the use of meta-predicates such as call and apply, in addition to set-predicates such as findall, all of which can take relations as arguments. This expressivity provides a compact and elegant means for specifying and reasoning about generalized probabilistic approximation spaces and operators as defined in Definition 8.4. This section provides an overview of the library of meta-predicates constructed for such specifications.
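The interplay of the three thresholds in Definition 8.4 can be sketched as follows: γ turns a probabilistic base relation into neighborhoods, while α and β bound the overlap of a neighborhood with the target set. For brevity this sketch omits the membership constraint of Section 7; the relation probabilities are illustrative.

```python
# Gamma-thresholded neighborhoods n_{t,gamma}(x) = {y | P(t(x,y)) >= gamma},
# with alpha/beta overlap thresholds for the lower/upper approximations.

t_prob = {                 # P(t(x, y)); assumed symmetric, with t(x, x) = 1.0
    ("a", "a"): 1.0, ("b", "b"): 1.0, ("c", "c"): 1.0,
    ("a", "b"): 0.8, ("b", "a"): 0.8,
    ("b", "c"): 0.4, ("c", "b"): 0.4,
    ("a", "c"): 0.1, ("c", "a"): 0.1,
}
DOM = {"a", "b", "c"}

def neighborhood(x, gamma):
    return {y for y in DOM if t_prob[(x, y)] >= gamma}

def lower(c, alpha, gamma):
    return {x for x in DOM
            if len(neighborhood(x, gamma) & c) / len(neighborhood(x, gamma)) >= alpha}

def upper(c, beta, gamma):
    return {x for x in DOM
            if len(neighborhood(x, gamma) & c) / len(neighborhood(x, gamma)) > beta}
```

With gamma = 0.5, the pair (b, c) falls below the neighborhood threshold, so c is isolated and ends up outside both approximations of the target set {a, b}.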
Program 6: Generic definitions of rough membership, approximations and regions.
Program 6 provides the relations used in specifying generalized approximation spaces and operators. The PROBLOG program in A.2 provides executable code based on this specification. The conjunction 'subquery(call(Upsilon, X, Y), P), call(Gamma, G), P >= G', specified in the subquery in Line 3 of Program 6, first determines P as the probability of t(X, Y) and then makes sure that this probability is greater than or equal to γ, as required in Definition 8.1.
We have the following lemma. Given these relations for each c_i, one can instantiate each of the generic relations provided above to generate the proper generalized approximation spaces for each c_i. These instantiations can then be used to construct PROBLOG theories that combine the use of many different types of relations in heterogeneous rules, as discussed in Section 9.
Remark 8.7. It is also worth emphasizing that Bayesian rough sets are covered by our implementation, in the sense that the Alpha and Beta used in Program 6 should be set to the probability 'P(C)', where 'C' is the target set specified as an argument (with the adjustment discussed in Remark 3.5 and shown in Table 6). ∎

Case study: recommendation
The following case study will use a number of the new generalized features proposed in Section 8. In particular, for the concept Destinations used in the case study, a tolerance space will be used and its base relation will be defined as a probabilistic relation. Additionally, for the concept Customers, a classical approximation space will be used and its base relation will also be defined as a probabilistic relation. In the case study, a toy recommendation system will be specified that provides advice to travel customers as to where a good place to travel might be, based on a customer's similarity with other customers and their likes and dislikes.
Let's begin with the customer domain. Let's assume that partial data exist on pairwise similarity between individuals in the customer domain, based on previous surveys and customer history, where each customer fits into a customer segment based on such features as tourists traveling with families, single tourists, health tourists, etc. For instance:

0.4 :: s_c(eve, jack). 0.8 :: s_c(eve, kate). 0.7 :: s_c(eve, mark). 0.5 :: s_c(jack, kate). 0.9 :: s_c(jack, mark). 0.6 :: s_c(kate, mark).    (47)

The model will also assume that customer segmentation creates a partition. Consequently, the base relation for the Customer concept will be an equivalence relation. This creates a probabilistic approximation space AS = ⟨dom_c, r_c, P⟩. In order to take advantage of the generic relations defined in Program 6, the approximation space AS will be recast as a generalized probabilistic approximation space. The generalized probabilistic approximation space for customers is GAS_c = ⟨dom_c, r_c, α_c, β_c, γ_c⟩, where α_c = 0.9, β_c = 0.2 and γ_c = 0.5, and r_c has the properties T (reflexivity), B (symmetry) and 4 (transitivity).
Using the customer data in Eq. (47) and Program 7 below, the closure of r_c results in the probabilities shown in Table 7.
Program 7: The PROBLOG program which can be used to compute probabilities in Table 7. For runnable PROBLOG code see the respective parts of A.2.
In a similar manner, a resemblance relation between destinations can be defined. The destination domain consists of: The base relation s_d will be defined as a tolerance relation, where partial data exist about s_d concerning the pairwise resemblance of destinations to each other (Eq. (50)). The generalized probabilistic approximation space for destinations is given in Eq. (51). Using the destination data in Eq. (50), the closure of s_d results in the probabilities shown in Table 8.
Given any subset of customers in dom c , one can define lower and upper approximations for that subset using GAS c and Definition 8.4.In a similar manner, given any subset of destinations in dom d , one can define lower and upper approximations for that subset using GAS d and Definition 8.4.
Given the probabilities in Table 7 and Table 8, one can express rules for calculating success probabilities of recommendations, which would be part of the knowledge base of a PROBLOG program, e.g.,

Definition 2.1 (Approximations, base relations, boundary regions, approximate sets). Let r be a binary relation on dom and c ⊆ dom. Then the lower approximation c⁺_r and the upper approximation c⊕_r of c wrt r are:

Fig. 2. Relationships among properties of base relations considered in the paper. An arrow P → Q indicates that the requirements P are weaker than the requirements Q.

Definition 3.1 (Rough membership). Given AS_P = ⟨dom, r, P⟩, for a subset c ⊆ dom, the rough membership function with regard to c is defined in terms of: the quotient set of dom, denoted dom/r and defined as {r(x) | x ∈ dom}; and, for each set c ⊆ dom and equivalence class r(x), the probabilities P(c) and P(c | r(x)). Given a membership function μ_{c,r}(x) for a rough set c, one can now define the classical rough set approximation operators for c probabilistically in terms of μ_{c,r}(x).

Definition 3.2 (Probabilistic approximations). Given AS_P = ⟨dom, r, P⟩, and c ⊆ dom, the probabilistic lower and upper approximations of c wrt r are defined by:

Definition 3.6 (Decision-theoretic approximations). Let 0.0 ≤ β < α ≤ 1.0. Given AS_DT(α, β) = ⟨dom, r, P, α, β⟩, and c ⊆ dom, the decision-theoretic lower and upper approximation(s) of c wrt α, β and r are defined by: Given a decision-theoretic approximation space AS_DT(α, β) = ⟨dom, r, α, β⟩, any set c ⊆ dom can be partitioned using the parameters α and β into three disjoint regions (see Fig. ...).

and r is the indiscernibility relation constructed from the combination of attributes Hun and Type. Since the domain of Hun (Yes, No) has cardinality 2 and the domain of Type (French, Thai, Burger, Italian) has cardinality 4, there are 8 equivalence classes uniquely distinguished by the values assigned to the attributes. The eight equivalence classes compose the quotient set dom/r: see

the pairwise resemblance of destinations to each other.
GAS_d = ⟨dom_d, s_d, α_d, β_d, γ_d⟩,    (51) where α_d = 0.8, β_d = 0.1, and γ_d = 0.7, and s_d has the properties T (reflexivity) and B (symmetry).
called an approximate set.

Table 1
Properties of base relations in terms of approximations and first-order correspondences.

Table 2
Restaurant example from Russell and Norvig [31, p. 657], with the attributes: Alt: is there a suitable alternative restaurant nearby? Bar: does it have a bar area? Fri: is today Friday or Saturday? Hun: hungry right now? Pat: people in the restaurant, Price: the price range, Rain: is it raining outside? Res: is a reservation made? Type: the kind of restaurant, Est: host's waiting time estimate (in minutes), WillWait: will wait for a table?

Table 3
Conditional probabilities for equivalence classes.
An instantiation of the program template (Program 2) considered in Section 5.1 is shown in Program 3, where, in addition to Table 2, we use data for the relation rating contained in Table 4. Executable PROBLOG code for this example is shown in A.1. A brief description of the program is now provided, and a number of queries that use this example are then considered to illuminate the use of probabilistic rough sets. Consider the annotated program schema shown as Program 3.
and Table 4. Lines 7-9 specify schemata for the actual table data. Line 11 defines the crisp set ww(X) of interest. Since it is not definable relative to the indiscernibility relation r based on the two attributes hun and type, a rough set correlate wwR(X) will be defined in Line 28, with its lower and upper approximations and regions (Lines 29-33).
Lines 14-17 specify the properties of r(X, Y) and generate the transitive closure of s(X, Y), since we are interested in an equivalence relation among individuals. Line 18 specifies the useful relation r1(X, L), which, given an individual X, returns a list L containing all individuals in X's equivalence class r(X) = [X]. Lines 19-20 specify the generic relations for the lower and upper approximations of the rough set wwR(X) (lower(X, C) and upper(X, C), respectively). The parameter C is a relation for a crisp set; in the example, it is ww.
The information table associated with the part of Table 2 used in Example 3.9 describes the target set of all individuals that are willing to wait. Let [x] ∈ dom/r. In the classical case, if x ∈ POS(WW), it belongs to WW with certainty. If x ∈ NEG(WW), it does not belong to WW with certainty. If x ∈ BND(WW), it cannot be decided with certainty whether or not it belongs to WW. Consequently, one has three types of decision rules.

Example 6.3. In this example, the PROBLOG code required is shown as Program 4. Extending the PROBLOG code in Program 3 with the decision rule encoding is straightforward and compact. In the full code (see A.1), one does not require a rule for each equivalence class. There is one rule each for the positive, negative, and boundary regions, respectively, where the α check is built in.

Definition 7.1 (Tolerance relation, tolerance space, tolerance neighborhood). A tolerance relation s on a set dom is a relation s ⊆ dom × dom that is reflexive and symmetric. A tolerance space is TS = ⟨dom, s⟩. The tolerance neighborhood of x wrt s, denoted n_s(x), is {y | s(x, y)}.

Definition 7.2 (Tolerance-based rough membership). Given TS = ⟨dom, s⟩.

Tolerance-based approximations are straightforward generalizations of rough approximations, where equivalence classes and partitions are replaced by neighborhoods and coverings. It is important to note that an individual x can now be a member of more than one neighborhood. This contrasts with rough approximation spaces, where an individual belongs to one unique equivalence class.

Definition 7.3 (Classical tolerance-based approximations). Given AS_P = ⟨dom, s, P⟩, and c ⊆ dom, the lower and upper approximations of c wrt s are defined by:

Table 5
Image resemblance.

Then P′_{t,γ} satisfies the Kolmogorov probability axioms. Proof. Note that, by axiom T, for every x ∈ dom, n_{t,γ}(x) ≠ ∅, so |n_{t,γ}(x)| ≠ 0.0. Axiom 3: for c_1, c_2 ⊆ dom such that c_1 ∩ c_2 = ∅, and for every x ∈ dom, P′

Table 6
Lemma 8.5. Definition 8.4 subsumes Definitions 3.2, 3.3, 3.4, 3.6, 3.8, 7.3 and 7.4, where each definition can be instantiated by the relevant values for t, α, β, and γ, in addition to choosing the elementary granule types used, as shown in Table 6. ∎ Approximation operators (the adjustment explained in Remark 3.5 also applies to Bayesian approximations).
Lemma 8.6 (Correctness). Assuming that the relation t is at least reflexive, Program 6 correctly implements generalized probabilistic approximation spaces as formalized in Definition 8.4. Assume, for each relation c_i of interest, the following specifications with values α_i, β_i and γ_i, for each c_i.