Incremental approaches for updating approximations in set-valued ordered information systems

Incremental learning is an efficient technique for knowledge discovery in a dynamic database, which enables acquiring additional knowledge from new data without forgetting prior knowledge. Rough set theory has been successfully used in information systems for classification analysis. Set-valued information systems are generalized models of single-valued information systems and can be classified into two categories: disjunctive and conjunctive. Approximations are fundamental concepts of rough set theory, and they need to be updated incrementally when the object set of a set-valued information system varies over time. In this paper, we analyze the updating mechanisms for computing approximations under the variation of the object set. Two incremental algorithms for updating the approximations, in disjunctive and conjunctive set-valued information systems respectively, are proposed. Furthermore, extensive experiments are carried out on several data sets to verify the performance of the proposed algorithms. The results indicate that the incremental approaches significantly outperform the non-incremental approaches, with a dramatic reduction in computational time.


Introduction
Granular Computing (GrC), a concept for information processing based on Zadeh's ''information granularity'', is an umbrella term for theories, methodologies, techniques, and tools that make use of granules in the process of problem solving [1,2]. With the development of artificial intelligence, the theory of GrC has attracted increasing attention from researchers [3-5]. Up to now, GrC has been successfully applied to many branches of artificial intelligence, and its basic notions and principles have appeared in many related fields, such as concept formation [6], data mining [7] and knowledge discovery [8,9].
Rough Set Theory (RST) is a powerful mathematical tool for dealing with inexact, uncertain or vague information [10], and is also known as one of the three primary models of GrC [11]. In recent years, there has been a rapid growth of interest in RST and its applications. It is of fundamental importance to artificial intelligence and cognitive science, especially in the areas of machine learning, decision analysis, expert systems, inductive reasoning and pattern recognition [13-16]. The data acquired for rough set analysis is represented in the form of attribute-value tables, consisting of objects (rows) and attributes (columns), called information systems [17]. In real-life applications, the data in information systems is generated and collected dynamically, and the knowledge discovered by RST needs to be updated accordingly [12]. The incremental technique is an effective method to update knowledge by dealing with the newly added data without re-implementing the original data mining algorithm [18,19]. Many studies have addressed incremental learning techniques under RST. Considering the problem of discretization of continuous attributes in dynamic databases, Dey et al. developed a dynamic "discreduction" method based on RST and notions of statistics, which merges the two tasks of discretization and reduction of attributes into a single seamless process, so as to reduce the computation time by using samples instead of the whole data to discretize the variables [20]. Considering the problem of dynamic attribute reduction, Hu et al. proposed an incremental positive-region reduction algorithm based on elementary sets, which can generate a new positive-region reduct quickly when a new object is added to the decision information system [28]. From the view of information theory, Wang et al.
proposed an incremental attribute reduction algorithm based on three representative entropies by considering changes of data values, which can generate a feasible reduct in a much shorter time; however, the algorithm is only applicable to the case where the data vary one object at a time [21]. Furthermore, Wang et al. developed a dimension-incremental strategy for attribute reduction based on information entropy for data sets with dynamically increasing attributes [22]. Since the core of a decision table is the starting point of many existing attribute reduction algorithms, Yang et al. introduced an incremental algorithm for updating the computation of the core based on the discernibility matrix, which only inserts a new row and column, or deletes one row and updates the corresponding column, when updating the discernibility matrix [29]. Considering the problem of dynamic rule induction, Fan et al. proposed an incremental rule-extraction algorithm (REA) based on RST, which updates rule sets by partly modifying the original rule sets without re-computing them from scratch; the proposed approach is especially useful in a large database, since it does not re-compute the reducts/rules that are not influenced by the incremental data set [23]. Nevertheless, alternative rules that are as preferred as the original desired rules might exist, since the maximum of the strength index is not unique, and the REA may lead to non-complete rules; an incremental alternative rule extraction algorithm (IAREA) was therefore proposed to exclude repetitive rules and to avoid the problem of redundant rules [24]. Zheng et al. developed a rough set and rule tree based incremental algorithm for knowledge acquisition, which is not only obviously quicker than the classic algorithm, but also improves, to a certain degree, the quality of the learned knowledge [25]. Liu et al.
defined a new concept of interesting knowledge based on both accuracy and coverage of the generated rules in the information system, and presented an optimization model using the incremental matrix for generating interesting knowledge when the object set varies over time [26,27].
The main goal of RST is to synthesize approximations of concepts from the acquired data, which is a necessary step for expressing and reducing incomplete and uncertain knowledge based on RST [30-32]. The knowledge hidden in information systems can be discovered and expressed in the form of decision rules according to the lower and upper approximations [36-39]. To resolve the problem of the high computational complexity of computing approximations in dynamic information systems, extensive efforts have been devoted to efficient incremental updating algorithms. Li et al. presented an incremental method for updating approximations in an incomplete information system through the characteristic relation when the attribute set varies over time, which can deal with adding and removing attributes simultaneously [40]. Since the domains of attributes may change in real-life applications, i.e., attribute values may be added to or deleted from the domain, Chen et al. proposed an incremental approach for updating approximations under the coarsening or refining of attribute values in complete and incomplete information systems [35]. Zhang et al. discussed the change of approximations in neighborhood decision systems when the object set evolves over time, and proposed two fast incremental algorithms for updating approximations when multiple objects enter or leave the neighborhood decision table [33]. Li et al.
first introduced a kind of dominance matrix to calculate P-dominating sets and P-dominated sets in the dominance-based rough set approach, and proposed incremental algorithms for updating approximations of the upward and downward unions of decision classes [34]. Instead of considering incremental updating strategies for rough sets, Cheng proposed two incremental methods for fast computation of rough fuzzy approximations, established respectively on a redefined boundary set and on the relation between a fuzzy set and its cut sets [41].
However, to the best of our knowledge, previous studies on incrementally computing approximations have mainly concerned single-valued information systems, and little attention has been paid to set-valued information systems. Set-valued information systems are an important type of data table and a generalized model of single-valued information systems [42]. In many practical decision-making problems, set-valued information systems have very wide applications; they can be used in intelligent decision making and knowledge discovery from information systems with uncertain and set-valued information. In such systems, some of the attribute values of an object may be set-valued, which is often used to characterize incomplete information, i.e., the values of some attributes are unknown or multi-valued. On the other hand, we often encounter scenarios where the ordering properties of the considered attributes play a crucial role in the analysis of information systems. Considering attributes with preference-ordered domains is an important characteristic of multi-attribute decision-making problems in practice. Greco et al. proposed the Dominance-based Rough Set Approach (DRSA) [44,45], whose main innovation is the substitution of the indiscernibility relation by a dominance relation. Furthermore, Qian et al.
established a rough set approach in Set-valued Ordered Information Systems (SOIS) to take into account the ordering properties of attributes in set-valued information systems, and classified SOIS into two categories: disjunctive and conjunctive systems [43]. Since the characteristics of set-valued information systems differ from those of single-valued information systems (e.g., some of the attribute values of an object are set-valued), methods for knowledge acquisition in single-valued information systems cannot be applied directly to set-valued ones. For this reason, this paper discusses incremental methods for updating approximations in dynamic set-valued information systems. In [46], Zhang et al. proposed an incremental method for computing approximations in set-valued information systems under the tolerance relation when the attribute set varies with time. In this paper, we focus on updating knowledge under the variation of the object set in SOIS. Firstly, we discuss the principles of incrementally updating approximations when the objects in the universe change (increase or decrease) dynamically in conjunctive/disjunctive SOIS. Then two incremental updating algorithms are proposed based on these principles. Finally, the performance of the two incremental algorithms is evaluated on a variety of data sets.
The remainder of the paper is organized as follows. In Section 2, some basic concepts of RST in SOIS are introduced. The principles, with illustrative examples, for incrementally updating approximations under the variation of the object set are presented in Section 3. In Section 4, we propose the incremental algorithms for computing approximations based on the updating principles. Performance evaluations are illustrated in Section 5. The paper ends with conclusions and further research topics in Section 6.

Preliminaries
For convenience, some basic concepts of rough sets and SOIS are reviewed in this section [42,43].
A set-valued information system is an ordered quadruple S = (U, C ∪ {d}, V, f), where U = {x_1, x_2, …, x_n} is a non-empty finite set of objects, called the universe; C is a non-empty finite set of condition attributes, and d is a decision attribute with C ∩ {d} = ∅; V is the domain of all attributes, where V_C is the domain of the condition attributes and V_d is the domain of the decision attribute; f is a mapping from U × (C ∪ {d}) to V such that f : U × C → 2^{V_C} is a set-valued mapping and f : U × {d} → V_d is a single-valued mapping.
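The quadruple above can be represented directly with ordinary data structures. The sketch below is illustrative only (object names, attribute names, and values are all made up); it shows the key distinction that condition attributes are set-valued while the decision attribute is single-valued.

```python
# A minimal sketch of a set-valued information system S = (U, C ∪ {d}, V, f);
# all object and attribute names here are illustrative.

U = ["x1", "x2", "x3"]            # universe of objects
C = ["a1", "a2"]                  # condition attributes

# f restricted to condition attributes is set-valued ...
f_cond = {
    ("x1", "a1"): {1},    ("x1", "a2"): {2, 3},
    ("x2", "a1"): {1, 2}, ("x2", "a2"): {3},
    ("x3", "a1"): {2},    ("x3", "a2"): {1, 2, 3},
}

# ... while f restricted to the decision attribute d is single-valued.
f_dec = {"x1": 1, "x2": 2, "x3": 2}

# The definition requires every condition value to be a subset of V_C.
V_C = {1, 2, 3}
assert all(v <= V_C for v in f_cond.values())
```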
In an information system, if the domain (scale) of a condition attribute is ordered according to a decreasing or increasing preference, then the attribute is a criterion.
Definition 1. A set-valued information system S = (U, C ∪ {d}, V, f) is called a SOIS if all condition attributes are criteria.
In real problems, many ways to present the semantic interpretation of set-valued information systems have been provided [47-50]. Qian et al. summarized two types of set-valued information systems with two kinds of semantics, known as conjunctive (for all x ∈ U and c ∈ C, f(x, c) is interpreted conjunctively) and disjunctive (for all x ∈ U and c ∈ C, f(x, c) is interpreted disjunctively) set-valued information systems. By introducing the following two dominance relations into these types of set-valued information systems, SOIS can also be classified into two categories: conjunctive and disjunctive SOIS [43].
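The two semantics can be contrasted with a rough sketch. The conjunctive reading below compares value sets by inclusion; for the disjunctive reading we assume numeric domains with increasing preference and compare the best attainable value. Both readings are simplifying assumptions for illustration; the precise dominance relations are given in [43].

```python
def dominates_conj(f, x, y, attrs):
    """Conjunctive sketch: x dominates y if, on every condition attribute,
    x's value set includes y's (`>=` is the superset operator on sets)."""
    return all(f[(x, a)] >= f[(y, a)] for a in attrs)

def dominates_disj(f, x, y, attrs):
    """Disjunctive sketch (assumed numeric, increasing-preference domains):
    x dominates y if x's best value is at least y's on every attribute."""
    return all(max(f[(x, a)]) >= max(f[(y, a)]) for a in attrs)

# toy data: one condition attribute "a"
f = {("x1", "a"): {1}, ("x2", "a"): {1, 2}}
assert dominates_conj(f, "x2", "x1", ["a"])       # {1,2} ⊇ {1}
assert not dominates_conj(f, "x1", "x2", ["a"])
assert dominates_disj(f, "x2", "x1", ["a"])       # max 2 ≥ max 1
```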
For convenience, we denote the dominance relation with respect to A ⊆ C as R^≥_A, and the A-dominating set of x as [x]^≥_A = {y ∈ U : (y, x) ∈ R^≥_A}. The lower and upper approximations of the upward union D^≥_i with respect to A are defined as R_A(D^≥_i) = {x ∈ U : [x]^≥_A ⊆ D^≥_i} and R̄_A(D^≥_i) = {x ∈ U : [x]^≥_A ∩ D^≥_i ≠ ∅}. Analogously, for all D^≤_i (1 ≤ i ≤ r), the lower and upper approximations of D^≤_i are defined with the A-dominated sets [x]^≤_A. Example 4 (Continuation of Example 3). …
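Once the A-dominating sets are available, the approximations reduce to simple set tests. A sketch in that style (the dictionary `dom`, mapping each object to its A-dominating set, and the toy data are assumptions for illustration):

```python
def approximations(U, dom, D):
    """Lower/upper approximation of the upward union D from the
    A-dominating sets dom[x] = [x]^>=_A."""
    lower = {x for x in U if dom[x] <= D}    # [x]^>=_A ⊆ D
    upper = {x for x in U if dom[x] & D}     # [x]^>=_A ∩ D ≠ ∅
    return lower, upper

# toy data: dominating sets are reflexive by construction
U = {"x1", "x2", "x3"}
dom = {"x1": {"x1"}, "x2": {"x1", "x2"}, "x3": {"x1", "x2", "x3"}}
D = {"x1", "x2"}
lower, upper = approximations(U, dom, D)
assert lower == {"x1", "x2"} and upper == {"x1", "x2", "x3"}
```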

Table 1. A conjunctive set-valued ordered information system.
Table 2. A disjunctive set-valued ordered information system.
(2) Analogously, from Table 2, we have … Thus, we obtain the unions of classes as follows: …

Incrementally updating approximations in SOIS when the object set varies with time
With the variation of an information system, the structure of information granules may vary over time, which leads to changes in the knowledge induced by RST. For example, consider a practical information system from the test of foreign-language ability of undergraduates at Shanxi University: the test results can be expressed as a set-valued information system in which all attributes carry inclusion-increasing preferences and the value of each student under each attribute is a set given by an evaluation expert [43]. During the process of evaluating the undergraduates' language ability, however, the data in the information system does not remain stable. Some objects may be inserted into the original information system due to the arrival of new students; on the other hand, some objects will be deleted from the original information system with the graduation of senior students. Then the discovered knowledge may become invalid, or some new implicit information may emerge in the updated information system. Rather than restarting from scratch with a non-incremental (batch) algorithm for each update, it is desirable to develop an efficient incremental algorithm that avoids unnecessary computations by utilizing previous data structures and results.
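Throughout this section, the change of multiple objects is treated as the cumulative change of single objects, so any multi-object update reduces to a driver loop over a single-object update routine. A minimal sketch of that idea (all names illustrative):

```python
def update_many(state, changed_objects, update_one):
    """Apply a single-object updating principle step by step; `state`
    stands for whatever the algorithm maintains (approximations,
    dominating sets, ...)."""
    for x in changed_objects:
        state = update_one(state, x)
    return state

# e.g. deleting several objects from a maintained set of objects
universe = {"x1", "x2", "x3", "x4"}
remaining = update_many(universe, ["x2", "x4"], lambda s, x: s - {x})
assert remaining == {"x1", "x3"}
```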
In this section, we discuss the variation of approximations in a dynamic SOIS when the object set evolves over time while the attribute set remains constant. For convenience, we assume the incremental learning process lasts two periods, from time t to time t + 1. We denote the dynamic SOIS at time t as S = (U, C ∪ {d}, V, f); at time t + 1, with the insertion or deletion of objects, the original SOIS changes into a new one. Here, we only discuss the incremental approaches for updating approximations in the cases where a single object enters or leaves the information system. The change of multiple objects can be seen as the cumulative change of single objects, so the approximations can be updated step by step through the single-object updating principles. The approximations of D^≥_i will change accordingly. We discuss the principles for updating the approximations of D^≥_i in two cases: (1) the deleted object belongs to D^≥_i, i.e., x̄ ∈ D^≥_i;

Principles for incrementally updating approximations with the deletion of a single object
(2) The deleted object does not belong to D^≥_i, i.e., x̄ ∉ D^≥_i.
Case 1: The deleted object x̄ belongs to D^≥_i, i.e., x̄ ∈ D^≥_i.
Example 5 (Continuation of Example 4). (1) For Table 1, according to Proposition 1, we compute the lower approximations of D^≥_2 by deleting x_1 and x_2 from U, respectively.
Assume the object x_1 is deleted from Table 1; then … Assume the object x_2 is deleted from Table 1; then … (2) For Table 2, according to Proposition 1, we compute the lower approximations of D^≥_2 by deleting x_1 and x_3 from U, respectively.
Assume the object x_1 is deleted from Table 2; then … Assume the object x_3 is deleted from Table 2; then …
Proof. According to Definition 4, we have R^≥_A(D^≥_i) = …
(1) For Table 1, according to Proposition 2, we compute the upper approximation of D^≥_2 by deleting x_1 from U. Assume the object x_1 is deleted from Table 1; then … (2) For Table 2, according to Proposition 2, we compute the upper approximation of D^≥_2 by deleting x_1 from U. Assume the object x_1 is deleted from Table 2; then …
Case 2: The deleted object x̄ does not belong to D^≥_i, i.e., x̄ ∉ D^≥_i.
Proof. According to Definition 4, we have ∀x ∈ …, D^≥_i ⊆ … [x]^≥_A. However, it may happen that x̄ ∈ [x]^≥_A, and after the deletion of x̄, … (1) For Table 1, according to Proposition 3, we compute the lower approximation of D^≥_2 by deleting x_3 from U. Assume the object x_3 is deleted from Table 1; then … (2) For Table 2, according to Proposition 3, we compute the upper approximation of D^≥_2 by deleting x_4 from U.
Assume the object x_4 is deleted from Table 2; then …
Proof. According to Definition 4, we have that … Since the deleted object x̄ ∉ D^≥_i, there exists an object … Example 8 (Continuation of Example 4). (1) For Table 1, according to Proposition 4, we compute the lower approximations of D^≥_2 by deleting x_3 and x_5 from U, respectively.
Assume the object x_3 is deleted from Table 1; then … − {x_3} = {x_1, x_2, x_4, x_6}. Assume the object x_5 is deleted from Table 1; then … (2) For Table 2, according to Proposition 4, we compute the upper approximations of D^≥_3 by deleting x_3 and x_4 from U, respectively.
Assume the object x_3 is deleted from Table 2; then {…, x_5}. Assume the object x_4 is deleted from Table 2; then …

Principles for incrementally updating approximations with the insertion of a new object
Given a SOIS (U, C ∪ {d}, V, f) at time t, when the information system is updated by inserting a new object x̄ (x̄ denotes the inserted object) into the universe U at time t + 1, two situations may occur: (1) x̄ forms a new decision class, i.e., ∀x ∈ U, f(x̄, d) ≠ f(x, d); (2) x̄ does not form a new decision class, i.e., ∃x ∈ U, f(x̄, d) = f(x, d). The difference between the two situations is that in the first situation, in addition to updating the approximations of the unions of the existing decision classes, we need to compute the approximations for the new decision class. Firstly, for updating the approximations of the unions of the existing decision classes D^≥_i (1 ≤ i ≤ r) when inserting an object, we discuss the principles through two cases, similar to the approach taken in the deletion model: (1) the inserted object will belong to D^≥_i; (2) the inserted object will not belong to D^≥_i. To illustrate our incremental methods for updating approximations when inserting a new object into a SOIS, two tables (Tables 3 and 4) are given as follows. We assume that the objects in Table 3 will be inserted into Table 1, and the objects in Table 4 will be inserted into Table 2.
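Which of the two situations applies can be checked directly from the decision values. A sketch (assuming `f_dec` maps the existing objects to their single-valued decisions; all names illustrative):

```python
def forms_new_decision_class(f_dec, d_new):
    """The inserted object forms a new decision class iff its decision
    value differs from that of every existing object."""
    return all(d_new != d for d in f_dec.values())

f_dec = {"x1": 1, "x2": 2, "x3": 2}
assert not forms_new_decision_class(f_dec, 2)   # situation (2): existing class
assert forms_new_decision_class(f_dec, 3)       # situation (1): new class
```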
Case 1: The inserted object x̄ will belong to D^≥_i.
(2) Otherwise, R^≥_A(…) …
Proof. According to Definition 4, we have ∀x ∈ … Thus, when the object x̄ is inserted into U, we have …
Example 9 (Continuation of Example 4). (1) For Table 1, according to Proposition 5, we compute the lower approximations of D^≥_2 when the objects x_7 and x_8 in Table 3 are inserted into Table 1, respectively.
Assume the object x_7 in Table 3 is inserted into Table 1; then … Assume the object x_8 in Table 3 is inserted into Table 1; then … (2) For Table 2, according to Proposition 4, we compute the lower approximation of D^≥_2 when the objects x_7 and x_8 in Table 4 are inserted into Table 2, respectively.
Assume the object x_7 in Table 4 is inserted into Table 2; then … ∪ {x_7} = {x_1, x_2, x_5, x_7}.
Assume the object x_8 in Table 4 is inserted into Table 2; then …
Case 2: The inserted object x̄ will not belong to D^≥_i.
Table 3. The objects inserted into the conjunctive set-valued ordered information system (Table 1). …
Proof. According to Definition 4, we have …
Example 12 (Continuation of Example 4). (1) Assume the object x_10 in Table 3 is inserted into Table 1; then … {x_2, x_3, x_4, x_6}. (2) For Table 2, according to Proposition 8, we compute the upper approximations of D^≥_2 when the objects x_9 and x_10 in Table 4 are inserted into Table 2, respectively.
Assume the object x_9 in Table 4 is inserted into Table 2; then …
Based on the above analysis, we can compute the approximations of the unions of the existing decision classes D^≥_i (1 ≤ i ≤ r) when inserting a new object into a SOIS. However, when a new object x̄ is inserted into the universe, it might happen that x̄ forms a new decision class, i.e., ∀x ∈ U, f(x̄, d) ≠ f(x, d). Then the universe U' = U ∪ {x̄} will be divided into r + 1 partitions, and it is easy to obtain that the union of the new decision class D_new is D^≥_new = D^≥_{i+1} ∪ {x̄}. Hence, if [x̄]^≥ …
Example 13 (Continuation of Example 4). (1) For …
Proof. When the object x̄ is inserted into U, U' = U ∪ {x̄}. According to Definition 4, we have R^≥ …
Example 14 (Continuation of Example 4). (1) For Table 1, according to Proposition 9, we compute the lower approximation of D^≥_new when the object x_11 in Table 3 is inserted into Table 1. Assume the object x_11 in Table 3 is inserted into Table 1; then … ∪ {x_11} = {x_2, x_6, x_11}. Because [x_11]^≥_C = {x_2, x_11}, we have …

Static (non-incremental) and incremental algorithms for computing approximations in SOIS with the variation of the object set
In this section, we design the static and incremental algorithms for computing approximations under the variation of the object set in SOIS, corresponding to Sections 2 and 3, respectively.

The incremental algorithm for updating approximations in SOIS when inserting an object into the universe
Algorithm 3 is an incremental algorithm for updating approximations in SOIS when inserting an object into the universe. Step 2 computes the A-dominating set of the inserted object x̄.
Steps 3-25 update the approximations of the union of classes D^≥_i when the inserted object x̄ will belong to the union of classes D^≥_i. Step …
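The gist of the insertion update can be sketched as follows: only the dominating sets of objects that the new object dominates change, so only those objects (plus the new object itself) need approximation rechecks. This is a simplified reconstruction, not the paper's Algorithm 3; `dominates` and all names are illustrative assumptions.

```python
def insert_object(U, dom, dominates, x_new):
    """Incrementally maintain the A-dominating sets dom[y] = [y]^>=_A
    when x_new enters the universe; returns the objects whose sets
    changed (the only candidates for approximation rechecks)."""
    affected = set()
    for y in U:
        if dominates(x_new, y):           # x_new joins [y]^>=_A
            dom[y] = dom[y] | {x_new}
            affected.add(y)
    # dominating set of the new object itself (reflexive)
    dom[x_new] = {z for z in U if dominates(z, x_new)} | {x_new}
    U.add(x_new)
    return affected

# toy total order: objects dominate by numeric value
val = {"x1": 1, "x2": 2, "x3": 3}
dominates = lambda a, b: val[a] >= val[b]
U = {"x1", "x3"}
dom = {"x1": {"x1", "x3"}, "x3": {"x3"}}
changed = insert_object(U, dom, dominates, "x2")
assert changed == {"x1"}
assert dom == {"x1": {"x1", "x2", "x3"}, "x2": {"x2", "x3"}, "x3": {"x3"}}
```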

Experimental evaluations
In this section, in order to evaluate the performance of the proposed incremental algorithms, we conduct a series of experiments to compare the computational time of the non-incremental algorithm and the incremental algorithms for computing approximations on standard data sets.
The algorithms are implemented in the Java programming language in Eclipse 3.5 with Java Virtual Machine (JVM) 1.6 (available at http://www.eclipse.org/platform). Experiments are performed on a computer with a 2.66 GHz CPU, 4.0 GB of memory and 32-bit Windows 7. We downloaded four data sets from the machine learning data repository of the University of California at Irvine [51]; the basic information of the data sets is outlined in Table 5. Data sets 1-4 in Table 5 are all incomplete information systems with missing values. In our experiments, we represent each missing value by the set of all possible values of the corresponding attribute, so this type of data set can be regarded as a special case of a set-valued information system. Besides, we use a set-valued data generator to generate two artificial data sets (5-6) in order to test the efficiency of the proposed algorithms; they are also outlined in Table 5.
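The conversion used for the incomplete UCI tables, replacing each missing value by the set of all possible values of that attribute, can be sketched as follows (the `?` missing-value marker and the row layout are assumptions about the input format):

```python
def to_set_valued(rows, missing="?"):
    """Turn an incomplete table into a set-valued one: a known value v
    becomes the singleton {v}; a missing value becomes the set of all
    values observed for that attribute."""
    n = len(rows[0])
    domains = [{r[j] for r in rows if r[j] != missing} for j in range(n)]
    return [[{r[j]} if r[j] != missing else set(domains[j]) for j in range(n)]
            for r in rows]

table = [["a", "?"],
         ["b", "y"],
         ["?", "x"]]
sv = to_set_valued(table)
assert sv[0] == [{"a"}, {"x", "y"}]   # missing value -> full column domain
assert sv[2] == [{"a", "b"}, {"x"}]
```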
Generally, we perform the experimental analysis by applying the non-incremental algorithm along with the proposed incremental algorithms when objects are inserted into or deleted from the information system. In order to present more informative comparisons and acquire more dependable results, we compare the computational efficiency of the algorithms from the following two aspects:
(1) Size of the data set: To compare the computational times of the non-incremental and incremental algorithms on different-sized data sets, we divide each of the six data sets into 10 parts of equal size. The first part is regarded as the 1st data set, the combination of the first and second parts as the 2nd data set, the combination of the 2nd data set and the third part as the 3rd data set, and so on; the combination of all ten parts is the 10th data set.
(2) Update ratio of the data set: The number of objects inserted into or deleted from the universe may differ, that is, the update ratio (the ratio of the number of updated objects to the number of original objects) may differ. To analyze the influence of the update ratio on the efficiency of the algorithms, we compare the computational times of the static and incremental algorithms under different update ratios. That is, for each data set, we conduct comparison experiments with the same original data size but different update (deletion or insertion) ratios.
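Scheme (1), ten equal parts with cumulative unions, can be sketched as:

```python
def cumulative_splits(data, parts=10):
    """The k-th data set is the union of the first k of `parts` equal
    slices, as in the size-of-data-set experiments."""
    size = len(data) // parts
    return [data[: size * k] for k in range(1, parts + 1)]

data = list(range(50))
splits = cumulative_splits(data, parts=10)
assert len(splits) == 10
assert splits[0] == list(range(5))   # 1st data set: the first part only
assert splits[-1] == data            # 10th data set: all ten parts
```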

A comparison of computational efficiency between static and incremental algorithms with the deletion of the objects
To compare the efficiency of the static (Algorithm 1) and incremental (Algorithm 2) algorithms for computing approximations when deleting objects from the data sets, we first compare the two algorithms on the six data sets in Table 5 with the same update ratio (the ratio of the number of deleted objects to the number of original objects) but different sizes of the original data. Here, we set the update ratio to 5%. The experimental results are shown in Table 6, and the detailed trend lines of the two algorithms with increasing data-set size are illustrated in Fig. 1. Secondly, we compare the computational times of the two algorithms with the same original data size but different update ratios (from 5% to 100%) for each data set. We show the experimental results in Table 7; the detailed trend lines of the two algorithms with increasing update ratio are presented in Fig. 2.
In each subfigure (a)-(f) of Fig. 1, the x-coordinate pertains to the size of the data set (the 10 data sets starting from the smallest one), while the y-coordinate presents the computational time. We use the star lines to denote the computational time of the static algorithm and the plus lines to denote that of the incremental algorithm on different sizes of data sets when deleting objects from the universe. According to Table 6 and Fig. 1, the computational time of both algorithms usually increases with the size of the data sets. As an important advantage shown in Table 6 and Fig. 1, when deleting objects from the universe, the incremental algorithm is much faster than the static algorithm for computing the approximations. Furthermore, the differences become larger and larger as the size of the data sets increases.
In each subfigure (a)-(f) of Fig. 2, the x-coordinate pertains to the ratio of the number of deleted objects to the number of original objects, while the y-coordinate concerns the computational time. According to the experimental results in Table 7 and Fig. 2, for the static algorithm, the computational time for computing approximations with the deletion of objects decreases monotonically as the deletion ratio increases, because the size of the universe decreases gradually with the ratio. In contrast, for the incremental algorithm, the computational time changes smoothly as the deletion ratio increases. The incremental algorithm always performs faster than the non-incremental algorithm up to a threshold of the deletion ratio, and the threshold differs depending on the data set. For example, in Fig. 2(a), (e), and (f), the thresholds are around 85%; in Fig. 2(b) and (c), the thresholds are around 65%; in Fig. 2(d), the incremental algorithm consistently outperforms the static algorithm even at a ratio of 90%.

A comparison of computational efficiency between static and incremental algorithms with the insertion of the objects
Similar to the experimental schemes for comparing the efficiency of the static and incremental algorithms when deleting objects from the universe, we adopt the same schemes to compare the performance of the algorithms when inserting objects into the universe. Firstly, we compare the two algorithms, i.e., Algorithm 1 and Algorithm 3, on the six data sets in Table 5 with the same update ratio (the ratio of the number of inserted objects to the number of original objects) but different sizes of the original data. Here, we set the update ratio to 5%. The experimental results are shown in Table 8, and the detailed trend lines of the two algorithms with increasing data-set size are presented in Fig. 3. Secondly, we compare the computational times of the two algorithms under changing update ratios for each data set. We show the experimental results in Table 9, and the detailed trend lines are given in Fig. 4.
In each subfigure (a)-(f) of Fig. 3, the x-coordinate pertains to the size of the data set (the 10 data sets starting from the smallest one), while the y-coordinate presents the computational time. We use the star lines to denote the computational time of the static algorithm (Algorithm 1) and the plus lines to denote that of the incremental algorithm (Algorithm 3) on different sizes of data sets when inserting objects into the universe. According to Table 8 and Fig. 3, the computational time of both algorithms usually increases with the size of the data sets. However, the incremental algorithm is much faster than the static algorithm for computing the approximations when inserting objects into the universe, and the differences between the static and incremental algorithms grow larger as the data size increases.
In each subfigure (a)-(f) of Fig. 4, the x-coordinate pertains to the ratio of the number of inserted objects to the number of original objects, while the y-coordinate concerns the computational time. According to the experimental results in Table 9 and Fig. 4, the computational times of both the static (Algorithm 1) and incremental (Algorithm 3) algorithms increase monotonically with the insertion ratio. The incremental algorithm is always faster than the static algorithm when the insertion ratio increases from 10% to 100% according to Fig. 4(a)-(e). In Fig. 4(f), the incremental algorithm is much faster than the static algorithm when the insertion ratio is less than 85%, but slower when the insertion ratio is more than 85%.

Conclusions
The incremental technique is an effective way to maintain knowledge in a dynamic environment. In this paper, we proposed incremental methods for updating approximations in SOIS when the information system is updated by inserting or deleting objects. By discussing the principles of updating approximations when deleting objects from and inserting objects into the information system, respectively, we developed incremental algorithms for updating approximations in SOIS in terms of inserting or deleting an object. Experimental studies on four UCI data sets and two artificial data sets showed that the incremental algorithms can improve the computational efficiency of updating approximations when the object set in the information system varies over time. In real-world applications, an information system may be updated by inserting and deleting objects at the same time. In our further work, we will focus on improving the incremental algorithms to update knowledge when objects are inserted and deleted simultaneously. Furthermore, since an information system consists of objects, attributes, and the domains of attribute values, all of these elements may change over time in a dynamic environment. In the future, the variation of attributes and of the domains of attribute values in SOIS will also be taken into consideration for incrementally updating knowledge.

We denote the union of classes and the A-dominating set at time t by D≥_i and [x]^{D≥}_A, and at time t + 1 by (D≥_i)' and ([x]^{D≥}_A)', respectively. According to Definition 4, the lower and upper approximations of D≥_i with respect to A ⊆ C at time t are denoted by R_A(D≥_i) and R̄_A(D≥_i), and at time t + 1 by R'_A((D≥_i)') and R̄'_A((D≥_i)'), respectively.
Given a SOIS S = (U, C ∪ {d}, V, f) at time t, the deletion of an object x⁻ ∈ U (x⁻ denotes the deleted object) will change the original information granules [x]^{D≥}_A (x ∈ U, A ⊆ C) and the unions of decision classes D≥_i (1 ≤ i ≤ r). The approximations of D≥_i will change accordingly.

Proposition 9. Let S = (U, C ∪ {d}, V, f) be a SOIS and A ⊆ C. When an object x⁺ is inserted into U such that f(x, d) ≠ f(x⁺, d) for all x ∈ U, a new decision class D_new arises, with upward union D≥_new = D≥_{i+1} ∪ {x⁺}. In this case, in addition to updating the approximations of the existing unions of decision classes, the lower and upper approximations of the union of the new decision class D≥_new must be computed.
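The case Proposition 9 covers can be illustrated with a small sketch of our own (not the paper's pseudocode): when the inserted object carries a decision value seen nowhere in U, a new class is spliced into the ordered class list at its rank, and the new upward union equals the next-higher union plus the newcomer. All names and values below are illustrative.

```python
def insert_new_class(classes, x_new, d_new):
    """classes: list of (decision value, object set) pairs, ascending by value.
    Splices a brand-new class {x_new} in at the rank of d_new."""
    assert all(d != d_new for d, _ in classes), "decision value must be unseen"
    out, placed = [], False
    for d, objs in classes:
        if not placed and d > d_new:
            out.append((d_new, {x_new}))   # new class sits between old ranks
            placed = True
        out.append((d, objs))
    if not placed:                          # d_new is the new maximum
        out.append((d_new, {x_new}))
    return out

def upward_union(classes, d_min):
    """D>=: all objects whose class value is at least d_min."""
    return set().union(*[objs for d, objs in classes if d >= d_min])

classes = [(1, {"x1", "x4"}), (2, {"x2"}), (4, {"x3", "x6"})]
classes2 = insert_new_class(classes, "x12", 3)
# The new upward union is the next-higher union plus the inserted object,
# mirroring D>=_new = D>=_{i+1} united with {x+} in Proposition 9.
d_new_union = upward_union(classes2, 3)
```

Here `d_new_union` equals `upward_union(classes2, 4) | {"x12"}`, which is exactly the relationship the proposition exploits to avoid recomputing from scratch.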

4.2. The incremental algorithm for updating approximations in SOIS when deleting an object from the universe

Algorithm 2 is an incremental algorithm for updating approximations in SOIS when deleting an object from the universe. Steps 3-16 update the approximations of the union of classes D≥_i when the deleted object x⁻ belongs to the union of classes D≥_i: Steps 4-8 compute the lower approximations of D≥_i by Proposition 1, and Steps 9-16 compute the upper approximations of D≥_i by Proposition 2. Steps 18-34 update the approximations of the union of classes D≥_i when the deleted object x⁻ does not belong to the union of classes D≥_i: Steps 19-27 compute the lower approximations of D≥_i by Proposition 3, and Steps 28-33 compute the upper approximations of D≥_i by Proposition 4.
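The deletion case can be sketched roughly as follows (our own helper names, not the paper's Algorithm 2). For a dominance relation decided pointwise per pair of objects, membership of z in [y]'s dominating class does not depend on any third object, so deleting x⁻ only requires removing it from the classes that contain it and re-testing the affected unions; everything else carries over from time t.

```python
def lower_approx(U, dom, D):
    """Objects whose dominating class lies wholly inside the union D."""
    return {x for x in U if dom[x] <= D}

def upper_approx(U, dom, D):
    """Objects whose dominating class intersects the union D."""
    return {x for x in U if dom[x] & D}

def delete_object(U, dom, D, x_del):
    """Incremental update after removing x_del: only dominating classes that
    contained x_del are touched; every other class is reused unchanged."""
    U2 = U - {x_del}
    D2 = D - {x_del}
    dom2 = {y: dom[y] - {x_del} if x_del in dom[y] else dom[y] for y in U2}
    return U2, dom2, D2

# Toy universe: dom maps each object to its dominating class (illustrative)
U = {"x1", "x2", "x3"}
dom = {"x1": {"x1", "x2"}, "x2": {"x2"}, "x3": {"x1", "x2", "x3"}}
D = {"x2", "x3"}               # an upward union of decision classes
U2, dom2, D2 = delete_object(U, dom, D, "x1")
```

After the deletion, `lower_approx(U2, dom2, D2)` grows from {x2} to {x2, x3}: removing x1 shrank x3's dominating class enough for x3 to enter the lower approximation, which is precisely the kind of local change the incremental algorithm tracks.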

Fig. 2. A comparison of the static (Algorithm 1) and incremental (Algorithm 2) algorithms versus the deletion ratio of data.

Fig. 3. A comparison of the static (Algorithm 1) and incremental (Algorithm 3) algorithms versus the size of data when inserting objects.

Let S = (U, C ∪ {d}, V, f) be a conjunctive SOIS and A ⊆ C. The dominance relation in terms of A is defined as: R^∧≥_A = {(y, x) ∈ U × U | y ⊒_A x} = {(y, x) ∈ U × U | f(y, a) ⊇ f(x, a), ∀a ∈ A}.

Any element of U/R^∧≥_A is called a dominance class with respect to A. Dominance classes in U/R^∧≥_A do not constitute a partition of U in general; they constitute a covering of U. From Table 1, U/R^∧≥_C = {[x1]^∧≥_C, [x2]^∧≥_C, ..., [x6]^∧≥_C}, where [x1]^∧≥_C = {x1, x2, x3}, [x2]^∧≥_C = {x2}, [x3]^∧≥_C = {x2, x3}, [x4]^∧≥_C = {x2, x4}, [x5]^∧≥_C = {x2, x5}, and [x6]^∧≥_C = {x6}. Analogously, U/R^∧≤_C = {[x1]^∧≤_C, [x2]^∧≤_C, ..., [x6]^∧≤_C}, where [x1]^∧≤_C = {x1}, [x2]^∧≤_C = {x1, x2, x3, x4, x5}, [x3]^∧≤_C = {x1, x3}, [x4]^∧≤_C = {x4}, [x5]^∧≤_C = {x5}, and [x6]^∧≤_C = {x6}. From Table 2, the dominance classes U/R^∨≥_C and U/R^∨≤_C are computed analogously (e.g., [x6]^∨≤_C = {x3, x4, x6}). Assume that the decision attribute d makes a partition of U into a finite number of classes, and let D = {D1, D2, ..., Dr} be the set of these classes, which are ordered: for all i, j ≤ r, if i ≥ j, then the objects from Di are preferred to the objects from Dj. The sets to be approximated in DRSA are the upward and downward unions of classes.
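To make the covering property concrete, here is a small sketch (with our own toy table, not the paper's Table 1) of how conjunctive dominance classes can be computed: y dominates x on A when f(y, a) ⊇ f(x, a) for every a ∈ A.

```python
def dominating_class(U, f, A, x):
    """[x] under the conjunctive dominance relation: all objects y whose
    value set contains x's value set on every attribute a in A."""
    return {y for y in U if all(f[y][a] >= f[x][a] for a in A)}  # set >= is superset

# A toy conjunctive set-valued table (illustrative values only)
U = ["x1", "x2", "x3"]
f = {
    "x1": {"a": {1, 2}, "b": {0}},
    "x2": {"a": {1, 2, 3}, "b": {0, 1}},
    "x3": {"a": {2}, "b": {0}},
}
A = ["a", "b"]
classes = {x: dominating_class(U, f, A, x) for x in U}
# x2 appears in all three classes, so the classes overlap: they form a
# covering of U rather than a partition.
```

The overlap is inherent to dominance relations: reflexivity puts every x in its own class, but nothing forbids a strong object like x2 from belonging to many classes at once.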

For Table 1, assume the object x7 in Table 3 is inserted into Table 1, and U' = U ∪ {x7}.

For Table 2, according to Proposition 6, we compute the upper approximations of D≥_2 when the object x7 in Table 4 is inserted into Table 2. Assume the object x7 in Table 4 is inserted into Table 2, and U' = U ∪ {x7}.

Table 4. The object inserted into the disjunctive set-valued ordered information system (Table 2).
Example 11 (Continuation of Example 4). (1) For Table 1, according to Proposition 7, we compute the lower approximations of D≥_2 when the object x9 in Table 3 is inserted into Table 1. Assume the object x9 in Table 3 is inserted into Table 1, and U' = U ∪ {x9}. (2) For Table 2, according to Proposition 7, we compute the lower approximations of D≥_2 when the object x9 in Table 4 is inserted into Table 2. Assume the object x9 in Table 4 is inserted into Table 2, and U' = U ∪ {x9}. Since f(x9, d) = 1, then D≥_1 = {…, x2, x5} and x9 ⊒_C x1, that is, …

For Table 1, according to Proposition 9, we compute the lower approximations of D≥_new when the objects x11 and x12 in Table 3 are inserted into Table 1, respectively. Assume the object x11 in Table 3 is inserted into Table 1, and U' = U ∪ {x11}. Then D = {D1, D2, D_new, D3} and D≥_new = D≥_3 ∪ {x11} = {x2, x6, x11}. Because [x11]^{D≥} … = {…, x6, x11}. Assume the object x12 in Table 3 is inserted into Table 1, and U' = U ∪ {x12}. Since for all x ∈ U, f(x, d) ≠ f(x12, d) = 3 and f(D2, d) < f(x12, d) < f(D3, d), then D = {D1, D2, D_new, D3} and D≥_new = D≥_3 ∪ {x12} = {x2, x6, x12}. Because [x12]^{D≥} … For Table 2, according to Proposition 9, we compute the lower approximations of D≥_new when the objects x11 and x12 in Table 4 are respectively inserted into Table 2. Assume the object x11 in Table 4 is inserted into Table 2, and U' = U ∪ {x11}. Since for all x ∈ U, f(x, d) ≠ f(x11, d) = 3 and f(D2, d) < f(x11, d) < f(D3, d), then D = {D1, D2, D_new, D3} and D≥_new = D≥_3 ∪ {x11} = {x2, x5, x11}. Because [x11]^{D≥} … Assume the object x12 in Table 4 is inserted into Table 2, and U' = U ∪ {x12}.


Table 5. A description of the data sets.

Table 7. A comparison of the static and incremental algorithms versus different updating rates when deleting objects.
Steps 5-10 compute the lower approximations of D≥_i by Proposition 5, and Step 11 computes the upper approximation of D≥_i by Proposition 6. Steps 13-24 update the approximations of the union of classes D≥_i when the inserted object x⁺ does not belong to the union of classes D≥_i: Steps 13-18 compute the lower approximations of D≥_i by Proposition 7, and Steps 19-24 update the approximations of D≥_i.
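The insertion side can be sketched analogously (assumed names, not the paper's Algorithm 3). Under a pointwise dominance relation, each existing dominating class gains the newcomer only if the newcomer dominates that object, so the update costs one dominance test per existing object plus a single scan to build the newcomer's own class, instead of recomputing every class from scratch.

```python
def insert_object(U, dom, x_new, dominates):
    """Incrementally extend dominating classes when x_new joins U.
    dominates(y, x) decides whether y dominates x under the system's relation."""
    U2 = U | {x_new}
    # Existing classes change by at most one element: the newcomer.
    dom2 = {y: dom[y] | {x_new} if dominates(x_new, y) else dom[y] for y in U}
    # Only the newcomer's class must be built fresh, by one scan over U2.
    dom2[x_new] = {z for z in U2 if dominates(z, x_new)}
    return U2, dom2

# Toy conjunctive dominance: value sets compared by inclusion (illustrative)
f = {
    "x1": {"a": {1, 2}},
    "x2": {"a": {1, 2, 3}},
    "x4": {"a": {1, 2, 4}},
}
dominates = lambda y, x: f[y]["a"] >= f[x]["a"]  # set >= is superset
U = {"x1", "x2"}
dom = {"x1": {"x1", "x2"}, "x2": {"x2"}}
U2, dom2 = insert_object(U, dom, "x4", dominates)
```

Here x4 dominates x1 but not x2, so only x1's class is extended; x2's class is reused untouched, which is where the incremental saving comes from.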

Table 8. A comparison of the static and incremental algorithms versus different data sizes when inserting objects.

Table 9. A comparison of the static and incremental algorithms versus different updating rates when inserting objects.