Open Access. Published by De Gruyter Open Access, December 31, 2018. Licensed under CC BY-NC-ND 4.0.

Ontology learning algorithm using weak functions

  • Linli Zhu, Gang Hua and Adnan Aslam
From the journal Open Physics

Abstract

Ontology is widely used in information retrieval, image processing and various other disciplines. This article discusses how to use a machine learning approach to solve the essential similarity calculation problem in the multi-dividing ontology setting. The ontology function is regarded as a combination of several weak ontology functions, and the optimal ontology function is obtained by an iterative algorithm. In addition, the performance of the algorithm is analyzed from a theoretical point of view by statistical methods, and several results are obtained.

1 Introduction

As a structured concept representation model, the ontology has been applied in the field of artificial intelligence since its inception, and has subsequently spread to other areas of computing, such as machine vision, parallel computing, information query expansion, mathematical logic representation, and so on. In the past decade, as a useful tool, the research and application of ontology has expanded to the entire engineering science. Related applications of ontology models can be found in fields as varied as chemistry, biology, pharmacy, materials science, medicine, neuroscience, and the social sciences. In each special application field, a large number of professional ontologies are constructed every year and applied to specific practices (for instance, the "GO" ontology in gene science and the "PO" ontology in plant science). More related ontology work and engineering applications can be found in Biletskiy et al. [1], Benedikt et al. [2], Rajbabu et al. [3], Vidal et al. [4], Annane et al. [5], Adhikari et al. [6], Mili et al. [7], Ferreira et al. [8], Bayoudhi et al. [9], and Derguech et al. [10].

The core of various ontology algorithms is the similarity calculation between ontology concepts. For ontology mapping, the essence is to calculate the similarity of concepts between different ontologies, so the ontology similarity calculation algorithm we design is also applicable to ontology mapping. Related studies on ontology mapping can be found in Ding and Foo [11], Kalfoglou and Schorlemmer [12], Currie et al. [13], Wong et al. [14], Qazvinian et al. [15], Nagy and Vargas-Vera [16], Lukasiewicz et al. [17], Arch-int and Arch-int [18], Forsati and Shamsfard [19], and Sicilia et al. [20].

In recent years, as the amount of data processed by various types of applications has expanded, the data stored and processed by ontologies has also been expanding. This has raised the requirements on ontology algorithms in the era of big data, especially in biology and pharmacy, where the ontology is responsible for handling large amounts of information. In order to meet the needs of practical engineering, learning algorithms have gradually been applied to ontology similarity calculation and ontology mapping, and then to various subject areas. Several ontology learning algorithms and their applications in different engineering settings can be found in Gao and Zhu [21], and Gao et al. [22], [23], [24], [25], and [27].

Several papers have contributed to the theoretical analysis of machine learning based ontology algorithms. Gao et al. [28] studied the strong and weak stability of the k-partite ranking based ontology algorithm. Gao and Xu [29] considered the uniform stability of learning algorithms for ontology similarity computation. Gao and Farahani [30] presented generalization bounds and uniform bounds for multi-dividing ontology algorithms with a convex ontology loss function. Gao et al. [31] proposed a partial multi-dividing ontology algorithm with the aim of obtaining an efficient trick to optimize the partial multi-dividing ontology learning framework, and obtained several theoretical results from a statistical learning theory perspective.

In this paper, we continue the theoretical analysis of ontology learning algorithms and focus on the multi-dividing ontology algorithm. The rest of the paper is organised as follows. First, we introduce the setting of the multi-dividing ontology learning algorithm together with some notation. Then we present the main algorithm, which is based on weak ontology functions. Finally, we give some results and detailed proofs from the perspective of statistical learning theory.

2 Setting

We use a graph G = (V, E) to express the structure of the ontology and call it the ontology graph; each vertex represents a concept and each edge indicates a direct relationship (for example, a "belong-to" relation between two concepts). Assume S : V × V → R+ ∪ {0} is the similarity function on the ontology, whose value we usually normalize to [0, 1]. That is to say, the similarity function S : V × V → [0, 1] maps each pair of vertices (concepts) to a real number in the interval from 0 to 1. Let v1 and v2 be two vertices in the ontology graph; S(v1, v2) = 1 indicates that the concepts corresponding to v1 and v2 have the same meaning, while S(v1, v2) = 0 means that there is no relationship between v1 and v2. Fix a threshold M ∈ [0, 1] with the help of domain experts; then for a vertex v, we return the set of concepts {v′ | S(v, v′) ≥ M} to the user as its similar vertices. In what follows, we always assume that n is the number of ontology samples, called the sample capacity.

Let S = {v1, · · · , vn} be the ontology sample set, drawn independently and identically distributed according to an unknown distribution D (we write vi ∼ D for i ∈ {1, · · · , n}), and let l be the ontology loss function (we always assume it is convex, and that it can be expressed as l(f, v) with respect to an ontology function f : V → R and an ontology sample v). The expected risk of the ontology model is

$$er(f) = \mathbb{E}_{v \sim D}\, l(f, v).$$

However, er(f) cannot be computed since we do not know D. Instead, it is natural to obtain the optimal ontology function via the following ontology empirical framework:

$$\hat{er}_S(f) = \frac{1}{n} \sum_{i=1}^{n} l(f, v_i).$$

When it comes to the ontology learning setting, we aim to learn an ontology function f : V → R which maps each vertex to a real number. The similarity between ontology vertices v1 and v2 can then be measured by means of |f(v1) − f(v2)|: the larger |f(v1) − f(v2)| is, the smaller the similarity between v1 and v2; the smaller |f(v1) − f(v2)| is, the larger the similarity between v1 and v2. In order to connect with statistical learning theory, all the information for a vertex v is packaged into a p-dimensional vector. To simplify the exposition, and where no confusion can arise, we use v to denote the vertex, its corresponding vector and its corresponding ontology concept, and this symbol is no longer bolded in the following context.
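As a concrete illustration, the following sketch shows how an ontology function f induces a similarity score and a threshold-based retrieval of similar vertices. It is a minimal sketch under stated assumptions: the monotone mapping 1/(1 + |f(v1) − f(v2)|) is one simple choice, not prescribed by the paper, and the linear form of f and all names are illustrative.

```python
import numpy as np

def similarity(f, v1, v2):
    # Smaller |f(v1) - f(v2)| means more similar concepts; map the
    # gap into (0, 1] so that identical scores give similarity 1.
    return 1.0 / (1.0 + abs(f(v1) - f(v2)))

def retrieve_similar(f, query, vertices, M=0.8):
    # Return all vertices whose induced similarity to `query` meets
    # the expert-chosen threshold M (cf. {v' | S(v, v') >= M}).
    return [v for v in vertices if similarity(f, query, v) >= M]

# Hypothetical linear ontology function on p-dimensional vertex vectors.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
f = lambda v: float(w @ v)
vertices = [rng.random(3) for _ in range(10)]
print(retrieve_similar(f, vertices[0], vertices[1:], M=0.9))
```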

Suppose that the (vi, yi) are independent and identically distributed random variables drawn from a certain unknown distribution D, where yi ∈ Y is the label of ontology vertex vi. Fix the ontology function f and denote by l(f, vi, yi) the ontology loss; then the expected ontology risk can be stated as

$$er(f) = \int_{V \times Y} l(f, v, y)\, D(dv, dy).$$

Given the ontology training sample set $\{(v_i, y_i)\}_{i=1}^{n}$, the corresponding empirical ontology risk can be expressed as

$$\hat{er}(f) = \frac{1}{n} \sum_{i=1}^{n} l(f, v_i, y_i).$$

When it comes to the pairwise setting, we also assume that (vi, yi) and (vj, yj) are independent and identically distributed random variables drawn from a certain unknown distribution D, where yi, yj ∈ Y are the labels of ontology vertices vi and vj. Let l(f, vi, vj, yi,j) be the ontology loss function, where yi,j can be regarded as a function of yi and yj. Thus, the expected ontology risk becomes

$$er(f) = \int_{(V \times Y)^2} l(f, v_i, v_j, y_{i,j})\, D(dv_i, dy_i)\, D(dv_j, dy_j).$$

With the ontology sample set $\{(v_i, y_i)\}_{i=1}^{n}$, the corresponding empirical ontology risk in the pairwise setting can be denoted by

$$\hat{er}(f) = \frac{2}{n(n-1)} \sum_{i=1}^{n} \sum_{j=i+1}^{n} l(f, v_i, v_j, y_{i,j}).$$
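A short sketch of this pairwise empirical risk may help fix the indexing. It is an illustrative assumption that y_{i,j} is derived from the labels by comparison; the paper only requires y_{i,j} to be some function of y_i and y_j.

```python
from itertools import combinations

def pairwise_empirical_risk(f, samples, loss):
    # samples: list of (v_i, y_i) pairs; computes
    # (2 / (n(n-1))) * sum over i<j of loss(f, v_i, v_j, y_ij).
    n = len(samples)
    total = 0.0
    for (vi, yi), (vj, yj) in combinations(samples, 2):
        # One plausible choice of y_ij: the sign of the label gap.
        y_ij = (yi > yj) - (yi < yj)
        total += loss(f, vi, vj, y_ij)
    return 2.0 * total / (n * (n - 1))
```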

2.1 Multi-dividing ontology setting

Since most ontology graph structures are trees (acyclic graphs), the multi-dividing ontology learning trick has become popular in recent years. In this special ontology learning setting, all the vertices in the ontology graph are divided into k parts corresponding to k levels. The values of the different levels are determined by domain experts in the specific application. For an ontology function f, what we want is that the real number corresponding to a vertex in level a is greater than the real number corresponding to a vertex in any level b, where 1 ≤ a < b ≤ k. That is to say, in the ideal case, f(v) > f(v′) if the level of vertex v is smaller than the level of vertex v′.

Formally, the learner is given an ontology training set S = (S1, S2, · · · , Sk) ∈ V^{n_1} × V^{n_2} × · · · × V^{n_k} which consists of a sequence of ontology training samples $S_a = (v_1^a, \ldots, v_{n_a}^a) \in V^{n_a}$ (1 ≤ a ≤ k). By virtue of the ontology sample S, a real-valued ontology function f : V → R is learned which assigns future S_a vertices larger values than S_b vertices, where a < b. Set D_a as the conditional distribution for each level 1 ≤ a ≤ k and $n = \sum_{i=1}^{k} n_i$ as the total size of the ontology sample set, where n_i = |S_i| for i ∈ {1, · · · , k}.

The expected multi-dividing ontology risk on the ontology graph for an ontology function f : V → R is defined as

$$er(f) = \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{E}_{v \sim D_a,\, v' \sim D_b} \{ l(f, v, v') \}.$$

Equivalently, the expected ontology risk can be formulated as

$$er(f) = \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \int_{V_a \times V_b} l(f, v_a, v_b)\, D_a(dv_a)\, D_b(dv_b).$$

A large class of learning algorithms is generated by regularization schemes. They penalize an empirical error, chosen here to be the multi-dividing empirical error on the ontology graph, defined for an f : V → R associated with the sample S as

$$\hat{er}_{S,l}(f) = \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \frac{1}{n_a n_b} \sum_{i=1}^{n_a} \sum_{j=1}^{n_b} l(f, v_i^a, v_j^b).$$

Thus, the optimal ontology function can be obtained by $f = \arg\min \hat{er}_{S,l}(f)$. We simply write $\hat{er}_{S,l}(f)$ as $\hat{er}(f)$.
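The following sketch computes this multi-dividing empirical risk directly from its definition; the list-of-levels representation of S and the loss signature l(f, v_a, v_b) are the only assumptions.

```python
def multidividing_empirical_risk(f, S, loss):
    # S: list of k lists of vertices, one per level (S[0] is level 1).
    # Returns the sum over level pairs a < b of the average loss over
    # all cross-level vertex pairs (v in S_a, v' in S_b).
    k = len(S)
    total = 0.0
    for a in range(k - 1):
        for b in range(a + 1, k):
            pair_sum = sum(loss(f, va, vb) for va in S[a] for vb in S[b])
            total += pair_sum / (len(S[a]) * len(S[b]))
    return total
```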

The aim of this paper is to provide a new ontology learning algorithm in terms of weak ontology functions and to give its theoretical analysis in the multi-dividing setting.

3 New ontology learning algorithm and theoretical analysis

In this section, we first present our new ontology learning algorithm in multi-dividing setting based on weak ontology functions. Then, the theoretical analysis of the proposed ontology algorithm is derived.

3.1 New ontology learning algorithm with weak ontology functions

Assume that the whole procedure can be decomposed into rounds, each of which produces a weak ontology function. The new ontology learning algorithm maintains a distribution Dt over V × V which is passed on round t to the weak ontology learner. In fact, it selects Dt to emphasize different parts of the ontology training samples: a large weight assigned to a pair of ontology vertices implies that it is very important for the weak ontology function to order that pair correctly.

We assume that the weak ontology functions have the form ft : V → R and provide ordering information in the same fashion as the final ontology function. In the normal ontology setting (S = {v1, · · · , vn}, not divided into k parts), the procedure can be stated as follows:

Step 1: Given initial distribution D over V × V and set D1 = D;

Step 2: For t = 1, · · · , T, do the following: train the weak ontology learner by means of the distribution Dt; obtain a weak ontology function ft : V → R; select a parameter αt ∈ R; calculate

$$D_{t+1}(v_1, v_2) = \frac{D_t(v_1, v_2)\, e^{\alpha_t (f_t(v_1) - f_t(v_2))}}{Z_t},$$

where Zt is a normalization factor chosen so that Dt+1 is again a distribution;

Step 3: Return the final ontology function as the combination of weak ontology functions:

f(v)=t=1Tαtft(v).

In Step 2 above, one problem is how to choose the parameter αt. One method is to minimize Zt, i.e.,

$$\alpha_t = \arg\min \{ Z_t \} = \arg\min_{\alpha_t} \sum_{(v_1, v_2)} D_t(v_1, v_2)\, e^{\alpha_t (f_t(v_1) - f_t(v_2))}.$$
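A compact sketch of Steps 1-3 under stated assumptions: pairs are hashable vertex pairs, `weak_learner` is any routine that fits a weak ontology function against the current pair distribution, and αt is found by a crude grid search standing in for a proper line search; the exponent follows the displayed update rule.

```python
import math

def argmin_Z(pairs, D_t, f_t, grid=None):
    # Grid search for the alpha minimizing
    # Z(alpha) = sum over pairs of D_t(p) * exp(alpha * (f_t(v1) - f_t(v2))).
    grid = grid if grid is not None else [i / 10.0 for i in range(-50, 51)]
    def Z(alpha):
        return sum(D_t[p] * math.exp(alpha * (f_t(p[0]) - f_t(p[1])))
                   for p in pairs)
    return min(grid, key=Z)

def train_ontology_booster(pairs, D, weak_learner, T=50):
    # pairs: list of vertex pairs (v1, v2); D: dict pair -> initial weight.
    D_t = dict(D)
    ensemble = []  # list of (alpha_t, f_t)
    for t in range(T):
        f_t = weak_learner(D_t)              # Step 2: train weak function
        alpha_t = argmin_Z(pairs, D_t, f_t)
        unnorm = {p: D_t[p] * math.exp(alpha_t * (f_t(p[0]) - f_t(p[1])))
                  for p in pairs}
        Z_t = sum(unnorm.values())           # normalization factor
        D_t = {p: w / Z_t for p, w in unnorm.items()}
        ensemble.append((alpha_t, f_t))
    # Step 3: final ontology function as a weighted combination.
    return lambda v: sum(a * f(v) for a, f in ensemble)
```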

In the multi-dividing ontology setting, the above algorithm can be rewritten as follows.

Initialize: For each pair (a, b) with 1 ≤ a < b ≤ k, and v ∈ Sa ∪ Sb, set $\rho_1^{a,b}(v) = \frac{1}{n_a}$ if v ∈ Sa and $\rho_1^{a,b}(v) = \frac{1}{n_b}$ if v ∈ Sb.

For t = 1, · · · , T:

  • train the weak ontology function in terms of Dt (if v1 ∈ Sa and v2 ∈ Sb, then $D_t(v_1, v_2) = \rho_t^{a,b}(v_1)\, \rho_t^{a,b}(v_2)$) and obtain a weak ontology function ft : V → R;

  • for each pair (a, b) with 1 ≤ a < b ≤ k, select $\alpha_t^{a,b} \in \mathbb{R}$ and update

$$\rho_{t+1}^{a,b}(v) = \frac{\rho_t^{a,b}(v)\, e^{\alpha_t^{a,b} f_t(v)}}{\sum_{v' \in S_a} \rho_t^{a,b}(v')\, e^{\alpha_t^{a,b} f_t(v')}}$$

if v ∈ Sa and

$$\rho_{t+1}^{a,b}(v) = \frac{\rho_t^{a,b}(v)\, e^{-\alpha_t^{a,b} f_t(v)}}{\sum_{v' \in S_b} \rho_t^{a,b}(v')\, e^{-\alpha_t^{a,b} f_t(v')}}$$

if v ∈ Sb.

Select a balance parameter αt for each weak ontology function, and return the final ontology function $f(v) = \sum_{t=1}^{T} \alpha_t f_t(v)$.
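Below is a sketch of this multi-dividing variant under stated assumptions: S is a list of k vertex lists, vertices are hashable, `weak_learner` consumes the per-pair weights ρ, `select_alpha` is any per-(a, b) rule for α_t^{a,b}, and the final balance parameters αt are placeholders. The opposite exponent signs on the two sides keep the product form D_t(v1, v2) = ρ_t(v1) ρ_t(v2) consistent with the pair update above.

```python
import math

def train_multidividing_booster(S, weak_learner, T=50, select_alpha=None):
    # S: list of k lists of (hashable) vertices, one per level.
    k = len(S)
    # rho[(a, b)]: dict vertex -> weight, uniform on each side initially.
    rho = {}
    for a in range(k - 1):
        for b in range(a + 1, k):
            r = {v: 1.0 / len(S[a]) for v in S[a]}
            r.update({v: 1.0 / len(S[b]) for v in S[b]})
            rho[(a, b)] = r
    ensemble = []
    for t in range(T):
        f_t = weak_learner(rho, S)
        for (a, b), r in rho.items():
            alpha_ab = select_alpha(r, f_t) if select_alpha else 0.5
            # Update each side with opposite signs, renormalizing per side.
            for side, sign in ((S[a], +1.0), (S[b], -1.0)):
                unnorm = {v: r[v] * math.exp(sign * alpha_ab * f_t(v))
                          for v in side}
                Z = sum(unnorm.values())
                for v in side:
                    r[v] = unnorm[v] / Z
        alpha_t = 0.5  # placeholder balance parameter
        ensemble.append((alpha_t, f_t))
    return lambda v: sum(a * f(v) for a, f in ensemble)
```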

3.2 Theoretical results

To ensure that the calculations in each iteration are valid, it must be confirmed that $D_t(v_1, v_2) = \rho_t^{a,b}(v_1)\, \rho_t^{a,b}(v_2)$ holds at each step (here v1 ∈ Sa and v2 ∈ Sb). To show this, we use mathematical induction: assume that the identity holds for round t; then for round t + 1, according to the procedure, we have

$$D_{t+1}(v_1, v_2) = \frac{\rho_t^{a,b}(v_1)\, e^{\alpha_t^{a,b} f_t(v_1)}}{\sum_{v \in S_a} \rho_t^{a,b}(v)\, e^{\alpha_t^{a,b} f_t(v)}} \times \frac{\rho_t^{a,b}(v_2)\, e^{-\alpha_t^{a,b} f_t(v_2)}}{\sum_{v \in S_b} \rho_t^{a,b}(v)\, e^{-\alpha_t^{a,b} f_t(v)}} = \rho_{t+1}^{a,b}(v_1)\, \rho_{t+1}^{a,b}(v_2).$$

Note that our final ontology function has the form $f(v) = \sum_{t=1}^{T} \alpha_t f_t(v)$, and we can set Θ : V × V → {−1, 0, 1} as

$$\Theta(v_1, v_2) = \mathrm{sign}\Big( \sum_{t=1}^{T} \alpha_t f_t(v_1) - \sum_{t=1}^{T} \alpha_t f_t(v_2) \Big).$$

That is to say, if $\sum_{t=1}^{T} \alpha_t f_t(v_1) - \sum_{t=1}^{T} \alpha_t f_t(v_2) > 0$, then Θ(v1, v2) = 1; if $\sum_{t=1}^{T} \alpha_t f_t(v_1) = \sum_{t=1}^{T} \alpha_t f_t(v_2)$, then Θ(v1, v2) = 0; and if $\sum_{t=1}^{T} \alpha_t f_t(v_1) - \sum_{t=1}^{T} \alpha_t f_t(v_2) < 0$, then Θ(v1, v2) = −1. For each pair (a, b) with 1 ≤ a < b ≤ k, if Θ(va, vb) ≠ 1, where va ∈ Sa and vb ∈ Sb, then the pair is misordered under the multi-dividing rule. Thus, the generalization error (expected risk) of Θ in the multi-dividing ontology setting is denoted as

$$\Delta(\Theta) = \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{P}_{v \sim D_a,\, v' \sim D_b} \{ \Theta(v, v') \neq 1 \} = \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{E}_{D_a, D_b}\, I(\Theta(v, v') \neq 1),$$

where I is the indicator function, i.e., I(x) = 1 if x is true and I(x) = 0 otherwise. Given the ontology training set S = (S1, S2, · · · , Sk) ∈ V^{n_1} × V^{n_2} × · · · × V^{n_k} which consists of a sequence of ontology training samples $S_a = (v_1^a, \ldots, v_{n_a}^a) \in V^{n_a}$ (1 ≤ a ≤ k), the empirical error of Θ can be denoted as

$$\hat{\Delta}(\Theta) = \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \frac{1}{n_a n_b} \sum_{i=1}^{n_a} \sum_{j=1}^{n_b} I(\Theta(v_i^a, v_j^b) \neq 1).$$
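For concreteness, here is a small sketch of Θ and of this empirical error, reusing the ensemble representation (a list of (αt, ft) pairs) assumed in the earlier sketches:

```python
def theta(ensemble, v1, v2):
    # Theta(v1, v2) = sign(f(v1) - f(v2)) for f = sum_t alpha_t * f_t.
    gap = sum(a * (f(v1) - f(v2)) for a, f in ensemble)
    return (gap > 0) - (gap < 0)  # -1, 0, or 1

def empirical_error(ensemble, S):
    # Sum over level pairs a < b of the fraction of cross-level
    # pairs (v in S_a, v' in S_b) that Theta fails to order as 1.
    k = len(S)
    total = 0.0
    for a in range(k - 1):
        for b in range(a + 1, k):
            bad = sum(1 for va in S[a] for vb in S[b]
                      if theta(ensemble, va, vb) != 1)
            total += bad / (len(S[a]) * len(S[b]))
    return total
```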

The results presented in this paper aim to show that the difference between $\hat{\Delta}(\Theta)$ and $\Delta(\Theta)$ is small with high probability. Setting Γ as the function space of the functions Θ, we have the following theorem.

Theorem 1 Suppose all the weak ontology functions belong to a function space F with finite VC dimension K, and that the ontology functions f (the weighted combinations of the weak ontology functions) belong to a function space F. Let S = (S1, S2, · · · , Sk) ∈ V^{n_1} × V^{n_2} × · · · × V^{n_k} be an ontology training set which consists of a sequence of ontology training samples $S_a = (v_1^a, \ldots, v_{n_a}^a) \in V^{n_a}$ with S_a ∼ D_a (1 ≤ a ≤ k). Then, with probability at least 1 − δ (0 < δ < 1), the following inequality holds for any f ∈ F:

$$|er(f) - \hat{er}(f)| \le 2 \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \left\{ \sqrt{ \frac{ \bar{K}\big(\log \frac{2 n_a}{\bar{K}} + 1\big) + \log \frac{18}{\delta} }{ n_a } } + \sqrt{ \frac{ \bar{K}\big(\log \frac{2 n_b}{\bar{K}} + 1\big) + \log \frac{18}{\delta} }{ n_b } } \right\},$$

where $\bar{K} = 2(K + 1)(T + 1) \log_2(e(T + 1))$, T is the number of weak ontology functions in the ontology algorithm and e is the base of the natural logarithm.

Proof of Theorem 1 First, we show that for each pair (a, b) with 1 ≤ a < b ≤ k and each δ > 0, there is a small number ε satisfying

$$\mathbb{P}\Big\{ \Theta \in \Gamma : \Big| \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \frac{1}{n_a n_b} \sum_{i=1}^{n_a} \sum_{j=1}^{n_b} I(\Theta(v_i^a, v_j^b) \neq 1) - \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{E}_{v_a, v_b} I(\Theta(v_a, v_b) \neq 1) \Big| > \varepsilon \Big\} \le \delta,$$

where the value of ε will be determined later.

Define Ξ : V × V → {0, 1} as Ξ(v_a, v_b) = I(Θ(v_a, v_b) ≠ 1). Clearly, Ξ indicates whether Θ makes a mistake on the ontology vertex pair (v_a, v_b), with v_a ∈ S_a and v_b ∈ S_b, according to the multi-dividing rule. We infer

$$\sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \frac{1}{n_a n_b} \sum_{i=1}^{n_a} \sum_{j=1}^{n_b} I(\Theta(v_i^a, v_j^b) \neq 1) - \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{E}_{v_a, v_b} I(\Theta(v_a, v_b) \neq 1)$$
$$= \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \Big\{ \frac{1}{n_a n_b} \sum_{i=1}^{n_a} \sum_{j=1}^{n_b} \Xi(v_i^a, v_j^b) - \frac{1}{n_a} \sum_{i=1}^{n_a} \mathbb{E}_{v_b}\{\Xi(v_i^a, v_b)\} + \frac{1}{n_a} \sum_{i=1}^{n_a} \mathbb{E}_{v_b}\{\Xi(v_i^a, v_b)\} - \mathbb{E}_{v_a, v_b}\{\Xi(v_a, v_b)\} \Big\}$$
$$= \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \Big\{ \frac{1}{n_a} \sum_{i=1}^{n_a} \Big( \frac{1}{n_b} \sum_{j=1}^{n_b} \Xi(v_i^a, v_j^b) - \mathbb{E}_{v_b} \Xi(v_i^a, v_b) \Big) + \mathbb{E}_{v_b} \Big\{ \frac{1}{n_a} \sum_{i=1}^{n_a} \Xi(v_i^a, v_b) - \mathbb{E}_{v_a} \Xi(v_a, v_b) \Big\} \Big\}.$$

Obviously, it suffices to show that there exist ε1 and ε2 with ε1 + ε2 = ε such that (∃ v_a ∈ V_a in each pair (a, b) for (1), and ∃ v_b ∈ V_b in each pair (a, b) for (2))

$$\mathbb{P}\Big\{ \Xi \in \Upsilon : \Big| \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \frac{1}{n_b} \sum_{j=1}^{n_b} \Xi(v_a, v_j^b) - \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{E}_{v_b} \Xi(v_a, v_b) \Big| \ge \varepsilon_1 \Big\} \le \frac{\delta}{2}, \tag{1}$$

and

$$\mathbb{P}\Big\{ \Xi \in \Upsilon : \Big| \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \frac{1}{n_a} \sum_{i=1}^{n_a} \Xi(v_i^a, v_b) - \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{E}_{v_a} \Xi(v_a, v_b) \Big| \ge \varepsilon_2 \Big\} \le \frac{\delta}{2} \tag{2}$$

respectively, where Υ is the function space of Ξ.

Now we only prove (2) in light of standard results; (1) can be obtained in the same fashion. Let $\Upsilon_{v_b}$ be the set of all such functions Ξ for a given v_b; then the selection of Ξ in (2) is from the function space $\bigcup_{v_b} \Upsilon_{v_b}$. The theorem of Vapnik [32] provides a selection of ε2 in (2) relying on the size n_a of S_a for each pair (a, b), on the complexity $\bar{K}$ of $\bigcup_{v_b} \Upsilon_{v_b}$ (measured by its VC dimension), and on the probability δ. Specifically, for any δ > 0, set

$$\varepsilon_3 = 2 \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \sqrt{ \frac{ \bar{K}\big(\log \frac{2 n_a}{\bar{K}} + 1\big) + \log \frac{18}{\delta} }{ n_a } };$$

we have

$$\mathbb{P}\Big\{ \Xi \in \bigcup_{v_b} \Upsilon_{v_b} : \Big| \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \frac{1}{n_a} \sum_{i=1}^{n_a} \Xi(v_i^a, v_b) - \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \mathbb{E}_{v_a} \Xi(v_a, v_b) \Big| \ge \varepsilon_3 \Big\} \le \delta.$$

Next, we need to determine the VC dimension $\bar{K}$ of $\bigcup_{v_b} \Upsilon_{v_b}$. For a given v_b ∈ V_b, we obtain

$$\Xi(v_a, v_b) = I(\Theta(v_a, v_b) \neq 1) = I\Big( \mathrm{sign}\Big( \sum_{t=1}^{T} \alpha_t f_t(v_a) - \sum_{t=1}^{T} \alpha_t f_t(v_b) \Big) \neq 1 \Big) = I\Big( \sum_{t=1}^{T} \alpha_t f_t(v_a) - \sum_{t=1}^{T} \alpha_t f_t(v_b) \le 0 \Big) = I\Big( \sum_{t=1}^{T} \alpha_t f_t(v_a) \le c \Big),$$

where $c = \sum_{t=1}^{T} \alpha_t f_t(v_b)$ is a constant since v_b is given. This reveals that the functions in the space $\bigcup_{v_b} \Upsilon_{v_b}$ form a subset of all possible thresholdings of linear combinations of the T weak ontology functions. Using the standard result on the VC dimension of such threshold-of-combination classes, we deduce that if the weak ontology function space has VC dimension K greater than two, then $\bar{K}$ cannot exceed 2(K + 1)(T + 1) log2(e(T + 1)).

Therefore, we get the desired conclusion.

According to Theorem 1 above, the generalization bound converges to zero at a rate of $O\Big( \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \max\Big\{ \sqrt{\tfrac{\log n_a}{n_a}}, \sqrt{\tfrac{\log n_b}{n_b}} \Big\} \Big)$.
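To get a feel for the magnitudes involved, the following sketch evaluates the right-hand side of Theorem 1 numerically, under the square-root reading of the bound reconstructed above; the sample sizes, K, T and δ below are made-up inputs.

```python
import math

def theorem1_bound(level_sizes, K, T, delta):
    # level_sizes: (n_1, ..., n_k); K: VC dimension of the weak
    # function space; T: number of weak ontology functions.
    K_bar = 2 * (K + 1) * (T + 1) * math.log2(math.e * (T + 1))
    def term(n):
        return math.sqrt((K_bar * (math.log(2 * n / K_bar) + 1)
                          + math.log(18 / delta)) / n)
    k = len(level_sizes)
    return 2 * sum(term(level_sizes[a]) + term(level_sizes[b])
                   for a in range(k - 1) for b in range(a + 1, k))

# Three levels of 10^5 samples each, K = 3, T = 20, delta = 0.05.
print(theorem1_bound([10**5] * 3, K=3, T=20, delta=0.05))
```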

For each pair (a, b) with 1 ≤ a < b ≤ k, the shatter coefficient is denoted by r_{a,b}(F, n_a, n_b) (see Gao and Wang [33] for more details). Then we deduce the following result.

Theorem 2 Let F be the real-valued ontology function space on V. Then, with probability at least 1 − δ (0 < δ < 1), for any f ∈ F, we have

$$|er(f) - \hat{er}(f)| \le \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \sqrt{ \frac{ 8(n_a + n_b)\big( \log \frac{4}{\delta} + \log r_{a,b}(F, 2n_a, 2n_b) \big) }{ n_a n_b } }.$$

Theorem 2 implies that if the ontology function is a linear function in a one-dimensional function space, then for each pair (a, b) with 1 ≤ a < b ≤ k, the r_{a,b}(F, n_a, n_b) are constants, regardless of the values of n_a and n_b, and thus the bound converges to zero at a rate of $O\Big( \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \max\Big\{ \sqrt{\tfrac{1}{n_a}}, \sqrt{\tfrac{1}{n_b}} \Big\} \Big)$. This reveals that the bound obtained in Theorem 2 is sharper than the bound obtained in Theorem 1. However, if the ontology function is a linear function in a d-dimensional function space (where d ≥ 2), then r_{a,b}(F, n_a, n_b) is of order O((n_a n_b)^d), and in this case the bound in Theorem 2 has the same VC-dimension-type convergence rate as before, i.e., still $O\Big( \sum_{a=1}^{k-1} \sum_{b=a+1}^{k} \max\Big\{ \sqrt{\tfrac{\log n_a}{n_a}}, \sqrt{\tfrac{\log n_b}{n_b}} \Big\} \Big)$.

4 Conclusion

Multi-dividing ontology learning algorithms have proved to be effective in biological science, plant science, robot structure analysis, etc. It is necessary to give a deep theoretical analysis of this kind of algorithm. In this paper, we give a new ontology learning algorithm based on weak ontology functions, and we discuss the generalization bound in this special setting. The obtained ontology algorithm and theoretical conclusions have potential engineering use in various fields.

Conflict of interest: The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgement

We thank the reviewers for their constructive comments in improving the quality of this paper. This work was supported in part by the National Natural Science Foundation of China (51574232), the Open Research Fund by Jiangsu Key Laboratory of Recycling and Reuse Technology for Mechanical and Electronic Products (RRME-KF1612), the Industry-Academia Cooperation Innovation Fund Project of Jiangsu Province (BY2016030-06) and Six Talent Peaks Project in Jiangsu Province (2016-XYDXXJS-020).

References

[1] Biletskiy Y., Brown J.A., Ranganathan G.R., Bagheri E., Akbari I., Building a business domain meta-ontology for information preprocessing, Inform. Process. Lett., 2018, 138, 81–88. doi:10.1016/j.ipl.2018.06.009

[2] Benedikt M., Grau B.C., Kostylev E.V., Logical foundations of information disclosure in ontology-based data integration, Artif. Int., 2018, 262, 52–95. doi:10.1016/j.artint.2018.06.002

[3] Rajbabu K., Srinivas H., Sudha S., Industrial information extraction through multi-phase classification using ontology for unstructured documents, Comput. Ind., 2018, 137–147. doi:10.1016/j.compind.2018.04.007

[4] Vidal J.C., Rabelo T., Lama M., Amorim R., Ontology-based approach for the validation and conformance testing of xAPI events, Knowl. Based Syst., 2018, 155, 22–34. doi:10.1016/j.knosys.2018.04.035

[5] Annane A., Bellahsene Z., Azouaou F., Jonquet C., Building an effective and efficient background knowledge resource to enhance ontology matching, J. Web Semant., 2018, 51, 51–68. doi:10.1016/j.websem.2018.04.001

[6] Adhikari A., Dutta B., Dutta A., Mondal D., Singh S., An intrinsic information content-based semantic similarity measure considering the disjoint common subsumers of concepts of an ontology, J. Assoc. Inf. Sci. Tech., 2018, 69, 1023–1034. doi:10.1002/asi.24021

[7] Mili H., Valtchev P., Szathmary L., Boubaker A., Leshob A., Charif Y., Martin L., Ontology-based model-driven development of a destination management portal: Experience and lessons learned, Software Pract. Exper., 2018, 48, 1438–1460. doi:10.1002/spe.2581

[8] Ferreira W., Baldassarre M.T., Soares S., Codex: A metamodel ontology to guide the execution of coding experiments, Comput. Stand. Inter., 2018, 59, 35–44. doi:10.1016/j.csi.2018.02.003

[9] Bayoudhi L., Sassi N., Jaziri W., How to repair inconsistency in OWL 2 DL ontology versions?, Data Knowl. Eng., 2018, 116, 138–158. doi:10.1016/j.datak.2018.05.010

[10] Derguech W., Bhiri S., Curry E., Using ontologies for business capability modelling: describing what services and processes achieve, Comput. J., 2018, 61, 1075–1098. doi:10.1093/comjnl/bxy011

[11] Ding Y., Foo S., Ontology research and development. Part 2 – a review of ontology mapping and evolving, J. Inform. Sci., 2002, 28, 375–388. doi:10.1177/016555102401054867

[12] Kalfoglou Y., Schorlemmer M., Ontology mapping: the state of the art, Knowl. Eng. Rev., 2003, 18, 1–31. doi:10.1017/S0269888903000651

[13] Currie R.A., Bombail V., Oliver J.D., Moore D.J., Lim F.L., Gwilliam V., Kimber I., Chipman K., Moggs J.G., Orphanides G., Gene ontology mapping as an unbiased method for identifying molecular pathways and processes affected by toxicant exposure: Application to acute effects caused by the rodent non-genotoxic carcinogen diethylhexylphthalate, Toxicol. Sci., 2005, 86, 453–469. doi:10.1093/toxsci/kfi207

[14] Wong A.K.Y., Ray P., Parameswaran N., Strassner J., Ontology mapping for the interoperability problem in network management, IEEE J. Sel. Area. Comm., 2005, 23, 2058–2068. doi:10.1109/JSAC.2005.854130

[15] Qazvinian V., Abolhassani H., Haeri S.H., Hariri B.B., Evolutionary coincidence-based ontology mapping extraction, Expert Syst., 2008, 25, 221–236. doi:10.1111/j.1468-0394.2008.00462.x

[16] Nagy M., Vargas-Vera M., Multiagent ontology mapping framework for the semantic web, IEEE T. Syst. Man. Cy. A, 2011, 41, 693–704. doi:10.1109/TSMCA.2011.2132704

[17] Lukasiewicz T., Predoiu L., Stuckenschmidt H., Tightly integrated probabilistic description logic programs for representing ontology mappings, Ann. Math. Artif. Intel., 2011, 63, 385–425. doi:10.1007/s10472-012-9280-3

[18] Arch-int N., Arch-int S., Semantic ontology mapping for interoperability of learning resource systems using a rule-based reasoning approach, Expert Syst. Appl., 2013, 40, 7428–7443. doi:10.1016/j.eswa.2013.07.027

[19] Forsati R., Shamsfard M., Symbiosis of evolutionary and combinatorial ontology mapping approaches, Inform. Sciences, 2016, 342, 53–80. doi:10.1016/j.ins.2016.01.025

[20] Sicilia A., Nemirovski G., Nolle A., Map-On: A web-based editor for visual ontology mapping, Semantic Web, 2017, 8, 969–980. doi:10.3233/SW-160246

[21] Gao W., Zhu L.L., Gradient learning algorithms for ontology computing, Comput. Intell. Neurosci., 2014, doi:10.1155/2014/438291

[22] Gao W., Zhu L.L., Guo Y., Wang K.Y., Ontology learning algorithm for similarity measuring and ontology mapping using linear programming, J. Intell. Fuzzy Syst., 2017, 33, 3153–3163. doi:10.3233/JIFS-169367

[23] Gao W., Zhu L.L., Wang K.Y., Ontology sparse vector learning algorithm for ontology similarity measuring and ontology mapping via ADAL technology, Int. J. Bifurcat. Chaos, 2015, 25, doi:10.1142/S0218127415400349

[24] Gao W., Farahani M.R., Aslam A., Hosamani S., Distance learning techniques for ontology similarity measuring and ontology mapping, Cluster Comput., 2017, 20, 959–968. doi:10.1007/s10586-017-0887-3

[25] Gao W., Baig A.Q., Ali H., Sajjad W., Farahani M.R., Margin based ontology sparse vector learning algorithm and applied in biology science, Saudi J. Biol. Sci., 2017, 24, 132–138. doi:10.1016/j.sjbs.2016.09.001

[26] Gao W., Zhu L.L., Wang K.Y., Ranking based ontology scheming using eigenpair computation, J. Intell. Fuzzy Syst., 2016, 4, 2411–2419. doi:10.3233/JIFS-169082

[27] Gao W., Guo Y., Wang K.Y., Ontology algorithm using singular value decomposition and applied in multidisciplinary, Cluster Comput., 2016, 19, 2201–2210. doi:10.1007/s10586-016-0651-0

[28] Gao W., Gao Y., Zhang Y.G., Strong and weak stability of k-partite ranking algorithms, Information, 2012, 15, 4585–4590.

[29] Gao W., Xu T.W., Stability analysis of learning algorithms for ontology similarity computation, Abstr. Appl. Anal., 2013, doi:10.1155/2013/174802

[30] Gao W., Farahani M.R., Generalization bounds and uniform bounds for multi-dividing ontology algorithms with convex ontology loss function, Comput. J., 2017, 60, 1289–1299. doi:10.1093/comjnl/bxw107

[31] Gao W., Guirao J.L.G., Basavanagoud B., Wu J.Z., Partial multi-dividing ontology learning algorithm, Inform. Sciences, 2018, 467, 35–58. doi:10.1016/j.ins.2018.07.049

[32] Vapnik V.N., Estimation of Dependences Based on Empirical Data, Springer-Verlag, 1982.

[33] Gao W., Wang W.F., Analysis of k-partite ranking algorithm in area under the receiver operating characteristic curve criterion, Int. J. Comput. Math., 2018, 95, 1527–1547. doi:10.1080/00207160.2017.1322688

Received: 2018-11-04
Accepted: 2018-11-11
Published Online: 2018-12-31

© 2018 Linli Zhu et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
