An Algebraic Approach to Functional-Dependency Semantics for First-Order Languages

Motivated by applications to relational databases, N. Alechina proposed in 2000 a many-sorted first-order logic with specialized models (called fd-models), where an ordinary first-order structure is equipped with a system of functions realizing functional dependences between variables (interpreted as attributes). Due to these constraints, only a restricted set of assignments is available in such a model, which may cause undesirable effects when formulas of the language are evaluated. We develop for the relevant first-order languages another, assignment-free algebraic semantics based on improved fd-models.


Introduction
In the 1980s, various first-order logics were widely used for understanding relational databases and turned out, in particular, to be a useful tool for analyzing the concept of functional dependency between attributes. A natural way of using a logical language to speak about a database is to interpret the formal variables of the language as attributes in its scheme. In this way functional dependences are introduced into the semantics of the language, and they may also be reflected in its syntax. Let us illustrate this point by two examples from the literature.
The language of dependence logic is the ordinary first-order language expanded by new atomic formulas =(t_1, t_2, ..., t_n) (called dependence formulas), where t_1, ..., t_n are arbitrary terms; such a formula is informally read as 'the value of t_n is functionally dependent on the values of t_1, ..., t_{n-1}'. The intended semantics is the so-called team semantics. A team for a first-order structure A = (A, ...) is a set of assignments φ for the variables of the language in A. By definition, a dependence formula =(t_1, t_2, ..., t_n, t) is satisfied by a team Θ if, for all φ, φ′ ∈ Θ, φ(t) = φ′(t) whenever φ(t_i) = φ′(t_i) for i = 1, 2, ..., n. This truth condition resembles the well-known definition of a functional dependency of attributes in a database relation; see also Example 2.2 below. We leave out here the truth conditions for compound formulas, and only note that a formula counts as valid in a model A if it is satisfied in it by all teams.
Functional dependences between variables dealt with in dependence logic are predetermined by the "content" of a concrete team and may vary when teams change. Another, less popular kind of logic of dependence, also suggested by relational databases, was discussed in Alechina (2000), and deals with "built-in" functional dependences. It is based on a many-sorted first-order language without function symbols and without dependence formulas of any kind, but not all assignments of values to variables are available in its models. In more detail, there is a set I of sorts (interpreted as attribute names), and each variable (attribute) v_i belongs to a different sort i ∈ I. A model of such a language L(I) is a first-order structure A := ((A_i)_{i∈I}, ...), where the A_i are nonempty sets (domains of attributes), not necessarily disjoint or even different. An fd-model of L(I) is then a pair (A, F), where F is a (possibly empty) set of functions f_ij : A_i → A_j (functional dependences). The set Φ of assignments admissible in an fd-model consists of those assignments φ that agree with every dependence in F, i.e., satisfy φ(v_j) = f_ij(φ(v_i)) for each f_ij ∈ F. Imitating the traditional definitions, a formula is said to be true in the fd-model if it is satisfied by all assignments from Φ, and valid if it is true in all models. The logic arising from this semantics is called L_fd.
A logical consequence relation relative to a given fd-model (not discussed by Alechina) could also be defined in the same way: a formula α logically implies β if every assignment from Φ that satisfies α also satisfies β. However, the restricted set of admissible assignments may turn out to be too small to get adequate characterizations of validity and logical consequence: as the following example shows, Φ may even be empty.
If φ(x_3) = a_31, then φ(x_1) = a_11, φ(x_2) = a_21, and there is no choice for φ(x_4). Likewise with φ(x_3) = a_32. One may conclude that the given set of dependences forces at least some of the four attributes to conflict with each other. Therefore, with the described semantics, some semantic properties of formulas and relations between them may become inadequate; for example, it may happen that every formula logically implies every other formula (when Φ = ∅).
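The effect can be checked mechanically. The following brute-force sketch (with hypothetical toy data; the attribute names and values below are illustrative, not those of the example above) enumerates all assignments and keeps only the admissible ones, confirming that a conflicting system of dependences leaves Φ empty.

```python
from itertools import product

# Hypothetical toy fd-model: attribute domains (illustrative data only).
domains = {
    "x1": ["a11", "a12"],
    "x2": ["a21", "a22"],
    "x3": ["a31", "a32"],
    "x4": ["a41", "a42"],
}

# Dependences f_ij : domain(x_i) -> domain(x_j), deliberately chosen so
# that x1 and x2 impose conflicting requirements on x4.
deps = {
    ("x3", "x1"): {"a31": "a11", "a32": "a12"},
    ("x3", "x2"): {"a31": "a21", "a32": "a22"},
    ("x1", "x4"): {"a11": "a41", "a12": "a41"},
    ("x2", "x4"): {"a21": "a42", "a22": "a42"},
}

def admissible_assignments(domains, deps):
    """Enumerate the set Phi of assignments respecting every dependence."""
    names = list(domains)
    phis = []
    for values in product(*(domains[n] for n in names)):
        phi = dict(zip(names, values))
        if all(phi[j] == f[phi[i]] for (i, j), f in deps.items()):
            phis.append(phi)
    return phis

print(admissible_assignments(domains, deps))  # -> [] : Phi is empty
```

Dropping either of the two dependences into x4 makes Φ nonempty again, which illustrates that emptiness is caused by the conflict, not by the restriction to admissible assignments as such.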
A possible way to avoid such difficulties could be to admit partial assignments defined only on compatible subsets of variables. We choose another strategy, and develop in this paper an algebraic, assignment-free semantics for a certain first-order logical language. The primary object in this kind of semantics will be the so-called fd-frame (a concept borrowed from Cīrulis (2004)), which consists of a set of variables, a family of value sets for them, and a (closed, in a sense) set of unary functional dependences between variables; as we shall see in the next section, admitting dependences between single variables only is not a real restriction, but simplifies matters. A subset of variables is recognized as compatible in a frame if there is a variable they all depend on. We associate with an fd-frame F a certain set S whose elements are informally interpreted as statements about variables (and their values) in F, and turn it into an algebraic structure with operations interpreted as negation, conjunction, disjunction (the latter two being partial unless all variables are pairwise consistent) and existential and universal quantification. Moreover, we introduce on S a relation interpreted as entailment (a consequence relation). Finally, we show by an example how the statement algebra can be used to interpret an appropriate first-order language (similar to that of Alechina's L_fd): with every formula, a statement from S is associated as its meaning, but no formula with incompatible free variables becomes meaningful in a given frame. On this ground, the semantic notions of validity and logical consequence are then introduced. However, we do not consider any formal system for this logic, and have not aimed to introduce statement algebras as a class of structures with respect to which some predefined formal logical system should be sound or even complete.
It is worth noting already here that variables in an fd-frame are still rather formal: as shown by examples in the next section, they may, but need not, be thought of as attributes related to some database or an information system of any other kind. Several ideas developed in this paper go back to Cīrulis (1987), where they are realized in another form.

Functional-dependency frames
We begin with some preliminaries. A preorder on a set P is a reflexive and transitive relation on P. Upper and lower bounds for subsets of a preordered set are defined just as in ordered sets; however, a least upper bound (l.u.b.) and a greatest lower bound (g.l.b.) of a subset of P, if defined, may not be unique. P is said to be bounded complete if every subset bounded from above has a l.u.b. (equivalently, if every non-empty subset of it has a g.l.b.), and finitely bounded complete if a l.u.b. exists (at least) for every finite bounded subset. However, greatest lower bounds need not exist in a finitely bounded complete preordered set. Every (finitely) bounded complete preordered set P has least, or initial, elements: the l.u.b.s of the empty subset. P is finitely bounded complete if and only if it has initial elements and every two-element bounded subset has a l.u.b. If the preorder under consideration is antisymmetric, i.e., P is actually a poset, then l.u.b.s and g.l.b.s are unique and are called joins and meets, respectively.

Functional dependences between variables
A system of functional dependences is determined by the following data:
- a set Var of variables,
- a family Val = (Val_x : x ∈ Var) of nonempty value sets, one for each variable,
- a set F of functions of the form d^y_x : Val_y → Val_x, one for each pair (x, y) in a set D ⊆ Var × Var.
The set D may be interpreted as a binary relation on Var. We shall usually write x ← y for (x, y) ∈ D; thus, x ← y if and only if the function d^y_x exists in F.
We call the relation ← (i.e., D) a dependency relation, read 'x ← y' as 'x (functionally) depends on y', and consider each function d^y_x as the respective dependence. Thus x here is a function of y: if x ← y and y has a value v, then x has the value d^y_x(v) (of course, there is also a feedback: if x has a value u, then y must have a value in (d^y_x)^{-1}(u)).
We assume that F satisfies the following conditions:
(F1) F is closed under composition: if d^z_y and d^y_x are in F, then d^y_x d^z_y belongs to F,
(F2) the identity map id_{Val_x} on Val_x belongs to F for every x,
and then the relation ← is evidently a preorder. The described system (Var, Val, F) satisfying (F1) and (F2) will be called an fd-structure ('fd' for 'functional dependency'), and the preordered set Var its scheme. If ← is antisymmetric and, therefore, an order relation, it is natural to consider the dependences in F as inclusion dependences.
The family F is said to be surjective if the following condition is fulfilled:
(F3) every dependence d^y_x is surjective.
Dependences may be required to be surjective because x, in the case when it depends on y, is supposed to have only those values determined by y. Therefore, (F3) actually requires the value sets Val_x to be non-redundant. Variables x and y in an fd-structure are equivalent (in symbols, x ↔ y) if x ← y and y ← x. As it should be in this case, the dependences d^x_y and d^y_x are mutually inverse bijections. Indeed, for every possible value v of y, the corresponding value of x is d^y_x(v), and this, in turn, forces y to have the value d^x_y(d^y_x(v)), which therefore must equal v. Thus d^x_y d^y_x is the identity map on Val_y. Likewise, d^y_x d^x_y is the identity map on Val_x; this proves the above claim. The system F may be called reflexive if it satisfies a condition which provides the converse:
(F4) if d^y_x is in F and is bijective, then its inverse belongs to F; therefore (d^y_x)^{-1} = d^x_y.
The following natural extension of condition (F4):
(F5) if x, y ← z and there is a function d : Val_y → Val_x such that d^z_x(w) = d(d^z_y(w)) for all w ∈ Val_z, then d belongs to F; therefore d = d^y_x,
will not be needed in this paper. Below, some other natural additional conditions on fd-structures will be discussed, but first we consider several examples of such structures.

Examples
The first two examples are related to information systems in the sense of Pawlak (1981) (more recent sources are, e.g., Khan and Banerjee (2009), Pancerz (2014)), known also as knowledge representation systems, attribute-value systems and information tables. The notion of information system essentially coincides with that of a relation (with unordered columns labeled by attributes) in a relational database, of a many-valued context (Ganter and Wille (1999)) and of a Chu space (e.g., Pratt (1994), (2005); also Wolski and Gomolińska (2013)).
Example 2.1. An information system is a quadruple (Ob, At, V, I), where
- Ob is a nonempty set of objects,
- At is a set of their attributes,
- V is a family (V_a : a ∈ At) of sets; each V_a is considered as the set of possible values (the domain) of a,
- I is a function Ob × At → ⋃(V_a : a ∈ At) such that I(o, a) ∈ V_a; it is called an information function.
Normally, attributes in an information system are thought of as formally independent in the sense that the information function may be quite arbitrary (unlike database relations, where some constraints may be built into the scheme). Accordingly, let F consist of all identity maps on the sets V_a; thus, the set At is trivially ordered: a ← b in At iff a = b. Then the triple (At, V, F) is an fd-structure; of course, (F3), (F4) and (F5) are also trivially fulfilled.
A version of information systems with built-in dependences (and, hence, with a non-trivially ordered scheme) has been discussed by the present author in Cīrulis (2002) and later in Cīrulis (2004).
Example 2.2. Real dependences between attributes of an information system are supposed to be caused by the behavior of the information function on the object set as a whole. Namely, an attribute a is considered as dependent on a set B of attributes if the value of a for every object turns out to be uniquely determined by the values of the attributes in B for this object: whenever I(o, b) = I(o′, b) for all b ∈ B, also I(o, a) = I(o′, a). Subsets of At may be treated as complex attributes. Let At^+ be the collection of all finite such (complex) attributes. For A, B ∈ At^+, put A ← B :≡ a ← B for all a ∈ A. Now, for every o ∈ Ob let φ_o be the function on At such that φ_o(a) = I(o, a) for all a. For every X ∈ At^+, let V_X be the set of all restrictions {φ_o|X : o ∈ Ob}. Then A ← B if and only if there is a mapping d^B_A : V_B → V_A such that d^B_A(φ_o|B) = φ_o|A for every o ∈ Ob. With F^+ the set of all such functions and V^+ := (V_X : X ∈ At^+), the triple (At^+, V^+, F^+) is an fd-structure satisfying (F3), (F4) and, of course, (F5). Notice that the dependency relation on its scheme is governed by the so-called Armstrong axioms (with A ⊆ B, d^B_A is a so-called inclusion dependence).
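The defining condition of Example 2.2 is easy to test on a concrete table. A minimal sketch (the table and attribute names are invented for illustration): a ← B holds iff no two rows agree on B but differ on a.

```python
# Hypothetical information table: each row is an object, keys are attributes.
table = [
    {"dept": "sales", "city": "riga",  "zip": "1010"},
    {"dept": "sales", "city": "riga",  "zip": "1010"},
    {"dept": "it",    "city": "riga",  "zip": "1010"},
    {"dept": "it",    "city": "cesis", "zip": "4101"},
]

def depends_on(a, B, rows):
    """a <- B: the B-values of an object uniquely determine its a-value."""
    seen = {}
    for row in rows:
        key = tuple(row[b] for b in B)
        if key in seen and seen[key] != row[a]:
            return False
        seen[key] = row[a]
    return True

print(depends_on("zip", ["city"], table))   # True:  city determines zip
print(depends_on("dept", ["city"], table))  # False: riga occurs with two depts
```

Reflexivity in the Armstrong sense (a ← B whenever a ∈ B) comes out for free: the key then contains the value being predicted.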
A variant of such an fd-structure arises when some conflict (or concurrency) relation, i.e., a symmetric and irreflexive relation, lives on the set of attributes and only conflict-free subsets of At are allowed to be included in At^+.
In the next example, the role of variables is played by entities which usually are not thought of as such.
Example 2.3. Suppose that I and O are the input set and, respectively, the output set of some automaton, possibly nondeterministic. Let I* be the set of all input words over I, and write x ≤ y to mean that the input word x is a prefix of the input word y. Let, furthermore, O*_x be the set of all words over O of length equal to the length of x. For x ≤ y, define a function d^y_x by the rule that d^y_x(w) is the prefix of w belonging to O*_x, and let F be the set of all such functions. Then the triple (I*, (O*_x)_{x∈I*}, F) is an fd-structure. Its scheme I* is even tree-ordered; the conditions (F3), (F4) and (F5) are obviously fulfilled.
Thus, input words are treated here as variables, and output words of an appropriate length, as their values.Of course, one may choose for variables some set X of more conventional entities (together with a fixed bijective correspondence between I * and X), and redefine the above fd-structure accordingly.
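The dependences of Example 2.3 can be sketched directly; the concrete words below are arbitrary illustrations. Note how closure under composition (F1) holds: restricting an output word to the length of y and then to the length of x is the same as restricting it to the length of x at once.

```python
def d(y, x, w):
    """Dependence d^y_x for input words x <= y (x is a prefix of y):
    maps an output word w of the length of y to its prefix of the length of x."""
    assert y.startswith(x) and len(w) == len(y)
    return w[: len(x)]

# (F1) on an example: d^y_x composed with d^z_y agrees with d^z_x for x <= y <= z.
z, y, x = "abc", "ab", "a"
w = "pqr"  # an output word of the length of z
print(d(y, x, d(z, y, w)), d(z, x, w))  # both give the length-1 prefix "p"
```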
Our last example of an fd-structure is more abstract. It is suggested by some ideas in quantum logic; see Section 4 in Cīrulis (2015) for another realization of them.
Recall that an orthoposet is a bounded poset P equipped with an orthocomplementation p ↦ p⊥, i.e., an order-reversing involution such that 0 is the only lower bound of p and p⊥. Any Boolean algebra is an example of such a poset. Elements p, q of P are said to be orthogonal (in symbols, p ⊥ q) if p ≤ q⊥, and a subset M of P is said to be orthogonal (or an orthosubset) if it does not contain 0 and its elements are mutually orthogonal.
Example 2.4. Given an orthoposet P, we may perceive it as (possibly an approximation to) the set of inner events of some system, device or similar object S, where ≤ serves as a part-of (or inclusion) relation for events, while the orthocomplement of an event p means the complementary event of p. A maximal orthosubset may then be thought of as an exhaustive collection of mutually exclusive events.
We consider any injective function defined on a maximal orthosubset M as an observable parameter of S, and consider such parameters as variables. Let M_x stand for the domain of a variable x, and Val_x for its range (which may be a proper subset of its codomain); then every element of M_x can be interpreted as the event that x has a concrete value. Conversely, any observed value of x signals that a certain event has occurred. If for every event q in M_y there is an event p in M_x such that q ≤ p, then for any v ∈ Val_y there is u ∈ Val_x such that x has the value u whenever y has the value v (and, again, also conversely). In such a situation, we say that x functionally depends on y, and write x ← y; of course, there is then a function d^y_x : Val_y → Val_x such that always u = d^y_x(v). Let Var be some set of variables on P, and let F be the corresponding set of dependence functions; then (Var, (Val_x)_{x∈Var}, F) is an fd-structure, and (F3), (F4), (F5) are also satisfied. Notice that two variables are equivalent in this model if and only if they have a common domain in P.
Ending this subsection, we state a problem: Characterize those fd-structures that can, up to isomorphism, be realized on some orthoposet in the way described in the last example.

Frames
Suppose we are given some fd-structure (Var, Val, F ). Informally, a subset X of Var may be regarded as compatible if values of its elements are "coexistent" or "available simultaneously".For instance, this is certainly the case if there is a variable which all variables in X depend on.More formally, X is compatible whenever it has an upper bound in the preordered set Var.
What about the converse? A compatible subset itself may be thought of as a complex, or compound, variable. However, it is, in a sense, only a virtual one and should be somehow represented by an element of Var; then we could, for example, implicitly take into account also dependence on several variables. A good candidate for an "actual" complex variable representing X in Var is a least upper bound of X.
Notice that a compatible subset of Var represented by its l.u.b. necessarily has an upper bound; this implies that the compatible subsets of variables may be identified with the bounded ones. Thus, we state the following condition:
(V0) the set of variables Var is bounded complete.
This condition is not an essential restriction on Var: it may be considered as a principle which says that we always can, if necessary, "define" or "construct" a new variable which serves as a l.u.b. of one or another compatible set of variables and add it to Var. However, we are forced to accept the fact that different compatible subsets may be represented by the same actual attribute (or equivalent attributes).
Next, it is a plausible intuitive idea that the value set of an actual complex variable has to be built up from the value sets of the component variables in a regular way. Let again X be a compatible subset of Var with a l.u.b. y. Then any element v ∈ Val_y should be completely determined by its components d^y_x(v) with x ∈ X. Put in precise terms, this condition reads as follows:
(V1) for all v, v′ ∈ Val_y, if d^y_x(v) = d^y_x(v′) for every x ∈ X, then v = v′.
In other words, elements of Val_y should be separated by the dependences d^y_x with x ∈ X. If the condition is satisfied, we say that the fd-structure respects least upper bounds (in Var). In particular, if X is empty (recall that then its l.u.b. is a least element of Var), the premise of the above condition is trivially fulfilled, and Val_y must have only one element. Thus, the condition (V1) implies that a variable depends on all variables if and only if it is constant. Another consequence is that the variables (and their values) belonging to any compatible subset correlate with those in another one having the same (least) upper bounds.
In fact, X itself can be provided with an appropriate set Val_X of "complex" values. For x_1, x_2 ∈ X, call values u ∈ Val_{x_1} and v ∈ Val_{x_2} consistent (in symbols, u ∼ v) if d^{x_1}_z(u) = d^{x_2}_z(v) for every variable z that depends on both x_1 and x_2. We further say that an assignment φ for X (i.e., a function which assigns a value from Val_x to every variable x ∈ X) is consistent if φ(x_1) and φ(x_2) are consistent for all x_1, x_2 ∈ X; in particular, then φ(x_1) = d^{x_2}_{x_1}(φ(x_2)) whenever x_1 ← x_2. A consistent assignment may be thought of as a record of "simultaneously possible" values of variables from X. Now, the subset Val_X ⊆ ∏(Val_x : x ∈ X) of all consistent assignments may be considered as the domain of the complex attribute X. For example, if there is some v ∈ Val_y such that φ_v(x) = d^y_x(v) whenever x ∈ X, then φ_v necessarily belongs to Val_X. On the other hand, every value of X should correspond in this sense to an appropriate value of y, which determines the former one. This can be put in precise terms as follows:
(V2) for every consistent assignment φ ∈ Val_X there is v ∈ Val_y such that φ = φ_v.
We say that the family Val, or the fd-structure under consideration itself, is saturated if this condition is fulfilled.
If both of the discussed conditions, (V1) and (V2), are fulfilled, then the family of mappings (d^y_x : x ∈ X) embeds Val_y into the direct product of all domains Val_x with x ∈ X; in fact, it maps Val_y even onto Val_X. We may also consider the "inverse" of this embedding: the mapping Val_X → Val_y defined, for every φ ∈ Val_X and v ∈ Val_y, by sending φ to v if and only if φ = φ_v.
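Conditions (V1) and (V2) together can be illustrated on a toy l.u.b. situation (all data below is hypothetical): by (V1) the joint map v ↦ (d^y_x(v) : x ∈ X) is injective, and by (V2) its image is exactly the set Val_X of consistent assignments.

```python
# Toy situation: y is a l.u.b. of X = {x1, x2}; values of y are pairs,
# and the dependences project a pair onto its components (hypothetical data).
val_y = [(0, 0), (0, 1), (1, 0)]
deps = {"x1": lambda v: v[0], "x2": lambda v: v[1]}

def respects_lubs(val_y, deps):
    """(V1): elements of Val_y are separated by the dependences d^y_x, x in X."""
    records = [tuple(d(v) for d in deps.values()) for v in val_y]
    return len(set(records)) == len(records)

print(respects_lubs(val_y, deps))  # True: the joint map is injective

# Its image plays the role of Val_X; with (V2), every consistent
# assignment for X arises this way from some v in Val_y.
val_X = {tuple(d(v) for d in deps.values()) for v in val_y}
print(sorted(val_X))  # [(0, 0), (0, 1), (1, 0)]
```

Note that (1, 1) is absent from val_X: the record φ with φ(x1) = φ(x2) = 1 is not realized by any value of y, so in this toy structure it would fail (V2) if admitted as consistent.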
With one exception in the next subsection, the full strength of the condition (V0) will not be needed in this paper: some finitary consequences of it are sufficient. Namely, the set Var is assumed to be finitely bounded complete, and every finite nonempty subset of it is assumed to have a g.l.b. These conditions can be equivalently reworded in terms of two binary operations as in the following definition, which goes back to Cīrulis (2004).
Definition 2.5. A (finitary) fd-frame F is an fd-structure where
- every pair of variables bounded above has a l.u.b.,
- every pair of variables has a g.l.b.,
- there are initial (i.e., least) variables,
- F respects least upper bounds and is saturated.
For convenience of notation, we also assume that (i) for any two variables x and y having least upper bounds, one of these bounds is selected and denoted by x ∨ y, (ii) for any x and y, one of their greatest lower bounds is selected and denoted by x ∧ y, (iii) one of the initial variables is selected and denoted by o. Therefore, after identifying equivalent elements, the preordered set of all variables becomes a nearsemilattice in the sense of Cīrulis (2004). Now the definition of consistent values can be simplified: by (F1),
(1) u ∼ v if and only if d^x_{x∧y}(u) = d^y_{x∧y}(v) (u ∈ Val_x, v ∈ Val_y).
Let us turn back to the examples of the preceding subsection. The fd-structure of Example 2.1 is not a frame (for distinct variables do not have any g.l.b.), but it becomes a frame if one adds a new initial attribute (thus making At what is called a flat domain) with a one-element domain and the corresponding dependences. On the other hand, any attributes in a Pawlak-style information system are normally thought of as compatible; this requires the attribute set to be extended as in Example 2.2. The fd-structure of Example 2.2 is an fd-frame, and so is its variant mentioned just after the example. The fd-structure of Example 2.3 is a frame if the empty word is included in I*. Observe that this frame has essentially incompatible sets of variables, and that this feature cannot be avoided in any natural way. As to the last example, we note without proof that, in the case when every maximal orthosubset of P is the domain of a variable, (i) the set of variables proves to be bounded complete if the underlying orthoposet P is orthocomplete, i.e., every orthosubset of P has a join, and (ii) the fd-structure in question is then an fd-frame.

On independency of variables
Let F be any frame satisfying (F3) and (F4). Independence of variables in F can also be characterized in terms of functional dependences. We begin with the following informal description of independence: y is independent from x if, at any given value of x, y can take any of its possible values.
In more technical language, the independence relation ⊥ is characterized as follows: y ⊥ x :≡ every value from Val_y is consistent with every value from Val_x.
Put another way, this means that y ⊥ x iff every assignment for the variables x and y is consistent. The following properties of ⊥ are easily verified.
(b) Suppose that x ← y, y ⊥ z, and take u ∈ Val_x, w ∈ Val_z. Then u = d^y_x(v) for some v by (F3); moreover, since y ⊥ z, d^y_{y∧z}(v) = d^z_{y∧z}(w) by (1). Using (F1) several times, we now obtain u ∼ w. Thus, x ⊥ z.
We can prove more: the relation ⊥ is also additive.
Lemma 2.8. Suppose that x, y ← z and that y is a l.u.b. of Y := {y_i : i ∈ I}. Then y ⊥ x if and only if y_i ⊥ x for all i.
Proof. Necessity follows from item (b) of the previous lemma. Sufficiency: assume that both suppositions of the lemma are fulfilled and that y_i ⊥ x for all i. Without loss of generality, we may also assume that z is a l.u.b. of x and y. Choose any u ∈ Val_x and v ∈ Val_y, and put v_i := d^y_{y_i}(v) for all i ∈ I; then v_i ∼ v_j for all i, j ∈ I. Moreover, v_i ∼ u by the last assumption. Thus, the function φ on {x} ∪ Y defined by φ(x) = u and φ(y_i) = v_i for i ∈ I is a consistent assignment, and there is an element w ∈ Val_z such that u = d^z_x(w) and, for all i ∈ I, v_i = d^z_{y_i}(w); see (V2). By virtue of (F1), now d^y_{y_i}(d^z_y(w)) = d^z_{y_i}(w) = v_i = d^y_{y_i}(v) for all i. As the frame respects least upper bounds, it follows that v = d^z_y(w). Therefore v ∼ u. Since both v and u were arbitrary, it follows that y ⊥ x.
One more useful connection between ⊥ and ← is given by an operation − on Var subject to the conditions
(2) z − x ← z, z − x ⊥ x, and, for every y, if y ← z and y ⊥ x, then y ← z − x.
Therefore, z − x is a greatest (in the sense of ←) variable that depends on z and is independent from x. Let us call the operation − subtraction. This name is suggested by a set-theoretic interpretation of the axioms (2): they are satisfied for arbitrary sets x, y, z if ← stands for set inclusion, ⊥ for disjointness, and − for set subtraction.
Note also that subtraction is stable w.r.t. ↔. Some of the following properties of subtraction will be referred to in Subsection 3.4.
Theorem 2.10. For all x, y, z,
We saw that subtraction can be introduced on Var if this set is bounded complete. However, it is only assumed in Definition 2.5 that Var is finitely bounded complete; consequently, the existence of subtraction should be considered as an additional condition on the scheme of F. So, we shall say that the scheme of an fd-frame is subtractive if a subtraction (i.e., an operation − satisfying (2)) on it exists.

Statements
We consider that the basic sentences about variables of an fd-frame that are of interest for logic are those of the form 'the value of the variable x belongs to a subset U of Val_x'.
Such a sentence may be codified as a pair (x, U). We therefore call a statement any pair (x, U) with U ⊆ Val_x, let S_x stand for the set {x} × B_x of statements with x fixed (here, B_x is the powerset of Val_x), and denote by S the set ⋃(S_x : x ∈ Var) of all statements associated with a frame. Each set S_x inherits the structure of a Boolean algebra from B_x, with −(x, U) := (x, −U), where −U stands for Val_x \ U, i.e., − is here the complementation in the Boolean algebra B_x. Of course, the Boolean algebras S_x are interconnected: whenever x ← y, there is a natural pair of mappings ε^x_y : B_x → B_y and π^y_x : B_y → B_x. The many-sorted algebra B(F) := (B_x, ε^x_y, π^y_x)_{x←y∈Var} will be our starting point in describing the structure of S.
So, ε^x_y is the preimage map and π^y_x is the image map of d^y_x: ε^x_y(U) = (d^y_x)^{-1}[U] for U ∈ B_x, and π^y_x(V) = d^y_x[V] for V ∈ B_y. It is well known, and can be easily checked, that they then form a so-called adjunction (known also as a (contravariant) Galois connection):
(3) π^y_x(V) ⊆ U if and only if V ⊆ ε^x_y(U);
here, ε^x_y is said to be the right adjoint of π^y_x, and π^y_x the left adjoint of ε^x_y. For further reference, we give a list of basic properties of ε^x_y and π^y_x. Actually, only the properties (a)-(d), (g), (h), (m) and (p) in the subsequent proposition are independent: the remaining ones can be derived from them. On the other hand, the condition (3) is equivalent to the conjunction of (c) and (g)-(i).
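The adjunction (3) can be verified exhaustively on a small hypothetical dependence; the sketch below checks π^y_x(V) ⊆ U iff V ⊆ ε^x_y(U) over all pairs of subsets.

```python
from itertools import combinations

def image(d, V):
    """pi^y_x: the image map of d^y_x, acting on subsets of Val_y."""
    return {d[v] for v in V}

def preimage(d, U):
    """eps^x_y: the preimage map of d^y_x, acting on subsets of Val_x."""
    return {v for v in d if d[v] in U}

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# A hypothetical dependence d^y_x : Val_y -> Val_x.
d = {"v1": "u1", "v2": "u1", "v3": "u2"}

# Adjunction (3): pi(V) included in U iff V included in eps(U), for all U, V.
assert all((image(d, V) <= U) == (V <= preimage(d, U))
           for V in subsets(d) for U in subsets(set(d.values())))
print("adjunction verified")
```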
Proposition 3.1. The mappings ε^x_y and π^y_x have the following properties:
Proof. We shall check here only the properties (j), (k), (m) and (p). Notice that (n) and (o) are consequences of (m), and (q) is a consequence of (p), (a) and (e). Also, (l) is a conjunction of (d) and (j).
(j) By (c), then by (h, c, g) and (b, f, m). To prove the reverse inclusion, recall that the frame is saturated, and suppose that v ∈ ε. Due to (j), the pair (ε^x_y, π^y_x) becomes a kind of what is known as an embedding-projection pair: by (l), ε^x_y is a Boolean embedding, while (k) and (o) show that π^y_x can be considered as a projection.

Entailment
Our main goal in this subsection is to find an appropriate consequence relation, or entailment, for statements.
The intuitive (and vague) idea behind the assertion (a "metastatement" about two statements from S) '(x, U) entails (y, V)' is that it should be considered as the constraint: "every time" x takes a value in U, y necessarily has a value in V.
For x and y compatible, the constraint can be formalized as follows: for every φ ∈ Val_{x,y}, φ(y) ∈ V whenever φ(x) ∈ U; due to (V2), (F3) and (F1), it amounts, for an upper bound z of x and y, to the condition that every w ∈ Val_z with d^z_x(w) ∈ U satisfies d^z_y(w) ∈ V. Rewritten in a more compact form, this condition reads as ε^x_z U ⊆ ε^y_z V. In particular, if also x ← y, then it reduces to ε^x_y U ⊆ V. However, we are looking for an entailment applicable to arbitrary statements. Notice that the defining condition ε^x_z U ⊆ ε^y_z V in the prospective definition (4) is, by (3), equivalent to the condition π^z_y ε^x_z U ⊆ V which, due to Proposition 3.1(p), is equivalent to the inclusion ε^{x∧y}_y π^x_{x∧y} U ⊆ V, not involving z anymore. Eventually, we assume the following general definition.
Two other forms of the defining condition are useful. Recall that values
Lemma 3.3. For all statements (x, U) and (y, V),
The left-hand assertion (x, U) ⊨ (y, V) can be rewritten in an expanded form as
Clearly, the two assertions are equivalent.
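The z-free form of the entailment condition lends itself to direct computation. A sketch on hypothetical data, where m plays the role of the meet x ∧ y and the two dictionaries play the roles of d^x_{x∧y} and d^y_{x∧y}:

```python
def image(d, S):
    """Image map (pi) of a dependence given as a dict."""
    return {d[s] for s in S}

def preimage(d, T):
    """Preimage map (eps) of a dependence given as a dict."""
    return {s for s in d if d[s] in T}

# Hypothetical data: m = x ∧ y, with dependences into Val_m.
d_x_m = {"u1": "m1", "u2": "m1", "u3": "m2"}  # d^x_{x∧y} : Val_x -> Val_m
d_y_m = {"v1": "m1", "v2": "m2"}              # d^y_{x∧y} : Val_y -> Val_m

def entails(U, V):
    """(x, U) entails (y, V) iff eps^{x∧y}_y(pi^x_{x∧y}(U)) is included in V."""
    return preimage(d_y_m, image(d_x_m, U)) <= V

print(entails({"u1"}, {"v1"}))  # True:  u1 forces m1, hence v1
print(entails({"u3"}, {"v1"}))  # False: u3 forces m2, hence v2
```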
We now easily obtain several natural properties of entailment.
Theorem 3.4. In S,
ε^{x∧y}_y π^x_{x∧y} U ⊆ V, and in virtue of Proposition 3.1(g, c), on the other hand, by Proposition 3.1(h, c, g), (b, f) and (p), whence, together with the previous inclusion, ε^{x∧z}_z π^x_{x∧z} U ⊆ W, i.e., (x, U) ⊨ (z, W). Among the remaining properties, only (e) requires a comment. Assume that (x, U) ⊨ (y, V), and choose some v ∈ −V and u; the converse now follows due to (d).
Finally, logical equivalence of statements is defined in terms of entailment in the usual way: two statements are logically equivalent if each of them entails the other. We do not consider this relation in more detail (but see Section 4).

Logical operations
Items (d) and (e) of the previous theorem show that we already have a reasonable negation-like operation − on S, while items (f) and (g) show that the operations ∩ and ∪, if considered as partial operations on S, are rather restricted forms of conjunction, resp. disjunction, in the preordered set of all statements. We are now going to demonstrate how they can be extended to an arbitrary pair of statements (x, U) and (y, V) with x and y compatible.
Assume that x ∨ y exists for some x and y, and let
(x, U) ⊓ (y, V) := (x ∨ y, ε^x_{x∨y} U ∩ ε^y_{x∨y} V),  (x, U) ⊔ (y, V) := (x ∨ y, ε^x_{x∨y} U ∪ ε^y_{x∨y} V).
Both operations ⊓ and ⊔ are, evidently, idempotent, commutative and associative. Therefore, these operations are partial semilattice operations extending ∩, resp. ∪; due to Proposition 3.1(d), they even prove to be connected by de Morgan laws. However, (S, ⊓, ⊔) is not a partial lattice, because the operations induce different order relations (still related to ⊨; cf. (5)). Nevertheless, the subsequent theorem, which generalizes items (f) and (g) of Theorem 3.4, shows that the operations ⊓ and ⊔ can be used for characterizing some greatest lower bounds and least upper bounds in S w.r.t. the preorder ⊨.
Theorem 3.5. In (S, ⊨),
(a) provided that y and z are compatible, (x, U) ⊨ (y, V), (z, W) iff (x, U) ⊨ (y, V) ⊓ (z, W),
(b) provided that x and y are compatible, (x, U), (y, V) ⊨ (z, W) iff (x, U) ⊔ (y, V) ⊨ (z, W).
Proof. (a) Assume that y and z are compatible. To prove the 'if' part, it suffices to show that (y, V) ⊓ (z, W) ⊨ (y, V), (z, W). We check the first of the two entailments, by Proposition 3.1(a, g, i); the other one is proved similarly.
We mention an alternative version of disjunction and conjunction on S. These are total operations; moreover, they both are idempotent, commutative, and associative. However, none of them correlates well with entailment: the analogue of Theorem 3.5 for these operations does not hold true. This is the main reason why we do not make use of them.

Quantifiers
Thus, the operations ⊓, ⊔ and − on S may be considered as logical operations with statements, namely, as conjunction, disjunction and negation, respectively. We still need some algebraic facilities for treating quantifiers over statements. An appropriate tool is provided by algebraic logic, where a unary operation Q on a Boolean algebra B is said to be an (existential) quantifier (also, a cylindrification) if it satisfies the three axioms
Q(0) = 0,  p ≤ Q(p),  Q(p ∧ Q(q)) = Q(p) ∧ Q(q);
see, e.g., Halmos (1955), (1956). A quantifier is always a closure operator on B; moreover, its range is a subalgebra of B. These two conditions jointly are characteristic of quantifiers (Theorem 3 in Halmos (1956)), and we take them for a definition of a Boolean quantifier.
Example 3.6. For x ← y, the function Q_x on B_y defined by Q_x(V) := ε^x_y π^y_x V is a quantifier. Indeed, by Proposition 3.1(h), (c, g) and (m), Q_x is a closure operator. By Proposition 3.1(l, k), Q_x preserves ∪, so that the range of Q_x is closed under unions. It is closed also under complementation: if V ∈ Q_x(B_y), then V = Q_x(V′) for some V′ ∈ B_y and, using Proposition 3.1(d, m, d), −V also belongs to the range. Thus, the range of Q_x is a Boolean subalgebra of B_y, and Q_x itself is a quantifier.
Therefore, the statement (y, Q x (V )) may be regarded as resulting from (y, V ) by existential quantification. The following observation allows us to replace it by an equivalent simpler one: (y, Q x (V )) (x, π y x (V )). One direction here follows from Proposition 3.1(m); the other one is a tautology. (The idea that an existential quantifier is expressible by the left adjoint of the preimage map ε x y is well known; in an abstract form, it is common in categorical logic; see, e.g., Pitts (2000).)
To introduce the usual notation for quantifiers, it is necessary to make use of variable subtraction; therefore, in the rest of the section the underlying frame F is supposed to be subtractive. We write ∃ x (y, V ) for Q y−x (y, V ), i.e., we adopt the following definition of the quantifier ∃ x in S: Dually, a universal quantifier on a Boolean algebra is characterized as an interior operator whose range is a subalgebra. To give an example, we have to introduce one more family of mappings µ y x : B y → B x , where x ← y: µ y x is the right adjoint of ε x y . Now, the operation Q x on B y defined by Q x := ε x y µ y x is a universal quantifier. We omit proofs of these claims: they are similar to those related to the operation Q x above, and we adopt the corresponding definition of the universal quantifier ∀ x in S. For illustration, we list a few natural and easily verified properties of existential quantifiers.
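In the same toy setting as the previous sketch (all names are ours, chosen for illustration), the right adjoint µ and the induced universal quantifier can be modelled concretely, checking the adjunction, the interior-operator properties, and the expected de Morgan duality between the two quantifiers:

```python
from itertools import combinations

# Same hypothetical toy dependence as before.
Val_y = frozenset({0, 1, 2, 3})
Val_x = frozenset({0, 1})
d = lambda w: w % 2

def eps(U):                      # preimage map eps^x_y
    return frozenset(w for w in Val_y if d(w) in U)

def pi(V):                       # image map pi^y_x (left adjoint of eps)
    return frozenset(d(w) for w in V)

def mu(V):                       # right adjoint of eps: largest U with eps(U) <= V
    return frozenset(u for u in Val_x if eps({u}) <= V)

def Qe(V):                       # existential quantifier eps . pi
    return eps(pi(V))

def Qu(V):                       # universal quantifier eps . mu
    return eps(mu(V))

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

subsets = powerset(Val_y)
# Adjunction: eps(U) <= V  iff  U <= mu(V).
assert all((eps(U) <= V) == (U <= mu(V))
           for U in powerset(Val_x) for V in subsets)
# Qu is an interior operator: contractive and idempotent.
assert all(Qu(V) <= V for V in subsets)
assert all(Qu(Qu(V)) == Qu(V) for V in subsets)
# Duality: the universal quantifier is the de Morgan dual of the existential one.
assert all(Qu(V) == Val_y - Qe(Val_y - V) for V in subsets)
```

The final assertion illustrates the usual way universal quantifiers are recovered from existential ones, which is also how they are treated later in the section.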
Theorem 3.7. In S, Proof. (a) We have to prove that ε , and the inclusion reduces, by Proposition 3.1(a,f), to the tautology π An inspection of this proof shows that the considered properties of existential quantifiers are connected with certain properties of independency and subtraction of variables. Other desirable properties of quantification may call for additional specific assumptions on the structure of Var and, consequently, of F. For example, if we want the statements ∃ y−x (y, V ) and (x, π y x V ) to be logically equivalent, then the equivalence y − (y − x) ↔ x should be fulfilled in Var. This latter equivalence proves to be a consequence of (F6) but, seemingly, cannot be derived from the definition (2) without any additional assumption on the relations ← or ⊥.
Properties of universal quantifiers on S will not be considered here in detail, for they can be expressed in terms of existential quantifiers in the usual way.
Proposition 3.8. For all x, y ∈ Var and Proof. In view of Theorem 3.4(d), it suffices to show that −π y

This ends our construction of the statement algebra of an fd-frame.

Conclusion
We have noted in the Introduction that certain semantic anomalies may appear for logical languages taking account of functional dependency between variables, when the set of total assignments accessible in a model is restricted. In this paper, another, purely algebraic and assignment-free semantics for such languages is presented. The basic concept is that of an fd-frame, which incorporates variables, their value sets, and functional dependences between them. Associated with a frame is an algebraic structure S (the algebra of statements), the elements of which are interpreted as statements 'a variable x has a value in a subset U '; this algebra provides an entailment relation on S (a kind of consequence relation), negation, partial conjunction and disjunction, and both quantifiers over statements. However, as noticed in subsection 3.4, the properties of quantifiers depend on the structure of the set of functional dependences of the frame; this point requires further investigation.
The equivalence relation corresponding to the preorder may be regarded as logical equivalence of statements in a frame. It can be verified that it is even a congruence relation of S; we thus could build up the quotient algebra of S modulo this relation. Recall the analogous constructions in the traditional propositional and first-order calculi, where identifying logically equivalent formulas gives rise to the so-called Lindenbaum-Tarski algebra of abstract propositions, which, in the case of classical logics, is always a Boolean algebra. We postpone the construction of the Lindenbaum-Tarski algebra for S to another paper, and note here only that it, in particular, turns out to be an orthoposet. It could be possible to approach in this way the representation problem stated just after Example 2.4.
We now describe a logical language for which fd-frames provide a semantics; so, it is a language appropriate for speaking about this semantic framework. Let L be a first-order language with
- a denumerable set Var of variables,
- for each n-tuple x := (x 1 , x 2 , . . ., x n ) of mutually distinct variables, a set P x (possibly empty) of n-place predicate symbols,
- logical connectives ¬, ∧, ∨ and quantifier symbols ∃, ∀.
Atomic formulas of L are those of the form P x with P ∈ P x (therefore, L may be treated, like Alechina's L fd , as a many-sorted language with just one variable of each sort); other formulas of L are formed from these by applying connectives and quantifiers in the usual way. Let Frm stand for the set of all formulas.
If Var is really the scheme of an fd-frame, then it may have incompatible subsets of variables, and then not every formula in Frm can be recognized as meaningful. We define recursively the set MFrm of formulas meaningful relative to the scheme, and the type τ (α) of every meaningful formula α. Roughly, the type of a formula is the join of those variables on which it depends.
- if α = P (x 1 , x 2 , . . ., x n ) and the subset {x 1 , x 2 , . . ., x n } of Var is compatible, then α ∈ MFrm and τ (α) := x 1 ∨ x 2 ∨ . . . ∨ x n . Therefore, all formulas are meaningful in the case when every finite subset of variables is compatible. Now, an fd-model of L is a system A consisting of a subtractive frame F (with Var as its set of variables) and a subset |P | of Val τ (α) for every atomic formula α := P (x 1 , x 2 , . . ., x n ) from MFrm.
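The recursive type computation can be sketched as follows; we model variables as complex attributes, i.e. finite sets of atomic attribute names with join = union, as in the database example earlier in the paper. The class names, the restriction to binary connectives, and the omission of the quantifier clause (which needs variable subtraction) are all our simplifications.

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Atom:                      # an atomic formula P(x1, ..., xn)
    pred: str
    vars: Tuple[frozenset, ...]

@dataclass(frozen=True)
class Conn:                      # a binary connective applied to two formulas
    left: 'Formula'
    right: 'Formula'

Formula = Union[Atom, Conn]

def tau(a: Formula) -> frozenset:
    """Type of a formula: the join of the variables it depends on."""
    if isinstance(a, Atom):
        t = frozenset()
        for x in a.vars:
            t = t | x            # join of variables = union of attribute sets
        return t
    return tau(a.left) | tau(a.right)

x = frozenset({'name'})
y = frozenset({'name', 'dept'})
alpha = Conn(Atom('P', (x,)), Atom('Q', (y,)))
assert tau(alpha) == frozenset({'name', 'dept'})
```

Meaningfulness would additionally require checking compatibility of the variables occurring in each atom, which this sketch leaves out.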
Finally, we can introduce the following basic semantical notions:
- a formula α is true in an fd-model A if it is meaningful and V A (α) = Val τ (α) ,
- α entails β in A if both α and β are meaningful and V A (α) V A (β),
- a formula α is valid if it is true in every fd-model where it is meaningful,
- α logically implies β if α entails β in every fd-model where both α and β are meaningful.
This ends the description of the semantics of L. The language together with this semantics gives rise to a logic, which may be called a logic of functional dependency. Investigation of the metalogical properties of this logic, including the development of an appropriate deductive system, and its comparison with Alechina's logic mentioned in the Introduction, is a natural task for further work.

− a nonempty set Var of variables,
− a family Val := (Val x ) x∈Var of nonempty sets of values, one for each variable,
− a family F := (d y x : Val y → Val x ) (x,y)∈D of functions (dependences), where D is some subset of Var 2 .
Lemma 2.6. If x, y ← z, then y ⊥ x if and only if for every u ∈ Val x and v ∈ Val y there is w ∈ Val z such that u = d z x (w) and v = d z y (w).

Proof. Assume that x, y ← z; then x and y have a l.u.b. Next, y ⊥ x iff any value v ∈ Val y is consistent with any value u ∈ Val x . As the frame is saturated, this is the case iff u = d x∨y x (w 0 ) and v = d x∨y y (w 0 ) for an appropriate w 0 ∈ Val x∨y . As d z x∨y is surjective, there is w ∈ Val z such that w 0 = d z x∨y (w). Thus u = d z x (w) and v = d z y (w) by (F1).
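The characterization of Lemma 2.6 is easy to illustrate in a toy frame fragment of our own construction: take Val_z to be the full product of Val_x and Val_y with the coordinate projections as dependences. Then y is independent of x, and the joint-surjectivity condition of the lemma holds; removing even one pair from Val_z breaks it.

```python
from itertools import product

# Hypothetical frame fragment: Val_z = Val_x x Val_y with projections.
Val_x = frozenset({0, 1})
Val_y = frozenset({'a', 'b', 'c'})
Val_z = frozenset(product(Val_x, Val_y))

d_zx = lambda w: w[0]            # the dependence d^z_x
d_zy = lambda w: w[1]            # the dependence d^z_y

# Lemma 2.6's condition: every pair (u, v) is realized by some w in Val_z.
assert all(any(d_zx(w) == u and d_zy(w) == v for w in Val_z)
           for u in Val_x for v in Val_y)

# Cutting Val_z down so that some pair is lost destroys the condition,
# reflecting a genuine functional constraint between x and y.
Val_z_dep = Val_z - {(0, 'a')}
assert not all(any(d_zx(w) == u and d_zy(w) == v for w in Val_z_dep)
               for u in Val_x for v in Val_y)
```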
(c) Evident. (d) We know that always o ← x. If x ⊥ x, then the set Val x is a singleton; hence, d x o is a bijection and, by (F4), x ← o; thus, x ↔ o. The converse follows from (c). (e) By (b), y ⊥ z ∧ x whenever y ⊥ x. Conversely, suppose that y ← z and y ⊥ z ∧ x. If v ∈ Val y and u ∈ Val x , then Proof. (a) By the definition of z − x and, in the opposite direction, Lemma 2.7(b), as z − x ⊥ x. (b) By (2) and Theorem 2.9. (c) Suppose that x ← y. By (2), z − y ← z, z − y ⊥ y and also z − y ← z − x, since z − y ⊥ x by Lemma 2.7(b). (d) By definition of subtraction, always z − x ← z. Further, z ← z − x if and only if z ⊥ x: see (a).
U . This means that d y x∧y (v) = d x x∧y (u) for some u ∈ U . Then there is w 0 ∈ Val x∨y such that u = d x∨y x (w 0 ) and v = d x∨y y (w 0 ). By (F3), w 0 = d x∨y z (w) for some w ∈ Val z ; hence, u = d z x (w) and v = d z y (w); see (F1). It follows that w ∈ ε x z U and v ∈ π z y ε x z U . Therefore, ε x∧y y π x x∧y U ⊆ π z y ε x z U , as needed.
For all x, y, y ⊥ x if and only if x ∧ y ↔ o. Proof. By (1), y ⊥ x if and only if d y x∧y (v) = d x x∧y (u) for all u ∈ Val x and v ∈ Val y . Sufficiency of the condition now follows from the fact that Val o is a singleton set (as F respects upper bounds; see the preceding subsection). Necessity: the dependences d x x∧y and d y x∧y are surjective, so independence of y from x implies that Val x∧y must be a singleton and, consequently, x ∧ y is an initial variable (see (F4)).

Let us return to the examples of frames from subsection 2.2. In Example 2.2, complex attributes A and B are independent if and only if, for every o 1 , o 2 ∈ Ob, there is o ∈ Ob such that φ o |A = φ o1 |A and φ o |B = φ o2 |B. In Example 2.3, two input sequences x and y are independent if and only if they have only the trivial prefix in common. In Example 2.4, it turns out that variables x and y are independent if and only if {1} is the single orthosubset of which both M x and M y are refinements. (An orthosubset R of P is a refinement of an orthosubset Q if every element of Q is the join of a subset of R.)

We now introduce a new, independency-based operation on Var. Assume for a moment that the set of variables is bounded complete. Consider the set [x] z := {y ← z : y ⊥ x}. Being bounded by z, it has a l.u.b.; we denote one of these by z − x. By virtue of Lemma 2.8, z − x belongs to [x] z and, hence, is greatest in this set:

(F6) if, for all z, z ⊥ x whenever z ⊥ y, then x ← y.