7.1 Introduction
Consider the kinship dataset shown below. The situation here is the same as that described in Chapter 2. Art is the parent of Bob and Bea. Bob is the parent of Cal and Cam. Bea is the parent of Cat and Coe.
parent(art,bob) 
parent(art,bea) 
parent(bob,cal) 
parent(bob,cam) 
parent(bea,cat) 
parent(bea,coe) 
Suppose now that we wanted to express information about the grandparent relation as well as the parent relation. As illustrated in the preceding chapter, we can do this by adding facts to our dataset. In this case, we would add the facts shown below. Art is the grandparent of Cal and Cam and Cat and Coe.
grandparent(art,cal) 
grandparent(art,cam) 
grandparent(art,cat) 
grandparent(art,coe) 
Unfortunately, doing things this way is wasteful. The grandparent relation can be defined in terms of the parent relation, and so storing grandparent data as well as parent data is redundant.
A better alternative is to write rules to encode such definitions and to use these rules to compute the defined relations when needed. As we shall see in this chapter, we can write such definitions using rules similar to those that we used to define goal relations in Chapter 3. For example, in the case above, rather than adding grandparent facts to our dataset, we can write the following rule, safe in the knowledge that we can use the rule to compute our grandparent data.
grandparent(X,Z) :- parent(X,Y) & parent(Y,Z)
In what follows, we distinguish two different types of relations: base relations and view relations. We define base relations by writing facts in a dataset, and we define view relations by writing rules in a ruleset. In our example, parent is a base relation, and grandparent is a view relation.
Given a dataset defining our base relations and a ruleset defining our view relations, we can use automated reasoning tools to derive facts about our view relations. For example, given the preceding facts about the parent relation and our rule defining the grandparent relation, we can compute the facts about the grandparent relation.
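This computation is easy to sketch in ordinary code. The following Python fragment (a sketch, not the automated reasoning tools discussed here) derives the grandparent facts by joining the parent relation with itself, exactly as the rule prescribes:

```python
# Base relation: parent, stored as a set of (parent, child) pairs.
parent = {("art", "bob"), ("art", "bea"),
          ("bob", "cal"), ("bob", "cam"),
          ("bea", "cat"), ("bea", "coe")}

# View relation: grandparent(X,Z) :- parent(X,Y) & parent(Y,Z)
# Computed by joining parent with itself on the middle argument.
grandparent = {(x, z)
               for (x, y1) in parent
               for (y2, z) in parent
               if y1 == y2}

print(sorted(grandparent))
# → [('art', 'cal'), ('art', 'cam'), ('art', 'cat'), ('art', 'coe')]
```

Note that nothing about grandparent is stored; it is recomputed from parent on demand, which is exactly the economy argued for below.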
Using rules to define view relations has multiple advantages over encoding those relations in the form of datasets. First of all, as we have just seen, there is economy: if view relations are defined in terms of rules, we do not need to store as many facts in our datasets. Second, there is less chance of things getting out of sync, e.g. if we change the parent relation and forget to change the grandparent relation. Third, view definitions work for any number of objects; they even work for applications with infinitely many objects (e.g. the integers) without requiring infinite storage.
In this chapter, we introduce the syntax and semantics of view definitions, and we describe the important notion of stratification. In subsequent chapters, we look at many, many examples of using rules to define views. And, in Chapters 11 and 12, we look at some practical techniques for using rules to compute view relations.
7.2 Syntax
The syntax of view definitions is almost identical to that of queries as described in Chapter 3. The various types of constants are the same, and the notions of term and atom and literal are also the same. The main difference comes in the syntax of rules.
As before, a rule is an expression consisting of a distinguished atom, called the head, and a conjunction of zero or more literals, called the body. The literals in the body are called subgoals. In what follows, we write rules as in the example shown below.

r(X,Y) :- p(X,Y) & ~q(Y)

Here, r(X,Y) is the head; p(X,Y) & ~q(Y) is the body; and p(X,Y) and ~q(Y) are subgoals.
Despite these similarities, there are two important differences between query rules and rules used in view definitions. (1) In writing query rules, we use a single generic predicate (e.g. goal) in the heads of all of our query rules. By contrast, in view definitions, we use predicates for the relations we are defining (e.g. r in the example above). (2) In writing query rules, the subgoals of our rules can mention only predicates for relations described in the dataset. By contrast, in view definitions, subgoals can contain view predicates as well as predicates for base relations.
One benefit of the more flexible syntax in view definitions is that we can define multiple relations in a single set of rules. For example, the following rules define both the f relation and the m relation in terms of p and q.
f(X,Y) :- p(X,Y) & q(X)
m(X,Y) :- p(X,Y) & ~q(X)
A second benefit is that we can use view relations in defining other view relations. For example, in the following rule, we use the view relation f in our definition of g.
g(X,Z) :- f(X,Y) & p(Y,Z)
A third benefit is that views can be used in their own definitions, thus allowing us to define relations recursively. For example, the following rules define a to be the transitive closure of p.
a(X,Z) :- p(X,Z)
a(X,Z) :- p(X,Y) & a(Y,Z)
Unfortunately, our relaxed language allows for rulesets with some unpleasant properties. To avoid these problems, it is good practice to comply with some syntactic restrictions on our datasets and rulesets, viz. compatibility, stratification, and safety (which we describe later).
A ruleset is compatible with a dataset if and only if (1) all symbols shared between the dataset and the ruleset are of the same type (symbol, constructor, predicate), (2) all constructors and predicates have the same arity, and (3) none of the predicates in the dataset appear in the heads of any rules in the ruleset.
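Parts of this definition can be checked mechanically. The following Python sketch is a simplification that ignores symbol types and constructors; it tests only conditions (2) and (3), and assumes facts and rule atoms are represented as (predicate, argument-tuple) pairs:

```python
def compatible(dataset, ruleset):
    """Simplified compatibility check: consistent arities, and no
    dataset predicate in any rule head. Types and constructors
    are ignored in this sketch."""
    arity = {}

    def check(pred, args):
        if pred in arity and arity[pred] != len(args):
            return False            # condition (2): inconsistent arity
        arity[pred] = len(args)
        return True

    data_preds = set()
    for pred, args in dataset:
        if not check(pred, args):
            return False
        data_preds.add(pred)
    for head, body in ruleset:
        for pred, args in [head] + body:
            if not check(pred, args):
                return False
        if head[0] in data_preds:
            return False            # condition (3): base predicate in a head
    return True

dataset = [("parent", ("art", "bob"))]
good = [(("grandparent", ("X", "Z")),
         [("parent", ("X", "Y")), ("parent", ("Y", "Z"))])]
bad = [(("parent", ("X", "Y")), [("parent", ("Y", "X"))])]
print(compatible(dataset, good), compatible(dataset, bad))  # → True False
```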
7.3 Semantics
The semantics of view definitions is more complicated than the semantics of queries due to the possible occurrence of view predicates in subgoals; and, consequently, we take a slightly different approach.
To define the result of applying a set of view definitions to a dataset, we first combine the facts in a dataset with the rules defining our views into a joint set of facts and rules, hereafter called a closed logic program, and we then define the extension of that closed logic program as follows.
The Herbrand universe for a closed logic program is the set of all ground terms that can be formed from the symbols and constructors in the program. For a program without constructors, the Herbrand universe is finite (i.e. just the symbols). For a program with constructors, the Herbrand universe is infinite (i.e. the symbols and all compound terms that can be formed from those symbols).
The Herbrand base for a closed logic program is the set of all atoms that can be formed from the constants in the program. Said another way, it is the set of all facts of the form r(t_{1},...,t_{n}), where r is an n-ary predicate and t_{1}, ... , t_{n} are ground terms.
An interpretation for a closed logic program is an arbitrary subset of the Herbrand base for the program. As with datasets, the idea here is that the factoids in the interpretation are assumed to be true, and those that are not included are assumed to be false.
A model of a closed logic program is an interpretation that satisfies the program. We define satisfaction in two steps: we first deal with the case of ground rules, and we then deal with arbitrary rules.
An interpretation Γ satisfies a ground atom φ if and only if φ is in Γ. Γ satisfies a ground negation ~φ if and only if φ is not in Γ. Γ satisfies a ground rule φ :- φ_{1} & ... & φ_{n} if and only if Γ satisfies φ whenever it satisfies φ_{1}, ... , φ_{n}.
An instance of a rule in a closed logic program is a rule in which all variables have been consistently replaced by terms from the Herbrand universe, i.e. the set of ground terms that can be formed from the program's vocabulary. As before, consistent replacement means that, if an occurrence of a variable in a sentence is replaced by a given term, then all occurrences of that variable in that sentence are replaced by the same term.
Using the notion of instance, we can define the notion of satisfaction for arbitrary closed logic programs (with or without variables). An interpretation Γ satisfies an arbitrary closed logic program Ω if and only if Γ satisfies every ground instance of every sentence in Ω.
As an example of these concepts, consider the dataset shown below.
p(a,b) 
p(b,c) 
p(c,d) 
p(d,c) 
And let's assume we have the following view definition.
r(X,Y) :- p(X,Y) & ~p(Y,X)
The following interpretation satisfies the closed logic program consisting of this dataset and ruleset. All of the facts in the dataset are included in the interpretation, and every conclusion that is required by our rule is included as well.
p(a,b) 
p(b,c) 
p(c,d) 
p(d,c) 
r(a,b) 
r(b,c) 
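We can verify this claim mechanically. The following Python sketch checks that an interpretation contains the dataset and satisfies every ground instance of the rule r(X,Y) :- p(X,Y) & ~p(Y,X) over the Herbrand universe {a, b, c, d}:

```python
universe = ["a", "b", "c", "d"]
dataset = {("p", "a", "b"), ("p", "b", "c"), ("p", "c", "d"), ("p", "d", "c")}

def is_model(interp):
    """True iff interp contains the dataset and satisfies every
    ground instance of r(X,Y) :- p(X,Y) & ~p(Y,X)."""
    if not dataset <= interp:                        # must contain the dataset
        return False
    for x in universe:
        for y in universe:
            body = ("p", x, y) in interp and ("p", y, x) not in interp
            if body and ("r", x, y) not in interp:   # head must follow
                return False
    return True

model = dataset | {("r", "a", "b"), ("r", "b", "c")}
print(is_model(model))    # the interpretation shown above → True
print(is_model(dataset))  # missing the rule's conclusions → False
```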
By contrast, the following interpretations do *not* satisfy the program. The first is missing conclusions required by the rule; the second contains the rule's conclusions but is missing the facts from the dataset; and the third satisfies the rule but does not contain all of the facts from the dataset.

p(a,b)
p(b,c)
p(c,d)
p(d,c)

r(a,b)
r(b,c)

p(a,b)
p(b,c)
p(c,d)
r(a,b)
r(b,c)
r(c,d)

On the other hand, the model shown earlier is not the only interpretation that satisfies the program. In general, a closed logic program can have more than one model, which means that there can be more than one way to satisfy the rules in the program. The following interpretations also satisfy our closed logic program.
p(a,b) 
p(b,c) 
p(c,d) 
p(d,c) 
r(a,b) 
r(b,c) 
r(c,d) 

p(a,b) 
p(b,c) 
p(c,d) 
p(d,c) 
r(a,b) 
r(b,c) 
r(d,c) 

p(a,b) 
p(b,c) 
p(c,d) 
p(d,c) 
r(a,b) 
r(b,c) 
r(c,d) 
r(d,c) 

This seems odd in that there is no reason to include r(c,d) or r(d,c) in our interpretation. On the other hand, given our definition of satisfaction, there is no reason not to include them.
The reason that this seems wrong is that we normally want our definitions to be if and only if. We want to include among our conclusions only those facts that must be true. (1) All factoids in our dataset must be true. (2) All factoids required by our rules must be true. (3) All other factoids should be excluded.
This is the classic definition of what is known as logical entailment. A factoid is logically entailed by a closed logic program if and only if it is true in every model of the program, i.e. the set of conclusions is the intersection of all models of the program.
One way to ensure logical entailment is to take the intersection of all interpretations that satisfy our program. This guarantees that we get only those conclusions that are true in every model. For example, if we took the intersection of the three models shown above, we would get our original model.
Another approach is to concentrate on minimal models. A model Γ of a logic program Ω is minimal if and only if there is no proper subset of Γ that is a model for Ω. If there is just one minimal model of a closed logic program, then minimality guarantees logical entailment. For example, the first model given above is minimal, and every factoid in that model must be present in every model of the program.
Many closed logic programs have unique minimal models. For example, a closed logic program that does not contain any negations has one and only one minimal model. Unfortunately, closed logic programs with negation can have more than one minimal model.
One way of eliminating ambiguities like this is to concentrate on programs that are semipositive or programs that are stratified with respect to negation. We define these types of programs and discuss their semantics in the next two sections.
7.4 Semipositive Programs
A semipositive program is one in which negations apply only to base relations, i.e. there are no subgoals with negated views.
The semantics of a semipositive program can be formalized by defining the result of applying the view definitions in the program to the facts in the program's dataset. We use the word extension to refer to the set of all facts that can be "deduced" in this way.
An instance of an expression (atom, literal, or rule) is one in which all variables have been consistently replaced by ground terms. For example, if we have a language with object constants a and b, then r(a) : p(a,a), r(a) : p(a,b), r(b) : p(b,a), and r(b) : p(b,b) are all instances of r(X) : p(X,Y).
Given this notion, we define the result of the application of a single rule to a dataset as follows. Given a rule r and a dataset Δ, let v(r,Δ) be the set of all ψ such that (1) ψ is the head of an arbitrary instance of r, (2) every positive subgoal in the instance is a member of Δ, and (3) no negative subgoal in the instance is a member of Δ.
Using this notion, we define the result of repeatedly applying a semipositive program Ω to a dataset Δ as follows. Consider the sequence of datasets defined recursively by Γ_{0} = Δ and Γ_{n+1} = ∪_{r∈Ω} v(r, Γ_{0} ∪ ... ∪ Γ_{n}). The closure of Ω on Δ is the union of the datasets in this sequence, i.e. C(Ω,Δ) = ∪_{i} Γ_{i}.
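The definitions of v(r,Δ) and C(Ω,Δ) can be sketched directly in Python. The representation here is an assumption of this sketch, not the book's: an atom is a tuple such as ("p", "X", "b"), capitalized strings are variables, and a rule is a triple of head, positive subgoals, and negative subgoals:

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(atom, fact, env):
    """Extend env so that atom instantiates to fact, or return None."""
    if atom[0] != fact[0] or len(atom) != len(fact):
        return None
    env = dict(env)
    for t, v in zip(atom[1:], fact[1:]):
        if is_var(t):
            if env.get(t, v) != v:
                return None
            env[t] = v
        elif t != v:
            return None
    return env

def subst(atom, env):
    return (atom[0],) + tuple(env.get(t, t) for t in atom[1:])

def v(rule, data):
    """Heads of all instances of rule whose positive subgoals are in
    data and whose negative subgoals are not."""
    head, pos, neg = rule
    envs = [{}]
    for sg in pos:
        envs = [e2 for e in envs for f in data
                for e2 in [match(sg, f, e)] if e2 is not None]
    return {subst(head, e) for e in envs
            if all(subst(sg, e) not in data for sg in neg)}

def closure(rules, data):
    """C(Ω,Δ): apply v repeatedly until nothing new is produced."""
    result = set(data)
    while True:
        new = set().union(*(v(r, result) for r in rules)) - result
        if not new:
            return result
        result |= new

# Example: a as the transitive closure of p.
rules = [(("a", "X", "Z"), [("p", "X", "Z")], []),
         (("a", "X", "Z"), [("p", "X", "Y"), ("a", "Y", "Z")], [])]
print(len(closure(rules, {("p", "a", "b"), ("p", "b", "c")})))  # → 5
```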
To illustrate our definition, let's start with a dataset describing a small directed graph. In the sentences below, we use the edge predicate to record the arcs of one particular graph.
edge(a,b) 
edge(b,c) 
edge(c,d) 
edge(d,c) 
Now, let's write some rules defining various relations on the nodes in our graph. Here, the relation p is true of nodes with an outgoing arc. The relation q is true of two nodes if and only if there is an edge from the first to the second or an edge from the second to the first. The relation r is true of two nodes if and only if there is an edge from the first to the second and an edge from the second to the first. The relation s is the transitive closure of the edge relation.
p(X) :- edge(X,Y)
q(X,Y) :- edge(X,Y)
q(X,Y) :- edge(Y,X)
r(X,Y,Z) :- edge(X,Y) & edge(Y,Z)
s(X,Y) :- edge(X,Y)
s(X,Z) :- edge(X,Y) & s(Y,Z)
We start the computation by initializing our dataset to the edge facts listed above.
edge(a,b) 
edge(b,c) 
edge(c,d) 
edge(d,c) 
Looking at the p rule and matching its subgoal to the data in our dataset in all possible ways, we see that we can add the following facts. In this case, every node in our graph has an outgoing edge, so there is one p fact for each node.

p(a)
p(b)
p(c)
p(d)
Looking at the q rules and matching their subgoals to the data in our dataset in all possible ways, we see that we can add the following facts. In this case, we end up with the symmetric closure of the original graph.
q(a,b) 
q(b,a) 
q(b,c) 
q(c,b) 
q(c,d) 
q(d,c) 
Looking at the r rule and matching its subgoals to the data in our dataset in all possible ways, we see that we can add the following facts.

r(a,b,c)
r(b,c,d)
r(c,d,c)
r(d,c,d)
Finally, looking at the first rule for s and matching its subgoals to the data in our dataset in all possible ways, we see that we can add the following facts.
s(a,b) 
s(b,c) 
s(c,d) 
s(d,c) 
However, we are not quite done. With the facts just added, we can use the second rule to derive the following additional data.
s(a,c) 
s(b,d) 
s(c,c) 
s(d,d) 
Having done this, we can use the second s rule once again and derive the following additional fact.

s(a,d)
At this point, none of the rules when applied to this collection of data produces any results that are not already in the set, and so the process terminates. The resulting collection of 27 facts is the extension of this program.
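As a sanity check on this computation, the whole derivation can be replayed with Python set comprehensions (a sketch of this particular example, not the general closure construction):

```python
edge = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "c")}

# p(X) :- edge(X,Y)
p = {(x,) for (x, y) in edge}

# q(X,Y) :- edge(X,Y) and q(X,Y) :- edge(Y,X): the symmetric closure.
q = edge | {(y, x) for (x, y) in edge}

# r(X,Y,Z) :- edge(X,Y) & edge(Y,Z): paths of length two.
r = {(x, y, z) for (x, y) in edge for (y2, z) in edge if y == y2}

# s: the transitive closure of edge, computed to a fixpoint.
s = set(edge)
while True:
    new = {(x, z) for (x, y) in edge for (y2, z) in s if y == y2} - s
    if not new:
        break
    s |= new

total = len(edge) + len(p) + len(q) + len(r) + len(s)
print(len(p), len(q), len(r), len(s), total)  # → 4 6 4 9 27
```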
7.5 Stratified Programs
We say that a set of view definitions is stratified if and only if its rules can be partitioned into strata in such a way that (1) every stratum contains at least one rule, (2) the rules defining relations that appear in positive subgoals of a rule appear in the same stratum as that rule or in some lower stratum, and (3) the rules defining relations that appear in negative subgoals of a rule occur in some lower stratum (not the same stratum).
As an example, assume we have a unary relation p that is true of all of the objects in some application area, and assume that q is an arbitrary binary relation. Now, consider the ruleset shown below. The first two rules define r to be the transitive closure of q. The third rule defines s to be the complement of the transitive closure.
r(X,Y) :- q(X,Y)
r(X,Z) :- q(X,Y) & r(Y,Z)
s(X,Y) :- p(X) & p(Y) & ~r(X,Y)
This is a complicated ruleset, yet it is easy to see that it is stratified. The first two rules contain no negations at all, and so we can group them together in our lowest stratum. The third rule has a negated subgoal containing a relation defined in our lowest stratum, and so we put it into a stratum above this one, as shown below. This ruleset satisfies the conditions of our definition and hence it is stratified.
Stratum  Rules
2        s(X,Y) :- p(X) & p(Y) & ~r(X,Y)
1        r(X,Y) :- q(X,Y)
         r(X,Z) :- q(X,Y) & r(Y,Z)
By comparison, consider the following ruleset. Here, the relation r is defined in terms of p and q, and the relation s is defined in terms of r and the negation of s.
r(X,Y) :- p(X) & p(Y) & q(X,Y)
s(X,Y) :- r(X,Y) & ~s(Y,X)
There is no way of dividing the rules of this ruleset into strata in a way that satisfies the definition above. Hence, the ruleset is not stratified.
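Whether a ruleset is stratified can be tested mechanically. The following Python sketch assumes each rule is summarized by its head predicate together with the predicates of its positive and negative subgoals; it assigns strata by iteration and reports failure when a negation occurs inside a recursive cycle:

```python
def stratify(rules):
    """Return a predicate-to-stratum map, or None if the ruleset is
    not stratified. Each rule is (head_pred, pos_preds, neg_preds)."""
    preds = {r[0] for r in rules} | {p for r in rules for p in r[1] + r[2]}
    stratum = {p: 1 for p in preds}
    # If the ruleset is stratified, strata stabilize within |preds|
    # passes; unbounded growth means negation occurs in a cycle.
    for _ in range(len(preds) * len(preds)):
        changed = False
        for head, pos, neg in rules:
            req = max([stratum[p] for p in pos] +
                      [stratum[p] + 1 for p in neg] + [1])
            if req > stratum[head]:
                stratum[head] = req
                changed = True
        if not changed:
            return stratum
    return None

# The stratified ruleset above: r is recursive, s negates r.
ok = [("r", ["q"], []), ("r", ["q", "r"], []), ("s", ["p", "p"], ["r"])]
# The unstratified ruleset: s negates itself.
bad = [("r", ["p", "p", "q"], []), ("s", ["r"], ["s"])]
print(stratify(ok))       # a stratum map with s above r
print(stratify(bad))      # → None
```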
The problem with unstratified rulesets is that there is a potential ambiguity. As an example, consider the rules above and assume that our dataset also included the facts p(a), p(b), q(a,b), and q(b,a). From these facts, we can conclude r(a,b) and r(b,a) are both true. So far, so good. But what can we say about s? If we take s(a,b) to be true and s(b,a) to be false, then the second rule is satisfied. If we take s(a,b) to be false and s(b,a) to be true, then the second rule is again satisfied. The upshot is that there is ambiguity about s. By concentrating exclusively on logic programs that are stratified, we avoid such ambiguities.
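We can confirm this ambiguity mechanically. The following sketch checks whether a candidate extension for s satisfies every ground instance of the second rule, given that r(a,b) and r(b,a) hold:

```python
r = {("a", "b"), ("b", "a")}

def satisfies(s):
    """True iff s satisfies every ground instance of
    s(X,Y) :- r(X,Y) & ~s(Y,X)."""
    return all((x, y) in s
               for (x, y) in r if (y, x) not in s)

print(satisfies({("a", "b")}))  # s(a,b) true, s(b,a) false → True
print(satisfies({("b", "a")}))  # s(b,a) true, s(a,b) false → True
```

Both assignments satisfy the rule, so nothing in the program determines which one is right; that is exactly the ambiguity stratification rules out.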
Although it is sometimes possible to stratify the rules in more than one way, this does not cause any problems. So long as a program is stratified with respect to negation, the definition just given produces the same extension no matter which stratification one uses.
Finally, we define the extension of a ruleset Ω on a dataset Δ as follows. The definition relies on a decomposition of Ω into strata Ω_{1}, ... , Ω_{k}. Since there are only finitely many rules in a closed logic program and every stratum must contain at least one rule, there are only finitely many strata to consider (though the datasets themselves might be infinite). With that in mind, let Δ_{0} = Δ, and let Δ_{n+1} = Δ_{n} ∪ C(Ω_{n+1},Δ_{n}). The extension of a program with k strata is just Δ_{k}.
The extension of any closed logic program without constructors must be finite. Also, the extension of any nonrecursive closed logic program must be finite. In both cases, it is possible to compute the extension in finite time. In fact, it is possible to show that the computation cost is polynomial in the size of the dataset.
In the case of recursive programs without constructors, the extension is still finite and can be computed in finite time; however, the cost of computing it may be exponential in the size of the data.
For recursive programs with constructors, it is possible that the extension is infinite. In such cases, the extension is still well-defined; and, although we obviously cannot generate the entire extension in finite time, if a factoid is in the extension, it is possible to determine this in finite time.
The preceding section illustrates our method of computing extensions for semipositive programs. We now extend our example to show how to compute the extension of a stratified program.
Suppose we add the rule shown below to the program in the preceding section. The relation t here is the complement of the transitive closure of the edge relation.
t(X,Y) :- p(X) & p(Y) & ~s(X,Y)
Since this rule contains a negated view relation, it would necessarily appear at a higher stratum than the rules defining s, and so we would not compute its conclusions until after we were done with s.
In this case, there are sixteen ways to satisfy the first two subgoals of our rule; and, as we saw in the preceding section, nine of them satisfy the s relation. The upshot is that the remaining seven pairs satisfy the t relation. So, we can add the following facts to our extension.

t(a,a)
t(b,a)
t(b,b)
t(c,a)
t(c,b)
t(d,a)
t(d,b)
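This two-stage computation can be replayed directly (again a sketch with Python set comprehensions): first compute the closure s of the edge relation, then its complement t over the nodes:

```python
edge = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "c")}
nodes = {x for e in edge for x in e}   # the objects satisfying p

# Stratum 1: s is the transitive closure of edge.
s = set(edge)
while True:
    new = {(x, z) for (x, y) in edge for (y2, z) in s if y == y2} - s
    if not new:
        break
    s |= new

# Stratum 2: t(X,Y) :- p(X) & p(Y) & ~s(X,Y), computed only
# after s is complete.
t = {(x, y) for x in nodes for y in nodes if (x, y) not in s}
print(sorted(t))
```

Of the sixteen pairs of nodes, nine are in s, leaving seven pairs in t.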
As noted earlier, it is sometimes possible to stratify the rules in more than one way; but this causes no problems, since the definition just given produces the same dataset no matter which stratification one uses. Consequently, there is just one extension for any safe, stratified logic program.
Exercises
Exercise 7.1: Say whether each of the following expressions is a syntactically legal view definition.

(a) 
r(X,Y) :- p(X,Y) & q()

(b) 
r(X,Y) :- p(X,Y) & ~q(Y,X)

(c) 
~r(X,Y) :- p(X,Y) & q(Y,X)

(d) 
p(X,Y) & q(Y,X) :- r(X,Y)

(e) 
p(X,Y) & ~q(Y,X) :- r(X,Y)
Exercise 7.2: Suppose we have a dataset with two symbols a and b and two unary relations p and q where all possible facts are true, i.e. the dataset is {p(a), p(b), q(a), q(b)}. Suppose we have a closed logic program consisting of this dataset and the rule r(X) :- p(X) & ~q(X).

(a) 
How many interpretations does this program have? 

(b) 
How many models does it have? 

(c) 
How many minimal models does it have? 
Exercise 7.3: Say whether each of the following rulesets is stratified.

(a) 
r(X,Y) :- p(X,Y) & ~q(Y,X)
r(X,Y) :- p(X,Y) & ~q(X,Y)


(b) 
r(X,Z) :- p(X,Z) & q(X,Z)
r(X,Z) :- r(X,Y) & ~r(Y,Z)


(c) 
r(X,Z) :- p(X,Z) & ~q(X,Z)
r(X,Z) :- r(X,Y) & r(Y,Z)
Exercise 7.4: What is v(r,Δ) where r is r(X,Y) :- p(X,Y) & p(Y,X) and Δ is the dataset shown below?
p(a,a) 
p(a,b) 
p(b,a) 
p(b,c) 
Exercise 7.5: What is C(Ω,Δ) where Ω is {r(X,Z) :- p(X,Z), r(X,Z) :- r(X,Y) & r(Y,Z)} and Δ is the dataset shown below?
p(a,a) 
p(a,b) 
p(b,a) 
p(b,c) 
Exercise 7.6: What is the extension of strata Ω_{1} and Ω_{2} on Δ, where Ω_{1} is {q(X) :- p(X,Y)}, where Ω_{2} is {r(X,Y) :- p(X,Y) & ~q(Y)}, and where Δ is the dataset shown below?
p(a,b) 
p(a,c) 
p(b,d) 
p(c,d) 
