OSCAR: An Agent Architecture Based on Defeasible Reasoning.

2008 
OSCAR is a fully implemented architecture for a cognitive agent, based largely on the author’s work in philosophy concerning epistemology and practical cognition. The seminal idea is that a generally intelligent agent must be able to function in an environment in which it is ignorant of most matters of fact. The architecture incorporates a general-purpose defeasible reasoner, built on top of an efficient natural deduction reasoner for first-order logic. It is based upon a detailed theory about how the various aspects of epistemic and practical cognition should interact, and many of the details are driven by theoretical results concerning defeasible reasoning.

This work was supported by NSF grant no. IIS-0412791. Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

1. Epistemic Cognition

The “grand problem” of AI has always been to build artificial agents with human-like intelligence. That is the stuff of science fiction, but it is also the ultimate aspiration of AI. In retrospect, we can understand what a difficult problem this is, so since its inception AI has focused more on small, manageable problems, with the hope that progress there will have useful implications for the grand problem. Now there is a resurgence of interest in tackling the grand problem head-on. Perhaps AI has made enough progress on the little problems that we can fruitfully address the big problem. The objective is to build agents of human-level intelligence capable of operating in environments of real-world complexity. I will refer to these as GIAs — “generally intelligent agents”. OSCAR is a cognitive architecture for GIAs, implemented in LISP, and can be downloaded from the OSCAR website at http://oscarhome.socsci.arizona.edu/ftp/OSCAR-web-page/oscar.html. OSCAR draws heavily on my work in philosophy concerning both epistemology [1,2,3,4] and rational decision making [5].

The OSCAR architecture takes as its starting point the observation that GIAs must be able to form reasonable beliefs and make rational decisions against a background of pervasive ignorance. Reflect on the fact that you are a GIA. Then think how little you really know about the world. What do you know about individual grains of sand, or individual kittens, or drops of rain, or apples hanging on all the apple trees scattered throughout the world? Suppose you want to adopt a kitten. Most AI planners make the closed world assumption, which would require us to know everything relevant about every kitten in the world. But such an assumption is simply preposterous. Our knowledge is worse than just gappy — it is sparse. We know a very little bit about just a few of the huge number of kittens residing in this world, but we are still able to decide to adopt a particular kitten. Our knowledge of general matters of fact is equally sparse. Modern science apprises us of some useful generalizations, but the most useful generalizations are high-level generalizations about how to repair cars, how to cook beef stroganoff, where the fish are apt to be biting in Pina Blanca Lake, etc., and surely most such generalizations are unknown to most people. What human beings know about the world is many orders of magnitude smaller than what is true of the world. And the knowledge we lack is both of individual matters of fact and of general regularities holding in the world.
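To make the point about the closed world assumption concrete, here is a small illustrative sketch in Python (this is not OSCAR code, and the facts and function names are invented for the example): under the closed world assumption, anything absent from the knowledge base is treated as false, whereas a GIA with sparse knowledge really only has grounds for the answer "unknown".

    # Illustrative sketch only: closed-world vs. open-world treatment of missing facts.

    KB = {("kitten", "Tabby"), ("vaccinated", "Tabby")}   # the few facts we happen to know

    def query_closed_world(fact):
        # CWA: failure to find the fact is treated as falsity.
        return fact in KB

    def query_open_world(fact):
        # Open world: failure to find the fact just means we do not know.
        return True if fact in KB else "unknown"

    print(query_closed_world(("vaccinated", "Whiskers")))  # False: ignorance asserted as fact
    print(query_open_world(("vaccinated", "Whiskers")))    # "unknown": honest about sparse knowledge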
In light of our pervasive ignorance, we cannot get around in the world just forming beliefs that follow deductively from what we already know together with new sensor input. We must allow ourselves to form beliefs that are made probable by our evidence, but that are not logically guaranteed to be true. For instance, in our normal environment, objects generally have the colors they appear to have, so we can rely upon this statistical generalization in forming beliefs about the colors of objects that we see. Similarly, objects tend to retain certain kinds of properties over time, so if we observe an object at one time, we tend to assume that, in most respects, it has not changed a short time later. None of these inferences can deductively guarantee their conclusions. At best, they make the conclusions probable given the premises.

GIAs come equipped (by evolution or design) with inference schemes that are reliable in the circumstances in which the agent operates. That is, if the agent reasons in that way, its conclusions will tend to be true, but are not guaranteed to be true. Once the cognizer has some basic reliable inference schemes, it can use those to survey its world and form inductive generalizations about the reliability of new inferences that are not simply built into its architecture. But it needs the built-in inference schemes to get started. It cannot learn anything about probabilities without them. Once the agent does discover that the probability of an A being a B is high, then if it has reason to believe that an object c is an A, it can reasonably infer that c is a B, and the probability of this conclusion being true is high. This is an instance of the statistical syllogism [9]. Notice that in order for the agent to reason this way with new probability information, the statistical syllogism must be one of its built-in inference schemes.

An agent whose reasoning is based on inference schemes that are less than totally reliable will sometimes find itself with arguments for conflicting conclusions — “rebutting defeat” [6,7] — or an argument to the effect that under the present circumstances, one of its built-in inference schemes is not reliable, or less reliable than it is assumed by default to be. The latter is one kind of “undercutting defeat” [6,7]. Undercutting defeaters attack an inference without attacking the conclusion itself. For instance, if I know that illumination by red light can make an object look red when it is not, and I see an object that looks red but I know it is illuminated by red lights, I should refrain from concluding that it is red, but it might still be red.

In the human cognitive architecture, we find a rich array of built-in inference schemes and attendant undercutting defeaters. One of the tasks of the philosophical epistemologist has been to try to spell out the structure of these inference schemes and defeaters. I have made a number of concrete proposals about specific inference schemes [1,8,9,6,2,3,4]. Often, the hardest task is to get the undercutting defeaters right. For example, in [2] I argued that the frame problem is easily solved if we correctly characterize the undercutting defeaters that are associated with the defeasible inference schemes that we employ in reasoning about causation, and I implemented the solution in OSCAR.
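The following minimal sketch (not OSCAR's implementation; the data structures and predicate strings are invented for the example) encodes an appearance-to-color scheme of the statistical-syllogism sort, taking the agent from "x looks red" to "x is red", together with an undercutting defeater for the red-light case. Note that the defeater only marks the inference as defeated; it gives no reason to believe the conclusion is false.

    # Illustrative sketch only: a defeasible inference scheme with an
    # attendant undercutting defeater.

    from dataclasses import dataclass

    @dataclass
    class Inference:
        premise: str
        conclusion: str
        scheme: str
        defeated: bool = False

    def apply_color_scheme(beliefs):
        # Objects generally have the colors they appear to have, so
        # "x looks red" defeasibly supports "x is red".
        inferences = []
        for b in beliefs:
            if b.endswith(" looks red"):
                obj = b[: -len(" looks red")]
                inferences.append(Inference(b, obj + " is red", "appearance-to-color"))
        return inferences

    def apply_undercutters(inferences, beliefs):
        # Undercutting defeater: red illumination attacks the inference from
        # "looks red" to "is red" without giving any reason to deny the conclusion.
        for inf in inferences:
            obj = inf.conclusion[: -len(" is red")]
            if obj + " is illuminated by red light" in beliefs:
                inf.defeated = True
        return inferences

    beliefs = {"the apple looks red",
               "the ball looks red",
               "the ball is illuminated by red light"}

    for inf in apply_undercutters(apply_color_scheme(beliefs), beliefs):
        print(inf.conclusion, "->",
              "withhold belief (inference undercut)" if inf.defeated else "accept")

In this toy run the agent accepts "the apple is red" but withholds belief in "the ball is red", even though nothing tells it the ball is not red.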
The natural temptation is to try to build an implemented defeasible reasoner on the model of familiar deductive reasoners. First-order deductive reasoners generate the members of a recursively enumerable set of deductive consequences of the given premises. By Church’s theorem, the set of consequences is not decidable, but because it is r.e., its members can be systematically generated by an algorithm for constructing arguments. (This is what the completeness theorem for first-order logic establishes.) However, the situation in defeasible reasoning is more complex. If we assume that it is not decidable whether there is an argument supporting a particular conclusion (for first-order logic, this is Church’s theorem), then it cannot be decidable whether there are arguments supporting defeaters for a given argument. This means that in constructing defeasible arguments, we cannot wait to rule out the possibility of defeat before adding a new step to an argument. We must go ahead and construct arguments without worrying about defeat, and then as a second step, compute the defeat statuses of the conclusions in terms of the set of arguments that have been constructed. So argument construction must be separated from the defeat status computation.

Most implemented systems of defeasible reasoning do not make this separation, and as a result they are forced to focus exclusively on decidable underlying logics, like the propositional calculus. But that is too weak for a GIA. The knowledge of a GIA can, in principle, include any or all of modern science and mathematics, and that requires a full first-order language.

A GIA cannot wait until all possibly relevant arguments have been constructed before computing defeat statuses, because the process of argument construction is non-terminating. It must instead compute defeat statuses provisionally, on the basis of the arguments constructed so far, but be prepared to change its mind about defeat statuses if it finds new relevant arguments. In other words, the defeat status computation must itself be defeasible. This is the way human reasoning works. We decide whether to accept conclusions on the basis of what arguments are currently at our disposal, but if we construct new arguments that are relevant to the conclusion, we may change our mind about whether to accept the conclusion.

The literature on nonmonotonic logic and most of the literature on defeasible reasoning has focused on what might be called simple defeasibility. This is defeasibility that arises from the fact that newly discovered information can lead to the withdrawal of previously justified conclusions. But as we have seen, there is a second source of defeasibility that arises simply from constructing new arguments without adding any new information to the system. We can put this by saying that the reasoning is doubly defeasible.
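The separation described above can be illustrated with a small sketch (again, not OSCAR's algorithm; the grounded-style labeling rule and names are simplifications chosen for the example): arguments are added to the argument graph first, and defeat statuses are recomputed provisionally after each new argument is constructed, so a conclusion accepted at one stage can be withdrawn later even though no new premises were added.

    # Illustrative sketch only: argument construction is separated from the
    # defeat-status computation, and statuses are recomputed provisionally
    # whenever a new argument is constructed.

    def compute_statuses(arguments, attacks):
        # Grounded-style labeling: an argument becomes "undefeated" once all of
        # its attackers are "defeated", and "defeated" once some attacker is
        # "undefeated"; anything never settled remains "provisional".
        status = {a: None for a in arguments}
        changed = True
        while changed:
            changed = False
            for a in arguments:
                if status[a] is not None:
                    continue
                attackers = attacks.get(a, ())
                if all(status[b] == "defeated" for b in attackers):
                    status[a] = "undefeated"
                    changed = True
                elif any(status[b] == "undefeated" for b in attackers):
                    status[a] = "defeated"
                    changed = True
        return {a: (s if s else "provisional") for a, s in status.items()}

    arguments, attacks = [], {}

    def add_argument(name, attacked=()):
        # Construction step: add the argument first; defeat is sorted out afterwards.
        arguments.append(name)
        for target in attacked:
            attacks.setdefault(target, set()).add(name)
        print("after constructing", name, ":", compute_statuses(arguments, attacks))

    add_argument("A1")                    # e.g. "the ball looks red, so it is red"
    add_argument("A2", attacked=("A1",))  # e.g. an undercutter: "the ball is lit by red light"

Running this labels A1 undefeated while it stands alone and defeated once A2 is constructed, with no new premises added in between, which is the second source of defeasibility described above.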