    Towards Socially Intelligent Agents with Mental State Transition and Human Utility.
    Abstract:
Building a socially intelligent agent involves many challenges, one of which is to track the agent's mental state transitions and teach the agent to make rational decisions guided by its utility, as a human would. Towards this end, we propose to incorporate a mental state parser and a utility model into dialogue agents. The hybrid mental state parser extracts information from both the dialogue and event observations and maintains a graphical representation of the agent's mind; meanwhile, the utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset, Social IQA. Empirical results show that the proposed model attains state-of-the-art performance on the dialogue/action/emotion prediction task in the fantasy text-adventure game dataset, LIGHT. We also present example cases to demonstrate: (i) how the proposed mental state parser can assist the agent's decisions by grounding them in context such as locations and objects, and (ii) how the utility model can help the agent make reasonable decisions in a dilemma. To the best of our knowledge, this is the first work to build a socially intelligent agent by incorporating a hybrid mental state parser, covering both discrete event and continuous dialogue parsing, together with human-like utility modeling.
Keywords: Representation
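The abstract above describes two components: a graph-structured mental state and a preference-ranking utility model. The Python sketch below only illustrates that division of labor under our own simplifying assumptions; the class names, the triple format, and the keyword-based scorer are illustrative stand-ins, not the paper's implementation (which learns the ranker from Social IQA).

```python
# Minimal sketch of a graph-based mental state plus a utility ranker.
# All names and the scoring rule are illustrative assumptions, not the paper's code.

from dataclasses import dataclass, field


@dataclass
class MentalStateGraph:
    """Graphical mind representation as (subject, relation, object) triples."""
    triples: set = field(default_factory=set)

    def update_from_event(self, subject: str, relation: str, obj: str) -> None:
        # Discrete event observation, e.g. ("agent", "is_in", "tavern").
        self.triples.add((subject, relation, obj))

    def update_from_dialogue(self, utterance: str, speaker: str) -> None:
        # A real parser would extract grounded facts; here we only log the turn.
        self.triples.add((speaker, "said", utterance))

    def context(self) -> str:
        return " ; ".join(" ".join(t) for t in sorted(self.triples))


class UtilityRanker:
    """Toy stand-in for a ranking model trained on human preference pairs."""

    def __init__(self, preferred_keywords=("help", "share", "thank")):
        self.preferred_keywords = preferred_keywords

    def score(self, context: str, candidate: str) -> float:
        # A learned model would score (context, candidate) jointly;
        # this placeholder just rewards pro-social keywords.
        return sum(kw in candidate.lower() for kw in self.preferred_keywords)

    def rank(self, context: str, candidates: list) -> list:
        return sorted(candidates, key=lambda c: self.score(context, c), reverse=True)


if __name__ == "__main__":
    mind = MentalStateGraph()
    mind.update_from_event("agent", "is_in", "tavern")
    mind.update_from_dialogue("Can you spare some bread?", "traveller")
    ranker = UtilityRanker()
    actions = ["ignore the traveller", "share the bread and help them"]
    print(ranker.rank(mind.context(), actions)[0])  # the pro-social option ranks first
```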
Models of conversation that rely on a robust notion of cooperation don't model dialogues where the agents' goals conflict; for instance, negotiation over restricted resources, courtroom cross-examination, and political debate. We aim to provide a framework in which both cooperative and non-cooperative conversation can be analyzed. We develop a logic that links the public commitments agents make through their utterances to private attitudes such as belief, desire, and intention. This logic incorporates a qualitative model of human action and decision making that approximates principles from game theory, e.g., choose actions that maximize expected utility. However, unlike classical game theory, our model supports reasoning about action even when knowledge of one's own preferences and those of others is incomplete and/or changing as the dialogue proceeds, an essential feature of many conversations. The logic validates decidable inferences from utterances to mental states during interpretation, and from mental states to dialogue actions during language production. In contexts where the agents' preferences align, we derive axioms of cooperativity that are treated as primitive in BDI logics for analyzing dialogue. Models of cooperative conversation are thus a special case in our framework.
    Citations (7)
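The decision principle this logic approximates, choosing the action with maximal expected utility while knowledge of preferences is incomplete and revisable, can be illustrated with a toy negotiation. The sketch below is our own numerical example, not the paper's formal logic; all action names, outcomes, and numbers are assumptions.

```python
# Hedged sketch: expected-utility choice when some outcome utilities are
# unknown at first and get revealed as the dialogue proceeds.

def expected_utility(action, outcome_probs, utilities, default=0.0):
    """Sum of P(outcome | action) * U(outcome); unknown utilities fall back to a default."""
    return sum(p * utilities.get(outcome, default)
               for outcome, p in outcome_probs[action].items())

def best_action(actions, outcome_probs, utilities):
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utilities))

# Negotiation over a restricted resource: the utility of walking away is unknown at first.
outcome_probs = {
    "offer_split": {"deal_fair": 0.8, "no_deal": 0.2},
    "demand_all":  {"deal_all_mine": 0.3, "no_deal": 0.7},
}
utilities = {"deal_fair": 1.0, "deal_all_mine": 3.0}   # "no_deal" utility still unknown
print(best_action(["offer_split", "demand_all"], outcome_probs, utilities))  # demand_all

utilities["no_deal"] = -2.0   # revealed mid-dialogue: failing to agree is costly
print(best_action(["offer_split", "demand_all"], outcome_probs, utilities))  # offer_split
```

The point of the example is the revision step: once the cost of "no deal" is learned from the dialogue, the preferred action changes without any change to the decision rule itself.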
For human-agent cooperation, reasoning about the partner is necessary to enable efficient interaction. To provide helpful information, it is important not only to account for environmental uncertainties or dangers but also to maintain a sophisticated understanding of each other's mental state, a theory of mind. Sharing every piece of information is not a good idea, as some of it may be irrelevant at the time or already known, leading to distraction and annoyance. Instead, an agent has to estimate the novelty and relevance of information for the receiver, to trade off the cost of communication against its potential benefits. We propose the concept of theory-of-mind-based communication as a principled formulation that grounds an agent's cooperative communication in an understanding of the receiver's mental states, so as to support her awareness and action selection. We therefore formulate the problem of whether, when, and what information to share as a sequential decision process with the human's belief as the central source of uncertainty. The agent's communication decision is obtained online during interaction by combining second-level Bayesian inference of the human's belief with planning under uncertainty, evaluating the influence of communication on the human's belief and her future decisions. We discuss the resulting behavior in an illustrative communication scenario with different uncertain state aspects that an observing agent can communicate to the actor.
    Relevance
    Human communication
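The core trade-off described above, estimating whether a message is novel and relevant enough to justify its cost, can be written out as a small value-of-information check. The following is a hedged sketch under our own simplified assumptions (a single discrete state variable, hand-picked utilities, and a fixed message cost), not the paper's sequential formulation.

```python
# Should the agent tell the human what it has observed? Compare the action the
# human would take under her current belief with the action she would take if
# informed, both evaluated against the state the agent has actually observed.

def best_action(belief, utilities):
    actions = {a for u in utilities.values() for a in u}
    return max(actions, key=lambda a: sum(p * utilities[s][a] for s, p in belief.items()))

def should_communicate(human_belief, true_state, utilities, message_cost):
    a_silent = best_action(human_belief, utilities)
    a_informed = best_action({true_state: 1.0}, utilities)
    benefit = utilities[true_state][a_informed] - utilities[true_state][a_silent]
    return benefit > message_cost

# The human believes the corridor is probably clear; the agent has seen a blockage.
human_belief = {"clear": 0.8, "blocked": 0.2}
utilities = {
    "clear":   {"go_corridor": 1.0, "go_detour": 0.4},
    "blocked": {"go_corridor": -1.0, "go_detour": 0.4},
}
print(should_communicate(human_belief, "blocked", utilities, message_cost=0.1))  # True: novel and relevant
print(should_communicate(human_belief, "clear", utilities, message_cost=0.1))    # False: she acts well anyway
```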
Social relationships are highly complex activities that are very difficult to model computationally. In order to represent these relationships, we may consider various aspects of the individual, such as affective state, psychological issues, and cognition. We may also consider social aspects, such as how people relate to each other and which groups they belong to. Intelligent Tutoring Systems, Multi-agent Systems, and Affective Computing are research areas that our research group has been investigating in order to improve individual and collaborative learning. This paper focuses on a Social Agent that has been modelled using probabilistic networks and acts in an educational application. Using the Social Agent as a testbed, we present a way to perform the deliberation process with BDI and Bayesian Networks (BN). Mental states and Bayesian Networks are brought together by viewing beliefs as networks, and desires and intentions as particular states of chance variables that agents pursue. In this work, we are particularly concerned with deliberation about which states of affairs the agent will intend. The focus of this paper is on how to build a real application using the deliberation process developed in our previous work.
    Citations (3)
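The pairing of BDI deliberation with Bayesian networks can be illustrated with a toy two-node network. The sketch below is ours, not the Social Agent's actual model: beliefs are a hand-specified network, desires are target states of chance variables, and the agent intends the desire most likely to hold, provided it clears a commitment threshold.

```python
# Toy "student" belief network: P(motivated) and P(learns | motivated).
# All numbers and variable names are illustrative assumptions.

p_motivated = 0.7
p_learns_given_motivated = {True: 0.9, False: 0.3}

def p_learns():
    # Marginalize over the parent variable "motivated".
    return (p_motivated * p_learns_given_motivated[True]
            + (1 - p_motivated) * p_learns_given_motivated[False])

def deliberate(desires, threshold=0.5):
    """Pick the desire (variable, wanted_state) most likely to hold, if any clears the threshold."""
    scored = [(prob(), d) for d, prob in desires.items()]
    best_p, best_d = max(scored, key=lambda s: s[0])
    return best_d if best_p >= threshold else None

desires = {
    ("learns", True): p_learns,                   # desire: the student ends up learning
    ("motivated", True): lambda: p_motivated,     # desire: the student is motivated
}
print(deliberate(desires))   # the intention the agent commits to: ('learns', True)
```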
The AI community has always been interested in designing intelligent agents that function in a multi-agent arrangement or a man-machine scenario. More often than not, such settings may require agents to work autonomously (or at least under intermittent supervision) in partially observable environments. Over the last ten years or so, the planning community has started looking at this interesting class of problems from an epistemic standpoint, by augmenting AI planning with the notions of knowledge and belief. In this paper, we present a system that synthesizes plans from the primary agent's perspective, based on its subjective knowledge, in a multi-agent environment. We adopt a semantic approach to represent the mental model of the primary agent, whose uncertainty about the world is represented using Kripke's possible-worlds interpretation of epistemic logic. Planning in this logical framework is computationally challenging, and, to the best of our knowledge, most existing planners work with the notion of knowledge rather than an agent's subjective knowledge. We demonstrate the system's capability of projecting the primary agent's beliefs onto others, reasoning about the role of other agents in prospective plans, and preferring plans that hinge on the primary agent's capabilities over those that demand others' cooperation. We evaluate our system on problems discussed in the literature and show that it takes fractions of a second to search for a plan for a given problem. We also discuss the issues that arise in modeling dynamic domains with the representation our system employs.
    Citations (1)
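A minimal rendering of the possible-worlds representation such a planner builds on is given below. The worlds, agents, and accessibility relations are our own toy example, not the system's encoding; it only shows how "agent knows p" reduces to checking p in every world the agent considers possible, which is the machinery that lets the primary agent project beliefs onto others.

```python
# Tiny Kripke structure: worlds are truth assignments, and each agent has an
# accessibility relation from the actual world to the worlds it considers possible.

worlds = {
    "w0": {"door_open": True,  "key_in_room": True},
    "w1": {"door_open": True,  "key_in_room": False},
    "w2": {"door_open": False, "key_in_room": True},
}

accessible = {
    "robot": {"w0": {"w0"}},           # the robot has observed everything
    "human": {"w0": {"w0", "w1"}},     # the human is unsure whether the key is in the room
}

def knows(agent, proposition, actual="w0"):
    """True iff the proposition holds in every world the agent considers possible."""
    return all(worlds[w][proposition] for w in accessible[agent][actual])

print(knows("robot", "key_in_room"))   # True
print(knows("human", "key_in_room"))   # False: the primary agent can project this
                                       # uncertainty and prefer plans it can execute alone
```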
This paper formalizes a well-known psychological model of emotions in an agent specification language. This is done by introducing a logical language and its semantics, which are used to specify an agent model in terms of mental attitudes, including emotions. We show that our formalization yields a number of intuitive and plausible properties of emotions. We also show how this formalization can be used to specify the effect of emotions on an agent's decision-making process. Ultimately, the emotions in this model function as heuristics, in that they constrain the agent's model.
    Heuristics
    Citations (57)
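The closing idea, that emotions act as heuristics by constraining what the agent deliberates over, can be illustrated as a filter on candidate actions. The sketch below is our own toy example, not the paper's logical formalization; the emotion labels and action attributes are assumptions.

```python
# Emotions as heuristics: prune the candidate action set before deliberation.

def emotional_filter(actions, emotions):
    """Drop candidate actions that the agent's current emotions rule out."""
    allowed = list(actions)
    if "fear" in emotions:
        allowed = [a for a in allowed if not a.get("risky", False)]
    if "anger" in emotions:
        allowed = [a for a in allowed if not a.get("requires_patience", False)]
    return allowed

actions = [
    {"name": "cross_rope_bridge", "risky": True},
    {"name": "take_long_path", "requires_patience": True},
    {"name": "wait_for_help"},
]
print([a["name"] for a in emotional_filter(actions, emotions={"fear"})])
# ['take_long_path', 'wait_for_help']: fear has cut the search space before any utility comparison
```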
The mental state S of an agent A refers to the state of the agent described using attributes such as beliefs, intentions, commitments, capabilities, plans, etc. We argue not only that the attributes of mental states should correspond to their commonsense counterparts, as suggested in (Shoham, 1993), but also that their semantics be mutually understood by all the agents that agent A is likely to interact with. Mutual understanding or agreement involves infinitely recursive beliefs, and we propose that, for action-performing agents, these need to be truncated at a finite depth and supported with explicit assumptions and inference rules resulting in actions on the world. We also propose that an intelligent agent must have an attribute called awareness, which, like mutual belief, is infinitely recursive and, for practical reasoning purposes, must be truncated. We propose a mental structure of an agent as a hierarchy of knowledge structures Kn, Kn-1, ..., K0, where Ki implements the abstract mental substate Si. Finally, we work out an example discussing the role played by a few selected mental attributes, such as intention and commitment, and make suggestions for future work.
    Rule of inference
    Mental model
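The truncation argument can be made concrete in a few lines: materialize the nested "A believes that B believes that ..." tower only to a finite depth and treat deeper levels as explicit assumptions. The function below is an illustrative construction of ours, not the knowledge-structure hierarchy Kn, ..., K0 from the paper.

```python
# Build a finitely truncated stand-in for an (in principle infinite) mutual belief.

def nested_belief(proposition, agents, depth):
    """Wrap the proposition in alternating Bel(agent, ...) operators, truncated at `depth`."""
    formula = proposition
    for level in range(depth):
        believer = agents[level % len(agents)]
        formula = f"Bel({believer}, {formula})"
    return formula

for d in range(1, 4):
    print(nested_belief("meeting_at_noon", agents=["A", "B"], depth=d))
# Deeper levels are never computed; the agent assumes them and acts on the truncated stack.
```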