Self-recognition in conversational agents.

2021 
In the standard Turing test, a machine must prove its humanness to human judges: by successfully imitating a thinking entity such as a human, the machine demonstrates that it, too, can think. Objections hold that the Turing test is not a tool for demonstrating the existence of general intelligence or genuine thought. A compelling alternative is the Lovelace test, in which the agent must originate a product that its creator cannot explain; the agent must therefore be the owner of an original product. For this to happen, however, the agent must exhibit a notion of self and distinguish itself from others, most importantly from its own creator. An extensive analysis of the Turing test suggests that it remains a practical tool, since sustaining the idea of self within the Turing test is still possible if the judge decides to act as a textual mirror. Self-recognition tests applied to animals through mirrors likewise appear to be viable tools for demonstrating the existence of a type of general intelligence. The methodology here constructs a textual version of the mirror test by placing the agent as the one and only judge, which must determine, in an unsupervised manner, whether the contacted party is an other, a mimicker, or oneself. This textual version of the mirror test is objective, self-contained, and devoid of humanness. Any agent passing this textual mirror test should have, or be able to acquire, a thought mechanism that can be referred to as an inner voice, answering Turing's original and long-standing question, "Can machines think?", in a constructive manner still within the bounds of the Turing test. Moreover, a successful self-recognition might pave the way to stronger notions of self-awareness in artificial beings.
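The textual mirror test described above can be sketched as a simple protocol: the judge relays probe messages to an unknown party and the agent classifies that party as "self" (a perfect reflection of its own replies), "mimicker" (partial imitation), or "other". The sketch below is a minimal illustration under strong assumptions that are not in the original: the hypothetical `agent_reply` stands in for a real conversational agent, the reply function is assumed deterministic given a seed, and the fixed 0.5 threshold for "mimicker" is an arbitrary illustrative choice.

```python
import random


def agent_reply(message: str, seed: int = 0) -> str:
    """Hypothetical stand-in for a conversational agent's reply function.

    A real agent would be a language model; here we just produce a
    deterministic (per run) shuffle of the input words so that the
    same agent always replies identically to the same probe.
    """
    rng = random.Random(hash((message, seed)))
    words = message.split() or ["hello"]
    return " ".join(rng.sample(words, len(words)))


def textual_mirror_test(probe_messages, channel, self_seed: int = 0) -> str:
    """Classify the party behind `channel` as 'self', 'mimicker', or 'other'.

    `channel(message)` returns the contacted party's reply. If every
    reply equals what the agent itself would have said, the contacted
    party is indistinguishable from a textual reflection of oneself.
    The 0.5 cutoff for 'mimicker' is an illustrative assumption.
    """
    matches = sum(
        channel(m) == agent_reply(m, self_seed) for m in probe_messages
    )
    ratio = matches / len(probe_messages)
    if ratio == 1.0:
        return "self"        # perfect reflection of one's own replies
    if ratio > 0.5:
        return "mimicker"    # partial imitation
    return "other"           # an independent interlocutor
```

A judge acting as a textual mirror simply feeds the agent's own replies back: `textual_mirror_test(probes, lambda m: agent_reply(m, 0))` returns `"self"`, whereas a channel to an unrelated speaker returns `"other"`. Note that distinguishing oneself from a sufficiently good mimicker is exactly the hard part of the test; this sketch only captures the classification scaffold, not a solution to it.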