[Image: A conversation with ELIZA]
In computer science, the ELIZA effect is a tendency to project human traits, such as experience, semantic comprehension or empathy, onto rudimentary computer programs having a textual interface. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum that imitated a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and the explanations of its limitations.
History
The effect is named for ELIZA, the 1966 chatbot developed by MIT computer scientist Joseph Weizenbaum. When executing Weizenbaum's DOCTOR script, ELIZA simulated a Rogerian psychotherapist, largely by rephrasing the "patient's" replies as questions:[1]
Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Human: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Human: It's true. I'm unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
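The rephrasing technique visible in this transcript can be approximated with keyword pattern matching plus pronoun "reflection". The sketch below is illustrative only: the rules, function names, and responses are invented for this example, and Weizenbaum's original program (written in MAD-SLIP) used a much richer keyword-ranking script.

```python
import re

# Words swapped to turn the user's first-person fragment into a
# second-person question (a tiny subset, for illustration).
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

# (pattern, response template) pairs; hypothetical rules for illustration.
RULES = [
    (r".*\bmy (.+)", "Your {0}?"),
    (r".*\bi am (.+)", "Do you believe you are {0}?"),
    (r".*\bi'?m (.+)", "Do you believe you are {0}?"),
]

def respond(sentence: str) -> str:
    """Return an ELIZA-style rephrasing of the input, or a stock reply."""
    for pattern, template in RULES:
        m = re.match(pattern, sentence.rstrip(".!"), re.IGNORECASE)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."

print(respond("Well, my boyfriend made me come here."))
# -> Your boyfriend made you come here?
```

Even this toy version reproduces the first exchange above, which suggests how little machinery is needed for the rephrasing to feel responsive.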
Though designed strictly as a mechanism to support "natural language conversation" with a computer,[2] ELIZA's DOCTOR script was found to be surprisingly successful in eliciting emotional responses from users who, in the course of interacting with the program, began to ascribe understanding and motivation to the program's output.[3] As Weizenbaum later wrote, "I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."[4] Indeed, ELIZA's code had not been designed to evoke this reaction in the first place. Upon observation, researchers discovered users unconsciously assuming ELIZA's questions implied interest and emotional involvement in the topics discussed, even when they consciously knew that ELIZA did not simulate emotion.[5]
Although the effect was first named in the 1960s, the tendency to understand mechanical operations in psychological terms was noted by Charles Babbage. In proposing what would later be called a carry-lookahead adder, Babbage remarked that he found such terms convenient for descriptive purposes, even though nothing more than mechanical action was meant.[6]
Characteristics
In its specific form, the ELIZA effect refers only to "the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers".[7] A trivial example of the specific form of the ELIZA effect, given by Douglas Hofstadter, involves an automated teller machine which displays the words "THANK YOU" at the end of a transaction. A naive observer might think that the machine is actually expressing gratitude; however, the machine is only printing a preprogrammed string of symbols.[7]
More generally, the ELIZA effect describes any situation[8][9] where, based solely on a system's output, users perceive computer systems as having "intrinsic qualities and abilities which the software controlling the (output) cannot possibly achieve",[10] or "assume that [outputs] reflect a greater causality than they actually do".[11] In both its specific and general forms, the ELIZA effect is notable for occurring even when users of the system are aware of the determinate nature of the output produced by the system.
From a psychological standpoint, the ELIZA effect is the result of a subtle cognitive dissonance between the user's awareness of programming limitations and their behavior towards the output of the program.[12]
Significance
The discovery of the ELIZA effect was an important development in artificial intelligence, demonstrating the principle of using social engineering rather than explicit programming to pass a Turing test.[13]
ELIZA convinced some users that a machine was human. This shift in human-machine interaction marked progress in technologies emulating human behavior. William Meisel distinguishes two groups of chatbots: "general personal assistants" and "specialized digital assistants".[14] General digital assistants have been integrated into personal devices, with skills like sending messages, taking notes, checking calendars, and setting appointments. Specialized digital assistants "operate in very specific domains or help with very specific tasks".[14] Weizenbaum considered that not every part of human thought could be reduced to logical formalisms and that "there are some acts of thought that ought to be attempted only by humans".[15]
When chatbots are anthropomorphized, they tend to be given gendered features, through which users establish relationships with the technology. "Gender stereotypes are instrumentalised to manage our relationship with chatbots" when human behavior is programmed into machines.[16] Feminized labor, or women's work, automated by anthropomorphic digital assistants reinforces an "assumption that women possess a natural affinity for service work and emotional labour".[17] In defining our proximity to digital assistants through their human attributes, chatbots become gendered entities.
See also
- Chatbot psychosis
- Chinese Room
- Duck test
- Intentional stance
- Loebner Prize
- Philosophical zombie
- Uncanny valley
References
Footnotes
1. Güzeldere, Güven; Franchi, Stefano. "Dialogues with colorful personalities of early AI". Archived from the original on 2011-04-25. Retrieved 2007-07-30.
2. Weizenbaum, Joseph (January 1966). "ELIZA—A Computer Program For the Study of Natural Language Communication Between Man and Machine". Communications of the ACM. 9. Massachusetts Institute of Technology: 36. doi:10.1145/365153.365168. S2CID 1896290. Retrieved 2008-06-17.
3. Suchman, Lucy A. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press. p. 24. ISBN 978-0-521-33739-7. Retrieved 2008-06-17.
4. Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgement to Calculation. W. H. Freeman. p. 7. ISBN 978-0716704645.
5. Billings, Lee (2007-07-16). "Rise of Roboethics". Seed. Archived from the original on 2009-02-28. "(Joseph) Weizenbaum had unexpectedly discovered that, even if fully aware that they are talking to a simple computer program, people will nonetheless treat it as if it were a real, thinking being that cared about their problems – a phenomenon now known as the 'Eliza Effect'."
6. Green, Christopher D. (February 2005). "Was Babbage's Analytical Engine an Instrument of Psychological Research?". History of Psychology. 8 (1): 35–45. doi:10.1037/1093-4510.8.1.35. PMID 16021763.
7. Hofstadter, Douglas R. (1996). "Preface 4: The Ineradicable Eliza Effect and Its Dangers, Epilogue". Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. Basic Books. p. 157. ISBN 978-0-465-02475-9.
8. Fenton-Kerr, Tom (1999). "GAIA: An Experimental Pedagogical Agent for Exploring Multimodal Interaction". Computation for Metaphors, Analogy, and Agents. Lecture Notes in Computer Science. Vol. 1562. Springer. p. 156. doi:10.1007/3-540-48834-0_9. ISBN 978-3-540-65959-4. "Although Hofstadter is emphasizing the text mode here, the 'Eliza effect' can be seen in almost all modes of human/computer interaction."
9. Ekbia, Hamid R. (2008). Artificial Dreams: The Quest for Non-Biological Intelligence. Cambridge University Press. p. 8. ISBN 978-0-521-87867-8.
10. King, W. (1995). Anthropomorphic Agents: Friend, Foe, or Folly (Technical report). University of Washington. M-95-1.
11. Rouse, William B.; Boff, Kenneth R. (2005). Organizational Simulation. Wiley-IEEE. pp. 308–309. ISBN 978-0-471-73943-2. "This is a particular problem in digital environments where the 'Eliza effect', as it is sometimes called, causes interactors to assume that the system is more intelligent than it is, to assume that events reflect a greater causality than they actually do."
12. Ekbia, Hamid R. (2008). Artificial Dreams: The Quest for Non-Biological Intelligence. Cambridge University Press. p. 156. ISBN 978-0-521-87867-8. "But people want to believe that the program is 'seeing' a football game at some plausible level of abstraction. The words that (the program) manipulates are so full of associations for readers that they CANNOT be stripped of all their imagery. Collins of course knew that his program didn't deal with anything resembling a two-dimensional world of smoothly moving dots (let alone simplified human bodies), and presumably he thought that his readers, too, would realize this. He couldn't have suspected, however, how powerful the Eliza effect is."
13. Trappl, Robert; Petta, Paolo; Payr, Sabine (2002). Emotions in Humans and Artifacts. Cambridge, Mass.: MIT Press. p. 353. ISBN 978-0-262-20142-1. "The 'Eliza effect' – the tendency for people to treat programs that respond to them as if they had more intelligence than they really do (Weizenbaum 1966) – is one of the most powerful tools available to the creators of virtual characters."
14. Dale, Robert (September 2016). "The return of the chatbots". Natural Language Engineering. 22 (5): 811–817. doi:10.1017/S1351324916000243. ISSN 1351-3249.
15. Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgment to Calculation. San Francisco, Cal.: W. H. Freeman and Company. ISBN 0-7167-0464-1. OCLC 1527521.
16. Costa, Pedro; Ribas, Luisa (2018). "Conversations with ELIZA: on Gender and Artificial Intelligence". 6th Conference on Computation, Communication, Aesthetics & X. Retrieved February 2021.
17. Hester, Helen (2016). "Technology Becomes Her". New Vistas. 3 (1): 46–50.