
ELIZA is an oft-referenced early chatbot developed in the 1960s. The legend behind it is that the simplistic chatbot was so convincingly human that early users supposedly forgot it was a computer and spilled their deepest secrets to it. For example, these quotes from the Wikipedia article:

> However, many early users were convinced of ELIZA's intelligence and understanding, despite Weizenbaum's insistence to the contrary.

Reference: Baranovska, Marianna; Höltgen, Stefan, eds. (2018). Hello, I'm Eliza fünfzig Jahre Gespräche mit Computern (1st ed.). Bochum: Bochum Freiburg projektverlag. ISBN 978-3-89733-467-0. OCLC 1080933718.

> The algorithms of DOCTOR [ELIZA] allowed for a deceptively intelligent response, which deceived many individuals when first using the program.

Reference: Wardrip-Fruin, Noah (2014). Expressive Processing: Digital Fictions, Computer Games, and Software Studies. Cambridge: The MIT Press. p. 33. ISBN 9780262013437 – via eBook Collection (EBSCOhost).

> Some of ELIZA's responses were so convincing that Weizenbaum and several others have anecdotes of users becoming emotionally attached to the program, occasionally forgetting that they were conversing with a computer. Weizenbaum's own secretary reportedly asked Weizenbaum to leave the room so that she and ELIZA could have a real conversation.

Reference: Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgment to Calculation. New York: W. H. Freeman and Company. ISBN 0-7167-0464-1.

The source given for the first quote is a book in German, so I can't look into it. The second is a book that is at best a secondary source and mostly a tertiary source. The third is a primary source, but like all other primary sources I've been able to find, it traces back to the inventor of the chatbot himself, Joseph Weizenbaum. While Weizenbaum seems to be a potentially reliable source, he certainly had a vested interest in ELIZA being a particularly engrossing computer program.

The story of ELIZA, as a tale of how easily humans are duped into believing that an obviously dumb computer program is actually intelligent, has spread pretty widely at this point. While I certainly believe that people found ELIZA an interesting bit of software, I have trouble believing the more outlandish claims of people becoming "emotionally attached" to the program and believing it to be intelligent.

Is there any good evidence for these stronger claims about the so-called ELIZA effect, where people strongly anthropomorphized ELIZA and had powerful emotional reactions, besides the claims of Weizenbaum himself?

Note: Sherry Turkle's The Second Self has some relevant passages in the first chapter, on p. 40. I find the framing around these passages, which seemingly report on the experiences of some individuals with ELIZA, to be somewhat odd. Who the people whose experiences she's relating are, when the experiences happened, how many people there were, and in what context they had these experiences are all left vague. Further, the book was published a good 20 years after the creation of ELIZA, so well after the legend had already formed.
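For context on how little machinery is involved, the pattern-reflection trick behind ELIZA-style chatbots can be sketched in a few lines. This is only a minimal illustration with made-up keyword rules, not Weizenbaum's actual DOCTOR script:

```python
import re

# A few hypothetical first-person -> second-person swaps, in the
# spirit of (but not copied from) ELIZA's DOCTOR script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Hypothetical keyword rules: a pattern to match against the user's
# input, and a canned response template to fill with the reflected
# fragment.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(line: str) -> str:
    """Return the first matching canned response, else a stock prompt."""
    for pattern, template in RULES:
        match = pattern.match(line)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # fallback when no rule matches

print(respond("I need a real conversation"))
# -> Why do you need a real conversation?
```

The program has no model of the conversation at all; it only echoes fragments of the user's own sentences back inside templates, which is why the legend of users treating it as intelligent is so striking.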

Schmuddi
  • I find the claim to be a bit vague. Is the claim stronger than people getting immersed in games (even text adventures) to a degree where they could be described as "getting emotionally attached" to non-player characters and forgetting to be in a game for a while? – Hulk Feb 17 '23 at 10:23
  • The era of ELIZA was 60 years ago. Computers and their popular perception have changed so much since then that it would be more surprising if you *could* empathise with the impressions and reactions of the original users. – Kilian Foth Feb 17 '23 at 12:22
  • Related to the comment by @Hulk, I'm sure people got emotionally attached to their Tamagotchi too, knowing full well that they are not "real". We're not surprised when people cry at a movie, so the claim that they get emotional with a program shouldn't be surprising. I think the more important claim here is the part about _intelligence_. – pipe Feb 17 '23 at 16:12
  • Coming back to the vagueness, the [ELIZA Effect](https://en.wikipedia.org/wiki/ELIZA_effect) is, by definition, "the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers". It is named *after* the ELIZA program, but extends beyond it. So is the question "Do people have this susceptibility", or is the question specific to the ELIZA program? – Oddthinking Feb 17 '23 at 16:56
  • Fun thing to try: https://web.njit.edu/~ronkowit/eliza.html. – Hilmar Feb 18 '23 at 21:19
  • If you've ever tried ELIZA you can quickly see that it is not terribly smart at all, just repeating your own sentence fragments back at you with some pre-programmed variations (in fact this is already obvious from the Wikipedia page). Perhaps we'll feel the same way about GPT in 60 years. – user253751 Feb 21 '23 at 18:37
  • Well we do have ELIZA effect all over again with ChatGPT, with people attributing sentience despite the creators being very clear and open about what it is (a glorified text completion, not some general intelligence with sentience). – kutschkem Feb 22 '23 at 06:55
  • @kutschkem For as long as we don't actually know what ChatGPT is doing to achieve its learning goals, confidently claiming that it is not sentient is the same erroneous logic as confidently claiming that it is sentient. Most experts are willing to admit that much, too. We don't know if the flaws in its "intelligence" are currently due to the learning algorithm, or a lack of stable memory infrastructure. If the latter-- is a person with Alzheimer's still sentient? Most would say yes. – Onyz Feb 27 '23 at 15:39

0 Answers