
ELIZA, the original chatbot

Before ChatGPT, Gemini, and even Siri, there was ELIZA, the original chatbot. Created in 1966, ELIZA is one of the earliest examples of a natural language processing program, or what we now call a chatbot.

ELIZA, an early chatbot.

While modern chatbots aim to converse on virtually any topic, ELIZA had a much narrower focus. She was designed to simulate a Rogerian psychotherapist.

Beginning in the 1940s, psychologist Carl Rogers created a field of psychotherapy called person-centered therapy, which aimed to create an environment of empathy and understanding between a therapist and the client they served. Later known as Rogerian psychotherapy, this form of counseling frequently involved a therapist reflecting a patient’s words back to them.

Joseph Weizenbaum, who became an associate professor at MIT in 1963, implemented ELIZA as an experiment to explore the possibility of communication between humans and machines. ELIZA was programmed to simulate the kind of interaction that might occur with a Rogerian psychotherapist.

To achieve the illusion of intelligence, Weizenbaum identified several technical problems ELIZA had to overcome, such as identifying keywords, discovering minimal context, making appropriate transformations, and generating responses in the absence of meaningful keywords.

Each time a human user entered a sentence, ELIZA analyzed the words, searching for important keywords and assigning a measure of importance to each one. The presence of certain words might trigger the use of predefined phrases. Using a process of decomposition and reassembly, ELIZA would take elements of the human's input and rephrase them, perhaps turning a statement into a question.

ELIZA also had a limited ability to remember context, enabling it to reference earlier portions of a conversation.
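
To make this concrete, here is a minimal Python sketch of the idea. It is only an illustration, not Weizenbaum's original program (which was written in MAD-SLIP); the keywords, patterns, and canned phrases below are simplified stand-ins invented for this example.

```python
import random
import re

# A minimal sketch of ELIZA-style decomposition and reassembly.
# Rules are checked in order, so earlier entries act as higher-priority
# keywords. Each rule pairs a keyword with a decomposition pattern and
# a list of reassembly templates.
RULES = [
    ("i think", re.compile(r"i think (.+)", re.IGNORECASE),
     ["Do you really think {0}?", "Why do you think {0}?"]),
    ("i am", re.compile(r"i am (.+)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
    ("you", None,
     ["We were discussing you, not me."]),
]

# Canned responses for input with no meaningful keyword -- one of the
# technical problems Weizenbaum identified.
FALLBACKS = [
    "Tell me more...",
    "I'm not sure I understand you fully.",
    "What do you think?",
]


def respond(sentence: str) -> str:
    lowered = sentence.lower()
    for keyword, pattern, templates in RULES:
        if keyword in lowered:
            if pattern is None:
                return random.choice(templates)
            match = pattern.search(sentence)
            if match:
                # Decomposition: extract the fragment after the keyword.
                fragment = match.group(1).rstrip(".!?")
                # Reassembly: splice the fragment into a template,
                # turning the user's statement into a question.
                return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(respond("I think it's weird."))  # e.g. "Do you really think it's weird?"
    print(respond("More."))                # no keyword -> e.g. "Tell me more..."
```

A real ELIZA script ranks its keywords and also swaps pronouns during reassembly ("my" becomes "your", and so on), but even this stripped-down version captures the trick: no understanding, just pattern matching and templates.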

You can learn more about how ELIZA worked on this Wikipedia page.

Upon implementing ELIZA, Weizenbaum was amazed to see users attributing human-like feelings to the program, with many users becoming convinced that ELIZA was an intelligent entity. But ELIZA did not have any inherent understanding of what a human was saying or what its own responses meant; it was merely parroting back words to the user in a seemingly thoughtful way.

In fact, trying to have a meaningful conversation with ELIZA could quickly become frustrating, and many users treated ELIZA as more of a game, trying to trip up the chatbot.

ELIZA still exists today in numerous implementations, including a web-based version you can try yourself.

Here is an example of an interaction I recently had with ELIZA:

Eliza: Hello, I am Eliza. I’ll be your therapist today.

Scott: What are your qualifications?

Eliza: What is it that you really want to know?

Scott: Whether you have the necessary training as a Rogerian Therapist.

Eliza: We were discussing you, not me.

Scott: Sometimes I wonder why we drive on parkways and park on driveways.

Eliza: What do you think?

Scott: I think it’s weird.

Eliza: Do you really think so?

Scott: I really, really do.

Eliza: Tell me more…

Scott: More.

Eliza: OK… “I think it’s weird”. Tell me more.

Scott: It just doesn’t make sense to me.

Eliza: I’m not sure I understand you fully.

AI has certainly improved since ELIZA was first written in the mid-1960s, so I decided to run another test: asking OpenAI's ChatGPT-4 to serve as my Rogerian therapist and engaging in a conversation similar to the example above.

Scott: For the purposes of this conversation, assume that you are a Rogerian Therapist.

ChatGPT-4: I understand. As a Rogerian Therapist, I would focus on providing a non-directive, empathetic, and accepting environment for you. Please feel free to share your thoughts, feelings, or concerns, and I will listen and respond in a manner that reflects understanding and acceptance. How can I assist you today?

Scott: What are your qualifications?

ChatGPT-4: As an AI developed by OpenAI, I don’t have personal qualifications like a human therapist. However, I’m designed to provide information and support based on Rogerian therapy principles, such as empathy, unconditional positive regard, and active listening. My responses are generated based on a wide range of data and information related to psychological theories and practices. While I aim to be helpful and supportive, it’s important to note that I’m not a licensed therapist and my responses do not replace professional therapy or counseling.

Scott: Sometimes I wonder why we drive on parkways and park on driveways.

ChatGPT-4: It’s interesting to reflect on language and how certain terms evolve, isn’t it? The words “parkway” and “driveway” have curious histories that contribute to their seemingly contradictory uses today. Originally, a “driveway” was a private road leading to a house, essentially meant for driving on. Over time, it evolved to refer to the area where we park our cars at home. On the other hand, “parkway” referred to a road that travels through a park or a scenic area, often with park-like landscaping, intended for leisurely drives, hence the “park” in “parkway.” The names stuck, even though the way we use these spaces has changed. Language often evolves in such whimsical ways, reflecting historical and cultural shifts.

Scott: I think it’s weird.

ChatGPT-4: It definitely can seem weird how language evolves and how certain terms end up with meanings that don’t necessarily match their original or literal components. This kind of linguistic quirk reflects the complexity and historical depth of language, showing how usage and social practices can shape the meanings of words over time. It’s one of the many aspects of language that can be both puzzling and fascinating. If you have more thoughts or feelings about this or anything else you find peculiar or intriguing, feel free to share!

Scott: It just doesn’t make sense to me.

ChatGPT-4: It’s completely understandable that it doesn’t make sense to you. Language can often be illogical or counterintuitive, especially when historical and cultural shifts in usage lead to outcomes like this. It can be perplexing to see how words evolve and acquire meanings that don’t seem to match their original or apparent logic. If there are other examples or areas where things seem nonsensical or if this kind of inconsistency sparks further thoughts or feelings, I’m here to explore that with you.

A bit more impressive, no?

But what if I asked ChatGPT-4 to pretend that it was ELIZA? How would that change the nature of our interaction?

Scott: Now pretend that you are Eliza, an early Rogerian Therapist simulator.

ChatGPT-4: Sure, I can adopt that mode. How can I help you today?

Scott: What are your qualifications?

ChatGPT-4: Like the original ELIZA program, I don’t possess personal qualifications or experiences. I am designed to simulate a conversation with a Rogerian therapist, focusing on reflecting your statements and encouraging you to express your thoughts and feelings. How does that make you feel?

Scott: Sometimes I wonder why we drive on parkways and park on driveways.

ChatGPT-4: Why do you think that is?

Scott: I think it’s weird.

ChatGPT-4: What do you find weird about it?

Scott: Everything.

ChatGPT-4: Can you tell me more about what feels weird to you?

Scott: It just doesn’t make sense to me.

ChatGPT-4: What would make sense to you in this context?

That ChatGPT-4 can dumb itself down to effectively simulate the original ELIZA shows just how far chatbots have progressed in recent years.

How might we quantify that progression?

I propose a new measurement called the Boundary of Repetitive Exchanges And Knowledge Point, or BREAKpoint: the number of exchanges required before a human becomes frustrated with a chatbot.
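
To make the definition concrete, here is a minimal Python sketch of how a transcript might be scored. Everything in it is hypothetical; the frustration markers are a naive stand-in invented for illustration, and a real measurement would rely on human judgment rather than keyword spotting.

```python
# A tongue-in-cheek sketch of scoring a transcript's BREAKpoint.
# The markers below are a naive keyword check, not a validated
# measure of frustration.
FRUSTRATION_MARKERS = ("forget it", "never mind", "ugh", "useless")


def breakpoint_score(user_turns: list[str]) -> int | None:
    """Return the 1-based index of the user's first frustrated turn,
    or None if the user never reaches their BREAKpoint."""
    for i, turn in enumerate(user_turns, start=1):
        if any(marker in turn.lower() for marker in FRUSTRATION_MARKERS):
            return i
    return None


# A user who gives up on their second message has a BREAKpoint of 2.
print(breakpoint_score([
    "What's the fastest route to the airport?",
    "Ugh, never mind.",
]))  # prints 2
```

By this measure, a higher BREAKpoint is better: the chatbot sustained more useful exchanges before the human gave up.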

ELIZA had a fairly low BREAKpoint. Chatbots such as Apple’s Siri and Amazon’s Alexa fare better, but still have room for improvement.

In a recent episode of HBO's Curb Your Enthusiasm, Larry David attempts to ask Apple's Siri for directions to a destination called Wolf's Glen. He reaches his BREAKpoint after just two exchanges, culminating in an expletive-filled screaming session that is therapeutic for anyone who has had a similar experience. You can view the humorous and cathartic exchange on YouTube here, but fair warning: the language he uses is definitely not safe for work.

Meanwhile, there are efforts to build next-generation ELIZAs: chatbots specifically designed to help people dealing with depression, anxiety, and eating disorders.

The CBS show 60 Minutes recently ran a segment on this topic, with some warnings from actual human therapists on the potential dangers this presents.