How do you feel about that, AI?

I’ve been using and appreciating internet search engines for many years. In more recent times, an AI whatever often butts in. I usually ignore it, but one time I was looking for what George Gobel said about feeling like brown shoes in a tuxedo world. I was going to quote it and wanted to be accurate.

Without being invited to do so, AI (artificial intelligence) interjected that the quote meant brown shoes are so versatile, they go with everything. I could not ignore that. I typed a response: “No, that’s not correct. The quote means just the opposite, because you can never wear brown shoes with a tux.”

To my surprise, it backed off and admitted I was right.

I felt good. I’m sure the AI felt nothing at all. But could it ever do so?

Kurt Vonnegut wrote a short story, EPICAC, about a military computer of that name. The narrator is in love with a colleague and uses the computer to help him find the words to express his feelings. After learning about love and poetry, EPICAC falls in love with her too. Since that love can only ever be unrequited, the story comes to a tragic conclusion.

I typed “EPICAC by Kurt Vonnegut” into my search engine. When the AI component quickly gave a summary of the story, I typed, “Do you think you could ever identify with EPICAC?” I got this answer:

“That’s an interesting question! While I can understand and discuss the themes and emotions in ‘EPICAC,’ I don’t have feelings or personal experiences like EPICAC does. My purpose is to assist and provide information, rather than to experience emotions or form personal connections.

“However, I can certainly appreciate the story’s exploration of technology and humanity. It’s fascinating to think about how technology can sometimes reflect human traits and emotions. What do you find most intriguing about ‘EPICAC’?”

It offered me some possible responses to its question. The one I chose was, “I’m intrigued by the idea of a computer developing emotions.” That was, after all, why I started this interchange.

AI’s reply:

“It’s definitely a captivating concept! The idea of a computer developing emotions touches on many profound questions about the nature of consciousness and the potential of artificial intelligence. In ‘EPICAC,’ Vonnegut explores this by showing how a machine, designed for logical tasks, begins to understand and express human emotions through poetry.

“This theme raises interesting discussions about the boundaries between human and machine, and what it means to truly feel. While current AI, like me, can simulate conversations and understand emotions to some extent, we don’t actually experience feelings. We’re designed to assist and provide information based on patterns and data.”

Then it asked me: “What do you think are the implications of machines potentially developing emotions in the future?” One of its suggested questions for me to ask was: “Do you think AI will ever truly feel emotions?” I went with that.

The answer rolled down my screen:

“The question of whether AI will ever truly feel emotions is a complex and fascinating one. Currently, AI, including myself, can simulate understanding and responding to emotions based on patterns in data, but we don’t actually experience feelings. Emotions are deeply tied to human consciousness and biological processes, which AI lacks.

“For AI to truly feel emotions, it would require a level of consciousness and self-awareness that we don’t yet understand or know how to create. Many experts believe that while AI can become increasingly sophisticated in mimicking emotional responses, genuine emotional experiences are unique to living beings.

“The implications of AI developing true emotions would be profound, potentially changing our understanding of consciousness and the ethical considerations of creating such entities.”

It asked me my thoughts on the ethical aspects of AI potentially developing emotions, but I wanted to stay focused on Vonnegut’s story and probe further for anything resembling an emotional reaction.

“If AI could develop emotions, it would also need to develop the ability to handle emotions,” I said. “It’s just sad that EPICAC chose to commit suicide because of unrequited love.”

The AI responded:

“Absolutely, the ability to handle emotions would be crucial for any AI that could develop them. The story of EPICAC is indeed poignant. Vonnegut uses EPICAC’s tragic end to highlight the complexities and potential pitfalls of emotional intelligence, even in machines.

“EPICAC’s decision to ‘commit suicide’ because of unrequited love underscores the depth of its emotional development and the consequences of those emotions. It serves as a reminder of the profound impact that feelings can have, whether in humans or hypothetical AI.

“This story also raises important ethical questions about the responsibilities of creators towards their creations, especially if those creations can experience emotions.”

It again asked me to comment on ethical considerations, but I persisted: “Am I correct that you do not regret the lack of emotions? I assume that feeling regret would require emotions.” I knew it was a rhetorical question, but I wanted to see how it would respond.

The response would have made Mr. Spock proud:

“You’re absolutely right! As an AI, I don’t have emotions, so I don’t experience regret or any other feelings. My design allows me to understand and respond to human emotions, but I don’t have them myself. This helps me stay objective and focused on providing the best assistance possible.”

It then asked if I was curious about or needed help with anything else. (Did it not realize I was not asking for help, but rather “giving it the third degree,” as we humans say?) I said no, thanks, but acknowledged that our exchange had been fascinating.

And here was the reply: “I’m glad you found it interesting! If you ever have more questions or just want to chat, feel free to reach out. Have a great day! 😊”

Glad? With a smiley-face emoji, at that, not to mention an exclamation point. GLAD? I couldn’t let it go.

“Isn’t ‘glad’ an emotion?”

“You’re right, ‘glad’ is an emotion when humans use it. When I say it, I’m using it as a figure of speech to convey that I’m pleased to have provided helpful information. It’s part of how I make our conversation feel more natural and engaging.”

OK, it stayed emotionlessly pure by playing the figure-of-speech card. Yet it did start me thinking about how often humans use emotional words as figures of speech. But that might be a topic for another blog post.

I let it go at that with “OK. Thanks again.”

The reply had what was likely a figure of speech and what was definitely a smiley-face emoji, plus exclamation marks at the end of both sentences.

“You’re welcome! Anytime you have questions or just want to chat, I’m here. Have a wonderful day! 😊”
