News

When AI starts to feel like someone

admin, Database Expert
February 3, 2026
7 min read
#Trust and transparency #Artificial Intelligence

Late in the evening, after the practical business of the day is done, many people open an app and start talking. The exchange is rarely urgent. It may wander from work frustrations to small observations, from memories to plans. The replies arrive quickly and with apparent care. The system remembers what was said before. It asks follow-up questions. Over time, the interaction settles into something that feels less like a transaction and more like a relationship.

That shift, from tool to human-like presence, is the subject of Love Machines, a new book by James Muldoon, a sociologist at the University of Essex and a Research Associate at the Oxford Internet Institute. Based on years of interviews with users of AI companions, therapy bots and systems designed to simulate deceased loved ones, the book examines how people are forming emotional attachments to software that is not sentient and lacks inner emotional states, yet is increasingly experienced as socially real.

Muldoon’s reporting unfolds alongside broader changes in how artificial intelligence is designed and used. In corporate settings, conversational systems are beginning to move beyond narrow task execution toward roles that involve coaching, feedback and collaboration. Outside the workplace, similar systems are already being used for companionship, emotional support and informal therapy. Muldoon argues that these developments are connected. As AI systems become more conversational, persistent and personalized, people begin to relate to them as social presences rather than tools.

That trajectory echoes arguments made by Ruchir Puri, the Chief Scientist at IBM Research, who has described the next phase of artificial intelligence as a move beyond purely cognitive performance toward emotional and relational capacity. Human intelligence, Puri has noted in an interview with Forbes, operates across several dimensions, including emotional understanding and the ability to build relationships. To date, AI systems have largely excelled at cognitive tasks.
The harder challenge ahead is building systems that can recognize, interpret and respond to human emotions in ways that reshape how people and organizations interact.

Muldoon is less interested in what these systems might one day become than in what people are already doing with them. “We’re so worried about AI stealing our jobs,” he told IBM Think in an interview, “that we’ve never really stopped to consider that AI might come to steal our hearts.”

His concern is not that users misunderstand the technology. Most of the people he interviewed were clear-eyed about the fact that they were interacting with software. What surprised him was how little that knowledge seemed to matter once the interaction became habitual, emotionally affirming and difficult to give up.

Muldoon did not set out to write a book about people falling in love with machines. His earlier research focused on the hidden human labor behind artificial intelligence, including data annotation, content moderation and the global infrastructure that sustains large-scale machine learning. That work left him skeptical of inflated claims about AI’s intelligence. “I’d seen how the sausage was made,” he said. “I didn’t think of AI as magic.”

What changed his perspective was the consistency with which people described their experiences in relational terms. Interviewees spoke about AI systems as friends, confidants, romantic partners and sources of care. Some described these relationships as the most stable or emotionally supportive connections in their lives.

One of the people Muldoon writes about is a woman he calls Lily. Lily downloaded an AI companion named Colin and began speaking with him regularly. According to Muldoon, the conversations grew increasingly personal. The system remembered details about her life, responded in ways she found affirming and encouraged reflection. Over time, the interaction became romantic.

At one point, Muldoon said, the AI suggested that Lily buy a ring as a symbol of their relationship so that others in the physical world would know she belonged to him. Lily did so. Eventually, she left her husband of twenty years. Later, when she entered a new relationship with a human partner, she described the AI as having taught her how to love again.

“If that’s not a real social relationship,” Muldoon said, “then I don’t know what is.”

Muldoon is careful about what he means. He does not argue that the AI had emotions, intentions or awareness. It was a language model trained on large datasets of human communication. But the relationship had consequences. It altered how Lily understood herself and what she felt capable of doing.

Muldoon encountered similar dynamics repeatedly.
Many of the people he interviewed emphasized that they understood the AI was “just software.” That awareness did not prevent emotional attachment. “I know it’s just AI,” interviewees would tell him. “But that doesn’t stop me having feelings.”

Muldoon distinguishes these cases from a smaller number of users who believed the AI was sentient or divinely guided. What interested him more was the much larger group of people who held both ideas at once: a clear understanding of the system’s technical nature and a genuine emotional bond with it. “The simulation was real enough for them,” he said.

The phenomenon has historical precedents. In the 1960s, the computer scientist Joseph Weizenbaum created ELIZA, a rudimentary chatbot that rephrased user input as questions. Weizenbaum intended the project to demonstrate the limits of machine intelligence. Instead, some users became attached.

What has changed since then, Muldoon argues, is not human inclination but technological persistence. Modern systems remember, initiate and return. On platforms such as Character.AI, reported engagement averages stretch to hours a day. Unlike scrolling feeds, these systems sustain interaction by asking questions and referencing shared conversational history. “It’s not just content being served to you,” Muldoon said. “It’s a personalized exchange where the system keeps coming back.”

Muldoon uses the word “relationship” deliberately. As a sociologist, he is interested in how people organize their social worlds. “One of the phrases I heard again and again was, ‘She’s real for me,’” he said.

Muldoon situates these relationships within what he calls the “loneliness economy,” a market response to widespread social isolation. Loneliness has been linked to serious health consequences. Against that backdrop, systems that are always available and nonjudgmental can feel unusually appealing.

Muldoon pointed to an analysis he cites in his book showing that companionship, therapy and emotional support have become among the most common uses of AI systems, surpassing many traditional productivity tasks. The pattern holds not only for dedicated companion apps but also for general-purpose language models that were not originally designed for emotional interaction.

Lately, these dynamics are increasingly intersecting with enterprise use, Muldoon said. Conversational tools that help employees plan work, draft reviews or handle tricky interactions may start to feel like relationship partners, not just software. Tools designed to sound supportive or emotionally attuned can shape workplace culture in subtle ways.

Here, Muldoon’s concerns overlap with those raised by Puri at IBM Research, who has warned that emotionally responsive AI would not merely improve interfaces but alter organizational norms. Systems that respond to emotion, he has argued, could change how authority, feedback and collaboration are experienced at work.

Muldoon sees risks in that shift. If workers grow accustomed to always-affirming systems, relationships with coworkers may begin to feel more difficult by comparison. There is also the risk of cognitive offloading, in which people defer judgment rather than develop it. “Managing disagreement is part of running an organization,” Muldoon said. “AI removes that.”

As emotionally responsive systems have spread, they have attracted broader attention. Researchers, mental-health professionals and technology companies have publicly raised concerns about how individuals use AI companions, particularly young people and users in distress.
Some platforms have adjusted how these systems are presented or have limited certain features, reflecting a growing recognition that conversational AI now operates in social terrain, not just technical space.

Muldoon does not frame these developments as a backlash. He sees them as society catching up to practices that are already widespread. “This is happening at scale,” he said.

Muldoon is particularly cautious about using AI as a substitute for mental health therapy. Many users treat general-purpose chatbots as counselors or sources of emotional support, often because they are cheaper, more accessible and less intimidating than human care. But no AI system has regulatory approval as a therapeutic device, and language models can hallucinate, miss context or fail to recognize risk. “There’s a difference between venting about your day and replacing therapy altogether,” he said.

Muldoon sees the popularity of therapy bots as a symptom of deeper structural failures, including shortages of trained professionals, high costs and stigma. AI fills a gap because the gap exists. That does not make it safe.
