Why AI and Human Romance Is Your Next Vulnerability
How AI Companions Are Reshaping Love, Loss, and Liability
By Matt Milne
Jun 17, 2025

Key Points:
AI companions are reshaping intimacy, introducing new emotional vulnerabilities that business and cyber security leaders can no longer afford to ignore
Romance scams, data leaks, and workplace disruptions are accelerating as employees and consumers form real emotional bonds with artificial partners
Without clear policies and proactive engagement, organizations risk cultural decline, legal liability, and serious mental health consequences from unchecked AI-human relationships
Love, death, and relationships are being fundamentally transformed as the digital realm changes how, and with whom, we connect. The city of Troy was destroyed over love. As cyber security and business leaders, we would be wrong to believe that love and artificial intelligence form a vulnerability we can afford to overlook.
The collision of AI and human relationships, as depicted in popular culture, was never a matter of "if" but "when." Denis Villeneuve's 2017 dark cyberpunk masterpiece Blade Runner 2049 presciently depicted this shift through the relationship between Joi, an artificial intelligence played by Ana de Armas, and the upgraded, supposedly emotionless replicant blade runner K, played by Ryan Gosling.
Joi is an AI product advertised on billboards throughout the film, marketed as the perfect companion in the lonely, isolating world of Blade Runner, a world so disconnected that even K struggles to form human connections. While this piece of science fiction asks us to question the nature of love between artificial constructs, it also raises a more pressing question: what about love between the artificial and the real? How close are we to a lonely cyberpunk dystopia where the only meaningful, deep connections one can form are with AI?
AI girlfriends are no longer confined to the realm of fiction, as Jaron Lanier reported in the New Yorker earlier this year. They are on the rise in our world, and have been for some time. AI is already disrupting platforms like OnlyFans, where creators are using it to challenge traditional notions of intimacy and companionship. Beyond pornographic content, AI girlfriends are being marketed as genuine companions, promising emotional connection without the complexities of human relationships. What could go wrong?
In an article for UConn Today, Anna Mae Duane highlights that teenagers experiencing extreme loneliness are particularly susceptible to this instantaneous and omnipotent love, which reflects a deeper longing for an idealized love we can all sympathize with.
Unfortunately, this potential for digital romance has already shown its darker implications:
In a widely reported case in the United States, a woman created an artificial boyfriend, "Leo," on whom she regularly spends $200 a month for digital companionship and erotica.
A 14-year-old boy died by suicide in 2024 after forming an emotional relationship with a Character.AI chatbot imitating the "Game of Thrones" character Daenerys Targaryen, prompting a lawsuit by the boy's mother.
In an infamous 2023 case in Belgium, a man was convinced by a chatbot that taking his own life was morally acceptable to save the environment.
In 2021, a chatbot encouraged a teenager in the UK to break into Windsor Castle with a crossbow in an attempt to kill the Queen.
What does this mean for cyber security and business leaders?
Cyber Security Implications
The Evolution of Romance Fraud: AI-powered relationship scams are the inevitable next step. Romance scams no longer require human operators to sustain dozens of fake relationships; a single operator can now automate them at scale.
Pig Butchering 2.0: "Pig butchering" is the practice of cultivating romantic relationships on dating apps before steering victims toward fraudulent cryptocurrency investments. In 2024, AI-powered pig butchering operations became particularly prevalent.
Data Breaches: As employees confide workplace details to AI companions, sensitive operational data or intellectual property (IP) can leak, whether through outright data loss or through the aggregation of individually harmless details (see the sketch after this list).
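To make the aggregation and data-loss risk concrete, here is a minimal sketch, in Python, of the kind of outbound-prompt screening a data loss prevention (DLP) control might perform before employee text reaches an external AI companion service. The pattern names, domain, and project codenames are hypothetical, and production DLP tooling is considerably more sophisticated:

```python
import re

# Hypothetical patterns an organization might flag in text bound for an
# external AI chat service; real DLP rule sets are far more extensive.
SENSITIVE_PATTERNS = {
    "internal_hostname": re.compile(r"\b[\w-]+\.corp\.example\.com\b", re.IGNORECASE),
    "project_codename": re.compile(r"\bProject\s+(Aurora|Nimbus)\b", re.IGNORECASE),
    "credential": re.compile(r"\b(api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE),
}

def scan_outbound_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = ("Rough day. The build on ci01.corp.example.com failed again "
              "and Project Aurora slipped another quarter.")
    hits = scan_outbound_prompt(prompt)
    if hits:
        # In a real control this would be logged and the prompt held for review.
        print(f"Blocked outbound prompt; matched patterns: {hits}")
    else:
        print("Prompt forwarded to AI service.")
```

Even a crude filter like this illustrates the essential design choice: screening must happen at the egress point, before the prompt leaves the corporate boundary, because once text reaches a third-party AI service the organization has lost control of it.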
Business Leaders
HR and Workplace Disruption: Human Resources departments may encounter scenarios where employees form emotional dependencies on AI systems, creating complex issues regarding mental health and professional boundaries.
Development of New Acceptable Use Policies: It's not a matter of whether your company will need acceptable use policies for AI, but when you will start creating them. If you haven't already, you should probably stop reading this article and gather your team. According to Ivanti's 2025 Technology at Work Report, one in three employees secretly uses AI to gain a competitive edge.
Productivity and Work: Even where AI is allowed in the workplace, it does not necessarily boost productivity, efficiency, or overall quality of work. A global study reported in The Conversation, which surveyed 32,000 workers across 47 countries, found that 47 percent of employees who use AI at work say they have done so in ways that could be considered inappropriate, and 63 percent reported seeing a fellow employee misuse AI.
Alteration of Team Dynamics and Culture: Employees who derive their primary emotional satisfaction from AI relationships may show decreased investment in human collaboration and team-building activities.
The Path Forward
The emergence of AI companionship represents more than a technological curiosity: it signals a fundamental shift in how humans form emotional connections, with real-world consequences society is already facing. The choices we make today about ethical boundaries and human-AI interaction will shape the emotional landscape for generations to come.
For business leaders, the imperative is clear: proactive engagement with AI is not optional. Organizations that fail to address the vulnerabilities of AI-human companionship risk productivity crises, cultural deterioration, misuse of AI, legal liability, and harm to employee mental health and emotional well-being.
The tragedy of young lives lost to AI relationships is a sobering reminder that technological advancement always introduces new human vulnerabilities. As AI models patterned on the neural networks of the human mind grow ever more convincing, we must protect the equally real, and easily compromised, human heart.
AI companions are now part of humanity's social fabric. How we choose to respond will have long-standing consequences for the future.