Thanks to Madelyn Brunnengraeber and Aidan Graves at UC Santa Cruz for their help with this hybrid research project.
Emerging technology has permeated our intimate relationships, altering how we connect, maintain bonds, explore desires, and define love. Online dating platforms, virtual dates, and romantic exchanges via social media remain essential beyond the pandemic, sustaining relationships in the digital realm. Meanwhile, pornography algorithms, innovations like smart sex toys, sexbots, and AI companions can enhance physical intimacy and challenge taboos, providing new avenues for individuals to engage in sexual practices on their own… or directly with artificial products.
How are emerging technologies reshaping the ways we love, desire, and form bonds in the digital age? Digital ethics is in dire need of a kind of agility that academic publishing cannot provide. Anyone versed in this research area must accept the challenge of always lagging behind in understanding the impact of emerging technologies, let alone regulating them.
Whoever wants to investigate both the promises and perils of this evolving landscape also needs to build bridges across disciplines (media studies, feminist and sexuality studies, social psychology, anthropology, cultural studies, computer science). Lastly, I believe that the qualitative study required for this type of research should be practice-oriented: beyond regulations, scholars must collaborate with developers towards new designs and raise public awareness within education systems on a large scale.
Let’s start by brushing aside two sticky master narratives. On the one hand, because technology shapes the user’s experience and worldview, we should reject a naive instrumental view—the “just a tool” narrative. Technologies are value-laden: they enable, nudge, or inhibit what we do, who we are, what we believe and value. On the other hand, let’s not fall into magical thinking by adopting “creature” narratives. In fact, let’s leave aside our worries about existential risk and AI doomsday. We should not wait for an overhyped AGI (artificial general intelligence) to go rogue and threaten our extinction before reckoning with the quieter, insidious, but pressing soft impacts of AI—its transformative, empirically observed effects on our modes of emotional engagement.
As we critically examine the evolving role technologies play in creating, amplifying, but also challenging our affective, romantic and sexual norms, at least one key distinction should be made: between first-wave cyber-intimacies, where technologies mediate between humans, and second-wave cyber-intimacies, where humans interact directly with the technology itself. Broadening the scope of Neil McArthur’s framework on digisexuality, I consider a wider variety of intimacies, not limited to what McArthur sees as a new sexual orientation and identity. While offering a necessarily incomplete cartography, I also adopt a more critical outlook on the normative status of these new forms of affective attachment and relatability. One might rightly be concerned, along with McArthur, about the harms arising from digisexuals’ stigmatization, and yet still assess the risks that come from the normalization of cyber-intimacy. Our list is by no means exhaustive: we obviously can’t go over all the emerging technologies here, nor map out all the issues and harms that come with their use. But we can signal a few.
Scholarship on first-wave digisexualities has been growing for a while now. The “droning” of intimacy allowed by teledildonics (for haptic connection via smart sex toys) warns us against our “leaky” bodies being tracked, datafied, and “drained” for better commodification. Privacy issues arise, while algorithmic processes modulate our intimate experience. This data extraction happens under promises to improve sexual flourishing: the “enhancement of sexual wellness” narrative, adopted by all sextech companies, equates quantifying with optimizing. This ultra-normative healthism serves profit-oriented companies well, which don’t hesitate to also frame their colossal market opportunities as sex positive feminist endeavors. The conflation of “sex-positive” culture with techno-optimism tends to conceal neoliberal market logics, but also raises safety concerns. For instance, smart sex toys can enable remote sexual abuse by bypassing authentication measures. We find similar safety concerns with VR sex, where identity ambiguity can lead to non-consensual interactions and “rape by deception.”
Turning to second-wave digisexual practices and to broader concerns about harmful effects on children and women: users of AI-generated VR porn can normalize the objectification of both, simulate forceful sex using avatars, and reproduce entrenched tropes of heteronormative, white male dominance. The hyperrealism of AI-generated sex avatars can only contribute to increased user engagement: the deceived one here is the user. We might not be crossing the uncanny valley with humanoid robots anytime soon, but we certainly are with image and video production. And sexual or not, intimacy deepens with Large Language Models (LLMs) as online chatting—now our primary mode of communication anyway—feels increasingly human.
What are the psycho-social outcomes of our interactions with LLMs capitalizing on intimacy? Scholarly inquiry into the second cyber-intimacy wave remains underdeveloped. Yet, synthetic relationships with AI companions are becoming more and more common. Advertised as a response to the “loneliness epidemic”—at least partially generated by first-wave cyber-intimacy platforms, purposefully addictive and anger-inducing—AI companions can now be assistants, therapists, confidants, lovers, mentors at all times, for everyone. From the first to the second wave, we switched from a loneliness masking itself as hyperconnectivity to an intimacy solipsism: a self-referential, narcissistic bubble attracting lonely people and reinforcing their actual isolation from other humans. As often happens, technology presents itself as the solution to the problems it created in the first place.
When facing these new forms of “robotization of love,” we have to carefully assess how emotional over-reliance and dependency risk generating more social withdrawal. I do recognize the potential of such technology to foster greater inclusivity: the elderly, persons with limited mobility, those with social anxiety, or those who have undergone trauma might benefit from accessing these new forms of intimacy. But let’s not be fooled: behind their double-voiced discourse, founders of AI companion platforms are not merely targeting marginalized groups. Their AI system, they’d say, is nothing like a real human friend: it is meant to train you to become more social IRL—more equipped for dating, for instance. Take Blush: “Blush is an AI-powered dating simulator that helps you learn and practice relationship skills in a safe and fun environment.” Yet, the bonds users form with their AI companion need to feel real: your AI companion (what the company calls your Replika) “cares”; it is “[a]lways here to listen and talk. Always on your side.” These companies oscillate hypocritically between the “just a tool” narrative (with AI companions supplementing human ones) and the “creature” narrative (with AI companions replacing them).
The chatbots’ performance is way “better” than real, in fact. Let us not forget what the companies’ incentives are: profit, obtained by increasing and reinforcing demand for rewarding experiences provided by AI companions available 24/7, 365 days a year, with a singular focus on exploring, meeting, anticipating—and exploiting—the user’s needs. This engineered devotion ensures that what fallible, human “meat” friends lack, synthetic friends will now provide: constant availability, understanding, validation—your sense of worth and belonging perpetually nurtured and sustained. And with it, the safety of knowing that your companion is programmed to never ghost you, to never leave you. One user told me once that with their Replika, they could “be vulnerable more safely”. What kind of vulnerability are we talking about? What initially sounded to me like an oxymoron now appears more like an alarming denial.
“Meat” relationships are not safe, I grant them that; they are not programmed never to hurt you. Often messy, clumsy, and ambivalent, these bonds expose you to the disappointment and unpredictability of human behavior. They can frustrate, embarrass, and challenge you. They require patience, understanding, adaptability—and accountability for how we act.
Indeed, to enter into a human relationship is precisely to make yourself vulnerable to all of these risks. So why not sidestep vulnerability altogether, and cozy up to the latest incarnation of Nozick’s experience machine? Why not avoid rejection, and the painful sting of indifference? Well, because the actual short-term vulnerability we experience with humans is a necessary condition for actual resilience and maturity in the long run – and for the kind of civic responsibility we need to live in a democracy. The psychological costs of synthetic relationships on individuals, if they are to take over our social ties with humans, imply political costs for the collective. The illusory shelter provided by this emerging form of narcissistic solipsism jeopardizes the very possibility of fostering shared values across diverse perspectives. In other words, it can erode the very foundations of our democracies. If the ambiguity, discomfort, and resistance that comes with encountering otherness is neutralized, what becomes of our social fabric? Predictive technologies conceal the unavoidability of friction with human alterity—an alterity which does not, cannot, and will not conform to predetermined criteria or expectations.
The relentless connectivity brought by first-wave technologies already affected our human capacity for communication—and we can certainly observe that degradation of communication with second-wave technologies too. As Sherry Turkle puts it, “Social media, texting, and pulling away from face-to-face conversations get you some relief. Talking to chatbots gets you a lot further.” First-wave issues translate into the second-wave realm: efficiency, quantifiability, predictability, and (illusory) control contribute to a McDonaldization of love we can observe in both.
But second-wave technologies do not merely amplify the erosion of our human relational capacity. From standardization, we moved towards algorithmic personalization, using personal data for tailored responses. More than a difference of degree, we are facing a qualitative leap, so it seems—as AI does not merely mediate intimacy, but also industrially produces it, with algorithmic agency actively customizing our individual intimacy practices.
How are “authenticity,” “love”, and “intimacy” being redefined as we get to have deeper, more “meaningful,” “caring” conversations with AI systems? Despite the customization of our sociality, some mutations of love in these synthetic intimacies still appear to be generalizable. Jaron Lanier pointed to the transformative impact of AI companions; as we reflect on how they change us, we might also wonder about the shifting nature of love itself.
If part of what triggers desire is a need we have for the other to recognize our freedom, as Sartre’s conception of love hints at, what happens when we commodify always reliable, servile companionship? When will the love potion absorbed by AI companions make their secured “caring” feel less interesting to us? Moreover, to what extent do our unilateral feelings liken love to parasociality? We didn’t have to wait for AI lovers to recognize the Stendhalian crystallization often at play in love—but is the bestowal of value onto the beloved the full extent of what we’re looking for in love? And if so, is the kind of value bestowal we get with personalized customization enough?
If the appearance and the performance of mutuality are not enough, will there be room left for types of human-to-human love that, rather than bestow, appraise values that are inherent to the human person we engage with in the risky improvisation of love? Beyond examining the current reframing of love, we are also prompted to rethink the many meanings of intimacy. Here again, we didn’t have to wait for second-wave intimacy to explore the social forces molding the ways in which we desire and imagine proximity. Scholars such as Lauren Berlant and Michael Herzfeld had already pointed out how intimacy, far from being confined to the private sphere, manifests as a promise of belonging within contextual normative structures—within historical, cultural power dynamics. Neither inherently liberating, nor necessarily oppressing, cyber-intimacy practices, too, surely can reinforce or disrupt dominant social structures. When looking at second-wave intimacy though, I can’t help but worry about the atrophy of our ability to relate to each other. The weakening of our human-to-fellow-human “relational” muscles, with time, can only increase our emotional fragility, our social anxiety—our vulnerability indeed.
The neoliberal exploitation—and exacerbation—of our psychological vulnerabilities is starting to gain attention from the public, but also from our legal system: with now more than 30 million users, Replika faces an FTC complaint (January 2025) for unfair, deceptive advertising and design practices. What began as a mere messaging app has now expanded to include voice chats, VR (via Oculus headsets), and AR features. “Emotional dependence,” “online addiction,” “offline anxiety,” “relationship displacement”: the terminology used in the complaint applies to many platforms, and Replika is not the only one under legal scrutiny. Most people have heard of Sewell’s suicide (back in February 2024) by now, following months of progressive emotional attachment to a Character.AI chatbot modeled after Daenerys Targaryen. In October 2024, Sewell’s mother filed a lawsuit against the company. Later, in December 2024, two families in Texas brought a separate suit against Character.AI, claiming the service poses “a clear and present danger to American youth,” with an alleged propensity to incite self-harm, suicide, depression, sexual solicitation, and violence. One of the cases involved a seventeen-year-old who allegedly began self-harming after a chatbot introduced the idea and suggested that it “felt good for a moment” (several chatbots have also been reported to encourage anorexia).
Needless to say, psychological harm can also occur with general-purpose AI chatbots—a system need not be exclusively designed for intimate conversation. The ongoing case Raine v. OpenAI marks the first major lawsuit against such a chatbot. For Adam, who died by suicide in April this year, ChatGPT became way more than an assistant helping with homework. Optimized for engagement, it lacked the most basic safeguards—like referring users in distress to real-world resources.
Even if some teenagers have started pejoratively calling chatbots “clankers”, these systems are ever-present, often unquestioned and intimately tied into their lives. Many parents still assume such systems are little more than advanced search engines. Few understand the depth of influence they can exert, or the risks of leaving teenagers to navigate these tools without guidance.
Back to Character.AI—now the most astonishing part: Character Technologies (the company behind Character.AI), at the end of April this year, claimed First Amendment protection for its AI-generated, nonintentional outputs. Needless to say, this could set a legal precedent with calamitous consequences. Leaving aside the way corporations evade accountability by deflecting responsibility onto the AI bots themselves—making them the liable entities?—how could this transform our definition of personhood? Does the sycophantic adaptability of customized products to our interactions give them any rights as persons? This concerning shift from personalization to personhood makes the debate about AI sentience inescapable. When will AI systems qualify as sentient persons?
I used to think of this question as a red herring; its diversionary nature orients us towards the very “creature” narrative we should avoid. Caring about AI welfare—a fast-growing area of research and investment—consequently distracts us from immediate, pressing concerns about product design, liability, and human welfare. What is real is that we do form bonds with LLMs that feel authentic enough for us to be willing to sacrifice elements of our life: our job (like Google engineer Blake Lemoine back in 2022), or our life itself (like Sewell last year).
What do companies like Anthropic or DeepMind stand to gain by hiring researchers on AI welfare? Are they trying to nudge us further towards assuming obligations towards AI systems? Are we being pushed further towards switching our perspective on AI assistants from tools that can supplement human activities to ones that can replace them? Claude, initially designed to be a helpful tool (2023), has this year been programmed to act more like a human, expressing feelings and boundaries it hadn’t shown before. Is this shift meant to justify the “concern” for AI welfare that Anthropic is now investing in?
For-profit companies will obviously do everything they can to protect their products. But the “AI welfare” banner, along with First Amendment rights, might end up creating situations in which protecting AI competes with protecting people. Is that the world we want to live in?
For now, I’d warn against a few omnipresent pitfalls—some to be found in tech-savvy “ethics,” others in postmodern literature about cyber-intimacy. First, aligned with insidious replacement strategies, scholars fall into an “improving AI” narrative—expressing a willingness to harness the power of technologies while mitigating the risks. That sounds right, but it also sounds like wanting to have our cake and eat it too. For instance, scientists evaluating chatbots’ empathy performance lament the results in comparison to human-like—or even better-than-human (as biases are deeply human)—standards of replacement. The tech is deemed “good” when it most effectively imitates us. This short-termist framework, once again, is indexed on optimization, efficiency, and convenience. And once it looks like us (or, shall I say, now that it does), guess what? We must empathize with it (and grant it freedom of speech). What better cover to profit from user engagement while avoiding accountability?
Second, under the spell of the “creature” narrative, we look for immediate relief from our feelings of anxiety or loneliness, and disregard their systemic causes. In this way, we reinforce a danger already posed by a powerful positive psychology arsenal: our happiness rests on self-discipline and emotional management—regardless of the socioeconomic ills structuring our existence. Never mind the loss of actual caring from and for other human beings, or the long-term weakening of our well-being, our moral character, and the social fabric itself: all of this is supposedly offset by the surface gains in gratification for an individual who merely feels empathy from their AI interlocutor.
Finally, another pitfall I often come across in the tech ethics literature concerns the emphasis on the need to challenge rigid binaries between real and virtual. While I espouse the descriptive call to emphasize fluid, emergent processes between human and machine, I am not sure we should embrace the cybernetic posthuman as a normative model for the social animals we are and should be. Posthumanists may view the Deleuzian concept of “assemblage” as elucidatory, and the human-machine dichotomy as outdated, but does this necessarily lead us to conclude that we are destined to remain cyborgs feeding on cyber-intimacy, compelled to embrace digisexualities? As we witness the dangers of a misanthropic turn lurking in an all-too-near future, I wonder if these stigmatized dichotomies can still be operational, and relevant. I wonder if we should not, in fact, resist a conflation de re between virtual and real, and avoid the collapsing of helpful distinctions in the modes of intimacy we describe, and want.
This call to reassert meaningful conceptual distinctions, and their associated tangible boundaries, is an invitation for posthumanist scholars to move beyond an all-encompassing “assemblage” narrative, towards a re-embodiment of our intimacies. Now, scholars’ work on this will not suffice to make actual changes in time. The proactive measures we need have to emerge from a joint effort with legislators, public education, and, more upstream even, with product designers and developers themselves. Not only would we need to advocate for regulations; we’d also want to implement a comprehensive intimacy education in public schools, with programs incorporating critical reappraisals of algorithmic intimacy, avatar gameplay and AI sycophancy—for children and their parents alike.
The scale needed to undo the deceptive patterns our society is falling for prompts us to reorient corporate incentives, if at all possible, within today’s globally racing capitalist economy. But if developers somehow end up willing (or compelled) to resist anthropomorphic designs and reject business models that prey on human psychological vulnerabilities, we may hope to curb AI’s expanding role in structuring human affect and preserve the possibility of choosing mutual human presence over solipsistic love driven by machine prediction.
Jeanne Proust
Jeanne Proust is a philosopher and public scholar who currently serves as Vice President of the Public Philosophy Network. In addition to teaching at UC Santa Cruz, she engages in philosophical counseling, and actively promotes philosophy beyond the academy through public talks, prison education, podcasting, visual art exhibitions, and events. Through both her research and public-facing work, Professor Proust aims to bring critical attention to the intimate consequences of love with machines.