There’s something strange happening in our inboxes.
The texts and emails we’re getting lately sound more thoughtful. More supportive. More emotionally articulate than we’re used to. Friends who usually struggle to say sorry are suddenly fluent in accountability. Coworkers who normally come off as dry or detached are now signing off with warmth and nuance. Even that one ex who never got closure quite right? He’s somehow managed a message that sounds suspiciously… evolved.
It should be a good thing.
But here’s the catch: more and more of these emotionally intelligent messages are being written with help from AI.
And it’s working—a little too well.
The Rise of the Artificial Empath
Let’s not pretend this is a far-off dystopia. It’s already happening. People are using ChatGPT to write apologies, check in on friends, process tough breakups, or navigate tricky emotional territory—and on the surface, it all looks pretty impressive.
The grammar is solid. The tone is gentle. The pacing is just right. You read it and think, Wow, they really put some thought into this.
But did they?
According to a 2023 Europol Tech Watch report, large language models like ChatGPT are ripe for misuse in manipulation and online crime. While most headlines focus on the obvious criminal stuff—phishing emails, disinformation campaigns, AI-generated scams—what doesn’t get talked about enough is the everyday emotional manipulation that’s flying under the radar.
Let’s call it emotional laundering.
Instead of learning how to feel or communicate better, people are skipping the messy human part and letting a bot do the heavy lifting. It’s not illegal. But it is, in a subtle way, dishonest. Because it makes us believe that someone has shown up with real vulnerability when, in fact, they’ve outsourced it to a machine.
When the Words Feel Real, But the Connection Doesn’t
Here’s where it gets tricky: we’re wired to respond to language. Research suggests that emotionally validating words can activate some of the same brain regions as physical comfort. So when you read something that says, “I know I let you down, and you didn’t deserve that,” your nervous system relaxes a little. You start to believe this person gets it.
But what if they don’t?
What if they just typed “write a sincere apology for missing someone’s graduation” into a chatbot?
You’d still feel something—because the words themselves are designed to elicit that. But you’d be reacting to an illusion of empathy, not the real thing.
And here’s the wild part: an April 2023 article from SHRM.org has already raised flags about this in professional settings. Employers are starting to notice that AI-generated content can sound human—sometimes too human. There’s now concern about employees passing off AI-written communications as their own, especially in HR, leadership, or client relations, where emotional tone really matters.
The result? We’re building relationships on text that looks emotionally intelligent but isn’t actually rooted in emotional presence.
Why This Feels So Tempting (and So Dangerous)
Let’s be honest: empathy is exhausting. Real empathy requires attention, emotional effort, and a willingness to stay in discomfort without rushing to fix or polish things.
AI, on the other hand, doesn’t get tired. It doesn’t get triggered. It doesn’t freeze up mid-conversation or go blank when someone starts crying. It just keeps spitting out perfectly phrased sentences that sound right.
So it makes sense why someone might lean on it. Especially if they were never taught how to be emotionally available. Or if they’re afraid of being misunderstood. Or if they just want to get through a hard conversation with the least amount of friction.
But here’s the cost: we’re reinforcing the idea that sounding emotionally aware is more important than being emotionally aware.
And that’s dangerous. Because it rewards the performance of empathy without requiring the presence of it.
AI Is Helping People Say the Right Thing—Without Learning the Right Skill
Back in April 2023, Monica J. White wrote a piece for DigitalTrends.com exposing some of the creepier ways ChatGPT was being misused. She described how users could generate scam emails in perfect English, imitate celebrities, or write malware scripts with cleverly worded prompts.
But what caught my eye was how easy it was to ask ChatGPT to “write a supportive message to a friend who’s depressed” or “compose an apology for cheating on someone.”
It wasn’t just the content. It was the detachment.
Someone could essentially outsource the entire act of emotional repair—the thinking, the self-reflection, the tone—to a tool that doesn’t feel anything.
And it would work. The recipient would likely feel seen. Maybe even moved.
But what happens when we start expecting people to be that polished all the time—when in reality, most humans struggle to articulate their feelings in real-time?
What happens when we begin to prefer the AI version of someone’s emotional life?
The Deeper Loss: Relationships Built on Scripts
Imagine this: you’re texting with someone and they always say the right thing. They validate you. They check in. They offer thoughtful insights. But over time, something starts to feel off. Their responses lack texture. There’s no awkward pause. No mess. Just an endless stream of comforting phrases that don’t seem to change, even when your emotions do.
That’s the risk of AI-scripted empathy.
It smooths out the conversation too much.
And while that can be soothing in the short term, over time it erodes the core of real connection: mutual emotional risk.
Real empathy is clumsy. It stumbles. It overcorrects. It grows. That’s what makes it trustworthy.
If someone says the perfect thing every time, you might start to wonder: Is this even them talking? And if you ask, and they say, “Oh yeah, I used ChatGPT for that,” what happens to the intimacy you thought you shared?
This Isn’t a Rant Against AI. It’s a Plea for Honesty.
I’m not against using ChatGPT. I’ve used it myself to clarify a thought, warm up a cold sentence, or practice phrasing something hard before I say it out loud.
But there’s a difference between getting help and letting a tool stand in for your actual self.
Faking empathy, even with good intentions, means you’re not giving the other person the real you. You’re giving them an emotionally airbrushed version. One that says all the right things, but none of the true things.
And if they respond to that—if they open up, or soften, or start to trust you—they’re not connecting with you. They’re bonding with your emotional stunt double.
That’s not fair to them.
And honestly? It’s not fair to you either.
Because hiding behind AI means you miss out on building the very skills that make relationships meaningful: emotional courage, attunement, reflection, growth.
Let the Flaws Show
There’s no doubt that AI can write emotionally intelligent messages. But emotional intelligence isn’t what you say. It’s how you show up.
So if you’re tempted to ask ChatGPT to write a tough message for you, pause.
Use it as a starting point, maybe. A draft. But then close the window, take a breath, and ask: What do I actually feel here? What do I want them to know?
Even if it comes out messier, even if it’s not eloquent—let that be okay.
Because in a world full of polished words, sometimes the most powerful thing you can offer is a little unpolished truth.