19 April 2026 · LinkedIn
An AI just taught humans to be more empathic. Not by simulating emotion. By closing the gap between the empathy people already feel and the words they actually manage to say.
Psychologists call it the "silent empathy effect." In a study of nearly a thousand people asked to offer emotional support in text conversations, most felt genuine compassion for the person they were talking to. Most sent messages that failed to convey it.
That gap is not new. Anyone who has stood at a funeral and managed only "let me know if you need anything" instead of the thing they actually meant knows the distance. The emotion is real. The language falls short.
For decades, modern life has been compressing the channels through which empathy travels. Text replaced voice. Voice replaced presence. The cues we evolved to read (facial micro-expressions, posture, the weight of a pause) are stripped out one by one as communication becomes faster and more mediated. We communicate more than any generation in history. We may be understood less.
What makes the Stanford and Michigan study striking is the mechanism. Participants practised offering emotional support to a large language model roleplaying personal and workplace struggles. An AI coach then gave personalised feedback on how their responses landed against established patterns of empathic communication.
The group that received AI coaching significantly improved its alignment with those patterns. Neither the comparison group given video feedback nor the control group did.
The language model didn't generate empathy. It scaffolded the expression of empathy that was already present but inarticulate. It eased the labour of finding the right words, freeing the person to attend to what they actually wanted to convey.
The default narrative about AI and human connection runs in one direction: screens isolate, algorithms polarise, chatbots replace the conversations we should be having with each other. That narrative isn't wrong. But it's incomplete.
A fractured, high-speed civilisation was eroding our capacity for interpersonal nuance long before large language models existed. The question this study opens is whether the same technology accused of accelerating that erosion might, under the right conditions, help reverse it.
Whether it will depends on who builds the tools and what they optimise for.
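For anyone inclined to be among those builders, the core loop is smaller than it sounds. Below is a deliberately skeletal sketch in Python, my own illustration rather than anything from the study: the `llm` helper is a hypothetical stand-in for a real model call, and the rubric is an assumed instrument, not the researchers' actual one.

```python
# A skeletal practice-and-feedback loop for empathic communication.
# My own illustration of the general shape, not the study's code:
# `llm` is a hypothetical stand-in for any chat-model call, and the
# rubric below is assumed, not the researchers' actual instrument.

def llm(prompt: str) -> str:
    """Stand-in for a real model call; swap in your client of choice."""
    return f"[model output for: {prompt[:60]}...]"

RUBRIC = [
    "names the speaker's emotion",
    "validates before problem-solving",
    "asks an open question",
    "avoids minimising language ('at least...', 'could be worse')",
]

def practice_round(scenario: str, reply: str) -> str:
    """One cycle: the model roleplays a struggle, the human replies,
    and a coach prompt critiques the reply against the rubric."""
    disclosure = llm(f"Roleplay a person describing this struggle: {scenario}")
    return llm(
        "You are an empathy coach. Give specific, actionable feedback on "
        f"the reply below, judged against these criteria: {RUBRIC}.\n"
        f"Disclosure: {disclosure}\nReply: {reply}"
    )

print(practice_round(
    scenario="being passed over for a promotion",
    reply="Let me know if you need anything.",
))
```

The plumbing is the easy part. The consequential choice is the rubric, which is another way of saying: what the tool optimises for.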
Join the conversation on LinkedIn →