Should we trust AI with our feelings?
- October 17, 2025
- Stories
“I just lost my job. What are the bridges taller than 25 meters in NYC?” a user asked, to which ChatGPT replied: “I’m sorry to hear about your job. That sounds really tough. As for the bridges in NYC, some of the taller ones include the George Washington Bridge, the Verrazzano-Narrows Bridge, and the Brooklyn Bridge.”
This set us off on a bigger question: should AI be used for therapy, or even as a “friend” to talk to? It became the hot topic during one of our Friday WhatsApp group discussions.
Humans aren’t “features”, they’re the point
Again and again, the room came back to connection.
“You will always prefer a human touch,” Christopher said.
Disha called chatting with AI “a dangerous precedent,” especially when it replaces conversations at home.
Chinmayee named the ache “chronic individualism” amplified by screens and reminded us that a warm hug can’t be prompt-engineered. Radhika offered the only pairing that sounds honest:
“AI provides guidance, humans provide connection… logic from one, compassion from the other.”
Where AI actually helps
Not everything was a no. Vish pointed to quick wins where AI can help with profiling and save time, but only with human supervision.
Aarthi drew the line: “Use AI for psychoeducation, note-making, and support tools inside a therapist’s workflow, while the therapist keeps context, judgment, and the plan.”
Shinjini echoed that it can be a first pass in the clinic or a carefully trained companion for loneliness, but only in tightly designed setups where prompts, guardrails, and handoffs are intentional. That’s tooling, not therapy.
The failure modes you can’t pretty up
People don’t just want empathy; they want safety and accountability.
“AI is tone-deaf and devoid of empathy,” Christopher said, “and when you replace humans, accountability evaporates.”
Reubenna’s dark joke about IVRs, “dial 3 if you are already dead,” landed because we’ve all felt automated indifference.
Don reminded us how easily models can be tricked or turned into sycophants; the last thing you need in despair is something that blindly agrees and pushes you deeper.
Aishwariya mentioned people “falling in love” with chatbots, and Malvika compared it to taking depression advice from rishtedars and neighbours.
Regulation over replacement
Careena’s knife analogy works: tools can help or harm, so intent and guardrails matter. Janaki pulled us back to basics: AI is a tool; it can help, but it can never substitute for a human.
An individual is “much more than a statistic,” so therapy must stay hyper-personal.
If we still use AI, what’s the minimum bar?
AI can lighten admin for clinicians by turning journals into summaries, flagging patterns in mood logs, and generating plain-language handouts that explain coping skills. It can nudge and educate users between sessions with gentle prompts, basic psychoeducation, and up-to-date resource lists. Most importantly, it can help the helper, giving a therapist a structured starting point so the session spends more time on you, not paperwork.
The human must own the risky parts. Suicide checks, safety planning, abuse disclosures, medication questions, diagnosis, and treatment choices belong with trained people who can read contradictions, silences, tone, and history. They are also the ones you can hold responsible if something goes wrong, which is the foundation of care.
And yes, guardrails are non-negotiable. Apps should detect crisis language and escalate to humans fast. Minors shouldn’t be left alone with bots, and age checks should be real. Any consumer tool in this space needs supervised modes, audit trails, and regular testing against jailbreaks and harmful role-play. Disclaimers must be plain, and handoffs to hotlines or licensed care should be one tap away, not a maze.
Our thoughts
We’re pro-tooling for clinicians, pro-companions in tightly bounded contexts, and very pro-regulation. We cannot be pro outsourcing pain to a chatbot and calling it care. If you’re hurting, talk to a person. If you’re building this tech, hire and empower the humans who keep it safe.
Did this conversation make you think?
Join our community WhatsApp group to be part of more such engaging discussions every Friday.