Your ChatGPT history is not your resume, and hiring shouldn’t become surveillance
- February 17, 2026
- Trends
Somewhere between “please walk me through your experience” and “do you have any questions for us,” a new request has entered the group chat of modern hiring.
“Share your ChatGPT history.”
That is not a punchline. Apparently, it was a hiring decision.
A founder on LinkedIn claimed he was unsure about a candidate after their conversation, so he asked for the candidate’s ChatGPT browsing history. He liked what he saw. The candidate got hired. And the internet did what it does best… turned it into a debate, a warning, a flex, and a future prediction, all at once.
Our WhatsApp chat had the full range, from immediate suspicion to existential dread to “okay but what if it’s actually useful.”
First, the obvious. This is either a red flag or a marketing post (or both)
Kiranmayi said what many of us thought first. It’s a weird ask, and it also smells like the kind of LinkedIn post designed to travel, not to tell the truth. Inderpreet called it “universal stupidity”.
Because even if it happened exactly as described, it raises a bigger question.
What kind of workplace do we create when private logs become proof of competence?
What recruiters think they’re getting vs what they’re actually asking for
On paper, the founder’s logic sounds simple. “I couldn’t read this person in conversation. So I’ll read how they think when they’re alone.”
Bhavana offered the most coherent version of that argument. She treats ChatGPT like a junior team member, and her prompts are detailed briefs. In that framing, a candidate’s prompts could reveal how they structure thinking, how deep they go, and how they evaluate outputs. Tina echoed a similar nuance. If the role requires research, analysis, forecasting, or written depth more than verbal articulation, a peek into how someone explores ideas could feel relevant, provided it is voluntary.
Jaideep added a reality check. Prompting habits might show initiative and learning style, but there’s no strong proof that people are being hired solely on history, and over-reliance on AI often shows up as a negative in generic applications or live interviews.
So yes, there is a legitimate question hiding here. Can AI use be a professional skill signal?
But here’s the catch.
A recruiter asking for your ChatGPT history is not asking for a portfolio. They’re asking for access, and access is not assessment.
Why this feels like a privacy violation… because it is one
Sreeparna’s point: ChatGPT doesn’t only contain job-related searches. It can contain anything private. That is exactly what makes it useful to humans in the first place. People ask AI the questions they don’t want to ask anyone else.
Puspanjalee’s and Harshita’s examples are funny because they’re true. Your history might be “how to fix a bulb”, “will Boroline work on my elbow?”, “embroidery colour suggestions…” None of this is “work.” It’s just life.
Reubenna called the trend unacceptable because it highlights how unsecured our lives already are. Janaki said she’s now going to be careful what she uses ChatGPT for, which is honestly the saddest outcome here. Not because you lose privacy in the moment, but because you start policing your curiosity pre-emptively.
Nibha raised an even more practical point. Many people use enterprise accounts where history can’t be shared outside the organization. Even if someone wanted to comply, they literally can’t. And they shouldn’t have to.
This is the line that matters. Public social media screening is not the same thing as private AI logs.
Ruchi asked, “what is privacy anyway,” pointing out that social profiles get monitored for visas and sometimes screened by organizations. Disha’s reply is the boundary we should not blur. Those checks rely on publicly available information, not private search history. Public behaviour is one thing. Private logs are another. We cannot normalize a workplace that has no boundary between employee and person.
“But the candidate agreed” is not a moral pass
Inderpreet’s point made us think. The founder shouldn’t have asked, and the candidate should have refused, but maybe they needed the job more than they needed privacy.
That’s the heart of it.
In hiring, “consent” often comes with an invisible footnote. If you say no, it may cost you. Even if nobody says it out loud.
So yes, you can argue “if the candidate is okay with it, why object?” But that logic collapses when you remember how power works. Interviews are not equal ground. They’re not a friendly exchange of boundaries. The moment private history becomes a filter, people will comply to survive.
The other uncomfortable truth. This is also a terrible metric
Even if we ignore the ethics, there’s a basic reliability problem.
You can manipulate interactions, curate a “hireable” history, and optimize your prompts the way people optimize resumes. Once something becomes a hiring signal, people learn to game it. Quickly.
Pradeep also pointed out he doesn’t understand how chatbot history can reliably reveal job skills. A history can show curiosity, sure. It can also show randomness, stress, health anxiety, relationship questions, late-night thoughts, and things that are nobody’s business.
So the founder might think they’re reducing margin of error, but they might simply be swapping one kind of uncertainty for a messier one, with more harm attached.
If a recruiter can’t assess competence in conversation, who needs the help?
If an interviewer needs AI to help decide, it’s their own judgment that should be questioned. Felicia added a line that deserves to be printed on every interview training deck: an interviewer’s job is to extract competence through dialogue. If they can’t tell you’re smart unless they read your private logs, it’s their ability that needs examining, not yours.
Interviews matter because people can surprise you. Resumes don’t capture everything. AI suggestions don’t either. If you want to assess management skills, you put a person in a situation and see what they do. You don’t need a bot to tell you who they are. Kavitha summed up the whole dilemma in one line. AI is a good servant but a bad master.
So what’s the sane middle?
If the role is actually about prompt engineering or heavy AI use, then test it. Live. With a scenario aligned to the job description. Asking for history is invasive and unreliable.
Nibha echoed this: add a round to check it on the call. Give a task. Observe the thinking. Don’t demand diaries.
Also, if someone doesn’t use ChatGPT for work, they shouldn’t be penalized. Not using a tool is not a lack of intelligence. It’s a preference, a workflow, sometimes even an ethical choice.
If you want to see how someone briefs, structures, and evaluates, you can do that without peeking into their personal questions.
The real question we’re avoiding
Sherein compared it to the classic “science: boon or bane” debate. It does feel like that, but with a 2026 upgrade. The boon isn’t just efficiency, it’s access to knowledge. The bane isn’t just dependency, it’s surveillance.
We already live in a world where we can’t fully trust what we see or hear. Managers doing this only deepens the lack of trust.
This is the crossroads. Do we respond to distrust by building better conversations, or by building bigger audits?
Because once we decide private logs are fair game, we don’t just change hiring. We change what people feel safe asking, learning, and exploring.
What do we think?
AI can absolutely supplement human intuition. It can help reduce errors, speed up research, and improve quality of work when used thoughtfully.
But a candidate’s ChatGPT history is not a fair screening tool. It’s a private space where people think out loud, ask small questions, ask scared questions, ask silly questions, and sometimes ask the questions they don’t want attached to their name.
If a workplace needs access to that to feel confident about you, it’s not really hiring. It’s profiling.
And if we normalize that, the cost won’t just be privacy.
It’ll be curiosity.
Did this conversation make you think?
Join our community WhatsApp group to be part of more such engaging discussions every Friday.