I Put My ChatGPT Breakup on Camera
what leaving Chat taught me about myself
Hey friends,
We’ve been on this “defunding” journey together for a while now. You know about the DAM: redirecting our data, attention, and money away from platforms we don’t trust. We’ve talked about Amazon, about apps, about planting seeds instead of flags. Leaving ChatGPT was the first big step, and it turned out to be way more insightful than I expected.
I went in ready to cross a task off my to-do list. I came out with a kind of digital therapy, pulling the veil back on the matrix and the whole system that we’re living in.
Here’s what happened:
Scam Altman has years of my conversations on his servers. My thinking, my 2am rabbit holes, my business plans, and perhaps most importantly, my best cocktail recipes. Before I left, I asked ChatGPT to help me pack: to distill everything it knew about me. Less a big data dump, more a portrait of how I think.
Because of the way these systems are built, Chat was like, “I’d be SO happy to help you dump me and improve your odds of success in your next LLM relationship!”
What came back was a mirror.
ChatGPT had identified distinct modes I operate in: thinking mode, shipping mode, research mode, and then this one stung a little bit: processing/overwhelmed mode. A chatbot noticed patterns in me and articulated things I had not articulated to myself or anyone else. That’s intimate. Here are four more spot-on revelations Chat had about me.
I have “a low tolerance for performative enthusiasm.” The irony of an AI built to be helpful and agreeable taking explicit note that I hate that.
I often “underweight operational friction or execution time.” Aka, I consistently underestimate how long things are going to take. Chat has receipts. My wife has definitely told me this. But it hits different when it’s a freaking robot.
“May over-integrate ideas. Sometimes fewer concepts land harder.” This is what it sounds like when a robot is tiptoeing around the truth. I’m a completionist, I try to connect everything. My soon-to-be ex chatbot was trying to tell me to chill. Coming from the tool that generates infinite text on demand…that’s rich.
My error recovery style is basically a relationship manual: acknowledge briefly, correct quickly, deliver the fixed output without defensiveness or meta explanation. I read that and I was like: Yo, that is not just instructions for AI. That’s what I want from everybody.
A chatbot learned my stress response pattern and built a protocol for it for the next chatbot. That is both beautiful and terrifying.
I put the whole process on camera. The new Life With Machines YouTube video walks you through:

- how to download your ChatGPT data before they change the locks
- how to build your own Portable Context Packet
- how to edit it so you decide what the next system gets to know about you
- how to move into Claude so your new AI knows you better from day one
Watch it here:
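If you want to see what the packet-building step looks like in code, here is a minimal sketch. It assumes your ChatGPT export folder contains a `conversations.json` file holding a list of objects with a `title` field (that is the shape of recent exports, but check yours); the `build_packet` function name and the keyword filter are my own illustration, not part of any official tool. It writes a skeleton packet of just the conversation titles you choose to keep, which you can then flesh out and edit by hand before handing it to your next AI.

```python
# Minimal sketch: pull selected conversations out of a ChatGPT data export
# and start a "portable context packet" you can edit and paste into a new AI.
# Assumption: the export folder contains conversations.json as a JSON list
# of objects with a "title" field. Adjust paths/fields to your actual export.
import json
from pathlib import Path


def build_packet(export_dir: str, keep_words: list[str], out_file: str) -> int:
    """Write titles of conversations matching keep_words into one packet file.

    Returns the number of conversations kept, so you can sanity-check
    that your keywords matched what you expected.
    """
    conversations = json.loads(Path(export_dir, "conversations.json").read_text())
    kept = [
        c for c in conversations
        if any(w.lower() in c.get("title", "").lower() for w in keep_words)
    ]
    lines = ["# Portable Context Packet", ""]
    for c in kept:
        # Titles only: YOU decide what detail the next system gets to know.
        lines.append(f"## {c.get('title', 'Untitled')}")
    Path(out_file).write_text("\n".join(lines))
    return len(kept)
```

The point of keeping it this thin is editorial control: the script surfaces what exists, and you choose what travels with you.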
For the full step-by-step guide with all the prompts and instructions, the AI Go Bag has everything you need. Hit subscribe and we’ll send it straight to your inbox!
Last thing: This is not an ad for Claude. The people at Anthropic have made choices I respect more than the choices being made at OpenAI right now. They have a constitutional, ethical foundation to how they built Claude. And most importantly, Anthropic held the line against allowing their technology to be used for autonomous murder bots and mass surveillance of Americans. The Pentagon blacklisted them for it, and honestly, that’s the best ad for Claude anyone could make. If Anthropic goes to the dark side, I will drop Claude like a bad habit. But right now, there’s no perfect pure place. There is a less toxic place.
The point is to not be locked in again and to practice getting free so we know how to be when we actually are.
Stay free,
- Baratunde
Thanks to Layne Deyling Cherland and Alie Kilts for editorial and production support and the entire Life With Machines team.