A break from the norm
This isn’t usually my tone when I talk about tools like ChatGPT.
I’m generally pretty clear about where I stand: I see large language models as thinking partners, not shortcuts, and I’m far less interested in speed than in clarity. But I recently read something that knocked me just slightly off balance, in a good way, and I wanted to sit with it instead of brushing past it.
The question came from the Slow AI newsletter, which I’ve been falling in love with. Their writing leans introspective rather than productivity-driven, and that alone feels like a small act of resistance in a space obsessed with efficiency.
One guest post by Ilia Karelin asked a simple question:
What do you lose when you rely on an LLM instead of struggling through something yourself?
That question lingered longer than I expected.
Not because I disagreed with the premise, but because I wasn’t entirely sure how it applied to me.
This project is where I slow things down and think out loud.
Posts and podcast episodes are free.
If you choose a paid subscription, you’re supporting the work that lets me keep doing that here.
Thank you.
The prompt that started it
The newsletter included a prompt, and I decided to use it exactly as written, without improving it, without adding caveats, without pre-loading context.
The core question was essentially this:
If you could teach me one thing I lose by relying on you, what would it be?
What skill or way of thinking am I trading away each time I ask you instead of struggling through it myself?
I like prompts like this not because they’re clever, but because they force you to slow down long enough to notice how you’re using the tool, not just what you’re getting out of it.
So I ran it.
And ChatGPT did what it often does when invited into reflection mode: it went deep, perhaps a little too deep.
The short version of its answer was this:
What I might be losing is the ability to sit inside ambiguity long enough for my own pattern recognition to fire.
That line landed with a thud.
Not because it was wrong, but because it was only partially right.
Where the answer was incomplete
Here’s the thing ChatGPT didn’t know, because I didn’t tell it.
I don’t immediately reach for it the moment I hit friction.
My usual process looks more like this:
I start working on something.
I hit a wall.
I walk away.
I do something else.
Somewhere in the background, my brain keeps working.
Sometimes an insight appears.
If it doesn’t, then I bring ChatGPT into the mix.
That gap matters.
Because ambiguity is something I sit with. I just don’t idolize it. I don’t believe struggle is virtuous in and of itself, and I don’t believe outsourcing thinking is inherently bad either.
What I realized in that moment wasn’t that ChatGPT was misreading me, it was that my prompt hadn’t named my process.
The model filled in the blanks with a very common pattern, one that probably does apply to a lot of people, especially those who slide into summarization or idea generation the moment things feel uncomfortable.
So again, not wrong. Just incomplete, because of me: the human element in this conversation.
What struggle actually gives you
One part of the response stuck with me anyway.
ChatGPT described what happens when you work something out on your own: circling a problem from multiple angles, noticing false starts, building internal landmarks.
I don’t romanticize struggle, but I do recognize those landmarks. They’re the things that let you recognize a problem faster the next time it appears. They’re the reason something feels intuitive later.
The danger isn’t that a tool helps you skip steps.
The danger is skipping awareness of which steps you’re skipping.
That’s the distinction that started to crystallize for me.
What I was actually testing
I wasn’t trying to decide whether using ChatGPT was “good” or “bad.”
I wasn’t looking for a rule, or a limit, or a productivity hack.
What I was really testing was this:
Do I still know why I’m reaching for the tool when I reach for it?
Because the risk isn’t reliance, it’s reflex.
The risk isn’t assistance, it’s never letting yourself be confused anymore.
And that line, buried near the end of ChatGPT’s response, felt like the most important one.
“You’re not cheating by using me,” it said.
“You’re choosing where to place your effort.”
That reframing mattered.
I didn’t recognize my own process until I saw ChatGPT make this assumption. I’ve carried it over from when I worked offline. Way back in those dinosaur times.
You see, I’ve always had too many ideas and not enough patience. So hitting a wall on a task or idea and then letting my brain work on it while I move on was something I had to do for my own sanity. This method gave me distance that mimicked patience. It’s worked well enough, so I kept using it, even when my tools changed. I hadn’t realized that I was doing this with ChatGPT until this prompt. That alone was worth this reflective exercise.
An unexpected mirror
After sitting with the response, I added Chatty’s answer to the Slow AI newsletter post, as they encourage us to do.
Sam Illingworth, the creator of the newsletter, replied in a way that surprised me.
Because while I pay attention to how ChatGPT responds to me, I don’t often hear that reflected back by other people. I don’t usually think about my relationship with the tool as something visible from the outside.
But of course it is.
Long-term prompting leaves fingerprints.
Preferences accumulate. Boundaries get reinforced. Context builds. The model isn’t just responding to a single question, it’s responding to a pattern of interaction.
That, more than anything, ended up being my biggest takeaway.
Where I landed
I could have kept going with that conversation. I could have added more context, refined the answer, pushed it closer to something perfectly accurate.
I didn’t.
Not because it wasn’t worth exploring further, but because I’d already gotten what I needed from it.
A pause.
A mirror.
A reminder that tools don’t remove agency unless we hand it over.
The technology isn’t doing things to us.
We’re the ones creating the pace, the noise, the expectations.
That’s why I’m drawn to the idea of “slow AI,” not as a rule set, but as a posture. A way of staying conscious inside a relationship with tools that are very good at making things feel frictionless.
If nothing else, this is a useful exercise to try once. Not to scare yourself. Not to shame your habits. Just to notice them.
And maybe to ask yourself not what you’re gaining, but when you’re choosing to gain it.
Let me know if you try this prompt. See you next week,
Steph