Why AI Struggles to Explain Things to Beginners
(And How to Fix It)
Last week, when I wrote about using ChatGPT to help with my Spanish homework, I mentioned a flaw in Chatty’s conversational style that I can’t stop thinking about. And it goes way beyond language learning. It shows up in almost any beginner-level question, which, let’s be honest, is a lot of how we interact with LLMs.
LLMs like ChatGPT and Claude are very good at language, but they’re surprisingly bad at communicating with beginners. And the more I sit with that, the less small it feels.
More and more of us are turning to LLMs to learn, clarify, and figure things out. And a lot of the time, the response we get sounds clear. It feels polished. It reads well. But somehow, it still does not land. There’s that subtle gap where you understand the words, but not the meaning.
Why Clear Explanations Aren’t Always Beginner-Friendly
The issue isn’t that LLMs aren’t intelligent. It’s that they don’t adjust to the person in front of them the way humans can. They don’t naturally think in terms of learner levels. Instead, they focus on giving you the information you asked for, not on making it actually usable.
And that’s where things can go wrong. It turns into this expert-level information dump really fast. It’s a lot. I can’t even count how many times I’ve had to stop Chatty and say, “one thing at a time, please.” I even put that into my custom instructions, and honestly, it still doesn’t always listen.
So you end up with something that looks complete, but doesn’t actually help. ChatGPT and other LLMs can explain something fully and still be completely out of reach. Beginners don’t just need a clean explanation. They need information they actually understand.
Why Simplifying Language Is a Trained Skill (Not Natural)
Before I ever worked with LLMs, I spent years teaching adults how to use language. I trained through a CELTA program and taught in Taiwan, Vietnam, Malaysia, Japan, and later in U.S. college classrooms. Business English, academic writing, general communication. All of it.
A big part of my work wasn’t just teaching vocabulary or grammar. It was helping people get their personality into a new language. When you first start speaking another language, you’re kind of hidden. You sound flat, limited, not quite like yourself. With enough practice, confidence, and a bit of relaxation, that starts to shift. I focused on helping students inject themselves into a language earlier, not years down the line.
And none of that came naturally to me.
My first year of teaching was basically one long stretch of “what am I doing.” Teaching something concrete is one thing. You can point to it, break it down, explain it. Language doesn’t work like that. It’s messy, layered, and tied to how people think and feel. Once I started paying attention to things like personality and relaxation in the classroom, it got easier. For me and for them. After that, it was a lot of trial and error.
Teaching forced me to slow down in a way that felt unnatural. I had to choose words I knew students had seen before, avoid grammar we hadn’t covered yet, and repeat structures even when it felt repetitive. I was constantly watching faces and listening for small signals that something made sense. Not “can they fill out a grammar chart,” but “can they actually use this in a real moment.”
That kind of control is learned. It’s not automatic.
Why LLMs Can’t Adapt Language to Your Level (Yet)
LLMs don’t read the room. They don’t notice confusion or hesitation, and they can’t adjust in real time. At their core, they’re predicting language, not adapting to a person.
And the data they’re trained on is mostly high-level. Fluent, adult, polished language. So asking them to naturally make something understandable for any level is a stretch. Even with good prompting, it’s not what they’re built to do.
It’s a bit like asking me to use my trunk to carry water from a lake to a bucket. I don’t have a trunk. And LLMs don’t have built-in learner adaptation skills.
Why AI Explanations Drift Into Expert Mode
LLMs often start in the right place. The first sentence is clear. The idea feels simple enough. You’re following along.
And then something subtle shifts. A slightly more advanced term shows up. A new concept gets layered in without explanation. The sentence gets longer. The idea gets more precise, more detailed, more “correct.” Individually, none of these changes are a big deal. But together, they create expert drift.
The explanation slowly moves away from the person asking the question and toward the level of someone who already understands the topic. And that’s the tricky part. It still sounds clear. It still reads well. But it’s no longer aligned with where the learner actually is.
This isn’t just about language. It happens with concepts too. You ask a basic question, and within a few sentences, you’re dealing with assumptions, terminology, and connections that haven’t been built yet.
LLMs aren’t checking for understanding. They’re building on patterns. And those patterns tend to reflect how experts talk, not how beginners learn.
Why Experts Often Overcomplicate Simple Explanations
This isn’t just an AI issue. It’s a human one too. If you’ve ever asked a simple question and gotten a long, complex answer back, you’ve seen it in action.
Experts don’t usually overwhelm on purpose. They’re trying to be helpful. But instead of simplifying, they expand. They add nuance, context, and detail. And the result is often harder to follow, not easier.
I’m seeing this play out in real time here in Spain. I moved a few months ago with very low-level Spanish, and the English level where I am isn’t high. Which makes sense. The main languages here are Spanish and Basque.
When I ask someone to explain how to do something, it’s often rápido, rápido. I tried asking people to slow down, but that usually gets a slightly confused look, so I stopped. What happens instead is that people switch to tech, Google Translate, WhatsApp, anything that helps bridge the gap.
And honestly, I get it. If you don’t know how to simplify, you just don’t know.
Everyone has been incredibly kind about how rough my Spanish is, and I’m grateful there’s a workaround. Without it, daily life would be much harder. But it’s also a daily reminder that this kind of flexibility, adjusting your language to match the person in front of you, is learned. It doesn’t just happen.
Example: What Beginner-Friendly Actually Looks Like
That same pattern shows up really clearly when you compare how something is explained at different levels.
Language Example
If I ask for help with something like the causative passive, I might get:
“The causative passive is used when an agent causes another party to perform an action on their behalf, typically formed with ‘have’ or ‘get’ plus an object and a past participle.”
That’s accurate. But for someone new to this structure, it’s not that helpful.
A more usable version would be:
“Think of this as a ‘rich person’ structure. ‘I had my hair done last week’ or ‘I got my house renovated last month.’ Someone else does it for you. You don’t do it yourself.”
Simpler language, more relatable examples. Now it feels like something you can actually use.
Same idea, different level.
Podcasting Example
Now zoom out of language for a second.
Let’s say someone asks about podcast microphones. A typical explanation might sound like:
“A dynamic microphone is better for untreated rooms because it has lower sensitivity and rejects background noise more effectively than a condenser mic.”
Again, accurate. But if you’re new, that’s a lot packed into one sentence.
A more beginner-friendly version would be:
“Use a dynamic mic if your room isn’t treated. It picks up less background noise and focuses on what’s close to it: your voice.”
That’s it. That’s enough to get started.
Why This Matters for AI Learning and Communication
This isn’t just about language learning. It affects how people use AI to understand new ideas. If explanations don’t match the user’s level, they don’t fully land.
That gap adds up. It’s the difference between exposure and understanding.
How to Get Better Beginner-Level Explanations from AI
The good news is that you’re the one running the conversation. You can build boundaries against information overload directly into your prompts. Instead of asking for something “simple,” be specific about how you want the information delivered.
You can say things like:
“Explain this to me one part at a time. After each part, ask me if I have any questions.”
“That was too much. Break it down into smaller steps, please.”
“Give me a short overview first, then ask if I’m ready for more detail.”
When you do this, the output becomes much more understandable. Your level, your pace, and your context start to shape the conversation.
And that’s the shift. Instead of the LLM deciding how to explain something, you guide it into a format you can actually use.
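If you ever talk to an LLM through its API instead of the chat window, those same boundaries can live in a system prompt, which is more or less what custom instructions are doing for you behind the scenes. Here’s a minimal sketch, assuming the OpenAI Python SDK; the model name and the exact wording of the instructions are just placeholders, not a recommendation:

```python
# A minimal sketch of baking "one part at a time" boundaries into every request.
# Assumes the OpenAI Python SDK; model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BEGINNER_INSTRUCTIONS = (
    "Explain things one part at a time. "
    "After each part, stop and ask if I have any questions. "
    "Use words a beginner already knows, and give a short overview "
    "before offering more detail."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": BEGINNER_INSTRUCTIONS},
        {"role": "user", "content": "What does a dynamic microphone do?"},
    ],
)

print(response.choices[0].message.content)
```

The idea is the same as typing those prompts by hand: you set the pace and the level once, and every answer starts from there instead of from expert mode.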
The Difference Between Clear, Simple, and Understandable
There’s a difference between sounding clear, being clear, and being understandable. LLMs are strong on the first two. The third one takes more effort.
Once you see that difference, it changes how you use AI and how you communicate with people.
Two quick things before you go
If this clicked for you, share it with someone who’s stuck in that “almost gets it” phase.
And if you want more on how AI actually behaves in real communication, subscribe below. That’s what I’m exploring here.

