LLMs Are as Confused as We Are
Disclaimer: The scenario in this post is inspired by real-world experiences from my career, but the specific interactions are fictional. They are not meant to call out any employer or client I’ve worked with. Everything here is a composite of pieces I’ve collected over time, assembled into a realistic story.
Also, I used an LLM to help write the ChatGPT-like assistant messages in this post. Everything else—from the narrative to the analysis—was written by me, with the LLM only assisting with fixing typos and grammar inconsistencies.
When two people with completely different backgrounds (tech and non-tech) try to work with each other, miscommunication is inevitable. This is the essence of requirements elicitation, and if there’s anything I still struggle with, it’s this. But for years, I treated this struggle as a personal flaw rather than accepting it as an inherent part of software development. There was always this hidden expectation of a “good developer”: figure out the gaps, fill them in, and never ask too many questions.
Then the rise of large language models proved that expectation wrong. Even the most capable models, like GPT and Claude, fail when you don’t give them clear requirements. At the end of the day, they still operate under the same principle: garbage in, garbage out. And if these supposedly “intelligent” systems struggle with ambiguity just as much as humans do, then perhaps the problem was never about developers failing to read minds.
So, to understand how we normalized ambiguity as a developer’s responsibility, let’s go back to 2021 and revisit a typical client-developer interaction.
Meet Ram
Welcome to 2021, and meet Ram. He just graduated from a reputable university with a degree in computer science. He completed his internship during his 7th semester and now works full-time at the same company.
His daily routine involves working with CRUD applications.
One random day, his company lands a big project from an important client. This means Ram will be tasked with a project involving complex data modeling, deep hierarchies, and some tricky edge cases.
Now, there are two problems. First, it’s a legacy project with no clear architecture, inherited from a previous developer. The code isn’t idiomatic and is full of outdated dependencies; because it requires Node 10, it won’t even run on Ram’s new M1 MacBook. Second, he’ll be the only person working on it.
For the first few weeks, he’s asked to review the codebase and document it. Then, slowly, the client starts sending new feature requests.
The first request is a simple one: a new button. Ram adds it, completes the task, and now they’re both happy.
A few months later
But Wait
Then the client asks for something bigger: user roles and a new multi-step signup form. Ram reads the request, and his mind starts racing:
“But wait, you can’t easily do that. We have 20,000 users in our database right now, and I haven’t even had time to review everything.”
“We do not even have a role field in our user table. What should be the default role?”
“In our current form, the user enters their email and password only after verifying their OTP. This new multi-step form could break that flow.”
Ram panics. He has to think before replying.
He’s inexperienced, and this is his first time communicating with someone from a completely different background. He was taught never to disappoint a client, but not how. So his first instinct is to agree with everything the client says.
Still, he collects his questions and messages the client, and they set up a meeting.
Ram nods and writes it down, still confused
The client sighs and looks frustrated
The client starts wondering if Ram is the right guy for the job. “Others warned me about this: devs are anti-social, and I always have to spell everything out for them.”
The meeting doesn’t go well and ends with even more confusion for Ram.
Throughout the whole conversation, Ram is asked to use his brain and do whatever makes sense. The client isn’t cooperating because, from his perspective, he’s paying the developer for exactly this expertise. He doesn’t want to think about the details; he expects the developer to read his mind and get on with it.
Frustrated by Ram’s “never-ending questions,” the client decides to try ChatGPT instead. After all, everyone says AI can replace developers now. No more annoying questions, just pure productivity!
Client Meets ChatGPT
He types his request into ChatGPT and, seconds later, gets back a confident, complete-looking answer.
Client: “Wow! No questions, no confusion, just a perfect solution! This is exactly what I needed. I knew I was right—Ram was just overcomplicating things!”
The client excitedly copies the code, pastes it into his cPanel editor, and deploys it to production…
30 minutes later…
Client (realizing): “Wait… this is the EXACT same question Ram asked me! And now I have a production issue because I didn’t think about existing users!”
In another timeline, one where things go slightly differently, the client might text Ram something like:
Client (frustrated): “So ChatGPT ALSO needs all the context? And it gave me code without asking about my database? At least Ram asked the right questions BEFORE I broke production!”
He messages Ram the next morning: “Hey, about those questions you had—let’s talk through them.”
I’ll leave it to you, the readers, to figure out which timeline we’re in.
LLMs Are Human Developers Too (But Worse)
And I mean that sarcastically. The thing that was supposed to replace developers suffers from the same weaknesses they do, only worse:
- Making dangerous assumptions
- Ignoring critical context
- Providing solutions that break in production
- Asking questions AFTER the damage is done
If anything, LLMs amplify these problems. They respond with confidence and code snippets that look perfect, right up until they break production.
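To make that concrete, here’s roughly what the failure in Ram’s story looks like in code. This is a minimal sketch in TypeScript, with an assumed schema and field names, of the kind of confident, context-free answer an LLM hands back when asked to “add user roles”:

```typescript
// Hypothetical sketch: the schema, the field names, and the default
// role are assumptions for illustration, not the client's real system.

interface User {
  id: number;
  email: string;
  role?: "admin" | "member"; // newly added column, so it's undefined
                             // for the 20,000 rows that predate it
}

// The naive check a context-free LLM tends to hand back. It looks
// perfect, and it works for every *new* signup:
function isAdmin(user: User): boolean {
  return user.role === "admin";
}

// But pre-existing rows have no role at all, so every check against
// them quietly misbehaves. Handling them requires exactly the decision
// Ram asked about up front: what should the default role be?
const DEFAULT_ROLE = "member";

function roleOf(user: User): "admin" | "member" {
  return user.role ?? DEFAULT_ROLE; // fall back for pre-existing users
}
```

The point isn’t the one-line fix; it’s that the fallback can’t be written at all without an answer only the client has.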
The Real Problem Was Never Coding
This post wasn’t meant to patronize people from non-technical backgrounds or to paint developers as victims. It was meant to highlight a core issue that has always existed: ambiguity and gaps in translation, not coding. Coding is, at best, the second problem, coming after communication.
It’s ironic that people have built an entire new career out of this: prompt engineering, the practice of carefully describing the problem you want solved.
It’s also funny because, as developers, we’ve been asking for the same things for years:
- Clarity
- Context
- How do you want your software to work?
Whether you’re talking to a human developer or an AI assistant, the answer remains the same: be clear about what you need.