AI Engineering

The Day I Realised AI Has a "Try Hard" Mode (And How to Trigger It)

Context isn't there to make output sound nicer—it tells the model how much scrutiny is required. Discover how adding identity, audience, and stakes to your AI systems unlocks a different tier of reasoning altogether.

AI · Prompt Engineering · Context Design · LLM · AI Systems · Product Development

I'll admit it: I'm obsessed with how AI systems actually behave once they're put to work in the real world.

Not in demos.
Not in toy prompts.
But inside production tools where accuracy, judgement, and accountability actually matter.

Most of my time is spent building and stress-testing AI systems for professional and compliance-heavy environments—the sort where "mostly right" isn't good enough. And recently, I stumbled across something that genuinely stopped me in my tracks.

It wasn't a new model.
It wasn't a clever prompt.
It was a shift in how the model appeared to reason.

And once you notice it, you can't unsee it.

Prompting Is the Shallow End of the Pool

We've all been taught the same story: if you want better AI output, write a better prompt.

So we tweak wording.
We add adjectives.
We politely ask it to "think step by step" and hope for the best.

That works—up to a point.

But what I discovered is that prompting is only the surface layer. The real leverage doesn't come from what you ask an AI to do. It comes from the signals you give it about how seriously the task should be taken.

The Mystery of the Missing Gaps

Here's exactly what happened.

I was testing a mature document-analysis engine we've built for drafting and reviewing professional policies.

Test 1

I gave the AI a high-quality, professional instruction to review a specific document and identify gaps.

It responded quickly and confidently.
It found one clear gap.
The answer was solid. Sensible. Safe.

A comfortable B+.

Test 2

I ran the same document and the same request again.

This time, I added a small amount of additional context—not instructions, not logic, just signals that anchored the work to a real organisation, a defined voice, and a real audience.

Nothing that felt "clever".
Nothing that looked technical.
Mostly things people treat as decorative.
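
For illustration only (the names and wording below are invented, not the exact lines I used), the additions were short signals of this kind:

This policy belongs to Acme Care Ltd, a regulated care provider.
Write in the organisation's established voice: plain, formal, and direct.
The finished document will be read by frontline managers and external auditors, who will rely on it.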

The result was not incremental.

Instead of one gap, the AI surfaced five.

More importantly, the nature of the gaps changed. It started connecting policy completeness, legal exposure, reputational risk, and internal consistency—areas it had barely touched in the first run.

It didn't just write better.

It reasoned differently.

From Drafting to Accountability

That was the moment the penny dropped.

When an AI is given a generic task, it defaults to what I think of as efficient mode. It does a reasonable scan, finds the obvious issues, and produces a plausible answer with minimal scrutiny.

That's fine for drafting.
It's not fine for responsibility.

But when you anchor the task to a real identity—a real organisation, a real audience, a real set of expectations—something changes.

The system behaves less like a drafting assistant and more like a responsible author.

It stops asking "What can I write?"
And starts asking "What's missing, and why does that matter?"

That shift isn't about creativity.
It's about stakes.

Context Is Not Decoration

This is the part I think many AI products are still missing.

Context isn't there to make output sound nicer.
It's there to tell the model how much scrutiny is required.

When systems treat context as first-class—not an afterthought—they unlock a different tier of reasoning altogether. The model doesn't just comply with instructions; it evaluates them.

That's the difference between generating text and supporting decisions.
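
To make that less abstract, here is a minimal sketch of what treating context as first-class can look like, assuming an OpenAI-style chat-message format. The names (TaskContext, build_messages) and the wording of the system message are mine, invented for illustration; they are not the implementation behind the engine described above.

from dataclasses import dataclass

@dataclass
class TaskContext:
    # Context carried as structured data, not pasted in as an afterthought.
    organisation: str   # who the model is working for
    audience: str       # who will read and rely on the output
    stakes: str         # what it costs if the output is wrong

def build_messages(context: TaskContext, task: str) -> list[dict]:
    # Frame the task with identity, audience, and stakes before asking for anything.
    system = (
        f"You are reviewing documents on behalf of {context.organisation}. "
        f"The output will be read by {context.audience}. "
        f"{context.stakes} "
        "Treat omissions and gaps as seriously as errors."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

# Hypothetical usage: the same request, anchored to an identity, an audience, and stakes.
messages = build_messages(
    TaskContext(
        organisation="a regulated UK care provider",
        audience="frontline managers and external auditors",
        stakes="Missed gaps carry legal and reputational consequences.",
    ),
    task="Review the attached data-retention policy and identify gaps.",
)

The exact wording matters far less than the structure: identity, audience, and stakes travel with every request by construction, rather than depending on whoever happens to write the prompt that day.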

Why This Changes How Serious AI Tools Are Built

This realisation fundamentally changed how I think about building AI systems.

We're not just assembling clever wrappers around models anymore. We're designing environments that tell the model who it is working for, who will read the output, and what's at stake if it gets it wrong.

Do that well, and the same model suddenly behaves like a far more capable professional.

Not because it's trying harder—
but because it understands that the work matters.

The Real Lesson

If you want your AI to think harder, don't tell it to "think harder".

Give it a reason to care.

Give it identity.
Give it audience.
Give it stakes.

And if you're building systems that do this consistently—quietly, structurally, and by design—you'll find yourself working with an AI that feels less like a tool and more like a judgement engine.

That's where the real advantage is heading, and it's the layer I'm focused on building.

I'm less interested in what AI can generate on demand, and far more interested in what it does when it understands responsibility.

That, I think, is what will define the next generation of serious AI products.