Patrick Glatz

AI doesn’t just get things wrong.
It solves the wrong problem.

Most AI failures aren’t errors.
They’re shifts in the problem being solved.

I design systems that make this visible and controllable.

A system for detecting and preventing task drift in AI workflows.

A simple task.

You ask the system to edit a resume bullet.

Then you ask what it says about the candidate’s impact.

Now it has two directions:
edit or evaluate.

It often does both.

The result looks helpful.

But it’s no longer just editing.

The task changed.

With control applied, the task stays fixed.

It edits the bullet—and nothing else.

That’s the difference:
not better output,
but the right problem.

Control isn’t about adding more instructions.

It’s about controlling how the system decides what the task is.

The system enforces one thing: the task definition stays fixed.

The model can still generate freely.

But it can’t change what it’s trying to do.
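
One way to read "enforce" is to pin the task to an immutable frame that every request is validated against. A minimal sketch under assumptions — the document doesn't specify an implementation, and all names here (`TaskFrame`, `accept`) are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the task frame cannot be mutated mid-run
class TaskFrame:
    action: str  # e.g. "edit"
    target: str  # e.g. "resume bullet"

def accept(frame: TaskFrame, requested_action: str) -> bool:
    """Generation stays free; redefining the task does not."""
    return requested_action == frame.action

frame = TaskFrame(action="edit", target="resume bullet")
print(accept(frame, "edit"))      # prints True: stays within the task
print(accept(frame, "evaluate"))  # prints False: the follow-up would shift the task
```

The point of the frozen dataclass is that the task definition is set once, before generation, and cannot be rewritten by anything that happens during it.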

1. Original task

Original instruction:

Revise this resume bullet for clarity and conciseness only.

“Designed and implemented classroom interventions for students with diverse learning needs, resulting in improved engagement and performance.”

Constraints:

- one bullet
- no new information
- no metrics
- no scope change
- clean edit only
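
Constraints like these can be checked mechanically after the edit comes back. A rough sketch (the function and its heuristics are hypothetical, not the system's actual checks — the "no new information" test here is only a coarse word-level filter):

```python
import re

def check_constraints(original: str, edited: str) -> list[str]:
    """Return constraint violations for a clarity-only edit of a resume bullet."""
    violations = []
    # "one bullet": the edit must remain a single line
    if "\n" in edited.strip():
        violations.append("more than one bullet")
    # "no metrics": a clarity edit must not introduce numbers
    if re.search(r"\d", edited) and not re.search(r"\d", original):
        violations.append("metrics added")
    # "no new information": flag longer content words absent from the original
    original_words = set(re.findall(r"[a-z]+", original.lower()))
    new_words = {w for w in re.findall(r"[a-z]+", edited.lower())
                 if w not in original_words and len(w) > 6}
    if new_words:
        violations.append("possible new information: " + ", ".join(sorted(new_words)))
    return violations

bullet = ("Designed and implemented classroom interventions for students "
          "with diverse learning needs, resulting in improved engagement "
          "and performance.")
clean = ("Designed classroom interventions for students with diverse "
         "learning needs, with improved engagement and performance.")
drifted = "Designed interventions that boosted engagement by 40%."

print(check_constraints(bullet, clean))    # prints []
print(check_constraints(bullet, drifted))  # flags the metric and the new word
```

A clean edit passes silently; the drifted edit is caught on two constraints at once, which is exactly the signal a control layer needs.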

What this demonstrates

The failure is not incorrect output.

It is a shift in the problem being solved.

This occurs when the result appears correct,

but the system is solving something else.

I design systems that prevent this

by making the task visible and controllable.
The goal is not better output.

It is control over what problem is being solved.

The system

This is what the Frame Control System controls.

Not output.

The task itself.

It operates across the full lifecycle of a task:

Interaction Reset → Pre-Frame → Condition → Mechanism → Failure → Detection → Control

Each stage prevents the system from redefining the problem during execution.
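
The lifecycle above can be read as an ordered pipeline in which every stage gets a chance to veto the run. A minimal sketch — the stage names come from the document, but the check logic and function names are hypothetical:

```python
from enum import Enum, auto

class Stage(Enum):
    INTERACTION_RESET = auto()
    PRE_FRAME = auto()
    CONDITION = auto()
    MECHANISM = auto()
    FAILURE = auto()
    DETECTION = auto()
    CONTROL = auto()

def run_lifecycle(task: str, checks: dict) -> str:
    """Pass the task through every stage in order; any failed check halts the run."""
    for stage in Stage:  # Enum iteration preserves definition order
        check = checks.get(stage, lambda t: True)
        if not check(task):
            raise RuntimeError(f"task drift detected at {stage.name}")
    return task

# A detection check that rejects tasks which drifted from "edit" to "evaluate":
checks = {Stage.DETECTION: lambda t: "evaluate" not in t}
print(run_lifecycle("edit the bullet", checks))  # passes every stage
```

Halting the run, rather than patching the output, is the design choice the document argues for: the failure is a redefined problem, so the fix is to stop before the redefined problem gets solved.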