The Grace Margin
Why the world runs on exceptions and what happens when we automate them away
The pharmacist who knows you fills your prescription two days early because you’ll be traveling.
The police officer who lets you off with a warning when you were going slightly over the limit.
A teacher rounds a grade up because a student’s been going through hell at home.
A landlord waives the late fee because you’ve never missed a payment in four years.
A border agent looks at your slightly-expired document, looks at you, and waves you through.
None of these are policy; most of them technically violate policy. And yet they happen constantly in life. They happen quietly, invisibly, without being recorded or reported.
They are the space between what a system prescribes and what a person actually does; they are the silent artefacts of the fact that the world really runs on exceptions.
I don’t know what the right terminology is for this space, so I will call it the grace margin: the built-in tolerance in human systems where someone can look at the rules, look at the situation, and choose empathetic human judgment over procedure or policy.
The grace margin (n.): the space between what a system prescribes and what a person actually does. It is the moment someone looks at the rules, looks at the situation, and chooses judgment over procedure. It functions as both a compassion mechanism and an error-detection system, and it cannot survive the elimination of human discretion.
Or, as someone once put it rather more elegantly: “I don’t want to break the rules, but I do like them all bendy.”
You have been the beneficiary of the grace margin more than you realize. Every time a process should have gone against you but didn’t, because someone exercised discretion. Every time a deadline was quietly extended, a document accepted despite a minor error, a rule bent just enough to accommodate reality.
Some cultures, some organizations, and some people are better at having a grace margin than others (in my experience, Australians have a slightly larger grace margin than Finns).
You may also have felt its absence: the insurance claim denied on a technicality; the parking fine that would have been waived if you could have spoken to a person instead of a machine; the automated system that couldn’t process your situation because your situation wasn’t one of the available options.
Computer says no.
The grace margin, I would argue, is load-bearing infrastructure of our society, and it’s under threat.
Every time a human inside a system breaks protocol, they are generating a signal: this rule doesn’t fit this situation. The call center agent who keeps waiving a particular fee is telling the organization that the fee is wrong. The nurse who regularly bends triage protocol for a certain type of case is telling the hospital that the algorithm is missing something. Not that they’re listened to enough, but exceptions are how human systems can learn that they’re broken.
Remove the exceptions, and you get a less compassionate system.
You also get a system that has lost an important mechanism for detecting its own failures.
But the grace margin has been shrinking for decades, long before anyone started worrying about AI.
Every time a company introduces a “this call may be recorded for quality and training purposes”, it is eliminating the space where someone could quietly do the right thing off-script.
Every time a process is “optimized” and human touchpoints are removed in favor of digital systems and portals and chatbots, the organization is systematically closing the gaps where judgment once lived.
Every KPI, every compliance framework, every brand guideline that dictates how an employee may (or may not) speak to a customer, these are all mechanisms for compressing the grace margin by design.
The logic is understandable. The grace margin is where compassion lives, but it’s also where corruption lives. The customs official who waves someone through. The loan officer who approves a friend. The regulator who looks the other way. Every argument for human discretion is also an argument for human bias, favoritism, and abuse. Organizations have legitimate reasons to constrain it.
It’s where inefficiency lives, too. The ten-minute conversation that could have been an automated approval. The experienced employee who “wastes” time understanding a situation that a system could have processed in seconds. The manager who lets a team member take an unscheduled day off because they can see something’s wrong, absorbing the productivity hit without logging it. Grace takes time. It is, by every metric we’ve built to measure organizational performance, waste.
The trouble is that organizations can’t separate the grace from the corruption or the inefficiency, so they’ve opted to eliminate all of it. The grace was collateral damage.
Here’s where it gets dark.
Through decades of process optimization, compliance enforcement, and the relentless pursuit of efficiency, many organizations have already compressed the grace margin almost to zero. The humans are still there, but they operate within such tightly constrained parameters that their capacity for independent judgment — for grace — has been effectively removed.
They follow scripts and policy. They apply rules. They escalate to systems that apply more rules. Somewhere, nominally, a human could intervene. But the organizational architecture makes that intervention so difficult, so career-threatening, so structurally discouraged, that it almost never happens.
The employees are still present, but the function they were supposed to serve, the error-detection, the compassion, the messy human judgment that keeps rigid systems from becoming cruel ones, that has already been hollowed out.
The body is there; the soul left years ago.
Which brings us to AI.
Not as a revolutionary force, but as an inheritance.
When people worry about AI replacing humans in organizations, they tend to imagine a dramatic transition: robots in, people out. The reality is far less cinematic and far more disturbing.
AI doesn’t arrive to destroy the grace margin.
It arrives to find the grace margin already gone, and it makes the absence permanent.
A human operating under rigid constraints might still, on a good day, choose to deviate. The possibility exists even when it’s suppressed.
AI doesn’t suppress the possibility. It doesn’t have it. There is no gap between what the system prescribes and what the system does, because the system and the actor are the same entity. The concept of an exception becomes architecturally impossible by default: not because it can’t be designed in, but because no one is incentivized to design it in. You can’t appeal to a process’s better nature, because it doesn’t have one.
And this is not a flaw of AI. This is what we are asking it to do.
When an organization implements AI to “improve efficiency” and “ensure consistency,” what it is actually implementing is the elimination of variance. And within that variance lies the person choosing to be decent when the system doesn’t require it.
Efficiency and grace are, in this framing, fundamentally opposed, and we have made our choice about which one we worship.
I want to be clear about something: this is not an argument against AI.
AI does not have to eliminate the grace margin, provided it is implemented with deliberate, designed space for exceptions: escalation paths to genuine human judgment, override mechanisms that are actually accessible rather than merely theoretical, and resources specifically dedicated to handling those exceptions.
It could, in principle, even expand it: by handling routine cases efficiently, AI could free humans to focus their judgment on exactly the cases that need it most. The pharmacist freed from repetitive dispensing tasks might have more time to notice the patient who needs an accommodation or additional guidance, not less.
But that requires a design philosophy that treats human judgment as a feature and a genuine resource, not an annoying inefficiency to be ironed out. It requires organizations to want grace in their systems.
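To make that design point concrete, here is a minimal sketch, with all names and thresholds hypothetical, of a decision pipeline in which escalation to a human is a budgeted, first-class path rather than an afterthought:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Case:
    id: str
    fits_model: bool       # does this case match a known routine pattern?
    flagged_by_user: bool  # has the person asked for human review?

@dataclass
class Decision:
    case_id: str
    outcome: str           # e.g. "approved", "reviewed", or "queued"

def decide(case: Case,
           auto_rule: Callable[[Case], str],
           human_review: Callable[[Case], str],
           human_capacity: int) -> Tuple[Decision, int]:
    """Route a case: automate the routine, escalate the rest.

    Human review has reserved capacity; when it runs out, cases wait
    for a person instead of being forced through the automated rule.
    """
    if case.fits_model and not case.flagged_by_user:
        # Routine case: handle automatically, capacity untouched.
        return Decision(case.id, auto_rule(case)), human_capacity
    if human_capacity > 0:
        # Edge case or explicit request: spend human attention on it.
        return Decision(case.id, human_review(case)), human_capacity - 1
    # Capacity exhausted: queue rather than mis-handle.
    return Decision(case.id, "queued"), human_capacity
```

The point of the sketch is the third branch: a system designed for grace degrades into a queue for human judgment, not into a forced automated verdict.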
The entire trajectory of corporate optimization for the past forty years has been in the opposite direction: judgment is risk, discretion is liability, exceptions are inefficiency.
AI didn’t create that philosophy, but implemented carelessly, it codifies that philosophy and then ossifies it.
This incoming wave of cruelty won’t be dramatic.
It won’t look like a dystopian film; not at first, anyway.
It will be mundane and bureaucratic and wrapped in the language of fairness. The system will treat everyone “equally,” which means the person whose situation doesn’t fit the model will be treated identically to the person whose situation does. Consistency will be maintained. Efficiency will improve. And the outliers - the complicated situations, the edge cases, the people whose lives don’t fit neatly into available simplified categories - will find that there is no one left to appeal to. No one who can look at the rules, look at you, and choose to do the decent thing.
The world will, on balance, be more efficient. It might even serve people well, on average. But for the outliers, there is cruelty in store.
Everyone, every single one of us, will be an outlier eventually.
When you’re grieving and can’t meet a deadline. When the system has your details wrong. When your situation is the one the algorithm wasn’t trained on.
The grace margin is where someone catches you.
Codify its absence, and there is nothing left but the fall.
If these ideas resonate, I’d recommend three books that, read together, map the architecture of what’s being lost: Dan Davies’ “The Unaccountability Machine” on how organizations systematically eliminate accountability; Josh Bornstein’s “Working for the Brand” on corporate control over human agency and speech; and James C. Scott’s “Seeing Like a State” on what happens when complex, local, human knowledge is overridden by systems that demand legibility above all else. The grace margin lives in what Scott calls mētis; the practical, informal knowledge that no system can fully capture and no algorithm can replicate.