Three months after launch, Sarah sat across from her CFO Mike.
Their "fully autonomous" AI was flagging legitimate suppliers as fraud risks. Missing actual compliance violations. Creating more work, not less.
"You told us this would work," Mike said.
Here's the thing: the AI was working. Just not the way Sarah promised.
Her mistake? Designing AI governance like enterprise software: configure once, go live, done.
But many AI systems are "non-deterministic." They don't calculate answers—they guess at the most statistically likely response. Which means your AI will be wrong. Often. At scale.
The question isn't if your implementation will need adjustment. It's whether you've designed your governance to handle it when it does.
The teams getting AI right understand one truth: you solve complex problems through iteration, not perfect planning.
Startups pivot constantly. Fortune 500 mergers fail spectacularly. Even your kitchen renovation probably didn't go as planned.
This isn't incompetence. It's how humans solve hard problems. Test, learn, adjust, repeat.
And since AI simulates human intelligence? It inherits all our limitations—plus a few new ones. 😅
Tonight, I've got a framework for you. I'll show you exactly how to build AI systems while accounting for reality instead of assuming it doesn’t exist.
Onwards!
📰 In this week’s edition:
🎥 Smarter Sourcing with AI Agents (Webinar)
🔗 My favorite “Must Reads” this week
📋 5 procurement jobs that caught my eye
🏆 The Road to the ProcureTech Cup: Episode 3
🌙 When to Trust AI and When to Step In…
Note: Some of the content listed above is only available in the email version of this newsletter. Don’t miss out! Sign up for free to get the next edition.
👀 In Case You Missed It…
My best LinkedIn post this week:

When to Trust AI and When to Step In
My Version of the “Human-in-the-loop” Matrix
Humans are terrible at solving complex problems on the first try…
Don't believe me? Let's look at the evidence.
Startups get it wrong constantly. They pivot, iterate… Only 50% make it past the 5-year mark… Experimentation is the whole point.
Big businesses fumble strategy all the time. Remember the AOL-Time Warner merger? Hailed as visionary in 2000, unwound by 2009. Or Daimler-Chrysler? Kraft Heinz? The business graveyard is full of "brilliant" first attempts.
Even in your personal life, think about the last time you planned a major home renovation. Did everything go according to plan? Or did you discover that the "perfect" kitchen layout actually blocked the natural flow of your space, requiring adjustments you never anticipated?
This isn't incompetence… It's just how humans work.
Why We Never Get It Right the First Time
Complex problem-solving involves uncertainty, multiple variables, and ambiguous feedback. Our brains simply can't process all the possibilities upfront.
We're also plagued by cognitive biases: overconfidence makes us think our initial solution is better than it is, anchoring locks us into our first idea, and confirmation bias has us seeing only evidence that supports our chosen path.
It’s even worse when you are solving a problem as a group and social dynamics enter the fray (e.g. social status hierarchies…)
The solution? Iteration.
Define the problem, brainstorm solutions, test them, learn, refine, repeat. Every trial produces new information. Every "failure" is actually data.
This is how drugs get discovered. How products get refined. How startups find product-market fit.
Why Does This Matter in the Age of AI?
AI systems inherit human limitations… And add new ones of their own.
Many AI subdomains are non-deterministic, meaning they use statistics to determine the most likely answer based on your inputs.
They don't calculate a precise solution; they make educated guesses.
Remember: AI attempts to *simulate* human intelligence. That means it has blind spots, biases, and weaknesses too. The difference? AI can make mistakes at scale and speed that humans never could.
So what do you do? Throw AI out altogether?
Of course not.
Building Systems That Work With Reality, Not Against It
Implementing AI in procurement, or anywhere else, means accepting the fallibility of these systems and designing your governance around iteration and oversight.
A few weeks ago, there was an interesting discussion on LinkedIn on this exact topic… Kevin Frechette (Fairmarkit’s CEO - See our upcoming webinar) suggested a draft "Human in the Loop" matrix (as in, when to include humans in business processes facilitated by AI).
I asked Kevin if I could *iterate* on his original idea in this newsletter (putting my claims into action 😅). Here’s my version:

In any system promoting automation (AI or not), human involvement should be based on two factors, which form the axes of my matrix:
Risk Level (Low to High). What is the potential harm of the task/process not being done correctly? You would calculate that by combining its likelihood (or probability) and its severity (or negative impact).
Examples of high risk tasks/processes:
Price negotiations for enterprise software licenses
Selecting the right supplier for critical single-source components
Contract compliance decisions that could trigger regulatory penalties
Examples of low risk tasks/processes:
Generating RFP drafts
Routing low-value purchase requests
Formatting supplier communications
Task Type (Deterministic to Non-Deterministic). Does the task have one objectively correct answer, or are there multiple valid approaches? You would assess this by examining whether success can be measured against clear rules or requires judgment and context.
Examples of deterministic tasks/processes:
Checking if an invoice matches a purchase order (three-way match)
Verifying a supplier is on an approved vendor list
Confirming required insurance certificates are current
Examples of non-deterministic tasks/processes:
Recommending the "best" supplier when multiple qualify
Drafting negotiation strategies
Suggesting where to consolidate spend
This gives you 4 quadrants to refer to when deciding how much “human-in-the-loop” you’ll need for any particular task/process:
Low Risk + Deterministic Tasks = Minimal human intervention
Example: Matching invoices to POs for office supplies under $500
This is where AI shines. Clear rules, low stakes. Let systems handle the routine work autonomously. Humans should monitor accuracy rates and review exceptions, but don't slow down the process with unnecessary checkpoints.

Low Risk + Non-Deterministic Tasks = Moderate human oversight
Example: Categorizing miscellaneous spend items into taxonomy categories
Is that $50 purchase "IT Equipment" or "Office Supplies"? There might be valid arguments for multiple categories, but the stakes are minimal. Let AI make the call based on historical patterns and similar purchases. Have humans spot-check categorization accuracy quarterly and refine the model, but don't bottleneck every decision.

High Risk + Deterministic Tasks = Moderate human oversight
Example: Releasing a purchase order for high-value raw materials
There are clear rules: approved supplier, correct specifications, valid budget code, min/max inventory levels. The financial exposure is significant, but because success criteria are objective, AI can do much of the heavy lifting. However, humans should validate outputs and approve before execution because the underlying data might be wrong.

High Risk + Non-Deterministic Tasks = High human involvement
Example: Selecting between three qualified suppliers for a strategic component
There's no single "correct" answer… You're weighing cost, quality, relationship, capacity, and risk. The AI can analyze data and surface insights, but humans must be deeply involved throughout: reviewing the reasoning, challenging assumptions, applying business context, and making the final call. When the stakes are high AND judgment is required, you need maximum human oversight.
The key insight: The higher the stakes and the less clear-cut the answer, the more humans you need in the loop.
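For the more technically inclined readers, the matrix above can be sketched as a tiny decision helper. This is an illustrative sketch only, not a real tool: the risk threshold, labels, and example scores are assumptions I've made up to show the logic of combining the two axes.

```python
def risk_level(likelihood: float, severity: float, threshold: float = 0.5) -> str:
    """Combine likelihood (0-1) and severity (0-1) into a risk label."""
    return "high" if likelihood * severity >= threshold else "low"

def oversight(risk: str, deterministic: bool) -> str:
    """Map the two axes to a human-in-the-loop level."""
    if risk == "low" and deterministic:
        return "minimal"   # let AI run autonomously; review exceptions
    if risk == "high" and not deterministic:
        return "high"      # humans deeply involved throughout
    return "moderate"      # spot-checks, or approval before execution

# Example: three-way match on low-value office supplies (low stakes, clear rules)
print(oversight(risk_level(0.2, 0.1), deterministic=True))   # → minimal
# Example: strategic supplier selection (high stakes, judgment required)
print(oversight(risk_level(0.6, 0.9), deterministic=False))  # → high
```

The point of writing it down this way: the moment you can express the decision as code, you've also documented it, which makes the governance discussion much easier.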
Designing for Iteration
The matrix above is just a starting point.
As your collaboration with an AI system matures for a given process, you can reduce the amount of “Human-in-the-Loop” by making the process more deterministic.
How do you do that?
By creating business rules!
“In X situation, we systematically do Y.”
“In A situation, we systematically do B.”
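To make that concrete: business rules like these can live in a simple lookup that turns a formerly judgment-based step into a deterministic check, with anything unmatched escalating to a human. A hypothetical sketch; the situations and actions below are invented for illustration.

```python
# Hypothetical business rules: each maps a situation to a systematic action,
# converting a non-deterministic judgment call into a deterministic lookup.
BUSINESS_RULES = {
    "invoice_mismatch_under_50": "auto-approve and log for monthly review",
    "new_supplier_request": "route to category manager",
    "contract_value_over_100k": "require legal sign-off",
}

def decide(situation: str) -> str:
    """Return the systematic action, or escalate when no rule covers the case."""
    return BUSINESS_RULES.get(situation, "escalate to human reviewer")

print(decide("new_supplier_request"))    # → route to category manager
print(decide("unusual_payment_terms"))   # → escalate to human reviewer
```

Every escalation that reaches a human is a candidate for a new rule, which is exactly the iteration loop described above.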
The real power of using AI is unlocked when you get serious about figuring out how you organize work and why…
That’s why your AI governance processes should include:
Clear feedback loops. How will you capture when AI gets it wrong?
Regular review cycles. Not just "set and forget"
Documented learnings. What worked, what didn't, and why
Adjustment mechanisms. How quickly can you refine the systems? Are you waiting on an IT ticket that’s been opened for 6 months?!
Psychological safety. Can your team admit when the AI failed without fear? How are you ensuring you’re creating spaces for your team to iterate?
The organizations that succeed with AI aren't those with the most sophisticated models or the biggest budgets. They're the ones who build systems that learn, adapt, and improve over time.
Because just like startups, just like Fortune 500 strategies, just like your kitchen renovation… You're not going to get it right the first time.
And that's okay.
The question isn't whether your AI implementation will need adjustment. It's whether you've designed your governance to handle it when it does.
👀 In Case You Missed It…
The Last 3 Newsletter editions:
1/ Your Colleagues Are Not The Problem...
2/ Procurement Is Becoming the New IT
3/ The Dirty Little Secret Behind Gen AI Functionality Pricing

Just because something is automated doesn't mean it's autonomous.

Need Help Building Your Digital Procurement Roadmap?
I’ve been helping global procurement teams digitalize their processes and practices for 12+ years. Reply to this email to get a conversation started.

Do you have something to share with our 10,000+ readers?
The digitally-minded procurement professionals reading this newsletter are thirsty for knowledge, insights, and solutions. Reply to this email to get in front of them.
See you next week,
P.S. Please rate today’s newsletter.
Your feedback shapes the content of this newsletter.
How did you like today's newsletter?
First time reading? Sign up here