Hi {{FIRST_NAME|readers}},
Something different this week.
We've been working on an authoritative deep dive into the 5 foundational capabilities you need before you deploy AI in procurement.
It's a big piece… 😅
So we're giving you three ways to consume it:
🎥 Live Video. Join me and the Fairmarkit team this Wednesday (March 25th, 11am ET) for a 45-minute executive conversation. Reserve your spot here.
📖 All at once. The full long-form article drops in the next few weeks.
📬🍎 In “5 bites”. One foundation per day over 5 days. This is the experiment. Want in? Click this link and we’ll send you Part 1 tomorrow!
A big thank you to Fairmarkit for being open to trying that third format with us. Content partners who let you try weird stuff are the best kind.
Now. Part two.
Speaking of doing the hard work before you deploy... (Foundations!)
Tonight's Sunday Night Note is about that 6-month implementation timeline in your business case.
It's not a forecast. It's a wish… 🧞‍♂️🌈
And an Oxford researcher with a database of 16,000 projects can prove it.
Onwards!
📰 In this week’s edition:
🎥 Before You Deploy AI: The 5 Capabilities You Need (sponsored)
🌙 How to Predict Your ProcureTech Timeline
📢 This week’s “Must Reads”
🏆 The Road to the ProcureTech Cup: Episode 22
📋 3 procurement jobs that caught our eye
Note: Some of the content listed above is only available in the email version of this newsletter. Don’t miss out! Sign up for free to get the next edition.

How to Predict Your ProcureTech Timeline
"We'll have it fully deployed in 6 months."
I've heard this sentence hundreds of times. From vendors. From system integrators. From internal project leads who genuinely believe it.
And almost every single time, it's wrong.
Not because people are liars. Not because they're incompetent. But because of something far more insidious: they're building their estimate from the bottom up.
They look at their project. Their team. Their requirements. They map out the tasks, assign durations, add a little buffer (maybe 15%, if they're "conservative"), and arrive at a number that feels reasonable.
Then reality happens:
The data migration takes three times longer than planned.
The change management that was supposed to be "straightforward" turns into a 9-month organizational therapy session.
That integration with your ERP? Nobody mentioned it would require a custom middleware layer.
Six months becomes twelve. Twelve becomes eighteen. And somewhere around month fourteen, everyone pretends the original timeline never existed.
Here's the thing: there's a better way to estimate your ProcureTech projects. And it doesn't require a crystal ball. It requires a history book.
The Planning Fallacy Is Eating Your Projects Alive
Oxford professor Bent Flyvbjerg has spent decades studying why projects fail. His database covers more than 16,000 projects across 136 countries. The findings are brutal.
Only 8.5% of major projects come in on budget AND on time.
Want to add "delivered the promised benefits" to those criteria? The figure drops to 0.5%.
Half a percent.
For IT projects specifically, the picture is equally grim. McKinsey and the University of Oxford found that 66% of enterprise software projects over $15M experience cost overruns. The average large IT project exceeds its budget by 45% while delivering 56% less value than predicted. And one in six IT projects is a genuine "black swan," with cost overruns averaging 200%.
These aren't obscure academic data points. This is the documented reality of how projects actually perform versus how we think they'll perform.
Flyvbjerg calls this the "planning fallacy" (a term originally coined by Nobel laureate Daniel Kahneman). It's the systematic tendency to underestimate costs, timelines, and risks while overestimating benefits. And it affects virtually everyone, from governments building high-speed rail to procurement teams deploying a new sourcing platform.
The kicker? The planning fallacy isn't about stupidity. It's about perspective.
The Inside View vs. The Outside View
Here's what happens when you plan a ProcureTech project the "normal" way:
You sit down with your team. You list the tasks. You estimate durations. You factor in the things you know about your specific situation (your data quality, your team's bandwidth, your stakeholder landscape). You build a bottom-up plan.
This is what Kahneman and Flyvbjerg call the "inside view."
It feels rigorous. It feels detailed. It feels like the responsible thing to do.
And it's almost always wrong.
Why? Because the inside view treats your project as unique. It focuses entirely on the specifics of this deployment, this team, this organization. It ignores the empirical base rate of how similar projects have actually performed in the real world.
It's like estimating how long your kitchen renovation will take by carefully planning every step, while ignoring the fact that most kitchen renovations go over schedule.
The "outside view" (or Reference Class Forecasting) flips this on its head.
Instead of starting from the inside out, you start from the outside in:
Step 1: Identify a “reference class”. Find a set of past projects comparable to yours. Deploying a new S2P suite? Look at other S2P deployments. Implementing an Intake & Orchestration platform? Find organizations that have already done it.
Step 2: Gather actual outcome data. Not what those projects were planned to cost or take. What they actually cost. How long they actually took. What benefits they actually delivered (ideally directly from someone who has no incentive to modify the numbers…)
Step 3: Position your project within that distribution. Use the statistical reality of those outcomes to anchor your estimate (instead of bottom-up planning). Then adjust for your specific circumstances, but only at the margins.
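The three steps above can be sketched numerically. Here's a minimal Python illustration, using invented durations for a hypothetical reference class (none of these figures come from real deployments):

```python
import statistics

# Step 1 & 2: a hypothetical reference class of comparable deployments,
# with their ACTUAL durations in months -- illustrative numbers only.
reference_class = [8, 9, 10, 11, 12, 12, 13, 14, 14, 15, 16, 18, 20, 24]

def anchor_estimate(outcomes, inside_view_estimate):
    """Step 3: position a bottom-up estimate within the actual distribution."""
    data = sorted(outcomes)
    n = len(data)
    # Share of comparable projects that finished at or under the estimate.
    share_faster = sum(d <= inside_view_estimate for d in data) / n
    return {
        "share_faster": share_faster,
        # Anchor to the median of real outcomes, not to the plan.
        "median": statistics.median(data),
        # A conservative planning figure: the ~80th-percentile outcome.
        "p80": data[min(n - 1, int(0.8 * n))],
    }

result = anchor_estimate(reference_class, inside_view_estimate=7)
# A 7-month plan beats 0% of this reference class: it's off the distribution.
```

The point isn't the exact quantile math; it's that the anchor comes from observed outcomes, and the inside-view estimate is only a marginal adjustment around it.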
"But Our Project Is Different..."
I can already hear the objections. This feels way too easy…
"Our situation is unique."
"We have a great implementation partner."
"We've done thorough requirements gathering."
"Our vendor has a proven methodology."
Every single project team says this. And Flyvbjerg's database of 16,000+ projects tells us that virtually every single one of them is wrong.
Your project is not as unique as you think. That's not a criticism. It's a statistical fact.
The S2P deployment you're planning? Hundreds of organizations have done it before you. The data migration challenges you'll face? They've been faced (and usually underestimated) before. The change management hurdles? They're predictable, because they follow patterns.
The organizations that get this right aren't the ones with the best plans. They're the ones that take the “outside view” seriously, before they ever build the “inside view”.
What This Looks Like for ProcureTech
Let’s make this practical.
Imagine you're building a business case for a new procurement platform. Your vendor says implementation will take 6 months. Your SI partner agrees. Your internal team maps out the workstreams and lands on 7 months (because they're being "realistic").
Now apply reference class forecasting:
You research (or ask around) how long similar deployments have actually taken at comparable organizations. Not the vendor's best-case reference customer. Not the case study on their website. The actual, unvarnished reality across a meaningful sample (not a single reference call).
You'll probably find that the real distribution looks something like this: the fastest deployments took 8-9 months. The median was 12-14 months. And a meaningful chunk (let's call it 20-30%) took 18+ months.
Your 7-month estimate? It's not at the optimistic end of the distribution. It's off the distribution entirely.
Now you have a choice. You can ignore this data and proceed with the inside view. Or you can anchor your business case to reality.
The teams that anchor to reality don't just get better estimates. They make better decisions. They budget for adequate change management. They phase their rollouts more intelligently. They set expectations with leadership that are actually achievable.
And when the inevitable surprises happen (because they will), they have contingency built in rather than scrambling to explain why everything is behind.
Think Slow, Act Fast
Another core principle pairs perfectly with reference class forecasting: "Think slow, act fast."
Most ProcureTech projects do the opposite. They rush through planning (because everyone's excited, the vendor is eager, and the CFO wants ROI yesterday) and then slog through execution as reality catches up.
The teams that win spend more time upfront. They gather reference class data. They run experiments. They pilot. They de-risk before they commit.
Then, once they start building, they move fast because they've already answered the hard questions.
I've written before about the Pixar parallel: “you build the story before you animate.”
In ProcureTech terms, you prove the concept before you deploy the platform. Reference Class Forecasting is how you pressure-test your story against what's actually happened to everyone else who tried to tell the same one.
Where to Find Your Reference Class
This is where it gets tricky for procurement, because our industry isn't great at sharing real implementation data. Vendor case studies are marketing materials (sorry, not sorry). Analyst reports give ranges but rarely expose the full distribution. Conference presentations cherry-pick success stories with people willing to share them.
So where do you look?
Your own organization's history. How did your last technology deployment actually perform against its original business case? What about a suite for sales (Salesforce), IT (ServiceNow), or another business function? If you've never gone back to check, that's a problem worth fixing.
Peer networks. Talk to procurement leaders off the record who have deployed similar platforms. Not "how did it go?" But "what was your original timeline, and what was the actual timeline?" You'd be surprised how candid people are when you ask the right question.
Implementation partners. A good SI will have data on dozens of comparable deployments. Ask them not for the best case, but for the distribution. If they can't (or won't) share it, that tells you something.
Industry benchmarks. Just scrutinize the data-gathering method. When surveys are in play, people tend to remember better outcomes than what actually happened.
As with anything, consider how the data was gathered, who's presenting it, and what their inherent biases and incentives are. All of this colors the analysis.
The Prisoner's Dilemma in Your RFP
Here's something else nobody talks about: the process you use to gather planning data is structurally designed to give you bad planning data.
Think about what happens during an RFP or vendor evaluation. You ask three vendors and two SIs for implementation timelines and cost estimates. Each one knows they're being compared against the others. Each one knows that whoever comes in with the lowest cost and shortest timeline typically has the best shot at winning.
This is a textbook Prisoner's Dilemma.
Every vendor and SI is incentivized to give you their best-case scenario, not their realistic one. Because if Vendor A says "12 months" and Vendor B says "8 months" (for the same scope), Vendor B wins the conversation. Even if Vendor A was being honest and Vendor B was being... optimistic.
Nobody wants to be the one who says "this will actually take 18 months and cost twice what the other guys quoted." Even if that's the truth. Especially if that's the truth.
This is called "strategic misrepresentation": deliberately understating costs and timelines to get a project approved. It's not always malicious. Sometimes vendors genuinely believe their best-case numbers… But the competitive structure of an RFP makes the problem worse, because it rewards the most aggressive estimate, not the most accurate one.
And here's the uncomfortable part: sometimes you want the best-case numbers too.
Because a business case that says "18 months and $1.2M" doesn't get approved. A business case that says "9 months and $600K" does. You know the real number is probably somewhere north of the vendor's estimate, but if you put the realistic figure in front of your CFO, the project dies before it starts. So you take the optimistic number, add a modest buffer, and tell yourself you'll "manage it tightly."
The vendor is incentivized to understate. You're incentivized to let them. And now everyone is building a plan anchored to a number that nobody actually believes.
This is the planning fallacy and strategic misrepresentation working together, and it's the single biggest reason ProcureTech projects blow up their timelines and budgets. Not because anyone lied. Because the system made honesty expensive.
So what do you do?
Recognize when the Prisoner's Dilemma is present. Any setting where vendors or SIs are competing for your business is a setting where their planning data is structurally biased toward optimism. That doesn't mean their estimates are useless. It means you should treat them as floor estimates, not expected outcomes.
Gather your reference class data in settings where the dilemma doesn't exist. Peer conversations, user group discussions, post-implementation retrospectives, independent research. These are contexts where people have no incentive to shade the numbers. The procurement leader who deployed the same platform two years ago and is telling you over coffee that it took 14 months instead of the promised 8? That's your reference class talking. Listen to it.
Ask vendors the right question. Instead of "how long will this take?", try "across your last 20 deployments of similar scope, what was the range of timelines from fastest to slowest?" If they only want to talk about the fastest one, you have your answer.
The Bottom Line
Reference class forecasting isn't pessimism. It's realism.
And it works. The method was so effective that the UK Government adopted it as mandatory guidance for major transport projects through the Department for Transport. The American Planning Association endorsed it too, recommending that planners never rely solely on conventional "bottom-up" forecasting.
Multiple studies across infrastructure, energy, and IT have confirmed that projects using reference class forecasting produce significantly more accurate cost and schedule estimates than those relying on traditional bottom-up planning alone.
It doesn't mean every project will fail. It means your estimate should reflect how projects like yours have actually performed, not how you hope yours will perform.
The math is simple: if you anchor your plans to the inside view, you're betting against decades of empirical evidence. If you anchor to the outside view, you're giving yourself a fighting chance.
Next time someone tells you their ProcureTech deployment will take 6 months, ask one question:
"Can you show us how long your last 10 comparable deployments took? Not the 10 best ones. The last 10…"
If the answer gets dicey, 6 months isn't an estimate… It's a (MS Project-backed) wish. 😅
👀 In Case You Missed It…
The Last 3 Newsletters:
1/ The Most Important Requirement That’s Not in your RFP
2/ What Is a Conference Room Pilot (CRP) in Procurement Technology?
3/ The Pixar Method for Procurement Transformation

"Plans are worthless, but planning is everything." — Dwight D. Eisenhower

2 other ways we can help this week:
A Skeptic’s Take on AI in Procurement. Our consulting principal recently took the stage at Zip Forward: LLM market overview, vendor motivations, and a framework to chart your own path… Without the hype. Watch the replay.
ProcureTech Unpacked (our inaugural 100% virtual conference) is happening April 22–24, 2026. Three half-days built around two things: making you at least one industry friend who's as serious about procurement transformation as you are, and walking away knowing how to navigate the ProcureTech market without getting played by vendor marketing. That's it. No fluff. 60% sold out.
Get your tickets.
See you next week {{FIRST_NAME|readers}},
— The Pure Procurement Newsletter Team
P.S. Please rate today’s newsletter.
Your feedback shapes the content of this newsletter.
