Approximate Correctly: What Physics Taught Me About Business Decisions
The most useful skill in business isn’t precision. It’s knowing when precision doesn’t matter.
I learned this in a physics lab, not in a McKinsey office with a great view.
Physics, Not Math
I studied physics, not mathematics. The distinction matters more than it might seem.
Mathematics demands logical completeness. If the derivation isn’t sound, the result is wrong. It doesn’t matter how well it describes the world. Math is precise by definition. An answer that works but can’t be fully proved isn’t an answer.
Physics is different. If a model describes what happens, it’s useful, even if it’s incomplete, even if it’s technically “sort of wrong.” The goal isn’t logical perfection — it’s to describe reality well enough to do something with it.
This produces a different relationship with approximation. In physics, approximations aren’t failures of rigor. They’re tools. Considered choices about what the problem actually requires.
The Taylor Expansion
Here’s a concrete example from undergraduate physics that I still think about regularly. You don’t need to follow the math — the logic is what matters.
When you encounter a complicated function, one of the first things you do is expand it as a Taylor series: an infinite sum of terms that, near the point you care about, get smaller as you go. You look at the first-order term. Then the second. Then you check the size of the subsequent terms relative to everything else.
Often the third-order term is negligible, a small correction on top of something much larger. So you drop it. You keep only the linear and quadratic terms. And suddenly the complicated function becomes something simple. Maybe even something you recognize, something you already know how to work with. The approximation doesn’t just get you a usable answer. It unlocks an entire library of known results that now apply to your problem.
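To make the logic visible in symbols (the argument doesn’t depend on any particular function; the pendulum’s sine term is my choice of example):

$$\sin\theta = \theta - \frac{\theta^{3}}{3!} + \frac{\theta^{5}}{5!} - \cdots \approx \theta \quad \text{for small } \theta$$

At θ = 0.1 radians, the dropped cubic term is θ³/6 ≈ 0.00017, about 0.2% of the leading term. Keeping only the first term turns the pendulum’s equation of motion into simple harmonic motion, a problem whose solution is in every textbook. That’s the “library of known results” unlocking.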
The approximation is a feature, not a bug. You’ve traded a small amount of precision for a large amount of leverage.
The Business Translation
Early in my career, moving from physics into consulting, I was deeply uncomfortable with how quickly things were decided.
In the lab, you still went deep. You got detailed results. The process was rigorous. The idea that you’d look at a problem, make a rough call, and move on felt like cutting corners.
Then it clicked.
We weren’t at the end of the calculation. We were at the start of it. The question wasn’t precision. It was direction: what’s the right order of magnitude, and does it point clearly one way? If yes, decide and move. The refinement comes later, if it’s needed at all.
It was the Taylor expansion. We were at the first-order term — far enough from the details that they didn’t matter yet. Act on the approximation. Get closer. Sharpen when it’s worth it.
That shift changed how I approached decisions under uncertainty. Not less rigor — appropriate rigor. Matched to where you actually are in the problem.
The False Precision Trap
The failure mode I see most often in business isn’t sloppy thinking. It’s the opposite.
It’s the Excel model where every cell links to another, assumptions buried so deep nobody can trace a number back to first principles. It looks rigorous. It feels authoritative. The output has three decimal places.
But the inputs are guesses. And compounding uncertain estimates through a complex model doesn’t reduce the uncertainty. It hides it.
The number that comes out the other end feels precise. It isn’t. You’ve just made the assumptions invisible.
A back-of-envelope calculation that states its assumptions clearly and gets the order of magnitude right is more honest, and usually more useful, than a spreadsheet that launders uncertain guesses into precise-looking outputs.
A Real Example
We were setting geographic expansion strategy. The question was where to focus next. Which markets, which product capabilities, which currency infrastructure to build out.
There was pressure to do the full analysis. Understand the entity structures in each market. Size the employee base. Model the addressable market bottom-up. It would have taken weeks and produced a detailed, precise-looking answer.
Instead, we followed the money. Where were transactions actually going? A back-of-envelope look at payment flows showed that the vast majority of cross-border volume was moving between Europe and the US. Not because anyone had designed it that way. But because that’s where our customers’ businesses actually operated.
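The aggregation behind that look is almost trivially simple. Here’s a sketch of its shape in Python, with invented regions and volumes standing in for the real payment data (which isn’t in this post):

```python
from collections import defaultdict

# Hypothetical records: (origin region, destination region, volume in EUR).
# All numbers are made up for illustration.
transactions = [
    ("EU", "US", 420_000_000),
    ("US", "EU", 310_000_000),
    ("EU", "UK", 35_000_000),
    ("UK", "EU", 22_000_000),
    ("EU", "APAC", 18_000_000),
]

# Sum volume per corridor, ignoring direction.
corridors = defaultdict(int)
for origin, dest, volume in transactions:
    corridors[frozenset((origin, dest))] += volume

total = sum(corridors.values())
for corridor, volume in sorted(corridors.items(), key=lambda kv: -kv[1]):
    print(f"{'-'.join(sorted(corridor))}: {volume / total:.0%} of volume")
```

With numbers like these, the Europe–US corridor comes out above 90% and everything else is single digits. One ranking, one dominant share.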
That single number made the decision. If the majority of the money is moving from Europe to the US, unlock USD. Everything else is a rounding error at this stage. We didn’t need to understand entity structures or model the US employee base. We needed to know the order of magnitude. The order of magnitude pointed clearly in one direction.
The detailed analysis would have confirmed the same answer, eventually. But it would also have introduced enough complexity to slow the decision and create room for disagreement about assumptions. The approximation got us to the right call faster, with less noise.
The Sanity Check
When I’m in a meeting and a number doesn’t feel right, I do the same thing.
Step back. Express it in orders of magnitude. Strip away the false precision and ask: is this a tens problem, a hundreds problem, a hundred-thousands problem? Then work backwards from the number to the things you already know, the anchors you’re confident about. Is this consistent with what we know about the market size? The number of users? The cost structure? What would have to be true about the world for this number to be right?

If someone tells you a market is worth €50M and you know there are 500,000 potential customers, that’s €100 per customer per year. Does that feel right given what they’d pay? Often the answer is immediately obvious.
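That check is short enough to write out. A two-line version, using the €50M and 500,000 figures from the example above:

```python
# Back a claimed market size out into a per-customer figure you can judge.
claimed_market_eur = 50_000_000    # the claim: "the market is worth €50M"
potential_customers = 500_000      # the anchor: a count you're confident in

implied_annual_spend = claimed_market_eur / potential_customers
print(f"Implied: €{implied_annual_spend:.0f} per customer per year")
# -> Implied: €100 per customer per year. Would a customer actually pay that?
```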
If the number survives that check, you can trust the order of magnitude even if you can’t trust the exact value. If it doesn’t — if backing it out produces something that contradicts a known anchor — something in the model is wrong regardless of how internally consistent it looks.
This is harder to fake than a spreadsheet. When you reason from first principles through a chain of estimates, every assumption is visible. There’s nowhere to hide a bad one.
When You Need More Precision
The approximation approach has a clear stopping condition: if two options look identical at the order-of-magnitude level, sharpen your estimate.
A 10x difference almost always makes the choice obvious. A 2x difference might matter depending on what else is in play. A 1.1x difference means either the options are essentially equivalent and you should choose on other grounds, or you genuinely need more precision.
Before investing in more precision, ask: would the decision change if the number were three times higher? Three times lower? If the answer is no, you already have enough. Document the assumptions and move.
If the answer is yes, now the detailed work is justified.
Most of the time, the answer is no.
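Here is the three-times test as runnable code, a minimal sketch: the decide function and its €10M threshold are hypothetical stand-ins, not the actual rule from the expansion example.

```python
# Would the decision change if the estimate were 3x higher or 3x lower?
def decide(annual_volume_eur: float) -> str:
    # Hypothetical rule: build the capability only above €10M a year.
    return "build" if annual_volume_eur >= 10_000_000 else "wait"

estimate = 150_000_000  # rough, order-of-magnitude estimate in EUR

outcomes = {decide(estimate * factor) for factor in (1 / 3, 1, 3)}
if len(outcomes) == 1:
    print(f"Robust to a 3x error either way: {outcomes.pop()}")
else:
    print("The call flips inside the 3x band; now sharpen the estimate.")
```

If the set holds one outcome, the estimate is already good enough. If it holds two, that’s exactly the case where the detailed work earns its cost.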
The Real Skill
The leaders I’ve worked with who are best at this have a particular kind of intellectual courage. They’re willing to commit to an approximate answer — out loud, in a meeting, in front of people — when more precision wouldn’t change the call.
That’s not laziness. It’s clarity about what the problem actually requires.
In a culture that rewards false precision, being willing to say “it’s around ten times bigger, and that’s enough to decide” reads as confidence. Because that’s what it is. You’ve understood the problem well enough to know what you need from it.
The real skill isn’t getting to the exact answer. It’s knowing when the approximate one is already enough — and having the confidence to act on it.