In the last twelve months, every marketing measurement company on Earth has announced an AI agent. Open LinkedIn on a Tuesday. There's a new one. Introducing our copilot. Meet our AI assistant. Ask your data anything.
I want to be honest about something before we go further. I run one of those companies. We ship agents. Customers use them.
So when I say that almost all of these agents are about to make measurement vendors a lot of money, and almost all of them are about to make marketing budgets disappear faster than any human ever has, I'm not pointing at the industry from outside it. I'm pointing at us, too.
That's why I'm writing this.
The meeting that broke something
About fourteen months ago, I sat across from a CFO at a company that had been on Lifesight for two years. Eight figures of paid media, annually. Sophisticated team. Our platform told them paid social was running at 3.8x ROAS. The CMO had been showing that number to her board for the better part of two years.
The CFO leaned forward and asked the simplest version of the question.
What happens to revenue if we turn paid social off for a month?
Not modeled. Not estimated. Actually turn it off. What happens.
I'd been in plenty of measurement conversations. I'd never been in one where the question was that direct, asked by the person who actually controlled the budget. I didn't have a good answer. The product I'd spent years selling wasn't built to answer it.
It could tell him which touchpoints his customers hit before they converted. It couldn't tell him whether any of those touchpoints actually caused the conversion.
Andy Grove has a line I think about a lot. Only the paranoid survive. The version I keep coming back to, in my head, is narrower. Only the founders who hear the question they can't answer survive. That meeting was that question.
What I knew. What I had to admit.
Touching is not causing.
Anyone who's worked in measurement for more than a year knows this. A streetlight is present at every car accident. That doesn't make it the cause.
What I had to admit, sitting across from that CFO, was that knowing it didn't matter. Our platform — the product driving most of our revenue — was a touch tracker dressed in better clothing.
We'd been running a parallel project on the side. A small team building incrementality testing. Holdout groups. Geo-based experiments. The kind of work where you actually remove a variable and watch what changes, instead of modeling what an algorithm tells you should have happened.
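If you've never seen one of these tests up close, the core readout is almost embarrassingly simple. Here's a minimal sketch in Python. The numbers are invented, the size adjustment is deliberately naive, and real geo experiments use matched markets and proper statistical inference; none of this is Lifesight's implementation.

```python
# Minimal sketch of a geo holdout readout. Names and numbers are
# illustrative, not Lifesight's production logic.
from dataclasses import dataclass

@dataclass
class GeoCell:
    revenue: float   # revenue in the cell during the test window
    spend: float     # channel spend in the cell during the test window
    baseline: float  # pre-test revenue, to normalize for cell size

def incremental_roas(treatment: GeoCell, holdout: GeoCell) -> float:
    """Estimate causal return by comparing geos with ads on vs. ads off."""
    # Counterfactual: what the treatment geos would have earned with the
    # channel off, inferred from the holdout geos' performance relative
    # to their own pre-test baseline.
    counterfactual = holdout.revenue * (treatment.baseline / holdout.baseline)
    incremental_revenue = treatment.revenue - counterfactual
    return incremental_revenue / treatment.spend

lift = incremental_roas(
    treatment=GeoCell(revenue=1_700_000, spend=250_000, baseline=1_000_000),
    holdout=GeoCell(revenue=660_000, spend=0, baseline=600_000),
)
print(f"incremental ROAS: {lift:.2f}x")  # 2.40x, whatever attribution claims
```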
We started running both systems on the same customer data, same channels, same windows.
The numbers diverged. Not by a little.
In the majority of our customer audits, the highest-spend channel's attributed value was overstated by more than forty percent.
Forty percent isn't a calibration issue. It's a different reality. The CMO who had been budgeting against attribution was living in one; her business was living in the other.
Then the agents started shipping
The second thing happened maybe six weeks after that meeting. I started seeing agent demos from every measurement vendor I knew.
I sat in a few of them. They were impressive. Smart, fast, well-designed. The pattern was always the same. The agent reads your dashboards and recommends what to do.
Here's the part that should have been a relief, and was the opposite.
A junior analyst handed credible-looking but wrong data will optimize confidently in the wrong direction. They won't squint. They won't cross-check. They'll trust the dashboard. The output looks like work.
An AI agent reading a broken attribution dashboard does what that junior analyst does. Faster. With more confidence. At more decisions per day. Without anyone in the loop who knows enough to second-guess the data.
I keep using the GPS line in my head. If you put a GPS in a car that's already on the wrong highway, the GPS makes the drive more efficient. The route recalculates faster. The ETA is more accurate. You arrive in the wrong city sooner.
The industry is shipping GPS systems for cars on the wrong highway, at scale, in 2026. Including, until recently, us.
That was the second realization. The first was that attribution was broken. The second was that an industry running on broken attribution was about to be amplified at machine speed.
Those two things, taken together, told me what we had to do.
The decision
Two paths.
One. Keep selling attribution. Add incrementality as an upsell. Let customers have both dashboards and reconcile the gap themselves. This is what most measurement vendors do, and it's a defensible business. Two products. Two revenue streams. Two answers. Let the buyer decide which one to trust.
Two. Rebuild the platform around a single causal stack. One system. One answer. Walk away from attribution as a standalone product. Eat the revenue hit. Bet the company on the idea that the market would demand what we were building before our cash ran out.
I chose the second one. I want to be honest about what it cost.
We lost customers. Not many. But the ones we lost were the ones who needed attribution numbers to defend existing budgets to their boards. One of them said it to me directly. We need those numbers for our deck. If your platform doesn't produce them, we'll find one that does. I understood. I let them go.
We lost roughly a fifth of our revenue inside two quarters. Those were difficult quarters. There were investor conversations I would prefer not to live through twice.
And we lost the comfort of building inside a known category. No Forrester Wave for what we were doing. No Gartner Magic Quadrant. No RFP template that asked for it. We were building for a demand the market hadn't formally named yet.
Every CEO I respect has had at least one moment like this. Clay Christensen spent his career studying companies that should have made this kind of decision and didn't. The pattern in the ones that didn't is always the same. The existing product was profitable enough that questioning it felt premature, until it was too late.
The thing I had to decide was not whether to bet the company. The thing I had to decide was whether the company I had was the one I wanted in three years.
What we built. And what's still being built.
I'll keep this short, because I wrote a manifesto explaining the architecture, and if you want the long version, you can read that one.
In short. Three measurement methodologies that calibrate each other instead of disagreeing on three different dashboards. Marketing mix modeling for strategic allocation. Incrementality testing as the calibration anchor. Attribution reconciled against the calibrated model — useful for granularity, not used for picking truth. One reconciled answer that the CMO and the CFO can look at without fighting about whose number is real.
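To make "reconciled against the calibrated model" concrete, here is one way the mechanics could work, sketched in Python. The channel names, the numbers, and the shrink-by-average-gap rule for untested channels are my illustrative assumptions, not the actual stack.

```python
# Illustrative reconciliation: anchor attributed numbers to experiment
# results channel by channel. Not the real Lifesight stack.
attributed_roas = {"paid_social": 3.8, "paid_search": 2.1, "ctv": 1.4}

# Calibration anchors from incrementality tests, where they exist.
experiment_roas = {"paid_social": 2.4, "paid_search": 1.9}

def reconcile(attributed: dict[str, float],
              experiments: dict[str, float]) -> dict[str, float]:
    """Channels with a test snap to the tested value. Channels without
    one are shrunk by the average overstatement seen on tested channels,
    instead of being trusted at face value."""
    factors = [experiments[ch] / attributed[ch] for ch in experiments]
    avg_factor = sum(factors) / len(factors)
    return {ch: experiments.get(ch, roas * avg_factor)
            for ch, roas in attributed.items()}

print(reconcile(attributed_roas, experiment_roas))
# {'paid_social': 2.4, 'paid_search': 1.9, 'ctv': ~1.08}
```

One answer per channel, with attribution kept for granularity inside that envelope rather than as a competing source of truth.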
Then we put agents on top. They don't read dashboards. They query the calibrated state directly, and every recommendation they make is auditable back to the experiment that justified it.
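For what "auditable back to the experiment" could look like as a data shape, here is a hypothetical sketch; the field names and schema are mine, not the product's.

```python
# Hypothetical shape of an auditable recommendation. The point is the
# provenance: no recommendation ships without a causal receipt.
from dataclasses import dataclass

@dataclass(frozen=True)
class Recommendation:
    action: str           # e.g. "shift $50k/mo from ctv to paid_search"
    expected_lift: float  # modeled incremental revenue from the change
    experiment_id: str    # the holdout test that justifies the estimate
    model_version: str    # the calibrated model state the agent queried

    def audit_trail(self) -> str:
        return (f"{self.action} | justified by experiment "
                f"{self.experiment_id} under model {self.model_version}")

rec = Recommendation(
    action="shift $50k/mo from ctv to paid_search",
    expected_lift=41_000.0,
    experiment_id="geo-holdout-2025-q3-search",
    model_version="calibrated-2026-01",
)
print(rec.audit_trail())
```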
The line I keep using is the one we put on the manifesto. Every other vendor in this space gave their agents a dashboard to read. We gave ours a causal engine to think with. That's a different product. It's also a different bet about where the industry is going.
Honest caveat. The reconciliation layer is the part that's still being refined. There are edge cases — small experiment samples, channels with low signal, brands at low budget bands — where the calibration doesn't hold confidently. I'd rather build the framework for handling those edge cases in public than ship past them quietly.
What I'm afraid of
Two things.
The first is being early. Markets reward right answers at the right time. They punish right answers at the wrong time roughly the same as they punish wrong answers. We could be right about where the industry is going and wrong about how fast it gets there. CFOs might keep accepting the answers they're getting for another three years. CMOs might prefer the comfortable fiction of attribution dashboards for longer than I think. If that happens, we'll have rebuilt for a market that catches up too late to matter.
The second is being captured by our own narrative. When you stake a company on a thesis, there's a gravity to defending the thesis. I've watched founders I respect become advocates for their position rather than students of the truth, and I don't want to become that. The Decision Desk exists, partly, as a forcing function against it. If I write that incrementality testing has limitations I hadn't accounted for, those words are out in public. If I have to retract or revise, I'll do that in public too.
What I'm watching
This is the close I'd like to make a habit. Three things I'm watching over the next year.
One. Whether CFOs keep escalating the question they're already asking. Early signal says yes. Every measurement deal in our pipeline now has finance involvement at a depth it didn't have eighteen months ago. If that trend holds, the buying committee for measurement looks fundamentally different by 2027.
Two. Whether competitors converge on calibrated methodologies, or stay comfortable inside the current categories. Some vendors are starting to use the word causal without doing the work that would make it true. I expect a few more "AI agent on top of the same broken stack" launches before the market starts pricing this dimension differently.
Three. Whether we keep the discipline ourselves. Growth pressure is real. The temptation to relax the standard we're publishing — to ship a feature that lets the agent reason over reported numbers when the causal answer is missing — exists every quarter. The Decision Desk is, partly, my way of putting that standard on the record so I have to defend it.
If you're a CMO or CFO facing the question I faced fourteen months ago, I'd like to hear from you. Not to demo. To compare notes. The whole point of doing this in public is to find the people who've been sitting with the same questions, and to learn what they're seeing that I'm not.
Thanks for reading the first one.
Tobin Thomas
Co-Founder & CEO, Lifesight
