

Your Automation Strategy Has Been Making Decisions Without You

By Elizabeth Stewart, Senior Marketing Automation Specialist
A marketing automation platform at a typical enterprise evaluates rules in the millions per day. Somewhere between a few hundred and a few thousand of those evaluations result in a customer being contacted, routed, offered something, or withheld from something. Each one is a decision the business is making about a person. 

Almost none of those decisions are being made by anyone.  

They were made, in the past tense, by people who in many cases no longer work at the company, under business conditions that likely no longer apply, inside tooling that has probably been migrated at least once since. The rules are still firing, but the business has moved on.

This is the part of automation strategy that the last ten years of conversation has skipped. We didn't automate work away. We automated new work in, and a lot of organizations haven't stopped to reckon with that yet. The work moved out of the queue and into a governance function that almost no organization has formally staffed, which is exactly when automation creates more work instead of less. It shows up in flat retention numbers, in softening expansion, and in automation ROI calculations that stopped telling a complete story several years ago. 

The reckoning is overdue. 


The Efficiency Case Was Sound (and Narrowly Defined)

To be clear, the people who built the current automation stack were right to build it the way they did. 

Boards wanted margin improvement. Leadership wanted leaner operations. Richmond Fed research found that over 85% of firms cited cost savings as the primary motivator for automation, dwarfing every other reason. The decisions delivered on what they promised, and they were made by capable teams working with the information and tools available at the time. The business case was airtight against the question it was asked. 

The question was just too small. 

Efficiency asks whether you can do the same work with fewer resources. Capacity asks whether you have room to do better work. Those are related numbers, but they are not the same number. Almost every automation program at enterprise scale was funded against the first and evaluated against a shadow of the second. The gap between those two definitions is where automation strategy tends to disintegrate over time, and it is where a lot of mature programs have arrived right now.

This is not a failure of the original thinking. It is the predictable maturity curve of any capability that starts as an efficiency play and becomes load-bearing for the customer experience. The work ahead is recognizing which stage your organization is actually in and adjusting the operating model to match.

Automated Decisions Nobody Is Revisiting

The millions of rule evaluations described earlier are not abstractions. Each one represents a judgment call about how the business should treat a customer under specific conditions. Who gets contacted. When. Under what assumed intent. With what offer. That is automation decision making, and at enterprise scale, it is a strategic function (whether or not the company has recognized it as one). 

The judgment calls embedded in those flows were provisional when they were built. They were based on what the business looked like that quarter, under the deadline pressure the team was operating under, with the information they had. 

Then the flows kept running. And running. 

A year passes and the customer base shifts a bit. Two years pass and the product expands into a segment that didn't exist when the logic was written. Three years pass and the team that built the flow has moved on, and the institutional memory of why that particular rule fires on day 47 instead of day 30 leaves the building with them. The logic stays long after the conditions it was built for don't. 

There is no standard equivalent to a financial audit for this decision layer. When Enron and WorldCom collapsed, Congress passed Sarbanes-Oxley and the financial decisions inside every public company in the country became subject to annual audit and real consequences for getting it wrong. The general ledger got a formal accountability regime. The decision ledger that automation built alongside it never did, which is why the controls around automated customer decisions at a lot of enterprises would not pass the standards we apply to any other category of consequential business choice. 

The WOW and Optimizely AI Experimentation Series makes this point directly: organizations already automate routine tasks at scale, yet few actually measure whether those automations still create value. 

Calling this negligence gets the causality wrong. The accountability dissolved gradually in the normal course of business, which is what makes it difficult to address.  

  • Every reorganization moves the people who knew why a rule was written.
  • Every tool migration makes the prior logic harder to read.
  • Every quarter where nothing visibly breaks reinforces the assumption that the flow is fine, until the lagged signal arrives in the form of a retention number that leadership cannot trace back to any specific cause.

Automation ROI Measures What Automation Costs, Not What It Costs You

If you've sat through a quarterly review in the last five years, you have seen the slide. Automation saved X this quarter. Headcount avoided, hours recovered, cost-per-interaction down. The number is defensible and the savings are real. 

What is missing from the slide is the other ledger. There is no quarterly figure for the customer who would have churned if the conversation had gone differently, and no line for the trust that breaks down when a rule fires on the wrong segment. The experience ledger was never formally built, so organizations built their automation ROI calculations against the figure they could report on. 

The gap between those two ledgers widens over years and surfaces as things like flat NRR or softening retention. Those are genuinely lagged signals and arrive quarters after the decisions that caused them, so they are difficult to attribute back to any specific flow or rule without an audit function. Automation ROI calculations don’t account for experience degradation, not because anyone was hiding it, but because the reporting infrastructure wasn't built to catch it. 
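As a rough illustration, the two ledgers can be put side by side. Every number below is hypothetical, chosen only to show the shape of the calculation, not to serve as a benchmark:

```python
# Hypothetical figures -- illustrative only, not benchmarks.
reported_savings = 1_200_000        # annual automation savings on the cost ledger

customers_mishandled = 400          # interactions a stale rule handled badly
churn_rate_uplift = 0.05            # added churn probability per mishandled customer
avg_customer_ltv = 30_000           # lifetime value of a retained customer

# The experience ledger: expected revenue lost to automation-driven churn.
experience_cost = customers_mishandled * churn_rate_uplift * avg_customer_ltv

# The figure the quarterly slide never shows.
net_return = reported_savings - experience_cost

print(f"Reported savings: ${reported_savings:,.0f}")
print(f"Experience cost:  ${experience_cost:,.0f}")
print(f"Net return:       ${net_return:,.0f}")
```

With these made-up inputs, half the reported savings disappear once the experience ledger is counted, and because churn lags the mishandled interaction by quarters, nothing on the cost ledger flags it.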

Forrester principal analyst Max Ball confirmed this pattern in a Wall Street Journal report, observing that chatbot adoption has been hampered because organizations looked at cost savings without looking at the customer experience those savings produced. That same structural pattern shows up wherever automation has been deployed at scale, well beyond chat. It is a governance problem and governance problems require governance responses. 

The experience issues showing up in enterprise dashboards right now are not a sign that automation was the wrong strategy. They are the lagged result of a measurement architecture that was optimized for one kind of accountability and not the other, and the architecture will keep producing the same result until leaders decide the experience ledger is worth formally building and reporting on. 

Why Your Marketing Automation Strategy Keeps Creating More Work

When performance goes flat, the reflex is to authorize more automation. Add another flow, invest in another platform, extend the existing logic further down the funnel. The instinct is logical because action is what gets rewarded and more automation feels like the kind of action that compounds. 

Unfortunately, the instinct is wrong. 

What customers need from the automation layer is better judgment about when automation should fire and when it shouldn't. Adding flows deepens the problem by expanding the operational surface area that has to be monitored, audited, and eventually rewritten when the business changes again. (And it will, of course.) A larger automation footprint operating with the same judgment layer is a larger liability, and it will compound faster than the efficiency gains it produces. 


Machines excel at execution: monitoring, pattern detection, prioritization and acting within defined constraints. Humans remain essential for judgment: setting intent, defining values, managing risk and owning outcomes when edge cases appear.

SALIM GHEEWALLA, FOUNDER & CEO OF UTILLTISE

A mature marketing automation strategy treats the boundary between automated decision making and human judgment as a design choice that someone actively owns, revisits on a schedule, and assigns to a specific person with the authority to change it. The tooling does not get to decide where that line sits by default. The organization does, deliberately, as a matter of policy. That policy is easier to enforce inside a composable architecture, where individual decisions can be audited and updated without disturbing the whole system, and genuinely hard to enforce inside a rigid one. 

If automation represents a meaningful share of how the company interacts with its customers, and no one holds a title that reflects accountability for the decisions automation is making, the org chart is out of date. The tooling question is a downstream question. The accountability question is the real one, and it is the one that determines whether the next generation of automation investment produces returns or deepens the problem. 

In Closing

The first move is smaller than it sounds, yet harder than it looks. Pull a list of the ten automated flows that touch the most customers each quarter. For each one, write down the name of the person currently accountable for the decisions that flow is making. Not the platform. Not the team that owns the platform. The name of a human being whose judgment is running at scale every time that flow evaluates. 

At the average enterprise, that exercise produces one or two names and eight or nine blank lines. Those blank lines are the work. They are the places where the organization has been relying on decisions nobody is currently responsible for, and they are the reason automation ROI calculations have stopped predicting what happens downstream of them.

What comes next depends on where you are in the journey. For some organizations, it begins with understanding how human-centered automation and intelligent systems work together. For others it's about building automation that measures and corrects itself over time. And for enterprises further along, the work is treating automation as a strategy rather than a shortcut. 

The exercise is the same regardless of maturity. Ten flows. Ten names. Whatever is left over is your automation strategy making decisions without you. 
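The exercise above is simple enough to run in a spreadsheet, but a sketch makes the output concrete. The flow names and owners here are hypothetical stand-ins, not a real inventory:

```python
# Sketch of the ten-flows audit. All flow names and owners are hypothetical.
# None marks a flow with no accountable human -- a "blank line" in the exercise.
flows = {
    "welcome_series": "J. Alvarez",
    "renewal_reminder_day_47": None,
    "winback_offer": None,
    "upsell_trigger": "P. Nakamura",
    "dormancy_suppression": None,
}

# The blank lines are the work: every flow with no named owner.
unowned = [name for name, owner in flows.items() if owner is None]

print(f"{len(unowned)} of {len(flows)} high-volume flows have no accountable owner:")
for name in unowned:
    print(f"  - {name}")
```

The output is the governance backlog: each unowned flow is a judgment call running at scale with no one authorized to revisit it.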

FAQs

What is automation decision making and why does it matter? 

Automation decision making refers to the judgment calls about who a business contacts, when, under what conditions, and with what assumed intent, all embedded into automated workflows. It matters because those decisions run at scale and continue running long after the business conditions that justified them have changed. Without an ownership model, automation decision making accumulates into a governance gap that only surfaces through lagged experience metrics. 

How does automation affect cost-to-serve over time? 

Automation reduces direct cost-to-serve in the short term, which is why it gets funded. Over a longer horizon, the experience cost of poorly maintained flows can offset those cost-to-serve reduction gains through degraded retention and lower NRR, costs that show up on a different line of the P&L and on a delayed schedule. The savings are real and so are the offsetting costs. A lot of enterprises do not report both numbers on the same page, which is how the gap persists. 

When does automation create more work instead of less? 

When flows are deployed faster than they are governed, exceptions accumulate. Each exception generates manual remediation and internal fixes that were not in the original business case, alongside customer recovery on the interactions the flow mishandled. At sufficient scale, the remediation work can exceed the labor the automation was meant to remove, which is the point at which automation stops functioning as an efficiency play and starts functioning as an operational tax. That is when automation creates more work instead of less, and the answer is not another platform. 

What does a mature automation strategy actually look like? 

A mature automation strategy treats the boundary between automated decisions and human judgment as a design choice that leadership revisits on a defined schedule. Experience outcomes get measured alongside cost outcomes and reported to the same people at the same cadence. A named owner holds authority over the logic inside each consequential flow, with a mandate to change or retire it. And when the business changes, that owner is accountable for changing the automation with it, before the next reporting cycle makes the misalignment someone else's problem. 
