
DESIGN & UX

The Most Dangerous Assumption in CX: That Your Customers Are Still the Same 

By Joe Mallek, Senior Partner

Enterprise organizations spend years building toward the customer. Research programs, experience frameworks, personalization infrastructure, and digital transformation initiatives. All of it organized around a single animating belief: we know who our customer is and what they need from us.

For most organizations, that belief was genuinely earned through serious research, real investment, and results that confirmed the direction. The conversation does not revolve around what to do. It revolves around whether the organization is ready to do it.

What is often left out of the program is a reason to keep asking a critical question: is our customer still the customer we think they are?

CX Frameworks Don't Just Describe the Customer. They Become the Customer.

When a CX program matures, a quiet but potentially problematic pattern can appear in how new work is initiated. Briefs arrive with the customer already defined: personas attached as background, segmentation models referenced as given. The team gets up to speed and gets to work. The problem we often don't consider is that the investigation that produced those foundational customer artifacts was done a long time ago, perhaps by someone who no longer works at the organization. Despite that, inside the organization, they still carry the authority of current truth.

This is what it looks like when customer experience frameworks stop being tools and start being assumptions. The original intent was to understand the customer well enough to design for them. And at the time, the research was serious, the findings were grounded, and the frameworks earned their place. The unintended downstream outcome is an organization that has replaced the customer with a model of one, and structured its entire operation around keeping that model intact and happy.

Nobody decided this would happen. It happens because frameworks that work get relied upon, and frameworks that get relied upon stop getting questioned. The customer in the brief ... is the customer in the system ... is the customer in the roadmap. At some point that customer exists more vividly inside the organization than they do in the real world.

The CX Framework Worked ... That's the Problem.

Marketing plans against the personas. Product prioritizes against the journey. Experience design is organized around the service model. The service team is trained and measured against the lifecycle stages.

The technical infrastructure runs the same way. The segmentation logic is in the marketing automation platform. The persona definitions are in the personalization engine. The journey architecture is in the experimentation roadmap. Lifecycle marketing programs are built around the behavioral paths the framework anticipated.

At that point every team is optimizing inside the model, competing to perform better within its boundaries, not questioning whether the boundaries are still in the right place. That work produces real results, which is exactly what makes the model so hard to revisit.

Which may explain why Nielsen Norman Group found that only 46% of organizations update their customer personas every one to four years, and another 26% only every five years or not at all. Every team ends up designing for the same customer simultaneously, across every touchpoint, using systems configured around the same original picture.

The organization ends up designing for the customer it had defined, not the customer who kept evolving after the research was done. The older that picture gets, the greater the chance that the organization's efforts are off target.


The Customer Your Organization Knows is Already Someone Else

Five years is a long time in a customer's life. The competitive landscape they navigate has changed. The tools they use to research, compare, and decide have changed. Their expectations of what a good experience feels like have been reset by companies outside your category who happened to get there first. The customer who sat across from your research team when the personas were built has lived through all of it. The persona hasn't.

The Trap of the Validated Model

Consider a large regional bank that made a serious investment in its digital customer experience five years ago. The research was rigorous — ethnographic interviews, behavioral data analysis, journey mapping across acquisition, onboarding, and retention. The team landed on two well-defined customer segments: a relationship-oriented customer who valued personal guidance and trusted the bank's counsel on financial decisions, and a self-directed customer who wanted digital tools, transparency, and speed.

The bank built around both. The digital platform was architected to serve the self-directed customer. The relationship model — branch experience, advisor touchpoints, lifecycle communications — was designed for the relationship-oriented one. It worked. Engagement metrics were strong across both segments, the model validated itself quarter after quarter, and leadership had every reason to trust what they'd built.

What the model didn’t have was a category for what happened to both segments over the next five years. The relationship-oriented customer didn’t abandon the desire for guidance, but they started arriving at conversations already informed, having researched products independently, compared rates on aggregator platforms, and read peer reviews before speaking to anyone at the bank. The self-directed customer hadn’t become more loyal to digital; they had become more skeptical, quicker to comparison shop, and less responsive to the personalization logic the platform was built around.

Because neither shift announced itself in the data, the bank saw no reason to change course. The five-year-old personas remained anchored in every planning document, and the original segmentation logic continued to power the personalization engine. Quarterly reviews remained green across the board, providing no incentive to question the behavioral assumptions that were quietly eroding beneath the surface.

Yet, there was one number that didn't fit — a quiet signal that the engine was starting to misfire. Personalization response rates in the highest-value segment had been drifting downward for three quarters. It wasn't a catastrophe, just a slow, persistent leak.
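A signal like this is mechanically easy to check for. The sketch below is a minimal, hypothetical illustration of the idea — a function that flags a metric falling in each of the last few periods; the quarterly rates and the three-quarter window are invented for illustration:

```python
def drifting_down(rates, quarters=3):
    """True if the metric fell in each of the last `quarters` periods."""
    recent = rates[-(quarters + 1):]
    return len(recent) == quarters + 1 and all(
        later < earlier for earlier, later in zip(recent, recent[1:])
    )

# Illustrative personalization response rates by quarter (percent).
# Not a catastrophe -- just a slow, persistent leak.
response_rates = [6.1, 6.2, 6.0, 5.7, 5.4]
print(drifting_down(response_rates))  # -> True
```

A check this simple will never run, of course, unless someone decides the number is worth watching in the first place — which is exactly the decision a "green across the board" dashboard discourages.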

The platform was functioning perfectly and the logic remained sound, yet the very customers it was designed for had stopped responding to it. The system was still running exactly as it was told to, but the connection to the human on the other side was gone. And because the system wasn’t "broken," nobody had thought to ask why.


At some point, the customer exists more vividly inside the organization than they do in the real world.


The System Doesn't Question What You’ve Fed It

Customer experience initiatives are typically evaluated on what they produce, be it the platform, the program, or the results. The model underneath rarely gets the same scrutiny. 

When new behavioral data comes in, it gets routed through the frameworks already in place. Customers whose behavior doesn't fit an established segment get absorbed into the nearest available one. The system keeps reporting, the team keeps optimizing, and at no point does the infrastructure pause to ask whether the categories it's sorting people into still reflect who those people actually are. 
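The absorption mechanic can be sketched in a few lines. This is a hypothetical nearest-centroid assigner — the segment names and features are invented for illustration — and the point is structural: it always returns *some* segment, however poorly the customer fits, so out-of-model customers never surface as such:

```python
import math

# Hypothetical segment centroids from the original research:
# features are (digital_logins_per_month, branch_visits_per_month).
SEGMENTS = {
    "self_directed": (20.0, 0.5),
    "relationship_oriented": (4.0, 3.0),
}

def assign_segment(customer):
    """Return the nearest segment -- always, even when the fit is poor."""
    return min(SEGMENTS, key=lambda name: math.dist(customer, SEGMENTS[name]))

# A customer who researches heavily online but still values branch
# conversations fits neither segment well. The system files them anyway,
# and no report ever records how poor the fit was.
hybrid = (18.0, 2.8)
print(assign_segment(hybrid))
```

A system built this way would need an explicit distance threshold (or an "unclassified" bucket) before it could even report that the categories are losing their grip.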

The data looks like insight. It is the organization's existing assumptions about the customer, expressed back to the team in the language of performance metrics. 

Execution speed determines the scale of the problem. Organizations that have built real execution capability — automated orchestration, real-time personalization, rapid experimentation — are reaching more customers, across more touchpoints, more often. Every one of those interactions is running on the same model. And a machine that runs well produces results that look like validation, which is precisely what makes the model underneath it so difficult to examine. 

This is where CX maturity shifts from a question about execution to a question about the operating model behind it. It’s in the segmentation logic, the data interpretation layer, the experimentation framework, the decision systems shaping every experience the customer has. 

The Organizations That Get This Right Never Stop Asking the Question

The most mature CX organizations apply the same rigor to the customer model after it is built as they did when building it. Personas and journey frameworks are treated as working interpretations and are considered current until the evidence says otherwise, and someone is always assigned to look for that evidence. 

Behavioral signals get examined for what they reveal about where the model is losing accuracy. Leadership revisits the customer narrative on a deliberate schedule. When data surfaces patterns that don't fit established segments, the question the organization asks is what the model is missing. 

In practice, the CX programs that stay relevant are the ones running two workstreams simultaneously: one focused on improving the experiences already built, and one focused on actively looking for evidence that the premise behind those experiences needs updating. The second workstream is the harder one to sustain because acting on it has real consequences — rethinking segmentation structures, revisiting journey assumptions, adjusting the metrics the team is held to. Understandably, this is sometimes tough to swallow. The thought of revising things that were hard-won in the first place becomes a reason to defer the conversation. But the organizations that recognize this as a necessary cost of keeping their CX stack relevant are at a distinct advantage.

Cross-industry partners accelerate that second workstream in ways internal teams structurally cannot. An agency working across industries and client environments accumulates a view of how customer behavior is shifting across markets that no single organization's data can produce. When a behavioral shift is underway broadly, that outside perspective can surface it earlier. For organizations serious about keeping their customer model current, that pattern recognition is where external partnership earns its keep.

A customer model treated as a living document — revisable, regularly interrogated, never fully settled — stays useful in a way that a finished one never can. The organization that builds that discipline keeps designing for the people actually in front of them. 

The Most Dangerous Assumption in CX

CX programs produce clarity about customers, and that clarity becomes the organization's internal model of who the customer is, what they want, and how they behave. Systems align around it, teams get measured against it, and investment decisions get made in service of it. All of which is exactly what a well-built CX program is supposed to produce. 

The model was built at a specific moment in time, against a specific version of the customer, and the organization has been executing against it ever since. The work has continued, the systems have scaled, the results have looked healthy enough that nobody scheduled a conversation to ask whether the foundation still holds. 

The most dangerous assumption in CX is a simple one: that the customer your organization defined is still the customer you have. 

Strategies that win. Outcomes that wow.