Customer Retention

Stop doing these 3 things now to improve your customer retention strategy

By Joel Passen
January 16, 2023
5 min read

Customer retention is the ultimate force multiplier in any B2B SaaS business. It involves building strong relationships with existing customers, ensuring they stay loyal to your brand, helping them use more of your product or service, and becoming advocates who bring in more customers through word of mouth. By investing in customer retention and ultimately increasing your customers' lifetime value (LTV), SaaS businesses unlock tremendous potential for growth and profitability.

Sometimes the SaaS world seems like alphabet soup. Lots of acronyms. As a reminder, Lifetime Value (LTV) is an essential metric for SaaS businesses. It measures the profitability of a customer over the entire lifetime of their contract or subscription, and it indicates how much revenue you can expect from a customer at any given point in that relationship. 

Calculate LTV

Here’s how I suggest calculating LTV. First, determine the average revenue per user (ARPU) by dividing total revenue by the number of users over a specific timeframe. Then take the reciprocal of the customer churn rate for that same period (1 ÷ churn rate); this estimates how long the average customer’s subscription lasts. Multiply ARPU by that estimated lifespan to get your lifetime value. Doing so will allow you to accurately measure customer loyalty and help you devise meaningful customer retention strategies. 
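As a quick sketch, here is that arithmetic in Python. The numbers are entirely hypothetical, chosen only to make the steps concrete:

```python
# Illustrative LTV calculation. All numbers are hypothetical.
monthly_revenue = 50_000.0  # total revenue for the month
active_users = 200          # users active in that month
monthly_churn = 0.025       # 2.5% of customers cancel each month

arpu = monthly_revenue / active_users   # average revenue per user: $250
lifespan = 1 / monthly_churn            # estimated lifespan: 40 months
ltv = arpu * lifespan                   # lifetime value: $10,000

print(f"ARPU ${arpu:,.0f} | lifespan {lifespan:.0f} months | LTV ${ltv:,.0f}")
```

The same three lines work with annual figures; just keep the revenue period and the churn period consistent, or the lifespan estimate will be off by that ratio.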

Over the course of my career, I’ve learned that sometimes the best strategy is to stop doing something rather than create a new process. Making changes and implementing new processes and workflows can be time-consuming, lead to more complications, and cause confusion for your teams and customers. Simply put, here are a few things you can stop doing right now, because we can all agree that pissing off customers is a bad strategy.  

Stop ignoring customer feedback

Ignoring customer feedback is more than a mistake; it’s negligence. Customer feedback is the single most valuable thing a customer can provide — arguably more than their contract value. Insights about your products or services allow you to make improvements and create better experiences for every customer and every prospective customer. 

I’ve written about the perils of relying on surveys to capture customer feedback. So as a modern business leader, it’s high time you establish the channels to capture it and share it with the teams that can benefit the most. Have a system for everyone in your organization to access and analyze customer feedback — make feedback a collective reality. Democratize it. 

At one company where I served as the chief revenue officer, we provided hiring software to medium-sized employers, which helped them attract job applicants and manage the interview and hiring processes. We monitored customer feedback carefully. In fact, we monitored feedback so closely that it became a part of our culture and was more or less the genesis of my current company, Sturdy. 

In addition to fielding and responding to occasional issues and concerns about how our service worked, we identified patterns within the feedback: features that were missing, UI that was confusing, bugs that caused frustrations, coaching opportunities for associates, and more. These patterns in the customer feedback informed the creation of very focused rules of engagement and playbooks that ultimately increased our LTV. This lift in LTV helped us successfully sell that business to one of the largest payroll providers in the world. 

Stop overpromising

Whether the account manager said “yes” when they should have said “no,” or what they said was accurate until someone else messed it up, overpromising often comes back to haunt post-sales teams. Poorly aligned expectations leave everyone involved feeling disappointed and let down. This fracture in the customer-to-business relationship is one of the leading causes of cancellations. It’s also one that often goes undocumented or improperly categorized. 

Just as important as capturing the reasons why customers cancel, customer success teams should identify and document common trends and topics that indicate overpromises. By understanding the areas where false promises are made, you can enable customer-facing teams to consistently provide accurate information about the capabilities of your product and services. 

Shameless plug for Sturdy — Our AI looks for Signals of overpromises in communications with your customers. This Signal detects when a customer indicates a discrepancy between the product or service they expected and the one they received.

Here are some overpromise signals that were detected in customer-business emails. Sound familiar? 

"This is something that was promised in the implementation stage."

"… even excited about the features that were promised. But do feel ... underdelivered on the capabilities."

"Below is a list of things that were promised and hasn’t happened:"

"That was promised, but I still have not received anything."

"We can't use these services that were promised/promoted."

Stop doing silly QBRs 

Ok. This may seem trivial and maybe even a little silly itself, but I can’t let this one go. For those unfamiliar with the term, a Quarterly Business Review (QBR) is a look into the performance and value of your service over the past quarter. The objective of a QBR is to identify areas of improvement and offer strategies for moving the relationship with your customer forward. As the name suggests, QBRs are typically conducted at least once per quarter and most often with a typical, boring format — a presentation on some slides.  The TLDR — 95% of the time, QBRs are awful. Personally, I loathe being on either end of them.

I suggest taking a page out of Customer Success keynote speaker and educator Aaron Thompson’s playbook and turning QBRs into something meaningful for your customers. Use them as an opportunity to strengthen your relationship. Don’t just go through the motions. Here are some other tips from Aaron’s blog post on LinkedIn titled “Stupid Is As Stupid Does...And QBRs Are In Fact Stupid.”

  1. Make them a conversation, not a presentation.
  2. Come with more questions than statements.
  3. Don't get into SLAs, IRTs, or anything tactical. The topic du jour is their business strategy, and you are there to learn, not to teach. 
  4. Make them 50% retrospective and 50% prospective. 100% strategic still. 
  5. Get creative. Take a cue from Spotify’s #Wrapped campaigns (2019, 2020, and 2021), which demonstrate value to millions of subscribers at scale at the end of each year.

At several of the companies that I’ve started, advised, consulted for, and worked at, we’ve used the ‘stop, start, continue’ framework. If you aren’t familiar, the ‘stop, start, continue’ framework facilitates retrospectives. The outcome is improving future work performance through open communication and collaboration. In that vein, if you stop doing these things that damage customer relationships, you will open up the possibility of developing deeper relationships with your customers based on trust and value. Implementing even one of these changes can significantly impact your customer retention strategy. Which of these are you going to commit to first? 

Similar articles

AI & ML

Your AI isn’t the problem. Your data is.

Joel Passen
May 6, 2026
5 min read

IT leaders may have resisted AI early, but that phase passed quickly. The real concern wasn’t whether to use it. It was how to control it. Governance, security, visibility. In the end, it came down to preventing sensitive work from being done in personal accounts. Reasonable.

So they got comfortable, signed off, and rolled it out. ChatGPT, Copilot, Claude, company-wide, with guardrails.

People are using it. That part worked.

The disappointment

The problem is what revenue leaders are finding now that it’s live.

The data they actually want to use isn’t accessible in any meaningful way. And that matters more than most people realize, because LLMs are only as useful as what you put in front of them. They’re exceptional at reasoning over structured, coherent information. They’re not designed to reconcile fragmented, inconsistent data spread across a dozen systems.

Nobody’s model is.

So instead, people compensate.

They cut and paste. Drop in exports. Upload a batch of emails and call transcripts, and hope coherence comes out the other side.

It doesn’t. They get fragments. Plausible-sounding ones, but fragments.

The diagnosis

What commercial leaders are running into isn’t a model problem. It’s a data problem.

The data they actually care about isn’t unified. It lives across email, Slack, Zoom, support tickets, calls, and CRM notes. Different systems. Different formats. No shared identity. No relationship context.

Even with connectors. Even with MCPs.

Because underneath it all, the data isn’t organized in a way a model can reason over. There’s no canonical view of the world.

The model doesn’t know that the same person shows up in Zoom, Slack, Zendesk, and Salesforce. It doesn’t understand that those interactions belong to the same thread, the same account, the same moment in a relationship.
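A toy sketch of that identity problem, with made-up records and field names (not any vendor’s actual schema): before a model can reason about “Dana,” something has to decide that three differently formatted records are the same person.

```python
# Toy identity resolution: group records from different systems under a
# normalized key. Records and field names here are hypothetical.
from collections import defaultdict

records = [
    {"system": "salesforce", "email": "Dana.K@acme.com",  "note": "renewal call"},
    {"system": "zendesk",    "email": "dana.k@acme.com ", "note": "ticket thread"},
    {"system": "slack",      "email": "dana.k@ACME.com",  "note": "DM thread"},
    {"system": "zoom",       "email": "lee@initech.io",   "note": "demo"},
]

def canonical_key(record):
    # Normalize: trim whitespace, lowercase. Real resolution also has to
    # handle aliases, name changes, and multiple addresses per person.
    return record["email"].strip().lower()

people = defaultdict(list)
for r in records:
    people[canonical_key(r)].append(r["system"])

# Without this step a model sees four unrelated records; with it,
# three of them collapse into one person across three systems.
print(dict(people))
```

Email is the easy case; the hard cases (shared inboxes, personal accounts, renamed domains) are why this is infrastructure work rather than a prompt trick.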

So it fills in the gaps.

Not because it’s weak. Because it has to keep trying.

The gap

Meanwhile, the models themselves have gotten amazingly powerful. Reasoning is sharper than it’s ever been and getting better daily.

But the data layer most companies are feeding them? Still immature.

According to MIT’s 2025 State of AI in Business, over 80% of companies have explored or deployed LLMs, but only around 5% are seeing meaningful business impact.

High adoption. Low transformation.

That’s not a model problem.

What’s possible

What it looks like when this actually works is different.

Not dashboards. Not reports. Not exports.

A conversation. Like having the best revenue ops analyst you’ve ever worked with on call, one who has read every email, sat in on every call, and never forgets anything.

You ask: “Which accounts have shown signs of churn risk in the last 90 days?”

And instead of a guess, you get a ranked list. Accounts. ARR. The exact messages where the signal showed up. What changed. What triggered it. What to do next.

So you ask a follow-up: “Which of these are new customers?”

Now you’re looking at onboarding breakdowns. Common threads. Where the process is failing.

So you keep going: “Where are we missing expansion opportunities?”

And it surfaces accounts where someone said, “We’re thinking about rolling this out to another team.” But nothing was logged. No opportunity created. No follow-up.

That’s the shift.

You’re no longer stitching together context. You’re interrogating it.

What changes

What changes when you fix the data layer, when your commercial data is normalized, deduplicated, and accessible, isn’t just speed.

It’s the level of questions you can ask.

These aren’t dashboard queries. They’re judgment calls. The kind that used to require a senior operator spending a weekend in spreadsheets and Salesforce. When your data layer is clean and the model has real context to work with, they become a 90-second conversation.

That’s the difference. Not a better model. Better fuel.

The data infrastructure reality

Most teams won’t get there by accident. The infrastructure problem is real: identity resolution across systems, conversation reconstruction across channels, deduplication, and signal enrichment. It’s six to twelve months of plumbing if you build it yourself.

The companies that crack it first won’t just be more efficient. They’ll be operating with a fundamentally different information advantage. They’ll see churn coming, spot expansion signals, catch friction early, before any of it shows up in the numbers.

At that point, the question changes.

It’s not whether AI works.

It’s whether your data is ready for it.

And whether you’re going to build that layer, or keep working around the absence of it.

This is what we're building at Sturdy.ai. The data layer your LLM actually needs.

Insight Updates

The Moment B2B Sales Teams Forget Everything They Learned During the Deal

Joel Passen
May 6, 2026
5 min read

It’s not the close. It’s not the kickoff call. It’s the 48 hours in between — when the contract gets signed, the champagne (metaphorically) gets popped, and everything the sales team learned over months of conversations, negotiations, and relationship-building quietly disappears.

The delivery team inherits a contract and a few CRM notes. Not the story behind the deal.

This is the handoff problem. And it’s costing companies more than they realize.

Why the Knowledge Dies at the Signature Line

Think about what actually happens during a complex B2B sale.

Over weeks or months, a sales team accumulates an extraordinary amount of institutional knowledge. They learn why the buyer is actually moving now — not the official reason, but the real one. The compliance incident that became a board-level conversation. The internal champion who’s been pushing for change for two years and finally got budget. The exec who’s skeptical and needs to see a specific proof point before they’ll get on board.

They learn who matters and how decisions actually get made, which is almost never what the org chart suggests. They learn what got promised in the final stretch: the SLA clause that got added at the last minute, the integration that’s now contractually locked, the go-live date that the CFO has already presented to her board.

None of that lives in the CRM. It lives in emails, call recordings, Slack threads, and people’s heads.

And the moment the deal closes, the sales team moves on to the next one. That’s their job. That’s how they get paid. But the institutional knowledge they spent months building, the context that would let an implementation team start informed instead of starting over, largely evaporates.

Onto the next pipeline review.

The Cost Nobody Is Measuring

Companies measure churn. They measure NPS. They measure time-to-value.

Most don’t measure the cost of the knowledge gap at handoff — because it doesn’t show up as a line item. It shows up as implementation delays. Escalations. Customers who feel like they have to repeat themselves six months into a relationship that should already be mature.

It shows up as promises made during the sale that nobody on the delivery side knew about. Commitments that surface in month three as a nasty surprise. Expectations that were set in a negotiation conversation that never made it into a system anyone on the CS team can see.

The SaaS industry has spent a decade optimizing the top of the funnel. Sophisticated systems for capturing and qualifying demand. Playbooks for every stage of the sales motion. Entire conferences dedicated to pipeline hygiene.

And then we hand a contract and a prayer to the team responsible for actually delivering the value we sold.

What Good Looks Like

I’ll make this concrete.

We recently ran Sturdy against a real deal, a $190K ACV implementation that had just closed. Board-level compliance incident drove the urgency. CFO was the economic decision-maker: analytical, direct, not interested in being charmed. An integration was contractually locked in Exhibit A. Timeline slippage wasn’t just an ops problem; it would retrigger board scrutiny because of the prior incident.

The implementation team knew all of that before the first kickoff call.

Not because someone wrote a perfect handoff email at 11 pm the night before go-live. Because Sturdy read across the entire deal — emails, calls, negotiations — and surfaced the context that actually matters: why they bought, who really matters internally, what was promised, and where the risk lives.

That’s the brief I show in the video. Notice how specific it is. Notice that it doesn’t just describe what happened, it tells the delivery team what to do with it.

That’s what institutional knowledge looks like when it doesn’t get lost.

The Broader Shift

The handoff problem is really a symptom of something larger.

B2B revenue has always been a team sport — sales, CS, implementation, product, and finance all own a piece of the outcome. But the systems we’ve built treat each function as a silo. Data gets entered into the CRM by whoever remembered to do it. Calls get recorded and filed somewhere nobody looks. Emails pile up in inboxes that get searched only when something’s already on fire.

The signals are there. The context exists. It’s just buried, and it disappears at exactly the moments in the customer lifecycle when it’s most needed.

The companies that figure this out and build systems to capture, preserve, and operationalize institutional knowledge across the revenue lifecycle will have an operational advantage over those still relying on heroic individual effort and the hope that someone wrote a good handoff doc.

This isn’t an incremental improvement. It’s a different way of operating.

The moment a deal closes should be the moment an organization puts everything it learned to work.

Right now, for most companies, it’s the moment they forget it.

That’s the problem Sturdy was built to solve. If this resonates, start at sturdy.ai.

Insight Updates

Sturdy's MCP Server: One Call. Every Source. Already Resolved.

Joel Passen
May 4, 2026
5 min read

Another Step to Unlocking AI Outcomes: Resolve the Data First

The bottleneck is not your AI model. It’s the data it has access to. Sturdy’s MCP server delivers pre‑resolved, canonically organized context so your LLM can reason over it instead of guessing around it.


For years, the problem was that data lived in silos. Different systems for sales, support, and calls. But the worst offenders were email and Slack. Email isn’t one silo; it’s as many silos as there are people on your team. Every rep, every CSM, every exec running their own inbox, none of it visible to anyone else. Slack is no different. Conversations buried in channels and DMs that nobody ever sees again.

What Changes

“Your LLM now has a single, usable data layer any user can query to inspect the full context of every prospect and customer.”

“Every team now works from a single view of the relationship, not fragments of it. Sturdy gets everyone on the same page, no matter what screen they use.”

MCPs were a material step forward. They give LLMs a standardized way to reach outside their context window and pull live data from external systems without a human copying it in manually. An account record, an open ticket, a call summary, all accessible at query time without a custom integration.

Today, teams are dealing with a different version of the same problem. Every MCP server exposes a slice of the picture. The LLM can pull structured records, read a ticket, or fetch a call summary. What it cannot do is answer a question that requires all of them at once, because the data across those systems was never resolved against each other.

The entities don’t match. The timeline is fragmented. The thread that started the conversation often isn’t there at all.

The question every revenue team actually needs answered isn’t “what does this system say about the account?” It’s the question that requires the full picture: what has every person at our company said to every person at this company, across every channel, and what does that tell us about where this relationship actually stands right now?

No single MCP server can answer that. Most LLMs, handed raw data, will approximate an answer and present it with false confidence. That’s not intelligence. It’s a good guess.

That answer doesn’t live in any single system. It lives in the relationship between all of them. And if the LLM has to call multiple MCP servers to piece it together, resolve duplicate records, and reassemble a coherent account state on every query, the fragmentation problem hasn’t been solved. It’s just been moved into the inference layer.

What Sturdy’s MCP Does

Sturdy ingests from all of it. Email, call transcripts, support tickets, Slack, CRM, and meeting tools. Every channel where communication happens.

Before any of that reaches an LLM, Sturdy does the work that makes it usable. Entities are deduplicated and matched to canonical records. Interactions are classified. Signals are enriched, permission‑scoped, and source‑referenced. The relationship between interactions across systems is established once upstream.

Not inferred at query time. Resolved in advance, maintained continuously, and auditable.

That last part matters more than it sounds. LLMs are getting better at fuzzy matching, but revenue decisions cannot rely on it. “Probably the same account” is not good enough when you’re making retention calls, forecast commits, or expansion bets.

Then Sturdy exposes all of it through a single MCP server. One call. Pre‑resolved context with citations. The LLM starts from the signal, not the raw material.

The Token Cost Nobody Is Talking About

There’s a practical consequence to raw MCP that most teams haven’t priced in yet. When an LLM has to reconstruct account context from scratch on every query, it burns tokens doing work that shouldn’t need to happen at query time.

Pulling from multiple sources. Resolving conflicts. Traversing relationships. Figuring out what it’s looking at.

At low volumes, this is invisible. At scale, it isn’t. The rediscovery tax on a raw MCP call runs roughly 60 to 80 percent of total token consumption per query. That’s the LLM figuring out context, not reasoning over it.
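Back-of-the-envelope, using the 60 to 80 percent range above and otherwise hypothetical per-query numbers, the waste compounds quickly:

```python
# Rough illustration of the "rediscovery tax" at scale.
# Per-query volume figures are hypothetical; 0.70 is the midpoint
# of the 60-80% range cited in the text.
tokens_per_query = 20_000    # total tokens a raw multi-source query burns
rediscovery_share = 0.70     # fraction spent reconstructing context
queries_per_day = 5_000

wasted_per_day = tokens_per_query * rediscovery_share * queries_per_day
print(f"{wasted_per_day:,.0f} tokens/day spent reconstructing context")
```

At those assumed volumes that is 70 million tokens a day doing work that could have been done once, upstream.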

Sturdy removes most of that overhead. The context arrives already structured. The LLM starts from a position of knowing. The inference budget goes toward answering the question, not reconstructing the data.

What This Means for Teams Building on It

Sturdy’s MCP is designed for teams that have already provisioned an LLM and are now trying to make it useful. CTOs deploying models across their organization. Heads of Data and AI trying to get real answers out of them. Operations teams building agents that need reliable account intelligence.

The properties that matter:

Canonically resolved
Entity deduplication and matching happen upstream. The same account appears as one account regardless of how many systems it lives in.

Permission‑aware
Access controls are baked into the data layer. What a user can see reflects what they’re authorized to see in the source systems.

Source‑referenceable
Every signal comes with a citation. When something surfaces, the underlying interaction is linked.

Model‑agnostic
The data layer doesn’t change based on which model you use.

Nobody wants to spend 12 to 18 months normalizing data before they can build something useful. Resolving that data upstream changes what your LLM can do on day one.

Talk to us about connecting Sturdy to your existing AI deployment.
