Welcome to the Sturdy Blog
News and Resources
The latest from Sturdy — product news, insights, and resources.
Sturdy's MCP Server: One Call. Every Source. Already Resolved.
Another Step to Unlocking AI Outcomes: Resolve the Data First
The bottleneck is not your AI model. It’s the data it has access to. Sturdy’s MCP server delivers pre‑resolved, canonically organized context so your LLM can reason over it instead of guessing around it.
For years, the problem was that data lived in silos. Different systems for sales, support, and calls. But the worst offenders were email and Slack. Email isn’t one silo; it’s as many silos as there are people on your team. Every rep, every CSM, every exec running their own inbox, none of it visible to anyone else. Slack is no different. Conversations buried in channels and DMs that nobody ever sees again.
What Changes
“Your LLM now has a single, usable data layer any user can query to inspect the full context of every prospect and customer.”
“Every team now works from a single view of the relationship, not fragments of it. Sturdy gets everyone on the same page, no matter what screen they use.”
MCPs were a material step forward. They give LLMs a standardized way to reach outside their context window and pull live data from external systems without a human copying it in manually. An account record, an open ticket, a call summary, all accessible at query time without a custom integration.
Today, teams are dealing with a different version of the same problem. Every MCP server exposes a slice of the picture. The LLM can pull structured records, read a ticket, or fetch a call summary. What it cannot do is answer a question that requires all of them at once, because the data across those systems was never resolved against each other.
The entities don’t match. The timeline is fragmented. The thread that started the conversation often isn’t there at all.
The question every revenue team actually needs answered isn’t “what does this system say about the account?” It’s the question that requires the full picture: what has every person at our company said to every person at this company, across every channel, and what does that tell us about where this relationship actually stands right now.
No single MCP server can answer that. Most LLMs, handed raw data, will approximate an answer and present it with false confidence. That’s not intelligence. It’s a good guess.
That answer doesn’t live in any single system. It lives in the relationship between all of them. And if the LLM has to call multiple MCP servers to piece it together, resolve duplicate records, and reassemble a coherent account state on every query, the fragmentation problem hasn’t been solved. It’s just been moved into the inference layer.
What Sturdy’s MCP Does
Sturdy ingests from all of it. Email, call transcripts, support tickets, Slack, CRM, and meeting tools. Every channel where communication happens.
Before any of that reaches an LLM, Sturdy does the work that makes it usable. Entities are deduplicated and matched to canonical records. Interactions are classified. Signals are enriched, permission‑scoped, and source‑referenced. The relationship between interactions across systems is established once upstream.
Not inferred at query time. Resolved in advance, maintained continuously, and auditable.
That last part matters more than it sounds. LLMs are getting better at fuzzy matching, but revenue decisions cannot rely on it. “Probably the same account” is not good enough when you’re making retention calls, forecast commits, or expansion bets.
Then Sturdy exposes all of it through a single MCP server. One call. Pre‑resolved context with citations. The LLM starts from the signal, not the raw material.
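In protocol terms, that is one MCP `tools/call` request instead of a fan-out across servers. A minimal sketch of the JSON-RPC shape follows; the tool name `get_account_context` and its arguments are hypothetical illustrations, not Sturdy's published schema:

```python
import json

# MCP requests are JSON-RPC 2.0. The tool name and arguments below are
# hypothetical illustrations, not Sturdy's published API surface.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_account_context",          # hypothetical tool name
        "arguments": {"account": "Acme Corp"},  # one call, one resolved account
    },
}

print(json.dumps(request, indent=2))
```

The point of the sketch is the shape: one request returning pre-resolved, cited context, rather than N requests whose results the model must reconcile itself.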
The Token Cost Nobody Is Talking About
There’s a practical consequence to raw MCP that most teams haven’t priced in yet. When an LLM has to reconstruct account context from scratch on every query, it burns tokens doing work that shouldn’t need to happen at query time.
Pulling from multiple sources. Resolving conflicts. Traversing relationships. Figuring out what it’s looking at.
At low volumes, this is invisible. At scale, it isn’t. The rediscovery tax on a raw MCP call runs roughly 60 to 80 percent of total token consumption per query. That’s the LLM figuring out context, not reasoning over it.
Sturdy removes most of that overhead. The context arrives already structured. The LLM starts from a position of knowing. The inference budget goes toward answering the question, not reconstructing the data.
What This Means for Teams Building on It
Sturdy’s MCP is designed for teams that have already provisioned an LLM and are now trying to make it useful: CTOs deploying models across their organization, Heads of Data and AI trying to get real answers out of them, and operations teams building agents that need reliable account intelligence.
The properties that matter:
Canonically resolved
Entity deduplication and matching happen upstream. The same account appears as one account regardless of how many systems it lives in.
Permission‑aware
Access controls are baked into the data layer. What a user can see reflects what they’re authorized to see in the source systems.
Source‑referenceable
Every signal comes with a citation. When something surfaces, the underlying interaction is linked.
Model‑agnostic
The data layer doesn’t change based on which model you use.
Nobody wants to spend 12 to 18 months normalizing data before they can build something useful. Resolving that data upstream changes what your LLM can do on day one.
Talk to us about connecting Sturdy to your existing AI deployment.
Our articles

Where good (business) ideas die
Years back I had an idea that every time a customer expressed some sort of "love" we would reach out and ask them to be a reference. The way this was supposed to work was that the Support/CS person would forward any happy customer to the marketing team as a “Reference Lead.” Then, marketing would reach out to the customer. Nothing groundbreaking here. If your business doesn't already do this, go ahead and give it a shot. Happy customers close deals for you.
And at the end of the first month, nothing. Why?
Do none of our customers like us?
Did our Support Team drop the ball?
Did the Marketing team drop the ball?
Did the customer refuse?
If you manage groups of people, you can certainly think of other examples.
Like, "Whenever there is a new customer contact, make sure you log it to Salesforce, dang it!"
Or, "Whenever there is a bug report, log it to JIRA."
The reference harvesting failure has stuck with me. It was so simple, yet it failed spectacularly.
I have three takeaways from this that guide me today:
First, in our world of "Knowledge Work" almost every new policy/idea requires a new manual task. Add it to Excel. Track it in CRM. I would say we've built an entire ecosystem centered on digital logging, but it is more like a multiverse. Every silo has its own physics with its own rules and workflows.
Second, every ‘silo-bounce’ increases the failure rate. "Take this thing from Support and log it for the Product Manager so they can recommend it to Engineering." Boing. Boing. Crash. Intersections are more dangerous than freeways.
Finally, whenever you implement a policy, it will fail unless you lean in and check on it regularly, and you probably won't. No coach, no team.
The future will be a much better place for your co-workers and customers.
Artificial Intelligence, after you do the hard things like building integrations, cleaning data, de-duping, creating a UI and then a data API, will improve your business, your customers' experience, and your life.
There will be no more manual logging. There is no need to ask someone to forward an event.
Your coworkers won't have the soul-sucking task of "logging it if it is important." Your customers won’t email managers, "No one has gotten back to me."
Until that time...
Tomorrow, your team will be assigned a new task to log something for someone else's team. Some people will forget. The other team will be required to read that information. Some people won't do it.
In three months, your CEO will be annoyed. "What ever happened with that one thing I asked for?"
This is one of the reasons my team and I started building Sturdy in 2019. There are too many people logging minutiae so that someone might find the time to read it. There are too many customers that fall through the cracks that could easily be saved. There are too many good ideas that die because of failed execution and lack of accountability.
It doesn't have to be this way.
Why We Don't Have Nice Things
I have always been fascinated by how product roadmaps are maintained. So much so that I feel it necessary to pen a bombastic screed on the topic.
(As an aside, when you talk to VCs, they’ll ask, “What’s your {2-5} year roadmap?” I want to say, “Whatever needs to get built,” but I think better of it. Life Pro Tip: use words like “disintermediate.”
I find there is little utility in years-long product roadmaps. Unless you ignore your users/customers. If you have a team conducting market research to determine what to build and then put it in a 2-year plan, then you’re ignoring your users. If you have a team advocating for your users and having hard conversations with engineering and sales, you are not ignoring your users.
This is why Gmail, 20 years later, still has the attachments at the bottom of the email instead of at the top, where they belong: the revenue team is filling the roadmap with better ways to sell your data. I digress.)
The three drivers of a company’s product roadmap are:
Things users want;
Things your sellers want;
Things your product team/engineers want.
They don’t overlap as often as you might think.
Your users want usability (and probably a ton of user-permissions stuff). They bought your product missing certain features, and they are OK with that. They primarily want your existing stuff to get better, easier to use, and easier to get data from.
Your sellers want new features. They usually want the best feature that your competitors already have.
Your product team is more complicated. Most teams want insane reliability, security, and speed. Teams run by CTOs aspiring to wear black turtlenecks build their own UI framework from scratch so that the one thing the new thing does is 1% better at something.
Where do they overlap?
- Your Revenue Teams and Users overlap around UI and reporting. If it looks pretty and has cool reports, it will sell software (1).
- Users and Engineering overlap in the desire for performance and reliability (2).
- Development and Revenue overlap at shiny things (3). When you hear “Minimally Viable Product,” you’ve found it. When you hear “App Store”, or “I took some screenshots,” you’ve found it.
- If you are wondering what happens when they all intersect, I don’t know. I can’t remember all three teams agreeing on a feature.
Your existing customers don’t care about shiny things. But you need to grow revenue, and the CTO is on board, so guess what gets built?
(I would like to say that building shiny things isn’t wholly a bad idea. You need to go for it every now and then. Sometimes, really cool stuff gets built. But, in my experience, that shiny MVP is going to the back of the update line the day it's shipped, and it will suck, forever. Related to this is why your “Admin” area is terrible. Don’t lie, you know it is.)
I have sat in so many board meetings where the CTO presents a roadmap, and the COO/Customer Leader freaks out. I was in an amazing one over a decade ago when the CTO’s priority was “voice enabling the product.”
Everyone blew a gasket.
If your customer falls in the woods, and no one is listening, do they make a sound?
If a user reports a bug or asks for a feature, if someone remembers to do it, it will be manually logged in a drop-down menu in some silo. It’s also probably logged by someone who has no incentive other than to close the ticket as quickly as possible. In other words, if it gets logged, it will be stored somewhere that’s hard to get to, and no one will read it.
If a user is confused, or says something sucks, someone wraps the user in a warm blanket of apologies and moves on. In the worst case scenario, the user will get something like, “that’s actually how we intended it to work!”
(Once, in a design review, a UI team told me they hid a feature because they didn’t want the users to actually use it. It allowed people to opt in to having a paper check instead of a direct deposit. “How many support tickets did this cause last month?” No one knew.)
It takes hard work to know what the customer wants, or hates. It also requires honesty, and a bit of self-flagellation.
I ran into a CxO who wanted AI to “automatically write knowledge base articles.” I hear this as, “Our product is so confusing that we can’t manage the number of questions about how to use it.”
Get honest: fix the product. No one, ever, renewed because of an awesome knowledge base. Good products don’t need AI knowledge bases. They also don’t need churn prediction or quarterly business reviews, but that’s for another time.
To break this cycle, you must be rigorous about logging every feature request, bug, and UI issue. You’ll need to understand why customers are saying, “how do I do this?” and “that’s confusing.”
(Another data point: track when your people apologize. “What are we apologizing for?”)
How will you gather this brutal truth? You need to put someone in charge of collecting data from your 5-50 systems, organizing it by account, and attaching a cost-benefit analysis to each issue. Then put it in a spreadsheet and review it every week with the Revenue, Ops, Customer and Engineering teams. Soon everyone will develop a healthy anxiety about the quality of your product. Saying “no” to shiny things will get easier.
Do this and your customers will like you again.
End rant.
Do the hard things,
Steve

Your customers don’t care about your retention rates
I spoke to an entrepreneur this week, and he said, “This company cut CS by 50% just to see what would happen.”
The same person said, “90% of the companies I talk to are canceling their CSP.”
After a recent merger of two large CSPs, one of their executives posted his resignation on LinkedIn, the TL;DR was that CS has a lot of promise but executive leadership refuses to give it the budget it needs.
CS is approaching a crisis. The root of the problem is retention, and the belief that only one group ‘owns’ the number.
Why? No matter how much tech or flesh you throw at a retention problem, CS isn’t going to improve it in any meaningful way…alone.
If your Marketing team targets customers who won’t get value from your product and they buy it, what happens?
If your product is confusing, or buggy, or just sucks, what happens?
If your Sales team sells deals with false promises, what happens?
If your onboarding process stinks, what happens?
If your Accounting team pisses people off, what happens?
The answers to the above are obvious. What is not obvious? Which of these problems is afflicting your business right now, as you read this, because each of those issues is in a different system, silo and team.
You aren't paying attention.
No one owns retention. The obsession with retention has led us to ignore what really matters: what makes customers happy, and what does not.
Today, we have the opportunity to automatically discover almost every issue that detracts from customer satisfaction, route it to the right person, and track its resolution. The Marketing VP targets customers who need the product, the Product Team has a customer-led roadmap, the Billing Team realizes that the auto-renewal process does more harm than good, and the CRO learns which sellers are over and under-selling.
When was the last time you heard someone say, “We leave no stone unturned in our quest to resolve every customer issue rapidly and intelligently?”
I have spoken to several executives who say, “I just wouldn’t know what to do with this type of data.” I make a note to never buy their products. They don’t care about customers.
Call me crazy. I want to live in a world where every product or service I buy is awesome. So does everyone else. Focus on being awesome, and you won’t need to worry about retention.
Let’s try to make it a reality together.

You're in the pros
My neighbor asked me to speak with his son (who is not connected here on LI). The son is a mid-market account manager (post-sales) at a large SI (pure services). His remits are expansion/upsell, renewal assistance, and retention/escalation. His book has 30 customers, and its approximate value is just shy of $1mm annually.
He's stuck.
He's stuck at his company. They pay well. His role isn't challenging him anymore. He doesn't want to do pure sales or pure CS work. He is smart. He is motivated to create a career path. Right now, he can't see the forest for the trees.
After 20 minutes, he asked me what he should start, continue, and stop doing. Great question in this context.
Here was my advice. If you know me well, you know it took many more words than LinkedIn will accept in a single post. 😉
🏅 Start thinking of yourself as a professional athlete.
Professional athletes spend more than 90% of their time preparing for competition. Prepare like a pro for both internal and external meetings. Study your customers and learn everything you can about them. This will prepare you for your account reviews with your leadership. This will help you blow out your KPIs. This will build the foundation of success. Preparation is hard. It's tedious. You will be working harder than ever. Keep doing it. You will not see results for at least six months. Keep going.
💡 Continue asking for help.
Tapping into the expertise and experiences of others is a dying art. New people offer new perspectives. Getting advice will help you learn how other pros have built their careers. As an early/mid-career person, building relationships and networks will serve you well now and in the future. You're defined by the company you keep. Expand your community. It will, eventually, unlock opportunities.
🛑 Stop going through the motions.
Lacking purpose, passion, and interest is a career-advancement death sentence. Most importantly, it leads to dissatisfaction, stagnation, and lack of fulfillment in every aspect of your life. Stop just trying to make your numbers. Kill your number. Stop relying on what got you here. Dig deeper to force yourself to grow. Every day can be the first day of school. You have the power to reinvent yourself every day.
You are in the pros now. Be a pro.

The six attributes that we consistently interview for
There were 453 jobs posted on Indeed in the US for customer success managers in the past 14 days.
On average, companies interview five candidates before making a hiring decision for a mid-level customer success position. That’s a lot of interviews—and time. With productivity being top of mind for customer leaders, new hires, assuming a good fit, will eventually increase capacity, but the process is a body blow to short-term productivity.
Then there is the risk of a bad hire - the real kidney punch. I won’t go into that in this post.
All this hiring is encouraging, and it also got me thinking about how leaders can directly impact the hiring process without all kinds of process changes and wrangling of resources.
Interviews. Ask better questions. Get better information. Make better hiring decisions.
I’ve hired dozens of post-sales people over the years, and here are six attributes that I consistently interview for.
Technical Preparedness: We sold a solution and are now delivering one. Our people must have the chops/cognition to understand complex platforms, workflows, and ecosystems. Additionally, we have to ensure from the get-go that our associates know how to prepare for a solution-oriented meeting with a customer—substance over fluff.
Attention to Detail: Our teammates must be organized, willing to follow processes, and steadfast in capturing data.
Coachability: Ideal candidates will be open and even excited about learning quickly. We look for people who take direction well. We don’t have a long window for ramp. Humility is key.
Sticktoitiveness: Being on the frontline is arduous. Our associates must be able to manage the emotional peaks and valleys.
Work Ethic: Drive is a key value here. We need people who want to work hard while they’re at work consistently and who take pride in the quality of their output.
Resourcefulness: Our teammates need to be hyper-resourceful, diggers of information, and, most of all, intellectually curious so that they can identify root causes.
Note: I haven’t hired a person in the last 20 years without them taking an assessment designed by Gary Kustis. There’s nothing like getting another, unbiased data point with which to make a decision. I'm happy to share how and when I use assessments - just message me.
Also, if you're interested in interviewing like I do, check out what my friends at Intertru Inc are doing. Unique and effective.
Otherwise, if you want a copy of our full behavioral interview guide for CS, you can grab it here!

The Scary Six: Response Lag
I was speaking to the COO of one of our customers a few weeks back, and he said that Sturdy’s “Response Lag” signal was his “Laptop Smasher.” This signal is defined as a “customer is asking for a status update on an unresolved issue.” If your goal is to make sure your customers feel heard, this is a bad signal to see.
While not AI-based, the attached regex will help you find some of these messages on your own. If your support or BI system allows you to filter on inbound messages, it will provide cleaner results. (There’s quite a bit of contextual difference between a customer asking for an update and one of your people asking a customer for an update.)
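The regex image isn't reproduced here, so as an illustrative stand-in (my own sketch of the kind of inbound-message pattern described above, not Sturdy's production signal):

```python
import re

# Illustrative sketch only -- not Sturdy's production pattern. Flags
# inbound messages where a customer asks for status on an open issue.
RESPONSE_LAG = re.compile(
    r"\b(any\s+update|status\s+update|following\s+up|"
    r"still\s+waiting|haven'?t\s+heard\s+back|checking\s+in\s+on)\b",
    re.IGNORECASE,
)

def is_response_lag(message: str) -> bool:
    """True if the message reads like a status-update request."""
    return bool(RESPONSE_LAG.search(message))
```

Expect to tune the alternatives to your own ticket corpus; the real value comes from filtering to inbound messages first, as noted above.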
In most cases, this signal is pretty rare. Typically, it occurs about once in every 1,000 conversations (again, only detecting messages “coming in” from a customer). This signal is important to track for two reasons. The first is that it is almost never self-reported. It is rare for a customer-facing person to say, “Yeah, the customer is upset because I never got back to them.” I am almost certain that you have CS/Support teammates who have a much higher incidence of this signal than your best performers. It also means that you have a grumpy customer that you don’t know about.
The second reason is that Response Lags provide really good data for resolving hidden process or product gaps. If a customer is asking for an update on an issue, it is likely that several other customers are asking about the same thing. Every business is different, but Response Lags will likely indicate that there is a product, process, or person responsible for the plurality of them.
At Sturdy, we use machine learning to track, record, and alert you of Response Lags. We’re also working on some cool stuff that will track the response time of any open issue from any conversation (without requiring a customer to hit the “is this resolved?” button).
How cool would it be to have a dashboard of every “waiting for a response,” email, chat or phone call? We’re working on it.
Give the regex a try, and feel free to DM me with any questions. I hope you don’t smash a laptop. Of course, please regale us in the comments of any learnings you’d like to share on the subject.
Do the hard things,
Steve
The Scary Six: Executive Change
At the end of last year, I shared a regular expression (regex) that identifies "contract requests." That's a scary signal for people who like to keep customers.
Today, I want to discuss the scariest of the Scary Six, "Executive Change."
At my last company, Newton, this signal had the highest correlation to churn and initially resulted in a loss about 50% of the time (for many of Sturdy's customers, this is also true).
So what is it? Let's say you sell accounting services, and this happens:
"Hi, I am the new CFO, and I would like a quick rundown of your capabilities."
The response is often,
"So nice to meet you! LMK when you have 30 minutes for a quick call!"
(By the way, usage will be high during this time, and their Health Score will be green.)
On to the regex…
The first two are specific to HR services/tech, so replace "hr" with "e-commerce," "accounting," "logistics," or whatever business you're in.
Here's what they do:
1. The first detects when someone says, "Hey, we have a new VP of HR coming on board soon."
2. The second, "I will be taking over the Admin role for this account."
3. The third, "Hey, I wanted to let you know that I will be leaving at the end of January."
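The original patterns were shared as an image and aren't reproduced here; the sketches below are my own illustrative reconstructions of the three behaviors described above, not the originals:

```python
import re

# Illustrative reconstructions -- not the original regexes. "hr" is the
# vertical; swap in "e-commerce", "accounting", "logistics", etc.
EXEC_CHANGE_PATTERNS = [
    # 1. "We have a new VP of HR coming on board soon."
    re.compile(r"\bnew\s+(vp|head|director|chief)\b.{0,30}\bhr\b", re.IGNORECASE),
    # 2. "I will be taking over the Admin role for this account."
    re.compile(r"\btaking\s+over\b.{0,40}\b(admin|account)\b", re.IGNORECASE),
    # 3. "I will be leaving at the end of January."
    re.compile(r"\bwill\s+be\s+leaving\b", re.IGNORECASE),
]

def executive_change(message: str) -> bool:
    """True if any of the sketch patterns fires on the message."""
    return any(p.search(message) for p in EXEC_CHANGE_PATTERNS)
```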
Remember that they return a fair number of false positives (FPs). FPs are not included in the churn rate calculation.
The frequency of "Executive Change" varies depending on the industry and segment. In the SMB cohort, it occurs in about 0.1% to 0.2% of customer conversations. In huge enterprises, around 0.04%.
Interestingly, this signal is much more common in the HR space, firing in about 0.3% of conversations.
There is also a lot of variation in the severity. Still, the correlation to cancellation is the 2nd highest of any signal we currently detect at Sturdy ("I want to cancel" being the highest, obviously). For SMB customers, the churn rate for this signal, if untreated, will approach 70%. It will be lower for enterprise customers.
Another critical point is that this is a leading indicator. It often occurs long before the cancellation event.
Why is this signal such a strong indicator? At the beginning of the post, we showed a sample trigger-sequence that ended something like, "Let's do a quick demo!"
What's wrong here? I think it is because one or all of the following is happening:
1. The value of your service can't be communicated in a "quick demo."
2. The new contact has undoubtedly used and trusts a competing solution.
3. The person conducting the demo has not been trained to sell your product, overcome objections, and destroy your competition's product.
This is a perfect recipe for failure. Here's a scenario...
Acme Corp sells HR Software on M2M and yearly contracts; it receives:
10k emails and tickets per month (items).
10k items equals about 2k conversations (1 convo = ~4.4 items)
0.3% detection per conversation = 6 Exec Changes
Two false positives (30% FP rate)
50% churn x 4 = 2 losses
If untreated, Acme loses two customers to this signal per month.
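The Acme funnel above is just arithmetic, and it's easy to rerun with your own numbers. A quick sketch, using exactly the rates quoted in the scenario:

```python
# Back-of-envelope churn funnel from the Acme Corp scenario.
# All rates are the ones quoted above; nothing here is measured data.
conversations_per_month = 2_000   # ~10k items / ~4.4 items per conversation
detection_rate = 0.003            # .3% of conversations flag an Exec Change
flags = conversations_per_month * detection_rate   # 6 flags
false_positives = 2                                # ~30% FP rate
true_positives = flags - false_positives           # 4 real Exec Changes
losses = true_positives * 0.50                     # 50% untreated churn
print(round(flags), round(true_positives), round(losses))  # 6 4 2
```

Swap in your own volume, detection rate, and churn rate to estimate your monthly exposure to this signal.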
The good news is that, in my experience, treatment will save about one of these customers each month. How?
1. Train everyone who touches customers (billing, CS, and marketing) to identify the signal.
2. Immediately send the signal to your sales AND marketing teams.
Someone should attempt to discover the product the new contact used at their former company.
3. A salesperson must schedule a demo as soon as possible. (At Newton, our KPI was to conduct the demo within ten days.) The seller should come armed with useful information, like usage data (e.g., candidates hired), and be prepared to sell against the new contact's previous solution.
4. In parallel, the marketing team checks LinkedIn to see if the previous contact has landed a new job. If not, someone should reach out and see if they need help in their job search (after all, you sell to companies that hire these people). If the person has landed somewhere, send them a note, a gift basket, or whatever you think is appropriate.
5. Send the previous contact to the Sales team as an SQL (sales-qualified lead).
(Shameless plug: Sturdy has AI language models that find #1, automatically route #2, and can tell you whether #3 and #4 happened.)
The result of this process is a successful "double-dip". You may save a customer and gain a lead for your sales team. Ironically, if your competition is not tracking the Executive Change signal, your chance of closing that deal is very high.

The Scary Six: Contract Request
The second line of that image is a regular expression (aka regex). If your support or ticket system supports regex, try that search against the content of your tickets. You can probably hand this to your BI team, too. It will find customer comments like, “Hey, we’re just cleaning up some files, and can we get a copy of our agreement?”
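The original post's regex isn't reproduced here, but a hypothetical stand-in for the kind of "Contract Request" pattern described might look like this:

```python
import re

# Hypothetical Contract Request pattern -- NOT the regex from the original
# image, just an illustration of the phrasing described above.
CONTRACT_REQUEST = re.compile(
    r"copy\s+of\s+(our|the|my)\s+(contract|agreement)|renewal\s+date",
    re.IGNORECASE,
)

ticket = "Hey, we're just cleaning up some files, and can we get a copy of our agreement?"
print(bool(CONTRACT_REQUEST.search(ticket)))  # True
```

Most ticket systems and BI tools accept a pattern like this directly in their search, so you can test the idea without writing any code.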
For some background, at my last company, I had a standing meeting on my calendar every week to read random support tickets. From this, the concept of the “Scary Six” was born.
One of the Scary Six was a “Contract Request.”
At Newton, about 70% of the time, when a customer requested a copy of their contract, it was a risk to their revenue longevity. We audited them regularly and found they broke down into the following buckets:
- We want to know when we can, or how easy it is, to cancel (50%).
- We just need our contract because we lost it (30%).
- We are getting bought, going out of business, etc. (10%).
- We need to see if we can cut some costs (10%).
We saw this flag about once per 6,000 email conversations (.0167%). Generally, this average rings true for most businesses we work with today.
Combining these two metrics, we estimated that for every 10,000 email conversations, we received about 2 Contract Requests. In other words, for every 10,000 conversations, about 1.4 customers were at risk.
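Those frequency figures can be sanity-checked with simple arithmetic, using the rates quoted above:

```python
# Contract Request frequency math from the figures above.
flag_rate = 1 / 6_000    # one Contract Request per 6,000 conversations (~.0167%)
risk_share = 0.70        # ~70% of requests indicated real revenue risk

conversations = 10_000
requests = round(conversations * flag_rate)   # ~1.67, "about 2"
at_risk = requests * risk_share               # 2 x 0.70 = 1.4 customers at risk
print(requests, at_risk)
```

The same two inputs (flag rate and risk share) are all you need to size this signal against your own ticket volume.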
Once we identified Contract Request as a revenue impact, our incredible CS team trained everyone to identify “Contract Request” language. We then built a process for addressing them.
The before/after impact of identification and triage was remarkable and resulted in doubling the retention rate for this signal.
Over the next few weeks, I will post the rest of the “Scary Six” with their regex. Those left on the list are “Executive Change,” “Renewal,” “Response Lag,” “Overpromised,” and obviously, “Cancellation.”
Please let me know if you have any other “Scary” triggers. I hope you give this a shot and find it illuminating.
There's a New Sheriff in Town
When I started my first SaaS company, I had a standing meeting on my calendar every week to read random support tickets (random is the crucial word, by the way). Reading tickets was always illuminating and often painful. One of our learnings was a churn risk called "New Sheriff."
First, don't get me wrong, we trusted our team. But if there's one thing that always bothered me, it was that I never really knew what our customers were saying about us. And, for that matter, what we were saying to them.
Eventually, we built a suite of search strings, and if you want to try some yourself, here are a few simple ones:
We would search for product issues with strings like "doesn't work," "confusing," "annoying," "bug," and "clear cache."
Searching for phrases like "gotten back to me" and "still waiting" would indicate that a customer was still awaiting a response. For revenue issues, I would look for "new VP," "new vice president," "new manager," "has left the company," "copy of our contract," "renewal date," and "overdue."
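If you want to try this yourself, the search strings above can be bundled into a tiny classifier. This is a deliberately naive substring search, not Sturdy's models; the phrases are the ones listed above:

```python
# A simple version of the manual search-string approach described above.
# Matching is naive substring search, so expect false positives
# (e.g., "bug" also matches "debug").
PRODUCT_ISSUES = ["doesn't work", "confusing", "annoying", "bug", "clear cache"]
WAITING = ["gotten back to me", "still waiting"]
REVENUE_RISK = ["new vp", "new vice president", "new manager",
                "has left the company", "copy of our contract",
                "renewal date", "overdue"]

def classify(ticket: str) -> list[str]:
    """Return which risk buckets a ticket's text falls into."""
    text = ticket.lower()
    buckets = []
    if any(p in text for p in PRODUCT_ISSUES):
        buckets.append("product issue")
    if any(p in text for p in WAITING):
        buckets.append("awaiting response")
    if any(p in text for p in REVENUE_RISK):
        buckets.append("revenue risk")
    return buckets

print(classify("Our HR Manager has left the company, I need a login for our new VP."))
# ['revenue risk']
```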
You are probably thinking, "Why would I look for 'new VP' or 'new manager'?" It comes up like this: "Our HR Manager has left the company recently, and I need a login for our VP of HR, Jim Smith."
At Newton, HR executives were responsible for hiring/firing HR software decisions. We sold HR software.
A new HR executive was the strongest churn indicator in our business. By that I mean that, left unattended, the customer was almost certain (80%+) to churn at renewal. From this, the term "New Sheriff" was coined. A "New Sheriff" customer was no longer forecast to be a long-term customer and thus needed to be resold.
We trained everyone at Newton to identify a "New Sheriff" and where to send the alert, all manually.
When we got a "New Sheriff" alert, several people got to work. The CS team would pull usage data and some other vital metrics. The account management team would reach out to identify the new VP and schedule a demo of our solution.
Our sales leadership would also reach out to the former executive. We'd offer to help them network to find a new job or make inroads at their new company.
In doing this, we turned our "churniest" event, one with an 80% churn rate, into one with a 30% churn rate (from -.8 to -.3). We also gained a lead for our sales team that closed 80% of the time (from 0 to +.8). In other words, we turned a very churny event into one that gained half a customer.
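Per detected event, the net effect described above works out like this, using the rates quoted:

```python
# Per-event economics of the "New Sheriff" process described above.
churn_before = 0.80      # untreated: ~80% churned -> -0.8 customers per event
churn_after = 0.30       # treated:   ~30% churned -> -0.3 customers per event
lead_close_rate = 0.80   # former exec as an SQL closed ~80% -> +0.8 customers

net_before = -churn_before                    # -0.8 customers per event
net_after = -churn_after + lead_close_rate    # about +0.5: half a customer gained
print(round(net_after - net_before, 2))       # swing of ~1.3 customers per event
```

The swing (about 1.3 customers per event) is why this one signal justified a cross-team playbook.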
If you'd like to capture "New Sheriffs," give me a shout at Steve@sturdy.ai, and I'll send you a few more advanced search strings. (If you're a Sturdy customer, our models auto-flag this as "Executive Change.")
The Rise of AI Operations Management
About the Author
Hi, I'm Steve Hazelton. I am one of the founders of a startup that helps businesses better understand their customers by using AI to identify risks and opportunities inside the unstructured data trapped in emails, chats, and phone calls. I used generative AI to help with data collection for this article (if you don't know what that means, you are not alone; go here).
Caveat Emptor: Since I am the founder of a startup that relies heavily on AI to find happy and unhappy customers, I have certainly "bought in" to using AI to leverage a business's untapped data streams. So, consider yourself warned. I also started my career long ago as a recruiter for technology companies, later built and sold an HR tech company, and then started an AI company, so AI and career advice collide here.
Introduction
Often lost in this discussion of AI technology is the discussion of its impacts on our teammates and coworkers. What does the widespread adoption of AI in business mean for the careers of people who work at these businesses? While many of us are, and should be, concerned about job destruction, I want to talk about job creation.
Note: There is, at present, endless discussion on AI technology, which I won't bore you with here. If you want to learn about the different types of AI and AI tools, like Generative AI, Synth AI, Machine Learning, and others, you can read an article our co-founder wrote here.
From a career standpoint, the most significant change our businesses will see this decade is the creation of a new, high-paying job: AI Operations. This article will help you define this role when you decide to hire for it, or, if you want to be that person, show you how to create the role in your company.
This person will leverage AI tools and products to improve a business's top and bottom-line revenue. They'll find revenue opportunities, prevent cancellations or churn, and make people more efficient. They will be indispensable.
Dan Corbin, an instructor at the Pragmatic Institute, states, "If you can change your mindset as a company and understand the capabilities of AI, this is where AI operations come in. You need this AI Operations Director to ask, 'How do we tackle this from a macro level?' You must think about AI from an organizational perspective to leverage it to its full capacity."
We are at the beginning of a major, major shift in employment. Fifty years ago, did companies have an IT Manager? No, but they do now. Thirty years ago, did they have an e-commerce manager? Again, no, but they do now. Ten years from now, will companies have someone in charge of AI operations? Of course, they will.
Adoption is inevitable because the gains are too significant. As Mike Evans, Director of Customer Care and Analytics at Laerdal Medical, states, "You need someone to own this." Companies like MassPay have already implemented such a role, which they attribute to the 100% customer retention of their "Top 100" last year. Hawke Media, the top performance marketing agency in the country, shifted the purview of their existing Director of Business Intelligence to include AI Ops. They then improved revenue retention by 30% MoM in less than six weeks.
With that out of the way, let's get started.
The Role of a Director of AI Operations
As businesses continue to embrace AI technologies, the need for dedicated professionals to implement and oversee these systems will become critical. Enter the Director of AI Operations.
At its core, this role is creative: you need to think of new ways to solve old problems in ways that have never been done before. "How can we use tomorrow's tools to solve our problems today?" This is a key point. AI will help you solve problems in ways you never considered before because they were previously impossible.
The Director of AI Operations will be the key player responsible for developing, implementing, and managing AI strategies.
The Director of AI Operations should be able to create and implement an AI strategy that answers the following:
"How can we use AI to find revenue opportunities?"
"How can we use AI to identify and reduce revenue risks?"
"How can AI make our teammates more efficient?"
This multifaceted role requires a deep understanding of AI vendors, data privacy, and a visionary mindset to leverage AI's potential effectively.
Note: This job does not require coding. This person isn't building AI; they are identifying the areas where AI can improve business performance.
Let's take a closer look at the responsibilities of this job:
Strategy Development: "What problems are we trying to solve?" The Director of AI Operations collaborates with various departments to identify areas where AI can be integrated to capture risk and opportunities or to improve efficiency. They create a comprehensive AI strategy aligned with the business's overall objectives.
Data Discovery: "What data could AI illuminate that we've previously been unable to use?" For example, a Director of AI Operations could use emails to create a new data stream that correlates customer product confusion with unhappiness and eventual cancellation.
Data Management: "What are the security, privacy, and regulatory challenges with our approach?" AI relies heavily on quality data to make accurate predictions and decisions. The Director ensures that data is collected, cleaned, and stored securely. This person should be able to deep-dive on a vendor's privacy and regulatory compliance.
Implementation: "What systems will we need to leverage, and how will we accomplish this?" Once the AI strategy is in place, the Director oversees the implementation of AI projects, ensuring seamless integration with existing systems and addressing any technical challenges that arise. Just as important, this person will need to drive the "people-side" integrations and help people leverage these new data streams.
Performance Monitoring: "What are the success criteria?" Monitoring the performance of AI systems is critical. The Director tracks key performance indicators to measure the impact of AI applications and makes adjustments as needed. Critical here is to answer, "Is this driving the desired outcomes?"
Ethical Considerations: "Should we even use AI for this?" Some AI systems handle sensitive data and will replicate previous biases. A crucial question is, "Is the AI making decisions?" If "yes," then much thought should be put into whether or not AI is appropriate.
Growth of AI Jobs
According to a study conducted by the World Economic Forum, AI is estimated to create 58 million new jobs by 2024. This includes a wide range of roles, from data scientists and AI engineers to, of course, Directors of AI Operations. According to HireEZ, one of the world's largest outbound recruiting platforms, demand for AI-related positions has risen by 60% since 2021.
On the flip side of that coin, higher education and e-learning platforms are seeing a surge in interest in AI courses. Pablo Garcia, Content Lead at CXL, the top marketing e-learning platform, states, "CXL saw a much higher interest in AI courses among our students in 2023, with a 785% increase in engagement for the Advanced AI in Marketing course."
As more companies recognize the potential of AI and seek to stay competitive in the market, the demand for AI professionals is set to skyrocket. A survey conducted by Deloitte further reinforces the growing importance of AI in businesses. It revealed that around 61% of surveyed companies have already implemented some form of AI into their operations. That means 61% of respondents are looking to hire a Director of AI Operations if they haven't already. Don't get left behind.
Where to Start
If becoming your company's Director of AI Operations seems daunting, don't be discouraged if you don't have experience. Very, very few people do. Get started now, and you'll be far ahead of everyone else.
Where would I start? I would look at my current role and think, "How could AI help my current company keep customers longer? Or, how could AI make my group more efficient?"
"What problems are we trying to solve?"
"What data could AI illuminate that we've previously been unable to use?"
"What are the security, privacy, and regulatory challenges with our approach?"
"What systems will we need to leverage, and how will we accomplish this?"
"What are the success criteria?"
"Should we even use AI for this?"
Get on it! Also, if you’d like more help, download our AI Retention Plan & Calculator.
Are you looking to hire someone to manage AI Operations?
While writing this article, a friend sent me a job description for a "Head of AI Product Management" at a major online streaming company. It pays $900k/year. Hmmm… A recent Sturdy poll on LinkedIn concluded that over 50% of respondents already have someone in an AI Operations position, with 25% looking to fill the role in 2024. If you're interested in hiring a Director of AI Operations, feel free to copy and paste the job description below:
Director of AI Operations
Role Overview: The Director of AI Operations will be the key player responsible for developing, implementing, and managing AI strategies. This multifaceted role requires a deep understanding of AI vendors, data privacy, and a visionary mindset to leverage AI's potential effectively.
Key Responsibilities:
- The Director of AI Operations collaborates with various departments to identify areas where AI can be integrated to capture risk and opportunities or to improve efficiency. They create a comprehensive AI strategy aligned with the business's overall objectives.
- Identify areas of improvement throughout the business and implement AI workflows.
- The Director ensures that data is collected, cleaned, and stored securely. This person should be able to deep-dive on a vendor's privacy and regulatory compliance.
- The Director tracks key performance indicators to measure the impact of AI applications and makes adjustments as needed.
- Once the AI strategy is in place, the Director oversees the implementation of AI projects, ensuring seamless integration with existing systems and addressing any technical challenges that arise.
- Stay updated with the latest generative and synthesis AI trends and technologies to ensure the company stays ahead of the curve.
- Develop AI strategy and roadmap and act as the foremost thought leader on ethical considerations such as how AI systems handle sensitive data.
- Train colleagues and other teams on AI workflows and best practices for their departments.
Requirements:
- Bachelor’s degree in Computer Science, Data Science, or a related field. Master’s degree preferred.
- Proven experience identifying areas where AI can improve business performance and executing those strategies.
- Strong analytical and problem-solving skills.
- Ability to work collaboratively with diverse teams.
- Excellent communication skills, both written and verbal.
As always, thanks for reading. Feel free to reach out to us to talk further.
Steve


