Problem-First AI Strategy: Automation That Delivers Business Value | Scott Litman
In this episode, Brendon Dennewill sits down with Scott Litman, SVP at Capacity and founder of Lucy AI (acquired by Capacity in 2024), to explore the reality behind AI adoption in enterprise organizations. Scott brings over 25 years of digital transformation experience and shares insights from guiding Lucy's integration into Capacity's AI platform, which now serves over 2,500 companies.
The conversation cuts through AI hype to reveal why 95% of enterprise AI projects haven't reached production and what separates successful AI implementations from failed proof-of-concepts.
Scott emphasizes that while individual users have incredible AI superpowers at their fingertips, enterprise adoption faces unique challenges around data quality, security, and scale. He advocates for treating AI as automation rather than magic, focusing on solving specific business problems through strategic problem selection and rigorous testing.
This episode is essential for RevOps professionals, enterprise leaders, and B2B growth teams who need practical guidance on moving from AI experimentation to production-ready solutions that deliver measurable business outcomes.
What You'll Learn
- Why enterprise AI adoption lags: Individuals leverage vast public datasets, while enterprises work with limited, sensitive data that demands unique methods and strict security.
- POC vs. production: Demos are easy; scaling to enterprise-grade solutions requires testing, governance, and discipline.
- Choosing the right problems: Target inefficient, error-prone, or time-consuming tasks where automation can turn days into minutes/hours.
- Framework for adoption: Begin with clear policies, targeted training, and a focus on automation over transformation to set realistic expectations.
- Driving user adoption: Offer persona-based training that solves real pain points and ensures early wins that delight users.
- Automation over hype: Treat AI like prior digital transformations—speed of execution matters more than novelty.
- Training and change management: Bridge skill gaps and standardize success through structured policies, approved tools, and clear use case guidelines.
Resources Mentioned
- EOS (Entrepreneurial Operating System)
- Capacity AI Platform
- HubSpot Inbound 2025
- MIT Enterprise AI Report
- Microsoft Azure OpenAI Service
- Anthropic Claude
About the Guest
Scott Litman, SVP at Capacity, has extensive entrepreneurial experience, having taken companies from startup through funding, growth, and exit five times. His honors and awards include E&Y Entrepreneur of the Year, Business Journal Titan of Technology, University of Minnesota Outstanding Achievement, Smart Business Dealmakers Hall of Fame, Twin Cities Business 100 People to Know / Watch (three times), and Minnesota Business The (Real) Power 50. Along with Dan Mallin, he also founded the Minnesota Cup, the largest statewide business plan competition in the US.
Episode transcript
Introducing Scott Litman and the Journey from Lucy AI to Capacity
Brendon Dennewill: Today I'm joined by Scott Litman, SVP at Capacity and the founder of Lucy AI, which was acquired by Capacity in 2024. He guided Lucy's integration into Capacity's AI platform, now serving over 2,500 companies, and currently leads product innovation and integration strategies for enterprise-scale AI solutions. He's a longstanding AI advocate known for helping enterprises move past AI hype toward measurable business outcomes.
Welcome back to the RevOps Champions podcast, Scott.
Scott Litman: Thank you for having me again.
Brendon Dennewill: Scott, a lot has happened since you were on the show a few years ago. You built an AI business, Lucy, which was then acquired by Capacity last year. That seems super exciting for both companies as they scale together. What key developments have influenced your business and leadership approach in the last few years?
Scott Litman: On the business and leadership side, one of the things I'd highlight is that my business partner Dan and I have been proponents of EOS, the Entrepreneurial Operating System, and the Traction framework. We've been using it since 2008. We started with a company then called Reside, which later joined forces to become Magnet 360, and it was a requirement from the start: we would be a Traction company.
We've become more and more immersed in it over time. When I'm on the board of a company or serving as an advisor, one of the requirements is that we're going to be an EOS company. I feel EOS is critical to help management teams work together, leverage all the talents of the team, and make sure everybody's rowing in the same direction: that meetings result in clear actions, people are accountable to those actions, and they move forward. So I'm a huge fan of EOS and Traction, and it's something that's core to all the businesses I've worked on.
Brendon Dennewill: So essentially, from a leadership perspective, nothing new. You're using the same EOS framework you've been using for a couple of decades.
Scott Litman: Two decades, yes. We get better and better at it and have been constantly evolving, but that has been core to how we run companies.
Leading Through Change: The Principle of Constant Evolution
Brendon Dennewill: As we head into Q4 2025, you're at the epicenter of what's essentially disrupting every business on the planet: AI. One of the things I often remind myself is that while change is inevitable and we have to be open to it, we also have to ask what stays the same. Do you have a principle or a mantra you lean on during big decisions, uncertainty, or the next phase of growth?
Scott Litman: A core value of the last three companies I've led has been constant evolution. We've really baked into the DNA of everything we operate the idea that there is going to be constant change. The pace of change is much faster today than it was three companies ago, or even two companies ago at Magnet 360. But we always had the idea that if we sit still, the world is going to pass us by. If we're not constantly innovating and looking for what's next, somebody else will get there first. And whatever differentiates us today will lose that differentiation in six to twelve months. So the idea is that we are always moving, always evolving, and we know the world is going to keep changing.
Brendon Dennewill: Absolutely. And with that, change is something we as humans, both leaders and the teams we work with, have to navigate together. So as you're preparing your teams and rethinking processes to get real business value from AI, how do you think about that?
Personal AI vs. Enterprise AI: Why the Benefits Are Not Equal
Scott Litman: The first thing to say is that when we say "AI," we have all been living with it for most of our adult lifetimes. I was doing a keynote at a global event for a Fortune 500 company about a year and a half ago. I asked the audience: show of hands, how long have you been working with AI? Six months? Twelve months? By the time I got to two years, most hands had already gone up. But what they were really hearing wasn't AI broadly. They were hearing generative AI.
Generative AI has been with us for almost three years now. It goes back even further: OpenAI started championing it in 2018, but it didn't land on the radar of 99% of humanity until November 2022. So when people talk about "AI," they mean generative AI. For practitioners of RevOps specifically, we have truly been using AI for a long time, whether it's driving newsfeeds, optimizing search in Google, or the tools inside HubSpot and other marketing automation platforms that optimize campaigns.
One of the key things is understanding that generative AI is a specific lens within the broader world of AI. The other important point is that the benefits of generative AI are very unequal. Individuals have incredible superpowers through their personal use of these tools, whether you use Grok, Claude, ChatGPT, Gemini, or whatever LLM you prefer. It is unbelievable how much capability is packed into free or $20-a-month tools. And yet inside a company, particularly for companies with 50 employees or more, the benefits are not the same.
Brendon Dennewill: Can you say more about that?
Scott Litman: The public tools individuals use are trained on billions of data points: a massive firehose of public content, plus a lot of copyrighted material that probably shouldn't have been included, which will get settled through legal processes. But the point is, those public tools have massive amounts of data. By comparison, even the biggest companies have a thimbleful.
Targets, Amazons, and Walmarts don't have anything close to the volume of data that exists in the public domain. That creates real challenges with how AI tools are deployed in the enterprise. I saw an MIT report from August, only a month old at the time, which said that since November 2022, only 5% of enterprise proofs of concept or development projects have reached production. IDC has somewhat different numbers, closer to 80% not moving forward and 15 to 20% that have. But the pattern is clear: a lot of money has been spent in the enterprise on these tools, and the struggle to find the right use case, establish safe and secure data access, ensure data quality, and move from POC to production has been very challenging.
I was at an event with leadership from Anthropic and OpenAI about four or five months ago, put on by Gene Munster and his team at Loup Ventures. Even the leadership of those companies was saying: we still haven't fully found our footing in the enterprise. We're still seeing all these POCs, all this testing, and it's getting better every day. But there's no killer story yet about transforming the lives of 50,000 employees at scale.
That's not to say AI has failed in the enterprise. It's just a very different experience than what happens for the individual. Individuals use these tools every day across a wide range of use cases. At the enterprise level, we have to be very disciplined problem selectors to determine what problems should be automated, can be automated at scale, and can be done successfully.
Defining "Enterprise" and the Data Quality Gap
Brendon Dennewill: When you talk about enterprise, what size businesses are you referring to?
Scott Litman: I started by saying 50 to 50,000 employees, and what I wanted to convey is that this goes beyond personal tools. Once you get to 50 people, you might have 50 individuals each using their LLM of choice ad hoc, but as a company, have you built a solution that uses AI and impacts all employees? Whether it's 50 employees or 50,000, that challenge is widespread.
When I say "enterprise," I'm thinking more toward 500 employees and above, because that's where most of the customers I've worked with at Lucy and now at Capacity sit. I mentioned larger companies specifically to make the point that even the biggest businesses, regardless of budget, are struggling to deploy this stuff in the right way.
Brendon Dennewill: As you were alluding to, those larger companies have the benefit of proprietary data to work with, which many smaller companies don't have. But in some ways, smaller companies can leverage public domain data. Does that level the playing field?
Scott Litman: That's an interesting take, and in a way it does. But here's the thing: enterprise customers often complain that employees are so accustomed to using ChatGPT or Claude that they spurn the tools the company officially supports.
Take a hypothetical. Imagine an employee at General Mills (not a customer of ours) using ChatGPT for research. Are they going to get results? Absolutely. But are those results based on quality industry information? Maybe. Maybe the results are being influenced by a blogger, a Reddit post, or competitor PR. Compare that to an employee at General Mills doing research on their own products and audiences, pulling from research paid for through third-party vendors or conducted internally: that is a quality, accountable source of data. We did this study with this vendor six months ago. It's not from the public domain.
There is a real difference in data quality. The challenge is that the person using ChatGPT doesn't think about it. They ask the question, get an answer in two seconds, check the box, and move on. Is it factual? Is it accurate? Who knows? Versus working with enterprise tools where you are drawing from trusted sources. But the enterprise tool experience is often not as fast, slick, or seemingly comprehensive as the public tools. So it creates this strange split: people are used to one user experience in their personal lives and a very different one in their corporate lives.
How to Get Started: Policies, Training, and Use Case Selection
Brendon Dennewill: Let's talk about that. Various surveys of business leaders show that over 95% agree the future of AI is unquestionable, but fewer than 5% are actively doing something about it. For companies of all sizes trying to plan for 2026, what advice do you have for getting started with AI?
Scott Litman: The first thing I'd say is that this is a wave of digital transformation not unlike ones you and I have been through before: waves around mobile, e-commerce, the web becoming personalized. In every one of these waves, there is a moment of intense hype, a feeling that everybody's doing it but you. We are in that right now with AI.
There will be winners and losers. Businesses that figure out how to adopt this correctly and faster than others will gain short-term advantages over competitors that don't. But those wins won't be permanent: competition will eventually catch up, whether it takes six months, twelve months, or two years. So try to be a winner rather than a laggard, and we're in a good window for this right now as the technology matures.
There's also meaningful low-hanging fruit. When I do strategy for businesses of almost any size, the first step is making sure AI policies and procedures are in place. What are the approved tools? How do you use them? Which use cases are permitted? What are the prohibitions around data? Can company data be used, and if so, how? Should you license directly from Microsoft, OpenAI, Anthropic, or AWS to get an LLM that is yours alone, where your information can't be shared to the broader training pool? There are also industry-specific tools that wrap foundational AI in ways suitable for regulated industries like financial services. Getting policies and procedures in place is one of the least expensive things a company can do to start seeing consistent benefit.
The second thing is training. If you've ever watched someone in their twenties use these tools compared to someone in their fifties, there's a noticeable difference. I consider myself reasonably skilled at this, but I've watched my son, with very limited business experience, spinning the Rubik's cube at light speed while I'm slowly chunking it around by comparison. How do you ensure that everyone in your organization understands how these tools work and how to use them well? The skills are uneven, and sometimes it's generational. Training is a very inexpensive way to make sure the organization is getting full benefit.
After that comes use case selection. What are the most important use cases driven by where you're inefficient, where you're slow, where you have problems that can be solved? At the end of the day, this isn't just digital transformation: it's automation. What task has lots of hands on it and is inefficient? What task is prone to error? What task is taking days or weeks that we could change? If we can turn days and weeks into minutes and hours, that's where you focus your energy. And if you're really good at identifying the right problem and building a purposeful solution, you can be very successful, just like in prior waves of technology adoption.
The HubSpot INBOUND Playbook and the Engineering Mindset
Brendon Dennewill: Everything we've been talking about connects to something that's been top of mind since coming back from HubSpot's INBOUND event in San Francisco. They did an incredible job of showing what is happening and what will happen. HubSpot's CEO and senior leadership came out on stage and said the Inbound playbook they built the business around for 19 years doesn't work anymore. They introduced what they're calling the "Loop" marketing playbook, built around an infinity loop concept. And what they've been doing for the last two years is rolling out AI-related products and agents with exactly that engineering mindset: find where you have problems, and fix them with automation.
I was updating my email signature this morning and noticed a Mark Twain quote I put at the bottom last year: "The secret of making progress is to get started." It still holds. That's the same idea. You talked about adoption, training, policies, and procedures as relatively easy places to begin. Start with problems you have, solve those, and learn through the process.
I want to dig into adoption and training a little more, because I completely agree. Our business is similar to what you were doing at Magnet in that we help clients adopt CRM technologies as those technologies continue to evolve. Whether it's Salesforce or HubSpot, they're both building more capability every week. But adoption and training are parts of change management, and change management has never been more necessary in business than it is today. Tell me more about how you think about making adoption and training actually work.
User Adoption: Making the First Experience a Success
Scott Litman: That's a great question. My first job was at Microsoft, a long time ago. Something that stuck with me back then was the observation that we put all these new features out and nobody uses more than 5% of them. It's true. We all cut, copy, and paste; we all type; we all bold and underline. There are maybe a thousand things Word can do, and most of us use twenty of them plus a couple of tricks we picked up along the way.
For years at Magnet 360, we were a Platinum Salesforce partner. Salesforce was constantly acquiring companies, and every year at Dreamforce they'd announce grand proclamations about everything their platform would do. And yet at the end of the day, 95% of Salesforce usage is Salesforce CRM. Everything else is a sprinkling here and there for specific use cases. The tech adoption problem is industry-wide.
That said, something we do at Capacity, and brought over from Lucy, is a very intentional approach to user adoption. We know we can't teach people to do 100% of everything our platform can do, so we focus on one core thing.
Here's how it works using Capacity as the example. We're all about knowledge management: connecting to data within an enterprise. A customer like Pepsi, one of our largest and longest-tenured customers, has us connected to 800 SharePoint sites and over a hundred different tools, vendors, and systems. When someone enters that environment, there is a tremendous amount available to them.
To guide user adoption, we start with a personal exercise: a day-in-the-life interview for each role. If someone is a marketer, a market researcher, or in sales, what are the things they do? How do they do it? What are their pain points? Then when we get into training and onboarding, we speak directly to those pain points. We tell them: we have heard through interviews with your colleagues that this task takes you days or weeks. We're going to show you how to do it in minutes. And then we do onboarding that is not generic "here's how Capacity works," but very specific training around each persona: their language, their problems, their pain points.
We recognize that audiences are fickle. If the tool doesn't immediately work for them, they'll tune out and write it off forever. Think about how many times you've downloaded an app to your iPhone or Android device, tapped it once, and 30 seconds later got defeated and never opened it again. We all use only six to eight apps daily. The other two hundred never got their daily usage. So one of the most critical things in onboarding is making sure the very first interaction is a successful one.
If we know someone is in product research, we load up an example tailored to product research. We give them a couple of softballs: here's exactly what to ask the system, here's what to do. We want them to be amazed and delighted. We worked with one of the biggest ad agencies on one of the biggest auto accounts, and on the very first day of onboarding for the system champion, he found something from a Super Bowl commercial from the previous year. He said, "I have been looking for this for a week and I found it in two seconds." That moment of amazement and delight made him a proponent of the system forever. That's what we need to create. You train the core use case, you ensure success in the first interactions, and then people can branch off and learn more from there.
Managing Tool Overwhelm: Arrows in the Quiver, Clubs in the Bag
Brendon Dennewill: Yes, I love that. Scott, I want to come back to something you mentioned at the top. We've been hearing this a lot since returning from INBOUND, where HubSpot made over 200 announcements. If you're not thinking about it in the right way, that can become overwhelming very quickly. We've been a HubSpot partner for 12 years, and we hear this constantly. When people say to us, "We're only using 20% of what HubSpot can do," our response is: that's okay, as long as it's solving the business problems you have.
Whether it's HubSpot, Capacity, or any other technology platform, they're adding arrows to the quiver you're already paying for. Sometimes those arrows are free, sometimes there's an added cost. You only pull out a particular arrow when you need it for a specific business problem. Don't feel like you have to use them all. I'm telling myself this as much as anyone, because I suffer from shiny object syndrome as much as the next person.
Scott Litman: I like to go with the golf bag analogy, even though I'm a casual golfer at best. Every club has a specific use, even if I can't always execute it correctly. The challenge with a system like HubSpot is that the golf bag now has 200 clubs. I don't envy the challenge you face as advisors in knowing which clubs to recommend.
Brendon Dennewill: Absolutely. That's part of why businesses partner with us: we know how to match the business problem to the tools available, build and integrate those tools with everything else they're using, and then train the specific people on how those tools are going to help them day-to-day.
Scott Litman: 100 percent.
Brendon Dennewill: And honestly, the golf bag analogy is probably more accessible than quivers and arrows for most people. Though where I grew up in southern Africa, the Khoisan hunters might have had only four or five arrows in a quiver, and each one had a different purpose depending on the job. But the golf bag gets the point across more cleanly. So where does all of this leave us?
Planning for 2026: AI Budgets, Strategic Leadership, and the "E-Business" Lesson
Brendon Dennewill: I think a lot of this comes down to overcoming the overwhelm and developing the strategic leadership required to navigate the next three, six, nine, and twelve months, until this technology normalizes the way other waves of technology have.
Scott Litman: Remember when people used to put an "e" in front of "business" because it was "e-business"? We're going to go through the same thing with AI. Businesses are trying to signal to investors that they're using AI, thinking it'll impact their valuation or make them more attractive. Eventually it's just going to be business. AI will be part of every tech stack. It will be routine, not a differentiator, just as using technology tools better than your competition has always been an advantage. It's simply part of the set of tools businesses have.
Brendon Dennewill: Right. Let me ask the question from a different angle. Based on everything we've discussed, knowing the challenges businesses of all sizes face as they plan for 2026: we've been asked a lot this year about demand from our clients for help with their AI adoption journey. Our honest answer is that demand hasn't been as high as you'd expect, given that you can't go anywhere without reading about AI.
It clicked for me a month or two ago. I think when companies were doing their planning and budgeting for 2025 last year, 95% of them did not include an AI budget. Which is why we're three quarters of the way through 2025 and there's no roadmap, no strategy, no training program, because no one allocated a budget in the first place. So this is somewhat of a public service announcement: if you are currently in planning for 2026, make sure you are doing something strategic about budgeting for how you incorporate AI into your business.
Scott Litman: Budgeting is certainly part of it, and I agree: make sure there's a 2026 budget for this. The other thing is that in the first wave, going back to Q1 2023 when businesses started asking "what are we going to do about AI?", particularly inside mid-sized and larger companies, IT ran around carrying a hammer. Microsoft and OpenAI had given them a hammer, and they pounded nails. They would say: look at this capability, look at this neat thing I can do. But did it solve an actual business problem? Business leaders didn't know what to ask for. And so 2023 was a lot of that.
One of the things we see today is that businesses are actively looking for where expertise lives. The job of the professional services partner, the way you are with HubSpot, is to understand that there are 200 new arrows mixed in with hundreds of prior ones. As a customer on your own, you have no idea which to pick up. Even if it looks easy, like you could just grab a club and swing, you need to trust a professional who truly understands which tool fits which problem.
I'll say that I've been fortunate to have over 25 years in digital transformation. Lucy was started six years before the world was thinking about AI for this kind of enterprise data retrieval. We've been at it a while, integrated with a lot of systems, and worked with some very complex enterprise security environments. When someone has a conversation with a member of my team in an area where we have deep expertise, it's closer to going to the Mayo Clinic, where you're working with a doctor who has done a thousand of the hard procedures. It's very different from someone who discovered this two and a half years ago and called themselves an expert a year later. You need to combine true expertise in industries, technologies, and tools with the latest AI capabilities and put it all together.
The Microsoft-OpenAI Dynamic and Choosing the Right Experts
Brendon Dennewill: Which of course is part of the dilemma you described earlier. In Q1 2023, mid-market companies were asking their IT leaders: what do we do about AI? And the answer was essentially: here are our two options, Microsoft or ChatGPT.
Scott Litman: Over the last two years it's been Microsoft and OpenAI, and in some cases AWS. And by the way, Microsoft is just OpenAI under Microsoft's brand. Microsoft's Azure OpenAI Service was built on a $10 billion investment into OpenAI. That relationship is actually changing now, but when people say "I'm going to use Microsoft for AI," they're using OpenAI under the Microsoft umbrella and integrated into the Microsoft tech stack.
Brendon Dennewill: Good context. And where I was going with that is: all these IT leaders in 2023 were essentially making decisions based on what felt safest, not necessarily what was right. There's a saying you've probably heard many times: no one ever lost their job for choosing IBM.
Scott Litman: And now no one loses their job for choosing Microsoft. If you pick IBM today, you might lose your job. [laughs]
Brendon Dennewill: Exactly. But the point stands: choosing what feels politically safe and choosing what is strategically right are often different things. The leaders who told the person with the hammer "you're the expert, we trust you" were optimizing for job security, not business outcomes.
Scott Litman: It's a very safe decision to work with Microsoft.
Final Advice: Find the Problem, Test Rigorously, and Go from POC to Production
Brendon Dennewill: Scott, as we wrap up, I'd like to ask you to leave our listeners with one or two last pieces of advice as they plan for 2026.
Scott Litman: The theme of our conversation today is that this is all here to stay. It's a permanent change and one that's rapidly evolving. If you're a business leader, you don't have to have FOMO and feel like everybody else is already doing it. Yes, it feels that way because of the widespread personal usage we discussed. But businesses are still finding those big use-case successes. So look honestly at what your biggest problems are and where you could automate.
Actually, I think it becomes less intimidating when you frame it as automation rather than AI, because we've been using automation our entire working lives. AI is just a form of automation. Find the problem that needs to be automated and improved, and focus your energy there.
The other thing I'll say is: don't be deceived by the POC. It has never been easier to build a proof of concept that looks like it can do the job. But it is hard to go from POC to production. Before you unleash your new tool on your audience, test it rigorously across all uses and use cases. Make sure you've gone from the parlor trick of a POC that looks great to something that actually solves the problem and works every day.
Brendon Dennewill: And you're speaking specifically to companies trying to build their own internal AI engine using various tools and technologies, starting with something they can build on and control over time.
Scott Litman: Yes. A business can create a POC of an idea in minutes or hours, and it looks like something. It's like in the old days when someone would see a website mockup on a screen and the boss would walk in and say, "Great, are we ready to launch Monday?" And you'd have to explain: we haven't built anything yet. We just finished the mockup approval. We still have 90 days of development ahead.
We see the exact same dynamic with POCs today. People use incredibly capable tools like Lovable, they do vibe coding, they build something in an afternoon, and they go into the meeting looking like a star: full navigation, features, you can type a question and something happens. But if you then unleash it on real production data and real usage, it falls apart. It goes from parlor trick to a path to nowhere. You have to apply the same discipline around security, scalability, data quality, and testing that you would with any production system. It's never been easier to make something look like something, and never been harder to get it actually to production.
Brendon Dennewill: That's a really interesting point, especially for entrepreneurial businesses still run by their founders. One thing we know about entrepreneurs is that we don't have unrealistic goals: we just have unrealistic timelines. If you can see something that looks like the finished product in an afternoon, it's very easy to ask why it isn't done yet. Scott, thanks so much for being on the show. It was great to catch up, and we'll be watching your continued success at Capacity and everything else you do here in the Twin Cities.
Scott Litman: It's great to see you. Thanks for having me, and I appreciate the time.
Brendon Dennewill: Thanks, Scott.