Surveys show that more than 80% of family businesses are now adopting AI in some form—often to improve efficiency, risk management, and customer relationships. At the same time, over half of family business leaders say inadequate technology adoption is a moderate or high risk to their growth in the next 12–24 months. Boards and lenders are asking, “What’s your AI strategy?” and “How will it create value?”
In many of the family-owned and privately held companies I work with, that pressure is landing in a way that leaves leadership teams both intrigued and, at times, paralyzed. Executives are using AI to tighten emails, summarize meetings and reports, improve copy, and accelerate research. But they are far less certain about how, or whether, to use it in the work that actually determines their future. That hesitation is understandable. Across the broader market, AI adoption is high, pilots are everywhere, and yet only a small minority of organizations appear to be generating measurable value at scale.
That uncertainty is showing up most clearly in areas like:
- Strategic planning
- Governance and board effectiveness
- Executive coaching and team performance
- Succession, culture, and change leadership
My goal here is simple: to offer family business owners, CEOs, boards, and executive leadership teams a clear, practical way to think about AI in these critical areas—one that embraces the tool, grounds its use in real business value, and refuses to outsource the work only humans can do.
I do not claim to be an expert, but I have been using Perplexity Pro and Claude extensively since attending a generative AI conference at Northwestern University over two years ago. It has transformed my business model and workflow. What follows is based on my own experience, extensive research, and real-world work with clients.
What AI is good at—and where it breaks down
The latest generation of AI tools is legitimately impressive. Used well, they can expand a team’s capacity overnight. Used poorly, they can create noise, erode trust, and slow real progress.
AI reliably excels at:
✓ Summarizing and synthesizing information
It can condense long documents, pull out themes, and present alternative framings of the same content. That’s valuable when leaders are buried in data and starved for time.
✓ Drafting and redrafting communications
AI can help sharpen language, adjust tone, and improve clarity in emails, board materials, and strategy documents.
✓ Automating repetitive, low-value work
It’s well-suited to tasks like data clean-up, basic analysis, formatting, and first-pass documentation—the kind of work that eats time but doesn’t require deep judgment.
✓ Accelerating idea generation and scenario exploration
Leaders can quickly explore “what if” scenarios, test assumptions, and generate options that can be pressure-tested.
✓ Research & analysis*
Agentic AI can collect information from a vast range of sources at lightning speed. Research and analysis that used to take days now takes minutes. Just make sure you check the sources and ensure the agent isn’t hallucinating when it suggests conclusions, implications, and indicated actions.
What many leaders are discovering, however, is that the problem is not simply whether AI can produce a decent answer. The bigger issue is whether it can learn in context. A growing body of implementation research suggests that many enterprise AI tools fail not because leaders lack interest, but because the tools do not adapt well to real workflows, retain useful feedback, or improve over time.
When AI is used primarily to remove toil and support higher-quality human thinking, it can reduce burnout, create capacity, and free more time for meaningful work. But the broader enterprise evidence also suggests that value tends to appear only when AI is attached to specific workflows and clear outcomes, not simply when more people have access to a tool.
The problems arise when we ask AI to take on work that is fundamentally human:
🅧 Oversight overload and “AI brain fry”
When people juggle multiple tools and constantly supervise AI agents, they often report mental fog, slower decisions, and more errors—a phenomenon some researchers now call “AI brain fry.”
🅧 Damaged work relationships and trust
When AI is used as an intermediary in sensitive interpersonal situations, colleagues begin to question authenticity and effort: “Am I dealing with you or your bot?” That undermines trust—the very thing high-performing teams depend on.
🅧 Illusion of progress without real value
Many companies are generating more AI output without corresponding improvement in strategic clarity, execution, or culture. Independent implementation research suggests that only a small single-digit percentage of enterprise AI pilots mature into scaled, workflow-embedded systems with measurable P&L impact. In other words, AI can make it much easier to produce words, slides, summaries, and analyses. It does not, by itself, create better judgment, stronger teams, deeper relationships, or a healthier culture.
One of the clearest patterns in current AI research is the gap between experimentation and real business impact.
Many organizations are piloting tools enthusiastically, but very few are embedding them deeply enough in workflows to produce sustained operational or financial gains.
For boards and ownership groups, that means the right question is not “Do we have AI?” but “How, specifically, can AI improve how we work, make decisions, or create value?”
Why “AI-only” strategy and coaching is a dangerous illusion
Given AI’s capabilities, it’s natural for leaders to ask, “If AI can draft a strategy document or coaching plan in minutes, why pay for outside help?”
There are three reasons this logic is flawed, especially in family-owned enterprises.
1. Strategy is a series of hard choices rooted in context, debate, and tradeoffs
AI can strengthen analytics, surface patterns in your data, and echo industry playbooks. It cannot:
- Carry the weight of choosing between paths when you can’t have all of them
- Understand the family history, team dynamics, culture, governance, and unspoken tensions in the room
- Take accountability for tradeoffs that will define jobs, relationships, and legacy
As a McKinsey colleague put it to me recently, the real promise of agentic AI is to let humans operate above the loop—using agents to execute and analyze, while leaders provide direction and judgment. AI can widen your field of view; it cannot decide who you are or what you stand for as an enterprise. As Simon Sinek might put it, AI cannot determine your why.
This is also where many organizations confuse fast output with durable capability. A tool can generate a respectable strategy draft in minutes and still be poorly suited to the actual work of strategic choice if it cannot absorb the nuances of your operating model, learn from prior decisions, or adapt to the way your team works over time.
2. Coaching and team development are about trust and courage, not just insight
AI can help script a difficult conversation or offer language for feedback. Used as a private thinking partner, that can be helpful.
But research on AI in relationships warns that when people outsource too much interpersonal work to machines, they lose the very friction that builds trust and capability. People start questioning whether effort, empathy, and vulnerability are real.
Effective executive coaching and team development rely on:
- Self-knowledge and the emotional intelligence needed to cultivate healthy relationships
- Vulnerability and psychological safety built over time through messy human interactions
- The courage to surface the hard truths others are afraid to bring up—in the room, not just in a prompt
- Real-time sensing of what’s not being said—the silence, side glances, and tension that no model will fully capture
Those are relational muscles. AI can inform them; it cannot exercise them on your behalf, and it certainly cannot interpret them.
The enterprise research is useful here as well. Many users report liking AI for early drafts, brainstorming, and low-stakes support, yet still prefer humans for high-stakes, context-rich work that requires memory, judgment, and adaptation over time. That preference is not a rejection of AI. It is a reminder that trust-intensive work requires more than fluent language generation.
3. Leadership teams need more constructive contention, not less
High-performing executive teams do not avoid conflict; they practice constructive contention—challenging each other’s assumptions, surfacing risk early, and arguing issues, not character.
When leaders rely on AI to smooth every rough edge, they often remove the productive friction and debate that leads to better strategy, stronger alignment, and heightened commitment. Studies suggest that over-scripting interactions with AI can prevent the awkward, vulnerable conversations where real learning and trust develop.
In my experience, teams don’t need a more polished way to avoid hard conversations. They need a structured, safe way to have more of them!
That is human work.
Practical suggestions for owners, boards, and executive leadership teams
Rather than debating “AI: yes or no?” a better question is, “Where does AI belong in our work, and where does it not?”
Here is a simple lens I suggest to owners, boards, and executive teams—particularly in privately held companies:
1. Start by naming the real problem
Before you talk tools, ask:
- Are we trying to solve a technical problem (too much information, slow analysis, repetitive tasks)?
- Or a human problem (misalignment, low trust, conflict avoidance, unclear accountability)?
This matters because many organizations are investing heavily in AI while still struggling to define the workflow, owner, and business outcome they are trying to improve. When that happens, the result is usually more experimentation than transformation. Starting with the real problem—whether it is technical, relational, or both—immediately improves the odds of using AI well.
2. Use AI aggressively where it removes drudgery*
In strategic planning, board work, and leadership development, appropriate AI use includes:
- Market intelligence and competitive analysis
- Summarizing pre-read materials and stakeholder input so leaders arrive better prepared
- Drafting first-pass documents—strategy options, principles, charters—that the team can debate and refine
- Automating meeting summaries, repetitive reporting, dashboards, and status updates so managers can spend more time leading and coaching
In family enterprises, this is particularly valuable because it frees owners, next-generation successors, stakeholders, and key executives to spend more time on the human dimensions of governance, strategy, culture, and succession that only they can handle.
One additional lesson from current implementation research is that the most reliable early ROI often comes from targeted workflow improvement rather than broad, enterprise-wide ambition. In practice, that means narrow, clearly bounded use cases—pre-read synthesis, document review, meeting follow-up, knowledge retrieval, routine analysis—often outperform grand promises about “transforming the enterprise.” That may happen eventually, but a crawl, walk, run approach is wiser.
3. Learn from shadow use before you formalize it
In many companies, some of the most useful AI adoption is already happening informally. Employees are using personal tools to summarize information, speed up writing, and automate parts of their work—even when official enterprise programs are still stuck in pilot mode. Rather than treating all of that behavior purely as noncompliance, leadership teams should study it carefully. It often reveals where AI is truly useful, where the guardrails are weak, and which “power users” can help shape a more disciplined development program and rollout.
4. Keep AI out of the moments that define trust, culture, and legacy
Draw a line around:
- Family, board, and leadership conversations about purpose, values, and legacy
- Debating and aligning on a longer-term vision of success and crafting strategies to achieve it
- Executive sessions where performance, succession, and role clarity are discussed candidly
- Coaching conversations that require psychological safety, humility, and real-time sensing
- Cultivating a leadership team that leverages each other’s strengths and compensates for each other’s weaknesses to create a sum greater than its parts
You can use AI beforehand to prepare your thinking; you should not ask it to run these conversations for you.
5. Align expectations between executives, middle management, and investors
One of the clearest findings in current research is the gap between executive enthusiasm for AI and the more cautious, burdened reality of middle managers. Executives often see AI as strategic leverage, while managers see the integration headaches and extra oversight, and they worry about losing their jobs.
Family businesses and PE-backed companies add another layer: investors and boards increasingly expect an AI story but do not always appreciate the human capacity required to deliver it.
Closing those gaps requires:
- Co-creating AI priorities and guardrails with middle management instead of handing them a mandate with no definition of the problem to be solved or what success looks like
- Being explicit with boards and investors about what AI will and will not do in the next 12–24 months—and how you will measure value beyond tool usage
- Being clear about what we do not know. When it comes to generative AI, none of us has a crystal ball, and the game is changing daily. Be careful about making longer-term promises such as “this will not result in headcount reductions.”
It is also worth being candid about where early ROI is materializing. Current enterprise evidence suggests that some of the clearest returns are coming not from the flashiest front-end use cases, but from targeted operational and back-office improvements, including document processing, workflow acceleration, and reduced reliance on outsourced support. That is useful for boards and investors because it shifts the conversation from vague AI aspiration to specific economic value creation.
6. Be disciplined about build-versus-buy decisions
Many leadership teams assume that building internally is the most strategic path because it appears to offer more control. The evidence so far suggests otherwise. In many organizations, externally partnered solutions that are tailored to a real workflow are reaching deployment more often than purely internal builds. The reason is not mysterious. Internal teams often underestimate the amount of ongoing learning, integration, change management, and workflow adaptation required to make AI useful in practice.
For privately held and family-owned companies, the better question is usually not “Should we build our own AI?” but “Where do we need a trusted partner who can help us solve a specific problem, integrate into how we work, and improve over time?”
Where this leaves you as a leader
For owners and leaders of privately held enterprises, the AI era does not lower the bar on leadership; it raises it.
You will increasingly be expected to:
- Use AI fluently enough to remove low-value tasks from your own workload and your organization’s workflows
- Lead with a clear, credible view of where AI belongs in your strategy and where it does not
- Invest in the “brain capital” of your enterprise—the judgment, trust, and governance maturity that AI cannot provide but can amplify
- Distinguish between tools that merely generate output and systems that can actually learn, adapt, and become embedded in the way your organization works
- Choose external partners that will help you accelerate progress
In my own endeavors with CEOs and their leadership teams, AI is now part of how we work—but never the point of the work. It helps us get to the right conversations faster and be better prepared. It can remove toil, accelerate and deepen research, ensure that conclusions are more fact-based, and help polish communications. But it does not replace the constructive conversations, judgment, trust, and accountability required to optimize performance and steward a business toward sustained profitable growth.
That, I would argue, is the balance worth striving for.
*Although these are commonly successful use cases, AI agents have been known to hallucinate (provide false or fictitious information). At this point (until models improve), it is always recommended that a human double-check AI output for accuracy.
