The 18-Month Gap: Why AI Governance is an Information Flow Problem
By Cora-Lynn Lynds | February 2026
Something shifted at Davos this year. The World Economic Forum released data showing that 87% of organizational leaders now identify AI-related vulnerabilities as their fastest-growing cyber risk.
But it's not the number that caught my attention; it's what changed underneath it. In 2025, nearly half of executives worried about adversarial AI (attackers using artificial intelligence against them). By 2026, that fear dropped to 29%. What replaced it? Data leaks from their own GenAI tools, now the top concern at 34%.
The threat model flipped. Organizations aren't primarily worried about external actors weaponizing AI anymore. They're concerned about information flowing out through the AI systems they deployed themselves.
Shadow AI is often framed as the governance problem: employees going rogue, data slipping into tools IT never approved. And it is a real concern! But it's not the only concern.
This data points to something running alongside it. These leaks aren't coming from shadow AI. They're coming from sanctioned tools, officially deployed, integrated into enterprise systems. The governance gap is both what people are using without permission and what has been approved without fully understanding how information flows through it.
Both can be true at once. And organizations that only focus on one will miss the other.
Four articles and reports, from the WEF, Kroll, NYU's Center on International Cooperation (CIC), and the IAPP, bear this out. Read together through the lens of information flow rather than technical security, they show the following:
The Recognition-Response Gap
The WEF data reveals what they're calling a "recognition-response gap." Organizations know they're exposed (87% identify AI vulnerabilities, 94% see AI as the most significant cybersecurity driver) but response capability lags dramatically behind. Less than 45% of private-sector CEOs feel confident in their institutional defenses.
That spread tells a story:
Organizations understand the risk. They just can't close the gap fast enough.
The numbers are stark. Organizations assessing AI tool security nearly doubled in a year, from 37% to 64%. That looks like progress until you examine the details:
Only 40% conduct periodic reviews before deployment.
Another 24% perform one-time assessments.
Roughly one-third deploy AI tools with no security validation process at all.
To borrow the Forbes article's metaphor: they're building the seatbelts after the crash test.
This isn't a criticism of the organizations involved. The incentive structure rewarded speed. Organizations that deployed GenAI early reported productivity improvements that created competitive pressure. Moving fast made sense.
The question now is how to build governance capacity without losing the value AI creates.
Why This Isn't Just a Shadow AI Problem
Much of the conversation around AI governance has focused on shadow AI (employees using ChatGPT, Claude, or other tools without organizational approval). That's a legitimate concern. When people paste sensitive information into consumer AI tools, data leaves the organization through channels that were never designed, approved or monitored.
But the WEF data suggests we need to hold two concerns at once:
One is shadow AI: unauthorized tools moving data through channels no one approved.
The other is sanctioned AI: authorized tools that create information pathways organizations haven't fully mapped.
When enterprises connect GenAI to Slack, Teams, SharePoint, and proprietary databases, they create new ways for information to move (and potentially exit) that traditional security controls weren't designed to catch.
Consider the scenarios from the WEF report:
Someone prompts a customer service AI to "summarize all client contracts above $10 million." Or a financial planning tool gets queried about merger scenarios under evaluation. These are conversational queries that mimic legitimate use. They don't look like data exfiltration. They don't trigger the same alerts as someone downloading a large file or running an unauthorized database query.
Traditional data loss prevention tools were designed for a different pattern. They detect bulk transfers and obvious policy violations. AI systems extract information through conversation, through the same interfaces employees use for legitimate work every day.
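To make that contrast concrete, here's a deliberately simplified sketch, my illustration rather than anything from the cited reports, of the two detection patterns side by side. The function names, keyword patterns, and size threshold are assumptions for illustration only; real DLP and prompt-screening tools rely on classification, context, and user entitlements, not keyword lists.

```python
import re

# Traditional DLP heuristic: flag transfers above a size threshold.
# (Illustrative only; real tools use far richer signals.)
def flags_bulk_transfer(payload_bytes: int, threshold_bytes: int = 50_000_000) -> bool:
    return payload_bytes > threshold_bytes

# Conversational-era heuristic: flag prompts that ask an assistant to
# aggregate or summarize sensitive material. These patterns are invented
# for illustration; a real control would use classification and
# entitlements, not a keyword list.
SENSITIVE_PATTERNS = [
    r"\ball (client|customer) contracts\b",
    r"\bmerger (scenarios|targets)\b",
]

def flags_sensitive_prompt(prompt: str) -> bool:
    return any(re.search(p, prompt, re.IGNORECASE) for p in SENSITIVE_PATTERNS)

# A short chat message sails past the bulk-transfer rule while asking
# for exactly the aggregation the WEF scenario describes.
prompt = "Summarize all client contracts above $10 million"
print(flags_bulk_transfer(len(prompt.encode())))  # False
print(flags_sensitive_prompt(prompt))             # True
```

The point isn't the specific heuristic; it's that the risky request is small, conversational, and indistinguishable in size from ordinary work.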
This is why I keep coming back to information flow as the lens for thinking about AI governance.
The question isn't just "what tools are people using?" It's "how does information move through our systems now that AI is part of the workflow?" That's a different question, and it requires different approaches to answer.
The Good News: AI Is Delivering Value!
I want to be clear about something: this isn't an argument against AI adoption. The productivity gains are real. The efficiency improvements are real. The competitive advantages are real. That's precisely why deployment moved so fast: organizations saw genuine value and moved to capture it!
The challenge isn't AI itself. It's that governance is running about 18-24 months behind deployment, according to the WEF. That's a gap worth closing, not a reason to slow down or reverse course.
The organizations I see making progress aren't the ones trying to restrict AI use. They're the ones working to understand it: mapping how information flows, identifying where new pathways have been created, and building governance that works with how people actually use these tools rather than against it.
The Broader Context: Global Fragmentation
The NYU Center on International Cooperation (CIC) brief on the UN Global Dialogue provides useful context here. As of late 2025, there were 2,220 AI governance initiatives worldwide, with little coordination between them. Meanwhile, 118 countries remain entirely excluded from existing international AI governance efforts.
CIC researchers frame AI governance as fundamentally about who sets the rules, who coordinates, and who benefits. That's a global question, but it applies at the organizational level too.
Within most organizations, AI adoption has been fragmented across departments, teams, and individual users, each making their own decisions about tools and data access, often without visibility into what's happening elsewhere. The marketing team uses one set of AI tools. Finance uses another. Customer service has its own. IT may not have a complete inventory of what's deployed, let alone how information flows between these systems.
The CIC proposes organizing global AI governance around three pillars: managing risks, distributing rewards, and aligning rules. I'd argue these same pillars apply internally:
Managing risks means building shared capacity for understanding AI risks across the organization: not making it solely IT's problem or a legal problem, but developing organization-wide awareness of how information flows through AI systems.
Distributing rewards means ensuring AI benefits reach across the organization equitably: not just early adopters or technically savvy teams, but everyone who could benefit from these tools.
Aligning rules means creating governance that matches how work actually happens. Policies built around real workflows, not theoretical ideals, are easier for people to follow.
What the Vendor Landscape Reveals
The IAPP's 2026 AI Governance Vendor Report offers another perspective. They've catalogued over 60 vendors across four capability areas:
Assurance and auditing,
Consulting and advisory,
Policy and compliance, and
Technical assessments.
Their key observation resonates with what I see in practice: "AI governance is not a single function, discipline or technology."
No single provider spans all governance tasks. Organizations typically need multiple vendors working across different areas, or they need to build capabilities internally.
The IAPP also notes that many vendors remain "cross-sectoral" rather than specializing in particular industries. This suggests the market hasn't yet matured enough to offer governance solutions tailored to specific organizational contexts (healthcare v. finance v. nonprofit v. manufacturing).
The implication? Organizations need to do more of this contextual work themselves.
Governance isn't a product you buy.
It's a capability you build.
One that requires understanding your specific workflows, your particular data sensitivities, and your organization's actual risk tolerance.
A Framework Worth Considering
Kroll's AI strategy brief provides practical scaffolding for thinking through these issues. Their framework emphasizes several elements that align with an information flow approach:
Start with clear problem definition. What do you actually want AI to solve? What's the value of having those problems solved by AI versus your existing approaches? This clarity shapes everything that follows.
Build a comprehensive inventory. Not just what's officially sanctioned, but what's actually in use. Are all AI models in the organization identified and accounted for? How do they impact business risk? (A rough sketch of what one inventory record might capture follows this framework.)
Understand your data deeply. Kroll makes the observation that AI is "95% data, 5% algorithm." The organizations struggling most with AI governance are typically the ones who haven't first addressed their information management fundamentals. You can't govern AI tool usage if you don't understand where your sensitive data lives, how it flows, and who has access to it.
Identify obstacles proactively. Operational obstacles like prohibited systems still in use. Cultural obstacles like inadequate training. Regulatory obstacles as frameworks like the EU AI Act come into force. Identifying these early allows you to address them before they become crises.
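To make the inventory point concrete, here's a minimal sketch of what one record in such an inventory might capture. The fields and example values are my own assumptions, not Kroll's framework or any vendor's schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# One row in a hypothetical AI tool inventory: what the tool is, who owns it,
# what data it can reach, and when it was last reviewed. Illustrative only.
@dataclass
class AIToolRecord:
    name: str                      # e.g. "Customer service assistant"
    owner: str                     # accountable team or person
    sanctioned: bool               # officially approved, or simply discovered in use
    data_sources: List[str] = field(default_factory=list)  # systems it can read
    data_sensitivity: str = "unknown"     # e.g. "public", "internal", "confidential"
    last_reviewed: Optional[str] = None   # ISO date of last governance review

inventory = [
    AIToolRecord(
        name="Customer service assistant",
        owner="Support operations",
        sanctioned=True,
        data_sources=["CRM", "contract repository"],
        data_sensitivity="confidential",
        last_reviewed="2025-11-03",
    ),
    # Shadow AI shows up here too: in use, never approved, owner unclear.
    AIToolRecord(name="Browser-based chatbot (personal account)", owner="unknown", sanctioned=False),
]

# Simple governance questions the inventory makes answerable.
never_reviewed = [t.name for t in inventory if t.last_reviewed is None]
unsanctioned = [t.name for t in inventory if not t.sanctioned]
print(never_reviewed)  # ['Browser-based chatbot (personal account)']
print(unsanctioned)    # ['Browser-based chatbot (personal account)']
```

Even a rough structure like this lets you ask the questions that matter: what's unreviewed, what's unsanctioned, and what touches confidential data.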
Kroll also frames regulatory compliance, including the EU AI Act and the DOJ's updated Evaluation of Corporate Compliance Programs, not as constraints but as guides for building defensible governance programs. Compliance requirements can provide structure and accountability mechanisms that organizations struggle to create on their own. They're not just boxes to check; they're frameworks for thinking through what good governance actually looks like.
The Questions That Matter
Pulling these threads together, I keep coming back to the difference between compliance-oriented questions and information flow questions.
Compliance asks:
Do we have an AI policy?
Have employees signed it?
Are we meeting regulatory requirements?
Information flow asks:
Do we actually know what AI tools are being used across the organization, not just what's sanctioned, but what's in use?
Can we map how sensitive information flows through those tools? Where does data enter? Where might it exit?
What queries could extract information we'd want to protect?
Are our controls designed for how AI systems actually work (i.e. conversational interfaces, semantic queries, cross-platform integrations) or are they still optimized for traditional data loss patterns?
Both sets of questions matter. But organizations that only ask the first set will find themselves with policies that don't match reality: governance that exists on paper but doesn't protect against actual risk.
The organizations making progress seem to share something in common: they're asking the second set of questions first, then building compliance frameworks that reflect what they learn.
Closing the Gap
The WEF report frames 2026-2027 as a "critical exposure window." Organizations have deployed AI at scale while security practices remain immature. The 18-24 month gap is real.
But I also think this framing can be paralyzing if we're not careful. Not every organization is defending against nation-state actors or managing frontier AI models. For most organizations, the path forward is more modest and more achievable:
Gain visibility into current AI usage, both deployed and shadow.
Map information flows: understand how data moves through your systems now that AI is part of the workflow (a small sketch of one way to represent this follows the list).
Build governance into workflows rather than around them: create policies people can actually follow.
Create feedback loops that allow continuous improvement as tools and uses evolve.
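For the flow-mapping step, here's a toy illustration of one way to start: treat information flow as a directed graph of systems and AI tools, then ask whether data from a sensitive source can plausibly reach an external-facing surface. The system names and the reachability check are assumptions for illustration, not a prescribed method from any of the cited reports.

```python
from collections import defaultdict

# Nodes are systems and AI tools; an edge means "data can move from A to B".
flows = defaultdict(set)

def connect(source: str, destination: str) -> None:
    flows[source].add(destination)

# Example wiring: an assistant reads internal systems and answers
# external-facing chat sessions. Names are made up.
connect("contract repository", "customer service assistant")
connect("CRM", "customer service assistant")
connect("customer service assistant", "external chat session")

def reachable(start: str, target: str) -> bool:
    """Can data starting at `start` plausibly end up at `target`?"""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(flows[node])
    return False

# The pathway worth a governance review: confidential contract data can
# surface in an external-facing conversation.
print(reachable("contract repository", "external chat session"))  # True
```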
The 18-month gap won't close overnight. But organizations that start with understanding how information actually moves are the ones positioned to make real progress. They're not trying to stop AI adoption. They're trying to make it sustainable.
And that seems like a goal worth working toward.
Sources
World Economic Forum. Global Cybersecurity Outlook 2026. Davos, January 2026.
Yıldız, Güney. "CEOs On Alert: Davos 2026 Flags AI Security Failures As Critical Risk." Forbes, January 22, 2026.
Camelli, Thibault and Joris de Mooij. Guidance for the New Global Dialogue on AI Governance. NYU Center on International Cooperation, January 2026.
IAPP. AI Governance Vendor Report 2026. International Association of Privacy Professionals, 2026.
Kroll. "AI Strategy: Building a Future-Proof Framework." Kroll Cyber Publications, 2026.
—
Cora-Lynn Lynds is a governance and AI strategy consultant at InfoStrat who helps mission-driven organizations make confident decisions in uncertain systems.