Across many organizations, the pace of AI adoption is outstripping the systems meant to govern it. As companies push to deploy new tools, a familiar tension is emerging: the technology rewards speed, while governance frameworks are built to slow things down.
Keith Enright, partner and co-chair of the Technology Innovation Group at Gibson Dunn and former global chief privacy officer at Google, noted that there is a perception — rightly or wrongly — that making the right bets on AI is existential, and that speed has become a survival imperative.
The result is a growing gap between what AI can do and what organizations are prepared to manage. Boards are demanding acceleration, CEOs are pushing enterprise-wide deployment, and legal and compliance teams are being told not to stand in the way of innovation.
Governance Built for a Slower Era
Compliance reviews, risk assessments and legal approvals were built for earlier generations of enterprise software, when development cycles unfolded over months or years. AI systems can move far faster, allowing new capabilities to be tested and deployed before oversight processes fully catch up.
Harvard Business School professor Suraj Srinivasan framed the problem with a simple analogy during a recent Newsweek AI Agenda webinar: the faster the car, the better the brakes you need. In theory, governance should function as those brakes. In practice, most organizations are still working out how oversight should operate when AI systems can generate insights, drive decisions and trigger automation at a pace that human-designed review processes were never built to handle.
A Problem of Design, Not Just Technology
Paul McDonagh-Smith, senior lecturer at MIT Sloan School of Management, said governance is increasingly about institutional design — how companies distribute authority, responsibility, and accountability as AI tools spread across the enterprise. The question is no longer simply whether AI works, but who owns the decisions it influences.
Philip Brittan, CEO of enterprise intelligence platform Bloomfire, pointed to a related concern. As AI capabilities expand across departments, organizations risk creating fragmented systems of oversight, where different teams deploy tools under different assumptions about risk, accountability, and control.
Brittan warned that deploying AI on top of messy, ungoverned knowledge — including conflicting policies and outdated content with no clear ownership — doesn't mean moving fast. It means accelerating toward a problem that won't be visible until it becomes extremely expensive to fix.
The Temptation to Bypass Oversight
Srinivasan described compliance and oversight structures as mechanisms designed to slow things down so organizations don't make big mistakes. They are intentional design features meant to introduce deliberation into high-stakes decisions. AI is now stress-testing that design at every level.
When AI becomes a declared strategic priority, pressure cascades through the organization. The temptation to bypass established governance processes grows, and what starts as a one-time shortcut can quickly become a structural vulnerability.
The alternative is more difficult but more durable: evolving the compliance apparatus itself, increasing throughput, improving cross-functional coordination, and recalibrating risk assessment processes for a higher-velocity environment.
AI in Action: Clinical Trials Get Smarter
One example of AI successfully operating within strong governance comes from Lindus Health, a clinical research organization that has embedded AI agents directly inside its clinical trial platform. Sponsors can now describe the data they want to explore in plain language, and the agent generates queries, builds visualizations, and returns validated answers within seconds — all inside the secure environment where the data already resides.
Meri Beckwith, co-founder and co-CEO of Lindus Health, stressed that integrating LLMs into existing enterprise platforms unlocks real value only when strong safeguards are in place, and that it is critical that any insights surfaced are based on hard data. Each deployment includes encryption, secure access controls, detailed audit logs, and validation processes aligned with regulatory expectations.
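The pattern Beckwith describes, validating every request against approved boundaries and recording it in an audit trail before any data is touched, can be sketched in a few lines. This is an illustrative toy, not Lindus Health's implementation; the table names, function name, and log structure are all hypothetical.

```python
import hashlib
import time

# Illustrative sketch only -- not Lindus Health's actual system.
# Pattern: each natural-language request is checked against an
# allow-list and recorded in an audit log before any query runs.

ALLOWED_TABLES = {"enrollment", "adverse_events"}  # hypothetical table names
AUDIT_LOG = []

def run_governed_query(user, table, question):
    """Validate the requested table, then append a tamper-evident audit entry."""
    if table not in ALLOWED_TABLES:
        # Rejected requests never reach the data layer.
        raise PermissionError(f"table '{table}' is not approved for querying")
    entry = {
        "user": user,
        "table": table,
        "question": question,
        "timestamp": time.time(),
    }
    # Hash the entry so later tampering with the log entry is detectable.
    entry["digest"] = hashlib.sha256(
        repr(sorted(entry.items())).encode()
    ).hexdigest()
    AUDIT_LOG.append(entry)
    # A real system would now generate and execute the query inside the
    # secure environment; here we return a placeholder result.
    return {"status": "ok", "rows": []}
```

The key design choice is that validation and logging sit in front of query execution rather than alongside it, so a rejected request leaves no path to the underlying data.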
The Central Governance Lesson
AI adoption is no longer just a technical upgrade — it is forcing organizations to rethink how governance itself operates. The task is not only to manage risk, but to design oversight structures capable of operating at the same pace as the technology they are meant to control.
For companies experimenting with AI today, the central lesson is clear: speed can unlock enormous value, but without clear ownership and consistent oversight, it can also create blind spots that grow as quickly as the technology itself.
True resilience in the AI era will belong to organizations that treat governance not as friction to bypass but as infrastructure to modernize — strong enough to handle speed, flexible enough to adapt, and credible enough to withstand scrutiny when it inevitably arrives.