Turning AI Governance From Burden To Benefit
by Gary Drenik · Forbes

AI governance is the critical missing piece for organizations looking to scale beyond one-off Artificial Intelligence (AI) and Machine Learning (ML) successes. Done well, governance accelerates AI adoption by mitigating risk, ensuring performance, demonstrating compliance, and ultimately engendering trust. However, despite the abundance of high-level frameworks and principles, the biggest hurdle remains making AI governance work in practice. The challenge is not defining AI governance but bridging the gap between principles and the actual practices that ensure AI systems are trustworthy, reliable, and compliant.
Most current AI governance frameworks fail to address the real-world tasks needed to govern AI effectively. Recent research commissioned by Domino Data Lab and conducted by BARC found that 95% of enterprises face a governance remodel or reboot to update their frameworks and processes for today's model landscape. Most of the commonly proposed frameworks are so far removed from the actual practice of developing, deploying, and maintaining AI/ML solutions that they risk adding layers of extra work and delay while doing little to reduce actual risk. AI models often take more than twice as long to validate as to build, and during that wait models decay, the chance of adoption declines, and morale suffers. Instead of fostering innovation, poorly implemented governance stymies AI impact while providing only a mirage of safety.
Where Good Intentions Fall Short
As AI continues to evolve and permeate every part of the enterprise, fears surrounding its misuse and the potential consequences of unchecked deployment are mounting. For example, according to a recent Prosper Insights & Analytics survey, 92% of Boomers, 88% of Gen-X, 84% of Millennials, and 84% of Gen-Z are concerned about privacy violations from AI using their data. However, fears about data privacy are just the tip of the iceberg – badly governed AI applications can tarnish corporate reputations, incur regulatory fines, rack up uncontrolled costs, and damage the bottom line through sheer poor performance. The more organizations leverage AI and ML, the more urgent the need for robust AI governance.
Governance is action. It's the set of activities—planning, approvals, access control, monitoring, remediation, and auditing—that must be embedded throughout every AI/ML project. But while there is clear interest in responsible and ethical AI, few enterprise leaders are taking concrete steps to govern AI in practice. Data from EY's AI Pulse Survey reveals that while more than half (53%) of senior leaders say there is increased interest in responsible AI in their organization this year, only one-third (32%) say their organization is addressing AI bias fully at scale – and bias is only the most obvious aspect of AI governance. Organizations must go far deeper to meaningfully tackle AI risks.
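To make "governance as action" concrete, here is a minimal sketch of a deployment gate that enforces a few of these activities in code. Everything in it – the record fields, the thresholds, the function names – is a hypothetical illustration under assumed policies, not any particular platform's API.

```python
# Hypothetical sketch: a deployment gate that blocks promotion until
# required governance actions have actually happened. All names and
# thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class GovernanceRecord:
    model_id: str
    approvers: list            # sign-offs collected during review
    validation_auc: float      # performance on a held-out data set
    bias_audit_passed: bool    # outcome of a fairness review
    data_scope_approved: bool  # access to sensitive data was authorized

def deployment_gate(record: GovernanceRecord) -> None:
    """Raise if any required governance action is missing."""
    if not record.approvers:
        raise PermissionError(f"{record.model_id}: no approvals on file")
    if record.validation_auc < 0.75:  # policy floor, assumed for the sketch
        raise ValueError(f"{record.model_id}: performance below policy floor")
    if not record.bias_audit_passed:
        raise ValueError(f"{record.model_id}: bias audit incomplete or failed")
    if not record.data_scope_approved:
        raise PermissionError(f"{record.model_id}: data access not authorized")
    print(f"{record.model_id}: governance checks passed; promoting")

# Example: this record fails the gate because the bias audit is missing.
try:
    deployment_gate(GovernanceRecord("churn-model-v2", ["risk-team"], 0.81, False, True))
except (PermissionError, ValueError) as err:
    print(f"blocked: {err}")  # -> blocked: churn-model-v2: bias audit incomplete or failed
```

The point of a gate like this is that policy becomes an enforced step in the workflow rather than a document on a shelf.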
“Governance doesn’t happen just because you have committees, principles, or policies in place,” explains Dr. Kjell Carlsson, Head of AI Strategy at Domino Data Lab. “If the action isn’t taken during development and deployment, there is no AI governance.”
The disconnect between high-level frameworks and practical execution is a big problem. Even the NIST AI Risk Management Framework, which is detailed and comprehensive, falls far short of providing actionable guidance on the specific governance tasks required throughout a typical AI project. This results in what Carlsson describes as a "governance gap." Organizations may have councils, frameworks, and principles in place, but nothing to ensure the full management and mitigation of risk. AI solutions end up hampered by long delays and emerge no more trustworthy than before.
“Organizations will frequently establish an AI governance council and agree on principles or a high-level framework, but then the initiative stalls,” Carlsson says. “An AI governance council can stop some risky projects from going into production, but it will also stop game-changing AI projects where risks could have been mitigated.”
Learning from the Experts
This confusion over what innovation-friendly AI governance should look like in practice is unwarranted. “The process of governing AI and ML solutions—of ensuring that they are accurate, reliable, fair, and compliant with existing regulations—has mostly been solved,” Carlsson points out. Advanced data science teams in regulated industries like finance and pharmaceuticals have been executing rigorous governance processes for years. These teams have honed their practices to ensure that AI models used in credit scoring, drug discovery, and other sensitive areas are governed effectively.
The real challenge is scalability. Organizations should not be asking "What should I be doing for AI governance?" but rather "How do I make AI governance faster, easier, and more scalable?" Even the most advanced teams struggle with the manual effort of governing a growing portfolio of AI projects. Delays, frustration, and the toll on team morale are common, making it clear that current governance practices are not sustainable at scale.
Automation: The Key to Scalable Governance
Even among firms with advanced governance processes, manual effort creates significant delays – especially as the number of AI projects grows. According to Carlsson, "No amount of exhortations, incentives, or penalties to individuals and teams will overcome these challenges." To be sure, the fragmented nature of AI ecosystems, with their range of tools, technologies, and environments, only adds to the complexity. The way to address these challenges, Carlsson argues, is through automation.
But automation is not about replacing human judgment. It's about streamlining governance tasks to provide timely visibility and control over AI projects. According to Carlsson, prime opportunities for automation include (see the sketch after this list):
- Unified Visibility: Providing comprehensive, real-time information about risk, performance, and cost across AI projects.
- Auditability and Reproducibility: Enabling reviewers to replicate and validate AI solutions, diagnose issues, and make informed recommendations.
- Access Management: Controlling access to sensitive data, code, and models, and granting timely access where risks have been mitigated.
- Policy Management: Coordinating and enforcing policies that guide risk management activities.
- Task and Approvals Management: Streamlining the coordination of governance tasks and approvals to reduce manual overhead.
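As a rough illustration of the last two items – policy management and approvals – consider policies expressed as data and evaluated automatically against each project's metadata. The rule format, field names, and thresholds below are assumptions for the sketch, not any vendor's schema.

```python
# Hypothetical sketch: governance policies expressed as data and checked
# automatically against project metadata. Field names and thresholds are
# invented for illustration.

POLICIES = [
    {"field": "days_since_validation", "max": 90,
     "message": "model must be revalidated every 90 days"},
    {"field": "drift_score", "max": 0.2,
     "message": "input drift exceeds the monitored threshold"},
    {"field": "open_remediation_items", "max": 0,
     "message": "remediation items must be closed before release"},
]

def evaluate_policies(project: dict) -> list:
    """Return violation messages; an empty list means the project complies."""
    violations = []
    for rule in POLICIES:
        # A missing field counts as a violation: unknown state is unsafe.
        value = project.get(rule["field"], float("inf"))
        if value > rule["max"]:
            violations.append(f"{project['name']}: {rule['message']}")
    return violations

# Example run against a hypothetical project record
project = {"name": "credit-scoring-v3", "days_since_validation": 120,
           "drift_score": 0.05, "open_remediation_items": 0}
for issue in evaluate_policies(project):
    print(issue)  # -> credit-scoring-v3: model must be revalidated every 90 days
```

Because checks like these run against every project automatically, reviewers spend their time on judgment calls rather than chasing status – which is the point of the automation Carlsson describes.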
Building AI Governance Maturity
“AI governance is extremely important not just because it reduces risk but because it builds trust, which in turn accelerates AI/ML adoption and impact,” says Carlsson. To his point, effective governance ensures that AI solutions perform reliably, minimize regulatory risks, and protect an organization’s reputation. But to deliver on this promise, governance must evolve beyond principles and focus on the specific actions that drive visibility, monitoring, control, and remediation.
Organizations need to dramatically advance their AI governance maturity and implement technology solutions that give humans the visibility and control they need to govern. By embracing automation and streamlining governance processes, organizations can reduce risks, enhance performance, and unlock the full potential of AI.
This is the pathway to not only safer AI but also to realizing the transformative impact of AI at scale.