The Pitfalls Of AI Self-Regulation


As AI rapidly evolves, the gap between technological advancements and regulatory frameworks is widening, causing widespread concern. According to a recent survey from Prosper Insights & Analytics, 88% of adults in the U.S. have privacy concerns with AI using their data.

Chart: Concern About Privacy From AI (Prosper Insights & Analytics)

The study also found that the top three concerns among U.S. adults around recent developments in AI are the need for human oversight (39%), incorrect information or “hallucinations” (36%) and lack of trust that AI has their best interests in mind (33%).

It’s clear that guardrails are needed, but businesses and policymakers are grappling with how best to regulate AI. California Gov. Gavin Newsom recently vetoed the significant yet controversial Senate Bill 1047, which would have required businesses that develop large language models to establish and implement ethical standards, safety protocols, and reporting and oversight requirements.

TJ Leonard, CEO of Storyblocks, the leading stock media subscription for video creators, says that while AI regulation is necessary given the technology’s unique potential risks, the industry must tread carefully. He says, “Placing undue culpability on developers for vaguely defined harm could stifle innovation, and while we don’t want people using AI tools to wage terrorist attacks, leaving the responsibility to developers feels misguided and potentially ripe for misuse.”

However, relying solely on businesses’ self-regulation simply isn’t enough to ensure the responsible development and deployment of AI technologies.

The Problem with Self-Regulation

AI companies’ ethical commitments alone are insufficient. Historical examples have consistently demonstrated the limitations of industry self-governance. Leonard says the technology industry’s track record shows that promises of ethical conduct often succumb to competitive pressures and the relentless drive for innovation, sometimes at the cost of public interest.

For example, in the early days of social media, companies made assertions about safeguarding user data and moderating content, only for later revelations to expose significant lapses and breaches of trust. The infamous Cambridge Analytica scandal highlighted how platform self-regulation can fail to protect user data and privacy, leading to widespread calls for stricter oversight.

AI has moved fast, creating a dynamic environment marked by vast opportunities, significant risks, and a pressing need for governance. In this uncharted territory, forward-thinking brands like Google, Meta and Microsoft have initiated self-regulatory practices such as supporting watermarking of AI-generated content through the Coalition for Content Provenance and Authenticity (C2PA) and backing legislation like the California Digital Content Provenance Standards Act (AB 3211). However, these efforts alone are not a panacea.

According to Leonard, "Responsible AI development is not just a competitive advantage; it’s a societal obligation. Engaging proactively with policymakers to create an environment that balances innovation with protection is crucial."

Imbalanced Policy Discussions

Today, the AI policy arena has largely been overshadowed by the influence of tech giants, leaving resource-constrained startups at a disadvantage. This imbalance in policy discussions risks producing anti-competitive regulations that disproportionately favor established players. Startups with limited capacity to engage in legislative dialogue and advocacy may find themselves sidelined in the debates shaping AI's future.

Because large companies have the resources to lobby state and federal legislative bodies and influence policy outcomes, the risk is regulations that inadvertently create barriers to entry, stifling innovation from smaller firms. These limitations not only hinder diverse participation but also compromise the broader industry’s ability to push for fair regulation.

Leonard suggests that market forces may initially set the rules of engagement, with regulators arriving later to codify them: “To foster a truly competitive and innovative environment, it’s critical to ensure that both small and medium-sized players also have a voice in shaping AI regulations."

Need for Flexible Regulations

AI’s capabilities are progressing at an unprecedented pace, challenging the relevance and applicability of current regulatory frameworks. To keep up, regulations must be both flexible and enforceable.

The upcoming U.S. election adds a layer of complexity, as political outcomes could significantly redirect the trajectory of AI regulation. Different administrations will inevitably hold different views on the balance between innovation and oversight, further complicating the regulatory environment.

Leonard adds, “More significant than Gov. Newsom’s veto is the greater focus on AI’s impact on electoral integrity, actor rights and content labeling. These are all steps in the right direction, as they reinforce crucial principles and the importance of copyright issues and transparency.”

Regulations must be clear yet adaptive, enabling businesses to innovate responsibly while safeguarding our society.

Fostering Fair and Effective Oversight

At this stage, the best way to create a more sustainable foundation for ethical AI innovation is twofold: 1) bring diverse voices to the table, and 2) set high standards for ethical data partnerships.

Think of responsible AI development as a team sport. You've got companies, lawmakers and everyday folks all playing their part. To make the conversation truly informed and inclusive, you need everyone — from startups to artists — to join in.

In a world where fake news and doctored content spread widely and those responsible often face no consequences, companies must go the extra mile with ethical data licensing and responsible AI. Consider the adage that all company communication should be written as if it were going to appear on the front page of a newspaper. What if AI companies applied the same standard to the data used to train these foundation models?

Leonard points to pursuing data licensing partnerships with companies prioritizing responsibly sourced content: “By investing in ethical data sourcing, companies can compete not just on the strength of their AI models but on the integrity of their data pipeline. This strategy may seem limiting in the short term, but it's a forward-thinking approach that will pay dividends as regulatory scrutiny intensifies and public awareness grows.”

Navigating the Future

The call to action is clear: Stakeholders must work together to develop robust but flexible regulatory measures that protect societal interests while spurring innovation. As AI continues transforming our world, ensuring its responsible governance and ethical applications will determine whether it becomes a force for good or a continued source of contention across industries.