Companies have to address the risks posed by GenAI
by Ed Watal · BetaNews

Even though it has only been two years since the public demo of ChatGPT launched, popularizing the technology for the masses, generative AI has already had a profound and transformative effect on the world. In the years since the platform's launch, critics have regularly pointed out the risks of generative AI and called for increased regulation to mitigate them. Once these risks are addressed, companies will be freer to use AI in ways that benefit both their bottom line and the world as a whole.
We must remember that artificial intelligence is a powerful tool, and as the adage goes, “With great power comes great responsibility.” Although we have seen AI make a positive impact on society in several ways -- from boosting productivity in industrial settings to contributing to life-saving discoveries in the medical field -- we have also seen wrongdoers abuse the technology to cause harm.
That’s not to mention the potential consequences AI technology could have -- regardless of the intent behind its use -- to the detriment of society if left unchecked. Some of the areas in which critics have expressed the most concern about AI having a negative impact include:
- Job displacement: One of the chief concerns surrounding the artificial intelligence revolution is the potential for job displacement. As AI technology becomes more sophisticated, it will be used to automate tasks traditionally performed by human workers, leading to job losses in various industries.
- Bias and discrimination: In their current stages, artificial intelligence models are still dependent on preexisting data. Because of this, AI algorithms can perpetuate biases in the data sets upon which they are trained, potentially leading to discriminatory outcomes.
- Privacy concerns: The data-driven nature of artificial intelligence models means that these tools collect and process large amounts of data. As a result, many critics have raised concerns about the privacy of users’ data and the potential for security breaches.
- Existential risk: In the long term, some critics have expressed fear that the continued development of AI technology could lead to “superintelligent” AI systems that could pose a threat to humanity.
Thankfully, there is a way for us to avoid these potential consequences: responsible use of the technology. By deploying AI thoughtfully and working to reduce its potential negative impacts, leaders in AI can contribute to a future where artificial intelligence is used to benefit the greater good.
For example, employers can invest in reskilling and upskilling programs to offset AI’s potential to displace jobs. Alternatively, to reduce the possibility of bias, they can train AI models on diverse data sets.
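To make the second suggestion a little more concrete, the short sketch below shows one way a team might audit a training set’s demographic balance before fitting a model. It is a hypothetical illustration only: the column name, the 10 percent threshold, and the toy data are assumptions, not part of any specific framework.

```python
# Hypothetical sketch: audit a training set's demographic balance before fitting a model.
# The "group" column name and the 10% threshold are illustrative assumptions.
import pandas as pd

def audit_representation(df: pd.DataFrame, group_col: str = "group",
                         min_share: float = 0.10) -> pd.Series:
    """Return each group's share of the data and warn about under-represented groups."""
    shares = df[group_col].value_counts(normalize=True)
    for group, share in shares.items():
        if share < min_share:
            print(f"Warning: group '{group}' makes up only {share:.1%} of the training data")
    return shares

# Toy example: group C is clearly under-represented and gets flagged
train = pd.DataFrame({"group": ["A"] * 80 + ["B"] * 15 + ["C"] * 5,
                      "label": [1, 0] * 50})
audit_representation(train)
```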
Accountability and regulation of generative AI
The success of the artificial intelligence revolution hinges on greater accountability. Of course, there are already national and international laws to which AI developers and users must adhere out of legal obligation, but just as AI technology is in its infancy, so too are these laws. Many lawmakers remain unsure how artificial intelligence should be regulated because the long-term implications of AI for society are still far from clear.
Ultimately, those best suited to set ethical standards for artificial intelligence use are those with hands-on experience with the technology. AI developers and users understand the technology's challenges and shortcomings better than anyone else. Many leaders in AI have become proponents of self-regulation, creating guidelines for using the technology, pairing them with an ethical framework for its implementation, and applying these guidelines to the industry as a whole.
How the PROSE Framework paves a path to a brighter future for generative AI
One of the key proposals that AI industry leaders are advocating for is the PROSE Framework. PROSE -- which stands for Policy, Regulation, Ontology, Standards, and Ethics -- is designed to ensure the responsible use of artificial intelligence technology while also encouraging responsible decision-making and growth. Reviewed and deemed comprehensive by former US Army generals who were responsible for AI implementations, the PROSE Framework has become a useful resource for C-suite executives and other leaders hoping to implement AI in their organizations.
The PROSE Framework provides foundational policies and ethical guidelines that dictate how AI should -- and should not -- be used. For example, it specifies data types that should not be used for AI and decisions that should not be delegated to AI. On a deeper level, PROSE also defines a unified ontology and associated standards for a common data format and exchange protocol between AI models, agentic systems, and APIs.
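The article does not spell out what such a common data format looks like, but the gist of a shared exchange protocol can be sketched as a minimal message envelope that models, agents, and APIs all agree to parse. The example below is purely illustrative and is not the PROSE specification; every field name in it is an assumption.

```python
# Hypothetical illustration of a shared message envelope for model/agent/API exchange.
# This is NOT the PROSE specification; all field names here are assumptions.
import json
import uuid
from datetime import datetime, timezone

def make_envelope(sender: str, recipient: str, task: str, payload: dict) -> str:
    """Wrap a payload in a minimal, self-describing envelope that any party can parse."""
    envelope = {
        "id": str(uuid.uuid4()),                              # unique message identifier
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when the message was created
        "sender": sender,                                     # e.g. an AI model or agent name
        "recipient": recipient,                               # e.g. a downstream API or agent
        "task": task,                                         # what the recipient is asked to do
        "payload": payload,                                   # task-specific data
    }
    return json.dumps(envelope)

# Example: an agent asking a summarization model to process a document
message = make_envelope("planning-agent", "summarizer-model",
                        "summarize", {"document_id": "doc-123"})
print(message)
```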
With standards like the PROSE Framework in place, businesses using artificial intelligence will be held to an even higher standard of compliance and accountability than governmental regulations alone would impose. If we want to create a future where generative AI tools can be used freely for their many benefits, we must embrace steps like self-regulation that hold AI developers and users accountable for the impacts the technology could have on society.
Image credit: Lishchyshyn/depositphotos.com
Ed Watal is the founder and principal of Intellibus, an INC 5000 Top 100 Software firm based in Reston, Virginia. He regularly serves as a board advisor to the world’s largest financial institutions. C-level executives rely on him for IT strategy & architecture due to his business acumen & deep IT knowledge. One of Ed's key projects is BigParser (an Ethical AI Platform and a Data Commons for the World). He has also built and sold several Tech & AI startups. Ed has substantial teaching experience and has served as a lecturer for universities globally, including NYU and Stanford. Ed has been featured on Fox News, Information Week, and NewsNation.