3 components CIOs need to create an ethical AI framework
According to PwC, only 20% of companies say they have an ethical artificial intelligence framework in place, and only 35% have plans to improve the governance of AI systems and processes in 2021. That's a problem, and it's little wonder the Biden administration is working on an AI Bill of Rights.
I think every CIO needs a responsible AI plan before implementing the technology, and businesses shouldn't wait for one to become mandatory. It doesn't matter whether the CIO buys the technology or builds it: AI as a technology is neutral, neither inherently ethical nor unethical. We need processes in place to ensure that ethics are built into AI systems.
AI gives us improved customer service, more personalized shopping experiences, and faster hiring, but all of these can have unintended consequences, such as racial or gender discrimination.
Consider hiring as an example. If AI isn't designed or implemented responsibly and ethically, it can do more harm than good: a screening model trained on historical hiring data can learn past biases and lead the company to overlook qualified applicants.
To avoid such outcomes, a responsible AI framework has three parts, whether CIOs build AI themselves or install AI tools from technology vendors.
1. Review AI every step of the way
CIOs can't treat responsible use of AI as a checkbox exercise at the end of product deployment. AI should be explainable, transparent, and, where possible, provable throughout its lifecycle, and these principles must be incorporated from the design phase onward. In this way, CIOs can uphold ethical principles such as accountability, legality, and fairness, to name a few.
When creating or implementing responsible AI, teams should focus on three things: the types of decisions the AI makes; the impact those decisions could have on people and society; and how much humans remain in the decision-making loop. By examining AI across these three dimensions, businesses can determine the appropriate level of governance (as sketched below) and feel more confident in their responsible use of AI. This helps mitigate the risk of AI making biased decisions.
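To make this concrete, here is a minimal sketch of how such a triage might look in practice. The dimensions, scoring scale, and tier names are illustrative assumptions, not a standard; real programs would calibrate them to their own risk appetite.

```python
from dataclasses import dataclass
from enum import IntEnum

class GovernanceTier(IntEnum):
    """Hypothetical tiers: higher values demand stricter oversight."""
    MINIMAL = 1
    STANDARD = 2
    ENHANCED = 3

@dataclass
class AIUseCase:
    # The three dimensions from the text, scored 0 (low) to 2 (high).
    decision_criticality: int   # how consequential are the AI's decisions?
    human_impact: int           # how much could they affect people or society?
    autonomy: int               # 0 = human decides, 2 = fully automated

def governance_tier(use_case: AIUseCase) -> GovernanceTier:
    """Map the three risk dimensions onto a governance tier."""
    score = (use_case.decision_criticality
             + use_case.human_impact
             + use_case.autonomy)
    if score <= 1:
        return GovernanceTier.MINIMAL
    if score <= 3:
        return GovernanceTier.STANDARD
    return GovernanceTier.ENHANCED

# A fully automated hiring screen that affects people directly:
hiring = AIUseCase(decision_criticality=2, human_impact=2, autonomy=2)
print(governance_tier(hiring))  # GovernanceTier.ENHANCED
```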
Also take the time to catalog the approvals and decisions made about using AI along the way. This is a key part of building AI governance and traceability, as the sketch below illustrates.
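One lightweight way to keep that catalog is an append-only log of governance decisions. This is a minimal sketch; the file location, field names, and example values are assumptions for illustration.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_governance_log.jsonl")  # hypothetical location

def record_decision(system: str, decision: str,
                    approver: str, rationale: str) -> None:
    """Append one governance decision to an append-only JSON-lines log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "decision": decision,
        "approver": approver,
        "rationale": rationale,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_decision(
    system="resume-screening-model-v2",
    decision="approved for pilot",
    approver="cio-office",
    rationale="bias review passed; human review remains in the loop",
)
```

An append-only format matters here: traceability depends on decisions never being silently rewritten after the fact.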
2. Catalog the impact of AI on systems
An AI model is not foolproof. It changes over time, and its impact on systems can change as well. That's why it's important to know which systems feed the AI and which systems it affects. Both need to be closely monitored and revisited throughout the AI lifecycle, and if either changes, humans should step in.
AI shouldn't operate entirely outside human oversight and input. In fact, it's critical that CIOs not only monitor the impact of AI but take corrective action. Many businesses today are powered by technology, but they remain human-led. Technology, and AI especially, still needs us to intervene to verify that biases are not creeping into the system and that its decisions remain fair. A simple monitoring sketch follows.
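As a minimal sketch of that kind of monitoring, the check below compares a model's recent output distribution against its distribution at deployment time and escalates to humans when they diverge. The threshold, the mean-shift metric, and the sample scores are illustrative assumptions; production systems would use richer drift statistics.

```python
import statistics

DRIFT_THRESHOLD = 0.10  # illustrative threshold, tuned per system in practice

def needs_human_review(baseline_scores: list[float],
                       current_scores: list[float]) -> bool:
    """Flag the model for human review when its output distribution drifts.

    A deliberately simple check: compare the mean score of recent
    predictions against the mean recorded at deployment time.
    """
    baseline_mean = statistics.mean(baseline_scores)
    current_mean = statistics.mean(current_scores)
    return abs(current_mean - baseline_mean) > DRIFT_THRESHOLD

# Scores captured at deployment vs. scores from the last week:
if needs_human_review([0.42, 0.51, 0.47, 0.45], [0.61, 0.66, 0.58, 0.63]):
    print("Drift detected: route model decisions to human reviewers")
```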
3. Evaluate whom AI will impact, and how
A responsible AI framework is ultimately about reducing the potential harm caused by AI, so strong AI governance cannot exist without assessing the decisions AI makes and the outcomes it produces. Does the AI pose a risk to an individual or to the company? Does it lead to an unethical result?
This can be difficult for CIOs to assess. Established ethical principles can help guide the use of AI, and governments have enacted their own regulations to try to rein in harmful AI. CIOs need to ensure that these frameworks are taken into account and that the impact of AI is monitored closely and regularly, for example with simple outcome checks like the one below.
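One common outcome check is a demographic-parity gap: the largest difference in favorable-outcome rates between groups. This is a minimal sketch assuming binary approve/reject decisions and labeled groups; the group names and data are hypothetical, and parity gaps are only one of several fairness measures.

```python
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute per-group approval rates from (group, approved) pairs."""
    totals: dict[str, int] = defaultdict(int)
    approved: dict[str, int] = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Demographic-parity gap: largest spread in approval rates."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

outcomes = [("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False)]
print(f"Parity gap: {parity_gap(outcomes):.2f}")  # flag if above agreed limit
```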
CIOs who implement AI can realize many benefits, but there are also many risks that businesses need to analyze, weigh, and overcome. With the right responsible and ethical AI framework in place, CIOs can push the business to new heights and ensure that the business, its employees, and its customers can trust its use of AI.