We see Artificial Intelligence (AI) all around us, in applications that are directly visible to us as well as in ones that are not. AI is here to stay, yet as we learn to live with it there remains a concern about whether we can fully trust it. Hollywood may have painted a picture of the rise of the machines that instills fear in some of us: fear of AI taking away jobs, of AI dulling human intelligence, and of AI being used for illegal purposes. The idea is as old as 1909 and E.M. Forster's "The Machine Stops." In this article we discuss actions organizations can take to build trust in AI so that it becomes an effective asset.

What does it mean to trust an AI system? 

For people to begin to trust AI there must be sufficient transparency about what information the AI has access to, what the AI is capable of, and what programming the AI is basing its outputs on. While I may not be an AI guru, I have been following the field's development over the last seven to eight years and have delved into several types of AI; IBM has a helpful article that outlines them. I recently tried to use ChatGPT to provide me with information and realized the information was outdated by at least a year. To better understand how we can trust AI, let us look at the factors that contribute to AI trust issues.

Factors Contributing to AI Trust Issues 

A key trust issue arises from the algorithm used within the neural network that delivers the outputs. Another key factor is the data those outputs are based upon: knowing what data the AI is using is important to being able to trust the output. It is also important to know how well the algorithm was tested and validated prior to release. AI systems are first run against a test data set to determine whether the neural network produces the desired results; the system is then tested on real-world data and refined. AI systems may also carry biases rooted in their programming and data sets. Companies face security and data privacy challenges, too, when using AI applications. Additionally, as stated earlier, there remains the issue of misuse of AI, much as cryptocurrency was misused in its initial phases.
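The test-then-refine cycle described above can be sketched in a few lines. The data, the stand-in "model," and the bias check below are purely hypothetical illustrations, not any particular vendor's process: the idea is simply that a model is scored on held-out data it never saw, and its outputs are compared across groups before release.

```python
# Hypothetical sketch of pre-release validation: evaluate on held-out test
# data, then check outputs for group-level bias. All names, data, and
# thresholds here are assumptions for illustration only.
import random

random.seed(42)

# Hypothetical labeled records: (feature_value, group, true_label)
data = [(random.random(), random.choice("AB"), random.randint(0, 1))
        for _ in range(1000)]

# Hold out a test set the model never sees during development.
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

# Stand-in "model": a fixed threshold on the feature. A real system would
# learn its parameters from the training portion only.
def predict(feature_value):
    return 1 if feature_value > 0.5 else 0

# Step 1: validate accuracy on the held-out data before release.
correct = sum(predict(x) == label for x, _, label in test)
accuracy = correct / len(test)

# Step 2: bias check — compare positive-prediction rates across groups.
rates = {}
for g in "AB":
    group = [x for x, grp, _ in test if grp == g]
    rates[g] = sum(predict(x) for x in group) / len(group)

print(f"test accuracy: {accuracy:.2f}")
print(f"positive-prediction rate by group: {rates}")
```

If the group rates diverge sharply, that is a signal to revisit the training data or the model before the system reaches real-world use.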

What can companies do to improve trust in AI? 

While there is much for organizations to do to address the issues listed above, and it may take a few years to improve public trust in AI, companies developing and using AI systems can take a systems-based approach to implementing them. The International Organization for Standardization (ISO) recently published ISO/IEC 42001 – Information technology – Artificial intelligence – Management system. The standard provides a process-based framework to identify and address AI risks effectively, with the commitment of personnel at all levels of the organization.

The standard follows the harmonized structure of other ISO management system requirement standards such as ISO 9001 and ISO 14001. It also outlines 10 control objectives and 38 controls. The controls, based on industry best practices, ask the organization to consider a lifecycle approach to developing and implementing AI systems, including conducting an impact assessment, system design (including verification and validation), control of the quality of data used, and processes for the responsible use of AI, to name a few. Perhaps one of the first steps organizations can take to protect themselves is to develop an AI policy that outlines how AI is used within the ecosystem of their business operations.

Using a globally accepted standard can give customers confidence (and address trust issues) that the organization is taking a process-based approach to responsibly performing its role with respect to AI systems.

To learn more about how QMII can support your journey should you decide to use ISO/IEC 42001, or to learn about our training options, contact our solutions team at 888-357-9001 or email us at info@qmii.com.  

-by Julius DeSilva, Senior Vice-President
