Can We Trust AI? 

We see Artificial Intelligence (AI) all around us, in uses that are plainly visible as well as in uses hidden from view. It is here to stay, yet as we learn to live with it there remains a concern about whether we can fully trust it. Hollywood has painted a picture of the rise of the machines that instills fear in some of us: fear of AI taking over jobs, of AI diminishing human intelligence, and of AI being used for illegal purposes. The fear itself is an old one; E.M. Forster explored it as early as 1909 in "The Machine Stops." In this article we discuss the actions organizations can take to build trust in AI so that it becomes an effective asset.

What does it mean to trust an AI system? 

For people to begin to trust AI there must be sufficient transparency about what information the AI has access to, what its capabilities are, and what programming its outputs are based on. While I am no AI guru, I have followed the field's development over the last seven to eight years and delved into several types of AI; IBM has a helpful article that outlines them. I recently asked ChatGPT for information and realized its answers were outdated by at least a year. To better understand how we can trust AI, let us look at the factors that contribute to AI trust issues.

Factors Contributing to AI Trust Issues 

A key trust issue arises from the algorithm used within the neural network that delivers the outputs. Another is the data those outputs are based upon: knowing what data the AI is using is essential to trusting its output. It is also important to know how well the algorithm was tested and validated prior to release. AI systems are first run against a test data set to determine whether the neural network produces the desired results; the system is then tested on real-world data and refined. AI systems may also carry biases introduced by their programming and training data. Companies face security and data privacy challenges when using AI applications as well. Additionally, as stated earlier, there remains the issue of misuse of AI, much as cryptocurrency was misused in its early phases.
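
To make that test-and-refine cycle concrete, here is a minimal sketch in Python, assuming scikit-learn and an entirely synthetic dataset (the features, labels, and subgroup attribute are all hypothetical): the model is validated on a held-out test set, and accuracy is compared across subgroups as a crude bias check.

```python
# A minimal sketch of pre-release validation: hold out a test set,
# measure overall accuracy, then compare accuracy across subgroups
# as a crude bias check. Data and model here are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # synthetic feature matrix
group = rng.integers(0, 2, size=1000)     # a hypothetical sensitive attribute
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
preds = model.predict(X_test)

print(f"Overall accuracy: {accuracy_score(y_test, preds):.2f}")
for g in (0, 1):
    mask = g_test == g
    print(f"Group {g} accuracy: {accuracy_score(y_test[mask], preds[mask]):.2f}")
```

In practice the held-out data, the metrics, and the subgroup definitions would come from the organization's own validation plan rather than synthetic data like this.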

What can companies do to improve trust in AI? 

While there is much for organizations to do to address the issues listed above, and it may take a few years to improve public trust in AI, companies developing and using AI systems can take a systems-based approach to implementing them. The International Organization for Standardization (ISO) recently published ISO/IEC 42001, Information technology – Artificial intelligence – Management system. The standard provides a process-based framework to identify and address AI risks effectively, with the commitment of personnel at all levels of the organization.

The standard follows the harmonized structure of other ISO management system requirement standards such as ISO 9001 and ISO 14001. It also outlines nine control objectives and 38 controls. The controls, based on industry best practices, ask the organization to take a life-cycle approach to developing and implementing AI systems, including conducting an impact assessment, designing systems (with verification and validation), controlling the quality of the data used, and establishing processes for the responsible use of AI, to name a few. Perhaps one of the first protective steps an organization can take is to develop an AI policy that outlines how AI is used within the ecosystem of its business operations.

Using a globally accepted standard can give customers confidence (and address trust issues) that the organization is taking a process-based approach to responsibly performing its role with respect to AI systems.

To learn more about how QMII can support your journey should you decide to use ISO/IEC 42001, or to learn about our training options, contact our solutions team at 888-357-9001 or email us at info@qmii.com.  

-by Julius DeSilva, Senior Vice-President

Responsibly Implementing Artificial Intelligence

Artificial Intelligence (AI) entered our lives stealthily and before long became an integral part of all we do, from choosing a playlist, to self-driving cars, to providing service desk support. Some people have openly embraced AI, while others approach it more cautiously, wary of domination and a 'rise of the machines.' Along with the opportunities AI presents come risks, and therefore responsibility. In December 2023 ISO published a management system standard, ISO/IEC 42001, that provides a framework for organizations seeking a process-based approach to managing the risks and opportunities associated with the use of Artificial Intelligence.

What is an AI system?

As defined by ISO/IEC 22989, an artificial intelligence system is an engineered system that generates outputs such as content, forecasts, recommendations, or decisions for a given set of human-defined objectives. Artificial intelligence can be further broken down into subcategories, from weak AI to strong AI. There are also various associated terms used within the industry that fall within the realm of AI systems, including autonomous AI systems, machine learning, and cognitive computing, to name a few.

An integrated standard approach

In structuring the standard, ISO/IEC follows the harmonized ten-clause structure used by standards such as ISO 9001 and ISO 45001. This makes it easier for organizations seeking to integrate the requirements into their existing management system. Like other ISO management system standards, ISO/IEC 42001 is not prescriptive within its clauses. It does, however, like ISO/IEC 27001, include an annex of controls that must be considered and whose exclusion must be justified. Annex A has a total of 38 controls split among nine control objectives. As a risk-based standard, it requires organizations to conduct an impact analysis, conduct a risk assessment, and then implement controls to treat the risk to an acceptable level, as sketched below.
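
One illustrative way to picture that impact-then-risk-then-controls sequence is the following Python sketch. The scoring scale, acceptance threshold, and example risk are all hypothetical and are not drawn from the standard itself.

```python
# An illustrative risk register entry: score = likelihood x severity,
# and any risk above an (arbitrary) acceptance threshold gets treatment
# controls assigned. Scales and threshold are hypothetical.
from dataclasses import dataclass, field

ACCEPTABLE_RISK = 6  # hypothetical acceptance threshold on a 1-25 scale

@dataclass
class AIRisk:
    description: str
    likelihood: int                       # 1 (rare) .. 5 (almost certain)
    severity: int                         # 1 (negligible) .. 5 (critical)
    controls: list = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

    def needs_treatment(self) -> bool:
        return self.score > ACCEPTABLE_RISK

risk = AIRisk("Training data under-represents key user groups",
              likelihood=4, severity=4)
if risk.needs_treatment():
    risk.controls.append("Data quality review per the data controls of Annex A")
print(risk.score, risk.controls)
```

The point of the structure, not the numbers, is what matters: each identified risk is scored, compared against what the organization has declared acceptable, and treated until it falls below that line.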

ISO/IEC 42001 control areas

The nine control objectives of Annex A intend to:

  • Provide management commitment and direction
  • Establish organizational accountability
  • Determine and provide resources
  • Assess the AI system impacts
  • Provide a framework for managing the AI system life cycle
  • Control data used within AI systems
  • Provide a framework for communication with interested parties
  • Ensure responsible use of AI systems
  • Manage relationships

ISO/IEC 42001 also references the NIST AI Risk Management Framework, developed to better manage the risks to individuals, organizations, and society associated with artificial intelligence (AI).

Next Steps for Companies Seeking to Align with ISO/IEC 42001

If your organization seeks to demonstrate responsible use of AI systems and chooses to align with the ISO/IEC 42001 framework, the next steps would be to:

  1. Conduct an "As-Is" assessment – Identify what controls and resources are already in place within the existing management system (see the sketch after this list).
  2. Conduct an Impact Assessment – Annex A controls provide a structure for achieving this, and Annex B provides further guidance. This requirement supports the requirements of the EU AI Act. Inputs to the assessment will come from an understanding of the organizational context and the needs of interested parties.
  3. Conduct a Risk Assessment – Identify potential risks and opportunities for users and society. The assessment should include the implications of deploying AI systems.
  4. Develop Risk Treatment Controls – Identify the measures the organization will implement to mitigate the risks to an acceptable level, and then a plan to verify the effectiveness of the controls implemented.
  5. Implement and monitor the controls and the system, with an aim of driving continual improvement and ensuring the responsible use of AI.
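
As a crude illustration of step 1, an "As-Is" gap review can be as simple as marking which control areas are already covered by the existing management system and reporting what is missing. The control names below are shorthand placeholders, not the Annex A wording.

```python
# A sketch of a simple "As-Is" gap assessment: record which control
# areas the existing management system already covers and report the
# gaps. Control names here are illustrative shorthand only.
existing_controls = {
    "AI policy": True,
    "Impact assessment process": False,
    "AI system life-cycle controls": True,
    "Data quality controls": True,
    "Third-party relationship management": False,
}

gaps = [name for name, in_place in existing_controls.items() if not in_place]
coverage = 1 - len(gaps) / len(existing_controls)
print(f"Coverage: {coverage:.0%}")
print("Gaps to address:", ", ".join(gaps) if gaps else "none")
```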

To learn more about how QMII can support your implementation of ISO/IEC 42001, reach out to QMII's solutions team at info@qmii.com or call us at +1 (888) 357-9001.

-By Julius DeSilva, Senior Vice-President