
Information Commissioner consults on draft guidance for explaining use of AI in decision-making about individuals

The Information Commissioner’s Office has launched a consultation on new draft guidance for organisations using, or thinking about using, artificial intelligence (AI) to support or make decisions about individuals.

The guidance, which was produced with the Alan Turing Institute and can be viewed here, sets out four key principles, which the ICO said were rooted in the General Data Protection Regulation (GDPR).

The watchdog said organisations must consider these when developing AI decision-making systems. These principles are:

  • Be transparent: “make your use of AI for decision-making obvious and appropriately explain the decisions you make to individuals in a meaningful way.”
  • Be accountable: “ensure appropriate oversight of your AI decision systems, and be answerable to others.”
  • Consider context: “there is no one-size-fits-all approach to explaining AI-assisted decisions.”
  • Reflect on impacts: “ask and answer questions about the ethical purposes and objectives of your AI project at the initial stages of formulating the problem and defining the outcome.”

Writing on the ICO’s blog, Simon McDougall, the ICO’s Executive Director Technology and Innovation, said: “What do we really understand about how decisions are made about us using artificial intelligence (AI)? The potential for AI is huge, but its implementation is often complex, which makes it difficult for people to understand how it works. And when people don’t understand a technology, it can lead to doubt, uncertainty and mistrust.

“The decisions made using AI need to be properly understood by the people they impact. This is no easy feat and involves navigating the ethical and legal pitfalls around the decision-making process built into AI systems.”

The ICO previously said, in an interim report released in June, that context was key to the explainability of AI decisions.

McDougall said: “This remains key in the draft guidance, with some sections aimed at those that need summary positions for their work, and others including lots of detail for the experts and enthusiasts.

“Our draft guidance goes into detail about different types of explanations, how to extract explanations of the logic used by the system to make a decision, and how to deliver explanations to the people they are about. It also outlines different types of explanation and emphasises the importance of using inherently explainable AI systems.”

The consultation runs until 24 January 2020. The final version of the guidance will be published later in the year.

McDougall said the ICO would continue to work on its related AI project, developing a framework for auditing AI systems, on which it is also consulting.

See also: Government automated decision-making by Robin Allen QC and Dee Masters, who run the AI Law Hub and tweet at @AILawHub