5 suggested draft principles for Ethical Use of Data Analytics and AI

(Written on a personal basis – no endorsement or approval is implied by any organisation that I am associated with.)

Over the past couple of months I have been reading and thinking quite a lot about ethics in data analytics and artificial intelligence, as well as completing a Microsoft course on it.

What follows is my current suggested shortlist of 5 key principles for Ethics in Data Analytics and AI. In this list I try to bring together what I consider to be the most important principles arising not only from the Microsoft course, but also from several existing published frameworks (see note * below for a list). These frameworks tend to be much longer documents which, while very useful for reference, don't in my view meet the need for a quick document that practitioners and executives sponsoring, using or building AI projects are far more likely to read.

5 Suggested key principles for Data Analytics and AI work (DRAFT v0.2)

  1. Avoid harm to others (including by respecting their privacy, equality and autonomy, and speaking up about potential harm/violations of these principles)
  2. Increase societal well-being (including by sharing prosperity from AI benefits widely, and taking extreme care before introducing advanced AI that might lead to supremacy of AI intelligence)
  3. Professionalism: clean the data, treat data as an asset, comply with legal requirements and any applicable professional body codes, thoroughly assess and balance benefits v risks, keep models under review, and be flexible. Builders and owners of AI systems must take responsibility for outcomes.
  4. Act to preserve or increase trust (including via explainability as far as possible, and transparency and accountability – particularly where explainability is impossible; engage widely with diverse stakeholders, and build ethics into design)
  5. Retain human control: humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

Comments/criticisms most gratefully received!

Note (*): the sources I have drawn on in compiling the above list include:

Ethics and Law in Data and Analytics (Microsoft edX Course)

Discussions (still ongoing) with colleagues on the joint Institute and Faculty of Actuaries and Royal Statistical Society Data Science Focus Group, including outputs from joint workshops considering the Industrialisation and Professionalism of Data Science. Any errors in the draft principles are mine and mine alone however, and they should not be taken as being endorsed by anyone else at this stage!

The Partnership of the Future (Microsoft CEO Satya Nadella’s 6 principles for future AI work, June 2016).

Data Ethics Framework (from the UK Government’s Department for Digital, Culture, Media & Sport, published 13 June 2018 and updated 30 August 2018).

Seven IEEE Standards Projects Provide Ethical Guidance for New Technologies (from the Institute of Electrical and Electronic Engineers, IEEE, May 2017).

Ethical Guidance for Applying Predictive Tools within Human Services (MetroLab Network, September 2017).

AI Now 2017 Report (Alex Campolo, Madelyn Sanfilippo, Meredith Whittaker, Kate Crawford, AI Now 2017 Symposium and Workshop, January 2018).

Code of Ethics and Professional Conduct (Association for Computing Machinery, July 2018, see also here).

AI Principles (Asilomar conference, Future of Life Institute, January 2017).

I am grateful to Leisha Watson, Regulatory Lawyer at the Institute and Faculty of Actuaries for drawing most of the above to my attention.

Another Microsoft course completed: Introduction to Artificial Intelligence

I am pleased to report that I have just passed another Microsoft course, this time from the Microsoft Professional Program for Artificial Intelligence:

Introduction to Artificial Intelligence, with a final mark of 100%.

This was a fascinating course, providing a very good introduction to machine learning, text analysis, computer vision (including face recognition and video analysis) and conversation as a platform (chatbots and Natural Language Processing [NLP]).

[Image: certificate showing the final mark of 100% for DAT263x Introduction to Artificial Intelligence, October 2018]

Let your users ask “What’s my next step?” – a very useful AI addition to your apps

One example of how #AI can make it easier for your staff, customers or suppliers to interact with your software tools is to add a combined "Next Step / Tell me what you want to do" facility.

This uses natural language processing (NLP), combined with knowledge of who the user is (their role – for example a member of staff, a customer, a supplier, or a user with admin rights) and the context (which page or part of the app they are on, and what data they have stored in the system), to add two powerful new ways for the user to interact with the app, with minimal training:

What’s my next step?

On any page, simply clicking the Go button asks the system "What's my next step?".  The system then looks intelligently at the user's identity, role, data and location within the app and makes one or more suggestions as to what the user could usefully do next to make the most of the app.

Here are a couple of examples, taken from InQA’s WebPocketMoney application (referred to in this previous post).

Example 1: a new user has just registered and wonders what they should do.  They could consult the online help file, which will tell them that they need to register their family within the system.  But far more simply, they can just ask "What's my next step?" by clicking the Go button.  The system guides them step by step, telling them initially that they need to create one or more families in the system:
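To make the mechanism concrete, here is a minimal rule-based sketch of how such a "What's my next step?" engine might decide what to suggest. This is purely illustrative and not InQA's actual implementation: the `UserContext` fields, rules and wording are assumptions for the example, and in a real app the role, page and data would come from the user's session while the "Tell me what you want to do" side would call an NLP service.

```python
from dataclasses import dataclass, field

@dataclass
class UserContext:
    """Snapshot of who the user is and where they are in the app."""
    role: str                                  # e.g. "customer", "staff", "admin"
    page: str                                  # current page/part of the app
    data: dict = field(default_factory=dict)   # what the user has stored so far

def suggest_next_steps(ctx: UserContext) -> list:
    """Rule-based sketch of a 'What's my next step?' engine.

    Each rule inspects the user's role, location and stored data and,
    if it matches, contributes a suggestion. Rules are ordered so the
    most fundamental missing step is offered first.
    """
    suggestions = []
    if not ctx.data.get("families"):
        # Matches Example 1: a freshly registered user with no data yet.
        suggestions.append("Create one or more families in the system")
    elif not ctx.data.get("children"):
        suggestions.append("Add children to your family")
    elif ctx.page == "home":
        suggestions.append("Record this week's pocket money payments")
    if ctx.role == "admin" and not ctx.data.get("users_invited"):
        suggestions.append("Invite other users to your account")
    return suggestions

# A newly registered customer with no stored data is told to create a family:
new_user = UserContext(role="customer", page="home")
print(suggest_next_steps(new_user))
```

In practice the rules would likely be data-driven rather than hard-coded, but the shape of the logic – identity plus context in, ranked suggestions out – is the point of the sketch.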

Some reasons why your company/organisation should start using AI now

AI built in to the heart of user interfaces

Within a few short years, some companies and organisations will have adopted Artificial Intelligence (AI) in at least one part of their work: interfacing with their customers.  (I'm using "customers" in the widest sense of the word: it could be students in education, or patients in healthcare, for example.)

Imagine the following:

  • Instead of having to log in to a website or an application, the application simply recognises the user's face or voice.
  • Instead of having to click on a menu to navigate the app, the user can just talk to it, either by speaking or using a chatbot-type interface.
  • Instead of calling customer service (and being told "you are currently number two in a queue" or "Our business hours are 0900 to 1700 Monday to Friday, please call back during those times"), they can get an immediate response (24 hours a day, 365 days a year) from a chatbot.

If customers have a choice between interacting with one organisation in that way, or another in the more traditional way, I think they will vote with their feet.

It’s a straightforward matter of economics