Ethical artificial intelligence

By Hugh Miller
Principal
6 August 2019


As artificial intelligence becomes cheaper and easier to build and apply, it is natural that more attention has been given to the ethical issues that surround it.

Some examples are obvious – if a prediction model can be made more accurate (e.g. an insurance premium) using variables such as race, is it ethical to do so? However, there are other types of ethical issues, some subtle, that can arise too.

One interesting contribution to this topic is the EU's Ethics Guidelines for Trustworthy Artificial Intelligence, released in April 2019 and put together by a European group of artificial intelligence experts. By expanding to the term 'trustworthy', the guidelines recognise that artificial intelligence should be lawful and robust, in addition to ethical.

The most practical part of the document is a toolkit of things to consider across seven areas that speak to different aspects of artificial intelligence risk.

  1. Human agency and oversight: Including fundamental rights, human agency and human oversight
  2. Technical robustness and safety: Including resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility
  3. Privacy and data governance: Including respect for privacy, quality and integrity of data, and access to data
  4. Transparency: Including traceability, explainability and communication
  5. Diversity, non-discrimination and fairness: Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation
  6. Societal and environmental wellbeing: Including sustainability and environmental friendliness, social impact, society and democracy
  7. Accountability: Including auditability, minimisation and reporting of negative impact, trade-offs and redress.

Taken from: Ethics guidelines for trustworthy artificial intelligence 

For those interested, the full document is worth a read. Some thoughts I had on the way through:

  • Human agency (#1) is a challenge for prediction engines. Many models produce a recommended course of action, but they won't always contextualise that choice for the people making the decision, and they can lead people to trust a computer recommendation more than a human one; in medicine, for example, a treatment option often needs a significant amount of context to allow informed decision making. It also reminded me of David Wilheim's presentation from the 2017 IDSS, where car insurance repairer recommendations had to be useful while still promoting choice and agency.
  • Resilience to attack (#2) may range from serious hacking attacks on the model itself through to gaming (e.g. a user might fiddle with the inputs to an insurance rating engine to find cheap rates attached to unusual configurations). With cutting-edge vision recognition systems able to be fooled by stickers, it's a timely reminder that not all users of a system will act in good faith.
  • Transparency (#4) is an area that has developed significantly over the past decade. While models have grown more complex, there is now a variety of ways to unpack a model and see why a particular output is being produced for a certain set of inputs; a sketch of one such technique follows this list.
  • Fairness (#5) is also an important topic, albeit a more subjective one. The first challenge is even defining it, which was well covered in a paper by Chris Dolman and Dimitri Semenovich at the recent Actuaries' Summit; the second sketch below shows one common definition.
  • Environmental impacts (#6) are worth considering, as the demand for computer processing time increases with the complexity of some algorithms or the popularity of the product. For example, the Bitcoin network now consumes about as much electricity as a small-to-medium-sized country like Ireland, for a payments network that has significantly less throughput than networks like Visa or Mastercard. Much of this mining is done in China, where about half the power is generated from coal. While not strictly artificial intelligence, it's easy to believe that the development of currencies like Bitcoin did not have environmental concerns front and centre.
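
As a minimal sketch of the kind of model unpacking mentioned under transparency (#4): permutation importance shuffles each input in turn and measures how much predictive accuracy falls, giving a model-agnostic view of which inputs the model leans on. The model and data below are synthetic stand-ins (using scikit-learn), not anything taken from the guidelines.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic data standing in for a real modelling dataset.
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and record the fall in test accuracy;
    # large falls flag the inputs driving the model's outputs.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i in np.argsort(result.importances_mean)[::-1]:
        print(f"feature {i}: importance {result.importances_mean[i]:.3f}")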

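One common formalisation of fairness (#5) is demographic parity: do two groups receive the favourable model outcome at similar rates? A minimal sketch, with made-up predictions and an illustrative protected attribute rather than any real data:

    import numpy as np

    # Hypothetical model decisions (1 = favourable outcome) and a
    # made-up protected attribute splitting people into groups A and B.
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
    group = np.array(["A", "A", "A", "A", "A",
                      "B", "B", "B", "B", "B"])

    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()

    # The "disparate impact" ratio; one rule of thumb flags ratios below
    # 0.8, though, as the paper above discusses, no single definition of
    # fairness is universally accepted.
    print(f"group A rate: {rate_a:.2f}, group B rate: {rate_b:.2f}")
    print(f"ratio: {min(rate_a, rate_b) / max(rate_a, rate_b):.2f}")
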
More generally, it's easy to see that some high-profile uses of artificial intelligence will have to embrace more formal governance structures along the lines of the EU guidelines. Executives, regulators and consumers will all have questions about computer models that need answering, and artificial intelligence-based decisions will need to be defensible. This requires a good mix of technical skills, business understanding and concern for the public good; perhaps another opportunity for actuaries?

As first published by Actuaries Digital, 6 August 2019

