AI discrimination potential explored
By Resolve Editor Kate Tilley
The potential for artificial intelligence (AI) to create discrimination in insurance pricing and underwriting was explored in a webinar as part of Dive In, a festival for diversity and inclusion in insurance.
Australian Human Rights Commissioner Lorraine Finlay joined Dr Fei Huang, UNSW Senior Lecturer, School of Risk and Actuarial Studies, and Chris Dolman, Chair of the Actuaries Institute’s Anti-Discrimination Working Group, in a session chaired by Sarah Tuhtan, Corporate Counsel with BMS Bluebook and AILA’s Queensland President.
The panel delved into the Australian Human Rights Commission and the Actuaries Institute’s December 2022 guidance resource, Artificial intelligence and discrimination in insurance pricing and underwriting.
The guidance resource says AI can enable good, data-driven decision making and be used to analyse large amounts of data quickly, accurately and cost-effectively.
“While AI promises faster and smarter decision making, AI-informed decision making carries certain risks. It can assist in identifying and addressing bias or prejudice that can be present in human decision making, but it can also perpetuate or entrench such problems. It can sometimes result in decisions that are unfair or even discriminatory,” the resource says.
Algorithmic bias
Algorithmic bias arises where AI is used to produce outputs that treat one group less favourably than another, without suitable justification. “Algorithmic bias can include statistical bias and may result in unfairness and, in some circumstances, unlawful discrimination. It can arise through problems with the data being used by the AI system, or problems with the AI system itself,” the resource says.
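The resource’s distinction between problems with the data and problems with the AI system itself can be made concrete with a small numerical example. The sketch below is a hypothetical illustration in Python; the groups, the base cost and the 20 per cent loading are all invented. It shows a model trained on historically distorted claims records reproducing that distortion, even though both groups carry identical underlying risk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: both groups have identical true expected claim cost.
true_cost = 500.0
n = 5_000

# Historical recorded costs: group B's are inflated 20 per cent by, say,
# past claims-handling practice. This is a data problem, not a model problem.
cost_a = rng.gamma(2.0, true_cost / 2.0, n)
cost_b = rng.gamma(2.0, true_cost / 2.0, n) * 1.2

# A pricing model fitted to this history simply reproduces the distortion.
print(f"learned price, group A: {cost_a.mean():.0f}")  # about 500
print(f"learned price, group B: {cost_b.mean():.0f}")  # about 600
```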
Ms Finlay told the webinar there was uncertainty about how Australia’s discrimination laws apply to AI-generated decision making, hence the need for the guidance resource.
Mr Dolman agreed, saying there were many questions and the Actuaries Institute was “excited and encouraged by” the resource because it provided guidance to answer some of them at low cost. It focused on the federal legislation and particularly the four protected attributes – race, age, disability and sex.
The guidance resource notes that race includes colour, descent, national or ethnic origin, or immigrant status. The Sex Discrimination Act includes sexual orientation, gender identity, intersex status, marital or relationship status, pregnancy or potential pregnancy, breastfeeding or family responsibilities.
Two possible exemptions
Some discrimination acts have exemptions that mean discrimination by insurers may be lawful in certain circumstances.
For example, the Age Discrimination Act and the Disability Discrimination Act provide that discrimination on the basis of age or disability in the provision of insurance, whether by refusing to offer a product or by imposing the terms or conditions on which it is offered, is not unlawful if the discrimination:
- is based on actuarial or statistical data on which it is reasonable to rely, and the discrimination is reasonable having regard to the matter of the data and other relevant factors (the data exemption), or
- where no such actuarial or statistical data is available and cannot reasonably be obtained, is reasonable having regard to any other relevant factors (the no data exemption).
Dr Huang told the webinar discrimination was not new but had become more complex because of the speed of AI. More guidance was needed, and insurance was a good starting point because it was socially acceptable to charge different prices for cover, but it was not acceptable for groups to be disproportionately affected by an algorithm. “Even basic concepts like ‘what’s fair’ differ; there’s no one size fits all, it depends on the context.”
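Her point that even “what’s fair” is contested can be shown with a few lines of arithmetic. In the hypothetical sketch below (all names and figures are invented), the same set of premiums satisfies one common definition of fairness and fails another, and equalising the premiums would simply reverse which test passes.

```python
# Hypothetical illustration: suppose group A genuinely has a higher
# expected claim cost, and premiums are set equal to that cost.
expected_cost = {"group_a": 600.0, "group_b": 400.0}
premium = {"group_a": 600.0, "group_b": 400.0}

# Definition 1, "demographic parity": groups should pay the same on average.
gap = premium["group_a"] - premium["group_b"]
print(f"demographic parity gap: {gap:.0f} (fails if not 0)")

# Definition 2, "actuarial fairness": premium should match expected cost.
for g in premium:
    diff = premium[g] - expected_cost[g]
    print(f"{g}: premium minus expected cost = {diff:.0f} (fails if not 0)")

# These prices fail definition 1 and pass definition 2; equal premiums
# would do the opposite. Which definition should govern is a question of
# context and law, not mathematics.
```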
She said the guidance resource “doesn’t prevent all problems but shows what’s likely right or wrong”, using sample case studies.
Ms Finlay said Australian discrimination law prohibited direct discrimination on the basis of one of the four protected attributes, but also covered indirect discrimination.
Indirect discrimination
The guidance resource says indirect discrimination occurs when a term, condition, requirement, or practice that applies to everyone disadvantages people with a protected attribute, and the requirement is not reasonable in the circumstances.
For example, an insurer who requires all customers to prove their identity by providing a driver’s licence is likely to indirectly discriminate against anyone who is unable to drive because of a disability. A person with disability who is unable to comply with the requirement may be denied insurance, even though it would be reasonable to allow them to prove their identity in another way.
Mr Dolman warned that if biased data was used to inform AI, the bias would be reflected in the model. But removing protected attributes from the data could “make things worse”. “There are circumstances when some bias can legitimately remain.”
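This is the proxy problem. The sketch below is a hypothetical illustration: the postcode variable, the loading and all figures are invented. It shows why deleting a protected attribute from the data does not remove a historical loading carried by a correlated rating factor, and why the attribute itself may be needed to detect the effect.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 10_000
protected = rng.integers(0, 2, n)          # invented protected attribute (0/1)
# An invented proxy: postcode is strongly correlated with the attribute.
postcode_risky = (protected == 1) | (rng.random(n) < 0.1)

# Historical premiums carry a loading on the protected group.
premium = 500 + 100 * protected + rng.normal(0.0, 20.0, n)

# "Fairness through unawareness": drop the protected column and price on
# postcode alone. The loading survives through the proxy.
print(f"price in 'risky' postcodes: {premium[postcode_risky].mean():.0f}")
print(f"price elsewhere:           {premium[~postcode_risky].mean():.0f}")

# Detecting this requires the protected attribute itself: without it, the
# link between postcode and the loading is invisible to the insurer.
```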
He said insurers could not rely on the no data exemption unless they could show they had made “reasonable efforts” without success to find appropriate data.
Ms Finlay said there was a need to bridge the gap between theory and practice. The resource could provide practical guidance, but there was no perfect answer. It was important to consider the impacts of technology on human rights to ensure those impacts were positive.
Tips for insurers
The resource offers eight tips for insurers to help minimise the risks of a successful discrimination claim when using AI for insurance pricing and underwriting:
- Consider carefully whether (and how) protected attributes are likely to be related to risk for each type of insurance. The considerations should (where possible) be based on data. If such a relationship is likely:
  - For protected attributes with an insurance exemption, collect data on which it is reasonable to rely, and base any discrimination on that data, in accordance with the requirements of the data exemption. If such additional data cannot reasonably be obtained, consider whether the no data exemption might apply.
  - For protected attributes without an insurance exemption, the protected attribute should not be used directly in price setting. An insurer should also take steps to ensure its prices are reasonable and not indirectly discriminatory. That may include using a protected attribute within underlying pricing models, where data is available, to test for or mitigate indirect discrimination.
- Check data for representativeness, accuracy, errors, omissions or other issues. Model outputs used as input data for insurance risk models should also be checked. If issues are identified with the data:
  - Data may be pre-processed to address issues such as missing values or errors.
  - An insurer might consider obtaining more or different data if there are issues of representativeness, biases in accuracy, or other structural issues that may disproportionately impact protected groups.
- Ensure customers can understand why higher premiums may apply and what they can do to reduce their risk exposure and hence their premiums.
- Document insurance pricing decisions, including the reasons for those decisions. The documentation will be helpful in explaining to a court, if necessary, why decisions were made and why the insurer considers they are reasonable. The documentation may also be a valuable risk management tool, enabling greater transparency and understanding of pricing decisions within the insurer, which in turn may help identify any risks of unlawful discrimination.
- Where appropriate, give customers reasons for decisions. Explaining the reasons may help a customer understand why an insurer considers its decision was not discriminatory, and potentially prevent a discrimination claim being brought at all. An option for human review of automated decisions may be a useful risk mitigation strategy for various issues, including discrimination.
- Test and monitor models and their outputs. Test prices (wherever possible) to assess whether they might give rise to claims of indirect discrimination, particularly whether pricing decisions would be considered reasonable in the circumstances; a simple version of such a test is sketched after this list. Monitoring processes may include automated and human routines.
- Ensure relevant decision-making staff are suitably trained in concepts of discrimination.
- Seek legal advice if unsure of the correct course of action or concerned about breaching anti-discrimination legislation.
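On the testing and monitoring tip, a minimal routine might look like the sketch below. It is a hypothetical illustration only: the groups, the injected 15 per cent disparity and the simple ratio check are invented, and whether a flagged disparity is “reasonable in the circumstances” remains a legal judgment rather than a statistical one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for model output: premiums for 20,000 policies, with group
# membership held (or proxied) purely for testing purposes.
n = 20_000
group = rng.choice(["group_a", "group_b"], size=n)
premium = rng.gamma(2.0, 250.0, n)
premium[group == "group_b"] *= 1.15   # injected disparity for the demo

means = {g: premium[group == g].mean() for g in ("group_a", "group_b")}
ratio = max(means.values()) / min(means.values())

print({g: round(m, 1) for g, m in means.items()})
print(f"highest-to-lowest group mean ratio: {ratio:.2f}")

# A ratio well above 1 flags a disparity to investigate: it may be
# justified under the data exemption, or it may point to indirect
# discrimination that needs to be explained, documented or fixed.
```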
Dr Huang told the webinar the guidance resource was “a great starting point” but covered only underwriting and pricing. “I want a clear ethical framework to inform the industry and [methodology] to audit performance. We have a long way to go.” She was keen to “understand the pain points for the industry so I can help resolve them”.
Mr Dolman said the resource was “the start not the end” and called for greater collaboration to get answers.
Ms Finlay said “the law hasn’t kept up with reality and the need for explainability in AI-generated decision making”.
For more information, here are two articles co-authored by Dr Huang: