UK government using AI to decide benefit payments and spot sham marriages, investigation reveals

Certain nationalities and ethnicities may face discrimination from AI systems, critics say
Access to benefits can be determined in some cases by artificial intelligence (Yui Mok/PA Wire)
Saqib Shah, 24 October 2023

The government’s use of artificial intelligence (AI) risks producing discriminatory results against benefit claimants and ethnic minorities, an investigation has found.

A total of eight Whitehall departments and some police forces are using the burgeoning technology to make life-altering decisions about members of the public, the Guardian reported.

In one case, Labour MP Kate Osamor claimed that an algorithm used by the Department for Work and Pensions (DWP) to detect fraud may have led to dozens of Bulgarians having their benefits suspended.

Meanwhile, an internal Home Office evaluation seen by the Guardian showed that an algorithm used to flag potential sham marriages disproportionately singled out people from Albania, Greece, Romania and Bulgaria.

Several police forces are using AI tools and facial recognition cameras for surveillance and to predict and prevent crime. The investigation claims that when the cameras' sensitivity settings are dialled down – as they may be in an effort to catch more offenders – they incorrectly identify at least five times more black people than white people.

The findings come as the UK prepares to host an international summit on AI at Bletchley Park. The event is viewed as a means for the UK to stamp its authority on AI regulation, and grapple with the existential threat some luminaries, including Elon Musk, believe it poses.

But while the summit focuses on the headline-grabbing future of the technology, Britain is already harnessing AI in many areas that affect the lives of everyday people.

The wide-ranging use of AI in the public sector was uncovered after the Cabinet Office began encouraging departments and law enforcement to voluntarily disclose their use of the technology, specifically when it could have a material impact on the general public.

A separate database compiled by the Public Law Project also tracks the automated tools used by the government and ranks them based on transparency.

Experts and tech insiders have repeatedly warned that AI can reinforce biases that are ingrained in the datasets used to train the systems. After pressure from rights groups over the dangers of predictive policing and facial recognition surveillance, the European Parliament voted earlier this year to ban such systems in its draft AI Act.

The DWP told the Guardian that its algorithm does not take nationality into account. And both the DWP and the Home Office insisted that the processes they use are fair because the final decisions are made by people. The Met did not respond to the findings.

John Edwards, the UK’s information commissioner, said he had examined many AI tools being used in the public sector, including the DWP’s fraud detection systems, and not found any to be in breach of data protection rules.
