We need standards for AI adoption in the public sector
Government and regulators need to act swiftly to keep up with the pace of innovation in artificial intelligence and its use across the public sector, says a new report by the UK’s Committee on Standards in Public Life.
Clear standards and greater transparency in the use of artificial intelligence (AI) are needed as technologically assisted decision-making is adopted more widely across the public sector, the Committee on Standards in Public Life warns in its new report on AI and public standards.
During its review of AI adoption and its impact on public standards, the committee found that there isn’t currently enough information available on where machine learning is being used in government.
“Artificial intelligence – and in particular machine learning – will transform the way public sector organisations make decisions and deliver public services,” said Jonathan Evans, chair of the committee. “Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector.
“Explanations for decisions made by machine learning are important for public accountability,” Evans continued. “Explainable AI is a realistic and attainable goal for the public sector - so long as public sector organisations and private companies prioritise public standards when they are designing and building AI systems.”
Addressing data bias
Data bias was highlighted as a serious concern in the report. The committee believes further work is needed on measuring and mitigating the impact of bias to prevent algorithmic discrimination in public services.
“Our message to government is that the UK’s regulatory and governance framework for AI in the public sector remains a work in progress and deficiencies are notable,” Evans said. “On transparency and data bias in particular, there is an urgent need for practical guidance and enforceable regulation.”
“All public bodies using AI to deliver frontline services must comply with the law surrounding data-driven technology and implement clear, risk-based governance for their use of AI. Government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards.”
Commenting on the new report, SAS UK & Ireland’s head of data science Dr Iain Brown said: “Making decisions with the help of intelligent machines is one of the most progressive steps the public sector has ever taken. Yet, on this mission to reach a new level of quality in public services, responsible and ethical use of AI should be the priority.
“SAS research suggests that, whilst those working with AI are enthusiastic about its potential, the greatest barrier to this potential comes from concerns over trust. Maintaining the trust of the public, through complying with the new recommendations and being proactive in offering visibility over how AI is used, is essential if the technology is to become mainstream in the public sector. This includes informing the public on how data is collected and ensuring that decisions made can be justified.
“Ethical frameworks like FATE provide key guidelines to help public services enable responsible decision-making. AI implementation should be fair in that it’s devoid of bias and discrimination; accountable with clear ownership of AI-powered decision making; transparent, meaning the methodology behind AI-generated decisions is clearly laid out; and explainable in that AI-enabled decisions must make clear sense.”