The Australian Human Rights Commission (AHRC) has used a landmark report to call for a temporary ban on the use of facial recognition and other biometric technology in “high-risk” government decision making until new laws are developed.
It has also urged the federal government to appoint an independent “artificial intelligence safety commissioner” that would lead a multi-disciplinary taskforce on AI-informed decision making.
The long-awaited human rights and technology report [pdf], released on Thursday, makes 38 recommendations to embed human rights in the design and regulation of technologies like facial recognition and AI.
It is the result of three years of work to understand and address the implications for privacy, freedom of expression and equality, and to devise reforms to legislation and policies that apply to both the public and private sectors.
One of the central recommendations in the report is that the “federal, state and territory governments… introduce legislation that regulates the use of facial recognition and other biometric technology”.
The legislation would apply to use of the “technology in decision-making that has a legal, or similarly significant, effect for individuals, or where there is a high risk to human rights, such as in policing and law enforcement”.
Until governments develop the recommended legislation, however, the commission has recommended that “a moratorium on the use of facial recognition and other biometric technology” be introduced.
The moratorium, which the AHRC first called for in December 2019, would apply to “decision making that has a legal, or similarly significant, effect for individuals, or where there is a high risk to human rights”.
The commission considers high-risk areas – “contexts where the consequences of error can be grave” – to include policing, law enforcement and social security in government, and banking in the private sector.
“Our country has always embraced innovation, but over the course of our human rights and technology project, Australians have consistently told us that new technology needs to be fair and accountable,” Australian human rights commissioner Edward Santow said.
“That’s why we are recommending a moratorium on some high-risk uses of facial recognition technology, and on the use of ‘black box’ or opaque AI in decision making by corporations and by government.”
The AHRC has similarly recommended that the federal government introduce legislation requiring that agencies notify affected individuals “where artificial intelligence is materially used in making an administrative decision”.
Agencies should also “not make administrative decisions, including through the use of automation or AI, if the decision maker cannot generate reasons or a technical explanation for an affected person”.
Where agencies do use AI, a human rights impact assessment (HRIA) should be undertaken before an “AI-informed decision making system [is used] to make administrative decisions”, the commission said.
The AHRC also recommended that the federal government establish an AI safety commissioner as an independent statutory office to promote human rights in the development and use of AI in Australia.
The commissioner would “monitor and investigate developments and trends in the use of AI” and provide advice and guidance to government and the private sector on “how to comply with laws and ethical requirements in the use of AI”.
Guidance would encourage the private sector to “undertake a HRIA before using an AI-informed decision-making system” and help both the public and private sectors “generate reasons, including a technical explanation, for AI-informed decisions”.
The commissioner would also conduct an audit of “all current or proposed uses of AI-informed decision-making by or on behalf of government agencies”, which the AHRC said could help build trust in government.
“Australians should be told when AI is used in decisions that affect them,” Santow said.
“The best way to rebuild public trust in the use of AI by government and corporations is by ensuring transparency, accountability and independent oversight, and a new AI safety commission could play a valuable role in this process.”
The commission has also urged the federal government to introduce legislation that requires corporations to notify affected individuals when AI is used in a decision making process that “affects the legal, or similarly significant, rights of the individual”.
Elsewhere in the report, the AHRC has recommended that the federal government embed a “human rights approach in the procurement of products and services that use AI”.
It has also asked NBN Co to “implement a reasonable concessional broadband rate for people with disability who are financially vulnerable”, as it did during the height of the Covid pandemic.