New Toronto Declaration calls on algorithms to respect human rights

Illustration by Alex Castro / The Verge

Today in Toronto, a coalition of human rights and technology groups released a new declaration on machine learning standards, calling on both governments and tech companies to ensure that algorithms respect basic principles of equality and non-discrimination. Called the Toronto Declaration, the document focuses on the obligation to prevent machine learning systems from discriminating against people and, in some cases, from violating existing human rights law. The declaration was announced as part of the RightsCon conference, an annual gathering of digital and human rights groups.

“We must keep our focus on how these technologies will affect individual human beings and human rights,” the preamble reads. “In a world of machine learning systems, who will bear accountability for harming human rights?”

The declaration has already been signed by Amnesty International, Access Now, Human Rights Watch, and the Wikimedia Foundation. More signatories are expected in the weeks to come. 

While not legally binding, the declaration is meant to serve as a guiding light for governments and tech companies dealing with these issues, similar to the Necessary and Proportionate principles on surveillance. It’s unclear how the principles would translate into specific development practices, although more specific recommendations on data sets and inputs may be developed in the future.

Beyond general non-discrimination practices, the declaration focuses on the individual right to remedy when algorithmic discrimination does occur. “This may include, for example, creating clear, independent, and visible processes for redress following adverse individual or societal effects,” the declaration suggests, “[and making decisions] subject to accessible and effective appeal and judicial review.”

In practice, that will also mean significantly more visibility into how popular algorithms work. “Transparency is integrally related to accountability. It is not simply about making users comfortable with products,” said Dinah PoKempner, general counsel at Human Rights Watch. “It is also about ensuring that AI is a mechanism that works for the good of human dignity.”

Many governments are already moving along similar lines. Speaking at RightsCon’s opening plenary session, Canadian heritage minister Mélanie Joly said algorithmic transparency efforts were crucial for the broader exchange of information online. “We believe in a democratic internet,” said Joly. “So for us, transparency of algorithms is really important. We don’t need to know the recipe, but we want to know the ingredients.”