DC AG launches algorithmic bias bill

Law would levy $10,000 per violation for algorithmic bias

DC Attorney General Karl A. Racine has proposed legislation to combat algorithmic discrimination. The Stop Discrimination by Algorithms Act (SDAA) would bar companies from making automated decisions that discriminate against marginalized groups in the District. 

The Center on Privacy & Technology and the Communications and Technology Law Clinic (both part of Georgetown University Law Center) helped draft the bill, in conjunction with civil rights nonprofit Color of Change. 

The Act forbids discrimination by algorithms that make decisions about credit, education, housing, employment, and public services. Companies must also inform individuals about how they use algorithms to make those decisions. 

Organizations must document how they build their algorithms and how those algorithms make decisions. They must also file annual audit reports with the Attorney General in which they check their algorithms for discrimination. 

The proposed legislation comes with teeth, as it allows the Attorney General to penalize companies that violate the rules by up to $10,000 per violation. It also permits private lawsuits. 

The text of the bill warns that algorithmic decision-making systems that don't account for bias can harm marginalized groups. "Despite their prevalence and the potential problems they pose, algorithms are poorly understood by most individuals, in part because of the many entities involved and the lack of accountability among those entities," it said. 

The move prompted support from organizations and individuals including Timnit Gebru, the AI researcher who said Google silenced her concerns about its AI algorithms. 

"At this point it should be clear that multinational corporations will not self regulate," said the scientist, who this month launched the Distributed Artificial Intelligence Research Institute. "To the contrary, they push out people with the slightest criticism of their proliferation of harmful systems. Without laws requiring companies to assess the potential for discriminatory impact of their algorithms, what they do instead is eject people like me who attempt to do that internally, even though this was literally in my job description." 

The proposed legislation mirrors efforts at the federal level. In 2019, Democrats introduced an Algorithmic Accountability Act that would have pressed the Federal Trade Commission to create rules for assessing bias in automated systems. In May this year, Democrats tried again with the Algorithmic Justice and Online Platform Transparency Act of 2021. 

The FTC has already warned organizations to use AI responsibly, threatening potential action against those that fail to do so, and has issued guidance for companies to follow.
