
Testimony opposing Washington HB 1951, weak algorithmic discrimination legislation

Despite good intentions, the bill looks like it was written by tech lobbyists.

A building with a dome, surrounded by trees, reflected in a pond.
Washington's short legislative session is in high gear. Unfortunately, despite HB 1951's good intent of dealing with algorithmic discrimination, it looks like it was written by tech lobbyists. Here's my written testimony.

Chair Walen, Ranking Member Robertson, and members of the committee,

I'm Jon Pincus of Bellevue, a technologist and entrepreneur. I served on the Washington state Automated Decision Systems Workgroup in 2021, and write about algorithmic justice as well as other topics on The Nexus of Privacy Newsletter.

My position on HB 1951 is CON. While the goal of providing protections against algorithmic discrimination is vital, the bill as drafted does not actually provide meaningful protections to Washingtonians. Other witnesses at the hearing discussed several of the key problems with the bill, including the lack of requirements for transparency (aka notice) or opt-out, the narrowness of the definition of "automated decision tool" (Sec. 1(3)), and the lack of a private right of action.

Coming at it from the technical perspective, I'd also like to highlight the weakness of the impact assessment requirements in Section 3. As written, these impact assessments don't even require basic safeguards such as testing systems for bias or measuring real-world harms, and fall short of industry best practices. Microsoft's Responsible AI Impact Assessment Template, for example, includes identification of stakeholders, restricted and unsupported uses, sensitive uses, and consideration of the impact of potential failures and misuse on stakeholders -- including "identify and document whether the consequences of misuse differ for marginalized groups." None of these requirements appear in HB 1951. The discussion of proactive equity in design in the Algorithmic Discrimination Protections section of the White House Office of Science and Technology Policy (OSTP)'s Blueprint for an AI Bill of Rights similarly highlights other key features that should be required as part of impact assessments.

The principles in the Blueprint for an AI Bill of Rights also highlight how several issues witnesses discussed in the hearing directly impact Washingtonians. For example,

  • The Blueprint states that "You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you." HB 1951's lack of transparency requirements means that consumers, workers, and renters in Washington have no way of knowing whether they're being affected by AI-enabled decisions.
  • The Blueprint states that "You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter." HB 1951's lack of an opt-out requirement means that we can't protect ourselves from being harmed by racist, anti-LGBTQ+, or inaccurate AI systems.

I also want to echo Mr. Scherer's point about the narrowness of HB 1951's definition of an "automated decision tool" in Sec. 1(3). Limiting the scope only to systems that are "specifically developed and marketed to, or modified to, make, or be a controlling factor in making, consequential decisions" makes it easy for developers to be exempt from this definition by marketing the tool for a range of purposes. And it's just as easy for deployers to get around this definition by claiming that a tool's output is only one factor in a decision made by a human.

California's recently-issued draft regulations for automated decision technology, by contrast, have a broader definition. Consumer Reports describes California's broad definition as "smart", noting "It’s clear that decision-making tools can be risky even if a human is empowered to intervene. Humans can wind up rubber-stamping machine recommendations. Sometimes, adding human discretion to the process makes decisions more biased."  

While it might be tempting to just pass *something* in hopes of strengthening it later, I would strongly caution against this approach. In practice, once a low bar is established for algorithmic regulation, it's politically extremely difficult to raise it. New York City's Automated Employment Decision Tool Law is a case study of why a weak law doesn't protect people; as Albert Fox Cahn says, it's a "fig leaf held up as proof of protection from these systems when in practice, I don’t think a single company is going to be held accountable because this was put into law."

Please don't go down the same path with algorithmic regulation here in Washington state -- we deserve real protections, not just a fig leaf!

Jon Pincus, Bellevue, 98005