HB 2157 (High-risk AI systems) testimony, follow-up mail, and thoughts on amendments

Testimony

I'm Jon Pincus of Bellevue. I'm a technologist and strategist, run the Nexus of Privacy newsletter, served on the state Automated Decision Systems Workgroup in 2022, and I am PRO on HB 2157.

High-risk AI systems today are often deployed without serious consideration of algorithmic discrimination or other risks, so regulation is clearly needed. The approach of risk mitigation and impact assessments aligns with industry best practices. Transparency requirements for disclosure and explanation are also welcome, and a civil remedy is important as well -- especially given the constrained fiscal circumstances.

As the bill moves forward, I hope you will look for opportunities to strengthen it. For example, these requirements should apply to any high-risk automated decision system, as is the case in Senator Nobles' SB 5956. And the Civil Remedy language in Section 5 could be strengthened by including a penalty for repeated violations and further limiting -- or even removing -- the right to cure.

And please take the claims from tech lobbyists that this bill will cause the sky to fall with a grain of salt. They always say that. I just checked, and the sky has not yet fallen.

Thank you for the opportunity to testify today, and please feel free to follow up with me if there are any questions.

Follow-up mail

Chair Ryu, Ranking Member Barnard, and members of the Committee,

Thank you for the opportunity to provide testimony on HB 2157. Seeing that this bill has been scheduled for an executive session this week, I want to reiterate my support: please advance this bill, and look for opportunities to strengthen it in this committee and throughout the process. Here are several suggested improvements, the rationale for which is described in more detail below:

  • Remove 1(10)(b)(v), the exemption for autonomous vehicles
  • In 1(3)'s definition of consequential decision, add new clauses covering bodily injury and/or death and impacts to public safety
  • Treat violations as an unfair and deceptive business practice under the Consumer Protection Act (chapter 19.86 RCW), instead of current language in Section 5
  • Remove 5(2), the right to cure

One point that may not have come across in the testimony is that the requirements in Sections 2 and 3, like doing risk analysis, risk mitigation, and impact assessments, are really "systems engineering 101" for any system that is broadly deployed, let alone used in high-risk situations. But that doesn't mean it always happens! Take the recent blackout in San Francisco, where Waymo's robotaxis blocked streets and snarled traffic, including getting stuck at intersections with their hazard lights turned on. A city-wide power failure is exactly the kind of thing that Waymo – and SF and California regulators – should have considered up front before deploying autonomous vehicles at scale, making sure the system would fail gracefully.

Obviously, though, they hadn't.

And from a business perspective, why should Waymo bother? Under California's current laws and regulatory regime, they were able to get approved for deployment without doing it. Even after this fiasco, they don't appear particularly concerned that they'll face any significant consequences in San Francisco (they're currently refusing to disclose how many of their robotaxis stalled during the blackout, claiming it's a trade secret), let alone other cities.

HB 2157's Sections 2(1) and 3(1) change the incentives for developers and deployers of high-risk AI systems by establishing a duty of care. If a developer or deployer (or company that acts as both developer and deployer) is actually doing the risk analysis, mitigation, and impact analysis as they should, compliance should be straightforward.

Of course, the devil is in the details. For one thing, the current version of 2157 doesn't apply to Waymo – autonomous vehicles are exempt under Section 1(10)(b)(v). I'm not sure just why they're exempt; from an algorithmic discrimination perspective, research shows that driverless cars are worse at detecting darker-skinned pedestrians and children. This points to one straightforward way to strengthen the bill: remove this exemption and take a close look at the others in 1(10)(b). And even once this exemption is removed, it's not clear that the definition of consequential decisions in 1(3) covers whether or not a vehicle hits darker-skinned pedestrians and children, so that definition is also worth looking at – for example, adding clauses covering bodily injury and/or death as well as impacts to public safety.

Another particularly devilish detail is enforcement. Thinking about the other testimony in the session, it strikes me that tech lobbyists are trying to create a false dilemma where it's impossible for the legislature to regulate their industry:

  • They oppose any private right of action -- even the extraordinarily mild one in the current version of the bill, which offers only injunctive relief, without any penalties even for repeated violations
  • On the other hand, especially given our constrained fiscal circumstances, any significant budget for AG enforcement will make it very difficult to pass anything.

I know, I know, the lobbyists are just doing their job. But still, strong enforcement is needed for the bill to actually change incentives for the lobbyists' clients.

During the hearing, Yuki Ishizuka from the Attorney General's Office mentioned that they would like AG enforcement, and I very much agree. At the same time there's value in making the private right of action stronger. One good solution, taken by HB 2225 as well as My Health My Data and other bills, is to treat violations as an unfair and deceptive business practice under the Consumer Protection Act (chapter 19.86 RCW). After all, if risk mitigation and impact analysis hasn't been done on a system, it is indeed deceptive to claim it is suitable for a high-risk environment – and unfair to everybody who's potentially harmed by the use of the system.

The right to cure also relates to the need for strong enforcement as well as the fiscal challenges. As Mr. Ishizuka pointed out, it's a barrier to enforcement for the AGO; that's just as true for civil enforcement. In testimony on past privacy bills the AGO has also described a right to cure as a drain on enforcement resources, so removing it will also help address the fiscal constraints. And the right to cure interferes with the attempt to change incentives: it encourages developers and deployers of high-risk AI systems not to worry about compliance until the AGO or somebody else notices that they haven't complied and pays the legal costs to begin a civil action. The best solution here is to remove the right to cure. If that is for some reason politically impossible, then please work with the AGO to craft limits on the right to cure that will lessen its impact.

I realize that it may be challenging to make all of these improvements before advancing the bill. Fortunately, there will be opportunities for additional discussions and potential amendments later in the legislative process. Then again, tech lobbyists will be pressing to weaken the bill as well, so please advance as strong a bill as you can!

Thank you again for the opportunity to provide input, and I look forward to working with you on this and other bills throughout the legislative session.

Thoughts on Amendments

Chair Ryu, Ranking Member Barnard, and members of the Committee,

Thank you for your ongoing work on this bill. I wanted to weigh in quickly on the proposed substitute. I appreciate the detail in the summaries; that said, I haven't had time to do a detailed comparison, so I am relying on the summary's accuracy.

The first bulleted item – extending the definition of protected classes – is an important improvement.  

I'm disappointed that the Section 5 language on remedies has remained unchanged; I'm concerned that, with limits on the AGO's resources, this will lead to a situation where the likelihood of enforcement is so low that the bill will not have the necessary effect. However, to some extent this is a fiscal issue, so it can be addressed in Appropriations.

I'm also disappointed that the exemption for autonomous vehicles is still there and that systems used in situations where there is a risk of bodily injury or death are not considered high risk. Maybe there's some logic behind it, but this just seems like a loophole to me. Fortunately this is the kind of small change that is very easy to make as a floor amendment.

I don't have much to say about the other changes; on the surface they seem like useful clarifications, but on short notice it's very hard to tell whether they introduce loopholes.

Even though important improvements are still needed, there are opportunities to address them. Please adopt the proposed substitute and DO PASS 2157!