SB 5956 (AI student discipline and surveillance in schools) testimony and follow-on mail
Testimony
I'm Jon Pincus of Bellevue, a technologist and strategist. I run the Nexus of Privacy newsletter and served on the state Automated Decision Systems Workgroup in 2022. I am also here to support the very timely SB 5956.
Automated decision systems, including AI-based systems, are often discriminatory -- and Mr. Harris made a great point that this magnifies existing discrimination against minoritized students. Predictive risk scoring and watchlists, for example, tend to embed strong racial and gender biases. The bill is quite right to ban their use in schools.
The ban on using biometrics and facial recognition as surveillance technology is also very important. Surveillance isn't safety -- and these systems are also extremely discriminatory. Section 6's limitations on disclosing information to law enforcement are valuable as well. One suggestion here: make it clear that these limitations also apply to service providers. And with all the complexities related to these kinds of systems, the guidance and model policies are very much needed.
And finally, I applaud the bill's authors for regulating automated decision systems in general, not just AI systems. I hope this becomes a model for all Washington state legislation! With this and the other AI-related bills going forward, Washington really has a chance to be a national leader.
Thank you for the opportunity to testify today, and please feel free to follow up with me if there are any questions.
Follow-on mail
Chair Wellman, Ranking Member Harris, and members of the Committee,
Thank you for the opportunity to provide testimony on SB 5956. As you could probably tell from my testimony, I am very excited about this bill. Seeing that it's been scheduled for an executive session, I wanted to follow up with a few additional thoughts.
My suggestion that Section 6 clarify that the limitations on sharing with law enforcement also apply to service providers is an attempt to limit indirect data flows. This clarification should apply to vendors as well. The language in Section 5(6) of Sen. Trudeau's SB 6002 (Drivers Privacy Act, regulating ALPRs) could be a useful starting point:
"Any ... vendor must provide technical controls preventing unauthorized data sharing, secondary transfer, or access by nonauthorized agencies, including federal civil immigration enforcement in accordance with this chapter."
One complexity here is that when vendors or service providers are located in other states, the protections of the Shield Law and Keep Washington Working are undercut. To be honest, I'm not sure of the implications of the loss of protection in this context. Fortunately, Sen. Hansen is on this committee, and he is far more knowledgeable about the Shield Law than I am!
Speaking of Sen. Hansen, his question during the hearing about a pilot program that takes disciplinary history into account, along with other factors, to identify situations for intervention also brings up some interesting complexities. It seems to me that the current language in the definition of "risk scores" in Section 4(1)(a) does not prevent this in general. That said, it's possible that the specific automated decision system used in the pilot does in fact purport to measure a student's likelihood of misconduct or future disciplinary actions and use that as a factor in its decisions.
If so, the system may well introduce some discriminatory bias. I hope the pilot includes analysis to detect whether this is the case. Still, if I understood properly, the pilot is getting encouraging results. As long as minoritized students are benefiting equally, a narrowly scoped, time-limited exemption for that particular pilot program may be appropriate. That would allow the system to be modified to avoid this harmful practice before being rolled out more broadly.
Finally, the incident involving an AI-powered gun detection system that Derick Harris of Black Education Strategy Roundtable mentioned in his testimony really highlights the importance of Sections 9 and 10's guidance and model policies. From "Student handcuffed after Doritos bag mistaken for a gun by school’s AI security system":
"Kenwood Principal Kate Smith said the school district's security department reviewed and canceled the gun detection alert after confirming there was no weapon, according to a statement sent to parents that was shared with CNN. Smith said she reported the matter to Kenwood's school resource officer, who called local police for support.
The principal didn't immediately realize the alert had been canceled, a spokesperson for Baltimore County Public Schools told WBAL."
Ensuring that people – not automated systems – are responsible is incredibly important. That said, people don't necessarily know how to interpret the results of automated systems, often assume those systems are correct, and in stressful situations involving the threat of violence may well default to overreacting. So guidance on appropriate processes, along with model policies, is an equally important complement to the human-centered approach.
Please advance SB 5956 at the executive session this week. As it moves forward in the process, please let me know what I can do to help!
Jon Pincus, Bellevue, 98005