HB 2599 (AI in therapy) oral and written testimony
Oral testimony
Chair Bronoske, Ranking Member Schmick, members of the committee,
I'm Jon Pincus of Bellevue. I run the Nexus of Privacy newsletter, served on the state Automated Decision Systems Workgroup in 2022, and I am PRO on HB 2599. The need for regulation is clear. The developers of ChatGPT and other AI chatbots prey on people with mental health problems by misleadingly implying their bots can be used as therapists. And, as more and more therapists use AI as part of their work, guardrails are vital.
The bill prohibits a handful of especially dangerous practices like allowing AI software to make independent therapeutic decisions. Just as importantly, it requires disclosure and consent for other uses of AI. Cloud-based AI tools may well send data to other states; cloud-based AI transcription tools, for example, send the audio that's being transcribed. This means the data loses the protections of the Shield Law and Keep Washington Working. Most people seeking therapy don't realize that this can lead to details about their daily lives and their family members getting shared with law enforcement in Texas, ICE and CBP, and other hostile actors.
Disclosing this and giving them the opportunity to consent – or not – is a critical protection. The bill as written requires written disclosure and consent. People often sign these consent forms without reading them, so it's important to complement this with a requirement to repeat the disclosure and reaffirm consent at the start of each session. There are also some wording changes to Section 2 that would better protect people seeking therapy; I'll follow up with more in my written testimony.
Written testimony
Chair Bronoske, Ranking Member Schmick, members of the committee,
I'm Jon Pincus of Bellevue. I run the Nexus of Privacy newsletter, served on the state Automated Decision Systems Workgroup in 2022, and I am PRO on HB 2599.
HB 2599 responds to the urgent need for regulation on two related fronts: preventing the developers of ChatGPT and other "Artificial Intelligence (AI)" chatbots from preying on people with mental health issues by misleadingly implying their bots are therapists, and regulating the use of "AI" by licensed professionals. Both are vital. In this testimony, I focus on the latter, and make two specific recommendations (with proposed wording below):
- The language in Section 2(1)'s list of prohibited practices needs to be tightened.
- Section 2(2)'s requirements for disclosure of, and consent to, permitted uses of "AI" need to be strengthened.
In addition, as multiple speakers in the hearing pointed out, definitions need to be looked at carefully. On the one hand, it is important not to be over-broad and rule out potentially valuable uses by licensed professionals. On the other hand, it is also important not to leave loopholes that unscrupulous "AI" vendors can exploit to defeat the intent of the regulations.
The need for regulating "AI" use by licensed professionals is clear. The Digital Futures in Mind report, from 2022, is an excellent look at the expanding use of algorithmic and data-driven technologies in the mental health context. Since then, uses have expanded significantly. And many people are unaware of the risks. Anecdotally, when I have told health care professionals about the privacy risks of the "AI"-based systems they use, they are often shocked. While I personally am fortunate that I have always been asked for consent, others tell me that is not always the case.
A complication here is that the term "AI" is used in many different ways to refer to many different technologies. Indeed, researchers such as UW Professor Emily Bender (co-author of the book The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want and the famed On the Dangers of Stochastic Parrots 🦜 paper) caution against the use of the term. The bill as written contains two definitions of "artificial intelligence", in Sections 2(4) and 3(1). For brevity, in these comments I'll simply use the term "AI."
The intent of Section 2(1) is clear: prohibit especially dangerous practices. The licensed professional, not the "AI", should make therapeutic decisions, interact with clients in therapeutic communications, and generate therapeutic plans. And the racial biases of "AI"-based tools that detect or evaluate emotions have been known for years (see for example Emotion-reading tech fails the racial bias test), so they should not be used. Tighter wording would make the intent clearer:
(a) Make independent therapeutic decisions;
(b) Directly interact with clients in any form of therapeutic communications;
(c) Generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or
(d) Detect or evaluate emotions or mental states of clients.
An important motivation for the suggested change in 2(1)(c) is that research consistently shows that even when there are requirements for human oversight, people tend to assume that the outputs of "AI" and other automated decision systems are correct, so "review" is often just rubber-stamping them (see for example The Principles and Limits of Algorithm-in-the-Loop Decision Making). So if the proposed language is seen as too restrictive, and licensed professionals do see a need to have "AI" generate these therapeutic recommendations or plans, alternate language is needed to better protect patient safety. One possibility:
(c) Generate therapeutic recommendations or treatment plans without substantive review and approval by the licensed professional and a critical advisor; or
There was mention in the hearing of potential carve-outs in 2(1)(d) to accommodate situations where licensed professionals find the software useful despite its biases. A concern here is that, as Neuroscience News reports, People Miss Racial Bias Hidden Inside AI Emotion Recognition. However, if licensed professionals are trained in how to recognize and compensate for racial bias (and potentially other biases introduced by this technology), limited carve-outs may be appropriate.
Section 2(2)’s requirements for disclosure and consent for other uses of "AI" are extremely important. As The Digital Futures in Mind report notes (section 2.1.6, p. 54):
Rights of autonomy and decision-making have been a crucial concern in traditions of service user and survivor advocacy, activism, research and so on. Informed consent, which is a key component of upholding the right to privacy but also has far broader importance, is key to rights to autonomy and decision-making, as reflected in human rights instruments, such as the Convention on the Rights of Persons with Disabilities (see articles 3 (general principles) and 12 (equal recognition before the law)). Like other human rights instruments, as UN Special Rapporteur for the rights of persons with disabilities Gerard Quinn points out, the "Convention requires that consent should be informed, real, transparent, effective and never assumed’—and this is certainly the case in algorithmic and data driven developments."
For example, as I discussed in my oral testimony, patients need to know if their therapist's use of cloud-based "AI" transcription tools could lead to details of their daily lives being shared with ICE or CBP (or law enforcement in hostile states) because data has been sent to a state where the protections of Keep Washington Working and the Shield Law don't apply. If they are not informed of that, their consent is not meaningful.
This example also highlights the complexities. On-device "AI" transcription systems don't send the data anywhere, so they don't have the same privacy and safety risks. If a therapist asked me whether it was okay to use an on-device transcription tool, and I was confident that they’d check and correct the transcriptions, I would consent; if they were using a cloud-based tool, I would politely decline. Others might well make different decisions, of course; that’s the whole point of disclosure and consent! This is a great example of the importance of licensed professionals' experience, judgment, and training: they can use their professional expertise to make the appropriate choices. And disclosing and explaining it to their clients helps build a trusting relationship.
For consent to be meaningful, it needs to be informed, so the disclosure should clearly describe the uses – and the risks. Additional clauses therefore need to be added to Section 2(2)(a) along the lines of:
The patient or the patient's legally authorized representative is informed in writing of the following: (i) That artificial intelligence will be used; (ii) the specific purpose of the artificial intelligence tool or system that will be used; (iii) who will have access to the data; (iv) the licensed professional’s commitment to check the output of any tools for errors; (v) the risks of algorithmic discrimination and data breaches; and (vi) if the data leaves Washington state, that federal law enforcement and hostile state law enforcement may have access to it; and
Thank you for the opportunity to provide this feedback, and thank you to Rep. Kloba for bringing this bill. I look forward to working with you and the other stakeholders to refine it so that Washington state can set the bar for this urgently needed regulation.
Jon Pincus, Bellevue