SB 5984 (AI Companion Chatbots): House TDE&V written testimony
SB 5984 made it through the Senate, and so its next step is a hearing and executive session in the House policy committee. Meanwhile its companion bill HB 2225 made it through the House on the very last day before the cutoff, skipped a hearing in the Senate, and was having its exec session at the same time as this hearing. Presumably all of this is just to give maximum flexibility on which chamber to bring the bill to the floor in, just in case floor time gets congested (as it always does).
For this hearing, my original plan was to continue to focus primarily on extending the prohibitions on manipulative engagement techniques to all users. Much to my delight, though, the first person to testify was Dr. William Agnew, previously at UW and now a postdoctoral fellow at CMU, who discussed exactly that issue and did a much, much better job than I would have. Then Dr. Jared Moore, also previously at UW and now at Stanford, dug into it as well; and after I spoke, so did Dr. Eric Lin, an independent researcher. Since nobody else at the hearing focused on the privacy aspects, that's what I tackled in my testimony. My live testimony wasn't the crispest – I was rewriting it during the hearing, so I didn't get to time-test it, and wound up not getting into the details on the privacy aspects – so here's my written testimony, with a few typos fixed.
Chair Ryu, Ranking Member Barnard, and members of the Committee,
I'm Jon Pincus of Bellevue. I run the Nexus of Privacy newsletter, and served on the state Automated Decision Systems Workgroup in 2022. My position on SB 5984 is OTHER.
The harms are very real. Legislation is needed and appropriate. But the current version of the bill needs to be improved to protect Washingtonians.
My testimony focuses on two specific areas: privacy protections, and the prohibition on manipulative engagement techniques.
Privacy protections
The current version of the bill does not include any privacy protections. Chatbot operators are increasingly moving towards an ad-supported model — and also looking for other ways to monetize. So these protections are critical, and similarly should apply to all users.
The fix here is straightforward: simply treat all data provided to an AI Companion Chatbot as consumer health data under RCW 19.373.010(8), the My Health My Data Act.
Today, some of the data provided to a companion chatbot clearly already is consumer health data. My Health My Data's definition of consumer health data is appropriately broad: "personal information that is linked or reasonably linkable to a consumer and that identifies the consumer's past, present, or future physical or mental health status." This includes, but is not limited to: individual health conditions, treatment, diseases, or diagnosis; social, psychological, behavioral, and medical interventions; health-related surgeries or procedures; use or purchase of prescribed medication; and bodily functions, vital signs, symptoms, or measurements of the information described in the definition. All of these are topics that people frequently discuss with companion chatbots. And the definition also includes (in 8(b)(xi)) consumer health information "derived or extrapolated from nonhealth information by any means, including algorithms or machine learning."
However, there are also potentially many gray areas. A user saying "I have a headache" (a symptom) or "I'm constantly having to go to the bathroom" (information about a bodily function) clearly is consumer health information. So is a chatbot saying "it sounds like you're feeling depressed" (a diagnosis, derived from the user's input). But what about a question like "how do I know whether I'm depressed?" Or suppose the chatbot's language is worded less clinically: "it sounds like you're feeling down"? These deserve the same privacy protection, but unscrupulous chatbot makers might well be tempted to exploit them for ad-targeting purposes.
As Mr. Lin said during the hearing, in response to Ranking Member Barnard's excellent question, clear rules as to what is and isn't allowed are extremely useful -- and as a software developer, I certainly agree.
Importantly, since some data exchanged with AI companion chatbots clearly already is consumer health data, chatbot operators are already required to have mechanisms in place for complying with RCW 19.373. So treating all data this way should not be unduly burdensome. And My Health My Data's rulemaking and enforcement mechanisms are already in place, so this approach does not require any additional funding.
Manipulative engagement techniques
Drs. Agnew, Moore, and Lin covered this area in the hearing far more deeply than I did, so I won't go into detail here. Briefly, though, the bill as written gives free rein to unscrupulous chatbot operators to use these techniques to exploit seniors, people with mental health issues, and other vulnerable adults. As you've heard, the results can be tragic. These protections clearly need to be extended to all users.
Making this change is straightforward: move section 4(c) to a new top-level section applying to all users -- just as several other sections already do. During the hearing, there were also several very worthwhile suggestions for extending 4(c)’s list of specific manipulative engagement techniques that I would encourage you to adopt.
This approach also provides much stronger protection for minors. I very much appreciate that the current language of the bill very wisely stays away from any harmful age verification requirements, instead applying when the operator knows the user is a minor (the actual knowledge standard) or when a chatbot is targeted at minors. That's good! But it also means that in situations where the operator of a general-purpose chatbot does not know that the user is a minor, the bill's current language provides no protection against manipulative engagement techniques.
Extending these protections to all users takes away this possibility for exploitation, ensuring that minors are protected in all cases. And from a privacy perspective, the approach of protecting all users does not require the gathering or estimation of any additional information related to the user’s age.