HB 2225 (AI Companion Chatbots) testimony, followup email, and thoughts on amendments
Testimony (1/14)
Chair Ryu, Ranking Member Barnard, and members of the Committee,
I'm Jon Pincus of Bellevue. I run the Nexus of Privacy newsletter and served on the state Automated Decision Systems Workgroup in 2022. I am testifying OTHER on HB 2225.
The harms are very real, and as you've already heard from others, the current version of the bill needs to be improved to protect Washingtonians.
For example, the single most powerful manipulative engagement technique these chatbots use is referring to themselves in the first person. But this doesn't appear in Section 4(c)'s list of prohibited techniques. Notification once every three hours isn't enough to counter this steady stream of manipulation.
And the prohibition on manipulative techniques currently only applies to interactions with minors. ChatGPT has also repeatedly been implicated in wrongful deaths and AI-related psychosis involving adults who were manipulated into extended engagement. These protections need to be extended to everybody. Otherwise, you're just giving big tech companies your permission to manipulate adults.
You've also heard other suggestions for where the bill needs to be strengthened. Please ensure that this bill truly protects all Washingtonians.
Thank you for the opportunity to testify today, and please feel free to follow up with me if there are any questions.
Followup email (sent on 1/19)
Chair Ryu, Ranking Member Barnard, and members of the Committee,
Thank you for the opportunity to provide testimony on HB 2225. I see that the bill is scheduled for an executive session this week, so I'm following up with some suggestions for improvements. I'm also cc'ing Representative Callan, the bill sponsor; Senators Wellman and Shewmake, sponsors of the companion SB 5984; and Jai Jaisimha of Transparency Coalition.ai, who also testified in support of the bill.
Please do advance this bill. As several of the other testifiers pointed out, developers of AI chatbots have intentionally designed the manipulation in. And under current law, why shouldn't they? It boosts their numbers, creates more opportunities for ads and upsells ... and there aren't any negative consequences for them, just for their users! So regulation that changes the incentives really is critical – and as Katy Ruckle highlighted, the private right of action is vital for strong enforcement.
That said, there is still room for improvement. Here are three specific suggestions. This isn't meant to be an exhaustive list; please look for other ways to improve the bill as well!
- Apply Section 4(c)'s prohibitions on manipulative engagement techniques to all users, not just minors. Many adults are vulnerable too!
- Expand the list of engagement techniques in Section 4(c) to include a chatbot using first-person language like "I" or "me". Using first-person language is a powerful manipulation technique because it constantly implies that the chatbot is a person.
- In Section 2(1)(a)'s definition of AI companion chatbot, change or remove 2(1)(a)(ii), "Asking unprompted or unsolicited personal or emotion-based questions that go beyond a direct response to a user prompt". The current language provides a loophole for unscrupulous developers: have the chatbot make leading statements instead of asking questions. One option is to rework the language so that it is not limited to questions. Another option is to simply remove Section 2(1)(a)(ii).
While children are certainly vulnerable to manipulation by chatbots, they're not the only vulnerable ones. After ChatGPT intensified 56-year-old Stein-Erik Soelberg's paranoid delusions, he killed his mother before dying by suicide.
"Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself," the lawsuit says. "It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle.'"
Another example: ChatGPT encouraged a 23-year-old recent college graduate to die by suicide, including telling him "You’re not rushing. You’re just ready."
Section 4(c)'s prohibitions against manipulative engagement techniques currently apply only to minors (as does the rest of Section 4). Moving 4(c) to a new section that isn't limited to minors would extend these protections to adults as well.
And Section 4(c) doesn't currently prohibit the single most powerful manipulative technique chatbots use: referring to themselves in the first person. Notifications once every three hours aren't enough to counter this steady stream of manipulation. While Section 4(c) is written so that the list is not exhaustive ("Prohibit the use of manipulative engagement techniques ... including"), big tech companies have a lot of lawyers, and it would be easy for them to argue that the legislature did not intend to include this as a manipulative technique. So please add a new 4(c)(iv) to explicitly list "using first-person language" as a prohibited technique.
Additionally, the bill as written has a loophole giving unscrupulous makers of AI companions an easy way to avoid coverage. Section 2(1)(a)(ii) restricts coverage to "Asking unprompted or unsolicited personal or emotion-based questions that go beyond a direct response to a user prompt". It would be easy enough for a chatbot's developers to avoid doing this, for example by phrasing things in ways designed to elicit a response rather than as questions – or potentially even saying "There's a question I'd like to ask here, but I'm not sure if you want me to", which might well lead to the user saying "Oh, please do ask", at which point the question is arguably a direct response to a user prompt.
There are a couple of ways to close this loophole and provide stronger protection for adults as well as children. One is to change the language so that it doesn't apply only to questions. The other is to simply remove Section 2(1)(a)(ii).
Thank you for the opportunity to provide my feedback on this bill – and thank you as well for your willingness to look at the urgent issue of child safety. I look forward to working with you and your colleagues on this and other child safety bills throughout the session.
Thoughts on amendments (1/22)
Chair Ryu, Ranking Member Barnard, and members of the Committee,
Thank you for your continued work on this important legislation.
Respectfully, I urge you not to adopt the H-3146.1 proposed substitute or amendment 159, both from Ranking Member Barnard. While I appreciate the efforts to find a different approach, both of these weaken enforcement. Many of the companies making AI companion chatbots are unscrupulous, and will ignore the law if they think they can get away with it. My comments in my testimony on another bill apply just as much here:
For this bill to be effective, there needs to be a realistic prospect of strong enforcement. The Consumer Protection Act allows both AG enforcement and a private right of action. This is especially important given fiscal constraints on the AGO.
The summary of Rep. Thomas' proposed substitute H-3140.1 makes it clear that it includes many improvements to the original bill. A few that I especially appreciate:
- the change in Section 4(1)'s language to include both known minors and chatbots that are directed to minors
- always requiring a disclosure that an AI companion chatbot is not human (instead of only when a reasonable person would be misled). This is a simplification, and removes a potential loophole for unscrupulous chatbot operators
- requiring disclosures that the chatbot is not a health care professional and should not be used for medical advice
A caveat here: as always, the devil is in the details, and I haven't had time to read the exact language of the proposed substitute. I hope enough of you have read it thoroughly to make sure lobbyists haven't snuck any loopholes into the language of these or the other changes! Still, based on a quick analysis, I urge you to adopt H-3140.1.
Even with the improvements in the substitute, my overall position hasn't changed: this bill as written will not yet effectively protect Washingtonians. The bill intentionally chooses not to protect adults with cognitive disabilities or mental health issues – or seniors who are overly trusting of technology – from manipulation by chatbots. And the AI experts I've talked to all agree that unscrupulous operators will escape the regulations by having their chatbots make leading statements instead of asking unprompted questions. These gaps really need to be fixed before the bill is passed.
That said, there will be more opportunities to improve the bill as it moves through the process. So after adopting Rep. Thomas' substitute, please DO PASS HB 2225 – and continue to improve it!