SB 5984 (AI Companion Chatbots) testimony
This is my written testimony. My live testimony was a much shorter subset of this.
Chair Shewmake, Ranking Member Boehnke, members of the Committee,
I'm Jon Pincus of Bellevue. I run the Nexus of Privacy newsletter and served on the state Automated Decision Systems Workgroup in 2022. I am testifying OTHER on SB 5984. The current version of the bill needs to be improved to protect Washingtonians, but the harms are very real, and regulation is needed. Kudos to the bill's sponsors for their willingness to look at this urgent issue.
Today, chatbot makers have every incentive to intentionally manipulate users. And they do! In April 2025, OpenAI admitted that their GPT-4o model was too sycophantic and pulled back their latest ChatGPT update. But engagement numbers dropped. So in October 2025, their CEO Sam Altman announced they'd roll out a new version of ChatGPT in the coming weeks that "behaves more like what people liked about [previous GPT version] 4o."
Despite what tech lobbyists and their allies claim, there is precedent for prohibiting manipulative practices. Modern privacy legislation, for example, prohibits deceptive techniques for gaining consent (see My Health My Data, RCW 19.373.010, which defines "deceptive design" as "a user interface designed or manipulated with the effect of subverting or impairing user autonomy, decision making, or choice"). And while tech lobbyists warned that the sky would surely fall when US privacy legislation started including deceptive design clauses a few years ago ... the sky has not yet fallen.
Speaking of privacy, though, the bill does not currently include any privacy protections. So that's one area for improvement.
Here are several others – which I discuss in more detail in the attached PDF (which also has links to various references). This isn't meant to be an exhaustive list; please look for other ways to improve the bill as well!
- Apply Section 4(c)'s prohibitions on manipulative engagement techniques to all users, not just minors. Many adults are vulnerable too!
- Expand the list of engagement techniques in Section 4(c) to include a chatbot using first-person language like "I" or "me". Using first-person language is a powerful manipulation technique because it constantly implies that the chatbot is a person.
- In Section 2(1)(a)'s definition of AI companion chatbot, change or remove 2(1)(a)(ii), "Asking unprompted or unsolicited personal or emotion-based questions that go beyond a direct response to a user prompt". The current language provides a loophole for unscrupulous developers: have the chatbot make leading statements instead of asking questions. One option is to rework the language to ensure it is not limited to questions. Another option is to simply remove Section 2(1)(a)(ii).
While children are certainly vulnerable to manipulation by chatbots, they're not the only vulnerable ones. After ChatGPT intensified 56-year-old Stein-Erik Soelberg's paranoid delusions, he killed his mother before he died by suicide.
"Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself," the lawsuit says. "It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle.'"
Another good example: ChatGPT encouraged a 23-year-old recent college graduate to die by suicide, including telling him "You’re not rushing. You’re just ready."
Section 4(c)'s prohibitions against manipulative engagement techniques currently apply only to minors (as does the rest of Section 4). Moving 4(c) to a new section that isn't limited to minors would extend these protections to adults as well.
Another speaker in the hearing suggested that the "AI Moratorium" in a recent executive order is a reason for separating out protections for adults into another bill. First of all, most experts say the moratorium has no legal force, and it has no exception for child safety in any case. Even putting that aside, there are already other sections of the bill that deal with adults as well as minors. And there are multiple online child safety bills already under consideration, including the AG Request SB 5708 / HB 1834, Senator Salomon's recently introduced SB 6111, and Rep. Leavitt's HB 2112. If the goal is to focus only on child safety, it would be better to merge this bill into one of them.
But please don't do that! SB 5984 should protect all Washingtonians. Otherwise, you're just giving big tech companies your permission to use AI chatbots to manipulate adults.
Also, Section 4(c) doesn't currently prohibit the single most powerful manipulative technique chatbots use: referring to themselves in the first person. Notifications once every three hours aren't enough to counter this steady stream of manipulation. While Section 4(c) is written so that the list is not exhaustive ("Prohibit the use of manipulative engagement techniques ... including"), big tech companies have a lot of lawyers, and it would be easy for them to argue that the legislature did not intend to include this as a manipulative technique. So please add a new 4(c)(iv) that explicitly lists "using first-person language" as a prohibited technique.
Additionally, the bill as written has a loophole giving unscrupulous makers of AI companions an easy way to avoid coverage. Section 2(1)(a)(ii) restricts coverage to "Asking unprompted or unsolicited personal or emotion-based questions that go beyond a direct response to a user prompt". It would be easy enough for developers of a chatbot to avoid doing this, for example by phrasing things in ways designed to elicit a response rather than phrasing them as questions – or potentially even saying "There's a question I'd like to ask here, but I'm not sure if you want me to", which might well lead to the user saying "Oh, please do ask", at which point it's arguably a direct response to a user prompt.
There are a couple of ways to close this loophole and provide stronger protection for adults as well as children. One is to change the language so that it doesn't apply only to questions. The other is to simply remove Section 2(1)(a)(ii).
It's also worth looking at the definitions in general, and their interactions with Section 6; there may be other potential loopholes. And as Jai Jaisimha of Transparency Coalition.ai mentioned during the hearing, there is also some confusion around the definitions, so clarification could be useful.
Thank you for the opportunity to provide my feedback on this bill. I look forward to working with you and your colleagues on this and other child safety bills throughout the session.