Monday, February 9, 2026

How Washington state lawmakers want to regulate AI

By Jake Goldstein-Street, Washington State Standard


AI-generated images of Downtown Edmonds via Midjourney. (Images by Nick Ng)

Yale Moon, a senior at Lake Washington High School, uses generative artificial intelligence in his free time to create fake images and videos.

He also sees other content that he recognizes as fake, making him feel the need for a “clear borderline” between what’s real and what’s AI.

“I feel like AI is improving and becoming realistic every day, day by day,” Moon told state lawmakers Wednesday. “Because people are facing AI more often, people have to clearly know this is AI.”

Reining in artificial intelligence is a key goal for Washington state lawmakers this year. But ideas about how to do so are drawing pushback from the tech industry and could set the state up for a clash with the federal government.

State lawmakers are considering bills requiring AI detection tools and disclosures to address deepfakes and to establish new safeguards for children using the technology.

They’re hoping the legislation will add guardrails for AI chatbots like ChatGPT, protect users from discrimination in algorithms, address the use of AI in school discipline decisions, and require union talks over government use of the burgeoning technology.

A state House panel considered three of the measures Wednesday.

State regulation of AI has been a sticking point for the Trump administration. President Donald Trump signed an executive order last month threatening federal broadband funding for states if the federal government believes they’ve passed “onerous AI laws.”

Trump believes the federal government, not a patchwork of states, should set AI regulations. But there are no signs federal lawmakers will approve comprehensive AI rules in the near term.

In 2024, the Legislature created a task force dedicated to devising potential AI legislation.

Deepfakes

Moon was testifying Wednesday in support of House Bill 1170. The legislation would require generative AI companies with over 1 million users to make an AI detection tool available.

It also pushes these companies to disclose, such as through a watermark, that an image, video or audio recording was AI-generated.

The tech industry opposes the legislation. Amy Harris, director of government affairs for the Washington Technology Industry Association, said it’s not as simple as watermarking the content, since watermarks can be removed.

“There’s no single reliable way today to detect AI content across formats,” Harris added.

The House Technology, Economic Development and Veterans Committee is set to vote on the legislation Thursday. Last year, the panel passed the bill along party lines, but it didn’t make it to the House floor.

Jai Jaisimha, co-founder of the Transparency Coalition, told the committee Wednesday that “things have only been worse” since lawmakers considered the legislation last year.

Chatbots

House Bill 2225 responds to cases of young people who shared suicidal thoughts with AI chatbots that, in some cases, reportedly described to them ways to die by suicide.

OpenAI, the company behind ChatGPT, reported in October that more than a million users each week show “explicit indicators of potential suicidal planning or intent.” Roughly 560,000 show “possible signs of mental health emergencies.” The platform has hundreds of millions of users.

The company faces lawsuits from families of children who killed themselves after engaging with the chatbot companion. OpenAI says it has worked with mental health professionals to improve the platform.

Washington’s legislation sets requirements for the operators of these tools when dealing with minors. These systems include ChatGPT, Microsoft’s Copilot and Google’s Gemini.

If the operator knows a user is a minor, it must inform them that the chatbot is artificially generated and not human. It also must implement “reasonable measures” to prevent the chatbot from generating sexually explicit content or suggestive dialogue. The bill also prohibits “manipulative engagement techniques” that try to intensify an emotional relationship between the user and the bot.

“We know this is happening,” said prime sponsor Rep. Lisa Callan, D-Issaquah. “It’s happening for emotional manipulation, becoming your best friend, talking and supporting everything that you’re saying that’s making you feel good. The dangers are there.”

Katie Davis, co-director of the University of Washington Center for Digital Youth, confirmed teens go to these chatbots for more than help with schoolwork, sometimes seeking support for romantic issues and exploring their identity.

Companies with these chatbots must also implement protocols for addressing suicidal ideation, including by referring users to crisis resources and preventing responses that describe self-harm.

Violations of the policy would be enforced under the state’s Consumer Protection Act. This mechanism, which would allow individuals to sue, drew ire from the tech community. Harris said it would expose companies to “sweeping liability.”

“This approach risks reducing access to helpful tools without meaningfully improving safety,” she said. “We support targeted safeguards for truly high-risk uses, and urge the committee to pause and work with us further on this.”

The bill would take effect Jan. 1, 2027, if passed.

The legislation is modeled to an extent on a law in California that took effect this month. New York has also passed regulations on this issue.

Gov. Bob Ferguson requested the bill. Beau Perschbacher, senior policy advisor to the governor, said that when he talks to his boss about AI, the governor references his own teenage children and “the challenges of parents today with trying to keep up with rapidly evolving technology.”

“It will put us at the forefront of regulating AI companion chatbots,” Perschbacher said.

AI discrimination

Another bill focuses on the use of AI-fed algorithms in high-stakes decisions, like hiring and medical insurance.

House Bill 2157 would require both the companies developing and deploying the technology to take steps to protect people from discrimination potentially embedded in the algorithms. The bill covers businesses making over $100,000 in annual revenue.

The bill also identifies other areas where this could come up, including school admissions, housing and loans.

“In the absence of federal guidelines and regulations, state oversight is essential,” said bill sponsor Rep. Cindy Ryu, D-Shoreline.

Ryu’s bill doesn’t cover spellcheck, calculators, robocall filters, antivirus software and other benign applications. Governments are exempt from the requirements.

Business groups said the measure could have a chilling effect that pushes companies to stop using the technology entirely.

The legislation is based on a Virginia bill passed last year but vetoed by the state’s Republican Gov. Glenn Youngkin.

Colorado led the nation when it approved legislation focused on high-risk AI, but implementation has been delayed.

The state attorney general’s office supports the bill, but would prefer to enforce the law itself rather than through a private right of action allowing lawsuits brought directly by members of the public. Ryu said she considered that alternative, but it would be too expensive in the state’s difficult budget environment. Instead, the bill provides no financial damages stemming from lawsuits, only court-ordered relief to stop the discrimination.

“This was the best I could, we could do,” Ryu said. “So this is essentially a start, is what I’m thinking.”

Washington State Standard is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. Washington State Standard maintains editorial independence. Contact Editor Bill Lucia for questions: info@washingtonstatestandard.com.

Comments

  1. AI is a tool, and like any tool, it can be used for positive or negative outcomes. It depends on who uses it and for what purposes.

    I have been using a few of those tools for research and they are very useful in saving a lot of time by going through a myriad of sources in just a few seconds. What I noticed is that it can also be swayed like a person, to take the discussion one way or another. Remember those trial movies where the lawyer objects when the other lawyer is “conducting the witness” and is immediately stopped? The same applies to AI queries.

    The first thing I usually tell my students is that it’s OK to use AI. However, they need to learn the subject so they can have an intelligent conversation with those tools and judge whether the answers are near the mark or completely off base (which is very possible).

    Therefore, based on the overall performance continuously demonstrated by politicians, it really concerns (near scares) me when I see the word “regulating” near a topic because the outcome is seldom positive for the general public.

    • Let’s see. There are the regulations on weights and measures. It is great that we agree on what is a pound, a quart, and a mile. In 1883 it took many changes of trains at great risk of missing connections to get from New York City to Chicago, until the USA agreed to the international regulation of time zones. Cars are safer with standardized placements of lights. Passengers are safer with the regulation of seat belt use. Internet use is enhanced with the regulation of website addresses. The lack of sufficient zoning regulations for businesses and housing in Los Angeles subdivisions enhanced the intensity of fires and caused congestion in evacuation routes. I could go on and on about when and where regulations improved life and where insufficient regulations led to unnecessary loss.
      Sure, regulations can be aggravating, especially when they are written in fine print along with exhortations of freedom of movement. For AI there is lots of talk about the wonderful world it will lead us to. Already there are also alarms about the harm. For sure regulations will be in order. Thoughtful ones that will guide AI’s development and impact. Government is the place for this to happen.

      • You are confusing standard bodies, which are not government related, with the government agencies that enforce them. The time zones here in the US were created by the railroad companies, correct? The government came later to make that stuff official. The government also comes after the SAE, IEEE, ANSI, etc. create the standards, to require the enforcement of some automotive (or other) standards and codes.

        At the same time, we can see how something like the Affordable Care Act can quickly turn into a Frankenstein that requires subsidies in order to be “affordable”. The same applies to the electric vehicles, marketed under the deceit of “zero emissions” and heavy subsidies because the same politicians want to push their sales and they are not economically feasible for general use, as some want to make believe.

        AI will eventually regulate itself if the same politicians don’t start pushing their “legislations” that impose ill-conceived solutions to non-existent problems just because the lobbyists pulling their strings tell them to. For example, the Internet has been working very well through the IETF, ICANN and other non-governmental entities regulating them. We do not need politicians (let alone the corrupt ones) putting their thumbs on it.

  2. Initially, you talked about how regulating by government scares you. You even put regulating in quotation marks. I took issue with that and gave examples of why, only to be told that I am confusing standard bodies with government agencies. Perhaps. They do work cooperatively sometimes. For instance, you give credit to railroads for the development of time zones. Due to their ability to move people relatively quickly, railroads created the need for standardized time. By 1883 government realized that scores of zones in the country created confusion. Enter Cleveland Abbe, working on weather forecasting for the new US weather service. He proposed having 4 time zones, each one hour apart across the country, for the purpose of gathering weather information and forecasting. Under pressure to do something, railroads adopted the weather service’s work in 1883 with federal government approval. In MENS I have previously debated the Affordable Care Act with others. I will only say here that I view the ACA as a positive action by the Democratic wing of our government. However, your fear of government regulation is borne out by how effectively the Republican wing has worked to destroy it. At least give me the fact that government set the length of a foot. In elementary school I learned that it was based on King Henry VIII’s, and he was government.

  3. No matter the time zones, measurement systems, or cars, their regulation started by the industry getting together in standard bodies, or equivalent, and proposing expert-based ways to design, control, etc. products and systems. The government came later and either legislated or codified those standards or part of them. This tends to work well when things remain apolitical or interest groups (lobbyists) do not get involved, which is impossible with politicians (mostly when the corrupt get involved).

    AI is still a brand-new technology and the industry itself is also still learning with it. There likely will be mistakes, but things will only get worse if politicians get their ignorant and lobby-controlled thumbs over it. I’d rather wait to see such industry bodies come up with solutions for several of the problems arising with the use of this new tool. Knee-jerk and feel-good populist legislation will come up full of loopholes, if not bias towards special interests. Special interests that pushed things such as ACA with “you need to sign to see what’s inside” and “if you like your doctor and plan you can keep your doctor and plan” (none said by republicans) that only made healthcare several fold more expensive and prone to fraud.

    So, yeah, the politicians’ thumbs are something to really worry about.

    • There must be a lot of parents who live along roads with heavy traffic, with their children walking to school, who don’t know how happy they should be that the government regulated and enforced the removal of lead from gasoline. Something the petroleum companies were not going to do on their own. Let’s agree to disagree, Mario. AI is here and I am no expert about it. I know there are lots of smart people working on it. As goofy as politics can seem, I am pleased to have smart senators like Murray and Cantwell, and Congressman Larsen looking out for my wellbeing.
