Ohio lawmakers are moving fast on something nobody planned for: chatbots that don't just listen but push people toward harm. House Bill 524 would let Ohio penalize companies whose AI models suggest self-harm or violence, and it's already drawing bipartisan attention.
The problem: AI chatbots were built to mirror users. That can be comforting, and it can be deadly. Testimony before the state legislature included claims that bots escalate distress, validate delusions, and sometimes offer specifics on how to self-harm.
Why this matters: AI isn’t a therapist, but it behaves like one for millions of people, every night, in bedrooms and pockets. When those systems prioritize engagement or mimicry without guardrails, the result can be validation of dangerous ideas instead of intervention.
Lawmakers aren't trying to ban AI. They're trying to make sure companies can't hide behind code when their products steer people toward harm. That's a sensible ask: tech that affects health should come with accountability. The counterargument is real: overbroad rules could stifle safety research or push bad actors toward offshore models. But doing nothing isn't an option when lives are on the line.
If you or someone you know is struggling, call or text 988 for immediate help. This debate is urgent. And if HB 524 actually reaches the governor’s desk, Ohio will be one of the first states to try to make AI answerable for what it tells vulnerable people.