In a State House hearing room, legislative aide Dennis Rosner’s LinkedIn profile flashed on a computer screen.

Within moments, Lucas Hansen of the nonprofit Civic AI Security Program had used artificial intelligence tools to draft a personalized email urging Rosner to oppose a nonexistent Biden administration proposal for a clean energy tax.

He uploaded a YouTube clip that let him fake a recording of Rosner’s voice, drafted a social media post seemingly from a local news outlet warning that officials in Rosner’s hometown were now charging for parking at polling places, and readied a screenshot of a nonexistent New York Times article that showed Rosner’s photo under the headline “Massachusetts Aide Fired for Leaking Sensitive Data to China.”

Rosner works in the office of state Sen. Barry Finegold, an Andover Democrat who is pushing for Beacon Hill to restrict the role deepfake AI technology could play in elections.

“Artificial intelligence is really evolving. It’s a technology that can do so much good, but if used in a negative sense, it can be very bad. And what we’re seeing, especially in elections, is that people are distorting people’s images, people’s words — and when that happens, no one wins,” Finegold told GBH News. “Democracy is at its best when people know the truth, and when that gets distorted, everybody loses.”


Finegold is the sponsor of a bill that would ban, within 90 days of an election, any media containing “a deceptive or fraudulent deepfake depicting a candidate or political party,” unless that content carried a disclaimer making clear it was AI-generated.

Finegold is also seeking to add that language into the $58 billion state budget the Senate is debating this week. Senate Ways and Means Chairman Michael Rodrigues said Wednesday that amendment is under review, and no decision about it had yet been made.

Geoff Foster, executive director of Common Cause Massachusetts, said now is a good time to be thinking about the role AI can play in elections, as the technology has become faster and easier to use.

“We have seen at the national level and at the state level, and even at the local level, times where misinformation is deliberately created and then shared with the intent to impact the way voters think before they head into a voting booth, or worse, even maybe trying to convince a voter not even to vote,” Foster said. “And so that is right in the line of voter suppression.”

There is momentum around the idea of regulating the use of deepfakes in campaign communications, particularly after New Hampshire voters in January received AI robocalls simulating President Joe Biden’s voice.

When the Massachusetts House passed its version of the state budget last month, it added language that would also require disclaimers for AI content in campaign communications.

Meanwhile, House and Senate lawmakers are also negotiating on legislation that would make it illegal to distribute AI-generated explicit images of someone without their consent.

Finegold also filed a separate bill — drafted with the help of AI — that would more broadly regulate generative AI models like ChatGPT. Lawmakers in other states, including Alaska and Arizona, have also reported using AI to help them write legislation.

Finegold’s bills are under consideration at a time when Gov. Maura Healey is seeking to establish Massachusetts as a leader in the applied AI sector, viewing the technology as a potential job creator in the state.

Corrected: May 23, 2024
This story was updated to correct the spelling of Lucas Hansen's name.