Updated May 11 at 12:37 p.m.

When historians look back on 2023, they’ll describe it, among other things, as the moment when everybody started freaking out about artificial intelligence.

The deep unease currently thrumming through the culture involves both apocalyptic scenarios in which AI develops consciousness and pursues humanity’s destruction, and more mundane fears (relatively speaking) about ChatGPT and similar tools becoming so sophisticated that they render entire professions obsolete.

But two companion bills currently under consideration on Beacon Hill would address a more immediate set of concerns. The bills would establish a new state commission tasked with answering a couple of key questions: first, how are artificial intelligence and other decision-making tools that replace human judgment being used by state government right now? And second, what steps, if any, should Massachusetts take to regulate their use moving forward?

Kade Crockford runs the Technology for Liberty Program at the ACLU of Massachusetts, which supports the legislation in question. She says the standing commission could look at practices that are already in place — including the use of so-called risk-assessment instruments to help determine whether criminal defendants are incarcerated prior to trial. That practice, Crockford says, is a clear-cut example of state government using tech in a way that “reach[es] into people’s lives, and mak[es] decisions that can completely change the course of a person and a family’s life.”

And yet, Crockford adds, the automated tools used by the state to make such decisions are — at present — largely unknown, totally unregulated and deeply opaque. Even if the individuals impacted by those tools are aware of their existence, there’s no easy way to learn why a particular decision was reached, or to challenge it if they disagree.

In addition, Crockford argues, even a tool or system designed with the best of intentions might prove to be operating in a fundamentally unfair or unjust way if policymakers and others were given the chance to pull back the hood and scrutinize its inner workings.

“The justification that governments use ... is that they want to make sure that they’re taking the bias, the human bias, out of the equation for decision making,” Crockford said. “[But] we have seen that in many cases, these systems actually solidify, replicate, codify, calcify that bias.”

State Rep. Simon Cataldo, a co-filer on the House bill, raises that same concern. He cites the juvenile probation system and the Disabled Persons Protection Commission as two government entities already using unregulated risk-assessment systems to make consequential decisions. And, like Crockford, he believes the time is ripe for the Legislature to step in.

“There’s just too much that we don’t know,” Cataldo said. “And the [discrepancy] between what’s happening and our understanding of what’s happening is only going to widen over time, as the prevalence of these technologies increases, unless we act.”

At this stage of the legislative session, predicting the fate of any particular bill is a dicey business. But the House bill in question — also co-filed by state Rep. Sean Garballey, with the Senate counterpart filed by state Sen. Jason Lewis — seems to have a few key advantages.

While proposals for an oversight commission didn’t become law in the last session, they were reported favorably by the Joint Committee on Advanced Information Technology, the Internet and Cybersecurity. What’s more, similar steps aimed at cataloguing and regulating AI and related technologies have already been taken, or are being pondered, in several other states, which could make it easier for concerned lawmakers to convince skeptical colleagues that this issue is important.

Finally — and perhaps most valuable — supporters of increased oversight have the zeitgeist on their side. As Crockford puts it: “You can’t open up The New York Times without finding six different stories about ChatGPT. So, we’re hopeful that the renewed interest in generative AI, and machine learning technologies in general, will prompt some action.”

Correction: A previous version of this story misspelled Kade Crockford’s name.

This story first ran in GBH News’ politics newsletter. Click here to subscribe and get our rundown of the latest political happenings in Massachusetts every Thursday morning, straight to your inbox.