The White House’s release of new guidelines for the use of artificial intelligence had some experts celebrating the move, while others expressed concern that the release offers only guidelines, not actual rules, for how to use the technology.

Among those experts is Professor Usama Fayyad of Northeastern University, who runs the school's Institute for Experiential AI.

“The spread of AI, it's so pervasive, it's almost like electricity,” Fayyad told GBH’s Morning Edition co-host Jeremy Siegel. “We can't even envision what the uses might be until we see them. But we need to be ready once we see them, to have some kind of beginnings of a policy or a guideline or a standard.”

Fayyad said he was encouraged that the White House guidelines called on the National Institute of Standards and Technology to develop standards for AI, but disappointed that they stopped short of making those standards mandatory.

“Of course, the question then becomes: are guidelines enough? Is encouragement enough?” he asked.

The government itself is already using AI services, he said.

“If I'm generating a report, if I am collecting data on compliance with certain rules, if I am looking at images for security or doing surveillance, if I am watching borders, if I am surveilling economic activities and so forth — all of these actions involve a lot of knowledge economy work,” he said.

Some generative AI companies have claimed that their tools can cut some of the busywork for government workers, Fayyad said.

“A lot of the government agencies are behind, right? There's more demand for their services than they are able to meet that demand, and therefore this technology can help accelerate a lot of those tasks,” Fayyad said. “Now, I say accelerate very carefully because you cannot rely directly on the output of what the AI produces. What the AI produces is a fast draft. That draft may have errors in it, it may have issues in it, it may have problems in it, it may have biases in it. It may have discrimination within it. You need a human to quickly check that.”

Right now, federal agencies might have internal policies for how employees and contractors are allowed to use generative AI. But there are no laws.

“We all know that it's not the job of the executive branch to come up with the laws. That's the job for Congress,” Fayyad said.

Fayyad said he hopes the law can keep up with technology.

“Regulation never leads,” he said. “It shouldn't lead. It should be reactive and it should respond.”

What concerns him, he said, is what would happen if regulators move too slowly.

“If we don't move, we may fall way too far behind and a lot of damage can happen before we catch up,” he said. “Our failure to begin to regulate and to start that wheel turning might make us fall too far behind in areas where it can become a bit too dangerous, where biases will come into the system and we'll start affecting the livelihoods of people, may affect the safety of people, may affect how the government is done and administered, may affect many aspects of things that we can think about.”