This is a web edition of GBH Daily, a weekday newsletter bringing you local stories you can trust so you can stay informed without feeling overwhelmed.
☀️ Sunny day with highs around 38. Sunset is at 5:21 p.m.
Moderna, the Cambridge-based drug company, said Food and Drug Administration officials have changed course and decided to review its new mRNA flu vaccine. Last week, FDA officials said they were rejecting the shot without reviewing data from a 40,000-person clinical trial the company had run.
Boston University health law professor Alan Sager called the FDA’s initial refusal confounding: “mRNA vaccines are quicker to develop, they’re cheaper to develop, and they’re also safer because you don’t need live viruses,” Sager told GBH’s Marilyn Schairer.
Dr. Jerry Avorn, a professor of medicine at Harvard who studies drug approval policies, said the back-and-forth could have wider-reaching effects.
“This kind of chaotic decision-making and its reversal create a real problem for medication development and approval,” Avorn told Schairer. “It sends a message to drug companies, doctors and patients that FDA is shooting from the hip on these very important issues, and often wildly missing the target. Even though the decision has been reversed, its chilling effect will remain part of the vaccine discovery ecosystem, along with all the other zany decisions FDA and CDC have made in the last year.”
Four Things to Know
1. A federal judge in Boston threw out a religious nonprofit’s lawsuit against the state of Massachusetts. Your Options Medical had claimed that the state’s informational campaign about anti-abortion centers violated its rights to free speech and free religious exercise.
Anti-abortion centers, usually called crisis pregnancy centers or pregnancy resource centers, are often religiously affiliated and offer free services like pregnancy tests. State officials and reproductive rights advocates have criticized them, saying the centers mislead pregnant people who are seeking medical counseling on all their options — including abortion. Judge Leo T. Sorokin said Your Options Medical had failed to prove that the state targeted the centers “for actual or threatened enforcement action, let alone to stifle its protected speech or viewpoint.”
2. Crews are still looking for the Lily Jean, a 72-foot fishing boat that sank about 20 miles off the coast of Gloucester in January, killing seven people on board. Officials do not yet know why the boat sank, and they received no distress or mayday calls before it went down.
The day the Lily Jean sank was exceptionally cold, with highs in the teens and lows in the single digits on land. State Sen. Bruce Tarr said he believes “freezing spray” may have contributed to the sinking. “Ice accumulates on a vessel,” Tarr said. “And the additional weight of ice can affect the stability of the vessel.”
3. With more people seeking help paying for food and heating as prices rise and federal aid is cut, Catholic Charities Boston put out a special request for donations yesterday in honor of Ash Wednesday and the start of Lent. The organization usually seeks about $200,000 in Ash Wednesday donations; this year, it’s aiming for $400,000.
“We’ve been seeing traditional safety net programs unraveling,” said Kelley Tuthill, president and CEO of Catholic Charities Boston. “We anticipate more people coming to our food pantries, more people needing help keeping the bills paid, because the other programs just aren’t going to be there.”
4. Researchers from the Harvard T.H. Chan School of Public Health reviewed 55 studies from the last 26 years, with a combined 500,000 participants, and found that people who regularly participated in spiritual practices — from going to religious services to meditating or praying — had lower risks of hazardous drug and alcohol use.
Dr. Howard Koh, the study’s lead author, said he hoped more people knowing about the correlation would help doctors and other medical professionals view their patients more fully. “Health is more than just the body and the mind. It’s about aligning the body, mind and soul,” Koh said. “It helps clinicians view patients more holistically when you’re understanding that a person finds value and meaning and purpose in their lives.”
ChatGPT and Massachusetts are teaming up. What does that mean?
By Katie Lannan, GBH News State House reporter
About 40,000 workers across Massachusetts’ state government will soon be able to use an AI-powered assistant on the job through a new contract with OpenAI, the company behind ChatGPT. So what exactly does that mean for the state’s workers and residents? And how are lawmakers in Massachusetts looking at regulating AI?
First: it may not be something we notice right away. The software will be rolled out in phases, starting with the state’s technology office. Gov. Maura Healey’s team is billing it as a way to help with tasks like outlining, summarizing and quick research — with human oversight to make sure what comes out is accurate. The state says any decisions about benefits or service eligibility will still be made by humans.
For some idea of where state government might take AI, we can look to a showcase Healey hosted last fall, where UMass Amherst students worked with state agencies to demonstrate tools that could help with specific challenges. Their projects included tools to help staff at the Department of Unemployment Assistance direct callers to the right resources, and to make it easier for people applying for environmental permits to navigate a complex system.
The administration says that the new AI tool is voluntary: no worker has to use it. And officials seem to be trying to reassure state workers that their jobs are still safe.
The local National Association of Government Employees union, which represents a large number of state employees, is striking a wary note: it told the State House News Service last week that, despite the administration’s pitch, some workers aren’t eager to embrace the tool and worry they could be required to use it or see it winnow away their job duties.
And a lot of people do have data privacy concerns when it comes to AI use. The governor’s office says the state’s version of ChatGPT will be walled off in its own secure environment, and its chat inputs will not be used to train public AI models. It’ll be governed by the state’s privacy and data protection laws and policies. Health and human services agencies and other parts of state government that work with sensitive data will have their own workspaces. But there is no way for residents to opt in or opt out.
The governor’s been talking for a while now about wanting to make Massachusetts a leader in the AI space, particularly when it comes to applied AI — using the technology for real-world problem-solving. The state’s also been piloting AI-related curricula in 30 school districts, to help prepare students for future tech careers.
There’s also been conversation at the State House about regulating AI products. Healey has said generally that she thinks some guardrails need to be in place, but that regulation isn’t necessarily where she’s directing her attention. “I’m less focused on regulation and more focused on implementation,” she said over the summer.
Driven by both the growth in AI and the intensifying focus on election integrity as this fall’s midterms inch closer, the House passed a pair of bills last week. One would require that any campaign ad using AI-generated audio or video disclose that fact in the ad itself.
The other bill would ban what the House is calling “deceptive communications” in campaign ads in the 90 days before an election. That includes AI deepfakes that try to misrepresent a candidate or trick voters, but it also covers media meant to disrupt election operations or mislead voters about how and when to vote.
Generally speaking, lawmakers seem to want to support AI as a potential economic driver and a sector for innovation. But they also want to make sure people here are protected from any elements that might be predatory, whether that’s misinformation or data privacy issues.
The chairs of the Legislature’s internet and cybersecurity committee have said they do think the state should be able to regulate AI, but that they also want to collaborate with the industry and harness the positives. It’s an interesting tension: striking that balance won’t be easy, especially when tech companies can be a big lobbying force at the State House.
Dig deeper:
- Spotting deepfakes: MIT Museum says ‘look again, look closely’
- The risks of AI in schools outweigh the benefits, report says
- Sorting AI slop from what’s real is going to take metadata and trusted sources