Artificial intelligence is not new in health care, said Bill Fandrich, executive vice president of technology and operations for Blue Cross Blue Shield of Michigan. But there are new opportunities.

“What is new is generative AI,” Fandrich told GBH’s Morning Edition co-host Jeremy Siegel. “And the implications of generative AI are very different than traditional AI.”

Generative AI, like ChatGPT, uses existing language data to generate new content. In the health insurance world, Fandrich said, he envisions it being used to offer patients more clarity.

“I don't think there's a person out there that hasn't been frustrated with their experiences in the health care system,” Fandrich said. “The power of generative AI and the power of what we're able to do now is to bring all that information together and, based on your specific situation and the events that are occurring, personalize it — to give you the information you need at that time to make the right decision, and to ease some of the administrative burden and the frustration that occurs trying to figure this out yourself.”

Blue Cross currently uses chatbots — though not ones Fandrich would classify as AI — to respond to member questions, like whether a certain procedure is covered or how much it might cost.

He acknowledged that people may be distrustful of a health insurer using generative AI.

“I think the number one thing: We are not taking humans out of the ultimate decision,” he said. “The reason we're using generative AI is to handle the volume and complexity of this information and help the individual make a choice and decision.”

He also acknowledged that people in Massachusetts, with its robust health care sector, may be more wary of the way innovations in AI will affect their jobs.

“I think what we've always seen with innovation is new jobs, new opportunities, new capabilities,” he said. “There's no question: Over time this changes the talent base of every company and society in general. It would be inappropriate to guarantee anything, except that it's the responsibility I feel we have to provide the tools, the methods, the training to upskill people so that they are positioned for the jobs of the future.”

What might help, he said, are more specific guidelines and regulations. He said he and his colleagues have concerns over how patient information is used, bias and ethics, accuracy, and who has access to data.

“We would love to have some set of standards — through the government or some regulatory body — to set the boundaries and ground rules,” he said. “Everything we are designing is to abide by our commitment to protect that information and use it effectively and responsibly.”