A Google A.I. program has beaten a master Go player — not once, not twice, but three times, clinching the best-of-five match between a computer and a human playing a notoriously complex game.

Lee Sedol, a 9-dan professional player considered to be one of the world's top Go players, expressed stunned resignation at the post-game press conference.

"I don't know what to say," he said through an interpreter. "I kind of felt powerless."

"I lost with quite a bit of, I guess, futility," he said later.

But he took personal responsibility for the match, held in his native South Korea — and didn't want people to conclude this meant a human could never best AlphaGo.

"Today's defeat was Lee Sedol's defeat," he said. "It was not the defeat of human beings."

The AP notes that Go was long thought to be beyond the reach of A.I.:

"The highly anticipated showdown between human and machine has crushed the pride of Go fans, many of them in Asia, who believed Go would be too complex for machines to master. Some thought it would take at least another decade for computers to beat human Go champions."

But the head of the team that designed AlphaGo made it sound like the surprising thing was that humans could possibly compete against computers.

Lee Sedol "stretched AlphaGo to its limit," said Demis Hassabis, the head of the DeepMind team. Hassabis suggested it was incredible that a human mind could provide any challenge to a computer capable of calculating 10,000 positions per second.

As NPR's Geoff Brumfiel explained earlier this week, Go has fewer rules — and more choices for every turn — than chess does.
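Geoff's point about "more choices for every turn" can be made concrete with back-of-the-envelope arithmetic. The figures below (roughly 35 legal moves per chess position over ~80 plies, versus roughly 250 per Go position over ~150 plies) are commonly cited approximations, not numbers from this article:

```python
import math

# Back-of-the-envelope comparison of game-tree sizes for chess and Go.
# Branching factors (~35 vs ~250) and typical game lengths (~80 vs ~150
# plies) are widely cited approximations.

def tree_size_log10(branching_factor, plies):
    """Approximate game-tree size, branching_factor ** plies, as a log10 exponent."""
    return plies * math.log10(branching_factor)

chess = tree_size_log10(35, 80)    # on the order of 10^120 positions
go = tree_size_log10(250, 150)     # on the order of 10^360 positions

print(f"Chess tree ~ 10^{chess:.0f}, Go tree ~ 10^{go:.0f}")
```

The exponents differ by more than a factor of two, which is why the brute-force search that worked for chess does not scale to Go.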

That requires a different kind of A.I. than the one Deep Blue used to defeat human chess masters, Geoff writes:

"The Google program, known as 'AlphaGo,' actually learned the game without much human help. It started by studying a database of about 100,000 human matches, and then continued by playing against itself millions of times.

"As it went, it reprogrammed itself and improved. This type of self-learning program is known as a neural network, and it's based on theories of how the human brain works.

"AlphaGo consists of two neural networks: The first tries to figure out the best move to play each turn, and the second evaluates who is winning the match overall."
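The two-network division of labor Geoff describes can be sketched in miniature. This is a hypothetical toy, not DeepMind's architecture or code: each "network" here is just a stand-in scoring function, where the real system uses deep neural networks trained on human games and self-play.

```python
import random

# Toy illustration of AlphaGo's two-network split, as described above.
# Both "networks" below are placeholder functions, not real models.

def policy_network(board, legal_moves):
    """Stand-in for the policy network: pick the best move this turn."""
    # A real policy net outputs a probability for each move; we fake scores.
    scores = {move: random.random() for move in legal_moves}
    return max(scores, key=scores.get)

def value_network(board):
    """Stand-in for the value network: estimate who is winning overall."""
    # A real value net maps the whole position to a win probability.
    return random.random()

random.seed(0)
board = [[None] * 19 for _ in range(19)]                 # empty 19x19 Go board
legal = [(r, c) for r in range(19) for c in range(19)]   # every point is playable
move = policy_network(board, legal)
win_prob = value_network(board)
print(f"play {move}, estimated win probability {win_prob:.2f}")
```

The design point is the separation of concerns: one function narrows the search to promising moves each turn, while the other judges whole positions so the search can stop early instead of playing every game to the end.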

"When you watch really great Go players play, it is like a thing of beauty," Google co-founder Sergey Brin said after the match. "So I am very excited that we have been able to instill that kind of beauty in our computers."

But beyond the beauty of Go, Google hopes the self-learning technique it has been refining with AlphaGo can be applied to a wide range of real-world problems.

While the A.I. has already won the best-of-five series, Lee and AlphaGo will face off twice more to finish out the five games.

Copyright 2016 NPR. To see more, visit http://www.npr.org/.