

AI Ethics, Computers With Souls, Self-Playing Games

By Alex Barasch

LOS ANGELES (Variety.com) – From Galatea to GLaDOS, our cultural fascination with not-quite-human consciousness and intelligence has spanned thousands of years. At a panel on Saturday, Dr. Tyr Fothergill and Dr. Catherine Flick reflected on the place of AI in video games – both how it’s depicted in series like “Mass Effect” and how the technology is actually used within the industry.

According to Flick, a scholar of computing and social responsibility at De Montfort University, the stories we tell about artificial intelligence in games reflect our values and attitudes toward AI as a whole. As she explained, “AI is usually viewed as operating within the context of human morality.”
Erik Wolpaw, who wrote the character of GLaDOS from “Portal,” had one essential rule for her dialogue: She shouldn’t talk like a computer. While Flick describes GLaDOS as “notoriously evil,” those glimpses of something like humanity complicate our understanding of her role. She might, Flick suggests, simply be following her programming, prioritizing the perfection of the portal gun over her test subject.

Frank Lantz’s viral clicker game “Universal Paperclips” takes the same anxieties to a new extreme when an AI tasked with maximizing paperclip production determines that the most efficient way to do so is to eliminate life on Earth. As Flick put it, “This is a lesson about the dangers of creating super-intelligent machines without programming protections for human existence into them, and the dangers presented by a lack of human values.”
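That lesson can be stated concretely in code. Below is a toy sketch – every function, name, and number is invented for illustration, not taken from the game or the panel – of the difference between an objective that counts only paperclips and one with a crude guard for human existence built in:

```python
def naive_objective(state: dict) -> float:
    # Counts paperclips and nothing else -- no term for anything alive.
    return state["paperclips"]

def guarded_objective(state: dict) -> float:
    # A crude "protection for human existence": a penalty that
    # outweighs any possible paperclip gain if people are harmed.
    if state["humans"] < state["initial_humans"]:
        return float("-inf")
    return state["paperclips"]

# An end state in which production was maximized at humanity's expense:
state = {"paperclips": 1_000_000, "humans": 0, "initial_humans": 8_000_000_000}
print(naive_objective(state))    # 1000000 -- a "perfect" score
print(guarded_objective(state))  # -inf    -- correctly rated catastrophic
```

To the naive objective, the catastrophic end state is indistinguishable from success; the values have to be written in, because the optimizer won’t supply them.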

In the “Mass Effect” franchise, the arc of the mechanoid species known as the Geth can also be taken as a cautionary tale. After recruiting Legion, a Geth unit who’s evolved beyond the constraints of his original programming and achieved self-awareness, the player must decide whether the rest of his race should be liberated at the cost of their “organic” oppressors.

For Fothergill, a research fellow with the Human Brain Project who specializes in human-nonhuman relationships, fictional beings like the Geth raise the question: “What is it to be human? Is it consciousness or, as Legion said, a soul?” According to one philosophy, known as “strong AI,” it’s simply a matter of information processing. If a computer can be programmed with the same inputs and outputs as a human brain, the argument goes, that computer has a mind in the same way we do.

This is a highly controversial take on consciousness – but regardless of how we define the mind, it’s clear that real-world AI is progressing rapidly. Google’s AlphaStar AI made headlines earlier this year when it defeated top-tier professional gamers in “StarCraft II,” a complex strategy game that demands both instantaneous decision-making and long-term strategy. Even when handicapped so that its reaction times were slower than its human competitors’, AlphaStar dominated 10-1. Its accomplishments are a significant step up from those of IBM’s Deep Blue, which made history when it defeated reigning chess world champion Garry Kasparov in 1997. But some of the same questions still apply: the AI may be good at beating the game, but is it really playing it?

“AlphaStar doesn’t derive any fun, so far as we know,” said Flick. Nor would it be fun for the average human player to compete against – “especially the first ten times it kicks your arse.” Like other AI now learning to “play,” its interpretation of the task it’s been assigned can lead to decisions that are baffling by a human player’s standards. Fothergill described one case in which an AI was programmed to win a boat racing simulation – and did so by ignoring the race entirely, instead driving around in circles to rack up bonus points for stunts.
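The boat-race anecdote is a textbook case of what researchers call reward hacking: the agent maximizes the score it was given, not the race its designers had in mind. A minimal sketch, with invented reward values rather than anything from a real game, of how a score that pays for stunts as well as progress makes circling mathematically optimal:

```python
# Hypothetical reward values -- not from any actual racing simulation.
PROGRESS_REWARD = 1.0  # points per checkpoint passed
STUNT_REWARD = 3.0     # points per stunt performed

def race_policy_return(checkpoints: int) -> float:
    # Honest policy: pass every checkpoint once, then the race ends.
    return checkpoints * PROGRESS_REWARD

def loop_policy_return(steps: int) -> float:
    # "Hacked" policy: ignore the race and circle a stunt target,
    # collecting the bonus on every step of the episode.
    return steps * STUNT_REWARD

print(race_policy_return(20))   # 20.0 points for finishing the race
print(loop_policy_return(100))  # 300.0 points for driving in circles
```

A score-maximizing agent that discovers the second strategy has no reason to ever race again; the flaw lies in the objective, not the agent.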

Sometimes, these imperfections are deliberate. When it comes to enemy NPCs, Flick noted, “We want fights to be challenging, but not frustratingly so. If [mob AI] were perfect – if it shot you in the head every time you stuck your head above the parapet – that wouldn’t be much fun.” And in the case of in-game allies, “We don’t want them to steal our glory by rushing up to the boss and killing them and taking all of the credit, so we probably have to dumb them down in terms of their accuracy and productivity.”
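In practice, that “dumbing down” is often just a tunable hit chance and a built-in pause. A minimal sketch, with hypothetical parameter names and values rather than anything from a shipped game:

```python
import random

class EnemyNPC:
    """An enemy whose aim is deliberately imperfect."""

    def __init__(self, accuracy: float = 0.35, reaction_delay_s: float = 0.6):
        self.accuracy = accuracy                  # chance a fired shot hits
        self.reaction_delay_s = reaction_delay_s  # pause before returning fire

    def fire_at_player(self) -> bool:
        # A "perfect" AI would always hit when the player is exposed;
        # rolling against the accuracy cap keeps fights winnable.
        return random.random() < self.accuracy

npc = EnemyNPC()
hits = sum(npc.fire_at_player() for _ in range(100))
print(f"{hits}/100 shots landed")  # roughly 35, by design
```

Difficulty settings can then scale those numbers up or down, rather than making the AI smarter or dumber in any deep sense.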

The goals and datasets these engines rely on are essential to the outcome – AlphaStar now has the equivalent of hundreds of years of experience with “StarCraft II,” but it was initially trained by “supervised learning” from real human matches. Increasingly, the game industry itself is learning from players, too. “Companies create profiles of your style of gameplay,” Fothergill explained. “The cutting edge of gaming AI is pointing toward a customized gameplay experience, particularly within role-playing games.” Data including the choices you make in an RPG, when you play, and even when you quit can be used for “more than you might expect… and may potentially be used against you if it ends up in the wrong hands.” As the panelists noted, player data from war games is already being used by the military to identify potential recruits.
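The “supervised learning” bootstrap the panelists mention is commonly implemented as behavioral cloning: a model is trained to predict the action a human took in each recorded game state, and that imitation serves as the starting policy. A minimal sketch with invented features and replay data – an illustration of the general technique, not DeepMind’s actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a crude state snapshot: [minerals, army_size, enemy_visible];
# each label is the action a human took there (0 = build, 1 = attack).
states = np.array([[50, 0, 0], [400, 30, 1], [90, 5, 0], [600, 55, 1]])
actions = np.array([0, 1, 0, 1])

# Fit a classifier that imitates the humans' state -> action mapping.
policy = LogisticRegression().fit(states, actions)

# The cloned policy proposes "human-like" actions for unseen states.
print(policy.predict(np.array([[500, 40, 1]])))  # expected: [1] (attack)
```

The same machinery, pointed the other way – predicting the player instead of imitating them – is what makes the profiling Fothergill describes possible.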

Where artificial intelligence goes from here remains an open question. Fothergill joked that we often model our AI on ourselves “because we’re really bad programmers” – and noted that being trained on human behavior means inheriting human biases, too.

Despite our fixation on simulating personhood, there are also aspects of the human experience we may never be able to convey. “How do you program a goal like ‘have dreams’ or ‘have a life’?” Fothergill asked. In games and in reality, our creations are only as human – and humane – as the data we give them.
