Cowboys and Neurons: HBO’s Westworld Asks Tough Questions About Artificial Intelligence

In HBO’s award-winning series Westworld, guests at the eponymous park are invited to live out their wildest fantasies in an environment that relies on artificial intelligence (AI). For those unfamiliar with the show, Westworld is a theme park with an American Old West setting, operated by the fictitious Delos Inc., where visitors interact with “hosts,” robots possessing artificial intelligence. However, this park is a far cry from Disney World. In Westworld, guests can kill, rob, and generally brutalize hosts at will. As season one unfolds, the secrets of the park and its founder Robert Ford, played by the incomparable Anthony Hopkins, become more and more disturbing. As the lines between what constitutes humanity, emotion, and intelligence blur, viewers are left with many questions. Chief among them: if a being is intelligent, does it matter what it’s made of?

The concept of non-human beings endowed with intelligence dates back to at least Homer in the late eighth or early seventh century B.C. As society has developed and our ability to tell stories has been enhanced by technology, the idea of intelligent machines has captured imaginations across the globe. In film, we’ve watched robots charged with murder, as in Will Smith’s I, Robot, and robots bent on destroying society, as in Arnold Schwarzenegger’s Terminator franchise. In our own lives, we have seen tools like Amazon’s Alexa, or IBM’s Watson, a computer that handily beat Jeopardy! legend Ken Jennings at the very game he mastered.

All of these examples showcase levels of intelligence that, when placed on a spectrum, could be metaphorically light years apart. There is no denying that the fictitious Skynet hacking into the American military’s nuclear arsenal is a display of intelligence many orders of magnitude beyond Alexa ordering college kids a pizza. But at a certain point, isn’t intelligence simply intelligence?

To learn more about what constitutes intelligence, I spoke with Jason Moore, PhD, of Penn’s Institute for Biomedical Informatics. Moore’s first recommendation was to consider the Turing test. Developed by computer scientist and World War II hero Alan Turing in 1950, the Turing test is the gold standard for deciding whether a machine is displaying human-level intelligence. The method of running the test is simple. A human test administrator, a human test taker, and a mechanical test taker sit in three separate rooms. As the test administrator submits questions to the test takers, he or she attempts to determine which answers came from the human and which came from the machine. If the administrator cannot reliably tell the two apart, the machine has passed the test. For those looking for a modern take on the Turing test, Alex Garland’s 2014 film Ex Machina explores the subject at length and with frightening consequences. Beyond the Turing test, scientists are currently working on tests that may be able to detect consciousness: the feeling that you are you.
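To make the setup concrete, here is a toy sketch in Python (purely illustrative, not a real experiment: the canned respondents are hypothetical stand-ins, and the machine simply mimics the human’s answers, which is the scenario in which the test is passed):

```python
import random

def human_respondent(question: str) -> str:
    """Stands in for the human test taker in one room."""
    return f"That's a hard one. I'd have to think about '{question}' for a while."

def machine_respondent(question: str) -> str:
    """Stands in for the mechanical test taker in the other room.
    Here it mimics the human's style perfectly."""
    return f"That's a hard one. I'd have to think about '{question}' for a while."

def administrator_guess(answers: list[str]) -> int:
    """Pick the index of the answer believed to come from the machine.
    When the answers are indistinguishable, there is no signal to use,
    so the choice reduces to a coin flip."""
    if answers[0] == answers[1]:
        return random.choice([0, 1])
    # If there were a tell (e.g., stilted phrasing), a real administrator
    # would use it; this toy version just picks the shorter answer.
    return min((0, 1), key=lambda i: len(answers[i]))

def run_trial(question: str) -> bool:
    """One round of the test; returns True if the machine is correctly identified."""
    rooms = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(rooms)  # the administrator does not know which room is which
    answers = [respond(question) for _, respond in rooms]
    guess = administrator_guess(answers)
    return rooms[guess][0] == "machine"

# A machine "passes" when the administrator's hit rate stays near chance (~50%).
trials = 10_000
hits = sum(run_trial("What does it feel like to fall in love?") for _ in range(trials))
print(f"Machine correctly identified in {hits / trials:.1%} of trials")
```

With indistinguishable answers, the administrator’s hit rate hovers around 50 percent, chance level, which is exactly the failure to differentiate that Turing proposed as the pass criterion.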

If humans can develop machines that pass the Turing test, what kinds of rights might they be entitled to? Moore suggests starting with the “Three Laws of Robotics.” First written by the 20th-century writer and biochemist Isaac Asimov in his 1942 short story “Runaround,” the three laws outline a set of overriding commands that could be programmed into robots so that humans might avoid any possible malevolent intentions on their mechanical friends’ behalf. The laws are simple (one way they might be encoded is sketched after the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
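
As a thought experiment, here is a minimal Python encoding of the laws’ strict priority ordering (purely illustrative; the Action type and its boolean fields are hypothetical simplifications, since deciding what actually counts as “harm” is the genuinely hard part):

```python
from dataclasses import dataclass

# Toy encoding of Asimov's Three Laws as an ordered veto chain.

@dataclass
class Action:
    harms_human: bool        # would carrying out the action injure a human?
    allows_human_harm: bool  # would it, through inaction, let a human come to harm?
    ordered_by_human: bool   # was the action commanded by a human?
    endangers_self: bool     # would the action destroy or damage the robot?

def permitted(action: Action) -> bool:
    """Evaluate the laws in strict priority order."""
    # First Law: never injure a human or, through inaction, allow one to come to harm.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders, which at this point cannot conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_self

# An order to harm a human is refused despite the Second Law.
print(permitted(Action(harms_human=True, allows_human_harm=False,
                       ordered_by_human=True, endangers_self=False)))  # False

# An order that merely endangers the robot must still be obeyed (Second over Third).
print(permitted(Action(harms_human=False, allows_human_harm=False,
                       ordered_by_human=True, endangers_self=True)))   # True
```

The ordering does all the work: the First Law vetoes everything beneath it, and self-preservation matters only when neither human safety nor a human order is at stake.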

These laws seem to offer a great deal of protection to human beings. So what do intelligent machines get out of them? Moore points to a recently proposed revision of the laws, put forward by robotics researchers Robin Murphy and David Woods, which shifts responsibility toward the humans who deploy robots (sketched briefly after the list):

  1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
  2. A robot must respond to humans as appropriate for their roles.
  3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.
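
Note how the revised First Law points in the opposite direction from Asimov’s: it constrains the human who deploys the robot rather than the robot itself. A minimal sketch of that shift (the function and parameter names here are hypothetical):

```python
# Toy illustration of the revised First Law: the gate applies to the human's
# decision to deploy, not to the robot's behavior.

def may_deploy(meets_safety_standards: bool, meets_ethics_standards: bool) -> bool:
    """Revised First Law: a human may not deploy a robot unless the
    human-robot work system meets the highest standards of safety and ethics."""
    return meets_safety_standards and meets_ethics_standards

# A system that passes safety review but fails ethics review may not be deployed.
print(may_deploy(meets_safety_standards=True, meets_ethics_standards=False))  # False
```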

For Westworld viewers, these laws are likely refreshing. A key storyline of the show is the journey of Maeve, played by Thandie Newton. In Westworld, when hosts are “killed,” they go through a reboot process that erases their recent memories while keeping their core character composition intact. In Maeve’s case, these reboots fail to remove a series of incredibly disturbing memories, forcing her to relive these experiences over and over again. Subjecting a human being to this kind of treatment would clearly be unethical, so what about an intelligent machine? Moore says it doesn’t matter whether these traumatic feelings are being sensed by a human or a machine. “If an AI is capable of pain, love, etc., then I think we should treat them like we do other humans.”

While Westworld’s main town of Sweetwater, and more importantly its host inhabitants, are likely many decades or even centuries away, questions about how people treat intelligent machines are relevant and should be discussed now. Whether society likes it or not, artificial intelligence will continue to play an ever-increasing role in people’s lives. “Computers already do things that we can’t or don’t have time to do. Doing complex arithmetic is a good example of non-AI computing,” Moore said. Indeed, reports suggest that as many as 400 to 800 million jobs could be displaced by automation related to improved machine intelligence over the next 12 years. Perhaps this changes how people assess the current state of affairs between human beings and intelligent machines.

“I think more about the ethics of how AI will impact humans rather than the impact of humans on AI,” Moore said. “I don’t think much about the effect of humans on AI because I think it will be decades before humans create a real AI that is conscious. Certainly not in my lifetime.”


