There’s a strong human tendency to anthropomorphize everything. If you stub your toe on a rock, you can be angry at the rock. So it doesn’t take much to start assigning or imagining intelligence in something. Artificial intelligence is a very broad term that can have many possible meanings. I think of it as automating tasks that used to require human intelligence to accomplish. It’s sort of the idea of automating automation. I’m Daniel Lowd, an associate professor in the Department of Computer and Information Science at the University of Oregon, and I study machine learning and artificial intelligence.

I grew up watching movies like Short Circuit, where some kind of lightning bolt turns a robot into a sentient being that can philosophize about life and its own existence. It’s fun to imagine all the ways in which a different kind of life form could exist, understand the world, and interact with humans.

I think some of the biggest misconceptions about artificial intelligence involve the risk of it becoming exponentially more powerful and independent and making decisions to hurt people on its own. When we’re building artificial intelligence systems, we’re designing them to optimize for specific goals. There’s no risk of a spam filter suddenly deciding to start sending spam and building its own marketing empire. Its actions are limited to a single binary prediction: is this spam, yes or no? There are no other actions it can take and nothing else it can do.

In machine learning, the most common setting is that you have some data and you want to build a model for that data. You have a bunch of examples, and each of those examples is an input and an output. So it’s like a question and then the right answer. And you want to build a function that maps from questions to answers, not just for the ones you’ve seen before but for ones you haven’t seen before.

Some of the risks with AI are similar to what you might have with any other kind of technology. Whenever the stakes are higher, such as in healthcare or autonomous vehicles, the liability is also greater. Any time you’re automating something with a new technology, there’s a risk of automating it wrong. When the stakes are lower, say spam filtering for email, a few mistakes can be frustrating. You might miss something important. But it’s hopefully not going to kill anyone if some spam makes it into your inbox, or alternatively, if some of your legitimate email is marked as spam.

There have been examples where you teach an AI to play a video game. You say, “the goal is to get a score that’s as high as possible.” And if it finds a bug in the video game that it can exploit to get lots and lots of points, then it will exploit the bug, because you’ve only given it the command to optimize a particular score. There could be surprises in the kinds of mistakes that AI might make, and that’s why we need to be looking into what some of those potential pitfalls are, how to identify them, and how to fix them.
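To make the video-game example concrete, here is a toy sketch of that failure mode. Everything in it is hypothetical: the “game” is just a made-up table of scores with a deliberate bug, and the “agent” is a greedy loop that picks whichever action earns the most points, since the score is the only goal it was given.

```python
# Hypothetical illustration: an agent told only to "maximize the score"
# will exploit a scoring bug, because the score is all it can see.

# A made-up game: each action has an intended point value, but one
# action triggers a bug that awards far more points than intended.
SCORES = {
    "finish_lap": 100,            # the behavior the designer wanted
    "collect_coin": 10,
    "drive_in_circles": 0,
    "clip_through_wall": 10_000,  # a bug: unintended, but highest-scoring
}

def greedy_agent(actions, score_of):
    """Pick the action with the highest score -- the only objective given."""
    return max(actions, key=score_of)

print(greedy_agent(SCORES, SCORES.get))  # -> clip_through_wall
```

Nothing in the objective says “play the game the way a human would”; the bug exploit wins because the score is the entire specification of what the agent should do.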
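The supervised learning setting described earlier, where examples pair an input with an output and the learned function should generalize to examples it hasn’t seen, can also be sketched in a few lines. This is a minimal sketch under assumptions the transcript doesn’t specify: the tiny hand-written dataset is invented for illustration, and scikit-learn’s naive Bayes classifier stands in for whatever model a real spam filter would use.

```python
# Minimal sketch of the supervised setting: examples are (input, output)
# pairs -- a "question" (an email) and the right "answer" (spam or ham).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set (a real filter would use vastly more data).
emails = [
    "win a free prize now",
    "cheap pills limited offer",
    "meeting moved to 3pm",
    "lunch tomorrow?",
]
labels = ["spam", "spam", "ham", "ham"]

# Fit a function that maps questions (emails) to answers (spam / ham).
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# The point of learning: predict answers for emails never seen before.
print(model.predict(["free prize offer", "are we still on for lunch?"]))
# -> ['spam' 'ham']
```

The model’s whole repertoire is the yes/no label it attaches to each message, which is the sense in which a spam filter has no other actions available to it.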