AI - our best friend, or is it?
AI | Ivana Simic


Monday, Nov 28, 2016 • 6 min read
We attended WebCamp 2016 in Zagreb. It was a two-day conference on various topics, but the talks we found most interesting were the ones about Artificial Intelligence.

WebCamp2016

Last month we visited WebCamp Zagreb, a two-day conference on web technologies. We attended a lot of great talks during the conference, but the most interesting ones for us were those covering Artificial Intelligence.

“Artificial intelligence and bots are changing the way we interact with the world both online and IRL. In some aspects, we’ve improved significantly, but there are still leaps and bounds to go both in technological and user experience improvements. Both designers and developers can easily integrate AI into their work today and improve their products greatly if they take into account current limitations.” - Ashley Hathaway

The first talk on this topic was “Bots, AI APIs and messy interactions” by Ashley Hathaway. She talked about how AI is already changing the way we use our phones and interact with the world. For example, using speech to interact with the phone reduces the time we spend looking at it.

AI as our right-hand assistant…

Even though there is still a lot to achieve in this field, we can already use AI to help our users with recommendations so they can make decisions more easily. Take Netflix, for example. When you are setting up an account, the first thing you need to do is pick some of your favorite TV shows and movies. Netflix then uses your picks to give you a list of recommendations for other shows and movies that you might like. Of course, that doesn’t mean they always get it right, but it can save you the time you’d spend browsing the net looking for a new show to watch. A colleague pointed out that Deezer does a similar thing with its Flow, which plays you somewhat randomly picked music based on the songs you marked as your favorites. It does a pretty good job, but occasionally plays a song you might not like.
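
To make the idea a bit more concrete, here is a minimal, purely illustrative sketch of that kind of recommendation step. The titles, genres, and the overlap score are all made up for the example; real services like Netflix or Deezer use far richer signals and models than this.

```python
# Naive, illustrative recommender: score unseen titles by how many
# genres they share with the titles the user picked during onboarding.
# Titles and genres are invented; this is not how any real service works.

CATALOG = {
    "Space Saga":    {"sci-fi", "drama"},
    "Laugh Factory": {"comedy"},
    "Dark Streets":  {"crime", "drama"},
    "Star Jokes":    {"sci-fi", "comedy"},
}

def recommend(picked_titles, top_n=2):
    liked_genres = set()
    for title in picked_titles:
        liked_genres |= CATALOG[title]

    # Only consider titles the user has not already picked.
    candidates = [t for t in CATALOG if t not in picked_titles]
    # Rank by number of shared genres, highest first.
    ranked = sorted(candidates,
                    key=lambda t: len(CATALOG[t] & liked_genres),
                    reverse=True)
    return ranked[:top_n]

print(recommend(["Space Saga"]))  # -> ['Dark Streets', 'Star Jokes']
```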

But the use of AI Ashley talked about the most was chat bots. By making them, brands can make their interactions with users more direct and personal. Just look at the numbers: there are over 900 million Facebook users and around 11,000 bots already created. Why? Well, computers are good at remembering stuff, automating mundane tasks, and making accurate calculations very fast. And when users prompt your bot for information, the response is immediate; they don’t have to wait for someone to read the message first and then look for the information. Of course, there are still skeptics who’d prefer talking to a human, and brands unsure of the payoff. But if we follow the best practices Ashley gave in her talk, we might take a step in the right direction and make a bot everyone would want to use.

So, what are those? Well, there are some things to consider before making a bot, like the jobs the user needs to get done and how fast they want the information. It’s important to keep in mind that not all users have the same mindset or way of thinking. So, even though computers are good at finding information fast, there should be an indication of how long a task may last. Some tasks may require complex calculations, so you shouldn’t let your users wait and then wonder if the bot is even doing something – display a simple message such as “Fetching information, this may take a while.”
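
As a rough illustration of that practice, here is a small sketch of a bot handler that acknowledges the request before doing the slow work. The send_message and run_slow_report helpers are placeholders for whatever bot framework you actually use, not any particular API.

```python
import asyncio

# Hypothetical bot handler: send_message and run_slow_report are
# stand-ins, not a real bot framework's functions.

async def send_message(chat_id, text):
    print(f"[to {chat_id}] {text}")   # stand-in for the real send call

async def run_slow_report(query):
    await asyncio.sleep(2)            # pretend this is a heavy calculation
    return f"Report for '{query}': 42 results"

async def handle_request(chat_id, query):
    # Tell the user right away that work has started, so they are not
    # left wondering whether the bot is doing anything at all.
    await send_message(chat_id, "Fetching information, this may take a while...")
    result = await run_slow_report(query)
    await send_message(chat_id, result)   # deliver the answer when it is ready

asyncio.run(handle_request("chat-1", "monthly sales"))
```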

We also need to know how much information is needed – we don’t want to provide too much or too little of it. Besides the written content, we should think about the visual design of our bot, making it more appealing to users. And, the one we find really important: always close the loop. When the user gives you feedback, thank them and display a message; when the user tells you what to do, again, display a message.
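
A tiny sketch of that “close the loop” idea, with made-up responses rather than anything from the talk: whatever the user sends, the bot always answers with something, so the interaction never ends in silence.

```python
# Hypothetical feedback handler: every user message gets an acknowledgment.

def handle_feedback(user_message):
    cleaned = user_message.strip().lower()
    if cleaned in {"thanks", "thank you"}:
        return "You're welcome! Anything else I can help with?"
    if cleaned:
        return "Thanks for the feedback, noted!"
    return "I didn't catch that - could you try again?"

print(handle_feedback("The results were spot on, thanks"))
print(handle_feedback(""))
```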

But what about the mishaps that can happen along the way? Of course, there are many, but let’s mention a few of them. The first thing Ashley mentioned was a lack of sensitivity. We surely wouldn’t want a Skype notification to be the way we find out that someone died. There’s also a simpler mishap that can happen with speech recognition: misheard phrases. There is a way to handle those, though – ask your user “Did you mean…?” and try to get it right. On a lighter note, there are also game bots you can play with, for example a card game to kill some time.
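
Here is one way the “Did you mean?” fallback could look, as a sketch. The confidence threshold and the (phrase, confidence) pairs are invented for the example; a real speech recognition service would supply its own transcript and score.

```python
# Illustrative handling of low-confidence speech recognition results.

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off, tune for your own service

def respond_to_transcript(phrase, confidence):
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"OK, doing this: {phrase}"
    # Low confidence: confirm instead of acting on a possible mishearing.
    return f'Did you mean "{phrase}"?'

print(respond_to_transcript("call mom", 0.95))            # acts directly
print(respond_to_transcript("wreck a nice beach", 0.45))  # asks for confirmation
```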

Now, not all people will use bots, but there is something else Ashley mentioned about using AI that we found useful. While we were in Zagreb, we were getting what she called “passive notifications” on our phones, telling us when and where we could catch the next bus. We got a piece of useful information before we even needed it, but it didn’t make us look at our phones before we wanted to. You could send useful reminders in that fashion, so the user remembers something they need to do during the day. Not-so-passive reminders can be very annoying and distracting – instead of allowing the user to focus on their current task, they force them to look at the phone. A wake-up alarm shouldn’t be delivered as a passive notification, but there are good use cases for the pattern – like that bus notification. The phone should not be ringing just because it found a nearby bus that leaves soon.
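
The passive notification idea boils down to a single decision: show the information quietly instead of interrupting. The sketch below is a toy model of that decision; the Notification class and notify() helper are assumptions for the example, not a real mobile platform API.

```python
from dataclasses import dataclass

# Toy model of "passive" vs. interrupting notifications.

@dataclass
class Notification:
    text: str
    silent: bool  # True = show in the tray only, no sound or vibration

def notify(n: Notification):
    channel = "silently added to the tray" if n.silent else "RINGS the phone"
    print(f"{channel}: {n.text}")

# The bus tip is worth showing, but not worth interrupting the user for.
notify(Notification("Bus 109 leaves from the nearby stop in 7 minutes.", silent=True))
# A wake-up alarm, on the other hand, has to be loud.
notify(Notification("Time to wake up!", silent=False))
```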

Right now, AI enables us to give users recommendations and help them make decisions, but there is still a lot of work to be done. In the near future (less than five years, as Ashley predicts) this will lead to more notifications about completed tasks, less time spent looking at the phone, and more information brought to our attention before we need it.

…or as our arch enemy?

The future looks bright, right? Well, it depends on who you ask. AI is not seen as something that should just use a bunch of data to recommend a movie; we strive to create an AI that has the ability to learn and improve itself. Imagine creating artificial intelligence that could do that. But if it could improve itself, what if it improved so much that it became superior to us? What if it decided to destroy us? Exactly those questions were covered in the keynote “Superintelligence – the idea that eats smart people” by Maciej Cegłowski on the second day of the conference.

“A skeptical view on the seductive, apocalyptic beliefs that prevent people in tech from really working to make a difference (…) This talk is an attempt to vaccinate the next generation of developers against the seductive ideas of existential risk, superintelligence, and the charismatic religious figures who will try to eat their brains.” - Maciej Cegłowski keynote description

Let’s think about it for a second. How likely is it, really, that a person would work on AI thinking their invention might be a step leading to our destruction? Maciej tried to discourage that way of thinking by simply pointing out that the superior intelligence would have to be created by us. Given that we don’t even have a complete understanding of how our own brains work, it is hard to imagine that happening. He compared people working on AI in pursuit of superintelligence to the alchemists of the Middle Ages. When you look at it, superintelligence is the philosopher’s stone of our age. And much like the alchemists never found theirs, we probably won’t find ours. That doesn’t mean the research will all be in vain. If you look back, the alchemists had a few important findings as side effects of their research; for example, they established techniques that proved essential to the birth of modern laboratory science. But still… what if?

Maciej gave a cartoonish example of what would happen if we somehow created an AI that could improve itself. Think about an AI whose only goal is to make people laugh. First, it starts with kids’ jokes, for example: “What is gray and can’t swim? A building.” But then, as it improves itself, it finds a decent joke that will make most people laugh. Still, that’s not enough. After all, our AI has only one goal, and that is to make us laugh. So it continues to improve itself until it finds the perfect joke to make us laugh as hard as possible. Eventually, it would come up with a joke that makes us laugh so hard we would literally die laughing (Monty Python, anyone?). Interesting scenario, right? But it wouldn’t happen if we got our specs right. Maybe we could give it one more goal: to keep us alive?

Anyway, why would an AI destroy humanity? In Sci-Fi, it almost always tries to do so. But come to think of it, humans have superior intelligence compared to other living beings, and we are not obsessed with, let’s say, cats one day taking over the world and destroying all of us. No, what we strive for and fear is something superior to us. Is it so hard to believe that our superintelligence would fear something superior to itself? Like a mega-intelligence or something like that? Maybe we worry about it all too much because we only look at what could go wrong…

Still, we find AI really interesting, and we don’t think worrying about AI one day taking over the world is helpful. Some of the things Maciej pointed out make it hard to believe that will ever happen anyway. So, we should just enjoy the ride and make great things. Maybe we could start with bots that help humans – that doesn’t sound so terrifying, does it?