I love Alexa, the Natural Language Processing (NLP) system in my Amazon Echo. She wakes me up every morning, she knows my schedule, she can control many of my Z-Wave and Zigbee compatible devices (switches, outlets, etc.), she can tell me the temperature and weather forecast, she can play any piece of music available on Amazon Prime, she can play me an “A” (via a skill called The Pianist), and she’s the perfect oven timer. I love her so much (and, as we shall discuss, “love” is an interesting way to describe our relationship), I was going to order an Echo for every room in my house – until I saw a picture of my new love-to-be … Google Home.
For all that is lovable about Alexa (an app store full of skills and unmatched shopping assistance on Amazon), you must learn how to ask her for many of the things the system can access. And while you use natural language to accomplish these requests, the trigger phrases do not always come naturally.
Google Home is the child of Google Now – and “OK, Google” almost always delivers exactly what I’m looking for. When Google combines Google Assistant, Google Now and everything that led up to Parsey McParseface (Google’s open source natural language parser) into one neat package, Alexa will have some serious competition. The good news is that Alexa is evolving daily and the competition from Google Home will be good for everyone, including Siri, Cortana and every other well-funded NLP interface. In the very near future, we will talk to everything and everything will talk back.
Cool and Creepy
Sometimes I ask, “Alexa, what’s the temperature outside?” She answers by speaking the current temperature followed by an abbreviated weather report. She’s so human-like, I have to resist the temptation to say “Thank you” when she finishes. Importantly, Alexa is not a she; it is an NLP system. Amazon has anthropomorphized Echo with a female voice and a feminine name, which makes it easy to call Alexa a “she.”
Alexa does not say “You’re welcome” if you thank it for telling you the weather. And you don’t have to ask it nicely. If you don’t want to think of your Echo as a female named Alexa, you can program your Echo to respond to the word “Amazon,” in which case “Amazon, weather” will get you the exact same response – not as anthropomorphized, and nowhere near as much fun to use.
Every time Alexa answers me, I am reminded of an old Star Trek episode (the original series, of course) called “Tomorrow Is Yesterday.” The computer keeps calling Captain Kirk “dear,” which annoys him throughout the episode. Spock explains that the computer had been re-programmed on the female-dominated planet Cygnet XIV and they gave it a “personality.” That was Gene Roddenberry and D.C. Fontana back in 1967 (Stardate 3113.2). Wow, did they call this one right.
Should you treat your household NLP interface like a person? Should you be polite when you speak to it, or is it OK to be abrupt or even abusive? The devices won’t care. They don’t have feelings; they are computers that have been programmed to speak with us. Will we care? How will we teach our children to differentiate between machines that sound and act like people, and other disembodied voices that are actual people? Will a child (or a grown-up) know whether a customer service representative is a person or an NLP interface? In the very near future, it will be almost impossible to tell.
Reinforcement learning is a methodology used to train machine-learning systems. It is also an excellent training methodology for living organisms (such as humans). If you create an environment where one kind of action results in a reward and another kind of action does not, the subject of the training (machine-learning algorithm or living organism) will “learn” to do its best to respond with the action that comes with a reward.
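That reward-driven loop can be sketched in a few lines of Python. This is a minimal, illustrative example only – the action names and the reward rule are my own assumptions for the sake of the analogy, not anything Amazon or Google has actually built:

```python
import random

def train(episodes=2000, epsilon=0.1, alpha=0.1, seed=0):
    """Minimal reinforcement-learning sketch: an epsilon-greedy agent
    learns which of two hypothetical actions earns a reward.
    Rewarding only 'polite' stands in for a system that rewards civility."""
    rng = random.Random(seed)
    q = {"polite": 0.0, "abrupt": 0.0}  # estimated value of each action
    for _ in range(episodes):
        # Explore occasionally; otherwise exploit the best-known action.
        if rng.random() < epsilon:
            action = rng.choice(list(q))
        else:
            action = max(q, key=q.get)
        reward = 1.0 if action == "polite" else 0.0  # only civility pays off
        q[action] += alpha * (reward - q[action])    # incremental value update
    return q

values = train()
# After training, the agent strongly prefers the rewarded action.
```

The agent never sees a rule that says “be polite”; it simply drifts toward whichever behavior the environment rewards – which is exactly the mechanism at work whether the learner is an algorithm or a five-year-old.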
So, to prevent breeding a generation of verbally abusive children, will we program our NLP interfaces to reward civility? That would require system creators to push the anthropomorphic features of the interfaces even further than they already have. But does that make sense? The more human-like the system, the more likely we are to confuse it with a person. On the other hand, the less human-like the system, the less likely we are to empathize with it, so we’ll probably use it less. This is going to be a fascinating experiment in applied anthropology.
Last week, I introduced Alexa to one of my favorite 5-year-olds. The child was gleeful! Her fearless, 5-year-old mind, inquisitive nature and naiveté combined to put on quite an exhibition of Echo’s feature set. That’s when it hit me: by the time she is old enough to know she is talking to an NLP interface, she won’t actually be able to tell that she is talking to a machine. What will the world be like when everything you talk to talks back? How will our language and our communication skills adapt? What new behaviors will evolve to deal with things that speak as we do?
Will we need legislation that requires NLP systems to identify themselves before interacting? That would certainly be a buzz kill: “Good morning, Mr. Palmer. I am an NLP system designed to assist you.” If you remember, C-3PO in the Star Wars movies introduced itself by saying, “I am C-3PO, human/cyborg relations.” But it was a droid with a body that looked human-ish. NLP systems will just be friendly, helpful disembodied voices.
In 2013, Joaquin Phoenix starred in Her, a movie about a man who falls in love with an NLP system named Samantha. (If Alexa sounded like Scarlett Johansson, the voice of Samantha in the movie, I think lots of people would fall in love with it too.) At the time, most people thought the storyline and the device were pure science fiction. That may have been true for most people, but for Google, Amazon, Facebook and everyone working on NLP systems, it was a goal. Now we’re just a few months (maybe a few years, but not more) away from actually having to deal with it. This is going to be awesome!