Saturday, August 14, 2021

Bridging the uncanny chasm

I remember my first visit to the Oregon Museum of Science and Industry in Portland when I was a teenager. I had always been fascinated by science, so a playground of interactive exhibits with hands-on experiments was just my kind of thing. I particularly recall my first encounter with a conversational chatbot they had installed, which could output thoughtful-seeming responses to questions the user typed in. Dr. Know, as it was affectionately named, was an instantiation of the Eliza program from the MIT Artificial Intelligence Laboratory. It was an example of a Turing test, an idea put forward by Alan Turing, an early pioneer of computer science. He theorized that computers would eventually be indistinguishable from humans once their logic structures matched those we form through socialization. The Turing test is a structured dialog between a human and a machine, used to judge whether the machine's responses can be told apart from a person's. (Ironically, on the internet, servers now spend a considerable amount of time giving us Turing tests before they allow us to view websites. These "Completely Automated Public Turing tests to tell Computers and Humans Apart," CAPTCHAs for short, are meant to lessen the time web servers spend dialoguing among themselves without benefiting humanity in some way.)
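For anyone who has never poked at one of these programs, Eliza's trick was little more than pattern matching plus pronoun reflection. A minimal sketch of that style of rules engine might look like the following Python (the patterns and canned responses are my own illustrations, not Weizenbaum's originals):

```python
import random
import re

# Swap first- and second-person words so the echo sounds responsive.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

# Ordered pattern/response rules; the catch-all at the end keeps dialog moving.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?", "Did you come here because you are {0}?"]),
    (re.compile(r"(.*)", re.I),
     ["Please tell me more.", "Can you elaborate on that?"]),
]

def reflect(fragment: str) -> str:
    """Reflect pronouns: 'my day' -> 'your day'."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return the first matching rule's response, filled with the reflection."""
    for pattern, responses in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return random.choice(responses).format(reflect(match.group(1)))

print(respond("I feel lost in this museum"))
# e.g. "Why do you feel lost in this museum?"
```

Spend half an hour with a loop like this and, as I found, you can map the whole rule set by hand.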

These days, a majority of American households have at least one device that can act as an interface to a conversational virtual assistant. Computer vendors have embedded speech-to-text inputs in the hardware they sell to encourage us to speak questions to their respective cloud-based virtual assistants (Cortana, Siri, Alexa, Alice, Google Assistant). Their goal is to eventually obviate the need for keyboard entry ahead of the next generation of brain-computer interfaces. Staring at phone screens can be fun, but it can distract us from leading normal lives rich with face-to-face social interaction. We spend a lot of each day talking through our fingers, using secondary representations of language. If enough people offer up their voice patterns, computers can learn every accent and thereafter bypass the alphabetic text we typically type at them. Once they speak the same language we do, without the plastic-and-silicon intermediary of the screen, our near and distant relationships can return to a more natural means of communicating, and we'll spend far less of our lives talking through finger gestures.

Each of us has had our own experiences conducting Turing tests with machines in the home or on phone lines, trying to navigate their logic structures and capabilities. I've seen lots of failed attempts to bridge what Masahiro Mori called "the uncanny valley" of foreignness that divides humans and machines, preventing them from forming comfortable, trust-based relationships. I'm fascinated to watch the emotions people express when they know their counterpart is not human. I'm impressed by the tricks tech companies use to embed implied emotion in the tone of voice their virtual assistants use with us. Ironically, it's usually the humans who sound like robots in these interactions, insisting in a flat, matter-of-fact register rather than the sing-song tone we typically use with fellow humans we seek to convince, sway, or implore.

Some people bristle at the idea that we would call a machine-learning algorithm plus a database an artificial intelligence, as if the term intelligence were an honorary title conferred only on those of us who feel and express emotion beyond vocal inflections. When I was giving Dr. Know its Turing test at OMSI (or it was testing me), I was enthralled to see that the machine could grapple with the rules engine we use to socialize. The size of its knowledge database didn't matter to me so much as its deftness in generating an acceptable response to a well-framed question. It took me about thirty minutes to map out the logic frameworks the programmers had used in the conversational flows, after which I could predict how it would answer any question I posed. Once that happened, I was satisfied that I understood this mechanical friend well enough, and I could go on with my day.

As children, we interface with the world intensely to find which external connections give us pleasant or unpleasant reactions. I walked away from Dr. Know feeling like it had dead-ended too many of my questions; I'd gotten to the end of the dialog maze. It's a similar experience to what many of us have with smart speakers today. They sometimes can't progress in a dialog unless we buy something or augment them with some third-party "skill" not readily on hand. As we come to depend more on virtual companions, we don't want to hear "I'm sorry, Dave. I'm afraid I can't do that" when we're facing time-critical challenges. Human patience has a much shorter fuse than bot patience.
 
I recently joined a company, Akin, that is developing AI to assist people with exactly those time-critical decisions and actions. My team, which formerly worked on the Watson platform at IBM, is extending AI technology to help engineers working in complex assembly contexts and families coordinating interdependencies across the household. Watson is the conversational AI that was designed, like Eliza, to banter back and forth in dialog over arbitrary questions. It famously beat Jeopardy! champion Ken Jennings at his own game. Knowledge games are an area where AI can increasingly outmaneuver humans over time, as Moore's Law of growing computing power favors machines. Humans can't scale at the same rate unless they augment their capacity with external data sources, other people, or the internet.

AI platforms can be especially good at repetitive tasks (set a timer; turn on the lights; turn up the volume) and state monitoring (weather forecasts; there's somebody at the door; your package will arrive tomorrow). AI assistants can enhance our effectiveness by helping us stay on task and avoid distraction. Beyond just assisting, we are finding more creative applications where AI can make significant advances in inference-heavy fields such as pattern recognition, protein folding, and vaccine design. The more we can delegate tasks to AI, the more we free the human mind from burdensome task management so it can exercise its own strengths.
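Under the hood, the task-and-monitoring side of these assistants often amounts to routing an utterance to a handler. Here is a rough sketch of that idea; the intent names and handlers are hypothetical illustrations, not any vendor's actual API:

```python
from typing import Callable

# Hypothetical handlers for a few repetitive-task and monitoring intents.
def set_timer(utterance: str) -> str:
    return "Timer set."

def lights_on(utterance: str) -> str:
    return "Lights on."

def weather(utterance: str) -> str:
    return "Tomorrow looks sunny."

# Keyword-to-handler table; real assistants use trained intent classifiers,
# but a lookup like this captures the routing step.
INTENTS: list[tuple[tuple[str, ...], Callable[[str], str]]] = [
    (("timer",), set_timer),
    (("light", "lights"), lights_on),
    (("weather", "forecast"), weather),
]

def dispatch(utterance: str) -> str:
    """Route an utterance to the first handler whose keyword appears in it."""
    words = utterance.lower().split()
    for keywords, handler in INTENTS:
        if any(k in words for k in keywords):
            return handler(utterance)
    return "Sorry, I can't do that yet."  # the dialog dead-end we all know

print(dispatch("set a timer for ten minutes"))  # -> "Timer set."
print(dispatch("what is the weather tomorrow"))  # -> "Tomorrow looks sunny."
```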

I look forward to the point where we can engage and collaborate more with our in-home AIs. Relationships thrive when the output is more than the sum of the inputs. We want robotic assistants that do far better than just what they're told, or than answering factually what they're asked. Just as friends propel us to reach our personal potential, AI assistants should be able to amplify our efforts toward a goal. To do this they have to get past our human tendency to recoil from things that are uncannily familiar. We need to find ways to let them draw closer, so that we can invest more of ourselves in them and rely more on them. To build that trust in Akin's AI assistants, the founders have incorporated as a public benefit corporation and are working with universities and health researchers to measure the quality-of-life benefit that results from using Akin's AI in the home.

We look forward to sharing more of our advances over the coming years.



