How Do You Design A Lovable Robot?

In a TED Talk, robot ethicist Kate Darling of MIT describes how she demonstrated a small dinosaur robot to a friend. This little robot, named Pleo, had sensors and motors built in: it could walk around and move its head, and it could also tell whether it was standing upright or lying down. If it was lying down or dangling, it would start crying. Darling’s friend examined the robot while holding it upside down, which made it cry, and Darling felt so uncomfortable watching this that she took the robot back from her friend.

Pleo Toy Robot

Her own reaction, this pity for a dinosaur robot that was ultimately nothing more than a toy, astonished her, and she asked herself why we form emotional connections with machines.

Darling’s reaction is not unusual; it happens to others as well. In P. W. Singer’s book ‘Wired for War‘, American soldiers speak of the ‘robo hospital’ rather than the ‘Joint Robotics Repair Facility’ when they send their drones and demining robots in for repair. These same soldiers give their robots ‘funerals with full military honors.’ And last but not least, we already learned about the overturned Kiwibots that are immediately set back upright by passers-by because they look so ‘sad’. All this shows how quickly we humans form such emotional connections. Anything that moves and appears alive can awaken our compassion.

In addition to the dinosaur robot, there are a number of other robots for children that visibly express emotions, evoking interesting reactions in the children. Anyone with preschool-age children knows from painful personal experience that they always want to win at games; if they don’t, you risk a full-blown meltdown.

Now onto the stage come Cozmo and Vector, two small robots with distinct personalities. These small, block-shaped robots on wheels have an arm-like lifting device that allows them to lift, move, and flip their toy cubes. If they succeed in a game, they celebrate their win by raising their arms, spinning in circles, making cheering noises, and blinking their eyes. If they lose, however, they get loudly annoyed and throw themselves around in anger.


Children are so taken with them, and so awed by their reactions, that the robots’ inventors once overheard two five-year-olds whispering to each other that they should let the robot win so it wouldn’t get angry.

This distinct personality of toy robots is something Sherry Turkle racked her brains over in her book Alone Together. She believes it is relatively easy to build a machine that is easy to use, even though many long-suffering users of machines and software would report the opposite. As a former software developer, I know the challenge of making user interfaces intuitively understandable and easy to use. It is by no means an easy task.

But Turkle wants to give machines an endearing and winning personality, and that is a task on a different level of difficulty. Computer scientist John Lester goes even further:

In the future, we will not simply use our tools or enjoy them, we will even want to take an interest in them and care for them. They will teach us how to treat them and how to handle them. We will move toward loving our tools; and our tools will evolve toward becoming lovable.

What makes a robot or an AI likeable? This is a question that chatbot makers are struggling with, and they have had very mixed success with one and the same technology. Microsoft, for example, experienced a minor disaster with its English-language chatbot Tay when it was unleashed on Twitter: within a few hours this bot, posing as a teenage girl, turned into a rude and racist Twitter participant. The very same technology, known in Chinese networks as Xiaoice, on the other hand, garnered millions of followers. What was the difference?

As Stanford researchers found, the difference lies largely in how the AI is advertised. People interacting with a bot that presents itself as more of a toddler bring lower expectations to it than they do to one that is extolled as being on the level of human experts.

Today, AI agents are often associated with some sort of metaphor. Some, like Siri and Alexa, are viewed as administrative assistants; Xiaoice is projected as a “friend,” and Woebot as a “psychotherapist.” Such metaphors are meant to help us understand and predict how these AI agents are supposed to be used and how they will behave.

Humans in general are very forgiving of machines, as ELIZA, a primitive chatbot posing as a psychotherapist, showed long ago. ELIZA was developed in 1964 at MIT by Joseph Weizenbaum; its best-known script imitated a psychotherapist, and Weizenbaum intended it to demonstrate just how superficial such artificial dialogues are. The program did little more than match patterns in the user’s input and reflect them back as questions, just enough to keep a convincing conversation going.
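To give a sense of how shallow that trick is, here is a minimal sketch in Python of the kind of pattern matching and pronoun reflection ELIZA relied on. The rules, wording, and responses are made up for illustration and are not Weizenbaum’s original DOCTOR script.

```python
import random
import re

# Illustrative ELIZA-style rules: a regex and a few response templates.
# "{0}" gets filled with the (reflected) fragment of the user's sentence.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"because (.*)", re.I),
     ["Is that the real reason?", "Are there other reasons as well?"]),
    (re.compile(r"(.*)", re.I),  # catch-all keeps the dialog going
     ["Please tell me more.", "How does that make you feel?"]),
]

# Swap first- and second-person words so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "yours": "mine"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(sentence: str) -> str:
    for pattern, responses in RULES:
        match = pattern.match(sentence.strip().rstrip(".!?"))
        if match:
            return random.choice(responses).format(reflect(match.group(1)))

print(respond("I am unhappy with my job"))
# e.g. "Why do you think you are unhappy with your job?"
```

A few dozen rules of this kind were enough to keep many users talking; there is no understanding anywhere in the loop, which was precisely Weizenbaum’s point.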

Weizenbaum argued that professions requiring genuine empathy, such as those in medicine, geriatric care, the military, and even customer support, should never be performed by a machine. Extensive interaction between humans and machines that show no genuine empathy would, he feared, isolate people and make them feel less valued.

However, the opposite turned out to be the case. The test subjects who communicated with ELIZA became convinced that a human being was sitting behind the machine and answering them. Even when they understood perfectly well that it was a machine, they asked after the end of the experiment whether they could spend some more time alone with it. Reportedly, even Weizenbaum’s own secretary asked for a ‘private appointment’ with ELIZA.

At the very least, expectation setting seems to be one ingredient in developing lovable, and thus humanly acceptable, artificial intelligences, bots, and robots. Cute eyes or looks alone are certainly not enough. And this is where it gets really exciting, because the definition of ‘lovable’ is probably at least as unclear as our definitions of terms like feeling, emotion, or consciousness.


This article is in part an excerpt from my book When Monkeys Teach Monkeys: How Artificial Intelligence Really Makes Us Human. Published in February 2020 by Plassen-Verlag.

Now on Amazon
and in bookstores!

Wenn Affen von Affen lernen

What is intelligence in the artificial and the human sense? Can machines develop consciousness, and how would we recognize it? Are machines capable of showing and feeling empathy?
Innovation guru Dr. Mario Herger provides answers. He illustrates the many opportunities and positive effects of AI on all aspects of social and economic life. Engaging conversations with AI thought leaders and practitioners from Silicon Valley give the reader valuable new insights and mindsets. An indispensable AI guide for the present and the future!
