In 2014, I was in ballet class when I got a call from the hospital that my dad had experienced a stroke. I rushed to the hospital to join him, and found him buried in a nest of cables, surrounded by a variety of monochromatic, rapidly beeping assistive machines. They seemed to form a single, massive enclosure around him. Every few moments he would peer up at one of the machines with wide, confused eyes. It became clear that many of the technologies that were meant to be helping and supporting him were scary and inaccessible. During the most stressful moments of his life, the machines multiplied his fear.
I wondered how I could make him feel reassured, safe, and dignified around these devices. At the time, I was a professional dancer and choreographer. Dancers, performers and theater artists are all masters at evoking emotions, so I began to contemplate how I might improve machines to help him feel empowered and hopeful rather than afraid. My dad is now in his early 70s and fully recovered. But his story, and my personal questioning of technology’s effect on society, led me to start combining my passions for dance and technology.
I’ve danced with different robots all around the world, in installations and live performances. I’m now a Ph.D. candidate in mechanical engineering at Stanford University, where I work on models and interfaces that allow robots to learn new tasks from humans, and I work on ways to reduce alienation and increase empowerment for humans when interacting with machines. It’s fascinating how much dance and robotics theory overlap—the notion of kinesphere (dance) or workspace (robotics), for example. And my work thus far in graduate school has solidified my notion from 2014: dance and robotics share intriguing similarities under the themes of human perception and interaction.
One of the first things humans notice about robots is how they move. We see evidence of this in studies where humans draw patterns and emotional meaning from random collections of dots and minimal representations of bodies. “Humans have been moving and feeling much longer than they have been thinking, talking and writing,” said psychologist Barbara Tversky, at a talk she gave at the Stanford Human-Computer Interaction (HCI) Seminar last year. As the general public encounters and forms impressions of robots up close for the first time, the robots’ movement is paramount. Traditional methods of programming robot movement do not always account for the broader personality that the robot conveys. Areas of robotics research like social navigation, where robots update their paths to account for nearby humans’ movements, implicitly build upon dance improvisation, in which dancers are given a set of rules or guidelines and respond to the space, timing and orientation of others around them. One important problem in social navigation is accurate comprehension and prediction of human motion.
This is because the movements a human makes, whether waving a hand or skipping, can be meaningfully different depending on other humans, robots, and environmental circumstances nearby. Choreographers not only sequence motions together but place different agents’ motions in a relative context, to weigh importance and direct audiences’ attention. They use tools like repetition, foregrounding, mirroring, and translation to do this. This choreographic thinking could inspire new ways of modeling human motion and generating robot actions in complex environments where humans will interact with robots.
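The choreographic tools above can be made concrete in code. The sketch below is my own illustration, not anything from a specific robotics system: it treats a short movement phrase as a list of 2-D points and applies repetition, mirroring and translation to place copies of the phrase in relative context, the way a choreographer might stage a duet.

```python
# A minimal sketch of choreographic tools applied to a motion phrase,
# represented as a list of (x, y) points. The phrase and transforms are
# illustrative assumptions, not a real robot's trajectory format.

def repeat(phrase, times):
    """Repetition: perform the same phrase several times in a row."""
    return phrase * times

def mirror(phrase):
    """Mirroring: reflect the phrase, like a second dancer facing the first."""
    return [(-x, y) for x, y in phrase]

def translate(phrase, dx, dy):
    """Translation: restage the same phrase elsewhere in the space."""
    return [(x + dx, y + dy) for x, y in phrase]

# A short movement phrase, then a "duet": one agent mirrors it while
# another performs it translated across the space.
phrase = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.0)]
duet = mirror(phrase) + translate(phrase, 5.0, 0.0)
```

A motion model built this way could, for instance, score how closely a robot's planned path mirrors or echoes a nearby human's, rather than treating each trajectory in isolation.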
As the number of robots in society continues to increase, more people need to be capable of using them. I think of other ubiquitous technologies like laptops and phones, and reflect that I have minimized the breadth of my movement to adapt to the binary demands of a series of buttons. Since robots are embodied and often mobile, the whole robot can be an interface, and new ways to interact become possible. Some such interaction modes include gesturing at a robot, teleoperating it with a controller, or puppeting the robot through physical contact.
These sorts of interaction modes let a user like my dad actively direct the robot with natural human motions, and thus necessitate a diverse selection of meaningful, purposeful human gestures and physical touch points. Generating and then parsing these motions into discrete inputs for robots strikes me as a choreographic challenge. I am entrenched in one such challenge at the moment: determining how a robot should respond to a series of complicated gestures from more than one human interactant.
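To make the parsing step concrete, here is a deliberately simplified sketch of my own devising; the gesture names, commands and the "most recent gesture wins" policy are illustrative assumptions, not how any deployed system (or my research) actually resolves multi-person input.

```python
# Hypothetical sketch: mapping recognized human gestures to discrete
# robot commands, then resolving input from multiple people.

# Assumed vocabulary of gestures a perception system has already
# recognized, mapped to discrete robot commands.
GESTURE_COMMANDS = {
    "wave": "approach",
    "palm_out": "stop",
    "beckon": "follow",
}

def parse_gestures(gestures):
    """Turn a sequence of recognized gestures into robot commands,
    ignoring anything outside the known vocabulary."""
    return [GESTURE_COMMANDS[g] for g in gestures if g in GESTURE_COMMANDS]

def resolve(gesture_streams):
    """Combine gesture sequences from several people, in time order,
    and let the most recent recognized gesture decide the command."""
    commands = []
    for person, gestures in gesture_streams:
        commands.extend(parse_gestures(gestures))
    return commands[-1] if commands else None
```

Even this toy version surfaces the choreographic questions: which motions count as meaningful, and whose gesture the robot should privilege when several people move at once.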
I believe the intersection between robotics and dance will continue to expand as robots move out of the factory and into the general public. As a result of the use of robots in performances and the increasing number of interdisciplinary research practitioners, this interlinking is formalizing into a field called choreorobotics or choreobotics. A course on choreorobotics will be offered at Brown University next spring; academic conferences like the International Conference on Movement and Computing (MOCO) bring together practicing movement artists and academics from computer science and engineering; and roboticists increasingly use the term “choreography” to describe building movement sequences for robots. Just as the personal computing revolution instigated intersections between computing and other fields like graphic design and psychology, personal robotics will do the same. I am not sure how soon my dad will have a robot in his house, but I believe that when it arrives, it will be imbued with dance knowledge.
This is an opinion and analysis article; the views expressed by the author or authors are not necessarily those of Scientific American.