As the evolution of the humanoid robot progresses, we are seeing new permutations that challenge our notion of humanity and the ethical values we currently hold. Furthermore, as robotics and biology continue to converge on the way toward The Singularity, studies into the dynamics of a future robot/human society become ever more complex.
Researchers continue to pursue better ways to imbue artificial intelligence with characteristics that can mimic (and compete with) human learning and understanding. We have seen this to a small degree with algorithms such as LEVAN – Learning EVerything About ANything.
Now, a French research team has developed the world’s first open-source, 3D-printed “baby” robot which is being studied for its ability to potentially evolve along multiple paths – some of which, they say, could offer a better understanding of how humans have become curious and inventive beings.
Poppy is the first humanoid robot of its kind: its body was created entirely with a 3D printer. By utilizing the latest 3D-printing tech, researchers were able to cut costs by a third. Early testing focused on human-like locomotion.
To begin with, it has an articulated spine with five motors – almost unheard of in robots of this size, but one of the ongoing research topics at INRIA Flowers since its first humanoid, ACROBAN, several years ago. The spine not only allows Poppy to move more naturally, but helps to balance the robot by adjusting its posture. The added flexibility also helps when physically interacting with the robot, such as guiding it by its hands, which is currently required to help the robot walk. (Source)
The focus now is shifting toward cognition, especially as the cost of physical experimentation declines. As Oudeyer illustrates, Poppy offers a platform for experimenting with how humans perceive the world from our beginnings – with emotions, learning, and curiosity that involves spontaneous exploration. Through trial and error, we become knowledgeable about what works, what doesn’t work, and what is theoretically possible. We also encounter others of our species with whom we share our findings and challenges.
In this sense, much of what Oudeyer describes about current research shares goals with RoboEarth – a “Wikipedia for robots” – which enables robots to communicate with one another by uploading and downloading their challenges and solutions so that their colleagues can learn from their experiences. From knowledge and further experimentation springs creativity and inventiveness. Oudeyer believes that we can ultimately learn a lot about human child development and the journey of discovery by observing its echoes in robotic learning – and in the novel approaches that robots devise.
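The upload/download cycle described above can be pictured as a minimal shared knowledge store. The following is only an illustrative sketch – the class and method names are hypothetical, not the actual RoboEarth API:

```python
# Minimal sketch of a RoboEarth-style shared knowledge base.
# All names here are hypothetical illustrations, not the real RoboEarth API.

class SharedKnowledgeBase:
    """A 'Wikipedia for robots': challenges mapped to known solutions."""

    def __init__(self):
        self._solutions = {}  # challenge -> solution

    def upload(self, challenge, solution):
        """A robot shares a solution it discovered through trial and error."""
        self._solutions[challenge] = solution

    def download(self, challenge):
        """Another robot retrieves a colleague's solution, if one exists."""
        return self._solutions.get(challenge)


kb = SharedKnowledgeBase()
kb.upload("open sliding door", "push handle left, then pull")
# A second robot facing the same challenge benefits from the first one's experience:
print(kb.download("open sliding door"))  # push handle left, then pull
```

The point of the design is simply that experience gained by one robot becomes immediately queryable by every other robot connected to the same store.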
One cautionary aspect should be noted, however, with Oudeyer’s statement that robots are “learning languages that cannot be predicted by programmers” and that robots are capable of spontaneously learning new languages from repeated runs of the very same experiment. Many warnings have been voiced lately regarding the rise of killer robots and a Superintelligence that may out-evolve its creator, deciding that humans are a threat to its own existence. While Oudeyer’s lecture is certainly thought provoking, it appears naive in its lack of concern for potential ethical challenges.
In fact, a new study has discovered that the current evolution of robotics/A.I. is not yet up to ethical tasks. While the U.S. military has undertaken the mission to create “moral robots,” some interesting discoveries arose from an experiment in which a robot was designed to save human lives. It wound up being paralyzed by the decision-making process:
The robot was programmed to save humans whenever possible, and all was fine, says roboticist Alan Winfield, at least to begin with.
“We introduced a third robot – acting as a second proxy human. So now our ethical robot would face a dilemma – which one should it rescue?” says Winfield. The problem isn’t – thankfully – that robots are enemies of humankind, but that the robot tried too hard to save lives. Three times out of 33, the robot managed, through a cunning series of lunges, to save both. The other times, it appeared as if the robot couldn’t decide.
“The problem is that the Asimov robot sometimes dithers,” says Winfield. “It notices one human robot, starts toward it but then almost immediately notices the other. It changes its mind. And the time lost dithering means the Asimov robot cannot prevent either robot from falling into the hole.”
“It was a bit unexpected,” Winfield says. “There was clearly time to save at least one robot, but it just left them half the time. It stood there and failed to rescue either.”
Winfield says that he “did not expect” to make a robot capable of acting ethically, but after he shared his paper with philosophers and ethicists, one came back saying the robot was acting with a sort of “ethics”.
“As ever when you do experiments, the most interesting results are often the ones you don’t expect,” says Winfield. “We did not set out to build an ethical robot. We were studying the idea of robots with internal models of the outside world. We just added an ethical decision-making layer to the logic: we call it the Consequence Engine. That’s the bit that makes the robot act ‘ethically’.” (Source)
The above seems to indicate that, at the very least, scientists are pursuing a potentially catastrophic path and hoping to figure it out as they go along. If history has taught us anything about scientific endeavors – especially ones in which the military plays a central role – the consequences can be long-lasting and often irreversible.
Nuclear power, GMOs, pharmaceuticals, vaccines, and an ever-expanding array of laboratory-produced chemicals released upon unwitting populations the world over should give pause to rhetoric of a guaranteed Utopia where we seamlessly merge with machines on our path of evolution toward our true potential.
Perhaps we still have quite a ways to go toward self-discovery before handing over the task to something that might not understand us at all.