Your questions answered: humanoid robots

Our expert panel assesses some of the challenges, potential and dangers of building robots that look like humans.  

Honda’s Asimo robot was built to capture the public imagination and show off the company’s technical prowess.

Even the most sophisticated robots built in the 20th century failed to really live up to the excitement of the humanoid devices seen in science fiction. But now androids, or at least human-like robots, are being developed for our homes and even for military applications such as search and rescue.

For the latest in our series of reader Q&As, we put your questions on the challenges, potential and dangers of humanoid robots to an expert panel including:

  • David Bisset, autonomous systems consultant and former head of robotics research for Dyson;

  • Marc Raibert, founder of Boston Dynamics (now owned by Google), the firm behind the humanoid ‘Atlas’ robot;

  • Noel Sharkey, professor of artificial intelligence and robotics at Sheffield University;

  • Rich Walker, director of Shadow Robot Company, the UK firm that produces robotic hands.

Which applications are currently driving the development of humanoid robot technology?

Noel Sharkey: Despite a lot of talk and hand waving, applications have never really been the driver for humanoid robot technology. Right from the beginning, when Westinghouse launched its series of humanoid robots from Televox (1927) to Elektro (1939), it was to capture the public imagination and to show off the company’s technical prowess. The same was true of Honda’s Asimo robot. It is really only with the DARPA Robotics Challenge in 2013 [for which Atlas was built] that we have seen application-driven humanoids. We are at a point now where this is becoming more possible.

David Bisset: The primary role of humanoid robots, to date, is to create a pastiche of human physical behaviour that makes us pay attention and be amused. Humanoid robots are hard to build and expensive: there are much cheaper solutions, such as wheels, that exploit the advantages of the materials and technologies we already build things with. The humanoid form may be optimal for biological mechanisms (muscles) but that does not mean it is optimal when technology (electric motors) is used to solve a problem like moving round a house.

The US Department of Defense is running a competition using Boston Dynamics’ Atlas robot as a search and rescue tool.

What advantages would robots with human-like appearances and capabilities have, and what tasks might they be particularly suited to in the future?

Marc Raibert: We got started developing robots with a human form to test protective clothing that would be worn by humans. So the robot had to be the size and shape of a person, and had to move like a person. In the case of the Atlas robot, we are looking at situations where robots are used in environments designed for people. So the robot has to fit where people fit and has to be able to move through environments designed for human locomotion.

DB: Cartoon human-like forms are a good way of engaging people; they are less threatening and we have lower expectations when interacting with them. On the other hand, having a human-like form still brings interaction expectations that current robots can’t meet: we expect to be able to talk to it and be understood in a way that we would not expect when interacting with a box on wheels. The next generation of robots will be physically interactive, and human-like interaction will be important. When the technology to make smooth, intuitive, human-like interaction exists, even for limited interactions such as in a fast-food outlet, then it will be suitable to use humanoid forms to aid interaction. Until then they will just create frustration.

How much of a problem is the ‘uncanny valley’ effect (when people dislike robots that appear almost, but not exactly, human) and what, in your view, is the best way around this problem?

Rich Walker: Don’t try and build replicants — problem avoided. It’s the biggest challenge there seems to be in prosthetics, where making the prosthesis close but not close enough produces something really disturbing. Many people with prosthetics prefer to have something elegant that doesn’t pretend, rather than to have something in the uncanny valley. Most of the time, robots don’t need to try and have human-like faces.

’Many people would prefer not to be fooled into thinking that a robot is a person unless they are going to have sex with it’

Noel Sharkey

DB: The uncanny valley effect is an advantage: it stops robots from appearing to be human and that is potentially a good thing. Our senses are tuned to the fine nuances of social interaction because it allows us to interpret what the people around us are doing or, more importantly, intending to do. What we need are functional tools that do what we want, when we want. In fact, we will want to keep a robot looking like a robot so that we can preserve our social status. There are very limited circumstances where precise human mimicry is of any functional value, except maybe where we are trying to deceive another person. For this reason many in the robotics community believe it is unethical to create products based on accurate human mimics.

NS: The biggest problem with the uncanny valley, for me, is the cold, dead eyes of the psycho killer. In general, I do not think that the uncanny valley is a problem for humanoid robots. The human look-alike is really something that is important for the Japanese. But it is really cosmetic, and many people would prefer not to be fooled into thinking that a robot is a person unless they are going to have sex with it or use it as a companion.

Industrial robots tend to be separated from people to avoid risk of injury. What principles and technologies would you need to create humanoid robots that can interact with people safely?

RW: Lots of good force sensing at the joints, softer structures, precise location of humans around the hardware. Note that many of the tasks are not dangerous because of the robot, but dangerous because tasks such as welding, moving 50kg objects at high speed, or high-speed cutting are inherently hazardous.

DB: It is possible to sense the space around the robot so that the motion of people in its workspace can be detected. The robot can then react by changing its path to avoid contact. The problem with this approach comes when a robot needs to exert a strong force to carry out a task but must not direct that force harmfully towards a person who may be part of the task. This requires a very high level of understanding, on the part of the robot, of the possible actions and motions of people. In a constrained industrial setting this may be easy, but when caring for an elderly person it will be much harder. For example, a robot may have to lift a person out of bed but must be able to respond if they roll over while being lifted, so as not to hurt them.
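The workspace-monitoring approach Bisset describes, slowing and then stopping as a person approaches, can be sketched as a simple speed-scaling rule. This is a minimal illustration only; the thresholds and function names are assumptions, not drawn from any real robot controller:

```python
# Illustrative sketch of speed-and-separation monitoring: scale the robot's
# commanded speed by the distance to the nearest detected person, halting
# entirely inside a protective zone. Thresholds are invented for the example.

STOP_DISTANCE_M = 0.3   # inside this radius, halt all motion
SLOW_DISTANCE_M = 1.5   # inside this radius, scale speed down linearly

def speed_scale(person_distances_m):
    """Return a factor in [0, 1] to multiply the commanded speed by."""
    if not person_distances_m:
        return 1.0                      # no one detected: full speed
    nearest = min(person_distances_m)
    if nearest <= STOP_DISTANCE_M:
        return 0.0                      # person in protective zone: stop
    if nearest >= SLOW_DISTANCE_M:
        return 1.0
    # linear ramp between the stop and slow radii
    return (nearest - STOP_DISTANCE_M) / (SLOW_DISTANCE_M - STOP_DISTANCE_M)

print(speed_scale([2.0]))   # clear workspace: 1.0
print(speed_scale([0.9]))   # person approaching: 0.5
print(speed_scale([0.2]))   # person too close: 0.0
```

A real system would combine this with the force limiting Walker mentions, since stopping motion alone does not bound the force a robot can still exert while in contact.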

The iCub robot was built by the Italian Institute of Technology as a research platform for studying human development.

What unexpected, emergent AI behaviours have you personally witnessed and what measures can you take to ensure such behaviours don’t create a danger to humans?

NS: I can’t think of any emergent AI behaviours that have knocked my socks off. Generally, the real achievements in AI have been gradual and incremental. There have been some wondrous applications like Deep Blue and Watson but I would not call these ‘emergent’.

What are the other dangers we need to be aware of related to the development of humanoid robots?

MR: You seem to be hinting that you think there are special dangers associated with humanoid robots. Robots are just like any other technology: car, aircraft, computer, laser and so on: they do some things; they don’t do other things. The people who operate or interact with them need to take various precautions to stay safe. Robots are no different than any other kind of machine.

NS: I would say that humanoid robots in themselves are unlikely to be a danger. But like all tools, they can be used for good and for bad, and they can also be used stupidly. As an example, I think we need to be very cautious about uses in the care professions. Robots can be great enablers and can empower the vulnerable. But I have written quite a lot about how they could be used in ways that infringe on human rights. One of my concerns is that if we get it wrong, it could set back research funding for a generation.

DB: Human gullibility is the greatest danger. If it becomes possible to build humanoids that are cute and accepted without question then they will have the potential to become dangerous because we will be less inclined to believe they pose a threat to our privacy and wellbeing. In this there are particular concerns for vulnerable people and children engaging with lifelike humanoid robots.

Shadow Robot Company’s Dextrous Hand has 24 joints that allow it to move almost exactly like a human hand.

What will the next generation of humanoid robots look like and be able to do that the current generation can’t, and what are the technology breakthroughs required to make this happen?

RW: The key next steps, for me, are about onboard power and ability to develop good interactions with the world. Better hands and better batteries. We’re working on one, and the phone people are working on the other!

NS: I would imagine that the next generation of humanoid robots will look pretty much like the current generation only a little less clunky. We still need to do a lot of mechanical work on dextrous manipulation and although humanoids can walk on flat surfaces without falling over, a lot more work on balance needs to be done. A lot of work still needs to be done on non-GPS localisation and mapping. Obviously, advancements in AI would help a lot.

DB: The largest single breakthrough for humanoids will be the development of high-energy-density power packs that allow them to operate all day. The second major breakthrough will be in developing control systems that are able to maintain the absolute position of a finger when the floor is moving under the robot’s feet.


What needs to change in robotic technology to make robots widely affordable in the same way the spread of mobile computing has been driven by cost reduction in processors and touch screens?

RW: Cost reduction in processors and touch screens came about by very large volume production. Sales of robots are in the thousands, not millions. Once there are markets that can justify mass manufacture, then the cost should come down dramatically, but to reach those markets a lot of software work is still needed.

DB: The more you make, the cheaper they are. So finding markets where you can sell large numbers of robots will make them cheaper. Making robots from standard components will also make them cheaper, because then you can make many of one part for different markets, increasing the volume and lowering costs. Finding cheaper ways to make robot parts will also help to lower the minimum cost. There is no magic in this: the car industry has been delivering mass-produced high-technology products at ever-lower costs for decades.
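The volume effect Bisset and Walker both point to can be shown with a simple amortisation sum: fixed costs (tooling, design, software) spread over the production run, so unit cost falls towards the cost of parts as volume grows. The figures below are invented purely for illustration:

```python
# Minimal cost model: average unit cost = variable cost per unit plus the
# fixed cost amortised over the production volume. All numbers hypothetical.

def unit_cost(fixed_cost, variable_cost, volume):
    """Average cost per unit for a given production volume."""
    return variable_cost + fixed_cost / volume

# e.g. 10m in tooling and development, 2,000 of parts per robot
for volume in (1_000, 100_000, 1_000_000):
    print(volume, unit_cost(10_000_000, 2_000, volume))
# 1,000 units -> 12,000 each; 1,000,000 units -> 2,010 each
```

At a thousand units the fixed costs dominate; at a million they all but vanish, which is why both panellists tie affordability to finding mass markets rather than to any single technical breakthrough.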

The question is whether we want, need or can afford truly general-purpose robots

Rich Walker

How long do you think it will be before we can build robots that can (a) replicate human movements and physical capabilities; and (b) interact with sufficient intelligence to be acceptable and useful to people?

DB: It is already possible to mostly replicate the physical properties of human limbs. However, we cannot yet replicate human movement. Human movement is bound up with human sensing, human awareness and human thinking. So even something as simple as passing an object between a robot and a human cannot currently be done like a human. Robots can already be made useful, provided we can make the interaction intuitive, and some systems are beginning to do this. Acceptability will depend on the application and its implementation. If a robot can help an elderly person get out of a chair, it only needs to understand that particular type of interaction. Then, to that elderly person, it will be acceptable and useful. But if that robot uses the information it gathers about that elderly person to promote a particular painkiller or medical treatment, that will be seen as unacceptable and unethical.


RW: Both have already happened: for example, the Google self-driving car. The question is whether we want, need or can afford truly general-purpose robots (a Kryten [from TV show Red Dwarf], for instance) or whether specialised systems exploiting robotic technologies to solve broad problems cost-effectively will continue to be the right approach. And the answer to that probably depends on how much AI is available.