Lewis the robot snapper

Researchers at the Computer Science Department of Washington University in St. Louis have developed a human-sized mobile robot that wanders about taking pictures of people.

Called Lewis, the robot finds faces in a camera image by identifying contiguous regions of skin-coloured pixels. Human skin tones, regardless of complexion, cluster within a compact region of the YUV colour space, because skin varies far more in brightness than in chrominance. By looking for contiguous areas of pixels whose colours fall in this region, Lewis can find skin-coloured blobs in the image. Lewis then discards candidate regions that are the wrong shape (faces are roughly elliptical), the wrong size (skin-coloured walls, for example, are too large), or at the wrong height.
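A minimal sketch of that detection step follows, assuming 8-bit YUV input and using SciPy’s connected-component labelling. The chrominance thresholds, area bounds, and aspect-ratio limits are illustrative guesses, not the values Lewis actually uses:

```python
import numpy as np
from scipy import ndimage

# Hypothetical chrominance bounds for skin; Lewis's real thresholds
# are not published in this article.
U_RANGE = (100, 130)
V_RANGE = (135, 175)

def find_face_candidates(yuv, min_area=400, max_area=40_000):
    """Return bounding boxes (as slice pairs) of skin-coloured blobs.

    yuv: H x W x 3 uint8 array holding the Y, U and V channels.
    """
    u, v = yuv[..., 1], yuv[..., 2]
    # Threshold on chrominance only: skin clusters in U-V regardless
    # of how bright (Y) the pixel is.
    mask = ((u >= U_RANGE[0]) & (u <= U_RANGE[1]) &
            (v >= V_RANGE[0]) & (v <= V_RANGE[1]))
    # Group contiguous skin-coloured pixels into labelled blobs.
    labels, _ = ndimage.label(mask)
    candidates = []
    for box in ndimage.find_objects(labels):
        h = box[0].stop - box[0].start
        w = box[1].stop - box[1].start
        # Reject blobs that are the wrong size (walls, hands) or the
        # wrong shape: a face is roughly elliptical, somewhat taller
        # than it is wide.
        if not (min_area <= h * w <= max_area):
            continue
        if not (0.8 <= h / w <= 2.0):
            continue
        candidates.append(box)
    return candidates
```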

An on-board laser range-finder gives Lewis a good first guess for all of the above, since it ‘knows’ how far away the closest person is, what the camera’s tilt angle is, and approximately how tall people are. Lewis currently assumes that people are between 4 and 7 feet tall, so children may need to stand on a chair!
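Under a simple pinhole-camera model, those range, tilt and height cues translate into checks like the sketch below. The focal length, camera height, and typical face size are hypothetical parameters, not the robot’s actual calibration:

```python
import math

FOCAL_PX = 600.0     # camera focal length in pixels (hypothetical)
CAM_HEIGHT_M = 1.2   # camera height above the floor (hypothetical)
FACE_H_M = 0.25      # physical height of a typical face, ~25 cm

def plausible_face(blob_h_px, blob_top_row, image_rows, range_m,
                   tilt_rad, min_person=1.22, max_person=2.13):
    """Check a candidate blob against the laser range reading.

    range_m: distance to the nearest person from the range-finder.
    min/max_person: Lewis's 4 ft and 7 ft bounds, in metres.
    tilt_rad: camera tilt, positive when looking up.
    """
    # A face at this range should project to roughly this many pixels.
    expected_px = FOCAL_PX * FACE_H_M / range_m
    if not (0.5 * expected_px <= blob_h_px <= 2.0 * expected_px):
        return False  # wrong size for a face at the measured range
    # Back-project the top of the blob to a height above the floor.
    row_offset = image_rows / 2.0 - blob_top_row   # +ve above centre
    ray_angle = tilt_rad + math.atan2(row_offset, FOCAL_PX)
    head_height = CAM_HEIGHT_M + range_m * math.tan(ray_angle)
    return min_person <= head_height <= max_person
```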

Once Lewis has identified candidate faces in an image, that information is correlated with readings from the on-board laser range-finder to determine the likely positions of the owners of those faces. The robot ‘assumes’ that each face has a corresponding pair of legs, and that these legs are more or less directly beneath the face.
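One plausible way to implement that correlation is to segment the planar laser scan into leg-sized clusters and accept a face only when such a cluster sits at roughly the same bearing. The segmentation threshold and leg widths below are assumptions:

```python
import numpy as np

def pair_faces_with_legs(face_bearings, scan_angles, scan_ranges,
                         max_bearing_err=0.1, leg_w=(0.05, 0.3)):
    """Match each face bearing (rad) to a leg-like laser cluster.

    scan_angles, scan_ranges: numpy arrays holding the planar scan.
    Returns a (bearing, range) estimate per face, or None.
    """
    # Split the scan wherever the range jumps by more than 10 cm,
    # then keep segments whose physical width is leg-sized.
    breaks = np.where(np.abs(np.diff(scan_ranges)) > 0.1)[0] + 1
    legs = []
    for seg in np.split(np.arange(len(scan_ranges)), breaks):
        if len(seg) < 2:
            continue
        r = scan_ranges[seg].mean()
        width = r * (scan_angles[seg[-1]] - scan_angles[seg[0]])
        if leg_w[0] <= width <= leg_w[1]:
            legs.append((scan_angles[seg].mean(), r))
    # A face is kept only if a leg cluster lies more or less
    # directly beneath it, i.e. at nearly the same bearing.
    matches = []
    for b in face_bearings:
        near = [p for p in legs if abs(p[0] - b) < max_bearing_err]
        matches.append(min(near, key=lambda p: abs(p[0] - b))
                       if near else None)
    return matches
```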

One rule of portraiture is the rule of thirds. It suggests that if the picture is divided into thirds, both horizontally and vertically, the main points of interest should sit where the dividing lines cross. Lewis has been programmed to take photos with faces at these strategic points.
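A simple way to score a framing against this rule is to measure how far each face sits from its nearest intersection (‘power point’). This is a sketch of the idea rather than Lewis’s published metric:

```python
def thirds_score(face_centres, width, height):
    """Lower is better: total normalised distance from each face
    centre to its nearest rule-of-thirds power point."""
    points = [(width * i / 3, height * j / 3)
              for i in (1, 2) for j in (1, 2)]
    diag = (width ** 2 + height ** 2) ** 0.5
    total = 0.0
    for fx, fy in face_centres:
        d = min(((fx - px) ** 2 + (fy - py) ** 2) ** 0.5
                for px, py in points)
        total += d / diag
    return total
```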

In photography, it is also important to avoid unnecessary empty space and to maximise the amount of important information in the picture. This sometimes conflicts with the rule of thirds, since placing faces at those points may leave too sparse an image. Lewis must balance these composition goals against one another to find the optimal photograph.
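That trade-off can be expressed as a weighted cost combining the thirds_score sketch above with a penalty for empty space. The weights and the empty-space proxy here are assumptions, not the researchers’ actual formulation:

```python
def composition_cost(face_boxes, width, height,
                     w_thirds=1.0, w_empty=1.0):
    """Lower is better. face_boxes: (x0, y0, x1, y1) per face."""
    centres = [((x0 + x1) / 2, (y0 + y1) / 2)
               for x0, y0, x1, y1 in face_boxes]
    if face_boxes:
        # Crude proxy for wasted space: the fraction of the frame
        # outside the bounding box of all the faces.
        hull_w = max(b[2] for b in face_boxes) - min(b[0] for b in face_boxes)
        hull_h = max(b[3] for b in face_boxes) - min(b[1] for b in face_boxes)
        empty = 1.0 - (hull_w * hull_h) / (width * height)
    else:
        empty = 1.0
    return (w_thirds * thirds_score(centres, width, height)
            + w_empty * empty)
```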

Lewis has also been given the freedom to be ‘daring’ at times. By occasionally breaking the rules of photography and getting feedback about the results, Lewis can learn when and where it’s appropriate to deviate from convention. This allows the robot to refine its composition sense beyond simple pre-programmed rules.
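One standard way to get that behaviour is epsilon-greedy exploration: usually choose the framing the rules favour, but occasionally pick one at random and learn from the feedback it earns. Whether Lewis does exactly this is not stated, so treat the sketch as an illustration of the idea:

```python
import random

def choose_framing(candidates, costs, epsilon=0.1):
    """Mostly exploit the lowest-cost framing; with probability
    epsilon, be 'daring' and try an arbitrary one instead."""
    if random.random() < epsilon:
        return random.choice(candidates)           # break the rules
    return candidates[costs.index(min(costs))]     # follow the rules
```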

Lewis uses standard mobile-robot path-planning algorithms to plan trajectories to each of the candidate picture-taking positions. An important component of path planning is staying within the bounds of the installation. Lewis combines internal odometry with a visual landmark to determine its location, which is then compared to a supplied map (the rectangular area of the room). The landmark is a pair of lit, coloured spheres mounted on top of a post within the installation. From the spheres’ apparent size, their bearing angle, and the relative proportions of the two colours in the image, Lewis can reliably determine its position.
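Size-plus-bearing localisation from a single known landmark can be sketched as follows, again assuming a pinhole camera; the landmark position, sphere diameter, and focal length are placeholder values:

```python
import math

LANDMARK_XY = (2.0, 3.0)  # landmark position on the map (placeholder)
SPHERE_D_M = 0.20         # physical sphere diameter (hypothetical)
FOCAL_PX = 600.0          # camera focal length in pixels (hypothetical)

def locate_robot(sphere_px_diam, sphere_px_col, image_cols, heading_rad):
    """Estimate the robot's map position from one landmark sighting.

    heading_rad: robot heading from odometry, counter-clockwise.
    """
    # Range from apparent size (pinhole model): pixels = f * D / r.
    r = FOCAL_PX * SPHERE_D_M / sphere_px_diam
    # Bearing of the landmark relative to the optical axis
    # (positive when the landmark appears left of centre).
    bearing = math.atan2(image_cols / 2.0 - sphere_px_col, FOCAL_PX)
    world_angle = heading_rad + bearing
    # The robot sits at range r from the landmark, looking toward it.
    x = LANDMARK_XY[0] - r * math.cos(world_angle)
    y = LANDMARK_XY[1] - r * math.sin(world_angle)
    return x, y
```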

Some image-analysis routines are too computationally intensive to run in real time, but post-processing software gives Lewis the opportunity to re-evaluate the photos it has taken.

In the future, the robot’s developers aim to give Lewis control of the lighting in a given space. Once Lewis finds a good composition, the lights will automatically adjust to remove any detected glare or shadows.