Researchers at Carnegie Mellon University have developed a method for tracking the locations of multiple individuals in complex, indoor settings using a network of video cameras.
The method was able to automatically follow the movements of 13 people within a nursing home, even though individuals sometimes moved out of view of the cameras. To track them, the researchers made use of multiple cues (apparel colour, person detection, trajectory and facial recognition) from the video feed.
The Carnegie Mellon team tested their technique with residents and employees in a nursing facility where camera views were compromised by long hallways, doorways, people mingling in the hallways, variations in lighting and too few cameras to provide comprehensive, overlapping coverage.
The performance of the Carnegie Mellon algorithm is claimed to have significantly improved on two of the leading algorithms in multi-camera, multi-object tracking. It located individuals within one metre of their actual position 88 per cent of the time, compared with 35 per cent and 56 per cent for the other algorithms.
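The article does not spell out how the within-one-metre figure is computed, but a metric of this kind is typically the fraction of frames in which a tracker's estimated position falls within the radius of the ground-truth position. A minimal sketch, assuming 2-D floor-plane coordinates in metres (the function name and toy data are illustrative, not from the paper):

```python
import math

def within_radius_accuracy(predicted, ground_truth, radius_m=1.0):
    """Fraction of frames where the predicted (x, y) position lies
    within radius_m metres of the ground-truth (x, y) position."""
    assert len(predicted) == len(ground_truth)
    hits = sum(
        1
        for (px, py), (gx, gy) in zip(predicted, ground_truth)
        if math.hypot(px - gx, py - gy) <= radius_m
    )
    return hits / len(predicted)

# Toy example: three of four estimates fall within one metre.
pred = [(0.0, 0.0), (2.0, 0.0), (5.0, 5.0), (1.0, 1.0)]
truth = [(0.5, 0.0), (2.0, 0.3), (9.0, 5.0), (1.2, 1.4)]
print(within_radius_accuracy(pred, truth))  # 0.75
```

The reported 88 per cent would then be this fraction averaged over all tracked individuals and frames.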
The researchers – Alexander Hauptmann, principal systems scientist in the Computer Science Department (CSD); Shoou-I Yu, a Ph.D. student in the Language Technologies Institute; and Yi Yang, a CSD post-doctoral researcher – will present their findings June 27 at the Computer Vision and Pattern Recognition Conference in Portland, Oregon.
The Carnegie Mellon researchers developed their tracking technique as part of an effort to monitor the health of nursing home residents.
‘The goal is not to be Big Brother, but to alert the caregivers of subtle changes in activity levels or behaviours that indicate a change of health status,’ Hauptmann said in a statement.
The CMU work on monitoring nursing home residents began in 2005 as part of a National Institutes of Health-sponsored project called CareMedia.
The researchers found that tracking based on colour of clothing proved difficult because the same colour apparel can appear different to cameras in different locations, depending on variations in lighting. Likewise, a camera's view of an individual can often be blocked by other people passing in hallways or by furniture, and individuals sometimes enter rooms or other areas not covered by cameras, so the system must regularly re-identify them.
Face detection helps in re-identifying individuals on different cameras, but Yang noted that faces can be recognised in less than 10 per cent of the video frames. To overcome this the researchers developed mathematical models that enabled them to combine information, such as appearance, facial recognition and motion trajectories.
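The article does not describe the researchers' actual mathematical models, but the idea of combining cues that are not always available can be illustrated with a simple weighted-score fusion. In this sketch the cue names, weights and scores are all hypothetical; the key point is that a cue missing from a frame (such as a face, recognisable in under 10 per cent of frames) is skipped and the remaining weights are renormalised:

```python
def fuse_cue_scores(cue_scores, weights):
    """Combine per-cue similarity scores (each in [0, 1]) into one
    match score via a weighted average; cues reported as None are
    skipped and the remaining weights renormalised."""
    num, den = 0.0, 0.0
    for cue, score in cue_scores.items():
        if score is None:
            continue  # cue unavailable in this frame, e.g. no face visible
        w = weights[cue]
        num += w * score
        den += w
    return num / den if den else 0.0

# Hypothetical weights favouring facial recognition when available.
weights = {"face": 0.5, "colour": 0.3, "trajectory": 0.2}

# Frame where a face was detected:
print(fuse_cue_scores({"face": 0.9, "colour": 0.6, "trajectory": 0.7}, weights))

# Frame with no usable face, fused from the remaining cues:
print(fuse_cue_scores({"face": None, "colour": 0.6, "trajectory": 0.7}, weights))
```

This is only a toy stand-in for the researchers' models, but it shows why a strong, intermittent cue such as facial recognition can dominate overall accuracy while weaker cues keep the track alive between face sightings.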
Using all of the information is key to the tracking process, but Yu said facial recognition proved to be the greatest help. When the researchers removed facial recognition from the identifying parameters, on-track performance on the nursing home data dropped from 88 per cent to 58 per cent, little better than one of the existing tracking algorithms.
Further work will be necessary to extend the technique over longer periods of time and to enable real-time monitoring.
The researchers also are looking at additional ways to use video to monitor resident activity while preserving privacy, such as by only recording the outlines of people together with distance information from depth cameras similar to the Microsoft Kinect.