3D motion captured without markers

Carnegie Mellon University researchers have developed techniques for combining the views of 480 video cameras mounted in a two-storey geodesic dome to perform large-scale 3D motion reconstruction.

Though the research was performed in a specialised, heavily instrumented video laboratory, Yaser Sheikh, an assistant research professor of robotics who led the research team, said the techniques might eventually be applied to large-scale reconstructions of sporting events or performances captured by hundreds of cameras wielded by spectators.

The video lab, called the Panoptic Studio, can also be used to capture the fine details of people interacting.

In contrast to most previous work, which has typically relied on around 10 to 20 video feeds, the Carnegie Mellon researchers were less concerned about gaps in the data, as their camera system can track 100,000 points at a time. Instead, they need to work out which cameras can see each of those points and select only those views for the reconstruction.

‘At some point, extra camera views just become “noise”,’ said Hanbyul Joo, a Ph.D. student in the Robotics Institute. ‘To fully leverage hundreds of cameras, we need to figure out which cameras can see each target point at any given time.’
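The article does not spell out how the researchers decide which cameras can see a given point. As a rough illustration of the general idea of selecting visible views before reconstructing a point, here is a minimal Python sketch: a hypothetical frustum-visibility check followed by standard linear (DLT) triangulation. The function names, the image-size assumption and the visibility test are illustrative only and are not CMU's code.

```python
import numpy as np

def visible_views(point_3d, cameras, image_size=(1920, 1080)):
    """Return indices of cameras that can plausibly see the point:
    the point lies in front of the camera and projects inside the image.
    (A simplified stand-in for a real visibility test, which would also
    need to handle occlusion.)"""
    w, h = image_size
    visible = []
    for i, P in enumerate(cameras):          # P is a 3x4 projection matrix
        x = P @ np.append(point_3d, 1.0)     # project to homogeneous image coords
        if x[2] <= 0:                        # point is behind the camera
            continue
        u, v = x[0] / x[2], x[1] / x[2]
        if 0 <= u < w and 0 <= v < h:        # projects inside the image frame
            visible.append(i)
    return visible

def triangulate(observations, cameras):
    """Linear (DLT) triangulation of one 3D point from its 2D observations
    in the selected camera views only."""
    A = []
    for (u, v), P in zip(observations, cameras):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]                      # de-homogenise
```

The point of selecting views first is that cameras which cannot actually see a target point contribute only noise to the reconstruction, which is the problem Joo describes when scaling from tens of cameras to hundreds.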
