
New system combines smartphone videos to create 4D visualizations

Approach requires neither studio nor specialized cameras

Date:
July 1, 2020
Source:
Carnegie Mellon University
Summary:
Researchers have demonstrated that they can combine iPhone videos shot 'in the wild' by separate cameras to create 4D visualizations that allow viewers to watch action from various angles, or even erase people or objects that temporarily block sight lines.

FULL STORY

Researchers at Carnegie Mellon University have demonstrated that they can combine iPhone videos shot "in the wild" by separate cameras to create 4D visualizations that allow viewers to watch action from various angles, or even erase people or objects that temporarily block sight lines.

Imagine a visualization of a wedding reception, where dancers can be seen from as many angles as there were cameras, and the tipsy guest who walked in front of the bridal party is nowhere to be seen.

The videos can be shot independently from a variety of vantage points, as might occur at a wedding or birthday celebration, said Aayush Bansal, a Ph.D. student in CMU's Robotics Institute. It is also possible to record actors in one setting and then insert them into another, he added.

"We are only limited by the number of cameras," Bansal said, with no upper limit on how many video feeds can be used.

Bansal and his colleagues presented their 4D visualization method at the Computer Vision and Pattern Recognition virtual conference last month.

"Virtualized reality" is nothing new, but in the past it has been restricted to studio setups, such as CMU's Panoptic Studio, which boasts more than 500 video cameras embedded in its geodesic walls. Fusing visual information of real-world scenes shot from multiple, independent, handheld cameras into a single comprehensive model that can reconstruct a dynamic 3D scene simply hasn't been possible.

Bansal and his colleagues worked around that limitation by using convolutional neural nets (CNNs), a type of deep learning program that has proven adept at analyzing visual data. They found that scene-specific CNNs could be used to compose different parts of the scene.
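As a rough illustration of what "scene-specific" means here, the Python sketch below shows the general pattern of training one small CNN per scene: encode each camera's frame, fuse the features across views, and decode a composed image. The layer sizes, the simple averaging fusion, and the class name SceneCompositor are hypothetical placeholders for this example, not the networks described in the paper.

```python
# Minimal, hypothetical sketch (PyTorch) of a scene-specific CNN that
# fuses frames from several handheld cameras into one composed view.
import torch
import torch.nn as nn

class SceneCompositor(nn.Module):
    """Toy per-scene CNN: encodes each camera's frame, averages the
    feature maps across views, and decodes the result to an RGB image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (num_cameras, 3, H, W) -- one frame per handheld camera
        feats = self.encoder(views)              # (num_cameras, 32, H, W)
        fused = feats.mean(dim=0, keepdim=True)  # naive multi-view fusion
        return self.decoder(fused)               # (1, 3, H, W)

# Example: fuse 15 synchronized 256x256 frames into one composed view.
model = SceneCompositor()
frames = torch.rand(15, 3, 256, 256)
print(model(frames).shape)  # torch.Size([1, 3, 256, 256])
```

Because the network is trained on footage of a single scene, it can memorize that scene's appearance and geometry, which is what makes composing different parts of it from arbitrary viewpoints tractable.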

The CMU researchers demonstrated their method using up to 15 iPhones to capture a variety of scenes -- dances, martial arts demonstrations and even flamingos at the National Aviary in Pittsburgh.

"The point of using iPhones was to show that anyone can use this system," Bansal said. "The world is our studio."

The method also unlocks a host of potential applications in the movie industry and consumer devices, particularly as the popularity of virtual reality headsets continues to grow.

Though the method doesn't necessarily capture scenes in full 3D detail, the system can limit playback angles so incompletely reconstructed areas are not visible and the illusion of 3D imagery is not shattered.
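One way to picture that playback guard is clamping the requested viewing angle to the arc the reconstruction actually covers. The sketch below is a hypothetical illustration; the coverage bounds and the function clamp_view_angle are invented for this example rather than taken from the system.

```python
# Hypothetical sketch of restricting playback angles so that sparsely
# reconstructed regions never come into view. The coverage bounds are
# placeholders; a real system would derive them from the reconstruction.
def clamp_view_angle(requested_deg: float,
                     min_deg: float = -60.0,
                     max_deg: float = 60.0) -> float:
    """Pull the requested viewing azimuth back inside the arc where the
    scene was reconstructed densely enough to render convincingly."""
    return max(min_deg, min(max_deg, requested_deg))

print(clamp_view_angle(75.0))   # 60.0  -- clamped to the covered arc
print(clamp_view_angle(-10.0))  # -10.0 -- already inside coverage
```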

Video: https://www.youtube.com/watch?v=quovnDPwL1k&feature=youtu.be


Story Source:

Materials provided by Carnegie Mellon University. Original written by Byron Spice. Note: Content may be edited for style and length.


Cite This Page:

Carnegie Mellon University. "New system combines smartphone videos to create 4D visualizations: Approach requires neither studio nor specialized cameras." ScienceDaily. ScienceDaily, 1 July 2020. <www.koonmotors.com/releases/2020/07/200701134244.htm>.
Carnegie Mellon University. (2020, July 1). New system combines smartphone videos to create 4D visualizations: Approach requires neither studio nor specialized cameras. ScienceDaily. Retrieved July 13, 2023 from www.koonmotors.com/releases/2020/07/200701134244.htm
Carnegie Mellon University. "New system combines smartphone videos to create 4D visualizations: Approach requires neither studio nor specialized cameras." ScienceDaily. www.koonmotors.com/releases/2020/07/200701134244.htm (accessed July 13, 2023).
