
Do as I say: Translating language into movement

Computer model aims to turn film scripts into animations

Date:
September 10, 2019
Source:
Carnegie Mellon University
Summary:
Researchers have developed a computer model that can translate text describing physical movements directly into simple computer-generated animations, a first step toward someday generating movies directly from scripts.

FULL STORY

Researchers at Carnegie Mellon University have developed a computer model that can translate text describing physical movements directly into simple computer-generated animations, a first step toward someday generating movies directly from scripts.

Scientists have made tremendous leaps in getting computers to understand natural language, as well as in generating a series of physical poses to create realistic animations. These capabilities might as well exist in separate worlds, however, because the link between natural language and physical poses has been missing.

Louis-Philippe Morency, associate professor in the Language Technologies Institute (LTI), and Chaitanya Ahuja, an LTI Ph.D. student, are working to bring those worlds together using a neural architecture they call Joint Language-to-Pose, or JL2P. The JL2P model jointly embeds sentences and physical motions, so it can learn how language is related to action, gestures and movement.
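
To make the idea of a joint embedding concrete, the sketch below shows one way a sentence encoder and a pose encoder might be trained to map matching descriptions and motion sequences to nearby points in a shared space. It is a minimal PyTorch illustration; the module names, dimensions and loss are assumptions made for exposition, not the authors' implementation.

    import torch
    import torch.nn as nn

    class SentenceEncoder(nn.Module):
        # Encodes a tokenized sentence (a sequence of word indices) into one vector.
        def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

        def forward(self, tokens):               # tokens: (batch, words)
            _, h = self.rnn(self.embed(tokens))  # h: (1, batch, hidden_dim)
            return h.squeeze(0)

    class PoseEncoder(nn.Module):
        # Encodes a sequence of skeleton poses (flattened joint coordinates) into one vector.
        def __init__(self, pose_dim=63, hidden_dim=256):  # assumed: 21 joints x 3 coordinates
            super().__init__()
            self.rnn = nn.GRU(pose_dim, hidden_dim, batch_first=True)

        def forward(self, poses):                # poses: (batch, frames, pose_dim)
            _, h = self.rnn(poses)
            return h.squeeze(0)

    # Joint training objective: pull a sentence and its matching motion toward
    # the same point in the shared space (a simple L2 loss here for brevity).
    text_enc, pose_enc = SentenceEncoder(), PoseEncoder()
    tokens = torch.randint(0, 5000, (8, 12))     # 8 toy sentences, 12 words each
    poses = torch.randn(8, 60, 63)               # 8 toy motions, 60 frames each
    loss = nn.functional.mse_loss(text_enc(tokens), pose_enc(poses))
    loss.backward()

In the full system the pose side of this shared space is decoded back into an animation; the sketch stops at the embedding itself.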

"I think we're in an early stage of this research, but from a modeling, artificial intelligence and theory perspective, it's a very exciting moment," Morency said. "Right now, we're talking about animating virtual characters. Eventually, this link between language and gestures could be applied to robots; we might be able to simply tell a personal assistant robot what we want it to do.

"We also could eventually go the other way -- using this link between language and animation so a computer could describe what is happening in a video," he added.

Ahuja will present JL2P on Sept. 19 at the International Conference on 3D Vision in Quebec City, Canada.

To create JL2P, Ahuja used a curriculum-learning approach that focuses on the model first learning short, easy sequences -- "A person walks forward" -- and then longer, harder sequences -- "A person steps forward, then turns around and steps forward again," or "A person jumps over an obstacle while running."
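
As a rough illustration of that curriculum idea, the snippet below admits progressively longer description/motion pairs at each training stage. It is plain Python with placeholder data; `train_step` is a hypothetical helper, not part of the published code.

    def curriculum_batches(pairs, stages=(5, 10, 20)):
        # pairs: list of (sentence_tokens, pose_frames) tuples.
        # stages: maximum sentence length (in words) admitted at each curriculum stage.
        for max_len in stages:
            admitted = [p for p in pairs if len(p[0]) <= max_len]
            yield max_len, admitted

    # Usage sketch: each stage re-trains on everything up to its difficulty cap.
    # for max_len, subset in curriculum_batches(dataset):
    #     for sentence, motion in subset:
    #         train_step(model, sentence, motion)   # hypothetical training step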

Verbs and adverbs describe the action and speed/acceleration of the action, while nouns and adjectives describe locations and directions. The ultimate goal is to animate complex sequences with multiple actions happening either simultaneously or in sequence, Ahuja said.

For now, the animations are for stick figures.

Complicating matters, many things happen at the same time, even in simple sequences, Morency explained.

"Synchrony between body parts is very important," Morency said. "Every time you move your legs, you also move your arms, your torso and possibly your head. The body animations need to coordinate these different components, while at the same time achieving complex actions. Bringing language narrative within this complex animation environment is both challenging and exciting. This is a path toward better understanding of speech and gestures."


Story Source:

Materials provided by Carnegie Mellon University. Note: Content may be edited for style and length.


