Carnegie Mellon University researchers have created an AI agent that translates words into physical movement. Called Joint Language-to-Pose, or JL2P, the approach combines natural language with 3D pose models. The joint embedding for pose forecasting is learned end to end with curriculum learning, a training approach that has the model master shorter pose sequences before moving on to longer, harder ones.
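To give a sense of what a length-based curriculum looks like in practice, here is a minimal, hypothetical Python sketch. It is not CMU's JL2P code: the model, data, and function names (curriculum_horizon, DummyPoseForecaster, train_step) are placeholder stand-ins used only to illustrate growing the supervised prediction horizon over training.

```python
"""Minimal sketch of curriculum learning over pose-sequence length (illustrative only)."""
import random


def curriculum_horizon(epoch, max_len, warmup_epochs=10):
    # Grow the supervised prediction horizon from 2 frames up to max_len.
    frac = min(1.0, (epoch + 1) / warmup_epochs)
    return max(2, int(frac * max_len))


class DummyPoseForecaster:
    """Placeholder for a language-to-pose model (hypothetical)."""

    def train_step(self, sentence, target_poses):
        # A real model would embed the sentence and the pose sequence in a
        # joint space and return a reconstruction loss; this returns a fake one.
        return random.random() / (len(target_poses) ** 0.5)


def train(model, dataset, epochs=20, max_len=32):
    for epoch in range(epochs):
        horizon = curriculum_horizon(epoch, max_len)
        for sentence, pose_seq in dataset:
            # Supervise only the first `horizon` frames early in training,
            # so shorter motions are learned before longer, harder ones.
            loss = model.train_step(sentence, pose_seq[:horizon])
        print(f"epoch {epoch:02d}  horizon={horizon:2d}  loss={loss:.3f}")


if __name__ == "__main__":
    # Toy dataset: each item pairs a sentence with 32 frames of 21 joint values.
    fake_data = [("a person walks forward", [[0.0] * 21] * 32) for _ in range(8)]
    train(DummyPoseForecaster(), fake_data)
```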

JL2P animations are limited to stick figures today, but the ability to translate words into human-like movement can someday help humanoid robots do physical tasks in the real world or assist creatives in animating virtual characters for things like video games or movies.

JL2P is in line with previous work that turns words into imagery, like Microsoft's ObjGAN, which sketches images and storyboards from captions, and Disney's AI that uses words in a script to create…
