What if we could communicate instantly, even though we speak two different languages? Drawing a parallel to the Babel Fish from The Hitchhiker’s Guide to the Galaxy, the mysterious creature that enables instant translation, the Meta AI team developed Seamless M4T, a multimodal model that achieves just that.

Working with Meta and agency partner Author’s Projects, I helped translate the model, its capabilities, and its demo into a website and supporting videos that turn this complex idea into something most humans can understand, all in Meta’s brand voice.

Check out the live site here.

The technology behind such a massive leap in communication is fairly complex, homing in on something called local and global prosody, as well as latency. In a nutshell, prosody covers the tonal, vocal, and dialectal influences that make our voice our own.

We led with a video that showcases the tech in an easy-to-understand, lighthearted, yet informational style, as if a good friend were explaining it to you.

Audiences could even try the demo themselves by uploading their voice and hearing it translated into different languages.

For researchers and academic audiences, we went into further depth on the models and how they work.

Meta takes safety precautions at every step of AI research and development, something that was very important to convey as this technology grows and develops.

If people wanted to learn more, they could dive deeper into the white paper and the more technical aspects of the model.
