Meta’s Make-A-Video tool builds on its Make-A-Scene text-to-image generator that was unveiled in July.
Meta has unveiled its latest creation to the world, a text-to-video generator called Make-A-Video.
The tool is billed as “a new AI system that lets people turn text prompts into brief, high-quality video clips”. Each clip is around five seconds long, has no audio and is captioned with the text prompt provided.
The tech giant is taking a step beyond the text-to-image AI generators that have been introduced recently to huge internet excitement. These include OpenAI’s DALL-E 2, which is now available to all, as well as Google’s Imagen AI model.
Meta has been getting in on the action too, revealing its text-to-image AI generator Make-A-Scene in July. The company claimed an edge over the tech being developed by competitors, since Make-A-Scene accepts not only text prompts but also rough sketches from users.
Meta revealed its new text-to-video tool yesterday (29 September) with a cutesy clip of a teddy bear painting a self-portrait on a canvas. The text prompt used was ‘a teddy bear painting a portrait’.
There were other whimsical examples too, such as a dog kitted out in a superhero costume flying through the skies, a close-up of paint being applied to a canvas and a horse drinking from a pond.
We’re pleased to introduce Make-A-Video, our latest in #GenerativeAI research! With just a few words, this state-of-the-art AI system generates high-quality videos from text prompts.
Have an idea you want to see? Reply w/ your prompt using #MetaAI and we’ll share more results. pic.twitter.com/q8zjiwLBjb
— Meta AI (@MetaAI) September 29, 2022
Meta already has an established interest in AI and its latest tool builds on the generative tech research being done by its AI division.
The company said it would share its research in a paper so the public can learn how the tool was created. It also plans to release a demo experience.
“We want to be thoughtful about how we build new generative AI systems like this,” the company said in a blogpost yesterday.
“Make-A-Video uses publicly available datasets, which adds an extra level of transparency to the research. We are openly sharing this generative AI research and results with the community for their feedback, and will continue to use our responsible AI framework to refine and evolve our approach to this emerging technology.”
However, as The Verge pointed out, Meta is not currently allowing anyone access to the model, and the limited examples it has shared publicly may be the ones that show Make-A-Video in the best light.