Adobe releases new AI video model in beta

14 Oct 2024

Image: © gguy/Stock.adobe.com

The new video model is the latest addition to Adobe’s suite of generative AI tools.

Adobe has today (14 October) released an AI video model across its creative suite in limited public beta.

The Firefly Video Model brings a range of tools to Adobe Creative Cloud, including a function that extends clips in Premiere Pro, a text-to-video tool and an image-to-video tool.

The new video model is the latest addition to Adobe’s suite of generative AI tools, known as Firefly. Firefly was first introduced in March 2023 and already includes an image model, a vector model and a design model.

The new video model was first unveiled last month, and is currently only available through a limited public beta to gather feedback from “a small group of creative professionals”, which will be used to “refine and improve” the model, according to Adobe.

One of the functions introduced with the new video model is called Generative Extend, which can be used in Premiere Pro to extend clips to cover gaps in footage, smooth out transitions or hold on shots longer for edits.

The image- and text-to-video tools will allow users to generate video using text prompts, camera controls and reference images, and will be available in the Firefly web app.

Along with the video model, Adobe has also released a set of updates for its other Firefly models, including faster image generation for the Firefly Image 3 model and enhancements to the Vector Model functions in Adobe Illustrator.

Creator concerns

Along with today’s announcement, Meagan Keane, principal product marketing manager for Adobe Pro Video, added a note about Adobe’s “commitment to creator-friendly AI innovation”.

“Our Firefly generative AI models are trained on licensed content, such as Adobe Stock, and public domain content – and are never trained on customer content,” said Keane.

“In addition, we continue to innovate ways to protect our customers through efforts including Content Credentials with attribution for creators and provenance of content.”

In June, Adobe faced backlash online from filmmakers and artists after a terms-of-use update that allowed its machine learning tools to “access” and “view” user content, without a clear explanation of how customer content would be used by the company.

This backlash led to Adobe updating its terms of use again to make its legal language more understandable. In a blog post in June, the company tried to clear the air on its stance and reassure users that their content would not be used to train any of its generative AI tools.

“We’ve never trained generative AI on customer content, taken ownership of a customer’s work or allowed access to customer content beyond legal requirements. Nor were we considering any of those practices as part of the recent terms-of-use update.

“That said, we agree that evolving our Terms of Use to reflect our commitments to our community is the right thing to do.”

Earlier this year, Adobe revealed new generative AI features to improve customer experience management services as well as a new partnership with Microsoft.

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.

Colin Ryan is a copywriter/copyeditor at Silicon Republic

editorial@siliconrepublic.com