Adobe has integrated the Firefly Video Model AI video generator into the Premiere Pro editor

Adobe has officially introduced Firefly Video Model, a new generative neural network designed for working with video, and has made it part of Premiere Pro. With this tool, users can extend existing footage and create videos from static images and text prompts.

Image source: Adobe

The Generative Extend function, built on this model, is becoming available to Premiere Pro users in beta. It can extend a clip by a few seconds at its beginning, its end, or within some other segment. This is useful when editing requires fixing minor defects, such as an unwanted shift in a subject's gaze or an unnecessary movement in the frame.

Generative Extend can lengthen a video by no more than two seconds, so it is only suitable for small fixes. It works at 720p or 1080p resolution and 24 frames per second. The function can also lengthen audio, with limitations: a sound effect or ambient noise can be extended by up to 10 seconds, but spoken dialogue and music tracks cannot be extended at all.
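
For editors scripting batch workflows, these published limits are easy to encode as a pre-flight check. The sketch below is purely illustrative: the `can_extend` helper and the clip fields are hypothetical inventions, and only the numeric limits come from Adobe's announcement.

```python
# Hypothetical pre-flight check for Generative Extend requests.
# Only the numeric limits (2 s video, 10 s ambient audio, 720p/1080p,
# 24 fps) come from Adobe's announcement; everything else is illustrative.
from dataclasses import dataclass

MAX_VIDEO_EXTEND_S = 2.0         # video can be extended by at most 2 seconds
MAX_AUDIO_EXTEND_S = 10.0        # sound effects/ambience: up to 10 seconds
SUPPORTED_HEIGHTS = {720, 1080}  # 720p or 1080p only
SUPPORTED_FPS = {24}             # 24 frames per second only

@dataclass
class Clip:
    kind: str    # "video", "ambient_audio", "dialogue", or "music"
    height: int  # vertical resolution in pixels (video only)
    fps: float   # frame rate (video only)

def can_extend(clip: Clip, seconds: float) -> bool:
    """Return True if the published Generative Extend limits allow the request."""
    if clip.kind == "video":
        return (seconds <= MAX_VIDEO_EXTEND_S
                and clip.height in SUPPORTED_HEIGHTS
                and clip.fps in SUPPORTED_FPS)
    if clip.kind == "ambient_audio":
        return seconds <= MAX_AUDIO_EXTEND_S
    return False  # dialogue and music tracks cannot be extended

print(can_extend(Clip("video", 1080, 24), 1.5))  # True
print(can_extend(Clip("dialogue", 0, 0), 3.0))   # False
```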

The web version of Firefly has gained two new video generation tools: Text-to-Video and Image-to-Video, which, as their names suggest, create videos from text prompts and from static images. Both features are in limited beta testing and may not yet be available to all Firefly web users.

Text-to-Video works much like other AI video generators, such as OpenAI's Sora: the user enters a text description of the desired result and starts generation. Different visual styles can be imitated, and the output can be refined with a set of "camera controls" that simulate camera angle, camera movement, and shooting distance.
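
To make the prompt-plus-camera-controls idea concrete, here is a sketch of what such a request could look like. Adobe has not published an API for this feature, so the field names and values below are invented solely for illustration.

```python
# Hypothetical text-to-video request with "camera controls".
# No real Adobe endpoint or schema is implied; every name here is made up
# to illustrate the concept described in the announcement.
import json

request = {
    "prompt": "a lighthouse on a rocky coast at dusk, waves crashing",
    "style": "cinematic",            # style imitation mentioned in the announcement
    "camera": {                      # the announced "camera controls"
        "angle": "low",              # camera angle
        "motion": "slow pan right",  # camera movement
        "distance": "wide shot",     # shooting distance
    },
}

print(json.dumps(request, indent=2))  # payload a generator of this kind might accept
```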

Image-to-Video lets the user attach a static image to the text prompt so that the generated video matches their intent more closely. Adobe suggests using the tool, among other things, to recreate individual shots by generating new video from single frames of existing footage. The published examples make clear, however, that at this stage the tool will not eliminate reshoots, since it does not reproduce every object in the source image accurately; Adobe's own comparison of an original clip with a video generated from one of its frames shows the differences.
