(Image credit: Adobe)

Has Adobe just solved one of AI video's biggest problems?

Creative Bloq


Mangled nightmares like the Coca-Cola AI advert and the withdrawn McDonald's Christmas ad show that AI-generated video is still of dubious usefulness as a finished creative asset. Meanwhile, OpenAI's closure of Sora has again raised questions about whether there's real demand for the technology and whether it can be both safe and profitable given the resources it consumes.

Adobe thinks it has a solution, at least for one of the major technical problems that makes AI video so difficult to use. It's released a preview of an experimental product called MotionStream that allows users to take a more hands-on approach to controlling AI-generated footage.

Text prompts make AI video easy to generate but difficult to control, since motion is hard to describe in words. AI video generation is also slow, which means waiting for a short clip to render only to discover that the movement looks weird and unnatural, then starting over with each new generation.


Adobe's solution is to develop a way to interact with AI-generated video as it’s being created. MotionStream shifts from delayed rendering to real-time interaction, letting the user reposition objects and change camera angles using cursors and sliders as the video is generated.

The process still begins with a text prompt, but users can then click and drag objects to control their movement and adjust the camera location. Users can click to mark objects they want to remain static.

Eli Shechtman, Senior Principal Scientist and one of the researchers behind MotionStream, says the tool could be a game-changer for secondary effects that are hard to control manually.

“If you want to move an elephant, for example, you can click and move its body, but it’s a lot of work to manually make those movements look natural. This currently requires skills and specialized software to rig, and animate or keyframe the animation, following a process that typically takes hours, if not days depending on scope.


“Instead, the underlying video generator behind MotionStream is basically simulating the world in real time. So, the elephant’s legs move naturally, and the ears flap naturally as the elephant moves. The model provides you with knowledge about the world and you can interact with it.”

He thinks the same technology could also change how people edit photos and other still images.

“Once video becomes interactive, your canvas could be a video that’s always running. When you interact with it, you see a smooth video changing toward the edit you’ve specified. You can watch the transition, and you could even stop it in the middle if you like the intermediate result. There’s big promise here for both image and video”.


The paradigm shift behind MotionStream also speeds up work with AI video. Early models generated an entire video before delivering it to the user, because each frame was computed with reference to every other frame.

That improved generation quality, but Senior Research Scientist and MotionStream collaborator Richard Zhang says “knowing both the past and future isn’t how the universe works”.

Adobe Research wanted to remove that constraint, so it developed a method that generates a video in pieces, with future frames depending only on what has already been created, a process described as “autoregressive”. As users watch the first piece, the tool is already generating the second, making it possible to stream the video to the user in something close to real time.
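Adobe hasn't published MotionStream's model details, but the autoregressive idea itself can be sketched in a few lines. In this hypothetical Python sketch, `next_frame` stands in for a real generative model; the key property is that each new frame is conditioned only on frames already produced, so the stream can be delivered frame by frame rather than after the whole clip has rendered:

```python
from typing import Callable, Iterator, List

# Stand-in for real image data; a real model would use tensors.
Frame = List[float]

def generate_stream(
    first_frame: Frame,
    next_frame: Callable[[List[Frame]], Frame],
    num_frames: int,
) -> Iterator[Frame]:
    """Autoregressive generation: each frame depends only on the
    past, so every frame can be yielded (shown to the user) the
    moment it exists, instead of waiting for the full clip."""
    history = [first_frame]
    yield first_frame
    for _ in range(num_frames - 1):
        frame = next_frame(history)  # conditioned only on prior frames
        history.append(frame)
        yield frame                  # deliver immediately
```

A bidirectional model, by contrast, would need the whole sequence in hand before any frame could be finalised, which is exactly the constraint Adobe says it removed.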

For now, MotionStream remains in development as a research project. There's no detail on if, when or how it could be added to tools like Adobe Firefly or Adobe's video-editing software Premiere.