Fact check

This article is part of The Globe’s initiative to cover dis- and misinformation. E-mail us to share tips or feedback at disinfodesk@globeandmail.com.

Video production and filmmaking have undergone huge technological changes in a few generations, from silent black-and-white films to blockbusters brimming with CGI. Now the growing capabilities of video generation using artificial intelligence seem ready to usher in a new era of visual creation.

One example that has garnered much attention for its use of this technology is Next Stop Paris, the first original production from TCLtv+, a streaming service from electronics manufacturer TCL. It was made using generative AI and other video-creation tools.

The trailer has the hallmarks of video made by generative AI, including shots subtly shifting between frames and stiff animation. A press release from TCL said the romantic short uses professional voice actors and was written by two staff members (not an AI such as ChatGPT, despite the number of clichés in the dialogue). TCL used Stable Diffusion’s video system to make the visuals, without ever needing to set foot in the City of Light.

The company said a multinational team, including Canadians, worked on Next Stop Paris and that it is an “early experiment bringing tech and creative together in a hybrid entertainment format.” It will be released in the summer.

While Next Stop Paris may not be challenging Hallmark in the romantic movie market yet, it is a big improvement over the 2023 AI-generated clip of Will Smith eating pasta, which looked more Lovecraftian than real. Now other systems are taking the technology further.

Microsoft recently unveiled VASA-1, a video-generation system that can take a single image of a face and an audio clip, such as speech or music, and produce a highly realistic animation of that face talking or singing. The computer giant calls it “Lifelike Audio-Driven Talking Faces Generated in Real Time.” Here’s its example of the Mona Lisa rapping.

Microsoft’s experimental VASA-1 AI generation system can make videos like this by taking a still image of a face and smoothly animating it to lyrics or a script. MICROSOFT

Critics have called it an automatic deepfake generator. Deepfakes are videos in which one person’s face is animated over another’s. The technique is not new, but in the past it required specialized software and a powerful computer; now it can be done in moments in your web browser. Deepfakes can make anyone appear to say or do anything, from presidents talking about a video game to revenge porn.

Microsoft says it has no plans to make VASA-1 available to the public because of those concerns. But there are similar tools already available.

Viggle can animate an avatar or image to a piece of music or dialogue. It’s not as smooth as Microsoft’s offering, but it has been used to make OpenAI chief executive Sam Altman look like an animated character and to mimic a news reporter at a crime scene, raising fears that such tools will be used to spread disinformation.

As these video-generation systems improve, distinguishing their output from real footage is becoming very challenging, but careful examination can still reveal some clues. Take a look at this sample video from Sora, OpenAI’s video generator (which is not available to the public yet).

OPENAI/SORA

It’s very impressive and highly detailed, but look more closely at the bottom-left corner, which we’ve zoomed in on and slowed down.

Highlighting a glitching section of the dragon video. OPENAI/SORA

You can see the hand holding the pole suddenly shifts and the structure of the pole changes, making it look unbalanced and difficult to hold.
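We zoomed in and slowed the clip down by hand. For readers who want to try a similar check programmatically, here is a minimal sketch, not the method used for this article, assuming Python with the OpenCV library, a placeholder file name and an arbitrary threshold: it crops the bottom-left quarter of each frame and flags moments where that region changes sharply from one frame to the next, the kind of sudden shift visible above.

```python
# Minimal sketch: flag abrupt frame-to-frame changes in one region of a clip.
# "sample.mp4" is a placeholder for a local copy of the video; the threshold
# of 20 is arbitrary and would need tuning for a real clip.
import cv2
import numpy as np

cap = cv2.VideoCapture("sample.mp4")
prev_region = None
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Bottom-left quarter of the frame, converted to grayscale.
    region = cv2.cvtColor(frame[h // 2:, :w // 2], cv2.COLOR_BGR2GRAY)
    if prev_region is not None:
        # Mean absolute pixel difference between consecutive frames.
        diff = float(np.mean(cv2.absdiff(region, prev_region)))
        if diff > 20:
            print(f"Possible glitch near frame {frame_idx}: mean difference {diff:.1f}")
    prev_region = region
    frame_idx += 1

cap.release()
```

A spike in the difference score does not prove a clip is AI-generated (hard cuts and fast camera moves also trigger it), but it points to the frames worth slowing down and inspecting by eye.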

Another sample from Sora, of a construction site, shows a forklift suddenly changing direction and the high-visibility jacket of a worker changing colour. We circled both areas to make them easier to spot.

Highlighted areas where video generated by OpenAI's Sora is not able to keep a scene consistent. OPENAI/SORA

Depending on the scene, clips from video generators can be indistinguishable from real footage. The technology has enormous creative potential, but now more than ever, it’s hard to believe what you see.
