Alibaba's AI video generator just dunked on Sora by making the Sora lady sing
Alibaba wants you to compare its new AI video generator to OpenAI's Sora. Otherwise, why use it to make Sora's most famous creation belt out a Dua Lipa song?
On Tuesday, an organization called the "Institute for Intelligent Computing" within the Chinese e-commerce juggernaut Alibaba released a paper about an intriguing new AI video generator it has developed that's shockingly good at turning still images of faces into passable actors and charismatic singers. The system is called EMO, a fun backronym supposedly drawn from the words "Emotive Portrait Alive" (though, in that case, why is it not called "EPO"?).
EMO is a peek into a future where a system like Sora makes video worlds, and rather than being populated by attractive mute people just kinda looking at each other, the "actors" in these AI creations say stuff — or even sing.
Alibaba put demo videos on GitHub to show off its new video-generating framework. These include a video of the Sora lady — famous for walking around AI-generated Tokyo just after a rainstorm — singing "Don't Start Now" by Dua Lipa and getting pretty funky with it.
The demos also reveal how EMO can, to cite one example, make Audrey Hepburn speak the audio from a viral clip of Riverdale's Lili Reinhart talking about how much she loves crying. In that clip, Hepburn's head maintains a rather soldier-like upright position, but her whole face — not just her mouth — really does seem to emote the words in the audio.
In contrast to this uncanny version of Hepburn, Reinhart moves her head a whole lot in the original clip, and she emotes quite differently, so EMO doesn't seem to be a riff on the sort of AI face-swapping that went viral in the mid-2010s and led to the rise of deepfakes in 2017.
Over the past few years, applications designed to generate facial animation from audio have cropped up, but they haven't been all that inspiring. For instance, the NVIDIA Omniverse software package touts an app with an audio-to-facial-animation framework called "Audio2Face" — which relies on 3D animation for its outputs rather than simply generating photorealistic video like EMO.
Despite Audio2Face only being two years old, the EMO demo makes it look like an antique. In a video that purports to show off its ability to mimic emotions while talking, the 3D face it depicts looks more like a puppet in a facial expression mask, while EMO's characters seem to express the shades of complex emotion that come across in each audio clip.
It's worth noting at this point that, as with Sora, we're assessing this AI framework based on a demo provided by its creators, and we don't actually have our hands on a usable version that we can test. So it's tough to believe that, right out of the gate, this piece of software can churn out such convincingly human facial performances from audio without significant trial and error, or task-specific fine-tuning.
The characters in the demos mostly aren't expressing speech that calls for extreme emotions — faces screwed up in rage, or melting down in tears, for instance — so it remains to be seen how EMO would handle heavy emotion with audio alone as its guide. What's more, despite being made in China, the system is depicted as a total polyglot, capable of picking up on the phonics of English and Korean and making the faces form the appropriate phonemes with decent — though far from perfect — fidelity. It would be interesting, then, to see what would happen if you fed EMO audio of a very angry person speaking a lesser-known language.
Also fascinating are the little embellishments between phrases — pursed lips or a downward glance — that insert emotion into the pauses rather than just the times when the lips are moving. These are examples of how a real human face emotes, and it's tantalizing to see EMO get them so right, even in such a limited demo.
According to the paper, EMO's model relies on a large dataset of audio and video (once again: from where?) to give it the reference points necessary to emote so realistically. And its diffusion-based approach apparently doesn't involve an intermediate step in which 3D models do part of the work. A reference-attention mechanism and a separate audio-attention mechanism are paired by EMO's model to produce animated characters whose facial animations match what comes across in the audio while remaining true to the facial characteristics of the provided base image.
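For readers curious what "reference-attention" and "audio-attention" might look like in practice, here is a minimal sketch of the general idea: the video latents attend over tokens from the reference portrait (to preserve identity) and over tokens from an audio encoder (to drive motion). All names, shapes, and the additive combination are illustrative assumptions, not EMO's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: queries (from video latents)
    attend over keys/values from a conditioning signal."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)    # (n_q, n_kv) similarities
    return softmax(scores, axis=-1) @ values  # (n_q, d) blended values

rng = np.random.default_rng(0)
frame_latents = rng.standard_normal((16, 64))  # latent tokens for one frame
ref_tokens    = rng.standard_normal((16, 64))  # tokens from the base portrait
audio_tokens  = rng.standard_normal((8, 64))   # tokens from the audio clip

# Hypothetical combination: reference-attention anchors identity,
# audio-attention injects motion cues derived from the soundtrack.
out = (frame_latents
       + cross_attention(frame_latents, ref_tokens, ref_tokens)
       + cross_attention(frame_latents, audio_tokens, audio_tokens))
print(out.shape)  # (16, 64)
```

In a real diffusion model these attention layers would sit inside a denoising network and run at every diffusion step; the sketch only shows how two separate conditioning signals can each steer the same set of latents.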
It's an impressive collection of demos, and after watching them it's impossible not to imagine what's coming next. But if you make your money as an actor, try not to imagine too hard, because things get pretty disturbing pretty quick.