When Roadrunner, a documentary about the late chef and traveler Anthony Bourdain, opened in theaters last month, its director, Morgan Neville, spiced up promotional interviews with an unconventional disclosure for a documentary maker: some of the words Bourdain speaks in the movie were faked by an artificial intelligence program used to mimic the star's voice.
Accusations by Bourdain fans that Neville had acted unethically dominated coverage of the film. Despite that interest, how much of Bourdain's fake voice appears in the two-hour film, and what it says, was unclear until now.
In the interview that made his movie infamous, Neville told The New Yorker that he had produced three fake Bourdain clips with the permission of the chef's estate, all based on words Bourdain had written or spoken but that were not available as audio recordings. He revealed only one of them, an email that Bourdain "reads" in the movie's trailer, but he boasted that the other two clips would be undetectable. "If you watch the movie," the Academy Award-winning Neville is quoted as saying, "you probably don't know what the other lines spoken by the AI are, and you're not going to know."
The audio experts at Pindrop, a startup that helps banks and others combat phone fraud, think they know. If the company's analysis is correct, the debate over the Bourdain deepfakes is rooted in less than 50 seconds of audio in the 118-minute film.
Pindrop's analysis flagged the email passage revealed by Neville as well as a clip early in the film apparently drawn from an essay Bourdain wrote about Vietnam called "The Hungry American," collected in his 2006 book The Nasty Bits. It also flagged audio in the middle of the film in which the chef observes that many cooks and writers have a "relentless instinct to dispense with something good." The same sentences appear in an interview Bourdain gave to the food website First We Feast for his 60th birthday in 2016, two years before he died by suicide.
All three clips sound like Bourdain's voice. Listened to closely, however, they seem to bear the signatures of synthetic speech, such as flat tones and odd fricatives like the "s" and "f" sounds. One Reddit user independently flagged the same three clips as Pindrop, writing that they were easy to pick out when watching the movie a second time. Distributor Focus Features did not respond to requests for comment; Neville's production company declined to comment.
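The fricative tell mentioned above can be illustrated with a toy measurement. The sketch below is not Pindrop's method (the company has not disclosed its techniques); it simply shows, using stand-in signals and Python's standard library, why noise-like sounds such as "s" and "f" are acoustically distinctive: their zero-crossing rate is far higher than that of voiced, tonal speech, which is one reason poorly synthesized fricatives can stand out.

```python
# Hypothetical illustration, not Pindrop's actual analysis: fricatives are
# noise-like, so their zero-crossing rate (ZCR) far exceeds that of voiced,
# tonal speech. A 200 Hz sine stands in for a voiced vowel; seeded uniform
# noise stands in for an "s" sound.
import math
import random

SR = 16_000  # assumed sample rate in Hz

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    return crossings / (len(samples) - 1)

# One second of each stand-in signal.
voiced = [math.sin(2 * math.pi * 200 * n / SR) for n in range(SR)]
rng = random.Random(0)  # seeded so the result is reproducible
fricative = [rng.uniform(-1.0, 1.0) for _ in range(SR)]

print(round(zero_crossing_rate(voiced), 3))     # low, near 0.025
print(round(zero_crossing_rate(fricative), 3))  # high, near 0.5
```

Real detectors use far richer spectral and temporal features than ZCR, but the contrast shown here is the intuition behind listening for "s" and "f" artifacts.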
When Neville predicted that his use of AI-generated media, sometimes called deepfakes, would be undetectable, he may have overestimated the sophistication of his fakes. He likely did not anticipate the controversy, or the interest his use of the technology would attract from fans and audio experts. When the buzz reached researchers at Pindrop, they saw a perfect test case for the software they had built to detect audio deepfakes; they set it to work when the movie debuted on streaming services earlier this month. "We're always looking for ways to test our systems, especially in real-world conditions, and this was a novel way to validate our technology," says Colin Davis, Pindrop's chief technology officer.
Pindrop's findings may have solved the mystery of Neville's undisclosed deepfakes, but the episode portends future controversies as deepfakes become more sophisticated and more accessible, for creative and malicious projects alike.