Voice cloning tech to power 2024 political ads as disinformation concerns grow

By Sharon Goldman (@sharongoldman), VentureBeat -- Disinformation concerns may be growing over the use of AI in the 2024 U.S. elections, but that isn’t stopping AI voice cloning startups from getting into the political game. For example, Boca Raton, Florida-based Instreamatic, an AI audio/video ad platform that raised a $6.1 million Series A funding round in 2021, says it is expanding its capabilities into the wild world of political advertising. The platform enables candidates’ campaigns to quickly generate highly targeted, AI-driven contextual video and audio ads that adapt to changing events or locations, featuring voiceovers rather than talking-head videos.

However, the use of AI in 2024 U.S. election campaigns is expected to become a disinformation minefield and is already raising red flags. A recent ABC News report, for example, highlighted Florida Gov. Ron DeSantis’ campaign efforts over the summer, which included AI-generated images and audio of Donald Trump. And VentureBeat recently interviewed Nathan Lambert, a machine learning researcher at the Allen Institute for AI, who said that whether from chatbots or deepfakes, generative AI will make the 2024 U.S. elections a ‘hot mess.’

Instreamatic requires confirmation of permission to use voice

Stas Tushinskiy, CEO and co-founder of Instreamatic, insists the company has guardrails built in to make sure its product is not used for election disinformation. “For any kind of campaign, whoever the client is, they have to confirm they have permission to use the voice,” he told VentureBeat. In addition, he said that the political advertising offering will not be available to everyone. “You can’t just sign up,” he explained. “We will be engaged in campaign creation.” Instreamatic, he said, does not “want to get caught in the middle of something we didn’t intend the platform to be used for,” adding that if there were problems with political ads, they would be “deleted immediately” and, if necessary, “we’ll make a public safety statement.”

Automating a manual process that already exists

Tushinskiy emphasized that Instreamatic is not reinventing the world of political ads to help candidates get elected. He described the company’s offering as automating a tedious manual process that already exists. “This process involves somebody like a candidate or voice talent going to a studio to spend hours and hours in the studio, then someone else uploading them and someone else checking for human errors,” he said. “It’s an extensive and expensive process and we automated all of that,” compressing the timeline from six to eight weeks down to a few minutes. An ad campaign also requires a great deal of back-and-forth between the agency and the client, he explained, in which words may be changed, requiring new takes. Beyond that, voice cloning allows an airline, for example, to mention a variety of travel destinations in targeted ads, or car brands to cite local dealership locations. “Contextual ads always outperform generic ads, so it makes a lot of sense in terms of increasing the effectiveness of your ad spend,” he said.
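
To make that contextual-ad workflow concrete, here is a minimal sketch of the idea in Python. Instreamatic has not published its API, so every name below (the ad template, the Market record, the synthesize_with_cloned_voice stub and its voice_id parameter) is invented for illustration; the sketch only shows the pattern Tushinskiy describes: one approved script with variable slots, many localized variants, each rendered with a cloned voice.

```python
# Hypothetical sketch of template-driven, voice-cloned contextual ads.
# Nothing here is Instreamatic's real API; all names are invented for illustration.

from dataclasses import dataclass, asdict

# One approved script, with slots that vary per market.
AD_TEMPLATE = (
    "This weekend only: fly nonstop from {city} to {destination} "
    "starting at {fare} dollars."
)

@dataclass
class Market:
    city: str
    destination: str
    fare: int

def synthesize_with_cloned_voice(script: str, voice_id: str) -> bytes:
    """Stand-in for a voice-cloning TTS call; a real integration would send
    the script and an authorized voice ID to the vendor and get audio back."""
    return f"[audio rendered in voice {voice_id}] {script}".encode()

def build_campaign(markets: list[Market], voice_id: str) -> dict[str, bytes]:
    """Render one localized audio ad per market from the single template."""
    return {
        m.city: synthesize_with_cloned_voice(AD_TEMPLATE.format(**asdict(m)), voice_id)
        for m in markets
    }

if __name__ == "__main__":
    # The voice_id assumes the talent has confirmed permission, per the article.
    ads = build_campaign(
        [Market("Boston", "Miami", 129), Market("Denver", "Phoenix", 149)],
        voice_id="talent-approved-001",
    )
    for city, audio in ads.items():
        print(city, "->", len(audio), "bytes of audio")
```

In practice the stub would be replaced by a call to whatever text-to-speech service the platform uses, and the per-market fields could just as easily be dealership addresses, event dates, or polling locations.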

Concerns about AI and election disinformation

Experts maintain the political ad landscape is fraught with potential AI-generated peril. For example, there are currently no federal rules for the use of AI-generated content, such as ads, in political campaigns. Russell Wald, policy director at Stanford University’s Institute for Human-Centered AI, told ABC News Live in November that “All campaigns can use this. So in that sense, who is setting the rules of the road? Is it the campaigns themselves, as they go?” But Tushinskiy said “if we were the ones that created misinformation, I wouldn’t want to be in this business.” Instead, he maintained, “We’re just giving them the tools to be more effective.” And the moment Instreamatic catches somebody doing something unethical, “not only can we stop it, but we can also expose it.”