AI is increasingly being used to represent, or misrepresent, the opinions of historical and current figures. A recent example is the robocall to New Hampshire voters that used a clone of President Biden's voice. Taking this a step further, given AI's advancing capabilities, the symbolic "candidacy" of an AI-created persona could soon be possible. That may seem outlandish, but the technology to create such an AI political actor already exists, and many examples point to this possibility. Technologies that enable interactive and immersive learning experiences bring historical figures and concepts to life. When harnessed responsibly, these can not only demystify the past but also inspire a more informed and engaged citizenry.
People today can interact with chatbots reflecting the viewpoints of figures ranging from Marcus Aurelius to Martin Luther King, Jr., using the “Hello History” app, or George Washington and Albert Einstein through “Text with History.” These apps claim to help people better understand historical events or “just have fun chatting with your favorite historical characters.” Similarly, a Vincent van Gogh exhibit at Musée d’Orsay in Paris includes a digital version of the artist and offers viewers the opportunity to interact with his persona. Visitors can ask questions and the Vincent chatbot answers based on a training dataset of more than 800 of his letters. Forbes discusses other examples, including an interactive experience at a World War II museum that lets visitors converse with AI versions of military veterans.
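The museum has not published how the Vincent chatbot works, but the general pattern described, answering only from a fixed corpus of letters, is easy to sketch. The snippet below is a minimal, purely illustrative example: it ranks placeholder letter texts by word overlap with a visitor's question and assembles a prompt grounded in the most relevant letters. The corpus, the scoring method and the omitted language-model call are all assumptions, not the exhibit's actual implementation.

```python
# Illustrative sketch only: rank a persona's letters by relevance to a question,
# then build a prompt that keeps the chatbot's answer grounded in that corpus.
# The two letters below are placeholders; a real exhibit would load ~800 documents.
from collections import Counter

letters = {
    "letter_001": "I dream of painting and then I paint my dream.",
    "letter_002": "The sunflowers are mine, in a way, and I keep painting them.",
}

def score(question: str, text: str) -> int:
    """Crude relevance score: count words shared between question and letter."""
    q_words = Counter(question.lower().split())
    t_words = Counter(text.lower().split())
    return sum(min(q_words[w], t_words[w]) for w in q_words)

def build_prompt(question: str, top_k: int = 3) -> str:
    """Select the most relevant letters and frame them as context for the persona."""
    ranked = sorted(letters.items(), key=lambda kv: score(question, kv[1]), reverse=True)
    context = "\n\n".join(text for _, text in ranked[:top_k])
    return (
        "You are answering as Vincent van Gogh, using only the letters below.\n\n"
        f"Letters:\n{context}\n\nVisitor question: {question}\nAnswer:"
    )

# A production system would pass this prompt to a language model; that call is omitted here.
print(build_prompt("Why did you paint sunflowers?"))
```

However the real exhibit is built, the design choice this pattern represents is the same: constrain the persona's answers to what the historical record actually contains.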
The concerning rise of deepfakes
Of course, this technology may also be used to clone both historical and current public figures with other intentions in mind and in ways that raise ethical concerns. I am referring here to the deepfakes that are proliferating, making it difficult to separate real from fake and truth from falsehood, as the Biden robocall illustrates. Deepfake technology uses AI to create or manipulate still images, video and audio, making it possible to convincingly swap faces, synthesize speech and fabricate or alter actions in video. It mixes and edits data from real images and videos to produce realistic-looking and -sounding creations that are increasingly difficult to distinguish from authentic content.
While there are legitimate educational and entertainment uses for these technologies, they are increasingly being used for less benign purposes. Worries abound that AI-generated deepfakes impersonating known figures could be used to manipulate public opinion and even alter elections.
The rise of political deepfakes
Just this month there have been stories about AI being used for exactly these purposes. Imran Khan, Pakistan's former prime minister, campaigned from jail through speeches delivered by an AI clone of his voice. The tactic appeared to work: Khan's party performed surprisingly well in a recent election. As The New York Times reported: "'I had full confidence that you would all come out to vote. You fulfilled my faith in you, and your massive turnout has stunned everybody,' the mellow, slightly robotic voice said in the minute-long video, which used historical images and footage of Mr. Khan and bore a disclaimer about its AI origins."
This was not the only recent example. A political party in Indonesia created an AI-generated deepfake video of former president Suharto, who died in 2008. In the video, the fake Suharto encourages people to vote for a former army general who was part of his military-backed regime. As CNN reported, the video, released only weeks before the election, was intended to influence voters. It apparently did, receiving 5 million views, and the former general went on to win the election. Similar tactics are being used in India. Al Jazeera reported that M. Karunanidhi, an icon of cinema and politics, recently appeared before a live audience on a large projected screen. Karunanidhi gave a speech in which he was "effusive in his praise for the able leadership of M.K. Stalin, his son and the current leader of the state." Karunanidhi died in 2018, yet this was the third time in six months that he "appeared" via AI at such public events. It is now clear that the AI-powered deepfake era in politics, first feared several years ago, has fully arrived.
Imagining the rise of ‘artificial’ political candidates
Techniques like those used in deepfake technology can produce highly realistic and interactive digital representations of fictional or real-life characters. These developments make it technologically possible to simulate conversations with historical figures or to create realistic digital personas based on their public records, speeches and writings. One possible new application is that someone (or some group) will put forward an AI-created digital persona for public office: a chatbot supported by AI-created images, audio and video. "Outlandish," you say? Of course. Ridiculous? Quite possibly. Plausible? Entirely. After all, chatbots already serve as therapists, boyfriends and girlfriends. There are several barriers to this idea, not the least of which is that a bona fide candidate for Congress or even a local city council must be an actual person. As such, a chatbot cannot register as a candidate, nor can it register to vote. However, what if a write-in campaign led to a digital persona chatbot receiving more votes than any candidate on the ballot? That seems implausible, but it is possible. Since this is purely hypothetical, we can play out an imaginary scenario.
Got Milk?
For the sake of discussion, assume that "Milkbot" is a write-in candidate in a future San Francisco mayoral election. Milkbot uses an open-source large language model (LLM) trained on the writings, speeches, videos and social postings of Harvey Milk, the late member of the San Francisco Board of Supervisors. The dataset might be further augmented with content from people who held, or hold, similar viewpoints. Milkbot can deliver speeches that its promoters help shape, generate accompanying AI video and audio, and post to various social platforms. Milkbot can also "answer" questions from the public, much as the van Gogh chatbot does, and, as its popularity grows, from the press. Due to the novelty, or because no real candidate captures the public imagination, momentum grows for the Milkbot mayoral effort.
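Because Milkbot is hypothetical, any implementation detail here is an assumption, but the corpus-to-persona step described above can be sketched. Assuming its promoters had a folder of transcribed speeches and writings, a small preparation script might convert each document into an instruction-style record for fine-tuning an open-source LLM; the file paths, prompts and record format below are invented for illustration, not a real pipeline.

```python
# Hypothetical sketch: convert a persona corpus (speeches, writings, posts)
# into instruction-style JSONL records for fine-tuning an open-source LLM.
# Directory names, prompts and the record schema are illustrative assumptions.
import json
from pathlib import Path

CORPUS_DIR = Path("milk_corpus")       # assumed layout: one plain-text document per file
OUTPUT_FILE = Path("persona_train.jsonl")

SYSTEM_PROMPT = (
    "Respond in the voice and documented policy positions of the persona, "
    "drawing only on ideas present in the source corpus."
)

def to_record(text: str, source: str) -> dict:
    """Wrap one source document as a prompt/response pair for supervised fine-tuning."""
    return {
        "system": SYSTEM_PROMPT,
        "prompt": f"Summarize your position as expressed in {source}.",
        "response": text.strip(),
    }

def main() -> None:
    # Write one JSON record per line, the format most open-source fine-tuning tools accept.
    with OUTPUT_FILE.open("w", encoding="utf-8") as out:
        for doc in sorted(CORPUS_DIR.glob("*.txt")):
            record = to_record(doc.read_text(encoding="utf-8"), doc.name)
            out.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    main()
```

Whether such a persona were fine-tuned this way or simply prompted with retrieved source material, the constraint the scenario implies is the same: the bot's positions should be traceable to Milk's own record rather than invented by its promoters.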
A digital persona "delivers" a speech in a political campaign; image created with DALL-E 2.
The bot then receives more votes through the write-in campaign than any candidate on the ballot. The vote may be symbolic, equivalent to "none of the above," but it could also be that the outcome is exactly what the voting public wanted. What happens then? Most likely, the result would simply be ruled impermissible by election authorities, and the human candidate with the highest vote total would be named the winner. However, it could also prompt a legal redefinition of what constitutes a candidate or a winner of a political contest. There would certainly be questions about representation, accountability and the potential for manipulation or misuse of AI in political processes. Of course, comparable questions already exist in the real world.
Will digital personas be a form of social or political commentary?
If nothing else, running a digital persona in a symbolic campaign could serve as a form of social or political commentary. Such bots could highlight dissatisfaction with current political options, a desire for reform or an interest in futuristic concepts of governance, and could prompt discussions about the role of technology in society, the nature of democracy and how humans should interact with AI. This possibility opens yet another ethical debate: Would a digital persona write-in "candidate" be an abomination, or, if it gathered support, would it be a form of designer democracy in which the candidate is engineered to promote specific policies and traits? Imagine a digital persona put forward for an even higher office, potentially at the federal level. When the robotic revolution comes for politicians, we can hope the machines are trained for integrity.
Gary Grossman is EVP of the technology practice at Edelman and global lead of the Edelman AI Center of Excellence.