by Gary Grossman/DALL-E — VentureBeat

AI is increasingly being used to represent, or misrepresent, the opinions of historical and current figures. A recent example: President Biden’s voice was cloned and used in a robocall to New Hampshire voters. Taking this a step further, given AI’s advancing capabilities, the symbolic “candidacy” of an AI-created persona could soon be possible. That may seem outlandish, but the technology to create such an AI political actor already exists, and many examples point to this possibility. Technologies that enable interactive and immersive learning experiences bring historical figures and concepts to life. When harnessed responsibly, these can not only demystify the past but also inspire a more informed and engaged citizenry.
People today can interact with chatbots reflecting the viewpoints of figures ranging from Marcus Aurelius to Martin Luther King, Jr., using the “Hello History” app, or George Washington and Albert Einstein through “Text with History.” These apps claim to help people better understand historical events or “just have fun chatting with your favorite historical characters.” Similarly, a Vincent van Gogh exhibit at Musée d’Orsay in Paris includes a digital version of the artist and offers viewers the opportunity to interact with his persona. Visitors can ask questions and the Vincent chatbot answers based on a training dataset of more than 800 of his letters. Forbes discusses other examples, including an interactive experience at a World War II museum that lets visitors converse with AI versions of military veterans.
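The article does not describe how these exhibits are built, but a system that answers questions “based on a training dataset of more than 800 of his letters” is typically a retrieval step: find the letter most relevant to the visitor’s question, then ground a response in it. Below is a minimal, hypothetical sketch of that retrieval step using plain TF-IDF similarity over a tiny stand-in corpus; the letter texts, function names, and scoring are illustrative assumptions, not the exhibit’s actual implementation.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Deliberately simple tokenizer: lowercase word tokens only.
    return re.findall(r"[a-z']+", text.lower())

def tfidf_vectors(docs):
    # Term frequencies per document, weighted by inverse document frequency.
    token_lists = [tokenize(d) for d in docs]
    df = Counter()
    for tokens in token_lists:
        df.update(set(tokens))
    n = len(docs)
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vectors = []
    for tokens in token_lists:
        tf = Counter(tokens)
        vectors.append({t: tf[t] * idf[t] for t in tf})
    return vectors, idf

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def most_relevant_letter(question, letters):
    # Return the letter whose TF-IDF vector is closest to the question's.
    vectors, idf = tfidf_vectors(letters)
    q_tf = Counter(tokenize(question))
    q_vec = {t: q_tf[t] * idf.get(t, 0.0) for t in q_tf}
    scores = [cosine(q_vec, v) for v in vectors]
    return letters[max(range(len(letters)), key=scores.__getitem__)]

# Hypothetical stand-in corpus; the real exhibit draws on 800+ actual letters.
LETTERS = [
    "I dream of painting and then I paint my dream. The sunflowers turn to the light.",
    "My dear Theo, money worries press on me again, yet I keep working every day.",
    "The starry night over the Rhone fills me with a feeling I cannot name.",
]

print(most_relevant_letter("What did you feel about the night sky?", LETTERS))
```

In a production system the retrieved letter would then be passed, along with the question, to a language model prompted to answer in the figure’s voice; the retrieval step shown here is what keeps the persona’s answers anchored to the real source material.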
The concerning rise of deepfakes

Of course, this technology may also be used to clone both historical and current public figures with other intentions in mind and in ways that raise ethical concerns. I am referring here to the deepfakes that are proliferating, making it increasingly difficult to separate real from fake and truth from falsehood, as the Biden robocall example shows. Deepfake technology uses AI to create or manipulate still images, video and audio, making it possible to convincingly swap faces, synthesize speech, and fabricate or alter actions in videos. It mixes and edits data from real images and videos to produce realistic-looking and -sounding creations that are increasingly difficult to distinguish from authentic content.
While there are legitimate educational and entertainment uses for these technologies, they are increasingly being used for less benign purposes. Worries abound about AI-generated deepfakes that impersonate known figures to manipulate public opinion and potentially alter elections.

The rise of political deepfakes

Just this month there have been stories about AI being used for such purposes. Imran Khan, Pakistan’s former prime minister, campaigned from jail through speeches created with AI to clone his voice. The approach proved effective: Khan’s party performed surprisingly well in a recent election. As The New York Times reported: “‘I had full confidence that you would all come out to vote. You fulfilled my faith in you, and your massive turnout has stunned everybody,’ the mellow, slightly robotic voice said in the minute-long video, which used historical images and footage of Mr. Khan and bore a disclaimer about its AI origins.”