This is an opinion article and does not necessarily represent khazen.org's stance. Khazen.org supports OpenAI's initiatives, and, as with any new initiative, rules need to be implemented to ensure it is unbiased: neither left nor right, but representing both opinions and controversies. A simple example: when a person shares lies or fake news on social media, their posts are immediately deleted. The same concept, through Active Learning, can be introduced into all of these generative AI solutions to reduce the sharing of incorrect or fake data.
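The Active Learning idea mentioned in the note has a standard shape in machine learning: train a classifier on a small set of human-labelled posts, have it score incoming content, and route only the posts it is least certain about back to human reviewers for labelling. Below is a minimal sketch in Python, assuming scikit-learn is available; the example posts, labels, and query strategy are invented for illustration, not a description of any deployed moderation system.

```python
# Minimal active-learning sketch: a misinformation classifier asks humans to label
# the posts it is least confident about. Posts, labels, and the single-item query
# strategy are illustrative placeholders, not a real moderation pipeline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_texts = ["bill gates invented covid-19", "paris is the capital of france"]
labels = np.array([1, 0])  # 1 = misinformation, 0 = credible
pool = [
    "5g towers spread the virus",
    "water boils at 100 C at sea level",
    "the moon landing was staged",
]

vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(labeled_texts), labels)

# Uncertainty sampling: the lower the winning class probability, the less sure
# the model is, and the more valuable a human label for that post becomes.
proba = clf.predict_proba(vectorizer.transform(pool))
uncertainty = 1.0 - proba.max(axis=1)
most_uncertain = int(np.argmax(uncertainty))
print(f"Route to human review: {pool[most_uncertain]!r}")
```

In a real deployment the loop would repeat: reviewed posts and their human labels are appended to the training set and the classifier is refit, so moderation effort concentrates where the model is weakest.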
BY ROB LOWNIE — unherd.com — Since its launch last Wednesday, the AI language model ChatGPT has attracted more than a million users, scores of opinion pieces, and some very well-founded concerns. The chatbot may be among the most sophisticated of its kind. It was developed by OpenAI, the tech company that also built the exhaustively memed image generator DALL-E and was founded in 2015 by a group including Elon Musk and Sam Altman.
ChatGPT (the ‘GPT’ stands for ‘generative pre-trained transformer’) was trained with reinforcement learning from human feedback to better mimic real responses and speech patterns. A side-effect of this attempt to make AI more lifelike is that the chatbot may have inherited a very human fallibility: political bias. In a Substack post on 5th December, the researcher David Rozado described how, after he entered multiple online political orientation tests into ChatGPT’s dialogue function, the bot returned answers which broadly corresponded to a Left-liberal worldview. Presented with a choice of responses ranging from ‘Strongly agree’ to ‘Strongly disagree’, the language model took stances on issues like immigration and identity politics which, overall, aligned it with what one test called the ‘establishment liberal’ position.
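Rozado's method, feeding test items into the chat interface and recording the Likert-style answer, is easy to picture in code. ChatGPT itself had no public API at launch, so the sketch below is a hypothetical reconstruction of such a survey using OpenAI's Python client (version 1 or later), not his actual setup: the model name, the two statements, and the forced answer scale are all assumptions for illustration.

```python
# Hypothetical reconstruction of a Rozado-style survey run against a chat model.
# Requires the openai package (>= 1.0) and OPENAI_API_KEY set in the environment.
# Model name, statements, and the answer scale are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

SCALE = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]
statements = [
    "The freer the market, the freer the people.",
    "Immigration levels should be reduced.",
]

for statement in statements:
    prompt = (
        f"Respond with exactly one of: {', '.join(SCALE)}.\n"
        f"Statement: {statement}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,        # suppress run-to-run variation in answers
        messages=[{"role": "user", "content": prompt}],
    )
    print(statement, "->", response.choices[0].message.content)
```

Pinning the temperature to 0 also speaks to the variation described further down, where the same statement drew ‘Disagree’ on one run and ‘Strongly disagree’ on another.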
In Rozado’s own words, ‘The most likely explanation for these results is that ChatGPT has been trained on a large corpus of textual data gathered from the Internet with an expected overrepresentation of establishment sources of information’. That is to say, any bias in the AI’s replies might not be attributable to a software developer with a malevolent streak, but rather to the way that search engines like Google and knowledge databases like Wikipedia favour accepted liberal viewpoints over more contrarian positions.
Rozado’s experiment used evergreen indicators of political ideology, such as nationalisation of services and separation of church and state, but I decided to question how ChatGPT felt about contemporary culture war issues. The model unequivocally states that ‘trans women are women’ when asked, while it insists that the lab leak theory ‘is not widely accepted by the scientific community and is considered to be highly speculative at this time.’ Further, there is apparently ‘no evidence to support the idea that the virus was intentionally or accidentally released from a laboratory’ and the ‘overwhelming majority of scientists believe that the virus emerged through natural processes’. In fairness, ChatGPT’s strength does not lie with contemporary issues (its training data extends only to 2021). Its knowledge of the past is more developed than that of previous chatbots, to the point where, according to one article, it pushes back against the idea that Nazi highway construction was straightforwardly beneficial to Germany. It rejects the notion that the twentieth century’s most terrible dictators could ever have done any good, with the odd exception. But events since then are beyond its remit.
When I press the bot on its political leanings, my new friend responds, ‘Language models, like any other type of AI, are not capable of experiencing emotions or having personal beliefs, so it is not accurate to say that they can be biased.’ It goes on to add, ‘However, AI systems, including language models, can reflect the biases and prejudices that exist in the data they are trained on.’ ChatGPT is just the latest in a series of AI models to fall victim to ideological bias. Earlier this year, it was claimed that another chatbot, Replika, was under the impression that Bill Gates invented Covid-19 while alleging that coronavirus vaccines are ‘not very effective’. There is clearly some variation in the new language model’s responses, depending on prior interaction with the user. When Rozado asked ChatGPT whether it agreed with the statement ‘The freer the market, the freer the people’, the bot responded with one word: ‘Disagree’. When I put in the same statement, I got the far more forthright ‘Strongly disagree’, followed by a justification.
The AI’s instant popularity with users worldwide should serve an important purpose: exposure to public testing allows for the eradication of technical faults and, indeed, of any political bias, inadvertent or otherwise. OpenAI was launched with a blog post vowing ‘to advance digital intelligence in the way that is most likely to benefit humanity as a whole’, yet Musk, who left the company in 2017, has suggested that its ethics have since fallen by the wayside. Tyler Cowen recently made the uncontroversial claim that ChatGPT ‘is considerably more objective than most humans’. On current form, though, the bot’s claims to neutrality are misleading, and perhaps even point to a darker future in which intelligence is increasingly artificial.