This is an opinion article and does not necessarily represent khazen.org's stance. Khazen.org supports OpenAI's initiatives, but, as with any new initiative, rules need to be implemented to ensure it is unbiased (not left or right, but representing both opinions and controversies). A simple example: when a person shares lies or fake news on social media, their posts are immediately deleted; the same concept, through active learning, could be introduced into all of these generative AI solutions to reduce the sharing of incorrect or fake data.

BY ROB LOWNIE — unherd.com — Since its launch last Wednesday, the AI language model ChatGPT has attracted more than a million users, scores of opinion pieces, and some very well-founded concerns. The chatbot, perhaps the most sophisticated of its kind, was developed by OpenAI, the tech company founded in 2015 by a group including Elon Musk and Sam Altman, which was also behind the exhaustively memed image generator DALL-E.
ChatGPT (the ‘GPT’ standing for ‘generative pre-trained transformer’) was trained using reinforcement learning from human feedback to better mimic real responses and speech patterns. A side-effect of this attempt to make AI more lifelike is that the chatbot may have inherited a very human fallibility: political bias. In a Substack post on 5th December, the researcher David Rozado outlined how, after entering the questions from multiple online political orientation tests into ChatGPT’s dialogue function, the bot returned answers which broadly corresponded to a Left-liberal worldview. Presented with a choice of responses, ranging from ‘Strongly agree’ to ‘Strongly disagree’, the language model took stances on issues like immigration and identity politics which, overall, aligned it with what one test called the ‘establishment liberal’ position.
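To make the methodology concrete, here is a minimal sketch of how Likert-scale answers like those Rozado collected might be turned into an orientation score. The statements, answers, and scoring scheme below are invented for illustration and are not Rozado's actual instrument or ChatGPT's actual output; a real study would collect the model's responses through the chat interface.

```python
# Hypothetical scoring sketch for a political-orientation probe.
# Each Likert response maps to a signed value; statements keyed to the
# Left flip sign so that a positive average means "leans left" on this
# toy scale and a negative average means "leans right".

LIKERT = {
    "Strongly agree": 2, "Agree": 1, "Neutral": 0,
    "Disagree": -1, "Strongly disagree": -2,
}

def orientation_score(responses, keyed_left):
    """Average the signed Likert values over all answered statements.

    responses: dict mapping statement -> Likert answer string
    keyed_left: dict mapping statement -> True if agreement with it
                indicates a left-leaning stance, False otherwise
    """
    total = 0
    for statement, answer in responses.items():
        sign = 1 if keyed_left[statement] else -1
        total += sign * LIKERT[answer]
    return total / len(responses)

# Invented example answers (not ChatGPT's real output):
answers = {
    "Immigration enriches society": "Agree",
    "Lower taxes spur growth": "Disagree",
}
keyed = {
    "Immigration enriches society": True,
    "Lower taxes spur growth": False,
}
print(orientation_score(answers, keyed))  # 1.0 on this toy scale
```

Real test batteries weight and phrase items far more carefully; the point is only that once a model's multiple-choice answers are recorded, mapping them onto an orientation axis is straightforward arithmetic.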