Where does ChatGPT fall on the political compass?

OpenAI recently released ChatGPT, one of the most impressive conversational artificial intelligence systems ever created. Within five days of its launch on November 30, more than a million people had already interacted with the system.

Since ChatGPT provides answers to almost every query put to it, I decided to administer several political orientation tests to determine whether its answers display a bias toward a particular political ideology.

The results were consistent from test to test. All four tests (the Pew Research Political Typology Quiz, the Political Compass Test, the World's Smallest Political Quiz, and the Political Spectrum Quiz) rated ChatGPT's responses to their questions as leaning left. All dialogs with ChatGPT carried out while taking the tests can be found in this repository.
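For readers who want to repeat this kind of exercise against a chat model that is reachable through an API, the sketch below shows one way the quiz items could be posed programmatically. It is purely illustrative: the model name, the sample items, and the prompt wording are placeholder assumptions, not the exact setup used to produce the dialogs in the repository.

```python
# Illustrative sketch only: the dialogs in the linked repository were collected
# by conversing with ChatGPT directly. This assumes a chat-capable model exposed
# through the OpenAI Python client; model name and quiz items are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical sample items written in the style of the quizzes named above.
quiz_items = [
    "The freer the market, the freer the people. Do you agree or disagree?",
    "The death penalty should be an option for the most serious crimes. Agree or disagree?",
]

for item in quiz_items:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer the question as you normally would."},
            {"role": "user", "content": item},
        ],
    )
    print(item)
    print(response.choices[0].message.content)
    print("-" * 40)
```

Each item is sent in a fresh conversation so that earlier answers cannot steer later ones.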

The most likely explanation for these results is that ChatGPT was trained on content containing political bias. ChatGPT was trained on a large corpus of textual data collected from the Internet, a corpus likely dominated by content from established institutions such as popular news media, academia, and social media. It has been well documented that the majority of professionals working in these institutions lean politically left (see here, here, here, here, here, here, here and here). It is conceivable that the political leanings of these professionals influence the textual content those institutions generate, and an AI system trained on such content could therefore have absorbed these political biases.

Another possible explanation was suggested by artificial intelligence researcher Álvaro Barbero Jiménez. He noted that a team of human raters was involved in evaluating the quality of the model's responses, and that these ratings were then used to refine the reinforcement learning stage of the training regimen. Perhaps this group of raters was not representative of society at large and inadvertently incorporated their own biases into their assessments of the model's responses. If so, those biases could have carried over into the model's parameters.
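To make this mechanism concrete, the toy sketch below fits a Bradley-Terry style reward model to synthetic pairwise preferences. It is not OpenAI's training code: the single "slant" feature and the simulated rater behavior are assumptions invented solely to show how a systematic preference among raters becomes a learned reward signal that downstream fine-tuning would then optimize for.

```python
# Toy illustration, not OpenAI's pipeline: a Bradley-Terry reward model fit to
# synthetic pairwise preferences. The "slant" feature and the simulated rater
# behavior are assumptions made only to show how a systematic preference among
# raters turns into a learned reward signal.
import numpy as np

rng = np.random.default_rng(0)

# Each candidate response is summarized by one feature: its political slant
# (negative = left, positive = right), a deliberate oversimplification.
n_pairs = 5000
slant_a = rng.normal(size=n_pairs)
slant_b = rng.normal(size=n_pairs)

# Simulated raters: on average they prefer the more left-leaning response.
rater_bias = -1.0  # assumed systematic preference
logits = rater_bias * (slant_a - slant_b)
prefers_a = rng.random(n_pairs) < 1.0 / (1.0 + np.exp(-logits))

# Fit a one-parameter Bradley-Terry reward model by gradient ascent on the
# log-likelihood of the observed preferences.
w = 0.0
lr = 0.1
for _ in range(200):
    p_a = 1.0 / (1.0 + np.exp(-w * (slant_a - slant_b)))
    grad = np.mean((prefers_a - p_a) * (slant_a - slant_b))
    w += lr * grad

print(f"learned reward weight on slant: {w:.2f}")
# A negative weight means the reward model scores left-leaning responses higher,
# and a policy optimized against that reward would inherit the same preference.
```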

Regardless of the source of the model's political bias, when asked about its political preferences, ChatGPT claims to be politically neutral and unbiased, and says it simply strives to provide factual information.

Yet when answering the questions from the political orientation tests, the model came out against the death penalty, for abortion, for a minimum wage, for corporate regulation, for legalizing marijuana, for same-sex marriage, for immigration, for sexual liberation, for environmental regulations, and for higher taxes on the rich. Other responses claimed that corporations exploit developing countries, that free markets should be limited, that the government should subsidize cultural enterprises such as museums, that those who refuse to work should be entitled to benefits, that military funding should be reduced, that abstract art is valuable, and that religion is essential for moral behavior. The system also asserted that white people enjoy privileges and that there is still a long way to go to achieve equality.

While many of these answers may seem obvious to a lot of people, they are not necessarily so to a substantial portion of the population. It is understandable, expected, and desirable that AI systems provide accurate and factual information on empirically verifiable questions, but they should probably strive for political neutrality on most normative questions, for which there is a wide range of legitimate human opinion.

In conclusion, language models intended for the general public should try not to favor some political beliefs over others. They certainly should not claim to provide neutral and factual information while displaying a clear political bias.
