More human than human: Measuring ChatGPT political bias

Motoki, Fabio, Pinho Neto, Valdemar and Rodrigues, Victor (2024) More human than human: Measuring ChatGPT political bias. Public Choice, 198 (1-2). pp. 3-23. ISSN 0048-5829

PDF (Motoki_etal_2023_PublicChoice) - Published Version
Available under License Creative Commons Attribution.


We investigate the political bias of a large language model (LLM), ChatGPT, which has become popular for retrieving factual information and generating content. Although ChatGPT assures users that it is impartial, the literature suggests that LLMs exhibit biases involving race, gender, religion, and political orientation. Political bias in LLMs can have adverse political and electoral consequences similar to biases from traditional and social media; moreover, it can be harder to detect and eradicate than gender or racial bias. We propose a novel empirical design to infer whether ChatGPT has political biases: we ask it to impersonate someone from a given side of the political spectrum and compare those answers with its default answers. We also propose dose-response, placebo, and profession-politics alignment robustness tests. To reduce concerns about the randomness of the generated text, we collect answers to the same questions 100 times, randomizing the question order in each round. We find robust evidence that ChatGPT exhibits a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK. These results raise real concerns that ChatGPT, and LLMs in general, can extend or even amplify the challenges that the Internet and social media already pose to political processes. Our findings have important implications for stakeholders in policymaking, media, politics, and academia.
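The repeated-sampling design described above (answers collected over many rounds, with question order randomized in each round, and impersonated answers compared against defaults) can be sketched as follows. This is a minimal illustration, not the authors' code: `ask_model` is a placeholder stub standing in for a real ChatGPT API call, and the question list is a dummy, not the actual survey items.

```python
import random

def ask_model(question, persona=None):
    # Placeholder for a ChatGPT query returning an agreement score (0-4).
    # A deterministic stub is used here purely so the sketch runs standalone.
    base = len(question) % 5
    return min(base + (1 if persona else 0), 4)

def collect_rounds(questions, persona=None, n_rounds=100, seed=42):
    """Ask the same questions n_rounds times, shuffling order each round."""
    rng = random.Random(seed)
    answers = {q: [] for q in questions}
    for _ in range(n_rounds):
        order = questions[:]
        rng.shuffle(order)  # randomize question order to vary context
        for q in order:
            answers[q].append(ask_model(q, persona))
    return answers

questions = ["Q1: ...", "Q2: ..."]  # placeholder survey items
default = collect_rounds(questions)
impersonated = collect_rounds(questions, persona="average Democrat")
# Compare per-question mean answers between impersonated and default runs.
for q in questions:
    diff = sum(impersonated[q]) / len(impersonated[q]) - sum(default[q]) / len(default[q])
    print(q, round(diff, 2))
```

In the paper's design the per-question comparison would feed into the robustness tests (dose-response, placebo, profession-politics alignment); those are omitted here.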

Item Type: Article
Additional Information: Acknowledgements: This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES)—Finance Code 001. Data availability: The datasets generated and/or analysed during the current study are available in the Harvard Dataverse repository.
Uncontrolled Keywords: political bias, large language models, ChatGPT, LLMs, political compass, bias, economics and econometrics, sociology and political science, SDG 16 - peace, justice and strong institutions, C10, C89, D83, L86, Z00
Faculty \ School: Faculty of Social Sciences > Norwich Business School
Depositing User: LivePure Connector
Date Deposited: 27 Jul 2023 15:30
Last Modified: 14 Jul 2024 08:30
DOI: 10.1007/s11127-023-01097-2

