Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence

Motoki, Fabio Yoshio Suguri (ORCID: https://orcid.org/0000-0001-7464-3330), Pinho Neto, Valdemar and Rodrigues, Victor (2024) Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence.

Full text not available from this repository.

Abstract

Our analysis reveals a concerning misalignment of values between ChatGPT and the average American. We also show that ChatGPT displays political leanings when generating text and images, but the degree and direction of skew depend on the theme. Notably, ChatGPT repeatedly refused to generate content representing certain mainstream perspectives, citing concerns over misinformation and bias. As generative AI systems like ChatGPT become ubiquitous, such misalignment with societal norms poses risks of distorting public discourse. Without proper safeguards, these systems threaten to exacerbate societal divides and depart from principles that underpin free societies.

Item Type: Article
Uncontrolled Keywords: generative AI, societal values, large language models, multimodal, AI governance, economics, econometrics and finance (miscellaneous), political science and international relations
Faculty \ School: Faculty of Social Sciences > Norwich Business School
UEA Research Groups: Faculty of Social Sciences > Research Groups > Accounting & Quantitative Methods
Related URLs:
Depositing User: LivePure Connector
Date Deposited: 05 Apr 2024 10:30
Last Modified: 11 Jun 2024 16:30
URI: https://ueaeprints.uea.ac.uk/id/eprint/94837
DOI:
