Anti-Religious “Effective Altruism” Behind AI’s Persistent And Pervasive Left-Wing Bias

Wynton Hall, author of bestselling new book, Code Red, on why large language models continue to demonize conservatives

The companies building the world’s leading artificial intelligence systems say that their products are politically neutral. Google states that its AI will “seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.” Anthropic, the maker of Claude, claims a 94% “even-handedness” rating for its latest model and says Claude’s goal is to “treat opposing political viewpoints equally.” And OpenAI, the creator of ChatGPT, declared that “ChatGPT shouldn’t have political bias in any direction,” and says its newest models have reduced political bias by 30%.

But their track record tells a different story. In February 2024, Google’s Gemini AI generated images of America’s Founding Fathers as black men and women, depicted racially diverse Nazi soldiers, and created images of a female Pope and Asian Vikings. When users asked Gemini to create images of white people, it refused. The system had been programmed to inject diversity language into prompts, so that a request for a picture of a German soldier in 1943 would produce, as NPR reported, images of nonwhite people in Nazi uniforms.
