According to BlockBeats, on May 4, AI community user X Freeze questioned whether mainstream AI models, including ChatGPT, Claude, and Gemini, exhibit systematic bias by aligning less with conservative positions on issues such as gender, immigration, and crime. The user suggested that as AI capabilities advance, the value-alignment process may be shaped by training data and design mechanisms, producing consistent tendencies on certain public issues. The observation sparked community discussion about training-data bias and model design orientation. Major AI developers maintain that their models aim to improve information accuracy and safety through diverse data and evaluation mechanisms intended to reduce bias.