In a controversy that has sparked heated debate, Google’s AI chatbot, Gemini, has been criticized for generating historically inaccurate and racially skewed images in response to user prompts. The tech giant was forced to admit that its AI chatbot was “missing the mark” after users pointed out the results it returned for historical image prompts.
The chatbot returned images of black people in response to queries about historical figures and reportedly excluded white people from historical images altogether.
The issue came to light when users began sharing screenshots of the AI-generated images on social media platforms. One user shared an image of a black Viking, while another shared an image of a black Nazi-era soldier. These images not only misrepresented historical facts but also appeared to promote racial stereotypes.
IS GOOGLE A.I RACIST?
X has been abuzz over Google's A.I.
"It's embarrassingly hard to get Google Gemini to acknowledge that white people exist," computer scientist Debarghya Das wrote.
Users said the firm's Gemini bot supplied images depicting a variety of genders and… pic.twitter.com/s6UobFXnjI
— Rob Vendetti (@rob_vendetti) February 22, 2024
“We’re already working to address recent issues with Gemini’s image generation feature,” Google said in a statement posted to X. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”
“We’re working to improve these kinds of depictions immediately,” a Google official told the New York Post. “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
Gemini explained why it was returning those results. Completely intentional. pic.twitter.com/7xKXp1Q7p6
— Jon Drake (@JonnyJonDrake) February 22, 2024
Google has since paused Gemini’s ability to generate images of people and has promised to improve the model’s training so that it produces accurate and non-offensive content in the future.
Google pauses ‘absurdly woke’ Gemini AI chatbot’s image tool after backlash over historically inaccurate pictures https://t.co/GBCsd1EFPr pic.twitter.com/JWwDNKBFD1
— New York Post (@nypost) February 22, 2024
This incident has sparked a broader conversation about the role of AI in perpetuating racial stereotypes and the need for more responsible and inclusive AI development. As AI technology continues to advance, it is crucial that developers and companies prioritize ethical considerations and strive to create AI systems that are fair, accurate, and respectful of all users.
