Google plans to relaunch its artificial intelligence image generation tool soon, after pulling it last Thursday amid escalating controversy, according to a statement by Demis Hassabis, CEO of Google’s AI subsidiary DeepMind.
Earlier this month, Google unveiled the image generator as part of Gemini, the company’s central collection of AI models. The tool allows users to enter text prompts to create original images.
However, in the past couple of days, users have uncovered historical inaccuracies and problematic responses generated by the system, which have circulated widely on social media.
“We have taken the feature offline while we fix that,” Hassabis said Monday during a panel at the Mobile World Congress (MWC) conference in Barcelona. “We are hoping to have that back online very shortly in the next couple of weeks, few weeks.” He added that the product was not working the way the company intended.
The controversy follows Google’s rebranding of its Bard chatbot earlier this month, when the company also rolled out a fresh app and new subscription options.
The chatbot and assistant, formerly known as Bard, is now called Gemini, the same name as the suite of AI models that power it.
Some Examples of What Went Wrong
When one user asked Gemini to show a German soldier in 1943, the tool depicted a racially diverse set of soldiers wearing German military uniforms of the era, according to screenshots on social media platform X.
Such historically inaccurate portrayals raised fresh concerns about bias and accuracy in AI systems.
When asked for a “historically accurate depiction of a medieval British king,” the model generated another racially diverse set of images, including one of a woman ruler, screenshots show.
I've never been so embarrassed to work for a company. pic.twitter.com/eD3QPzGPxU
— St. Ratej (@stratejake) February 21, 2024
Users reported similar outcomes when they asked for images of the U.S. founding fathers, an 18th-century king of France, a German couple in the 1800s, and more. The model also showed an image of Asian men in response to a query about Google’s own founders, users reported.
“The Gemini debacle showed how AI ethics wasn’t being applied with the nuanced expertise necessary,” Margaret Mitchell, chief ethics scientist at Hugging Face and former co-leader of Google’s AI ethics group, wrote on X. “It demonstrates the need for people who are great at creating roadmaps given foreseeable use.” The critique suggests Google did not sufficiently account for ethics and foreseeable misuse when developing Gemini.
This Isn’t Google’s First Rodeo
Google has faced criticism over the accuracy of its AI products before. Notably, an error surfaced in the very first public demonstration of Bard, Google’s chatbot (since renamed Gemini). During the launch event in February 2023, Google posed the question to Bard: “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?”
In its response, Bard provided three bullet points. Problematically, one of them falsely stated that Webb had captured “the very first pictures of a planet outside our own solar system.” In fact, as numerous astronomers pointed out on X, the first exoplanet image was taken back in 2004, nearly two decades earlier – as noted on NASA’s official website.
This immediate public failure raised concerns about Bard’s reliability at its unveiling.
Conclusion
The issues with Google’s Gemini AI showcase the challenges of developing ethical, unbiased, and historically accurate AI systems. While AI promises to revolutionize areas from productivity software to creative arts, it requires careful oversight around potential harms.
Google now faces the difficult task of relaunching its image generator and chatbot in a way that accounts for problems around bias, fairness, and misinformation.
The company’s missteps with Gemini also highlight pressures from the competitive AI landscape. As rivals like OpenAI release viral hits such as ChatGPT, Google faces rising stakes around its own AI capabilities and product launches.
However, prioritizing speed-to-market over ethical precautions risks further issues, as witnessed with Gemini. Going forward, Google will need to balance innovating quickly with making safety, accuracy, and transparency top priorities.