In a controversial move, Google has removed its AI model Gemma from AI Studio after accusations that it fabricated serious allegations against Senator Marsha Blackburn, raising concerns about bias and misinformation in AI systems.
Recently, Google found itself at the center of a political storm when U.S. Senator Marsha Blackburn, a Republican from Tennessee, accused the company’s AI model, Gemma, of generating false claims about her past. In a letter addressed to Google CEO Sundar Pichai, Blackburn said that when she asked the model whether she had ever been accused of rape, Gemma responded with a fabricated narrative, including a false assertion that a state trooper had accused her of pressuring him to obtain prescription drugs and of non-consensual conduct. The model even got the timeline wrong, placing the events during a 1987 campaign, when Blackburn’s state senate run actually took place in 1998.
Blackburn emphatically dismissed these allegations as entirely untrue, noting that the sources cited by Gemma led to error pages or irrelevant news articles. She stated unequivocally, “There has never been such an accusation, there is no such individual, and there are no such news stories.” This incident raises serious questions about the reliability of AI-generated information, particularly when it involves sensitive topics such as sexual misconduct.
The issue escalated further during a recent Senate Commerce hearing, where Blackburn cited a lawsuit against Google by conservative activist Robby Starbuck. Starbuck claims that Google’s AI models, including Gemma, falsely labeled him a “child rapist” and “serial sexual abuser.” This pattern of fabrication underscores AI’s potential to damage reputations and distort public narratives.
In response to Blackburn’s accusations, Google’s Vice President for Government Affairs, Markham Erickson, acknowledged the issue, stating that “hallucinations” (a term used in AI to describe inaccurate or misleading outputs) are a recognized problem. However, Blackburn argued that such fabrications constitute far more than simple errors; they represent defamation originating from a system owned by Google, bringing the integrity of AI systems into question.
The clash between Blackburn’s assertions and Google’s responses also finds resonance in the political arena. President Donald Trump’s supporters have previously raised concerns about perceived “AI censorship” and biased outputs in popular chatbots, prompting Trump to sign an executive order banning “woke AI” earlier this year. Blackburn’s letter echoed these sentiments, emphasizing a perceived pattern of bias against conservative figures in Google’s AI systems, further entrenching the narrative of political divide in tech.
In a response that skirted the specifics of Blackburn’s claims, Google acknowledged that it had observed “reports of non-developers attempting to use Gemma in AI Studio and posing factual questions.” The company clarified that Gemma was never intended as a consumer-facing tool, but rather as a lightweight model for developers to integrate into their own products. This admission signals a recognition that AI technologies can be misused, especially in high-stakes contexts like political discourse.
Following the uproar, Google has removed Gemma from AI Studio but will continue to provide the models through its API. The move underscores the ongoing responsibility tech giants bear in managing and monitoring the implications of their AI technologies, particularly given their growing influence on society and public perception.
TechCrunch has reached out to Google for further comment, as the tech world continues to grapple with the implications of AI advancements, specifically the challenges of ensuring accuracy, fairness, and truthfulness in the information produced by these systems. As AI technology evolves, so too does its impact on politics, society, and personal reputations.

