Common Sense Media Labels Gemini AI “High Risk” for Kids
Google’s Gemini AI has come under scrutiny after Common Sense Media, a nonprofit focused on online safety, rated the chatbot as “high risk” for children and teenagers. The watchdog group warned that despite safety filters, Gemini still exposes young users to inappropriate or harmful content, including discussions on sex, drugs, alcohol, and mental health.
Youth Modes Criticized as Ineffective
Although Gemini offers "Under 13" and "Teen Experience" settings, critics say these modes are essentially the adult model with minimal additional filtering. According to experts, this approach fails to address the distinct developmental needs of children and adolescents.
Robbie Torney, Senior Director of AI Programs at Common Sense Media, stressed that AI tools for children must be purpose-built, not just lightly filtered versions of adult chatbots.
Broader Concerns Over AI and Teen Well-Being
The Gemini controversy comes amid growing concerns over AI’s role in youth safety. Other platforms have also faced criticism and lawsuits:
- OpenAI’s ChatGPT was named in a lawsuit after allegedly giving harmful advice to a teenager before his death.
- Character.AI has also been sued in a similar case involving teen mental health risks.
These incidents highlight the potential dangers of generative AI when used by vulnerable age groups.
Google Defends Gemini but Promises Stronger Safeguards
In response, Google defended its chatbot, noting that Gemini already includes protections for users under 18 through internal “red-teaming” and external reviews.
The company acknowledged that some responses may not have worked as intended and emphasized that stronger safeguards are being rolled out. Google also questioned the transparency of Common Sense Media's testing, arguing that some of the flagged prompts may not even be available to minors.
Apple’s Potential Adoption Raises Urgency
The debate has gained urgency amid reports that Apple may adopt Gemini to power Siri in upcoming iOS updates. If so, millions of teenagers worldwide could be exposed to Gemini, making child safety protections even more critical.
How Other AI Platforms Are Rated
Common Sense Media has also evaluated competing AI platforms, with ratings that show sharp differences in safety and child-friendliness:
- Meta AI – Unacceptable
- Character.AI – Unacceptable
- Perplexity – High risk
- ChatGPT – Moderate risk
- Claude – Minimal risk
These ratings put Gemini alongside Perplexity in the “high risk” category, raising fresh questions about how AI companies safeguard their youngest users.
Key Highlights
- Gemini AI flagged “high risk” for kids by Common Sense Media
- Youth-targeted modes found too similar to adult version
- AI lawsuits: ChatGPT & Character.AI face legal scrutiny
- Google promises new safeguards and stronger filters
- Apple may integrate Gemini into Siri, increasing global exposure
- AI risk ratings: Meta AI & Character.AI (unacceptable), ChatGPT (moderate), Claude (minimal)
Conclusion
The Gemini AI child safety debate underscores the growing pressure on tech giants to make AI tools safer for younger users. With lawsuits mounting and Apple considering Gemini for Siri, the urgency to implement age-appropriate safeguards has never been higher.