The discontinuation of Google’s AI-powered peer health advice feature went largely unnoticed until journalists began asking questions — a pattern that reflects how the company manages uncomfortable product decisions. “What People Suggest” organized health tips from online strangers using AI and displayed them in Google Search results. Three sources confirmed the removal before Google was forced to acknowledge it publicly.
The feature debuted to genuine enthusiasm at Google’s health event in New York. Then-chief health officer Karen DeSalvo described it as a meaningful innovation that would help users find health perspectives from real people navigating the same conditions. The AI curated community forum content into themes that people searching for health information could browse easily.
When asked about the removal, Google denied that safety concerns played a role, attributing the change instead to an effort to simplify Search. However, the blog post the company cited as its public disclosure of the decision made no mention of “What People Suggest.” One knowledgeable insider confirmed: “It’s dead.”
The context is significant. An investigation this year found that Google’s AI Overviews, which reach some two billion users a month, were serving false health information. Though Google removed some medical AI Overviews following the investigation, health advocates have continued to push for more comprehensive reform.
Google’s upcoming health event will bring new declarations about AI’s potential to transform global healthcare. But the story of “What People Suggest” — removed quietly, explained vaguely, and barely acknowledged — demonstrates that transformative potential must be matched by an equally serious commitment to transparency and accountability. The public deserves no less.
