Google’s Crowdsourced Medical Advice Tool Scrapped Amid Growing AI Health Concerns
A Google search feature designed to surface amateur health advice from internet strangers has been discontinued, the company confirmed. The tool, branded “What People Suggest,” organized crowdsourced tips from online discussions using artificial intelligence to present them to users searching for health information. Its removal was confirmed by a Google spokesperson and corroborated by three individuals familiar with the internal decision.

Launched just over a year ago at a high-profile health event in New York, the feature was celebrated by Google executives as a breakthrough in democratizing health information. Karen DeSalvo, then Google's chief health officer, wrote enthusiastically about how the tool would help people with conditions like arthritis find advice from others with shared experiences. The AI-organized threads were meant to complement expert medical sources already available through standard search results.

Google’s explanation for removing the feature has drawn skepticism. The company claims the decision was part of routine search page simplification and that safety played no role. But the blog post cited as public notice of the change contains no reference to the discontinued feature, leaving critics unconvinced by the company’s account of events.

The context surrounding this removal is significant. Earlier this year, an investigation revealed that Google’s AI Overviews were surfacing dangerously inaccurate health information to billions of users. While Google removed AI Overviews for some medical searches in response, it did not fully retreat from AI-generated health content, and questions about its reliability continue to grow.

Looking ahead, Google is scheduled to hold another edition of its “The Check Up” health event, where officials plan to highlight new partnerships and AI innovations in healthcare. The company’s credibility in this space, however, may hinge on its ability to demonstrate that it takes health misinformation seriously rather than quietly eliminating problematic features without adequate explanation.
