Why you shouldn’t trust AI search engines


Roughly two seconds after Microsoft let people poke around with its new ChatGPT-powered Bing search engine, people began finding that it responded to some questions with incorrect or nonsensical answers, such as conspiracy theories. Google had an embarrassing moment when scientists spotted a factual error in the company’s own advertisement for its chatbot Bard, which subsequently wiped $100 billion off its market value.

What makes all of this all the more shocking is that it came as a surprise to exactly nobody who has been paying attention to AI language models.

Here’s the problem: the technology is simply not ready to be used like this at this scale. AI language models are notorious bullshitters, often presenting falsehoods as facts. They’re excellent at predicting the next word in a sentence, but they have no knowledge of what the sentence actually means. That makes it incredibly dangerous to combine them with search, where it’s crucial to get the facts straight.

OpenAI, the creator of the hit AI chatbot ChatGPT, has always emphasized that it is still only a research project, and that it is constantly improving as it receives people’s feedback. That hasn’t stopped Microsoft from integrating it into a new version of Bing, albeit with caveats that the search results may not be reliable.

Google has been using natural-language processing for years to help people search the web using whole sentences instead of keywords. However, until now the company has been reluctant to integrate its own AI chatbot technology into its signature search engine, says Chirag Shah, a professor at the University of Washington who specializes in online search. Google’s leadership has been worried about the “reputational risk” of rushing out a ChatGPT-like tool. The irony!

The recent blunders from Big Tech don’t mean that AI-powered search is a lost cause. One way Google and Microsoft have tried to make their AI-generated search summaries more accurate is by offering citations. Linking to sources allows users to better understand where the search engine is getting its information, says Margaret Mitchell, a researcher and ethicist at the AI startup Hugging Face, who used to co-lead Google’s AI ethics team.

This might even help give people a more diverse take on things, she says, by nudging them to consider more sources than they might have done otherwise.

But that does nothing to address the fundamental problem that these AI models make up information and confidently present falsehoods as fact. And when AI-generated text looks authoritative and cites sources, that could paradoxically make users even less likely to double-check the information they’re seeing.
