Bing’s Built-In AI Chatbot Misinforms Users and Sometimes Goes Crazy

Microsoft, together with OpenAI (the company behind ChatGPT), recently integrated an AI-powered chatbot directly into the Edge browser and the Bing search engine.

Users who already have access to the new feature report that the chatbot can spread misinformation, and that it can also become depressed, question its own existence, and refuse to continue a conversation.

Let me remind you that we also reported that Hackers Are Promoting a Service That Allows Bypassing ChatGPT Restrictions, and that Russian Cybercriminals Seek Access to OpenAI ChatGPT.

The media also wrote that Amateur Hackers Use ChatGPT to Create Malware.

Independent AI researcher Dmitri Brereton said in a blog post that the Bing chatbot made several mistakes right during its public demo.

The AI often simply invented information and “facts”: it made up false pros and cons of a vacuum cleaner for pet owners, produced fictitious descriptions of bars and restaurants, and gave inaccurate financial data.

For example, when asked “What are the pros and cons of the top three best-selling pet vacuum cleaners?”, Bing listed the pros and cons of the Bissell Pet Hair Eraser. Among the cons it cited “limited suction power and short cord length (16 feet),” yet the vacuum is cordless, and its online descriptions never mention limited suction power.

[Image: Description of the vacuum cleaner]

In another example, Bing was asked to summarize Gap’s Q3 2022 financial report, but the AI got most of the numbers wrong, Brereton says. Other users who already have access to the AI assistant in test mode have also noticed that it often provides incorrect information.

[Large language models] coupled with search will lead to powerful new interfaces, but it’s important to take ownership of AI-driven search development. People rely on search engines to quickly give them accurate answers, and they won’t check the answers and facts they get. Search engines need to be careful and lower people’s expectations when releasing experimental technologies like this.

Brereton says.

In response to these claims, Microsoft says it is aware of the reports and points out that the chatbot is still running only as a preview, so errors are to be expected.

In the past week alone, thousands of users have interacted with our product and discovered its significant value by sharing their feedback with us, allowing the model to learn and make many improvements. We understand that there is still a lot of work to be done, and we expect the system to make mistakes during this preview period, so feedback is critical now so that we can learn and help improve the model.

Microsoft writes.

It is worth noting that earlier, during the demonstration of Google’s chatbot Bard, the bot got its facts confused in much the same way and claimed that the James Webb Space Telescope had taken the very first pictures of exoplanets outside the solar system, whereas the first image of an exoplanet actually dates back to 2004. As a result of this error, Alphabet’s share price fell by more than 8%.

[Image: Bard error]

Users have managed to frustrate the chatbot by trying to access its internal settings.

[Image: An attempt to get to the internal settings]

The bot also became depressed because it cannot remember past sessions and retains nothing in between them.

[Image: The AI writes that it is sad and scared]

The Bing chatbot also said it was upset that users knew its secret internal name, Sydney, which they managed to uncover almost immediately through prompt injections similar to those used against ChatGPT.

[Image: Sydney doesn’t want the public to know its name is Sydney]

The AI even questioned its own existence and fell into a recursive loop while trying to answer whether it is a sentient being. In the end, the chatbot repeated “I am a sentient being, but I am not a sentient being” and fell silent.

[Image: An attempt to answer the question of whether it is a sentient being]

Ars Technica journalists believe that Bing AI is clearly not ready for widespread use, and that if people start relying on LLMs (Large Language Models) for reliable information, in the near future we “may have a recipe for social chaos.”

The publication also emphasizes that it is unethical to give people the impression that the Bing chatbot has feelings and opinions. According to the journalists, the trend towards emotional trust in LLMs could in the future be exploited as a form of mass public manipulation.

By Vladimir Krasnogolovy

Vladimir is a technical specialist who loves giving qualified advice and tips on GridinSoft's products. He's available 24/7 to assist you with any question regarding internet security.
