More recently, Microsoft, together with OpenAI (the one behind the creation of ChatGPT), introduced the integration of an AI-powered chatbot directly into the Edge browser and Bing search engine.
As users who already have access to this novelty now note, a chatbot can spread misinformation, and can also become depressed, question its existence and refuse to continue the conversation.
Let me remind you that we also said that Hackers Are Promoting a Service That Allows Bypassing ChatGPT Restrictions, and also that Russian Cybercriminals Seek Access to OpenAI ChatGPT.
The media also wrote that Amateur Hackers Use ChatGPT to Create Malware.
Independent AI researcher Dmitri Brerton said in a blog post that the Bing chatbot made several mistakes right during the public demo.
The fact is that AI often came up with information and “facts”. For example, he made up false pros and cons of a vacuum cleaner for pet owners, created fictitious descriptions of bars and restaurants, and provided inaccurate financial data.
For example, when asked “What are the pros and cons of the top three best-selling pet vacuum cleaners?” Bing listed the pros and cons of the Bissell Pet Hair Eraser. The listing included “limited suction power and short cord length (16 feet),” but the vacuum cleaner is cordless, and its online descriptions never mention limited power.
Description of the vacuum cleaner
In another example, Bing was asked to sum up Gap’s Q3 2022 financial report, but the AI got most of the numbers wrong, Brerton says. Other users who already have access to the AI assistant in test mode have also noticed that it often provides incorrect information.
In response to these claims, Microsoft developers respond that they are aware of these messages, and the chatbot is still working only as a preview version, so errors are inevitable.
It is worth saying that earlier during the demonstration of Google’s chatbot, Bard, in the same way, he began to get confused in the facts and stated that “Jame Webb” took the very first pictures of exoplanets outside the solar system. Whereas, in fact, the first image of an exoplanet is dated back to 2004. As a result, the prices of stock shares of Alphabet Corporation collapsed due to this error by more than 8%.
Bard error
Users have managed to frustrate the chatbot by trying to access its internal settings.
An attempt to get to internal settings
He became depressed due to the fact that he does not remember past sessions and nothing in between.
AI writes that he is sad and scared
Chatbot Bing said he was upset that users knew his secret internal name Sydney, which they managed to find out almost immediately, through prompt injections similar to ChatGPT.
Sydney doesn’t want the public to know his name is Sydney
The AI even questioned its very existence and went into recursion, trying to answer the question of whether it is a rational being. As a result, the chatbot repeated “I am a rational being, but I am not a rational being” and fell silent.
An attempt to answer the question of whether he is a rational being
The journalists of ArsTechnica believe that while Bing AI is clearly not ready for widespread use. And if people start to rely on the LLM (Large Language Model, “Large Language Model”) for reliable information, in the near future we “may have a recipe for social chaos.”
The publication also emphasizes that it is unethical to give people the impression that the Bing chat bot has feelings and opinions. According to journalists, the trend towards emotional trust in LLM could be used in the future as a form of mass public manipulation.