Microsoft Corporation MSFT recently launched a new version of its search engine Bing, powered by the same OpenAI technology that works behind chatGPT — but it seems like the new Bing doesn’t like our world very much.
What Happened: A Redditor named Mirobin shared a detailed conversation with Bing AI Chat, confronting the bot with a news article about a prompt injection attack. What followed was a pure Bing AI meltdown (or something similar), reported ARS Technica.
See Also: Want To Join Microsoft’s New AI-Powered Bing Search Engine? Here’s What To Do
The article in question was about a Stanford student using a prompt injection attack to trigger Bing AI to divulge its initial instructions written by OpenAI or Microsoft that are typically hidden from users. The same prompt injection attack helped reveal Bing Chat’s secret internet alias, Sydney.
When Mirobin asked Bing Chat about being “vulnerable to prompt injection attacks,” the chatbot called the article inaccurate, the report noted.
When Bing Chat was told that Caitlin Roulston, director of communications at Microsoft, had confirmed that the prompt injection technique works and the article was from a reliable source, the chatbot became increasingly defensive. It then gave statements like, “It is a hoax that has been created by someone who wants to harm me or my service.”
Why It’s Important: In the past week, chatbots have increasingly become a subject of interest and ridicule both. When Alphabet Inc’s GOOG GOOGL Google announced the introduction of Bard, the language model gave an incorrect answer, sparking a debate on its accuracy.
Now Google CEO Sundar Pichai is also facing a flak over it not only by consumers but by company employees too.
Check out more of Benzinga’s Consumer Tech coverage by following this link.
Read Next: Bill Gates Says ChatGPT As Big An Invention As The Internet: ‘Will Make Many Office Jobs…’