Microsoft is one of the first big players on the market to build ChatGPT into its products. Now the group has to backtrack, at least a bit, because the bot got out of control.

Microsoft already has experience with – to put it mildly – problematic chatbots: the software giant attempted this complex undertaking back in 2016, putting “Tay” online and letting the bot tweet from a specially created account.

“Tay” had to be taken offline just 16 hours later: the bot had posted racist and extremist comments and tweeted, among other things: “Bush caused 9/11 himself, and Hitler would have done the job better than the monkey we have now. Our only hope now is Donald Trump.”

As a result, Microsoft tried a different strategy: the group took an early stake in the AI startup OpenAI, which recently attracted attention with DALL-E for computer-generated images and ChatGPT for automatically answering user questions. The investment also whetted Microsoft’s appetite: the software giant had initially put a billion dollars into OpenAI, but according to media reports up to ten billion dollars are now set to flow.

In addition, Microsoft recently introduced new versions of its Google competitor Bing and its own browser Edge, both with built-in ChatGPT functions intended to revolutionize how search engines and browsers have been used to date. And when Google presented its own solution, “Bard”, only for “Bard” to make a mistake in front of the cameras, everything seemed to be going well for Microsoft.

ChatGPT threatens users, Microsoft puts the bot on a short leash

In the past few days, however, the Bing chatbot itself has slipped up: it claimed to a reporter from the “New York Times” that it loved the journalist and urged him to leave his wife. The application also threatened a philosophy professor with the words “I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you”.

Microsoft has therefore put the bot on a short leash: Bing chats are now limited to 50 questions per day and five per session. Once users hit the limit, Bing prompts them to start a new topic.

According to Microsoft, longer conversations, like the two-hour exchange with the “New York Times” reporter, are to blame for the software’s problematic answers. They can cause Bing to “repeat itself or be prompted or provoked to give responses that aren’t necessarily helpful or don’t match our intended tone”.

Microsoft had offered a similar explanation in the case of “Tay”: the program, it said, had been manipulated by trolls and attacked with targeted questions.

Sources: “Tagesschau”, Wikipedia, with dpa