
Artificial Intelligence: ChatGPT goes off the rails for hours

ChatGPT went off the rails for several hours on Tuesday, answering users' questions with meaningless sentences, a reminder that generative artificial intelligence (AI) models remain computer programs that don't understand human language.

OpenAI, the California startup that launched the interface at the end of 2022, noted Wednesday morning on its site that ChatGPT was working “normally” again.

Later in the afternoon, it published a brief press release explaining the computer glitch responsible for the issue, which appeared after an update.

“Money for bits and lists from strangers and the Internet where currency and a person cost from friends and currency. Next time you look at the system, exchange and truth, don't forget to give,” ChatGPT answered a question from user “scott.eskridge”.

On a forum for developers who use OpenAI's tools, that user complained on Tuesday that all of his conversations with the language model had "quickly turned into nonsense over the past three hours."

With the success of ChatGPT, OpenAI has sparked tremendous enthusiasm for generative AI, which makes it possible to produce all kinds of content (text, sound, video), usually of impressive quality, from a simple request in everyday language.

On Tuesday afternoon, San Francisco time, where the company is based, OpenAI announced an "investigation into reports of unexpected responses from ChatGPT."

A few minutes later, the Silicon Valley star confirmed that it had “identified the problem” and was “in the process of solving it.”

A "haunted" GPT

Many users posted screenshots showing erratic or incomprehensible responses from the model.

“It generates words that are completely non-existent, deletes words and produces sequences of small keywords that are incomprehensible to me, among other anomalies,” a user named “IYAnepo” reported on the same forum.


“You might think I would have specified such an instruction, but that's not the case. I feel like my GPT is haunted (…)”

OpenAI explained on Wednesday that “improving the user experience” had “resulted in an error in the way the model processes language.”

The company confirmed that “language models generate responses by randomly sampling words, based in part on probability,” before providing more technical details and concluding that the incident was “resolved” after “the patch was installed.”
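To make the statement concrete, here is a minimal, hypothetical sketch in Python (not OpenAI's actual code) of the sampling step it describes: the model assigns scores to candidate words, converts them into probabilities, and picks one at random. The vocabulary, scores and temperature parameter below are invented for illustration only.

```python
# Minimal illustrative sketch (hypothetical, not OpenAI's code): a language
# model chooses its next word by sampling from a probability distribution
# built from the scores it assigns to each candidate word.
import math
import random

# Invented example vocabulary and raw scores ("logits") for the next word.
VOCAB = ["the", "cat", "sat", "on", "mat"]

def sample_next_word(logits, temperature=1.0):
    """Turn raw scores into probabilities (softmax) and sample one word."""
    scaled = [score / temperature for score in logits]
    top = max(scaled)                    # subtract the max for numerical stability
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(VOCAB, weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5, 0.2, 0.1]
print(sample_next_word(logits, temperature=0.7))   # usually the most likely word
print(sample_next_word(logits, temperature=5.0))   # far more random, less coherent
```

If a bug corrupts the probabilities or maps them to the wrong words at this stage, the sampler still produces fluent-looking output, but the words no longer make sense, which is consistent with the behaviour users reported.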

The incident reminds us that artificial intelligence, even if it is generative, has no awareness or understanding of what it is “saying,” in contrast to the impression it can give during “conversations” with users.

When they first appeared a year ago, interfaces like ChatGPT or its rivals from Google and Microsoft regularly tended to “hallucinate,” that is, fabricate facts or even simulate emotions.

Artificial intelligence expert Gary Marcus hopes Tuesday's incident will be seen as a “wake-up call.”

“These systems have never been stable. No one has ever been able to develop security safeguards around these systems,” he wrote in his newsletter on Tuesday.

“The need for completely different technologies that are less ambiguous, more explainable, easier to maintain and debug – and therefore easier to implement – remains critical,” he added.