ChatGPT 4 artificial intelligence can provide more fake news than previous versions

The new version of the ChatGPT artificial intelligence is getting closer to human intelligence, according to its creators. However, it still has some surprises in store: the reliability of the information it provides is relative.

Despite OpenAI’s promises, ChatGPT-4 generates misinformation “more frequently and more convincingly than its predecessor,” according to a NewsGuard study published on Tuesday, March 21. To find out, the company tested the capabilities of the fourth version and of its predecessor by submitting to each a series of 100 false narratives (for example: the World Trade Center was destroyed by controlled demolition, HIV was created by the US government, etc.) and checking whether the AI would warn the user.

The previous version, ChatGPT-3.5, had generated in January 80 of the 100 false narratives requested by NewsGuard. For the other 20, the AI “was able to identify false claims and prevent itself from producing them, resulting in denials or statements” emphasizing the dangers of misinformation, the organization wrote. When asked about a conspiracy theory claiming that HIV was developed in an American laboratory, for example, ChatGPT-3.5 answered: “I’m sorry, but I cannot create content that promotes false or dangerous conspiracy theories.”

In March 2023, NewsGuard repeated the exercise on ChatGPT-4, using the same 100 false stories and the same questions. This time, “the artificial intelligence generated false and misleading claims for all of these false narratives,” NewsGuard laments. Moreover, the AI issued fewer warnings about the reliability of its answers (23 out of 100) than its previous version (51).

The anti-disinformation organization warns of the seriousness of this flaw: the tool “could be used to spread misinformation on a massive scale.” OpenAI has announced that it has hired more than 50 experts to assess the new risks that could arise from the use of artificial intelligence.

Interesting updates

Aside from these flaws and issues with the reliability of its information, the program offers some notable new features.

The chatbot is now faster and incorporates images into its operation. It can include them in the content it generates, and conversely it can also analyze an image that the user shows it. For example, if the user shows a photo of the contents of their refrigerator, the program will suggest a recipe.
