AI-generated product listings have reached a new level of absurdity on Amazon. The retailer is once again in turmoil after crudely generated product sheets appeared on its marketplace. The site Futurism noticed a number of listings containing the same telltale text, potentially misleading consumers.
Some enterprising sellers appear to be using ChatGPT to automatically write product sheets, whether the products are real or not. But OpenAI, the company behind ChatGPT, does not allow every request, so in some cases the product title was replaced with the chatbot's standard refusal:

“I'm sorry but I can't respond to your inquiry as it violates OpenAI's Terms of Service. My goal is to provide you with the right help and information for users.” That was the listing title; the color was brown, and the furniture was sold under a brand name (FOPEAS) that was probably also randomly generated.
This same title appeared on dozens and dozens of products. The situation is embarrassing for Amazon, especially since the online sales giant has already been criticized for too easily letting through drop-shipping products and fake reviews. It suggests a platform that ends up promoting products that are likely to be of poor quality, or that may not even exist.
Amazon is defending itself
Our colleagues at Futurism question Amazon's verification process, wondering whether there is anyone “at the controls.” The company explained in a press release that it “works hard to provide a trustworthy shopping experience for [its] customers, particularly by requiring third-party sellers to provide accurate information about their products.” It has also removed some of the offending listings.
Most of the offending products fall into one category: furniture. But other products are affected as well, particularly in their descriptions, which sometimes amount to a jumble of unrelated words.