UK chatbot developers have been warned

The Information Commissioner’s Office has warned tech companies about using information scraped from the internet to train their chatbots.

Artificial intelligence (AI) systems like ChatGPT are raising concerns, particularly around personal data. These chatbots are powered by large language models (LLMs) trained on vast amounts of information gathered from the web. While Italy has decided to block OpenAI’s chatbot for failing to comply with personal data law, the British equivalent of France’s CNIL, the ICO, has warned companies against building their systems on this information.

The Information Commissioner’s Office (ICO) has said that these companies must respect people’s privacy when developing such chatbots. “It’s important to take a step back and think about how personal data is being used by a technology whose own CEO has said he is ‘a little scared’ of it,” said Stephen Almond, the ICO’s Director of Technology and Innovation.

Rules to be followed

The director pointed out that his own recent exchange with ChatGPT led the chatbot to say that “Generative AI, like any other technology, has the potential to pose data privacy risks if not used responsibly”. “But while the technology may be new, the principles of data protection law remain the same – and there is a clear road map for companies to innovate in a way that respects people’s privacy,” Stephen Almond said.

Referring to the training of LLMs, the director said that data protection law still applies when the personal information being processed comes from publicly available sources. Companies must therefore identify a relevant legal basis for processing this data (consent, legitimate interests, etc.) and seek to reduce the security risks their systems present. “As a data protection regulator, we ask these questions of companies that develop or use artificial intelligence (…),” Stephen Almond promised.
