As AI systems such as ChatGPT gain popularity, Europe is considering strengthening the General Data Protection Regulation to better govern their use of personal data. These AI systems, embedded in connected devices such as smartwatches and in online platforms, can access sensitive information, often without users being fully aware of it. Faced with these risks, the European Union is proposing regulatory measures aimed at protecting personal data and informing users about how artificial intelligence uses their information.
While the European Day dedicated to the protection of personal data seeks to raise public awareness so that people can control their private data on the Internet, that data may soon face an upheaval: the resounding arrival of powerful artificial intelligence (AI) systems such as ChatGPT. While these AI systems can impress with their conversational skills, they will also increasingly be used to process the vast amounts of personal data we hand over to platforms, often without realizing it. From this point of view, artificial intelligence is not without risks.
For this reason, Europe wants to build on its General Data Protection Regulation (GDPR). The EU plans to add a set of harmonized rules on the use of artificial intelligence. It must be said that artificial intelligence is now everywhere. We wear it on the wrist day and night through connected watches and bracelets that can collect health data and even detect certain diseases. Yet consumers do not always realize that asking personal questions, such as medical ones, to a chat tool means providing the companies behind that artificial intelligence with sensitive information that can be exploited for commercial purposes. And that is not the only concern: the AI ecosystem involves many players, whether developers, suppliers, importers, distributors, or users. This chain remains largely opaque to the consumer, so it is difficult to know who actually has access to personal data and who will be held responsible if problems occur.
Better information about artificial intelligence algorithms
As the use of these AI systems increases, so does the risk of personal data being leaked or slipping out of users' control. To protect themselves, consumers should therefore find out which company is collecting their data and what its policy is for processing this personal information. This is not always easy to do, even though some players in the sector are more virtuous than others. This is notably the case with Apple, which seeks to defend data confidentiality by requiring developers to automatically request consent for data collection, for example.
To better protect users, the European Union has proposed three texts: a regulatory framework on artificial intelligence, a directive on liability in the field of artificial intelligence, and a product liability directive. Among these additional rules, the European Union wants, for example, to oblige digital giants and other social network platforms to better inform users about their algorithms. To enforce this, the texts provide for significant penalties, ranging from 10 to 30 million euros or 2 to 4% of turnover in case of non-compliance with these new obligations. The EU will need to adopt these texts quickly, before AI systems take even more liberties.