In a development with potential long-term consequences for the advancement of AI in Europe, a task force at the EU's data protection body recently pointed to potential data accuracy violations by ChatGPT. The body, which represents Europe's national privacy authorities, established the task force on ChatGPT last year following complaints from national regulators, spearheaded by Italy's data protection authority, about the widely adopted AI service.
According to the report, published on Friday, although OpenAI has implemented measures to reduce factually false information in ChatGPT's responses, and those measures help users avoid misinterpreting the chatbot's output, they are still insufficient to satisfy the data accuracy principle of the General Data Protection Regulation (GDPR).
At the core of the problem, the report stated, is the probabilistic nature of ChatGPT's system, which leads to "a model which may also produce biased or made-up outputs." This creates problems for end users, who are likely to take ChatGPT's outputs as definitively true, even when they are not, including where information about individuals is concerned.
The task force's conclusions underscore that AI developers must still grapple with data protection law, regardless of the efforts they make to improve the reliability of these systems.
Although investigations by national privacy authorities in several member states are still ongoing, the report offers preliminary findings from the checks completed so far, serving as a kind of "lowest common denominator" for the national authorities as they continue their monitoring and supervision.
OpenAI, the developer of ChatGPT, did not respond to a request for comment from Reuters. Nonetheless, the task force's report is likely to spur further debate, and possibly the formulation of regulations imposing strict protocols governing the use of innovations like ChatGPT, particularly around the accuracy of information and transparency.
The implications of this development are not limited to OpenAI and its creation, ChatGPT; it establishes a benchmark for determining whether AI systems can meet the legal requirements of data protection legislation across the EU. The widespread adoption of AI raises a myriad of questions about how such systems will be designed and whether they will adhere to legal and ethical standards.
The task force's report can be read as a wake-up call: despite the many benefits AI can deliver, there remain risks and challenges that must be confronted. Striking a balance between promoting innovation and protecting data will be a decisive factor in the success of the authorities, developers, and stakeholders involved.
As the national authorities' probes continue, attention is likely to turn to measures and practices that would allow AI systems such as ChatGPT to function properly without violating the principles of data accuracy and transparency. This may mean further development of training practices and methodology, the use of better algorithms in AI design, and greater transparency about AI's strengths and weaknesses, including its propensity for bias.
AI is a fast-growing field, and the EU's data protection board established the task force to address potential issues before they emerge and to more effectively assess the degree to which AI systems comply with data protection legislation. The industry is still wrestling with these challenges, and the task force's report should prompt developers and regulators alike to be more proactive in addressing them, while continuing to champion AI technology for the greater good with the principles of data protection and privacy in focus.