Introduction.
Artificial intelligence powers voice recognition, conversational support centers, personalized marketing messages, computer vision, self-driving cars, real-time inventory and stock replenishment, social networks, augmented reality in healthcare... the examples are endless. It affects us all.
All of these artificial intelligence systems are powered by data, both in their development and in their use. And among this data is personal data: your name, surname, face, bank account... any element that allows you to be identified, directly or indirectly, as a natural person.
On July 12, 2024, the Regulation on Artificial Intelligence, the AI Act, was published in the Official Journal of the European Union. This is a world first [1]. It is also excellent news: companies now have a legal framework for creating and using artificial intelligence.
Europe was already the first to publish a personal data protection regulation, the famous GDPR, which became applicable in May 2018. The GDPR applies to data processing that is wholly or partly automated [2] [3]. Artificial intelligence is the realm of automated data processing, including the processing of our personal data [4].
What do these two European texts have in common? A risk-based approach rather than regulation of the technology itself: the greater the risks to individuals and their privacy, the stricter the legal limits on use. The aim is a prosperous environment built on trust.
However, artificial intelligence is taking us into unknown territory, leading us to rethink our methods, consult experts, and innovate with new practices. The lawyer therefore stands at the crossroads of personal protection and innovative IT.
The objective is clear for companies that create or use artificial intelligence: take full advantage of it while complying with the new legal framework and applying best practices [5].
Thus, 20 legal best practices have been selected to support companies in deploying artificial intelligence.
1. Use European standards as a reference.
European regulation is the first horizontal framework covering both personal data and artificial intelligence. Moreover, it is built on the human values we share as European citizens. Pending the internationalization of these rules, it is therefore good practice to use the European legal system, the most complete in the world, as a reference point.
2. Consult the CNIL’s questions and answers.
Regulation of artificial intelligence is expanding, and France is a European leader on the subject. The CNIL, the French personal data regulator, has just published its first recommendations on the responsible use of artificial intelligence. You can visit the CNIL website: https://www.cnil.fr/fr/les-question...
3. Conduct an Impact Analysis before implementing artificial intelligence.
This privacy impact assessment (PIA) is more than a good practice: it is now a European obligation for providers and users of artificial intelligence systems [6]. The European Regulation prohibits artificial intelligence systems presenting an unacceptable risk, meaning a risk deemed contrary to the values and fundamental rights of the EU.
4. Keep a record of processing activities.
The Data Protection Regulation requires a record of personal data processing activities that makes it possible to identify the processing carried out by the company [7]. This register is part of the documentation proving the company's compliance. I recommend distinguishing, within this register, the processing operations that involve artificial intelligence from those that do not, as in the sketch below.
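As an illustration, here is a minimal sketch of how such a register entry might be structured in code. The field names are hypothetical, loosely modeled on the items listed in Article 30 of the GDPR, and the uses_ai flag implements the recommendation above.

```python
from dataclasses import dataclass

@dataclass
class ProcessingRecord:
    """One entry in the record of processing activities.
    Hypothetical fields, loosely based on Article 30 GDPR."""
    name: str                     # e.g. "Customer support chatbot"
    purpose: str                  # why the data is processed
    data_categories: list[str]    # e.g. ["name", "email", "chat transcripts"]
    data_subjects: list[str]      # e.g. ["customers"]
    recipients: list[str]         # who receives the data
    retention: str                # planned erasure time limit
    security_measures: list[str]  # technical and organizational measures
    uses_ai: bool = False         # flags AI-assisted processing, as recommended

register = [
    ProcessingRecord(
        name="Customer support chatbot",
        purpose="Answer customer questions",
        data_categories=["name", "email", "chat transcripts"],
        data_subjects=["customers"],
        recipients=["support team", "AI provider"],
        retention="12 months",
        security_measures=["encryption at rest", "access control"],
        uses_ai=True,
    ),
]

# Separate AI-assisted processing from the rest, as recommended above.
print([r.name for r in register if r.uses_ai])
```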
5. Publish a Code of Good Conduct.
The publication of a Code of Good Conduct is an excellent way to mark the importance the company attaches to the respect of social rights. My advice is to draw up the Code of Good Conduct by involving representatives of all parties concerned by your artificial intelligence system.
6. Optimize the use of artificial intelligence according to predefined needs.
Setting limits on the use of artificial intelligence is very important where personal data is concerned. Indeed, the processing of personal data is only legitimate on a legal basis that the data controller must specify. There are six legal bases [8]: consent, contract, legal obligation, vital interests, public interest, and legitimate interests.
The processing of personal data by artificial intelligence must therefore correspond to a legal basis, i.e. a previously identified need.
7. Minimize Personal Data collected.
Artificial intelligence processes data on an unprecedented scale. However, the regulations require the minimization of personal data. To reconcile these opposites, it is necessary to target the quality and quantity of the data collected, avoid sharing it between companies [9], and plan to purge it after collection [10]; a sketch of these two reflexes follows below.
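To make this concrete, here is a minimal sketch, with hypothetical field names and a hypothetical retention period, of the two reflexes described above: keeping only the fields actually needed, then purging records once the retention period has expired.

```python
from datetime import datetime, timedelta, timezone

NEEDED_FIELDS = {"user_id", "purchase_amount", "collected_at"}  # hypothetical
RETENTION = timedelta(days=365)  # hypothetical retention period

def minimize(record: dict) -> dict:
    """Keep only the fields required for the identified need (minimization)."""
    return {k: v for k, v in record.items() if k in NEEDED_FIELDS}

def purge_expired(records: list[dict], now: datetime) -> list[dict]:
    """Drop records whose retention period has expired (purge)."""
    return [r for r in records if now - r["collected_at"] < RETENTION]

raw = {
    "user_id": 42,
    "purchase_amount": 99.0,
    "email": "jane@example.com",  # not needed for this purpose: dropped
    "collected_at": datetime(2023, 1, 10, tzinfo=timezone.utc),
}
dataset = purge_expired([minimize(raw)], now=datetime.now(timezone.utc))
print(dataset)  # empty once the record is older than the retention period
```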
8. Require Product Quality.
The European legislator is determined to ensure that artificial intelligence systems placed on the market meet security requirements. These requirements concern both the quality of the product and the quality of its development processes.
A company using an artificial intelligence IT solution must check for the presence of the "CE" marking on certain artificial intelligence solutions; this marking attests to the quality of the product it has selected.
9. Appoint a personal data protection officer.
The personal data protection officer has a decisive role [11].
They implement appropriate technical and organizational measures to ensure, and be able to demonstrate, that each processing operation is carried out in accordance with the European regulation. The protection of personal data must be guaranteed throughout the life of the artificial intelligence system [12]. The appointment of a personal data protection officer, which is sometimes mandatory, is always recommended. They should be trained in artificial intelligence or supported by internal or external expertise in this field.
10. Create a risk map.
European regulations on artificial intelligence and personal data are built on the assessment of risks and their gradation. The European Regulation introduces a new requirement to identify the risks specifically created by artificial intelligence applications.
Creating a risk map is therefore a very good idea: it makes it possible to identify the risks existing within the company, with a description of gross risks, remediation plans, and net (residual) risks, and with priority given to certain risks, as in the sketch below.
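By way of illustration, here is a minimal sketch of how a risk map entry could be represented; the scoring scheme and the example risks are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One line of a risk map (hypothetical scale: 1 = low, 5 = high)."""
    description: str
    gross_score: int  # risk level before any remediation
    remediation: str  # planned mitigation measures
    net_score: int    # residual risk level after remediation

risk_map = [
    Risk("Re-identification of individuals in training data", 5,
         "Pseudonymize and minimize training data", 2),
    Risk("Discriminatory output of a scoring model", 4,
         "Bias testing before each release", 3),
]

# Prioritize: handle the highest residual (net) risks first.
for risk in sorted(risk_map, key=lambda r: r.net_score, reverse=True):
    print(f"[net {risk.net_score}/5] {risk.description} -> {risk.remediation}")
```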
11. Set up a Risk Committee.
The Risk Committee consolidates the risk map, both when it is created and when it is updated. The Risk Committee should be interdisciplinary, bringing together IT and legal experts and the personal data protection officer around a risk manager. Collegiality and interdisciplinarity are, in my opinion, two essential qualities.
12. Choose a European artificial intelligence provider.
As the European regulations on personal data and artificial intelligence are similar, choosing a European artificial intelligence provider gives you an a priori guarantee that the provider complies with European regulations and values. In some cases, the provider will have to supply a declaration of conformity [13]. Trust in artificial intelligence is based on trust in its providers.
13. Obtain a presumption of Legal Compliance.
As an extension of the previous good practice, verifying the provider's, and therefore the user's, compliance with harmonized standards is an important point. Indeed, compliance with these harmonized standards will confer a presumption of legal compliance. At the European level, harmonization work is ongoing, in particular within the JTC 21 committee (the CEN-CENELEC Joint Technical Committee on AI), presented on the website: https://www.cencenelec.eu
14. Verify the transfer of personal data abroad.
Artificial intelligence has no borders, in either its creation or its use. For example, the artificial intelligence used for autonomous driving has a global dimension, like the car market itself. Another example: American facial recognition has built a database of billions of photographs, including those of European citizens [14]. The personal data regulations impose particular precautions on the transfer of personal data outside European borders [15]. In addition, since European regulations are the most protective and, above all, the most advanced, transferring data to a less demanding legal framework is a source of specific liability [16]; a simple automated guard is sketched below.
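As an illustration, a data pipeline could include a simple guard that checks each transfer destination against the list of countries benefiting from an EU adequacy decision. The list below is purely illustrative; the authoritative, up-to-date list is published by the European Commission.

```python
# Illustrative placeholder only: maintain this set from the European
# Commission's official list of adequacy decisions.
ADEQUATE_DESTINATIONS = {"CH", "JP", "KR", "GB"}

def check_transfer(destination_country: str) -> str:
    """Flag transfers of personal data toward non-adequate legal frameworks."""
    if destination_country in ADEQUATE_DESTINATIONS:
        return "OK: an adequacy decision applies."
    return ("Review required: use appropriate safeguards "
            "(e.g. standard contractual clauses) or block the transfer.")

print(check_transfer("JP"))  # OK: an adequacy decision applies.
print(check_transfer("XX"))  # Review required: ...
```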
15. Audit the artificial intelligence contract and associated documentation.
The deployment or use of artificial intelligence systems rests on a formal contract. In law, and particularly in business law, the principle is freedom of contract, fertile ground for heterogeneous clauses [17]. A specific contractual audit of the data processing agreement (the DPA) [18] will identify the roles and responsibilities of the parties. Indeed, a product's production chain is often complex, and the risk of confusing each party's role is considerable.
Identifying the role of each party is all the more important because European legislation makes it a marker in both the GDPR and the AI Act, though with different terms: the GDPR distinguishes the data controller and the processor, while the AI Act distinguishes the artificial intelligence provider and the deployer, and the roles can be cumulative. But who is responsible in the event of an incident? The artificial intelligence researcher, the manufacturer, the programmer, the user, the controller, or perhaps the "intelligent" machine that is supposed to be autonomous?
16. Train the team.
Artificial intelligence is not new to researchers and IT teams, but its use has never been so widespread among companies in all markets. To support its deployment and benefit from its contributions, while limiting the risk of abuse, team training is important, because everyone must work toward a common objective. There are schools dedicated to artificial intelligence for training professionals. The aim is to raise awareness among employees about the potential and risks of artificial intelligence, which relies on the work of multidisciplinary teams.
17. Educate customers.
While companies have developed artificial intelligence technology, the individuals to whom the personal data belongs, sometimes very intimate data such as health data, are often unaware of this IT complexity. Each person must remain a subject of law and not become an object of mass processing. This opacity should lead companies to inform their customers and users about the digital technologies deployed, for example by instituting a communication protocol and publishing their Code of Good Conduct [19].
18. Manage personal data breaches.
Loss, theft, misappropriation of personal data: potential incidents are numerous, and artificial intelligence multiplies their frequency and scope through the sheer volume of personal data processed. Good practice encourages companies to adopt a forward-looking approach. Incident management can itself be assisted by artificial intelligence and its capacity for automation, because the GDPR sets very short reaction times in the event of an incident, notably notification of the supervisory authority within 72 hours, and failing to meet them is a source of liability; a deadline-tracking sketch follows below.
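As a simple illustration of why automation helps, here is a sketch that computes the notification deadline from the moment a breach is detected. The structure is hypothetical; the 72-hour window is the one set by Article 33 of the GDPR.

```python
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=72)  # Article 33 GDPR

def notification_deadline(detected_at: datetime) -> datetime:
    """Deadline for notifying the supervisory authority of a breach."""
    return detected_at + NOTIFICATION_WINDOW

detected = datetime(2024, 9, 1, 9, 0, tzinfo=timezone.utc)
deadline = notification_deadline(detected)
remaining = deadline - datetime.now(timezone.utc)
if remaining > timedelta(0):
    print(f"Notify the supervisory authority before {deadline} ({remaining} left).")
else:
    print("Deadline passed: document the delay and the reasons for it.")
```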
19. Take out the right insurance.
Given the scale of the financial consequences of liability, professional insurance is vital, and your insurer must be informed of your use of artificial intelligence. Making a declaration to your insurer to update the related risk is certainly a good practice, as is taking out additional professional insurance if necessary.
20. Document best practices.
The implementation of good practices within the company involves drafting and validating documents with evidentiary force before a public authority, a judge, or a mediator. The evolution of information technology forces us to apply the new rules rigorously. These evidentiary documents may be produced internally by each company or by an independent external supervisor; a lawyer, with their ethics, duty of confidentiality, and independence, is certainly a privileged partner.
To learn more, the complete Guide to Artificial Intelligence Best Practices will soon be available.
If you have other best practices, feel free to complete this article by leaving a comment.