AI use is on the rise in people’s daily activities inside and outside work. Many people now turn to AI tools for the searches they used to run on Google, to plan trips, keep a diary, seek psychological guidance, obtain business advice or draft contracts, among other uses. When these activities are carried out in a personal sphere, the responsibility for using AI lies directly with the user. In a work context, however, companies must make sure that their employees’ use of AI does not trigger a legal contingency that could later affect them. This makes it necessary to draw up and implement policies defining good and bad practices in AI use within the business.
The use of AI tools has clearly become necessary for companies to carry out certain tasks more effectively and efficiently, to improve performance in various business areas and to keep up in a highly competitive corporate world. At the same time, it exposes companies to legal risks related to potential infringements in the fields of IP, data protection and compliance with confidentiality obligations.
To protect against these risks, companies need to prepare and implement policies on good and bad practices, which must be made known to their employees and the third parties with whom they share information.
To raise awareness and provide companies with guidance for governance on this subject, the World Intellectual Property Organization (WIPO) issued a guide on February 28, 2024: “Generative AI: Navigating Intellectual Property”.
Along with describing some of the main issues and risks that companies must consider, the guide proposes mitigation measures that can help them avoid infringing intellectual property rights.
The most notable recommended measures are grouped in the following categories:
- Preparing policies and training company staff.
- Creating a risk profile for the company’s use of AI and monitoring it over time.
- Creating and keeping records of uses of AI tools at the company, of AI-generated outputs and of the people involved in the creation process (a sketch of one possible record structure follows this list).
- Assessing AI tools and the data used to train them.
- Checking that AI products do not infringe IP rights and whether any licensing documents or agreements are needed to obtain rights in them.
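By way of illustration of the record-keeping measure above, the sketch below shows one possible structure for such records, written in Python. The AIUsageRecord fields and the JSON Lines log file are illustrative assumptions, not a format prescribed by the WIPO guide.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class AIUsageRecord:
    """Hypothetical record of one use of an AI tool (all field names are illustrative)."""
    tool_name: str         # AI program that was used
    user: str              # employee involved in the creation process
    purpose: str           # business task the tool was used for
    prompt_summary: str    # summary of the instructions given (avoid storing confidential data here)
    output_reference: str  # location where the generated output is archived
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_record(record: AIUsageRecord, log_path: str = "ai_usage_log.jsonl") -> None:
    """Append the record to a JSON Lines file so uses can be audited later."""
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record.__dict__) + "\n")

# Example use (all values hypothetical):
append_record(AIUsageRecord(
    tool_name="ExampleGPT",
    user="j.doe",
    purpose="first draft of a marketing email",
    prompt_summary="asked for a neutral-tone draft based on an internal outline",
    output_reference="dms://marketing/2024/draft-123",
))
```

A structured log of this kind makes it easier to demonstrate afterwards who used which tool, for what purpose and with what result.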
These categories may be used as a basis for putting in place internal policies and guidance on the conduct of the company and its employees.
Although this list of measures is useful for regulating general scenarios inside companies, there is still a need to identify specific types of conduct that should be allowed, prohibited or restricted. Below is a summary of some actions that may be considered good and bad practices in the use of AI tools:
Five examples of bad practices related to AI use:
- Disclosure of confidential information: this may occur where internal or clients’ confidential information or documents are fed into AI tools. Some AI programs include disclaimers recommending that users not upload confidential information. The risk is that the company controlling the AI tool may access the information, or that the tool may reproduce it in responses to other users’ queries.
- Disclosure of personal or sensitive data: this may happen where the information provided to the tool includes data allowing an individual to be identified and, in some cases, data classed as sensitive (information on health, religious ideology, sexual preferences, etc.) or confidential (financial information). Units of organizations handling personal, financial and sensitive information – human resources departments, debt collection firms and health institutions, for example – must be very careful when using tools of this type, because they could face a data protection infringement involving the information they have uploaded.
- Asking the AI program to generate outputs similar to material protected by third-party IP rights: this can happen where the user instructs the program to produce an output similar to an existing work, which might be protected by copyright, trademark or another intellectual property right.
- Claiming ownership of outputs generated by an AI program: this type of conduct could be seen as plagiarism or an infringement of third-party rights.
- Failure to check the accuracy of an output: this could result in professional liability (for lawyers and doctors, for example) or commercial liability (such as a breach of consumer protection provisions). The AI tools themselves contain disclaimers informing users that they can make errors and inviting them to check the output.
Five examples of good practices related to AI use:
- Using tools to improve ideas/tasks, but not to create them: instead of instructing an AI program to create an output from scratch, a better practice is to create a first draft or sketch and then edit and adapt it with the tool’s assistance. This lowers the risk of plagiarism or infringement of third parties’ rights, because the basis of the work is created by the user.
- Anonymize or disassociate personal data uploaded to AI programs: this can be done by replacing the personal data appearing in the documents to be used, for example substituting invented names or codes for the real names of individuals. The uploaded information should not allow identification of the individual to whom it belongs, which mitigates the risk of a personal data infringement (a minimal sketch of this approach follows this list).
- Not sharing confidential information: as in the preceding point, if documents containing confidential information need to be used in AI programs, the solution could be to alter, restrict or delete the confidential passages to keep the information secret.
- Check the output: it is always worthwhile consulting official information sources to confirm that the output is accurate and free of errors. In the case of legal opinions, for example, it is advisable to examine the laws and legal criteria yourself to check that the interpretation is correct.
- Be careful with the instructions given to the tools: avoid prompts such as “create content similar to” an existing work, or other instructions that might infringe third parties’ IP rights.
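As a concrete illustration of the anonymization and redaction practices above, the minimal sketch below shows one way to pseudonymize names and mask confidential strings before a text is sent to an AI tool. The pseudonymize and redact helpers, the PERSON_n codes and the [REDACTED] placeholder are hypothetical; real deployments typically rely on dedicated de-identification tooling.

```python
import re

def pseudonymize(text: str, names: list[str]) -> tuple[str, dict[str, str]]:
    """Replace each known personal name with a neutral code, returning the mapping
    so the original names can be restored in the AI output afterwards."""
    mapping: dict[str, str] = {}
    for i, name in enumerate(names, start=1):
        code = f"PERSON_{i}"
        mapping[code] = name
        text = text.replace(name, code)
    return text, mapping

def redact(text: str, confidential_terms: list[str]) -> str:
    """Mask known confidential strings and anything resembling a long account number."""
    for term in confidential_terms:
        text = text.replace(term, "[REDACTED]")
    return re.sub(r"\b\d{8,}\b", "[REDACTED]", text)

# Example (hypothetical data): prepare a document before uploading it to an AI tool.
doc = "Maria Lopez (account 123456789) asked Juan Perez about the Acme merger."
safe, mapping = pseudonymize(doc, ["Maria Lopez", "Juan Perez"])
safe = redact(safe, ["Acme merger"])
print(safe)     # PERSON_1 (account [REDACTED]) asked PERSON_2 about the [REDACTED].
print(mapping)  # {'PERSON_1': 'Maria Lopez', 'PERSON_2': 'Juan Perez'}
```

Because the code-to-name mapping stays on the company’s side, the original names never leave the organization, yet they can still be restored in the tool’s output.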
The conclusion is that AI use offers companies a valuable opportunity to optimize their processes, although it also means taking on new responsibilities and risks.
It is important for companies to put in place clear policies defining what is permitted and prohibited in relation to AI tools, to ensure that these tools are used safely, ethically and within the applicable laws. An organizational culture that encourages responsible AI use will both mitigate legal contingencies and strengthen the reputation and sustainability of companies in the long run.