We recognize that many of you regularly use generative AI (GenAI) tools such as OpenAI’s ChatGPT, Google Gemini, or Microsoft Copilot for drafting texts or for processing and interpreting data. GenAI is a powerful tool that can deliver significant efficiencies and process improvements. Critical program partners such as OIV have actively encouraged Meridian to employ GenAI "to streamline processes and enhance operating procedures."
While the benefits of GenAI are evident, the technology raises important data protection questions, especially when using the free versions of platforms such as ChatGPT (GPT-3.5 or GPT-4). These platforms may use the data you enter for their own purposes, which raises data protection concerns and regulatory compliance issues whenever personal data is involved.
As with any tool, there are safe and unsafe ways to use GenAI. We are working to incorporate best-practice guidance for GenAI into our Data Protection Policy and Guidelines and to document its use in our Processing Activity Maps. In the meantime, please follow these best practices:
- Never include personally identifiable information (PII) in chat prompts. When using GenAI to create a document, use pseudonyms during the drafting process and re-insert the real details only in the final, offline version (a sketch of one way to do this follows the list).
- Similarly, never include any information that Meridian would consider confidential. A good rule of thumb: if Meridian would not post the information publicly on its website today, do not put it in a prompt.
- Some GenAI platforms require you to create an account. In that case, set up a dedicated email address for GenAI use; do not use your business email address or phone number. (Copilot tools are exempt.)
- Delete your chat history regularly, ideally as soon as you complete each project. (Copilot tools are exempt.)
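
For colleagues who prepare prompts or process data programmatically, the following is a minimal sketch of the pseudonymization step mentioned above, assuming Python. The `pseudonymize`/`restore` functions, the regular expressions, and the example data are illustrative assumptions only, not part of any approved Meridian tooling.

```python
import re

# Illustrative only: swap obvious PII (emails, phone numbers, known names) for
# placeholders before pasting text into a GenAI prompt, and keep a local mapping
# so the real values can be restored in the finished draft.

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().\-]{7,}\d"),
}


def pseudonymize(text, known_names):
    """Replace known names and pattern-matched PII with numbered placeholders."""
    mapping = {}
    counter = 0

    def placeholder(label, original):
        nonlocal counter
        counter += 1
        token = f"[{label}_{counter}]"
        mapping[token] = original
        return token

    for name in known_names:
        if name in text:
            text = text.replace(name, placeholder("NAME", name))
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(lambda m, lbl=label: placeholder(lbl, m.group()), text)
    return text, mapping


def restore(text, mapping):
    """Swap the real values back into the GenAI output, locally."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text


if __name__ == "__main__":
    draft = "Summarise the call with Jane Doe (jane.doe@example.org, +49 151 2345678)."
    safe_prompt, mapping = pseudonymize(draft, known_names=["Jane Doe"])
    print(safe_prompt)  # name, email, and phone number replaced by placeholders
    # ...paste safe_prompt into the GenAI tool, then locally:
    # final_text = restore(genai_output, mapping)
```

For routine use through a chat interface, making the same substitutions by hand and keeping the key to the pseudonyms outside the chat achieves the same effect.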