Take-Aways (AI)
- Define clear compliance requirements, involve the data protection officer, and provide secure functional accounts instead of private accounts.
- Do not input or request output of personal data; opt out of training and chat history; check results for accuracy and discrimination.
- No automated final decisions; raise employee awareness; observe further legal aspects (copyright, trade secrets, AI Act).
The Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI) has published a checklist for the use of chatbots based on Large Language Models (LLMs) such as ChatGPT, available in German and English.
The checklist includes the following points:
- Specify compliance rules (internal instructions, etc.)
- Involve the Data Protection Officer
- Provide functional accounts (so that employees do not have to use a private account; ideally, the account should not be tied to a person's name)
- Use secure authentication (so that attackers cannot misuse the account)
- Do not enter personal data (of customers, business partners, or the employees themselves)
- Do not request output of personal data (because the AI can incorporate information from the Internet; prompts should therefore not target personal data, e.g. no "who" questions)
- Exercise caution with personal data (i.e. do not input data that allows a person to be identified from the context)
- Opt out of AI training (e.g. by switching off "Chat history and training")
- Opt out of the chat history (especially if several people use the same account)
- Check results for correctness
- Check results for discrimination (for example, a recommendation to fill a vacancy with "male spectacle wearers" must not be followed)
- No automated final decisions (do not simply adopt recommendations, e.g. because it is not transparent how a recommendation was reached)
- Raise employee awareness (regarding the permissible use of such tools)
- Data protection is not everything (copyrights, trade secrets, etc. must also be taken into account)
- Track further developments (e.g. the AI Act, which also regulates users)
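The "no personal data in prompts" items above can also be supported technically, not just organizationally. As a purely illustrative sketch (not part of the HmbBfDI checklist; the patterns, function name, and placeholder tags are my own, and a naive regex filter is no substitute for proper PII detection), a pre-prompt redaction step might look like this:

```python
import re

# Illustrative only: naive patterns for two obvious kinds of personal data.
# Real deployments would need far more robust PII detection (names,
# addresses, customer IDs, contextual identifiers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d ()/-]{7,}\d"),
}

def redact_personal_data(prompt: str) -> str:
    """Replace matches of known PII patterns with placeholder tags
    before the prompt is sent to an external LLM service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact_personal_data(
    "Contact Jane at jane.doe@example.com or +49 40 1234567."
))
# prints "Contact Jane at [EMAIL] or [PHONE]."
```

Note that such a filter catches only patterned data; a name like "Jane" would still pass through, which is why the checklist's organizational measures (instructions, awareness training) remain essential.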