
AI Training and Personal Data: Key Lessons from the OLG Köln Decision in Meta’s Case

Meta’s Approach to User Consent

Training AI models requires vast amounts of data, and many providers are exploring ways to source it. In Meta’s case, the company decided to use publicly available data from adult users of Facebook and Instagram to train its AI models. This policy was announced to users before implementation, but instead of seeking explicit consent, Meta offered an opt-out mechanism.

This approach raised legal concerns and led to injunction proceedings initiated by the North Rhine-Westphalia Consumer Association. The Higher Regional Court of Cologne (OLG Köln) ruled on the matter on 23 May 2025 (Case No. 15 UKl 2/25). Below are key takeaways from the decision regarding the use of personal data for AI training.

Can Personal Data Be Used for AI Training?

In principle, personal data can be used to train AI models, but only if the processing complies with the General Data Protection Regulation (GDPR). While the EU AI Act governs AI systems, any processing of personal data remains subject to the GDPR.

In this case, the consumer association sought to stop Meta from using publicly available user data for AI training. However, the court did not grant the injunction, suggesting that personal data may, under certain conditions, be used for AI training even if AI training was not the original purpose of its collection.

That said, the court’s decision does not provide a blanket approval. Businesses must proceed with caution when considering the use of customer data for AI development.

Is AI Training a Legitimate Purpose?

Under Article 5(1)(b) GDPR, personal data must be collected for a specific, explicit, and legitimate purpose. Once collected, it cannot be repurposed for unrelated or undisclosed uses.

While consent is one legal basis for processing personal data, under Article 4(11) GDPR it must be:

  • Freely given, specific, and informed,
  • Provided through a clear affirmative action.

In Meta’s case, no affirmative consent was sought. Instead, the company relied on the legal basis of legitimate interest under Article 6(1)(f) GDPR. The court accepted that, in certain circumstances, the development of AI tools could constitute a legitimate interest, provided users are informed and given the opportunity to object.

Key Takeaways for Businesses

  1. Transparency Is Essential
    Users must be clearly and proactively informed about how their data will be used. The court accepted Meta’s opt-out model because users were notified and had the chance to object.
  2. Inform Users Before Collecting Data
    Only data collected after users have been informed and given the opportunity to opt out can be used for AI training. Data collected for one purpose cannot be repurposed later without proper disclosure.
  3. Establish a Legitimate Interest
    Ensure that the use of personal data for AI training serves a real, specific business purpose. Vague or speculative interests are unlikely to meet the legal threshold.

Conclusion: Proceed with Caution

Although the court rejected the injunction against Meta, this does not mean that any user data can be freely used for AI training. Each case must be assessed individually, considering:

  • The nature of the services provided,
  • The type of personal data involved,
  • The legal basis for processing.

As the regulatory landscape continues to evolve, especially with the implementation of the EU AI Act, businesses must stay informed and compliant.

If you’re considering using customer data for AI training, it’s crucial to evaluate the legal risks and ensure transparency and compliance. We’re here to help you navigate these complex requirements and support your compliance efforts.

Contact us for guidance on IT law and data protection to ensure your AI projects remain compliant and transparent.

Questions?

Feel free to contact us regarding this or any other subject.