The future of data protection: are GDPR and AI Act friends or foes?
Published on January 23, 2024
By Shalini Kurapati


As we celebrate Data Protection Day, we need to reflect on the evolving landscape of data privacy, given the stellar rise of Artificial Intelligence (AI) applications, with generative AI at the forefront. AI's ability to process vast amounts of data, make decisions, generate content and learn from these actions creates unforeseen data privacy challenges.

Therefore, understanding the relationship between the General Data Protection Regulation (GDPR) and the AI Act is pivotal in this era of rapid AI advancement. Although the final version of the AI Act text is not yet available, we can already examine its high-level principles to see the similarities and differences.

GDPR - General Data Protection Regulation

GDPR is a landmark law, in effect since May 2018, that unified 28 different national laws across the EU and EEA regarding the processing of personal data.

The key GDPR principles are:

  • Lawfulness, Fairness, and Transparency: Data must be processed legally, fairly, and in a transparent manner in relation to the data subject.
  • Purpose Limitation: Data must be collected for specific, explicit, and legitimate purposes.
  • Data Minimization: Only data that is necessary for the purposes for which it is processed should be collected and processed.
  • Accuracy: Personal data must be accurate and kept up to date.
  • Storage Limitation: Personal data should be retained only as long as necessary for the specified purposes.
  • Integrity and Confidentiality: Security and protection against unauthorized or unlawful processing, loss, destruction, or damage.
  • Accountability: Data controllers and processors must be able to demonstrate compliance with all the above principles to supervisory authorities.

Non-compliance with these principles can result in fines of up to €20 million or 4% of the company's total global annual turnover, whichever is higher. These fines are imposed by the Supervisory Authority (SA) of each member state, and coordination among SAs is conducted by the European Data Protection Board.

Since data is the lifeline of AI-led innovation, GDPR touches lightly on AI in Article 22, which imposes obligations of explainability and fairness on automated decision-making that uses personal data. However, GDPR provides only vague and limited guidance on how to achieve this goal. The uncertainty is further exacerbated by the rapid advancement of AI technologies and the growing scope of their individual and societal effects.

AI Act

The AI Act aims to address this uncertainty by providing a comprehensive set of rules for regulating AI.

The AI Act takes a risk-based approach to regulation and adopts the OECD definition of an AI system: ‘a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.’

The AI Act classifies AI systems into four risk levels:

  • Unacceptable Risk: Applications such as social scoring and hazardous voice-assisted toys are banned outright, as they pose clear threats to safety and fundamental rights.
  • High Risk: AI used in critical areas like healthcare, infrastructure, and law enforcement must satisfy strict compliance measures, such as risk assessments and robust security, before market entry.
  • Limited Risk: Systems such as chatbots are subject to light transparency obligations, for example informing users that they are interacting with an AI.
  • Minimal Risk: Applications such as video games and spam filters face no additional obligations.

Additionally, the Act includes specific rules for general-purpose AI systems (GPAIs), aimed at generative AI applications like ChatGPT, to ensure value chain transparency. For high-impact GPAIs posing systemic risks, additional obligations such as model evaluations and reporting on serious incidents are mandated.

In general, the AI Act is based on the following principles of Trustworthy AI:

  • Human Agency and Oversight: Ensuring AI systems support human autonomy and decision-making, as well as providing effective oversight mechanisms.
  • Technical Robustness and Safety: Requiring AI systems to be secure, reliable, and robust enough to deal with errors or inconsistencies during all life cycle phases.
  • Privacy and Data Governance: Safeguarding personal data, ensuring data privacy and quality, and managing data in a way that respects privacy rights.
  • Transparency: Mandating clarity and transparency in AI processes and decisions, ensuring that users are informed and that AI systems can be understood and interrogated.
  • Diversity, Non-discrimination, and Fairness: Avoiding unfair bias and discrimination, ensuring inclusivity and fairness in AI systems.
  • Societal and Environmental Well-being: Ensuring AI systems are developed and operated in a way that is socially beneficial, respecting ecological and environmental standards.
  • Accountability: Implementing mechanisms for legal and ethical responsibility for AI systems and their outcomes, including auditability and reporting.

Violations would cost companies between €7.5 million and €35 million, or between 1.5% and 7% of their global turnover, although details on enforcement authorities are yet to be clarified.

GDPR and AI Act: similarities and differences

We can observe that the principles of transparency, data governance, and security clearly overlap between the two. GDPR also, to some extent, mandates non-discrimination and fairness in automated decisions based on personal data processing (Article 22). Data quality and completeness can also be seen as common requirements between the two.

There are also several differences and incompatibilities between the two. To name a few: the AI Act applies to all AI systems across the AI value chain, irrespective of the nature of the data involved, whereas GDPR applies only to personal data, that is, any information relating to an identified or identifiable living individual. Additionally, GDPR provides explicit data subject rights, under which citizens can personally intervene to demand knowledge of and control over the whereabouts and use of their data, while the AI Act leans more towards consumer protection rights for a product. For the more legally curious, the Future of Privacy Forum (FPF) has written a detailed piece on the incompatibilities.

Going forward

Aligning AI developments with data protection principles, while maintaining the foundational rights and protections established under GDPR, remains a key challenge.

The path to implementing the AI Act is complex and evolving, with gaps and overlaps relative to existing GDPR compliance. The two laws have different scopes, definitions, and requirements, which can create challenges for compliance and consistency.

It’s important to note that one does not replace the other: the AI Act builds upon GDPR compliance for data protection obligations. Each law is inherently complex on its own; ensuring their compatibility should not add further complexity. It is crucial for Supervisory Authorities to provide clear guidance and establish guidelines that leverage GDPR as a foundational framework for AI regulation, thereby fostering innovation without undue burden.


Dr. Shalini Kurapati is the co-founder and CEO of Clearbox AI. Watch this space for more updates and news on solutions for deploying responsible, robust and trustworthy AI models.