
The AI & Privacy Explorer #28-29/2024 (8-21 July)

  • Jul 30, 2024
  • 31 min read


Welcome to The Privacy Explorer recap of privacy, digital and AI news for weeks 28-29 of 2024 (8-21 July)! 


In this edition:


🇪🇺   EU AI Act was published in the Official Journal;

🎛️  EDPB issued a statement on the role of data protection authorities in enforcing the AI Act;

🇫🇷   CNIL Issues Guidance on Interplay between the AI Act and GDPR;

🤖  Hamburg DPA Launches GDPR Discussion Paper on Personal Data in LLMs;

⚗️  Singapore's PDPC Issues Guide on Synthetic Data Generation;

📣  Open Rights Group Complains to ICO About Meta's AI Data Use;

🤖  Irish DPC Highlights GDPR Challenges with AI and Data Protection;


🇫🇷   CNIL publishes Q&A on the Use of Generative AI Systems;

🧠 CNIL's Guidance on Deploying Generative AI;

📜  Baden-Württemberg DPA published a navigator for AI & data protection guidance (ONKIDA);

🧩  Regulatory Mapping on AI in Latin America;

📸 Polish DPA Publishes Guide On Protecting Children's Privacy Online;

📌 India’s Supreme Court Finds That Google Pin Sharing as Bail Condition Violates Privacy;

📘 The German Federal Financial Supervisory Authority Issues Guidelines for DORA Implementation;

🏢  CNIL Launches Public Consultation on Workplace Diversity Measurement;

🤥  Deceptive design under global spotlight - GPEN and country reports;

🕵️‍♂️  FTC Review Uncovers Dark Patterns in 76% of Websites and Apps;

🕵️‍♂️ AEPD Reports on Addictive Internet Patterns Impacting Minors;

🛑  EU Commission Sends Preliminary Findings to X for DSA Violations;

💰 Nigeria's FCCPC Fines Meta and WhatsApp USD 220M for Privacy Violations;

🇮🇹   Italian Competition Authority Initiates Investigation into Google for Unfair Practices;

📝  Turkish SCCs and BCRs Now in Effect;

🧑‍💼 New DPO Regulation in Brazil;

📉  Dutch Authority for Consumers and Markets Reports Widespread Non-Compliance with EU Digital Laws;

📚 Danish DPA Reports Municipalities' Steps Toward Compliance in the Google Chromebook Case;

📢  noyb Files Complaint Against Xandr for GDPR Violations in RTB;

📹  No Room for Privacy: Airbnb's Hidden Camera Crisis.


 

 🇪🇺 EU AI Act was published in the Official Journal

 

The star of today's edition is, of course, the fact that the European Union AI Act, by its full name Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence and amending [several other regulations] (Artificial Intelligence Act), was published in the Official Journal of the EU on 12 July 2024 (see here).

For easier reading of the AI Act, I recommend the AI Act Explorer webpage published by the Future of Life Institute.

 

Implementation Timeline

  • 1 August 2024: AI Act enters into force.

  • 2 February 2025: Prohibitions on unacceptable risk AI systems take effect, as well as AI literacy requirements (I recommend Article 3(56) to see what that means).

  • 2 August 2025: Rules for general-purpose AI models and designation of national authorities come into force.

  • 2 August 2026: Full application of high-risk AI system regulations and establishment of AI regulatory sandboxes.

  • 2 August 2027: Rules for high-risk AI systems in specific sectors, like toys and medical devices, apply.



 🎛️ EDPB issued a statement on the role of data protection authorities (DPAs) in enforcing the AI Act

 

On 16 July 2024, the European Data Protection Board (EDPB) released a statement on the role of Data Protection Authorities (DPAs) within the AI Act framework, triggered by the publication of the AI Act in the Official Journal.

 

Background and Context

The AI Act mandates Member States to designate Market Surveillance Authorities (MSAs) by 2 August 2025 to oversee the regulation's application and implementation. The EDPB's statement underlines the crucial role of DPAs in this framework due to their extensive experience in managing AI's impact on personal data and fundamental rights.

 

Complementarity of Legal Framework

The EDPB stresses that the AI Act and EU data protection legislation should be seen as complementary and mutually reinforcing. Both aim to safeguard fundamental rights, with the AI Act enhancing the effectiveness of the GDPR and other data protection laws in regulating AI technologies.

Personal data processing is integral throughout the lifecycle of high-risk AI systems, which form the core of various AI technologies. The EDPB highlights that such processing is central to the AI Act's regulatory scope, ensuring that AI systems comply with data protection standards.

 

Key Recommendations

The EDPB provides several key recommendations:

  1. Designation as MSAs: DPAs should be appointed as MSAs for high-risk AI systems used in law enforcement, border management, administration of justice, and democratic processes, as specified in Article 74(8) of the AI Act.

  2. Consideration for Other High-Risk Systems: Member States should also consider DPAs for other high-risk AI systems, particularly those affecting personal data processing, in consultation with national DPAs.

  3. Single Points of Contact: DPAs appointed as MSAs should serve as the primary contact points for public and regulatory bodies at national and EU levels.

  4. Cooperation Procedures: Clear cooperation mechanisms must be established between MSAs and other regulatory authorities supervising AI systems, including DPAs.

 

Resource Needs

The EDPB also underscores the necessity for additional human and financial resources to support DPAs in their expanded roles. Adequate resources are essential for DPAs to effectively undertake new tasks related to AI system supervision.

It’s safe to say that this statement does not come as a surprise, and it is widely expected that not only DPAs, but also privacy professionals, will play a key role in AI governance going forward.

You can find the EDPB statement here.



 🇫🇷 CNIL Issues Guidance on Interplay between the AI Act and GDPR

 

The CNIL has published an FAQ addressing the relationship between the European AI Act and the GDPR, stressing that the two instruments complement rather than replace each other: the AI Act enhances data protection principles while addressing AI-specific challenges. The CNIL emphasizes that compliance with both regulations is crucial for organizations deploying AI technologies.

Understanding whether the AI Act, GDPR, or both apply involves considering the nature of the AI system and the data it processes:

  • Only the AI Act Applies: High-risk AI systems not processing personal data.

  • Only the GDPR Applies: AI systems processing personal data but not classified as high-risk.

  • Both Apply: High-risk AI systems processing personal data.

  • Neither Applies: Minimal risk AI systems not involving personal data processing.

The AI Act introduces specific provisions that complement GDPR compliance:

  • Prohibited Practices: Practices involving personal data prohibited under the AI Act must also comply with GDPR.

  • General-Purpose AI Models: These models often rely on personal data, necessitating adherence to both regulations.

  • High-Risk AI Systems: Compliance with both regulations is required for systems processing personal data.

  • Specific Transparency Risk: Transparency obligations under both regulations for systems interacting with individuals.

Several other complementarities as well as differences are explained. See the full FAQ here.



 🤖 Hamburg DPA Launches GDPR Discussion Paper on Personal Data in LLMs

 

The Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI) has published a discussion paper analyzing the General Data Protection Regulation (GDPR) applicability to Large Language Models (LLMs). This document aims to provoke discussion and assist companies and public authorities in managing the intersection of data protection law and LLM technology.


Technical and Legal Analysis

The paper clarifies the distinction between LLMs as AI models and as components of AI systems, in line with the AI Act, which entered into force on 1 August 2024. It concludes:

  1. LLMs and Data Processing:

    • The mere storage of an LLM does not constitute data processing per GDPR Article 4(2) because LLMs do not store personal data.

    • Personal data processed within an LLM-supported AI system must comply with GDPR requirements, particularly concerning output.

  2. Data Subject Rights:

    • As LLMs do not store personal data, GDPR-defined data subject rights (e.g., access, erasure, rectification) do not apply to the model itself. However, these rights can relate to the input and output managed by the AI system's provider.

  3. Training Compliance:

    • Training LLMs with personal data must follow data protection regulations and uphold data subject rights. However, violations during the training phase do not affect the lawful use of the model within an AI system.

 

Practical Implications

The discussion paper outlines several practical implications:

  1. Unlawful Training:

    • If a third party unlawfully processes personal data during LLM training, it does not affect the legal deployment of the LLM by another entity.

  2. Data Subject Requests:

    • Individuals can request information, rectification, or erasure regarding the input and output of an LLM-based AI system, but not the LLM itself.

  3. Fine-Tuning LLMs:

    • Organizations should use minimal personal data for fine-tuning and prefer synthetic data when possible (see the sketch after this list). Legal bases and data subject rights must be ensured if personal data is used.

  4. Local LLM Operation:

    • Storing LLMs locally is not a data protection issue, but AI systems must enable data subject rights and prevent privacy attacks.

  5. Third-Party LLMs:

    • When contracting third-party LLM providers, organizations must ensure GDPR compliance regarding input and output and clarify responsibilities (e.g., joint controllership).
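To illustrate the fine-tuning point above, here is a minimal, hypothetical sketch of data minimisation before fine-tuning: redacting obvious identifiers so that less personal data reaches the training set. The regex patterns and sample corpus are my own illustration; a real pipeline would rely on dedicated PII-detection tooling.

```python
# Hypothetical pre-processing step before LLM fine-tuning: strip obvious
# identifiers (e-mails, phone-like numbers) from the training corpus.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace e-mail addresses and phone-like numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

corpus = [
    "Contact Jane at jane.doe@example.com or +49 40 1234567.",
    "Meeting notes without personal data.",
]
fine_tuning_data = [redact(doc) for doc in corpus]
print(fine_tuning_data)
# ['Contact Jane at [EMAIL] or [PHONE].', 'Meeting notes without personal data.']
```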



 


⚗️ Singapore's PDPC Issues Guide on Synthetic Data Generation

 

On 15 July 2024, Singapore's Personal Data Protection Commission (PDPC) published the "Privacy Enhancing Technology (PET): Proposed Guide on Synthetic Data Generation". This guide, developed with industry collaboration, aims to provide organizations with comprehensive recommendations on creating and using synthetic data while ensuring privacy and compliance with data protection regulations.

Synthetic data, created through mathematical models or AI/ML algorithms, mimics the characteristics and structure of real data without revealing personal information. This guide emphasizes the potential of synthetic data to drive innovation, enhance AI model training, and facilitate data sharing and software testing. However, it also warns of the inherent re-identification risks and stresses the need for robust data protection measures.

 

Key Applications

The guide identifies several key applications for synthetic data:

  • AI Model Training: Synthetic data can augment training datasets, especially when real data is sparse or expensive to obtain. For example, J.P. Morgan successfully used synthetic data to improve fraud detection models.

  • Data Sharing and Analysis: Synthetic data enables data sharing in sensitive sectors like healthcare without compromising privacy. Johnson & Johnson utilized synthetic data to improve healthcare data analysis for external research.

  • Software Testing: Using synthetic data in development environments helps prevent data breaches and protects sensitive production data.

Recommendations

The PDPC recommends a structured approach to generating synthetic data (a toy end-to-end sketch follows the list):

  1. Know Your Data: Understand the source data and identify the necessary insights and attributes to preserve.

  2. Prepare Your Data: Apply data minimization, remove direct identifiers, and document data attributes in a data dictionary.

  3. Generate Synthetic Data: Use appropriate methods like Bayesian networks or GANs, ensuring data integrity, fidelity, and utility.

  4. Assess Re-identification Risks: Evaluate the risk of re-identification using methods like attribution disclosure and membership disclosure.

  5. Manage Residual Risks: Implement governance, technical, and contractual controls to mitigate any remaining risks.
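To make these steps concrete, here is a toy sketch of my own (not from the guide): it drops a direct identifier, fits a simple multivariate Gaussian as the generative model (a stand-in for the Bayesian networks or GANs the guide mentions), and uses a naive nearest-record distance as a crude proxy for a re-identification assessment.

```python
# Toy synthetic-data pipeline loosely following the PDPC's steps.
# Column names, data, and the risk threshold are all hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Step 2 - prepare: drop direct identifiers, keep only needed attributes.
real = pd.DataFrame({
    "name": [f"person_{i}" for i in range(500)],       # direct identifier
    "age": rng.integers(18, 90, 500),
    "income": rng.lognormal(10, 0.5, 500).round(2),
})
prepared = real.drop(columns=["name"])                 # data minimisation

# Step 3 - generate: fit a multivariate Gaussian and sample from it.
mean = prepared.mean().to_numpy()
cov = np.cov(prepared.to_numpy(dtype=float), rowvar=False)
synthetic = pd.DataFrame(rng.multivariate_normal(mean, cov, size=500),
                         columns=prepared.columns)

# Step 4 - assess: flag synthetic rows suspiciously close to a real record
# (a naive stand-in for attribution/membership disclosure testing).
real_arr = prepared.to_numpy(dtype=float)
dists = synthetic.apply(
    lambda r: np.linalg.norm(real_arr - r.to_numpy(), axis=1).min(), axis=1)
risky = int((dists < 1.0).sum())                       # arbitrary threshold
print(f"{risky} of {len(synthetic)} synthetic rows sit very close to a real record")
```

A real assessment would use the disclosure metrics named in the guide and normalise attribute scales before measuring distances.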

This guide is meant as a living document, set to evolve with advancements in synthetic data technologies and methodologies. You can find it here.

 


📣 Open Rights Group Complains to ICO About Meta's AI Data Use

 

Open Rights Group (ORG) has formally lodged a complaint with the UK Information Commissioner’s Office (ICO) against Meta Platforms, Inc. for planned changes to its privacy policy. The complaint, filed on behalf of five ORG staff who use Meta services, contests Meta’s intention to utilize user data from Facebook and Instagram for AI development under the "legitimate interests" legal basis.

 

Complaint Background

Meta notified UK users in late May 2024 about privacy policy updates set to take effect on 26 June 2024. The policy outlined that Meta would employ personal data for AI development, claiming this use fell under "legitimate interests." However, Meta's communication did not assure users that their objections would be honored, and once data is used for AI training, that use cannot be reversed, negating the possibility of retrospective consent.

On 6 June 2024, noyb filed GDPR complaints across 11 EU member states, prompting DPAs to call for an immediate halt to Meta's data practices. Following this, Meta announced on 14 June that it would pause the changes. The ICO also confirmed that Meta had paused and was reviewing its AI data use plans. Despite these actions, no legal amendment to Meta's privacy policy ensures a permanent stop to the data processing, leaving users unprotected.

On 2 July 2024, the Brazilian DPA also ordered a suspension of Meta's AI data use (see the previous edition on this).

 

ORG’s Demands

ORG’s complaint insists the ICO:

  1. Issue an immediate, legally binding decision under Article 58(2) UK GDPR to stop processing the personal data of over 50 million UK users without consent.

  2. Conduct a thorough investigation under Article 58(1) UK GDPR.

  3. Prohibit the use of personal data for AI development without obtaining opt-in consent from users.

The press release and complaint are available here.

 


🤖 Irish DPC Highlights GDPR Challenges with AI and Data Protection

 

The Irish Data Protection Commission (DPC) has raised concerns about the growing use of Artificial Intelligence (AI), particularly Generative AI (Gen-AI) and Large Language Models (LLMs). These systems, which have become popular and widely accessible, utilize Natural Language Processing (NLP) to mimic human speech. They serve various functions, from internet search and creative writing to assisting with software development and solving academic problems. AI systems also aid in document summarization, keyword extraction, and numerous industrial, financial, legal, educational, and medical tasks.

 

Data Processing and AI

The DPC emphasizes that the development and use of AI systems involve significant personal data processing, introducing associated risks. Key concerns include:

  • Training Data Use: Large datasets, often containing personal data, are used during AI training without user knowledge or consent.

  • Accuracy and Retention: The accuracy and retention of personal data can lead to biases and decision-making issues.

  • Data Sharing: Sharing AI models with third parties can lead to the misuse of personal data.

  • Bias in AI: Incomplete training data may introduce biases, affecting individuals' rights.

  • Re-training Risks: Incorporating new personal data into models can expose users to new risks.

 

GDPR Compliance for Organizations

The DPC advises organizations to assess AI system risks and ensure GDPR compliance:

  • Risk Assessments: Organizations need to evaluate the risks associated with personal data processing in AI systems to ensure GDPR compliance.

  • Data Flow Understanding: Understanding how personal data is used, processed, and stored by AI systems is crucial.

  • Safeguarding Data Subject Rights: Organizations must implement processes to facilitate data subject rights, including access, rectification, and erasure of personal data.

  • Third-Party Risks: When using third-party AI products, organizations must ensure that personal data is protected and understand the data processing practices of the AI provider.

  • Automated Decision-Making Risks: Purely automated decisions can have significant consequences for individuals, so human oversight is necessary to mitigate these risks.

  • Storage Limitation: Organizations must establish retention schedules to comply with the principle of 'storage limitation.'

 

Responsibilities of AI Product Designers and Providers

AI product designers and providers must adhere to GDPR obligations as highlighted by the DPC:

  • Purpose and Goals Assessment: Evaluate if AI is the best method for processing personal data and consider less risky alternatives.

  • Data Collection Considerations: Even publicly accessible personal data falls under GDPR; ensure a legal basis for data processing.

  • Transparency: Inform data subjects about data processing practices and their rights.

  • Impact Assessments: Perform data protection impact assessments, especially for new technologies, combined datasets, or data related to minors.

  • Legal Agreements: Ensure a legal basis for data sharing agreements and fair processing.

  • Storage Limitation: Implement processes to meet the principle of 'storage limitation.'

  • Security Measures: Protect AI models and personal data from unauthorized use and malicious activities.

See the blog here.


 

🇫🇷 CNIL publishes Q&A on the Use of Generative AI Systems

 

On 18 July 2024, CNIL released a comprehensive FAQ on generative AI systems, aiming to guide organizations on compliance with the GDPR and the forthcoming AI Act. Generative AI, encompassing text, code, images, and more, has transformative potential but also carries significant risks.

 

Benefits and Risks of Generative AI

Generative AI systems excel in creating personalized, high-quality content across various media. However, they operate on probabilistic logic, leading to plausible but potentially inaccurate results, known as "hallucinations." This raises trust issues and complicates bias detection. Additionally, these systems pose misuse risks, including disinformation and malicious code generation.

 

Approaches to Using Generative AI

Organizations can choose between off-the-shelf models and developing custom models:

  • Off-the-Shelf Models: These are readily available proprietary or open-source models, adaptable through pre-prompt instructions.

  • Custom Models: Developing custom models requires significant resources but allows for specific adaptations. Performance can be enhanced by connecting the model to a knowledge base (retrieval-augmented generation, RAG) or by fine-tuning pre-trained models, though both methods demand substantial resources (a toy retrieval sketch follows this list).
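As a rough illustration of the RAG idea (everything here is a toy: the documents, the bag-of-words "embedding" standing in for a real embedding model, and the prompt format), the retrieval step might look like this:

```python
# Toy retrieval step of a RAG setup: find the most relevant internal
# document and prepend it to the prompt sent to the generative model.
from collections import Counter
import math

docs = {
    "leave_policy": "Employees accrue 25 days of annual leave per year.",
    "expense_policy": "Travel expenses require pre-approval by a manager.",
}

def embed(text: str) -> Counter:
    # Bag-of-words stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    q = embed(query)
    return max(docs.values(), key=lambda doc: cosine(q, embed(doc)))

question = "How many days of annual leave do I get?"
prompt = f"Answer using this context:\n{retrieve(question)}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the generative model
```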

 

Choosing the Right System

Organizations should match AI systems to their specific needs, considering risks and system limitations. Factors to evaluate include safety, relevance, robustness, absence of biases, and compliance with applicable rules. System documentation and external evaluations are crucial for informed decision-making.

 

Deployment Methods

Generative AI can be deployed on-premise, via cloud infrastructure, or through APIs:

  • On-Premise: Offers better data security but is costly.

  • Cloud: More accessible but requires stringent contracts to secure data.

  • APIs: Simplifies control but requires caution with personal data and clear contractual terms.

 

Implementation and Management

Compliance with GDPR necessitates risk analysis and governance strategies. Organizations must define roles, ensure data security, and regulate AI use through internal policies. Data Protection Officers (DPOs) play a pivotal role in managing data protection issues and ethical concerns.

 

Training and Awareness

End-users must be trained on AI system functionalities, limitations, and authorized uses. They should verify data inputs and output quality, avoiding confidential information. Training should include recognizing biases and preventing automation bias.

 

Governance

Regular checks and user feedback are vital for compliance and system improvement. Establishing ethics committees or appointing dedicated contacts can enhance oversight, especially for sensitive uses.

 

GDPR and EU AI Act Compliance

CNIL's recommendations cover dataset creation and AI training, emphasizing GDPR compliance. The EU AI Act, entering into force on 1 August 2024, categorizes AI systems by risk level and mandates transparency and accountability for general-purpose AI systems.

 

Get it here.


 

🧠 CNIL's Guidance on Deploying Generative AI

 

Also on 18 July, CNIL released a first set of guidelines to assist organizations in deploying generative AI systems responsibly, with a focus on data protection compliance.

 

Key Recommendations

  1. Start with Specific Needs: Deploy AI systems only to address identified uses, avoiding deployments without a concrete purpose.

  2. Supervise Use: Define and enforce authorized and prohibited uses to mitigate risks, such as avoiding the input of personal data and limiting decision-making roles for AI.

  3. Acknowledge System Limitations: Understand the probabilistic nature of AI, which may produce plausible but incorrect results, and the need for critical verification of outputs.

  4. Choose Secure Deployment: Prefer local, secure systems or evaluate the security of external service providers. Ensure service providers do not misuse data provided to the AI.

  5. User Training and Awareness: Educate end users on the risks and limitations of AI, emphasizing the importance of verifying AI-generated content and prohibiting the input of sensitive data.

  6. Implement Governance: Ensure GDPR compliance by involving stakeholders like data protection officers and information systems managers from the beginning. Regularly review and update policies to address new risks and ethical concerns.

  7. Select Appropriate Systems: Match the AI system with specific needs, ensuring it is secure, robust, and free from biases. Evaluate system documentation and, if needed, request external evaluations.

  8. Deployment Mode: For non-confidential uses, cloud-based or API services might be acceptable with proper safeguards. For handling personal or sensitive data, "on-premise" solutions are preferred to minimize third-party data extraction risks.

  9. Continuous Monitoring: Establish an ethics committee or referent to oversee AI deployment, conduct regular audits, and collect user feedback to adapt to emerging risks and best practices.

  10. GDPR Compliance: Follow CNIL's recommendations for AI development, including data protection impact assessments (DPIA) and involving the Data Protection Officer in overseeing compliance. Ensure transparency with users regarding AI use.

You can find the guidance here.



📜 Baden-Württemberg DPA published a navigator for AI & data protection guidance (ONKIDA)

 

On 19 July 2024, the LfDI Baden-Württemberg announced the release of the 'Navigator AI & Data Protection Guidance' (ONKIDA). This tool serves as a comprehensive reference for regulatory documents related to artificial intelligence (AI) and data protection, aiming to support controllers, such as authorities and companies, in understanding and adhering to GDPR requirements.

The tool features a matrix overview with:

  • Rows: Major data protection requirements relevant to AI applications involving personal data.

  • Columns: Guidance from various supervisory authorities on the interface between GDPR and AI. Each entry links to the original documents and specifies where the respective paper contains relevant statements.

Get it here in German.


 

🧩 Regulatory Mapping on AI in Latin America

 

Access Now has published the "Regulatory Mapping on Artificial Intelligence in Latin America," a comprehensive report outlining AI governance across the region. This report, developed with TrustLaw's pro bono legal network and supported by the Patrick J. McGovern Foundation, provides an in-depth analysis of AI definitions, soft law instruments, national strategies, and draft legislation in countries like Argentina, Brazil, and Mexico. It emphasizes human rights, transparency, and the need for region-specific AI policies, aiming to guide public policymakers towards effective AI regulation while promoting technical development and ethical standards.

 

Find it here.



📸 Polish DPA Publishes Guide On Protecting Children's Privacy Online

 

On 8 July 2024, the Polish Data Protection Authority (UODO), in collaboration with the Orange Foundation, published a comprehensive guide titled "Children's Image on the Internet. Publish or not?" This guide is designed to support institutions, organizations, and adults in safeguarding children's privacy in the digital era.

 

Context 

The guide addresses the widespread practice of sharing children's images online ("sharenting"), which, while often done with good intentions, can lead to severe consequences such as cyberbullying, identity theft, and exploitation on pedophile forums. It aims to raise awareness about the ethical and legal aspects of publishing children's images and provides practical advice for safer practices.

 

  1. Risks Highlighted:

    • Cyberbullying and harassment

    • Creation of harmful deepfakes and memes

    • Identity theft for fraudulent purposes

    • Exploitation in pedophile forums

    • Emotional distress and long-term impact on children

  2. Statistics and Findings:

    • 45.5% of teenagers report their parents share their images online.

    • 23.8% of these teenagers feel embarrassed, and 18.8% feel dissatisfied with this practice.

  3. Ethical and Legal Guidance:

    • Images of children are considered personal data and require careful handling.

    • Adults must be aware of legal obligations and potential negative impacts.

    • Consent for sharing images should be precise, voluntary, and well-documented.

  4. Practical Recommendations:

    • Obtain explicit consent from parents or guardians.

    • Avoid sharing identifiable images of children.

    • Use graphical elements to obscure children's faces (see the sketch after this list).

    • Educate children about their right to privacy and involve them in decisions about their images.

  5. Support for Institutions:

    • Schools, preschools, and organizations must implement child protection standards.

    • Institutions should lead by example in respecting children's privacy rights.
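As a purely illustrative take on the "obscure children's faces" recommendation (the file names are hypothetical, and a production workflow would need a more reliable detector plus human review), a simple blurring step with OpenCV might look like this:

```python
# Hypothetical sketch: detect faces and blur them before publication.
import cv2

img = cv2.imread("school_event.jpg")
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY),
                                 scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    roi = img[y:y + h, x:x + w]
    img[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 30)  # heavy blur
cv2.imwrite("school_event_blurred.jpg", img)
```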

You can find the press release here.



 📌India’s Supreme Court Finds That Google Pin Sharing as Bail Condition Violates Privacy

 

On 8 July 2024, the Supreme Court of India addressed the balance between investigative needs and privacy rights under Article 21 in the case of Frank Vitus versus the Narcotics Control Bureau. The Court ruled that any bail condition enabling police or investigative agencies to monitor every movement of an accused through technology violates the right to privacy guaranteed by Article 21 of the Constitution.

 

Case Background

  • FV, accused of violating the Narcotic Drugs and Psychotropic Substances Act, was granted bail by a Special Judge on 31 May 2022.

  • One of the bail conditions required FV to drop a pin on Google Maps, ostensibly to enable location tracking by the Narcotics Control Bureau.

 

Supreme Court's Rationale

  1. Privacy and Article 21 of the Indian Constitution:

    • The Court reaffirmed that the right to privacy is integral to Article 21, which guarantees the right to life and personal liberty.

    • The condition of real-time tracking infringes on this right, as it constitutes a form of surveillance akin to confinement.

  2. Technical Inefficacy:

    • Google LLC clarified that dropping a pin on Google Maps does not enable real-time tracking but merely marks a static location.

    • The user maintains full control over sharing this location, negating the intended purpose of the bail condition.

 

The Ruling

  • The Court declared the bail condition of dropping a pin on Google Maps redundant since it does not serve the intended real-time monitoring purpose.

  • The ruling emphasized that bail conditions should not be arbitrary or overreach the scope of ensuring attendance and cooperation in the trial process.

The decision is available here.



📘 The German Federal Financial Supervisory Authority Issues Guidelines for DORA Implementation

 

On 8 July 2024, the German Federal Financial Supervisory Authority (BaFin) published guidelines to aid financial companies in implementing the Digital Operational Resilience Act (DORA). Effective 17 January 2025, DORA mandates comprehensive ICT risk management for financial entities. The guidelines address the banking and insurance sectors supervised by BaFin and provide detailed sections to ensure compliance with DORA's requirements.

 

Key Areas Covered:

  1. Governance and Organization:

    • Development of a digital operational resilience strategy.

    • Establishment of an internal governance and control framework specific to ICT risks.

    • Enhanced responsibilities for the management board.

  2. Information Risk and Information Security Management:

    • Shift from information security to a broader ICT risk management approach.

    • Emphasis on continuous risk assessment and mitigation.

    • Strengthening of training and communication protocols.

  3. IT Operations:

    • Maintenance of stable and up-to-date ICT systems.

    • Classification and comprehensive documentation of ICT assets.

    • Broader scope for change management in IT systems.

  4. ICT Business Continuity Management:

    • Detailed guidelines for business continuity plans specific to ICT.

    • Inclusion of diverse scenarios such as climate change and insider threats.

    • Regular testing and updating of continuity plans.

  5. IT Project Management and Application Development:

    • Detailed requirements for project methodologies and risk assessments.

    • Emphasis on secure implementation and rigorous testing of ICT systems.

    • Elimination of materiality thresholds for change management.

  6. ICT Third-Party Risk Management:

    • Broader definition and scope for managing ICT third-party risks.

    • Extensive mandatory contractual provisions for ICT services.

    • Requirements for due diligence and ongoing risk assessment.

 

Context

The guidelines result from extensive collaboration between industry representatives, the German Federal Bank, and BaFin. They align the existing BAIT (banking) and VAIT (insurance) IT requirements with the new DORA framework. The aim is to provide a non-binding aid for companies transitioning to DORA, ensuring they meet the standards for digital operational resilience, ICT risk management, and cybersecurity. The guidelines also include a comprehensive overview of the necessary contract elements for agreements with third-party ICT service providers.

 

By 17 January 2025, most BaFin-supervised companies must fully integrate DORA's ICT risk management framework, replacing BAIT and VAIT where applicable.

 

The press release is available here.



🏢 CNIL Launches Public Consultation on Workplace Diversity Measurement

 

On 9 July 2024, the French data protection authority (CNIL) issued a draft recommendation on conducting workplace diversity measurement surveys, seeking public comments until 13 September 2024. This recommendation aims to guide organizations in measuring diversity, such as disability, age, gender, and social, geographical, or cultural origins, while ensuring compliance with privacy laws and regulations.

Organizations typically measure diversity through surveys collecting personal data on these aspects. The CNIL underscores the need to adhere to data protection regulations and constitutional principles, notably the 2007 Constitutional Council decision that prohibits the collection of real or perceived ethnic-racial data.

 

Key Recommendations

 

Anonymity and Data Protection
  • The recommendation underscores the importance of anonymity in collecting diversity data. According to Constitutional Council Decision No. 2007-557, collecting real or supposed ethno-racial affiliation is prohibited.

  • Surveys must exclude personal identifiers such as names, addresses, phone numbers, and birth dates.

  • If anonymity at data collection isn't feasible, anonymity must be ensured during data processing.

 

Cross-referencing Information
  • Organizations must prevent identification through cross-referencing survey data with other files, especially in small organizations where cross-referencing is easier (a toy check follows below).

  • For online surveys, CNIL recommends using dedicated web pages, excluding personal identifiers, and separating survey responses from technical information.
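As a toy illustration of the cross-referencing concern (the column names and the k threshold are my own, not from the draft recommendation), a small-cell check before publishing any breakdown might look like this:

```python
# Hypothetical small-cell suppression check for diversity survey results:
# any combination of quasi-identifiers shared by fewer than K respondents
# is flagged, since small cells are easiest to re-identify by cross-referencing.
import pandas as pd

responses = pd.DataFrame({
    "age_band":   ["20-29", "20-29", "30-39", "30-39", "30-39", "50-59"],
    "site":       ["Paris", "Paris", "Lyon",  "Lyon",  "Lyon",  "Paris"],
    "disability": ["no",    "yes",   "no",    "no",    "yes",   "yes"],
})

K = 5  # minimum group size before a breakdown may be published
cell_sizes = responses.groupby(["age_band", "site"]).size()
unsafe = cell_sizes[cell_sizes < K]
if not unsafe.empty:
    print("Suppress or aggregate these cells before publication:")
    print(unsafe)
```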

 

Legal Basis and Voluntary Participation
  • Surveys should be based on legitimate interest, ensuring participation is voluntary and responses optional.

  • Results should not directly identify respondents.

  • Involving a trusted third party to administer the survey and collect sensitive data can enhance compliance.

  • Employee consent must be free, explicit, and informed to lift prohibitions under Article 9 of GDPR.

 

Purpose and Data Minimization
  • Data collection should serve a specific, explicit, and legitimate purpose, such as improving equal opportunities at work.

  • Organizations should minimize collected information, using suitable questions to avoid excessive data collection.

  • Employees must be informed about the data controller's identity, DPO contact details, survey objectives, data recipients, retention duration, data subject rights, and the complaint submission process to CNIL.

 

Data Protection Impact Assessment (DPIA)
  • Given the high risk to individuals' rights and freedoms, a DPIA should be conducted before implementing or updating a survey.

 

Public Participation

Public comments on the draft recommendation can be submitted until 13 September 2024. After this period, CNIL will finalize and publish the recommendation, aiming to balance diversity measurement with stringent privacy protection.

 

The press release is available here.



🤥 Deceptive design under global spotlight - GPEN and country reports

 

On 9 July 2024, the Global Privacy Enforcement Network (GPEN) released a comprehensive report on the pervasive use of deceptive design practices that manipulate privacy choices. The report, based on a global sweep of 1,000 websites and apps conducted with 26 international data protection authorities and the International Consumer Protection and Enforcement Network (ICPEN), sheds light on the global issue of privacy manipulation.

 

Key Findings

The GPEN's investigation uncovered several troubling trends:

  • Complex Privacy Policies: Over 89% of privacy policies were lengthy and difficult to understand.

  • Manipulative Language: 42% of sites used emotionally charged language to sway user decisions.

  • Least Protective Options: 57% made the least privacy-protective options the most prominent and easiest to select.

  • Account Deletion Obstacles: 35% of websites and apps persistently asked users to reconsider deleting accounts.

  • Access Barriers: Nearly 40% presented obstacles to making privacy choices or accessing information, with 9% requiring additional personal information to delete accounts.

 

Country-Specific Reports

 

Canada

The Office of the Privacy Commissioner (OPC) of Canada, alongside provincial counterparts, scrutinized 145 websites and apps, including 67 aimed at children. The findings revealed that deceptive design patterns such as false hierarchy, confirm shaming, and nagging were significantly more common on children's platforms. For example:

  • False Hierarchy: 56% of children's sites emphasized account creation, compared to 24% for other sites.

  • Confirm Shaming: 54% used charged language to deter account deletion, versus 17% for other sites.

  • Nagging: Repeated prompts were found in 45% of children's sites, triple that of other sites.

 

Bermuda

The Bermuda Office of the Privacy Commissioner (PrivCom) assessed 196 organizations. Key findings included (source):

  • Privacy Notices: Only 40% had a privacy notice or terms and conditions.

  • Privacy Officer Contacts: 22% provided contact details for a privacy officer.

  • Regulatory References: A mere 3% referenced PrivCom.

  • Missing Policies: 7% had non-functional links to privacy policies.

 

Hong Kong

The Office of the Privacy Commissioner for Personal Data (PCPD) joined the GPEN sweep and emphasized the need for businesses to enable informed privacy-protective choices by making the most protective options default and avoiding biased language and design (source).

 

Germany

The Baden-Württemberg data protection authority (LfDI Baden-Württemberg) examined 17 websites, all employing deceptive design patterns. The authority stressed compliance with the General Data Protection Regulation (GDPR) and the Telecommunications Digital Services Data Protection Act (TDDDG).

 

Guernsey

The Office of the Data Protection Authority (ODPA) in Guernsey focused on 19 gambling sites. Findings included (source):

  • Hidden Privacy Settings: 42% of sites obscured privacy settings.

  • Complex Policies: Most privacy policies were excessively lengthy.

  • Account Deletion Barriers: Deleting an account was often more challenging than creating one.

 

Malta

The Office of the Information and Data Protection Commissioner (IDPC) in Malta participated in the sweep, focusing on the websites of banks. The IDPC identified common deceptive design patterns aimed at collecting more personal information and complicating privacy choices.

 

GPEN's Recommendations

GPEN calls on organizations to adopt ethical design practices, including:

  • Emphasizing privacy options.

  • Using neutral language to present choices transparently.

  • Simplifying steps to find privacy information, log out, or delete accounts.

  • Providing contextually relevant consent options.



🕵️‍♂️ FTC Review Uncovers Dark Patterns in 76% of Websites and Apps

 

The Federal Trade Commission (FTC) has released the results of an extensive audit examining dark patterns in websites and mobile apps. This audit, conducted between 29 January and 2 February 2024, analyzed 642 global websites and subscription-based mobile apps to identify deceptive practices that influence consumer behavior and privacy decisions. The audit was a collaborative effort between the FTC, the International Consumer Protection and Enforcement Network (ICPEN), and the Global Privacy Enforcement Network (GPEN), involving 27 authorities from 26 countries.

 

Key Findings

  • Prevalence of Dark Patterns: The audit discovered that 76% of the reviewed sites and apps used at least one dark pattern. Additionally, 67% employed multiple dark patterns.

  • Common Dark Patterns:

    • Sneaking Practices: These involve hiding or delaying critical information that could impact consumers' purchasing decisions.

    • Interface Interference: Techniques that obscure essential information or preselect options to nudge consumers toward decisions that benefit businesses.

The FTC's review did not conclude whether these practices violated laws in the 26 countries involved. However, the findings highlight the significant influence of dark patterns on both consumer finances and privacy choices. GPEN's privacy-focused review found that many sites and apps used dark patterns to elicit more personal information from users than they intended to provide.

 

Read the press release here.



🕵️‍♂️ AEPD Reports on Addictive Internet Patterns Impacting Minors

 

The Spanish Data Protection Agency (AEPD) published a report on 10 July 2024, analyzing the impact of addictive patterns on the internet, particularly on minors. The report was presented at the "New Challenges for the Protection of People's Rights in the Face of the Impact of the Internet" course, part of the 2024 Summer Activities of the Menéndez Pelayo International University (UIMP) in Santander.

 

Key Findings:

  • Deceptive and Addictive Design: The report reveals that many online platforms, applications, and services use deceptive and addictive design patterns to prolong user engagement and collect more personal data. These strategies are especially harmful when targeting vulnerable populations, such as children and adolescents.

  • Impact on Minors: Addictive patterns significantly influence minors' preferences, interests, autonomy, and development. These practices pose a threat to their physical and mental integrity, affecting their decision-making and social interactions.

  • Legal and Regulatory Implications: The AEPD report underscores the need for the European Data Protection Board to address addictive patterns in its upcoming guidelines on the interplay between the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA). The DSA already prohibits online platforms from designing interfaces that deceive or manipulate users or hinder their ability to make informed decisions.

 

Classification of Addictive Patterns:

  • High-Level Patterns: Includes forced action, social engineering, interface interference, and persistence.

  • Medium-Level Patterns: Target users' psychological weaknesses and vulnerabilities.

  • Low-Level Patterns: Context-specific execution methods, often detected through algorithms or manual methods.

 

AEPD's Recommendations:

  • Proactive Responsibility: The report emphasizes the importance of proactive responsibility, data protection by design and by default, transparency, legality, and data minimization.

  • Collaboration: The AEPD will continue collaborating with the National Commission on Markets and Competition (CNMC) and promote guidelines that address the intersection of GDPR and DSA concerning addictive patterns.

 

The AEPD's report is not connected to the GPEN sweep on deceptive design; however, it reaches similarly damning conclusions.

 

Find the press release here.



🛑 EU Commission Sends Preliminary Findings to X for DSA Violations

 

On 12 July 2024, the EU Commission issued preliminary findings indicating that X is in breach of the Digital Services Act (DSA). This notification highlights several areas of non-compliance related to dark patterns, advertising transparency, and data access for researchers.

 

The alleged infringements

The DSA mandates transparency and accountability in content moderation and advertising. The Commission's investigation, which included analyzing company documents and consulting experts, found three primary grievances against X:

  1. Verified Accounts:

    • X's "Blue checkmark" verification misleads users by allowing anyone to obtain verified status. This practice undermines users' ability to make informed decisions about account authenticity and exposes them to potential deception by malicious actors.

  2. Advertising Transparency:

    • X's ad repository lacks the necessary transparency. The design features and access barriers make it unfit for proper supervision and research into online advertising risks. The repository's inefficiency fails to meet the DSA's transparency requirements.

  3. Data Access for Researchers:

    • X restricts researchers from accessing public data as required by the DSA. Terms of service prohibit independent data scraping, and the API access process is prohibitively expensive, deterring research.

 

Next Steps

X has the opportunity to review the Commission's findings and defend itself. The European Board for Digital Services will also be consulted. If the findings are upheld, X may face a non-compliance decision, potential fines up to 6% of its worldwide turnover, and mandated corrective actions. An enhanced supervision period could be imposed to ensure compliance, with periodic penalties for non-compliance.

 

You can find the press release here.



💰 Nigeria's FCCPC Fines Meta and WhatsApp USD 220M for Privacy Violations

 

On 18 July 2024, Nigeria's Federal Competition and Consumer Protection Commission (FCCPC) finalized its decision to impose a $220 million penalty on Meta Platforms, Inc. and WhatsApp LLC for violating the Federal Competition and Consumer Protection Act (FCCPA) and Nigeria Data Protection Regulation (NDPR). This decision followed a joint investigation by the FCCPC and the Nigerian Data Protection Commission (NDPC) into WhatsApp's updated privacy policy, which became effective on 15 May 2021.

The FCCPC began its inquiry in May 2021 due to concerns over WhatsApp's privacy policy changes. The investigation revealed multiple violations of Nigerian consumer rights and data protection laws.

 

Key Orders and Penalties

  1. Reinstating Data Rights: Meta and WhatsApp must allow Nigerian users to control their data without losing functionality or needing to delete the app.

  2. Privacy Policy Compliance: The companies must ensure their privacy policy aligns with Nigerian laws, providing clear options for users to consent to data use.

  3. Cease Data Sharing: They must stop sharing WhatsApp users' data with Facebook and other third parties until users give explicit consent.

  4. Revert Data Practices: Meta and WhatsApp must revert to their 2016 data sharing practices and establish an opt-in screen for user consent.

  5. End Data Tying: The companies must stop linking and transferring data between WhatsApp and Facebook markets without express user consent.

  6. Written Assurances: They must assure the FCCPC in writing that they will cease all FCCPA violations.

  7. Remedy Implementation: Meta and WhatsApp must implement their proposed remedy package within 15 days and share it with Nigerian users.

  8. Investigation Costs: The companies must reimburse the FCCPC $35,000 for investigation costs within 60 days.

  9. Penalty Payment: They must pay the $220 million fine within 60 days.

The press release, final order, investigative report, and executive summary are all available here.



🇮🇹 Italian Competition Authority Initiates Investigation into Google for Unfair Practices

 

On 18 July 2024, the Italian Competition Authority launched an investigation into Google and its parent company, Alphabet. The investigation focuses on Google's methods of requesting user consent for linking its various services, which the Authority suspects may constitute misleading and aggressive commercial practices.

 

Context and Allegations

The Authority asserts that Google's consent requests provide inadequate, incomplete, and misleading information regarding the real impact of consenting on the use of personal data. This lack of clarity can affect users' decisions on whether and how much consent to provide.

 

Key Issues Identified

  • Inadequate Information: The consent requests reportedly fail to inform users adequately about the effects of their consent on data usage.

  • Misleading Practices: Information provided is allegedly imprecise, potentially leading users to misunderstand the implications.

  • Combination and Cross-Use of Data: Users are not clearly informed about the combination and cross-use of personal data across Google's numerous services.

  • Conditional Freedom of Choice: The methods and techniques used by Google to obtain consent may pressure users into making decisions they might not otherwise make.

 

Next Steps

The Authority will further scrutinize Google's consent mechanisms and the information provided to users. This investigation could lead to significant changes in how Google requests and manages user consent.

 

The press release is available here.



🧑‍💼 New DPO Regulation in Brazil

 

The Brazilian National Data Protection Authority (ANPD) published Resolution CD/ANPD No. 18 on 17 July 2024, establishing comprehensive guidelines for Data Protection Officers (DPOs).

 

Key Provisions

  • Appointment: The regulation mandates a formal appointment process for DPOs, requiring a clear, documented act by data controllers or processors. Small-scale organizations are exempt but must provide a communication channel for data subjects.

  • Roles and Responsibilities: DPOs must interact with data subjects and the ANPD, provide internal guidance on data protection practices, and ensure compliance with data protection laws. Their specific duties include:

    • Accepting and addressing complaints and inquiries from data subjects.

    • Receiving and acting upon communications from the ANPD.

    • Advising employees and contractors on data protection practices.

    • Managing data breach incidents and guiding the organization on data protection practices.

    • Assisting in creating and implementing internal data protection policies, conducting impact assessments, and establishing data security measures.

  • Public Disclosure: Data controllers must publicly disclose the DPO's identity and contact information on their websites or other accessible means. This information must be kept up to date and prominently displayed. Note: under the GDPR, only the DPO's contact details, not their identity, need to be disclosed.

  • Support and Resources: Data controllers are required to provide necessary resources, including human, technical, and administrative support, to enable DPOs to fulfill their duties. They must guarantee the DPO's autonomy and ensure they can perform tasks without undue interference.

  • Transparency and Communication: Controllers must ensure the DPO has access to senior management and is involved in strategic decisions involving data protection. They must maintain open lines of communication with data subjects and the ANPD, ensuring the DPO's contact information is easily accessible.

  • Conflict of Interest: DPOs must avoid conflicts of interest and disclose any potential conflicts to their employers. They can serve multiple organizations if they can fulfill their responsibilities without conflicts. DPOs must act ethically, with integrity and technical autonomy, avoiding situations that could compromise their objectivity. Controllers must ensure the DPO does not engage in activities that create a conflict of interest and must implement measures to address any potential conflicts.

  • Compliance and Accountability: Data controllers are responsible for the compliance of data processing activities with the LGPD. They must oversee and support the DPO in maintaining records of data processing activities, ensuring data processing agreements meet legal standards, and implementing data protection policies and procedures.

 

Implementation

The resolution took effect on the date of its publication. Data controllers and processors must review and update their practices to align with these new requirements.

 

You can find the regulation here.



📝 Turkish SCCs and BCRs Now in Effect

 

On 10 July 2024, the Personal Data Protection Authority (KVKK) in Turkey published finalized documents on standard contracts and Binding Corporate Rules (BCRs). This release follows a period of public consultation on draft documents, aiming to ensure adequate safeguards for transferring personal data abroad under the revised Article 9 of the Law on Protection of Personal Data No. 6698.

KVKK's initiative is part of its efforts to align data protection practices with international standards. The documents are designed to provide clear frameworks for entities involved in cross-border data transfers.


  1. Standard Contracts for Data Transfers:
    • Controller to Controller.

    • Controller to Processor.

    • Processor to Processor.

    • Processor to Controller.

They are available in Turkish here, and will apply alongside the obligations under other laws, such as the GDPR. Practically, if you are an EU processor selling a service to a Turkish controller, you will need to have in place both the EU processor-to-controller SCCs (module 4) and the Turkish controller-to-processor SCCs. If you are an EU controller buying a service from a Turkish processor, you will need both the EU controller-to-processor SCCs (module 2) and the Turkish processor-to-controller SCCs.

I haven’t yet done a deep dive into the compatibility of these clauses, but they are likely similar, given that KVKK stated the amendments to the data protection law are based on the GDPR. Fingers crossed.

Note also that under the amended Turkish data protection law, the SCCs need to be notified to the KVKK within 5 business days after signature.


  2. Binding Corporate Rules (BCRs):
    • BCR Application Form for Data Controllers.

    • Companion Guide for Controllers: Details essential issues that data controllers should include in their BCRs.

    • BCR Application Form for Data Processors.

    • Companion Guide for Processors: A guide outlining key issues for data processors to consider in their BCRs.

All documents are available here.



 📉 Dutch Authority for Consumers and Markets Reports Widespread Non-Compliance with EU Digital Laws

 

On 10 July 2024, the Dutch Authority for Consumers and Markets (ACM) reported that many online service providers have yet to comply with the Digital Services Act (DSA) and the Platform-to-Business Regulation (P2B). The findings come from a sample survey of 50 businesses.

 

Compliance Requirements

Since 17 February 2024, the DSA mandates online service providers to implement safety measures, including efficient complaint-handling systems and transparent restrictions on user accounts. The P2B Regulation, effective since 2020, requires transparency in general terms and conditions, search result rankings, and dispute settlements.


Survey Results

ACM's survey revealed notable non-compliance in several areas:

  • P2B Regulation: General terms and conditions were often incomplete or not easily accessible. Key information about account restrictions, search result rankings, and internal complaint handling was frequently missing.

  • DSA: Many platforms lacked accessible points of contact for service recipients and authorities. Information on content moderation was often insufficient or hard to find. Additionally, mechanisms for reporting illegal content were either absent or difficult to locate.

 

The ACM clarified it is the designated regulator for both the DSA and P2B Regulation, but can only enforce these regulations after Dutch implementation laws take effect, expected in early 2025. Despite this, ACM stresses the importance of immediate compliance, offering guidelines and information on its website to assist businesses.

 

The P2B Regulation targets online platforms and search engines, ensuring transparency and fair treatment of business users. The DSA extends to all providers handling user content online, with fewer rules for small businesses. ACM's guidelines aim to aid compliance, and businesses and consumers can file non-compliance reports to help ACM prepare for future enforcement.

 

Read the press release here.


 

📚 Danish DPA Reports Municipalities' Steps Toward Compliance in the Google Chromebook Case

 

The Danish Data Protection Authority (Datatilsynet) has reported progress in municipalities' compliance with the January 2024 ruling on Google Workspace usage in schools, which I wrote about here. KL (Local Government Denmark), representing 52 municipalities, confirmed that, effective 1 August 2024, personal data will no longer be shared with Google for purposes previously deemed unlawful by Datatilsynet.

 

Contract Adjustments and Data Processing

KL communicated that contract modifications ensure personal data processing adheres strictly to municipal instructions, except where EU law requires otherwise. These adjustments address the issue of children's data being shared without a legal basis.

 

Ongoing Issues and Future Steps

Despite addressing the main issue, Datatilsynet noted ongoing concerns. The January ruling emphasized the need for clear contract terms and continuous monitoring of data processing activities. Municipalities have pledged to avoid services processing personal data in non-EU countries without equivalent data protection.

 

Subprocessor Documentation

Datatilsynet has requested an opinion from the European Data Protection Board regarding the obligations of data controllers to document subprocessor use. Once this opinion is available, Datatilsynet expects to make a final assessment of the subprocessing chain in municipalities' use of Google's products.

 

See press release here.



📢 noyb Files Complaint Against Xandr for GDPR Violations in RTB

 

On 9 July 2024, noyb (None Of Your Business) filed a complaint with the Italian Data Protection Authority (Garante) against Xandr Inc., a Microsoft subsidiary, for multiple GDPR violations. The complaint centers on Xandr's alleged non-compliance with GDPR provisions on data minimization, accuracy, transparency, and the rights of access and erasure.

 

Background and Allegations

noyb's complaint focuses on Xandr's Real-Time Bidding (RTB) platform, which facilitates automated ad space auctions. Investigations in June 2023 revealed that Xandr's RTB platform collected extensive sensitive data on users, including health, sexual orientation, political opinions, and financial status. noyb alleges that Xandr mishandled this data and provided inaccurate and conflicting user profiles, which undermines the premise of targeted advertising.

 

Specific Incidents

The complaint outlines two access requests made by a data subject in February 2024: one to Emetriq GmbH, a data broker, and one directly to Xandr. Emetriq responded with detailed data, including 200 market segments and 70 profiling events linked to the data subject. In contrast, Xandr claimed it was unable to identify the data subject and provided neither the requested information nor erasure.

 

noyb’s Requests

noyb's complaint demands that Garante:

  • Ensure Xandr complies with access and erasure requests from data subjects.

  • Limit data processing to what is necessary and relevant for personalized advertising.

  • Correct or erase inaccurate user profiles.

  • Provide effective tools for users to exercise their rights.

  • Impose a fine on Xandr in accordance with GDPR provisions.

 

The complaint highlights Xandr's failure to comply with the GDPR, emphasizing the need for stringent enforcement to protect data subjects' rights and ensure accurate data processing practices.

 

The press release is available here.



That’s it for this edition. Thanks for reading, and subscribe to get the full text in your inbox!


♻️ Share this if you found it useful.

💥 Follow me on Linkedin for updates and discussions on privacy education.

🎓 Take my course to advance your career in privacy – learn to navigate global privacy programs and build a scalable, effective privacy program across jurisdictions.

📍 Subscribe to my newsletter for weekly updates and insights in your mailbox. 
