
The AI & Privacy Explorer #23-24/2024 (3-16 June)

  • Jun 20, 2024
  • 27 min read

Welcome to the AI, digital, and privacy news recap for weeks 23 and 24 of 2024 (3-16 June)!

🔍 Find more details on each topic below.

📉 Addressing AI Risks in the Workplace: Workers and Algorithms

On 3 June 2024, the European Parliamentary Research Service published a briefing detailing the impact of algorithms and AI in the workplace. The document highlights the dual nature of AI technologies, which can drive societal progress while raising significant ethical concerns. Current labour laws, established before the advent of AI, struggle to provide adequate regulatory frameworks for these technologies.

Algorithmic Management (AM)

  • Definition: AM involves the use of algorithms to manage tasks traditionally performed by human managers.

  • Benefits: Optimization of operations, increased productivity, data-driven insights.

  • Risks: Job displacement, alteration of company hierarchies, worker surveillance, increased workload and stress.

  • Prevalence: Widely adopted in logistics, manufacturing, and services sectors.

Data Concerns

  • Data Collection: AI systems collect and process large amounts of data, raising privacy and accuracy issues.

  • Decision-Making: The opacity of data-driven decisions complicates accountability.

  • Bias: Machine learning techniques can replicate biases from training data, impacting fairness in the workplace.

EU Legal Framework

  • GDPR: Sets standards for data protection, ensuring transparency and fairness in AI data usage.

  • AI Act: Classifies AI systems by risk, imposes requirements for high-risk systems, including those used in employment.

  • Platform Workers Directive: Introduces specific rights related to algorithmic management, limited to digital platform workers.

European Parliament's Positions

  • Ethical AI Deployment: Emphasizes ethical principles and full human oversight for high-risk AI applications in employment.

  • AI Regulation Advocacy: Calls for robust AI regulation to mitigate power imbalances and enhance worker protections.

Collective Bargaining

  • Human in Control: Advocates for the 'human in control' principle, transparency, and effective oversight of AI systems.

  • Trade Union Negotiations: Unions have successfully negotiated AI-related terms in various countries, ensuring that worker interests are considered in AI deployment.

Looking Ahead

  • Future EU Action: Potential for enhanced AI governance in the workplace through more explicit guidance on worker safety and consultation.

  • GDPR Provisions: Employee data protection provisions offer avenues for improvement.

  • AI Act Limitations: Current limitations suggest a need for additional worker-specific standards.

  • AM Practice Spread: Rapid spread raises questions about extending protections to all workers subject to algorithmic management.

Overall, while the EU has made strides in regulating AI, significant challenges remain in ensuring that the benefits of AI in the workplace are shared equitably among employers and workers.

 

👉 Find the whole piece here.


 

 

📝 Hamburg Commissioner Issues Guidance on Applicant Data Protection and Recruiting

On 6 June 2024, the Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI) released a position paper, "Applicant Data Protection and Recruiting in Focus," addressing the growing importance of safeguarding personal data amid increasing digitalization and AI use in recruitment. The paper covers the legal framework, best practices, and common challenges in handling applicant data.

Principles and Definitions 

The GDPR principles of lawfulness, purpose limitation, and data minimization are crucial when processing applicant data. The paper stresses the importance of transparent and fair processing, with companies needing to clearly define terms like recruiting, headhunting, and active sourcing to ensure consistent understanding and compliance. The HmbBfDI's definitions serve as a guideline for these terms.

Recruiting Phases and Talent Pools 

The recruitment process is divided into phases, each requiring separate legal assessments. Consent is necessary for inclusion in talent pools, with detailed information provided to applicants about the purpose, data processed, storage duration, and withdrawal options. The position paper advises against indefinite data storage and recommends periodic consent renewal to ensure data protection.

Background Checks and AI Applications 

Background checks should adhere to data protection laws, focusing on conventional methods like interviews and references. Internet research is permissible within limits, distinguishing between professional and private networks. AI applications, such as CV parsers, are permitted if they comply with data accuracy principles and do not make automated decisions with legal effects without human involvement. Emotion analysis is generally prohibited due to concerns over necessity and voluntariness of consent.

Outlook 

The document concludes by acknowledging the increasing role of AI in recruiting and the need for stringent data protection measures. Compliance with GDPR and upcoming regulations like the AI Act is essential to ensure fair and non-discriminatory recruitment processes. Companies are encouraged to implement transparent procedures to build trust with applicants and minimize legal risks.

👉 Read the guidance (in English) here.


 

 

🔐 New Security-Focused Software Testing Measure Added to Danish DPA's Catalogue

The Danish Data Protection Agency (Datatilsynet) announced on 4 June 2024 the inclusion of a new security-focused software testing measure in its catalogue of recommended security actions. This addition aims to assist organizations in identifying and mitigating vulnerabilities in newly developed software, aligning with GDPR’s mandate for an appropriate level of security.

Background

In 2023, Datatilsynet released its comprehensive catalogue of security measures to help organizations manage various security risks. Following numerous requests for detailed guidance on testing, especially vulnerability and penetration tests, Datatilsynet has now expanded this catalogue to include a focused measure on software security testing.

Measure Details

The new measure outlines several testing types crucial for software security:

  • Vulnerability and Penetration Tests: Targeting both expected and unexpected functionalities to identify security flaws.

  • Code Reviews: Conducted by someone other than the code developer to find errors and malicious elements.

  • Integration Tests: Ensuring smooth interaction between software modules or IT systems.

  • Log Tests: Verifying proper logging and identifying unnecessary personal data in logs (see the sketch after this list).

  • Encryption Tests: Checking the adequacy of data encryption methods.
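
To make the log test concrete, here is a minimal sketch (my illustration, not part of the Datatilsynet catalogue) of an automated check that scans an application log for email-like strings as a simple proxy for unnecessary personal data; the log path and pattern are assumptions:

```python
import re
from pathlib import Path

# Illustrative pattern: email addresses as one proxy for personal data
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def personal_data_in_log(log_file: str) -> list[tuple[int, str]]:
    """Return (line number, match) pairs for email-like strings in a log."""
    hits = []
    for lineno, line in enumerate(Path(log_file).read_text().splitlines(), 1):
        hits.extend((lineno, m) for m in EMAIL.findall(line))
    return hits

if __name__ == "__main__":
    for lineno, value in personal_data_in_log("app.log"):  # hypothetical path
        print(f"app.log:{lineno}: possible personal data logged: {value}")
```

A real log test would cover more identifier types (names, national ID numbers, session tokens) and run automatically in the build pipeline.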

Importance of Testing

Software testing is essential in uncovering potential security issues. Given the complexity of modern IT systems, integrating comprehensive tests early in the design and development stages can prevent security breaches caused by unintentional functionalities or third-party components.

Practical Application

The measure also emphasizes the need for continuous testing throughout the software lifecycle, including during the development phase and after deployment. It covers specific tests such as:

  • Design Review: To ensure compliance with data protection rules and safeguard against misuse.

  • Load Testing: To evaluate system performance under high demand conditions.

  • Session Management Testing: To prevent unauthorized access through session mishandling (see the sketch below).
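
To illustrate the session management test, here is a minimal sketch assuming a hypothetical staging application with /login, /logout, and /account endpoints (all URLs, endpoints, and credentials are invented): it verifies that a session cookie captured before logout no longer grants access afterwards.

```python
import requests

BASE = "https://staging.example.org"  # hypothetical test environment

def test_session_invalidated_after_logout():
    s = requests.Session()
    # Sign in with throwaway test credentials (hypothetical endpoint)
    s.post(f"{BASE}/login", data={"user": "tester", "password": "test-secret"})
    old_cookie = s.cookies.get("session")

    s.post(f"{BASE}/logout")

    # Replaying the pre-logout cookie must no longer grant access
    replay = requests.get(f"{BASE}/account", cookies={"session": old_cookie})
    assert replay.status_code in (401, 403), "session cookie survived logout"
```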

Documentation and Compliance

Documentation of test results is critical for demonstrating compliance with GDPR’s security requirements under Articles 5 and 32. This measure serves as a preventive and detective action, helping organizations to maintain robust security practices and to respond swiftly to potential vulnerabilities.

 

 

🗂️ FRA publishes GDPR in practice – Experiences of data protection authorities

The European Union Agency for Fundamental Rights (FRA) published a report analyzing the experiences and challenges faced by data protection authorities (DPAs) in the implementation of the General Data Protection Regulation (GDPR). This report aims to complement the forthcoming evaluation by the European Commission, providing practical insights from DPA representatives across the EU. The research is based on 70 qualitative interviews conducted between June 2022 and June 2023 with DPA staff from all 27 EU Member States.

Key Findings and FRA Opinions

  • Inadequate Resources: The report identifies that inadequate resources, both financial and human, are a significant obstacle for DPAs. Despite an overall increase in budgets and staff, most DPAs report insufficient resources to handle their growing workloads. This inadequacy affects their ability to perform tasks such as complaint handling, awareness raising, and independent investigations.

  • High Volume of Complaints: DPAs face challenges managing large volumes of complaints, including many trivial and repetitive ones. This issue forces DPAs to prioritize complaints over other regulatory tasks, which compromises their ability to conduct proactive investigations and provide comprehensive advisory services.

  • Public Awareness and Understanding: There is a gap between public awareness of data protection laws and actual understanding. While many individuals are aware of the GDPR, the specifics of their rights and the obligations of data controllers remain unclear to them. This contributes to the high volume of complaints and highlights the need for better public education on data protection matters.

  • Technological Challenges: New technologies, particularly AI and complex data-processing systems, pose significant challenges for DPAs. The report suggests that the GDPR alone is insufficient to address these challenges effectively. DPAs require additional technical expertise and resources to oversee the implementation of these technologies properly.

  • Cooperation and Tools: The report emphasizes the importance of stronger cooperation between DPAs and the need for additional tools to enhance their supervisory capacities. The European Data Protection Board (EDPB) plays a crucial role in facilitating this cooperation, but it also needs adequate resources to support its activities and assist national DPAs effectively.

Recommendations

The FRA provides several recommendations to address these challenges:

  • Increased Resources: Member States should ensure that DPAs have sufficient financial, human, and technical resources to fulfill their mandates.

  • Enhanced Tools: The European Commission and EDPB should consider introducing new tools and procedures to support DPAs in their investigatory and supervisory tasks.

  • Public Education: Efforts should be intensified to improve public understanding of data protection laws and the specific rights and obligations under the GDPR.

  • Technology-Specific Guidance: The EDPB should develop further guidance on the application of the GDPR to new and emerging technologies to help DPAs manage these complex areas effectively.

The report concludes by acknowledging the efforts of Member States to increase DPA budgets and staff but highlights the ongoing need for significant improvements to ensure the effective implementation of the GDPR across the EU.

👉 Read the report here.

 

🇪🇺 Multistakeholder Expert Group's Report on GDPR Application

The Multistakeholder Expert Group to the European Commission, established in 2017, published a detailed report on 10 June 2024, evaluating the application of the General Data Protection Regulation (GDPR). This group assists the Commission by identifying challenges and providing advice on GDPR implementation from various stakeholders' perspectives, including businesses, civil society, and individual experts. 

Key Findings

  1. Positive Developments:

    • Increased data protection compliance and awareness.

    • Enhanced control over personal data for individuals.

    • Global adoption of GDPR principles, benefiting European companies.

  2. Ongoing Challenges:

    • Legal Fragmentation: Despite attempts at harmonization, inconsistencies in GDPR interpretation by Data Protection Authorities (DPAs) lead to legal uncertainties. This affects sectors like healthcare and pharmaceuticals.

    • Specific Provisions: Issues with data minimization, storage limitation, and the use of compliance tools such as codes of conduct and certifications. Protection of minors' data remains inadequate.

    • SME Challenges: High compliance costs, fear of sanctions, and a need for simplified rules and practical guidance tailored for SMEs.

  3. Data Subjects' Rights:

    • Rights such as access and erasure are the most exercised. However, there are difficulties in quantifying the exercise of these rights and ensuring transparency from controllers.

    • Business sectors report burdens in responding to data subject requests, highlighting the need for pragmatic approaches and increased awareness of rights limitations.

  4. Automation and Competition: Concerns were raised that exercising the right not to be subject to automated decision-making may reveal sensitive information, potentially jeopardizing business secrets and raising competition issues.

  5. Data Portability: The low uptake of data portability rights is attributed to the absence of standardized data formats and to concerns that porting data may affect the rights and freedoms of others.

  6. Transparency Obligations: Many organizations struggle with compliance, often using vague or overcomplicated terms not aligned with GDPR requirements, impacting overall transparency.

  7. Interplay with Other Regulations: There are significant concerns about the GDPR's application alongside other regulations, such as anti-money laundering obligations and the Payment Services Directive (PSD2).

  8. Standard Contractual Clauses (SCCs): Adoption issues for data transfers outside the EU persist due to legal ambiguities and conflicting national DPA advice, complicating cross-border data transfers.

  9. Enforcement Issues: In cross-border cases, lack of coordination between DPAs and differences in national procedures result in slow, inconsistent decisions, hampering effective GDPR enforcement.

Recommendations

  • Harmonization: Improve consistency in GDPR application across member states to reduce fragmentation.

  • Support for SMEs: Develop practical tools, templates, and tailored guidance to assist SMEs in compliance.

  • Enhanced DPA Interaction: Foster better engagement between DPAs and stakeholders to ensure guidelines are practical and timely.

  • Global Engagement: Continue working with international partners to adopt GDPR principles and facilitate cross-border data flows.

The report underscores the importance of maintaining GDPR’s core principles while addressing practical challenges to ensure its effective application across diverse sectors and regions.

👉 Read the full report here.


 

👥 Bavarian Data Protection Authority Issues Joint Controllership Guidance

On 1 June 2024, the Bavarian Data Protection Authority (BayLDA) published comprehensive guidance on joint controllership, as outlined in Article 26 of the GDPR. This guidance addresses scenarios where two or more entities collaboratively determine the purposes and means of processing personal data. It underscores the importance of a clear, transparent agreement detailing each party's responsibilities to ensure compliance with GDPR provisions and protect the rights of data subjects.

Context and Objectives

Joint controllership, a concept introduced by the GDPR, aims to address the increasing interconnectedness of data processing activities among various organizations. The guidance seeks to clarify the application of this concept, especially given its frequent occurrence in practice. The document highlights that joint controllership agreements, while potentially complex, offer significant benefits by clearly distributing data protection responsibilities and improving transparency for data subjects.

Key Provisions

The guidance details several critical aspects:

  • Definition and Scope: Joint controllership arises when entities collaboratively decide on the purposes and means of data processing. This collaboration necessitates a joint agreement that is clear and accessible.

  • Legal Requirements: Article 26 mandates that such agreements specify each party's GDPR obligations, ensuring transparent communication and accountability. The guidance also emphasizes the need for these agreements to reflect the actual data processing dynamics accurately.

  • Practical Recommendations: To aid implementation, the guidance offers examples of joint controllership scenarios and provides templates for drafting agreements. It also advises on handling joint responsibilities in various contexts, such as social media use and inter-organizational collaborations.

Differentiation from Other Roles

The guidance provides crucial distinctions between joint controllership and other data processing roles:

  • Individual Responsibility vs. Non-Responsibility: Joint controllership differs from individual responsibility, where each entity determines the purposes and means of its own processing separately. In contrast, non-responsibility applies to actors who do not decide on the purposes or essential means of processing, such as employees or customers.

  • Processor Role: A processor acts on behalf of a controller, following their instructions without autonomy over the purposes and means of processing. The differentiation is essential as processors and controllers have distinct obligations under the GDPR.

  • Functional Transfer: The concept of functional transfer, previously used under German law, referred to outsourcing tasks with some decision-making freedom for the recipient. This is no longer applicable under GDPR, where clear distinctions between controllers and processors must be maintained.

Legal Consequences

The guidance outlines the legal implications of joint controllership:

  • Liability and Accountability: Joint controllers share liability for GDPR compliance. They must transparently allocate responsibilities, ensuring that data subjects can exercise their rights effectively. Each party is accountable for ensuring that data processing complies with GDPR requirements.

  • Sanctions: Non-compliance can result in significant penalties. Both joint controllers can be held liable for breaches, emphasizing the need for clear agreements and diligent adherence to data protection principles.

  • Legal Obligations: Joint controllers must maintain documentation, such as records of processing activities and data protection impact assessments. They must also implement appropriate technical and organizational measures to safeguard personal data.

Practical Recommendations

The Bavarian Data Protection Authority offers several practical recommendations for entities engaging in joint controllership:

  • Drafting Agreements: Entities should create detailed agreements specifying the roles and responsibilities of each party involved in data processing. These agreements must include provisions on data subject rights, security measures, and notification procedures in the event of data breaches.

  • Transparency and Communication: Clear communication channels should be established between joint controllers to ensure efficient coordination and compliance. Regular meetings and updates can help maintain transparency and address any issues promptly.

  • Role Clarification: Clearly define and document which party is responsible for each aspect of data processing. This includes specifying who handles data subject requests, who ensures data security, and who communicates with data protection authorities.

  • Templates and Examples: Utilize the provided templates and examples as starting points for creating joint controllership agreements. These resources can help ensure that all necessary elements are included and that the agreements meet GDPR requirements.

  • Training and Awareness: Conduct regular training sessions for employees of all involved entities to ensure they understand their roles and responsibilities under the joint controllership arrangement. This can help prevent misunderstandings and ensure consistent compliance.

  • Handling Data Subject Requests: Develop a clear process for managing data subject requests, ensuring that requests are handled efficiently and that data subjects are informed of which entity to contact for specific queries.

  • Risk Assessment and Management: Regularly assess and manage risks associated with joint data processing activities. This includes identifying potential vulnerabilities and implementing measures to mitigate them.

  • Monitoring and Auditing: Implement monitoring and auditing mechanisms to ensure ongoing compliance with the joint controllership agreement and GDPR. This can involve regular reviews of data processing activities and updating agreements as necessary.

👉 Find it here in German, and grab a translation into English here.



 

 

⚖️ AG Opinion on the exemption to provide information under Art. 14(5)(c) GDPR

On 6 June 2024, Advocate General Medina delivered her opinion in Case C-169/23, the first case in which the Court of Justice is asked to interpret GDPR Article 14(5)(c), which concerns derogations from data controllers' obligation to inform data subjects.

Context and Legal Background

The case originated from a complaint by UC, a Hungarian citizen, who received an immunity certificate from the Budapest Metropolitan Government Office. UC claimed the office failed to create or publish a privacy notice regarding the processing of his personal data and to provide sufficient information about the processing's purpose, legal basis, and available rights. The Hungarian supervisory authority initially dismissed the complaint, citing Article 14(5)(c) of the GDPR as a reason for exemption from providing this information.

UC challenged this decision before the Fővárosi Törvényszék (Budapest High Court, Hungary). The court upheld UC's action, annulled the supervisory authority's decision, and ordered a new procedure. The court reasoned that the derogation in Article 14(5)(c) did not apply because some data associated with the immunity certificates were generated by the controller itself, not obtained from another body. These included the serial number, validity period, QR code, bar code, and other alphanumerical codes generated during the procedure.

Under GDPR, data controllers must inform data subjects about data processing when data is not obtained directly from them (Article 14). Article 14(5)(c) provides an exemption from this obligation if the obtaining or disclosure of data is explicitly required by Union or Member State law, provided the law includes appropriate measures to protect the data subject’s legitimate interests.

Questions and Answers

  1. Does Article 14(5)(c) apply to data generated by the controller?

    • Answer: Yes, it applies to all data not obtained from the data subject, including data generated by the controller.

    • Reasoning: The term "obtaining" in Article 14(5)(c) encompasses any method of acquiring data, including generation by the controller. This interpretation aligns with GDPR's transparency objective and ensures data subjects can exercise control over their data. The underlying assumption of the derogation is that EU or Member State law replaces the controller's obligation to provide information, as data subjects are assumed to have sufficient knowledge of the data's obtaining or disclosure. Such laws must expressly concern the obtaining or disclosure of the data and make it mandatory for the controller.

  2. Can supervisory authorities examine national laws for appropriate protective measures under Article 14(5)(c)?

    • Answer: Yes, supervisory authorities have the power to review whether national laws provide appropriate measures to protect data subjects' interests.

    • Reasoning: Supervisory authorities are tasked with enforcing GDPR compliance. This includes assessing whether the law is mandatory for the controller, whether it requires the obtaining or disclosure of personal data, and whether it offers sufficient safeguards for data subjects. If the law does not provide appropriate measures to protect the data subject’s legitimate interests, the obligation to provide information will apply. The Hungarian DPA had argued against this position, considering that such a review exceeds its tasks, but the AG is of the opinion that this is not an examination of the validity of national law, only an examination of whether the data controller is entitled to invoke the derogation towards a particular data subject in a particular situation.

  3. Do national laws need to transpose data security measures as per Article 32 of GDPR?

    • Answer: No, national laws do not need to specifically transpose the data security measures outlined in Article 32.

    • Reasoning: Article 14(5)(c) and Article 32 cover different aspects of data protection. The "appropriate measures" required under Article 14(5)(c) should ensure transparency and fairness, but they do not need to mirror the technical and organizational security measures of Article 32.

TL;DR

AG Medina's opinion clarifies that Article 14(5)(c) of the GDPR exempts controllers from providing information about data processing in any situation when the data is not obtained directly from the subject, provided that there is a legal requirement to obtain or disclose such data and the law in question adequately protects data subjects' interests. Supervisory authorities are empowered to assess these protections, ensuring that transparency and control over personal data are maintained in all processing contexts.

👉 Find the AG Opinion here.


 

📱 Netherlands DPA Issues Guidance on Social Media Use in Education

On 13 June 2024, the Dutch Data Protection Authority (AP) issued detailed guidance for educational institutions on the responsible use of social media platforms. This guidance was prompted by a specific request from an educational institution seeking to utilize social media for marketing and communication purposes, involving the posting of photos and stories of students and teachers.

Key Points

  • Significant Risks:

    • AP Chairman Aleid Wolfsen highlighted substantial risks associated with social media, as these companies often have commercial interests such as selling ads based on detailed user profiles, including sensitive information like health or political views.

    • Schools must ensure that students and teachers understand and consent to how their data is used by social media companies.

    • Without clear agreements, the use of social media platforms is discouraged to avoid potential legal violations.

  • Responsibilities and Compliance:

    • Clear responsibilities must be defined regarding compliance with the GDPR.

    • Schools must have mechanisms to delete photos upon request and prevent misuse of the images.

    • Transparency about who to contact for exercising data protection rights is essential – whether it is the school or the social media company.

  • Data Security:

    • Secure storage of personal data is crucial, especially if stored outside the European Economic Area (EEA).

    • Data protection levels must be equivalent to GDPR standards, and agreements must ensure data security.

    • In countries where the rule of law is not adequately guaranteed, the legality of storing EU citizens' data is highly questionable.

  • Consent:

    • Educational institutions must seek proper consent from students and teachers.

    • Consent must be specific, informed, and documented, detailing any use of data for profiling or advertising purposes by the social media company.

  • Broader Impact:

    • The AP's advice is intended for all educational institutions in the Netherlands to review and potentially revise their social media usage policies.

    • Institutions should ensure they meet GDPR requirements and effectively protect the personal data of students and staff.

    • While the guidance initially addressed a specific case, it aims to inform broader practices across the educational sector, promoting rigorous data protection standards amidst increasing digitalization.

👉 Find the press release and the guideline here.


 

 

🤖 CNIL issues Recommendations on GDPR Compliance for AI Systems

On 7 June 2024, the CNIL published the English translation of its recommendations for aligning AI system development with GDPR requirements (originally published in French in April 2024, following a public consultation). These guidelines aim to help developers and designers navigate the complexities of GDPR while fostering innovation in AI.

Context and Scope

The recommendations address AI systems that process personal data, often used in model training. This includes machine learning systems, general-purpose AI, and those with continuous learning capabilities. The focus is on the development phase, encompassing system design, dataset creation, and model training.

Key Recommendations (“AI how-to Sheets” 1 to 7)

  1. Define an Objective:

    • AI systems must have a well-defined, legitimate objective from the start to limit unnecessary data processing. For example, an AI designed to analyze train occupancy must explicitly state this purpose.

  2. Determine Responsibilities:

    • Developers must identify their role under GDPR, whether as data controllers, joint controllers, or subcontractors. This classification impacts their obligations and responsibilities.

  3. Establish a Legal Basis:

    • AI development involving personal data must have a clear legal basis such as consent, legitimate interest, or public interest. Each basis carries specific obligations and impacts individuals' rights.

  4. Ensure Lawful Data Reuse:

    • Reusing personal data requires legal verification. For data collected directly, compatibility tests are necessary. Publicly available data and third-party datasets must be scrutinized for legality and compliance with GDPR.

  5. Minimize Data Usage:

    • Only essential personal data should be collected and used, adhering to the principle of data minimization. This involves data cleaning, selecting relevant data, and implementing privacy-by-design measures (see the sketch after this list).

  6. Set Data Retention Periods:

    • Personal data must not be kept indefinitely. Developers need to define retention periods aligned with the data’s purpose and inform data subjects accordingly. Data necessary for audits or bias checks may be retained longer with appropriate security measures.

  7. Conduct DPIAs:

    • Data Protection Impact Assessments (DPIAs) are crucial for identifying and mitigating risks associated with AI systems processing personal data. High-risk systems, especially those covered by the AI Act, should mandatorily undergo DPIAs.
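
To show what minimization can look like in a training pipeline (point 5 above), here is a minimal sketch; the dataset and field names are invented for the example, and note that salted hashing is pseudonymisation, not anonymisation:

```python
import hashlib

SALT = b"store-separately-and-rotate"  # keep outside the training pipeline

def minimise(record: dict) -> dict:
    """Keep only the fields the stated purpose needs; pseudonymise the ID."""
    return {
        # e.g. a train-occupancy model needs the measurement, not the person
        "occupancy": record["occupancy"],
        "timestamp": record["timestamp"],
        # a salted hash still allows deduplication without exposing raw IDs
        "user_key": hashlib.sha256(SALT + record["user_id"].encode()).hexdigest(),
    }

row = {"user_id": "u123", "name": "Alice", "occupancy": 0.7,
       "timestamp": "2024-06-03T08:00"}
print(minimise(row))  # "name" is dropped; "user_id" becomes a salted hash
```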

Alignment with the AI Act

The recommendations integrate requirements from the European AI Act, ensuring consistency in data protection. This includes defining roles within AI system development and addressing high-risk AI applications.

Practical Implementation

Developers are encouraged to:

  • Conduct pilot studies to validate design choices.

  • Consult ethics committees for guidance on privacy and ethical issues.

  • Regularly update and document datasets to ensure ongoing compliance.

👉 Read the press release at AI system development: CNIL’s recommendations to comply with the GDPR | CNIL, and the AI how-to sheets 1-7 (in English) at AI how-to sheets | CNIL.

 

A new public consultation has been initiated with regard to further AI guidelines (“AI how-to Sheets” 8 to 12); see below.


 

 

🔍 CNIL launches new public consultation on practical guidelines for developing AI systems

On 10 June 2024, the CNIL initiated a second public consultation on the framework for developing AI systems, releasing a series of practical guides and a questionnaire to help professionals reconcile innovation with privacy rights. This consultation, open until 1 September 2024, aims to provide clear guidelines on how to develop AI systems in compliance with the General Data Protection Regulation (GDPR).

Context

The CNIL's consultation follows initial recommendations published on 8 April 2024 (with the English translation published on 7 June - see my post on that here), which addressed the application of GDPR principles such as purpose limitation, data minimization, and storage duration to AI systems. These recommendations also covered scientific research rules, database reuse, and data protection impact assessments (DPIAs).

New Practical Sheets and Key Issues

The CNIL released a new batch of practical sheets (numbered 8 to 12 in its series) for consultation, addressing the following topics:

  1. Legal Basis for Legitimate Interest in AI Development

  2. Legitimate Interest: Open Source Model Distribution

  3. Legitimate Interest: Web Scraping

  4. Informing Data Subjects

  5. Facilitating Data Subject Rights

  6. Data Annotation

  7. Ensuring AI System Safety

These sheets address important topics like the legal grounds for using personal data in AI, especially the frequently used legitimate interest basis, and offer concrete measures for compliance. They emphasize the need for rigorous oversight of data scraping practices and the importance of transparency and community collaboration in open-source AI development.

Public Consultation and Questionnaire

The consultation includes a questionnaire on GDPR application to AI models, focusing on whether AI models, which may retain training data, should be governed by GDPR. The CNIL seeks insights from AI suppliers, users, and other stakeholders to refine its future guidelines based on real-world risks and mitigation capabilities.

Stakeholder Participation and Timeline

The CNIL encourages a wide range of participants, including companies, researchers, academics, associations, and advisors, to contribute to the consultation. Contributions can be made individually or collectively, and participants are advised to review all relevant sheets before submitting responses.

 

The public consultation will close on 1 September 2024. After analysing the contributions, the CNIL will publish final recommendations on its website later in 2024, along with additional AI-related publications throughout the year.

 

👉 Read the press release here and the AI how-to sheets here (the new ones are at the bottom of the list, 8 to 12).

 

⚙️ Hong Kong's New AI Data Protection Framework Released

On 11 June 2024, the Office of the Privacy Commissioner for Personal Data (PCPD) in Hong Kong released the "Model Personal Data Protection Framework" to guide local enterprises on the ethical procurement, implementation, and management of AI systems that handle personal data. This framework addresses the growing need for AI governance in the face of rapid technological advancements and increasing regulatory challenges.

AI Strategy and Governance

The Model Framework provides guidelines for organizations on AI governance, risk assessment, customization, and stakeholder communication.

Key recommendations include:

  1. AI Strategy and Governance: Establishing an AI strategy that aligns with the organization’s objectives, emphasizing top management's commitment to ethical AI use. It suggests forming an AI governance committee responsible for overseeing the entire AI lifecycle, from procurement to deployment. This committee should include diverse expertise from computer engineering, data science, cybersecurity, and legal compliance.

  2. Risk Assessment and Human Oversight: The framework adopts a risk-based approach, recommending thorough risk assessments to identify and mitigate potential risks, including privacy risks. It outlines the necessity of human oversight in AI operations, with higher-risk AI systems requiring more stringent human control to prevent errors and biased outcomes.

  3. Customization and Management of AI Systems: The framework highlights best practices for data preparation, ensuring compliance with the PDPO. It stresses the importance of using high-quality, relevant data and employing techniques such as anonymization and differential privacy to protect personal data (see the sketch after this list). Organizations are advised to validate and test AI models rigorously before deployment.

  4. Continuous Monitoring and Management: Post-deployment, the framework calls for continuous monitoring and review of AI systems to maintain their reliability and robustness. Organizations should establish mechanisms for transparency, traceability, and auditability of AI outputs, ensuring accountability and compliance with data protection regulations.

  5. Stakeholder Engagement: Maintaining transparency through regular communication with stakeholders, including internal staff, AI suppliers, and regulators.
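
As a concrete illustration of one technique named in point 3 (differential privacy), here is a minimal sketch of the Laplace mechanism applied to a count query; the epsilon value and the statistic are illustrative assumptions, not prescriptions from the Model Framework:

```python
import math
import random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count with Laplace(1/epsilon) noise (sensitivity 1)."""
    b = 1.0 / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# e.g. publishing how many records fed a model, with privacy budget 1.0
print(dp_count(4213, epsilon=1.0))
```

The smaller the epsilon, the more noise is added and the stronger the privacy guarantee.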

The PCPD has also published a leaflet summarizing key recommendations from the Model Framework to assist organizations in understanding and implementing these best practices effectively.

 

👉 Read the press release here, the full framework here and the leaflet here.


 

 

📡 Cyber Security Agency of Singapore Publishes IoT Device Security Advisory

On 5 June 2024, the Cyber Security Agency of Singapore (CSA) published an advisory focusing on the security of Internet of Things (IoT) devices. These devices, embedded with sensors, software, and Wi-Fi connectivity, collect and exchange data over the Internet, transforming daily life and business operations by offering convenience and efficiency. However, their proliferation makes them attractive targets for cyber threats, as they often collect sensitive personal and commercial data.

Common Vulnerabilities

The advisory outlines several common vulnerabilities:

  • Default and Weak Passwords: Many IoT devices come with default or weak credentials, which are easily exploited by attackers.

  • Insecure Network Services: Unencrypted communications between devices and servers can be intercepted.

  • Insecure Interfaces: Poorly designed web interfaces and mobile applications lack basic security measures.

  • Outdated Firmware and Software: Devices often remain unpatched, exposing them to known exploits.

  • Insecure Data Protection: Personal information may be stored without proper access controls or encryption.

  • Inadequate Physical Security: Physical access to devices can lead to hardware manipulation and information extraction.

Protection Measures

CSA recommends several measures to secure IoT devices:

  • Use Strong Passphrases and MFA: Avoid default credentials and use strong passphrases with a combination of characters (a sketch follows this list). Enable Multi-Factor Authentication (MFA) where possible.

  • Update Firmware and Software Regularly: Ensure devices are updated regularly and consider upgrading devices that have reached their end-of-life (EOL).

  • Assess Device Operations: Disconnect devices from the Internet if not needed and disable unnecessary features.

  • Buy Products from Reputable Manufacturers: Choose manufacturers with a good track record in addressing security vulnerabilities and consider devices certified under CSA’s Cybersecurity Labelling Scheme (CLS).

  • Implement Physical Access Control Measures: Restrict physical access to devices and secure connectivity options.
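
For the first point, a diceware-style generator is an easy way to replace default credentials with a strong passphrase; below is a minimal sketch, where the short wordlist is purely illustrative (real wordlists such as the EFF's contain thousands of entries):

```python
import secrets

# Illustrative wordlist only; real diceware lists have ~7,776 words,
# giving roughly 12.9 bits of entropy per word.
WORDS = ["harbour", "lantern", "orbit", "thistle", "quartz",
         "meadow", "cobalt", "ember", "glacier", "sparrow"]

def passphrase(n_words: int = 5) -> str:
    """Join words picked with a CSPRNG (the secrets module)."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "quartz-ember-orbit-meadow-thistle"
```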

Recovery from Compromise

If IoT devices are compromised:

  • Disconnect the Device from the Internet: Prevent further access by attackers.

  • Change Credentials and Perform a Factory Reset: Erase data and reset settings to default.

  • Contact Manufacturer for Assistance: Seek clarification on mitigation measures and consider upgrading or replacing the device if it has reached EOL.

👉 Find the advisory here.


 

🤖 IAPP Publishes AI Governance in Practice Report 2024

The "AI Governance in Practice Report 2024" explores the transformative impact of recent breakthroughs in machine learning technology on the landscape of AI. These advancements have led to sophisticated AI systems capable of autonomous learning and generating new data, resulting in significant societal disruption and a new era of technological innovation.

Context and Importance

As AI systems become more integral to various industries, leaders face the challenge of managing AI's risks and harms to realize its benefits responsibly. The report stresses the need for AI governance to address safety concerns, including biases in algorithms, misinformation, and privacy violations.

AI Governance Framework

The practice of AI governance encompasses principles, laws, policies, processes, standards, frameworks, and industry best practices. It aims to ensure the ethical design, development, deployment, and use of AI. Kate Jones, CEO of the U.K. Digital Regulation Cooperation Forum, highlights the importance of collaborative governance approaches across borders to safeguard people and foster innovation.

Maturing Field of AI Governance

AI governance, though relatively new, is rapidly maturing. Government authorities worldwide are developing targeted regulatory requirements, while governance experts support the creation of accepted principles like the OECD's AI Principles. The report addresses various challenges and solutions in AI governance, tailored to an organization's role, risk profile, and maturity.

Strategic Priority

AI is increasingly becoming a strategic priority for organizations and governments globally. The report underscores the importance of defining a corporate AI strategy, including target operating models, compliance assessments, accountability processes, and horizon scanning to align with regulatory developments.

Key Focus Areas

The report emphasizes transparency, explainability, data privacy, bias mitigation, and security as critical components of AI governance. It discusses the black-box problem in AI systems and the need for strong documentation and open-source transparency. Privacy by design and robust data protection measures are essential, with data protection impact assessments (DPIAs) under the GDPR and fundamental rights impact assessments (FRIAs) under the EU AI Act highlighted.

Mitigating Bias and Ensuring Security

Mitigating bias requires diverse teams and rigorous testing, with demographic parity and fairness objectives in focus. Security risks, such as data poisoning and model evasion, are addressed through conformity assessments and cybersecurity measures under regulations like the EU AI Act and U.S. Executive Order 14110.

 

The report concludes by urging organizations to prioritize AI governance now. It aims to empower AI governance professionals with actionable tools to navigate the complex AI landscape and ensure safe, responsible AI deployment. The full report offers comprehensive insights into applicable laws, policies, and best practices for effective AI governance.

👉 Find the report here.


 

📢 noyb Files Complaint Against Microsoft for Violating Children's Privacy

On 4 June 2024, noyb filed a complaint with the Austrian Data Protection Authority (DPA) against Microsoft for violating children's privacy rights with its “365 Education” services. noyb argues that Microsoft improperly shifts GDPR responsibilities to schools, which lack control over data processing.

Background

Since the pandemic, schools have increasingly adopted digital services like Microsoft 365 Education. However, this adoption has raised significant privacy concerns. Microsoft is accused of using its market power to dictate contract terms, leaving schools with a "take-it-or-leave-it" choice and no negotiating power. Consequently, when students attempt to exercise their GDPR rights, Microsoft deflects responsibility, claiming schools are the data controllers, despite schools having no practical control over the data.

GDPR Violations

The complaint outlines several key violations:

  • Tracking Cookies: Microsoft allegedly installed tracking cookies on students' devices without consent. These cookies collect browser data and track user behavior for advertising purposes. Despite students disabling optional data processing, tracking continued, violating GDPR requirements for consent.

  • Transparency Issues: Microsoft’s privacy documentation is described as vague and fragmented, making it nearly impossible for users, including children and their parents, to understand how their data is processed.

  • Misleading Controller Role: Microsoft claims schools are the data controllers, yet schools have no access to or control over the data, making it impossible for them to fulfill GDPR obligations. Schools cannot realistically enforce or oversee data processing, creating a compliance regime disconnected from reality.

  • Illegal Processing for Microsoft's Own Purposes: The complaint argues that Microsoft processes personal data for its own purposes, exceeding the scope of data processing as a processor. This includes using data for developing and improving products, which lacks a valid legal basis under GDPR.

Requests

noyb requests the DPA to:

  • Conduct a comprehensive investigation into Microsoft's data processing practices.

  • Declare that Microsoft violated GDPR by processing personal data without a legal basis.

  • Prohibit Microsoft from further processing data without valid consent.

  • Impose fines to ensure compliance and protect students' privacy.

Not the first rodeo

If this sounds familiar, it is because the topic has been in the spotlight before, but in Denmark. The Danish DPA has previously sanctioned a municipality (the controller, as it organises the school system) for using Google Workspace (see my post here), and later said that Microsoft raises the same issues (see my post here). In those cases, the ones held responsible were indeed the schools (or rather the municipalities responsible for the school system), and no direct sanction or requirement concerned Google or Microsoft.

noyb is finally pointing the finger in the right direction: the qualification of roles between the parties and the overwhelming position of the tech giants. That is what makes this case one to watch closely.

👉 Read the complaints here.


 

🚨 noyb Files Complaint Against Google’s Privacy Sandbox

noyb has lodged a complaint with the Austrian data protection authority against Google for deceptive practices related to its Privacy Sandbox API, introduced on 7 September 2023. Google’s new system replaces third-party cookies with first-party tracking within Chrome, marketed misleadingly as a privacy feature.

Use of Dark Patterns

The complaint details how Google used dark patterns—manipulative UI/UX design techniques—to trick users into consenting to tracking. The pop-ups presented users with an option to "Turn on an ad privacy feature," implying increased privacy protection. However, this actually enabled extensive tracking of users' browsing histories.

Consent Violations

According to noyb, Google’s practices violate GDPR’s requirements for specific, informed, and unambiguous consent. The complaint highlights that the pop-up’s wording and design misled users about the nature of the data processing, failing to inform them that enabling the feature would allow tracking for targeted advertising.

Allegations and Deceptive Practices

  1. Tracking Scope: The Privacy Sandbox API tracks users' web activities, generating nearly 500 advertising topics based on their browsing history. Advertisers can access these topics to display targeted ads.

  2. Transparency Issues: Google's pop-ups falsely claimed to enhance privacy while actually facilitating tracking, contrary to users' expectations of a privacy feature.

  3. Consent Mechanism: The ambiguous consent process, involving misleading phrases like "Turn it on," does not meet GDPR standards. Google did not clearly inform users that their data would be used for targeted advertising.

  4. Deceptive Pop-up Design: The pop-ups presented a choice to either "Turn it on" or "No thanks," misleading users into thinking they were opting for increased privacy.

  5. Internal Browser Tracking: Despite advertising the feature as a means to protect privacy, the tracking is conducted internally by Google, bypassing third-party cookies.

  6. Misleading Imagery and Language: The use of terms like "protect," "limit," and "privacy features," alongside misleading imagery, reinforced the false message that the feature enhanced user privacy.

  7. Comparison to Competitors: Unlike browsers such as Safari and Firefox, which block third-party cookies by default, Chrome's Privacy Sandbox simply repackages tracking data for targeted ads.

  8. User Confusion: Even GDPR-trained lawyers at noyb found the pop-ups confusing and had to seek clarification from Google.

Compliance and Fines

noyb requests that the Austrian data protection authority investigate the complaint and compel Google to comply with GDPR. The complaint demands that Google stop processing data collected under invalid consent and inform data recipients of the unlawful processing, and it proposes an effective, proportionate, and dissuasive fine, given the extensive impact on Chrome’s user base.

👉 Read the complaint here.


 

⛔ Meta Halts AI Rollout in Europe Following Irish DPC Request

Meta announced on 14 June 2024 that it is pausing the rollout of its AI tools in Europe following a request from the Irish Data Protection Commission (DPC). This decision arose after the digital rights advocacy group noyb filed complaints with 11 European data protection authorities on 6 June, calling for urgent action against Meta’s plans to train AI models using public posts on Facebook and Instagram.

Background and Complaints

noyb, led by privacy advocate Max Schrems, argued that Meta's AI plans were too ambiguous and accused the Irish DPC of complicity in the rollout. Its complaints highlighted concerns over the requirement for users to opt out rather than opt in, which conflicts with the EU's data protection principles that typically necessitate explicit user consent.

Meta’s Response and DPC Involvement

Meta intended to use public content to develop AI large language models for generative AI experiences. However, the DPC's intensive engagement with Meta led to the request to delay these plans. The DPC, in cooperation with other European data protection authorities, will continue discussions with Meta to ensure compliance with data protection laws.

Meta expressed disappointment at the request, noting it had been informing European regulators since March and had recently begun notifying users with an opt-out option. Meta claimed that without local data, it could only provide a subpar AI experience to European users.

Broader Implications and Future Prospects

noyb emphasized that Meta could still implement its AI technology if it sought consent from users instead of relying on legitimate interest as a legal basis. Despite Meta's assertion that European users will miss out on AI innovations, noyb criticized the company for avoiding obtaining valid user consent.

Max Schrems welcomed the DPC's decision but stressed the need for an official change in Meta's privacy policy to make this commitment legally binding. The ongoing complaints will require further official decisions to ensure compliance.

👉 Read more here.



That’s it for this edition. Thanks for reading, and subscribe to get the full text in a single email in your inbox!


♻️ Share this if you found it useful.

💥 Follow me on Linkedin for updates and discussions on privacy education.

🎓 Take my course to advance your career in privacy – learn to navigate global privacy programs and build a scalable, effective privacy program across jurisdictions.

📍 Subscribe to my newsletter for weekly updates and insights in your mailbox. 

 

 

 

 

 
