
The AI & Privacy Explorer #37/2024 (9-15 September)

Welcome to the AI, digital, and privacy news recap for week 37 of 2024 (9-15 September)!







🛡️ China Releases AI Safety Governance Framework

On 9 September 2024, China’s National Information Security Standardization Technical Committee (TC260) launched the ‘Artificial Intelligence Security Governance Framework’. The Framework is designed to uphold a people-centered and ethical approach to AI development, emphasizing the importance of safety and inclusive governance.

The Framework has four components: risks (split below into inherent risks and risks from AI applications), technical countermeasures (what I’d call mitigating measures), governance measures, and safety guidelines.


  1. Inherent Risks in AI:

    • Algorithm risks: Explainability, bias and discrimination, robustness, stealing and tampering, unreliable input, adversarial attacks.

    • Data risks: Illegal collection and use, improper content and data poisoning, unregulated training data annotation, and data leakage.

    • Risks from AI systems: Exploitation through backdoors, infrastructure security, supply chain threats.

  2. Risks in AI Applications:

    • Cyberspace Risks: Content manipulation, misleading users and facilitating authentication bypass, information leakage, cyberattacks and security flaw transmission through model reuse.

    • Real-world Risks: Inducing traditional economic and social security risks, use of AI in illegal activities, and misuse of dual-use technology.

    • Cognitive Risks: Information bubbles, spread of misinformation and manipulation of public perception.

    • Ethical Risks: Social discrimination, challenging the traditional social order, and loss of control over AI in the future.

  3. Technical Countermeasures to Address the Risks:

    • Addressing AI’s Inherent Safety Risks

      • Risks from Models and Algorithms: improve explainability and predictability of AI systems; implement secure development standards to enhance robustness and reduce biases.

      • Risks from Data: enforce security rules on data collection, usage, and personal information processing; strengthen data selection processes to exclude sensitive and biased data.

      • Risks from AI Systems: disclose AI system principles, risks, and capabilities transparently; strengthen risk identification and mitigation on AI platforms.

    • Addressing Safety Risks in AI Applications

      • Cyberspace Risks: establish security protection mechanisms to prevent model interference and ensure reliable outputs.

      • Real-World Risks: set limitations on AI system services to prevent abuse in high-risk scenarios.

      • Cognitive Risks: identify and regulate untruthful or inaccurate outputs through technological means.

      • Ethical Risks: filter training data and verify outputs to prevent discrimination.

  4. Governance Measures – there are ten in total; here are the ones I selected:

    • Tiered Management: Categorize AI applications based on risk levels, set up a testing and assessment system, and enforce compliance with safety standards to prevent misuse.

    • Traceability: Implement digital certificates and explicit labels to track AI systems, ensuring that users can identify information sources and assess credibility.

    • AI Data Security and Personal Information Protection: Define and enforce data security and personal information protection measures throughout AI processes, including training, labeling, and deployment.

    • Responsible AI: Establish guidelines and ethical review systems to ensure AI is developed responsibly, adhering to principles of AI for good and aligning with societal values.

    • Advancing Research on AI Explainability: Conduct research to improve AI decision-making transparency, predictability, and error-correction mechanisms.

    • Promoting International Exchange and Cooperation: Engage in global AI governance discussions, set international standards, and promote collaboration within multilateral frameworks like the UN, APEC, and G20.

  5. Safety Guidelines:

    • For Developers: Strengthen ethical reviews, data security, and prevent biases in AI models.

    • For Service Providers: Inform users about AI risks and provide guidance on responsible use.

    • For Key Area Users: Conduct impact assessments and maintain secure AI operations.

    • For General Users: Raise awareness about AI risks and ensure responsible interaction with AI technologies.

The press release is only available in Chinese, but the framework was published in English, here. It is well worth a read and I think it will come in very handy in AI risk assessments.

There is also a table mapping the risks, mitigating measures and governance measures:


China AI Safety Governance Framework
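
Since the author suggests the Framework as a risk-assessment aid, here is a minimal sketch of how one row of that mapping could be encoded as a risk-register entry. The schema and the sample entry are my own illustration, not an official structure from the Framework.

```python
from dataclasses import dataclass, field

# One row of the Framework's mapping: risk -> technical countermeasures
# -> governance measures. Categories are paraphrased from the Framework;
# the dataclass itself is a hypothetical schema.
@dataclass
class AIRiskEntry:
    risk_category: str                  # e.g. "Inherent risk / data"
    risk: str                           # the specific risk being assessed
    technical_countermeasures: list[str] = field(default_factory=list)
    governance_measures: list[str] = field(default_factory=list)

data_poisoning = AIRiskEntry(
    risk_category="Inherent risk / data",
    risk="Improper content and data poisoning in training data",
    technical_countermeasures=[
        "Strengthen data selection to exclude sensitive and biased data",
        "Enforce security rules on data collection and use",
    ],
    governance_measures=[
        "AI data security and personal information protection",
        "Tiered management with testing and assessment",
    ],
)

print(f"{data_poisoning.risk}: {len(data_poisoning.technical_countermeasures)} countermeasures")
```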

 

📊 EDPB and European Commission to Collaborate on GDPR-DMA Guidance

On 10 September 2024, the European Data Protection Board (EDPB) and the European Commission announced a joint effort to provide guidance on how the General Data Protection Regulation (GDPR) and the Digital Markets Act (DMA) intersect.

The guidance will focus on the obligations of digital gatekeepers under the DMA, such as data handling and interoperability requirements, which intersect significantly with GDPR principles. This cooperation aims to align these obligations, and ensure that digital gatekeepers comply with both regulations.

The press release is here.


 

🧬 German Data Protection Conference Issues Position Paper on Scientific Research Purposes

On 11 September 2024, the German Data Protection Conference (DSK) issued a position paper defining “scientific research purposes” under the GDPR. Since processing for scientific research is privileged under the GDPR, with many of its rules not applying, clarifying what scientific research is and isn’t is quite necessary.

Criteria for Scientific Research:

  1. Methodical and Systematic Approach: Research must use a structured, methodical approach to seek rational truth.

  2. Knowledge Acquisition: Research should aim to generate new insights. Application of existing knowledge does not qualify.

  3. Verifiability: Research findings should be documented and open to scrutiny, supporting peer review and public discourse.

  4. Independence and Autonomy: Research must maintain independence, especially from funders, to ensure unbiased results.

  5. Public Interest: Research should benefit society and not serve solely commercial interests.

Other aspects:

  • The concept of scientific research includes technological development, basic research, and privately funded research, aligning with the EU’s goals for social progress and improved quality of life.

  • The DSK emphasizes balancing data protection rights with research freedom, as outlined in the EU Charter of Fundamental Rights, which considers proportionality and common good.

You can find the paper here.


 

📜 German Data Protection Conference Position on Data Transfers in Asset Deals

On 11 September 2024, the German Data Protection Conference (DSK) released guidance on the handling of personal data in Asset Deals which, unlike Share Deals, involve transferring the data itself and therefore raise distinct data protection challenges. The guidance aims to help businesses, particularly sole proprietors and partnerships, navigate the complex regulatory landscape when transferring assets like customer data, contracts, and employee records during business sales.

Note: data transfer as used here does not mean transfer to a third country, but rather in general from one business owner to another.

Key Points:

  1. Data Transfers in Asset Deals:

    • Asset Deals involve the transfer of business assets such as properties, equipment, and customer data, triggering specific data protection obligations under GDPR.

    • Unlike Share Deals, where only company shares are transferred, Asset Deals require careful evaluation of data transfer legitimacy, particularly when it involves personal data.

  2. Data Transfers Before Deal Completion:

    • During preliminary negotiations (Due Diligence), the transfer of personal data to potential buyers is generally prohibited unless there is explicit consent from affected individuals.

    • Exceptions exist for essential stakeholders (e.g., key personnel) where a legitimate interest may justify data sharing under GDPR Article 6(1)(f).

  3. Customer Data:

    • Different rules apply depending on the stage of the contractual relationship (see the sketch after this list):

      • Contract Negotiation: Data transfer is permitted if necessary for continuing negotiations; otherwise, a notice and opt-out mechanism should be used.

      • Ongoing Contracts: Transfer is allowed if the buyer takes over the contract, enabling them to fulfill contractual obligations under GDPR Article 6(1)(b).

      • Completed Contracts: Data may be shared for compliance with legal retention periods but must be segregated and not used for other purposes without explicit consent.

  4. Employee Data:

    • Employee data transfers are often permissible during business transitions, especially when covered by § 613a of the German Civil Code (BGB), which governs the transfer of employment relationships when a business changes hands. Consent or another legal ground under the GDPR may be required depending on the scenario.

  5. Marketing and Special Data Categories:

    • Marketing using transferred data is restricted and generally requires prior consent, especially for sensitive data types (e.g., health information).

    • Special rules apply for sensitive categories under GDPR Article 9, mandating explicit consent for any transfer.

  6. Responsibility and Compliance:

    • The seller bears responsibility for ensuring lawful data transfers, including maintaining appropriate security measures and informing affected parties as required.

    • Both seller and buyer must fulfill their GDPR obligations, including honoring data subjects’ rights.
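
Purely as a memory aid, here is a sketch that condenses the stage-dependent customer data rules above into a small lookup table. The encoding is my own paraphrase of the DSK’s points, not part of the paper, and no substitute for a case-by-case legal assessment.

```python
from enum import Enum

class ContractStage(Enum):
    NEGOTIATION = "contract negotiation"
    ONGOING = "ongoing contract"
    COMPLETED = "completed contract"

# Stage -> rule of thumb, condensed from the DSK summary above
# (my own paraphrase; not legal advice or an official mapping).
CUSTOMER_DATA_RULES = {
    ContractStage.NEGOTIATION: (
        "Transfer permitted if necessary to continue negotiations; "
        "otherwise use a notice and opt-out mechanism."
    ),
    ContractStage.ONGOING: (
        "Transfer allowed if the buyer takes over the contract, to fulfill "
        "contractual obligations under GDPR Article 6(1)(b)."
    ),
    ContractStage.COMPLETED: (
        "Sharing only to comply with legal retention periods; segregate the "
        "data and do not reuse it without explicit consent."
    ),
}

print(CUSTOMER_DATA_RULES[ContractStage.ONGOING])
```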

You can find the paper here.


 

📝 Google investigated over data protection impact assessment for AI training

On 12 September 2024, the Irish Data Protection Commission (DPC) announced a statutory inquiry into Google Ireland Limited, examining the tech giant’s compliance with data protection obligations under the GDPR. The investigation focuses on whether Google undertook a Data Protection Impact Assessment (DPIA) before processing personal data to develop its AI model, Pathways Language Model 2 (PaLM 2).

Other tech companies, including OpenAI and Meta, have faced similar scrutiny over data use in AI training. Enforcement actions highlight ongoing concerns about AI’s impact on privacy.

The press release is here.


 

📢 EU Commission announces public consultation on SCCs where the importer falls under the GDPR

The European Commission announced that it plans to request public feedback on Standard Contractual Clauses (SCCs) for the specific scenario where the data importer (controller or processor) is located in a third country but is directly subject to the GDPR. The public consultation will start in the fourth quarter of 2024, with the aim of publishing the new SCCs in the second quarter of 2025.

This announcement comes three years after the Commission issued FAQs clarifying that the existing SCCs (issued in June 2021) cannot be used when the importer is directly subject to the GDPR. Despite this, there was no movement on this front until the Dutch data protection authority fined Uber 290 million EUR for lacking exactly this instrument (the decision does not say Uber needed it; that conclusion follows because no other transfer safeguard worked).

You can follow the progress here.


 

⚖️ EU Advocate General Issues Opinion on Automated Decision-Making Transparency (Case C-203/22)

On 12 September 2024, Advocate General Richard de la Tour delivered his opinion in Case C-203/22, addressing key issues related to the GDPR’s provisions on access to information in automated decision-making, specifically Articles 15(1)(h) and 22. The case concerns CK, whose creditworthiness was automatically assessed by Dun & Bradstreet Austria, leading to the denial of a mobile phone contract. CK sought detailed information about the decision-making logic, while Dun & Bradstreet claimed that the algorithm was a trade secret. The Administrative Court in Vienna referred several questions to the Court of Justice, which were examined together by the Advocate General.

The key issue raised by the referred questions is what constitutes “meaningful information about the logic involved” in automated decision-making, and how this interacts with the protection of trade secrets and third-party data under the GDPR.

The Advocate General clarified that “meaningful information” does not require disclosing all technical details of an algorithm. Instead, it involves providing clear, understandable information on three core aspects:

  • Main factors that influenced the automated decision.

  • Weight of factors: how these factors were weighted in the decision-making process.

  • Outcome: the data subject must understand the outcome of the decision based on these factors.

The purpose of providing this information is to enable the data subject to understand the automated process and verify its accuracy, ensuring that decisions are based on correct and consistent data. The information must empower individuals to exercise their rights under GDPR Article 22, including expressing their views and contesting the decision.
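
To make the three aspects concrete, here is a minimal sketch assuming a simple linear credit-scoring model. The model, its factors, weights, and threshold are entirely hypothetical (this is not Dun & Bradstreet’s system); the point is only to show how main factors, their weights, and the outcome can be surfaced without disclosing the full algorithm.

```python
# Hypothetical linear scoring model, used only to illustrate the three
# aspects the Advocate General identifies: main factors, their weights,
# and how the outcome follows from them.
WEIGHTS = {
    "payment_history": 0.5,        # assumed, illustrative weights
    "income_to_debt_ratio": 0.3,
    "length_of_credit_history": 0.2,
}
THRESHOLD = 0.6                    # assumed approval cut-off

def explain_decision(factors: dict[str, float]) -> dict:
    """Build a data-subject-facing explanation: each factor's weighted
    contribution, the overall score, and the resulting outcome."""
    contributions = {name: WEIGHTS[name] * value for name, value in factors.items()}
    score = sum(contributions.values())
    return {
        "main_factors": sorted(contributions, key=contributions.get, reverse=True),
        "weights": WEIGHTS,
        "score": round(score, 3),
        "outcome": "approved" if score >= THRESHOLD else "denied",
    }

# Factor values normalised to [0, 1]; the data subject can check these
# inputs, see how they were weighted, and contest the result.
print(explain_decision({
    "payment_history": 0.4,
    "income_to_debt_ratio": 0.7,
    "length_of_credit_history": 0.5,
}))
```

Such an explanation gives the data subject what the opinion calls for: enough to verify the inputs and contest the outcome, without handing over the model itself.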

The Advocate General emphasized that companies must find a balance between transparency and the protection of intellectual property. They should provide general explanations of how their automated decisions work without revealing proprietary or detailed algorithms that are protected as trade secrets.

He added that while the GDPR supports transparency, it also recognizes the need to protect trade secrets and third-party data. Conflicts between access rights and the protection of sensitive business information or third-party data should be resolved through controlled disclosure, where an authority or court reviews the information without directly sharing it with the data subject. This approach ensures that data subjects receive meaningful information without compromising the rights and freedoms of others or the controller’s intellectual property.

This is a case to watch if you work with or in AI, as the eventual judgment will shape how much information must be provided about algorithmic decision-making.

The opinion is available here.


 

⚖️ CJEU Clarifies GDPR Rules on Contract Performance, Legitimate Interest, and Legal Obligation for Data Processing (Cases C-17/22 and C-18/22)

On 12 September 2024, the Court of Justice of the European Union (CJEU) delivered its judgment in Cases C-17/22 and C-18/22 (ECLI:EU:C:2024:745), addressing the lawfulness of personal data processing under GDPR. The case involved requests by investment fund partners to access contact details of other partners with indirect shareholdings. Specifically, the CJEU examined whether such processing could be justified as necessary for contract performance, legitimate interests pursued by the data controller or a third party, or a legal obligation.

  • Article 6(1)(b) – Contract Performance:

    The Court held that data processing under Article 6(1)(b) is lawful only if it is objectively indispensable to fulfill contractual obligations integral to the contract’s purpose. This means that the primary objective of the contract cannot be achieved without the data processing. Notably, if the agreement explicitly prohibits data sharing, this condition is not met, and processing is deemed unlawful.

  • Article 6(1)(f) – Legitimate Interests:

    The Court ruled that processing based on legitimate interests is permissible only when strictly necessary to achieve those interests, and only if those interests are not overridden by the fundamental rights and freedoms of the data subjects. (Note: at the time of writing, the decision contains an error: paragraph 78(2) refers to point (b) instead of point (f).)

  • Article 6(1)(c) – Legal Obligations:

    The CJEU clarified that data processing is justified under Article 6(1)(c) if it complies with a legal obligation established by Member State law or case law, on condition that the law or case law is clear and precise, its application is foreseeable for those subject to it, and it meets an objective of public interest and is proportionate to it.

You can find it here.


 

📉 FPF Releases Report on Emerging Trends in U.S. State AI Regulation

On 13 September 2024, the Future of Privacy Forum (FPF) released a comprehensive report, “U.S. State AI Legislation: A Look at How U.S. State Policymakers Are Approaching Artificial Intelligence Regulation,” analyzing recent legislative trends across U.S. states.

Key Aspects of the Report

  • Focus on Consequential Decisions: The report reveals that the most frequently introduced state legislative framework is ‘Governance of AI in Consequential Decisions’, examined in detail in Section II of the report. Consequential decisions are usually those that materially affect an individual’s life and opportunities, such as:

    • Employment: Hiring, promotions, and task assignments.

    • Education: Admissions, assessments, and financial aid.

    • Healthcare: Access to treatments, insurance coverage, and patient management.

    • Housing: Rental decisions and mortgage approvals.

    • Financial Services: Credit decisions, loan approvals, and risk assessments.

    • Government Services: Access to benefits and legal determinations.

  • Mitigating Algorithmic Discrimination: A major legislative goal is to prevent algorithmic discrimination by restricting AI systems known to pose discriminatory risks or by requiring reasonable care standards to protect individuals.

  • Developer and Deployer Obligations:

    • Developers: Entities that create AI systems must ensure transparency, conduct risk assessments, and provide documentation to deployers about AI capabilities, limitations, and potential biases.

    • Deployers: Entities that use AI in real-world applications must notify affected individuals, monitor AI systems, and maintain governance programs to manage risks.

  • AI Governance Programs: The report highlights that a core element of AI regulation is requiring AI governance programs as part of compliance efforts. Key elements include designating responsible personnel, conducting impact assessments, and regularly reviewing AI performance to ensure compliance with legal and ethical standards.

  • Consumer Rights: Emerging AI laws often provide consumers with rights, including:

    • Notice and Explanation: Consumers must be informed when AI is used in decisions that impact them.

    • Correction: Individuals can correct errors in the data used by AI systems.

    • Appeal and Opt-Out: Consumers have the right to challenge AI-driven decisions or opt out of AI decision-making processes where feasible.

  • Alternative Approaches: While the dominant approach focuses on consequential decisions, some states are also exploring technology-specific regulations for generative AI and foundational models, recognizing the distinct risks associated with these technologies. Section III of the Report provides some details on these.

My note: after reading the report and its supplement (which has concrete examples from both existing laws and drafts), it seems to me that the focus is more on regulating an extended version of GDPR Article 22 (automated decision-making with significant effect) than on a counterpart to the EU AI Act. Even so, people only get a right to opt out – i.e. to actively request another method before the decision is made. This is of course good and definitely better than nothing, but I wonder whether it is sufficient, especially if the entity is then allowed to refuse the service/product/contract (an aspect not covered in this report).

Read the entire report here.



That’s it for this edition. Thanks for reading, and subscribe to get the full text in a single email in your inbox!

♻️ Share this if you found it useful.

💥 Follow me on LinkedIn for updates and discussions on privacy education.

🎓 Take my course to advance your career in privacy – learn to navigate global privacy programs and build a scalable, effective privacy program across jurisdictions.

📍 Subscribe to my newsletter for weekly updates and insights in your mailbox.

Privacy & digital news FOMO got you puzzled?

Subscribe to my newsletter

Get all of my privacy, digital and AI insights delivered to you weekly, so you don’t need to remember to check my blog. You can unsubscribe at any time.


My newsletter can also include occasional marketing, such as information on my product launches and discounts.


Emails are sent through a processor located outside of the EU. Read more in the Privacy Notice.

It takes less time to do a thing right than to explain why you did it wrong.


Henry Wadsworth Longfellow
