The AI & Privacy Explorer #26/2024 (24-30 June)
- Jul 7, 2024
- 15 min read

Welcome to the AI and privacy news recap for week 26 of 2024 (24-30 June)!
In this edition:
💰 Avanza Bank Fined 1.34M EUR In Sweden For Misconfiguration Of Meta Pixel;
🛡️ Sweden's PTS Launches E-Service for Cyber Security Act Compliance;
📢 Arkansas AG Sues Temu for Data Theft and Privacy Violations;
🤖 OECD Explores AI, Data Governance, and Privacy Synergies;
🛡️ California AG Bonta Reminds Companies of Health Information Obligations;
📋 EDPB publishes Standardised Messenger Audit;
🤖 EDPB publishes Checklist for AI auditing;
📄 EDPB publishes report on data protection risks of AI for Optical Character Recognition (OCR) and Named Entity Recognition (NER);
🧠 EDPS and AEPD Insights into Challenges of Neurodata Processing for Privacy and Data Protection;
💸 FTC Bans Avast from Selling Web Data and Issues $16.5 Million Fine;
🎾 Tennis Players Raise Privacy Concerns Over ATP's New Wearable Policy.
💰 Avanza Bank Fined 1.34M EUR In Sweden For Misconfiguration Of Meta Pixel
Major Swedish online financial services company Avanza Bank has been fined 15 million SEK (approx. 1.34 million EUR) for unlawfully disclosing personal data to Meta through a misconfigured Meta Pixel. This breached GDPR Articles 5(1)(f) and 32(1): Avanza failed to implement adequate technical and organizational measures to secure personal data, which led to the unauthorized disclosure.
Context
On 8 June 2021, IMY received a breach notification from Avanza Bank regarding the transfer of personal data (including identity numbers, loan amounts, and account numbers) to Meta between 15 November 2019 and 2 June 2021.
The unauthorized transfer was due to the bank's inadvertent activation of Meta's Automatic Advanced Matching (AAM) and Automatic Events (AH) functions.
Approximately 500,000 to 1,000,000 individuals were affected, with their data transferred without authorization.
Upon discovery, Avanza deactivated the Meta Pixel tool and confirmed data deletion by Meta.
Avanza implemented new policies and guidelines to prevent future breaches.
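To make the mechanism concrete, here is a simplified TypeScript sketch of how the Meta Pixel's advanced-matching parameters pass user identifiers to Meta. This is a generic illustration of the pixel's public interface, not Avanza's actual setup; the pixel ID and field values are placeholders.

```typescript
// fbq is the global function defined by Meta's pixel base snippet.
declare function fbq(...args: unknown[]): void;

// Manual advanced matching: identifiers passed at init are hashed
// client-side and sent to Meta along with every tracked event.
// 'PIXEL_ID' and the field values below are placeholders.
fbq('init', 'PIXEL_ID', {
  em: 'customer@example.com', // email
  external_id: 'customer-123', // any internal identifier
});
fbq('track', 'PageView');

// "Automatic Advanced Matching", by contrast, is a toggle in Meta's
// Events Manager: when switched on, the pixel harvests recognizable
// form fields (emails, phone numbers, names) from the page on its
// own, with no code change. On a banking site, forms can contain
// identity numbers, loan amounts, or account numbers, which is how
// an inadvertent toggle can become an unlawful disclosure.
```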
Key points in the decision:
GDPR Article 5(1)(f) Violation:
Personal data must be processed securely to prevent unauthorized access and accidental loss.
Avanza Bank transferred sensitive personal data to Meta without adequate security measures, compromising data confidentiality and integrity.
GDPR Article 32(1) Violation:
Controllers must implement security measures appropriate to the risks involved in processing personal data.
Avanza’s failure to detect and prevent the unauthorized disclosure of personal data indicated a lack of sufficient security measures.
Assessment
IMY concluded that the breach involved high-risk data, including the financial information mentioned above as well as personal identification numbers, all of which required stringent security measures. The breach therefore posed a significant risk to data subjects' rights and freedoms.
Avanza’s inability to detect and rectify the unauthorized data transfer further underscored the deficiency in its security measures.
The fine was determined based on the gravity, duration, and nature of the infringement.
Avanza can appeal. A similar investigation was opened in 2021 into Länsförsäkringar, a major Swedish insurer. That procedure is not yet finalised, but we can expect a similar decision soon.
👉 You can read the original decision (in Swedish) here; I have made an English translation available here.
🛡️ Sweden's Post and Telecom Authority Launches E-Service for Cyber Security Act (NIS2) Compliance
On 24 June 2024, the Swedish Post and Telecom Authority (PTS) announced the launch of an e-service titled 'Are we covered by the CSL?'. This tool is designed to assist companies in evaluating whether they fall under the new Cyber Security Act (CSL), set to take effect on 1 January 2025. The Cyber Security Act implements the Network and Information Security (NIS2) Directive, which aims to enhance cybersecurity across the European Union.
Background and Purpose
The new Cyber Security Act will require essential and important operators to register with the PTS. These operators must undertake risk management measures and report incidents. The Act categorizes operators into sectors, each overseen by a specific authority, with the Swedish Civil Contingencies Agency (MSB) having overall responsibility.
E-Service Functionality
PTS' e-service is specifically aimed at entities under its sectoral responsibility:
Digital infrastructure
ICT services management
Space
Postal and courier services
Digital providers
The e-service provides a step-by-step assessment based on the company's establishment and size, helping businesses determine whether they are covered by the CSL. The result is not legally binding, however, and should be treated only as a preliminary assessment.
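As a rough illustration of what such a pre-check involves, here is a simplified sketch based on NIS2's general size-cap rule. The sector names come from the PTS list above; the thresholds and sector classification are my simplified assumptions (the real act has exceptions, e.g. some digital-infrastructure entities are covered regardless of size), so, like the e-service itself, this is only indicative.

```typescript
type PtsSector =
  | 'digital infrastructure'
  | 'ICT service management'
  | 'space'
  | 'postal and courier services'
  | 'digital providers';

interface Entity {
  sector: PtsSector;
  employees: number;
  turnoverMEur: number; // annual turnover in millions of EUR
}

function cslCoverage(e: Entity): 'essential' | 'important' | 'likely out of scope' {
  // NIS2 size-cap rule: below "medium-sized", generally out of scope.
  const isMedium = e.employees >= 50 || e.turnoverMEur > 10;
  const isLarge = e.employees >= 250 || e.turnoverMEur > 50;
  if (!isMedium) return 'likely out of scope';

  // Assumption: treating these PTS sectors as "high criticality";
  // large entities there are "essential", other covered entities
  // are "important".
  const highCriticality: PtsSector[] = [
    'digital infrastructure',
    'ICT service management',
    'space',
  ];
  return isLarge && highCriticality.includes(e.sector) ? 'essential' : 'important';
}

console.log(cslCoverage({ sector: 'digital providers', employees: 120, turnoverMEur: 30 }));
// => "important"
```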
👉 Read the press release here.
📢 Arkansas AG Sues Temu for Data Theft and Privacy Violations
On June 25, 2024, Arkansas Attorney General Tim Griffin announced a lawsuit against Temu's parent companies, PDD Holdings Inc. and WhaleCo Inc., for violating the Arkansas Deceptive Trade Practices Act (ADTPA) and the Arkansas Personal Information Protection Act (PIPA). Griffin claims Temu operates as a data-theft business, collecting user data without authorization by overriding device privacy settings.
Background
Temu, launched in the U.S. in 2022, is linked to a precursor app, Pinduoduo. Both apps have faced suspension from the Apple and Google app stores over misrepresentations about data access and usage, raising significant security and privacy concerns.
Allegations by the AG
Griffin's lawsuit outlines several claims:
Temu excessively collects sensitive personal information without user knowledge.
The company's privacy policy contains false representations.
User data may be misappropriated by Chinese authorities.
Arkansans' privacy rights are violated, with their data exposed to unauthorized access.
Temu misleads about product quality to maximize user sign-ups and data collection.
The company collects personal information from minors, including those under 13.
Outcomes Sought
Filed in Cleburne County Circuit Court, the lawsuit seeks:
An order to stop Temu's deceptive practices and privacy violations.
The imposition of civil penalties.
Monetary and equitable relief for the state and affected individuals.
Statements and Responses
Griffin emphasized the security risks posed by Temu, noting its leadership by former Chinese Communist Party officials. He pledged to fight Temu's deceptive practices aggressively. Temu, however, denied the allegations, attributing them to misinformation from a short-seller. The company expressed disappointment over the lawsuit and vowed to defend itself.
👉 Read the press release here.
🤖 OECD Explores AI, Data Governance, and Privacy Synergies
The OECD's report, "AI, Data Governance, and Privacy: Synergies and Areas of International Co-operation," published on 25 June 2024, addresses the growing intersection of artificial intelligence (AI) and privacy. The document highlights the necessity of collaboration between AI and privacy policy communities to tackle the challenges brought about by generative AI.
Key Findings and Recommendations
International Co-operation: The report underscores the need for global collaboration to address AI and privacy issues. Aligning the OECD Privacy Guidelines with AI Principles is essential for this effort.
Generative AI as a Catalyst: Generative AI's rise has accelerated the need for collaboration between AI and privacy sectors. The section "Generative AI: a catalyst for collaboration on AI and privacy" provides an insightful overview of the latest developments in Privacy Enhancing Technologies (PETs) and AI.
Machine Unlearning: This emerging field explores how to remove the influence of specific personal data from trained models after the fact. Recent research suggests it may remain possible to infer whether an individual's data was used to train a model even after that data has been deleted from the source database (a toy sketch of this inference follows the list).
Policy Synergies: The report identifies commonalities and divergences between AI and privacy principles, emphasizing the importance of coordinated policy responses.
Legal Frameworks: The application of traditional legal frameworks, such as "legitimate interests," to AI practices is complex. The report states that the legal basis known as "legitimate interests" is considered suitable for generative AI but requires careful balancing of interests and rights.
Mapping Principles: The report maps existing OECD principles on privacy and AI, identifying key policy considerations.
National and Regional Developments: It provides an overview of national and regional initiatives on AI and privacy, highlighting the role of Privacy Enforcement Authorities and their guidance on AI-related privacy issues.
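As promised above, here is a toy sketch of the loss-threshold heuristic from the membership-inference literature: a model typically fits its training examples better than unseen data, so unusually low loss on a record hints that the record was in the training set. The threshold and loss values below are invented purely for illustration.

```typescript
// Loss-threshold membership inference, toy version: below-threshold
// loss suggests the model has "seen" the example during training.
function inferMembership(lossOnExample: number, threshold = 0.35): boolean {
  return lossOnExample < threshold;
}

// Hypothetical per-example losses measured against a trained model.
const candidates = [
  { id: 'record-A', loss: 0.12 },
  { id: 'record-B', loss: 1.04 },
];

for (const c of candidates) {
  console.log(`${c.id}: likely in training set? ${inferMembership(c.loss)}`);
}
// => record-A: likely in training set? true
//    record-B: likely in training set? false
```

This is why deleting a record from a database does not, by itself, erase its traces from a model trained on it, which is the gap machine unlearning tries to close.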
Moving Forward
To foster effective co-operation, the report advocates for:
Standardizing Terminology: Ensuring a common understanding of key terms across AI and privacy communities.
Enhancing Regulatory Frameworks: Updating privacy guidelines to address AI-specific challenges.
Promoting Innovation: Encouraging the development and adoption of PETs to safeguard privacy while leveraging AI's potential.
👉 Read the report here.
🛡️ California AG Bonta Reminds Companies of Health Information Obligations
On June 26, 2024, California Attorney General Rob Bonta sent letters to eight major pharmacy chains and five health data companies, reminding them of their obligations under the Confidentiality of Medical Information Act (CMIA) and the new requirements introduced by Assembly Bill (AB) 352. This bill aims to provide additional protections for patients' reproductive health and gender-affirming care information.
Key Aspects of AB 352
Effective Date: 1 July 2024
Primary Requirements:
Prohibition on Disclosure: Pharmacies and health data companies are generally prohibited from disclosing information related to abortion care to out-of-state entities unless authorized by the patient or an exception under CMIA applies.
Enhanced Data Security: Entities must implement security features to segregate and protect information related to abortion, contraception, and gender-affirming care, preventing unauthorized access across state lines.
AB 352 strengthens existing CMIA provisions by specifically targeting the protection of sensitive health information. This legislative effort is particularly significant in light of the United States Senate Committee on Finance's findings, which revealed that some major pharmacy chains had been disclosing protected health information (PHI) to law enforcement without warrants, often without patient knowledge. These practices, while not violating federal laws, conflict with California's stricter privacy standards.
Compliance Requirements
Attorney General Bonta has requested that the addressed pharmacy chains and health data companies, including Walmart, CVS Health, Walgreens, and Amazon Pharmacy, among others, provide their policies demonstrating compliance with AB 352 by 31 July 2024. These policies should include:
Warrant Requirements: Policies ensuring that medical information is only disclosed to law enforcement with a warrant if the request involves California patient data.
Data Security Measures: Documentation from pharmacies and their electronic health record (EHR) subcontractors showing adherence to the new security requirements to protect reproductive and gender-affirming care information.
AG Bonta emphasizes the critical importance of protecting the privacy of medical information, especially in the current climate following the overturning of Roe v. Wade. The state's stringent privacy laws reflect a commitment to safeguarding the confidentiality and rights of patients seeking sensitive healthcare services.
👉 Read more here.
📋 EDPB publishes Standardised Messenger Audit
On 27 June 2024, the EDPB published the results of the Standardised Messenger Audit project as part of the Support Pool of Experts program, following a request from the German Federal Data Protection Authority (DPA). Prof. Mathieu Cunche concluded the project in November 2023.
What is a Messenger
A messenger is an application designed for real-time communication through text, multimedia, and other data types. Initially intended for personal use, messengers have become integral tools for business collaboration, necessitating stringent data protection measures to comply with regulations like the GDPR.
Objective
The project's primary aim is to establish a comprehensive test catalogue of requirements for GDPR-compliant messenger services. This catalogue includes mandatory, recommended, and optional criteria to aid data protection authorities and companies in assessing and improving their products.
Project Deliverables
D1 - Frontend Requirements: This document presents a structured list of criteria aligned with specific GDPR articles. It covers:
Mandatory Requirements (MUST): Essential criteria for compliance, each mapped to specific GDPR articles.
Recommended Requirements (SHOULD): Strongly advised but not compulsory.
Optional Requirements (MAY): Suggested practices for enhanced privacy.
D2 - Audit Methodology: This document outlines the verification methods for the criteria specified in D1, ensuring each requirement can be effectively audited.
While this audit methodology could be extremely useful in practice, I for one really miss a structured format, even something as simple as a table bringing together each requirement, what needs to be done, and how to verify it.
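To illustrate what I mean, here is a sketch of such a structure, one row per criterion tying together the requirement level, the GDPR basis, the requirement (D1), and the verification method (D2). The entries are paraphrased examples of my own, not the EDPB's actual wording.

```typescript
type Level = 'MUST' | 'SHOULD' | 'MAY';

interface AuditCriterion {
  id: string;
  level: Level;
  gdprBasis: string;    // article the criterion maps to
  requirement: string;  // what the messenger must do (D1)
  verification: string; // how an auditor checks it (D2)
}

// Illustrative entries only; the real catalogue is in the D1/D2 documents.
const catalogue: AuditCriterion[] = [
  {
    id: 'FE-01',
    level: 'MUST',
    gdprBasis: 'Art. 13',
    requirement: 'Privacy notice is accessible before account creation',
    verification: 'Walk through onboarding and record where the notice appears',
  },
  {
    id: 'FE-02',
    level: 'SHOULD',
    gdprBasis: 'Art. 25',
    requirement: 'Most privacy-protective settings are the default',
    verification: 'Inspect default values on a fresh installation',
  },
];
```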
👉 Find the documents here.
🤖 EDPB publishes Checklist for AI auditing
Also on 27 June 2024, the EDPB published the results of the AI Audit project as part of the Support Pool of Experts program, following a request from the Spanish Data Protection Authority (AEPD). This project, delivered by Dr. Gemma Galdon Clavell in January 2023, aims to assist data protection authorities in inspecting and auditing AI systems through a detailed methodology and checklist.
The primary objective of this project is to help various stakeholders understand and assess data protection safeguards within the AI Act framework. It focuses on creating a comprehensive checklist and proposing tools that improve transparency and facilitate the auditing of algorithmic systems. The checklist is intended to be applied primarily by Data Protection Authorities (DPAs), while the algo-scores and leaflets primarily address AI developers and organizations implementing AI systems.
Key Elements of the Checklist
Model Card Requirements: These compile information on the training and testing of AI models, including Data Protection Impact Assessments (DPIAs), data sharing agreements, and approvals from data protection authorities.
System Maps: These maps establish relationships and interactions between the algorithmic model, the technical system, and the decision-making process.
Bias Identification and Testing: This involves identifying moments and sources of bias and designing tests to determine the impact of different biases on individuals, groups, society, or the efficiency of the AI system.
Adversarial Audits: These are designed to challenge the system's robustness and identify vulnerabilities.
Public Audit Reports: These reports enhance transparency by providing detailed audit outcomes to the public.
Algo-Score Proposal
The proposal includes an algo-score system inspired by the Nutri-Score and A+++ (energy label) methodologies. This scoring system evaluates AI governance, model fairness and performance, and post-market monitoring and auditing. The algo-score aims to promote transparency, accountability, and consumer choice in AI systems.
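To make the idea tangible, here is a hedged sketch of how such a score might aggregate the three dimensions the proposal evaluates. The dimensions come from the report; the weights, scale, and letter bands are invented for illustration and are not prescribed by the proposal.

```typescript
interface AlgoScoreInput {
  governance: number; // AI governance maturity, 0-100
  fairness: number;   // model fairness and performance, 0-100
  postMarket: number; // post-market monitoring and auditing, 0-100
}

// Equal weights and the A-E bands are illustrative assumptions.
function algoScore(s: AlgoScoreInput): 'A' | 'B' | 'C' | 'D' | 'E' {
  const total = (s.governance + s.fairness + s.postMarket) / 3;
  if (total >= 85) return 'A';
  if (total >= 70) return 'B';
  if (total >= 55) return 'C';
  if (total >= 40) return 'D';
  return 'E';
}

console.log(algoScore({ governance: 80, fairness: 75, postMarket: 60 })); // "B"
```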
AI Leaflet Proposal
The AI leaflets, adapted from medical package leaflets, provide detailed and accessible information about AI systems. They include general information, process descriptions, data sources, model details, bias and impact metrics, and redress mechanisms. These leaflets aim to facilitate informed decision-making and compliance with GDPR and AI Act requirements.
Broader Impact
GDPR Compliance:
The AI auditing checklist directly supports GDPR compliance by providing detailed guidelines on data protection impact assessments (DPIAs), data sharing agreements, and transparency requirements.
AI leaflets ensure that necessary information about data processing and AI decision-making is accessible to data subjects, fulfilling GDPR requirements for transparency and accountability.
AI Act Alignment:
The algo-scores and AI leaflets are aligned with the proposed AI Act’s focus on high-risk AI systems, requiring comprehensive documentation and transparency measures.
The AI Act emphasizes the need for clear accountability and governance structures, which the algo-scores system aims to encapsulate by evaluating governance roles, compliance with standards, and documentation practices.
As mentioned, the documents detailing the project's framework, including proposals for algo-scores and AI leaflets, were delivered in January 2023 and only the proposal for algo-scores mentions an update in June 2024.
👉 Find all three documents here.
🤖 EDPB publishes report on data protection risks of AI for Optical Character Recognition (OCR) and Named Entity Recognition (NER)
...but I only managed to get through the OCR document, so that's the only one covered.
Also part of the Support Pool of Experts work, this project was conducted by Isabel Barbera and delivered in September 2023, identifying various privacy and security risks posed by OCR technology.
Background on OCR Technology
OCR converts images or scanned documents into machine-readable text.
Common applications include digitizing documents, license plate recognition, consumer behavior analysis, assistive technologies, and medical documentation.
Identified Risks
Processing Sensitive Data: OCR systems often handle sensitive information, including health data, financial records, and legal documents. Inadequate processing can lead to significant privacy violations.
Large Scale Processing: High volumes of data increase the risk of breaches. Proper safeguards must be implemented to protect such data.
Vulnerable Individuals: Processing data of children, elderly, or other vulnerable groups requires stringent protections to avoid potential exploitation.
Low Data Quality: Poor input data quality can result in inaccuracies, affecting the reliability of the OCR output and leading to potential misuse of incorrect information.
Insufficient Security Measures: Lack of adequate security can lead to data breaches, unlawful data transfers, and other privacy infringements. Ensuring data encryption, access controls, and secure transmission is crucial.
Unlawful Data Storage: Storing data longer than necessary or without appropriate safeguards contravenes GDPR principles. Clear data retention policies are essential to comply with legal requirements.
Unlawful Data Transfer: Transferring data to jurisdictions without adequate protections poses significant risks. Proper assessments and safeguards are necessary for cross-border data flows.
Mitigation Strategies
Implement Robust Safeguards: Ensure data encryption, access controls, and secure data transmission to prevent breaches.
Regular Risk Assessments: Continuously monitor and assess risks to identify and mitigate potential threats.
Data Minimization: Only process the data necessary for the specific OCR task, reducing the potential for misuse.
Quality Assurance: Regularly verify the accuracy and quality of input data to ensure reliable OCR output.
Compliance with GDPR: Adhere to GDPR requirements, including data minimization, lawful processing, and ensuring the rights to rectification and erasure.
The report emphasizes the importance of stringent safeguards and regulatory compliance to mitigate data protection risks associated with OCR technology. Implementing these measures can help organizations leverage OCR while protecting individual privacy rights.
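As a small illustration of the data-minimization point above, the sketch below redacts identifiers from OCR output before it is stored. The Swedish personal identity number pattern is just an example of my own choosing; a real pipeline would sit behind an actual OCR engine and use proper PII detection, retention rules, and access controls.

```typescript
// Redact identifiers from recognized text before storage, so only
// the minimum necessary data persists downstream.
function redactOcrOutput(text: string): string {
  // Swedish personnummer: YYMMDD-XXXX or YYYYMMDD-XXXX (illustrative).
  const personnummer = /\b(?:\d{2})?\d{6}[-+]?\d{4}\b/g;
  return text.replace(personnummer, '[REDACTED-ID]');
}

const recognized = 'Loan application for Anna Svensson, 850101-1234, amount 250000 SEK';
console.log(redactOcrOutput(recognized));
// => "Loan application for Anna Svensson, [REDACTED-ID], amount 250000 SEK"
```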
👉 Read the documents here.
🧠 EDPS and AEPD Insights into Challenges of Neurodata Processing for Privacy and Data Protection
On 27 June 2024, the Spanish Data Protection Agency (AEPD) and the European Data Protection Supervisor (EDPS) released a joint report analyzing the challenges of neurodata processing for fundamental rights and freedoms. Neurodata, defined as information gathered from the brain and nervous system, includes data related to brain activity, structure, function, and other neural signals. This report offers an in-depth examination of neurodata, evaluating its impact on privacy and personal data protection and discussing various practical use cases.
Neurodata and Its Applications
Recent advances in neurotechnology have led to numerous devices that monitor brain activity for various purposes, including medical treatment, marketing, education, and entertainment. Neurodata can be collected invasively, requiring surgical implantation, or non-invasively through external interfaces like headbands. The collection methods can be passive, not requiring explicit actions, or active, involving specific tasks.
Risks to Fundamental Rights
The report underscores the significant risks neurodata poses to fundamental rights, particularly privacy, mental integrity, and human dignity. It provides examples of how neurodata processing in education, gaming, and other sectors can lead to intrusive and potentially unlawful practices under EU law. The combination of neurodata with artificial intelligence (AI) amplifies these risks, enabling detailed insights into individuals' thoughts and emotions.
Data Protection Principles
Neurodata often falls under special categories of personal data, such as biometric or health data, which require stringent protection measures. The report highlights the importance of proportionality, data minimization, accuracy, transparency, and fairness in neurodata processing. Controllers must ensure that the purpose justifies the invasive nature of neurodata processing and comply with all relevant data protection principles.
Neurorights
The concept of neurorights, proposed to address emerging issues in neurotechnology, includes rights such as cognitive liberty, mental privacy, mental integrity, psychological continuity, and fair access. The report suggests that existing human rights frameworks may not suffice to protect individuals against the unique challenges posed by neurodata.
Recommendations
The report concludes with recommendations for regulators and organizations processing neurodata. It emphasizes the need for comprehensive impact assessments, robust safeguards, and consideration of neurorights to ensure the ethical and lawful use of neurotechnology. Additionally, it calls for increased awareness and regulation to prevent misuse and protect individuals' fundamental rights.
👉 Read more here.
💸 FTC Bans Avast from Selling Web Data and Issues $16.5 Million Fine
On June 27, 2024, the US Federal Trade Commission (FTC) finalized an order against Avast Limited and its subsidiaries Avast Software s.r.o. and Jumpshot, Inc., following a February 2024 settlement over the sale of browsing data for advertising purposes.
Background
The FTC's investigation revealed that Avast unfairly collected and sold consumers’ browsing data through its browser extensions and antivirus software. The company falsely claimed that its products would protect user privacy while secretly selling detailed and re-identifiable browsing data to over 100 third parties.
Findings
The FTC's complaint included several key points:
Deceptive Privacy Practices: Avast's privacy policy failed to disclose the sale of browsing data.
Lack of Consumer Consent: Consumers were not adequately informed about, and did not consent to, the sale of their data.
Misleading Privacy Claims: Avast misrepresented its data aggregation and anonymization processes.
Order Provisions
Monetary Penalty: Avast must pay $16.5 million within nine days of the order's effective date, to be used for consumer redress.
Data Deletion: The company must delete all web browsing data transferred to Jumpshot and any derived products or algorithms.
Consumer Notification: Avast must notify consumers whose data was sold without consent about the FTC’s actions.
Privacy Program: Avast is required to establish a comprehensive privacy program to prevent future violations.
Not the first rodeo
In The Privacy Explorer #16 you can read about the history of the Avast blunder and how Avast had already been fined CZK 351 million (approx. USD 14.9 million) by the Czech Office for Personal Data Protection.
👉 Press release here.
🎾 Tennis Players Raise Privacy Concerns Over ATP's New Wearable Policy
On 27 June 2024, the ATP announced its approval for in-competition wearables from Catapult and STATSports to be used in ATP Tour and Challenger level matches starting 15 July. These devices will collect data for the ATP's Tennis IQ performance analytics program. However, this decision has faced criticism from players, notably from the Professional Tennis Players Association (PTPA), co-founded by Novak Djokovic and Vasek Pospisil.
Context
The PTPA, established in 2020, aims to represent the interests of top ATP and WTA players. The organization frequently addresses issues restricting player autonomy. The ATP's new policy has raised significant concerns among players regarding data privacy and control.
Key Concerns
Vasek Pospisil voiced his concerns on X, posing three critical questions:
Why can't players use their own devices and keep data private?
Why must data be centralized in Tennis IQ, especially if players are considered "independent contractors"?
What benefits will players receive if their data is monetized, considering its potential value?
Pospisil labeled the policy a 'red flag,' emphasizing the potential loss of control over personal performance insights. He also questioned the possible connection to the ATP and ATP Media's joint venture with TDI, which commercializes tennis data for betting and media.
GDPR Considerations
Despite being a non-EU organization, the ATP likely has to comply with the GDPR under Article 3(2)(a), which makes the GDPR applicable to entities outside the EU that offer goods or services to people in the EU (which includes memberships and events).
The French data protection authority (CNIL) has recently issued guidance on collecting performance data from athletes, emphasizing that the use of such data must comply with GDPR principles. Key points include:
Identifying the data controller, joint controllers, and processors.
Ensuring data collection has a legal basis (this is where GDPR-standard consent will typically be required, unless an EU or Member State law provides a public-interest basis for the collection).
Limiting data collection to what is necessary for the stated purpose.
Maintaining transparency with athletes about data usage.
Implementing appropriate security measures to protect data.
It’s very likely that the ATP also falls under other data protection laws around the world (not least those in the United States) and will sooner or later face the consequences.
👉 Read the news article here.
That’s it for this edition. Thanks for reading, and subscribe to get the full text in your inbox!
♻️ Share this if you found it useful.
💥 Follow me on Linkedin for updates and discussions on privacy education.
🎓 Take my course to advance your career in privacy – learn to navigate global privacy programs and build a scalable, effective privacy program across jurisdictions.
📍 Subscribe to my newsletter for weekly updates and insights in your mailbox.