Welcome to the AI digital and privacy recap of privacy news for week 30 of 2024 (22-28 July)!
In this edition:
🇺🇸 Oracle Agrees to $115 Million Settlement in Consumer Privacy Lawsuit.
🇺🇸 Texas $1.4 Billion Settlement with Meta Over Its Unauthorized Capture of Personal Biometric Data.
🇬🇧 TikTok Fined £1.875 Million by UK’s Ofcom for Data Inaccuracies.
🇰🇷 Korean PIPC Fines AliExpress $1.43M for Data Protection Violations.
🇺🇸 FCC and TracFone Settle $16 Million Fine Over Data Breaches.
🇮🇪 Irish DPC to investigate X’s Grok AI training on user data without consent.
🇺🇸 DOJ Accuses TikTok of Data Misuse on Sensitive Topics.
📈 FTC Investigates Surveillance Pricing Practices.
📢 European Commission Coordinates Action Against Meta on 'Pay or Consent' Model.
🚗 US Senators Urge FTC Crackdown on Automakers' Data Sharing.
📝 EU Commission Publishes Biennual Report on Consumer Protection.
🇪🇺 Second GDPR Report Highlights Progress and Challenges.
🤖 U.S. Department of State Releases Risk Management Profile for AI and Human Rights.
📜 NIST Released Guidance on Mitigating Generative AI Risks.
📢 FCC Proposes New AI Disclosure Rules for Political Ads.
🔄 Google Halts Plans to Phase Out Third-Party Cookies.
🔍 FTC Warns Against Misleading Anonymization Claims Through Hashing.
📹 Ofcom Publishes Paper on Mitigating Deepfake Harms.
🤖IAPP publishes Global AI Governance Law and Policy: Jurisdiction Overviews.🧠 Neurotechnologies and Mental Privacy: Societal and Ethical Challenges.
🍪 noyb's Consent Banner Report: How Authorities Actually Decide.
🇺🇸 Oracle Agrees to $115 Million Settlement in Consumer Privacy Lawsuit
Oracle has reached a $115 million settlement in a consumer privacy lawsuit filed in 2022 in the Northern District of California. The lawsuit accused Oracle of secretly creating and selling dossiers containing detailed personal information on millions of people, including non-users, and earning $42.5 billion annually from these practices.
Oracle allegedly used a variety of tracking techniques to gather data on internet users, including cookies, fingerprinting, JavaScript, and tracking pixels. The company also collected data through its "AddThis" social bookmarking service and by purchasing consumer data from third-party brokers like Datalogic.
The lawsuit claimed Oracle's dossiers included full names, home addresses, race, political views, retail purchases, and location data. Oracle's "Oracle ID Graph" tool used this information to identify individual internet users and offered these profiles to private and government buyers. The lawsuit emphasized the secrecy of Oracle's data collection methods, highlighting that its privacy policies did not clearly disclose the extent of data gathering and sharing.
Settlement Details
Date: 25 July 2024
Amount: $115 million
Impacted Individuals: Approximately 220 million people
Legal Fees: Up to $28 million sought by the involved law firm
Data Collection Changes: Oracle will stop collecting text from forms on non-Oracle websites and cease gathering user-generated information from URLs of previously visited websites.
Furthermore, Oracle announced its exit from the ad tracking business and plans to delete stored customer data after fulfilling obligations to data providers.
Read more here.
🇺🇸 Texas $1.4 Billion Settlement with Meta Over Its Unauthorized Capture of Personal Biometric Data
On 30 July 2024, Texas Attorney General Ken Paxton secured a $1.4 billion settlement with Meta Platforms Inc. over the unauthorized capture and use of Texans' biometric data. This record-setting agreement, the largest ever obtained from a single US state, highlights the enforcement of Texas’s “Capture or Use of Biometric Identifier” Act (CUBI).
Background and Legal Context
In February 2022, Paxton filed a lawsuit against Meta, accusing the tech giant of violating CUBI and the Deceptive Trade Practices Act by using facial recognition software without informed consent. The feature, initially called Tag Suggestions, was automatically enabled for Texans without explaining its operation. Meta’s software scanned photographs to capture facial geometry, violating state law that requires explicit consent.
Settlement Details
Payment Structure: Meta will pay $1.4 billion over five years. The first installment of $500 million is due within 30 days, with subsequent annual payments of $225 million each.
Use of Funds: The state treasury will receive the funds, with portions allocated to attorney fees and the state’s general revenue fund.
Broader Implications
Historic Precedent: This settlement surpasses the previous largest privacy-related settlement in the US, a $390 million agreement involving Google and 40 states in 2022.
Future Enforcement: The outcome serves as a deterrent to companies engaging in unauthorized biometric data practices, emphasizing the importance of compliance with privacy laws.
Legal Framework: This case marks the first enforcement action under CUBI, setting a significant precedent for future privacy litigation in Texas.
Read the press release here.
🇬🇧 TikTok Fined £1.875 Million by UK’s Ofcom for Data Inaccuracies
On 24 July 2024, the British communications regulator Ofcom announced a £1.875 million fine against TikTok for providing inaccurate information about its Family Pairing parental control feature. The fine stems from TikTok's failure to accurately respond to a statutory request for information, which is crucial for Ofcom's regulatory duties.
Context
Ofcom sought data from video-sharing platforms to compile a report on child safety measures. TikTok was asked to provide data on the usage of its Family Pairing feature, intended to help parents manage their children's online activity. The accuracy of this data was essential for assessing the feature's effectiveness and informing the public.
TikTok's Response and Investigation
Initial Response: TikTok submitted the requested data on 4 September 2023. However, on 1 December 2023, TikTok admitted the data was inaccurate and launched an internal investigation.
Ofcom's Investigation: Initiated on 14 December 2023, Ofcom found that TikTok's data governance was insufficient, leading to delayed error reporting and remedying.
Findings
Data Inaccuracies: TikTok's submission contained significant errors, and the company was slow to report these inaccuracies to Ofcom.
Impact on Report: This delay forced Ofcom to exclude TikTok's data from its transparency report, which disrupted efforts to provide clear safety information to parents.
Subsequent Delays: Even after committing to providing accurate data, TikTok faced further delays, only submitting partial data by 28 March 2024, well past the initial deadline.
Penalty Decision: Ofcom deemed the £1.875 million fine appropriate, reflecting the severity of the contravention and TikTok's size and resources. This includes a 25% reduction for TikTok’s cooperation in settling the case.
Read the press release here.
🇰🇷 Korean PIPC Fines AliExpress $1.43M for Data Protection Violations
On 24 July 2024, the Korean Personal Information Protection Commission (PIPC) concluded its 13th plenary meeting by imposing significant sanctions on AliExpress for violations of the Korean Personal Information Protection Act (PIPA). AliExpress, managed by Alibaba.com Singapore E-Commerce Private Limited, faces a penalty surcharge of 1.978 billion KRW ($1.43 million) and an administrative fine of 7.8 million KRW ($5,631). This action follows extensive investigations prompted by privacy concerns from the National Assembly in October 2023 and widespread media coverage.
Background
AliExpress, a global online marketplace, transfers buyers' personal data to third-country sellers for shipping, which in AliExpress' case included transferring data of over 180,000 Korean buyers to Chinese sellers. The PIPC found that AliExpress did not meet PIPA's stringent requirements for cross-border data transfers, including obtaining explicit consent from users and ensuring robust safeguards and redress mechanisms.
Results
The investigation revealed multiple compliance failures by AliExpress:
Failure to Notify: Users were not adequately informed about the transfer of their personal data, including the recipient's country and contact details.
Lack of Safeguards: Essential privacy protections were missing in terms and conditions.
User Rights Impediment: The platform complicated membership withdrawal and account deletion processes, only offering these in English.
Administrative Sanctions
The PIPC levied the following sanctions:
Penalty Surcharge: 1.978 billion KRW for cross-border data transfer violations.
Administrative Fine: 7.8 million KRW for inadequate data protection safeguards.
Corrective Orders: Required compliance with data transfer and user rights regulations.
Recommendations: Improve transparency, designate a domestic agent, and minimize data collection.
Corrective Measures by AliExpress
In response, AliExpress took steps to address non-compliance:
Informed Consent: Obtained users' consent for data transfers.
Domestic Agent: Improved procedures for handling personal data.
Privacy Policy Updates: Aligned policies with PIPA requirements.
You can read the press release here.
Note on consent
Korea’s data protection law states that data handlers may collect and use personal data without the data subject's consent when it is necessary to perform a contract concluded with the data subject or take measures at the request of the data subject in the course of concluding a contract, however this does not apply for the provision of personal data to a third party. If you’re curious, the English version of the law is available here - check out in particular articles 15 and 28-8.
🇺🇸 FCC and TracFone Settle $16 Million Fine Over Data Breaches
On July 22, 2024, the Federal Communications Commission (FCC) announced a settlement with TracFone Wireless, Inc. concerning investigations into three data breaches that occurred between January 2021 and January 2023. These breaches exposed customers' personal and proprietary information due to vulnerabilities in TracFone's application programming interfaces (APIs).
Background
TracFone Wireless, a subsidiary of Verizon Communications, offers prepaid mobile services through brands such as Straight Talk and Total by Verizon Wireless. Between January 2021 and January 2023, TracFone experienced three significant data breaches that exposed customer proprietary network information (CPNI) and personally identifiable information (PII). Unauthorized access to customer data and numerous port-outs resulted from these breaches.
Settlement Details
The settlement includes a $16 million civil penalty and several stringent security measures aimed at preventing future breaches. Key components of the settlement are:
Information Security Program: TracFone must implement a comprehensive security program addressing API vulnerabilities, consistent with standards from the National Institute of Standards and Technology (NIST) and the Open Worldwide Application Security Project (OWASP).
SIM Change and Port-Out Protections: Measures include enhanced authentication protocols and customer notifications for SIM changes and port-out requests.
Annual Assessments: TracFone will conduct annual security assessments, including independent third-party evaluations, to ensure the effectiveness of its information security program.
Employee Training: Regular privacy and security awareness training for employees and certain third parties.
The press release and consent decree can be found here.
The FCC has fined multiple major wireless carriers for their data protection failures in recent years. In April 2024, the FCC imposed nearly $200 million in fines against AT&T, Sprint, T-Mobile, and Verizon for illegally sharing customer location data without obtaining proper consent - I wrote about this in a previous edition that you can find here.
🇮🇪 Irish DPC to investigate X’s Grok AI training on user data without consent
On 27 July 2024 various media outlets reported that the Irish Data Protection Commission is investigating X for its decision to share user data with Elon Musk's AI startup, xAI, without obtaining explicit consent. This practice allows xAI to train its AI assistant, Grok, using data from X users.
The change was discovered by users on 26 July 2024. The default sharing setting can only be changed via the desktop version of X, with plans to add a mobile app option.
The Irish Data Protection Commission had been engaging with X about its data usage plans for months. The sudden rollout surprised the DPC, leading to further inquiries and the possibility of a GDPR probe.
Meta faced similar scrutiny from theDPC, leading to a pause in its plans to use European user data for AI training.
Read FT's reporting here.
🇺🇸 DOJ Accuses TikTok of Data Misuse on Sensitive Topics
On 27 July 2024, the U.S. Department of Justice accused TikTok of collecting and sharing U.S. user data on sensitive issues, including abortion and gun control, with its Chinese parent company, ByteDance. The accusations were detailed in documents filed to the federal appeals court in Washington, highlighting the use of an internal web-suite system called Lark. This system allowed TikTok employees to send sensitive data to ByteDance engineers in China, raising significant national security concerns.
Key Details
Data Collection and Transfer: TikTok employees used Lark to send sensitive data about U.S. users, which ended up stored on Chinese servers. This data was accessible to ByteDance employees in China.
Sensitive Topics: The data included user views on divisive social issues such as abortion, gun control, and religion. TikTok's capability to gather this information was enabled by an internal search tool within Lark.
Government's Concern: The DOJ warned about the potential for "covert content manipulation" by the Chinese government, citing the algorithm's ability to shape user content. They alleged TikTok's "heating" practice, promoting certain videos, could be used for nefarious purposes.
Broader Implications
Legal Battle: This case is part of a larger legal struggle over TikTok's future in the U.S. A law signed by President Joe Biden in April could ban TikTok if it doesn’t sever ties with ByteDance (see my post on that here).
National Security: The government’s concerns center around the possibility that China could force ByteDance to hand over U.S. user data or manipulate public opinion through TikTok’s algorithm.
First Amendment Debate: TikTok argues the potential ban would violate the First Amendment by silencing 170 million American voices. The DOJ contends that the law addresses national security without targeting protected speech.
Additional Context
Project Texas: TikTok’s $1.5 billion plan to store U.S. user data on Oracle servers is deemed insufficient by federal officials to mitigate national security risks.
Future Proceedings: Oral arguments in the case are scheduled for September, as TikTok continues to argue that the law discriminates against viewpoints and that divestment would alter the platform's content.
You can read NPR’s reporting here.
📈 FTC Investigates Surveillance Pricing Practices
On 23 July 2024, the Federal Trade Commission (FTC) issued orders to eight companies to provide comprehensive information on their surveillance pricing practices. The companies involved are Mastercard, Revionics, Bloomreach, JPMorgan Chase, Task Software, PROS, Accenture, and McKinsey & Co. This investigation focuses on understanding how these firms utilize personal data, such as browsing history, credit scores, and other demographics, to set individualized prices for goods and services.
Context and Purpose
The FTC's initiative aims to scrutinize the opaque market of surveillance pricing, where companies use advanced algorithms and AI to exploit personal data for pricing strategies.
According to a blog post on the background to this study, the FTC aims to:
Understand Surveillance Pricing Mechanisms: Investigate how companies use personal data to fuel their pricing algorithms and identify the sources of this data.
Identify Key Players: Focus on intermediary firms that enable surveillance pricing, examining their technical methods and the data pipelines involved.
Assess Consumer Impact: Determine the effects of surveillance pricing on consumers, including the types of data used and how it influences the prices they pay.
This investigation leverages the FTC's 6(b) authority, which allows for wide-ranging studies without specific law enforcement purposes. The goal is to comprehend the implications of these practices on consumer privacy, market competition, and protection from unfair pricing.
Specifications and Requirements
The FTC's orders require the companies to submit a Special Report detailing various aspects of their surveillance pricing solutions:
Types of Products and Services: Each company must list all User Segmentation Solutions and Targeted Pricing Solutions developed, produced, or licensed. This includes detailed descriptions of the intended uses, technical approaches, features, data inputs, and promotional materials.
Data Collection and Inputs: Detailed information on internal and external data sources used, data collection methods, platforms used for data collection, data retention periods, and oversight mechanisms for data sources. Companies must provide a complete data map and explain the use of each data field.
Customer and Sales Information: Annual aggregate sales data, customer lists, and details of contracts and service agreements with major clients. Companies must describe contractual limitations on the use of their solutions and the enforcement of these limitations.
Impacts on Consumers and Prices: Analysis, reports, studies, and surveys evaluating the effects of User Segmentation and Targeted Pricing Solutions on pricing, sales volume, and consumer segmentation.
FTC Chair Lina M. Khan emphasized the potential risks, stating "Firms that harvest Americans’ personal data can put people’s privacy at risk. Now firms could be exploiting this vast trove of personal information to charge people higher prices." The FTC’s inquiry aims to illuminate these practices and protect consumers from potential exploitation.
The orders stipulate that the companies must:
Submit responses within 45 days of the service date.
Ensure the Special Report is prepared or supervised by an official who can verify its accuracy.
Schedule a teleconference with FTC representatives within 14 days of receiving the order to discuss their response.
Provide all responsive documents in electronic or hard copy format, accompanied by detailed metadata and indexes.
The topic of pricing discrimination has been discussed under EU law especially under Art. 22 GDPR (fully automated decision making). Here are some recommended readings on this:
Price Discrimination, Algorithmic Decision-making, and European Non-discrimination Law by Frederik Zuiderveen Borgesius (at SSRN) - see also the accompanying blog post here.
Affinity-based algorithmic pricing: A dilemma for EU data protection law, by Zihao Li (at Science Direct).
A special price just for you: effects of personalized dynamic pricing on consumer fairness perceptions, by Anna Priester, Thomas Robbert & Stefan Roth (at Springer).
📢 European Commission Coordinates Action Against Meta on 'Pay or Consent' Model
On 22 July 2024, the European Commission coordinated with the Consumer Protection Cooperation (CPC) Network to address concerns surrounding Meta’s 'pay or consent' model. This model demands users to either pay for accessing Facebook and Instagram or consent to personalized ads based on their personal data. The CPC, led by France’s Directorate General for Competition, Consumer Affairs, and Fraud Prevention, began this action in 2023 following Meta's overnight implementation of the new model.
Key Concerns
Misleading Information: Meta's use of the term 'free' misled consumers, as those not subscribing must consent to their data being used for ads.
Confusing Processes: Users navigated multiple screens and hyperlinks to understand data usage, complicating their decision-making.
Imprecise Language: Terms like 'your info' instead of 'personal data' created misunderstandings, and even paying users might still see ads via shared content.
Undue Pressure: Long-time users, accustomed to free services, were pressured to make immediate choices without adequate time to understand the implications.
The action is part of broader EU and national investigations into Meta's practices. Separate ongoing investigations include potential breaches of the Digital Markets Act (DMA) and the Digital Services Act (DSA), and the Irish Data Protection Commission's assessment under GDPR.
Meta has until 1 September 2024 to address the CPC's concerns and propose solutions. Failure to comply may result in enforcement measures, including sanctions.
See here the press release.
🚗 US Senators Urge FTC Crackdown on Automakers' Data Sharing
On July 26, 2024, U.S. Senators Ron Wyden and Edward J. Markey urged the FTC to investigate automakers' unauthorized disclosure of driver data to data brokers. Wyden's investigation revealed that GM, Honda, and Hyundai shared driving and location data with Verisk Analytics without obtaining informed consent from drivers. This investigation follows increased scrutiny of automakers' data privacy practices, highlighted by recent investigative stories from the New York Times.
Key Findings
GM shared customer location data and driving data, using dark patterns to enroll consumers in data-sharing programs. GM disclosed sharing data from its Smart Driver program and general internet-connected cars without informed consent.
Honda shared data from 97,000 cars with Verisk, earning $25,920. Honda's enrollment process for its Driver Feedback program used deceptive tactics to obscure data-sharing practices.
Hyundai shared data from 1.7 million cars with Verisk, earning over $1 million. Hyundai automatically enrolled consumers who activated their car’s internet connection into a data-sharing program without their explicit consent.
Automakers are said to have employed dark patterns and deceptive claims, implying that shared data would only be used to lower insurance bills. However, it was revealed that telematics data could also lead to higher insurance premiums. Only a few US states prohibit the use of such data to increase premiums.
The senators highlighted that these practices could be just the "tip of the iceberg," suggesting further investigation into automakers’ relationships with other data brokers. They emphasized that selling driver data without consent, particularly for profit, is unacceptable. Given the potential harm to consumers from increased insurance costs and the manipulation through dark patterns, the senators urged the FTC to hold both automakers and data brokers accountable.
Call to Action
The senators called for:
Broad FTC investigation into auto industry practices.
Accountability for automakers and data brokers.
Responsibility for senior company officials for privacy abuses.
The ongoing scrutiny underscores the importance of transparent and lawful data practices in the automotive industry.
Read the press release here.
📝 EU Commission Publishes Biennual Report on Consumer Protection
The European Commission's biennial report on the actions carried out under the Consumer Protection Cooperation (CPC) Regulation details efforts aimed at protecting consumer interests in the EU.
Key Actions by the CPC Network
Mutual Assistance Requests: A significant increase with 440 requests, marking a 41% rise from the previous period.
Alerts and Sweeps: A total of 93 alerts were issued, covering issues like dark patterns and misleading price reductions during Black Friday. Sweeps focused on influencers and car rental intermediaries.
Digital Markets: Major actions involved platforms like Wish, Shopify, Amazon, Google, and Vinted, addressing issues from misleading discounts to data transparency.
Specific Enforcement Actions
Influencer Marketing: A 2023 sweep of 576 influencers revealed 80% did not disclose commercial activities properly. The European Commission launched an Influencer Legal Hub for compliance.
Car Rental Intermediaries: 55% of 78 websites checked were non-compliant with transparency requirements.
Green Transition: Actions against Nintendo and Zalando addressed product durability and misleading environmental claims.
Cooperation and Capacity Building
EU and International Cooperation: Collaborative efforts with BEUC, CENTR, and the US FTC on issues like greenwashing and online fraud.
Capacity Development: Initiatives like the eLab provided CPC authorities with advanced tools for online investigations, and the e-enforcement Academy offered extensive training.
Market Trends and Future Challenges
AI and Chatbots: Increasing concerns about consumer awareness and regulatory compliance.
Greenwashing: Rising scrutiny on sustainability claims, especially in sectors like airlines and plastic products.
Online Fraud: Persistent issues with scams and identity theft, with a significant emotional and financial toll on consumers.
Read the press release here.
🇪🇺 Second GDPR Report Highlights Progress and Challenges
On 25 July 2024, the European Commission published its second report on the application of the GDPR, in accordance with Article 97. The report highlights several achievements but also points out what is not going so well. Here is a condensed version.
Successes
Increased Awareness and Rights Exercise: The GDPR has significantly raised public awareness about data protection, with many individuals now familiar with their rights and actively exercising them. This has been facilitated by effective public awareness campaigns, educational initiatives, and user-friendly digital tools developed by data protection authorities (DPAs).
Strong Enforcement Actions: There has been a notable uptick in enforcement activities by DPAs, including substantial fines against large tech companies (around EUR 4.2 billion). This has led private companies to ‘take data protection seriously’ and helped to embed a culture of compliance.
Enhanced Cooperation Between DPAs: The cooperation and consistency mechanisms under the GDPR have been increasingly utilized, with a rise in cross-border cases handled through mutual assistance and informal cooperation. The European Data Protection Board (EDPB) has played an important role in resolving disputes and ensuring consistent application of the GDPR.
Use of GDPR Compliance Tools: Businesses have benefited from practical compliance tools such as standard contractual clauses (SCCs), codes of conduct, and certification mechanisms. These tools are particularly useful for SMEs and organizations lacking extensive resources.
Development of Guidelines and Best Practices: The EDPB and DPAs have developed numerous guidelines to clarify various GDPR aspects, helping organizations understand and comply with their obligations. These guidelines have generally been well-received by stakeholders.
Improved Resources for DPAs: Most DPAs have seen increases in staff and budget, enhancing their capacity to enforce the GDPR and carry out their tasks effectively.
Positive Role of the EDPB: The EDPB has strengthened cooperation between DPAs and has been instrumental in ensuring the consistent application of the GDPR across member states.
Challenges and Recommendations
Inconsistent Interpretation by DPAs: Divergent interpretations of key GDPR concepts by national DPAs lead to legal uncertainty and increased compliance costs. The Commission emphasizes the need for clearer, more actionable, and practical guidance from DPAs and the EDPB.
Fragmented National Legislation: Fragmentation arises from national laws where member states have discretion, such as the age of consent and processing of sensitive data. The Commission advocates for better alignment and harmonization to minimize these inconsistencies. Member States must consult DPAs timely before adopting personal data processing legislation, as this is sometimes lacking or insufficient.
Resource Limitations of DPAs: Many DPAs report inadequate human and financial resources, impacting their enforcement capabilities. The Commission calls for continued investment to ensure DPAs are adequately funded and staffed.
Challenges for SMEs: SMEs face significant hurdles in achieving GDPR compliance due to limited expertise and perceived regulatory complexity. The Commission suggests intensifying support efforts, providing practical tools, templates, and tailored guidance for SMEs.
Inefficient Handling of Cross-Border Cases: Procedural differences across member states lead to inefficiencies in handling cross-border cases. The Commission has proposed a Regulation on procedural rules to harmonize these aspects and support timely investigations and remedies.
Streamlining Approval Processes for Compliance Tools: The approval processes for codes of conduct, certifications, and SCCs are often slow and complex. The Commission highlights the need for clearer timelines and more active engagement by DPAs to encourage the development and adoption of these tools.
Data Protection Officers (DPOs): DPOs play a critical role in ensuring GDPR compliance within organizations. While many DPOs possess the necessary knowledge and skills, challenges remain, such as (i) difficulty appointing qualified DPOs, (ii) lack of EU-wide training standards, (iii) poor integration of DPOs, (iv) insufficient resources, (v) non data protection tasks, and (vi) low seniority. The Commission recommends enhanced enforcement and clearer guidelines for DPOs, emphasizing the need for their adequate integration into organizational processes and ensuring they have sufficient resources and authority to fulfill their duties effectively.
Coordination with Other EU Policies: The GDPR is increasingly integrated with other EU digital policies, such as the Digital Services Act, Digital Markets Act, and the AI Act, to ensure a cohesive regulatory environment for data protection and digital services. These policies build on the GDPR framework to address specific issues like online advertising, AI applications, and platform work, ensuring a comprehensive approach to digital regulation. This requires that regulators cooperate to ensure efficient enforcement, quoting ‘pay or OK’ models as an example.
Engagement with Stakeholders: Constructive engagement between DPAs and stakeholders, including businesses and civil society, is critical. The report notes varying levels of responsiveness from DPAs and recommends improved communication and clarity. Enhanced stakeholder engagement will ensure that compliance measures are practical and well-understood, fostering a cooperative regulatory environment.
More guidelines needed: The report calls for more guidance on processing data for scientific research, balancing data protection with fostering innovation and public health research. The Commission encourages the adoption of clear guidelines to clarify roles and responsibilities, facilitating research while ensuring robust data protection.
Enhanced Transparency and Participation: The Commission recommends increased transparency in the development of guidelines and policies, with early-stage consultations to better understand market dynamics and practical applications. This approach aims to create more practical and understandable guidelines, especially for non-legal professionals in SMEs and voluntary organizations.
Find it here.
🤖 U.S. Department of State Releases Risk Management Profile for AI and Human Rights
On 25 July 2024 the U.S. Department of State released the "Risk Management Profile for Artificial Intelligence and Human Rights," aiming to provide practical guidance for organizations in various sectors to align AI development and usage with international human rights standards.
Context and Rationale
AI's potential to advance technology and human rights is significant, yet it can also pose risks such as bias, surveillance, and censorship. The Profile leverages international human rights principles to inform AI risk management, offering a universal normative basis for assessing AI impacts.
Key Objectives
The Profile has two primary goals:
Integrating Human Rights with AI Risk Management: It aligns the NIST AI Risk Management Framework with human rights due diligence, demonstrating how human rights considerations fit within the AI lifecycle.
Promoting Rights-Respecting AI Governance: It fosters a common language for AI developers, policymakers, and civil society, illustrating the integration of human rights in AI governance.
Structure and Recommendations
The Profile consists of three sections:
Rationale and Scope: Explains the unique role of international human rights in AI governance.
Human Rights Impacts: Analyzes potential human rights risks from AI, including privacy violations, discrimination, and freedom of expression infringements.
Recommended Practices: Offers actionable steps derived from the AI RMF's four functions—Govern, Map, Measure, and Manage. Examples include:
Govern: Establish policies on AI and human rights, conduct human rights due diligence, and set up procedures for addressing risks.
Map: Engage stakeholders in AI system design and assess the context of AI deployment.
Measure: Evaluate and monitor AI systems for human rights impacts using quantitative and qualitative methods.
Manage: Prioritize and address human rights risks, ensure transparency, and provide redress mechanisms.
The Profile is designed to be cross-sectoral and aligns with various international initiatives, including the UN Guiding Principles on Business and Human Rights, OECD Guidelines, and UNESCO's AI ethics recommendations.
Read it here.
📜 NIST Released Guidance on Mitigating Generative AI Risks
The National Institute of Standards and Technology (NIST) released the AI RMF Generative AI Profile (NIST AI 600-1) on 25 July 2024. This profile helps organizations identify and mitigate the unique risks posed by generative AI, aligning risk management practices with organizational goals and priorities.
Risks Identified
Cybersecurity Threats: Lowered barriers for cyberattacks, including the creation and spread of harmful content like disinformation and hate speech.
Content "Hallucinations": Generative AI systems may produce inaccurate or misleading information, leading to potential misuse or misinterpretation.
Privacy Concerns: Risks related to the leakage and unauthorized use of sensitive data, with AI models sometimes revealing personal information unintentionally.
Environmental Impact: The significant energy consumption and carbon footprint associated with training and operating AI models.
Bias and Homogenization: The potential amplification of harmful biases and uniformity in AI outputs, leading to discriminatory practices and reduced diversity in content.
Human-AI Configuration: Misconfigurations and poor interactions between humans and AI systems, leading to over-reliance or aversion to AI systems.
Information Integrity: The ease of producing and disseminating misinformation and disinformation, undermining public trust.
Information Security: Increased attack surfaces and vulnerabilities in AI systems, including risks from prompt injections and data poisoning.
Intellectual Property: Potential infringement on copyrighted content and the unauthorized use of intellectual property.
Obscene, Degrading, and Abusive Content: The creation and spread of explicit and harmful content, including non-consensual intimate imagery and child sexual abuse material.
CBRN Information or Capabilities: The facilitation of access to information and capabilities related to chemical, biological, radiological, and nuclear weapons.
Value Chain and Component Integration: Risks associated with the integration of third-party components, such as improperly vetted data and lack of transparency.
Mitigation Actions
NIST outlines over 200 actions that can be taken to manage these risks effectively. These actions are organized into categories according to the AI Risk Management Framework (AI RMF): Govern, Map, Measure, and Manage. Here are the key points:
Govern:
Governance Structures and Policies:
Establish clear governance structures for managing AI risks.
Develop and implement policies that ensure accountability and transparency in AI system development and deployment.
Legal and Regulatory Compliance:
Align AI development processes with existing legal and regulatory requirements.
Stay informed about evolving laws and regulations relevant to AI technologies.
Stakeholder Engagement:
Engage with a diverse set of stakeholders to understand different perspectives and requirements.
Involve stakeholders in the risk assessment and mitigation processes.
Map
Risk Identification and Documentation:
Identify and document potential risks associated with AI systems at different stages of their lifecycle.
Use structured methodologies to map out these risks comprehensively.
Contextual Understanding:
Understand the context in which the AI system will operate.
Consider the socio-technical environment and the potential impact on different user groups.
Scenario Analysis:
Conduct scenario analyses to anticipate possible risk scenarios.
Prepare mitigation strategies for identified scenarios.
Measure
Performance Metrics:
Define and use metrics to measure the performance and impact of AI systems.
Regularly assess these metrics to ensure AI systems are functioning as intended.
Bias and Fairness Audits:
Conduct regular audits to detect and mitigate biases in AI systems.
Ensure AI systems are fair and do not disproportionately impact any group.
Robustness and Security Testing:
Test AI systems for robustness against adversarial attacks.
Implement security measures to protect AI systems from malicious actors.
Manage
Incident Response Plans:
Develop and maintain incident response plans for AI systems.
Ensure these plans are tested and updated regularly.
Continuous Monitoring and Improvement:
Continuously monitor AI systems for performance and compliance.
Implement mechanisms for continuous improvement based on feedback and monitoring data.
Third-Party Risk Management:
Ensure proper vetting and management of third-party components used in AI systems.
Maintain transparency in the sourcing and integration of these components.
Additional Key Points
Human-AI Interaction:
Define clear guidelines for human-AI interaction.
Ensure that users understand the capabilities and limitations of AI systems.
Value Chain and Component Integration:
Assess and manage risks associated with integrating various components across the AI value chain.
Ensure that all components meet organizational standards for quality and security.
The AI RMF Generative AI Profile serves as a comprehensive companion to the NIST AI Risk Management Framework (see also the NIST AI RMF Playbook). While the AI RMF is structured to provide a systematic approach for organizations to manage all AI-related risks (Govern - Map - Measure - Manage), the Generative AI Risk Profile enhances the AI RMF by offering specific guidance for managing the unique risks associated with generative AI models.
📢 FCC Proposes New AI Disclosure Rules for Political Ads
The FCC has released a Notice of Proposed Rulemaking (NPRM) to mandate the disclosure of AI-generated content in political advertisements. Released on 25 July 2024, this proposal addresses the increasing use of AI in political ads, which has raised concerns about the potential spread of deceptive content such as "deepfakes."
Context and Background
The NPRM, under MB Docket No. 24-211, responds to the patchwork of state laws regulating AI and deepfake technology in elections. Nearly half of the states have enacted such laws, with most being bipartisan efforts. The FCC’s initiative aims to bring uniformity and stability to these regulations, ensuring greater transparency in political advertising.
Proposals and Requirements
Definition of AI-Generated Content:
AI-generated content is defined as any image, audio, or video created using machine-based technology that depicts individuals or events in ways that can be manipulated to appear real.
Disclosure Requirements:
On-Air Announcements: Broadcasters must include an announcement disclosing the use of AI-generated content in political ads. For radio, this must be an oral statement, and for TV, it can be oral or visual.
Political Files Notices: Broadcasters are required to include notices in their online political files for all political ads containing AI-generated content.
Scope of Application:
The rules apply to radio and TV broadcast stations, cable operators, Direct Broadcast Satellite (DBS) providers, and Satellite Digital Audio Radio Service (SDARS) licensees.
The proposal also covers section 325(c) permit holders who transmit programming from U.S. studios to foreign stations.
Public Interest and Benefits
The proposal highlights the dual nature of AI in political ads. While AI can democratize campaign messaging by lowering costs and enhancing targeting, it also poses risks by making the creation of deceptive content easier and cheaper. The FCC believes that requiring disclosure will help voters evaluate the authenticity of political messages, thus supporting an informed electorate and maintaining the integrity of the democratic process.
Legal and Procedural Considerations
The FCC seeks comments on the proposals, particularly on the definition of AI-generated content, the effectiveness of the disclosure requirements, and the potential First Amendment issues. The Commission also considers the costs and benefits of these measures and their alignment with existing statutory authorities.
Read more here.
🔄 Google Halts Plans to Phase Out Third-Party Cookies
Earlier this year, Google began efforts to phase out third-party cookies in its Chrome browser, a move aimed at increasing user privacy. The plan was to replace third-party tracking with the Privacy Sandbox, a system controlled by Google within Chrome. However, on 22 July 2024, Google announced a significant shift in its strategy. Instead of deprecating third-party cookies, Google will introduce a new system allowing users to make an informed choice about blocking cookies across their web browsing.
Google says the decision follows feedback from various stakeholders, including the UK’s Competition and Markets Authority (CMA) and Information Commissioner’s Office (ICO). However, the ICO expressed disappointment at Google’s decision, emphasizing the need for more private alternatives to third-party cookies. The ICO plans to monitor industry responses and may take regulatory action against systemic non-compliance.
🔍 FTC Warns Against Misleading Anonymization Claims Through Hashing
In a blog post on 24 July 2024, the FTC underscores that hashed data is not anonymous. Hashing converts data, such as email addresses, into fixed numerical values. While this process can obscure the original data, it does not anonymize it, as hashed values can still uniquely identify individuals. The FTC has highlighted several cases demonstrating the misuse of hashed data and other pseudonymous identifiers, emphasizing the need for companies to accurately represent their data privacy practices.
Hashing transforms data like email addresses into a numerical hash, for example the phone number "123-456-7890" becomes "2813448ce6316cb70b38fa29c8c64130". Despite appearing meaningless, hashes can still uniquely identify users.
Notable FTC Cases
Nomi (2015): The FTC alleged that Nomi used hashed MAC addresses to track users in stores, which still allowed user identification.
BetterHelp (2022): The FTC charged BetterHelp for sharing hashed email addresses with Facebook, which could then identify users seeking mental health counseling.
Premom (2023): Premom allegedly shared unique advertising and device identifiers, enabling third-party tracking contrary to its privacy claims.
InMarket (2024): The FTC alleged InMarket used unique mobile device identifiers for user tracking without informed consent.
What to keep in mind
Hashes and other unique identifiers can be used for persistent tracking.
Companies must not claim that hashing anonymizes data - pseudonymization is not anonymization.
This issue applies also in the EU - the concept of personal data under the GDPR is broad and as long as there is any possibility to single out an individual, even if the data does not seem to make sense in itself, then the data is personal and GDPR applies. See recital 26 GDPR which says “To determine whether a natural person is identifiable, account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly.”
A couple of recommended readings on this topic:
🇬🇧 Ofcom Publishes Paper on Mitigating Deepfake Harms
Deepfakes, created using AI to generate or manipulate audio-visual content, have become a significant concern due to their potential to misrepresent individuals and events. Ofcom’s latest discussion paper, published on 23 July 2024, examines the increasing prevalence of deepfakes and explores the harms they cause, as well as strategies to mitigate these harms.
Context and Impact
Deepfakes are increasingly sophisticated due to generative AI tools, enabling even those with modest technical skills to create convincing fakes. These fakes can cause severe harm by humiliating individuals, facilitating scams, or spreading disinformation. High-profile incidents, like the deep nude images of Taylor Swift and fake audio of London Mayor Sadiq Khan, highlight the risks. Ofcom’s poll found that 43% of respondents aged 16+ and 50% of respondents aged 8-15 believe they encountered deepfakes in the past six months.
Types of Deepfakes
Demeaning Deepfakes: These are used to humiliate or abuse individuals, often depicting non-consensual sexual acts. Victims, predominantly women, suffer severe emotional and reputational damage.
Defrauding Deepfakes: These facilitate scams by misrepresenting someone’s identity, leading to financial losses. Examples include fake advertisements and romance scams.
Disinforming Deepfakes: These aim to spread false information to influence public opinion on political or societal issues. Notable instances include fake videos or audio clips of political figures.
Mitigation Measures
Prevention: Model developers can filter harmful content from training datasets and block prompts that generate harmful outputs. Red teaming exercises can assess and mitigate risks before models are deployed.
Embedding: Techniques like watermarking, metadata, and labeling can indicate the synthetic nature of content. Initiatives like the Coalition for Content Provenance and Authenticity (C2PA) help standardize these efforts.
Detection: Forensic techniques and machine learning classifiers can identify deepfakes by analyzing inconsistencies in audio-visual content. Hash matching databases can also help identify and track known deepfakes.
Enforcement: Online platforms must enforce clear rules about synthetic content and take action against users who create or share harmful deepfakes.
Deepfake technology continues to evolve, posing challenges to individuals, institutions, and society. Ofcom’s discussion paper underscores the need for a multi-faceted approach involving prevention, detection, and enforcement to mitigate the harms caused by deepfakes.
Read it here.
🤖 IAPP publishes Global AI Governance Law and Policy: Jurisdiction Overviews
On 24 July 2024, the International Association of Privacy Professionals (IAPP) published a a five-part article series examining the laws, policies, and broader contextual history and developments relevant to AI governance in five jurisdictions: Singapore, Canada, the UK, the US, and the EU.
The articles explore the varied regulatory approaches, voluntary frameworks, and legislative measures aimed at managing the risks and opportunities associated with AI technologies. The series showcases how these regions are addressing AI governance through national standards, sectoral initiatives, and comprehensive legislative approaches.
You can find it here.
🧠 Neurotechnologies and Mental Privacy: Societal and Ethical Challenges
The European Parliament's recent study explores the rapid advancements in neurotechnologies (NT) and their implications for mental privacy. Initially confined to clinical applications, NT has now permeated consumer markets, offering enhancements in work, education, and entertainment. This transition presents a myriad of challenges related to data security, privacy, and ethical considerations.
Background and Context
The Neurorights Foundation (NRF), founded in 2017, champions the establishment of 'neurorights' to protect individuals from the potential misuse of NT. These rights encompass:
The Right to Mental Privacy: Safeguarding brain data from unauthorized access.
The Right to Personal Identity: Ensuring that NT does not alter one’s sense of self.
The Right to Free Will: Protecting decision-making processes from NT manipulation.
The Right to Equal Access to Mental Augmentation: Promoting fair access to cognitive enhancements.
The Right to Protection from Algorithmic Bias: Preventing discriminatory practices in NT applications.
Key Findings and Implications
The study identifies several critical issues:
Neuro-Enchantment and Socio-Technical Imaginaries: The allure of NT often leads to exaggerated claims and uncritical acceptance, termed 'neuro-enchantment'.
Neuro-Essentialism: The reduction of human experiences to neural activities, which can lead to oversimplified solutions to complex problems.
Regulatory Gaps: Current legislation may not adequately address the unique challenges posed by NT. There is a risk of high-level legislation proposing new human rights without a thorough discussion of their practical implications.
Recommendations
The report proposes several policy options:
Laissez-Faire/Non-Interference: Minimal regulation, allowing market forces to shape NT development. This could lead to unchecked risks and data security issues.
Blanket Prohibition: Banning certain NT applications to prevent misuse, which could stifle beneficial innovations and economic opportunities.
Orchestrated Steps: Implementing a coordinated approach to regulation, focusing on:
Risk Evaluation: Extending beyond individual technologies to assess ecosystem impacts, especially for vulnerable groups.
Public Communication: Enhancing NT literacy and ensuring transparent communication about benefits and risks.
Legislative Adaptation: Updating existing laws to explicitly include neuro data and NT, following models like the AI Act.
Research Funding: Supporting studies on NT’s long-term effects and potential side-effects.
European Neurodata Space: Creating a secure data framework to protect European citizens' brain data.
Standardisation: Ensuring reliable and valid NT devices through robust standards.
You can find it here.
The EDPS TechDispatch #1/2024 (that I wrote about here) also emphasizes the critical need for data protection in the context of neurotechnologies, highlighting the risks of neurodata exploitation by private entities and law enforcement. While both the European Parliament study and the EDPS report address the ethical and legal implications of NT, the EDPS focuses more on the immediate regulatory and data protection measures needed to safeguard neurodata. The European Parliament study, in contrast, provides a broader, interdisciplinary evaluation, proposing specific neurorights and a balanced approach to regulation.
🍪 noyb's Consent Banner Report: How Authorities Actually Decide
This happened earlier in July but I missed it then so here goes. On 11 July 2024, noyb published its Consent Banner Report, detailing the discrepancies between the European Data Protection Board (EDPB) recommendations and the national Data Protection Authorities (DPAs) regarding consent banners. This report aims to provide a comprehensive resource for companies implementing consent banners and to spark further discussion on improving user consent mechanisms.
The EDPB established a "cookie banner taskforce" in September 2021 in response to numerous complaints about misleading consent banners. The taskforce's report, released in January 2023, offered opinions and recommendations on various consent banner violations. The EDPB emphasized that its findings set minimum thresholds, allowing national DPAs to adopt stricter standards.
noyb's report highlights various issues by country, with the common denominator that no country requires a permanently visible floating icon to withdraw consent, highlighting a gap in ensuring user-friendly consent withdrawal options.
Here is the country by country view in a nutshell:
That’s it for this edition. Thanks for reading, and subscribe to get the full text in your inbox!
♻️ Share this if you found it useful.
💥 Follow me on Linkedin for updates and discussions on privacy education.
🎓 Take my course to advance your career in privacy – learn to navigate global privacy programs and build a scalable, effective privacy program across jurisdictions.
📍 Subscribe to my newsletter for weekly updates and insights in your mailbox.