
The AI & Privacy Explorer #22/2024 (28 May - 2 June)

  • Jun 9, 2024
  • 18 min read

Welcome to the AI, digital, and privacy news recap for week 22 of 2024 (28 May - 2 June)!


🔍 Find the details on each topic below.



 

🤖 Italian DPA Issues Guidance on Protecting Online Personal Data from Web Scraping

On 30 May 2024, the Italian Data Protection Authority (Garante) released guidance to help public and private data controllers protect personal data published online from web scraping. Recognising that scraping entities have their own obligation to ensure lawfulness under the GDPR, this guidance addresses the other side – the website operators whose published data is scraped. Web scraping is the indiscriminate collection of personal data by third parties, often for the purpose of training generative AI models, and ongoing investigations, including one against OpenAI, will determine whether such practices can lawfully rely on legitimate interest.

Recommendations

  1. Reserved Areas: Garante advises creating restricted access areas, requiring registration to reduce public data availability.

  2. Anti-Scraping Clauses: Including specific clauses in terms of service can provide legal grounds against violators, deterring unauthorized data collection.

  3. Traffic Monitoring: Monitoring web traffic for abnormal data flows can help identify and mitigate web scraping activities.

  4. Bot Mitigation: Implementing measures against bots, such as CAPTCHA checks, modifying HTML markup, embedding data in images, and using robots.txt files, can make data scraping more challenging.

These measures are suggestions and not mandatory. Website data controllers are responsible for deciding which measures to implement, considering factors like technology developments and implementation costs, particularly for small and medium-sized enterprises.

Garante emphasizes that these measures cannot completely prevent web scraping but are essential for reducing unauthorized data use. Website operators must evaluate and implement appropriate measures to protect personal data from scraping.
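One of the bot-mitigation measures Garante mentions is the robots.txt file. A minimal sketch of how a compliant crawler interprets such a file, using Python's standard-library parser (the blocked user-agent name below is an illustrative assumption, not part of the guidance):

```python
# Sketch: how a compliant crawler reads a robots.txt that blocks a
# hypothetical AI-training user agent while allowing everyone else.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The named scraping bot is asked to stay out of the whole site...
print(parser.can_fetch("GPTBot", "https://example.com/members/profile"))      # False
# ...while ordinary user agents remain free to crawl.
print(parser.can_fetch("Mozilla/5.0", "https://example.com/members/profile"))  # True
```

As Garante notes, this only deters cooperative bots: robots.txt is a convention, not an enforcement mechanism, which is why it sits alongside CAPTCHAs and traffic monitoring rather than replacing them.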

👉 Find the press release and guidance in Italian here.



 

🤖 Austria’s DSB Publishes Guidance on the Relationship Between the GDPR and the EU AI Act for Controllers in the Private and Public Sectors

On 31 May 2024, the Austrian Data Protection Authority (DSB) published two guidelines on the interplay between the General Data Protection Regulation (GDPR) and the newly adopted EU AI Act – one aimed at private sector controllers and one at public sector controllers – emphasizing that the GDPR remains crucial even with the advent of the AI Act.

Applicability of GDPR

The DSB clarified that the AI Act does not supersede the GDPR – according to Article 2(7) of the AI Act, the roles and responsibilities of data protection authorities, as well as the obligations of AI system providers and operators under the GDPR, remain unaffected. This means that any processing of personal data by AI systems must still comply with GDPR provisions.

Legal Basis for Data Processing

For AI systems that process personal data, there must be a valid legal basis as outlined in Article 6(1) of the GDPR. When dealing with sensitive data, the stricter conditions of Article 9(2) GDPR must also be met. The DSB emphasized that the GDPR does not hinder the development of AI but ensures that personal data is processed lawfully.

Automated Decision-Making (Article 22 GDPR)

The guidelines highlighted the significance of Article 22 GDPR, which applies to automated decisions that produce legal effects or significantly affect individuals. Examples include automated loan approvals or online hiring processes. The DSB pointed to the broad interpretation of Article 22 by the European Court of Justice (ECJ), necessitating strict adherence to its provisions when AI systems are used for such decisions.

Practical Examples and Further Guidance

The DSB referred to case law involving the Austrian Supreme Administrative Court and the AMS algorithm used by the Public Employment Service. This example underscored the applicability of Article 22 GDPR in automated decision-making scenarios. Additionally, the DSB provided links to FAQs and resources for further information on AI and data protection.

Future Directions

The DSB noted the European Data Protection Board’s (EDPB) strategy for 2024-2027, prioritizing guidelines on the relationship between the GDPR and the AI Act. As a member of the EDPB, the DSB will actively contribute to developing these guidelines to ensure coherent application of data protection laws.

👉 Find the guidance here.

 

🤖 EDPS Issues Guidelines on Generative AI for EU Institutions

On 3 June 2024, the European Data Protection Supervisor (EDPS) published its inaugural orientations on the use of generative artificial intelligence (AI) by EU institutions, bodies, offices, and agencies (EUIs). These guidelines offer practical advice on managing personal data in compliance with Regulation (EU) 2018/1725 when deploying generative AI systems. They address a wide range of scenarios and stress adherence to fundamental data protection principles without mandating specific technical measures.

Context and Purpose

The orientations serve as a foundational step towards comprehensive guidance for EUIs. They are designed to help EUIs navigate the evolving landscape of generative AI technologies while ensuring data protection. Although the document does not delve into every potential issue, it provides initial responses and encourages EUIs to consider the broader implications of AI on data protection.

Key Questions Addressed

  1. What is Generative AI?

    Generative AI uses machine learning to produce various outputs like text, images, or audio, based on foundation models.

  2. Can EUIs Use Generative AI?

    EUIs can use generative AI if they meet all legal requirements, ensuring accountability and respect for fundamental rights.

  3. Personal Data Processing

    Generative AI involves data processing at multiple stages; compliance with data protection principles is essential.

  4. Role of Data Protection Officers (DPOs)

    DPOs advise and ensure compliance, providing crucial oversight in developing and deploying generative AI systems.

  5. Data Protection Impact Assessments (DPIAs)

    Required for high-risk processing, DPIAs help identify and mitigate risks to data protection throughout the AI lifecycle.

  6. Lawful Processing of Personal Data

    Processing is lawful if it meets one of the grounds specified in Regulation (EU) 2018/1725, such as public interest or consent.

  7. Data Minimisation Principle

    Only necessary personal data should be processed; indiscriminate data collection must be avoided to comply with this principle.

  8. Data Accuracy Principle

    Data must be accurate and up-to-date, requiring verification and regular monitoring throughout the AI system’s lifecycle.

  9. Informing Individuals

    Transparency is key; individuals must be informed about how, when, and why their data is processed by generative AI systems.

  10. Automated Decisions (Article 24)

    If AI systems involve automated decision-making, safeguards like human intervention and the right to contest decisions are necessary.

  11. Ensuring Fair Processing and Avoiding Bias

    Bias in AI systems must be minimized; EUIs need oversight mechanisms to prevent and correct unfair processing.

  12. Exercise of Individual Rights

    EUIs must facilitate individuals’ rights to access, rectify, erase, and object to data processing, ensuring transparency and traceability.

  13. Data Security

    AI systems must have robust security measures to protect against new and existing risks, with continuous monitoring and updates.

  14. Additional Resources and Updates

    Further guidance and updates will be provided as generative AI technologies evolve, ensuring EUIs remain compliant and informed.

Future Developments

The EDPS plans to refine and expand these orientations as generative AI technologies and their applications evolve. The guidelines will be updated within 12 months to incorporate new insights and developments from the EDPS’s monitoring activities.

👉 Find the guidance here.

 

🤖 Danish DPA Publishes AI Data Protection Impact Assessment Template

On 22 May 2024, the Danish Data Protection Agency (Datatilsynet) introduced two new templates to aid companies and authorities in conducting impact assessments as required by data protection regulations. This initiative addresses challenges highlighted in Datatilsynet’s October 2023 survey, which revealed significant difficulties in performing timely and adequate impact assessments, particularly concerning AI applications.

The release includes two distinct templates:

  1. Generic Template: Designed for a broad range of processing activities.

  2. AI-Specific Template: Specifically crafted for assessing the development and operation of AI solutions. This template provides concrete examples of potential risks and corresponding mitigation measures, inspired by the UK’s Information Commissioner’s Office (ICO) AI and Data Protection Risk Tool Kit.

The AI-specific template includes a comprehensive quality assurance checklist to guide organizations through the assessment process. It emphasizes:

  • Clear descriptions of processing activities and the necessity of the impact assessment.

  • Systematic documentation and logical structuring of the assessment.

  • Definition of roles, data flows, and compliance measures.

  • Identification and assessment of all relevant risks and corresponding mitigation measures.

  • Regular updates and continuous review of the assessment.

Datatilsynet’s October 2023 mapping of AI usage in the public sector identified these challenges, including difficulties in performing assessments in a timely and adequate manner, especially for AI solutions. The new templates aim to address them by providing structured guidance.

The AI DPIA template emphasizes compliance with GDPR principles, including purpose limitation, data minimization, accuracy, and security. Transparency and the ability to explain AI decisions to data subjects are also highlighted as critical aspects.

Organizations are encouraged to document specific mitigation measures for identified risks and involve domain experts in the assessment process. Clear roles and responsibilities should be assigned to ensure accountability.

If high risks cannot be mitigated, organizations must consult the Danish Data Protection Agency before proceeding. Management approval is required for the DPIA, and the views of data subjects or their representatives should be obtained where applicable.

👉 Find the templates in Danish here, and an automated English translation of the AI DPIA here.

 

👤 EDPB Issues Opinion on Facial Recognition at Airports

On 24 May 2024, the European Data Protection Board (EDPB) published an Opinion on the use of facial recognition technology by airport operators and airlines to streamline passenger flows. This Opinion, requested by the French Data Protection Authority, aims to ensure that such practices comply with GDPR requirements across EU Member States.

The Opinion assesses compatibility with Articles 5(1)(e) and (f), 25, and 32 of the GDPR. It excludes analysis of other provisions, such as the lawfulness of processing under Articles 6, 7, and 9.

Risk of Biometric Data

Facial recognition involves processing sensitive biometric data, posing risks like false negatives, bias, discrimination, and identity fraud. The EDPB emphasizes the need for less intrusive alternatives.

Scenarios of Storage Solutions

  1. Individual Storage: Biometric data stored only on the individual’s device.

  2. Centralized Storage with Individual Control: Data stored centrally but encrypted, with keys held by individuals.

  3. Centralized Storage at Airport: Data stored within airport systems.

  4. Cloud Storage: Data stored in the cloud, under airline control.
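The second scenario – central storage with keys held by the individual – can be sketched in a few lines. This is a toy, stdlib-only illustration (a one-time-pad XOR stands in for a real authenticated cipher, and all names and data are invented), intended only to show why the airport cannot read the template on its own:

```python
# Toy sketch of central storage under individual control: the biometric
# template is encrypted with a key that only the passenger keeps, so the
# central store holds ciphertext it cannot decrypt unilaterally.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # One-time-pad XOR; key must be as long as the data and used once.
    return bytes(d ^ k for d, k in zip(data, key))

# On the passenger's device: the key is generated and kept locally.
template = b"face-embedding-bytes"
passenger_key = secrets.token_bytes(len(template))

# Enrolment: only the ciphertext reaches the airport's central store.
central_store = {"passenger-123": xor_bytes(template, passenger_key)}

# At the gate: the passenger presents the key; the template is decrypted
# transiently for matching and the key is never persisted server-side.
recovered = xor_bytes(central_store["passenger-123"], passenger_key)
assert recovered == template
```

The design point the EDPB is after: control over decryption stays with the data subject, so a breach of the central store exposes only unreadable ciphertext.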

Findings and Recommendations

  • Preferred Storage Solutions: The EDPB finds that only solutions where biometric data is stored by the individual or centrally encrypted with individual control are compliant with data protection principles.

  • Excessive Data Processing: Where no legal requirement exists for identity verification, using biometrics for this purpose is excessive and non-compliant with the GDPR.

  • Data Protection by Design: The Opinion underscores the importance of integrating data protection from the outset, ensuring measures are in place to protect individuals’ privacy.

  • EDPB’s Position: The EDPB calls for a balance between effective airport operations and protecting individuals’ biometric data, highlighting the need for robust data protection measures in evolving technological landscapes.

👉 Find the opinion here.

 

📊 NIST Reports First Results From Age Estimation Software Evaluation

On 30 May 2024, the National Institute of Standards and Technology (NIST) published a report evaluating software that estimates a person’s age from facial photos. This report, the first of its kind in ten years, assessed six algorithms to understand their capabilities and limitations. The study is part of NIST’s broader effort to support age assurance technologies, which are becoming essential in age-restricted activities and online safety regulations.

Key Findings

The report, titled “Face Analysis Technology Evaluation: Age Estimation and Verification” (NIST IR 8525), revealed that none of the evaluated algorithms clearly outperformed the others. NIST found significant variation in age estimates based on facial expressions, eyeglasses, and other factors. For instance, age estimates could differ when the same person wore glasses versus when they did not, and different facial expressions also impacted the accuracy.

Methodology

The evaluation used a diverse set of 11.5 million photos from four U.S. government databases, including visa applications, FBI mugshots, webcam images from border crossings, and immigration application photos. This comprehensive dataset ensured a variety of ages, genders, and regions of origin were represented. NIST maintained the anonymity of the data and conducted the study under rigorous privacy standards.

Performance and Demographics

The study highlighted several points of interest:

  • Performance Over Time: Compared to a similar study in 2014, age estimation accuracy has improved. The mean absolute error (MAE) decreased from 4.3 years to 3.1 years.

  • Gender Disparities: Error rates were higher for female faces than for male faces, consistent with the findings from the 2014 study.

  • Demographic Sensitivity: The accuracy of algorithms varied significantly across different demographic groups, with some algorithms performing better on specific groups than others.
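The headline metric above, mean absolute error (MAE), is simply the average absolute gap between estimated and true ages. A quick illustration with invented numbers (not NIST data):

```python
# MAE on a toy sample of five hypothetical subjects.
true_ages      = [25, 31, 47, 62, 19]
estimated_ages = [28, 29, 51, 58, 23]

# Average of the absolute estimation errors: |28-25|, |29-31|, ...
mae = sum(abs(e - t) for e, t in zip(estimated_ages, true_ages)) / len(true_ages)
print(mae)  # 3.4 years on this toy sample
```

NIST's reported drop from 4.3 to 3.1 years means the average estimate moved more than a year closer to the true age over the decade.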

Implications

The findings are essential for developers, policymakers, and users of age estimation technology. As age assurance becomes more embedded in legislation and regulations, especially for protecting children online, the continuous improvement and evaluation of these technologies are crucial.

NIST aims to provide a reliable benchmark for the performance of age estimation software, ensuring that these technologies can meet the evolving demands of privacy and accuracy in age-restricted applications.

My take

While it’s encouraging to see advancements in age estimation technologies, the recent NIST report underscores their current unreliability, as no single algorithm outperformed the others and significant variations in accuracy were noted based on factors like facial expressions and the presence of eyeglasses.

👉 Find the press release here and the report here.

 

✍️ Advocate General Opinion on GDPR and Company Registers (Case C‑200/23)

On 30 May 2024, Advocate General Medina provided an opinion on Case C-200/23, involving the Bulgarian Registration Agency’s refusal to erase personal data from a company’s constitutive instrument published in the commercial register. This case was referred by the Bulgarian Supreme Administrative Court and primarily concerns the interaction between EU data protection regulations and company law.

Background

The issue arose from OL, a member (shareholder) of a Bulgarian limited liability company, requesting the removal of her personal data from the public register, arguing that such data were published without her consent. The Bulgarian Registration Agency denied the request, leading to legal proceedings.

The AG examined the relevance of Directive 2017/1132 on company law and Regulation 2016/679 (GDPR). Articles 14 and 16 of Directive 2017/1132 require the public disclosure of certain company documents, while Articles 4 to 6 and 17 of the GDPR regulate the processing and erasure of personal data.

Advocate General’s Suggested Answers to the Referred Questions

  1. Controller Responsibility

    • Conclusion: The Bulgarian Registration Agency is solely responsible for making personal data in company documents available to the public, even if such data should have been redacted before submission.

    • Reasoning: Article 4(7) and Article 26(1) of the GDPR define the controller as the entity that determines the purposes and means of processing. The agency processes and discloses the data according to legal obligations, making it the controller.

  2. Procedural Requirements for Data Erasure

    • Conclusion: The GDPR precludes national legislation or practices that condition data erasure on the submission of a redacted copy of the document. The agency must fulfill erasure requests without undue delay, regardless of whether a redacted copy is provided.

    • Reasoning: Articles 17 and 23(1) of the GDPR guarantee the right to erasure and limit restrictions on this right. Procedural rules requiring redacted copies impose unnecessary obstacles and violate the GDPR’s provisions.

  3. Compatibility with Company Law Directive

    • Conclusion: Directive 2017/1132, as amended by Directive 2019/1151, does not permit procedural rules that restrict the right to data erasure. The agency can redact personal data itself and retain an unredacted version on file.

    • Reasoning: The directive mandates public access to company documents but does not preclude data protection requirements. Recital 8 emphasizes compliance with data protection laws, allowing the agency to redact non-essential data while preserving document integrity.

Unanswered Questions

The Advocate General left several questions unanswered, among them whether a handwritten signature constitutes personal data within the meaning of Article 4(1) of the GDPR.

Conclusion

Medina’s opinion highlights the necessity for procedural safeguards that protect personal data while maintaining the public interest in company transparency. The opinion is not binding, and the CJEU will issue its own ruling.

👉 Find the AG Opinion here.

 

📘 Quebec CAI Releases User-Friendly PIA Companion Guide

On 14 May 2024, the Quebec Commission on Access to Information (CAI) released an updated and more user-friendly version of its Privacy Impact Assessment (PIA) Companion Guide. The guide aims to assist organizations in better protecting and respecting privacy rights from the outset of any project involving personal information.

Key Features of the Guide

  • Assessing the Need for a PIA: The guide helps determine when a PIA is required, emphasizing the importance of legal compliance and thorough reflection on privacy impacts.

  • Conducting a PIA: Detailed steps are provided, from defining the project and assessing privacy risks to implementing mitigation measures. The guide ensures a structured and comprehensive approach to safeguarding privacy.

  • Preparing the PIA Report: It includes instructions on preparing a PIA report following the Commission’s standards, ensuring consistency and thorough documentation.

Key Sections of the Template

  • Project Description: Details about the project, including its objectives, timeline, and the personal information involved.

  • Roles and Responsibilities: Identification of individuals and administrative units involved in the PIA, with clear definitions of their roles and responsibilities.

  • Personal Information Inventory: Comprehensive inventory of the personal information collected, used, communicated, and stored during the project.

  • Compliance with Privacy Obligations: Documentation of measures taken to comply with relevant privacy laws and principles.

  • Risk Identification and Mitigation: Description of privacy risks associated with the project, their causes, consequences, and the strategies implemented to mitigate these risks.

  • Action Plan: Concrete actions to be taken following the PIA, including timelines and responsibilities for managing residual risks.

  • Approval and Versioning: Formal approval by a senior official and documentation of updates to the PIA report.

The template includes helpful symbols and codes for guiding users through the documentation process. It also provides suggestions and examples to assist in reflecting on various aspects of privacy evaluation. While it is not mandatory to use this template, the CAI recommends adapting it to fit the specific needs of each organization and project.

👉 Find the guideline and the template here, in French.


 

📋 Latvian DVI Outlines Actions Post-DPO Appointment

On 27 May 2024, the Latvian Data State Inspectorate (DVI) released a comprehensive guide detailing the steps organizations must follow after appointing a Data Protection Officer (DPO). This reminder serves to highlight the DPO’s critical role as a liaison between the organization, the DVI, and data subjects.

Key Responsibilities

  • Inform the DVI: Organizations must notify the DVI of the DPO’s appointment, providing the DPO’s name, contact information, and the organization’s name. If multiple controllers appoint a DPO, all must be listed and the document signed by each.

  • Notify Citizens: The DPO’s contact information must be accessible to data subjects. This information should be included in the organization’s privacy policy on its website or provided through other convenient methods, such as email or in-person during data processing initiation.

  • Update Changes: Any changes in the DPO’s contact details or replacement must be promptly reported to the DVI and communicated to data subjects, ensuring that the information remains current.

  • Report Terminations: If the DPO’s contract ends without immediate replacement, the DVI must be informed. Organizations must ensure continuous data protection by promptly appointing a new DPO when required.

  • Employee Awareness: Staff should be informed about the DPO, enabling them to assist data subjects and coordinate with the DPO regarding data processing activities.

Appointment Guidelines

  • Certification and Expertise: A DPO can be:

  • A specialist who has passed the DVI’s qualification exam and is listed in the DVI’s registry.

  • A professional with sufficient practical and theoretical knowledge in data protection, regardless of DVI listing.

  • Contract Basis: The DPO can be appointed through an employment contract or an outsourcing agreement.

Additionally, organizations should have contingency plans to delegate DPO duties temporarily during illness or other incapacities to ensure continuous compliance.

👉 Find the guidance here.


 

📊 EDPB Statement on Financial Data Access and Payments Package

On 23 May 2024, the European Data Protection Board (EDPB) adopted Statement 2/2024, focusing on the European Commission’s legislative proposals for Financial Data Access (FIDA), the Payment Services Regulation (PSR), and the Payment Services Directive (PSD3). The European Commission published the proposals on 28 June 2023, intending to build on existing frameworks like the Second Payment Services Directive (PSD2). These proposals aim to enhance consumer protection, strengthen competition in electronic payments, and empower consumers to share their data to access diverse financial products.

Key Recommendations

Transaction Monitoring Mechanism

The EDPB stresses the necessity of clear rules for the recording and disclosure of personal data in transaction monitoring mechanisms (TMM) under the PSR. Recommendations include:

  • Specifying categories of personal data processed.

  • Documenting reasons for data processing.

  • Limiting access to authorized personnel.

  • Informing data subjects about data processing criteria.

Obligations for AISPs and PISPs

For account information service providers (AISPs) and payment initiation service providers (PISPs), the EDPB emphasizes:

  • Transparency in informing account servicing PSPs about the customer account and legal basis for data access.

  • Data minimization, ensuring access to only necessary personal data.

Legal Meaning of ‘Permission’

The EDPB advocates for a clear distinction between ‘permission’ and GDPR consent, recommending amendments in the PSR Proposal to prevent confusion and ensure proper legal interpretation.

Permission Dashboards

The EDPB calls for:

  • Specifications in the PSR to prevent undue influence on users in granting or withdrawing permissions.

  • Requirements in the FIDA for data users to inform data holders about the legal basis for accessing personal data.

Processing of Special Categories of Personal Data

The EDPB urges:

  • Specific designations of payment services allowed to process special categories of personal data.

  • Justifications for processing such data should be provided in the legislative text.

Regulatory Cooperation

The EDPB recommends explicit references to cooperation between financial and data protection authorities for effective enforcement and information exchange.

👉 Find the Statement here.

 

📋 Irish DPC Publishes 2023 Annual Report

The Data Protection Commission (DPC) released its 2023 Annual Report on 29 May 2024, showcasing a landmark year in data protection enforcement and regulation. The report highlights the DPC’s commitment to upholding personal data rights and compliance with the General Data Protection Regulation (GDPR).

Significant Enforcement Actions 

The DPC issued 19 final decisions in 2023, resulting in administrative fines totaling €1.55 billion. Key cases included:

  • Meta Platforms Ireland Limited: In May 2023, the DPC fined Meta €1.2 billion for GDPR violations related to data transfers from the EU to the US and ordered compliance adjustments.

  • TikTok Technology Limited: In September 2023, TikTok was fined €345 million for improper processing of children’s data, alongside corrective orders.

  • Bank of Ireland: In February 2023, a €750,000 fine was imposed for data breaches on the Bank of Ireland 365 app.

  • Centric Health: A €460,000 fine was issued following a ransomware attack affecting over 70,000 patients’ data.

Case Statistics and Trends

The DPC saw a 20% increase in new cases, receiving 11,200, with 2,600 progressing to formal complaints. The DPC concluded 11,147 cases, including 3,218 formal complaints. It handled 6,991 valid data breach notifications, marking another 20% rise from 2022, with 92% resolved by year-end.

Legislative Contributions and Guidance 

The DPC contributed to 37 legislative proposals, including input on the Circular Economy and Miscellaneous Provisions Act 2022, providing a legal basis for local authorities to use recording devices for waste management enforcement.

International and Cross-Border Activities

As the EU’s Lead Supervisory Authority for several major companies, the DPC managed 89 statutory inquiries, including 51 cross-border inquiries. Notably, it accounted for 87% of all GDPR enforcement fines across the EU. Its interventions also led to four significant internet platform projects being postponed or revised due to data protection concerns.

Key Insights and Observations

The DPC’s report emphasizes the critical role of Data Protection Officers (DPOs) in ensuring compliance across various sectors. In 2023, the DPC concluded 237 electronic direct marketing investigations and prosecuted four companies for unsolicited marketing communications, resulting in €2,000 in fines.

However, it did not take long before the Irish Council for Civil Liberties pointed out that the report “highlights problems in Big Tech enforcement up to end of 2023”.

👉 Find the Report here.

 

✂️ Quebec Implements Personal Information Anonymization Regulations

On 30 May 2024, the Quebec government implemented the Regulation on the Anonymization of Personal Information. This regulation, outlined in Decree 783-2024, establishes the criteria and procedures for anonymizing personal data, affecting public bodies and private businesses within Quebec.

Scope and Applicability

The regulation applies to:

  • Public bodies as defined in Section 3 of the Act respecting Access to documents held by public bodies and the Protection of personal information (chapter A-2.1).

  • Persons operating a business under the Act respecting the protection of personal information in the private sector (chapter P-39.1).

  • Professional orders as stipulated in the Code of Professions (chapter C-26).

Key Requirements

Criteria and Procedures
  • Pre-Anonymization: Organizations must establish the purposes for using anonymized data, ensuring compliance with the relevant laws (Article 73 for public bodies and Article 23 for private sector).

  • Supervision: Anonymization must be carried out under the supervision of a competent individual.

  • Risk Analysis: Before anonymization, organizations must remove all directly identifying information and conduct a preliminary analysis of re-identification risks, considering factors like individualization, correlation, and inference.

  • Techniques and Measures: Organizations must use recognized anonymization techniques and implement reasonable security measures to minimize re-identification risks.

  • Post-Anonymization: After applying anonymization techniques, organizations must analyze re-identification risks to ensure the data cannot reasonably be used to identify individuals, considering residual risks and technological advancements.
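The "individualization" factor in the required risk analysis can be made concrete with a small sketch: count how many records share each combination of quasi-identifiers, since a group of size one is a singled-out individual and thus a re-identification risk. Field names and records below are invented for illustration, not drawn from the regulation:

```python
# Sketch of an individualization check: group records by quasi-identifiers
# and flag any group of size 1 as a re-identification risk.
from collections import Counter

records = [
    {"age_band": "30-39", "postcode": "H2X", "sex": "F"},
    {"age_band": "30-39", "postcode": "H2X", "sex": "F"},
    {"age_band": "40-49", "postcode": "H3B", "sex": "M"},
]

quasi_identifiers = ("age_band", "postcode", "sex")
groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)

# The smallest group size is the dataset's k; k == 1 means at least one
# record is unique on its quasi-identifiers and can be singled out.
k = min(groups.values())
print(k)  # 1: the lone 40-49/H3B/M record is unique
```

A real assessment would also cover the correlation and inference factors the regulation names, and would be repeated periodically as the ongoing-assessment requirement demands.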

Ongoing Assessment
  • Organizations must periodically evaluate anonymized data to ensure it remains anonymous, updating their risk analysis to account for technological advancements that could impact re-identification.

Record-Keeping
  • Organizations must maintain a register documenting the types of personal information anonymized, purposes, techniques used, and dates of risk analysis. Article 9, which details these record-keeping requirements, will come into effect on January 1, 2025.


👉 Find the press release here.

 

🤖 Estonian Information System Authority Publishes Report on Risks and Controls for AI and Machine Learning Systems

Estonia’s Information System Authority published a comprehensive report on 27 May 2024, titled “Risks and Controls for Artificial Intelligence and Machine Learning Systems”. The report aims to support the implementation of AI technology by providing detailed guidance on ensuring cybersecurity, fulfilling legal requirements, and maintaining societal safety.

Historical Context and Definitions

The report begins with an overview of the history of AI, detailing its evolution from early cybernetic studies to modern machine learning and large language models. Definitions and abbreviations for common AI terms are provided to ensure clarity.

AI Applications and Trends

The report discusses various use cases for AI, emphasizing its potential to add value across multiple sectors. It highlights trends such as the shift from general-purpose to special-purpose AI, the movement towards open-source models, and the increasing regulatory landscape, including the EU AI Act and the draft AI Liability Directive.

Legal Aspects

Legal considerations are thoroughly reviewed, focusing on international initiatives, EU proposals, and specific regulations. The legal roles of AI system stakeholders are examined through the lenses of GDPR and the AI Act.

Deployment Models and Risk Assessment

Three deployment models for AI applications are presented:

  • using an AI API,

  • implementing an external AI model, and

  • using an in-house AI model.

The report outlines associated risks and offers a detailed risk management methodology. Section 5.2 covers information security risks, legal risks, and specific AI risks, including attacks against AI systems.

Practical Controls and Recommendations

Section 6 reviews controls for mitigating AI-specific risks, including improving AI system quality and safety, and addressing technological and societal risks. Policy recommendations are summarized to promote safe AI application in Estonia.

Quick Reference Guide

The most practical part of the report is Section 8, a quick reference guide for organizations.

It includes steps for describing AI systems, finding suitable deployment models, identifying applicable laws, evaluating threats, and selecting appropriate controls. The guide is designed to help organizations implement AI safely and effectively, with worksheets and decision charts to facilitate the process.

 

👉 Find the report here.

That’s it for this edition. Thanks for reading, and subscribe to get the full text in your inbox!


♻️ Share this if you found it useful.

💥 Follow me on Linkedin for updates and discussions on privacy education.

🎓 Take my course to advance your career in privacy – learn to navigate global privacy programs and build a scalable, effective privacy program across jurisdictions.

📍 Subscribe to my newsletter for weekly updates and insights in your mailbox.




It takes less time to do a thing right than to explain why you did it wrong.

– Henry Wadsworth Longfellow
