
The AI & Privacy Explorer #36/2024 (1-8 September)

  • Sep 16, 2024
  • 16 min read




Welcome to the AI, digital, and privacy news recap for week 36 of 2024 (1-8 September)!


In this edition:

🚖  Dutch DPA Fines Uber €290 Million Over Data Transfers;

⚖️  Swedish DPA Fines Apoteket and Apohem for Meta Pixel Misuse;

📑  ECtHR Finds Privacy Violation in Monitoring of Legal Documents Exchanged between Prisoners and Lawyers;

🛑  Dutch DPA Fines Clearview AI for Illegal Facial Recognition;

🛡️  Norwegian University Fined for Weak Access Controls in Microsoft Teams;

🏢  IAPP Organizational Digital Governance Report 2024;

⚖️  AG de la Tour's Opinion on 'Excessive' GDPR Complaints;

📱  ECtHR Rules Search of Mobile Phone Violated Article 8 Rights;

🤖  CoE’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and The Rule of Law has been opened for signature;

🇪🇺  European Commission Issues FAQs on the Data Act.


 

🚖 Dutch DPA Fines Uber €290 Million Over Data Transfers

On 26 August 2024, the Dutch Data Protection Authority (DPA) announced that it had imposed a €290 million fine on Uber Technologies Inc. and its Dutch subsidiary, Uber B.V., for violating Article 44 of the General Data Protection Regulation (GDPR). The penalty was the result of Uber’s failure to ensure an appropriate data transfer mechanism for EU-to-U.S. transfers between August 2021 and November 2023.


Background

Uber’s data transfer practices came under scrutiny after a complaint was lodged with the French data protection authority (CNIL) in June 2020, alleging that Uber’s transfers violated the Schrems II ruling. The case was escalated to the Dutch DPA in January 2021 under the GDPR’s One-Stop-Shop mechanism, and the investigation formally commenced in April 2021.

 

Findings of the Dutch DPA

Uber had originally used the EU Standard Contractual Clauses (SCCs) for data transfers between its Dutch entity (Uber B.V.) and its U.S. entity (Uber Technologies Inc.). However, following the European Commission’s Q&As on the SCCs - which state that the SCCs cannot be used with data importers whose processing is already subject to the GDPR under Article 3 - Uber removed the SCCs from its Data Sharing Agreement in August 2021. The Dutch DPA held that, after this removal, Uber did not have a valid alternative mechanism in place for transferring data to its parent company, Uber Technologies Inc., in the United States. Uber had relied on Article 49(1)(b) and (c) of the GDPR, which permit transfers based on “contractual necessity,” but the DPA rejected this approach, finding that these derogations are intended for occasional and limited transfers, not for sustained data flows integral to Uber’s global operations. The infringement ended in November 2023, when Uber certified under the EU-U.S. Data Privacy Framework.


European Commission and the Delayed SCCs

In November 2022, the European Commission announced plans to issue new SCCs specifically for cases where the data importer is already subject to the GDPR under its extraterritorial scope. These SCCs have still not been released: it was only last week, in September 2024 and after the Uber sanction, that the Commission initiated a public consultation on the new clauses, with a final version expected in Q2 2025.


Fine Calculation

The Dutch DPA adhered to EDPB guidelines on calculating fines, considering the scale of the violations and the potential impact on data subjects. The lack of adequate safeguards for such a long period, combined with the size of Uber’s operations, resulted in a fine of €290 million.


Appeal

Uber has already stated it will appeal the fine.


Broader Impact

The Dutch DPA’s decision reflects the never-ending complexity of managing cross-border data flows under GDPR. Here are some key issues that caught my eye: 

  1. Extraterritorial Scope of the GDPR: An important issue is the relationship between the GDPR’s extraterritorial scope (Article 3) and its international data transfer rules (Chapter V). Uber argued that because the processing by the U.S. entity was already subject to the GDPR, additional transfer mechanisms were unnecessary. The Dutch DPA disagreed, and I think they’re right on that specific argument. I still believe a transfer mechanism was not necessary here, but for a different reason - see point 2 below.

  2. Consequences of joint controllership: I haven’t seen this addressed yet, but my personal take (as I wrote here) is that if two entities are joint controllers for the collection of the data (let’s not forget the Fashion ID case, which explains that joint controllership can exist for distinct phases of processing), both of them must be considered to collect the data themselves. Direct collection of personal data from outside the EU is not a transfer, so I don’t see why a transfer instrument between this type of joint controllers is necessary, much less a contractual instrument meant to bring the importer under the rules of the GDPR and the control of the exporter.

  3. Article 49 derogations, which allow data transfers under specific conditions: are they limited to one-off situations, or can they be applied more broadly? In para 202 of the Schrems II judgment, the CJEU pointed out that “in any event, in view of Article 49 of the GDPR, the annulment of an adequacy decision such as the Privacy Shield Decision is not liable to create such a legal vacuum. That article details the conditions under which transfers of personal data to third countries may take place in the absence of an adequacy decision under Article 45(3) of the GDPR or appropriate safeguards under Article 46 of the GDPR”. However, the EDPB’s FAQs on that judgment state that “With regard to transfers necessary for the performance of a contract between the data subject and the controller, it should be borne in mind that personal data may only be transferred when the transfer is occasional”, and the Dutch DPA followed the EDPB’s approach.


In conclusion, this case, though specific to Uber, is a warning for any company transferring personal data from the EU to other countries. It signals that despite ongoing attempts to settle data transfer mechanisms, such issues are far from resolved and will likely resurface as key regulatory challenges. The growing use of personal data in AI development will only compound the problem.


You can find the press release and the decision in Dutch here, and you can grab a translation into English below:


 

⚖️ Swedish DPA Fines Apoteket and Apohem for Meta Pixel Misuse


On 29 August 2024, the Swedish Privacy Protection Authority (IMY) issued two fines against major pharmacy companies, Apoteket AB and Apohem AB, for violations of Article 32 of the GDPR. Both cases involved the misuse of Meta’s analytics tool, the Meta Pixel, which led to the unintentional transfer of sensitive customer data to Meta Platforms Ireland. The breaches occurred due to the activation of the Automatic Advanced Matching (AAM) function, resulting in the exposure of personal information, including health-related data. Apoteket was fined SEK 37 million (approximately $3.6 million), while Apohem was fined SEK 8 million (approximately $780,000).


Details of the Breaches

  • Apoteket used the Meta Pixel from January 2020 to April 2022, transferring customer data such as names, emails, addresses, and purchases of non-prescription items (e.g., sexual health treatments, self-tests, and gender-specific products). Approximately 930,000 users were affected, with 9% of sales linked to sensitive products.

  • Apohem experienced a similar breach between April 2021 and April 2022, affecting around 15,000 users. Like Apoteket, it transferred sensitive purchase data to Meta, though its breach involved far fewer customers.


Common Violations

The Meta Pixel is a piece of tracking code embedded on websites to collect user interaction data (such as viewed pages, products added to cart, and completed purchases) and send it to Meta for targeted advertising and performance measurement. In these cases, however, the companies mistakenly activated the AAM function, which enabled the transmission of additional sensitive user data - such as names, addresses, and health-related product purchases - to Meta.
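
To make the mechanics concrete, below is a minimal sketch of a Pixel installation with manually supplied advanced-matching fields. The pixel ID, field values, and purchase event are placeholders of my own - the exact configurations Apoteket and Apohem used are not public - and the AAM function does essentially the same thing automatically, by scraping recognizable form fields (emails, names, phone numbers) from the page:

```typescript
// Minimal sketch of a Meta Pixel setup with advanced matching (hypothetical
// values throughout). Assumes Meta's fbevents.js base snippet is already
// loaded on the page, which defines the global fbq() function.
declare function fbq(...args: unknown[]): void;

// Advanced matching: user identifiers handed to the pixel at init time.
// The pixel normalizes and hashes these before sending them to Meta; with
// AAM enabled, equivalent values are scraped from form fields instead.
fbq("init", "000000000000000", {
  em: "customer@example.com", // email
  fn: "anna",                 // first name
  ph: "46700000000",          // phone number
});

// A standard e-commerce event: this is how a product purchase - including a
// sensitive pharmacy product - ends up in the data stream sent to Meta.
fbq("track", "Purchase", {
  content_ids: ["self-test-kit-123"],
  content_type: "product",
  value: 249.0,
  currency: "SEK",
});
```

Note that the identifying fields reach the pixel because the user typed them into the site’s own forms - which is exactly the hook for IMY’s reasoning below.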

IMY’s decisions emphasize that the data involved was information that customers themselves actively input while using the website, by selecting products or completing purchase forms. This distinction might seem counterintuitive, because the Meta Pixel collects data from users' devices, especially through cookies. However, IMY focuses on the type of data involved: data that users actively provide (e.g., names, contact details, and purchase choices) versus data that is passively collected from their devices (like tracking data from cookies). IMY argues that, in these cases, the critical data (such as product selections or personal details) was input voluntarily by the data subjects during interactions with the website - filling out a checkout form or selecting products - rather than being automatically extracted from the device's memory or cookies without user action.

This distinction means the breaches primarily concern the GDPR’s rules on data protection rather than the law implementing the ePrivacy Directive, which governs cookie-based tracking. If the ePrivacy rules applied, IMY would lose jurisdiction over the matter, as in Sweden the two laws are enforced by different authorities. I personally think IMY’s reasoning is not stellar, and I’m curious whether any of the sanctioned entities will appeal on this point.

IMY concluded that both Apoteket and Apohem failed to implement sufficient technical and organizational measures to secure sensitive customer data, as required by GDPR Article 32. The companies had no processes in place to detect or prevent these breaches internally and relied on external sources to identify the issue. Additionally, neither company performed the necessary risk assessments before activating the AAM function, which escalated the risk of exposing sensitive health-related data to Meta. These are similar to the findings in the sanction against Avanza Bank, about which I wrote here.

I’m disappointed that IMY did not look into Meta’s responsibility at all - there is no discussion of whether Meta is a joint controller, of the consequences of the default activation of AAM, nothing. Is this really that far from the Fashion ID case, where embedding a Facebook Like button was found to result in joint controllership? I think not - in my view, whenever websites use third-party tools (the Like button in Fashion ID, the Meta Pixel here) to collect user data and transmit it to Meta, they become joint controllers for the data collection.


Outcomes and Remediation

Per IMY, both companies have since improved their data protection policies, enhancing their internal procedures and security protocols to prevent future occurrences. The fines were proportional to the number of affected customers and the severity of the breaches, with Apoteket facing the larger penalty due to the scale of the incident.


You can read the press release here, the Apoteket decision here, and the Apohem decision here (in Swedish).


 

📑 ECtHR Finds Privacy Violation in Monitoring of Legal Documents Exchanged between Prisoners and Lawyers

On 3 September 2024, the European Court of Human Rights (ECtHR) delivered its judgment in Hallaçoğlu v. Türkiye (Application no. 24514/19), addressing a privacy violation involving the monitoring of confidential communications between a prisoner and his lawyer. The case centered on Ruhi Hallaçoğlu, who was detained following the 2016 coup attempt for alleged membership in the FETÖ organization. During his imprisonment, Turkish authorities monitored documents exchanged with his lawyer, citing Law no. 5275, amended by an emergency decree issued in the aftermath of the coup.


Key Points

  • Government's Argument: Turkish authorities defended the monitoring as a necessary emergency measure under domestic law, specifically section 59(5) of Law no. 5275, amended by Legislative Decree no. 676.

  • Hallaçoğlu's Complaint: Hallaçoğlu claimed that monitoring his legal communications breached his right to private life and confidential communication under Article 8 of the European Convention on Human Rights (ECHR).


Decision

  • The Court ruled that the monitoring constituted an interference with Hallaçoğlu’s right to respect for private life and correspondence under Article 8 ECHR.

  • Citing its earlier decision in Mehmet Demir v. Türkiye, the Court found that the Turkish legislation was too vague and did not meet the ECHR’s standards for foreseeability and lawfulness.

  • The Court also determined that the emergency measures used to justify the monitoring were not adequate to override fundamental rights.


Compensation

  • The ECtHR ruled unanimously that Türkiye had violated Article 8 of the ECHR.

  • The Court awarded Hallaçoğlu €2,600 for non-pecuniary damages but rejected other claims due to lack of documentation.

This ruling highlights the importance of clear legal standards and the protection of privacy in lawyer-client communications, even during states of emergency. You can read it here.


 

🛑 Dutch DPA Fines Clearview AI for Illegal Facial Recognition


On 3 September 2024, the Dutch Data Protection Authority (AP) announced a €30.5 million fine, imposed by a decision of 16 May 2024, on Clearview AI, a U.S. company that offers facial recognition services. Clearview built a vast database of over 30 billion images, scraped from publicly accessible online sources, and converted these into biometric data. This practice violates the General Data Protection Regulation (GDPR) because Clearview lacked a lawful basis for processing such sensitive data.


Key Violations

  • Unlawful data collection: Clearview collected images, including from Dutch citizens, without consent and without informing the data subjects.

  • Lack of transparency: The company failed to adequately inform individuals about the usage of their personal data and did not respond to access requests.

  • Biometric data processing: Clearview's use of biometric data, such as unique facial vectors, is prohibited unless statutory exceptions apply, none of which were valid in this case.

In addition to the fine, the AP also ordered Clearview AI to:

  • end the processing of personal data of data subjects who are in the Netherlands and remove the personal data that Clearview AI unlawfully obtained, threatening additional penalties of up to €5.1 million for non-compliance;

  • provide data subjects in the Netherlands with the information as referred to in Article 14 of the GDPR in a concise, transparent, intelligible, and easily accessible form;

  • end its policy of not responding to access requests by data subjects who are in the Netherlands; and

  • designate a representative in the EU in writing.


Aleid Wolfsen, chairman of the Dutch DPA, emphasized the intrusiveness of facial recognition technology, cautioning that any Dutch organization using Clearview's services could face significant fines. Wolfsen also mentioned that the Dutch DPA is exploring legal avenues to hold Clearview's management personally accountable for the continued violations.

Clearview cannot appeal this decision, further solidifying the consequences of its unlawful data practices within the European Union. However, it remains to be seen how the sanction can be enforced - this is the first time an EU data protection authority will try to enforce a sanction on a company not established in that country.


The press release and the decision, both in English, can be found here.


 

🛡️ Norwegian University Fined for Weak Access Controls in Microsoft Teams

On 4 September 2024, the Norwegian Data Protection Authority imposed a fine of NOK 150,000 (approx. EUR 12,000) on the University of Agder (UiA) following a six-year-long breach of personal data security. UiA had been storing sensitive personal data in open Microsoft Teams folders without proper access controls, making this information available to unauthorized employees and students.


Scope of the Breach

UiA’s non-compliance began in 2018 when the university implemented Microsoft Teams and SharePoint. A total of nine documents containing personal data of 16,000 employees and students were exposed. Key documents included:

  • Information on 4,851 employees and 10,419 external individuals, including national ID numbers.

  • Data on 568 students with specialized exam arrangements.

  • A list of 64 Ukrainian refugees studying at the university.

  • Other sensitive personal information, including health data.


Inadequate Access and Log Controls

The breach occurred because UiA failed to implement sufficient access controls and logging mechanisms in Microsoft Teams. Employees without a business need could access personal data through shared folders. The university's internal controls only logged activities for the previous six months, making it impossible to confirm whether unauthorized individuals accessed the data over the six-year period.
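
As an illustration of the kind of check that was missing, here is a hedged sketch that walks a SharePoint/Teams document library through the Microsoft Graph API and flags files exposed via organization-wide or anonymous sharing links. The site ID, token handling, and the choice of what counts as over-broad sharing are my assumptions for the example, not details from the decision:

```typescript
// Hedged sketch: flag org-wide or anonymous sharing links on files in a
// SharePoint/Teams document library via Microsoft Graph.
// Assumes a token with Files.Read.All; the site ID is a placeholder.
const GRAPH = "https://graph.microsoft.com/v1.0";
const SITE_ID = "<site-id>";            // placeholder
const TOKEN = process.env.GRAPH_TOKEN!; // acquired out of band

async function get(path: string): Promise<any> {
  const res = await fetch(`${GRAPH}${path}`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${path}`);
  return res.json();
}

async function auditLibrary(): Promise<void> {
  // List files in the library's root folder (recursion into subfolders
  // is omitted for brevity).
  const items = await get(`/sites/${SITE_ID}/drive/root/children`);
  for (const item of items.value ?? []) {
    const perms = await get(`/sites/${SITE_ID}/drive/items/${item.id}/permissions`);
    for (const p of perms.value ?? []) {
      // Sharing links scoped to the whole organization (or anonymous ones)
      // are the kind of "open folder" exposure at issue in the UiA case.
      if (p.link?.scope === "organization" || p.link?.scope === "anonymous") {
        console.log(`${item.name}: ${p.link.scope}-scoped ${p.link.type} link`);
      }
    }
  }
}

auditLibrary().catch(console.error);
```

A recurring job along these lines, paired with access-log retention beyond six months, is roughly the kind of “appropriate technical and organizational measures” that Articles 24 and 32 GDPR point to in this context.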


Corrective Measures

Upon discovering the breach in February 2024, UiA reviewed all shared folders, restricted access, and notified affected individuals by 21 February 2024. The university also improved internal procedures, updated staff training, and ensured that shared Teams folders were made private to prevent unauthorized access.


Consequences

The DPA determined that UiA violated Articles 24 and 32 of the GDPR, which mandate appropriate technical and organizational measures to safeguard personal data. The authority found the university's actions to be negligent and imposed the fine based on the serious nature of the breach, which affected sensitive data over a prolonged period. Still, the fine is very small. 


The press release and decision are available here (in Norwegian).


 

🏢 IAPP Organizational Digital Governance Report 2024


The "Organizational Digital Governance Report 2024" by IAPP explores how businesses are addressing the challenges brought on by digital technologies and increasing regulatory complexity. Digital governance now encompasses privacy, AI, cybersecurity, and more, demanding a coordinated approach. The report is based on interviews with over 20 senior leaders from major tech-driven firms and highlights several key insights:


Key Drivers of Digital Governance

Organizations are facing regulatory demands from frameworks like the GDPR, the AI Act, and the Digital Markets Act, along with evolving laws worldwide. This "alphabet soup" of regulations requires companies to develop governance structures that ensure compliance while balancing operational needs. The political, social, and technological environment also contributes to the need for more comprehensive digital governance models.


Organizational Structures and Approaches

Many companies are in the early stages of formalizing digital governance, often referred to as "analog governance," where different functions operate independently. More mature organizations are adopting "augmented" models with dedicated governance committees, such as AI governance boards and privacy committees, to streamline decision-making. These models often assign specific roles and responsibilities across C-suite executives, such as Chief Privacy Officers (CPOs) and Chief Information Security Officers (CISOs), to manage the intersecting demands of privacy, AI, and cybersecurity.


Emerging Trends

The report identifies the rise of cross-functional governance approaches, where AI, cybersecurity, and data governance are increasingly intertwined. Notably, many Chief Privacy Officers are also gaining responsibilities in AI governance, with 63% of organizations assigning privacy functions to manage AI compliance. In addition, 69% of surveyed organizations have tasked privacy professionals with data governance responsibilities.


Challenges Ahead

Interviewees expressed challenges such as resource constraints, lack of internal coherence, and difficulties in coordinating across functions. Most organizations acknowledge that more work is needed to build sustainable, efficient governance models. The report suggests that greater investment in tooling, senior-level engagement, and cross-functional training will be key to overcoming these hurdles.


Overall, the report calls for organizations to take a proactive, structured approach to digital governance, as digital technologies and their associated risks become more ingrained in everyday business operations.


You can find the report and 3 possible organizational charts here.


 

⚖️ AG de la Tour’s Opinion on ‘Excessive’ GDPR Complaints

On 5 September 2024, Advocate General Jean Richard de la Tour delivered his Opinion in Österreichische Datenschutzbehörde (C-416/23), clarifying whether a large number of complaints filed in a short time can automatically be considered “excessive” under the GDPR. The case originated when the Austrian Data Protection Authority (DSB) declined to act on 77 complaints filed by a single data subject within 20 months, citing the burden these complaints placed on its resources.


Key Issues Raised

The Austrian court raised three key questions concerning GDPR Article 57(4):

  1. Do 'requests' include 'complaints'?

    De la Tour confirmed that "requests" under Article 57(4) should indeed include complaints submitted under Article 77(1).

  2. Does a large volume of complaints make them 'excessive'?

    He argued that simply submitting many complaints is insufficient to deem them excessive. Instead, data protection authorities (DPAs) must show that the complainant acted with abusive intent. Submitting multiple complaints against different controllers is not inherently excessive if the rights of the data subject are at stake.

  3. Can DPAs choose to charge a fee or refuse to act?

    The AG stated that DPAs have the discretion to either charge a reasonable fee based on administrative costs or refuse to act, but must carefully assess the circumstances before deciding. Charging a fee may be a less restrictive approach than outright refusal.


Conclusion

The Advocate General concluded that DPAs should prioritize charging a reasonable fee rather than refusing complaints outright, as doing so aligns better with GDPR’s goal of ensuring high data protection. This opinion supports the idea that excessive complaint thresholds must be carefully evaluated, requiring proof of abuse rather than being based solely on quantity.


Read it here.


 

📱 ECtHR Rules Search of Mobile Phone Violated Article 8 Rights


On 5 September 2024, the European Court of Human Rights (ECtHR) delivered its judgment in the case of Mukhtarli v. Azerbaijan and Georgia. The case primarily concerned a search of the applicant’s mobile phone by Azerbaijani authorities during his detention, which the applicant argued violated his right to private life under Article 8 of the European Convention on Human Rights (ECHR).


Background

Afgan Mukhtarli, a journalist, was allegedly abducted from Georgia and unlawfully transferred to Azerbaijan in May 2017, where he was detained. During his detention, Azerbaijani authorities searched his mobile phone without obtaining prior judicial authorization. Mukhtarli's complaints at the national level about the legality of this search were unsuccessful, prompting him to take his case to the ECtHR. 


Court's Findings

The ECtHR found that the search of the phone had not been conducted "in accordance with the law," as required by Article 8 ECHR. The Court highlighted that such intrusive measures, which significantly interfere with private life and correspondence, necessitate prior authorization by an independent judicial body. In this case, the search was part of an investigation that did not require judicial approval under Azerbaijani law, leaving the decision at the discretion of the investigator.


The Court ruled that allowing investigators unchecked discretion over searches of this nature contravenes the safeguards set by Article 8 ECHR. Although the Court acknowledged that urgent situations might justify bypassing judicial authorization, it concluded that no such circumstances applied in this case.


You can read it here.


 

🤖 CoE’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and The Rule of Law has been opened for signature

On 5 September 2024, the Council of Europe opened for signature the landmark Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. It has already been signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom, Israel, the United States of America, and the European Union - see the up-to-date list of signatories here.

If you want a breakdown of the Convention, the Future of Privacy Forum has a very good one here.


Some key facts:

➡️ Adoption Date: 17 May 2024.

➡️ Open for Signature: 5 September 2024. From this date, member and non-member states can begin signing and committing to the obligations under the treaty.

➡️ Once a state signs and ratifies the convention, it will have to implement the treaty’s obligations into its domestic laws. While the specific deadline for implementation may vary by state, such conventions typically expect timely adoption, often within 1-3 years, depending on national legal systems and processes. States will need to enact legislative and administrative measures aligned with the treaty's principles, including provisions for a risk-based approach, transparency, accountability, and privacy.

➡️ States must adopt mechanisms for testing AI systems before first use or when they are significantly modified, as outlined in Article 16(2)(g). While no strict deadline is provided, testing and risk assessments must be continual, throughout the lifecycle of AI systems.

➡️ States have discretion to impose moratoriums or bans on specific AI uses deemed incompatible with human rights under Article 16(4). There is no fixed timeline for this, as states are encouraged to base their decisions on ongoing risk assessments.

➡️ The convention does not require members to apply its obligations to AI systems used for national security or national defense, but those uses must still comply with other international human rights law.


 

🇪🇺 European Commission Issues FAQs on the Data Act

On 6 September 2024, the European Commission published FAQs to help stakeholders understand and implement the Data Act (Regulation (EU) 2023/2854), which will apply from 12 September 2025. The Data Act aims to set harmonized rules for accessing and sharing data, particularly in the context of IoT, across various sectors. The FAQs offer vital clarifications on how the Data Act interacts with existing legislation like the GDPR, on enforcement mechanisms, and on specific considerations for IoT data.

Some of the covered aspects:


Interaction with GDPR

The Data Act does not directly regulate personal data, but GDPR applies fully when personal data is processed. In any conflict, GDPR rules take precedence (Article 1(5)). Data Protection Authorities (DPAs) will enforce compliance where personal data is involved. The FAQs also clarify that individuals do not need to approach separate authorities under both laws when handling personal data issues related to the Data Act.


IoT and Data Types

The FAQs clarify that the Data Act applies to raw and pre-processed data generated by connected IoT products. Data that is highly enriched, for example, through complex algorithms or protected by intellectual property rights, is not covered. The FAQs emphasize that only data generated after the Data Act takes effect will be subject to its rules. Additionally, IoT data types, such as 'product data' and 'related service data,' are defined to guide companies on what falls within the Act's scope.


Enforcement and Trade Secret Protection

Data holders are required to provide access to data upon request, but they can refuse if sharing the data could cause serious economic harm by revealing trade secrets. The FAQs introduce a “trade secrets handbrake” mechanism to balance data access and trade secret protection, where data holders can negotiate confidentiality safeguards before sharing sensitive information.


Digital Markets Act (DMA) Interaction

The FAQs specify that while DMA "gatekeepers" cannot rely on the Data Act’s mandatory data-sharing rules, they are not entirely excluded from IoT data markets: gatekeepers are barred from using those specific mechanisms but can still engage in voluntary data-sharing arrangements.


You can find the press release and the FAQ here.


 

That’s it for this edition. Thanks for reading, and subscribe to get the full text in a single email in your inbox!

♻️ Share this if you found it useful.

💥 Follow me on Linkedin for updates and discussions on privacy education.

📍 Subscribe to my newsletter for weekly updates and insights – subscribers get an integrated view of the week and more information than on the blog.
