

Accelerated Scrutiny of AI Systems in 2024: The EU AI Act and the U.S. Strategy


On December 9, 2023, representatives from the Council of the European Union, the European Parliament, and the European Commission agreed in principle on the world’s first comprehensive legal regulations for artificial intelligence – the European Union Artificial Intelligence Act (“the EU AI Act”).[1] The EU AI Act takes a risk-based approach to regulating AI. Its impact will be extremely broad, applying to any AI system “placed on the market, put into service, or used in the EU.” It bans any AI system that presents an unacceptable risk to fundamental human rights and democracy, including biometric categorization systems that use sensitive characteristics (e.g., political or religious beliefs, sexual orientation, or race), AI systems that manipulate human behavior to circumvent free will, social scoring tools based on behaviors or personal characteristics, and remote biometric identification in public spaces, such as live facial recognition systems (unless it falls within the narrow law enforcement exception).

In contrast to the EU’s AI Act, the United States appears to be taking a less restrictive approach to AI, one that emphasizes industry best practices and relies upon federal agencies to craft their own rules, which will likely subject each sector of the economy to varying levels of scrutiny. On October 30, 2023, President Biden issued the first-ever executive order (EO) on AI, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The EO established sweeping directives and priorities across a wide swath of areas controlled by the federal government. Among other actions, the EO directed the following:

  • Under the Defense Production Act, developers of powerful AI systems must share the results of all red-team safety tests and other critical information with the U.S. Government;
  • The National Institute of Standards and Technology (NIST) will develop industry standards and guidelines to ensure AI systems are safe, secure, and trustworthy;
  • Federal agencies will evaluate the risks of AI being used to engineer dangerous biological weapons;
  • The Department of Commerce will develop guidance for content authentication and watermarking so that AI-generated content is clearly labeled, helping to prevent AI-enabled fraud and deception; and
  • The Departments of Defense and Homeland Security will assess AI risks to U.S. cyber defenses and offensive capabilities and develop AI tools to find and fix vulnerabilities in critical software.

In January 2023, NIST, in collaboration with public and private partners, released its AI Risk Management Framework, and it recently issued a warning about the rapid deployment of AI systems, as discussed below. NIST is also developing industry standards, but those will merely provide guidance and are not self-executing. It is unclear whether the U.S. Congress will pass any new federal laws in 2024 that specifically address AI issues, particularly given that the U.S. still lacks federal data privacy legislation. States will likely fill this void if Congress fails to act. For instance, in November 2023, the California Privacy Protection Agency released preliminary draft privacy regulations addressing automated decision-making technology.

Regardless of whether any new AI laws are passed in the U.S., companies should be careful about how they use AI systems, as federal agencies have signaled that they intend to police AI systems and ensure responsible innovation.[2] Indeed, in July 2023, the Federal Trade Commission (FTC) opened an investigation into whether OpenAI, the company that developed the ChatGPT platform, mishandled personal data or violated other consumer protection laws. On December 19, 2023, the FTC settled an enforcement action against Rite Aid for its “reckless use of facial recognition systems.” In announcing the settlement, the FTC made clear that it intends to vigilantly protect “the public from unfair biometric surveillance and unfair data security practices.”[3] As a result, in 2024, the Department of Justice (DOJ), the FTC, and the Consumer Financial Protection Bureau (CFPB) will likely open investigations and bring enforcement actions to address harms caused by AI, including unlawful discrimination, the dissemination of false and misleading information, privacy violations, intellectual property theft, and other fraudulent conduct.

The EU’s agreement on the AI Act is the culmination of two years of negotiations, during which time AI systems advanced significantly with the launch of generative AI foundation models such as OpenAI’s GPT-4 and Google’s Gemini, which began powering Bard last year. While the EU has agreed upon the general parameters of the law, the final details and text of the AI Act have not yet been finalized and approved. The Act will take effect two years after final approval, so it will not become enforceable until 2026. It will, however, likely become the global standard for AI regulation, much as the General Data Protection Regulation (GDPR) has in the area of privacy. Further, the EU is also working on liability rules for AI to provide compensation to persons harmed by AI systems.[4]


What is the Scope of the EU AI Act?

The EU AI Act applies to sellers, manufacturers, importers, distributors, and deployers of AI systems that are sold, used, or produce an effect in the EU, irrespective of where the provider is located. Given the number of U.S.-based AI systems that are also available in the EU, these broad regulations will encompass many American providers.

What Systems are Covered Under the EU AI Act?

The EU AI Act will cover nearly every AI system used in the EU, in any sector or industry. In addition to covering the most powerful current generative AI models, such as OpenAI’s ChatGPT and Google’s Gemini, the Act will apply to any machine-based system designed to operate with a certain level of autonomy that generates output based upon machine- or human-provided data and inputs.

There are only a few exemptions to this broad definition:

  • AI systems designed or used exclusively for military purposes;
  • AI systems used for scientific research that does not expose natural persons to harm; and
  • AI systems used for personal non-professional activities.

What Does the EU AI Act Require?

The EU AI Act takes a risk-based approach, imposing different requirements on AI systems depending upon the level of risk they pose. Specifically, it establishes four risk categories: (1) unacceptable risk; (2) high risk; (3) general purpose and generative AI; and (4) limited risk.

First, the EU will ban any AI system that poses an unacceptable risk of harm. Prohibited systems include those that: (1) engage in cognitive behavioral manipulation of vulnerable groups of people (e.g., voice-activated toys that encourage dangerous behavior in children); (2) conduct social scoring based on behavior or personal characteristics; (3) perform biometric identification and categorization of people; (4) build facial recognition databases; and (5) perform emotion recognition in workplaces or schools. There are certain exceptions allowing law enforcement to use systems that are otherwise prohibited. For example, law enforcement can use real-time biometric surveillance in public spaces to monitor threats or investigate serious crimes, including terrorism, murder, rape, and other sexual offenses.

Second, the EU will permit high-risk AI systems, but they will be subject to strict compliance requirements, including a “fundamental rights impact assessment,” before they can be deployed or sold in the EU. High-risk AI systems require government assessment before they enter the marketplace and throughout their lifespan. These products will need to comply with the EU’s comprehensive risk management and mitigation requirements, which ensure human oversight, robust cybersecurity, validation mechanisms, accuracy, minimization of risk, and transparency. Once these systems are on the market, the EU will continue to monitor compliance, and any modification to them will trigger a new conformity assessment.

Third, the EU will impose guardrails on general purpose and generative AI systems like OpenAI’s ChatGPT. These systems will need to comply with basic transparency requirements: users must be made aware that they are interacting with a machine, and any AI-generated content must be clearly labeled. Further, before they are placed on the EU market, these AI systems will also need to adhere to pre-market compliance requirements, such as: (1) drafting technical documentation; (2) complying with EU copyright law; and (3) summarizing the content used for training. This agreement was reached before The New York Times (“The Times”) sued Microsoft and OpenAI on December 27, 2023, seeking billions of dollars in damages for widespread and massive copyright violations. The Times alleges that Microsoft’s Bing and OpenAI’s ChatGPT were “built by copying and using millions of The Times’s copyrighted” materials without its consent or fair compensation.[5] It is thus possible that this lawsuit could impact the EU’s view of ChatGPT.

Fourth, the EU AI Act designates other AI systems as posing a limited risk. Systems falling under this category are subject to the same transparency requirements noted above, which ensure that users are informed when they are interacting with AI and that AI-generated content is clearly labeled and detectable.

What are the Penalties for Non-Compliance?

The EU will impose steep penalties for noncompliant AI systems. Companies that are not in compliance could face fines ranging from $8.1 million (or 1.5% of global sales) up to $38.2 million (or 7% of global sales), depending on the violation.

What Can We Expect in 2024?

In the EU, the AI Act’s final text will be completed over the next few months and submitted to the member states for approval.

In the U.S., federal agencies will continue to closely monitor AI developments in an evolving landscape and bring enforcement actions to deter irresponsible AI deployment that threatens the public. On January 4, 2024, NIST issued a stark warning to industry about the security and privacy risks that arise from the rapid deployment of AI systems.[6] In short, AI systems are vulnerable to attacks, and the data upon which they rely may not be trustworthy. For example, when prompted with carefully designed language, a “chatbot can spew out bad or toxic information.” According to NIST, “adversaries can deliberately confuse or even ‘poison’ artificial intelligence systems to make them malfunction – and there’s no foolproof defense that their developers can employ.” Id.

Conclusion

The rapid expansion of AI systems brings with it the possibility of widespread dangers, from the proliferation of false information and manipulation to mass surveillance and the repression of vulnerable groups, as well as liability risks. The EU has adopted a risk-based approach to address threats to fundamental human rights and democracy. In the United States, the upcoming 2024 presidential election, increases in privacy violations and cybercrime, and the potential for unfair and deceptive practices, including intellectual property violations, will drive increased AI enforcement. The only question is what form this enforcement will take: Will it come through new federal or state laws, or will federal agencies like the DOJ and FTC police AI using current laws and regulations?

This is for informational purposes only and is not intended to be legal advice. Please contact a member of the Cybersecurity, Privacy, & Data Protection Practice Group if you have any questions.


[1] See Council of the EU Press Release, Dec. 9, 2023, Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world.

[2] See, e.g., Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (EEOC, DOJ Civil Rights Division, FTC, and CFPB).

[3] See FTC Press Release, Dec. 19, 2023, Rite Aid Banned from Using AI Facial Recognition After FTC Says Retailer Deployed Technology without Reasonable Safeguards.

[4] See AI Liability Directive, EUR-Lex 52022PC0496.

[5] See The New York Times Company v. Microsoft Corporation et al., Complaint, Civil Action No. 23-11195 (S.D.N.Y.).

[6] See NIST Press Release, Jan. 4, 2024, NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems, announcing the publication “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations.”