Unpacking the AI Executive Order: A Closer Look at President Biden’s Directives

On October 30, 2023, President Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Spanning over 100 pages, this is the most comprehensive set of guidelines on AI issued by the United States government. The executive order is designed to provide a framework for the responsible development and implementation of AI.

The main themes of the executive order include finding ways to develop safe and secure AI while also protecting citizens’ privacy, workers’ rights, and civil rights. It aims to create directives that foster an open, competitive market for AI development while safeguarding national security and preventing the spread of false information. Finally, the order directs government agencies and private companies to label AI-created content so that users can differentiate between AI-generated and human-created content.

The Executive Order comprises eight guiding principles with recommendations impacting government agencies and private corporations. Let’s get into the details.

Guiding Principles

1. AI Must Be Safe and Secure

This principle is intended to standardize evaluations of AI systems and to establish policies and mechanisms for testing AI and mitigating risks before these systems are deployed. “Testing and evaluations, including post-deployment performance monitoring, will help ensure that AI systems function as intended, are resilient against misuse or dangerous modifications, are ethically developed and operated securely, and are compliant with applicable Federal laws and policies.” The goal is also for Americans to be able to tell when content is AI-generated and to lay a foundation for addressing AI risk.

2. AI Must Promote Responsible Innovation and Competition

The Biden Administration wants the US to lead in the development and implementation of AI. This guiding principle aims to unlock AI’s potential to solve society’s most complex challenges in areas such as healthcare, education, and national security. It seeks to balance investment in AI-related education, development, and research with laws that tackle novel intellectual property questions to protect inventors and creators. “The Federal Government will promote a fair, open, and competitive ecosystem and marketplace for AI and related technologies so that small developers and entrepreneurs can continue to drive innovation. Doing so requires stopping unlawful collusion and addressing risks from dominant firms’ use of key assets such as semiconductors, computing power, cloud storage, and data to disadvantage competitors.”

3. AI Must Not Harm American Workers  

“The responsible development and use of AI require a commitment to supporting American workers. As AI creates new jobs and industries, all workers need a seat at the table, including through collective bargaining, to ensure that they benefit from these opportunities.” This principle aims to ensure that the deployment of AI does not undermine workers’ rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor disruptions.

4. AI Policies Must Be Consistent with Protecting and Advancing Civil Rights

Here, the Biden Administration emphasizes that AI must not be used to disadvantage those already denied equal opportunity and justice. This principle builds on steps that have already been taken, such as the Blueprint for an AI Bill of Rights, so that AI improves quality of life rather than making things worse. AI must comply with all federal laws, with compliance ensured through “technical evaluations, careful oversight, engagement with affected communities, and rigorous regulation.” There must be AI accountability and standards to protect against “unlawful discrimination and abuse, including in the justice system and the Federal Government.” Americans must be able to trust AI to advance civil rights, not diminish them or entrench discrimination.

5. American Consumer Interests Must Be Protected from AI Abuse

This principle is focused on Americans who use AI in their everyday lives. The Administration seeks to warn AI companies that technological advancement does not exempt organizations from their legal obligations and that “hard-won consumer protections are more important than ever in moments of technological change.” Protections in healthcare, financial services, housing, education, transportation, and the law are paramount. The Administration aims to use AI to protect existing infrastructure while also protecting consumers, raising the quality of goods and services, and lowering prices.

6. Privacy and Civil Liberties Must Be Protected from AI Advancements

As AI continues to advance, it is paramount that Americans’ privacy and civil liberties remain protected. AI capabilities make it easier “to extract, identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires,” which increases the risk of personal data being exposed and exploited. The Biden Administration seeks to curb that risk by implementing laws to protect against illegal data collection and the unauthorized retention of confidential information. The concern here is to protect against the improper collection of data and the “chilling” effect AI can have on First Amendment rights.

7. AI Must Be Used Responsibly by the Federal Government

“It is important to manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans.” The Administration wants to ensure that members of its workforce receive adequate training to understand the benefits and risks of AI. The principle also calls for technological infrastructure that allows officials to deploy AI safely, securely, and lawfully.

8. The United States Should Lead the Way in AI Technological Progress

The Biden Administration wants the United States to be at the forefront of AI innovation and technological advancement. The United States has previously been a leader in technological progress in “eras of disruptive innovation and change” (like the creation of the internet), and the Biden Administration wants to continue this. The United States will “engage with international allies and partners in developing a framework to manage AI’s risks, unlock AI’s potential for good, and promote common approaches to shared challenges.” The Federal Government aims to promote responsible AI safety and security principles throughout the international community. 

Limits of the Executive Order

The AI Executive Order is a necessary step toward laws and regulations that foster AI development while protecting consumers from risk. However, the executive order provides guidance rather than binding legislation; Congress and government agencies will be responsible for creating laws to regulate, protect, and promote the use of AI. The executive order is an excellent start at laying that foundation.

Still, the government must continue working to enact laws and policies governing the use and development of AI. Technology advances faster than the law, so while Biden’s executive order goes beyond previous US government attempts to regulate AI, “it places far more emphasis on establishing best practices and standards than on how, or even whether, the new directives will be enforced.” The law must catch up to AI technology, as we have already seen the negative implications of AI play out in the media and the courts.

Global Response to the Executive Order

Overall, the executive order was hailed by tech companies as a necessary step forward in the development and advancement of AI. Brad Smith, vice chair and president of Microsoft, called it “another critical step forward in the governance of AI technology.” Google’s president of global affairs, Kent Walker, said the company looks “forward to engaging constructively with government agencies to maximize AI’s potential—including by making government services better, faster, and more secure.”

Last week, “delegates from 27 governments worldwide, as well as the heads of top artificial intelligence companies, gathered for the world’s first AI Safety Summit [in] London…Among the attendees: representatives of the US and Chinese governments, Elon Musk, and OpenAI CEO Sam Altman.” Government and technology leaders appear to embrace AI and are ready to implement necessary changes.

UK Prime Minister Rishi Sunak said that AI has the potential to become the “most disruptive force in history.” While this is probably true, it is also essential to keep in mind the goal of technological advancement. As the Brookings Institution put it: “Technology should not erase people, nor should it harm people. In fact, it was always developed to help us solve social problems.”

How Does the Executive Order Impact the Legal Field Specifically?

AI will touch every area of the legal sector, from healthcare, education, and national security to labor, commerce, intellectual property, and ethics. The Executive Order is a framework that “creates new opportunities and challenges for legal professionals as they will need to advise and represent clients on various legal issues related to AI,” including human rights, liability, compliance, and contracts.

Lawyers must adapt to this new AI environment to stay competitive in the legal field. That means learning how to incorporate AI into their daily practice, becoming literate in AI systems, and knowing the limits of AI, all within the context of legal professional standards and client obligations.

Ready to Get Started?

Interested in artificial intelligence’s impact on your legal practice? Curious about current AI litigation? Want to incorporate AI into your legal workflow? Check out trellis.law. Trellis is an AI-driven state trial court research and analytics platform. We make the fragmented US state trial court system searchable through a single interface by providing lawyers with analytical insights on judges, cases, and opposing counsel. Request a demo and check out our state trial court platform, analytics, and API so that we can provide you with the tools needed to streamline your legal practice.

Sources:

https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

https://ogletree.com/insights-resources/blog-posts/president-biden-signs-wide-ranging-executive-order-on-ai-with-serious-implications-for-employers/

https://www.ey.com/en_us/public-policy/key-takeaways-from-the-biden-administration-executive-order-on-ai

https://www.iansresearch.com/resources/all-blogs/post/security-blog/2023/10/30/the-new-biden-ai-executive-order-3-top-takeaways-for-security-teams

https://www.govtech.com/blogs/lohrmann-on-cybersecurity/artificial-intelligence-executive-order-industry-reactions

https://www.pbs.org/newshour/politics/analysis-how-bidens-new-executive-order-tackles-ai-risks-and-where-it-falls-short

https://legal.thomsonreuters.com/blog/how-president-bidens-executive-order-on-ai-impacts-the-legalsector/

https://www.nytimes.com/2023/10/30/us/politics/biden-ai-regulation.html