One year ago, President Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order directed sweeping actions to manage AI’s safety and security risks, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more.
Today, the Biden-Harris Administration is announcing that Federal agencies have completed on schedule each action that the Executive Order tasked for this past year—more than one hundred in all. Below are some of the Administration’s most significant accomplishments on managing AI’s risks and seizing its promise in the year since President Biden signed his Executive Order.
Managing Risks to Safety and Security
The Executive Order directed the boldest actions ever taken to protect Americans from a broad range of AI’s safety and security risks, including risks related to dangerous biological materials, software vulnerabilities, and foreign actors’ efforts to develop AI for harmful purposes. Over the last year, to protect safety and security, agencies have:
- Used Defense Production Act authorities to require developers of the most powerful AI systems to report vital information, including results of safety and security testing, to the U.S. government. These companies have notified the Department of Commerce about the results of their red-team safety tests, their plans to train powerful models, and large computing clusters they possess capable of such training. Last month, the Department of Commerce proposed a rule to require the reporting of this information on a quarterly basis.
- Led the way on AI safety testing and evaluations to advance the science of AI safety. The U.S. AI Safety Institute (US AISI) at the Department of Commerce has begun pre-deployment testing of major new AI models through recently signed agreements with two leading AI developers. The Department of Energy (DOE) developed and expanded its AI testbeds and evaluation tools, which it has already used to test models’ risk to nuclear security.
- Developed guidance and tools for managing AI risk. The US AISI and the National Institute of Standards and Technology (NIST) at the Department of Commerce published frameworks for managing risks related to generative AI and dual-use foundation models, and earlier this month, AISI released a Request for Information on the responsible development and use of AI models for chemical and biological sciences. The Department of Defense (DoD) released its Responsible AI toolkit to align AI projects with the Department’s Ethical Principles.
- Issued a first-ever National Security Memorandum (NSM) on AI. The NSM directs concrete steps by Federal agencies to ensure the United States leads the world’s development of safe, secure, and trustworthy AI; to enable agencies to harness cutting-edge AI for national security objectives, including by protecting human rights and democratic values; and to advance international consensus and governance on AI. This essential document serves as a formal charter for the AI Safety Institute, designating it as the center of the whole-of-government approach to advanced AI model testing, and will guide rapid and responsible AI adoption by the DoD and Intelligence Community. The NSM also directs the creation of a Framework to Advance AI Governance and Risk Management in National Security, which provides agile guidance to implement the NSM in accordance with democratic values, including mechanisms for risk management, evaluations, accountability, and transparency.
- Finalized a framework for nucleic acid synthesis screening to help prevent the misuse of AI for engineering dangerous biological materials. The framework, developed by the Office of Science and Technology Policy (OSTP), encourages nucleic acid synthesis providers to identify gene sequences that could be used to pose national security risks, and to implement customer screening to mitigate the risks of misuse. Federal agencies will require that funding recipients obtain synthetic nucleic acids from vendors that adhere to the framework, starting in 2025. The Department of Homeland Security (DHS) has developed an initial framework with principles for evaluating the effectiveness of screening mechanisms going forward.
- Launched a new Task Force on AI Datacenter Infrastructure. The Task Force provides streamlined coordination on policies to advance datacenter development operations in line with economic, national security, and environmental goals.
- Identified measures—including approaches for labeling content and improving transparency—to reduce the risks posed by AI-generated content. The Department of Commerce submitted to the White House a final report on science-backed standards and techniques for addressing these risks, while NIST has launched a challenge to develop methods for detecting AI-generated content. President Biden has emphasized that the public has a right to know when content is AI-generated, and agencies are working to use these tools to help Americans know that communications they receive from their government are authentic.
- Combated AI-generated image-based sexual abuse. Image-based sexual abuse—both non-consensual intimate images of adults and child sexual abuse material—is one of the fastest growing harmful uses of AI to date and disproportionately targets women, children, and LGBTQI+ people. This year, following the Vice President’s leadership in underscoring the urgent need to address deepfake image-based sexual abuse and a White House Call to Action to reduce these risks, leading AI developers and data providers made voluntary commitments to curb the creation of AI-generated image-based sexual abuse material. Additionally, the Department of Justice (DOJ) funded the first-ever helpline to provide 24/7 support and specialized services for victims of the non-consensual distribution of intimate images, including deepfakes. The Department of Education also clarified that school responsibilities under Title IX may extend to conduct that takes place online, including AI-generated abuse.
- Established the AI Safety and Security Board (AISSB) to advise the Secretary of Homeland Security on the safe and secure use of AI in critical infrastructure. The AISSB has convened three times this year to develop recommendations for entities that develop and deploy AI systems assisting in the delivery of essential services to millions of Americans, and to promote accountability for those systems. The work of the AISSB complements DHS’s first-ever AI safety and security guidelines for critical infrastructure owners and operators, which were informed by agencies’ assessments of AI risks across all critical infrastructure sectors. To help protect critical infrastructure further, the Department of the Treasury released a report on managing security risks of AI use in the financial sector, and the Department of Energy released an assessment of potential risks to the power grid, as well as ways in which AI could potentially strengthen grid resilience and our ability to respond to threats.
- Piloted AI for protecting vital government software systems. The Department of Defense and DHS conducted AI pilots to address vulnerabilities in government networks used, respectively, for national security purposes and for civilian governmental organizations.
Standing up for Workers, Consumers, Privacy, and Civil Rights
AI is changing the products and services Americans buy, affecting jobs and workplaces, and introducing or exacerbating risks to privacy, equity, and civil rights. President Biden’s Executive Order stands up for Americans in each of these domains, and over the last year, agencies have:
- Developed bedrock principles and practices, along with guidance, to help protect and empower workers as AI is built for and used in the workplace. The Department of Labor (DOL) released AI Principles and Best Practices for employers and developers to build and use AI in ways that center the wellbeing of workers and improve the quality of jobs. DOL also published two guidance documents to assist federal contractors and employers in complying with worker protection laws as they deploy AI in the workplace. In addition, the Equal Employment Opportunity Commission released resources for job seekers and workers to understand how AI use could violate employment discrimination laws.
- Protected patients’ rights and safety, while encouraging innovation, as AI is developed and deployed for healthcare. The Department of Health and Human Services (HHS) established an AI Safety Program to track harmful incidents involving AI’s use in healthcare settings and to evaluate mitigations for those harms. HHS has also developed objectives, goals, and high-level principles for the use of AI or AI-enabled tools in drug development processes and AI-enabled devices. Additionally, HHS finalized a rule that established first-of-its-kind transparency requirements for AI and other predictive algorithms that are part of certified health information technology. HHS also finalized a civil rights regulation, implementing Section 1557 of the Affordable Care Act, that requires covered health care entities to take steps to identify and mitigate discrimination when they use AI and other forms of decision support tools for care.
- Published guidance and resources for the safe, secure, and trustworthy design and use of AI in education. In July, the Department of Education released guidance calling on educational technology developers to design AI in ways that protect rights, improve transparency, and center teaching and learning. This month, the Department of Education released a toolkit to support schools and educational leaders in responsibly adopting valuable AI use cases.
- Issued guidance on AI’s nondiscriminatory use in the housing sector, which affirms that existing prohibitions against discrimination apply to AI’s use for tenant screening and housing advertisements, while explaining how to comply with these obligations. Additionally, the Consumer Financial Protection Bureau approved a rule requiring that algorithms and AI used for home valuations are fair, nondiscriminatory, and free of conflicts of interest.
- Set guardrails on the responsible and equitable use of AI and algorithmic systems in administering public benefits programs. The Department of Agriculture’s guidance provides a framework for how State, local, Tribal, and territorial governments should manage risks for uses of AI and automated systems in critical benefits programs such as SNAP, while HHS released a plan with guidelines on similar topics for benefits programs it oversees.
- Affirmed commitments to prevent and address unlawful discrimination and other harms resulting from AI. DOJ’s Civil Rights Division convenes federal agency civil rights offices and senior government officials to foster AI and civil rights coordination. Five new agencies also joined a 2023 pledge to uphold America’s commitment to fairness, equality, and justice as new technologies like AI become more common in daily life.
- Advanced privacy protections to safeguard Americans from privacy risks that AI creates or exacerbates. In particular, the National Science Foundation (NSF) and DOE established a research network dedicated to advancing the development, deployment, and scaling of privacy-enhancing technologies (PETs), while NSF launched the $23 million Privacy-preserving Data Sharing in Practice program to apply, mature, and scale PETs for specific use cases and establish testbeds to accelerate their adoption. Simultaneously, DOE launched a $68 million effort on AI for Science research, which includes efforts at multiple DOE National Laboratories and other institutions to advance PETs for scientific AI. The Department of Commerce also developed guidelines on evaluating differential privacy guarantees. The Office of Management and Budget (OMB) released a Request for Information (RFI) on issues related to federal agency collection, processing, maintenance, use, sharing, dissemination, and disposition of commercially available information containing personally identifiable information. OMB also released an RFI on how federal agencies’ privacy impact assessments may be more effective at mitigating privacy risks, including those that are further exacerbated by AI and other advances in technology and data capabilities.
Harnessing AI for Good
Over the last year, agencies have worked to seize AI’s enormous promise, including by collaborating with the private sector, promoting development and use of valuable AI use cases, and deepening the U.S. lead in AI innovation. To harness AI for good, agencies have:
- Launched the National AI Research Resource (NAIRR) pilot and awarded over 150 research teams access to computational and other AI resources. The NAIRR pilot—a national infrastructure led by NSF in partnership with DOE and other governmental and nongovernmental partners—makes available resources to support the nation’s AI research and education community. Supported research teams span 34 states and tackle projects covering deepfake detection, AI safety, next-generation medical diagnoses, environmental protection, and materials engineering.
- Promoted AI education and training across the United States. DOE is leveraging its network of national laboratories to train 500 new researchers by 2025 to meet demand for AI talent, while NSF has invested millions of dollars in programs to train future AI leaders and innovators. These programs include the EducateAI initiative, which helps fund educators creating high-quality, inclusive AI educational opportunities at the K-12 through undergraduate levels that support experiential learning in fields such as AI and build capacity in AI research at minority-serving institutions.
- Expanded the ability of top AI scientists, engineers, and entrepreneurs to come to the United States, including by clarifying O-1 and H-1B visa rules and working to streamline visa processing.
- Released a report on the potential benefits, risks, and implications of dual-use foundation models for which the model weights are widely available, including related policy recommendations. The Department of Commerce’s report draws on extensive outreach to experts and stakeholders, including hundreds of public comments submitted on this topic.
- Announced a competition for up to $100 million to support the application of AI-enabled autonomous experimentation to accelerate research into—and delivery of—targeted, industry-relevant, sustainable semiconductor materials and processes.
- Established two new National AI Research Institutes for building AI tools to advance progress across economic sectors, science, and engineering. The NSF-led AI Research Institutes launched in September will develop AI tools for astronomical sciences, with broader applications across scientific disciplines. Earlier this year, NSF also funded 10 inaugural Regional Innovation Engines (NSF Engines), seven of which include a focus on advancing AI.
- Announced millions of dollars in further investments to advance responsible AI development and use throughout our society. These include $13 million invested by DOE in the VoltAIc initiative for using AI to streamline permitting and accelerate clean energy deployment, as well as $68 million from DOE to fund AI for scientific research to accelerate scientific programming and develop energy efficient AI models and hardware. DOE has also released a roadmap and a request for information for its Frontiers in AI for Science, Security, and Technology (FASST) initiative, which aims to harness AI for scientific discovery, national security, energy and electric grid resilience, and other national challenges, building on AI tools, models, and partnerships. NSF, in partnership with philanthropy, announced an inaugural investment of more than $18 million to 44 multidisciplinary, multi-sector teams across the U.S. to advance the responsible design, development, and deployment of technologies including AI, ensuring ethical, legal, community, and societal considerations are embedded in the lifecycle of technology’s creation.
- Issued a first-ever report analyzing AI’s near-term potential to support the growth of America’s clean energy economy. DOE’s National Laboratories also issued a long-term grand challenges report identifying opportunities in AI for energy over the next decade.
- Released a vision for how AI can help us achieve our nation’s greatest aspirations. AI Aspirations sets forth goals to create a future of better health and opportunity for all, mitigate climate change and boost resilience, build robust infrastructure and manufacturing, ensure the government works for every American, and more. In furtherance of these goals, HHS launched CATALYST, a research and development program focused on the potential use of AI to better predict drug safety and efficacy before clinical trials start. Complementing these efforts, the President’s Council of Advisors on Science and Technology authored a report outlining AI’s potential to revolutionize and accelerate scientific discovery.
- Published guidance addressing vital questions at the intersection of AI and intellectual property. To advance innovation, the U.S. Patent and Trademark Office (USPTO) has released guidance documents on the patentability of AI-assisted inventions, on the subject matter eligibility of patent claims involving inventions related to AI technology, and on the use of AI tools in proceedings before the USPTO.
Bringing AI and AI Talent into Government
AI can help government deliver better results for the American people, though its use by Federal agencies can also pose risks, such as discrimination and unsafe decisions. Bringing AI and AI-enabling professionals into government, moreover, is vital for managing these risks and opportunities and advancing other critical AI missions. Over the last year, agencies have:
- Issued the first-ever government-wide policy to strengthen governance, mitigate risks, and advance innovation in federal use of AI. OMB’s historic policy, M-24-10, requires agencies to implement concrete safeguards when using AI in a way that could impact Americans’ rights or safety. These safeguards include a series of mandatory risk management practices to reliably assess, test, and monitor AI’s impacts on the public and provide greater transparency into how the government uses AI. OMB’s policy also directs agencies to designate Chief AI Officers to coordinate the use of AI across their agency, while expanding and upskilling their AI workforce and removing barriers to adopting AI for all manner of purposes—from addressing climate change to advancing public health and safety.
- Released a government-wide policy to advance responsible acquisition of AI by Federal agencies. M-24-18, published this month by OMB, helps ensure that when Federal agencies acquire AI, they have the information and tools necessary to manage risks, promote a competitive marketplace, and collaborate on strategic planning. This work directs the Federal government—the largest buyer in the U.S. economy—to advance AI innovation and risk management through responsibly exercising its purchasing power.
- Hired over 250 AI practitioners into the Federal government through the AI Talent Surge. Tech talent programs ramped up hiring for AI talent, with the Presidential Innovation Fellows bringing on their first-ever AI cohort, DHS establishing their AI Corps with over 30 members onboarded to date, and the U.S. Digital Corps providing pathways for early-career technologists to join Federal service. AI talent has been instrumental in delivering on critical AI priorities, from using AI to deliver top-tier government services, to protecting the public’s rights and safety in the use of AI.
- Established the Chief AI Officers Council to harmonize best practices and resource sharing across agencies, implement OMB’s guidance, and coordinate the development and use of AI in agencies’ programs and operations.
- Introduced expanded reporting instructions for the federal AI use case inventory, requiring agencies to identify use cases that impact rights or safety and to describe how they are addressing the relevant risks in line with OMB’s policies.
- Bolstered the public interest technology ecosystem. Building on the AI Talent Surge, the White House announced funding across government, academia, and civil society to support education and career pathways that will help ensure government has access to diverse, mission-oriented technology talent.
- Activated new hiring authorities to bring AI and AI-enabling talent into agencies. As part of the AI Talent Surge, the Office of Personnel Management (OPM) granted new hiring authorities, including direct hire authorities and excepted service authorities, for agencies to rapidly bring on top-tier AI and AI-enabling talent, and released guidance on skills-based hiring and pay and leave flexibilities to best position agencies to hire and retain AI and AI-enabling talent. Additionally, OPM collaborated with partners to run three National Tech to Gov career fairs to connect the public with AI and tech jobs in government, surfacing roles from over 64 Federal, state, and local government employers to over 3,000 job seekers.
Advancing U.S. Leadership Abroad
President Biden’s Executive Order directed work to lead global efforts to capture AI’s promise, mitigate AI’s risks, and ensure AI’s responsible governance. To advance these goals, the Administration has:
- Sponsored a landmark United Nations General Assembly resolution. The unanimously adopted resolution, co-sponsored by more than 100 countries (including the People’s Republic of China), lays out a common vision for countries around the world to promote the safe and secure use of AI to address global challenges.
- Engaged foreign leaders on strengthening international rules and norms for AI, including at the 2023 UK AI Safety Summit and the AI Seoul Summit in May 2024, where Vice President Harris represented the United States. In the United Kingdom, Vice President Harris unveiled a series of U.S. initiatives to advance the safe and responsible use of AI, including the establishment of AISI at the Department of Commerce.
- Announced a global network of AI Safety Institutes and other government-backed scientific offices to advance AI safety at a technical level. This network, which will formally launch in November at the inaugural network convening in San Francisco, will accelerate critical information exchange and drive toward common or compatible safety evaluations and policies.
- Expanded global support for the U.S.-led Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy. Fifty-six nations now endorse the political declaration, which outlines a set of norms for the responsible development, deployment, and use of military AI capabilities. DoD has expanded the scope of its international AI Partnership for Defense to align global Responsible AI practices with the Political Declaration’s norms.
- Developed comprehensive plans for U.S. engagement on global AI standards and AI-related critical infrastructure topics. NIST and DHS, respectively, will report on priority actions taken per these plans in 90 days.
- Signed the Council of Europe’s Framework Convention on AI and Human Rights, Democracy, and the Rule of Law. This first multilateral treaty on AI represents a powerful affirmation of the relevance of existing human rights obligations to AI activities and establishes a strong baseline in international law for responsible government use of AI. The United States’ signature reflects its commitment to ensuring that AI technologies are designed, developed, used, and governed in ways that promote respect for human rights and democratic values.
- Led the development of a Joint Statement on Responsible Government Practices for AI Technologies. The Joint Statement, to which the 41 countries of the Freedom Online Coalition committed, calls on governments to develop, use, and procure AI responsibly, including by respecting international obligations and commitments, assessing impacts of AI systems, conducting ongoing monitoring, ensuring adequate human training and assessment, communicating and responding to the public, and providing effective access to remedy.
- Launched the Global Partnership for Action on Gender-Based Online Harassment and Abuse. The 15-country Global Partnership has advanced international policies to address online safety, and spurred new programs to prevent and respond to technology-facilitated gender-based violence, including through AI.
- Published resources to advance global AI research and the use of AI for economic development. The Department of State and the U.S. Agency for International Development released the AI in Global Development Playbook, which incorporates principles and practices from NIST’s AI Risk Management Framework to guide AI’s responsible development and deployment across international contexts, and the Global AI Research Agenda, which outlines priorities for advancing AI’s safe, responsible, and sustainable global development and adoption.
October 30, 2024, Washington, DC
Source: WH.gov