President Joe Biden is expected to receive a classified memo outlining AI’s threats to national security and suggesting limits to its deployment, several sources with knowledge of the memorandum’s contents told Nextgov/FCW.
Ordered up by Biden’s October executive order on AI, the memo is meant to help “develop a coordinated executive-branch approach to managing AI’s security risks.” It is expected to build on guidance issued last year by the Office of Management and Budget, as well as international commitments discussed in recent meetings at Bletchley Park and in Italy.
“This [memorandum] is focused on national security systems which exist in military and intelligence agencies, but also some of FBI’s and DHS’s systems also will qualify,” a person familiar with the expected contents of the memo said.
The memo will not directly change AI procurement, but it will likely carry “significant implications” for cloud service providers and frontier model developers and their understanding of how to responsibly deploy these technologies.
Securing U.S. leadership in AI innovation and standardization is also a likely focal point of the memo, which is expected to address domestic workforce challenges.
“In addition to underscoring the strategic focus on talent development as essential for maintaining technological leadership, a heavy focus will be on talent development within the United States and bringing top talent to the United States,” a second person with knowledge of the memorandum’s contents said. “This is seen as critical for enhancing the nation’s competitive edge in AI technologies.”
The memo is also expected to deal with the energy demands of AI computing and how best to balance those demands with the policy push for clean energy.
The memo is also expected to address how AI should not be used in government operations. The first source said it will likely include a short list of “prohibited uses” of AI systems, such as using AI to operate nuclear weapons or to track constitutionally protected activity, like free speech.
The memo will also discuss “high-impact” uses that are not prohibited, but demand greater oversight—perhaps including real-time biometric tracking of individuals.
“Those high-impact uses will be subject to various governance and risk management practices that will be similar to those in the OMB memo, though depart from them in some ways,” the first source said.
Although the memo will be initially classified, the Biden administration is angling to declassify as much as it can for broader accessibility at a later date, the second source familiar with the memo said.
Experts in the national security field say the memorandum will be important in setting the tone for how the government responds to both the risks and the advantages of AI technologies.
“What you’re looking at here are government capabilities that really affect fundamental freedoms and rights: who they decide to investigate, who they decide to surveil, who they allow to come into the country, who they designated as a national security or public safety threat. So these are things that are really important to individuals and really affect their lives,” Faiza Patel, co-director of the Liberty and National Security Program within New York University’s Brennan Center for Justice, told Nextgov/FCW. “So I think it’s an incredibly high-stakes document which hasn’t gotten as much attention, I think, as some of the other AI work.”
Patel noted that within national security organizations, internal mechanisms are often needed to enforce the implementation of safeguards. More robust external oversight of AI deployment, she said, could help federal agencies preserve civil liberties as they integrate the technology.
“I would be pleased to see strong guardrails for high-risk systems. I would be pleased to see a robust list of high-risk systems, but I do question whether there are effective mechanisms inside the government to make sure whether those rules and safeguards are actually being followed,” Patel said.