New Biden Administration order compels AI companies to share safety data

US President Joe Biden has issued an executive order that sets standards for the safety, security and transparency of artificial intelligence systems, requiring the companies developing them to share the results of the systems’ safety tests with the U.S. government.

Companies that train AI using so-called foundation models, which may pose national security and public health risks depending on how they’re deployed, must disclose the safety test results to U.S. agencies, the order states.

Foundation models are AI systems trained on large quantities of data that can then be deployed across a wide variety of applications. Many AI systems that use natural language processing, as well as those that suggest new chemical compounds, are built on foundation models.

Additionally, any company working on life science projects that receives federal funding must adopt new standards to ensure that AI technologies cannot be used to engineer dangerous biological materials.

The National Institute of Standards and Technology will set standards for the safety tests that companies must conduct before releasing AI systems. The Department of Homeland Security will then apply those standards to systems used in critical infrastructure sectors and will establish an AI Safety and Security Board.

The Energy and Homeland Security departments will assess the threats AI systems pose to operators of critical infrastructure in sectors such as energy, water and pipelines.

Biden’s order draws on the president’s authority under the Defense Production Act, a law presidents have routinely used to direct U.S. companies in the interest of national security. Former President Donald Trump invoked it during the COVID-19 pandemic to control exports of medical goods and increase production of critical supplies.

The executive order codifies voluntary commitments that top U.S. companies made to the White House in July this year, when Amazon.com Inc., Anthropic, Google LLC, Inflection, Meta Platforms Inc., Microsoft Corp. and OpenAI Inc. pledged to develop their technologies in a “safe, secure, and transparent” manner.
