The White House has announced a series of measures to address the challenges of artificial intelligence, driven by the sudden popularity of tools such as ChatGPT and amid rising concerns about the technology’s potential risks for discrimination, misinformation and privacy.
The US government plans to introduce policies that shape how federal agencies procure and use AI systems, according to the White House.
The step could significantly influence the market for AI products and control how Americans interact with AI on government websites, at security checkpoints and in other settings.
The National Science Foundation will also spend $140 million to promote research and development in AI, the White House added.
The funds will be used to create research centers that seek to apply AI to issues such as climate change, agriculture and public health, according to the administration.
The plan came the same day that Vice President Kamala Harris and other administration officials met with the CEOs of Google, Microsoft, ChatGPT-creator OpenAI and Anthropic to emphasize the importance of ethical and responsible AI development. And it coincides with a UK government inquiry launched Thursday into the risks and benefits of AI.
“Tech companies have a fundamental responsibility to make sure their products are safe and secure, and that they protect people’s rights before they’re deployed or made public,” a senior Biden administration official told reporters on a conference call ahead of the meeting.
Officials on the call cited a range of risks the public faces in the widespread adoption of AI tools, including the possible use of AI-created deepfakes and misinformation that could undermine the democratic process. Job losses linked to rising automation, biased algorithmic decision-making, physical dangers arising from autonomous vehicles and the threat of AI-powered malicious hackers are also on the White House’s list of concerns.
In a readout following the meeting, the White House said Biden and Harris “were clear that in order to realize the benefits that might come from advances in AI, it is imperative to mitigate both the current and potential risks AI poses to individuals, society, and national security.”
Biden “underscored that companies have a fundamental responsibility to make sure their products are safe and secure before they are deployed or made public,” according to the White House.
Harris, meanwhile, reminded the companies they have an “ethical, moral and legal responsibility to ensure the safety and security of their products,” and that they will be held accountable under existing US laws, according to a White House statement on the meeting.
Harris also teased the possibility of further regulation of the rapidly evolving industry.
“Government, private companies, and others in society must tackle these challenges together,” Harris said in a statement. “President Biden and I are committed to doing our part — including by advancing potential new regulations and supporting new legislation — so that everyone can safely benefit from technological innovations.”
Speaking to reporters following the meeting, White House Press Secretary Karine Jean-Pierre described the conversation as “honest” and “frank.”
“We had four CEOs here meeting with the vice president and the president,” she said. “That shows how seriously we take it.”
Jean-Pierre said greater transparency from AI companies, including giving the public the ability to assess and evaluate their products, will be crucial to ensuring AI systems are safe and trustworthy.
One company that has invested heavily in AI and that was noticeably absent from Thursday’s meeting was Meta, Facebook’s parent. Meta CEO Mark Zuckerberg has described AI development as the company’s “single largest investment” and has said the technology will be integrated into all of its products. But it currently does not offer a ChatGPT-like tool among its services.
The meeting marked the latest example of the federal government acknowledging concerns arising from the rapid development and deployment of new AI tools, and trying to find ways to address some of the risks.
Testifying before Congress, members of the Federal Trade Commission have argued AI could “turbocharge” fraud and scams. The agency’s chair, Lina Khan, wrote in a New York Times op-ed this week that the US government has ample existing legal authority to regulate AI by leaning on its mandate to protect consumers and competition.
Last year, the Biden administration unveiled a proposal for an AI Bill of Rights calling for developers to respect the principles of privacy, safety and equal rights as they create new AI tools.
Earlier this year, the Commerce Department released voluntary risk management guidelines for AI that it said could help organizations and businesses “govern, map, measure and manage” the potential dangers in each part of the development cycle. In April, the department also said it is seeking public input on the best policies for regulating AI, including through audits and industry self-regulation.
The US government isn’t alone in seeking to shape AI development. European officials anticipate hammering out AI legislation as soon as this year that could have major implications for AI companies around the world.