U.S. President Joe Biden’s 2024 campaign team has formed a special task force to prepare responses to misleading AI-generated images and videos, including drafting court filings and developing novel legal theories to counter potential disinformation efforts that technology experts have warned could disrupt the vote.
The task force, composed of the campaign’s top lawyers and outside experts such as a former senior legal advisor to the Department of Homeland Security, is investigating what steps Biden might take if, for example, a fake video emerged of a state election official falsely claiming that polls are closed, or an AI-generated image falsely portrayed Biden urging non-citizens to cross the US border to vote illegally.
The goal is to create a “legal toolkit” that will allow the campaign to respond quickly to virtually any scenario involving political misinformation, particularly AI-created deepfakes — convincing audio, video, or images created with artificial intelligence tools.
As part of a larger effort to combat misinformation in all its forms, the Biden campaign established the internal task force in recent months, naming it the “Social Media, AI, Mis/Disinformation (SAID) Legal Advisory Group,” according to TJ Ducklo, a senior adviser.
The team, headed by Garg and campaign general counsel Maury Riggan along with other volunteer specialists, has already begun drafting some legal theories and is still researching others, according to Garg. It aims to be ready to conduct a tabletop exercise for the entire campaign in the first half of 2024.
The scramble highlights the vast legal gray area covering AI-generated political speech, and how policymakers are struggling to respond to the threat it could pose to the democratic process. Without clear federal legislation or regulation, campaigns such as Biden’s are being forced to take matters into their own hands, trying to devise ways to respond to images that might falsely portray candidates or others saying or doing things they never did.
It is unclear whether current US election law, which bars campaigns from “fraudulently misrepresenting other candidates or political parties,” applies to AI-generated content. Republicans on the Federal Election Commission blocked a move in June that would have made clear that the law covers AI-generated images. Since then, the agency has begun examining the question but has not reached a decision.