- Meta Platforms, which runs Facebook and Instagram, will require advertisers to disclose when political ads have been digitally created or altered, including with artificial intelligence.1
- According to the new rules, which take effect worldwide in 2024, Meta will penalize advertisers who fail to disclose ads that show a person saying or doing something they never said or did, alter footage of a real event, or depict realistic-looking people or events that never occurred.2
- Furthermore, Meta has banned advertisers from using its generative AI software for social issues, electoral, or political ads on Facebook and Instagram.3
- This comes a day after Microsoft announced that political campaigns could embed digital watermarks in their ads to verify their authenticity and ensure they can't be digitally altered without leaving evidence.4
- Meanwhile, the Federal Election Commission is expected to vote on a similar rule requiring political advertisers to disclose the use of AI-generated content in political ads ahead of the 2024 presidential election.5
- However, synthetic media has already appeared in political ads: the Republican National Committee used AI to create a 30-second ad envisioning a hypothetical second term for President Biden, while critics of Donald Trump have circulated fabricated images of his arrest.6
- Pro-establishment narrative, as provided by Northwestern Now. Not only should tech companies review AI-generated images and videos, but governments around the globe should enact laws to prevent this insidious content from polarizing society even further. Until that happens, however, everyone must learn to carefully analyze any content they see online before sharing it widely.
- Establishment-critical narrative, as provided by Institute for Free Speech. While Meta isn't calling for an outright ban on deepfake images, requiring disclaimers is a slippery slope that could lead to forced labels on other content, such as satire. Twisting what people say and doctoring images to fit a narrative is a decades-old problem; the mere existence of AI doesn't mean we should lose the right to use it as a form of protected speech.