Censorship Demands Behind Deep Fake Hype

The Defense Department funded the AI that the government-linked NGO Deep Trust Alliance says is a grave disinformation threat

Michael Shellenberger
Sep 14, 2023
Secretary of Defense Lloyd Austin (center) listens as President Joe R. Biden (left) speaks; Kathryn Harrison (right), Deep Trust Alliance CEO (Getty Images)

The ability to create deep fakes and fake news through the use of AI is a major threat to democracy, say many experts. “AI-generated images and videos have triggered a panic among researchers, politicians and even some tech workers who warn that fabricated photos and videos could mislead voters, in what a U.N. AI adviser called in one interview the ‘deepfake election,’” reported the Washington Post late last month. “The concerns have pushed regulators into action. Leading tech companies recently promised the White House they would develop tools to allow users to detect whether media is made by AI.”[1]

But the threat of AI to elections today is as overblown as the threat of Russian disinformation to elections in 2020. Never before has the U.S. been better prepared to detect deep fakes and fake news than it is today. In truth, the U.S. Department of Defense has been developing such tools for decades. In 1999, the Defense Advanced Research Projects Agency (DARPA) described its funding for R&D as having the goal of “total situational awareness” through “data mining,” “face recognition,” and computer networks to evaluate “semantic content,” in a proposal that anticipated the direction of the technology over the following 25 years.[2]
