Censorship Demands Behind Deep Fake Hype
The Defense Department funded the AI that a government-linked NGO, the "Deep Trust Alliance," says is a grave disinformation threat
The ability to create deep fakes and fake news through the use of AI is a major threat to democracy, say many experts. “AI-generated images and videos have triggered a panic among researchers, politicians and even some tech workers who warn that fabricated photos and videos could mislead voters, in what a U.N. AI adviser called in one interview the ‘deepfake election,’” reported the Washington Post late last month. “The concerns have pushed regulators into action. Leading tech companies recently promised the White House they would develop tools to allow users to detect whether media is made by AI.”[1]
But the threat of AI to elections today is as overblown as the threat of Russian disinformation to elections in 2020. The U.S. has never been better prepared to detect deep fakes and fake news than it is today. In truth, the U.S. Department of Defense has been developing such tools for decades. In 1999, the Defense Advanced Research Projects Agency (DARPA) described its funding for R&D as having the goal of “total situational awareness” through “data mining,” “face recognition,” and computer networks to evaluate “semantic content,” in a proposal that anticipated the direction of the technology over the following 25 years.[2]