America has long maintained a high bar for defamation, and with good reason. In a country that treats free speech as a foundational right, the U.S. Supreme Court has repeatedly ruled that public figures must prove “actual malice” to prevail in court: not merely that a false statement was made, but that it was made knowingly or with reckless disregard for the truth. This standard, established in New York Times v. Sullivan, protects open debate and shields journalists, authors, and commentators from politically motivated lawsuits. In his 2023 book, Liar in a Crowded Theater, the author and professor Jeff Kosseff defends this high bar as essential to democracy, arguing that the alternative would create a chilling climate in which powerful interests weaponize the courts against critics — a far greater harm than the occasional falsehood. The United Kingdom’s defamation laws, for example, make it far easier for claimants to silence reporting, especially when they are backed by wealth or political influence.
But what happens when the false statements come not from a human journalist or rival pundit, but from an artificial-intelligence chatbot created and deployed by one of the most powerful companies in the world? That is the question now facing a federal court in Tennessee, where the conservative commentator Robby Starbuck is suing Meta, the parent company of Facebook, for defamation after its AI system falsely implicated him in criminal activity linked to the January 6 Capitol riot. One version of the AI’s output falsely stated that Starbuck had “stormed the Capitol,” an accusation with no basis in fact. Within 24 hours, Starbuck told me in a new podcast, “my lawyers got in touch with Meta’s lawyers and I put out a message to executives at Meta and said, ‘Hey, this has gotta stop. This is not acceptable.’”