By Tom Parker
The Department of Homeland Security’s (DHS’s) controversial “Disinformation Governance Board” was recently shut down amid First Amendment concerns, but the DHS seemingly still intends to continue its “disinformation” work.
A recent report from the Homeland Security Advisory Council’s “Disinformation Best Practices and Safeguards Subcommittee” states that while “there is no need for a separate Disinformation Governance Board…the Department must be able to address the disinformation threat streams that can undermine the security of our homeland.”
The report was produced after DHS Secretary Alejandro Mayorkas asked the subcommittee to make recommendations for how the DHS can “most effectively and appropriately address disinformation that poses a threat to the homeland while protecting civil rights and providing greater transparency across this work.”
In the report, the subcommittee provides a broad definition of disinformation, outlines how the DHS detects and mitigates information that falls under the scope of this definition, and provides the subcommittee’s recommendations.
The far-reaching definition of disinformation includes both deliberate and unintentional spreading of “falsehoods.” The subcommittee also deems the “intentional spreading of genuine information with the intent to cause harm” to be a form of disinformation and uses “moving private and personal information into the public sphere” as an example of this type of disinformation.
The report outlines how several US government agencies that fall under the DHS’s purview, including the Office of Intelligence and Analysis (I&A), the Federal Emergency Management Agency (FEMA), and Customs and Border Protection (CBP), surveil online messages, forums, and social media to identify disinformation, “rumors,” and “attitudes related to migration.”
It also notes that the Cybersecurity and Infrastructure Security Agency (CISA), which also falls under the purview of the DHS, flags “disinformation campaigns utilizing social media” to social media companies “for whatever action those companies see fit to take.”
In the recommendations section of the report, the subcommittee insists that the DHS’s work on disinformation is “critical” and that the DHS “needs the ability to identify, analyze, and, where necessary, address certain incorrect information.”
The subcommittee adds that the DHS should be able to flag disinformation to social media platforms:
“The Department can and should also bring such disinformation to the attention of other government agencies for appropriate action and to platforms hosting the falsehoods. It is for the platforms, alone, to determine whether any action is appropriate under their policies.”
While the report recommends that the DHS should maintain its broad powers to surveil disinformation and flag it to social media platforms, the subcommittee insists that these activities will be “consistent with the law and the relevant civil rights and privacy protections.”
We obtained a copy of this Disinformation Best Practices and Safeguards Subcommittee report for you here.
We obtained a copy of the appendix to this report (which contains examples of DHS products and activities that address disinformation) for you here.
This report and its recommendations were published on the same day that the DHS officially shut down its Disinformation Governance Board. The board was introduced in April, but within days, 20 states threatened legal action and branded it an “unacceptable and downright alarming encroachment on every citizen’s right to express his or her opinions, engage in political debate, and disagree with the government.”
While the DHS’s activities related to the Disinformation Governance Board generated mass controversy, the DHS was surveilling “misinformation,” and was accused of surveilling money transfers, before the board was even introduced. It has also contracted with a company that provides social media surveillance software.
The recommendation that the DHS should flag alleged disinformation to social media was published one day before the Federal Bureau of Investigation’s (FBI’s) use of this tactic in the run-up to the 2020 US presidential election came under fresh scrutiny.
The scrutiny began after Facebook CEO Mark Zuckerberg appeared on The Joe Rogan Experience podcast and said the FBI had warned Facebook about a “dump” of “Russian disinfo” just before the New York Post published a story alleging that Joe Biden and his son Hunter Biden had engaged in a corruption scandal. The story was published a few weeks before the 2020 US presidential election and was censored by Facebook and other Big Tech platforms. At the time, many politicians and journalists blasted Big Tech for censoring a story that was unfavorable to then-Democratic presidential candidate Joe Biden. A recent poll found that 79% of Americans who followed the story believe that “truthful” coverage would have changed the outcome of the 2020 election.
Government agencies such as the DHS and the FBI defend this practice of flagging alleged disinformation to social media companies by insisting that they’re not directing the companies to censor and that it’s up to the platforms to decide whether they want to remove the information that’s flagged to them.
However, internal chats have revealed that when government agencies or officials flag information or accounts to platforms, they do sometimes apply pressure. For example, recently released internal Slack messages show Twitter employees discussing the White House branding journalist Alex Berenson “the epicenter of disinfo” and questioning “why Alex Berenson hasn’t been kicked off from the platform” four months before he was banned.