Joe Biden’s Attack on Facebook for False Political Advertising: The Solution is…

Last Friday, the Financial Times ran an article about former VP and current 2020 Presidential candidate Joe Biden’s stinging attack on Mark Zuckerberg and Facebook (“FB”), arguing that Congress should revoke the law that shields platforms like FB from liability for user-generated content posted on their sites: https://www.ft.com/content/dc3f2c52-394a-11ea-a6d3-9a26f8c3cba4?shareType=nongift

Of course, the underlying cause of this rage was twofold: the platform had run an allegedly “false political advert claiming the former vice-president blackmailed Ukrainian officials in order to thwart an investigation into his son, Hunter,” and Biden believes the platform has allowed, and continues to allow, Russian disinformation campaigns through false advertising, making it (in his view) knowingly complicit in such activity.

For those not familiar with the law being referenced, it is the Communications Decency Act (CDA); specifically, Section 230 of the CDA. Interestingly, while the CDA has been around since 1996, this provision is actually a small part of the original law, much of which has since been struck down, leaving this oft-cited section as, effectively, the sole survivor.

Specifically, the key language of Section 230 states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In layman’s terms, a website, social media site, or other online service cannot be held responsible for the content of a post by a user of that service. While this may not seem like a big deal to the average Joe, it is actually a hugely important protection.

To understand why this is so important, consider the Yahoo! France case, in which a French court in 2000 held Yahoo! liable for allowing the sale of Nazi memorabilia on its auction site, in violation of a French law prohibiting the sale of Nazi memorabilia and the glorification of Nazism. Yahoo and its then chief executive were even prosecuted criminally in France over this conduct, although they were later acquitted. Eventually, Yahoo stopped allowing the sale of Nazi memorabilia on all of its sites worldwide.

But the point here is that Yahoo was being held liable for content that it had neither created nor posted, and that, indeed, it would have had trouble filtering out completely, since it could not identify and screen out French users with 100% accuracy.

By contrast, in the U.S., the CDA protects companies like Yahoo! from such liability. Of course, technology has changed dramatically since then, but it’s even easier to imagine how the CDA protects sites today, such as when a person posts a racist tirade on Facebook or a violent sex encounter on YouTube. Most people disapprove of such things, but given the massive quantity of content posted, it would place a huge burden on these sites to proactively identify and remove all of it. Thus, immunity from liability has been key to the growth and proliferation of social media and other sites.

Indeed, a strong argument can be made that the CDA is a key reason why we have such a wonderful variety of social media and speech-focused sites like YouTube, Facebook, and WhatsApp in the U.S., as opposed to Europe, where no comparably broad protection exists.

Various advocacy groups have pushed over the years to amend the CDA and limit this immunity, but almost all of those efforts have failed because of our desire to preserve open discourse and free speech and to bolster the growth of the Internet. Thus, Joe Biden’s call to remove the CDA’s grant of immunity is a tall order, and a dangerous path to pursue.

We certainly don’t want to be weighing political points of view when addressing this issue, and we don’t want to ask sites to get into the business of regulating content, or of removing ideas they find objectionable to their values, as that would start us down the path of becoming the authoritarian, repressive governments that we’re trying to stop.

But I do think that false political advertising, including by foreign governments trying to influence our elections, is of such concern that we should consider some options. After all, we’re talking about the very fabric of our constitutional democracy; voting is one of our most cherished rights.

So what if we left the CDA as is, but created an additional obligation for online sites when it comes to accepting political advertising? What if we added a requirement that sites verify who is taking out an online political advert? Something like the “know your customer” rule that has worked quite well in the financial arena since the 9/11 attacks, applied here to online political advertising.

Coupled with this, we could also put the onus on these sites to do some basic verification of the facts before running a political advertisement. Conceptually, we could require the entity submitting the ad to include a list of sources for every factual assertion it makes, making it that much easier for the site to validate (or invalidate) the advertising as legitimate (or false).

In such a scenario, the site would be able to accept the political advertising, and would not be responsible for the repercussions, so long as it made a good-faith effort to validate the source of the ad (know your customer) and to do a basic fact check of the content. This could be built into the cost of the ads, and given how much campaigns spend on advertising, it would be unlikely to have a chilling effect on speech.

But maybe, just maybe, it would enable us to catch and filter out more of the political ads placed by foreign state actors, as well as the clearly false ones.

Anyway, I’m curious to hear people’s thoughts on this.

Joel Schwarz

Joel Schwarz is Managing Partner with the Schwarz Group LLC and an adjunct professor at Albany Law School, teaching courses on cybercrime, cybersecurity and privacy. He previously served as the Civil Liberties and Privacy Officer (CLPO) for the National Counterterrorism Center and was a cybercrime prosecutor for the Justice Dept. and the N.Y. State Attorney General’s Office. Joel frequently speaks and writes on privacy matters (including student data privacy and privacy in education technology), is a member of the Student Data Privacy Consortium (SDPC), and is Privacy & Security Vice-Chair of the Montgomery County PTA’s Safe-Tech Committee.
