May 2 · Liked by Rasheed Griffith

This:

"The issue of who bears liability is a very interesting one..."

And this:

"And as a society, countries need to decide for themselves where they want that liability dial to be, understanding that the more liability you put on the company, the fewer kinds of speech you're going to have on these properties, and at the margin, you're going to have more and more speech restricted."

I think that for regulators, free speech and its liability are mediated by the platforms' algorithms.

It could be argued that the algorithm feeds people's compulsion to act a certain way. We could say the same of drugs or alcohol. And we've responded to those: we've put age limits on alcohol and outlawed drug trafficking, not drug use per se.

We already have age restrictions on social media, but we have yet to determine the point where current liability laws take effect for social media, because we don't know whether to treat these platforms like traditional media. My argument is that, in the eyes of the law, we should. Liability would fall on the platform where a paid employee is concerned, but on the individual or the company that pays them to put out falsehoods or hate speech on the platforms.

The tricky part here is how to regulate the role of the algorithm in amplifying falsehoods and hate speech. I don't want to believe this is a knowledge problem, because tech activists have long been arguing for more competition and for resistance to the closed internet of the sort we have today.

My thinking is that there's an incentive problem on the part of the regulators, who would rather fine companies for pesky regulatory infringements that do nothing to enhance the user experience on the platforms.

I also want to note that social media became this "cesspit" when politicians brought their campaigns to it. We were fine before the Obama and Trump campaigns. That's one area government could look into. The algorithms are what they are; political messaging worsens their memetic effects.

Perhaps the government could limit political messages to traditional media (for now), where there are clear rules on where liability falls, and increasingly allow them on social media as we work out the kinks. This might be a good incentive for all the parties involved to work out something meaningful. A good example is the removal of anonymity from political messages.

There are many intrinsic properties of the internet that run counter to the shenanigans of politics and its purported public order. The internet is almost anarchic by nature. I want to preserve this daring spirit of the internet and will accept the side constraint of a limit on political messaging to achieve it.

While I'm wary of government control, I'm not sure it's illiberal, or an infringement on rights, to limit political messages to certain media for explicit reasons while we work to expand access in the long run.