CNN obtained an internal memo to Meta’s (formerly Facebook’s) content moderation team that noted that “political speech is ineligible for fact-checking. This includes the words a politician says as well as photo, video, or other content that is clearly labeled as created by the politician or their campaign.” The memo was in direct response to requests for guidance from Meta’s internal and external fact checkers.
Take your political hat off for a moment and put on your data science hat. Propose a step-by-step process for content moderation that ensures your worldview and political preferences (as well as everyone else’s political preferences and worldviews) are respected.
If that problem is too hard (and it is), make the problem simpler by creating special conditions (ones that do not exist in the real world, but make it easier for you to attempt a solution). For example, you can reduce the size of the audience or reduce the degree of subtlety or limit sensitivity to nuance, inference, coded threats, etc.
The goal is to figure out how to create a social media newsfeed that treats everyone fairly (in your mind), with clearly laid-out steps to do the work by hand. Say, for instance, it’s one person (i.e., you) reading 100 stories or posts and using your process (algorithm) to determine which posts should be left alone, which should be flagged as suspect, which should receive a warning, which should be deleted, and (of course) which accounts should be banned for violating your terms and conditions.
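To make the exercise concrete, here is a minimal sketch of that by-hand process as code: one reviewer, one post at a time, each post mapped to one of the five outcomes above. The trigger phrases are pure placeholders standing in for a human reviewer's judgment, not real moderation criteria.

```python
from enum import Enum

class Action(Enum):
    LEAVE_ALONE = "leave alone"
    FLAG_SUSPECT = "flag as suspect"
    ADD_WARNING = "add warning"
    DELETE = "delete"
    BAN_ACCOUNT = "ban account"

def review(post: str) -> Action:
    """Placeholder decision rules; a stand-in for one person's judgment."""
    text = post.lower()
    if "credible threat" in text:   # hypothetical severe violation
        return Action.BAN_ACCOUNT
    if "known hoax" in text:        # hypothetical fabricated claim
        return Action.DELETE
    if "disputed claim" in text:    # hypothetical contested content
        return Action.ADD_WARNING
    if "unverified" in text:        # hypothetical uncertain content
        return Action.FLAG_SUSPECT
    return Action.LEAVE_ALONE

# Reviewing a small batch of posts by hand, one decision per post.
posts = ["Nice weather today.", "An unverified report says..."]
decisions = {p: review(p) for p in posts}
```

Notice how quickly this breaks down: every rule is a judgment call frozen into a keyword, and any post worded slightly differently slips past it.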
Now, propose a scalable, fully automated solution that does exactly what you have just done, step by step. No conditional coding allowed; the steps must work for every story or post because the scale of this content moderation process is far beyond the capacity of individual human moderators or fact checkers.
When you’ve finished your assignment, send me an email with your step-by-step process. You can do it in plain English with no need for any tech jargon or code snippets or technical language at all. For example…

Step 1: Read some posts.
Step 2: Group them into two piles: acceptable and unacceptable.
Step 3: Using your criteria for acceptable and unacceptable, read more posts and add them to the piles.

You’ll have to define “acceptable” and “unacceptable” and propose a mechanism for comparing new posts to your definitions. You’ll also have to figure out a way to keep your definitions up to date. You’ll also need to do this in every popular language used on social media. And on, and on, and on…
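The pile-sorting steps above can be sketched as a toy classifier: seed the two piles by hand, then place each new post into whichever pile it most resembles. The similarity measure here is simple word overlap (Jaccard), and the seed posts are invented examples; a real system would need vastly more than this, which is exactly the point of the exercise.

```python
def words(post: str) -> set[str]:
    """Reduce a post to its set of lowercase words."""
    return set(post.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    """Word-overlap similarity: size of intersection over size of union."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Steps 1-2: hand-labeled seed piles (placeholder examples).
piles = {
    "acceptable": [words("local bakery opens new shop downtown")],
    "unacceptable": [words("miracle cure doctors hate this secret trick")],
}

# Step 3: compare each new post to the piles and add it to the closest one.
def classify(post: str) -> str:
    w = words(post)
    best = max(
        piles,
        key=lambda label: max(jaccard(w, example) for example in piles[label]),
    )
    piles[best].append(w)  # the pile grows, loosely "keeping definitions up to date"
    return best
```

Even this toy version exposes the hard parts: the definitions live entirely in the seed examples, word overlap knows nothing about meaning, sarcasm, or coded language, and the whole thing would have to be rebuilt for every language on the platform.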
Twitter just fired most of its content moderation team. Meta won’t even attempt to moderate or fact-check a post by politicians… how will they handle people quoting (or misquoting) politicians?
All of this is happening as new AI tools are coming to market that create alternate realities that look and sound so real that they are indistinguishable from our lived experiences. Get ready. It’s going to be one hell of a journey.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.