Two young theologians with a sizable and growing social media following asked me what they could do to accelerate that growth. They noticed that when they posted a picture of a traditional food prepared in an unusual way, they got thousands more likes (and picked up more new followers) than when they posted information about how to perform a traditional ritual in an unusual way. They were struggling with a question at least as old as their faith: quality vs. quantity.
Two public policy staffers asked me what they should recommend to their boss regarding censorship, fake news, and misinformation. They cited a very well-publicized story about 20 to 30 video channels on YouTube (sponsored by a foreign government) that achieved “unusually large” view counts and vast numbers of subscribers. What was the trick? How did the “bad guys” grow their audience so quickly with what their boss believes is fake news? What kind of policies could guide this process, and who should be regulated?
The theologians and the public policy staffers properly identified the conflict, but they did not properly identify the enemy.
Clickbait works. You are going to get a significantly better view count with a title like “Please Stop the Buzzword BS” than you are with the title of this article.
It is easy to start a debate about quality vs. quantity. “I’d prefer a small audience of highly engaged hand-raisers, rather than wasting my time [or resources or money] messaging millions of people who think my content is irrelevant.”
But that quantity-vs.-quality debate is now meaningless. Quality is in the eye of the beholder. We may yearn for a narrative to explain how and why, but that’s not how the digital world works. The algorithmic curation that controls what you do or do not see in every social media company’s newsfeed isn’t programmed to provide you with an emotionally satisfying narrative; it is continuously tuned to keep you engaged and clicking or tapping. So if your key metric is engagement or completed views, “5 Ways to Bounce a Quarter Off of Kim Kardashian’s Butt” or a video of a horrible disaster will always outperform less clickbaity titles or subjects.
Gaming the System
Using clickbait isn’t gaming the system; it is the system. Nor is it rigging the system: the system is already rigged, and it’s rigged with a specific goal. Put the right content in front of the right person in the right place at the right time. That’s it.
If you are going to allow open social media platforms, then people (or machines) who want to reach targeted audiences, or simply the largest audiences, will post clickbait (however you wish to define it) for tonnage and then pepper in messaging that is easily digestible and understandable. If that message is about how to prepare a traditional dish in an unusual way, wonderful! If that message is about identifying and hating the “other,” we need to think about what is really happening and what we can do about it.
Identifying the Enemy
To frame the conversation about regulating social media, you need an in-depth knowledge of the way algorithmic curation surfaces content on the internet. The enemies (yes, there is more than one) are the values and policies used to train the artificial intelligence, machine learning systems, cognitive agents, neural networks, etc. You can choose your own words to describe the technology; from here on, I’ll call it a “recommendation engine,” a term I’ll use synonymously with algorithmic curation.
It would be easy to create laws to regulate or censor the information on Facebook or Google or YouTube or Twitter, but these companies are not the enemy. They sell ads.
But if Social Media Isn’t the Enemy, What Is?
Do you have a good argument as to why everyone on the internet should not be able to connect with everyone else on the internet? With 2.6 billion users, Facebook connects roughly half of all the people who are online. Facebook may be the best-organized social media tool, but the internet itself connects well over 4 billion users. What is the right framework for digital interaction at this scale?
Should recommendation engines be required to tell you why you are seeing a particular piece of content? Would that transparency help solve the problem? If you knew that you were watching a video produced or brought to you by an antagonistic foreign government, would knowing that change your mind about watching the video? What if that government were funding a company with a name that sounded safe? (Use your imagination here.) The more questions you ask about this, the more complicated the situation becomes, and at some point, you will realize that while algorithmic curation is the enemy, it is also the best weapon in this fight.
My challenge to you is to figure out how we align incentives and outcomes. Recommendation engines are optimized to surface what we ourselves tell them we want to see. We tell them by exhibiting specific behaviors that generate data that the system can analyze. The more behaviors we exhibit, the better the system gets. It will always give us what we want most. So perhaps algorithmic curation isn’t the enemy at all.
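To make that feedback loop concrete, here is a minimal toy sketch in Python. The item names and click rates are invented for illustration; this is not any platform’s actual code. The engine ranks items by their observed click-through rate, every click feeds back into the ranking, and a simulated audience that clicks clickbait most often quickly teaches the engine to show clickbait almost exclusively.

```python
import random

# Toy catalog: item -> [clicks, impressions]. All numbers are invented.
stats = {
    "clickbait_listicle": [0, 0],
    "traditional_recipe": [0, 0],
    "ritual_explainer":   [0, 0],
}

def predicted_engagement(item):
    clicks, impressions = stats[item]
    # Laplace-smoothed click-through rate: the engine's only "quality" score.
    return (clicks + 1) / (impressions + 2)

def recommend():
    # Rank purely by predicted engagement -- the single goal the system has.
    return max(stats, key=predicted_engagement)

def user_reacts(item):
    # Simulated audience behavior: clickbait gets clicked far more often.
    rates = {"clickbait_listicle": 0.9,
             "traditional_recipe": 0.3,
             "ritual_explainer":   0.1}
    return random.random() < rates[item]

random.seed(42)
for _ in range(500):
    item = recommend()
    stats[item][1] += 1          # record the impression
    if user_reacts(item):
        stats[item][0] += 1      # the click feeds back into the ranking

print(recommend())  # the engine has converged on the highest-engagement item
```

Note what is missing: nothing in `predicted_engagement` knows or cares whether an item is a recipe, a ritual, or propaganda. The system is simply giving us more of whatever we click.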
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.