Here’s something to ponder and discuss. There’s a case before the Supreme Court brought by the family of Nohemi Gonzalez, who was killed in the 2015 Islamic State terrorist attack in Paris. The plaintiffs claim that YouTube, a unit of Google, aided ISIS by recommending the terrorist group’s videos to users, and that because those recommendations promoted harmful content, the platform should not be protected.
Enacted in 1996, Section 230 of the Communications Decency Act protects digital platforms from liability arising from user-generated content. It’s based on the idea that, just as phone companies are not liable for the content of phone calls, pure-play internet platforms should not be liable for what people post.
The petitioners’ brief contends that protection under Section 230 “is not available for material that the website itself created […] If YouTube were to write on its homepage, or on the homepage of a user, ‘YouTube strongly recommends that you watch this video,’ that obviously would not be ‘information provided by another information content provider.’”
Google believes that it is protected by Section 230 and argues that there is no way to draw a meaningful distinction between recommendation algorithms and the related algorithms that allow search engines and ranking systems to work. Google’s general counsel Halimah DeLaine Prado said, “Section 230 is fundamentally the economic backbone of the internet,” asserting that “a ruling that undermines Section 230 would have significant unintended and harmful consequences.”
What caught my attention was the petitioners equating YouTube’s recommendation algorithm with a written declaration that “YouTube strongly recommends that you watch this video.” Let’s discuss!
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.