John Samples, a political scientist and Cato scholar, was chosen to serve on Facebook’s “oversight board,” a group of 20 people from various countries that makes non-binding recommendations to Facebook concerning content moderation. In our discussion for paid subscribers, I came away with a better appreciation of why content moderation is such a difficult issue for Facebook. Unfortunately, I did not record the conversation properly, so the following is based on notes.
I asked whether Facebook could get out of the content moderation business simply by getting rid of the algorithmic feed and letting people select for themselves the content that they want to see. Then Facebook could claim neutrality, just as a supplier of Internet infrastructure can claim neutrality.
He replied that Facebook would still face reputational risk. With a worldwide reach of billions of users, Facebook has been used in other countries to rally political movements. Samples said that this first posed a problem several years ago in Myanmar, where organized oppression of the Rohingya Muslim minority was being facilitated on Facebook.
Once we take it as given that Facebook will have to impose some rules on content, the problem becomes one of inevitable dissatisfaction. If moderation is too lenient, people who see offensive content will complain. If it is too strict, the people whose content was taken down will complain. And Facebook is bound to make mistakes. You can think of these as Type I and Type II errors: it either mistakenly takes down something that was actually acceptable or mistakenly leaves up something that was genuinely offensive. He suggested that artificial intelligence might reduce these errors.
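To make the tradeoff concrete, here is a minimal sketch in Python. The scores, posts, and the `error_rates` helper are entirely invented for illustration; the point is only that, at any fixed removal threshold, tightening moderation reduces one type of error while increasing the other.

```python
# Hypothetical illustration of the moderation tradeoff described above.
# A classifier scores each post; posts scoring at or above a threshold are removed.
# Type I error: an acceptable post is wrongly removed.
# Type II error: an offensive post is wrongly left up.

# (score, truly_offensive) pairs -- invented data for illustration only
posts = [
    (0.95, True), (0.80, True), (0.75, False), (0.60, True),
    (0.55, False), (0.40, False), (0.35, True), (0.10, False),
]

def error_rates(threshold):
    type_i = sum(1 for s, bad in posts if s >= threshold and not bad)  # wrongly removed
    type_ii = sum(1 for s, bad in posts if s < threshold and bad)      # wrongly left up
    return type_i, type_ii

for threshold in (0.3, 0.5, 0.7, 0.9):
    t1, t2 = error_rates(threshold)
    print(f"threshold={threshold:.1f}  Type I (over-removal)={t1}  Type II (under-removal)={t2}")
```

On this framing, better AI amounts to better scores: a classifier that separates offensive from acceptable content more cleanly can shrink both types of error at once, rather than merely trading one for the other.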
Another insight concerns the advertising model. Facebook obtains a lot of knowledge about its users. As much as some people complain about corporate surveillance, most users do not mind, and of course advertisers love being able to target people with ads.
This made me realize that the advertising model and the subscription model are not just two equally plausible ways of generating revenue. If Facebook switched away from an advertising model to a subscription model, it would be throwing away the valuable information that it obtains about its users. Subscriptions would be a much weaker business model.
A final insight was that Facebook saw TikTok as a threat. Facebook’s assumption was that you would want your posts to be seen by friends and family. TikTok assumed that you wanted to be followed by strangers. Facebook felt that it had to compete with TikTok, which meant enabling people to use its service to send messages to strangers. As people used that service to send political messages, this created a lot of dissatisfaction. More recently, by changing the feed algorithm to cut back on political content, Facebook was able to calm things down a bit.
Samples is a straight shooter, not a Facebook flack. But he happened to bring up nuances that Facebook’s critics might not have appreciated. I use social media, including Facebook, less than almost anyone I know. (Unless you count Substack as social media.) I think that the same is true for Samples.
Um. The fact that FB sees great advantages in learning about its users and prefers that to a simple subscription service is a problem. As a user I would prefer to pay FB and not get ads rather than have FB learn all about me to serve me ads.
That's actually one reason I use MeWe more than FB. I pay for MeWe and there are no ads.
I greatly dislike Facebook and Mark Zuckerberg. Why? Because the algorithm optimizes for engagement. This often means amplifying poor behavior. This can degrade our culture. It is like incorporating into our society a religion that promotes disrespect and other poor behaviors.
We want discourse platforms that raise expectations and amplify respectful behavior. The best way of doing this is by manually curating the content, each one of us ourselves. For example, see the Substack In My Tribe. Links to Consider can be thought of as a Best Work Board, showcasing the best ideas and writing according to one man. Each of us can do this. We don’t need an algorithm. We don’t need Mark Zuckerberg. We don’t want an algorithm to take away our duty to judge the behavior of others. This is for each of us to do, whether online or in person. Say no to algorithms that attempt to mimic human judgment about others’ behavior. Conscience is our God. See my post “Toward Better Religious Schools,” which argues against AI graders.
We can improve discourse by policing disrespectful commenting. Each Substack administrator should do this according to their discretion.
This still leaves us with an imperfect system. Anonymous commenters can get away with disrespectful comments by creating new profiles and continuing their poor conduct.
We can ban anonymous profiles, but this comes with a steep trade-off. Many commenters prefer anonymity. I wonder what discourse would be like at In My Tribe if all commenters were required to use their real names and identifying headshots. This would be an interesting experiment. We don’t allow anonymous drivers on the road; everyone has to carry a driver’s license. Obviously there are downsides to requiring photo identification on discourse platforms like Substack, but what are the alternatives?
What are alternative techniques to COMPLETE anonymity? Don’t we want to maintain some kind of tracking of reputation and status linked to a known user, so that we can discipline users who simply want to disrupt our conversations? What is a good happy medium between complete anonymity and complete photo identification? Maybe the administrator could have access to photo identification but keep the public from knowing who is who. This could be done manually by each Substack administrator.
And I use Facebook less than anyone, averaged over the past 8 years. I didn’t use it at all from 2016 until this past spring. Then I had to sell a snowblower on Marketplace, so I created a profile. Now certain sports clubs require that I use it to sign up for their programs.