Sorry, Arnold. You are wrong. As long as nasty barbarians continue to grab and consolidate their coercive power, they will erase everything they don't like. You may not like it, but "thanks" to the pandemic, they have finished the destruction of all liberal values. Unfortunately, too many cowards are helping and appeasing them. As of today, it looks too late to save the U.S. bureaucracy, universities, and social media; even if the barbarians and their servants were defeated, rebuilding them would be costly and slow.
I suggest reading
https://quillette.com/2021/12/18/scientists-must-gain-the-courage-to-oppose-the-politicization-of-their-disciplines/
BTW, Razib Khan says it's a lost cause.
Something that I think would help social media would be punishing the “echo chamber”. If you think of each pair of users in a social network as having a “similarity score” (how similar they are in the types of posts they interact with), then someone speaking to an echo chamber would get most of their “likes” from people with high similarity scores to themselves.
Deboosting people with high echo chamber scores and boosting people with low echo chamber scores would improve discourse drastically. And I think it would be hard to game without actually improving discourse.
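The scoring idea above can be made concrete. A minimal sketch, with made-up toy data: represent each user by a vector of interaction counts per topic cluster, take cosine similarity as the pairwise "similarity score," and define a user's echo-chamber score as the mean similarity between that user and the people who liked them.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two topic-interaction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def echo_score(author, likers, profiles):
    """Mean similarity between an author and the users who liked their posts.
    Near 1.0 -> likes come from an echo chamber; near 0.0 -> diverse audience."""
    if not likers:
        return 0.0
    sims = [cosine(profiles[author], profiles[u]) for u in likers]
    return sum(sims) / len(sims)

# Toy data: per-user counts of interactions with three topic clusters.
profiles = {
    "alice": [9, 1, 0],
    "bob":   [8, 2, 0],   # interacts with the same topics as alice
    "carol": [0, 1, 9],   # interacts with very different topics
}
print(echo_score("alice", ["bob"], profiles))    # high: likes from the in-group
print(echo_score("alice", ["carol"], profiles))  # low: likes from outside it
```

A ranking system could then deboost posts in proportion to the author's echo score. Gaming it would require attracting likes from dissimilar users, which is close to the behavior one wants to encourage.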
I've been on the internet long enough to remember when people thought the war on spam was unwinnable, and then, not long after, to see technology and math nearly eliminate spam. I don't see why semantic analysis, moderation, fact-checking, statistics, and machine learning can't be used in the same way to make social media nicer, more pleasant, and more fact-based.
The problem isn't technological or mathematical. The problem is incentives. The social media companies have mostly weaponized our attention. They profit the more attention we give their services. And, sadly, we seem to devote much of our attention to sensationalist, low information, high conflict, and tribal nonsense.
I think of Pinterest and LinkedIn as two social media sites that don't endlessly promote conflict through attention mining, and it is no surprise to me that they are far less attention-consuming than Facebook, Twitter, Instagram, and the like.
I don't know much about it, but I gather that the spam wars were won more by creating viable reputation systems for MTAs based on domain names (protected from spoofing with DKIM, SPF, DMARC, and the newer ARC), ultimately anchored in the requirement that people provide real-world identification, such as credit card details, to register domain names. Spam detection systems by themselves have to be constantly updated with human input (e.g. when you press the "This is spam" button in Gmail) as spammers learn how to circumvent existing filters. That activity is inherently reactive and inevitably lags behind the attackers, which makes restricting the volume with reputation systems essential.
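The reputation plumbing described above is anchored in policy records that domain owners publish in DNS. As a minimal illustration (the domain and record below are made up), a DMARC record is just a semicolon-separated list of tag=value pairs, which a receiving server parses before deciding how to treat mail that fails SPF/DKIM checks:

```python
def parse_dmarc(record):
    """Parse a DMARC TXT record (published at _dmarc.<domain>) into a dict
    of tag -> value, e.g. {'v': 'DMARC1', 'p': 'reject', ...}."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Hypothetical record for example.com:
record = "v=DMARC1; p=reject; rua=mailto:dmarc@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # "reject": receivers should reject mail failing alignment
```

The point of the analogy: the domain owner's published policy, not per-message content analysis, does the heavy lifting, because the policy is tied to an identity that is costly to fake.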
I think that a "tax" on advertising attached to heavy engagement might be useful. The trick would be how to define "heavy engagement."
Such speed bumps have been implemented in many places already, with mixed results, as one might expect (it's basically anarcho-tyranny). For example, requiring a CAPTCHA to post on anonymous imageboards. When this was first introduced in an attempt to improve the atmosphere on boards, it quickly turned out that it did little to impede motivated wipers (i.e. those who flood a board with objectionable or low-quality content), because they soon created scripts that circumvented, solved, or brute-forced the CAPTCHA and were able to elude IP address/range bans using a variety of tricks and maybe a little money. Regular users could not muster the numbers to overwhelm the malefactors' automated posting systems, and the more sophisticated the CAPTCHAs and other restrictions board managers introduced, the worse the problem got. Some regular users turned to the same wipe scripts to out-wipe the wipers. There were wipe wars, treaties, betrayals, and so on. Ultimately, key people on all sides got bored with it and moved on.
Are you claiming that imageboards would be more usable with no CAPTCHA or range bans?
No. I'm claiming that there was a period when they were unusable despite CAPTCHAs and range bans. Eventually, as I wrote, the most energetic people got bored, grew out of it, found better things to do with their coding skills and energies, and moved on. Later, due to widespread use elsewhere, CAPTCHAs improved to the point that they could no longer be solved for free, although this "speed bump" means that regular users now have to spend many seconds, and possibly reload several times, until they chance on a readable one. This cost and annoyance is imposed on regular users because it is still impossible or infeasible to go after the malefactors. On the other hand, while paying roughly $1 per 1,000 solves for commercial CAPTCHA-solving, plus whatever it takes to avoid IP range bans using cloud services, may be above the willingness to pay of the typical adolescent imageboard troll, it is hardly unaffordable either.
I appreciate being pointed toward the article and Arnold's comments. I sort of feel toward these issues the way I felt 20-odd months ago, when I had no relatives or acquaintances who had been infected with Covid: I've never actually encountered untoward social media content like that which is discussed. Anybody else in the same boat? Like OneEyedMan, I think the comparison with spam is useful and makes me think this stuff will in practice be limited by reasonable steps.