Moderation in moderation

Status
Not open for further replies.

humon

Member
Joined
Jan 3, 2019
Location
Canada
Title is a little cheeky; I'm riffing on the title of my other thread. This one starts with another Rebel Wisdom video, a discussion brought on by Elon Musk's takeover of Twitter. Two guys with some experience in moderating large platforms (and with very different politics) have an interesting debate about the pros and cons of different forms of platform moderation. What I liked about this conversation was that they were both making some good points and some bad points, and I was starting to build a good synthesis position in my head for where I would want to stand on each topic.



One thing I appreciated at the beginning was the division of moderation into three different categories: decorum moderation, content moderation, and viewpoint moderation. This is a very useful way to approach the pros and cons and tradeoffs involved in each type.

Decorum moderation -- Enforcing rules of engagement, for example to stop harassment, slander, doxxing, flaming, and so on. Without this, the platform becomes a dumpster fire, equivalent to a state of anarchy, where nobody can be comfortable except for those who enjoy fighting.

Content moderation -- Removing content that violates the rules, such as CP, calls to violence, bomb-making instructions, and so on. Everyone can agree that at a minimum this would involve things that are illegal in the country and region where the server is based; depending on the nature and purpose of the platform, it could restrict content further to remove additional things that the community finds repugnant.

Viewpoint moderation -- Removing or disciplining users for having troublesome viewpoints. This is where free speech advocates get most riled up, and it should be done as little as possible. However, in the course of the video above there is an interesting debate about what's to be done with things like QAnon and wokeness, which are sets of ideas that could pass content moderation but in the extreme could also lead people into dark and violent places.


Somewhere in the middle of the conversation the younger guy, Aaron, started talking about censoring QAnon as a complex of memes (a memeplex), because of the harm that this memeplex can do. The older guy, Jim, was quick to disagree, but I think his arguments weren't great. What I would have said is something they got to later, which is that QAnon is more or less the same thing as wokeness, but politically flipped. Aaron says wokeness isn't a conspiracy theory because there's no conspiracy being posited, but that's beside the point, as I see all of the same fallacies, mental traps, and hateful rhetoric in both belief systems. What I would have said first is that people aren't calling to ban wokeness from social media, and so that should tell you where the bias lies (there are calls to ban it from schools, but that's a separate thing, because nobody is trying to teach QAnon ideas in schools to begin with -- political indoctrination of any kind should not be on the curriculum).

The position I would take is this: A memeplex on its own should not be a target of moderation, because these things are too vague; they include extreme forms and moderate forms, and painting with too broad a brush would make it impossible to discuss anything seriously. This is the trouble that we had with Islam in the 2000s, where you had the crazy guys flying planes into buildings, and then you had ordinary people who pray at the mosque and just want to be left alone. In order to solve that problem we had to create clear distinctions between the jihadis and the moderates.

So I would say that where a memeplex is harmful, the target of moderation should be the specific logic within the memeplex that makes it harmful. This can be things like a refusal to consider alternate evidence; making accusations without proof (or even the intention to consider potential innocence); demonizing moderate viewpoints; dehumanizing the "other"; calling for extreme action in pursuit of justice. These behaviours are the building blocks of radicalization, and that's what should be targeted for viewpoint moderation. Taking that approach, it then starts to fall more in line with decorum moderation, because these things can also be categorized as ideas that stifle discussion rather than enhancing it, much like the heckler's veto.

So I would want to hear what a woke person or QAnon person has to say, and have that debate with them, up until the point where they start calling me a white supremacist or a Jewish shill, or they start saying if I'm not with them I'm against them, and then I would just check out of that conversation. It would be the point at which I can tell that the person will not listen to reason. It's a subjective thing, though, so I would take it case by case, just like when dealing with people in your neighbourhood.


A topic Aaron brings up is that he disagrees with sunlight being the best disinfectant, on the grounds that a lie gets halfway around the world while the truth is still pulling its boots on. I would argue in that case that the sunlight-disinfection process merely takes time, for fact checkers to dig into things and weigh competing claims. But it raises an interesting issue, in that social media is a world where there isn't much time for this process to take place, because the average user of social media seems to have a 10-second attention span and is only going to read, have a knee-jerk reaction, respond or share, and move on to the next topic.

This reminds me of a point I heard about police brutality: In cities where police budgets are too tight to police effectively, they compensate by using more brutal methods in order to keep criminals in line through fear. The same situation could occur in online platforms where there is too much content needing moderation and not enough eyeball-hours to go through it judiciously, and so the answer is going to be to fall back on subjective heuristics (after bots do the flagging) to decide who's good and who's bad, and then swing the hammer down hard on the bad guys to set an example to the others.
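To make that dynamic concrete, here's a minimal sketch in Python of how I imagine it playing out (all the names, scores, and thresholds are my own made-up placeholders, not anything from the video): once the flagged queue outgrows the hours available for careful review, everything past the budget gets decided by a crude trust heuristic and a harsh default action.

```python
# Hypothetical triage model: careful review only where hours allow,
# blunt heuristics for everything else. Names and thresholds are invented.

from dataclasses import dataclass

@dataclass
class FlaggedPost:
    post_id: int
    author_trust: float   # 0.0 (new/suspicious account) .. 1.0 (long-standing)
    bot_score: float      # 0.0 .. 1.0, confidence from the automated flagger

REVIEW_MINUTES_PER_POST = 10

def triage(queue: list[FlaggedPost], review_hours: float) -> dict[int, str]:
    """Assign an action to every flagged post given limited reviewer time."""
    budget = int(review_hours * 60 // REVIEW_MINUTES_PER_POST)
    # Spend the careful-review budget on the highest-confidence flags first.
    queue = sorted(queue, key=lambda p: p.bot_score, reverse=True)
    actions: dict[int, str] = {}
    for i, post in enumerate(queue):
        if i < budget:
            actions[post.post_id] = "human_review"     # the judicious path
        elif post.author_trust < 0.3:
            actions[post.post_id] = "ban_and_remove"   # swing the hammer
        else:
            actions[post.post_id] = "auto_remove"      # heuristic shortcut
    return actions

# The tighter review_hours gets, the larger the share of posts decided
# purely by the trust heuristic rather than by a human reading them.
print(triage([FlaggedPost(1, 0.1, 0.9), FlaggedPost(2, 0.8, 0.4)], review_hours=0.1))
```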

Another problem I see is when there's no set of shared facts that people can agree on. This was clear in the video above where they were talking about Elon Musk letting Trump back onto Twitter. Aaron seemed to think that it was an objective fact that Trump was banned for supporting the Jan 6 riot, but the sources of info I follow told me quite the opposite, that Trump's last tweet had urged the rioters to go home in peace, and that his ban was entirely political. What this leads to is that even with objective and clear terms of service, a given statement could be either a truth bomb or misinfo, depending entirely on what facts (and sources of facts) the person in authority believes to be true. In meatspace this is solved through due process, but in online space there's no time for such a cumbersome legal process to play out, and so it instead falls into the post-modern thing, of it being all about power and who writes the history books.

Anyways, this video was only an hour and most of it was disagreeing over QAnon. I would have liked to hear them go into more detail about the three types of moderation, and about the 'red queen' thing that Aaron mentioned at the beginning, where trolls deliberately test the boundaries of the rules. Each of those would have been worth a couple hours of discussion if they had gotten around to them.
 

Wallachia

Member
Joined
Feb 27, 2022
Interesting. In my 15 years of experience moderating online forums, I focused the most on Decorum Moderation, because Content Moderation was always bland and had no mystery (if it violates the rules/laws, it must be removed, simple as that). Yet I always avoided Viewpoint Moderation, because I believe that even dumb statements must be acceptable, unless they lead to or incite violations of the other two points.

In my experience, a memeplex should never be a problem, because it falls into the third category, unless it's posted excessively around the place, in which case it becomes flooding and falls within Decorum Moderation.

When it comes to platforms with too much content needing moderation, I've been there, and there are two possible options:

1. Fully delete violating content posted prior to the current moderator assuming their post, in the sense of "all's forgiven so far, but that will no longer be the case". I did that in all cases like this where I was recruited to help. It tends to provoke a rebellious and even verbally violent reaction from users, as they assume that the unmoderated place is part of their personal identity and see it as a personal offense to be told what to do.

2. Halt the entire activity, that is, freeze the place temporarily while you review everything. This will take longer, indeed, but it will also be more efficient and prompt fewer violent reactions, because some people go "if he did that and nothing happened, so can I". When you start handing out sanctions, bothered users will hurriedly come to your defense because they feel like you are fixing the place. I did this on the Paladins forums when HiRez released the ever-controversial OB64 patch and the forum was taken over by a virtual riot. In this case, the best course of action, I found, is to do retroactive moderation: analyze content from before your active time (to a limit of, say, 7 days) and give harsher sanctions for Content Moderation violations and lighter sanctions for Decorum Moderation violations, fully ignoring Viewpoint Moderation; a rough sketch of that sweep is below.
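If I were to write that sweep down as code, it would look roughly like this (a minimal sketch only; the category names, sanction lengths, and the classify step are placeholders of mine, not what any forum software actually does):

```python
# Minimal sketch of a retroactive moderation sweep. Violation categories,
# sanction lengths, and the classifier are placeholders for illustration.

from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(days=7)    # only look back this far

# Tiered sanctions: content violations hit hardest, decorum lighter,
# viewpoint deliberately ignored.
SANCTIONS = {
    "content": "30-day ban + removal",
    "decorum": "warning + 24-hour mute",
    "viewpoint": None,               # no action taken
}

def classify(post_text: str) -> str:
    """Placeholder: a real forum would rely on its own rules and reports here."""
    if "bomb-making" in post_text.lower():
        return "content"
    if "you idiot" in post_text.lower():
        return "decorum"
    return "viewpoint"

def retroactive_sweep(posts, now=None):
    """posts: iterable of (author, text, posted_at) tuples."""
    now = now or datetime.now()
    outcomes = []
    for author, text, posted_at in posts:
        if now - posted_at > REVIEW_WINDOW:
            continue                 # older content stays under "all's forgiven"
        sanction = SANCTIONS[classify(text)]
        if sanction:
            outcomes.append((author, sanction))
    return outcomes

# Example: a recent decorum violation gets a light sanction,
# while an older post falls outside the 7-day window and is ignored.
posts = [
    ("alice", "you idiot", datetime.now() - timedelta(days=2)),
    ("bob", "bomb-making guide", datetime.now() - timedelta(days=30)),
]
print(retroactive_sweep(posts))      # [('alice', 'warning + 24-hour mute')]
```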

Political bans are problematic. The problem with moderators is that they always have higher-ups, because big places can't and must never be moderated by the site owners themselves, yet sometimes the owners will have a WTF wish that the moderators must carry out, and that decision will cause backlash not against the owner but against the moderator, who will only be able to say "orders from above", if even that much.
 