Musk’s ‘Twitter Files’ offer a glimpse into the raw, complicated, and thankless job of moderation

by Ana Lopez

Twitter’s new owner, Elon Musk, is feverishly promoting his “Twitter Files”: select internal company communications, laboriously tweeted out by sympathetic amanuenses. But Musk’s evident conviction that he has released some partisan kraken is mistaken: far from revealing conspiracy or systemic abuse, the files are a valuable peek behind the curtain of moderation at scale, hinting at the Sisyphean labor undertaken by every social media platform.

For a decade, companies like Twitter, YouTube, and Facebook have performed an elaborate dance to keep the details of their moderation processes out of the reach of bad actors, regulators, and the press.

Revealing too much would expose the processes to abuse by spammers and scammers (who do indeed take advantage of every leaked or published detail), while revealing too little leads to damaging reports and rumors as the companies lose control of the story. Meanwhile, they must be prepared to justify and document their methods or risk censure and fines from government bodies.

The result is that while everyone knows a little about how these companies inspect, filter, and rank the content posted on their platforms, it’s just enough to be sure that what we’re seeing is only the tip of the iceberg.

Sometimes there are revelations of methods we suspected: contractors clicking through violent and sexual imagery by the hour, a repugnant but apparently necessary industry. Sometimes the companies overplay their hands, as with repeated claims about how AI is revolutionizing moderation, followed by reports that AI systems are inscrutable and unreliable for the purpose.

What almost never happens (companies generally don’t do this unless they’re forced to) is for the actual tools and processes of content moderation at scale to be exposed without a filter. And that’s what Musk has done, perhaps to his peril, but certainly to the great interest of anyone who has ever wondered what moderators actually do, say, and click as they make decisions that may affect millions.

Pay no attention to the honest, complex conversation behind the curtain

The email chains, Slack conversations, and screenshots (or rather, pictures of screens) released over the past week give a glimpse of this important and poorly understood process. What we see is a bit of the raw material, and it isn’t the partisan illuminati some were expecting, though it’s clear from the highly selective presentation that this is what we’re meant to perceive.


On the contrary, those involved are by turns cautious and confident, practical and philosophical, forthright and accommodating, showing that the decision to restrict or ban is not made arbitrarily but according to an evolving consensus of opposing viewpoints.

In the run-up to the decision to temporarily restrict the Hunter Biden laptop story (probably the most contentious moderation decision of recent years, behind the banning of Trump), there is neither the partisanship nor the conspiracy insinuated by the documents’ bombshell packaging.

Instead, we find serious, thoughtful people trying to reconcile conflicting and inadequate definitions and policies: What counts as “hacked” material? How confident are we in this or that assessment? What is a proportionate response? How should we communicate it, to whom, and when? What are the consequences if we restrict, and if we don’t? What precedents are we setting or breaking?

The answers to these questions are not at all obvious, and are the kind of thing usually hashed out over months of research and discussion, or even in court (legal precedents affect legal language and repercussions). And here they had to be arrived at quickly, before the situation got out of hand one way or another. Dissent from inside and outside (from a sitting US representative, no less; ironically, doxxed in the thread along with Jack Dorsey in violation of the very same policy) was considered and honestly incorporated.

“This is an emerging situation where the facts remain unclear,” said former Trust and Safety chief Yoel Roth. “We’re erring on the side of including a warning and preventing this content from being amplified.”

Some question the decision. Some question the facts as presented. Others say it is not supported by their reading of the policy. One says they should make the ad hoc basis and scope of the action very clear, since it will obviously be scrutinized as partisan. Deputy General Counsel Jim Baker calls for more information but says caution is warranted. There is no clear precedent; the facts are at this point absent or unverified; some of the material is plainly non-consensual nudity.


“I think Twitter itself should limit what it recommends or puts in trending news, and your policies against QAnon groups are totally fine,” concedes Rep. Ro Khanna, while also arguing that the action in question was a step too far. “It’s a hard balance.”

These conversations are not ordinarily visible to the public or the press, and the truth is that we are just as curious, and largely as much in the dark, as our readers. It would be wrong to call the published materials a complete or even representative picture of the whole process (they are blatantly, if ineffectively, cherry-picked and framed to fit a narrative), but even as they are, we are better informed than before.

Tools of the trade

Even more directly revealing was the next thread, which contained screenshots of the actual moderation tools used by Twitter employees. While the thread disingenuously attempts to equate the use of these tools with shadow banning, the screenshots show no nefarious activity, nor do they need to in order to be interesting.

Image Credits: Twitter

On the contrary, what is shown is compelling precisely because it is so prosaic, so blandly systematic. Here are the various techniques that all social media companies have explained time and again that they use, but which were previously couched in the cheery diplomatic cant of PR; now they are presented without comment: ‘Trends Blacklist’, ‘High Profile’, ‘DO NOT TAKE ACTION’ and the rest.
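To make concrete how prosaic such flags are, here is a minimal sketch of how per-account visibility flags of this kind might be represented and applied. Every name and rule below is an illustrative assumption, not Twitter’s actual implementation.

```python
# Hypothetical sketch: per-account visibility flags like those named in the
# screenshots ("Trends Blacklist", "DO NOT TAKE ACTION"), applied when content
# is ranked for different surfaces. All names and logic are assumptions.
from dataclasses import dataclass

@dataclass
class AccountFlags:
    trends_blacklist: bool = False  # exclude the account's tweets from trends
    search_blacklist: bool = False  # exclude the account from search suggestions
    do_not_amplify: bool = False    # never boost via algorithmic recommendations
    high_profile: bool = False      # route any enforcement to senior reviewers

def filtered_surfaces(flags: AccountFlags) -> set[str]:
    """Translate stored flags into the surfaces a tweet is withheld from."""
    filtered = set()
    if flags.trends_blacklist:
        filtered.add("trends")
    if flags.search_blacklist:
        filtered.add("search_suggestions")
    if flags.do_not_amplify:
        filtered.add("recommendations")
    return filtered

# A flagged account's tweets still reach followers, but are withheld from
# trends and recommendation surfaces -- reduced visibility, not removal.
flags = AccountFlags(trends_blacklist=True, do_not_amplify=True)
print(filtered_surfaces(flags))  # {'trends', 'recommendations'}
```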

Meanwhile, Yoel Roth explains that the actions and policies need to be better aligned, that more research is needed, that there are plans to improve:

The hypothesis underlying much of what we’ve implemented is that if exposure to, e.g., misinformation directly causes harm, we should use remediations that reduce exposure, and limiting the spread/virality of content is a good way to do that… we’re going to need to make a more robust case to get this into our repertoire of policy remediations, especially for other policy domains.
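The remediation Roth describes, reducing exposure rather than removing content, is mechanically just down-ranking. A hedged sketch of that idea, with all labels and weights invented for illustration:

```python
# Illustrative sketch only: "reduce exposure" remediation as a ranking
# multiplier rather than removal. The labels and weights are assumptions,
# not values from Twitter's systems.
REMEDIATION_WEIGHTS = {
    "none": 1.0,              # normal distribution
    "warning_label": 0.5,     # still shown, but amplified less
    "no_amplification": 0.0,  # visible to followers, never recommended
}

def ranked_score(base_score: float, remediation: str) -> float:
    """Scale a tweet's recommendation score by its remediation level."""
    return base_score * REMEDIATION_WEIGHTS[remediation]

# A labeled tweet stays on the platform but surfaces half as often in
# recommendations; a fully de-amplified one is simply never suggested.
print(ranked_score(10.0, "warning_label"))     # 5.0
print(ranked_score(10.0, "no_amplification"))  # 0.0
```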

Once again, the content belies the context in which it is presented: these are hardly the deliberations of a secret liberal cabal lashing out at its ideological foes with a banhammer. It’s an enterprise-grade dashboard, the kind you might see for lead tracking, logistics, or accounts, being discussed and iterated on by level-headed people working within practical constraints and aiming to satisfy multiple stakeholders.


True to form, Twitter, like its fellow social media platforms, has been working for years to make the moderation process efficient and systematic enough to function at scale: not only to keep the platform from being overrun by bots and spam, but also to comply with legal frameworks such as FTC orders and the GDPR. (Of which the “expanded, unfiltered access” outsiders were given to the tool pictured may well constitute a breach; the relevant authorities told businessroundups.org they are “engaging” with Twitter on the matter.)

A handful of employees making arbitrary decisions without rubric or oversight is no way to moderate effectively or to meet such legal requirements; neither (as the resignation of several members of Twitter’s Trust & Safety Council attests) is automation. You need a large network of people cooperating and working according to a standardized system, with clear boundaries and escalation procedures. And that is certainly what the screenshots Musk has published appear to show.
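For a sense of what “clear boundaries and escalation procedures” could look like in practice, here is a toy sketch of a tiered routing rule. The tier names and criteria are assumptions for illustration, not Twitter’s documented process.

```python
# Hypothetical sketch of a standardized escalation path: routine reports are
# decided by frontline reviewers, while high-profile accounts or ambiguous
# policy questions are routed upward. Tiers and rules are assumptions.
from enum import Enum

class Tier(Enum):
    FRONTLINE = 1       # contracted reviewers applying written policy
    SENIOR = 2          # staff reviewers for borderline calls
    POLICY_COUNCIL = 3  # trust-and-safety leads plus legal sign-off

def route_case(high_profile: bool, policy_ambiguous: bool) -> Tier:
    """Pick the lowest tier permitted to decide a case under the assumed rules."""
    if high_profile:
        return Tier.POLICY_COUNCIL
    if policy_ambiguous:
        return Tier.SENIOR
    return Tier.FRONTLINE

# A politician's account under an unclear policy goes to the top of the chain;
# run-of-the-mill spam never leaves the frontline queue.
print(route_case(high_profile=True, policy_ambiguous=True))    # Tier.POLICY_COUNCIL
print(route_case(high_profile=False, policy_ambiguous=False))  # Tier.FRONTLINE
```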

What the documents fail to show is any kind of systematic bias, which Musk’s stand-ins insinuate but fail to substantiate. But whether or not it fits the narrative they want, what has been published is of interest to anyone who thinks these companies ought to be more forthcoming about their policies. That’s a win for transparency, even if Musk’s opaque approach achieves it more or less by accident.

