
Steps towards a safer fediverse (DRAFT)

DRAFT! Work in Progress!

Feedback welcome on infosec.exchange or Lemmy.
Part 5 of  A golden opportunity for the fediverse -- or whatever comes next.  Earlier posts in the series include Mastodon and today’s fediverse are unsafe by design and unsafe by default; Blocklists in the fediverse; It’s possible to talk about The Bad Space without being racist or anti-trans – but it’s not as easy as it sounds; and Compare and contrast: Fediseer, FIRES, and The Bad Space
"Even though millions of people left Twitter in 2023 – and millions more are ready to move as soon as there's a viable alternative – the fediverse isn't growing.1 One reason why: today's fediverse is unsafe by design and unsafe by default – especially for Black and Indigenous people, women of color, LGBTQIA2S+ people2, Muslims, disabled people and other marginalized communities.  ‌"

Mastodon and today’s fediverse are unsafe by design and unsafe by default

As EFF says, The Fediverse Could Be Awesome (If We Don’t Screw It Up).  Big centralized social networks with their surveillance capitalism business models keep getting worse and worse, and the fediverse's decentralized model – with thousands of instances, running any of dozens of somewhat-compatible software platforms – is a great opportunity to take a different approach. 2023 featured a wave of grassroots innovation in the fediverse, and with corporations like WordPress, Flipboard, Medium, Vivaldi, and Mozilla starting to invest (as is Facebook's parent company Meta) there's a golden opportunity here ...

If the fediverse can make progress on long-standing issues like safety.  

It seems pretty obvious to me that most people would rather be on instances where they're less exposed to nazis, white supremacists, anti-LGBTQIA2S+ bigots and harassers.  And yet, that's not the default in today's fediverse.

And while there are quite a few fediverse instances with active and skilled moderators and admins that are relatively safe – safer in many ways than Facebook or Twitter – that's still far from the norm.

More positively, though, this highlights some straightforward opportunities for significant short-term safety improvements:

  • Steer people to instances that are well-moderated and have functionality like local-only posts that give people more protection from nazis, white supremacists, anti-LGBTQIA2S+ bigots and harassers on other instances
  • Improve moderation on instances that aren't as well moderated but want to be
  • Make it easy for admins setting up new instances to start out with an instance blocklist that prevents hate speech and harassment from known bad actors – and to provide functionality like local-only posts
  • App support for blocklists – for example, embedding a "worst-of-the-worst" blocklist (expanding on the policy several apps adopted in 2019 of blocking the white supremacist instance Gab), and allowing individuals to upload their own blocklists (functionality that web UIs like Mastodon and Misskey already support); see the sketch after this list
  • Implement local-only posts and instance-level blocking on platforms like Lemmy that don't currently have them.
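
For a sense of the plumbing involved in embedding or importing blocklists, here's a minimal sketch (in Python) of merging several "worst-of-the-worst" lists into one importable file. It assumes Mastodon's CSV export format with "#domain" and "#severity" columns; the severity ordering and file handling are illustrative assumptions, not any app's actual implementation.

```python
import csv
import sys

# Assumed severity ordering: "suspend" (defederate) is stricter than
# "silence" (limit), which is stricter than "noop".
SEVERITY_RANK = {"noop": 0, "silence": 1, "suspend": 2}

def load_blocklist(path):
    """Read one Mastodon-style blocklist CSV into {domain: severity}."""
    blocks = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            domain = (row.get("#domain") or "").strip().lower()
            severity = (row.get("#severity") or "suspend").strip() or "suspend"
            if domain:
                blocks[domain] = severity
    return blocks

def merge_blocklists(paths):
    """Merge several lists; when they disagree, keep the stricter severity."""
    merged = {}
    for path in paths:
        for domain, severity in load_blocklist(path).items():
            current_rank = SEVERITY_RANK.get(merged.get(domain), -1)
            if SEVERITY_RANK.get(severity, 0) >= current_rank:
                merged[domain] = severity
    return merged

if __name__ == "__main__":
    # Usage: python merge_blocklists.py list1.csv list2.csv > merged.csv
    writer = csv.writer(sys.stdout)
    writer.writerow(["#domain", "#severity"])
    for domain, severity in sorted(merge_blocklists(sys.argv[1:]).items()):
        writer.writerow([domain, severity])
```

Of course, real tooling would also need to handle public comments, obfuscated entries, and retractions – and as earlier posts in this series discussed, the hard part is governance of what goes on the list, not the code.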

But this is only the start of what's needed to change the dynamic more completely. For that to happen, fediverse "influencers," admins, developers, businesses, and funders will need to start prioritizing investing in safety.

It's about people, not just software

At least in the short term, today's fediverse is likely to continue to rely on software platforms that weren't designed with safety in mind, and the useful-but-very-imperfect tools I discussed in earlier parts of this series: instance blocking and blocklists, augmented by instance catalogs like The Bad Space and Fediseer. We certainly need better tools for people to protect themselves, and I'll discuss various approaches the software could evolve to provide more safety below, but first I want to highlight that it's not just about the software.

One of the distinctive features of the fediverse's architecture (as opposed to pure peer-to-peer networks) is the role of instances. From a safety perspective, this means that instance admins and moderators can make a huge difference.  But effective anti-racist and intersectional moderation is hard!  And as the IFTAS Fediverse Moderator Needs Assessment results highlight, most moderators today don't feel like they have the tools and resources they need.

So this is an area where investment can pay a lot of dividends.  A few specific suggestions:

  • Training and mentoring, including dealing with different aspects of intersectional moderation.
  • Sharing and distilling "positive deviance" examples of instances that do a good job
  • Documentation of best practices, including templates for policies and process
  • Cross-instance teams of expert moderators who can provide help in tricky situations
  • Workshops, conferences, and ongoing discussions between moderators, software developers, and community members

Resources developed and delivered with funded involvement of multiply-marginalized people who are the targets of so much of this harassment today are likely to be the most effective.

It's also about the software

"There’s a lot more that can be done to counter harassment, Nazism, racism, sexism, transphobia, and other hate online. Mastodon’s current functionality only scratches the surface of what’s possible — and has generally been introduced in reaction to events in the network."

Lessons (so far) from Mastodon, 2017-8
"With the surging popularity of federating tools, how do we make it easier to make safety the default?"

– Roland X. Pulliam, FSEP Product Requirements Document, August 2023

Developers, instance admins, and hosting companies can also play a big role. People who are targets of harassment are clear about the kind of functionality they want: local-only posts, the ability to control who can reply to posts, and other finer-grained controls over visibility and interaction. Fediverse software platforms like GoToSocial, Bonfire, Akkoma, and Streams already provide these tools. Mastodon forks like Glitch and Hometown at least provide local-only posts, which makes a big difference.
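
To make "local-only" concrete at the software level: a local-only post is simply one that never enters the queue of activities delivered to other servers. Here's a minimal sketch of that check (the Post type and flag name are illustrative assumptions, not Glitch's or Hometown's actual code):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    local_only: bool = False  # Glitch/Hometown-style flag (name is illustrative)

def outbound_inboxes(post: Post, remote_inboxes: list[str]) -> list[str]:
    """Return the remote inboxes a post should be delivered to.
    Local-only posts are never federated, so nothing leaves the instance."""
    if post.local_only:
        return []
    return remote_inboxes
```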

Broader adoption of software that gives people better tools to protect themselves could have an impact today, as could hosted offerings that combine more-safety-oriented software with basic blocklists and privacy-friendly default settings that are known to reduce the risk of harassment and hate speech. People volunteering for platforms that don't have this functionality yet should encourage the developers to implement it, quickly – or shift their efforts to forks and platforms that do prioritize safety.  Funders should follow suit.

A complementary approach, also worth pursuing (and funding!), is to investigate how tools from other platforms like Block Party and FilterBuddy, which allow for collaborative defense against harassment and toxic content, can apply in a federated context. This work is also likely to be relevant (perhaps with modifications) to Bluesky-based networks. And as both Block Party and FilterBuddy highlight, tools designed and implemented by (and working with) marginalized people who are the targets of so much of this harassment today are likely to be the most effective.

Revisiting some core assumptions

"[C]ommitments to safer spaces in some ways run counter to certain interpretations and norms of openness on which open source rests."‌
‌– Christina Dunbar-Hester, in Hacking Diversity

Just as important, though, the fediverse will need to revisit some core assumptions, including shifting to a consent-based model and a less-absolutist definition of "open" that prioritizes safety over connection and reach.  

For example, as Anil Dash has pointed out, consent is a key (often unstated) value in the fediverse – and an equally important component of safety.  As I said in Focus on consent (including consent-based federation), privacy, and safety:

Even if you're not an expert on online privacy and safety, which sounds better to you: "Nazis and terfs can't communicate with me unless I give my permission" or "Nazis and terfs can harass me and see my followers-only posts until I realize it's happening and say no"?

Unfortunately, today's fediverse's commitment to consent is intermittent. Most fediverse software today accepts all requests for federation unless the instance is explicitly blocked. This isn't affirmative consent – and it opens the door to various ways harassers, nazis, and terfs can subvert instance-level blocking.  But Mastodon's documentation describes the alternative of consent-based allow-list federation as "contrary to Mastodon’s mission."  Similarly, Mastodon (and most other fediverse software) approves follower requests by default without asking for consent.

Fortunately, even with today's software more intentional approaches are possible. Mastodon has an option to approve follower requests; it would be a one-line change to make that the default. And many fediverse platforms support consent-based approaches to federation. PeerTube, for example, has an option for manual approval of federation requests. Akkoma, GoToSocial, and even Mastodon (despite its philosophical objections) all support "allow-list" federation. Bonfire has a sophisticated system of circles and boundaries.  PixelFed recently added the ability to make federation with Threads opt-in for individual users.
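
To see how consent-based approaches invert today's default, here's a minimal sketch contrasting deny-list federation with allow-list federation (the function and set names are hypothetical, not any platform's actual API):

```python
BLOCKED_DOMAINS = {"gab.com"}            # deny-list: explicitly blocked instances
ALLOWED_DOMAINS = {"friendly.example"}   # allow-list: explicitly trusted instances

def deny_list_policy(domain: str) -> bool:
    """Today's default: federate with everyone who isn't explicitly blocked."""
    return domain not in BLOCKED_DOMAINS

def allow_list_policy(domain: str) -> bool:
    """Consent-based alternative: federate only with explicitly approved peers."""
    return domain in ALLOWED_DOMAINS

# A brand-new, unvetted instance gets in by default under the deny-list model,
# but needs affirmative consent under the allow-list model:
print(deny_list_policy("unknown.example"))   # True
print(allow_list_policy("unknown.example"))  # False
```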

A purely allow-list system might well be cumbersome, but there are intriguing ideas for other approaches that also invert today's fediverse's core assumption that harassers, nazis, terfs, and anybody else who wants should be able to spew hate at you unless you block them.  For example:

Design from the margins – and fund it!

"We need to acknowledge that there is a history on Mastodon of instances of color being marginalized, being discriminated against. There is a history users of color being subject to racist, to anti-Semitic, to homophobic kinds of abuse. There is a history of the kinds of similar kinds of violence against users of color, against disabled users, against marginalized users on Mastodon that there is on Twitter ..."

– Dr. Johnathan Flowers, The Whiteness of Mastodon (December 2022)
"[D]espite all the major contributions they’ve made, queer, trans, and non-binary people of all colors have also been marginalized in Mastodon."

A (partial) queer, trans, and non-binary history of Mastodon and the fediverse, June 2023
"The decentered include subpopulations who are the most impacted and least supported; they are often those that face highest marginalization in society... when your most at-risk and disenfranchised are covered by your product, we are all covered."

– Afsaneh Rigot, Design From the Margins, 2022

It's worth reemphasizing a point I've touched on a few times already: the single most important way for the fediverse to move forward is to fund more work by and with people from decentered communities.  As LeslieMac said back in 2017 after Twitter introduced some bone-headed feature that led to increased harassment, "literally 10 (paid) Black Women with > 5K followers would head this crap off at the pass."  

And, yes, I'm bringing up funding repeatedly. It's important!  I don't mean to minimize the importance of volunteers, who may well wind up doing the bulk of the work here: as moderators, as members of open-source software projects. People want to be safer online, and want to be able to invite their friends and relatives to communities where they won't be exposed to hate speech and harassment, so many people will help out in various ways. That's good!  Still, paid positions and project-based funding are important as well.  Unless people are paid for their work, participation is restricted to those who can afford to volunteer.  

Where will the money come from?  As well as crowdfunding and civil society organizations (the primary funding mechanisms for today's fediverse), businesses looking at the fediverse are an obvious source.  Larger corporations such as WordPress, Flipboard, Vivaldi, and Medium looking at the business opportunities of providing infrastructure, apps, hosting, or services to the fediverse are much more likely to be successful if the fediverse is safer – and so are startups. Media organizations considering the fediverse, and progressive and social justice organizers looking for alternatives now that Twitter's turned into a machine for fascism, have smaller budgets but just as much interest in improvement.

So if you're somebody from a tech or media company looking at the fediverse, a foundation or non-profit concerned about disinformation or corporate control of media, a progressive or racial justice organization hoping to build a counter to fascist-controlled social networks like Xitter, or an affluent techie feeling guilty about having made your money from surveillance capitalism ... now's a good time to invest.

Notes

1 According to Fediverse Observer, the number of monthly active fediverse users decreased by 7% over the course of 2023. According to fedidb.org, monthly active users decreased by 30% over the course of the year.

2 I'm using LGBTQIA2S+ as a shorthand for lesbian, gay, gender non-conforming, genderqueer, bi, trans, queer, intersex, asexual, agender, two-spirit, and others who are not straight, cis, and heteronormative. Julia Serano's trans, gender, sexuality, and activism glossary has definitions for most of these terms, and discusses the tensions between ever-growing and always incomplete acronyms and more abstract terms like "gender and sexual minorities". OACAS Library Guides' Two-spirit identities page goes into more detail on this often-overlooked intersectional aspect of non-cis identity.