
Steps towards a safer fediverse

Part 5 of "Golden opportunities for the fediverse – and whatever comes next."

[Image: a road sign in the shape of an arrow with the word Safety]
Join the discussion on infosec.exchange or Lemmy.


Earlier posts in the series: Mastodon and today’s fediverse are unsafe by design and unsafe by default, Blocklists in the fediverse, It’s possible to talk about The Bad Space without being racist or anti-trans – but it’s not as easy as it sounds, and Compare and contrast: Fediseer, FIRES, and The Bad Space

Intro

"This goes against everything the Fediverse stands for."‌

– a comment on one of the proposals in the draft version of this article in the Lemmy discussion

As EFF says, The Fediverse Could Be Awesome (If We Don’t Screw It Up).  Big centralized social networks with their surveillance capitalism business models keep getting worse and worse. The fediverse's decentralized architecture – with thousands of instances (aka servers) running any of dozens of somewhat-compatible software platforms – is a great opportunity to take a different approach.

But today's fediverse is not yet awesome.  Also, for the last year it's been shrinking, rather than growing.1 As I said in Mastodon and today’s fediverse are unsafe by design and unsafe by default:

"One reason why: today's fediverse is unsafe by design and unsafe by default – especially for Black and Indigenous people, women of color, LGBTQIA2S+ people2, Muslims, disabled people and other marginalized communities."

While there are quite a few fediverse instances with active and skilled moderators and admins that are relatively safe – safer in many ways than Facebook or Twitter – that's still far from the norm.  Hundreds, maybe thousands, of instances aren't actively moderated; as the mid-February 2024 spam attack illustrated, this causes huge problems for people and moderators on other instances as well – and the same techniques that spammers use can also be used for harassment and disinformation.  And even on instances with active moderation, the tools don't make it easy for moderators to do their job ... and many don't have a lot of experience with anti-racist moderation or dealing with disinformation.

The good news is that there are some straightforward opportunities for significant short-term safety improvements. If fediverse funders, developers, businesses, and "influencers" start prioritizing investment in safety, the fediverse can turn what's currently a big weakness into a huge strategic advantage. After all, today's big centralized social networks aren't very safe either – especially for Black and Indigenous people, women of color, LGBTQIA2S+ people, Muslims, disabled people and other marginalized communities.

Then again, it might not happen. The largest software platforms in today's fediverse haven't prioritized safety in the past; neither has the ActivityPub protocol. And as I'll discuss in an upcoming installment in this series, today's fediverse will have to revisit some core assumptions to make progress. So it's possible that the fediverse will collectively shrug its shoulders and squander this opportunity. There's a reason this series is called "Golden opportunities for the fediverse – and whatever comes next." So we shall see.  

But at least for this post, let's make the leap of faith that there's enough of a critical mass of people and organizations that want a safer fediverse, and look at steps towards making that happen.

It's about people, not just the software and protocol

As I'll discuss below, today's fediverse software certainly needs improvement, and so does the ActivityPub protocol. But let's start with something that's even more important.

One of the distinctive features of the fediverse's architecture (as opposed to pure peer-to-peer networks) is the role of instances. Some instances are much safer than others, and admins and moderators can play a big role in making their instances safer. But effective anti-racist and intersectional moderation is hard! So is dealing with disinformation, harassment, and cultural conflicts. And as the IFTAS Fediverse Moderator Needs Assessment Results highlights, most moderators today don't feel like they have the tools and resources they need.

So this is an area where investment can pay a lot of dividends. A few specific suggestions:

  • Steering people to instances that are well-moderated and have functionality like local-only posts that gives people more protection from nazis, white supremacists, anti-LGBTQIA2S+ bigots, and harassers on other instances
  • A fediverse-wide incident response team and communication channels to deal with situations like the spam attack.  With instance blocklists, tools like Fediseer, and configuration settings, there's plenty of useful technology here; the challenge is getting it to the people and instances that need it.
  • Making it easy for admins setting up new instances to choose from one of a list (configurable by hosting providers) of initial instance blocklists that prevent hate speech and harassment from known bad actors3 – and to provide functionality like local-only posts.
  • Improving moderation on instances that aren't as well moderated but want to be, using techniques like
    • Training and mentoring, including dealing with different aspects of intersectional moderation.
    • Documentation of best practices, including templates for policies and process, and sharing and distilling "positive deviance" examples of instances that do a good job
    • Cross-instance teams of expert moderators who can provide help in tricky situations
    • Workshops, conferences, and ongoing discussions between moderators, software developers, and community members

Similarly, providing tutorials and documentation for people to protect their privacy on the fediverse can also bring immediate safety benefits. Defaults on most fediverse software are not privacy-friendly, and settings are not always easy to find – for example, the Mastodon mobile app makes it really hard to enable approval for new followers (which is necessary to prevent harassing replies to followers-only posts).4

Resources developed and delivered with funded involvement of multiply-marginalized people who are the targets of so much of this harassment today are likely to be the most effective.

It's also about the software

"There’s a lot more that can be done to counter harassment, Nazism, racism, sexism, transphobia, and other hate online. Mastodon’s current functionality only scratches the surface of what’s possible — and has generally been introduced in reaction to events in the network."

– Lessons (so far) from Mastodon, 2017-8
"With the surging popularity of federating tools, how do we make it easier to make safety the default?"

– Roland X. Pulliam, Federation Safety Enhancement Project (FSEP) Product Requirements Document, August 2023

Developers, instance admins, and hosting companies can also play a big role. People who are targets of harassment are clear about the kind of functionality they want: local-only posts,5 the ability to control who can reply to posts,6 and other finer-grained controls over visibility and interaction. Fediverse software platforms like Pixelfed, Bonfire, GoToSocial, Akkoma, Friendica, Hubzilla, and (streams) already provide some or all of these tools; so do Mastodon forks like Glitch, Hometown, and Fedibird.

Broader adoption of software that gives people better tools to protect themselves could have an impact today, as could hosted offerings that combine more-safety-oriented software with privacy-friendly default settings that are known to reduce risks of harassment and hate speech. Implementing this functionality on platforms like Mastodon and Lemmy that don't currently have it is a basic first step. People volunteering for platforms that don't have this functionality yet should encourage the developers to implement it, quickly – or shift their efforts to forks and platforms that do prioritize safety. Funders should follow suit.

At least in the short term, the fediverse is also likely to continue to rely on the useful-but-very-imperfect tools of  instance blocking and blocklists, augmented by instance catalogs like The Bad Space and Fediseer.  There's a fair amount of innovation already happening in this area, with infrastructure like FIRES, CARIAD, and Fedimod all under active development. Steps towards better blocklists discusses further incremental improvements on this front. Pixelfed's Autospam highlights the potential value of a straightforward Naive Bayes approach. The retrospectives after the February spam wave are already highlighting other opportunities for improvement.
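To make the Naive Bayes idea concrete, here's a minimal sketch of that kind of classifier built with scikit-learn. It is not Pixelfed's Autospam code (Pixelfed is written in PHP); the tiny training set below is made up, and a real deployment would train on a much larger labeled corpus.

```python
# Minimal sketch of Naive Bayes spam filtering, in the spirit of (but not
# taken from) Pixelfed's Autospam. Requires scikit-learn; the training
# examples here are purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

training_posts = [
    "Check out this crypto giveaway, click now!!!",    # spam
    "Free followers, just visit this link",            # spam
    "Had a great hike this weekend, photos soon",      # not spam
    "New blog post about fediverse moderation tools",  # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features feeding a multinomial Naive Bayes classifier.
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(training_posts, labels)

print(classifier.predict(["click this link for free followers"]))  # expect ['spam']
```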

Longer-term, fediverse software will need more fundamental changes, including paying more attention to consent.  As I said in The free fediverses should focus on consent (including consent-based federation), privacy, and safety:

"There aren't yet a lot of good tools to make consent-based federation convenient scalable, but that's starting to change. Instance catalogs like The Bad Space and Fediseer, and emerging projects like the FIRES recommendation system. FSEP's design for  an"approve followers" tool, could also easily be adapted for approving federation requests. ActivityPub spec co-author Erin Shepherd's suggestion of "letters of introduction", or something along the lines of the IndieWeb Vouch protocol, could also work well at the federation level.  Db0's Can we improve the Fediverse Allow-List Model? and the the "fedifams" and caracoles I discuss in The free fediverses should support concentric federations of instances could help with scalability and making it easier for new instances to plug into a consent-based network."

As I'll discuss in the next installment in the series, this will require revisiting some core assumptions – especially for Mastodon, whose documentation describes consent-based federation as contrary to its mission. What's striking, though, is how much incremental progress can be made even with existing code bases. PeerTube and Pixelfed, for example, have key building blocks for making consent-based federation choices more scalable and usable, including manual approval of federation requests, triggers and custom rules which allow for partial automation, plugins, and opt-in federation with Threads for individual users.  Akkoma, GoToSocial, and even Mastodon (despite its philosophical objections) all support "allow-list" federation.  The FSEP proposal sketches how instance catalogs like The Bad Space can be leveraged to provide more informed consent for approving followers, an approach that can also be applied to other aspects of consent.
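As a rough illustration of how these building blocks could fit together, here's a hypothetical sketch of consent-based federation logic that combines an allow-list, an instance-catalog lookup, and manual review. The function names, catalog data, and domains are invented for this example; none of it is any platform's actual code.

```python
# Hypothetical sketch of consent-based ("allow-list") federation decisions.
# The allow-list, catalog lookup, and domains are all made up for illustration.
ALLOWED_INSTANCES = {"friendly.example", "trusted.example"}
PENDING_REVIEW: list[str] = []

def catalog_flags(domain: str) -> list[str]:
    """Stand-in for querying an instance catalog such as The Bad Space or
    Fediseer; a real implementation would call their APIs instead."""
    known_flags = {"harassment.example": ["harassment", "hate speech"]}
    return known_flags.get(domain, [])

def handle_federation_request(domain: str) -> str:
    if domain in ALLOWED_INSTANCES:
        return "accept"            # consent already given by the admins
    if catalog_flags(domain):
        return "reject"            # flagged by a catalog the admins trust
    PENDING_REVIEW.append(domain)  # everything else waits for a human decision
    return "hold for manual review"

print(handle_federation_request("friendly.example"))    # accept
print(handle_federation_request("harassment.example"))  # reject
print(handle_federation_request("new-server.example"))  # hold for manual review
```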

Apps can play an important role as well, as Tapbots' emergency update for Ivory with a custom filter during the recent spam wave illustrates. For example, apps could expand on the policy several apps implemented in 2019 of blocking white supremacist instance Gab by embedding a "worst-of-the-worst" blocklist; allowing individuals to upload their own blocklists (functionality that web UIs like Mastodon and Misskey already support to some extent); and adding support for sharing filters and blocklists.
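Here's a sketch of what the blocklist side of that could look like: an app ships a bundled "worst-of-the-worst" list and merges in one the user uploads. It assumes a simple domain,severity CSV, similar in spirit to the domain-block CSVs Mastodon can import and export; the file names and columns are illustrative rather than any app's real format.

```python
# Sketch of merging a bundled blocklist with a user-uploaded one, assuming
# simple "domain,severity" CSV rows. File names and columns are illustrative.
import csv

def load_blocklist(path: str) -> dict[str, str]:
    blocks: dict[str, str] = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row or row[0].startswith("#"):
                continue  # skip blank lines and comment/header rows
            domain = row[0].strip().lower()
            severity = row[1].strip() if len(row) > 1 else "suspend"
            blocks[domain] = severity
    return blocks

def merge_blocklists(*lists: dict[str, str]) -> dict[str, str]:
    """Later lists win on conflicts, so a user's own choices override defaults."""
    merged: dict[str, str] = {}
    for blocklist in lists:
        merged.update(blocklist)
    return merged

# bundled = load_blocklist("worst_of_the_worst.csv")  # shipped with the app
# personal = load_blocklist("my_blocks.csv")          # uploaded by the user
# active = merge_blocklists(bundled, personal)
```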

A complementary approach, also worth pursuing (and funding!), is to investigate how tools from other platforms like Block Party (and Filter Buddy) that allow for collaborative defense against harassment and toxic content can apply in a federated context – initially as standalone tools if necessary, but ideally integrated into existing apps and web UIs. This work is also likely to be relevant (perhaps with modifications) to Bluesky-based networks. And as both Block Party and Filter Buddy highlight, tools designed and implemented by (and working with) marginalized people who are the targets of so much of this harassment today are likely to be the most effective.

And it's about the protocol, too

"Unfortunately from a security and social threat perspective, the way ActivityPub is currently rolled out is under-prepared to protect its users."

– Christine Lemmer-Webber, in OcapPub: Towards networks of consent
"The basics of ActivityPub are that, to send you something, a person POSTs  a message to your Inbox.

This raises the obvious question: Who can do that? And, well, the default answer is anybody.... this means that we’re dealing with things now with the crude tools of instance blocks or allow list based federation; and these are tools which are necesssary, but not sufficient."

– Erin Alexis Owen Shepherd, in A better moderation system is possible for the social web
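To make Shepherd's point concrete, here's roughly what such a delivery looks like: a minimal sketch with hypothetical URLs, leaving out the HTTP Signatures that real servers attach. The mechanics really are that simple – which is exactly why "who can do that?" matters so much.

```python
# Rough illustration of an ActivityPub delivery: POSTing an activity to the
# recipient's inbox. URLs are hypothetical; real deliveries are also signed
# with HTTP Signatures, which this sketch omits.
import json
import urllib.request

activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://anywhere.example/users/whoever",
    "to": ["https://your.instance.example/users/you"],
    "object": {
        "type": "Note",
        "attributedTo": "https://anywhere.example/users/whoever",
        "content": "An unsolicited mention, reply, or spam run",
    },
}

request = urllib.request.Request(
    "https://your.instance.example/users/you/inbox",
    data=json.dumps(activity).encode("utf-8"),
    headers={"Content-Type": "application/activity+json"},
    method="POST",
)
# urllib.request.urlopen(request)  # the default answer to "who can do that?" is anybody
```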

The ActivityPub protocol that powers today's fediverse is extremely flexible, and the spec's authors – including Lemmer-Webber and Shepherd – deserve a lot of credit for coming up with something that (as Ariadne Conill said back in 2018 in ActivityPub: The “Worse Is Better” Approach to Federated Social Networking) "mostly gets the job done." And Lemmer-Webber, Shepherd, Conill, and others have all described approaches for providing more safety with ActivityPub.

Still, the original ActivityPub spec wasn't designed with a focus on safety or privacy. Indeed, as Hrefna points out, ActivityPub "makes assumptions that are fundamentally opposed to the kinds of protections that people seem to be seeking." Not only that,  some aspects of ActivityPub's design make it challenging to write secure implementations; so does its complexity – and insecure software compromises safety as well as privacy.

It's certainly possible that these issues could be addressed with improvements to the ActivityPub protocol, or with an additional protocol layered on top of it. Lisa Dusseault's outstanding Threat Model for Data Portability is a very encouraging sign.   Who knows, perhaps Meta's adoption of ActivityPub – and participation in the standards committee – will lead to an increased focus on safety. But it's also possible that won't happen, and fediverse software will need to shift to more safety-conscious protocols.

Threat modeling and privacy by design can play a big role here!

"Social threat modeling applies structured analysis to threats like harassment, abuse, and disinformation.  You'd think this would happen routinely, but no."

– Social threat modeling and quote boosts on Mastodon

I've discussed threat modeling in several other posts (including Threat modeling Meta, the fediverse, and privacy) so I'm not going to go into detail on it here, but it's a vital technique for developing software that's safer and more secure.  Improving privacy and safety in fediverse software goes into more detail on what a threat modeling project for the fediverse could look like.

"Mastodon ... violates pretty much all of the seven principles of Privacy by Design. Privacy by default?  End-to-end security?  User-centricity?  Uh, no."

–  Threat modeling Meta, the fediverse, and privacy

And speaking of vital techniques for developing software that's safer and more secure, the principles and practices of privacy by design highlight the opportunities for short-term low-hanging fruit – as well as longer-term directions. Looking at the February spam attack, for example, a privacy-by-default approach would have significantly reduced the opportunities for spammers to set up new accounts ("open registration" isn't privacy by default), limited the ability for new accounts to affect other instances (accepting all federation requests isn't privacy by default), and limited the impact on individual users (getting notifications and direct messages from random accounts isn't privacy by default).
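As a hypothetical sketch of that contrast – the setting names below are invented for illustration, not any platform's real configuration keys – the difference might look something like this:

```python
# Invented setting names, contrasting common current defaults with what
# privacy-by-default would look like for the February-style spam scenario.
COMMON_DEFAULTS = {
    "registrations": "open",               # anyone can create accounts instantly
    "federation": "accept_all",            # federate unless explicitly blocked
    "dms_from_strangers": True,            # notifications and DMs from any account
    "new_followers": "auto_accept",
}

PRIVACY_BY_DEFAULT = {
    "registrations": "approval_required",  # slows down throwaway spam accounts
    "federation": "vetted_or_manual",      # limits new accounts' reach to other instances
    "dms_from_strangers": False,           # limits the impact on individual users
    "new_followers": "approval_required",
}
```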

Design from the margins – and fund it!

"We need to acknowledge that there is a history on Mastodon of instances of color being marginalized, being discriminated against. There is a history users of color being subject to racist, to anti-Semitic, to homophobic kinds of abuse. There is a history of the kinds of similar kinds of violence against users of color, against disabled users, against marginalized users on Mastodon that there is on Twitter ..."

– Dr. Johnathan Flowers, The Whiteness of Mastodon (December 2022)
"[D]espite all the major contributions they’ve made, queer, trans, and non-binary people of all colors have also been marginalized in Mastodon."

– A (partial) queer, trans, and non-binary history of Mastodon and the fediverse, June 2023
"The decentered include subpopulations who are the most impacted and least supported; they are often those that face highest marginalization in society... when your most at-risk and disenfranchised are covered by your product, we are all covered."

– Afsaneh Rigot, Design From the Margins, 2022

It's worth reemphasizing a point I've touched on a few times already: the single most important way for the fediverse to move forward is to fund more work by and with people from decentered communities.  As LeslieMac said back in 2017 after Twitter introduced some bone-headed feature that led to increased harassment, "literally 10 (paid) Black Women with > 5K followers would head this crap off at the pass."  

And, yes, I'm bringing up funding repeatedly. It's important!  I don't mean to minimize the importance of volunteers, who may well wind up doing the bulk of the work here: as moderators, as members of open-source software projects. People want to be safer online, and want to be able to invite their friends and relatives to communities where they won't be exposed to hate speech and harassment, so many people will help out in various ways. That's good!  Still, paid positions and project-based funding are important as well.  Unless people are paid for their work, participation is restricted to those who can afford to volunteer.  

Where will the money come from?  As well as crowdfunding and civil society organizations (the primary funding mechanisms for today's fediverse), businesses looking at the fediverse are an obvious source.  Media organizations considering the fediverse, and progressive and social justice organizers looking for alternatives now that Twitter's turned into a machine for fascism, have smaller budgets but just as much interest in improvement.

So if  you're somebody from a tech or media company looking at the fediverse, a foundation or non-profit concerned about disinformation or corporate control of media, a progressive or racial justice organization hoping to build a counter to fascist-controlled social networks like Xitter,  or an affluent techie feeling guilty about having made your money from surveillance capitalism ... now's a good time to invest.

Stay tuned!

In the next installment in the series, I'll discuss some of the core assumptions today's fediverse will need to revisit – and some of the hard problems that need to be solved.  I'll also talk about the opportunities if that happens.  Here's a couple of sneak previews (as always, subject to change as I revise!)


Consent is a key aspect of safety as well as privacy.  As I said in Focus on consent (including consent-based federation), privacy, and safety,

Even if you're not an expert on online privacy and safety, which sounds better to you: "Nazis and terfs can't communicate with me unless I give my permission" or "Nazis and terfs can harass me and see my followers-only posts until I realize it's happening and say no"?

But Mastodon (like most fediverse software today) accepts all requests for federation unless the instance is explicitly blocked, and by default automatically accepts all following requests. This isn't affirmative consent – and it opens the door to various ways harassers, nazis, and terfs can subvert instance level blocking.


From a strategy perspective, the fediverse has some great opportunities here. Many people – especially people of color, trans and queer people, women, and other marginalized people – don't feel safe on today's large social networks. And many people find that, at its best, today's fediverse can be a lot safer than the alternatives.

So if the fediverse gets its collective act together and improves safety significantly (while addressing other problems like usability, accessibility, and whiteness), it's got a huge strategic advantage.

Notes

1 According to Fediverse Observer, the number of monthly active fediverse users decreased by 7% over the course of 2023. According to fedidb.org, monthly active users decreased by 30% over the course of the year.

2 I'm using LGBTQIA2S+ as a shorthand for lesbian, gay, gender non-conforming, genderqueer, bi, trans, queer, intersex, asexual, agender, two-spirit, and others who are not straight, cis, and heteronormative. Julia Serano's trans, gender, sexuality, and activism glossary has definitions for most of these terms, and discusses the tensions between ever-growing and always incomplete acronyms and more abstract terms like "gender and sexual minorities". OACAS Library Guides' Two-spirit identities page goes into more detail on this often-overlooked intersectional aspect of non-cis identity.

3 As Instance-level federation decisions reflect norms, policies, interpretations, and (sometimes) strategy discusses, opinions differ on the definition of "bad actor." So the best approach is probably to present the admin of a new instance with a range of recommendations to choose between, based on their preferences. Software platforms should provide an initial vetted list (along with enough information for a new admin to do something sensible), and hosting companies and third-party recommenders should also be able to provide alternatives.

4 On the Mastodon iOS app, click on the gear icon at the top right to get to Settings. Scroll down to The Boring Zone and click on account settings. Authorize signing in to your Mastodon account, which takes you to the Account settings page. Click on the three bars in the top right of the screen to get to the settings menu; click on Public profile, and then Privacy and reach. Scroll down to the Reach section and unclick the "Automatically accept new followers." Easy peasy!

I get it: the Mastodon team has minimal development resources, so instead of implementing functionality directly in the mobile app it's much more efficient from an engineering perspective to simply add a few more clicks and use the web functionality. But it's still a major hassle for users.

5 I know I sound like a broken record on this, but this valuable privacy and safety functionality has been implemented in Mastodon forks for six years, and Mastodon's BDFL (Benevolent Dictator for Life) Eugen Rochko still refuses to make it broadly available. Does Mastodon really prioritize stopping harassment? has more.

6 Until Glitch-soc maintainer Claire's November 2022 FEP-5624: Per-object reply control policies proposal (or an alternative) is adopted as a de facto standard, there are likely to be compatibility issues related to this, which is annoying, but shouldn't be a barrier to progress. Similar compatibility problems also came up when Mastodon introduced post visibility back in 2016-2017, but since this was functionality that most people wanted, the issues wound up getting resolved.

Image credit

Safety, by Nick Youngson for Alpha Stock Images via Picpedia, licensed under CC BY-SA 3.0

Revision history

Ongoing: typo fixes, wording improvements and clarifications, adding links

February 21: revised version published, no longer a draft!

January 26- February 1: revisions to draft, including responses to feedback from Lemmy discussion

January 25: first published draft, shared to Lemmy