Improving privacy and safety in fediverse software
This page was originally a resource page for an August 2023 submission to the NLNet NGI Zero Entrust "Trustworthiness and data sovereignty" grant program. I submitted at the last moment; it was my first submission, I didn't have any EU people involved, and I didn't have any software projects signed up as partners, so unsurprisingly it didn't get funded. It's still a good idea though!
From a budgeting perspective, threat modeling experts aren't cheap; in the consulting world, $300+ / hour isn't unusual. But the value is huge, and NLNet isn't the only organization with money and an interest in improving fediverse privacy and safety, so hopefully funding for multiple projects like this will emerge. Threat modeling needs to be done from multiple perspectives, so it's crucial that participants and experts include people of color, women, trans and queer people, disabled people, and others whose safety is most at risk – and especially people at the intersections.
This project uses a technique called "threat modeling" to identify ways that current fediverse software leaves people open to harassment, abuse, or data harvesting -- and mitigations that can address these threats. It also includes funding for initial implementation of some high-priority, relatively low-effort mitigations in at least one EU-based fediverse software platform. As background, Social Threat Modeling describes the overall technique; Shireen Mitchell (of Stop Online Violence Against Women) and I also briefly discuss social threat modeling in the 2017 SXSW presentation on Diversity-friendly software. See the last section for additional links.
The project consists of three phases:
- Initial research: discussions identifying and prioritizing threats and potential mitigations with development teams, instance admins, moderators, and communities
- Development of detailed threat models, including a final report with recommendations for software improvements to improve privacy and safety
- Implementation of some high-priority proposed mitigations in at least one EU-based fediverse software platform
In addition to the software improvements and report, deliverable artifacts include the threat models as well as documentation of the process and open-source tools to allow others to refine the models or do their own.
Compare your own project with existing or historical efforts.
Fediverse software development teams, with their limited resources, have not yet used threat modeling techniques to focus on privacy or safety. My work on "Social threat modeling and quote boosts on Mastodon" and "Threat modeling Meta, the fediverse, and privacy" are (as far as I know) the only published fediverse-related threat models. Both of these are narrowly focused, very informal, and did not include any explicit requirements gathering process. That said, they illustrate the value of this approach: identifying both straightforward short-term improvements as well as areas where more research and development is needed.
This project builds on the learnings from that work, involves more stakeholders, and includes a development phase to implement high-priority mitigations. During the requirements stage, for example, we will solicit input (via surveys, interviews, and/or group discussions) from development teams working on software that prioritizes safety, such as Bonfire and GoToSocial; from safety-focused forks of projects that haven't historically prioritized safety; and from nascent projects that have an opportunity to do more things right from the beginning.
A note to potential civil society and government funders: please ensure that your funding focuses on open-source projects; corporations and trade associations can fund the proprietary stuff themselves. In your grant conditions, please specify that proposed solutions cannot require any dependencies on corporate services (even if they are offered for free).
A note to tech and media companies: the fediverse is a huge opportunity, but unless safety issues are addressed the opportunity will be missed. If you've got threat modeling skills in your organization – or budget to fund consultants – it's really worth spending on this.
What are significant technical challenges you expect to solve during the project, if any?
The biggest challenge is that there are likely to be barriers to short-term implementation of some otherwise-attractive potential mitigations, due to design and implementation choices in current code bases or limitations of the underlying ActivityPub protocol. Fortunately, there are also likely to be many potential mitigations that are possible with the current code bases or minimal changes. To identify which opportunities are most relevant for short-term implementation with the development resources allocated to the project, we will partner with an EU-based software project in the implementation phase; the report will identify opportunities for longer-term work that requires more resources.
In addition, threat modeling processes and open-source tools have historically focused on traditional security concerns, so adapting them to these more human-focused threats will require creativity and innovation. A fallback, if that proves too difficult during the timeframe of the project, is to use simpler tools like diagrams and spreadsheets. In any case, the deliverable of documenting the tools and processes will help others build on this work.
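To make the "spreadsheet fallback" concrete, here is a minimal sketch of recording human-focused threats and candidate mitigations as plain structured records that can be filtered and exported as CSV. Everything here – field names, severity/effort labels, and the example entries – is a hypothetical illustration, not part of any existing threat modeling tool; the example threats are loosely based on topics the linked articles discuss (quote boosts, non-consensual data collection).

```python
import csv
from dataclasses import dataclass, asdict


@dataclass
class Threat:
    actor: str        # who carries out the threat (harasser, data harvester, ...)
    target: str       # who is harmed
    vector: str       # how the threat is carried out
    severity: str     # rough impact: "high" / "medium" / "low"
    mitigations: str  # comma-separated candidate mitigations
    effort: str       # rough implementation effort for the mitigations

# Hypothetical example entries for illustration only.
threats = [
    Threat("harasser", "marginalized users",
           "quote boosts used for dogpiling", "high",
           "opt-in quoting, per-post quote controls", "medium"),
    Threat("data harvester", "all users",
           "scraping public posts via federation", "high",
           "authorized fetch, consent-based federation", "high"),
]

# Prioritize high-severity threats whose mitigations aren't high-effort.
shortlist = [t for t in threats if t.severity == "high" and t.effort != "high"]

# Export everything as a plain CSV "spreadsheet" for circulation and comments.
with open("threat-model.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(threats[0]).keys()))
    writer.writeheader()
    for t in threats:
        writer.writerow(asdict(t))
```

Even this simple shape supports the workflow the project needs: stakeholders can add rows during the research phase, and the prioritization step falls out of a filter rather than a specialized tool.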
Describe the ecosystem of the project, and how you will engage with relevant actors and promote the outcomes?
Which actors will you involve? Who will deploy your solution to make it a success?
Key participants in the ecosystem include community members; moderators (who are likely to be the most aware of the overall landscape of harassment); development teams; instance admins; trust and safety-focused organizations such as IFTAS; and government agencies, civil society organizations, media, and businesses investigating or adopting the fediverse.
Engagement will start with discussions on the fediverse, using hashtags, groups, kbin magazines, and Lemmy communities. The research phase adds interviews and group discussion sessions with key stakeholders, as well as surveys and ongoing discussions in the fediverse. Circulating drafts of the threat models and proposed mitigations provides additional opportunities for engagement and visibility.
The project proposal includes funding for software development on at least one EU-based software platform to ensure at least some initial adoption. Just as importantly, this also gives other software platforms an incentive to follow suit by improving their own privacy and safety.
Success for community members means giving them more control over their personal data and reducing the amount of harassment they experience. Achieving this will require development teams to implement mitigations to identified threats; consistent engagement with teams committed to prioritizing privacy and safety increases the likelihood of this happening. In addition, ongoing engagement with moderators, instance admins, community members, and organizations adopting (or investigating) the fediverse is likely to lead to them encouraging development teams to prioritize these improvements.
Have you been involved with projects or organisations relevant to this project before? And if so, can you tell us a bit about your contributions?
Yes. I've done threat modeling work for over two decades, and have been active in the fediverse since 2011, including analyzing and writing about privacy and safety issues.
Most recently, "Threat modeling Meta, the fediverse, and privacy" (still in draft form) looks at potential mitigations to limit the amount of data a specific "threat actor" will be able to collect without consent if Facebook's parent company Meta follows through on its plans for Threads to join the fediverse. The recommendations at the bottom of the article include mitigations for developers and instance admins.
"Social threat modeling and quote boosts on Mastodon" takes a similar approach, looking at mitigations for harassment and abuse; it illustrates how this technique applies to fediverse software, and also contains recommendations for developers.
"Don't tell people "it's easy", and seven more things Kbin, Lemmy, and the fediverse can learn from Mastodon" and "Mastodon: a (partial) history" both discuss areas where current fediverse software needs improvement on privacy and safety, along with other topics.
Previous related work includes the 2007 National Academy of Sciences / CSTB report "Software for Dependable Systems: Sufficient Evidence?", and 2003's "Beyond Stack Smashing" in IEEE Security & Privacy.