Bots already have a strong presence in open source projects, helping contributors with project maintenance, mostly by automating repetitive tasks. See this list of bots in software development.

I think their importance will keep growing. In fact, I believe they could play a critical role in the long-term sustainability of open source projects, which is nowadays one of the major challenges of open source. It is already difficult to convince users of open source to also give a hand to the projects they benefit from (in any shape or form). There is no way they will stay if they have to face a toxic environment that discourages them as soon as they try to participate.

Our main hypothesis (Javi's, Gwendal's and mine) is that we can ensure open source communities remain safe and diverse collaborative environments by using bots as first responders.

Ideally, project owners should make sure discussions in their projects are free of racism, sexism and other toxic behaviors, and conducted in a way that favors diversity of opinions and backgrounds. Unfortunately, this constant manual supervision is not feasible. We think bots could help human moderators and become a valuable tool towards this goal.

In an ideal world this would not be necessary, but open source is not free from the harassment present in many other aspects of our society. As I said above, this scares away potential contributors, putting the sustainability of the projects in danger. This same behavior also affects project diversity, as minorities in open source (e.g., women and members of underrepresented ethnic groups) are often a preferred target of toxic behavior. Less diverse communities result in open source projects that do not represent the society they aim to serve.

Codes of conduct aim to fix this by defining a set of principles and values expected from all community members. But detecting violations and enforcing the code of conduct is still a manual process, and this does not scale: neither across the sheer number of projects nor within projects with large communities. We want contributors to be able to spend their time on the core aspects of the project, not on fighting trolls and blocking haters.

We plan to investigate whether the use of bots trained to:

  1. Detect toxic behavior and
  2. Promote a more diverse participation

could improve this situation. Among other aspects, bots could report policy violations, block toxic comments (e.g., via an analysis of the natural-language text, as this sentiment bot does) or improve the visibility/priority of first-time contributors (e.g., to help make their first contribution attempt a success, which usually maximizes the chance they decide to stick around).
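To make the idea more concrete, here is a minimal sketch (in Python) of the decision logic such a first-responder bot could apply to an incoming GitHub issue comment. All names (handle_comment, toxicity_score, the threshold value, the action strings) are hypothetical, not an existing implementation; only the author_association field comes from GitHub's webhook payload, and the toxicity_score stub is replaced by a real pretrained model in the next sketch.

```python
# Minimal sketch of a "first responder" bot reacting to a new issue comment.
# Function and action names are hypothetical; author_association comes from
# GitHub's issue_comment webhook payload.

TOXICITY_THRESHOLD = 0.8  # arbitrary cut-off, would need tuning and validation


def toxicity_score(text: str) -> float:
    """Placeholder, replaced by a real pretrained model in the sketch below."""
    return 0.0


def is_newcomer(event: dict) -> bool:
    """Heuristic: these association values suggest a first-time participant."""
    return event["comment"]["author_association"] in {
        "FIRST_TIME_CONTRIBUTOR", "FIRST_TIMER", "NONE",
    }


def handle_comment(event: dict) -> list[str]:
    """Return the moderation actions the bot should take for this comment."""
    actions = []
    text = event["comment"]["body"]

    if toxicity_score(text) >= TOXICITY_THRESHOLD:
        # Never delete automatically: hide the comment and report it to a
        # human moderator, who takes the final decision.
        actions.append("hide-comment-and-notify-moderators")

    if is_newcomer(event):
        # Increase the visibility of first-time contributors and welcome them,
        # so their first attempt has a better chance of being a success.
        actions.append("label-as-first-time-contribution")
        actions.append("post-welcome-message")

    return actions
```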

From a technical perspective, we have all we need. Building good bots is not easy, but we can rely on Xatkit for that. And there are a number of pretrained language models for toxicity analysis (e.g., the Perspective API, which I use client-side in my WordPress plugin, or the Detoxify Python library) that could be plugged into Xatkit for the NLP processing part. What we need is to find the time to actually put these pieces together and conduct a sound experiment to validate our hypothesis.
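For instance, here is a rough sketch (my assumption of how the pieces would fit, not a validated setup) of how the Detoxify library could provide the toxicity_score function used above:

```python
# pip install detoxify
from detoxify import Detoxify

# Load the pretrained "original" model once at bot start-up
# (the weights are downloaded on first use).
_model = Detoxify("original")


def toxicity_score(text: str) -> float:
    """Toxicity probability (0.0 to 1.0) that the model assigns to a comment."""
    scores = _model.predict(text)  # dict with toxicity, insult, threat, ...
    return float(scores["toxicity"])


if __name__ == "__main__":
    print(toxicity_score("Thanks a lot for the detailed bug report!"))
```

The Perspective API could play the same role through a REST call; the important point is that the bot only needs a single score per comment to decide whether to escalate it to a human moderator.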

I’m afraid, though, that it will take time to find this time 🙂. So far we have been unable to secure funding for this project (the last attempt was this Ford/Sloan Foundation call), but until then I thought it would be good to share our ideas in the open, see what you think about them, and check whether somebody is interested in collaborating (even better if that somebody can commit or point to some resources to make it happen!).

Many other social networks and communities are taking steps in this direction (e.g. YouTube is one of the latest to announce its measures to fight toxic comments). Let’s make sure open source is not the exception.

Featured image by Danilo Alvesd on Unsplash
