
After the Capitol riot in January, many looked to the role that social media played in organizing the event. A good deal of blame has been directed at Facebook groups: such groups have often been targeted by those looking to spread misinformation, as there is little oversight within them. Furthermore, if set to “private,” these groups run an especially high risk of becoming echo chambers, since there is much less opportunity for information to flow freely in and out of them. The algorithms that Facebook uses to populate your feed were also part of the problem: more popular groups are more likely to be recommended to others, which allowed some of the more pernicious groups to gain much broader influence than they would have otherwise. As noted recently in the Wall Street Journal, while it was not long ago that Facebook saw groups as the heart of the platform, abuses of the feature have forced the company to make some significant changes to how they are run.

The spread of misinformation in Facebook groups is a complex and serious problem. Some proposals have been made to try to ameliorate it: Facebook itself implemented a new policy under which the kinds of groups that have been the biggest troublemakers – civics groups and health groups – are not promoted during the first three weeks of their existence. Others have called for more aggressive measures. For instance, a recent article in Wired suggested that:

“To mitigate these problems, Facebook should radically increase transparency around the ownership, management, and membership of groups. Yes, privacy was the point, but users need the tools to understand the provenance of the information they consume.”

A worry with Facebook groups, as with much online communication generally, is that it can be difficult to tell what the source of a piece of information is, since one might post anonymously or under the guise of a username. Perhaps with more information about who is in charge of a group, then, one would be able to make a better decision about whether to accept the information one finds within it.

Are you part of the problem? If you’re actively infiltrating groups with the intent of spreading misinformation, or building bot armies to game Facebook’s recommendation system, then the answer is clearly yes. I’m guessing that you, gentle reader, don’t fall into that category. But perhaps you are a member of a group in which you’ve seen misinformation swirling about, even though you yourself didn’t post it. What is the extent of your responsibility if you’re part of a group that spreads misinformation?

Here’s one answer: you are not responsible at all. After all, if you didn’t post it, then you’re not responsible for what it says, or for whether anyone else believes it. For example, say you’re interested in local healthy food options and join the Healthy Food News Facebook group (not a real group, as far as I know). You might then come across some helpful tips and recipes, but you may also come across people sharing their view that the new COVID-19 vaccines contain dangerous chemicals that mutate your DNA (they don’t). This might not be interesting to you, and you might think it’s bunk, but you didn’t post it, so it’s not your problem.

This is a tempting answer, but I think it’s not quite right. The reason lies in how Facebook groups work, and in how people are inclined to find information plausible online. As noted above, sites like Facebook employ various algorithms to determine which information to recommend to their users. A big factor in such suggestions is how popular a topic or group is: the more engagement a post gets, the more likely it is to show up in your news feed, and the more popular a group is, the more likely it is to be recommended to others. What this means is that mere membership in such a group contributes to that group’s popularity, and thus potentially to the spread of the misinformation it contains.

Small actions within such a group can also have much bigger effects than one might expect. For instance, in many cases we put little thought into “liking” or reacting positively to a post: perhaps we read it quickly and it coheres with our worldview, so we click a thumbs-up and don’t give it much thought afterwards. From our point of view, liking a post does not mean that we wholeheartedly believe it, and there seems to be a big difference between liking something and posting it yourself. However, these kinds of engagements influence the extent to which that post will be seen by others, and so if you’re not conscientious about what you like, you may end up contributing to the spread of bad information.
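The mechanism described in the last two paragraphs can be captured in a toy model. The sketch below is purely illustrative – Facebook’s actual ranking system is far more complex and not public – but it shows how, once a recommender ranks groups by an engagement score, every passive member and every casual like nudges a group further up the list:

```python
# Toy sketch of engagement-driven recommendation (hypothetical model,
# not Facebook's actual system).

def recommend(groups, top_n=2):
    """Rank groups by a crude engagement score: members + likes."""
    scored = sorted(groups, key=lambda g: g["members"] + g["likes"],
                    reverse=True)
    return [g["name"] for g in scored[:top_n]]

# Invented example data; "Misinfo Hub" has fewer members than
# "Healthy Food News" but far more reactions on its posts.
groups = [
    {"name": "Healthy Food News", "members": 1200, "likes": 300},
    {"name": "Local Gardening",   "members": 800,  "likes": 150},
    {"name": "Misinfo Hub",       "members": 1000, "likes": 900},
]

# Each extra member or thumbs-up raises a group's score, moving it
# up the ranking and in front of more users.
print(recommend(groups))  # -> ['Misinfo Hub', 'Healthy Food News']
```

Even in this simplistic model, joining a group or liking a post – actions that feel like mere consumption – feed directly into how widely the group is promoted to others.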

What does this say about your responsibilities as a member of a Facebook group? There are no doubt many such groups that are completely innocuous, where people do, in fact, only share helpful recipes or perhaps even discuss political issues in a calm and reasoned way. So it’s not as though you necessarily have an obligation to quit all of your Facebook groups, or to get off the platform altogether. However, otherwise innocent actions like clicking “like” on a post can have much worse effects in groups where misinformation is shared, and mere membership in such a group contributes to its popularity, and thus to the extent to which it is suggested to others. So if you find yourself a member of such a group, you should leave it.

Ken Boyd is currently a postdoc in the Department for the Study of Culture at the University of Southern Denmark. His philosophical work concerns the ways that we can best make sure that we learn from one another, and what goes wrong when we don’t. You can read more about his work at kennethboyd.wordpress.com.