What moderation tactics have you used, or seen used, to curtail the spread of misinformation in communities and on social media platforms? Word detection, link blocking, and digital stickers promoting legitimate information sources may immediately come to mind.
But what would happen if you ran your moderation tools against URLs shared in link-in-bio services used in your community? Or what if you learned that folks on your platform were using specific codewords to circumvent word detection? Or posting screenshots of misinformation rather than using plain-text? People are getting creative with how they share all types of information online, misinformation included. Are our moderation strategies keeping up?
In this discussion, Patrick chats with Joseph Schafer, an undergraduate student of Computer Science and Ethics at the University of Washington, and Rachel Moran, a postdoctoral fellow at the University of Washington’s Center for an Informed Public. They discuss their research into how anti-vaccine advocates circumvent content moderation efforts on Facebook, Instagram, Twitter, and other large social networks. Some of their findings might surprise you! For example, specific folk theories have emerged that capture how some people believe social platforms and algorithms moderate their content and conversations. And whether these theories are true or not, the strategies built around them do seem to help people keep questionable content up long enough for researchers to come across it.
So, where do we start? How can we detect misinformation if people are using codewords like pizza or Moana to get around our tools and teams? There may not be precise solutions just yet, but Rachel and Joseph both offer ideas to point us down the right path, which starts with deciding that the engagement that brews around misinformation threatens the long-term health of your community.
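To make the codeword problem concrete, here is a minimal sketch of naive word-detection moderation. The blocklist, function name, and example phrases are all hypothetical illustrations, not any platform's actual implementation: the point is simply that a term-matching filter only catches the vocabulary it already knows, so an agreed-upon codeword passes straight through.

```python
# Hypothetical blocklist of known misinformation-related phrases.
BLOCKLIST = {"plandemic", "vaccine hoax"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains any blocklisted phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

flag_post("Read about the PLANDEMIC here")    # flagged: exact term match
flag_post("DM me about the pizza situation")  # not flagged: codeword evades the filter
```

This is why communities adopting codewords (or posting screenshots instead of plain text) force moderators to look beyond keyword matching, toward context, behavior, and where links actually lead.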
Among our topics:
- Why Linktree needs community guidelines and how link-in-bio sites have become a vector for misinformation
- The folk theories informing how we perceive and work around social media algorithms
- Adapting your moderation strategies to better find misinformation