This conversation with Mohamed Mohammed, a community manager and a PhD student studying deepfakes, is timely. Just last week, a deepfake emerged attempting to spread misinformation that the president of Ukraine, Volodymyr Zelensky, was announcing surrender to Russia’s invasion. In that situation, preparation and rapid response helped minimize the spread of misinformation.
So, what’s your community strategy against deepfakes? Mohamed recommends starting with learning from the information and experts in our field. He also shares an important reminder: As community professionals, while we may want to prevent all harms from happening, we simply can’t. However, we can minimize the harm that’s caused, and we can educate our community members to identify and flag suspicious behaviors. Just as many platforms adjusted their community guidelines and enforcement rubrics to prevent the spread of misinformation, deepfakes represent a new area for us to learn about and help our communities adapt.
Mohamed and Patrick also discuss:
- Why science denial is banned in the Space.com community
- What good governance on deepfakes might look like
- Mohamed’s PhD on deepfakes
Our Podcast is Made Possible By…
If you enjoy our show, please know that it’s only possible with the generous support of our sponsor: Hivebrite, the community engagement platform.
Ground your moderation in your guidelines (6:13): “There was no way to not iterate our [community] policies when the world shut down because of a global pandemic, when flat Earth or conspiracy theories found their way to the forums. When these things happen, you have to make changes. Otherwise … we look shadowy. We start banning content or removing forum posts simply because we think it’s a bad thing. Even if everyone agrees with us, the perception is so important. The perception that we’re consistent within the scope of our guidelines is massive to being able to, for lack of a better term, keep the peace.” –@MMohammed_Comms
If your community has the same problems as a big social media platform, why should people stick around? (9:24): “If you’re not consistent [in your community moderation,] and if you happen to have the same problems as bigger platforms, then what’s the difference? Why am I investing all of this time as a user into this forum of yours when all of my efforts are being met with inconsistent approaches to keeping the place safe?” –@MMohammed_Comms
Antagonizing people to engage (11:30): “I don’t think it’s a coincidence that [the antagonistic content we discourage as forum managers] is the same thing a brand whose KPI is engagement on Twitter will post just to get engagement and to antagonize someone into giving the rapid-fire answers that get people. Nothing brings engagement on a place and an echo chamber quite like a divisive question. We’re trying to be the opposite.” –@MMohammed_Comms
In the words of Sam Gregory, “Prepare, don’t panic” (40:48): “Don’t get scared about this apocalyptic vision of deepfakes … [just] read as much as you can about them. I know it’s going to sound scary, but the more you understand them, the more you get comfortable with the fact that tools are advancing.” –@MMohammed_Comms
Shoutout to the supportive managers out there (46:01): “Having a [supportive] manager is to me the difference between having this long career that can be fulfilling and rewarding and can help you feel better about yourself versus something where you have to build this foundation all by yourself.” –@MMohammed_Comms
About Mohamed Mohammed
Mohamed Mohammed is a community manager at Future Plc, managing forums for brands such as PC Gamer and Space.com. He is also a PhD candidate at the QUEX Institute, researching the platform governance of deepfakes.
- Sponsor: Hivebrite, the community engagement platform
- Mohamed Mohammed on LinkedIn
- Mohamed Mohammed on Twitter
- A Zelensky Deepfake Was Quickly Defeated. The Next One Might Not Be, via WIRED
- Future PLC
- PC Gamer
- QUEX Institute
- Amanda Petersen on Community Signal
- Communications Decency Act
- Sam Gregory of the WITNESS Media Lab
- Joe Pishgar
[00:00:04] Announcer: You’re listening to Community Signal, the podcast for online community professionals. Sponsored by Hivebrite, the community engagement platform. Tweet with @communitysignal as you listen. Here’s your host, Patrick O’Keefe.
[00:00:25] Patrick O’Keefe: Hello, and thanks for tuning in. We’re talking with Space.com community manager Mohamed Mohammed about science denial, his PhD on deepfakes, and what good governance on deepfakes might look like.
Thank you to Jules Standen, Jenny Weigle, and Heather Champ for being among our Patreon supporters. If you’d like to join them, please visit communitysignal.com/innercircle.
Mohamed Mohammed is a community manager at Future PLC, managing forums for brands such as PC Gamer and Space.com. He is also a PhD candidate at the QUEX Institute, researching the platform governance of deepfakes. Mohamed, welcome to the show.
[00:01:18] Mohamed Mohammed: Hey, it’s a pleasure to be here, Patrick. Thank you for having me.
[00:01:21] Patrick O’Keefe: It’s my pleasure. The Space.com community guidelines say that posts that promote science denial will be removed. What does that mean in practice?
[00:01:11] Mohamed Mohammed: Science denial is something that we’ve tried really hard to expand out and iterate on, depending on what’s trending in misinformation, really. When we came up with that rule, there was a lot of misinformation trending around the pandemic, and vaccine misinformation particularly, before vaccine rollouts had happened in countries all over the world. Our first definition covered responses to posts that were following the guidelines, posts referencing NCBI or peer-reviewed papers.
If you go ahead and respond to this post by saying, “Actually, that’s fake, because…” Then you respond with things that we’ve commonly seen as misinformation, referring to anti-vaxxer groups, things like that, that’s our primary example. The second tier of that is if you post something on the forums that purports itself to be peer-reviewed, proven, and accepted scientific fact, and you have no evidence of that at all, we give a little nudge to say, “Look, is there any evidence for this?”
If there is none, if you didn’t just make a mistake and it was intentional that you posted it the way it is, that’s when we start to get more aggressive and remove some things so that it doesn’t expand its harmful reach.
[00:02:23] Patrick O’Keefe: Did Space.com have trouble with COVID misinformation?
[00:02:26] Mohamed Mohammed: I only mentioned the COVID misinformation because Space.com is part of a vertical in Future, the science vertical, where space is one side of it and life science is another part of it. That’s where the science denial rule started, but we moved it onto Space only because there would be very, and I cannot stress this enough, very occasional arguments about whether a certain aspect of black holes was based on scientific theory, or on something more proven and solid.
I think we only did that because, with our experience with life science, at some point it got a little bit difficult to manage with our small team, based on the fact that so much of the COVID misinformation was focused there. We thought, “Look, we can’t see anything like that happening on the Space forums, but it’s best that we just nip any arguments like this in the bud now,” so that we’re prepared if a controversial topic does come up. For example, one of the latest newsworthy topics was a brand new picture of Jupiter being taken.
If someone ostensibly questions whether that picture is legitimate, we go, “Okay, look, this has been established. It’s been confirmed by NASA, it’s been confirmed by the probes that are passing by and taking these pictures.” If you’re causing an argument on the forums just to say, “I don’t think this is real, and thus, you shouldn’t think this is real,” we treat that the same way we would treat misinformation on any one of our other forums, because the problem isn’t just that it’s misinformation and that you’re doubting something that’s been proven real. It’s also that you’re just being inflammatory. That just causes everyone to be uncomfortable on the forum.
We have some flat Earth posts that come up on the forums every now and then, for example. It’s debatable whether those are as harmful in the real world, in the meatspace, if you will, as misinformation regarding COVID, but they cause the same antagonistic effect on the forums. We try to treat them as antagonistic content that can do those double harms of misinformation, plus making people uncomfortable. We learned from 2020 and decided to just spread that approach to our space forums as well.
[00:04:43] Patrick O’Keefe: Right, because if you allow those types of posts, then they have a tendency to take over, in my experience, because you can always insert that into any random conversation and take a thread off topic, because you don’t have a set of agreed-upon understandings from which to discuss things.
[00:05:06] Mohamed Mohammed: That’s right. The consistency is such a huge thing for us because what I’ve learned from moderating and managing moderators now is that the best set of guidelines is two things that can seem somewhat antithetical to each other, which is, one, they’re universal and consistent, but they’re also iterative. You have to be able to make changes as you go along.
One thing I’ve learned, and this may be more opinion-based, and I can definitely take critique on this, is that if your guidelines haven’t changed even a tiny bit in, say, five years, if they haven’t changed at all, then depending on the size of your community and its subject, there might be reason to give them a quick audit.
You may not have a big problem. It’s not necessarily the worst thing in the world, but I think it’s worth an audit, because it’s really difficult to keep consistently applying these guidelines to something as, let’s call it, potentially provocative as human speech if you’re not prepared to iterate to some degree.
Being able to stick to the guidelines has been much easier for us and much easier for my team because we’ve iterated. There was no way to not iterate our policies when the world shut down because of a global pandemic, when flat Earth or conspiracy theories found their way to the forums. When these things happen, at some point, you have to make some changes.
Otherwise, we look shadowy. We start banning content or removing forum posts simply because we think it’s a bad thing. Even if everyone agrees with us, the perception is so important. The perception that we’re consistent within the scope of our guidelines is massive to just being able to, for lack of a better term, keep the peace, really.
[00:06:47] Patrick O’Keefe: I always like to ground myself in our guidelines, which are purposefully written somewhat vaguely in some senses to give overarching structure, because there is a point at which you have to cut off a level of detail while still being detailed enough to be helpful. Then the substance of them is often given in a consistent application that people get used to. If we’re removing something and we can’t cite a guideline for that, not to say never say never, but in general, that’s bad in my view.
We have to have something to cite that relates to it, even if it’s simple, even if it’s just, “This is inappropriate for this audience.” If we can get more specific, we will. But if you’re removing something and you’re not tying it back to a guideline, then I think that’s a good time to update your guidelines so that people can be more aware. That has all sorts of benefits. Even when people never read the guidelines, or most people don’t, being able to give them that link and cite that you’re tying it back to this one specifically is always an easier way to build trust than the opposite, which is to have it feel as though you are pulling rules from your mind.
[00:07:52] Mohamed Mohammed: This is the nice thing about talking to a fellow forum dweller. When it comes to chat boards, because they can get so niche, I’ve found in my experience that the smaller and more specific the interests of the community get, the more trust is placed in what we’re doing in the community, in our jobs as moderators and managers. I’ve learned to take that trust quite seriously, because even if the forum is something like 5,000 people, there’s always a surprising number of people who take that trust very seriously because they’re on very regularly.
Like I mentioned with Space, we have people who contribute very regularly to answer people’s questions. They find it satisfying to answer people’s questions. They will give very long answers, one, because they’re knowledgeable and we appreciate their knowledge, but two, because they’re contributing something of their time. Ostensibly, these are people who have other things to do in their day, and part of them being able to feel committed and safe enough to contribute is that bit of a social contract that we have.
I’ve found, particularly with forums, that upholding our particular end of the bargain is such a sensitive matter that if the forum gets overrun with spam, or if we’re not consistent, then, forget about growth targets, the actual community fundamentals of helping people feel invested and feel like they belong just become so much more difficult to do. Because at that point, people start to wonder, I think, and this is just from personal experience: if you’re not consistent, and if you happen to have the same problems as bigger platforms, then what’s the difference? Why am I investing all of this time as a user into this forum of yours when all of my efforts are being met with inconsistent approaches to just keeping the place safe? Frankly, it’s completely understandable.
[00:09:43] Patrick O’Keefe: One thing I love to learn about with individual communities, as you mentioned, niche communities, is the guidelines they come up with that speak to their individual use case, guidelines that maybe to an outside observer, even to us, may not make sense or may feel weird or illogical. I talked about this in our last show with Amanda Petersen about the breastcancer subreddit, which she’s a part of and moderates. They have a few really interesting rules in particular that really speak to a community that’s mature and knows who it exists to serve. One example was no research requests, because they would get so many requests for random surveys.
[00:10:20] Mohamed Mohammed: Right.
[00:10:21] Patrick O’Keefe: This person’s PhD, for example, to draw it to you, or this person’s survey, or maybe even some medical institution. The reason they don’t allow it, she said, is because it’s a place for people to navigate the complexities of their breast cancer. It’s not a place to help other people do their jobs better, which is an interesting way to think about it. I have a martial arts community that I’ll have run for 21 years in May. We have a guideline against discussions about the best martial art.
[00:10:47] Mohamed Mohammed: Right.
[00:10:47] Patrick O’Keefe: We learned many years ago. For some communities, that makes a lot of sense. Maybe it generates lots of replies and lots of anger applies to talk about what’s the best martial art. We just decided to opt-out of that totally, once upon a time, that it wasn’t worth our time to moderate it and it really wasn’t what the community was about. It was more about thoughtful conversation around the martial arts.
If you want to talk about the relative merits of an art, how does this compare to this other one as far as kicks go or punches or technique or whatever, that’s fine, but if your thread is to say, “What’s the best martial art?” We found that to be unproductive. The guidelines that communities make when they’re mature and know their audience are always interesting to think about and talk about.
[00:11:29] Mohamed Mohammed: Yes, I don’t think it’s a coincidence that the posts and the content that, as forum managers, you and I and others will look for as the antagonistic stuff, the stuff we want to discourage, is the same thing a brand whose KPI is engagement on Twitter will post just to get engagement and to antagonize someone into giving the rapid-fire answers that get people going. Nothing brings engagement in a place and an echo chamber quite like a divisive question. We’re trying to be the opposite.
Not to sound a bit holier-than-thou about forums, but it can get quite easy to talk about forums this way, because I’ve always considered them, as a user and then on the professional side of things, a bit of a purer form of talking about your interests without feeling like you’re yelling into the void. It just fundamentally changes how we approach guidelines. Just to note on your breast cancer community example, it’s fascinating, because on a much less serious note, I had an idea as part of my PhD to center the experiences of trust and safety professionals. That was going to involve sending out surveys and speaking to people.
I cut that short a little bit, and this may be why. As a community person, my first thought was, first of all, we have rules in our communities as well against spamming people with surveys and doing research. There used to be a little bit of leeway for it if you came to us first, rather than just creating a profile and immediately getting at people with surveys, because that just represented not the most ideal behavior.
Usually, though, it was a blanket ban on that kind of thing, because it just ruined the experience. One of the reasons I thought of that immediately was that I can’t think of a single context in which I would join a community that I’m not a part of, which is what this project would generally entail, just to, for lack of a better term, spam people with this relatively irrelevant thing that’s only useful to me, to, as it was so well put, help me do my job better. It’s easy to empathize with the people on the receiving end of that.
[00:13:36] Patrick O’Keefe: Yes, it sounds like you’re thoughtful. A lot of people look at that and say, “What do you mean? It’s relevant,” or, “What do you mean? I’m trying to help you.”
If I had a dime for every “It’s relevant” that I’ve heard in 20 plus years, I’d have a few bucks. It’s great to have a few dimes. It wouldn’t pay for much, but a coffee maybe.
[00:13:55] Mohamed Mohammed: [laughs]
[00:13:58] Patrick O’Keefe: Let’s pause for a moment to talk about our generous sponsor Hivebrite.
Hivebrite empowers organizations to manage, grow, and engage their communities through technology. Its community management platform has features designed to strengthen engagement and help achieve your community goals. Hivebrite supports over 500 communities around the world, including the American Heart Association, JA Worldwide, Earthwatch, the University of Notre Dame, Columbia Business School, and Princeton University Advancement. Visit hivebrite.com to learn more.
Yes, let’s talk about your PhD. You’re working on a PhD focused on the platform governance of deepfakes. As you explained it to me before the show, it will be grounded in the experiences of moderators and a study of community guidelines. Could you talk about that?
[00:11:14] Mohamed Mohammed: Yes, absolutely. The reason I got into this project was that, as a moderator who started in 2017, one thing I had the benefit of was having supervisors and colleagues who were consistently encouraging people to get into the nitty-gritty of moderation, including policy setting. I got a great view into how this stuff was set and the legal research that went into it, which was surprisingly extensive. One thing we always had a problem with was moderating synthetic images that involve the faces of real people in compromising situations, which is how people know deepfakes, really.
I didn’t realize how difficult it was to write out a policy that regulated and moderated them in a platform-specific way, on a small platform, without blanket banning them, which was really, really difficult to do based on any legal precedent. Because the fact of the matter is, outside of the United States, and in Europe especially, there isn’t a huge collection of laws that have to do with deepfakes, so you don’t have this guiding light. One thing that I think all community people can sympathize with is that moderators don’t just come up with stuff. These rules are based on legal precedent, on the law, on what’s legal and illegal.
I got interested in this because, as a moderator, I didn’t really have an answer to this except to individually delete posts without having, like we said in the beginning, that consistency. This project was available and, more importantly, it was funded, because, like any other PhD, it’s really difficult to afford unless you’re being paid to do it. To explain further, the work is that my team and I are trying to figure out how to regulate deepfakes in the political spectrum. One pillar is how to advise the legislative processes that make certain deepfakes illegal, while centering and basing that legislative framework on the work of moderators and trust and safety professionals.
And the people who supervise them, managers like us. The reason I’m doing that, and the utility behind it, is that I’ve read a lot of submissions to consultations here in parliament, where they would ask how we should approach regulating artificial intelligence, particularly on platforms, and then talk about synthetic media that came from artificial intelligence programs. It was incredibly rare to see someone who worked as a moderator involved to any degree. These consultations are eventually public.
I read through God knows how many hundreds of pages, and I was shocked by how few moderators were involved in this process in any way. This project is an answer to that, to say: center the people who are going to be enacting these laws that you eventually come up with, because this legislation will find its way to the platforms. When regulating deepfakes, there’s nothing more important than the platforms.
There’s nothing more important than the person who works at Reddit, who has to deploy these legal standards and then turn them into a community guideline. That’s the entire reasoning behind this PhD, and I hope to use it so that when these political machinations start to happen, when laws are being discussed that are going to really impact platforms with regard to regulating deepfakes, at the very least, I, and I’m imagining hundreds of other community industry professionals, can very confidently say, “Look, I’ve done this research, I’ve spoken to the people that are relevant to be spoken to about this, and this is the way that you need to come up with this legislation.”
It’s not really explicitly prescriptive. It’s just saying you need to keep these experiences in mind, because the people who are supposed to scale these rules and the people who are supposed to enact these rules on their platforms are the difference between the success of this legislation and its failure.
If you keep them in mind at the application level, you’re more likely to succeed in preventing the harms that come from deepfakes, which, unfortunately and conservatively, 99.5% of them are online for the sole purpose of abusing women. This is what research has found, and to prevent that, at the level of application, moderators and community professionals just have to be kept more in mind. Now I’m spending the next two years and change hopefully speaking to members of parliament as well, and speaking to moderators, to center their experiences and solve that.
[00:18:54] Patrick O’Keefe: That’s a good strategy. I like that a lot. One thing I want to call out real quick, because I have a lot of US listeners, is the idea that moderation centers itself on the law. In the US, as you know, we have just a great piece of legislation, Section 230. It’s globally fairly unique; my teenage self was thankful for it once upon a time, back in ’98, when I started moderating. Really, in your community, you didn’t have to wait to do anything about deepfakes necessarily.
You could always disallow them and write a policy and say, “Hey, we don’t allow them,” or, “We do allow them in these cases, but not these cases,” depending on your definition of deepfakes or synthetic media or photoshopped images or any of those types of things. You could go ahead and do that. But I think the growing challenge, the biggest thing, even scarier in my opinion than just writing a policy and allowing or not allowing them, is detecting them, right?
[00:19:44] Mohamed Mohammed: Yes.
[00:19:44] Patrick O’Keefe: What are your thoughts on that? What are your thoughts on the tooling that’s out there right now, or that’s not out there right now? Because I think the real uncertainty, for myself and I think a lot of people, is in detection and confirmation, the things that empower you to take action. A lot of the reasons people use deepfakes won’t fly in most communities, the ones we talked about earlier, the smaller communities, the forums.
Most well-run forums will probably chop down the abusive uses the vast majority of the time, because they violate another guideline or something else happens, where these bigger platforms often don’t have the same motivations, don’t have the same purpose, and don’t look at it the same way. But even in a case where it’s something mundane, maybe a minor case, you still don’t want to allow it, and you want to figure out if it is in fact synthetic. That seems like a toolset that’s going to prejudice itself toward the people who have resources when those tools are employed, the people who can afford to run them, as opposed to folks who fit the 99.9% of the community spectrum.
The people who don’t have a lot of money to invest in tools. When it comes to that side of it, are you optimistic, pessimistic? Where do you think it’s going?
[00:20:56] Mohamed Mohammed: I’m quite optimistic, though I think it’s a bit of a mixed bag. This is funny, because one of my reasons for doing this PhD is centered in law and policy, and the tech is definitely something to worry about. What I would say is that I justified focusing on the policy because of the nature of deepfakes. Without getting too much into the weeds for anyone who hasn’t heard of them before, deepfakes are created using a certain kind of artificial intelligence called Generative Adversarial Networks.
These Generative Adversarial Networks are generally made of two very important things. You have a generator, which tries to generate as true-to-the-real-thing an imitation of an image, or any kind of media, including audio, as possible, as close to the original as possible. Then you have a discriminator, which says whether or not this thing is based on reality. The generator tries its best to fool the discriminator.
Once it does, that’s when a deepfake is produced. In very simple terms, that’s how a deepfake is produced. You’re talking about a system, a neural network, that is built generally on fooling itself before it even hits the real world. Detection tools can be very good, but the problem is that these tools are really just a mechanism for feeding future deepfakes and making them a little bit better, because if you model the discriminator part of this algorithm after the latest detection tech, you’re just giving these deepfake-producing pieces of technology a method by which they can produce something that is even more realistic and even closer to being capable of fooling the advanced detection.
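To make the generator-versus-discriminator loop Mohamed describes concrete, here is a deliberately toy Python sketch. Everything in it is an illustrative invention, not anything from the episode: real GANs use neural networks trained by gradient descent, and the discriminator learns alongside the generator rather than staying fixed as it does here. The sketch only shows the core dynamic: the generator keeps adjusting its output until the discriminator can no longer reject it.

```python
# Toy sketch of the adversarial loop behind deepfakes (hypothetical names).
# The "real" data is just a number; the generator nudges its output until
# the (fixed) discriminator accepts it as real.

REAL_MEAN = 5.0  # stand-in for the real data the generator imitates


def discriminator(sample: float, tolerance: float = 0.1) -> bool:
    """Return True if the sample looks real (close enough to real data)."""
    return abs(sample - REAL_MEAN) < tolerance


def train_generator(start: float = 0.0, step: float = 0.05,
                    max_iters: int = 10_000) -> float:
    """Adjust the generator's output until it fools the discriminator."""
    g = start
    for _ in range(max_iters):
        if discriminator(g):
            break  # the discriminator is fooled: a "deepfake" is produced
        g += step if g < REAL_MEAN else -step
    return g


fake = train_generator()
print(discriminator(fake))  # the finished fake now passes as real: True
```

This also illustrates the cat-and-mouse point made below: if you tighten `tolerance` (a better detector), the loop simply runs until it produces an even closer imitation.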
It’s a bit of a fight-fire-with-fire thing. The problem is that the point is to stop the fire, and that can get a bit much; with detection tools, it’s going to be a bit of a cat-and-mouse game. It’s still better than not having any at all. Absolutely, it’s still a net positive. One of the examples I want to point out is that Microsoft earlier this year hit a bit of a milestone with a piece of technology called Truepic, where Truepic, Adobe, Microsoft, and the BBC came together. Truepic, without getting too technical about it, confirms whether or not something has been manipulated by looking at its metadata.
It’s a very visual and easy-to-use thing, where you can tell whether or not something was manipulated after the point of being shot, after the point of being taken. The reason I point to that piece of tech is that we’re making some progress, and it’s very positive progress. When it comes to forums particularly, it’s not terribly difficult for certain tools to end up as open-source pieces of software.
A lot of this stuff will be open-sourced, so that people who can’t afford commercial tools will still have options on their forums. That’ll just happen. You’re absolutely right about that. I think for the majority of people, as time goes on, the easier deepfakes become to make for everyone else, ostensibly, the easier and more accessible detection is going to become too, because that’s going to be the point. If the tools aren’t democratized in some sense, they won’t really work very well. I think the economics of investment in these tools supports that as well.
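The metadata-based verification described above can be illustrated with a much simpler stand-in: record a cryptographic fingerprint of the media at capture time, then later check whether the bytes still match. To be clear, this hypothetical sketch is not how Truepic actually works (real provenance systems embed signed claims in the file itself); it only shows the basic idea of detecting manipulation after the point of capture. All function names here are made up for illustration.

```python
import hashlib

# Minimal stand-in for capture-time provenance: hash the bytes when the
# image is taken, then verify the hash later. Real systems sign this
# record so it can't simply be regenerated after editing.


def capture(image_bytes: bytes) -> dict:
    """Simulate a capture-time provenance record."""
    return {"sha256": hashlib.sha256(image_bytes).hexdigest()}


def is_unmodified(image_bytes: bytes, record: dict) -> bool:
    """Check whether the image still matches its capture-time record."""
    return hashlib.sha256(image_bytes).hexdigest() == record["sha256"]


original = b"...raw camera bytes..."
record = capture(original)
print(is_unmodified(original, record))            # True: untouched
print(is_unmodified(original + b"edit", record))  # False: manipulated
```

The design point is that verification happens against a record created before any manipulation could occur, which sidesteps the cat-and-mouse problem of trying to spot fakery in the pixels themselves.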
I think at the forum level especially, and at the platform level, there are ways to blanket ban certain IPs. There are ways to blanket ban certain face-swap apps and things like that, based on the fact that certain apps are used predominantly for malicious reasons. That part of it, the technical part of being able to say, “No, we don’t want that on our forums,” is a great example of one of the great things about Section 230. As someone who’s had to spend about a year reading the original Communications Decency Act text, you are encouraged by Section 230 to moderate as you see fit.
If we’re holding true to the legislative theory here, that is the original intention of Section 230: you’re given the freedom to moderate as you see fit. Where I’m connecting the dots is that I think there needs to be some incentive for using something like Truepic or any other verification software, and part of that, I think, is that not only do you have to democratize that tech, but you also have to be able to say, look, there’s a clear, legal, universal approach to this.
In the United States, there’s a general harmonization of how the law approaches deepfakes. There are bad ones. There are okay ones when it comes to art. Then there are bans: in states like California, Virginia, and Texas, there are laws against using deepfakes within something like three months on either side of an election, the logic of which is pretty clear. What I’m looking for is to advise on policy that has a stance on deepfakes, so that you’re not forcing the platforms to regulate against this, but you’re encouraging them by being clear about what the parameters are for what the law considers a bad deepfake versus something artistic or a parody.
I think clarity on that is the difference between feeling comfortable using hopefully democratized authenticator technology and not seeing the need for it. My problem is people not seeing the need for it because there is just political and legislative silence on the matter. That’s where I tie those strings together.
[00:26:06] Patrick O’Keefe: That's interesting. Generative Adversarial Networks. I love the word adversarial in there; such a good name. Obviously, I get pitched by platform and tool companies fairly routinely. What I say pretty consistently is that if you really want to change the world, teenage me needs to be able to use it. That person really doesn't have money.
I was that person, and I can tell you, I didn't. I used free tools and open-source tools, and it was very powerful. I always like to think about phpBB, especially from about 2000 to '08, '10, '12, something like that. A very popular open source project, extremely popular; it probably ran the vast majority of the most popular forums in the world. If you had an open source tool that hooked into it, then you had a tool that folks could use, so that's always been really important to me.
Your interest in this subject came from a situation you encountered while working in community management, where a deepfake, just to use a simple term here, came into one of the communities you were responsible for. Can you talk about that?
[00:27:08] Mohamed Mohammed: Yes, absolutely. Just as a shout out, since you mentioned phpBB: the platform we use right now still makes very heavy use of BBCode. That's our markdown. I thought I'd give our OGs out there something to smile about.
When I first started moderating, it was in a community platform called Campus Society that was meant for university students. While I worked there, we had a maximum of 600,000 users, and I believe 4% at its peak were on there every single week. That's quite a large number of people, and at some point what we started getting was images spread for the sake of disrupting the process, just to troll. You were getting images that were pornographic or violent in nature, but with increasingly convincing Photoshopped heads and facial features, sometimes of users. It was often celebrities or something, but we would also have pictures that users put up, and those faces would end up on these rather compromising images. I would also be moderating live chats, which for a six to eight-hour shift was our own version of absolute hell.
What we would get, essentially, was streams of these for hours; somebody would create a profile for the sake of posting six or seven of these in a stream of live messages. Our issue was that all we could do was see them and delete them. That's the thing about these images in a predominantly text-based chat: you see them and you delete them. We had blanket bans in place for certain IPs and for certain malicious users from certain IPs, but we had a logistical problem, because at most we had two moderators on staff at any given time.
It was remarkably difficult from a logistical standpoint. It was really hard to scale a strategy where two moderators were going to get rid of these particular things. And there was no way to convince this platform and its leaders, no matter how much our managers agreed with us, to spend a certain amount of money on what was considered pretty ahead-of-its-time technology. Even in 2017 to 2018, it would've been quite an investment.
It was very difficult to make the case. The best way to do it in any industry is to say: you need to invest this money, because if you don't, you will lose more money than you're investing now. You have to be able to speak in those business terms, and I couldn't make that case. I knew I couldn't, because our manager at the time, Elena Goodrum, could not make that case.
She is incredibly bright and has this wide range of experience. That's what got me scratching my head: how do you approach this problem if we can't convince the people who are some hundreds of thousands of dollars away from the difference between moderating this properly and letting it run rampant? And if I can't convince someone like that, and if the people who would help me convince someone like that, the legislators, have no clue that this is a threat and don't see it as a legislative priority, what do we do?
For a while, even before I started working at Future as a community manager, I just didn't know the answer to that, except individually deleting posts, which is just... the more popular this gets, the less workable, the less scalable, and the less possible that is.
[00:30:22] Patrick O’Keefe: I feel fortunate to be working with platforms that are primarily text-only.
I've consulted with some folks on live audio and such. I can give you a roadmap, I can lay out a blueprint for how it should work in policy and toolset, and what can make things better, but it's tough. Live video and live audio, especially live video, I don't know. For our use case here, there's a tool for editing your podcast, and I forget the name of it, but basically it allows you to edit your own recorded voice by editing the text. Think about that.
You add a word and it says it, and it's you saying it. I see that and I don't want to use it, because it seems very dangerous to me. I'll get an episode of Community Signal out and it'll be, like, not me. Just moderating that and dealing with that, sometimes I want to stay as far away from audio and video as I can.
[00:31:13] Mohamed Mohammed: Can I actually ask you a question about this? I've been curious about your transition. You have this extensive experience on forums, and from there you went to what you do now, where you're focusing on something that is so heavy on live video, even if it's prerecorded. I wonder, did you experience any culture shock, a community culture shock, when you were transitioning to this role?
[00:31:32] Patrick O’Keefe: Thankfully, the product my team is working on focuses on text, so that makes things a little more palatable. The live streaming video and audio component is live, but our work starts and ends with the text side of it, so that makes things a little easier. Like I said, I've consulted with some folks who are operating in those spaces, and it's such a beast. I always think back to when I was younger and Ustream came out, ustream.tv. I think it's long gone now; I think it was bought by some big company and turned into an enterprise video product.
I remember thinking to myself, well, I couldn't do that responsibly, so I'm not going to do it. But who has that ever stopped from doing anything on the social web? It's not "I couldn't do that responsibly, so let's not try it." No, it's "let's get some venture capital, grow as fast as we possibly can, and deal with the consequences later." As one person, I looked at video and thought, I sure as heck can't do that; it would take more people, tech, and money than I have.
The answer is yes, I'm picking up things as I go. There's definitely nuance to working at a company like CNN that is making me better and helping me understand large organizations, because there's truly nothing like CNN. It's interesting, because it brings me to the live audio and video component, not the work I do right now, but all of these platforms. Something I wanted to ask you about is whether good platform governance around this issue is possible, and optimistically possible at that.
You said something before this show that relates well to the question you asked me, and that I think is interesting: "It's helpful to understand that complete avoidance of harm is impossible. It sounds obvious, but I have to constantly remind myself of this. I've worked long enough in this industry to know that the best outcome when it comes to governance on platforms is the one that sees the least possible harm to users."
[00:33:30] Mohamed Mohammed: Honestly, it's really difficult to start working as either a manager or a moderator, where part of your responsibility is making sure the users of whatever platform you're in charge of are safe. It's really difficult for most people to start that role and think to themselves: I have to allow for the possibility, in fact the guarantee, that some harm will come to these people. Some version of harm will happen that I will feel I could have prevented, but in reality, I can't prevent all of it.
The reason I think that's relevant is, first of all, for anyone listening who is considering being a moderator or working in community: you can't prevent all the bad stuff from happening. I know it sounds cliché, but it is 100% true. The deeper relevance is that when you get into the weeds of studying all this, you're not only reading the law. I'm constantly reading these incredible academics who are writing down, noting, and recording the experiences of people who've had to suffer because of someone's malicious deepfake, people whose lives have been effectively ruined by these deepfakes in ways that you and I couldn't even imagine.
I think when you start to get exposed to those stories over and over again, you realize that maybe one account per year really makes it into the mainstream, into the news. For every one of those, there are a couple hundred thousand people who won't say anything about it publicly but will talk amongst themselves. When you read enough of them, you lose sight of the proposition that not everyone can be protected, because once you're made aware of the numbers of people being harmed, you become intimately aware of them, to the point where you lose sight of the fact that you can't protect everyone.
But if the number of people affected by something is 60,000, you can make it as small as humanly possible in your own little way. Combined with the efforts of other academics and platform and community professionals, that in and of itself is a worthy enough goal. I say that because it's one of those things that's cliché for a reason.
It's cliché because, unfortunately, it is 100% true. I say unfortunately because it sounds like I'm ringing this bell of: you need to settle, work hard, and settle for the best outcome. For anyone listening who's a grad student, that's 100% what PhD people lose sight of: just settle for the best possible outcome and work your butt off.
In community as well, once we're applying this research at the level of community guidelines and pitching it to our management, not specifically mine, but to our collective management, as evidence for why they should invest in a piece of detection software and in changing their guidelines, remember that your goal in this specific respect is to make sure your users are protected as much as they possibly can be.
Harm is not something you're ever going to be able to eliminate. I've never successfully been able to protect everyone in even a single shift as a moderator. Forget about a platform; even in a specific six-hour or eight-hour shift I was on, there's just no way you're going to protect everyone. The longer I've sat with that, the easier it's become to think about both my research and my professional moderation efforts as scalable. It's all scalable. If your goal is to protect everyone, all or nothing, it is really difficult to scale anything you do, because you can't scale anything to actual perfection. That's why I thought it was important to say.
[00:37:03] Patrick O’Keefe: I literally just told somebody the other day, moderation is imperfect by its very nature. It's true. It's not even a question of scale; it's a question of simple existence. Moderation is imperfect. The goal of moderation is not perfection; it's to be as successful as you can on the first try and to get it right on the second. It's being open to review.
If we can get it right nine and a half times out of 10, or 9.9 times out of 10, and then get it right in the end most of the time, for almost all decisions, that's going to be good enough, because most moderation decisions are not crucial moments in someone's life. They are just taking out the garbage, in some ways: removing this piece of spam, taking care of this thing. Most moderation is straightforward, and the vast majority of decisions are easy and quick. It's the tough stuff.
That's what we're talking about today, too: the tough things, where there's nuance, where there's difficulty, where there's actual harm being done. In those cases, it's hard to do this job. It's a hard job, period, but it's also hard if your expectation of success is based upon perfection, or upon never falling short. We sift through all the things people don't want to look at. We take action on things to make our communities a better place.
I think one thing you pointed out about the impact on folks is important. No matter how big or small the issue is, we look at problems and, globally, they can seem insurmountable. In the US, we have, as you may have heard, a lot of disagreement over Section 230 and about moderation in general, legislatively and otherwise. One thing you can do is the best you can in your own little space.
[00:38:43] Mohamed Mohammed: Absolutely.
[00:38:44] Patrick O’Keefe: Facebook is a nightmare. I wouldn't work there, and I want nothing to do with it. I turned down a Facebook recruiter long ago, but I do see promise in other spaces. I see hope and optimism in other spaces, often smaller spaces, where people are coming together around specific topics of interest. Your individual space. It's kind of sappy, but I like to say when I meet people that online communities have changed the arc of my life. They've given me half of my closest friends, they've given me a career, and they're the reason I know my why.
On a deep level, individually, I am a testament to the power that a small online community can have on just one person. We can all take our own little spaces and make them a little safer each day.
[00:39:27] Mohamed Mohammed: Absolutely. I have this habit where whenever I talk about deepfakes or applications or platforms, it's always 80% quite scary, and I'm gaining an appreciation for that. One thing to add to what you just said, for anyone listening who happens to be concerned about this, which sensibly is everyone: I'm going to steal a line from Sam Gregory, who's over at the WITNESS Media Lab, which is, "Prepare, don't panic." Deepfakes are just a part of life that is going to increase more and more.
We've seen them in movies. They were used after the passing of some actors to fill in their scenes in movies like Fast and Furious. They are a more normal part of life; the tech is a more normal part of life. Inevitably, they will be used maliciously, but we can prepare for them. Part of my project is to help prepare for them, so that we can nullify quite a bit of that harm, so that we can minimize it, without having to erase the technology altogether.
Community is such a huge part of that, because these things spread on platforms. With any luck, we will be more involved in the legislative conversations, so that we can keep platforms based on people's interests a safe place for the beautiful image you just described, where people can make lifelong friends and meet the love of their lives.
I think a big part of making sure people don't get scared by this apocalyptic vision of deepfakes is to just read as much as you can about them. I know it's going to sound scary, but the more you understand them, the more comfortable you get with the fact that tools are advancing, and that there are people like myself, and people far more qualified than me, including my supervisor, who are studying this from a legal standpoint. It is a problem that's being tackled. Again, I thought Sam Gregory's line was perfect for this: it looks scary, but prepare, don't panic. Community people will never take credit for anything, but at some point, I want us to be able to say that we involved ourselves in this legislative conversation to help people prepare rather than panic, and to keep this beautiful vision of community.
[00:41:33] Patrick O’Keefe: Yes, that's great, and similar to what you said there about relying on community people. We hear about issues like this and it's easy to get overwhelmed, because there are so many issues that we have to deal with, so I think it's okay to learn about them. I also think researchers, as you've alluded to, should rely on the experience of community folks and on practical application, not just the academic.
Community folks should rely on people who, as it sounds like you're doing now, dig into specific issues and make themselves the expert in a specific area. If those folks are receptive to community pros, thinking with them in mind and with the real practical applications of policy and law specifically, then the flip side is that those who dig into a specific area of expertise, whether it be deepfakes or something else, are folks we should look to for that expertise, rely on, and utilize as we craft our own policies and determine how to handle these matters on our platforms. I think that's really great.
At the risk of ending on a sappy note, from talking to you before the show, it's clear that you credit Joe Pishgar a lot with identifying your talent for this work and cultivating it with you. When I asked you for one thing you learned from him, you said it was his reminder that you belong here. I thought that was quite a touching answer, and I wanted you to talk about it. Could you expand on that a little?
[00:42:52] Mohamed Mohammed: Absolutely. I will talk about Joe Pishgar ad infinitum, but to speak to that specifically: it's really easy to get overwhelmed by how much work is given to you when you're trusted on a team despite being really junior, when in your brain, you're thinking there's no reason for it. Without mentioning names, we have a collective of superstars, in my opinion, on our community team.
When Joe was leading that team, it was surprising how much work was given to me, how much trust, how little supervision, while he was still available for answers. I would never say it out loud; I would never say, "I really don't deserve this," because that would just seem a bit ungrateful. But he would sense, in my hesitation, in my answers to his questions, or in my questions to him, that I felt like I was new to this and probably didn't belong, that I wasn't being immediately successful, and I would get down on myself.
He was really, really perceptive like that, and what he would say consistently was, "Look, we gave you this job for a reason. You went through several interviews, you're being paid to do this, and being paid to do it by this particular company, which is no small feat, because you belong here. We would not have hired you if you didn't belong here. You belong in this industry even more broadly, because you represent all the things I was looking for, despite the fact that you don't have any community industry experience."
Once you come in, beyond just feeling gratitude to someone like Joe Pishgar for taking a chance on you, you often unintentionally hold them to this standard, idealizing them a little, saying: this is the person whose career trajectory I want mine to be. And he would say, so consistently, "You belong here, you belong here, you belong here."
Really, I think that's the difference between who I was when I started in 2019, incredibly nervous and feeling isolated as the most junior, know-nothing member of the team, and who I am now: someone who will very quickly say, "Look, I can handle that. I'll take that. That's not a problem." Giving presentations to our team on how my projects went, taking accountability whenever mistakes happen, being able to do that with some confidence, and actually learning some things rather than getting super anxious.
I don't think there's any way I would have survived a full year of being that down on myself, particularly because a big part of that year was when the world shut down and everyone was doubting everything. I think having his consistent voice in my ear helped; we had constant one-to-ones. I would speak to him every single week, because he thought managers need to be able to speak in a casual fashion with their employees so concerns could be addressed. While I never said it, I was always blown away by the fact that he had, and still has, the emotional intelligence to say, "Look, I know you're doubtful. I get it, it makes sense."
He acknowledged what I was saying, but consistently reminded me that those feelings, while valid, come from not realizing that you actually do belong here. He kept repeating it; it was a drumbeat. It still stays in my head whenever I get these little moments of doubting myself, that someone like that could consistently say, "You belong here, you belong here, you belong here." Having a manager like that is, to me, the difference between having a long career that can be fulfilling and rewarding and help you feel better about yourself, versus something where you have to build that foundation all by yourself.
My foundation started even before Joe. I had a manager named Ilana Goodrum who said: go after everything that you observe; you are capable, go after it. Then Joe was the most consistent reminder that you belong here, which to me sounded like: you are not an outsider. That meant a ton to me as the person who felt like the outsider.
[00:46:35] Patrick O’Keefe: Yes, I think that's just an amazing story. Thanks for sharing it. I think we all want a boss who believes in us on some level, though it's not as easy as distilling it down to that. Not just believes in us in words, but in actions. I don't know if they're rare, or if I just haven't run into as many of them as I would like; about 50/50 so far, I would say.
It makes all the difference in the world. If you didn't have someone like that, it's possible you would have burned out and moved on to something else fairly quickly, and because you had that, it cultivated the ability you obviously had. You were able to step into this role, work on your PhD, and do all these things. We don't always realize the power that a handful of words, consistently repeated, can have on someone.
[00:47:24] Mohamed Mohammed: He wrote a reference letter for one of my PhD applications; that's where I was at in my relationship with my manager. That's how fortunate I got. I didn't know he was the manager for this place when I was applying for the job, but a year later, he was the guy writing this rather long letter. He's a very wordy, articulate guy; we make fun of him all the time for his use of very big words, but for a PhD application, it can help quite a ton. To go from a complete stranger to talking about moderation like I was talking to a buddy.
Then to go to the guy pushing for my involvement in projects, for my continued success, and for getting scholarships at the PhD level, and things like that. It doesn't really occur to you how important that is until it's either really bad or really good. Thankfully, it's been the latter for me.
[00:48:13] Patrick O’Keefe: Mohamed, I want to check back in when you're a bit farther along in your PhD and influencing parliamentary or government policy, but for now, it's been a pleasure. Thanks so much for spending time with us.
[00:48:24] Mohamed Mohammed: Thanks so much, Patrick, it’s been great.
[00:48:26] Patrick O’Keefe: We’ve been talking with Mohamed Mohammed, community manager for Future Plc. For the transcript from this episode, plus highlights and links that we mentioned, please visit communitysignal.com.
Community Signal is produced by Karn Broad, and Carol Benovic-Bradley is our editorial lead. Until next time. Be excellent to each other.
If you have any thoughts on this episode that you’d like to share, please leave me a comment, send me an email or a tweet. If you enjoy the show, we would be so grateful if you spread the word and supported Community Signal on Patreon.