There was one big problem: They didn’t tell community members that they were being experimented on. They didn’t tell the community moderators. They didn’t tell Reddit’s corporate team. Only when they were getting ready to publish did they disclose their actions.
It then became clear that beyond the lack of consent, they had engaged in other questionable behavior: Their AI-written contributions had spanned multiple accounts, pretending to be a rape victim, a trauma counselor focusing on abuse, a Black man opposed to Black Lives Matter, and more.
Change My View volunteer moderator Logan MacGregor joins the show to discuss what went on behind the scenes, and more.
Announcer: [00:00:00] You’re listening to Community Signal, the podcast for online community professionals. Here’s your host, Patrick O’Keefe.
Patrick O’Keefe: [00:00:13] Hello, and thank you for listening to Community Signal. In March, the volunteer moderators of the popular Change My View subreddit received an email from researchers at the University of Zurich. The researchers revealed that they had been spamming the community, that’s my terminology, spamming the community, with AI-generated posts and comments across multiple accounts, without disclosing this fact to the moderators or the community members they were interacting with. Their goal, apparently, was to show how AI could be persuasive.
Patrick O’Keefe: [00:00:45] Worse yet, according to the moderators, the researchers used these AI comments to pretend to be a victim of rape, a trauma counselor specializing in abuse, a Black man opposed to Black Lives Matter, and more. Online communities are never fair game for emotional exploitation like this.
Patrick O’Keefe: [00:01:01] Reddit’s company leadership doesn’t have a particularly good track record with AI, in my opinion, since they signed a deal with Google to allow the company to train AI models on Reddit posts, including those made by dead people, without an opt-in to participate, let alone share in the profits. They also don’t have a great track record with regard to the creation of fake accounts, which was part of the founding of the platform, but Reddit members and volunteer moderators certainly deserve better.
Patrick O’Keefe: [00:01:24] After learning about the study, the mod team filed an ethics complaint with the university and revealed the matter to the community, which led to a lot of media attention, nearly all of it focusing on how unethical the research was. Reddit’s chief legal officer, Ben Lee, said the company was evaluating its legal options. Due to the mounting pressure, plans to publish the research were scrapped and the researchers apologized. They pledged to practice stronger ethical safeguards in future research.
Patrick O’Keefe: [00:01:49] Manipulation in online communities has been a thing since the start, whether it’s government actors trying to sway public opinion, autocrats targeting the opposition, Steve Huffman and Alexis Ohanian creating phony accounts, or Scott Adams pretending to be a fan of Scott Adams. It’s been going on. AI’s impact is on the believability, quantity, and speed at which this can happen. I’ll let research experts talk about the ethics of conducting experiments without consent. Spoiler, it’s not good. And I’ll take the online community angle.
Patrick O’Keefe: [00:02:18] Bad AI-related things are absolutely happening in and to many online communities. We should do what we can to limit it, but ultimately the fault in these things rests with bad actors, not with community operators doing their best. When you see it, do what you can. I think the mods of r/changemyview deserve a lot of credit here, and that’s why I wanted to have one on the show. Logan MacGregor is a member of the volunteer mod team on r/changemyview.
Patrick O’Keefe: [00:02:41] Drawing from a unique blend of experience, including social work, administration, program management, project management (including research-based projects), policy, strategic development, and emergency management, Logan is a credentialed Type 3 Planning Section Chief who is planning to complete the master’s program at the Center for Homeland Defense and Security with a thesis likely focusing on information campaigns. Logan, welcome to the show.
Logan MacGregor: [00:03:04] Thank you for having me.
Patrick O’Keefe: [00:03:05] It’s my pleasure. So tell me about the moment when you first became aware of what the researchers had done.
Logan MacGregor: [00:03:14] So I think it was March 17th. It was definitely a Monday.
Patrick O’Keefe: [00:03:19] St. Patrick’s Day, a great day. (laughter)
Logan MacGregor: [00:03:20] So it was definitely a Monday, and I had gotten up a couple hours early before work, and the mod queue was getting a little backed up, and I wanted to put in an hour or two of just working the queue before I had to start my regular day job. And so it was six-ish in the morning, and I grabbed my first cup of coffee, and one of the things I like to do is I like to review mod mail just to see if anything is happening or if there’s an appeal that I need to review.
Logan MacGregor: [00:03:54] I like Change My View because it does have an appeal process where if we remove something incorrectly, people can say, hey, it actually didn’t break the rules, and we can take a look at that. And then there it was.
Patrick O’Keefe: [00:04:06] It was a noteworthy day.
Logan MacGregor: [00:04:07] There was the email, and normally, you know, when there’s a big issue for the mod team, you have an internal discussion group, and so I posted it there, and I think my comment was, they fucking did what? So that was kind of my first reaction.
Patrick O’Keefe: [00:04:24] Talk about that. Like, I know you just said your own reaction, but sort of, I guess, the reaction amongst the mods, amongst yourself, however you want to describe it. What was the immediate mood amongst the mods, let’s say?
Logan MacGregor: [00:04:36] It’s online collaboration, so it’s hard to read between the lines because people are typing things, and it’s not like an in-person conversation or a virtual conversation like we’re having here. You know, there were a lot of questions like, is this real? Is this actually happening? And then, you know, obviously, it was real. And what does this mean for our sub? How do we respond? What are the issues? And I think it was just shock.
Logan MacGregor: [00:05:01] There were just too many questions, and it was really hard to get our heads, I think, wrapped around what was happening and really just get a sense of it. And I think it took us several days just to really fully understand the gravity of what had happened and everything around it. And that was even before we could begin to think about, you know, what we wanted to do about it because it was enormous. It was just, it was huge. Personally, you know, I was a brand new mod. So like, I wasn’t expecting anything.
Patrick O’Keefe: [00:05:30] Like, it’s like this every day? (laughter)
Logan MacGregor: [00:05:32] It was actually one of my questions. I’m like, is this normal for the sub? The team’s like, oh no, this does not happen every day.
Patrick O’Keefe: [00:05:41] So speaking about the breadth of it, I guess it’s hard to put a number on how many members were impacted, because you don’t know how many read it, right? You only know the people replying, and you can see the upvotes, downvotes, those sorts of things. But there’s the countless number of people who read the subreddit and don’t actually participate, who were then influenced by this person posing as a rape survivor or posing as whatever else. Do you have any idea how many posts they made, and then how many members were actually impacted by it? I guess you could only estimate that.
Logan MacGregor: [00:06:11] Yeah, and so, you know, we have roughly 3.8 million members on Change My View, but obviously they’re not all active at the same time. That’s just the number of people that subscribe. And you know, the activity on the sub, it varies and it really depends on what’s happening in the news cycle for the most part. During the time that the experiment was taking place, it happened to be the election cycle for the US. So it was a very active period of time for the sub. A lot of people posting views about one candidate or another, one issue or another.
Logan MacGregor: [00:06:46] And, you know, Change My View is a special place where people can go and have an exchange of ideas in a civil space where views can actually change. So I know that there were 1,700 comments, that we know of. And there were 13 bot accounts that got through the Reddit filters. I think they deployed 31-ish bots, but then a good number of them got shadow banned by Reddit before they made it onto the site. So, you know, 13 user accounts, 1,700 comments, and it was all published in their draft paper.
Logan MacGregor: [00:07:23] And, you know, I had some questions about the methodology, but in terms of how many users it impacted, you know, in terms of the number of users that interacted with the comments, I can’t say with any confidence how many people looked at the comments. I know that before I became a mod, you know, I had become very interested in one of the bot accounts that was active in some of the same posts that I was involved in.
Logan MacGregor: [00:07:51] And I personally thought it was some sort of foreign government misinformation campaign, but then the comments it was making were not making any sense from that perspective. And so I clearly identified one of the bot accounts as a bot account, but I wasn’t a mod. And so I feel kind of bad that I didn’t report the thing.
Patrick O’Keefe: [00:08:09] It’s really interesting though, because you, you know, being a relatively new moderator, you actually were one of the members who was part of the experiment. I mean, all mods are members too, but you were on the member side, and you were part of the experiment, so to speak, that they were conducting. And then you became a moderator and got to see the fallout, because of the gap in time between when they conducted their experiment and when they informed any community leadership.
Patrick O’Keefe: [00:08:33] So that’s kind of an interesting, I mean, I guess there’s maybe a couple people that qualify as that, but it’s sort of an interesting thing where you got to be a part of both sides, really.
Logan MacGregor: [00:08:42] Yeah, I never really thought about it that way, but I think that’s right. And I think that’s part of why, you know, this kind of hit me harder than, I don’t know, maybe it should have, but it hurt. And part of the reason it hurt was because I had seen some of this behavior by the bots and I didn’t do anything about it. And now, you know, I was able to see in real time the consequences of my own inaction.
Logan MacGregor: [00:09:09] And I don’t know if the mod team would have been able to do anything about it because, you know, part of the reason I was brought on board was the mod queue, where people make reports when other people violate the rules. It was getting quite busy and they really needed someone else to come on board. And so I don’t know if they would have had the bandwidth. We have tens of thousands of comments a day, and this was 1,700 comments spread over four months. I don’t know if they would have had the bandwidth to do that.
Logan MacGregor: [00:09:39] You know, should have, could have, would have, like, who knows?
Patrick O’Keefe: [00:09:41] Yeah, yeah. I mean, let me alleviate you of that guilt.
Logan MacGregor: [00:09:43] Oh, well, thank you.
Patrick O’Keefe: [00:09:45] Let me alleviate you, because in cases like this, you have to remember that manipulation in online communities has existed, you know, forever. And what’s happening with sort of AI, LLMs, however you want to classify these tools, is the velocity, the believability, the speed at which people can do it. I mean, there’s this amazing story we’ve had on the podcast.
Patrick O’Keefe: [00:10:01] If you’re not familiar with the story of Scott Adams, the Dilbert guy, trying to impersonate a fan of Scott Adams, oh my gosh, look it up. It is fascinating and hilarious. But I always place the responsibility, the fault, the true fault: it’s not on the member who didn’t report this post. It’s not on the volunteer mod team doing their best with no resources and sort of fighting the volume. It’s the bad actor, right?
Patrick O’Keefe: [00:10:23] The responsibility, the fault, always rests with the person who chooses to manipulate the community. It’s easy to fool people, right? It’s easy to fool some people and to do something that undermines trust. It’s harder to build trust. So yeah, let me alleviate that. So you mentioned this interesting thing I hadn’t realized right away, or I hadn’t thought about: this took place last year during the election, and then they waited until this period to reveal it to anyone in leadership.
Patrick O’Keefe: [00:10:47] I assume they were getting close to publishing the research and then thought, we better tell someone. But there’s this gap from when the mod team learned of it to when the mod team announced it. And you mentioned sort of the coming to grips with it, right? Understanding what it meant, how big it was, how to communicate it. So, you know, after the mod team finds out, after the researchers finally disclose that this happened, talk about that process a little bit. Like, how do you come to terms with it and sort of decide what the right next steps are?
Logan MacGregor: [00:11:14] Well, you know, I have to say that, you know, to the credit of the researchers, they were extremely forthcoming with our questions. So the first thing we had to do was kind of make sense of what had happened. And we had questions about which accounts had been used so that we could actually review them. We, you know, wanted to see the ethics committee materials and they shared those with us. And so they were very forthcoming with helping us make sense of that. Their transparency in that process was nice.
Logan MacGregor: [00:11:47] And it was kind of weird because on one hand, their research was deceptive by design. And, you know, we didn’t know anything about what they were doing until they told us it was done. And it was part of their disclosure process that was baked into their research method. And so the first thing we did was really try to uncover all the details: what could we learn? And so then we read the abstract, we read the ethics materials. We really tried to thoroughly understand what had happened.
Logan MacGregor: [00:12:14] And then in terms of our process, we are a consensus-based mod team. Very rarely do we break from that consensus-based model. And it’s usually around time pressure. If we have to do something very quickly, then sometimes we’ll bypass the consensus. So deciding how to go about the response took some time. I would say about a week. We pretty much knew right away that we wanted to file a complaint with the Ethics Commission at the University of Zurich and ask them for a number of remedies.
Logan MacGregor: [00:12:49] But then, you know, the mod team deciding what those remedies were, that took some time, because there are some of us with research backgrounds, others with actually some background in AI, others who have some legal background. And so we just had a lot of expertise on the team and a lot of different viewpoints. And we ultimately settled on the requested actions that were included in the letter, which we shared with the community. And I think the two most pressing, and this is just my perspective, I can’t speak for the entire mod team.
Logan MacGregor: [00:13:20] But from my perspective, I think the things that we wanted the most were an apology and a promise not to publish. The second was really important because we were concerned about what would happen if this was published in a peer-reviewed journal. Because obviously the findings are out in the wild, you can Google it and find it. So it’s not that the findings aren’t being discussed.
Logan MacGregor: [00:13:43] It was just this idea that if it was elevated to a prominent journal, that our community, which is supposed to be a protected human space, would now become just another sandbox for researchers. And so we felt very strongly that it should not be published. And we tried to articulate that as clearly as we could in the complaint letter. Unfortunately, it didn’t land well. But that was kind of our process. And we went through several drafts of that letter until we all said, yeah, this is what we want to say.
Patrick O’Keefe: [00:14:20] It’s really interesting because, on one hand, what’s funny to me is you mentioned their transparency. I think that’s good. And I don’t know them; I’m casting some aspersions to a limited extent. But even when they send you a list of accounts, there’s a trust that’s been broken, sort of, by not asking for permission previously, right? You have to trust that these are all the member accounts, that they’re disclosing all of them, that there’s not a worse example or something they threw away. And I think that can be an understandable leap.
Patrick O’Keefe: [00:14:46] And then the other thing that jumped out at me is just this idea, which I think is very good and I think is true. I always say that when one community person, a volunteer, a community host, a person in this line of work, stands up for their community, they stand up for all communities.
Patrick O’Keefe: [00:14:59] And so when you push back, it’s not just that you’re stopping or trying to prevent people from using this particular community as a sandbox for their research without consent, but then it would be accepted across all communities, not just all subreddits, but all independent online communities where you can get something to work in this form. And I think that’s really important.
Logan MacGregor: [00:15:21] Yeah, I hear what you’re saying there, and honestly, it touches me a bit. And, you know, I’ll say it with regard to the researchers, you know, we really had no reason to doubt what they were saying because, you know, the materials were official University of Zurich documents and they had given us the actual name of the principal investigator. And you know, from my own personal perspective, I think the researchers meant well. This concern that they raised about the potential for AI persuasiveness in online spaces, that’s a real issue.
Logan MacGregor: [00:15:55] And I think that they were really wanting to try to address that. And so I think that their intentions were good. And I think that, you know, they believed that this was the right way to go about it. And so I think that there was some sincerity there. I think that they wanted to be transparent. They really wanted to engage our community in the debriefing process, as they put it. And I think, unfortunately, they were just flat wrong. I’ve seen so many comments denigrating their character, and no, they shouldn’t have done that.
Logan MacGregor: [00:16:28] And part of me is like, you know, what part of them thought that this was okay? Like, I get that. But then I also get, you know, I’ve done research. I’ve had to run stuff through an ethics board, and they have this curious system in Switzerland where the ethics board doesn’t have any binding authority over the researchers; it just plays an advisory role. And I just think that structurally, the way that this was allowed to happen, somebody should have been able to say, oh, no, you’re not going to do that, right?
Logan MacGregor: [00:16:55] Like, because all of this could have been avoided. And to me, I think that that is, or should be, the role of the ethics board. And so in terms of believing them at the time, I think there’s always that kind of nagging doubt in your mind, like what else is out there?
Logan MacGregor: [00:17:09] But really, you know, given how rapidly they responded, and given that we were able to validate everything they said, like the accounts they gave us, sure enough, they were AI-generated, I don’t think that they had anything to hide there. And I don’t think that there’s an experiment behind the experiment. Like, I’m not going to go down that route at all.
Logan MacGregor: [00:17:28] And then in terms of the community, like it was about our community. And I didn’t really think of us as like championing like the whole universe of online communities. We were just really reacting to what happened to us.
Patrick O’Keefe: [00:17:41] It makes a lot of sense. When we stand up for our members, we, you know, we stand up for others. So I think that’s a really positive thing. You touched on something that I thought was really interesting, because, I mean, I read the announcement, obviously, I read the apology announcement that came out a day or two before we recorded this. I read a bunch of the media coverage, and I’ve had a lot of time to think about this thing.
Patrick O’Keefe: [00:17:57] And one of the things that surprised me, and I was curious if it surprised you: were you surprised that the researchers were sort of so confident that this was going to be published research in this form, prior to the announcement and prior to the news coverage? Because the announcement, the comments beneath the announcement, the response in media coverage, and just overall, I would say virtually everything I’ve read, and I’ve read probably hundreds, if not close to 1,000 comments by this point, has been like, this is awful, they shouldn’t have done that.
Patrick O’Keefe: [00:18:28] And yet they really believed when they approached you, that this was research that was like, we’re going to do it, and it’s coming out, and here’s what’s happening. I mean, were you surprised by that confidence?
Logan MacGregor: [00:18:39] Yeah. And that was actually part of what made me feel like they really did mean well. And I can’t get inside their heads too much, but it really came across as if they believed that they were helping. And to me, like, I wanted to scream, this is not helpful, it’s absolutely not helpful at all. That’s part of why consent is so important, because it’s hard to have common ground after the fact, right? Like once you’ve been violated like that, it’s hard to get to that place where there’s mutual respect.
Logan MacGregor: [00:19:08] And I have to say that I’m a minority on the team in terms of the perception that this was well-meaning. You know, there’s a lot of anger.
Logan MacGregor: [00:19:17] And to me, I keep coming back to the system. Like, the system should have prevented this. I can totally see eager researchers identifying a real problem, coming up with a way to try to combat it. Well, okay, I guess consent-based models lack ecological validity.
Logan MacGregor: [00:19:34] You know, here we go, we’re going to go into the sub and we’re going to show not only how problematic these AI creatures can be, but maybe come up with some ways to combat them. And I think they just kind of glossed over that whole consent piece and didn’t realize just how important it was. And this is something that I don’t think has been captured in the media very well at all.
Logan MacGregor: [00:19:57] And that is, I think, as we’re starting to talk about artificial intelligence and its pervasiveness everywhere, I think we need to get past individual-based ethics in research. So much of what they did to try to prevent harm was to say, well, you know, comments like this happen all the time online, we don’t think that it’s going to cause individual trauma. And we kind of dispute that, because some of the comments, like pretending to be a trauma counselor, maybe that could actually cause some harm.
Logan MacGregor: [00:20:30] But I think we have to look at how it impacts the community. And I don’t think that that is a thing in AI research generally. And so I think that that kind of feeds into how the researchers approached this, because I don’t think they thought enough about community impacts until after the community kind of screamed ouch. Then I think they were like, oh, maybe this is a thing. And I’ve seen some ethics commentary in peer-reviewed journals that kind of suggests as much, that we really should be looking at community-based ethics in this new world, that it’s not enough to just think about individual harm. And I think had that been an ethics focus, maybe this could have all been avoided.
Patrick O’Keefe: [00:21:16] Yeah, I mean, that’s a really interesting point, because I do think a lot of the criticism of AI centers on consent and sort of how it impacts the end user, if you will. But what jumped out at me as I was listening was this idea about where a lot of harm or a lot of bad behavior online comes from. Like, a lot of people are against anonymity or pseudonymity, like Reddit, where you have a username.
Patrick O’Keefe: [00:21:39] And in my findings, and as I’ve talked to a lot of people who’ve done this for a long time, that’s not really so much an issue as it is sort of the attention paid to this space. Like whether or not anyone’s paying attention, whether or not anyone cares.
Patrick O’Keefe: [00:21:50] And I think a lot of this gets taken for granted. This sort of attitude of, oh, there are so many comments online, obviously there are already AI comments or robot comments, so what’s a few more, is to sort of not look at people in an online community as people. It’s to take for granted that, okay, yeah, they’re dealing with some challenges, so let’s toss a few more on, because what’s a few more? And that’s not the way that, in general, most people would wanna be treated in any facet of their life.
Patrick O’Keefe: [00:22:21] And yet that’s sort of the justification: there’s a lot of junk online and people have to look at it, so why not? Why would our thousand more comments, 1,800 comments, however many it was, really make a dent in a community that has 3.8 million members? But that just takes so much for granted.
Logan MacGregor: [00:22:37] It does, and I take your point about the anonymity of Reddit, and I agree with you, it’s not that big of a deal, because you look at Facebook and other platforms where anonymity is not a thing, and you have similar problems. One thing that’s really special about Change My View is that it’s a human space, it’s a decidedly human space. And one of the things that is also true is that the University of Zurich is a decidedly human space.
Logan MacGregor: [00:23:05] Like I could envision myself traveling across the ocean, visiting Zurich, taking in the sights, and maybe even touring the campus and checking out their research libraries.
Patrick O’Keefe: [00:23:15] Just knock on the door.
Logan MacGregor: [00:23:16] Like, yeah, I have kind of that thought in my head of like, you know, this is a place where people share ideas and deal with cutting-edge science research and philosophy, and it’s one of the top research institutes in the world, and highly respected. And it’s respected because of what it does for humanity, and I think Change My View, obviously, we’re not a leading-edge research institution, we’re just a place where people can go and change views, but we’re also a decidedly human space.
Logan MacGregor: [00:23:43] And so what I think is so insidious about AI is it’s caused people to behave in ways that I don’t know we would have without the stupid thinking machines, because it’s a toxic influence. Unlike the bots that are kind of invading us daily and that we’re constantly shutting down as a sub, and, you know, Reddit admins are doing the same thing, this is a people-to-people relationship. We had to have a human conversation about what happened here between humans in this very public way that was covered by the media.
Logan MacGregor: [00:24:17] And that, I think, hurts a little bit more than just dealing with bots, because this wasn’t just bots. These are people interacting with other people, and there was a human element there. The researchers are real people, I’m a real person. This happened between real people, and it wasn’t just AI.
Patrick O’Keefe: [00:24:35] We both sort of teased at this, and I wanna just ask you outright: after this announcement was made by the moderators, sort of revealing these efforts to the community, how would you sum up the reaction from members?
Logan MacGregor: [00:24:47] I would say collective outrage would be the prevailing theme that I see in the comments. I think it was a unique and singular violation of the ethos of the sub, and I think it was especially palpable because there are a lot of researchers and research-affiliated people that are fond of the sub. And so it seemed like, we protect national parks, we have national monuments, these protected spaces, and it almost felt like it was on that level. You know, of all the places to do this, why Change My View?
Patrick O’Keefe: [00:25:25] It was the name. They were drawn to it. We wanna prove persuasion; what speaks to that more than Change My View?
Logan MacGregor: [00:25:31] They kind of said as much.
Patrick O’Keefe: [00:25:33] I think you got picked on because of your popularity and your namespace.
Patrick O’Keefe: [00:25:36] I wish this situation hadn’t happened. I always wish every situation like this wouldn’t happen, every abuse of online communities that I’ve ever seen. But one thing that is, I guess, a small positive is, I think it does often help community members to see the benefit of moderators and what they do behind the scenes to keep things human or to keep things healthy.
Patrick O’Keefe: [00:26:01] And I know there were a lot of comments that were very supportive and positive about the moderators, who are, again, all volunteers, as is the case with 99.9% of online communities, which are not Facebook and not these big platforms. There was like a palpable sense of appreciation, which was nice.
Logan MacGregor: [00:26:17] It was nice, but we did get some hate in modmail, so…
Patrick O’Keefe: [00:26:22] Oh gosh, okay. Well, it’s never 100%, right?
Logan MacGregor: [00:26:25] It’s never 100%, but you know, I think that was appreciated. And, you know, there was a level of thoughtfulness there, because it was weird being a mod and seeing people talk about, you know, whether the mods should have disclosed to the community sooner, whether the mods were right in selecting these specific demands of the University of Zurich, you know, whether the University of Zurich was correct in telling us to pound sand.
Logan MacGregor: [00:26:50] It was interesting because instead of just removing comments when somebody violates a rule, people were asking questions of the mod team, and it was a level of engagement that I found really nice.
Logan MacGregor: [00:27:01] People were asking questions, you know. We had shared all the open accounts, the ones which were still active at the time. They’ve since been removed by Reddit.
Logan MacGregor: [00:27:09] And, you know, one user wanted to know, like, what about all the closed accounts? And so I listed those. And, you know, how do you know it was this group? How do we know the researchers are real? And so, you know, we were able to have that conversation. And then there were a lot of questions about the mods’ decision to protect the identity of the researchers.
Logan MacGregor: [00:27:30] I think that that was the biggest thing that people challenged us on. Frankly, it was something the mod team wrestled with, but I think ultimately we decided to protect their safety.
Logan MacGregor: [00:27:42] And I think that that was the correct decision. I personally removed a death threat from that post. And so that kind of solidified that given how hot the issue had become, it was the right decision to protect their identities. And I know that that’s a controversial take, but I stand by it.
Logan MacGregor: [00:28:02] And I do think they meant well, and I’m happy that they understand the mistake that they’ve made, but I certainly don’t want any harm to come to them or their families. And that’s part of the difficulty of being a moderator, is you have to sometimes make these really hard calls and they’re not going to be universally accepted.
Patrick O’Keefe: [00:28:22] Yeah, I mean, I’ve had many a talk with people about the idea that moderation is only fair if it can protect people who you don’t necessarily always agree with. If moderation is only for the people that you agree with, then it’s really not fair moderation. The researchers have now apologized. They won’t publish this research. They pledged to follow stricter ethics, all of this following, sort of, all this attention. I guess you’ve kind of touched on this already, but I just wanted to ask it outright again. What’s your reaction to that?
Patrick O’Keefe: [00:28:48] After all the time that the mods had to spend, right, kind of wrestling behind the scenes, getting nowhere, to going public with it, to then seeing things change because of that pressure. I know you have a fair perspective and I think you give them a high degree of benefit of the doubt. So I’m sure that’s sort of where your answer goes. But like, what’s your reaction to sort of this apology and what’s happened now in the last couple of days?
Logan MacGregor: [00:29:09] Well, I can tell you how I personally feel. Like, it reads like something that’s been run through a communications person or office.
Logan MacGregor: [00:29:18] So, you know, I don’t know how much of what the researchers really wanted to say, the University of Zurich let them say, because it definitely reads like something that’s been filtered through communications people. And I think that that’s kind of unfortunate because obviously it’s a necessary evil of our society. I would have preferred to have a more genuine conversation with the researchers.
Logan MacGregor: [00:29:40] And it just feels like they were muted a little bit. But it did seem like we had finally gotten a little bit of closure in that we had, you know, something that we could push out to the community. The researchers had promised everything that we wanted of them. I think we still have some concerns with the University of Zurich and the funders. And I don’t know how those are going to get resolved.
Logan MacGregor: [00:30:02] But from the perspective of working with the researchers, and again, I can’t speak for the mod team because we’ll have to talk about it, the two biggest things were the apology and the promise not to publish. There are still some questions that I think need to get answered. Like, you know, is the data actually going to get fully deleted?
Logan MacGregor: [00:30:19] You know, I think there’s still some questions about, you know, are we going to be able to notify all the users that interacted with these bots? And so I think I would best characterize it as the first big step towards resolution. Like, I don’t think we’re done figuring out what all this means for the community.
Logan MacGregor: [00:30:40] I certainly don’t think we’re done doing everything that we need to do to protect our community from inauthentic content. But I think that this is starting to inch towards closure. And again, I don’t think we’re quite there yet, but it was a big step. And I think it’s important to recognize how far the researchers came.
Logan MacGregor: [00:31:00] Because, you know, initially the researchers and the ethics commission basically told us to pound sand, that suppressing publication wasn’t proportional, I think was their word, to the value of the research. And so they wanted to proceed with publishing it.
Logan MacGregor: [00:31:16] And they were like, yeah, we’ll look at strengthening ethical safeguards in the future, but this thing is still going to go forward. And I’m glad that they finally realized why this can’t be published and why the apology was necessary. And I think that that part was genuine, despite the communications people filtering it.
Patrick O’Keefe: [00:31:32] After getting beat over the head with it, they finally decided it was time to back away. It’s interesting, when you cause harm, it’s messy, you know? I think that speaks to sort of the complication. But one thing that you said is sort of where I want to end our conversation, which is just, you know, this idea that, as I described, manipulation in online communities has happened since the very start. It’s only gotten worse. There’ll be more stories like this that come out.
Patrick O’Keefe: [00:31:56] There are things happening without permission from bad actors all the time. You know, we know it. We know this happens, but it’s probably never been easier for people to operate at a high level of both believability and dishonesty, and with incredible speed and volume. It’s a real pressure on everyone who builds online community spaces, and online community spaces are inherently human spaces. It’s a big challenge for the huge platforms, but they also have a lot of money. It’s a real challenge for, again, that 99.9% of online communities.
Patrick O’Keefe: [00:32:28] That is a teenager who’s just building something about a community, about a thing they love-large subreddits powered by volunteers. That’s where there’s this is really hard. It’s most challenging, I would say, in a lot of ways, and there’s this great potential of erosion of trust for these spaces and them being human. It’s a big question. So has this changed your approach to, kind of- or what you think the approach would be to building a space that people can trust, or sort of: what do we do moving forward to build spaces that people trust? What do you take away from this whole thing in that vein?
Logan MacGregor: [00:33:02] Oh yeah, that is a big question. I think about something like Change My View as something that definitely needs a higher level of protection than your typical online space, because it is decidedly human. It’s one of the few places where civility rules are enforced. It’s a place that offers an exceptionally rare opportunity where people can actually change their views. So much of the world is filled with echo chambers, and this is a place that’s not an echo chamber.
Logan MacGregor: [00:33:37] This is a place where you go and you post your post and you say, hey, I don’t know if this is correct or not, I wanna hear other views on this and I’m willing to change my mind, and that is absolutely special. And so it almost feels like critical infrastructure. And one of the things that I know about critical infrastructure is that there’s a trade-off between accessibility and security. The more that you harden spaces, the fewer people can get into those spaces.
Logan MacGregor: [00:34:12] And one of the things that I worry about when it comes to AI is that it’s probably going to chip away, in a way that I don’t think any of us are really happy about, at this idea of even the feasibility of having protected online spaces. Because if in-person conversations are the only way that you can validate that you’re not talking to a robot, then this thing that we created called the internet, it’s gonna cease to have value at all.
Logan MacGregor: [00:34:49] That’s the fear, and I have hope that we’re going to be able to figure out a way to get past that challenge, but I’m scratching my head as to how we would do that, and the true tragedy in this whole piece is that the very people that I think are best equipped to help us navigate that space are now distrusted because of this experiment, and we need to heal that, and I don’t know how that’s gonna happen.
Logan MacGregor: [00:35:19] I really don’t, but if I could wave a magic wand, I would say that we are all now inextricably linked, as humans, to the trauma that this experiment caused, and I think it’s incumbent upon all of us to figure out a way to move past it.
Patrick O’Keefe: [00:35:35] Well said. Logan, thank you so much for joining me today. It’s been a real pleasure and I appreciate you sharing with us.
Logan MacGregor: [00:35:41] Thank you. Pleasure to be here.
Patrick O’Keefe: [00:35:42] We’ve been talking with Logan MacGregor, volunteer moderator on the Change My View subreddit. Logan was quoted in articles about this story by the Washington Post and Science, both of which we’ll link to in the show notes. For the transcript from this episode, plus highlights and links that we mentioned, please visit communitysignal.com. Community Signal is produced by Karn Broad. Thank you for listening.
If you have any thoughts on this episode that you’d like to share, please leave me a comment or send me an email.