Part 1: with Siva Vaidhyanathan
AG: Siva, could you just start by giving us a quick bio: how you’d like to identify yourself to our listeners?
SIVA: Yeah. I’m Siva Vaidhyanathan. I am the Robertson Professor of Media Studies at the University of Virginia. I’m the author of Antisocial Media: How Facebook Disconnects Us and Undermines Democracy.
AG: And so the first question, which I meant to ask earlier and which we’re asking everyone, is: what does being online feel like to you in 2020? It can be a description, a metaphor, a word. What does being online feel like to you, Siva?
SIVA: Normal. And that’s largely because there is no online/offline distinction anymore, right? I have a device that is connected to AT&T that sits either on my body or within arm’s length of my body every minute of almost every day, right? And so it might as well be my body, it might as well be part of it, right? We are all cyborgs in that sense.
AG: Awesome, thank you. And so I think that we’ll end up probably covering your work across multiple domains but I think that we’ll most likely end up focusing today on the book that you just named, your book about Facebook. So I’m wondering if you could just for context give our listeners kind of a quick summary or elevator speech about what Antisocial Media is about?
SIVA: Yeah. So as I was looking around the world between about 2014 and, let’s say, November 2016, I noticed a couple of disturbing trends. One of those was that politics, electoral politics specifically, seemed to be getting more Facebook-dependent around the world, and you know what, the United States was not the leading indicator of that. I was watching the rise of the BJP in India, a sort of neo-fascist, religious-nationalist party that had been in the opposition for many decades but has pretty much run India for most of the last two and a half decades, and most recently, for the past five years, has really run India with an iron fist. And the BJP rose along with the popularity of Facebook and the popular dependence on Facebook as a mode of personal communication and as a source of information about the world. So I was looking at that. I was watching the rise of Rodrigo Duterte in the Philippines. I was watching the ways that a lot of the post-Soviet states were fraying and their fragile democracies were being torn apart, and a lot of it was happening because of what was happening on various social media platforms, specifically Facebook, which is the largest. So I had been watching all of that. I had been teaching classes on privacy and surveillance, and for that I was collecting a lot of scholarship about Facebook for use in classes. And then November 2016, like it did so many people, shook me up, and I was trying to figure out what possible use I am in the world, what good I am and what I could contribute, and one of the things I realized was that I needed to tell this story of how, globally, we had become so dependent on Facebook for so much. And I wanted to look at all the ramifications of that, not just the ways it undermines democracy, for which I think there is a very clear case. I think we also needed to concede the extent to which people have a personal relationship with and through Facebook, and that that is valuable.
So at this point two and a half billion people use Facebook regularly around the world. They are not dupes. They are not fools. They get something of value out of it. So I wanted to explore both of those factors. The fact is, Facebook is terrifically valuable for individuals and yet terrible for us collectively. Not unlike my car, right? My car is really nice for me: it makes my life better, more convenient, makes everything easier. I’m a big fan of my car; it has all the features I want. Our cars collectively are terrible for us.
SIVA: So that was one conclusion. I think the stronger, larger conclusion, the takeaway I wanted people to have from the book, was that while Facebook is perhaps the best tool we have ever had for motivation, whether that’s personal or hobby-based or political, it is about the worst tool we’ve ever had for deliberation. And democratic republics need tools that foster both motivation and deliberation. We’ve gone all in on motivation; we need to do a lot more for deliberation.
AG: That’s great, thanks. And we’ll definitely get into a lot more of those specifics; in particular, those are two really useful terms to think with: motivation versus deliberation, how they’re different, how they work together, and how Facebook has sort of splintered them. But I actually want to start by zooming out quite a lot and thinking about Facebook’s relevance even to people who don’t actually use Facebook. You just said that there are about 2.5 billion Facebook users, but of course there are about 7.7 billion people on the planet, so there are a lot of people who don’t actively log on. I think we’re also seeing an increasing generational divide, where older folks are on it a lot but Millennials, myself included, have either deleted our profiles or don’t use it as much, and I’m pretty sure that to Gen Z Facebook is majorly uncool. So essentially there is a sector of people in the world who are not actively going onto Facebook.com all the time, but I’m guessing that you would argue that Facebook still matters to those people because it’s shifted the world in these really big ways. So can we start by addressing those people?
AG: And I’m curious what you would say to those people: if Facebook doesn’t apply to my day-to-day life, why is it important for me nonetheless to understand the influence that Facebook has had on the world?
SIVA: Okay. So let me start from the big number 7.4, 7.5 billion people around the world. Subtract 1.5 billion of those who live in China who can’t get access to Facebook, right?
SIVA: So at that point we’re down to 5 billion, right? Subtract another 2.9 billion who do not have the means or the technological foundation in their societies to engage with digital technology in any way. These are the poorest of the poor. They are concentrated in Sub-Saharan Africa, in parts of Asia, in parts of South America, but scattered around the world, including many millions in the United States who just don’t have the means. Understand that that number is actually shrinking pretty quickly, maybe not this month, but it is shrinking pretty quickly as the cost of access to digital technology, specifically phone-based data, drops consistently. And Facebook has a lot to do with that, right? So at that point we are close to 3 billion people. Now, 3 billion people happens to be the number of regular users of all Facebook products, so not just what they call Blue, the mothership of Facebook, but Instagram and WhatsApp. So of the top six, the six most powerful, most popular social network platforms in the world, four are owned by Facebook. They are, in order, Facebook, Instagram, WhatsApp, and Facebook Messenger. The only two that are not owned by Facebook are YouTube, which is owned by Google and is number two at 1.9 billion, and WeChat, which is the most popular and powerful social network service in China, right? So there’s no direct competition between WeChat and Facebook. Understand that most people who use Facebook also use Instagram and also use WhatsApp, and almost everybody who uses WhatsApp and Instagram has a Facebook account. The fact that they’re all one company means that this generational divide, well, let me put that to the side. First of all, I reject all generational labels; I think there’s no actual empirical support for any of those distinctions. But I would say that one thing we have seen is that younger people in North America have been deferring their Facebook registration and use.
Deferring, not avoiding, right? Because 14-year-olds tend to become 25-year-olds; it’s something that happens, right? And Facebook is of more use to people who have loved ones in other cities, cousins who are getting married, high school friends who now live in other countries, and that tends to happen later in life. It is of more value to a grandparent than it is to a college student, without a doubt, and that’s because of how we’ve arranged ourselves and our activities on Facebook and the variety of features Facebook offers. When you own a home in a neighborhood, Facebook becomes really valuable as part of staying in touch with your community. If you’re part of a hobby group that requires distance or is best done over distance, Facebook has more value. So really, this idea of a generational lag applies only in North America. We’re talking about a very small slice of the world.
14-year-olds in India, 14-year-olds in South Africa, 14-year-olds in Kenya are signing up for Facebook as soon as they can because they need it. It’s not just about doing Snapchat, which by the way has fewer than a billion users and has plateaued, and posting stories. It really is about being able to navigate your community and your family.
AG: Yeah, absolutely. I definitely take all those points, and part of what I was trying to get at was the way that Facebook as a corporation has had these kinds of ripple effects on the way that other media organizations operate, so we’ll definitely get into that a bit when we start to talk about Facebook’s business model and the way it’s changed advertising and all of that. But thank you for that clarification.
SIVA: I would add just one other thing to that, right? So even if you don’t have a Facebook account Facebook’s tracking you. So if you’re using any other service on your phone, if you have any Facebook owned apps like Instagram or WhatsApp Facebook is tapping into your phone, into your address book. They have a dossier on you. They know everything about you and they know everything about everybody like you. So it almost doesn’t matter if you’re not using Facebook, Facebook wins anyway.
AG: That’s good to know. So my mom, who has proudly never had Facebook: if she’s communicated with me on WhatsApp, she’s still being tracked, essentially?
SIVA: Yeah. Absolutely.
AG: I’m sure she’ll be comforted to know that. So, just picking up on what you said a little bit about the good of Facebook, right, about the way that, like a car, it does provide a lot of warm, fuzzy, convenient things for us. At the end of your introduction you write, “I’ve lived my life through Facebook. Facebook has been the operating system of my life.” And I would love to hear from you a little more specifically: what has Facebook given you in a positive sense? How has it been alluring, something you’ve kept coming back to over your life, even as you’ve obviously started to learn about some of its darker aspects?
SIVA: Sure, sure. I think, like anybody, just being able to keep up with family I’m a bit distant from, family who are not in the primary favorites list in my phone, right? Cousins, second cousins, nieces and nephews: knowing that they graduated at the top of their class, that they’ve got a new puppy, right? Those are good things. Those are the things that Facebook has made easier to do than ever. We lived fine before we had that constant flow of knowledge, so it’s not like it’s crucial to us, but we would miss it if we didn’t have it, right? And so that’s really valuable. We all have anecdotes about the moment that Facebook meant something to us, made something just a little bit easier. I can tell you a story about how, just in a local sense, a few years ago at the gym that I belong to, a handgun fell out of a person’s bag in one of the yoga classes, and there was immediate uproar among the members: why is there a handgun in someone’s bag? And it turned out that the management had changed the policy, basically allowing concealed handguns in the building, and none of us had been informed of this. There’s childcare in that building, and it’s yoga, right? It’s yoga; why do you need a weapon in yoga, right? It seems very anti-yoga, and not very om. So a lot of us immediately started organizing using Facebook, and within two days we got our gym to change the policy. Now, prior to Facebook, in the absence of Facebook, that organizational process would have been harder. Not impossible; we probably could have done it in a week with email and phone conversations and petitions and maybe a protest outside. As it turned out, this barrage of Facebook activism, which made hundreds of people call the office and complain, changed the policy rather quickly. Again, Facebook was not necessary, but it really helped and got the job done.
Now, if I happened to be a member of a white supremacist organization, I would have had just the same amount of ease organizing a political action that might have scared the hell out of some people. It just so happens I was against guns in that moment, which was kinda good. So I don’t want to oversell this as a progressive process. It does mean, though, that if you want to get something done, Facebook is terrifically powerful. Now, I can add one more thing: my father just passed away, and I announced his passing on Facebook, as people do these days. I have found it to be a remarkably powerful and, I don’t know, “pleasant” is the wrong word, it’s hard to use a word like that in the wake of losing someone, a warming process. Facebook has allowed so many of my friends to express their condolences and their appreciation and their love for me and for my family without having to text me or call me, and, you know, I love the texts and calls I got, but I’m glad I don’t have 200 of them; I’m glad I have ten of them, you know? And that’s nice, right? It’s a nice way to manage these moments, and it let me write a few sentences about him. There aren’t a lot of platforms, there aren’t a lot of means of communication, that allow you to do something so personal and yet so public, to manage the public side of mourning quite so well, and to really honor my father in a very effective and straightforward way. And so at this very moment I’ve spent much of the day on Facebook because I’m getting so much love out of it, and I know my sisters are experiencing the same thing, and my mother’s experiencing the same thing. Again, before Facebook, when I would lose a loved one it would feel like a lonelier experience, right? It’s how we did things for centuries, so it clearly wasn’t that bad. But it’s kinda nice.
AG: Yeah. Well, again I’m sorry to hear about your father.
SIVA: Thank you.
AG: And it’s strange, I actually had an extremely similar experience of that today too. I lost a friend recently and did a similar thing where I posted, and I was looking through the replies today, and there is a strange way in which, and you write about this in your book, Facebook has changed our understanding of quote/unquote “friends” to also include these very loose ties, and it can actually be really nice in those moments. You know, I had comments from my high school French teacher and from someone who babysat my brother and me when we were kids, and in a sense those messages are so meaningful too, because it’s not just my mom and my best friends; it’s, wow, actually there’s this person who remembers me from high school French, and from when I was eight, and who is thinking of me and who read that thing. So I think those gestures of public mourning are a perfect encapsulation of how Facebook can be really intimate and consoling in those moments.
AG: So we’ve been talking about the warm and fuzzy aspects of it, but I think that to understand some of the darker aspects, as you show really clearly in your book, we really need to understand the business model, right? When we’re on Facebook we experience it as a surface, right? We see what we see, but we’re not really thinking about why material is being presented to us or what Facebook’s incentives are for doing so. So to move into this territory, I’m wondering: can you just briefly explain Facebook’s business model? When we’re flitting around liking things or commenting on really serious posts like the ones that you and I just described, how is Facebook making money off of us? What’s its business model?
SIVA: Right. Look, if you watch an episode of Mad Men you will see what advertising used to be, right? Mad Men is faith-based advertising. It’s storytelling, right? It’s this process of: women in America really want this bra, and they want this bra because they want this life, and this bra will lead to this life, and we’re going to present a series of images and words and stories that promise fulfillment in life through this bra. … So that sort of stuff was common in the late 20th century because every car dealer and every shoe store and every restaurant and every movie theater had to buy ads that might or might not have worked. Google and then Facebook created an advertising system that has three features. Number one, you never pay for an advertisement that does not generate a click of interest, right? So there’s actually something to measure. You don’t have to pay for ads that hit people’s eyes and don’t make them do anything. Number two, the price of that ad is based on a complex algorithm and auction system. And number three, the ad is always targeted to someone who has already expressed interest in that subject. So.
SIVA: If I run a company that sells ostrich-skin cowboy boots, and my choice is to buy a quarter-page ad in the local newspaper (let’s say I’m based in Ft. Worth, Texas, a city that has a lot of cowboy boot wearers per capita relative to the rest of the country), I could buy a quarter-page ad in the Ft. Worth Star-Telegram, or I can go on Facebook, and Facebook will help me identify the very sort of person who might buy ostrich-skin cowboy boots, which by the way run between $400 and $500 a pair. So they are a niche product, and … Facebook and Google are the best ways to do it, because they won’t waste your money trying to find people who won’t buy ostrich-skin cowboy boots.
AG: Right. And you also show in your book, just to add something, that you can also filter people out, right? Because you wouldn’t want to be advertising them to –
SIVA: That’s right.
AG: — vegans, you know, or anti-leather crusaders. So the degree of microtargeting that the filtering makes possible is what’s really unprecedented.
SIVA: That’s right. In addition, you can use that advertising system to narrow your audience to highly specialized groups by picking attributes. You can even pick gender and religion and political attitude, in all these different ways. So if you’re trying to advertise this podcast, for instance, you can go onto Facebook advertising and find people with a certain level of education, or people who have expressed interest in certain subjects, and very accurately find an audience of 3,000 to 5,000 people who might benefit from hearing about this podcast. Now, the other way to do it, the way that most political campaigns do it and many products do it, is with an email list. If I run a boot shop in Ft. Worth, I’m going to have an email list of customers who have come in, and it might be 200 people long or 2,000 people long. I can take that email list, put it in a spreadsheet, and upload it to Facebook, and Facebook will generate what it calls a lookalike audience: people with the same attributes as the people who are likely to walk into my shop. I might start with 2,000 email addresses, but Facebook could generate as many as 10,000 more names for me, maybe more, around the country, and then I can go directly after those people regardless of whether they live in Seattle or Maine, and they are much more likely to want my boots than someone I just peppered, or someone I even attracted demographically.
AG: Right. So if we’re talking about the advertising aspect of it, someone could listen to what you’re saying and say, you know, that’s great, everyone wins, right? I get marketed just the stuff I want, and companies can be more efficient in how they allocate their advertising budgets. So I’m wondering if you can help us understand how that translates into some of the things that start to feel disconcerting, which you write about at length in your book, whether it’s our filter bubbles, our confirmation bias … our information ecosystems being fundamentally shaped in a new way by Facebook.
SIVA: Right, right. All these years, growing up in the wake of the fall of the Berlin Wall, I kept hearing that capitalism and commerce were not only compatible with democracy but were sort of co-evolutionary, right? They were running together, and as soon as we introduced market economics to China it would become democratic, right? I think we’ve pretty much unraveled that completely, and Facebook shows it really strongly, as does Google. So what is great for selling ostrich-skin cowboy boots, which is also really great for selling Donald Trump or Joe Biden, is not necessarily good for democracy. What makes commerce more efficient and advertising more efficient is actually counterproductive for democracy, because democracy demands different things. We in a democratic republic want people to be able to hold their candidates and their political organizations accountable. So, not to idealize the era of television and radio advertisements, but let’s say I’m in Charlottesville, Virginia, which is where I live, right? And there are a lot of small businesses, at least there were a month ago, and I’m running for mayor or for city council, and I decide that I want to unfairly target my opponent as a shoplifter, or as pro-shoplifting, right? And I decide to run radio ads or television ads saying, you know, Annie Galvin, my opponent, is a notorious shoplifter, and I put those ads up on television and radio. Well, the next day Annie Galvin and all of her people know about it, and they’re immediately striking back at me. The local newspaper, the local radio, they’re all coming after me for making this thing up. There’s some accountability there. It may not be effective accountability, but we can actually have that feedback system, right?
But if I were to do that with Facebook ads and only target small businesses and say, you know what, you should watch out for this Galvin, we’ve got a problem here, some shoplifting issues, right?
AG: Yeah, yeah.
SIVA: Those ads would disappear; they’re ephemeral. Only the people who might be swayed by those ads would even know they existed. I could deny them. That in itself is bad, and we know that the Trump campaign in 2016 poured out thousands of highly specific targeted ads at African American communities around the country, especially in Pennsylvania and Michigan and Wisconsin, trying to reduce enthusiasm for Hillary Clinton, and all you have to do is knock a few thousand people away from voting on Tuesday and, lo and behold, you can swing a state. There were ads targeted very specifically at men of Haitian descent in South Florida, reminding them that Bill Clinton had gone to Haiti after the last earthquake and promised all these reforms, and all that they got was cholera. That was a pretty powerful message to send to men of Haitian descent who might already be reticent about voting for a woman for president. And again, all you need is a few thousand. Florida went to Trump by about 110,000 votes. The entire Electoral College was settled by three states, Wisconsin, Michigan, and Pennsylvania, by about 78,000 votes combined. You move a thousand votes here or there with a small targeted ad to a specific segment of the electorate, and all you have to do is convince them not to vote for the other person, or get a thousand new people to vote for you, and who knows, you could swing a state like Wisconsin, which was razor thin; less than 1% decided Wisconsin, right? So that kind of stuff is really bad for democracy, not just because Trump won, that’s a whole other story; it’s bad for democracy because our candidates, in their communication with voters, should be accountable to criticism and response. So that’s one of the big problems with the way this advertising system undermines democracy.
AG: Right. Yeah, and that was a revelation I experienced from your book. A lot of the discourse around Facebook and the election was about how these bots and trolls had fiercely motivated people to actively do something, but the idea that it’s so easy to dissuade people from doing something like voting, that it’s in a sense easier to just nudge people to stay home than it is to get them to throw their support behind someone, and that that dissuading nudge can be just as consequential: I thought that was a really interesting, slightly different frame for looking at Facebook’s effect on the election that I hadn’t thought about before.
SIVA: Yeah. It’s also important to get beyond an election, right? Elections attract our attention because they matter. I mean, Donald Trump’s president and the whole world’s falling apart as a result, so it’s real easy to go, oh my God, what happened, and tell the story of Facebook through Trump. And in this book I tried to do bigger stuff than that. I think the long-term effects are what we really should worry about. Had Hillary Clinton run a better campaign on Facebook, she might be president today. Maybe Joe Biden will get it together and he will be president. It won’t change the fact that Facebook is bad for democracy. It won’t change the fact that no matter who works Facebook effectively, ultimately Facebook guides our media ecosystems, structures the ways that television and other forms of media decide how to present stories, what to present, and what emotional cues to trigger. All of that is changing because of Facebook. All of that undermines our ability to deliberate soberly and maturely about the problems we face. And ultimately, all of those shenanigans we saw in 2016, the Russian interference, yes, it was hoping to move people toward voting for Trump or against Clinton, but the larger goal, which is their long-term game (and it’s not just Russians; there are lots of domestic forces trying to do this too, and we see it coming from the White House right now), is to just make us give up. Make us give up on the potential of actually using democratic politics to solve problems and enhance people’s lives, even among people who differ, right? And that is what we’re really in danger of losing, and we see it happening in the Philippines, and we see it happening in India and Brazil, and we see it happening in the United States and Hungary and Poland, and it’s really bad.
AG: Yeah. I think that’s a really key point, that it’s not necessarily about Facebook tipping the scales in a way we like or don’t like, but about these bigger institutions being lost. And it’s interesting, what you were just saying, because I feel that I’ve been a little bit of a victim to that: when I hear you say terms like “democracy” and “deliberation” and “civic participation,” those terms have actually started to feel very abstract and almost undefined. So I would love to ask if you could really specifically pinpoint, what are the top, let’s say, three qualities of democracy? What do you see as the democracy that is being lost to Facebook? Three might not be enough. But I’ve forgotten what democracy is, apparently, so –.
SIVA: Well, look, at its most abstract, democracy is a system of government that is responsive to the will and concerns of the voters. … Deliberative democracy, or republican democracy essentially, is the notion that we can have institutions that help us convene and vet our differences and our disagreements and work toward something, some response that makes sense to a problem. So one of the examples I like to bring up when I give talks about this is that in 1969 the Cuyahoga River outside of Cleveland caught on fire, and it wasn’t the first river fire of the 20th century; in fact there had been many. But in 1969 the country was ready to hear and think about it and ready to be appalled by it. Most importantly, it was the last river fire of the 20th century. And one of the reasons it was the last river fire of the 20th century is that people cared enough, so between 1969 and 1972 Congress passed the Clean Air Act and the Clean Water Act and started the EPA, and President Nixon, a free-market Republican, signed all of those into law. And we as a country took seriously the problem of water pollution and decided that even though many of us supported free-market responses, and many of us supported state-based responses, and others supported tort-based restrictions on companies that polluted, we had to do something. And we worked through those arguments. Congress worked through those arguments. We created the structure of the EPA … all of that happened because in 1969, for all of its faults, this country was able to talk about a problem in a mature and informed way without being flustered by sophistry, without being, as we would think of it now, trolled. People could actually argue it out in, I don’t know, “good faith” may be too strong, argue it out without being swept away.
But right now, like, that was one river on fire; right now the whole world is on fire, and we can’t get to step two, talking about the way to respond to it, because every time we want to talk about it we get swept away by sophistry, by trolls who want to make us prove, again, that the problem exists. And this is insanity. Facebook’s not the reason, but Facebook is part of the problem.
AG: Right. And that, to me, starts to echo what you were talking about at the very beginning, about motivation versus deliberation. … And I’m sure that when you have these conversations, the natural final question is: what do we do about it?
AG: You’ve written extensively in The Guardian, and a little bit in your conclusion and elsewhere, about the capacities we might have to regulate, or to design policy to break up some of these big companies. But I’d actually like to ask the question a little differently. If you could just wave a wand and change Facebook, make it look different, make it behave differently than it currently does, what would you change, from a design perspective and an operational engineering perspective? What would your ideal vision of Facebook be, one that would still allow you to grieve your father and see the nephew who was valedictorian, but that would also actually promote these values of democracy that you’ve described so eloquently?
SIVA: So if I worked for the company, I would have an imperative to continue to do what’s best for Facebook, and therefore I wouldn’t change much. I would actually keep it just as destructive, right?
AG: That’s so depressing.
SIVA: I mean, that’s the way it is, right? Nobody who works for these companies has an incentive to do anything against the interest of these companies. Which is why we have the state, right? We have the state to regulate and limit negative externalities, whether that negative externality is water pollution that starts a fire on the river or idea pollution that undermines democracy. … But if I had a magic wand, it would be a policy wand, and I would severely restrict the ways in which companies can track and keep data about our interests and behaviors. I would severely limit, and this would not be a First Amendment problem, what companies could find out about us and how long they could keep it, to the point where it would decrease their ability to respond to us through the News Feed, through the YouTube recommendation engine, and through their advertising platforms. This would have the secondary effect of making these companies poorer; they would have to fight harder. But it would level the playing field in the advertising world a bit. If we still have newspapers coming out of this depression, maybe they might have a little more of a chance to compete and build their own advertising systems, rather than have all the money rush to Facebook and Google. More than that, it would mean that no company like Facebook or Google could ever leverage a decade or more of private information that gives them an inherent and unfair advantage over any insurgent company. If you want to make the next Facebook, you can’t, because you don’t have a decade of personal information to build these services upon, and then tell advertisers, and then tell people, that they should spend time with it because it will be more satisfying; it just won’t be, right? So there aren’t a lot of policy tools we can use.
Antitrust is actually a lot weaker than people seem to think in this matter, and I haven’t yet seen a model of what an effective antitrust intervention with Facebook or Google would be. Data protection in the European sense is already starting to seem inadequate. I think we need a much bolder and more radical approach. In the political realm there is one reform I would love to see Congress put forth. No one’s really taken it seriously, and I doubt our Congress is going to take it seriously soon. But that is basically, and this would not be a First Amendment problem if properly structured: no campaign ad should be allowed to be targeted at any group smaller than the district in which the election is running. So if you and I were running against each other for city council, I cannot aim an ad at African Americans and exclude everybody else. I cannot aim an ad at women and exclude everybody else. I cannot aim an ad at small business owners and exclude everybody else. All of my ads must be structured for the electorate at large and must appeal to the electorate at large. Right?
SV: Making micro-targeting in political ads illegal should totally be legit. Now, who knows what this current Supreme Court would do with that, but whatever, you know, it’s worth a shot, because that would mitigate a lot of the problems with the ways political ads work. But again, that is only one small problem among all of the antidemocratic effects that Facebook has on the world.
AG: Yeah, I have not heard that one and I think that’s really interesting. It goes back to the point about accountability: at least you have to be accountable to a community outside the very narrow community that you’re able to target currently.
AG: So let’s get behind that. I just had two more quick questions that we’re asking all of our guests. I actually forgot to ask you the very first one, so I’ll ask you the closing question and then I’ll ask you the beginning one that I forgot to ask.
AG: But the closing question is: what do you think is the next big question that we need to be asking and thinking about as we study what platforms like Facebook are doing to society and democracy?
SV: Yeah… The fact is Facebook and Google and Microsoft and Amazon and Apple all want to be the operating system of our lives going forward. They’re not struggling anymore to be the operating system of our phones or our computers. That’s done. They want to be the governance of our thermostats, of our refrigerators, of our cars, of our clothes, of our eyeglasses, of our insulin pumps, right? They want to be embedded in every part of our lives. There is no online/offline distinction anymore, and everybody in Silicon Valley knows it, and they know that once you have data flowing through everything, everything is governable, and they want to have that monopoly power. … The big question for us is how can we defend ourselves against these big powerful oligopolies that are struggling to become the operating system of our lives? … I think that is the big challenge of the next two decades. I think our current public health and economic crisis makes that an even harder fight than it was just a few months ago. These companies are going to become more important. They may be the only ones standing in some fields, and it’s going to become more imperative to distill this argument as not about fake news and not about privacy per se; it is about the operating system of our lives. Is that operating system going to be our human societies, families, and individual minds? Or is it going to be Facebook, Google, Microsoft, Amazon, Apple, or some combination?
AG: Yeah, that’s great. Hopefully it would be more persuasive to frame it as such. At the risk of sounding melodramatic, it really is an issue of our humanity.
AG: This is all a question about the choices, the agency that we have to maneuver through the world and through our lives, and it’s not just the fact that we’re staring at our phones too much; it’s vastly expanded beyond that. So I really appreciate that.
Thank you so much for your time. This has been super fascinating, and I think it ties in with a lot of the stuff that we’ve been thinking about at Public Books and obviously the stuff you’ve been writing about. So we really, really appreciate it. Thank you.
SV: Oh, my pleasure.
[End of Recording]
Part 2: with Alice E. Marwick
AG (Annie Galvin) Alice, thank you so much for agreeing to be on our podcast. I’m really excited to speak with you, and so let’s start. If you could just give a short bio, a couple of sentences, how you would like to introduce yourself to listeners that would be great.
AM (Alice Marwick) Sure. My name is Alice Marwick, I’m an Associate Professor of Communication and Principal Researcher at the Center for Information, Technology and Public Life at the University of North Carolina at Chapel Hill.
AG Wonderful, thank you. And so I’d like to start with a question that we’re asking all of our guests: what does being on the internet in 2020 feel like to you? That can be a phrase, a word, a metaphor, a description, anything that captures the experience of being online in 2020 for you.
AM Kind of fun, kind of dull. I got that from Natasha Schüll, who has this great book called Addiction by Design, about video poker machines. The phrasing she uses is a little bit different, but it’s this idea that you are using things like video poker machines to sort of zone out from your regular life, that it creates this kind of interstitial space where the concerns of regular life don’t really touch you. But it’s not a fun space or an ecstatic space; it’s just kind of a null space.
AG Yeah, I think that’s a really good way to put it. And you know, we all sort of pick our poison, our platform or game of choice, but they all seem to have that kind of effect on us, so that’s great, thanks. And I think it would also help to give our listeners just a little bit of context and background about the work that you do. Your work as a scholar has covered a really impressively vast range of topics related to the internet, from social media and internet community to misinformation, privacy, and radicalization, and you wrote about openness in Wikipedia for Public Books. It feels like you have covered all of the hot topics around the internet. So, could you tell our listeners a little bit about some of your most recent projects relating to the internet?
AM Sure. So when I describe my work I say that I’m a scholar of social media, and basically anything about social media that interests me I feel free to delve into. Right now I would say that my research agenda has two branches. The first branch is critical privacy studies, and I’m working on my second book right now, which is called The Private Is Political: Networked Privacy and Social Media, where I’m looking at how the impacts of networked privacy violations are felt most deeply by people who are marginalized in other areas of their life. So I’m trying to integrate a critical theory of power into our understanding of privacy. And then my other set of interests has to do with disinformation, misinformation, and radicalization, which I came to in the run-up to the 2016 election when I was a fellow at Data & Society and started looking at media manipulation online. That’s obviously the new hot, sexy thing, and the privacy thing feels a little old school at this point, but what both of them have in common is that they are trying to look at the impacts of new technologies in a way that recognizes that those impacts are differential, and that if we want to understand technology, we have to incorporate theories of power that are drawn from feminist theory, critical race theory, and queer theory, because they help us understand how technology actually plays out in people’s daily lives. I’m not interested in making huge generalizations, like tech does A or tech does B, or Facebook does this or that. I’m interested in looking at both the positives and the negatives and the nuances, and recognizing that nothing is all good or all bad, and that when you actually get into how people use technology, you often find that the stories are more complicated than they might appear at first glance.
AG Yeah, I think it’s so interesting how you are bringing these areas from humanistic study together with technology studies, and this will come up over and over again in the conversation, I think. Technology, especially social media, is just kind of humans talking to each other, and so if we don’t think about the different intersectional experiences of being human, we’re really missing everything in a way. … So our larger question for this episode is: what is the internet doing to society? And in order to explore that, it seems crucial to understand how these platforms make money off of us, their users. We hear a lot about data in the news these days, how these “free platforms” like Facebook, Google, Uber, all of our apps, make money off the data they collect about users and sell to other corporations. But I think a lot of us, certainly myself included before getting into more of [unintelligible – 17:02]’s work, didn’t totally understand how this happens, how if I buy a baby shower gift for a friend, all of a sudden the internet feels like I’m expecting a child as well. So I’m wondering if you can walk us through the journey of a piece of data. If we focus on Facebook for a second: when I put a set of personal data into my profile, like my age, hometown, profession, relationship status, where exactly does that data go? Can you walk me through the journey of that data from my fingers striking the keys to me starting to see these targeted ads online?
AM So Facebook is famously opaque about what its data practices actually are, so a lot of what I’m about to say is my best guess based on things that I’ve learned over the years about how a large social platform like Facebook works. And Facebook is probably the best at doing this. So say I’m scrolling through Instagram, which Facebook owns, and I click on an ad for a fancy pair of slippers, right? I get marketed stuff like that all the time: yoga leggings, direct-to-consumer slippers, all those kinds of things. I’m a sucker for that kind of stuff and I buy a lot of it, so they already know that if you show me fancy slippers maybe I’m going to buy them. So I click through to the website and I complete the transaction. I’ve now bought slippers. So Instagram and Facebook, which share the same data backend, right, the same information and the same profiles about their users, now have another piece of information about me. They know the time of day that I clicked on the ad, they know how many times they showed me the ad before I clicked on it, they know what I was doing before and after I clicked on the ad. … So that piece of information about me, on top of what they already have, my age, my closest friends, what my closest friends buy, where I live, how my tastes have changed over the years, how much money I’m spending through all of these ads, etc., this new piece of information about these slippers gets added to that data profile.

Now, Facebook has made a big push over the last five years to try to integrate what you do online with what you do offline. … So what Facebook has done is invested really heavily in what is called onboarding, which is integrating your offline and online profiles. So if you go to J. Crew and you give them your e-mail address at checkout, and a lot of the time every store is going to ask you for your phone number or your e-mail address, that acts as a unique identifier that they can then match up with the data profile that they have on you on the internet. So they know how much money I’m spending at Sephora online, and they know how much money I’m spending at Sephora offline. They can look at these slippers that I bought, they can look at other purchases that I’ve been making on other websites. They have a fairly comprehensive understanding of what my consumer behavior is at this point. So then they are able to do predictive modeling to decide what are the things that they can show me that I would be the most likely to click on. And they do that by classifying me as a consumer in some way, so they probably have a consumer profile for me that is like, you know, upper-middle-class, 40-something suburban mom, or something like that, right? Interested in fashion and entertainment and books. They put me in that bucket, and then they are going to serve me ads that are similar to ads that other people who are like me have clicked on. Then they are also going to use the slippers ad and say, okay, well, what other products are like these slippers? You know, maybe they have other kinds of shoes, like Rothy’s or whatever, that they are going to show me. So they are using this enormous amount of information they have about me, and they are combining it with information they have not just about me but about everybody that I’m connected to, and about a large demographic group of people who are similar to me, to predict what I’m going to do when I’m online.
AG Right. Whoo, that is quite a journey, that, so it helped me understand better, yeah. Alice, when I was reading your work, one thing that I learned about from it that I didn’t know about really before was these third party data brokers, who, and please correct me if I’m wrong, yeah, basically act as kind of third party mediators between, or a third party marketplace between the apps that are collecting the data and other corporations, who can use that data for their marketing purposes. Can you tell us what a data broker is and what it does?
AM A data broker is a company that buys and sells personal data. They aggregate data from a huge variety of sources: public records like mortgage records, driver’s licenses, campaign contributions, in some states gun licenses, anything like that, along with any information that they can mine just by scraping social media sites. That gets combined with the consumer profiles that are created on social media sites or by different companies, and then they slice and dice all this data in a million different ways and they sell lists of people to different actors. So, say you are starting a magazine for cigar smokers or something, it’s probably a poor business decision in 2020, but say that’s what you are doing, and you want a list of people you think might be interested in it. You can go to a data broker and you can buy a list for that. You can also buy a list of people who are older, who have less money, who are maybe in financially difficult straits. If you are maybe doing a somewhat less legitimate business, like you want to send, I don’t know, lottery come-ons or commemorative plates or MLMs or some scammy thing, you can also buy lists of people that you think might be more likely to fall for those things. And then –
AG Ugh, great.
AM – so all this information is totally opaque; it’s really hard for people to see what information the data brokers have on them. Sometimes they sell this information to the U.S. government, even though some of it is information that the government isn’t legally allowed to acquire, and because none of us know what is in those files, none of us know when decisions are being made based on this data.
AM Because this data gets used in a huge variety of different ways, from policing algorithms to these databases that determine whether you are a good credit risk, or whether you might be a good employee, or even whether you should be accepted into college or not.
AG Yeah, and I definitely want to talk a little bit more about those issues in a bit, but I just wanted to follow up and ask: are these data brokers a new phenomenon that has grown up alongside the internet and the extremely fine-tuned micro-targeting that can be done, or were these a feature of advertising prior to social media?
AM They start with direct mail and junk mail solicitation in the 80’s, maybe even the 70’s, and then they get bigger and bigger and more concentrated the further you go on. So the modern ones like Acxiom are very much tied to the emergence of the internet. The ability to micro-target is so much more sophisticated than it was 30 years ago, right? You can decide that you want to buy a Facebook ad that is targeted to people who are between 25 and 30, who live in the Lower East Side and Murray Hill, who are into astrology, and who have a cat. You can go on Facebook and you can fill out all those fields, and you can say, okay, my new age pet emporium on First and Houston, these are my potential customers, right?
AM So the ability to collect all that information about people, and also the ability to be incredibly agile and nimble with it, to target really quickly and really precisely, is new because of the internet.
AG Yeah, okay. What really kind of blew my mind about what you were just saying is the way that our online behavior and offline purchases are starting to be more and more integrated. Because I think when we are online, we are sort of aware that someone is watching us, that we’re being tracked, but it is just crazy that I purchase things online and offline, and I really had no idea that that was happening.
AM Yeah, I mean, a couple of years ago I realized that virtually all of my students, who at the time were young Millennials and who are now kind of like the older Gen Z, all believe that their phones listen to them and that micro-targeted advertising is delivered to them based on the things that they say to their friends or their mom or whatever. And no matter how much Facebook says that is not true, and how often I hear from engineers that it is not technically possible because of the amount of data that would have to be processed in real time in order for that to happen, the fact that people think it is real is because these data practices are so unbelievably intrusive that we don’t even realize how much data is being collected. They don’t need to listen to us. They know where we are.
AG Yeah, they don’t need it.
AM They know where we are when we are walking around. They know who we are interacting with, right? If I’m chatting with someone on Instagram, Facebook knows that. If I’m tagging a friend in a picture that I took yesterday, Facebook knows where and when the picture was taken. It knows who is in that conversation. Yes, there are still enough uncanny coincidences that I still somewhat think Facebook might be listening, it’s really hard not to believe that sometimes, but it really is a function of the amount of information that is out there about us that is being collected without our consent, without our knowledge, and without basically any governmental oversight whatsoever.
AG Right, yeah, that notion that they almost don’t even need to listen to us is really freaky, you know, that these practices are so sophisticated already. Wait, what was I going to – oh yeah, another thing that I learned from your work about the data brokers: a lot of what we’re talking about when we talk about data sharing and privacy is consent, in a way, and it gets so complicated when there is that third party. Because maybe we think that we’re just giving the information to Instagram, right, and so we think, well, what’s the worst that can happen? You know, they’ll sell me more minimalist, basic clothes that I probably don’t need. But, you know, whatever, it is kind of this closed loop of commerce. That notion that the data could end up in so many places that we don’t know about I think is really kind of insidious. Why don’t we now get into some of your work, which you mentioned earlier in the conversation, about the way that privacy impacts different populations differently. I know that your book about privacy is coming out soon, and some of your recently published articles looked at how breaches of privacy affect different people differently, specifically women and lower-income Americans. So let’s start with gender. Can you explain what you mean when you say that privacy is gendered?
AM So when I say that privacy is gendered what I mean is that there are certain privacy violations that are more likely to happen to people based on their gender and that the impacts are going to be different based on gender.
AM So I coined this term in a study that I did of Celebgate, which was when a big trove of celebrity nudes, mostly selfies that were taken on cell phones, was leaked to Reddit. I was really interested in the ethical ramifications of this, because of the difference between the way that the people on Reddit saw this, which was a sense of entitlement, oh, if they didn’t want us looking at them nude, they shouldn’t have taken these photos of themselves, versus the women whose photos were leaked, who basically to a one said: this is a sex crime, this is a violation of privacy, this is sexist, this is misogynist. I was really interested in that disconnect. So what I did was look at all the comments on this subreddit called r/theFappening, and I looked at the way that they talked about these celebrities, these women. And what I found really backed up another theory of privacy that a bunch of people have been working on for the last couple of years, which is that in the United States we tend to think of privacy as an individual responsibility: you are the person who is responsible for your own data, and if the data leaks, it’s your fault, right? So if your password gets hacked, it probably wasn’t a strong enough password. If your phone gets hacked, you shouldn’t have taken those pictures to begin with. You shouldn’t have left your phone somewhere. You shouldn’t have been using a sketchy app or something like that, right? But what all this stuff ignores is the fact that these privacy violations happen over and over and over, that they are basically inevitable, that people are resigned to them, and that people use social media, or networked technologies in general, knowing that at some point there will be some kind of breach, right? No matter how careful you are.
There is no way to get around it, because these technologies intrinsically connect people together. … So there was this real sense of entitlement from these men to looking at these women’s bodies, and what we find is that there is this whole set of privacy violations, and this goes into a literature that a lot of other people have written about, on technologically-enabled sexual violence, where you have all of these things like location-tracking apps, for example, that an abusive partner might make their partner put on their phone, or doxxing, leaking nudes especially is a huge one, cutting and pasting someone’s head onto pornographic imagery and sending it to their boss, their friends, etc. There’s this whole set of privacy violations that are much, much, much more likely to happen to women. And when I stopped seeing these as isolated incidents of harassment and started linking them to privacy and safety, I started understanding to what extent this was a gendered issue. And a lot of this stuff goes for non-binary and queer folks as well; I certainly don’t mean to insinuate that this is only about cisgender women.
AG Right, right.
AM But it is anyone whose gender I think makes them vulnerable in a way. Their gender becomes what security experts would call an attack vector, which is basically a vulnerability that can be exploited. So if you have someone you don’t like for any reason, and that person is a woman or a non-binary person, then you can use their gender as a way to attack them.
AG Right, yeah, and when you are talking about that notion of men online feeling entitled to view women’s bodies, and also the kind of ideology of individual responsibility, it starts to sound like in-person sexual violations as well, right? These two ideologies dovetail together, and it is just interesting to see the way that it all feeds into the same issues that we were dealing with long before the internet.
AM Yeah, so I was really inspired by this strand of British feminist sociology that has been using this term, safety work, which is the work that women do to keep themselves physically safe in space. So, you know, holding your keys in your hand with one out like you are going to stab someone with it. Looking behind you, avoiding certain parts of town, not riding the bus by yourself at night, being on the phone with your friend when you are in the Uber, all these different kinds of things, right? There’s this whole spectrum, and a bunch of women in British sociology had written about this for years, and this woman, Fiona Vera-Gray, excavated this idea of safety work and brought it into the contemporary era by interviewing all these women about what they do to keep themselves safe in their daily movements around the world. And I found this really, really interesting: this is a form of labor that is unequally distributed, that women, and again, trans and non-binary people as well, bear the burden of in keeping themselves safe. And I started thinking: what if, rather than thinking about the ways we try to protect our privacy as a set of things that are always lacking, we think about it as a kind of work that we are always doing? And so in my new book I’ve coined this term, privacy work, to encompass all of what we used to call privacy-protective attitudes and strategies, and instead think about everything from having a password manager, to making sure you can’t see through your curtains, to hiding your Social Security number, all of these different things, as this sort of privacy work that, because there is no systemic protection against privacy violations, we all end up doing on our own time. And I found that both men and women engage in privacy work.
I don’t really see a big gender difference in terms of the types of privacy work that people do, but when people are very vulnerable, when their circumstances make them vulnerable, their privacy work tends to be more elaborate, just because they have so much more at stake if it fails.
AG Yeah, definitely. So I want to shift to some of the work that you’ve done around poverty, lower-income Americans, and privacy. You have a really interesting article, and I assume you are writing about this in your book as well, about how data mining and privacy violations have particularly harmful effects on low-income Americans, specifically in three realms: employment, so seeking jobs; college admissions, trying to get into college; and policing. Why don’t we focus on employment, because we could talk forever about all three of them. If a low-income American who has social media accounts is trying to get a job, how can the data that they enter on those social media profiles potentially be used against them?
AM So a lot of low-wage jobs use automated systems to hire … where people’s information is put into a database and they are looking for red flags, like, should I hire this person or not? Some of that is things like, do you have a low credit score, which I guess somehow has an impact on your employability; I don’t know why that would make you a bad employee. But one of the other things that they do is mine social media data, and so a lot of the time there will be these products that employers can buy that will go through your social media and say whether you are using curse words, or whether you are talking about drug use, or whether you are talking about guns, or something like that. So you are taking all this information that typically an employer wouldn’t have access to, you know, your private communications, your communications with your peers or your friends, and because it is on social media, a lot of the time it is accessible, and it is being looked through not by a human but by an algorithm. They will generate a score for you based on this information, like, is this somebody that is worth taking a risk on employing? And the problem with that is that since you have no idea that that is why you were denied the job, you can’t do anything about it. You can’t correct the record, you can’t say, oh, you got me mixed up with somebody with the same name, or, that’s a tweet that I made seven years ago, right, it has no bearing on my current life. So a lot of the time these systems will deny people jobs and people don’t even know that that is why they didn’t get the job.
AG Right, yeah. That’s interesting. And maybe briefly, could you touch on some of the ways that this is used in policing? I think people may have heard a little bit about this, but what is something kind of startling that you found from your research about how voluntarily putting data into various online systems can end up affecting how policing is done?
AM So predictive policing is a policing technique where, it’s almost like Minority Report, you are trying to figure out where the crime is going to happen before it takes place, because you want to be able to deploy your officers to parts of town where there are high crime rates. And we know that crime-rate tracking, like New York City’s CompStat system, has had an enormous impact on the way that different communities are impacted by police, and frankly by police violence, right? So predictive policing that uses big data is often using the same kinds of information, drawn from social media, drawn from public records, to determine a threat score for individual houses, individual blocks, or individual people. The idea there is, if the cops get a call from, you know, One Main Street, and they look at their dashboard and it tells them that there is somebody at One Main Street who is potentially very dangerous, they might go in there with all guns blazing in a way that they wouldn’t if they were going to a white suburban neighborhood, for example, where the threat scores are very low. But again, the problem is that there is no way to correct this information, and in fact a lot of this information is incorrect. … You know, it’s bad enough when you don’t get a job at Walmart because of something that an algorithm thinks you posted on social media, but it’s much, much worse when you get into the consequences of deadly violence being used, right? The stakes are incredibly high, and unsurprisingly the threat scores are very tied to socioeconomic class and race. So there is a real call by a lot of social justice groups and civil rights groups for more transparency on the part of police forces and localities and municipalities that are using these products, because decisions are being made about citizens and residents based on information that those people don’t have access to.
AG Wow, yeah, that’s all really interesting, and again, it’s so important to think about the way that these issues impact people differentially. And I’m wondering if you wouldn’t mind my asking about the current situation that we’re living in … I think recently in the news we’ve started to hear a little bit about how contact tracing is one of the methods the experts are mentioning as crucial to containing the further spread of the virus as more people return to public life, and it’s not very hard to imagine how GPS tracking on smartphones could be mobilized to help with this. So, you know, maybe it’s a good thing that this data on where we are and who we have been in contact with could be traceable. I’m just curious whether you’ve been thinking about privacy in the context of the pandemic and some of the tools that people are starting to talk about and even build toward this end?
AM So the problem with these kinds of apps, or these kinds of companies or initiatives that are trying to build these systems, is not necessarily the systems themselves, although generally we find that when people build systems, the same biases that those people and the society they live in hold get put into the systems.
AM The real problem is overreach and abuse, because we have very weak data protection laws in this country. We have very weak laws around what information the government can and can’t have access to, and we’re very bad about keeping information in silos … And in this situation, even if you have all this information about people’s interactions in a system that is supposedly for public health, right, or that has some kind of anonymity to it, we’ve seen over and over and over again that when you have all of these data points, not only is it extremely easy to identify people based on those data points, there’s also a huge push for that same information to be used by the government and by police, right?
AM So I don’t believe for one second that any of these apps would stay in the public health realm, even if the people who are designing them have the best interests of everyone at heart. Clearly, it does seem like contact tracing is something that is pretty necessary to contain the pandemic, but I just worry very, very much that once this information is tracked and once this information is digitally instantiated, it can be moved around and combined with all kinds of other pieces of information to do many things that are probably more nefarious than we might want to imagine. Like, think about the way that you could combine that information with immigration databases or with ICE databases to try to do targeted deportation raids, right? It is very frightening. So my worry here is that we’re opening a door. It is justified by saying, okay, well, this is a pandemic, people’s lives are at stake, we need to do this, but it is going to open the door to using this data in all kinds of other ways that we probably wouldn’t think were socially responsible or acceptable.
AG Right, yeah, and going back to that issue of us giving our consent to this kind of thing, we might think, well yeah, I’m doing this in the interest of public health, that’s what I’m agreeing to, but we don’t know where that is going to lead. So I want to just ask about one last issue before we start wrapping up, and that is your work on misinformation online. I know we’re kind of jumping all over the place.
AM No, that’s fine.
AG Because you’ve covered so much ground. So, you know, I think in the wake of the 2016 election, we all heard a lot about fake news, alternative facts, the spread of misinformation online, and there has obviously been a lot of pundit chatter about that. And in 2017, you and Rebecca Lewis published this really fascinating report with Data & Society called Media Manipulation and Disinformation Online, and we can definitely link to that. I follow political news pretty closely, but your report really yielded some findings that still surprise me, and I’m wondering, in the course of doing your research for that report, what were some of the findings about misinformation online that surprised you the most?
AM So we were doing a very specific set of qualitative observations. We were spending a lot of time in these kind of far-right or alt-right spaces. So we were on 4chan every day, we were on Gab, we were looking in different Discords and different, you know, white supremacist blogs like the Daily Stormer. We were in the fringey parts of the internet.
AG Yeah, you were deep in there.
AM Yeah, yeah, and what we found was that there are all of these narratives that are bubbling up in these really, really fringe spaces, and they are being strategically mainstreamed by the participants. And those participants are looking for vulnerabilities in social platforms and in media institutions in order to spread often watered-down versions of their ideas, but ideas nonetheless that are completely coherent with a project of white supremacy, right, or a project of creating a white ethnonationalist state. And, you know, I had been following a bunch of these fringe groups for years, like I find the men’s rights movement extremely interesting, I don’t know why. So I had some interest in some of these fringe groups, but what really stunned me was the extent to which they were successful in getting their ideas out into the mainstream in a way that really fundamentally changed modern political discourse around things like race, in a way that I don’t think I would have been able to predict before the 2016 election. Like, I didn’t think that anti-Semitism was going to become as mainstream again as we see that it is now, right, where you see anti-Semitic ideas being thrown around in a lot of different spaces online in a way that, you know, in my naiveté, I thought, okay, well, that’s a thing of the past, right? Or at least for those of us who don’t confront anti-Semitism on a daily basis, right?
AM So the sophistication with which these actors understood the way that modern media functions … you know, journalists who are super overburdened, they are doing like five different jobs, or their job requires writing ten blog posts a day so they are getting most of their sources from Twitter, or they don’t have the shoe leather to go out there and be interviewing people. They were falling for these hoaxes and pranks and trolls nonstop, which got the alt right so much more media attention than it should have gotten. And I see it happen over and over, like just very recently with these reopen groups. This is a tiny fringe number of people compared to the number of people who are staying home, you know, wearing their masks and wiping down their groceries. That is what most people are doing. These reopen folks are a tiny, tiny set of people, in some cases directly funded by political actors, and yet compare the amount of column inches that has been devoted to these people versus the amount devoted to, you know, your average person who thinks these measures are absolutely necessary and that we should continue staying home. It’s the ability to manipulate the media’s love for political spectacle and the contrarian idea, this idea that you have to see both sides of every issue, which is preposterous and at the same time really effective. So it was very dispiriting to me to see, over and over again, social media platforms and journalists playing key roles in amplifying these narratives, and social platforms being very inadequate in their response to this type of amplification.
AG Yeah, I mean, I think that is what struck me so much reading your report, is that, you know, maybe I’ve never been on 4chan or QAnon, I’ve certainly never read the Daily Stormer, but I could be a consistent reader of The New York Times or the Washington Post and essentially be seeing these conspiracy theories being given real legitimacy, and just sort of skewing my sense of what is actually happening out in the world. So that was super interesting and I appreciated that, yeah.
AM The other thing I’d like to mention is that often when disinformation or misinformation is discussed in the abstract, people seem to see it as like, oh, this is incorrect information, this is inaccurate information, this is information that is just wrong. But it’s not just that. It’s deeply ideologically skewed information, and it is almost always racist, misogynist, anti-Semitic, and xenophobic, and without understanding that, we miss the forest for the trees and we’re unable to solve the problem. Because if you can’t understand that racist disinformation is playing off of a 400-year history of American racism, then you are not able to use the tools that you need to stamp it out, right? Or to combat it. And so I’ve been very frustrated, because I’m always pushing this point that the way this stuff gets into the mainstream is through wedge issues that are more acceptable to a lot of people. So for example, when the alt right is recruiting young men, they start with anti-feminism always. They are always like, you know –
AG It’s the gateway.
AM Yeah, it’s always like, oh, women are so uppity, you know, why won’t a woman sleep with you, they all think they are so great, but women are enrolling in colleges at greater rates than men and men’s suicide rates are higher and what about fathers’ rights and yadda, yadda, and there’s this whole spiel, right? And when the alt right is trying to make inroads into mainstream conservative communities, they start with anti-Muslim, anti-immigrant attitudes. And in a lot of communities, anti-trans attitudes or anti-non-binary attitudes are where it starts. So you have these issues that are, you know, more acceptable, and that’s the way these discourses start, and then as you get more and more into them, you start seeing the virulent racism or the virulent anti-Semitism. And really, without acknowledging that and without acknowledging those connections to mainstream discourse, these are not solvable problems.
AG Right, yeah, and that feels like a genie that has been let out of the bottle and is just going to be really hard to put back. I mean, I do want to conclude by talking a little bit about regulation and asking whether you have seen any, you know, positive steps in this direction or ideas for how we can do that, but I think that at a certain point, when we’re talking about technology, we are sort of leaving the realm of technology and entering this more philosophical or psychological or sociological realm, where we are thinking about, like, what is wrong with people. It’s not like the internet created these problems. They are longstanding human and social problems. So I’m just kind of wondering, what do we do with that, right? We can definitely start talking about regulating Facebook and trying to strengthen journalism again, but it just feels like some of these problems exceed the internet, they exceed technology, and that feels like a harder nut to crack, or a harder thing to reckon with. How do you think about that in your work, sort of what is technology, what is human, and how do we work to solve some of the problems that those two things together have created?
AM Technology is human. It’s made of humans, it is made by humans, and humans use it to interact. It is social. It is used by people within social contexts. You can’t extract technology from society. They are so inextricably intertwined that one doesn’t exist without the other … yeah, I don’t know if I 100% buy that, but you know what I mean, like there is this inextricable link between technology and society. So if you are saying, okay, well, there’s a bunch of racism online, how do we fix that, let’s use technology to fix it, you can’t, because, you know, we have had a lot of very smart and very driven people for many years trying to eradicate racism and it is still here, it is still present with us on a daily basis. So the question is, how do we acknowledge that existence and then do the work within technology that we need to do … So, Europe has been sort of a leader in passing what I think they hoped were comprehensive data protection laws under the GDPR. Unfortunately, you know, I think those laws have a lot of great intentions. The way that they have been operationalized has not always been the best, but often when you are talking about overreach by social media companies, you see lawsuits taking place in the EU or in countries like Ireland rather than in the United States, because there isn’t a regulatory body in the United States that is willing to actually hold Facebook to account for some of these things. So for example, we don’t have laws about data brokers; they are basically unregulated. We don’t have laws about information from different aspects of life being integrated. What we do have is a sort of patchwork, where there are certain types of personal data that are highly regulated, like educational records with FERPA or health records with HIPAA, but we don’t have any kind of principles that would apply to all of those types of data.
So for example, we have something in place called, I think, the Video Privacy Protection Act (VPPA), that was passed in the 1980s under Reagan during the Bork hearings for the Supreme Court, because Bork was badly embarrassed by somebody going and finding out a list of everything that he rented from a video store and that becoming part of the hearings. So they then passed a law saying you can’t get somebody’s video records, like those are protected information. But at the same time, you can subpoena a cell phone company and get access to every single number somebody has texted without even telling them that that has been done. So there is no principle underlying it, it’s just this sort of patchwork of different laws. And we do need comprehensive data protection laws, but again I worry that we’re not in a regulatory climate where any laws that would be enacted would really be well thought through or would have the impact that is intended.
AG So, to wrap up, the last question that we’re asking all of our guests is what is the next big question that you think we need to be asking as we study the internet and sort of what it is doing to us as societies?
AM I have no idea. … Like, there are so many interesting things about the way that the internet functions. I’m someone who is always going to want to ask five or six questions rather than one. And that’s why I’m glad there is such a strong set of people working on different aspects of critical technology studies and critical internet studies, coming at it from, you know, computer science, from information science, from the humanities, from media studies, from communication, sociology, anthropology. Because this is such a comprehensive part of modern human existence, we need all the tools we have in our arsenal, every discipline, every method, in order to investigate it fully.
AG Yeah, that’s really well said, and I think it is exciting how studies of the internet have become so interdisciplinary, so that’s really great. I think that’s pretty much all that I have now.
This work is licensed under a Creative Commons-Attribution License (CC-BY 4.0).