MB: My name is Meredith Broussard. I am a data journalism professor at NYU and I am the author of a book called Artificial Unintelligence: How Computers Misunderstand the World.
AG: All right, wonderful. And how about you, Margaret?
MO: I am Margaret O’Mara. I am a professor of history at the University of Washington and my most recent book is a history of Silicon Valley entitled The Code: Silicon Valley and the Remaking of America.
AG: All right. Wonderful. So, the first question that I’ll ask is a question that we are asking all of our guests and that is: What does spending time on the Internet in 2020 feel like to you? So, this could be a description, a metaphor, a feeling, a word—anything that captures the experience of being online to you right now. So, Meredith, do you want to start off?
MB: I think, today, I would say that being on the Internet feels really uncomfortable. I have one of those laptops with the butterfly keyboard that is really hard to type on, and so, since COVID, I have been spending so much time online that my hands hurt after using the computer. And it reminds me that the physicality of using the Internet is something that designers don’t always take into account. When people imagined, oh, we are going to go into this digital utopia where everybody is going to do everything online all the time, it actually turns out that human bodies are not built for that – and especially as your body gets a little bit older, sitting in one spot for hours on end is a terrible idea.
AG: That is interesting. That’s an interesting way to think about it. Great. How about you, Margaret?
MO: Well, you know, my thought was actually related to Meredith’s, which is that the human mind is not built for the Internet as it is today. Again and again, I think, gosh, Alvin Toffler was right. He was the futurist who wrote a number of books, one of which was the bestseller Future Shock, which came out in 1970, and he coined the term “information overload.” [Editor’s note: The term was coined by Bertram Gross in 1964 but Toffler was responsible for popularizing it.] And that’s really what we are all working under – we thought it was bad before 2020 and now it has grown exponentially. And I also think a lot about how this compares to other moments of crisis and also how media mediates crisis. I talk to my students a lot about early print culture in early America, and how you had all these broadsheets and newspapers telling you all sorts of things, with partisan slants and fake news. We have seen some parallel moments where new media hits and people have to process it, but we didn’t have the algorithmic precision and the incredibly effective tools of engagement and addiction that today’s Internet exhibits. So, in that way, it is quite different.
AG: Yes. Thank you. That was really interesting. And I think it is really great to sometimes put this in historical perspective because I think that we have these biases towards thinking, you know, the Internet is this new thing that came out of nowhere and everything is new, but it is really useful to have a historian who can sort of say, well, we’ve been here before with print culture, with radio, with TV, so I appreciate that. And so, I’m sure that work that both of you have done across different domains will come up today, but to give our listeners some context, it would really help to hear a little bit about the two books that you each mentioned in your bios, so maybe we can start with Margaret. So, you published in 2019 a book that you mentioned called The Code: Silicon Valley and the Remaking of America. So, what is The Code about? It is about 500 pages, so it is about a lot, but if you could give us kind of a brief synopsis, that would be great.
MO: Well, the way I like to characterize it is that it’s a biography of a place, right? So, when it comes to the tech industry, we have biographies of people, we have biographies of companies, and there really wasn’t one single volume that put everything together and showed the evolution of Silicon Valley – how it came to be, why it is where it is, why it is the way it is. And I think it is really important right now. I think history is always important, it always informs the present, but particularly now, when we are operating in a world where the products and platforms that come mostly out of five large companies on the West Coast are inextricable parts of our days – even if you decide you want to delete Facebook or live a very analog life, it is very, very difficult to live in modern America, or in most parts of the world, without being touched by these products. And I think that makes it very important to understand not only how these companies came to be but the whole ecosystem of tech, the mindset, and how it is connected to politics and society and things that… You know, tech likes to present itself as something different – like these capitalist cowboys out separate from everything else – and actually, they are very much a product of modern America, so I wanted to tell that story.
AG: Right. That is wonderful. Thank you. And so, Meredith, in 2018, you published a book called Artificial Unintelligence: How Computers Misunderstand the World, so can you give us a little synopsis or summary of your book?
MB: Sure. The book is about the inner workings and the outer limits of technology. So, I come to this as a data journalist. Data journalism is the practice of finding stories in numbers and using numbers to tell stories. The particular kind of data journalism that I do is something called algorithmic accountability reporting. So, increasingly, algorithms are being used to make decisions on our behalf, and the traditional function of the press is to hold decision-makers accountable. Well, we actually need to do the same thing for algorithms. We need to hold the algorithms and their creators accountable. So, in my work, I actually build artificial intelligence tools in order to commit acts of investigative reporting. So, in the book, I go through a little bit about what AI is and what it isn’t. I take readers through some examples of data journalism and algorithmic accountability reporting and I also do a little bit of history of how we got to a concept that I call technochauvinism, which is a concept that is underlying a lot of the marketing around technology nowadays. Technochauvinism is a kind of bias that says that technology or technological solutions are better than other solutions, and what I argue is that it is not a competition.
MB: Instead, we should be thinking about what is the right tool for the task. Sometimes the right tool for the task is a computer; sometimes it is something simple, like a book in the hands of a child on a parent’s lap. One is not better or superior to the other.
AG: I love the idea of algorithmic accountability. I am really glad that that is happening, I will just say that, but I am wondering, what is an example of an algorithm that you’ve found that needs a little bit of accountability?
MB: One of the amazing resources for learning more about algorithmic accountability is a new newsroom called The Markup. It is run by Julia Angwin, who used to be at ProPublica, and Julia Angwin’s reporting really kicked off the entire field of algorithmic accountability. When she was at ProPublica, she did an investigation into the COMPAS algorithm, which is an algorithm that was used to assign a recidivism score to people who were arrested. So, it basically rated people on how likely they were to commit another crime. Then, these scores were given to judges, and the judges would use the score in deciding what kind of sentence the person got, whether they were going to get bail – basically affecting the person’s passage through the judicial system. Now, the COMPAS algorithm is biased in favor of white people and biased against Black people. And subsequent to Julia’s investigation, mathematicians went in and validated her results and said, well, actually, mathematically, there is no way for this algorithm to treat white defendants and Black defendants fairly. So, this was a big scandal, obviously, and it kicked off the entire field of reporting on algorithms and asking, is this fair, is this just, should we be doing this?
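For readers curious why the mathematicians reached that conclusion, here is a minimal sketch in Python – with made-up numbers, not the actual COMPAS data – of the impossibility result behind it: when two groups reoffend at different base rates, a risk score that is equally well calibrated for both groups (the same share of flagged people go on to reoffend) and that catches the same share of reoffenders in each group cannot also produce equal false-positive rates.

```python
# Toy illustration (hypothetical numbers, not the actual COMPAS data) of the
# impossibility result: with different base rates, a classifier cannot be
# calibrated across groups AND have equal false-positive rates, unless it
# is perfect.

def false_positive_rate(n, base_rate, precision, recall):
    """FPR implied by a classifier with a given precision (calibration)
    and recall (true-positive rate) on a group of size n."""
    positives = n * base_rate         # people who actually reoffend
    negatives = n - positives         # people who do not
    true_pos = positives * recall     # reoffenders correctly flagged
    flagged = true_pos / precision    # total flagged, given the precision
    false_pos = flagged - true_pos    # non-reoffenders wrongly flagged
    return false_pos / negatives

# Same calibration (60% of flagged people reoffend) and same recall (80%)
# in both groups, but different base rates:
fpr_a = false_positive_rate(n=1000, base_rate=0.5, precision=0.6, recall=0.8)
fpr_b = false_positive_rate(n=1000, base_rate=0.3, precision=0.6, recall=0.8)

print(f"FPR group A: {fpr_a:.2f}")  # ~0.53
print(f"FPR group B: {fpr_b:.2f}")  # ~0.23
```

In this toy example, the lower-base-rate group ends up with less than half the false-positive rate of the other group, even though the score treats individuals in both groups identically by the calibration measure – which is the crux of the dispute over the ProPublica findings.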
AG: Right. Thank you. And this will certainly come up but I think it is worth pointing out now that I think one really interesting thing that you bring to these discussions, Meredith, is that you have a background as a software… Correct me if I have these terms wrong but, like, a computer programmer, a software designer, so you are kind of able to look actually into the technology, and you really take the reader on a journey not just through kind of the world of tech that we, as users and consumers, experience, but what is actually going on in the hardware and the software. And so, I really learned a lot from your book, just from kind of getting that inside perspective. Did I get those terms right?
MB: Absolutely. Yes, so I started my career as a computer scientist and I quit to become a journalist. I quit computer science because of all the textbook reasons you hear about why Black women leave STEM careers. Like, all those things that they say, they are all true. But I kind of came back to computer science through journalism. So, one of the things that I do is I can kind of look at a technical system and take it apart in my head. Like, I can tell you, hey, this is how this works, this is what it is doing, this is what it is not doing, and these are the potential problems with this system.
AG: So, in terms of getting into it, I think one reason why I’m really excited to have both of you here is that, in your work, I think you both do a really interesting job of kind of acknowledging some of the myths, really, that have grown up around technology itself, in Meredith’s case, and around Silicon Valley as this place but also kind of an idea – something even beyond a place, in Margaret’s. And, you know, myths are myths for a reason but they are also maybe not totally squared up with reality. And so, I am really interested in both of your work for that reason. And so, in order to get into this, which I realize sounds a little bit abstract, I thought we could start with a pretty concrete example and I was interested to see that the self-driving car comes up in both of your books, and it strikes me that the self-driving car is something that has a kind of symbolic resonance. It always feels like kind of the next frontier. It is very, kind of, Jetsons in terms of technology. But it is also a hardcore material reality. It is a technology that is being built. So, maybe we can start with Meredith. I am curious about, what do you think that the driverless car signifies in a kind of symbolic sense but then, what is actually happening with the driverless car as an actual technology that is currently being built?
MB: So, in the book, I tell the story of going for a ride in a very early self-driving car. Self-driving cars were developed initially as part of a DARPA initiative, a grand challenge where a bunch of teams from universities developed self-driving cars – they got lots of donations from corporations – and raced them through the desert. And it was just as much of a disaster as you could imagine, but it was a lot of fun, and it really kick-started the development of the self-driving car industry. So, I rode in one of these cars before it was in the grand challenge, and it was terrifying. I almost died in this car. And I thought to myself, I don’t trust these engineers to build something that is going to be safe eventually, because, I mean, they were brilliant, but they were reckless. And they were not paying what I felt was sufficient respect to the rules of the road, or just to the social conventions that keep us safe on the roads. I kind of thought at the time, all right, this is never going to work and I am just going to ignore this, and I forgot about it. Fast forward a couple of years: we’ve got Tesla making claims that they’ve got self-driving, and I decided, all right, I am going to go look into this. Maybe the technology has advanced more than I thought. And it had not. What you can do is actually read the code online. A lot of self-driving car technology has been open-sourced, so you can go in and read what is going on inside the code, and one of the things that I discovered was that the images that the computer vision is being trained on are very limited. So, lots of people talk about the trolley problem with self-driving cars – oh, the car is going down the road and can either hit one person or three people, and which one is ethically preferable?
Something I am actually more worried about is what the car vision technology counts as human, or who counts as human, because the vision technology that is inside self-driving cars is the same type of vision technology that is in facial recognition systems, which are really good at recognizing people with light skin – and really bad at recognizing people with dark skin. You see this everywhere. You see this in soap dispensers, you see it in videogame systems. So, I am really worried that self-driving cars only recognize able-bodied people with light skin as human, and people who are not able-bodied or people who have darker skin are not going to be recognized as human and are going to be killed by two-ton killing machines at a higher rate. And I think that is terrible.
AG: Yeah, that is definitely terrible. That makes a lot of sense when you think about that vision technology as well, and what we already know that it does in other contexts, being applied in a context that is so high stakes, the highest stakes, in a sense. Thank you for that. And so, Margaret, The Code is this just incredibly expansive, detailed, sort of beautiful history of Silicon Valley, and I was struck that toward the end, you set the scene of a self-driving car as well. And so, I am wondering how you would respond to that question, too. What do you think, in the context of what you learned through your research, what does the self-driving car sort of represent and how does that square with the realities that Meredith is talking about?
MO: Yeah, I mean, I think the self-driving car captures so much of what is marvelous and what is terrifying about tech. Look, the components of it are all the product of people pushing the boundaries of technological possibility and feeling like they can. In a way, it so encapsulates Silicon Valley’s thinking, or a line of thinking that has been predominant in Silicon Valley for so long, in that it is, as Meredith puts it, kind of technochauvinist, techno-optimist – this notion that the technology will do all these wonderful things. And it is also so completely dissociated from the real world. Even if you got a self-driving car to work, effective autonomous vehicle technology involves remaking landscapes, the places in which the self-driving car drives. It is about streetscapes, it is about what cities are like, what roads are like, who is inhabiting those roads – all sorts of things. And then there is the bigger question: is a car what we should be focusing our attention on? I would love all of that creative energy to be devoted to getting rid of the internal combustion engine. I mean, again, it is a different technology, but nonetheless, it is so interesting to me that the idea is, we shall improve all these things by making the car better. And it is being conducted in a way that is absent a real understanding that these are not just robot cars rolling through the desert. They are going to be rolling through urban spaces, through suburban spaces – spaces that are already designed and built. So, what does that entail? How do you redesign them? And we have had these sort of fantastic techno-optimistic visions of whole-city rebuilding before.
Think about the middle of the 20th century and the grand visions for what a city of the future would look like, which involved multilane highways, among other things – highways that, yes, enabled more rapid auto transportation than the roads that preceded them, but that also plowed through cities, destroyed neighborhoods, and had very disruptive effects, not to mention the environmental effects. But it also kind of has this science-fiction mentality: once we have this technology, all these other things are going to fall into place. And actually, it is a much more complicated, thornier, wicked, real-world problem that doesn’t just involve technology – it also involves public policy, it involves all sorts of other actors that would need to… And then, getting to the basic question: are cars what we need to be building? Or maybe we should be thinking about other ways to get us to the goal that everyone is seeking – a better way to move people through space, or a better way to arrange spaces so people can live in them in a sustainable fashion.
AG: Right, because it is worth acknowledging that, you know, proponents of the self-driving car, as I know Meredith points out in her book… You know, they will say, well, traffic accidents are a huge problem, and we can probably all agree on that – you know, we want to have fewer of them, right? But why is it that we are focusing on this one thing as opposed to these larger, kind of, systemic… just thinking about other ideas outside of the tech itself. So, Meredith, you mentioned your term technochauvinism, and I think that is just such an interesting and useful term – the belief that, essentially, technological solutions are better than other solutions. And I think maybe, in a sense, that can sound a little bit abstract, and I am wondering, why is it important for everyday users of tech to understand that term and to maybe push back against it?
MB: I think it is important to push back against technochauvinism because a blind belief in technology has become the default, and it no longer serves us well to have that be the default. In fact, it is a kind of dangerous bias. One of the things that I’ve noticed is that people think that innovation is the same as using more technology, and that is not at all true, right? You can be innovative without just using more computers. There was a period of time when, yes, we did need to innovate by integrating computers into a lot of aspects of life. There was definitely a case for that at the beginning of the digital revolution. But technology is actually no longer, in and of itself, innovative. Like, we are two decades into the digital revolution now. The Internet is not the cool new thing anymore. It is just mainstream. So, in journalism, for example, people sometimes still talk about, oh, such and such media organization is doing a bad job of navigating the turn to the digital. And I hear that and I think, who on earth is out there who has not already navigated the turn to the digital?
AG: Yeah, we’ve already turned. Yeah, a long time ago, right.
MB: Yeah, so, universities, for example – people are talking about, “Oh, universities are going to have to transform to accommodate online learning.” Yes, but we’ve already transformed to accommodate online learning and this is what we get. Like, there is no future digital utopia where everybody is really good at using technology in every aspect of their lives. The technology is not that good, people are not that good at it, and honestly, we don’t want to be on our devices all the time. We want to spend time with other people. We want to spend time in nature. That is what work/life balance is all about.
AG: And I am wondering what might be an example of that kind of a problem, where it might seem like a technological innovation would be the solution but the problem has kind of been solved more productively in another way? I know that in your book, you go through a number of cases where it seems like the technology has been offered as the solution and it has not panned out very well. I am wondering, are there any examples where we’ve seen, kind of, a productive turn back to the human?
MB: I think that we are going to be grappling with this in the Fall of 2020, as lots of schools are trying to figure out how to cope with online learning during COVID for the long term. One of the things that I wrote about in the book is textbook shortages in Philadelphia public schools. Now, many, many schools have invested in one-to-one laptop programs, in getting kids iPads, in doing electronic books and trying to integrate all this into the classroom, and it just hasn’t worked for a really long time – in part because kids break things. They drop their iPads, they leave them places, and the digital divide is still really profound. Lots of places don’t have sufficient wireless connectivity. But you know what doesn’t break when you drop it? A book.
AG: Yeah, right.
MB: A physical book. You can drop it in a puddle. You can leave it out in the rain. You can put it in a backpack, and the kid, like, slings the backpack down and clunks it and shakes the entire house because their backpack is so heavy. But the book is really sturdy. It is really well suited for the task of containing information and delivering it to a child who has a kind of complicated physical reality going on. E-books and devices don’t have the same sturdiness, so they are not necessarily as well suited. And they are also far more expensive than people realize. So, the infrastructure needed to support online learning is dramatically more expensive than the infrastructure needed to support mailing a bunch of books to a bunch of kids.
AG: Right, and I think it is easy to forget that the book itself, the print codex, is a technology. I think we have this bias towards only describing technology as things that feel new and wizardly and everything, but the book was invented and built and developed in a similar way – it is just, at this point, not as sexy and exciting as the newest shiny thing, so that is interesting.
MB: Oh, it is definitely not as sexy. I will absolutely cop to that. There is very little sexy about a textbook.
AG: We might have some listeners at Public Books who would disagree but yes, in general, no. And, Margaret, I am curious, you know, as someone who has studied Silicon Valley so extensively, where do you think that that belief in technochauvinism, to use Meredith’s term, where do you think that might have come from? I realize that is kind of a big question but what are some of the historical roots of that belief? How did it start germinating into what it is now?
MO: Yeah, I think technochauvinism has very deep roots and is very much a product of time and place. Silicon Valley is a product of Cold War America. It got its start… The Valley itself turned from being a sleepy, chiefly agricultural valley in northern California into an electronics hub courtesy of Cold War defense spending and military installations in the Bay Area, but also the money that flowed into defense contractors and notably into universities – Stanford chief among them. And this was also a moment of optimism in America’s capacity and the belief that technology would be a path to unalloyed progress. The whole gig gets kicked off in many ways by a report commissioned by Franklin Roosevelt in 1945 and authored by an MIT engineer named Vannevar Bush, who had been Roosevelt’s chief science advisor during the war. [Editor’s note: Roosevelt requested the report, but it was delivered to President Truman in July 1945, after Roosevelt’s death.] The report was titled “Science, the Endless Frontier,” and it makes the case that the US government should get into the research and development business in a permanent way; that it shouldn’t just be a wartime, Manhattan Project–era experiment, but that science and technology were the future – and not just technology purely for defense purposes, although defense became the rationale for a lot of that spending. Science was where the economy was going; it was going to be the basis of the new economy and of newly educated human capital. And this frontier mythology has been very, very persistent. It pops up again and again and again in Silicon Valley – this idea of the frontier, this idea of pioneers as people who have the fortitude to go into unknown territory – and the metaphors of the American West, the Frontier West, the John Wayne–style West, abound throughout Silicon Valley history up until this day. Of course, that whole John Wayne–style American frontier is one that erases the indigenous people whose land it was and who were occupying it.
Manifest destiny was not a matter of going into the vast emptiness of the West, which is the way that Anglo-Americans characterized it from the mid-19th century forward. Of course, it involved violence, it involved displacement. And the miners themselves were able to do what they were able to do because of the indigenous knowledge that was already there, but also, significantly – and I think this is a major part of the story I tell in my book – people were able to do what they were able to do, whether they be 19th-century homesteaders or 20th-century electrical engineers, because of government support and spending: public policy created this infrastructure. The metaphor I like to use is a sandbox – the government creates the wooden box itself and then puts a lot of sand in, and then allows people to go in and build sand castles and throw sand at each other and do all these sorts of things. And so, it gives a degree of independence but creates this incredible container in which people are allowed to build and create.
AG: Yeah, absolutely. And, right, I think it is so interesting how your book shows how that initial openness and freedom and liberation in the space of innovation around the Internet did lead to some of the innovation, but it is also the exact thing that has gotten us into these traps nowadays. And I guess just to follow up on that, Margaret, it seems to me that one of the main claims of your book is that the story of Silicon Valley is kind of an “only in America” story, and just to read a quote from you here, you write that, “Even though every other industrialized nation has tried in some form to mimic its entrepreneurial alchemy” – that is, Silicon Valley’s – “even though its companies have spread their connective tissues and disruptive power across the globe, it is an only in America story.” And I’m wondering if you could tell our listeners, what are some of the features of the Silicon Valley “success story” that feel distinctly American?
MO: Yeah, well, one of the arguments… The story I tried to show in the book is how Silicon Valley, rather than being kind of a “think different,” set-aside, separate story from the main narrative of American history, is quite intertwined with it. I mean, think about the way that we talk about the Valley, the way that historians have written about it: say you open a history textbook, and you have the main narrative of wars and presidents and social movements and all those things, and then you have, like, the sidebar about some special topic. So, tech was kind of the special topic. You’d have a picture of, you know, Steve Jobs sitting barefoot on his floor – kind of, oh, you know, new and different, those wacky guys making those sparkly things. But to understand the only in America story, you need to bring those two together. So, what explains why Silicon Valley, why America, why here? Why have all of these other Silicon somethings around the world not become what they set out to be? They’ve been something, but they haven’t quite been what Silicon Valley is. And it has to do not only with the way the government funded the Valley – particularly the electronics and then the computer hardware and software industries… Well, not software. It wasn’t really an industry yet. But the government funding of the computer and electronics industries in the 1950s and 1960s is foundational. And it is not just that the money flowed in – and, man, there was a lot of it – like, not just the Pentagon but then the Space Program, which, you know, shooting for the moon was a Cold War project, too. And what do you need to send a rocket to the moon? You need very small, very fast, very light electronic devices. You need microchips and integrated circuits. Who makes those? These little companies in Northern California.
And so, you have this incredible, kind of… I mean, I refer to the federal government as effectively functioning as the first great venture capitalist of the industry. Look, lots of other countries around the world have spent lots of money on science and tech, and spent more as a proportion of GDP than the US does now… What is distinctive is the way the money flowed indirectly, and this has to do with American federalism and the American dislike of big government, which has been there since the founding. Look, the US was founded by overthrowing a monarch. We don’t like central authority. We don’t like bigness. And that dislike was reaching a crescendo in the early 1950s. What else was going on in the early 1950s? Well, that was the McCarthy era. Like, Dwight Eisenhower is not going to say, I’m going to build this massive science complex that is all housed within the government. Nope. He is going to send contracts to private industry. He is going to send contracts to private and public universities. It is going to be this decentralized network. And in decentralizing it and privatizing it, the government builds the industry but does it in a way that makes many of the people in it feel like they did it on their own – and that actually is part of the secret. In a way, the kind of myths that the Valley and the tech industry believe about themselves are actually part of the secret of their success. It gave room for a very entrepreneurial, intensely competitive industry to grow; it gave opportunity. I think one of the big component parts of this only in America story is the massive investment, starting with the G.I. Bill and continuing in the postwar period, in higher education and in human capital – and this incredible escalator of mobility.
I was really struck, when I first started researching the book and sitting down with these octogenarian venture capitalists who have done very well for themselves, to realize that almost to a man – and they were all men; I’ll get to that in a minute – they started in very modest circumstances. They were not Ivy Leaguers, they were not prep school kids. If you were, and you are coming of age in the 1950s and early 1960s, you stay on the East Coast and you go work for a Fortune 500 company. You go work at your father’s bank. You don’t, like, hitch a ride and go all the way out to Palo Alto, California, where there is, like, almost nothing going on. There’s one bar. There are a bunch of fruit trees. There is one university that is on the make, but it is not the hub, by any means. It is not where the bulk of the electronics and computing industry was. Initially, it is just the California branch offices of these major companies. It is not where HQ is. And so, this postwar prosperity enables them to have a free or almost free education, whether they are paying $50 per semester at the University of California, Berkeley, or they are getting a scholarship at Rice for undergrad and coming to Stanford for their Master’s, going to school at night and working at Lockheed during the day to pay for their education. They are not coming from money. And, look, they are all white, they are all male. It is the 1950s and 1960s. And I think that also sets in these patterns – of who the winners are, and who gets to pick the winners of the next generation – that persist to this day. But, basically, this investment that the American… that the United States made at the national level and at the state level in this postwar period, enabled by kind of a unique set of circumstances including not having significant international competition right after World War II, was extraordinary in terms of creating these immense opportunities for people.
And if you were in the right place at the right time and you were the right demographic, you had incredible opportunity before you. And what I like to think about is, okay, how do we do a 2.0 version of that in the 21st century that is still creating this escalator of upward mobility, but doing it in a way that enlarges who gets to be part of it?
AG: Right. Yes, Meredith, I am wondering if that resonates with anything that you’ve observed, either from first working in the industry and now, studying it as a journalist.
MB: I would be perfectly happy to accept a river of money in order to do innovative things. I am just putting that out there, Margaret. I am on board with your vision and I will volunteer as tribute.
MO: I think the other thing that is in play, particularly in the most recent generation of tech, is, first of all, that success is rewarded not just with money from funders but with personal wealth. Like, these people become so wealthy. And I think about the founders and the age of the founders when they hit it big, right? And, you know, Zuckerberg starts Facebook when he is 19 and within a couple of years, he is turning away billion-dollar offers and getting incredible validation for everything that he and his colleagues, who are about the same age and stage and friends from college, are doing. And the same thing with Larry Page and Sergey Brin. They are graduate students at Stanford and they get this $25 million seed round from the two biggest VC firms in the Valley, who agree to go halfsies because they want in on this incredible product, and they are in their 20s, too. And so, I think about, what if I had become wildly successful or become a billionaire when I was 23? I think it would have been very, very hard to evolve or expand my worldview from there, right? You end up in a state of arrested development because of your incredible success. I mean, I am glad that I am not thinking the same way I did when I was 23. I have learned a lot since then. But there is also, I think, you know, the point that Meredith is making about just being told you are brilliant and being told you don’t need to know anything else… I think one of the hallmarks of Silicon Valley culture is its antipathy towards politics and bureaucrats and Washington, D.C. – just really, yes, they give these folks money and there is obviously… And I chronicled this in the book extensively… The path between D.C. and Silicon Valley is very well-trod and has been forever.
But the attitude falls into two camps. One is the techno-libertarian: really, “government should just vanish, go to the vanishing point, and we should replace everything else with, you know, privately owned systems. Let’s go seasteading and let’s have drones and all those things.” And then, the other camp thinks, “okay, government is good, but government is so messed up and really what government needs is some Silicon Valley thinking to improve it.” And we saw this a lot in the Obama Administration, you know, where the path between, particularly, the Google campus and the White House became very well-trod. And Washington lawmakers of both parties have been very receptive and kind of bought into that argument that we do it wrong and Silicon Valley does it so much better. And that, again, validates the pretty narrow worldview of folks who have been trained exclusively in engineering programs, who have achieved success, whether it be blockbuster success as a founder or even just becoming very financially comfortable at an early age because you are a full-stack engineer and you are in demand. And I think about my students… It is hard to be a history major and to convince people that you are choosing that over something else. Everyone at the University of Washington wants to be a computer scientist and if they can’t, they try to get into other disciplines that kind of sound like it… I think students will choose poli-sci over history because there is science in the title. And, look, they are just doing what society’s cues have told them to do, right? And you see where the rewards are. And we have pushed so hard in favor of this technologically driven world and now, we are having this reckoning very, very quickly and we are realizing how much our systems are skewed in one direction.
AG: Right. Yeah. I mean, Siva Vaidhyanathan, who we had on earlier, his theory about Mark Zuckerberg, which I think is really interesting, is not necessarily that he is evil but that he is just sort of profoundly naive and ignorant about everything other than software. Right, when you build tools and you have no real curiosity about how society works and how power works among humans and within human societies, then it is not surprising that we get to where we are now. One thing that I think has been kind of exciting – we’ve been publishing some interviews at Public Books with Donna Riley and Virginia Eubanks, who work in technology. They are talking about the way that engineering education is starting to change, I think in really good ways. There seems to be a more widespread recognition that the folks who have the talent to be building these systems do need to have more awareness about things that we get from the humanities and the social sciences. So, I am curious about where you see glimmers of hope and exciting new paths for the future in the areas that you study. So, maybe, Meredith, for you, in the realm of actually building these technologies, what is making you feel a little more optimistic these days?
MB: I am glad that people are starting to talk about ethics. There is a conversation about fairness and embedding fairness into computational systems. I think that the conversation should be about justice, not just about fairness, because one of the examples I like to use is the difference between social fairness and mathematical fairness. And you can think about this when you think about a cookie. So, when there is only one cookie left and you have two children, you know that there is going to be a fight over who gets the cookie, right? So, the computer would solve this problem by saying, okay, each child gets 50% of the cookie and that is fair. But I know that when I was little and there was only one cookie left and my little brother and I both wanted the cookie, we would break the cookie in half and it didn’t break exactly at 50%—there was a big half and a little half. And so, then there would be a negotiation. And I would say to my little brother, “Okay, you give me the big half of the cookie and I will let you pick the TV show that we watch after dinner tonight.” And my brother would say, “Okay, yes, that is fair.” So, that social fairness is different than mathematical fairness. And so, when we are using computers as the intermediaries in social decisions, it is not always enough to do what’s mathematically fair.
MB: We should also think about what is socially fair. And if we can’t make computer systems that do socially fair decisions, maybe we don’t use computer systems at all, which is kind of a radical suggestion, or people take it as a radical suggestion. I don’t take it as a very radical suggestion. I mean, that’s how we’ve done stuff in the past. So, we just need to be judicious about when we do and do not use technology. We need to think about what is just, in addition to what is fair.
MB: I will also go back to our conversation about self-driving cars. People often get frustrated when I say that self-driving cars are a terrible idea, they are never going to work. And they say, “Isn’t there anything you like about the idea of a robot car?” Like, for some reason, this is very upsetting. People get very upset when I say I don’t like self-driving cars. So, here is what I can get behind. I can get behind self-driving tractors because in the field, there is nobody for the tractor to run over and kill.
AG: Right. Maybe some animals, but maybe we can keep them penned up somewhere. I don’t know. But it is a far less…
MB: Yes, I am pretty sure that we could train a self-driving car to avoid a cow.
AG: Yes! That is a great way to think about it. Yes, you have a line towards the end of your book that I love, which is where you say, “humans are the point,” right? We are doing all of this to help humans and to serve humans, and it is so easy to forget that and get so lost in the technology. It is so simple, yet, it is so easy to forget, and I love that. And so, Margaret, where do you see glimmers of hope these days in this kind of Silicon Valley economy or outside of it?
MO: Yeah. I see glimmers of hope in the conversation about exclusion and bias and institutional discrimination in tech and in the platforms and products that it has created. And yes, it is a conversation and, yes, the needle is moving slowly, but the conversation that has evolved over the last several years was one that was not really happening before, or at least only happening in academic circles. There were people who have been doing the work for a really long time, but it has burst out onto the public stage in a way that is quite different, and it is having reverberations politically. In late July, there was an antitrust hearing on Capitol Hill where all four of the big CEOs came and were grilled, and it was not just hearings… A hearing is a hearing, and it is kind of political theatrics… But the substance of the questions that they were being asked, and the general vibe, which was pretty hostile coming from both parties for different reasons, was so different. So, I see the conversation has changed and I think we need to recognize that, that that is meaningful. It reminds me a little bit of the early 1960s and a series of books that were published that became bestsellers, like Silent Spring, by Rachel Carson, which really became the tipping point for the modern environmental movement, and Ralph Nader’s Unsafe at Any Speed, about automobile safety. And I think about the work that people are doing, the work that scholars like Meredith are doing – so much important work that is being done on not just Silicon Valley, not just the Internet, but the Internet and society broadly defined.
And not taking the technologists at their word but instead pushing back – I see that as hopeful, and I see that as hopeful not in an “oh finally we get to blow them up and tear it all down, they are evil” way… I think it is a way of saying, we are holding these incredibly powerful companies accountable. These companies are filled with incredibly talented people who, you know, genuinely – I know it kind of seems cheesy now, but I think they genuinely… You know, people do want to change the world for the better, and those of us on the outside are, like, well, it is not really working out the way they expected. But there is a lot of good they have brought into the world. Or there are things that have been generative that have been the products of this particular phase of the high-tech revolution. And so, how do we harness the energy and have a more honest and meaningful conversation? We are in a tech-saturated world. The Internet is with us. And so, I don’t want to be a Pollyanna about, oh, now we are talking about diversity, so problem solved. No, that is not it. But I have been struck, really, since, I guess, 2012 and the Ellen Pao case, the gender discrimination case against Kleiner Perkins, the big venture capital firm. That started this conversation about women in tech which then, in the post-2016 moment, snowballed into this much bigger conversation. And all of the work of the scholars who have been working on this stuff for so long suddenly gets amplified and lifted up, new work adds to it, and now you can no longer have these [unintelligible – 1:12:49] conversations about, oh, Silicon Valley changing the world, yeah, rock on, which really were pretty prevalent not too long ago. You don’t have to go too far back in history to find people just uncritically repeating that this is going to get us where we need to go, and all we need is better technology and voila!
And so, this is necessary and important and I am made hopeful by it.
AG: Yeah, that’s great. I am definitely encouraged by the conversations, like what you were mentioning, about these deeper structural issues that go far back before the Internet, having to do with capitalism. I think that it is maybe not enough to just keep tinkering at the edges. Both of your work, and, like, Wendy Liu’s Abolish Silicon Valley, which presents some of the more radical ideas about this – it is great to just have a bunch of different approaches. I think the more scholarship, and the more disciplines that get involved, the better. So, that is really great. Do you want to add anything? Please go for it, yes.
MB: I helped to run something at NYU called the Center for Critical Race and Digital Studies. And so, for me, that feels like the epicenter of where a lot of really interesting new ideas are coming from. There is also a documentary coming out soon, on the festival circuit right now, called Coded Bias, by Shalini Kantayya. It is a documentary about Joy Buolamwini and her fight against facial recognition. So, I guess I would also put this in my optimism column – that there is a movement to ban facial recognition technology because of the racial bias that is embedded in it and the fact that technologies like facial recognition: a) don’t work really well; and b) are disproportionately weaponized against communities of color and poor communities. And until and unless it works for everybody in a way that is just, we shouldn’t use this technology.
AG: So, I think that sounds like a really good place to end so I want to thank both of you so much for being here. It has just been a huge treat to have both of you here together.
MB: Thank you. This was a really great conversation.
MO: Yes, thank you so much.
[End of Recording]
This work is licensed under a Creative Commons-Attribution License (CC-BY 4.0).