Audio of this conversation is available via your favorite podcast service.
In an introduction to a special issue of the journal First Monday on topics related to AI and power, Data & Society (D&S) affiliate Jenna Burrell and Jacob Metcalf, program director of D&S’s AI On the Ground initiative, argue that “what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science.” The papers in the issue go on to interrogate the epistemic culture of AI safety, the promise of utopia through artificial general intelligence, how to debunk robot rights, and more.
To learn more about some of the ideas in the special issue, Justin Hendrix spoke to Burrell, Metcalf, and two of the other authors of papers included in it: Shazeda Ahmed, Chancellor’s Postdoctoral Fellow at the Center on Race and Digital Justice at the University of California, Los Angeles; and Émile P. Torres, a postdoc at the Inamori International Center for Ethics and Excellence at Case Western Reserve University.
What follows is a lightly edited transcript of the discussion.
Justin Hendrix:
I’m so pleased that you all could join me today. We’re going to do a survey of some of the ideas and issues that you raise in this special issue, Ideologies of AI and the Consolidation of Power. Jenna and Jake, you edited this volume. Perhaps I’ll ask you to start with what you were hoping to accomplish with it. What was the genesis of this? What got you started with this project?
Jenna Burrell:
Yeah. I think being part of this community of people looking at AI from social scientific perspectives, the AI ethics community maybe is an appropriate label for it, I knew there were a lot of pieces of work that were floating around and not really finding a home. I had a paper myself that I had submitted to a conference and had gotten some useful reviews, but it wasn’t accepted at that conference. And so just putting word out on the street, I asked around to a bunch of people to see if they also had pre-prints, publications that they hadn’t found a home for yet, that had to do with power, specifically naming power, identifying power, as an issue, a concern in AI research and AI development.
And there were a bunch of papers, and so we decided the best way to hone that theme was to do a special issue. And so we found a journal, First Monday, that was willing to publish all those works together. And then we did the peer review process with each other as well, to further hone our arguments. But that was the genesis, was how do we get work about power that’s a little bit hard to publish together, and find the common theme in all of our work?
Jacob Metcalf:
I think it’s also interesting, and something that we’re a little proud of, that we did an unusual process for this special issue. We asked our authors for what prior feedback they had, and we thought really hard about how you do a supportive peer review process, so that we could have a common sense of some of the themes about why these papers were having a hard time finding a home. And a lot of the prior reviews had said something along the lines of, don’t name power, or this feels like it’s really negative about the community, or this feels like it is too generalizing about people, or something like that, when they really weren’t, when they were… These are empirical papers that are paying attention to history, paying attention to sociological evidence. And even in interdisciplinary computing spaces, there’s some pressure around the community norms on how critical you can be about the community, and we wanted to push back on that a little bit.
Justin Hendrix:
So we’re talking about history, we’re talking about power dynamics, we’re talking about looking at anti-democratic tendencies emerging in AI, all these questions around how we understand present day risks, how we balance those with future risks. And I hear what you’re saying. I feel like I’m exposed to this quite a lot in talking to folks out there, this tension between focusing on these critiques, really delving into some of these issues, and always being asked, “But wait, aren’t we dismissing all of the extraordinary opportunities that these technologies have created for humanity?” Is that a sort of false choice we’re being presented? I don’t mean to ask a leading question, I certainly believe that it is, but perhaps you can tell me a little bit about where you’ve arrived at on that question.
Jenna Burrell:
I certainly think that’s a false choice. I mean, to me, the subtext of that statement is: stop being negative, stop complaining, we should be optimistic about technology, I don’t want to hear about how this could be harmful, I don’t want to hear about why we shouldn’t just charge ahead full speed and build whatever we want to build. And actually, I think that for me the word “optimism” has become a dirty word in the community. I think that’s a word that gets used a lot to shut down conversation.
So yes, it’s not… There isn’t a choice between either tech or no tech. And actually, I think one of the really interesting themes in our work in these articles is that no one is saying we should stop building technology. No one is the stereotype of what people think of as a Luddite, which is someone who is against technology, or wants to live in the woods, or detach themselves from tech entirely. And actually, several of the articles have some really concrete ideas about what we should be doing with tech. I also think that, as has always been the case, deciding not to build things that are unacceptably unsafe or unacceptably risky should be on the table. And it’s strange that that often gets treated as though it’s an impossible thing to do, as though we either have tech or we don’t. There are choices made every day. Companies make choices about technology not to build, because it’s not profitable, for example. There are all kinds of technologies that are regulated. Cars are licensed and regulated; there are safety regulations. Of course, we should be talking about that with AI as well.
Émile P. Torres:
Yeah. I would just add that I think critique is essential for fully realizing whatever benefits there might be from these technologies. So thinking about all the ways that the development of these technologies could go wrong, doing a sort of pre-mortem analysis, that’s a crucial step for maximizing the benefits of these technologies. So the fact that someone is being critical, maybe even harshly critical, of how AI, for example, is being developed and the various ideologies that are shaping the current projects to build advanced AI, that could be understood as an affirmation of the potential good uses of these technologies, and an effort to ensure that they turn out as well-designed as they possibly could be.
Shazeda Ahmed:
I also want to add to this point about critique, not just its value. Where is that feeling coming from, of wanting to cast some people as Luddites or as anti-technology? I think there are two things I keep seeing happen. One, across a lot of my work, which used to focus on China, when you look at the United States, there is this fear of the United States losing its geopolitical position in the world as the economically dominant superpower. And so what I see is that policymakers are afraid of critique of technology because it is so tightly bound to their idea of progress and of American dominance that any level of critique becomes seen as this thing that’s really about America not being this democratic bastion in the world.
And I know that sounds like I made a massive leap, but you can see that move happen over and over, and tech companies really like to play to that argument, so that the United States government is open to hearing what a tech company thinks regulation should look like, versus what a variety of experts who have been working on this for a very long time think.
And then there’s also, on the flip side, how that plays out when it comes to individual people, and here I’m talking about people who might be coming for these critiques on Twitter: that narrative of technological progress and that optimism becomes such a personal narrative for people who think they know what AI is and who believe the hype that tech companies are selling them, and thus the people issuing the critique are seen as people who want to take that away from them.
Timnit Gebru, who’s Émile’s co-author, gave a really great keynote talk at UCLA recently, and I believe she was mentioning how even the data annotators, workers who are doing some of the hardest jobs that make training AI systems even possible, are paid very little, are exposed to all kinds of content that is psychologically disturbing, and are affected by this work in all kinds of taxing ways. Even some of them sometimes believe in that bigger narrative, without realizing that the dividends and the benefits of it won’t necessarily come back to them. And so that’s why critique is really important: this narrative is so self-renewing on the micro and the macro level.
Justin Hendrix:
I think that’s a good place… I want to come back to some of the big picture issues that you get into in the introduction, but I am going to come back to this in a moment. I think you just offered me a moment to maybe transition to you, Émile, to this paper on… Am I pronouncing it correctly, the TESCREAL bundle? TESCREAL? How do we say it?
Émile P. Torres:
TESCREAL.
Justin Hendrix:
There we go.
Émile P. Torres:
There is no right way to say it. That’s the way I say it. And I guess since I came up with the acronym, maybe I have a certain authority, but not really. TESCREAL is fine.
Justin Hendrix:
I think we’ll give you that. So your paper with Dr. Gebru on the bundle, Eugenics and the Promise of Utopia Through Artificial General Intelligence, is talking about a set of ideologies, a blob of ideologies that I feel like we run into in various ways. Of course, we see various strands of this: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism. I feel like you spend your days trying to educate the public about this. Tell me about what you were trying to accomplish in this particular paper.
Émile P. Torres:
I think a key aspect of the argument that we’re articulating and the thesis that we’re trying to defend is that, with respect to a lot of companies in the world, if you want to provide a complete explanation for why they do what they do, you don’t need to look much further than the profit motive. There are these companies that are embedded in this capitalist system. Why were the fossil fuel companies ruining the world? To maximize profits, and so on. I think the case of AI companies constitutes an anomaly. So the profit motive is a part of the picture, but it’s only half of the picture, or maybe it’s less than half of the picture.
There are these companies like Google, Microsoft, Amazon, and so on, that have invested billions of dollars in the leading AI companies, with the expectation that they will profit, make billions of dollars as a result. But if you want to understand the origin of the leading AI companies, so this would be DeepMind, OpenAI, Anthropic, xAI recently founded by Elon Musk, you have to be attentive to this bundle of ideologies. All of these leading companies emerged directly out of the TESCREAL movement. So DeepMind, for example, was co-founded by three individuals, Demis Hassabis, Shane Legg, and Mustafa Suleyman. Demis and Shane Legg in particular had a fairly extensive history within the TESCREAL movement before founding DeepMind. In fact, the reason they founded DeepMind in the first place was because of this TESCREAL idea that the creation of artificial general intelligence and ultimately machine super-intelligence will constitute the most important and pivotal moment in human history, will constitute a fundamental discontinuity in human history. Once we create super-intelligence, either we will get utopia, we’ll spread into the stars, we’ll be able to re-engineer humanity, live forever, and so on, or by default the super-intelligence will inadvertently destroy us, just as an unintended consequence of its so-called misaligned value system.
So, super-intelligence is really important. That is a core idea that was developed by the TESCREAL movement. These individuals, Demis, Shane Legg, and various others, were exposed to this idea by virtue of their participation within the TESCREAL movement, and that is what inspired them in the first place to found DeepMind. Then later on, DeepMind was acquired by Google in 2014. Google thought, maybe we can make a whole lot of money off this. And I can make exactly similar claims about OpenAI and Anthropic, which were heavily influenced by effective altruism and transhumanism. Sam Altman is a transhumanist who has explicitly said that we will digitize our brains within our lifetime, and so on.
So to understand the origin of the current race to build AGI, and to understand the worldview that is shaping and driving this race, looking at capitalism and the profit motive does not give you a satisfying or complete answer. You have to also understand these techno-utopian ideologies, which also have this apocalyptic aspect but are fundamentally techno-utopian, to understand why we’re talking about AGI in the first place. I think one of the claims that Gebru and I make – I don’t think we make it in the paper, although maybe we do, I can’t quite remember – is that without these ideologies, we almost certainly would not be talking about AGI right now. Yeah. So again, that just underlines the importance of understanding the nature of these ideologies and their particular futurological visions for making sense of the current race to build AGI.
Justin Hendrix:
So I’ll put this question to you, but also to the others. It does strike me that the thing we’re often dealing with here is that Silicon Valley has offered a set of visions of the future that have become extremely salient, and you might argue they are the dominant version of the future that is available to most humans. Maybe we could argue that there’s a Chinese alternative, which also seems to me to be shaped by Silicon Valley in a way. But anyways, we could argue about that all day. Are there alternative visions that you think could become salient, to potentially displace this Silicon Valley version of AGI, or a version of a future that’s on offer at the moment?
Émile P. Torres:
I think there’s a lot to say about this. On the one hand, there are leftists who are within, for example, the transhumanist community, but who haven’t really contributed in any significant way to the TESCREAL movement itself, to that more libertarian strain of thinking within transhumanism, effective altruism, and so on, that has influenced the AGI race. And one of the critiques is that a lot of leftists don’t really have a similarly or comparably compelling vision of the future. I think there’s maybe something to that. One could certainly propose, I think, a compelling account of what the future ought to look like that foregrounds or makes central social justice concerns.
So I was in the TESCREAL movement for ten years. A lot of my work was on AI safety, and AI safety as a field emerged directly out of the TESCREAL movement, along with all the major AI companies. These days, I’m more focused on AI ethics and social justice issues. But there is a version of thinking about how AI could potentially help to solve some of the injustices in the world. It’s a whole different gestalt, a whole different perspective on these issues.
Jacob Metcalf:
It’s hard to say what could replace TESCREAL, in part because it’s too grand of a vision. It’s in some ways absurd, that the point of all this science that’s being done, with all this money that’s being spent now, with all this policy work, is what? It’s to move beyond being human, to stop being Terran, to stop having biological bodies. It’s in some ways a very anti-human idea. And it has this very strong utopian flavor, but its logic is very weird. Its logic really is, if we don’t reach our utopia, it is by definition now a dystopia. I’m not willing to accept that premise. I don’t think they’re proposing a utopia, first of all, but also it’s so far out of the bounds of things that we can plan for, that we can control for, that we can guide with policy. It’s so wrong, it’s not even worth saying it’s wrong.
I think the future of AI is much more mundane. It’s going to be bounded by business practices. It’s going to be grounded in market needs. It’s going to be in response to actually existing humans’ wants and desires. The TESCREALists have really tried to set a modality of talking about the future that doesn’t work for us. It doesn’t actually help accomplish things that are good for people.
And so my preferred vision of the future is to be a little more mundane, to think practically, much more oriented towards what science can be done now to solve the problems that are on the table now. I think Shazeda’s paper is really great at showing how there’s a very deliberate effort to change the course of science. There is a strong recruitment effort from the TESCREALists with money to create a funnel that pulls scientists and policy-oriented folks into a very specific ideology that is obscuring other paths. There are effective altruism and long-termist nonprofits that spend $10 million a year on Congressional fellowships. How much oxygen is that taking up? How many other alternative views of the world would be available if, instead, there was an attitude of, let a million flowers bloom, rather than very purposefully pointing all of these resources into an anti-human ideology that’s ungrounded in science?
Jenna Burrell:
I was also going to say something along the lines of something more humble. Tech can have a much more humble role. And actually, what I get from Émile and Timnit’s paper is this idea that… What if we return to more traditional engineering practices? If there’s a domain where there’s a problem, we apply AI specifically to that problem. And the issue with the TESCREAL bundle is that they’re saying, “Okay, if you don’t like our utopian vision, what is your utopian vision?” And what we’re saying is, we don’t want to… We’re not playing that game. We don’t want to debate on those terms. Those aren’t the terms of the debate that we choose. We don’t think there’s one grand vision versus another. We think that there has long been a role for tech, including computing tech, to address specific needs in specific domains.
And when you do that, you have much more likelihood that you can control it. If you’re applying a technology specifically to disease diagnosis, and it’s just for that, it’s not for answering every question in the world, it’s not for solving any problem, then you have a much better chance of being able to understand and control what that technology is doing. And so the positive alternative is really traditional engineering practices, problem solving in specific domains, not trying to displace human intelligence with super-intelligence.
Justin Hendrix:
Let’s perhaps look at your paper, Shazeda, Field-Building and the Epistemic Culture of AI Safety. You and your co-authors, I suppose speaking to what has just been referenced, look at this epistemic community, its insularity, its disengagement with outside perspectives. What did you find in the course of doing this?
Shazeda Ahmed:
Yeah. So the reason we even chose to write this paper is because in 2022, as Sam Bankman-Fried was on the rise and had created his own philanthropy based on, as we now know, his ill-gotten gains in his cryptocurrency gambit, we were interested in this idea that had been floating around the whole time that I was in graduate school at Berkeley… Essentially the TESCREAL bundle. You’d see people at various AI-related academic and industry events who were interested in that. And I always noticed that they were a tightly clustered community, but they weren’t necessarily mainstream. And I felt like, with this infusion of capital, this community that was already so tight-knit, and very well distributed in places to have influence, could get this boost.
And so I wanted to understand what that was starting to look like and who… You mentioned the insularity. But what was really interesting to me is how people who are outside of it are coming to it. Sometimes it’s because they want funding. When you look at the kinds of things the FTX Future Fund funded, it was sometimes computer science professors getting a course buyout, which means they don’t have to teach for a semester so they can do this research. And they thought of it as just a grant they got from somewhere, without doing that extra digging of, but what’s the ideological project behind this money?
And so I wanted to understand when that boundary breaks. When is this an insular set of practices that a community has put together to continue to socially reproduce their ideas? And then why is it suddenly spilling into media and policy? What could that look like in the long term? Even at the time, I remember asking my co-authors, “Okay, if a year from now in 2023, this whole community has dissolved, why would it have been important to have studied this?” And it was because we’d understand how AI hype cycles work and what kinds of beliefs they can make people accept without really much critique, and shape their whole lives and careers around. And we saw that the opposite happened. It’s 2024, and [inaudible 00:23:48] bigger than ever, and has found ways to bring in people.
And even from before we wrote this paper, I would encounter people who would wear EA-related merch, and they would say to me, “I don’t agree with all the ideas, but I really like the community.” You still hear that. That’s always been there. And I find that really interesting, especially right now that there is more critique of this community, but there’s no accountability.
So writing the paper was for me, and it still is, this research is almost like writing a contemporary history of this moment, before we forget about every little episode that led us to have a certain set of ideas that might be accepted as mainstream, without people remembering that there were all these alternatives that the three of us are pointing out.
And I also wanted to think about it in the context of AI’s history, which I don’t think comes up nearly enough. Why are we going so far down the path of something like RLHF? What happened to the expert systems of the 1980s? How did other scholars study these communities? My favorite anthropologist, Diana Forsythe, was asking, when these people do this, when they say they’re doing AI, what are they doing? That’s still the most relevant question, I think, that I’m constantly trying to answer. And the more we dig, the more we’re seeing how the media and policy influence has been extraordinarily effective. I’m hoping with subsequent pieces that my colleagues and I are putting out that we get people who think, this has nothing to do with me, it’s quite niche, it’s not that coordinated, to realize that it is and that it matters.
Justin Hendrix:
So what would a genuine pluralism or genuine pluralistic environment for ideas around AI look like? I’m just struck by the fact that there’s just so many billions now invested in certain ways of looking at things, that it strikes me as hard to get a seat at the table for some of these alternatives, alternative ideas, alternative value systems, even alternative arguments. What would it look like to actually see that kind of diversity of ideas?
Jacob Metcalf:
I actually think the majority of work being done in AI is grounded, and practical, and oriented towards the kinds of outcomes that will marginally, and… I say marginally in a good way. It will move the margins of human wellbeing. What we’re lacking is the money and attention and coherence of the TESCREAList ideology. I see some things on the horizon along the lines of trying to articulate what that community would be, what that kind of research agenda would be. I’m sure it won’t be as well funded. But there are a lot of civil society organizations that are paying attention in the right way, paying attention to questions of democracy and fairness. I really think it’s there. It’s just that it’s not speaking with one voice.
Justin Hendrix:
Jenna, perhaps this is a place to bring you in about the paper that you wrote yourself here about automated decision-making as domination. Because you do look for, I guess, answers on this question of algorithmic fairness, whether there are alternatives, if not fairness, then what? What would you say to this?
Jenna Burrell:
I started this paper many years ago. I actually remember presenting an early version of it, just before the pandemic started, at a conference at UCLA. And what I was struck by way back then was that so many people were talking about fairness, algorithmic fairness. And that usually meant: how do we distribute a resource so that everyone gets a fair chunk of it? Whether that was money or opportunity, anything could be smashed into this fairness register. And the thing that I found most alarming about some of the trends in AI was that it was replacing human self-determination. That seemed to me like the essential question: when and where is that acceptable, and what do we give up if decisions start being made automatically without human input?
So I found a lot of actually philosophical literature that critiqued fairness as a measure of justice, and just tried to bring more of that conversation from that discipline, from philosophy. I’m not a philosopher, but with some help from actual philosophers to think through it, to just… Really it was just to try and broaden that conversation about how we think about justice, and to point to ideas that already were published and useful. And I think if we conclude that human self-determination is really an important question, then that just suggests all kinds of ways of intervening. And it’s not just about perfecting an algorithm, it’s about processes that could take place before technology is designed, while it’s being designed, after it’s been designed.
And if you look across… I think what Jake was saying is that some of this is already happening. I entered the tech industry as someone who was really interested in user-centered design. So there’s this long tradition of consulting people who use technologies to find out what they need, and to find out why technology does or doesn’t work for them. And the big message there was that technologists are probably more ill-equipped to understand that than average people, and to see the industry swing back towards tech authority and tech expertise making decisions about everything, from how technology is built to what future is the best for all of us, is alarming in that sense.
So we can draw from things like user-centered research. This is how I concluded the paper, is just pointing to all the processes and practices we could start using. There’s user-centered research, there’s participatory processes, there’s… Policy gets made often through consulting with groups that are affected by systems, by policies. There are ways of designing technology so that people can feed back straight into it, or we can just set in some regulations that say, in certain cases, it’s not acceptable for technology to make a decision.
I think some of what we have to do is get outside of the United States, because some of the more human rights oriented or user-centered policymaking is happening elsewhere, especially in Europe. The General Data Protection Regulation included a provision saying that there had to be a human fallback. If a decision’s made in an automatic way and someone objects to it, you have to be able to bring in a human to review that decision. There’s just lots of opportunities out there. There are lots of ideas and practices that already exist. And I think maybe picking up what Jake pointed out, it’s just about bringing it together so that people can see the spectrum of what we could be doing, instead of letting supposed tech experts tell us what future we want and what will work best for everyone, because they’re just wrong. The ideas that they have are very specialized, and not widely agreed on as the future that would best suit all of humanity.
Shazeda Ahmed:
I don’t know what the future could look like in that pluralistic world, because we’re in it. It’s just that different voices have different resources, and it’s very rare still, I think, to look to non-Western societies and ask, how are they approaching this? Part of the problem is that there is this hegemonic influence of the United States, and to a lesser extent more recently, when it comes to AI safety, the UK. When the UK sets up a global AI safety summit, suddenly all these countries want to come to the table, even though nothing substantive happens there, because there’s a marketing ploy of, there are some governments who are going to set the global pace on this, because they happen to be where the companies and the researchers are.
But we don’t have to buy into that model. That doesn’t mean there aren’t all kinds of viable approaches that contend with different cultures, different political economies, that are worth exploring. I think it’s just that, here in the United States, we often discount those. There are so many researchers who have talked about computing from the South. I don’t know how many times we have to say this again and again, to point out also that whatever comes out of the United States and then tries to get grafted onto other places will inevitably cause problems. So I think one part is actively going out and looking for examples that are not the norm of what we have in Western democracies.
And then the other one is that we need more classic accountability measures. I remember when I first presented this paper, back when I was still working at Princeton, I got a great question from a philosopher, Molly Crockett, who was asking: when the tobacco industry funds research, for example, or any other big industry like that funds research, there are all these disclosures. Is there a culture of disclosures in this community you’re talking about? And there’s actually a hard opposite. Some of my colleagues and I have seen places on the web forums where they’ll say, “Let’s just not mention that this is related to EA,” or, “Let’s downplay that.” Which is why you come up with something like a TESCREAL acronym, why we have to name what these movements and cultures are and how they link to each other. And I think that, if we had more disclosure, you’d start to see how some of these voices are crowding out others.
Émile P. Torres:
Just to add something that touches on what everybody’s said so far. The TESCREAL vision of the future… So first of all, one of the explicit claims that Gebru and I make in the paper is that there are a lot of people working for these companies who either don’t buy into these particular visions or are just unaware of their proximity to them. I have friends or contacts at DeepMind, for example, who are just working on particular projects, focused on solving very specific problems, are critical of the TESCREAL worldview, despite working for a company that emerged out of this movement.
So that being said, maybe it’s just worth noting that the way I understand the TESCREAL vision is as a kind of tyranny. This is a very particular view about what the future ought to look like that was crafted almost entirely by white men at elite universities like Oxford and in Silicon Valley, and now they’re trying to realize this future, and in doing that to impose this future on the entire rest of humanity. And the way they’re trying to realize this future is through AGI, because again, they believe that AGI is going to be the most important and the last invention we ever need to create. And if it’s value aligned, it’ll bring about… Value aligned, meaning aligned with the values of the TESCREAL utopian vision. Then it’ll bring about that utopia. If it’s not value aligned, then by default it’ll kill everyone on Earth.
And so Jacob was right that the way existential risk or existential catastrophe is typically defined is as any scenario that would prevent us from creating this particular utopia, where we re-engineer humanity, we go out and colonize space, we create literally planet-sized computers to run virtual reality worlds full of trillions and trillions of happy people. So that is what existential catastrophe is. So there’s only one option for the future, and I feel like, at least for me, one of the most striking features of the TESCREAL literature is the complete absence… There’s virtually zero reference to what the future could, or more importantly should, look like from alternative perspectives: from the perspective of indigenous communities or Islam or Afrofuturism, feminism, queerness, disability, even the non-human world.
One of the most influential books, the most authoritative publication so far on the EA long-termist worldview, is What We Owe The Future, written by William MacAskill and published in 2022. He has a passage where he suggests that our systematic obliteration of the biosphere might be net positive. And the reason is that there’s lots of suffering among wild animals; therefore, the fewer wild animals there are, the less wild animal suffering, so that might ultimately be a good thing. So that then suggests the question: does the natural world, do our fellow creatures here on Earth, even have any home, any place in this utopian vision? It’s why I’ve argued that avoiding an existential catastrophe could itself, or almost certainly would, be completely catastrophic for humanity, because avoiding existential catastrophe means that we’ve created this utopia. If we’ve avoided existential catastrophe, we’ve realized utopia, and that utopia itself I think would be disastrous for humanity.
One way to think about this also is that a lot of conceptions of utopia are inherently exclusionary. Somebody’s always left out of utopia. In the Christian Heaven, if there are non-believers there, it’s not Heaven. In the communist utopia, if there are capitalists, then it’s not utopia. So there’s a question of who exactly is left out of the utopia that these TESCREAList individuals and the AI companies that they have founded are trying to actualize in our world. And my answer is that I think most of humanity is left out, and that’s why I would argue that avoiding an existential catastrophe, as they define it, would itself be completely catastrophic for humanity.
Justin Hendrix:
Jacob, I’ll bring you in here. I did want to ask you about participation and your piece on participatory AI. I don’t know if this offers you an opportunity to pick up from Émile there, and maybe get into a couple of those ideas.
Jacob Metcalf:
I can do that. I think our issue is in some ways one of the first times it’s really being said collectively. The TESCREAList community, these research labs, have pulled a really interesting trick in naming existential risk the way that they have, and putting so much attention on it. Because when a normal person hears existential risk, they think of The Terminator, they think of AI causing… They think that’s really awful, we should prevent the end of humanity. But the TESCREAList vision is the end of humanity. For them, existential risk is humans continuing to exist. And that’s only a vision that’s possible if you are excluding vast numbers of people from governing AI, from having their values integrated into it, from having a democratic expression of what is actually desirable from our tech.
And so that’s what our paper really is about. Participation versus scale is the framing there. We’re exploring this question: what would it actually take to have communities, to have society, to have a much wider collection of people involved in designing and governing AI systems? A particular problem at the heart of that question is the scale of AI systems, especially if we’re talking about these very large language models, or, looking further out, something like AGI, that has essentially a global claim: a claim that this is good for everyone, that this is usable by everyone, that it can speak to anyone, and anyone can use it. That runs contrary to some really important aspects of participatory research. Sociologists, social workers, anthropologists, and that sort of social side of things have really invested a lot of thought and energy and effort over the years into developing methods that are actually able to elicit the needs, interests, desires, and experiences of communities.
But those don’t happen at scale usually. The commitment is actually to locality, the commitment is to specificity. And that specificity is a way of doing justice to people’s actual experiences as individuals, as communities, as families, as neighborhoods, as cities. There’s a tension between the global and the local here. And so our paper is really about trying to figure out, what would it mean to scale participation? What are the methods that would make it possible to elicit the needs, interests, preferences of a much larger number of people, but yet not bulldoze over the differences? And it’s really challenging. We’re not offering a solution. Instead, we’re really looking at the tensions and maybe offering some provocations that could hopefully help people working in the space think through the gradual interventions, experimentation, attempts at making this vision of AI nominally more democratic and accessible to the people who are impacted by it.
Émile P. Torres:
So I would just add something real fast to what I think is a really important point that Jacob made, which has to do with the trick that the TESCREALists are pulling when they talk about existential risks. So just to underline… So two things. One is to underline that they are explicit that a failure to radically re-engineer humanity would itself probably indicate that existential catastrophe has happened. So part of the project is becoming, or replacing our species with, a new, supposedly superior, “radically enhanced” post-human species. And so this leads to another terminological trick that they play, which I think could be really misleading for people who aren’t all that familiar or conversant with the frameworks and ideas that the TESCREALists have developed. This other trick is the way they define humanity.
So they have an idiosyncratic definition of humanity, which includes not just our species homo sapiens, but whatever descendants we might have as well. And maybe they add a few other conditions, like these future descendants have to have a certain level of “intelligence” or certain kind of moral status, so they’re the sorts of beings that matter in a moral sense. But this broader extension, this broader circle of beings, homo sapiens and our future descendants that all are classified as humanity, enables them to say that reducing the risk of human extinction is a top priority. Most people will interpret that as meaning, we should make sure that our species, homo sapiens, sticks around. No. The way they define human in this broader sense implies that our species could disappear entirely, maybe within the next decade, without human extinction having occurred. And in fact, it’s a certain type of human extinction, namely the eventual replacement of our species with this new post-human species, that is a key aspect of their project.
So there’s a very real sense in which, if you understand human in an intuitive, colloquial sense, or even just a more biological sense, they are pro-extinctionists. If you accept their broader definition, then they’re not pro-extinctionists, because they include not just current-day humans, but also these post-humans, within the extension of the term humanity. So it’s a bit of a trick. And it’s really important, I think, for people not to get fooled by this, because they hear the TESCREALists say, oh yeah, we avoided human extinction, really important. What they’re talking about when they mention human extinction is not what most people think.
And so a real humanist like myself is about preserving our species, whereas they are ultimately about marginalizing or maybe even eliminating our species. One of the terms they use, in fact, is legacy humans, to refer to those instances of homo sapiens that happen to stick around in the post-human era. What would these legacy humans… What would their life be like? I don’t know, maybe they’d be kept in pens or as pets or something. But again, we’d be marginalized if not completely eliminated.
Justin Hendrix:
I want to just ask a final question of the group, which is: what would be your message to policymakers? Certainly here in the US we’re perhaps a bit behind Europe in this regard, in thinking about artificial intelligence, but I’m not sure what to make of the quality of the debate and how it might relate to some of the things we’ve talked about here. I know that at one of the AI insight forums that Senator Chuck Schumer hosted, he did go round the room and try to get everyone to estimate their p(doom), their probability of doom from artificial intelligence. So to some extent, perhaps they’re internalizing up on Capitol Hill some of the things that we’ve discussed here. I believe Janet Haven at Data & Society was one of the participants at that insight forum. But if you all had been in the room, what would you want to communicate to policymakers about these issues?
Jenna Burrell:
I’ll start. So I think when you look at computing technology, policymakers often look at that and they go, “This is extraordinarily complicated. This is something that requires technical expertise.” And okay, yes, on some level, but I think it’s a real danger, particularly to the democratic outcome of these systems and how they’re regulated, to let technologists, people with computer science degrees, people who actually build the technology, make all the decisions. And actually, I have a computer science degree, and so I understand how this stuff works, and I think that’s very dangerous. It’s also a cover that technologists can use to say, “It’s hard to understand. You have to trust me that this is what it is, this is what it can do, this is how we should regulate it.”
So I think if you’re looking at who’s in the room, that’s the thing to look at, like how diverse are the backgrounds, perspectives, life experience, socioeconomic background, expertise… It shouldn’t be something that technologists with technical degrees get to decide because it’s complicated. And if that’s the only way to regulate it, perhaps we shouldn’t be using it in so many domains.
Shazeda Ahmed:
I would say that giving in to the manufactured urgency of AI safety thus far appears to be leading policymakers to make policy that might change nothing. Instructions for how to build bioweapons have been on the internet since the dawn of the internet, and there are all these arguments about what it would actually physically take to make one. If we end up with something like that on the books and it never happens, all this time and money and investment will have gone towards preventing an outcome that, as we’ve discussed in many ways, might never have happened anyway. Instead, policymakers should be looking to the issues that are actually tractable, the ones they’re continuing to punt on because their time is limited.
And when they bring in people who want to talk about speculative risks, they are actually losing time on a variety of issues that have had many solutions pitched, that are testable, that you can actually experiment with, see what works and what doesn’t, versus this fully speculative space we’ve been talking about. Plus, over time, when you punt on those issues, it leads to a loss of the United States’ opportunity to be a global norm entrepreneur, and a failure to live up to the democratic ideals that we say that the US government espouses.
Émile P. Torres:
I would emphasize that it’s crucial for policymakers to understand the ideologies that are shaping the discourse about AGI existential risk, understand what this concept of existential risk is in the first place, and how it relates to human extinction. Again, to reiterate what Jacob was mentioning and what I was discussing earlier, a lot of these individuals are extinctionists in a certain sense, despite Elon Musk saying, “There are humanists and extinctionists, and I’m one of the humanists, and woke people and the Voluntary Human Extinction Movement and so on, they’re the extinctionists.” No, actually Elon Musk is a transhumanist, and transhumanism is ultimately about replacing humanity, our species, with a new post-human species.
So understand that the TESCREALists are responsible for the AGI race in the first place, including Eliezer Yudkowsky, one of the leading AI doomers in the world. He has played an integral role in getting DeepMind founded and funded, as well as OpenAI. Altman has been explicit that Yudkowsky inspired him and many other people to get into AGI in the first place. The very people who are screaming about AGI doom are also the ones who are largely responsible for this predicament that we’re in right now. Understand that even the most extreme doomers are pro-AGI. They want AGI as soon as possible. They’re just worried we’re not ready for AGI yet. Why do they want AGI as soon as possible? Because once we get AGI, we get immortality, and we get to spread into the stars. We get to realize this vast and glorious future, as they would put it.
Yeah. I think that’s what I would emphasize. It’s just really crucial to understand… And it’s complicated, and I think a lot of policymakers are going to struggle with that, understandably. But it’s really important to be aware of the idiosyncratic way that they use terms. This goes back to a point that we were discussing early on. I think when it comes to articulating some vision of the future that we ought to strive towards, maybe that’s the wrong way to think about it in the first place. But insofar as we’re designing a future, I think it’s really important, as a colleague of mine Monika Bielskyte likes to say, you have to design with, you cannot design for. So you can’t just have a bunch of powerful white men who are designing a future saying, “Yeah, this is for everyone.” And even if they are sensitive to social justice issues, that is not enough. You need to include people with a multiplicity of different perspectives, values, cultural traditions and so on, in a processual sense, as part of the process. That is absolutely integral. So that’s what I’d emphasize.
Justin Hendrix:
So maybe add to that: follow the money, which is always good advice, and follow the ideas back to their root. Jacob, I guess that gives you the last word, and also offers you the opportunity to say any thanks to the other authors that you might like to mention.
Jacob Metcalf:
I agree with everyone else about policymakers, but I’m going to get really practical and just say fund socio-technical research. It’s fair to think that, in some ways, people on our side of the game produce a lot of snark. But we also produce a lot of research. To the extent that we want this diversity of perspectives to flourish inside of science and policy, we need to be supporting the careers and the intellectual communities where that will happen.
And the government really does have a lot of influence in that space, through the NSF, through NIST, setting the priorities, creating careers for people who want to attend to the actual consequences and impacts of these technologies rather than some far-off future. To some extent, they’ve done a pretty good job. I want to acknowledge that NIST has done a pretty good job.
And then just to conclude, I really want to thank all the other authors. We couldn’t have everyone on this call, but each and every one of them could have been on this call, and it would’ve been a fascinating conversation. Really, the whole collection, they’re all bangers, and I really hope that people take the time to give it all a read. I also should thank Edward Valauskas, who is the editor-in-chief of First Monday, which is the world’s first open access online journal, and he’s a tireless editor who has kept it going.
Justin Hendrix:
So if you are looking for this, you can google Ideologies of AI and the Consolidation of Power at First Monday, a peer-reviewed journal on the internet. I want to thank you four legacy humans for joining me today for a good conversation and a good tour of the ideas here, and commend my readers to go and look at these papers. Thank you all so much.