Transcript: The Futurist Summit: Lessons of the Last Decade with Meredith Whittaker & Frances Haugen
[...]

MS. PASSARIELLO: So how optimistic are you that Congress and regulators will tackle these issues today competently even though they have not passed any major legislation on the last big tech topic they were focused on, which was social media?

MS. WHITTAKER: Well, they haven't passed a federal privacy bill and it's been 20-something years, right? So what--you know, like I don't know where optimism would spring from, but it's pretty barren ground. You know, and I think the incentives right now are not aligned for the social good. I think we're looking at billions of dollars in lobbying being thrown by these big tech companies, a full-on media operations campaign that has been documented by tech industry-adjacent folks to displace ethical concerns and concerns about the social harms of these systems with, you know, what I would call religious sci-fi fantasies about the singularity and about sort of the superintelligence. So, you know, we are outgunned in terms of lobbying power and in terms of the ability to put our weight on the decision makers in Congress. But where my hope lies for regulation is not with the kind of, you know, Athena birthing from the head of a senator and saying, like, actually you need to, you know, do the right thing, but with things like the Writers Guild of America, who have, you know, I think done the best job we've seen of regulating AI, you know, just non-traditionally. They did the classic move, withholding their labor, and they got terms that are actually, you know, staunching the bleeding of the--you know, use by the studios and big tech to place AI within their labor process in ways that will degrade their labor, that will degrade artistic output, and that will actually, you know, I think have a real precedent-setting move in terms of stopping the real harms right now. So I would look to the Writers Guild of America. I would look to SAG.
I would look to, you know, drivers' unions that are contesting the sort of automated precarity of AI systems like Uber and Lyft. I would look to sort of movements from below that are actually tackling the harms now and not simply sitting around and taking selfies with Elon Musk and calling it a regulatory agenda.

[...]

MS. PASSARIELLO: And, Meredith, you, of course, were--became notable in part because of your whistleblowing within Google around the ways that the technology was being used, of course--you know, and how their business incentives sort of dominated over moral and ethical concerns. Now, in this era of generative AI, they talk about being bold and responsible. And of course, they have been a little bit on the backfoot, you know, a little bit beat to the market by both OpenAI and Microsoft. How do you see their approach to ethics and morals versus the business balance these days?

MS. WHITTAKER: Yeah, well, I did do labor organizing at Google, and that was one of the few things that actually checked some of these impulses. So, I think, you know, we can talk about business model. We can also talk about capitalism, right? The engines of these companies are driven by a need, a requirement to report revenue and growth increases every quarter forever. That's the definition of metastasis. And it is obviously not healthy for the social benefit. So, I think, you know, we do need those structural checks. I think--you know, how is Google doing--look, I don't--remember Web3, right?

MS. PASSARIELLO: Vaguely.

MS. WHITTAKER: Like, you know, this was a hype cycle. Everyone was predicting, you know, massive numbers. This is going to change the entire environment. And then, you know, no one's talking about it. Andreessen Horowitz has even moved off it. They're, you know, black-holing their optimistic manifestos. I think generative AI is very similar. I don't think AI in general is similar.
I think they're going to continue to create these large-scale models that involve data and compute. But generative AI is not actually that useful. What happened in January was that technology, or sort of a framework for building models that had been developed in 2017, was sort of put online with an interface by Microsoft/OpenAI, who have to be understood as the same entity, right? And the ChatGPT interface kind of gave people a simulated experience of like, oh, my God, I'm talking to kind of a human. It's spitting out nonsense, but it's spitting it out and this feels kind of sentient, right? And on the backs of this advertisement for their GPT API, which they sell through their Azure cloud services, they sort of generated an entire new hyped narrative around generative AI as the sort of future-facing technology that's going to change every industry. But what does it do, right? It, you know, presents visual images that are often, you know, stolen from artists or like far too close for comfort. And it presents plausible text, right? It infers what's the sort of plausible response to a prompt, based on, you know, mountains of data from the internet, the Reddits, the 4chans. You know, the--Stormfront is in there, as Natasha's work has shown, you know, and kind of presents text that looks plausible, but has no relationship to facts, has no relationship to reality, has no citations, right? So, what is this useful for? It's not useful in most serious contexts. Yeah, you could, you know, replace a junior copywriter, but you better have a senior copywriter who's checking that text, because it's going to be janky. So, I think we need to be, like, really clear about what we are actually responding to.
We're responding to an advertisement, a very expensive advertisement, ChatGPT, that was put online as an interface that allowed us to have a sort of simulated experience with a bot that we're now sort of making all kinds of predictions on that I don't think are actually grounded in any understanding of the utility of these systems. And again, you know, Silicon Valley runs on VC hype. VCs require hype to get a return on investment, because they need an IPO or an acquisition, and that's how you get rich. You don't get rich by the technology working. You get rich by people believing it works long enough that one of those two things gets you some money.

https://www.washingtonpost.com/washington-post-live/2023/10/26/transcript-fu...