Meredith Whittaker is the president of the Signal Foundation and serves on its board of directors. She is also the co-founder of NYU’s AI Now Institute. Whittaker got her start at Google, where she worked for 13 years until resigning in 2019 after she helped organize the Google Walkouts. She speaks with Oz about learning on the job, championing data privacy and being awarded the Helmut Schmidt Future Prize for “her commitment to the development of AI technology oriented towards the common good.”
Thanks for tuning into Tech Stuff. If you don't recognize my voice, my name is Oz Valoshian, and I'm here because the inimitable Jonathan Strickland has passed the baton to Cara Price and myself to host Tech Stuff. The show will remain your home for all things tech, and all the old episodes will remain available in this feed. Thanks for listening. Welcome to Tech Stuff. I'm Oz Valoshian. And I'm Cara Price.
And this is The Story, our weekly episode dropping each Wednesday, sharing an in-depth conversation with a fascinating person in and around tech. This week, Oz tells me he's bringing me a conversation with Meredith Whittaker.
That's right. Meredith leads the Signal Foundation, which is a nonprofit that oversees the encrypted messaging app Signal, beloved by journalists, deal makers, drug dealers, and privacy buffs alike.
It takes wanting to keep a really, really, really big secret to get on Signal, but we should also think about everything we say as a really big secret, if you consider who has access to all of our information.
One hundred percent. Well, you sometimes don't know what's a really, really big secret until it's too late. Meredith has a fascinating background. She started at Google back in the day when it famously boasted that "don't be evil" slogan, and she actually started as a temp in a kind of customer-service-adjacent gig, and that was despite being a Berkeley grad in literature and rhetoric. Eventually she worked her way up into leadership, founding Google's Open Research group and then separately founding something called the AI Now Institute at NYU. And after she left Google, she became head of Signal in twenty twenty two. But the circumstances of her departure from Google were memorable, to say the least.
I think it's fair to say that she got on everyone's radar back in twenty eighteen when she led the walkouts at Google over the experience of women in tech and at Google specifically over things like pay inequality and the handling of sexual harassment at the company.
And then there's also Project Maven, which was Google's decision to take part in a Department of Defense program to use machine learning and data for battlefield target identification, which was very, very controversial internally and which in part is what led her to her role today at Signal. Meredith won a prize in Germany last year called the Helmut Schmidt Future Prize, where she was honored for, quote, "her commitment to the development of AI technology oriented towards the common good." So we talk about that too, and why Meredith fundamentally describes herself as an optimist. But we start by geeking out over literature.
Let's hear it.
So I was reading your bio, and one of the things that stuck out to me was your background as an English literature and rhetoric major in college. I was also an English major. As I looked at your kind of story, there's this arc from arriving at Google as a graduate in two thousand and six, in the "don't be evil" era, seeing all of these technologies developed, many of which went on to be kind of part of the foundation of what we now call AI, leading a protest movement, and ultimately leaving. And it made me think of Pilgrim's Progress or a bildungsroman.
Then, yeah, I mean, maybe a jeremiad is more accurate.
So that was a joke for the four English literature majors who are listening. I think for me, I was very sincere. I still am very sincere, and I came in and took a lot of things at face value, in part because I had no idea about tech. I was stepping in and I was asking very basic questions.
You did in two thousand and six.
I mean, well, you know, it wasn't the money occupation when I got into Google. It was actually a fairly lovely environment in some ways, because a lot of the people there were people who had a special interest, like nerdy people like myself, but I was a book nerd, and they were sort of math and science nerds. And suddenly their special interest, their sweet passion that they spent so much time thinking about, was also profitable. It was also important for the world. It put them in a position of being a meaningful contributor. And there was a genuineness to that. And we're in an era now, of course, where you know, no one's mom is saying be a doctor or a lawyer. They're saying be an engineer. And so it was, you know, it was simply a very different world.
And what about "don't be evil"? When was that generated, then? When was that abandoned? Was that part of the kind of discourse when you went to Google?
That was generated before my time, and it was core to the discourse at Google. I think it was removed by lawyers in, like, twenty fifteen, twenty sixteen, I don't actually remember the date. But look, it's a slogan.
You know.
The structure of Google was still up and to the right forever, so profits and growth were the objective functions. That's the sort of perpetual motion machine that Google adhered to. However, I do think "don't be evil" had some power in that it was a touchstone that we all shared, where socially awkward people, or people who might not quite understand how to formulate a discomfort, could kind of point to it and say, hey, this is making me think that we should examine this, because we said don't be evil. And most of this, you know, throughout Google's history, was at a time where, let's say, the horizons that would challenge that were very far in the distance. So you could still make a lot of quarterly returns doing a lot of things that were understood, and we can debate that, as good, right? And things like building a search engine for the Chinese market that would include surveillance functions for the government was not something you even had to consider, because there was so much low-hanging fruit in terms of continuing to grow, continuing to profit. And so I think that combination of, like, it was sincerely held, and the actual temptation to do the things that really got into that sort of hot water, potential evil, were not on the table at that point, because there was so much else that could be done. And you know, of course, the ratchet turns, the ratchet turns, the ratchet turns, and then suddenly those horizons are much closer. The company has swollen by orders of magnitude, and that little slogan is quietly removed by a team of lawyers who probably bought some summer houses with the fees.
Did you believe that slogan? And was there a moment along the way where you definitively started not to believe it.
I liked having that slogan as a tool. Let me say that, right.
It was useful. It did work in rooms.
It, you know, allowed us to frame debates in a way that had at least as their reference point a core ethical consideration. Like, I was young. There were about a million different learning curves I was scaling at once. Yes, what does it mean to have a real job?
Right.
I didn't come from this class.
I didn't have familiarity with these cultures or environments because I'd literally never been in an environment that looked anything like you know what my family still jokingly refers to as a business job.
Right.
I didn't know what was normal and what was not normal. So for me, Google was normal, and so I just want to be careful. Like, I don't know if I believed it or not. I found it useful and I saw it do work in the world, and I liked the idea. Like, it sounds great that we're giving people ads that match their interests. What could be wrong with that? We're organizing the world's information. Wow, all of this is cool.
Like, I was like, okay, what is tech? What is Google? Hey, sit me down and tell me, what is information? I was a master at asking the dumbest questions, but sometimes they would unlock whole worlds for me, and then I would realize, like, oh, information is just websites. Okay, but you can't make a website if you don't have a computer, and you have to be able to code HTML markup to do that. Okay, so it's more limited than I thought. Okay, well, what is an index? Oh, it's ranking it. But isn't that a value judgment?
Well?
I guess it's not if it's an algorithm, but like who makes the algorithm? And so it was butting my head up against what are kind of dumb, very basic questions and sort of sensing an insufficiency of the responses, and like, you know, over almost twenty years now, continuing that process of trying to, you know, just figure out like what are we doing here?
And how do I understand it? And that was basically my education.
So fast forward, you know, thirteen years, from two thousand and six to twenty nineteen. How did you get to that point where it became impossible for you to stay?
I don't have a pat answer, But how did I get to that point? Well?
Again, I was a sincere person and I had done a lot in my career. I had a lot of opportunities and then I took them, and you know, I built a research group inside of Google.
I was involved in efforts to spread privacy and security technologies across the tech ecosystem. I was a champion for privacy within Google. I had built, you know, open source measurement infrastructures to help inform the net neutrality debate. I had always tried to be on the side of doing social good, to be very bland about it.
Yeah. And I had been able to do very, very well doing that. And at the point where I became frustrated, I had already established the AI Now Institute, co-founding that at NYU, looking at some of the present-day, real-world harms and social implications of AI technologies. This was twenty sixteen. I had become known through the academic world and the technical world, and particularly within Google itself, as an authority and a speaker and an expert on those issues. So talking about some of the harms that these technologies could cause, criticizing within, essentially criticizing within. I saw myself as a resource, and actually many, many people did. I would be brought in when teams were struggling with things, I would advise them. I would often give academic lectures that went against the party line at Google, but were very well cited, were very empirically documented. And I had felt that I was making some headway, right?
People were listening to me. I was at the table, so to speak.
And then in fall twenty seventeen, I learned about a military contract that Google had secretly signed to build for the US drone program. And that was at a time, and still today, when those systems were not purpose-built, right? They were not safe.
There were ethical concerns. Sergey Brin, the co-founder of Google, had made remarks that were very clear in the past about keeping away from military contracting, given Google's business model, its public interest; it serves the world, not just the US.
The business model, meaning that it knows a lot about people?
Essentially, yeah, that it is.
Ultimately it's a surveillance company. It collects huge amounts of data about people, and then it creates models from that data that it sells to advertisers, and advertisers can leverage Google's models and ultimately the surveillance that they're built on in order to precision target their message and thus reach more customers. And that remains the core of the business model, that remains the core of the Internet economy.
And putting that type of information at the service of one or another government is, in my view, dangerous. And the first thing I did was, I think, start a Signal thread with some people and say, what should we do?
This was Project Maven. It was to do with helping the US government target enemy combatants on the battlefield.
Though well, it was building surveillance and targeting AI for the drone program.
And what was the result of, as you said, talking to people on Signal? What happened next?
You know, it grew and it grew, and then I wrote an open letter. That letter got signed by thousands of people. Eventually someone leaked it to the New York Times. It snowballed, it grew into a movement, and ultimately Google dropped that contract. Now, you know, these efforts are ongoing. I want to be clear that this was a fairly different time. I think now there's a lot more comfort with military technology, but I think those issues have yet to be addressed: the issues of the danger of yoking the interests of any one country, irrespective of which, to a massive surveillance apparatus, the types of which we haven't seen in human history. That's a real concern, and I think that is a concern across the political spectrum. We should all be concerned about that, and frankly, I think it speaks to the need to question that form altogether: the form of sort of the surveillance business model and the collection of massive amounts of information that can be used to harm, control, oppress, et cetera.
How did you end up finally deciding it was time to leave?
There's not one answer to that, you know, I realized pretty quickly that I had poked a bear. I had a huge amount of support inside the company.
I still do. I don't think anyone's saying I'm wrong.
I mean, there were twenty thousand employees at one point, right, involved in it? Yeah, and it's a huge number. I mean, how many employees are there?
At that point, I think it was like two hundred thousand. Yeah, so it's significant.
It was the largest labor action in tech history, which speaks not to the militancy of people who work in tech, but I think speaks to the fact that this was really a rupture of a lot of discomfort that had been building for a long time. Right? Like, people who often got into this industry because of their ethical compass, because they really, you know, wanted to quote unquote change the world, were feeling betrayed and feeling like their hands were no longer on the rudder in a way that they felt comfortable with.
Coming up, we discuss why Signal broke through the noise. Stay with us.
So you poked the bear and it became time to leave. I mean, not for nothing, one of the wedge issues was this use of Google's technology for military purposes. And I think twenty twenty four was the year. I mean, of course we will remember, you know, the drone strikes during the Obama years, and this is not per se a new phenomenon, but something about Ukraine and Gaza has really elevated into mainstream consciousness what it means to have autonomous weapon systems.
Yeah, I think the environment has changed a lot. Look, on one side, there's a very real argument that I'm deeply sympathetic to: military infrastructure and technology is often outdated. They're amortizing janky old systems that are not up to snuff. Logistics and processes are threadbare, all of that being true, and some of the new technologies would make that a lot easier. On the other hand, I think we're often not looking at these dangers. What does it mean to automate this process? What does it mean to predicate many of these technologies on surveillance data, on infrastructure that is ultimately controlled by a handful of companies based in the US? How do we make sure the very real dangers of such reliance are actually surfaced, and quick fixes to bad practices accrued over many, many years of grifty military contracting don't come at the expense of, you know, civilian life, don't come at the expense of handing the keys to military operations to for-profit companies based in the US?
And you talked about Project Lavender in your Helmut Schmidt speech, which I found fascinating. Why do you think it was important to start there?
Well, Lavender, Gospel, and Where's Daddy are three systems that were revealed to be used in Gaza by the Israeli military to identify targets and then, you know, basically kill them. I looked at those because they are happening now, and because they were the type of danger that the thousands of people who were pushing back on Google's role in military contracting were raising as a hypothetical, and because I think it shows the way that the logics of the Obama-era drone war have been supercharged and exacerbated by the introduction of these automated AI systems. So for folks who might not remember, the drone war was the theater, to use the military term, in which the signature strike was introduced. And the signature strike is killing people based on their data fingerprint, not based on any real knowledge about who they are, what they've done, or anything that would resemble what we think of as evidence. So, because someone visited five locations that are flagged as terrorist locations, because they have family here, because they are in these four group chats, whatever it is (I'm making up possible data points), you could model what a terrorist looks like in any way you want. And then if my data patterns look similar enough to that model, you deem me to be a terrorist and you kill me. And you know, it is exactly the logic of ad targeting.
Right.
I scroll through Instagram. I see an ad for Athletic Greens because, you know, they have my credit card data: you buy healthy stuff, you're this age group, you live in New York, whatever it is, so we assume you will buy Athletic Greens, or be likely to click on this ad, and we'll get paid. It's that, but for death. The Lavender, Where's Daddy, and Gospel systems are basically that, for death, supercharged.
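To make the analogy concrete, here is a minimal, purely illustrative Python sketch, not any real ad or targeting system: the feature names, vectors, and threshold are all invented. It shows how the same similarity scoring against a learned profile can back either an ad recommender or a "signature"-style classification, with only the consequence changing.

```python
# Illustrative sketch only: the same similarity-scoring math can back an ad
# recommender or a "signature"-style classification. The feature names, the
# vectors, and the threshold are hypothetical, invented for this example.
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# A "model" here is just a profile vector learned from past data; it's
# hard-coded for the sketch. Features might stand for buys-supplements,
# age bracket, city, group-chat membership, and so on.
target_profile = [0.9, 0.1, 0.8, 0.7]
person_pattern = [0.8, 0.2, 0.9, 0.6]   # someone's observed "data fingerprint"

score = cosine_similarity(person_pattern, target_profile)
if score > 0.85:  # arbitrary threshold
    print(f"match ({score:.2f}): serve the ad, or, in the military version, flag the person")
```

The point of the sketch is that nothing in the math distinguishes the two uses; the stakes live entirely in what happens after the threshold is crossed.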
And is it a coincidence, the resemblance between how ad targeting works and how these signature strikes on the battlefield, or in war, work? Was the military inspired in some way by the ad industry, or what's the kind of source of the connection?
I wouldn't be surprised. But then again, you know, it wasn't just the ad industry. There was an Obama-era, kind of neoliberal, faith in data. Now, there wasn't much of a definition of what data is, but the idea was that data was more objective, less fallible, more scientific. These are almost like mythologized versions of data, and that if we relied on the data instead of subjective decision making, we would reach determinations that were better, we would make the right choices. Now, part of my background is that I came up doing large-scale measurement systems, so I was in the business of making data.
That's interesting. I mean, I just want to pause on that, because that's an interesting way of putting it; I've never heard it expressed before. But essentially, making data equals measuring stuff.
Yeah, data is a very rough proxy for a complex reality.
Right. So you figure out, like, what's the way we're going to create those proxies. How are we going to measure it? Right? Whether we measure human emotion, or measure the timing of street lights, or measure the temperature in the morning based on a thermometer that's calibrated in a certain way, we log all of those and they become data, right? And of course those methodologies are generally created by people with an interest in answering a certain set of questions, maybe not another set of questions. In the case of Google or another company, they were interested in measuring consumer behavior in service of giving advertisers access to a market. This is, in effect, like a rough proxy created to answer very particular questions by a very particular industry at a very particular historical conjuncture, and it is then leveraged for all these other things. And I think shrouding that data construction process in sort of scientific language, you know, assuming that, like, data is simply a synonym for objective fact, that's a very convenient way of shrouding some of those intentions, right? Shrouding the distinction between those who get to measure and thus get to decide, and those who are measured and are thus subject to those decisions.
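As a small illustration of that point about measurement, here is a hypothetical Python sketch; the field names and both "schemas" are made up. It shows how the same slice of reality becomes different "data" depending on who is doing the measuring and which questions they want answered.

```python
# A minimal illustration of "data is a rough proxy for complex reality":
# the same moment in the world, reduced two different ways depending on who
# is measuring and why. All field names are made up for this example.
morning = {
    "air_temp_c": 11.4,
    "feels_like_c": 8.9,
    "birdsong": True,
    "commuters_at_stop": 14,
    "mood_of_observer": "tired",
}

def weather_service_view(moment):
    # A meteorologist's schema keeps only what their instruments are calibrated for.
    return {"temp_c": round(moment["air_temp_c"])}

def ad_platform_view(moment):
    # An advertiser's schema keeps only what might predict consumer behavior.
    return {
        "cold_commute": moment["feels_like_c"] < 10,
        "foot_traffic": moment["commuters_at_stop"],
    }

print(weather_service_view(morning))  # {'temp_c': 11}
print(ad_platform_view(morning))      # {'cold_commute': True, 'foot_traffic': 14}
```

Everything the schemas drop (the birdsong, the observer's mood) simply never becomes data, which is the sense in which the measurer's questions shape what later gets treated as objective fact.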
I mean, twenty twenty four was the year of this hypothetical that was raised at Google becoming real in Gaza. It was also the year that two Nobel Prizes, one in chemistry and one in physics, went to one former and one current Google employee, both in the realm of pushing fundamental science forward. So how do you see this moment in AI, with both this kind of enormous hope in something like drug discovery and the obvious peril in targeting people based on their metadata?
Well, I think there are a lot of things AI could do very well, right. Drug discovery is one; it's very interesting, and the idea of being able to create drugs for things that affect smaller numbers of people and thus aren't generally incentivized, all of that, like, yes, let's leverage what we have to make life as beautiful as possible for as many people as possible. Amen. The issue comes down to the political economy of AI and what we're actually talking about. And I think when we look at the AI industry right now, and when we look at the technologies that the Nobel Prize was awarded for either creating or leveraging, we begin to recognize that AI is an incredibly centralized set of technologies. It relies on huge amounts of computational resources, and then there is data that's required to train these models. And so the paradigm we're in right now is that bigger is better: the more chips, the more compute, the more data. You combine those in whatever architecture and you create a model, and then that model is purportedly better performing and therefore better, and therefore we're advancing science.
And that's a useful argument, I guess, for very, very large companies whose inherent logic is to grow larger, right? I mean, if you're Amazon or Google or Microsoft, what is there to do for growth other than popularizing this model of how the progress of AI works?
Yeah, I mean, it would be very bad for these companies' bottom lines if it turned out that much smaller models using much more bespoke data were in fact much better, right? Because they've thrown a lot of money behind this. The narratives we hear about tech and the narratives we hear about progress are not always born of science. And I think when you look at that, then you have to realize, like, AI is not a product of scientific innovation that everyone could leverage and these guys just figured it out first, right? It's a product of this concentration of resources. And in fact, in the early twenty tens, when the sort of current AI moment took off, the algorithms that were animating that boom were created in the late nineteen eighties. But what was new were the computational resources, the chips and the ability to use them in certain ways, and the data. And the data is the product of this surveillance business model that Google, Meta, et alia sort of participated in and had built their infrastructures and businesses around. So of course, who has access to those infrastructures, who can meaningfully apply these old algorithms and begin to fund research sort of optimizing them, building new approaches to this big data, big compute AI paradigm? It's the same surveillance platform companies. And so all of this comes back to the question of, like, well, the Nobel is awarded for AI this year, don't we want better drug discovery? And it's great, yes, we absolutely do. But if you look at the market conditions, it's unclear how that better drug discovery is actually going to lead to better health outcomes. And so who will the customers of that AI be?
Right?
Most likely it will be pharmaceutical companies, not altruistic organizations. It'll be insurance companies, who may actually want to limit care, right? So you have a diagnostic algorithm, but it's not used to make sure that I'm given the care I need, even if it's expensive, early, so I live a long life. It's used to make sure I'm excluded from the pool so that I am not a cost going forward.
So I think that's the lens that we need to put on some of this, because the excitement is very warranted. But in the world we have, it's pretty clear without other changes to our systems, that AI, given the cost, given the concentrated power, given the fact that these entities need to recoup massive investment, is not going to be an altruistic resource, but will likely be ramifying the power of actors who have already proven themselves pretty problematic in terms of fomenting public good so to speak.
Coming up, we'll hear about Meredith Whittaker's speech upon winning the Helmut Schmidt Future Prize and why we should rethink our preconceptions about data. Stay with us.
Talk about Signal, which, not for nothing, was the first place you went when you learned about Project Maven, and is now where you spend a lot more time. So for those who don't know what it is and how it works and why it's important, I mean, just tell us a bit about it.
Yeah, Well, you know, Signal is incredible.
I'm really honored to work here, and I've been along on the journey in various capacities for a very long time. It is the world's largest actually private communications network, in an environment, as we just discussed, where almost every one of our activities, digital and increasingly analog, is surveilled in one way or another by a large tech company, a government, or some admixture. So it's incredibly important infrastructure. We have human rights workers who rely on Signal. Journalists rely on Signal to protect their sources. We have militaries that rely on Signal to get communications out when they're necessary. We have boardrooms; you know, most boardroom chatter, most CEOs I meet, use Signal religiously.
Governments use Signal. So anywhere where the confidentiality of communications is necessary, certainly anywhere where the stakes of privacy are life or death, we know that people rely on Signal. And our core encryption protocol was released in twenty thirteen, and it has stood the test of time. It is now the gold standard across the industry.
And it's used by WhatsApp as well.
Right, yeah, it's used by WhatsApp.
It's used across the tech industry because one of the things about encryption is it's very hard to get right. It's very easy to get wrong, and so if someone gets it right, you want to reuse that. You want to use that formula. You don't want to DIY because it's almost certain you'll have some small error in there, and when we're talking about life or death consequences, you can't afford that.
But why, if WhatsApp and Signal use the same open source code, am I so much safer using Signal than WhatsApp?
Because the Signal Protocol is one slice of a very large stack, and WhatsApp uses that slice to protect what you say. But Signal protects way more than what you say. We protect who you talk to; we don't have any idea who's talking to whom. We don't know your profile name, we don't know your profile photo, we don't know your contact list, and it goes on and on and on. So everything we do, we sweat the details in order to get as close as possible to collecting no data at all.
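To make the earlier "don't DIY your crypto" point concrete, here is a minimal sketch using PyNaCl, a widely used binding to the audited libsodium library. This is not the Signal Protocol, which, as she says, is one slice of a much larger stack; it only illustrates reusing a vetted authenticated-encryption construction instead of hand-rolling one, and why encrypting content alone still leaves metadata exposed.

```python
# Minimal sketch of reusing vetted encryption rather than rolling your own.
# Uses PyNaCl (libsodium bindings): pip install pynacl
# This is NOT the Signal Protocol; it's just authenticated secret-key encryption.
import nacl.secret
import nacl.utils

key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)  # 32-byte secret key
box = nacl.secret.SecretBox(key)                         # audited XSalsa20-Poly1305 construction

ciphertext = box.encrypt(b"meet at 7")  # nonce generation and tamper detection handled for you
print(box.decrypt(ciphertext))          # b'meet at 7'

# Note what this does NOT hide: who sent the message, to whom, and when.
# Protecting that metadata is the additional work being described here,
# beyond encrypting the content itself.
```

The design point is the one she makes: a small mistake in a home-grown scheme (a reused nonce, a missing authentication check) can silently break everything, so you lean on a construction that has already been scrutinized.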
Those things that you're describing that Signal doesn't collect are the inputs to the signature strikes we were discussing earlier.
Could be? Could be.
I mean one of the issues with proprietary technology or classified information is we don't totally know, but it's that type of information that has been reported and mentioned as inputs to those strikes.
Yeah, and you're not the CEO of Signal, you're the president of the Signal Foundation. Can you kind of explain why that is and what it means?
Well, I think in this moment in the tech industry, being a for-profit would be dangerous. It's not profits that we are opposed to; it is the particular incentive structure of the tech industry, where ultimately you either collect data and monetize it, creating models to sell ads, as we discussed, or training AI models, or selling it to data brokers, or what have you, so, surveillance, or you provide sort of the goods and services, the picks and shovels, to those who do the surveillance. So you can imagine a board with a fiduciary duty governing a for-profit Signal. At some point that fiduciary duty is going to take precedence over the obsessive focus on privacy.
So that's really the crux of it.
And we're in an industry where consumer apps are quote unquote free, right? There is not a norm of paying for these things upfront by the consumer. It's not for lack of innovation that we don't have many, many more Signals or Signal-like products that are focused on democracy, focused on privacy, focused on ensuring fundamental rights, focused on the public good. It's really because many of those things cannot be subject to the surveillance business model and thus don't really exist, because capitalizing them is such an issue.
You mentioned that, you know, there's a consumer expectation that tech is free, or at least free at the point of delivery. You also authored a piece in Wired recently, kind of arguing twenty twenty five might be the beginning of the end of big tech. But I guess my question is: big tech has so successfully created this expectation on the consumer side that tech products are free, and that essentially I pay for the product by allowing myself to be surveilled. Can there be a fundamental shift away from the big tech hegemony unless and until consumers are willing to pay with money, rather than data, for infrastructure services?
I mean, I absolutely believe there can be. What have we had, thirty years of this, twenty years of this? This is hyper novel in terms of human history. This is in no way natural. There's, you know, a longer history of the sort of particular regulatory and political environment this came out of. We now have the FBI advising people not to send SMS messages following a massive hack that enabled China, and I don't know who else, to access telephony networks and surveil US citizens, including many high-ranking citizens. And so we're in a geopolitically volatile moment, where the lines between nations and jurisdictions and interests are getting a bit more brittle, a bit more crisp, and I think the imperative of moving away from this centralized surveillance tech business model is really becoming clear to almost everyone. And that was what animated the piece in Wired. You know, I would say it's more of a wishcasting into the future than necessarily a prediction of what will happen.
But insofar as, like, hope is an invitation for action, I really think there's a lot of creative work that could be done to undo that business model. Right, the question is really, where do the resources come from? And that's a question we can begin to answer. Okay, are there endowments? Are there certain technologies, like Signal, like some of the cloud and core infrastructure, that should be more openly governed? Because really, you know, the more dependent we are on these infrastructures, and the more these infrastructures are controlled by, you know, a small number of actors, the more perilous those single points of failure become. And I think the more attentive people are becoming to that peril, the more appetite there is for solutions that may not have been on the table even a couple of years ago.
Yeah, it is interesting. I mean, I was at a talk the other day and somebody was talking about cloud computing and how the metaphor kind of suggests this decentralized model that's kind of around all of us. But what cloud computing has actually meant is this incredible centralization of computational power in the hands of basically two nations and ten companies.
Yep, yeah, exactly. And you know, two nations, but the US being dominant there. It's Amazon, Microsoft, and Google that have seventy percent of the global market. Seven zero. Insane. So that means other nation states are running on Amazon, running on Google, right? And there is no way to simply create a competitor, because you are talking about a form that was understood in the utilities context, particularly in telephony, as a natural monopoly for a very long time, and that has, you know, ultimately been kind of drawn into the gravitational pull of these handful of companies. So it's a very big problem.
But again, it doesn't mean that things like signal don't exist.
Signal exists. Swimming upstream, Signal is proving that extremely innovative technologies can and actually do exist, that even swimming against the stream, we can create something that has set the gold standard and proven that yes, there's a ton of innovation left to do in tech, in privacy, in rights preservation, in creating tech that actually serves a social need and, you know, is popular. And okay, now let's solve the issue of this toxic business model to actually foment innovation, to actually foment progress, whatever the claims of the marketing on the tin of big tech may be.
Yeah, and just to close, I really was taken by your Helmut Schmidt speech, and you had a line: make no mistake, I'm optimistic, but my optimism is an invitation to analysis and action, not a ticket to complacency. Talk about that.
Well, that is true, and I think that was me also pushing back a little bit on the idea that a grim diagnosis is somehow pessimism, right? Uh huh. Like, we know it's not great. We know it's not great along a lot of axes; tech is not the only one. We know things aren't working. But the most dangerous, and frankly the most pessimistic, place to be is pushing that aside, disavowing that, diluting that analysis in service of immediate comfort.
Not bothering to critique because you think there's no point, essentially. That's the ultimate pessimism.
Well, I would say that's almost nihilism. I would say the pessimism is where we paint a rosy picture so we don't have to feel uncomfortable, and then base our strategy on that rosy picture in a way that is wholly insufficient to actually tackle the problem, because we never scoped the problem, because we were unwilling to carry the intellectual and emotional responsibility of recognizing what we're actually up against. So I think the most optimistic thing we can be doing right now is really understanding where we stand, really understanding what is the tech industry, how does it function, how does money move? And okay, what are the alternatives that exist, how do we support them, and how do we recognize the power that we have to shift things, right? But that has to be based on a realistic picture. It can't be based on delusion.
And part of that, as you know firsthand, coming back to the beginning of the conversation, is acting as a collective.
Yeah.
Absolutely, I think no one person is going to do this alone ever ever has ever you know, one person may have taken credit at some point or another, but ultimately, like the Internet was a collective effort, the open source Library is you know, every single big tech company that has one CEO, it has millions and millions of contributors. And so we change this together. The issue is will and the issue is resources. The issue is not ideas right. We're not waiting for one genius to figure it out. We're waiting for a clear map and some space to examine it together and share insights and then figure out how to push forward to a world where it's you know, thousands of interesting projects that are all thinking together about creating much better tech and reshaping the industry and its incentives in order to nurture that.
Oz.
First of all, Meredith said about fifteen things that I would put on a small office couch pillow if I knew how to needlepoint.
Well, start with the thing you would love most to put on a needle point.
A grim diagnosis is not an invitation to pessimism.
I love that too.
I just think that, especially right now, it's easy and right to say that there's a lot to be wary of. Yeah, but it also is not an invitation to be weary, ad infinitum.
No, no, I totally agree with you. I mean, I think often if you're critical, you'll be called out for being pessimistic. But I thought the way that Meredith reframed that, and was basically, no, no, hold on a second, it's optimistic to be critical, was really fascinating.
Yeah. I think my favorite line in the whole interview, and maybe one of my just favorite lines I've heard in reporting on technology ever, is that data is a rough proxy for complex reality.
Sounds like you're going to need two pillows, or just a very big one.
I actually want to talk about something that we've talked about in our production meetings for this show, which is that we don't want this show to be all doom and gloom. And you know, when I found out that you were interviewing Meredith, I was interested to see what she would say outside of a sort of techno-pessimist viewpoint, given that she has been quite critical of Google and also of surveillance capitalism. And yet when you hear her talk about Signal, I was sort of optimistic, in terms of, oh, here's a tool that we actually have been encouraged by the FBI to use, yes, yet still don't use. But here's a tool that is actually answering a real fear that a lot of people have, that Meredith has, about privacy, and doing some actionable stuff to make us feel more secure.
Well, with Meredith, ultimately the buck stops with her at Signal, and like, if you're a product lead, or if you're running a company whose main thing is a product, you sure as hell better be optimistic, because otherwise you're not going to get through your day.
That's right, that's right.
But I agree with you. The thing that's really stuck with me was this concept of technology being dual use. And Meredith, you know, used this phrase "signature strike," which it turns out can apply both to serving you with an ad to make you want to buy something, and to using your metadata to make a calculation that you're most probably an enemy combatant or a terrorist and kill you. I mean, this is the idea that it's basically the same set of analysis that can apply to shopping or to life and death. I don't know, I just haven't stopped thinking about that.
Well, it certainly makes me want to protect that data set. That's it for Tech Stuff today. This episode was produced by Victoria Dominguez, Lizzie Jacobs, and Eliza Dennis. It was executive produced by me, Cara Price, Oz Valoshian, and Kate Osborne for Kaleidoscope, and Katrina Norvel for iHeart Podcasts. Our engineer is Bihit Frasier. Al Murdoch wrote our theme song.
Join us on Friday for Tech Stuff's The Week in Tech, where we'll run through our favorite headlines, talk with our friends at 404 Media, and try to tackle a question: when did this become a thing? Please rate and review on Apple Podcasts, Spotify, or wherever you get your podcasts, and reach out to us at tech stuff podcast at gmail dot com with your thoughts and feedback. Thank you.