Make A.I. Work for Democracy: Marietje Schaake
January 25th, 2024
“The initial promise of the digital revolution was also one of a democratic revolution.”
Marietje Schaake is International Policy Director at Stanford University's Cyber Policy Center and International Policy Fellow at Stanford's Institute for Human-Centered Artificial Intelligence, and she also serves on the UN's A.I. Advisory Body. We take a deep dive into how the digital revolution can still fulfill its promise of a democratic revolution. In other words: make A.I. work for democracy.
Over the past 20 years, power has become increasingly concentrated in the hands of big tech companies, while both Democratic and Republican administrations have chosen to trust market forces rather than intervening to put up guardrails. Unsurprisingly, public interest issues are under-addressed: voluntary commitments by tech companies are often not powerful or enforceable enough to change behavior. But in the last year, more governments and multilateral institutions have come to recognize the risks of AI. The EU AI Act, for example, was just agreed to mitigate the risks that arise from the use of AI applications.
Follow Marietje on Twitter:
https://twitter.com/MarietjeSchaake
Follow Mila on Twitter:
Follow Future Hindsight on Instagram:
https://www.instagram.com/futurehindsightpod/
Love Future Hindsight? Take our Listener Survey!
http://survey.podtrac.com/start-survey.aspx?pubid=6tI0Zi1e78vq&ver=standard
Take the Democracy Group’s Listener Survey!
https://www.democracygroup.org/survey
Want to support the show and get it early?
https://patreon.com/futurehindsight
Credits:
Host: Mila Atmos
Guest: Marietje Schaake
Executive Producer: Mila Atmos
Producer: Zack Travis
-
Marietje Schaake Transcript
Mila Atmos: [00:00:04] Welcome to Future Hindsight, a podcast that takes big ideas about civic life and democracy and turns them into action items for you and me. I'm Mila Atmos.
It's 2024 and the future of America is in your hands. Democracy is not a spectator sport, so we are here to bring you an independent perspective about the election this year and empower you to change the status quo.
We are inundated with stories about artificial intelligence, present and future, good and bad. But we don't yet fully comprehend the real impact of AI and technology on democracy. Of course, we have discussed how AI has been problematic, especially in elections. But what's the long-term outlook, with artificial intelligence continuously and rapidly changing and big tech largely in the driver's seat? I have a feeling that we may always be a step behind, but this doesn't mean we should give up on gaining a better understanding and on being proactive to make AI work for us as a society. We are a hopeful podcast, so we are here to tell you that it's not too late. Stay with us as we further unpack the intersection of AI and democracy.
We are joined by Marietje Schaake. She is International Policy Director at Stanford University's Cyber Policy Center, International Policy Fellow at Stanford's Institute for Human-Centered Artificial Intelligence, and also serves on the UN's AI Advisory Body. While serving as a member of European Parliament for the Dutch Liberal Democratic Party between 2009 and 2019, she focused on trade, foreign affairs, and technology policies.
Welcome, Marietje. Thank you so much for joining us today.
Marietje Schaake: [00:02:06] Thank you so much for having me. Very exciting.
Mila Atmos: [00:02:10] We are a pro-democracy podcast, and as such, we look at all the issues through the lens of democracy. From your point of view, what are the risks of AI to democracy? Is it disruptive to democracy, and if so, how?
Marietje Schaake: [00:02:24] Well, first of all, thank you for doing that work. I think we should all really join forces and look at the world through the lens of democracy, given what is at stake, not only in general, but especially in this big, big election year. And you're right, AI is a major disrupting factor, where it's hard to predict how deep or wide the impact of AI on elections specifically, but also on democracy more widely, is going to be. And I think that in and of itself should concern us. I'm sure people who are listening have read something about, you know, disinformation, foreign interference, and how our information ecosystem is poisoned by those who want to manipulate voices and voters for their own agenda, which could be political, but it could also really be commercial. It could be much more perverse than just trying to sway people to vote a certain way, just to sell stuff. It was possible, but more time-intensive, for, let's say, the Russian intelligence services to create social media accounts and to populate those accounts with, you know, various messages to suggest that there were tensions in American society, these kinds of polarizations that can be exploited. With artificial intelligence, and particularly generative artificial intelligence, the effort that it takes to create accounts, to create messages, videos, images that are credible and very difficult to distinguish from, you know, what a human being might actually say, has simply become a lot faster, easier, and cheaper. So it will also be a matter of, you know, who has access to these tools and who can use them efficiently for a political agenda. That is something we know exists, and these tools will probably make it more efficient for those who want to use disinformation in our election cycle. And that's just one example of how AI impacts elections.
Mila Atmos: [00:04:20] So if we're thinking about elections this year in '24, how dangerous is this really? Is this an existential crisis for elections in the US or democracy writ large, or is this something that can still be mitigated? Because I said in the intro that we still can. But what are your thoughts on this?
Marietje Schaake: [00:04:39] You know, I think one of the big challenges is that AI will never be the only factor, but it can exacerbate existing factors, and so can big tech governance. So let me give you one example that really speaks to me. Former President Trump, who has done a lot of anti-democratic things, continues to challenge the outcomes of the 2020 elections. So while he's running in 2024, one of his major narratives, even though there's zero evidence for it, is that the election was stolen. This is a narrative that is very prominent in his campaign. And at the same time, we see a tech platform like YouTube changing its policies and saying, we're not going to intervene in posts and uploads and videos on our platform that question the outcomes of the 2020 election. They've changed their policy. Other tech platforms, like Meta's Facebook and Instagram, and X, of course, formerly Twitter, have laid off a lot of the teams that were actually hired after 2016 and 2020 to be more mindful of what kinds of posts can disinform people, mislead people, manipulate people. These AI developments that are integrated into the social media models, integrated into political campaigns, integrated into the toolkits of those who want to manipulate our democratic process, maybe through foreign interference... they're coming at a moment when those investments to actually safeguard democracy have been diminished, while we don't know enough about how this new, rapidly developing technology will actually impact people's behavior in the voting process. So to some extent, it's a really big experiment that happens to take place in a year when an unprecedented number of people, around 2 billion worldwide, are going to the polls. And so it's a very unfortunate combination: we have this very new, potentially very impactful technology being thrown into our societies and our information ecosystems, and the very platforms that are responsible for curating our information ecosystem have laid off the teams, or changed the policies, that help safeguard democracy. So I'm not optimistic, but I also don't think we should see AI as the sole game changer. I mean, we have a US presidential candidate who lies about the outcomes of the previous elections and doesn't shy away from using anti-democratic rhetoric and policies. So, you know, that is arguably much more dangerous than what any technology can bring.
Mila Atmos: [00:07:09] Oh, that's totally fair. Yes, AI exacerbates the existing polarization, the existing difficulties on the ground, in any political sphere in any country. So how did we get here, when we think about tech companies? Because it seems to me that they are really in the driver's seat in many ways in normalizing the way in which we use this technology. To your point, they have laid off the people who are responsible for flagging content that is misleading or simply false, and they allow it to run amok on their platforms. So is it that we thought about AI and technology in the wrong way? Were we too naive? How is it that the CEOs are called into Congress to testify, but really nothing changes afterwards?
Marietje Schaake: [00:07:54] I think the question of AI is really also how this latest layer of technological development exacerbates some of the problems that we have with privatized governance of our information and democratic sphere. And it is really the result of years, if not decades, of public policy, particularly in the United States. And I say particularly in the United States because there are some unique characteristics of the United States that we should look at. One is, of course, the power of Silicon Valley companies. That's where some of the world's biggest tech companies reside, and it's in that jurisdiction that policy matters the most. And it's both Democratic and Republican administrations that have chosen to trust market forces more than intervening to put up some guardrails, to say, "look, it's great that there are all these disruptive companies. It's exciting that they're growing at the speed of light, but they still have to operate within some parameters that we want to set for the sake of preventing discrimination, making sure that democracy is preserved," and so on. And so the US in many ways is behind, particularly when you compare it to the EU, for example, when it comes to reckoning with the actual impact that these technologies have had. Because you're absolutely right: the initial promise of the digital revolution was also one of a democratic revolution. I remember very well when I was just elected into the European Parliament; it was 2009. We had just seen Senator Obama's successful presidential campaign, where he used social media in clever ways and engaged people with politics who might otherwise be disenfranchised. And I really only heard hope around what technology would bring: how it would go as a wave around the world, that people with mobile phones in their hands and connections to social media platforms would be empowered against the repressive governments that they may have to live under, you know, and that their ability to connect, mobilize, and share human rights abuses that they could document, for example, would basically firepower their democratic aspirations. And that hasn't proven to be true. In fact, for the past 15 consecutive years, the world has seen democratic backsliding and flat-out threats to democracy, for a variety of reasons. And so that initial democratizing promise of what technology could bring as a unique factor did not materialize. And at the same time, again, a series of U.S. administrations, politicians of both political parties, have not managed to really change the reliance on market forces, the enormous concentration of power and data in the hands of tech companies in the US, and that has really formed the foundation for the AI development that we see now. Because without these huge data sets, without taking the liberty to scrape the internet, I mean, pretty serious endeavors, the large language models that AI is built off of could not have been built. So I would like to think of what has happened over the past 20 years as the stepping stones that are now leading to even further concentration of power in the hands of tech companies, and the unique ability of only a handful of companies to develop generative AI products. So we cannot look at the current developments in isolation. We really have to appreciate that history.
Mila Atmos: [00:11:23] Yeah, I hadn't thought about it that way. Well, I think this is a perfect segue, though, to my next question. Of course, we have heard repeated calls for "governance," "oversight," "regulation," and I'm using air quotes now. And there's an open question about what governance and regulation really mean in practice and how they're enforceable. But for sure, governments are not sitting idly by. I'm curious about your thoughts on the most recent developments, since you have been following this for many years. In December, the EU AI Act was passed, which aims to set a global regulatory standard. There was also the Singapore Conference on Artificial Intelligence and the Bletchley Park Declaration after the UK AI Safety Summit. And for background, and I mentioned this earlier, when you were serving in the European Parliament, you were very involved in technology policies. You actually helped launch the net neutrality policies now in force across Europe and advanced several initiatives to promote digital freedoms, including stricter export controls for cyber surveillance technology and new policies for e-commerce and cybersecurity. So with all of this experience under your belt, and in your current role on the UN AI Advisory Body, tell us what the most recent policies and efforts get right and what you think is missing.
Marietje Schaake: [00:12:42] Thank you. It's a big question, so let me try to unpack it. There is indeed a lot of policy activity around AI, much more than what I've seen around previous waves of technology. And what is unique about the past year is that a lot of governments and multilateral institutions are recognizing that there's risk involved with the use of AI, and that is in sharp contrast with the sheer excitement that I was sketching a little bit before, which we saw around the wider spread of mobile phones and social media platforms. The expectations were sky high, and there was not really a focus on the risks of what it would do for surveillance or data collection or disinformation, for that matter. And so I think it's a good development that there is a more realistic view that technologies come with opportunities and with risks, and that, of course, mitigating risks requires more effort than just letting the opportunities blossom. Now, I see wide variation in two ways. On the one hand, governments are primarily political vehicles, and so it should not be surprising that governments are trying to regulate AI in the mirror image of the values that they stand for. So, for example, China has been very active in regulating AI, but it has done so in a way that would never suit the European Union or the United States, because it is so top-down, so focused on control, and really focused on instrumentalizing this technology, as well as previous technologies, for control, for censorship, and for state purposes. In the US, essentially, we have seen a strong focus on existing laws and how they might regulate AI. I don't think that's necessarily President Biden's and his administration's biggest idea, but it is the maximum political space that they have, given the fact that Congress is probably not going to get to any majorities with regard to new legislation. So his best bet is to focus on existing law as well as executive orders, of which we've also seen a couple, because, of course, he's the pen holder and he can basically, by decree, say we're going to intervene. And then there's the EU, which has really been the only major democratic bloc, or, you know, set of countries in this case, to adopt binding regulation. And that's another difference that I want to point to. The various initiatives that you mentioned: I was there in Singapore for the conference, I was there in the UK for the AI Safety Summit, and indeed plans are being put on the table. The G7 has put out a code of conduct. There is a lot of activity, but hardly any of these efforts are binding, except for the EU's AI Act, which is a law. And I think that's a big difference that people have to appreciate, because I think one of the big lessons learned from the past 20 years is that voluntary commitments often are not powerful or enforceable enough. Companies say, "yeah, you know, let's talk about how we need to solve this problem," but then it gives them so much space to continue to focus on their profit models that actually those very public interest issues are under-addressed, and violations of principles are not held to account sufficiently. So we will have to see. There is now a political handshake on the AI Act in Europe, but it has not entered into force yet. That's always a bit confusing. Press releases go out, this law has been agreed, and people think, "okay, great, tomorrow will be a bright new day." But it often takes one, two, or even three years, can you imagine, to transition: to get the enforcement mechanisms in place, to adopt this EU law at the member state level, to have this sort of federal system do its work between the EU level and the national level. So any provisions that people might hope can be helpful in the context of election campaigns will basically come after this major election year. We have an idea of the contours of this law, but we don't have an idea of its implementation yet. That always leads to the question: can regulation keep up with technological development? And I think that's just an ongoing challenge that we will have to navigate, whether we govern for, you know, the one or the other political idea.
Mila Atmos: [00:17:00] Of course, you know, I think this is one of the problems with democracy, or with representative democracies: you think a law is passed and the next day it's all good. And I think people are so disappointed when they figure out that it doesn't take effect for some time. They think that they already did, quote, the right thing, you know, that they got it done, but it's taking years. But relatedly, you just mentioned that the technology is continuing to change. We don't really understand, of course, from outside of these tech companies, how the inner workings of the algorithms are written. So the challenge, which you also just pointed out, is to write laws that are flexible enough for evolving technologies. From what you've seen, do you think that they are anticipating that enough? Or do you have an idea about how that can be done right?
Marietje Schaake: [00:17:55] It is indeed a big challenge. And I think in the EU AI Act, there's at least an effort to anticipate the unknown. And that is done by appointing what is called an AI Board: every member state delegates an expert to this EU-wide board, which will look at research and market developments to see what new AI applications emerge. And the foundation of the AI Act is to look at the risk that emerges from the use of AI applications, so it doesn't focus as much on the technology, although a little bit, and we can talk about that more. But the real backbone of the law is to say: we need to look at what the impact is when it's used. So I'll give you some examples. If AI is used to scan CVs for job applications, which happens all the time, then your or my access to employment can be at stake when this algorithm does that scanning in a way that's unjust or discriminatory. And so there's risk associated with that. There's also risk associated with the use of real-time biometric information collection, or facial recognition, or even, you know, the kinds of systems that we've seen over in China that are called social credit systems, which are very far-reaching when it comes to using AI for gathering data and then keeping track of people's every behavior. And so the EU has kind of made a spectrum, where it says there are some AI applications, like the social credit scoring, that we're not going to allow; there are high-risk applications, there are sort of medium-risk applications, and there are low-risk applications. And with every level of risk, there are mitigating measures proposed. So on the one hand, it tries to lay a foundation of risk mitigation now, and with that AI Board, it tries to anticipate the future. What I think will be a space to watch is how the law will be enforced and how successful that will be, because we have seen with past regulations, for example the General Data Protection Regulation, which is another big landmark law that also has impact on tech companies, that enforcement is not as powerful as it should be. And so I think one answer to your question of how governments can keep up with the rapid development of new technologies is to emphasize the enforcement side more: to give watchdogs, regulators, oversight bodies more resources, in the form of skills and mandates, to really probe where, for example, discrimination takes place in the context of algorithmic settings, or how antitrust rules, which are, you know, a century old, might be at stake in new technological contexts. So instead of looking at a new law for every new technology, I think a lot can be solved with existing laws. There may still be some loopholes that then have to be filled, but it can only be efficiently done when the enforcement matters, when sanctions bite, and when companies cannot simply pay, you know, a fine that sounds high for the average newspaper reader but that is actually a manageable part of the cost of doing business. So I would say: more emphasis on enforcement, more anticipation of what might come, in the form of being able to include new technologies in existing laws. And I do think everything can be tackled as long as there is the political will.
Mila Atmos: [00:21:32] We're going to take a quick break to share about a new podcast that I think you'll enjoy called Kennedy Dynasty. When we come back, Marietje will discuss what the opportunity is with AI, and what you can be asking political candidates on the campaign trail this year.
Allison Holland: [00:21:50] I'm Allison Holland, host of the Kennedy Dynasty podcast. Equipped with a microphone and a longtime fascination with the Kennedy family, I am joined by an incredible cast of experts, friends, and guests to take you on a fun, relaxed, yet informative journey through history and pop culture. From book references to fashion to philanthropy to our modern expectations of the presidency itself, you'll see that there is so much more to Kennedy than just JFK or conspiracy theories. Join me for the Kennedy Dynasty podcast.
Mila Atmos: [00:22:20] And now let's return to my conversation with Marietje Schaake. I have a side question here about some of the enforcement, because I know that one of the big things that's happening in the US is that people think we should use antitrust laws to break up, for example, Amazon. What are your thoughts on this?
Marietje Schaake: [00:22:38] So I think antitrust has an enormously powerful foundation in that it's actually only a few principles, right? No monopolists, no abuse of market power, no price agreements, for example. So it's very powerful, and it has stood the test of time. What is challenging is that the processes, the cases, take a very long time. Companies also have the right to appeal a decision, and so sometimes these cases and their appeals drag out for five, six, seven years. And if you look at the speed of not only technological development but the market capture that a company can realize in the space of five, six, seven years, every day of pushing out the process, waiting until the final verdict happens, is sort of worth it. And then, often, antitrust cases are settled with a fine instead of, for example, saying this merger or acquisition should never have happened. When you think about a controversial one, it's WhatsApp and Facebook merging, and the questions about what happens to the data behind the scenes: is that also merged? And so I think antitrust in potential can do a lot, but it cannot solve everything. And there's also a trend now to think about antitrust measures not only as economic tools but also as tools to protect democracy. And while I think it can really help democracy to break the power of big companies, I think it's also a very indirect way of trying to say, "hey, the outsized power of tech companies has a big impact on our democratic societies. Let's do something."
Mila Atmos: [00:24:12] Yeah, it sounds like basically we need a combination of all of the above. We should enforce antitrust laws. We should also have new AI laws and make them work together, but make sure they're enforceable. You know, this conversation is making me think of Shoshana Zuboff and her warning that what big tech seeks is instrumentarian power, government by algorithm, you know, basically usurping the power of the government. But we know that it is possible to make AI work for the people. How do you think about where the opportunity is in all of this?
Marietje Schaake: [00:24:49] Well, it's great to reference Shoshana Zuboff, who has done groundbreaking work to really show how the very business models of Big Tech are so perverse, including for democracies. Another quote of hers that I love is that we need to "make sure technology lives in democracy's house" instead of the other way around. And one area that's really exciting is what AI can do to serve civic development and democratic participation. AI can help condense large volumes of information. Government records, for example, can be very bulky, very hard to search for one individual. If you want to find what the statehouse decided on health care in your local county dating back 30 years, it can be really helpful for AI to pull up that information, or to summarize, you know, lengthy proceedings. Imagine that you want to follow your city council's proceedings but you cannot just watch the live stream every Tuesday afternoon and evening; getting condensed summaries can be helpful. But I just think, with every AI application in the phase that we're in, there should just not be blind trust. And I think that's the kind of searching phase that everybody is in right now. A lot of people, whether it's journalists, civil society organizations, small businesses, or the students that I teach at Stanford, are excited that maybe AI can help them, you know, write a paper, or write an application letter for a law school. But there are real pitfalls to be mindful of. I heard from someone I know really well that they looked themselves up through ChatGPT, the generative AI tool that I think people are now familiar with, and out rolled his CV. It sounded great, but there were all these made-up things added to it, you know, awards won, or articles allegedly written that were never written at all. And it can really go wrong very quickly. There are known cases of lawyers who tried to use generative AI in court, where the application completely made up court cases. And so, you know, it can also very, very easily poison information in a way that's hard for the individual to detect. And so I would say, even with summarizing information, be very careful not to blindly trust whatever rolls out.
Mila Atmos: [00:27:12] Mhm. Yeah. These hallucinatory results, you know, things that are just not real. So...
Marietje Schaake: [00:27:18] You know what's funny? I think "hallucination" almost has an endearing part to it. It's kind of like, oh, you see something in the clouds, and maybe you're a little bit buzzy, you know, from one substance or the other. It's a friendly way of saying it lies. It spits out complete nonsense. And so maybe we should just call it that, because "hallucination" is kind of like, oh, it's just daydreaming a little bit, you know. But the effects are worse than that.
Mila Atmos: [00:27:42] Yes, I totally agree. We should just call them lies. Let's be real. So 2024, of course, is an election year here in the United States. This means that a lot of us will go to campaign events and have an opportunity to speak to politicians firsthand. So if we have an opportunity to meet with somebody who is going to be passing laws and writing bills, what should we be demanding from our electeds? And if we want future policy to be fruitful, how can we frame that debate from our perspective?
Marietje Schaake: [00:28:13] Well, that's a great question. And I really hope all listeners go to campaign events, you know, and ask those tough questions in person to the candidates that they're interested in. Because really, democracy stands or falls with participation, and it's a real risk that people lose interest, do other things, are entertained instead of engaged. So I think, first of all, voters should make clear that tech policy is a key priority for them, whether it's the impact on national security, on jobs, on civil liberties, or on governance at all. For a variety of people with a variety of political views, a lot of their key issues will be impacted by technology and AI, and it's important that their candidates have a vision of it and that they learn about it. Then, in terms of tangible laws, it is still quite a mystery to me why the US does not have a data protection law. I know President Biden is interested in it, but he doesn't have the support from Congress. So I think elections can really make a big difference there, in making sure it's back on the agenda, making sure that candidates are elected who support this. Because it really, again, impacts such a wide range of interactions between people and their government, and between people and companies. It's about consumer protection. It's about safeguarding people's personal information against companies that just scrape it off the internet and can use it for facial recognition tools that are sold to the police. I mean, there are a lot of ways in which your personal information is abused, or can be abused further in the future, and why there should be boundaries to what companies can do. And then on top of that, coming back to the first examples of the various ways in which technology impacts aspects of our society -- you know, children in their classrooms, patients in hospitals -- it's such a layer that sits over all aspects of our lives that I think people can find their own contexts to ask what matters most to them with regard to the impact of technology. So I know a lot of people are hopeful about what technology can bring to, for example, improve health care. But I am also aware that there are concerns: what happens to very, very personal data, what happens to our bodies if we start merging technology with medicine and have chips implanted, or tools available that can measure whether people with dementia are actually taking their medications, or that try to enhance neurological capabilities. We need to be critical about what the promises of companies are and what the pitfalls are for people. Because really, for too long, the promises of companies have been leading policymakers in how they behaved. And I think it's time that that equation is turned around, that the concerns of people become leading again in how they deal with technology companies.
Mila Atmos: [00:31:15] Yeah. Well said. I hadn't thought about, of course, that we have things installed in our bodies, or monitors that we are continuously using, and that this data is being shared widely in a way that we don't fully comprehend. And we definitely need a data protection law in the United States. So, one thing we don't discuss a lot is the real-life repercussions of increased AI use on the job market. A lot of people will lose their jobs, which -- I don't have to be the one to say -- will of course have repercussions on our democracy. What are your thoughts about that?
Marietje Schaake: [00:31:50] I'm so glad you asked, because at most of these conferences and international gatherings where the impact of AI is discussed, there is a focus on safety, there is a focus on security. And while I understand that these are very important things to safeguard, especially from a state's point of view, I miss a focus on the impact on jobs. And I've seen a wide variety of predictions: a study by Goldman Sachs that looks at the macroeconomic benefits, studies by McKinsey and other consulting companies that expect that jobs will be impacted but that, over time, the net benefits for the global economy will be X trillion. And what I miss in those studies is an appreciation for the fact that productivity and economic impact are not just about dollars. Jobs are more to people than just their income, even if income is incredibly important. They're also a sense of purpose, a sense of pride, a sense of community, a sense of personal development on the job. And so, after the AI Safety Summit in the United Kingdom, Elon Musk had an interview with UK Prime Minister Rishi Sunak -- and, you know, we can also talk about the merits of prime ministers doing interviews with tech tycoons -- and during that interview, Elon Musk quite casually said we need to get ready for the jobless society. It was this casual comment as part of a larger conversation, and I just don't think we can emphasize enough how disruptive that would be, and that even with a small percentage of job loss as a result of AI, the social, and thus political, implications will be enormous. So that's another thing that people should ask the candidates whose rallies they're going to attend ahead of the election this year, and particularly also the presidential candidates, because these are serious impacts that require major public policy efforts, including major resources. And I just don't see an equal distribution between the private profits, which are highly concentrated and significant, and those societal costs; unemployment would definitely be an expensive issue to deal with through welfare benefits, re-skilling, and major efforts on the part of public policy. So yes, unlike in the Netherlands, where I'm from, and where we just had elections in which AI was not a topic at all, I hope in the US it will be an election topic, even though there's competition from other important issues that also need to be discussed.
Mila Atmos: [00:34:22] Well, we recently had an interview with Theda Skocpol, who has been researching the fallout from the decimation of unions in de-industrialized steel cities. And those people are so disenchanted -- that would be an understatement -- with how politics works and how there have been all these empty promises to re-skill them and find them new jobs. It's been really, really hard for them. And it will be very, very hard when we have a, quote, jobless society, although I don't know how we can have that. You know, like you said, for humans it isn't only the income; you also need something to do, to have pride in, and a place to go, to have community. A job isn't simply something that gets you a paycheck.
Marietje Schaake: [00:35:06] And maybe in America more than elsewhere, because in America this whole notion of the American dream -- that if you work hard, you can buy a home and be independent of any state support -- has been very pervasive, and I think to the detriment of many Americans. You know, I was also shocked during Covid at how people who were sick kept working, because they basically earned their income day by day. Like you said, they're not unionized; they don't have many labor rights in the so-called gig economy. And, you know, platform economy companies like Uber and others contribute to this freelance existence, which has zero security and often doesn't generate enough income to be health insured. It's been interesting to compare in real time, even though the Covid pandemic was a terrible reason for it, the US model, without many securities for workers and people who need to bridge even a brief period of unemployment or illness, with Europe, where, of course, these social safety nets do exist and are much more robust. So yeah, I agree that we can learn a lot from history when it comes to the promise of the Industrial Revolution, right, which is huge in the United States. These macroeconomic narratives really overlook the lived experience of people who saw steel mills close, or who saw their jobs being outsourced to the other side of the world. And a lot of these same dynamics are happening in the tech sphere, where the promise is huge but there's a lack of consideration for individuals, for the risks that they're exposed to, for the threats to their jobs. And a lot of employers are not going to tell their employees that they're looking into AI as an opportunity to save costs. But that's, of course, what's happening right now. It's not just governments who are thinking about how to handle AI; boardrooms are excited too. They're like, "oh, maybe we can cut 10% of the workforce. Excellent. How are we going to do that?" We need scenarios, I think, in countries and states, to assess which sectors and which individuals are going to be most exposed. And we cannot wait for the fallout to really materialize before this kind of anticipatory work gets done.
Mila Atmos: [00:37:24] Indeed. So what are two things an everyday person can do to make AI work for democracy, live in the house of democracy?
Marietje Schaake: [00:37:34] It's a good question, because I actually worry about how disempowered individuals are vis-a-vis the big companies, so I'm not sure the onus is on the individual to make this work for them. I think the onus is on governments, and hopefully people can then demand more action from their governments. Because really, if you look at the power imbalance between individual internet or AI users and the companies that are producing these applications, there's no way of outsmarting their legal teams and their design teams and cracking the code. In fact, engineers within AI companies often don't even know what to expect from AI applications. So, you know, what are you and I going to do? But to think critically about AI is, I think, the main thing that people can do: to think carefully about the promises that are made about upsides. When the school starts introducing AI applications, is that working for your kids? When health care providers are suggesting that AI will be a solution, is that really the case? To really try to think about it in a multidimensional sense is important. And we also need researchers to be able to do that, and one of the big problems is that AI models are very opaque to researchers. So we lack this sort of public interest assessment, not only on the micro level but also on the societal level, which is something that I think needs to change.
Mila Atmos: [00:38:54] Hmm. This has been a sobering conversation. But again, we are a hopeful podcast. So, looking into the future, what makes you hopeful?
Marietje Schaake: [00:39:03] I'm always hopeful when I see my students at Stanford. They are not only bright and motivated, but they're also quite principled and aware that technology needs to take a healthy space within society. So I'm always hopeful when I see them, and when I see them reckoning with being at a university where computer science is very strong as a field of research. And of course, a lot of people also move on to work in the tech sector. But I do notice a change. I mean, people have seen the ease with which workers have been laid off at tech companies and how incredibly harsh that can be. It's more popular to work in government again for grads, even at Stanford, which, you know, I never thought I'd see the day. But that's exciting. And we're actually teaching students about policy with the aim of enthusing them to be a part of the next decade, which I think will impact the next century when it comes to legislation around tech and artificial intelligence. So the moment is really now, and that goes for students but also -- and that's maybe my second reason to be hopeful -- for policymakers. People appreciate what is at stake and are dealing with the big questions of what public policy should look like, and the risks are being zoomed in on. So we have lost, thankfully, some of the naivete that we saw 20 years ago, when the first wave of these consumer technologies was coming to market and people really saw them as democratizing forces. Painful lessons have been learned. Not great, but it is a reason for hope that this time around we can do a better job.
Mila Atmos: [00:40:38] Wow. Well, that is unexpected. That young people want to be in government and want to be policymakers. That is very, very hopeful. Thank you very much for joining us on Future Hindsight. It was really a pleasure to have you on the show.
Marietje Schaake: [00:40:51] Thanks. It was great to join you.
Mila Atmos: [00:40:53] Marietje Schaake is International Policy Director at the Cyber Policy Center at Stanford University and International Policy Fellow at Stanford's Institute for Human-Centered Artificial Intelligence.
Next week on Future Hindsight, we're joined by Anat Shenker-Osorio. She's been on the podcast before, and she's helped us figure out just how to be good messengers. She is the progressive messaging guru, and she'll help us pick apart messaging across the political spectrum and show us the way on how we can be effective communicators this election cycle.
Anat Shenker-Osorio: [00:41:33] And that is really the essential thing that we need to understand about 2024. It's about marshaling in people the feeling of defiance. "If you don't decide, they'll decide for you." The power, as you said so beautifully in the intro, is indeed in your hands again, not in Democrats' hands, not in picking a savior, but rather in recognizing that we are the ones who make our own future.
Mila Atmos: [00:42:02] That's next time on Future Hindsight.
And before I go, first of all, thanks for listening. You must really like the show. If you're still here, we have an ask of you. Could you rate us or leave a review on Apple Podcasts? It seems like a small thing, but it can make a huge difference for an independent show like ours. It's the main way other people can find out about the show. We really appreciate your help. Thank you.
This episode was produced by Zack Travis and me. Until next time, stay engaged.

The Democracy Group: [00:42:45] This podcast is part of The Democracy Group.