AI for Equality: Orly Lobel

October 20th, 2022

“AI can help us really achieve the goals that we've always had.”

Orly Lobel is the Warren Distinguished Professor of Law and the Director of the Center for Employment and Labor Law. Her latest book is The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future. We discuss reframing our public discourse around technology in order to proactively use it as a tool for equality. 

Lobel urges us to think about what our goals, social norms, and values are in a democratic society. Because we’re racing forward with integrating technology into our lives, we need a more balanced debate about how privacy ought to be offset by other values. In addition to talking about AI technology gone wrong, we should consider the comparative advantage of AI over a human decision maker, who has a lot of biases. 

Follow Orly on Twitter: 

https://twitter.com/orlylobel

Follow Mila on Twitter:

https://twitter.com/milaatmos 

Follow Future Hindsight on Instagram:

https://www.instagram.com/futurehindsightpod/

Love Future Hindsight? Take our Listener Survey!

http://survey.podtrac.com/start-survey.aspx?pubid=6tI0Zi1e78vq&ver=standard 

Sponsor

Thanks to Avast for supporting Future Hindsight! Go to Avast.com to learn more about Avast One now.

Want to support the show and get it early?

https://patreon.com/futurehindsight 

Credits:

Host: Mila Atmos 

Guest: Orly Lobel

Executive Producer: Mila Atmos

Producers: Zack Travis and Sara Burningham

Orly Lobel Transcript

    Mila Atmos: [00:00:00] Thanks to Avast for supporting Future Hindsight. With Avast One, you can confidently take control of your online world by helping you stay safe from viruses, phishing attacks, ransomware, hacking attempts, and other cybercrimes. Learn more about Avast One at Avast.com.

    Mila Atmos: [00:00:22] Welcome to Future Hindsight, a podcast that takes big ideas about civic life and democracy and turns them into action items for you and me. I'm Mila Atmos. When I was working on a trading desk in the nineties, I found out that my teammate was making double the pay that I was making. And when I complained, I was told that it was because I was fresh out of college and also because this other guy's wife was about to have their first baby. And I thought, "Well, that's unfair," but I didn't really know what to do about it, so I didn't do anything about it. And when I looked for a new job, though, I said in my job interview, I know that my teammate is making double what I'm making, and I know that I'm just as good as he is at my job. And I demanded that I get paid at least the same. And it worked. I got paid more in that new job because I had the data and I was able to use it to get a better deal, to get closer to pay equity, which let's face it, is a long way off. Women still only earn $0.82 for every dollar a man earns. And this is kind of the nub of the big idea at the core of today's show. What if we could use data enabled by technology and artificial intelligence to drive more equality? Inequality reaches far beyond the gender pay gap. It's built into so many of the systems governing our daily lives. So how can we engineer those systems toward equality? Orly Lobel sets out a vision for just that in her new book, The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future. It's out now, published by Public Affairs. Orly Lobel is the Warren Distinguished Professor of Law, the director of the Center for Employment and Labor Law, and founding member of the Center for Intellectual Property Law and Markets at the University of San Diego. Orly, welcome to Future Hindsight. Thank you for joining us.

    Orly Lobel: [00:02:27] Thank you, Mila. Great pleasure to be here.

    Mila Atmos: [00:02:30] I think people think of artificial intelligence even now in a kind of artificial-brain conception, but A.I. is actually braided into our daily lives. Can you talk about that -- highlight where it's central and we just might not be noticing it or thinking about it?

    Orly Lobel: [00:02:48] Absolutely. Artificial intelligence and automation in general -- digital technology -- is all around us, integrated into every aspect of our work lives, our home lives, and really our intimate relations, from the job market all the way to dating and to our family interactions. And, you know, some of it we notice and some of it is more obscure -- what kinds of decisions are made about us when we apply to jobs or when we are searching for data, searching for information on the web, or even applying for benefits with government.

    Mila Atmos: [00:03:36] Yeah, I think one of the things that was like a small example that was in your book is about having a Roomba, you know, which is like the most common robot for people to own. And people don't think that that is an A.I. device that maps your entire house. And it's sort of, you know, it's a low-level appliance. It's kind of how people think about it. And yet actually it's very sophisticated.

    Orly Lobel: [00:03:57] You know, the Roomba has, as you said, kind of a low level of sophistication. And also by design, you know, it's...

    Mila Atmos: [00:04:06] Not.

    Orly Lobel: [00:04:07] ...that Hollywood kind of robot that we envision, like a humanoid. And yet people have those ambivalent feelings about integrating a Roomba into their house. There's, on the one hand, maybe fear of what you just said about mapping the entire house -- and, you know, what is it collecting about us that it doesn't maybe really need for just vacuuming. But on the other hand, people get very attached to the Roomba. So there are even some stories about people who don't want to replace a Roomba that got smashed or needs fixing with a new one, but rather request the same one, because they feel it has become in some way a member of their family.

    Mila Atmos: [00:04:52] Yes, I thought that was an interesting anecdote in your book, but we're really interested in equality here on the show, and civic engagement. And one of my favorite interviews for this show -- in fact, we won an award for it, not to brag -- was with Shoshana Zuboff about surveillance capitalism. And her argument really is that A.I. is anti-democratic. And I think your argument kind of builds on that and highlights some specific aspects of the ways in which A.I. reinforces and amplifies inequality. You know, essentially we are programming our biases in and then the biases come back out. So what are some particularly egregious examples of this?

    Orly Lobel: [00:05:31] So a lot of examples repeat themselves. And that's part of the problem, actually. And there are algorithms that weren't trained well, that received partial data, be it for facial recognition or other kinds of biometric screening for health purposes -- so dermatology screening, you know, there's research. If you don't introduce the algorithm to enough data, you have a machine really that's making decisions based on the majority and not taking into account minority groups that have less access to digital technology, who have been studied less, who have just not been perceived as the prototypical person. So that would always include women. So much of our clinical trials and kind of market designs have been catered to men. And so you have a lot of this kind of bias in, bias out -- really, everywhere you look, there is that risk.

    Mila Atmos: [00:06:34] You know, when we think about A.I. and collecting data, so much of the conversation is us saying, no, we don't want to give people our data, we want to remain private, we want to protect who we are. But essentially your main argument is that it actually needs more data about us in order to make better choices and get to equality. Can you explain a little bit how this would work?

    Orly Lobel: [00:06:58] Yes, absolutely. So there's this real tension here. And you mentioned Shoshana Zuboff's book, Surveillance Capitalism. It's an important book, actually. We have the same publisher and we're in agreement in some ways. But in other ways, there's a real tension and conflict and normative choices that we have to make as a society about the kind of data that's protected versus collected. And I've been arguing that this very influential term of surveillance capitalism has now informed our cultural debates and our policymaking about how much risk there is, how many dangers and possible harms there are when there is invasion into our private personal spaces, our information, our sensitive data. And what I have seen in my research and the kind of tech policy that I do is that there's been a cutting off of a lot of potential for creating better data to train the algorithms, more information to use with the help of A.I., to really redistribute a lot of the resources that we have in a society. And so in the book, I give a lot of examples of how there are enclaves or data deserts or people at the edge of data who actually turn out to be the more vulnerable people in our society. And definitely if you take a more global view of this, it's not just about the United States, but it's really between developed and developing countries. There's a huge problem that so many people around the world really don't have access to the Internet, to smartphones, and we're not collecting enough data about them. So in a lot of ways, I think that just having that lens of surveillance capitalism versus understanding that we could have surveillance liberalism -- or kind of shifting the Big Brother idea to how about a very friendly, benign and helpful, you know, sister or cousin that makes the right decisions that actually correct for centuries of discrimination and exclusion. I think that we really are at a point where we need to have a more balanced debate about how privacy has to be offset by other values and goals that we have in our society. And actually, I loved, Mila, your first story about the gender pay gap. It's also true about racial pay gaps -- we're still in this reality where, having had pay equity laws on the books for many, many years, we still have significant pay gaps. And so that's actually been something that I've researched quite a bit. I write about it in the book, and I've been also involved in private industry in tackling it through the help of technology.

    Mila Atmos: [00:10:03] Yeah, so let's talk about that. How would A.I. close the gender pay gap?

    Orly Lobel: [00:10:09] Yes. So, you know, again, it's not A.I. alone; it's A.I. with policy and with concerted efforts and willingness by companies to put in the work. But there is certainly this really frustrating evidence about the stagnating pay gap. And exactly like you described it, it's not just in different positions. So definitely we have occupational segregation, which is another thing that actually automation can help with. And it's a different conversation about what kinds of jobs are going to be replaced and how our human capital can be valued. You know, robots are taking on more of our work. But even when you take, just like your story, two identically qualified people in the same position -- or perhaps the woman is actually doing more, as we tend to see in a lot of organizations -- we have these clear pay discrimination realities that are very widespread. And so this has been very frustrating to me, as somebody who teaches employment law. I direct the Center for Employment and Labor Policy, and I do a lot of consulting work, both for governments and for private industry, about it, and expert witness work in discrimination cases. And it's just really frustrating how much human bias we have in pricing. It's so tough for people to really understand how they are making decisions about what they're going to pay their good workers, how they're going to incentivize them, how they're going to reward great performance. It turns out we're really bad at it. And in The Equality Machine, one of the most important messages that I show throughout the book is that it's not enough to talk about an algorithm gone wrong or an A.I. having the risk of bias. What we need to be talking about is the comparative advantage, because we have to recognize and always remind ourselves that the status quo, the human decision maker, has a lot of biases, and the questions are who fares better and what has a better improvement trajectory. What can we detect and how can we make it de-biased? And with humans, it's very, very difficult. So on the salary questions in my work, I show that, again, what we need is more information. The big problem is that people don't know that they're underpaid. Women don't know their worth, and there are questions about mobility and how much you hop across different jobs, which is kind of exactly like you described. Again, your story is exactly describing the reality: you can't really fight the organization when you don't have a lot of outside options, and you don't get retention deals and renegotiations if you don't have outside recruitment. And so that's one way that now, with digital technology, we have access to so many more job offerings or job offers, job ads, knowledge about where your skills can be best applied. So intermediary platforms like LinkedIn and many others, and job ads, are really, again, democratizing our access -- you know, replacing the word of mouth. They are using A.I. to kind of figure out who they need to send ads to. And again, I know that there's been a lot of fear about these automated ads that have biases in them, but they are really driving the move away from the very narrow job market that we used to see, where it was like an all-boys club. People would just bring their friends and their close-knit family and extended community members. And now we have a global job market. We have, again through technology, the help of remote work. So that's one huge driver on just getting the labor market, the job market, more competitive and allowing people to kind of find out their worth just through search and knowing more about what their opportunities are. But even more than that, with these intermediaries, there's, like, an app that I describe that's called Know Your Worth, where it's really crowdsourcing all the information that people share on the web about what their salaries are and what's comparable. And it's really kind of reversing these years of asymmetric information, where employers know all the pay scales and employees don't. And then one step forward... Actually, I'm on the board of Pay Equity Alliance with major industry leaders from Nerdwallet to Nordstrom and many others. Business is really interested in doing the right thing on pay equity. And because of our limited computation abilities as humans and because of our unconscious biases and just kind of the difficulty of seeing everything that's happening, even when there's a real interest, you really need the software for it. And so there are great leaps in the software that is available now for companies to implement and to suddenly see these gaps that were hidden -- to kind of go beyond job formalities, titles, and look at the work products, how people are evaluated, and compare across many positions whether there are gender pay gaps and how to correct them.

    Mila Atmos: [00:15:45] Yeah. You know, I think the big takeaway for me when I read your book is how important it is to have transparency, and also to have the data on the performance and what people are doing with these jobs. Because like you said, there was so much information asymmetry. And I would argue there's also information asymmetry from the employer, because I think a lot of them just make these, like, gut decisions. You know, I like this person and I'm going to give this person a raise, without actually evaluating whether that person is a better performer than the person sitting next door, you know, doing the same job.

    Mila Atmos: [00:16:23] We are going to take a short break to thank our sponsors. And when we come back, Orly questions the knee-jerk negative coverage of technology and A.I. and outlines the ways we can repurpose technological tools to protect the vulnerable. But first, thanks to Avast for supporting Future Hindsight. Avast has been a global leader in cyber protection for more than 30 years, is trusted by over 435 million users, and prevents over 1.5 billion attacks every month. Avast empowers you with digital safety and privacy, no matter who you are, where you are, or how you connect. Enjoy the opportunities that come with being connected, on your terms. Avast's new all-in-one solution, Avast One, helps you take control of your safety and privacy online through a range of features. It's their best protection yet, giving you everything you need to take control of your safety and privacy online, and is accessible through a single easy-to-use interface. Privacy features keep your identity and actions hidden. Security solutions stop malware, phishing, and virus attacks. Performance products clean up and speed up your devices, and VPN allows you to connect safely and securely to public wi-fi and conduct your business wherever you want without the fear of cybercrime. And their award-winning antivirus stops viruses and malware from harming your devices. I'm a fan of their data breach monitoring. It enables you to find out if your online accounts have been compromised and whether you need to change your passwords. Thank you, Avast, for supporting Future Hindsight. Confidently take control of your online world with Avast One. It helps you stay safe from viruses, phishing attacks, ransomware, hacking attempts, and other cybercrimes. Learn more about Avast One at Avast.com. And now let's return to my conversation with Orly Lobel.

    Mila Atmos: [00:18:27] The other thing that I thought was also really interesting was that you stressed over and over again that technology is a tool. And I think so much of how we perceive technology when we think about surveillance capitalism is that we think about it as, like this... Almost another being that has its own life and its own emotions and its own motives, which is just not true. And so when you think about making the technology work for us, what is the first thing that comes to your mind when we think about the equality machine?

    Orly Lobel: [00:19:03] Exactly like you describe it. We need to think about what our goals are, what our social norms are, what our values are in a democratic society. And we need to use technology as a tool to help us fulfill those goals. And with automation, with digital technology, with algorithms making decisions, we have this great thing of having like a digital paper trail, where we can have audits and we can have checks on the outputs of what we're using. So there are all these examples of A.I. gone wrong right now that are really kind of shaping our view of where we're at. So one of them is the facial recognition gone wrong. Another one is the Amazon résumé parsing tool that they were building. And it turned out that because the algorithm was looking at all the successes in the past, it really prioritized men who played lacrosse, or specific male names. But what is not really talked about when this example comes up again and again in the media, and kind of the coverage of why we should fear automating job applications and kind of the hiring process, is that this was never actually put into use, because Amazon actually saw that this algorithm is not doing the right thing -- because, exactly like you say, it's a tool. It threw away this tool and it's building a better tool. And there are many, many better tools out there these days. And I really see how they're getting much less coverage by the media. If we move from jobs to, let's say, health, there's been also coverage of, well, there's been some automation of radiology screening, but maybe two human radiologists do better than this radiologist that is an A.I. Well, you know, when you look at actually the improvements that can be made, and more data, as we discussed, that makes the A.I. better, we actually get stronger and stronger. And we have to update. We have to forget the failed one that really is not relevant anymore. So that's really kind of the bigger lens and framework. We're racing forward with integrating technology into our lives. So if we don't ask the right questions and we don't try to find the best practices, then we're going to be in a reality where, you know, the metaverse is coming and where we all have avatars and we all have digital personal assistants in our homes and we have smart refrigerators and whatever it will be -- you know, autonomous vehicles -- we will have all of these. But we wouldn't have asked and directed the technology in ways that create more safety for people who are more vulnerable. The other thing is that you ask, how do we use this tool? Again, we have to be frank about how there are going to be, just like there always were, some trade-offs. So we talked about privacy versus getting full information and equality, but there are other trade-offs. We might want more security and safety versus more speech that's kind of unmoderated. You know, public health: the A.I. won't solve those kinds of questions, like do we remove false information in order to create more public health during a global pandemic? That's something where, as a society, we will always have to make these decisions between trade-offs. But the A.I. can help us really achieve the goals that we've always had.

    Mila Atmos: [00:22:45] Right. Well, speaking of trade-offs, I was kind of surprised to read that you see data collection and facial recognition as a public good. Why is that?

    Orly Lobel: [00:22:55] The thing that we need to understand is that when we have more information, we can solve many more problems around the world. With facial recognition, for example, there's a lot of fear with using biometrics -- like, what do people know about us? What does the government know about us? But the question that we need to ask is the more substantive question of what our democracies are like, what we are using the information that we have for. And so, you know, again, it won't solve this, but we have to also understand beyond the possible harms, and we have to articulate them. We can't have these overarching, unarticulated, possible harms of, like, what can somebody do because they know my face. There are many medical uses right now for facial recognition, for helping people in remote locations around the world that don't have access to regular medicine, to doctors. There are amazing scientists -- like, I was just at Tel Aviv University, and there's this scientist that's developing early detection and treatment for people just through facial recognition technology that will help parents kind of know about propensities to diseases and help provide for their kids. That's enormously democratizing, that kind of remote health. With law enforcement, I think the thing that people fear most with facial recognition is, like, you know, people will know my face and there's going to be law enforcement. And law enforcement can go wrong, as we know. So, again, we need to articulate what we're afraid of. Are we afraid of the false positives? Are we afraid of inaccuracies? Or are we afraid of too much accuracy -- too much, you know, exactly correct detection of crime? And I think there are risks on both sides. But certainly right now -- and again, I don't think it's talked about enough in the media when we talk about facial recognition, and there are all these policies in the EU and around the country, like Illinois, and before Congress, to ban facial recognition in most uses -- it's not talked about that facial recognition is really a game changer in trafficking. So, really finding missing children who have been trafficked. Pairing facial recognition with A.I. that can re-image someone from when they were lost, like at the age of 8, to 4 years later as a child of 12 years old. So it's a hugely beneficial tool for these kinds of very important law enforcement purposes. So, you know, I think that thinking in these binary ways of banning versus, you know, a free-for-all, using it for evil -- it's just simply not the right conversation that we need to have. We need to think about it as a public good, as you said, and think about the positive uses rather than saying, oh, we have this technology, but we're not going to use it.

    Mila Atmos: [00:25:58] Right. Yeah, I think that was actually one of the most striking examples in your book, about facial recognition being able to recognize a child who was trafficked years before, whom a human couldn't recognize because she had grown older or she was wearing makeup, and it was, you know, not the same to the naked eye, but the A.I. could identify her. So, we're going to veer back into the Future Hindsight lane, which is about civic engagement and civil society. We talk to lots of citizen changemakers, as we call them, and I know they listen. So this is for them. How can activists use technology and A.I. to further their causes?

    Orly Lobel: [00:26:41] Well, first of all, what's so important for me, and this is why I wrote The Equality Machine, is to have skin in the game and be on the inside, if you will. So I think that just being on the outside and criticizing and raising the concerns and the fears that we've talked about is not really going to do the work. So there are a lot of positive nonprofits right now that are looking at A.I. for good. There are a lot of private industry departments that are about ethics in A.I., algorithmic accountability, algorithmic equality. And those positions need to be diverse. They need to be diverse both on kind of our identity composition, but also they need to be interdisciplinary. So activists with backgrounds that are not in computer science -- in social work, and sociology and psychology and behavioral economics, from really every field and place. We need to have these conversations and engagement and have a vision of how we use these tools, repurpose them for the goals that we have. And there are so many examples. Again, I don't think that they're covered enough. But if you're working in the area of environmental protection and climate change, if you're working on poverty alleviation and education and literacy, if you're working in the field of public health and access and accessibility and disability rights, or on workplace and labor organization, the tools are out there, and we need to really kind of grab them, be in the conversation, have skin in the game, and have also the kind of positive, constructive vision. That's really the primary goal of writing The Equality Machine: kind of shifting the conversation to the next steps and being constructive about it.

    Mila Atmos: [00:28:41] So what you're saying is really that we need to have humans actively shaping what we're putting in in order to get the results we want. But we also know that sometimes this technology is being abused -- misused, in fact, to terrorize other humans. Right? One of the examples that you had is that you could use your own security cameras at home to abuse your spouse. And so what's the line here? How can we actually do this successfully in a way that really tamps down one and elevates the other?

    Orly Lobel: [00:29:20] Right. And there's no magic formula. You know, we're... This is our reality, where technology has both sides of the coin: it can be used for positive change, and in the wrong hands it can be abused. The example that you described, with people finding themselves kind of trapped in their own homes with surveillance that's used by an abusive spouse -- there are lots of examples I describe in the book where actually, with the help of law enforcement and nonprofits, technology can actually predict better and help women and other people who are experiencing abuse to actually get out of these kinds of relationships. So again, it's a more balanced story. Competition -- so definitely competition in what's offered in the market, and kind of more consumer choice and more consumer input -- is key. So there's very important work that's being done right now by government, and part of that is the conversation about antitrust and competition policy and looking at big tech and whether we have enough access to build the tools, with nonprofits also using these tools, also having access to the data. But the most important thing for us as a society is to have a focus on digital literacy and to educate and to give access to people who have been traditionally, like, disempowered. So let's say in a reality of domestic abuse, where there's starting to be automation, but the person who's controlling the automation is the abuser. If we think about this seriously from an early age -- making people not afraid of A.I., but actually, I use that phrase, skin in the game -- you know, from a young age, teaching young girls and young boys to use technology, to code, or even if they're not in that field of coding, to know what to look for, to know what to ask for, and to demystify the whole kind of human-machine interaction. I think that we will see a lot of progress on that front. I mean, activists -- and again, these are your listeners, these wonderful activists who are changing the world -- they know how much technology has helped organize more than ever before. If you think about hashtags like the MeToo movement and like Black Lives Matter, you know that the scale of activism has just changed dramatically as we've become more comfortable using our digital connectivity. And so we need to kind of move forward with that and have this kind of cautiously optimistic stance, which I advocate in the book.

    Mila Atmos: [00:32:18] I have a question that circles back to the HR question and pay equity, because one of the things that you also mentioned in the book is that, you know, you could find out which people are potentially abusive at work. I was thinking about this just now with the abusive spouse, and how you could, you know, gather all this data on whether there are people who are toxic at work or who harass other people at work, and how gathering more data can potentially expose someone like that. But it could also inadvertently give you a false positive. So what is too much information and what is just right -- or is there such a thing? And how do you think about that?

    Orly Lobel: [00:32:56] Yeah, so with technology, you can move forward with fewer false positives when you have these digital trails. So with harassment at work, I describe a slew of startups, new apps that are serving employers, companies, and employees to really track and trace, and collect and keep a trail about what interactions look like. And then, you know, you will have to make some judgment calls. In the end, the kind of he-said, she-said, not believing the victim, not remembering, not seeing the patterns of repeat abusers -- all of this can become something of the past. So I actually just emailed, like 5 minutes before we started our podcast, Gretchen Carlson, who was of course the Fox newscaster who was very important in exposing the Fox sexual harassment #MeToo environment. But she and I also co-authored recently a Day One report for the Biden administration about NDAs and how they're serving to hide a lot of information about what's happening in corporate culture. And again, it's not just about sexual harassment. It's about discrimination at large, including pay inequities, but also it's about all sorts of improprieties -- you know, anything that would be subject to whistleblowing. And what technology is doing, what I describe with these new apps and these new ways of keeping digital records but keeping them protected -- of course, we need cybersecurity, and there are lots of tools for anonymizing and keeping sensitive information from being misused -- what it does really is it creates more confidence for people to speak up. It really kind of solves this game theory that we're in, where, you know, one person in an abusive environment will oftentimes be too afraid of being the one person who goes against the Goliath. And nobody wants to be the David that will be fired as a whistleblower. But when we have these third-party providers, like these apps that allow you to submit grievances in anonymized ways and to kind of have, like, a notary that is with you your whole career -- you know, you're kind of keeping these notebooks about what's happening -- I think that there's so much more potential to really expose the things that are wrong and prove that the false positives that you asked about are just false. So I think we actually need to fear the false positives less in this kind of environment.

    Mila Atmos: [00:35:50] Right. So as an everyday citizen, what are two things I could be doing to demand that the algorithms, that the technology be used for equality?

    Orly Lobel: [00:36:02] Yeah. So digital literacy, really feeling comfortable with what an algorithm is. It's just a formula that's trained by data. In the book, The Equality Machine, I really tried to simplify a lot of these terms that sometimes seem scary to everyday citizens, like what is deep learning and neural networks and machine learning. And it's really, you know, things that we can all understand, even if we don't understand exactly the inner workings or how to program them. So educating and having a balanced conversation -- again, not allowing sensationalist reporting about how we're all going to be tracked and traced and, you know, no more privacy and no more human decision making -- and being smart about differentiating between fact and fiction, between Hollywood depictions and what the realities are. And then understanding that even when there are failures, if we ask the right questions, there can be efforts to reform and to get better.

    Mila Atmos: [00:37:11] Excellent. So as we are closing in towards the end, looking into the future, what makes you hopeful?

    Orly Lobel: [00:37:19] My daughters are now getting older. And so I have one daughter who is an A.I. researcher at Stanford. She's a college student at Stanford doing symbolic systems and computer science and A.I. and neuroscience and economics. And the other one is studying cyber security and cyber operations and intelligence at USC. And then I have one in middle school. But this generation -- so my daughters, my students -- I think there's much less of a fear of technology in their lives and in their realities. That really makes me hopeful. I think that, for example, with autonomous vehicles, they get it. Like, they get that we will be at a time where they will be safer than human drivers. And I see a lot of people, and kind of my colleagues and reporters, that are terrified of this and reporting about an accident that happened with Waymo or Uber and kind of having a conclusion from that that we're not, you know, we're not going to get there. We're not going to have safe autonomous vehicles. I think there is a lot of kind of positive energy. And the generation that we're training now, that I'm teaching, that I see, are doing amazing things and are doing the cause lawyering or public organizing, or the for-profits. So I see a lot of things to be hopeful about. And now we just need to really tackle the very wicked problems that we have in our society. And our work is cut out for us.

    Mila Atmos: [00:39:06] It is. It is. But it is hopeful that young people are jumping in and really putting their skin in the game, as you have mentioned earlier. And I think you're right, it bodes well that the more young people are in it to form the future of this technology, the better. Well, thank you very much for being on Future Hindsight. It was really a pleasure to have you on.

    Orly Lobel: [00:39:27] It was my pleasure. Thank you, Mila, it was great.

    Mila Atmos: [00:39:31] Orly Lobel is the Warren Distinguished Professor of Law, the Director of the Center for Employment and Labor Law, and founding member of the Center for Intellectual Property Law and Markets at the University of San Diego.

    Mila Atmos: [00:39:50] Next week on Future Hindsight, an energizing and inspiring conversation ahead of the midterms: I'll be joined by Amanda Brown Lierman. She's the Executive Director at Supermajority and Supermajority Education Fund, which build women's political power through efforts to inform, train, and organize women across age, race, and background.

    Amanda Brown Lierman: [00:40:11] Fundamentally, women believe that our lives should be safe, our bodies should be respected, our work should be valued, our families should be supported, and that our government should represent us. And those very basic value propositions are what women hold hope for, and what women are fighting for.

    Mila Atmos: [00:40:30] Amanda is full of ideas and encouragement for getting out the vote and engaging in your community, as we enter the final days of incredibly consequential elections for state houses, state attorneys general, congressional elections, gubernatorial elections, school boards, the list goes on. It's a lot. You won't want to miss it. That's next time on Future Hindsight.

    Mila Atmos: [00:40:54] We're also active on Twitter and would love to engage with you all there. You can follow me @MilaAtmos -- that's one word, MilaAtmos -- or follow the pod @futur_hindsight. This episode was produced by Zack Travis and Sara Burningham. Until next time, stay engaged.

    The Democracy Group: [00:41:24] This podcast is part of The Democracy Group.
