Feminist Tech Geopolitics with Dr Shobita Parthasarathy


Shobita Parthasarathy is a professor of public policy and director of the Science, Technology, and Public Policy Program at the University of Michigan. Her research focuses on the comparative and international politics and policy of science and technology. She is the author of numerous articles and two books: Building Genetic Medicine: Breast Cancer, Technology, and the Comparative Politics of Health Care (MIT Press, 2007; paperback 2012), which influenced the 2013 U.S. Supreme Court case challenging the patentability of human genes; and Patent Politics: Life Forms, Markets, and the Public Interest in the United States and Europe (University of Chicago Press, 2017), which won the Robert K. Merton Prize from the American Sociological Association.

Can you tell us about yourself, the work you do, and how your journey into this field began?

In high school, I had two interests: one in science and the other in politics, policy, and law. When I went to college, I originally thought that I would focus on the law and follow that track. For various reasons, though, and in part because the University of Chicago, where I went, requires a very strong liberal arts curriculum, I decided, initially, to study biology instead. I remained interested in a range of policy questions and kept trying to find ways to connect to them. So I took courses focused on health and environmental policy. While those areas were interesting, I felt they didn't quite offer what I wanted.

At that time, the Human Genome Project was coming to a close. I was very interested in how we were starting to gain all this information about people's genomes and becoming able to test for and predict certain diseases. Yet I thought we didn't really have good public policies to ensure that those technologies were used for social benefit. Blending social justice and science had not yet been fully conceptualized as a field, but I forged ahead. I gained some experience working in Washington, DC for a while, and then pursued a PhD that helped me really focus on the social, political, philosophical, and historical dimensions of science and technology.

My PhD dissertation became my first book. It focused on the development of genetic testing for breast cancer in the US and Britain. Doing that research, I learned a lot about how politics and values shape science and technology, and gained insights into the social, and sometimes disparate, impacts of technology. One site of impact worth calling out relates to the ability to patent human genes. In the wake of the Human Genome Project, lots of people were trying to patent human genes to make money. This had real impacts on the kind of research conducted and on the costs of access as well. All of this was done in the name of expanding innovation.

The seeming contradiction between these two states, where we want to expand access to innovation through such policies but in reality the policies produce harm, became one of many threads that I've continued to engage with in my work.

A lot of the research I've done along the way has made me quite critical and sceptical of innovation systems, by which I mean the institutions, policies, processes, and norms that shape how we think innovation must be done. At the heart of this is the assumption that innovation must centre economic benefit and impact, and that if we do that, we will produce societal benefit along the way – that is, either the economic impact will produce the social benefit, or the technologies themselves will do so. Through different projects, I've learned that it's a far more complicated story. 

What is that complicated story like?  

For a long time, I had wanted to do research in India, and a few years ago I finally began. I noticed that there was a lot of language about using technology for good or for equity in the Indian context. India is a particularly interesting place because it has, for a very long time, had the independence to think differently about the relationship between technology and society.

You can think about how Gandhi talked about the spinning wheel as a technology that could liberate the country from the colonizers. You could think about import substitution policies, or about India's frugal approach to the space program. All of those are attempts to implicitly or explicitly challenge the dominant approach to innovation that we use in the West. But in recent times, India has been investing heavily in what it calls inclusive innovation.

I became particularly interested in sanitation and hygiene, specifically menstrual health and hygiene, a space where a lot of this inclusive innovation was designed to address poverty and inequality. Along the way, I discovered that it tends to become a conduit for the more dominant innovation system to take over, and that attempts to foster grassroots innovation as a means of inclusive innovation fall away. Ultimately, what you have is a pretty traditional Western approach to innovation in the name of inclusion and equity, which ends up not serving the people it is designed to serve in a variety of ways. In some ways, it even ends up harming them. It is a depressing story. I am working on my next book, and I hope its ending will leave people with a few strategies for really thinking about whether and how technological innovation can serve populations better, or how we can design innovation systems to serve populations better.

One of the things we see now is that, with time, people are recognizing that this dominant innovation system is not serving them, and they're fighting against it. In many ways, it has led to increased mistrust and frustration. But the dynamics are different in the Indian context. On the one hand, there's the idea that technological innovation can be the avenue and route to equity, an idea we don't question because of the notion that we have extreme technological capability; on the other hand, you have marginalized populations across the country that are not actually being served. They may not yet have a significant voice in pushing back, but they will. And that in itself makes it necessary to think about these issues, even if we simply want to advocate for equity as a moral good.

On the face of it, the intersection between tech and social justice is a no-brainer. And yet, we find social justice perspectives excluded in the pursuit of tech advancement. What does your experience in the field tell you about why this is so often the case?

There are maybe two ways to think about it. One is that there are interests invested in a particular way of talking about innovation, and the other relates to how people are trained to talk about innovation. Those aren't entirely separate, but I'll go through both. The first relates to interests in how we talk about and think of technological innovation. By that, I mean market interests. Companies want us to think that innovation is the way to go, the answer to everything, because they stand to benefit from it, and this narrative also helps them avoid regulation (because innovation is treated as inherently good). They create the impression that they are doing socially, economically, and publicly crucial work, and ask to be left alone so they can innovate. This isn't just companies; you could say the same of innovators in the academy and the market, and even of governments.

At the same time, governments are often judged by their economic growth, specifically by their gross domestic product and how much it is increasing. One of the main ways that is evaluated is the extent of innovation, given that innovation produces economic benefits through investment and stimulates gross domestic product.

Countries are also evaluated by the number of patents their innovators hold, and universities are evaluated by the number of research grants they have, and all these things work together to reinforce the idea that we need to innovate in order to survive. For governments, there is also the fact that innovation is tied to national security and competitiveness. For example, the US and India are prioritizing semiconductor manufacturing and development to counter China. That is part of the narrative of urgency as well, along with the idea that the main values you want to maximize are economic benefit and growth, market expansion, scalability, and financial sustainability. Universities are also increasingly moving towards market orientation, having internalized the idea that a university's worth is determined by the number of innovative technologies it can foster and help push into the marketplace. In combination, all these factors make it difficult to break through with alternative goals, values, and ways of thinking about innovation.

At the same time, I think that scientists and technologists are taught that their work is somehow more important and more valuable than other kinds of work, and of course that they are also smarter. And so they don't learn about the limitations of their work, or how to analyze its real social, political, and environmental consequences, both good and bad.

And yet you hold onto hope for the alternative, right? What do you think might get us there?

I do! Gender is something that you and I both care a lot about, and there are lots of feminist theorists and gender studies scholars who talk about how the language of, for example, competitiveness as opposed to cooperation is itself extremely gendered. The idea of expansion is also gendered. These are the dominant frames by which we think about tech innovation, and they make it very difficult to pull out alternative narratives that might actually benefit people.

In addition to the university at the administrative level, I also think it's crucial to think about how students, engineers, and scientists are trained. I do a lot of work training engineers and scientists at the University of Michigan. The crucial point for me is that, as I said earlier, scientists and engineers have almost no exposure to thinking about public impact in any terms other than economic benefit and efficiency, technical excellence, and scientific discovery for its own sake. They simply are not given the cognitive frameworks to think about the public good in any other way.

Because of that, they don't know that it might be worthwhile to spend some time kicking the tires of an emerging technology to make sure it's really beneficial, to think through how they might design a technology so that it is more useful, to reflect on the values that shape the development process, and to identify how they can maximize socially beneficial values. None of these questions is covered in a traditional STEM curriculum. Unfortunately, for so many emerging STEM professionals and students, the humanities and social sciences, the fields that provide the critical lenses to do this kind of work and engage in such thinking, are completely dismissed at best and denigrated at worst as somehow inferior forms of knowledge that are not worthwhile and are a drain on education. Students may be required to take these courses, but they grumble about it in the process and don't think it's useful.

That is, unfortunately, too often cultivated by an environment and by mentors that focus only on economics and national security and not on the multifaceted impacts of science and technology. That's why I do a lot of work training scientists and engineers. To start thinking about innovation in a more complex way, it is crucial to start somewhere, and one of the places that I think is more effective than I once imagined is education.

What do you think we have lost as a result of this colonial capitalist approach to innovation and tech?

First and foremost, I think we have lost the potential for social solidarity. One of the things that initially interested me in inclusive innovation, the concept I'm studying in the Indian context, was in fact a recognition that the modern "innovation" or "knowledge" economy was exacerbating inequality around the world, and that the increase in inequality was leading to social and political unrest. This seems especially clear now. We are seeing some of the impacts the world over, with the rise of populism. Many people are looking at governments that have focused on innovation in this way and see that these institutions have not served them properly.

I would also say that we have lost the ability to think about how technology can actually help people. By making all sorts of assumptions about the extraordinary power of technology, we've limited our ability to enable technology to do a lot of the work that it claims to do – or that people claim it does or can do.  

With all of this underway, what role do you think the law and policy can play in restoring meaningful impact as an outcome of technological innovation? Is there anything else we should be focusing on besides law and policy?  

I think right now, given the political situation in different parts of the world, law and policy tend to feel a little hopeless. However, I tend to think across two categories.

The first category is what we might call research and innovation policy, which relates to the kinds of research that get funded and incentivized. For the last few years, in the United States in particular, the government has tried to provide incentives, however limited and problematic, to encourage researchers to think about these kinds of issues and about responsible technology development. The European Union, for example, has encouraged projects focused on responsible research and innovation for a while now. I have criticized those efforts; I sometimes call them equity washing, where you take what you used to call regular old technology development and call it "responsible." But at the very least, they signaled to researchers and innovators that they had to think about questions of responsibility. At the same time, major tech companies began to really try to incorporate this attention in ways different from what they were doing previously. That's incredibly important and useful. Such research and innovation processes can encourage researchers to reflect on the values that shape the technologies they are developing, to get trained in responsible innovation, and to engage citizens, especially those from marginalized communities, in conversations, encouraging them to participate in shaping the kinds of technologies that are developed. This can include developing new metrics for evaluating what constitutes a good technology, for example. Such research and innovation policies can actually foster greater attention to the chasm that has opened up between research and social benefit.

I've also done a lot of work on intellectual property and patents; thinking about when, how, and where we patent something can help us take steps to ensure that we're incentivizing certain kinds of innovation over others. History shows us that this has been done in various ways.

The second category concerns regulatory policies. In the earlier parts of the 20th century, infrastructure was developed the world over to ensure that medical and environmental projects were not too risky. These are all great infrastructures, but they're often limited. They don't take socioeconomic dimensions and equity questions into account. A lot of scholars, including myself, have thought about the use of impact assessments, say, equity or broader impact assessments, to address this gap.

At the end of the day, we want technologies that work. We want technologies that people want to purchase. We're moving rather quickly to a moment where it's becoming clear that AI is not the magic technology it has been sold as for the last couple of years. More and more people are turning away from AI because it is scary and is not doing what it promised. One way to ensure that you have technologies that work is to incorporate attention to questions of equity and the needs of marginalized populations. One example I can offer here is the pulse oximeter in the United States, a technology that does not work properly for people with darker skin tones. That is not something most physicians want; they want to treat their patients in a way that is effective. Even though we are at a moment where questions of social equity and justice are not taken seriously enough, I hope that questions that are actually about markets, accuracy, and effectiveness might break through. They may unearth other ways of talking about the importance of these issues.

Let's imagine something together, out loud: if technology were owned, developed, and deployed by people as communities rather than as capitalists, we wouldn't find ourselves in the heart of so many of the crises we're in right now. What does this statement evoke in you?

I do think that, at the very least, we wouldn't be in the current crises, because people might either be buying into technology more or even de-centring technology altogether.

One of the problems we see is that technology is too frequently treated not only as a one-size-fits-all solution, but also as a kind of complete solution. If we can return power to communities, we might be able to enable them to develop the kinds of solutions that work best for them. That might include technology in some ways, but it would retain a human dimension and enable communities to imagine solutions that have nothing to do with technology at all. It might decentre, and even reduce, the hype around technology, particularly emerging technologies.

The idea that people should be empowered is not new. I feel strongly about it because in many places around the world, people don't even know that they have power, that they can speak out, and that they can advocate for themselves. So at the very least, in some ways, it's not a technological project at all. It's more of a project of ensuring that people around the world know what their rights are, and whether and how they can exercise them, and that they have support in exercising them. 

One of the questions that emerges as I'm imagining this is: what is the relationship between the extreme decentralization I'm talking about and the benefits of globalization? We've become far more global, and that is actually wonderful in many respects. But unfortunately, it is fundamentally market oriented. This has created other sorts of pressures. It exacerbates, in odd ways, economic competitiveness and security-driven distrust of certain countries. That is not great. I wonder if we might think about translocal coalitions, where we could imagine civil societies connecting with each other in bottom-up ways. There are lots of opportunities for coalitions that are bottom-up and locally led, as opposed to the way globalization has looked for the last three decades.

One outgrowth, at the very least, of the hype and structure of the innovation system has been the concentration of wealth and political power in the hands of just a few people around the world. My hope would be that with this kind of radical reimagining, where you're really empowering local communities and these communities are, in turn, coalescing with one another, there would be mechanisms that make it very difficult to amass the kind of political and economic power whose real dangers we now see around us.

What might a decolonial feminist lens bring to our collective engagements with tech and innovation, both as people consuming it and potentially developing and deploying it?  

The language around tech deploys terms like master and slave, and expresses the desire to dominate countries using technology. We think about artificial general intelligence as a goal: something somehow all-seeing and all-knowing, producing its own outputs, essentially a god figure. That's what we are implicitly moving towards. There's not even benevolence engineered into these technologies. When we talk about individual technologists, we're often using language, implicitly or explicitly, of domination.

Sometimes it feels more fanciful than at other times, but this is a moment when things feel precarious, like they're falling apart. There's no better time to imagine than now, in my opinion. There is a framework from African feminists that talks about things like inclusivity and participation. I think humility and empathy are crucial values in these discussions, extending to the understanding that the most marginalized communities deserve a central voice in conversations around technology.

I have been experimenting with a group of colleagues on what community-based technology development could look like. For a few years, on behalf of the program I run, the Science, Technology, and Public Policy Program at the University of Michigan, I have been working with community partners, leaders of organizations that represent civil society, and marginalized communities in our local area to empower them to participate more in public and policy discourse around technology and other issues. We've produced policy briefs and educational resources, and we've engaged in policy and technical conversations. In the case of local governments, we have helped them understand AI more critically, so that they know, at the very least, the kinds of questions they need to ask as they're considering whether or not to purchase an AI technology.

I've also been thinking about what it might look like to do technology development with these communities, really centering their knowledge and our critical understandings of technology. I've been working with a computer scientist and a data scientist on this project, alongside community partners, which include groups that represent people with disabilities and formerly incarcerated people. We're working with them to think through the challenges they face, the real problems on the ground, and the solutions they envision. We explore where technology might help, and where technology might actually be the problem. We think in more critical ways about how technologists can work across disciplines, like the humanities and social sciences, and engage with community partners to think about solutions. What's interesting, though, is that communities are not talking about technologies as solutions, but rather as problems.

What if we were to imagine innovation processes that are open, where technology is developed in collaboration with communities so that it is really, actively working for the public good? Too frequently, in existing conversations, there is an assumption that the community does not know about technology and needs to be taught, but we don't talk enough about what the community knows and how we can learn from it. This is a big part of what I'm working towards, and what I think a decolonial feminist approach to technology development looks like.
