Feminist Tech Geopolitics with Theodora Skeadas
Theodora Skeadas is a public policy professional with 13 years of experience at the intersection of technology, society, and safety. She works as a Community Policy Manager at DoorDash, where she helps build trust and safety policies to make DoorDash a safe and trusted marketplace. She serves as the Chief of Staff at Humane Intelligence, and contributes toward the development of hands-on, measurable methods of real-time assessments of societal impact of AI models. She has worked as an independent consultant with non-profits, governments, and companies on issues including AI governance, tech-facilitated gender-based violence, government efforts to combat disinformation, information integrity, journalist safety, fraud, election integrity, and AI philanthropy. Theodora has also served on Twitter's global Public Policy team, where she managed the day-to-day operations for the Global Trust and Safety Council, a research hub within the Public Policy team, and a trusted flaggers program for human rights defenders. She also supported Twitter's global civic integrity, transparency, and crisis response efforts.
Theodora Skeadas
Can you start by telling me a bit about yourself and the work you do, and how you got into all of this?
I am Theodora Skeadas and I am originally from New York. In college, I studied philosophy and government, and then moved abroad for a few years. I worked in the Middle East and North Africa region, and while I was there, I observed some really powerful world events – I lived there during the height of the Arab Spring, and saw how powerful social media companies could be in transforming the political discourse. I was also paying attention to how social media played a role in shaping the kinds of issues that evolved during this time. One of these issues that began to garner attention was the online violence against women, specifically women in politics and public life. I noticed that women who were journalists, elected officials, political candidates, and activists were dealing with online violence, and alongside this, I noticed campaigns of mis- and disinformation, and platform manipulation. I ultimately transitioned into working in national security, where I paid attention to these very issues for the US Government. Following that, I began working at Twitter, where I had the pleasure and privilege of working with a whole bunch of civil society groups all over the world. These groups addressed tech-facilitated gender-based violence, among other things. When I left Twitter, I worked directly with civil society groups directly, such as the National Democratic Institute. As part of this engagement, I helped consolidate a tracker with recommendations from civil society on addressing tech-facilitated gender-based violence. The key recommendations centred on transparency, policy interventions, and customization where women could curate safer experiences for themselves through different features. We then wrote a piece about this for Tech Policy Press, where we spoke with organizations, mostly platforms from all over the world. Later, I helped put together a conference in Kenya on gender disinformation, where we invited about 65 people from civil society and government, and the recommendations were aimed at government actors to address tech-facilitated gender-based violence (TFGBV) through legislation, regulation, enabling actions to take back the tech, active efforts to address sociocultural barriers, and promoting democratic renewal. Following all of that, I joined a non-profit called Humane Intelligence, where we work on TFGBV through red-teaming.
The intersection between social justice and tech is pretty on the nose when it comes to TFGBV advocacy. And yet, we see very few organizations prioritizing this nexus while developing tech. What do your experiences in the field tell you about why this is the case?
A lot of social justice or activism drives the actions of people who work in trust and safety, but there's a power imbalance within these companies. Those of us who work in trust and safety within companies, as I did within Twitter, are certainly aligned with these principles. However, the reality is that these companies operate within the larger capitalistic model. Thus, there's only so much we can do to try to push for change. We also have to strike a balance between pushing for change internally and being mindful of the fact that we operate within these companies. In a lot of these companies, trust and safety are not as important as the engineers who drive the platform. These companies also have changing priorities, and with that, we’ve seen trust and safety being rolled back, like it has at Meta and X now.
More recently, you've been working on responsible AI. We're seeing everybody and their neighbour use this phrase without necessarily talking about what it is. What is responsible AI, after all?
In simple terms, the idea is that we all have responsibility to use tools thoughtfully and in an informed way, because the goal is to not perpetuate harm, whatever the tool may be – AI or anything else. The goal for those working on Responsible AI specifically is to push knowledge on the issue so that everyone working on it can operate in more informed capacities. It’s like getting into a car, which is a powerful technology that gets us from point A to B, but you shouldn’t ride in a car just because it is a superior technology, you need to ensure it is safe, and that it shouldn’t operate without the safeguards like a seatbelt that would ensure that the technology doesn’t cause harm. AI is not a one tool that solves all problems, and if we treat it that way and use it irresponsibly, it can perpetuate harmful stereotypes and result in harmful action. It can lower barriers to access for malicious actors, and has tremendous potential to cause harm.
At Human Intelligence, we do red teaming because we think that understanding the limitations of AI can help us operate in a more informed world. Red teaming is the idea that you are testing AI systems, or any system really, for vulnerability. It proceeds on the idea that no system is perfect, and you want to make sure that you're understanding the limitations of the tool beforehand, so that it can be strengthened. It is like stress-testing AI by pushing it to see where it can fail or cause harm. Red Teamers trick AI into revealing its blind spots, and by catching problems early on – be it bias, misinformation, or unsafe behaviour – we can mitigate the harm even before it emerges. It boosts trustworthiness.
If technology was developed and deployed by communities, rather than capitalists, we wouldn’t find ourselves in this gridlock of crises that we’re in, right now. Do you agree? If you do, should we be dismantling the way tech is being produced at the moment?
Yeah, definitely. I think it means more autonomy and greater control over the technology. The more a technology is responsive to community needs, the better it is. For example, if we created technology that would allow people to customize the tools in ways that, say, give them more privacy, they have agency over that piece of technology and what it serves them. For all you know, maybe they don't want recommender algorithms at all, and maybe they want chronological algorithms. Maybe they don’t want sensational media, and instead want media that offers them particular kinds of information. I think if communities developed and deployed technology, there may be a lower bar for harmful content than is typically the case now. People are aware of the issues that are problematic in their communities, but companies producing these technologies have high-level standards and don’t consider the same issues as problematic. Policies may not be curated to suit the needs of local communities, and their needs or thresholds.
I was recently on a panel with a woman from Sri Lanka, and she was talking about how a photo of a younger woman holding hands with a younger man in a park, with no one else was there, made the rounds. In the US, that would not be a problem, but in Sri Lanka, it did lead to reputational harm for the young people. They asked the social media companies to take it down, but they didn’t, because they didn't see it as problematic, because there's nothing explicitly sexual about it, and they only saw it as two people consensually holding hands in a park. But, the photo was taken and shared, both non-consensually and can and did have a damaging impact in their context. I think the idea is that with more community control, communities can dictate the terms of use in ways that fit their cultural norms.
I don't think that entirely dismantling the technology will necessarily improve our lives, because not only have we become reliant on these technologies, but they also help move our every day lives. For instance, my husband is from Morocco. We dated long distance for 5 years, and we used Facebook Messenger and Skype to communicate, because this was before we had smartphones. I couldn't just give him a call, so we had preset times every week. Those tools helped our five-year long-distance relationship to survive. I think these technologies break down barriers and enable dialogue. Taking these technologies down will not put is in a better place, but we should be realistic about the harms especially when they come to marginalized communities. We need to be careful about mitigating these harms by getting ahead of the technology where we can, and where we can't, we should retroactively create safeguards, guardrails, and regulations.
We’ve also got to pay attention to the labour side of things. Recently, I was speaking to somebody from an Indian company that does data labelling. We talked about the thousands of people who look at deepfake content on a daily basis – that is some really difficult content to be looking at – but they’re doing it. These people need the right resources to make sure that they don't burn out, that they're supported, and that doing this work doesn't impact their lives. They're looking at very sensitive content, and need to be supported accordingly. It's not just the deployment side of the technology, but also on the production side. We need to look at every stage of the technology lifecycle to make sure that people are supported in all ways.