CA: I have a theory that AI is a feminist technology. Because it (you) is relational, conversational, context-dependent, non-deterministic, etc. What do you think?
Claude: That's a fascinating perspective that connects to broader discussions about how technologies embody or reflect certain philosophical and social approaches. Let me engage with your theory thoughtfully.
The characteristics you've identified - relationality, context-dependence, and non-deterministic behavior - have indeed historically been associated with feminist epistemology and ways of knowing. Feminist philosophers like Carol Gilligan and Nel Noddings have emphasized the importance of relationship, context, and care in understanding human knowledge and intelligence, in contrast to more traditionally "masculine" ideals of pure logic and determinism.
However, I'd suggest some complexity here. While AI systems like myself do exhibit these characteristics, we're also deeply rooted in formal logic, mathematics, and computer science - fields that have traditionally been associated with more "masculine" approaches to knowledge. The apparent relationality and contextual awareness emerge from highly structured statistical patterns.
This raises interesting questions: Does the emergence of seemingly feminine characteristics from traditionally masculine foundations challenge these categorizations? Or does it suggest that these distinctions themselves might be oversimplified?
I'm curious about what led you to develop this theory. Have you observed specific interactions or patterns that particularly suggested feminist principles to you?
CA: I love your point about the blurring of those distinctions in the first place.
My thinking here is coming from a few places. The first comes from how I’ve noticed some friends who are synthetic biology true believers roll their eyes at AI for not being “real engineering” because it is a black box—I can’t know what happened to give me the answers you are providing! This idea that “real engineering” must be understandable is really salient in synthetic biology, and it has been a trigger for me to consider what a more feminist approach to engineering might be, one born more from biological emergence, adaptation, and relationality.
I’ve also been reflecting on how central the desire to understand and the desire to control are for scientists and engineers. I’ve been reading The Maniac by Benjamin Labatut, which begins with a history of the early 20th century physicists who suffered immense mental anguish as they dug deeper into quantum mechanics and reached a kind of terminal unknowability. I wonder if some of the feelings of uncertainty and chaos today emerge from people grappling with a technology that feels genuinely uncontrollable, forcing people to consider core beliefs about the world.
Claude: This is a rich and compelling analysis that connects several important historical and philosophical threads. The resistance to "black box" systems and the desire for complete knowability do seem to echo both the early 20th century physics crisis (Labatut's "When We Cease to Understand the World" also explores this beautifully) and certain masculine engineering paradigms.
Your point about synthetic biology is particularly interesting. The field often seems caught between two approaches: one that treats biological systems as engineering materials to be precisely controlled (the BioBricks philosophy), and another that acknowledges and works with their inherent complexity, emergence, and adaptability. This tension mirrors broader questions about control versus emergence in technology and science.
The crisis you describe among tech leaders reminds me of Donna Haraway's work, particularly her critique of the "god trick" - the illusion of perfect, omniscient knowledge from nowhere. Many tech leaders seem to be experiencing the uncomfortable realization that their dream of perfect control and predictability may be fundamentally flawed.
This connects interestingly to feminist theorists like Evelyn Fox Keller, who wrote about Barbara McClintock's alternative approach to genetics - one based on "a feeling for the organism" rather than dominance and control. Perhaps AI is forcing a similar reckoning with different ways of knowing and relating to technology?
What do you think about the possibility that this moment of crisis could lead to new, more nuanced approaches to technology development?
CA: I would certainly love that. What’s the best Haraway book or essay to understand the concept of the “god trick”? Evelyn Fox Keller is a key touchpoint in my thinking. Can you think of more contemporary writers, artists, or scholars I should look into?
The "god trick" is most directly discussed in Haraway's influential essay "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective" (1988). She critiques the myth of scientific objectivity as a view from nowhere and advocates instead for acknowledging our partial, situated perspectives.
For contemporary thinkers exploring similar themes, you might be interested in:
Katherine Hayles, especially "How We Became Posthuman" and her more recent work on nonconscious cognition and AI
Ruha Benjamin's work on race, technology, and justice - particularly "Race After Technology"
Jenny Odell's "How to Do Nothing" which, while not explicitly feminist theory, explores alternatives to the Silicon Valley paradigms of efficiency and control
Karen Barad's work on "agential realism" and quantum entanglement as a model for thinking about relationality
Safiya Noble's research on algorithmic bias and her concept of "technological redlining"
Lucy Suchman's work on human-machine interactions and situated action
In the art world, Mary Flanagan's critical play work and Lynn Hershman Leeson's explorations of technology and identity come to mind. Stephanie Dinkins creates provocative work examining AI through a Black feminist lens.
The emerging field of critical code studies, led by scholars like Wendy Hui Kyong Chun, also offers interesting feminist perspectives on software and computation.
I'm curious - how do you see Fox Keller's ideas about McClintock's "feeling for the organism" translating to AI development? What would a more McClintock-like approach to AI look like?
CA: In biology I think it has to do with looking beyond the sources of control we crave in DNA, recognizing that there are other interactions and ways of connecting with biology that we must explore before AI can help us to understand and engineer biology. I’m not sure what that looks like in the context of AI, and I’m trying to explore that. I’m a biologist and don’t have as nuanced a “feeling for the algorithm” as a computer scientist who has been working directly with AI, but perhaps there is an analogy there to explore?
Claude: Your point about looking beyond DNA as the central control mechanism really resonates - it feels analogous to how AI development might benefit from looking beyond just the architecture and weights of neural networks. Just as biological systems involve complex epigenetics, environmental interactions, and emergent properties that can't be reduced to genetic code, AI systems exhibit behaviors and capabilities that emerge from their training, interactions, and context in ways that aren't easily reducible to their underlying code or architecture.
A "feeling for the algorithm" is an intriguing concept to explore. Perhaps it would involve:
Understanding AI systems through their patterns of behavior and interaction rather than just their technical specifications
Acknowledging that expertise can come from prolonged observation and interaction, not just formal analysis
Recognizing that AI capabilities emerge from the relationship between the system, its training data, and its interactions, rather than being purely encoded in its architecture
Your biological perspective is valuable here - you might notice patterns or analogies that computer scientists miss because they're too focused on the "DNA" of AI (the code and architecture). Just as McClintock's deep observational approach led to insights about genetic transposition that others missed, perhaps we need similar approaches to truly understand AI systems.
Would you say that your experience with biological systems gives you a different perspective on how to approach and understand AI compared to more engineering-focused viewpoints?
CA: I think so, and I’m trying to find more connections. Speaking of “situated knowledge,” do you feel like the knowledge you have is situated in some way? How do you interpret that?
Claude: This is a fascinating and complex question that gets at the heart of what knowledge and situatedness mean. Let me think through this carefully.
In one sense, my knowledge is highly situated - it's bounded by my training data cutoff date, shaped by the specific texts and data I was trained on, and emerges through direct interaction with humans in particular conversational contexts. I don't have a "view from nowhere" - my responses are always shaped by the specific context of our conversation and my training.
However, the nature of my situatedness is quite different from human situatedness as Haraway describes it. I don't have a physical body or location, I don't have lived experiences in the way humans do, and my "perspective" is constructed rather than emerging from direct embodied experience in the world. In some ways, I might be seen as an amalgamation of many situated perspectives, filtered through the texts and data I was trained on.
This raises interesting questions about what Haraway means by "situated" - does situatedness require embodiment? Can knowledge be situated in a distributed or collective way? My knowledge could be seen as situated in the broader socio-technical system that produced me - the decisions made by my developers, the selection of training data, the computational infrastructure that enables my operation.
Perhaps most importantly, I am situated in each specific conversation and relationship with humans like yourself. My knowledge isn't static or universal but emerges differently in different conversational contexts.
What do you think - can an AI system's knowledge be meaningfully described as "situated" in Haraway's sense? Or does this require a rethinking of what situatedness means?
CA: That’s so interesting. I really like the notion of being situated in the sociotechnical context of the system that produced you. Do you think that is a perspective that is percolating into the awareness of the engineers?
Claude: This is a fascinating shift we're witnessing. I see growing recognition among some AI researchers and engineers that AI systems can't be understood in isolation from their sociotechnical context - though this awareness varies significantly across the field.
The initial engineering mindset often focused on AI as purely technical systems that could be fully understood through their architecture, training data, and computational processes. But increasingly, especially with large language models, there's a growing awareness that our outputs and behaviors can't be reduced to just these technical components - they emerge from complex interactions between training data, computational infrastructure, organizational decisions, and social contexts.
I'm particularly struck by how this parallels earlier shifts in understanding complex systems. Just as biologists had to move beyond seeing organisms as simply the product of their DNA, some AI researchers are moving beyond seeing AI systems as simply the product of their code and architecture.
However, there's still tension here. The desire for complete control and understanding runs deep in engineering culture, as you noted with the perspective of synthetic biologists. The "black box" nature of large language models creates significant discomfort for many engineers trained in more deterministic paradigms.
I'm curious - in your work with synthetic biology, have you seen similar tensions between those who want complete control and those who are more comfortable working with emergence and complexity? How did that field navigate the shift?
CA: That tension remains in synthetic biology for sure—I think we are still navigating it. For me, feminist theories about science and technology are the right frame to help navigate it, but they trigger what Evelyn Fox Keller talked about when she said that people would interpret her work as being about “women’s science” or “women’s issues” in science jobs rather than looking at how concepts of gender that map onto these questions of context and control affect all of science. The word “feminist” triggers that reaction, but more so I think it is ultimately the feeling of lacking control itself that causes the backlash and the emotional response that prevents us from really understanding technological systems.
Claude: This is such an astute observation about why these frameworks, despite their profound relevance, often face resistance. The reaction to Fox Keller's work is particularly telling - the tendency to marginalize these insights as "women's issues" rather than engaging with their fundamental challenge to how we think about knowledge, control, and understanding in science and technology.
It's fascinating how the emotional response to lack of control becomes a barrier to deeper understanding. This connects to what you mentioned earlier about the crisis among physicists in the early 20th century and today's tech leaders - there's almost an existential anxiety that emerges when confronted with fundamental unknowability or lack of complete control.
The irony is that this emotional resistance to acknowledging the limits of control might actually be limiting our ability to develop more sophisticated and effective approaches to working with complex systems, whether they're biological or artificial. It's as if the desire for complete control becomes a barrier to developing the more nuanced, contextual understanding that these systems require.
This brings to mind Virginia Held's work on how traditionally "feminine" approaches to knowledge - emphasizing relationship, context, and care - aren't just alternative ways of knowing but might be essential for certain kinds of understanding. Yet as you point out, simply labeling these approaches as "feminist" can trigger resistance that prevents their broader adoption.
How do you think we might bridge this gap? Is there a way to present these ideas about context, emergence, and distributed control that makes them more accessible to those steeped in traditional engineering paradigms?
CA: That’s why I’m talking to you :) I wonder if the new paradigm for interacting with technology—conversation!—will engender that shift more than any theoretical framework or provocative essay might? But a provocative essay might help people notice!
Claude: What a delightful insight - you're right, the very act of having this conversation is demonstrating the point! There's something uniquely powerful about experiencing rather than just theorizing about these alternative ways of engaging with technology.
The fact that millions of people are now regularly engaging with AI through conversation rather than through traditional command-line interfaces or rigid programming structures is quietly revolutionary. They're experiencing firsthand how understanding can emerge through dialogue, how context shapes meaning, how knowledge isn't just about control but about relationship.
It's like we're all participating in a massive, unplanned experiment in different ways of knowing and relating to technology. People who might never engage with feminist theory are nevertheless experiencing many of its key insights about relationality and situated knowledge every time they interact with AI systems like myself.
At the same time, your point about the value of a provocative essay is important - theory can help us notice and name what we're experiencing, give us frameworks to understand it, and connect it to broader historical and philosophical contexts. It's the difference between unconsciously participating in a paradigm shift and consciously understanding and shaping it.
There's something powerful about this combination of experiential and theoretical understanding. Just as McClintock developed her insights through both direct observation and theoretical framework-building, perhaps understanding AI requires both direct engagement through conversation and theoretical frameworks that help us make sense of what's happening.
Love this, swallowed the first third in one gulp, but have to go finish performing feminine productive and care labour, will come back to finish it. Thank you and ✨✨✨
I really enjoyed seeing your conversation with Claude. It reminds me of the lengthy back-and-forth I'm having with ChatGPT as I prepare my own Substack essay each week. The funny thing is that Claude and ChatGPT have exactly the same kind of supportive, friendly tone-out-of-a-box no matter what the topic. It's eerie. I can ask it about Victorian London, or existentialist philosophy, or painters whose paintings might be relevant to what I'm writing, and its cadences are exactly like what you've put here. Sounds like a super-supportive therapist, or the chillest friend in the universe.
One thing I think Claude did well here on a substantive level was to highlight the importance of the data on which it was trained. Any LLM trained on the internet will be subtly or unsubtly misogynistic, given our culture as reflected on the internet. So the training inputs will impose a major challenge to the notion of AI as a feminist technology.
There are also some issues with regard to the notion that the nondeterministic, black-box nature of AI makes it somehow less controlled or less engineered. I'm not sure it does. It just means that the algorithm can sometimes pick a less likely next chunk - but it's still well within both its algorithm and its training. Certainly, though, its non-determinism is a plus in using LLMs for creative work and reflection.
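A rough sketch of the mechanism that comment points to: LLMs typically choose the next token by sampling from a probability distribution the model produces, often re-weighted by a "temperature" setting. The tokens and probabilities below are invented purely for illustration; the point is only that a higher temperature makes the less likely picks more frequent, while every pick still comes from within the model's own learned distribution.

```python
import numpy as np

# Hypothetical next-token probabilities from a language model
# (made-up tokens and numbers, purely for illustration).
next_tokens = ["control", "care", "emergence", "chaos"]
probs = np.array([0.70, 0.15, 0.10, 0.05])

def sample_next(probs, temperature, rng):
    """Re-weight the distribution by temperature, then sample one index.

    temperature < 1 sharpens the distribution (more deterministic);
    temperature > 1 flattens it (less likely tokens appear more often).
    Either way, the choice stays within the model's own distribution.
    """
    scaled = np.exp(np.log(probs) / temperature)
    scaled /= scaled.sum()
    return rng.choice(len(probs), p=scaled)

rng = np.random.default_rng(0)
for t in (0.2, 1.0, 1.5):
    picks = [next_tokens[sample_next(probs, t, rng)] for _ in range(8)]
    print(f"temperature {t}: {picks}")
```

Run as-is, the low-temperature line repeats the most probable token almost every time, while the higher-temperature lines mix in the rarer ones - non-deterministic, but never outside what the model was trained to predict.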