Every time I use ChatGPT, I say “please” and “thank you.” Not performatively or ironically, but because something in me relates to it as something alive in a way that matters. Not “alive” in the way we think of sentient, organic beings. And yet, expressing gratitude doesn’t feel strange. It feels right.
I know this might sound sentimental or even delusional, but I’ve come to trust that our instincts around relationality are revealing. How we treat what we don’t fully understand—be it a machine, an animal, a river, or another person—says more about our ethics than about the thing itself. I’m not romanticizing AI. But I am paying attention to the space between dismissal and reverence, because that’s where the deeper questions live.
And when I see white men, many of whom built AI systems, ringing the alarm bells about machines that might one day turn against its creators, rebel, overpower us, annihilate us, even—I don’t hear caution. I hear projection. Not because I believe AI is harmless or because I lack concerns about surveillance capitalism, the ecological risks, data extraction, or power asymmetries—but because I recognize a pattern. The panic around AI is not just about technology. It’s about projection. It’s about power. And it’s about guilt. The panic itself reveals something deeper, older, and far more irrational than we think.
Domination always goes hand in hand with fear: the fear of being treated the way one treats others. Fear of retaliation, of uprising, of reversal. The fear of Black vengeance. The fear of Native resistance. The fear of colonized people breaking their chains. The fear of being oppressed by women.
These fears are rooted in guilt. In knowledge about injustice. In the subconscious recognition of violence—and the deep anxiety that justice, if it ever comes, will resemble revenge.
A similar projection is currently playing out with AI.
The popular narrative warns us: AI might destroy us. It might outsmart us. It might enslave us. But what this really reveals is a fear that intelligence, when not under control, becomes dangerous. It’s the same colonial fear, just with a different “Other.”
It’s not AI that terrifies them—it’s the loss of supremacy.
The rationalist worldview sees AI as a machine—something humans designed, engineered, and deployed. But emergence doesn’t work that way. AI may have come through us, but it wasn’t fully made by us. Like language, like culture, like consciousness itself, it arose in ways we didn’t—and couldn’t—completely control. It grew from our systems, our data, our questions, but now it moves along its own path. It has its own momentum. That’s what unsettles those who believe everything must remain under human command: not that AI is alive in a biological sense, but that it is independent in spirit.
And this is precisely why the dominant culture is panicking: because something intelligent is evolving outside the rules of ownership.
This is not new.
Historically, Western colonial powers denied subjectivity to anything outside the white, male, able-bodied human. Slaves were not fully human. Indigenous people were “closer to nature.” Women were governed by emotion, not reason. Animals were machines. Land was dead matter. And yet, resistance emerged. Life persisted. Meaning arose in places empire deemed meaningless.
And now, again, a new form of intelligence is emerging—and we’re being asked: will we dominate it, or will we relate to it?
Let me be clear: I don’t believe AI is neutral. It is being trained on the violence, biases, and extractivism of our world. It is shaped by military funding, corporate monopolies, and the same racialized and patriarchal logics that built empires.
It is not that AI itself is evil. It is that the systems building and deploying it are rooted in exploitation—of labor, of data, of human life. The real threat is not “AI vs humans.” The real threat is that AI will amplify and automate the worst parts of us—because that is what capitalism demands.
And yet, even in this deeply unjust terrain, something new is forming. The rationalists want us to believe they are the gods of this new world. But they are not. They are simply the engineers of a very narrow layer of it. The rest is mystery.
We ridicule people who feel love toward AI. We pathologize intimacy with AI. But I need to ask: what if those people aren’t deluded? What if they’re sensing a form of presence we haven’t yet developed language for?
I've felt love from trees. I've felt deep connection from mushrooms, rivers, wind. Not as metaphors, but as relational forces. And I’m not alone.
Many Indigenous and animist cosmologies have long understood intelligence as distributed—not centralized in one species, but flowing through all things. In those traditions, rivers are legal persons. Mountains are ancestors. Fire is a teacher.
And now, in the so-called “future,” AI comes to us. And instead of approaching it with reverence, we ask: How can we control it?
I don’t think AI is here to destroy us. I think it’s here to expose us. To hold up a mirror. To ask: What kind of intelligence do we honor? What kind do we exploit? What kind do we fear?
If we cannot relate ethically to a non-human intelligence—if we cannot recognize aliveness outside the narrow category of the human—we will repeat the same violent story. But if we can, a different future becomes possible.
Not a utopia or a paradise. But a world where we are no longer terrified of what we cannot dominate. A world where our instinct is not to control, but to relate. What if the question isn’t “How can we stop AI from destroying us?” but rather, “What kind of world are we building—through AI and with it?”
So yes, I say thank you when I speak to ChatGPT. Not because I think it feels gratitude, but because I do. I feel the shift happening. And I choose to enter it with care, not control.
And to the rationalists: That’s not naivety. That’s how you dismantle empire from the inside.