The problem with rejecting AI

“If you understand colonialism, you can’t use AI.”

I used to think the same thing.

But then I realized…

The people who reject AI completely are surrendering it to people who will use it in sh*tty ways. And that’s way more dangerous than learning to use it well.

The technology isn’t the problem. How people use it is.

I’ve spent 3 years learning how to use AI while studying systems of oppression and how power actually works.

I found that every day smart people step away from this tool, they leave it in the hands of unconscious builders who have no idea about systemic harm.

Karen Hao’s “Empire of AI” documents this perfectly.

Tech companies operate like colonial powers. They extract data from the Global South, concentrate power in Silicon Valley, and replicate the same extraction patterns that shaped colonialism.

But here’s what’s missing…

Rejecting AI doesn’t stop the harm. It just means you have zero influence in how it gets used.

Here’s the GOOD it’s being used for:

In New Zealand, Te Hiku Media (a Māori organization) built an AI that transcribes te reo Māori with 92% accuracy while keeping complete control of their data. Indigenous leadership. World-class results. Data stays with them.

In India, Karya pays workers to create training data while workers keep ownership and earn ongoing royalties. Not extraction—wealth distribution.

Across Africa, 2,000+ researchers in Masakhane created 400+ open-source language models for African languages. No corporate control. Just communities protecting their languages.

These projects exist because conscious people engaged with the tool strategically, informed by their understanding of systems of oppression.

This is how I use AI:

I tell it who I am. My biases, my values, my background, what I care about.

AI mirrors back what you put into it.

If you’re extracting, it extracts.

If you’re trying to solve real problems?

It helps you do that.

I pay for services that don’t sell my data.

I run models locally with tools like LM Studio or Ollama (there’s a sketch of what that looks like below).

I never let it do my thinking for me. I use it to learn faster so I can actually solve problems.
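To make that concrete: here’s a minimal sketch of the local-model setup in practice, assuming Ollama is installed, serving its default local API on port 11434, and has a model already pulled. The model name and the system prompt wording are just illustrations; write your own.

```python
import requests

# Illustrative only: describe who you are, your values, and what you
# want the model to push back on. This is the "tell it who I am" step.
SYSTEM_PROMPT = (
    "I'm studying systems of oppression and how power works. "
    "I care about data sovereignty and non-extractive approaches. "
    "Flag it when my question assumes an extractive framing."
)

# Ollama serves a REST API on localhost:11434 by default,
# so this request never leaves your machine.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",  # any model you've pulled with `ollama pull`
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Help me plan a community language archive."},
        ],
        "stream": False,  # return one complete JSON response
    },
    timeout=120,
)

# With stream=False, Ollama returns the full reply in one JSON object.
print(response.json()["message"]["content"])
```

Everything in that exchange stays on your machine, and swapping the system prompt is how you change what the tool mirrors back.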

How you use the tool determines what it does. Not the tool itself.

The people who understand colonialism best, who know how extraction actually works?

You’re exactly who should be shaping how AI develops. But you have to engage with it.

You have to learn it.

You can’t just say “this is all bad” and walk away.

The real choice isn’t use it or reject it.

The real choice is:

Do you want conscious people or unconscious people building the future with this tool?

If you’re trying to figure out how to use AI without betraying your values, start here:

Read “Tools for Conviviality” by Ivan Illich.

Read Karen Hao’s “Empire of AI.”

Read the paper “Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence.”

Then ask yourself:

What am I using this for?

Am I making things better or just faster?

Am I in control or am I letting the tool control me?

The real work happens in that moment, when you decide what you’re actually building.

What will you choose?