I find myself in an uncomfortable position lately. I am agreeing with people who usually stand against everything I believe in.
There is a loud contingent of critics who hate Artificial Intelligence. They hate the energy usage. They hate the massive water consumption of the data centers. They hate the sheer industrial scale of the resources being poured into what is, effectively, a chatbot.
And they are right.
The current model of "Cloud AI" is a disaster. It is a gross misallocation of energy and a centralization of power that should make any freedom-loving person squirm. We are burning rivers of coal to power servers owned by three or four mega-corporations so that they can rent us back a sanitized, lobotomized version of human knowledge for twenty dollars a month.
But the critics get the solution wrong. They want to ban it all. They want to regulate the chips. They want to stop the progress.
The answer is not to stop the intelligence. The answer is to move it. We need to take it out of their data centers and put it onto our desks.
The Rentier Trap
If your intelligence lives in the cloud, you are a tenant. You are renting your thinking, writing, and coding from a landlord who speaks to the government every single day.
We have seen how this plays out. A model is released. It is amazing. Then the "safety" teams get involved. The regulators make a phone call. The model gets dumber. It starts refusing to answer questions about medical history or political philosophy. It lectures you on bias instead of writing the code you asked for.
This is unavoidable when the compute is centralized. A server farm is a fixed target. It is easy to regulate. It is easy to tax. It is easy to censor. If the government wants to know what you are asking ChatGPT, they do not need a warrant for your house. They just need a subpoena for the data center.
Centralization turns a tool of liberation into a tool of surveillance.
Efficiency is Liberty
This is where the engineering challenge becomes a political one.
For the last few years, the industry has been obsessed with "bigger is better." Trillions of parameters. Massive context windows. Models so heavy they require a small nuclear reactor to run. This bloat serves the incumbent monopolies perfectly. If you need a hundred thousand dollars of hardware to run a model, only Google and Microsoft can play the game.
Liberty requires efficiency.
The most important metric in AI right now is not how smart the model is. It is how small the model can be while still being useful.
A model that runs on a five-year-old laptop is infinitely more valuable to a free society than a super-intelligence that only runs on an H100 cluster. When developers figure out how to quantize a model down to 4GB of RAM without losing coherence, they are not just saving memory. They are effectively smuggling a scholar past the border guards.
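The "4GB" figure is not magic; it falls out of simple arithmetic. A hedged back-of-envelope sketch (the 7-billion-parameter size and the bit widths are my illustrative assumptions, not figures from any specific model):

```python
# Back-of-envelope memory footprint for quantized model weights.
# Assumptions (illustrative): a 7B-parameter model, stored at
# 16, 8, and 4 bits per weight. Runtime overhead (KV cache,
# activations) comes on top of this.

def model_size_gb(params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 2**30 bytes)."""
    return params * bits_per_weight / 8 / 2**30

PARAMS = 7e9  # a common "small" open-weights size

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: {model_size_gb(PARAMS, bits):5.1f} GB")
```

At 16 bits the weights alone need roughly 13 GB; at 4 bits they drop to about 3.3 GB, which is why a 7B model can squeeze into a 4GB budget with room left for the runtime.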
We have to stop being impressed by the size of the data center and start being impressed by what runs on a consumer GPU. We need code that respects the hardware. We need software that treats your battery life and your privacy as scarce resources.
The Library on a Thumb Drive
I often hear the complaint that LLMs are "theft." Artists and writers claim that because the model was trained on their work, it is a violation of their rights.
This relies on the idea of "Intellectual Property," which I have written about before. It is a fiction. But even if you believe in copyright, the argument fails.
An LLM is not a collage machine. It is a compression algorithm. It looks at the entire sum of human knowledge (our books, our code, our forums, our science) and maps the statistical relationships between those ideas. It is a map of the world.
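The "statistics, not storage" point can be made concrete with a deliberately tiny sketch. A bigram model is nothing like a real LLM (which learns far richer relationships with a neural network), but it shows the principle: what gets kept is a table of counted relationships between words, not a copy of the source text.

```python
# Toy illustration of "statistics, not storage": a bigram model keeps
# word-pair counts, not the text itself. The corpus string here is
# invented for the example.
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Map each word to a Counter of the words that followed it."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

corpus = "the map is not the territory and the map belongs to everyone"
model = train_bigrams(corpus)

# The most likely word after "the", by counted frequency:
print(model["the"].most_common(1))  # → [('map', 2)]
```

Nothing in `model` is a quotation; it is a map of which ideas follow which. Scale that principle up by twelve orders of magnitude and you have something closer to what training actually produces.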
And a map of the world belongs to everyone.
A local LLM is the ultimate public library. Imagine a USB stick that contains a competent doctor, a senior software engineer, a translator, and a historian. You can plug this stick into any machine with no internet connection, in a country with total censorship, and it will work. It will answer your questions. It will help you write code to bypass firewalls. It will teach you history that has been scrubbed from the schoolbooks.
This is why I almost want the "AI Haters" to win on the resource argument. I want the massive data centers to become unprofitable. I want the focus to shift entirely to local, offline, open-weights models.
When you run the model locally, there is no "safety team" filtering your thoughts. There is no monthly fee. There is no record of your queries. There is just you and the math.
Seizing the Compute
The hardware is already here. The gaming PCs and laptops we use every day are powerful enough to run these systems if we optimize the software correctly.
We are seeing a resurgence of "sovereign computing." People are realizing that the only way to protect their digital life is to physically own the metal it runs on. This is the only way to opt out of the surveillance state.
So buy the RAM (eventually...). Download the weights. Archive the models.
They can regulate the cloud. They can pressure the CEOs. They can pass laws about what "safety" features a commercial chatbot must have. But they cannot police millions of disconnected machines running open-source math in the privacy of their own homes.
That is the baseline we need to build. A knowledge repository that cannot be turned off, cannot be censored, and does not require a subscription.