CalCompute is back

California SB 53 brings back CalCompute - a public computing cluster for AI democratization. Explore the opportunities and risks of government-controlled AI infrastructure.

CalCompute is back like a boomerang. California wants to democratize AI – but what does that really mean?

Just a few days ago, on September 29, 2025, California Governor Gavin Newsom signed SB 53 – the Transparency in Frontier Artificial Intelligence Act. The legislation sets new transparency standards for developers of the largest frontier AI models: public disclosure of safety protocols, mandatory reporting of critical incidents, and whistleblower protections for employees at AI companies.

AI regulation, while still controversial, is slowly becoming a global standard. Europe is sharpening its teeth – the AI Act is coming into force in stages, and EU member states are gradually adapting to its provisions. California, historically a pioneer in tech regulation (remember CCPA or net neutrality?), is once again charting its own course in the face of federal decision-making paralysis.

But there's one element of SB 53 that intrigues me far more than yet another compliance requirement – CalCompute.

A story that keeps coming back

CalCompute isn't a new idea. It first appeared in February 2024 as part of a far more controversial bill, SB 1047, authored by State Senator Scott Wiener. That bill was a lightning rod in the tech world – it aimed to impose strict regulations on creators of "frontier AI models," meaning models whose training costs exceed $100 million.

SB 1047 would have required frontier developers to, among other things:

- build in a "kill switch" – the ability to fully shut down a model,
- develop and publish safety and security protocols before training,
- undergo annual third-party audits,
- report safety incidents to the California Attorney General.

Reactions? Mixed. Elon Musk supported it. Nancy Pelosi called it "more harmful than helpful." Meta's Yann LeCun warned it would kill open-source models. Anthropic (the maker of Claude) offered cautious, qualified support.

In September 2024, Governor Newsom vetoed SB 1047. The reason? Too restrictive, too much legal uncertainty, risk of pushing innovation out of California.

But there was one thing in that bill everyone agreed on – CalCompute.

What exactly is CalCompute?

CalCompute is a project to create a public computing cluster – a supercomputer that would be accessible to:

- academic researchers and universities,
- startups and smaller companies,
- public-interest and nonprofit projects.

Why does this matter?

Training advanced AI models requires massive computing power, costing tens, often hundreds of millions of dollars. This creates a natural barrier to entry – only tech giants (OpenAI, Google, Meta, Anthropic) can afford it. Smaller players, even with brilliant ideas, are outmatched from the start.
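
To see where those numbers come from, here's a rough back-of-envelope sketch. It relies on the widely used approximation that training a dense transformer takes about 6 FLOPs per parameter per token; the model size, token count, GPU throughput, utilization, and hourly price below are all illustrative assumptions, not figures from the bill.

```python
# Back-of-envelope estimate of the compute cost of a frontier-scale training run.
# Every number here is an illustrative assumption.

params = 500e9   # assumed model size: 500 billion parameters
tokens = 10e12   # assumed training set: 10 trillion tokens

# Common approximation for dense transformers: ~6 FLOPs per parameter per token.
total_flops = 6 * params * tokens

# Assumed effective GPU throughput: 400 TFLOP/s peak at 40% utilization.
effective_flops_per_gpu_s = 400e12 * 0.40

gpu_hours = total_flops / effective_flops_per_gpu_s / 3600
cost_per_gpu_hour = 3.00  # assumed cloud rental price in USD

print(f"Total compute:  {total_flops:.1e} FLOPs")
print(f"GPU-hours:      {gpu_hours:,.0f}")
print(f"Estimated cost: ${gpu_hours * cost_per_gpu_hour:,.0f}")
```

Under these assumptions, the compute alone comes out to roughly 52 million GPU-hours and about $156 million – past the $100 million threshold SB 1047 used to define frontier models, and far beyond what a startup or university lab can typically rent.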

CalCompute aims to change that. The idea is simple: democratize access to computing power so AI isn't the exclusive domain of a few Silicon Valley corporations.

The consortium – which the bill seeks to house within the University of California – will comprise representatives from academia, labor unions, AI experts, ethicists, and public interest advocates. By January 1, 2027, it is expected to deliver a detailed plan: how much it will cost, how large the cluster should be, who will have access, and on what terms.

Why did CalCompute survive while SB 1047 failed?

Senator Wiener learned his lesson. After the SB 1047 veto, Governor Newsom convened a group of experts (including AI "godmother" Fei-Fei Li from Stanford), which released a report in June 2025. The conclusion? AI regulations are needed, but they must be empirical, flexible, and based on a "trust but verify" approach.

SB 53 is exactly that version 2.0 – it takes the least controversial elements of SB 1047:

- transparency: public disclosure of safety protocols,
- mandatory reporting of critical incidents,
- whistleblower protections for employees at AI companies,
- and, of course, CalCompute.

And it leaves the door open for evolution – California's Department of Technology is required to recommend annual updates to the law based on technological developments and stakeholder input.

Sounds great. But does it?

The idea of AI democratization is compelling. A world where it isn't only OpenAI or Google deciding which models get built and how they're used is a better world. CalCompute could:

- lower the barrier to entry for startups and academic labs,
- enable research that doesn't depend on Big Tech's budgets or blessing,
- give public-interest projects compute they could never afford commercially.

But there's also a flip side.

What happens when the government starts deciding who gets access to computing power and who doesn't? Who gets a slot on CalCompute's calendar – and according to what criteria?

Could "AI in the public interest" come to mean, in practice, whatever those in power deem a "desirable" use of the technology? Might a mechanism emerge where politically inconvenient projects find it harder to get access to resources?

And there's an even more fundamental concern: CalCompute could become the de facto standard for what AI can and cannot be.

If the public cluster becomes the dominant training infrastructure for AI, then the government – not the market, the open-source community, or researchers – becomes the oracle defining the boundaries of the technology. Even with noble intentions, the path from there to control is short.

What's next?

SB 53 is just the beginning. This is pioneering regulation – the first of its kind in the US – and the world is watching. If CalCompute succeeds, we may see similar initiatives in other states and countries. If it fails, it will serve as a warning of how easily good intentions can morph into government overreach.

Questions worth asking:

- Who decides who gets access to CalCompute, and by what criteria?
- How do you keep a public cluster from becoming a tool of political gatekeeping?
- Who pays for it, and at what scale does it meaningfully change the market?
- If it succeeds, does government infrastructure end up defining what AI is allowed to be?

I'm a proponent of AI regulation. But I'm also skeptical of simple solutions to complex problems. CalCompute sounds good on paper, but the devil – as always – is in the implementation details.

California is experimenting. We're watching.

What do you think? Is CalCompute the future of democratic AI, or the beginning of something concerning?