Quality Assurance for Society

Published on June 10, 2025

As someone who spends their days thinking about quality assurance and testing, I’m trained to look beyond whether something works to ask whether it works well, and for whom. Quality isn’t just about technical functionality; it’s about how humans interact with technology, what happens when systems fail, and whether the design serves user needs or merely designer intentions. These questions become critical when we’re not just testing software, but evaluating proposals to restructure society itself around technological systems.

Tech Future

In March 2021, OpenAI CEO Sam Altman published an essay titled “Moore’s Law for Everything” that has gained renewed relevance as artificial intelligence reshapes our economy. While presented as a utopian vision of shared prosperity, Altman’s proposal deserves closer scrutiny, because beneath its techno-optimist veneer lies a blueprint for unprecedented corporate control over American society.

I’m going to pick on Altman and his ideas, but readers should understand that some of the wealthiest and most influential (and thus most powerful) people in the tech industry are essentially in line with these broad views.

The Seductive Promise

Altman’s essay begins with a compelling premise: AI will create enormous wealth, but without policy intervention, “most people will end up worse off than they are today.” His solution sounds appealingly democratic: redistribute corporate wealth by taxing the most valuable companies in shares rather than cash, then distribute those shares to every American citizen.

“The best way to improve capitalism is to enable everyone to benefit from it directly as an equity owner,” Altman writes. Under his proposed American Equity Fund, citizens would receive annual distributions of both dollars and company shares. “Poverty would be greatly reduced and many more people would have a shot at the life they want,” he promises. “If everyone owns a slice of American value creation, everyone will want America to do better.”

The vision is potentially seductive, right? A future where technology delivers abundance and everyone benefits. However, I would argue that a closer look at the mechanics reveals a different picture.

The Fine Print of Forever

Altman’s plan contains a crucial detail that undermines its democratic promise. The tax rate on companies, he specifies, “must be much smaller than the average growth rate of the companies.” This isn’t an oversight. It’s the entire point. By ensuring that public distributions never outpace corporate growth, Altman guarantees that real control always remains with the original owners and boards.

Citizens would receive shares, yes, but never enough to meaningfully influence corporate decisions. They would be perpetual minority shareholders in companies that increasingly control every aspect of their lives. The “equity” in Altman’s vision is more akin to a dividend than genuine ownership. It’s a carefully calibrated allowance designed to create the illusion of participation while preserving existing power structures.
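The arithmetic behind this design constraint is easy to sketch. The numbers below (15% annual corporate growth, a 2.5% annual share tax) are hypothetical figures assumed purely for illustration, not rates taken from Altman’s essay; the point is only that whenever the tax rate sits well below the growth rate, the original owners’ absolute stake keeps compounding even as shares trickle out to the public fund:

```python
# Toy model of a share-based tax where the tax rate is kept
# "much smaller than the average growth rate of the companies."
# All rates are illustrative assumptions, not proposed figures.

def simulate(years, growth=0.15, tax=0.025, start_value=1.0):
    """Return (owner_share_fraction, owner_value_multiple) after `years`.

    Each year the company grows by `growth`, then transfers `tax`
    of its outstanding shares to the public fund.
    """
    owner_fraction = 1.0   # fraction of shares still held by original owners
    value = start_value    # total company value
    for _ in range(years):
        value *= 1 + growth          # the company grows
        owner_fraction *= 1 - tax    # owners are diluted by the share tax
    owner_value_multiple = value * owner_fraction / start_value
    return owner_fraction, owner_value_multiple

frac, mult = simulate(10)
print(f"After 10 years: owners hold {frac:.1%} of shares, "
      f"worth {mult:.1f}x their starting stake")
```

Under these assumed rates, a decade of dilution still leaves the original owners with a comfortable majority of shares, and a stake worth roughly triple what they started with. The distributed shares appreciate too, but the fund’s holdings can never grow fast enough to shift control within any politically relevant timeframe.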

To make this system permanent, Altman suggests enshrining it in the Constitution itself, proposing “a constitutional amendment delineating the allowable ranges of the tax.” What sounds like democratic protection is actually institutional entrenchment, making it extraordinarily difficult for future generations to escape this arrangement, even if they wanted to.

From Money to Compute: The Evolution of Control

By 2024, Altman’s thinking had evolved in a revealing direction. In a podcast interview, he floated a new concept: “universal basic compute.” Instead of cash payments, “everybody gets like a slice of GPT-7’s compute. They can use it, they can resell it, they can donate it to somebody to use for cancer research.”

Okay, so, that’s interesting. This represents a fundamental shift from redistributing money to redistributing computational resources, essentially replacing national currency with corporate-controlled computing power. “You own like part of the productivity,” Altman explained, envisioning a future where “owning a unit of a large language model like GPT-7 could be more valuable than money.”

Think about what this means: in Altman’s future, your ability to participate in the economy depends entirely on access to AI systems controlled by private companies. Need housing, food, education, healthcare? You’ll need to use your allocated computing time on OpenAI’s systems. Want to start a business? Great. I hope OpenAI’s board approves of your venture.

The Company Town, Digitized

This isn’t innovation, is it? No, I would argue, it’s not. Instead, it’s the digitization of an old and troubling model: the company town. In nineteenth- and early twentieth-century America, mining and manufacturing companies often owned entire communities. Workers lived in company housing, shopped at company stores, and were paid in company scrip that could only be spent within the company’s ecosystem. The system created total dependency while maintaining the fiction of economic participation.

Altman’s vision updates this model for the AI age. Instead of coal mines and steel mills, we have language models and algorithmic systems. Instead of company scrip, we have compute allocations. Instead of geographic isolation enforcing dependency, we have technological monoculture. If AI truly “does almost everything” as Altman predicts, then the company controlling that AI controls everything.

Sounds fun, doesn’t it? The parallel becomes even more stark when we consider Altman’s assumption that in his AI-powered future, traditional human labor becomes largely obsolete. Just as company towns isolated workers from external economic opportunities, an AI-dominated economy would eliminate alternatives to the computational resources controlled by companies like OpenAI.

Democracy Under Algorithmic Rule

Perhaps most concerning is what this means for democratic governance. While Altman acknowledges the need for “strong leadership from our government,” the actual power in his vision rests with whoever controls the AI systems that generate economic value. In a world where AI produces “most of the world’s basic goods and services,” democratic institutions become subordinate to corporate boards.

Okay, so let’s consider the implications here. If citizens depend on AI-controlled systems for their basic needs, and those systems are owned by private companies, then the decisions of those companies become more consequential than the decisions of elected officials. Corporate governance effectively supersedes democratic governance, with citizens reduced to dependent beneficiaries rather than sovereign participants. This isn’t necessarily intentional authoritarianism. After all, Altman may genuinely believe his vision would benefit humanity. But impact matters more than intent, and the impact of concentrating such power in private hands would be the effective end of democratic self-determination.

The Illusion of Inevitability

Throughout his essay, Altman repeatedly emphasizes that “the changes coming are unstoppable” and that we must “embrace them and plan for them.” This framing is both strategic and revealing. By presenting his specific vision as technological inevitability rather than political choice, Altman discourages critical examination of alternatives.

But there’s nothing inevitable about organizing an AI-powered economy around private ownership and corporate control. We could choose public development of AI systems, cooperative ownership models, or robust regulatory frameworks that preserve democratic accountability. Altman’s vision is one possibility among many; it appears inevitable only if we accept his premise that his preferred arrangements are synonymous with technological progress itself.

A Future Worth Questioning

“The future can be almost unimaginably great,” Altman concludes his essay. He may be right about the potential for AI to create abundance and solve pressing human problems. But greatness isn’t just about technological capability, is it? I hope not. I hope it’s also about who controls that capability and how it serves human flourishing.

The question isn’t whether AI will transform our economy. Not only will it certainly do so, it already has. The question is whether that transformation serves democratic values and human agency, or whether it concentrates unprecedented power in the hands of a small group of technologists who believe their private visions should become public policy.

Altman’s proposal deserves credit for grappling seriously with AI’s economic implications. I don’t see nearly enough of that being discussed. However, Altman’s proposed solutions, such as permanent minority shareholding in corporate AI systems, constitutional entrenchment of corporate taxation, and the replacement of currency with computational resources, point toward a future that’s less about shared prosperity and more about managed dependency.

Before we embrace this vision of tomorrow’s company town, we might ask: who elected Sam Altman to redesign American democracy? And do we really want to find out what it means to live in a world where the company store is powered by artificial intelligence?

The future may indeed be unimaginably great. But it should be our choice to make, not his (or Big Tech’s) to impose.

A Quality Perspective

As technologists, I believe we have a responsibility that extends beyond building systems that work. We must ensure they work well for the humans who use them. Just as we wouldn’t (ideally) ship software without testing for edge cases, security vulnerabilities, and user experience failures, we shouldn’t accept social proposals without subjecting them to the same rigorous analysis.

Altman’s vision, however well-intentioned, exhibits classic design flaws: single points of failure, inadequate user control, and systemic vulnerabilities that could catastrophically impact end users. In this case, the “end users” are entire populations. The same critical thinking we apply to code review and system architecture should guide our evaluation of proposals to reshape society around technology. Quality assurance isn’t just about software; it’s about ensuring that technological power serves human flourishing rather than concentrating control in the hands of those who write the code.
