The company behind ChatGPT wants governments to overhaul tax systems, protect workers, and create public wealth funds before superintelligent AI reshapes the economy. OpenAI has published a policy paper titled "Industrial Policy for the Intelligence Age: Ideas to Keep People First." The document argues that AI is advancing fast enough to demand urgent economic reform. It calls for new approaches to taxation, labor policy, and social protections as society prepares for superintelligence.
The 13-page blueprint landed on Monday. It is one of the boldest attempts yet by an AI company to shape policy around its own technology. It also arrives at a tense moment. Fears about AI-driven job losses and wealth concentration are growing worldwide.
The Core Proposals
OpenAI argues that AI access should be treated as a foundational economic resource on par with literacy. Pricing, it says, must not shut out hourly workers or marginalized communities.
On taxation, the company is blunt. It proposes shifting the tax burden from labor to capital. The reasoning is straightforward. AI-driven growth could hollow out the tax base funding Social Security, Medicaid, SNAP, and housing assistance. Corporate profits would rise. Payroll tax revenue would shrink.
OpenAI also floats a robot tax. Bill Gates proposed the same idea back in 2017. The concept is simple: automated systems would pay into the tax base just as human workers do.
The most sweeping idea is a nationally managed public wealth fund. Every American would receive an ownership stake in AI-generated economic gains. The fund would be seeded partly by AI companies. Returns would go directly to citizens.
Reshaping Work
OpenAI proposes subsidizing a four-day work week with no pay cut. It also wants employers to boost retirement contributions, cover more healthcare costs, and subsidize childcare.
The document includes an automatic safety-net mechanism. Here is how it would work. Once AI-related job displacement crosses defined thresholds, income support and wage insurance would kick in automatically. No new legislation needed. As conditions improve, the expanded benefits would wind down on their own.
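The trigger-and-release mechanism the paper describes can be sketched as a simple hysteresis rule. The thresholds and function below are invented for illustration; the paper does not specify concrete numbers or an implementation.

```python
def update_benefits(displacement_rate: float, currently_active: bool,
                    trigger: float = 0.05, release: float = 0.03) -> bool:
    """Return whether expanded benefits should be active.

    Hypothetical sketch: benefits switch on when AI-related job
    displacement crosses the trigger threshold, and wind down only
    after conditions improve past a lower release threshold. The gap
    between the two (hysteresis) prevents rapid on/off cycling.
    """
    if currently_active:
        # Already on: stay on until displacement falls below release.
        return displacement_rate > release
    # Currently off: activate once displacement reaches the trigger.
    return displacement_rate >= trigger
```

With these made-up numbers, a 6% displacement rate would switch the benefits on; they would then remain active at 4% and wind down automatically once displacement drops to 2%, with no legislative action at either step.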
The urgency is real. White-collar payrolls have contracted for 29 consecutive months. Economists call that unprecedented outside a recession.
Safety and Containment
The paper goes beyond economics. It calls for frontier model auditing, incident reporting systems, and "model-containment playbooks." Those playbooks would address scenarios where dangerous AI systems cannot be easily recalled.
CEO Sam Altman told Axios that a major AI-enabled cyberattack is "totally possible" within the next year. He added that AI-created pathogens are "no longer theoretical."
Questions About Motives
The timing raises eyebrows. The New Yorker recently published an extensive investigation into OpenAI. It revealed that co-founder Ilya Sutskever wrote internal memos in 2023 accusing Altman of being deceptive about safety protocols.
Those trust issues led the board to fire Altman. They concluded he had not been consistently candid. The firing triggered chaos. Employees threatened to leave. Investors pressured the board to reinstate him.
OpenAI was founded as a nonprofit dedicated to benefiting all of humanity. It has since converted to a for-profit company. Critics question whether its mission and its shareholder obligations can coexist.
Altman acknowledged the tension. He told Axios the company feels urgency and wants a serious public debate.
The Bigger Picture
Whether any of this gains political traction is unclear. The paper arrives as Congress prepares to debate AI legislation. AI tools are already displacing workers across industries. The public is paying attention.
The company most likely to disrupt the global economy is now telling governments to act fast. The question is whether anyone will listen, and whether OpenAI's own actions will match its words.