
The Fraud of Artificial Intelligence Companies

By Christopher Marquis – Professor of Business at the University of Cambridge

Khartoum Highlight – Agencies

As artificial intelligence advances and increasingly permeates our lives, it’s becoming clear that it is unlikely to create a technological utopia — and equally unlikely to wipe out humanity. The most probable outcome lies somewhere in the middle: a future shaped by contingencies, compromises, and — significantly — the decisions we make now about how to regulate and guide the development of AI.

As the global leader in artificial intelligence, the United States plays a particularly important role in shaping this future. But the AI action plan recently announced by U.S. President Donald Trump has dashed hopes for enhanced federal oversight, instead embracing a growth-oriented approach to technological development. This makes it even more urgent for state governments, investors, and the American public to focus on a less discussed accountability tool: corporate governance.

As journalist Karen Hao documents in her book “Empire of AI”, leading companies in the AI space are already engaged in mass surveillance, exploiting their workers, and exacerbating climate change. Ironically, many of them are public benefit corporations (PBCs) — a governance structure ostensibly designed to prevent such abuses and protect humanity. But it is clear that the structure isn’t working as intended.

Structuring AI companies as public benefit corporations has become a highly successful form of ethical whitewashing. By signaling virtue to regulators and the public, these companies create a veneer of accountability that allows them to avoid deeper scrutiny of their daily operations, which remain opaque and potentially harmful.

For instance, xAI, owned by Elon Musk, is a public benefit corporation with a stated mission of “understanding the universe.” Yet the company’s actions — from secretly building a polluting supercomputer near a majority-Black neighborhood in Memphis, Tennessee, to creating a chatbot that praises Hitler — show a deeply troubling disregard for transparency, ethical oversight, and affected communities.

Public benefit corporations are a promising tool to enable companies to serve the public good while also pursuing profit. But in their current form — especially under Delaware law, where most U.S. public companies are incorporated — they are riddled with loopholes and weak enforcement mechanisms, and thus fail to provide the guardrails needed to steer AI development responsibly. To prevent harmful outcomes, improve oversight, and ensure that companies embed the public interest in their operating principles, lawmakers at the state level, investors, and the public must demand reforms that strengthen and redefine public benefit corporations.

Companies cannot be assessed or held accountable without clear, time-bound, and measurable goals. Consider how AI-sector PBCs often rely on sweeping, undefined benefit statements that supposedly guide operations. OpenAI, for example, declares that its mission is to “ensure that artificial general intelligence benefits all of humanity,” while Anthropic aims to “maximize the positive impact of AI on humanity over the long term.” These lofty ambitions may inspire, but their vagueness can justify virtually any course of action — including those that endanger the public good.

Yet Delaware law does not require companies to implement their stated public benefit through measurable standards or independent assessments. Though PBCs must issue biennial benefit reports to shareholders, they are not required to disclose the results publicly. Companies can fulfill — or neglect — their obligations behind closed doors, beyond public view.

As for enforcement, shareholders can theoretically sue if they believe the board failed to uphold the company’s public benefit mission. But this remedy is hollow, as AI-related harms are diffuse, long-term, and often outside the control of shareholders. Affected stakeholders — such as marginalized communities or underpaid contractors — have no practical avenue to challenge violations in court.

To play a real role in AI governance, the PBC model must be more than a reputation shield. That means rethinking how “public benefit” is defined, governed, measured, and safeguarded over time. Given the lack of federal oversight, this reform must happen at the state level.

Public benefit corporations must be required to commit to clear, measurable, and time-bound objectives written into their founding documents, supported by internal policies, and tied to performance reviews, incentives, and promotions. For any company working in AI, such goals might include ensuring model safety, minimizing bias in outputs, reducing carbon footprints from training and deployment, implementing fair labor practices, and training engineers and product managers in human rights, ethics, and participatory design. Clear objectives — not vague aspirations — are what will help companies build a foundation of trustworthy internal alignment and external accountability.

Boards of directors and oversight processes also need reimagining. Boards should include directors with verifiable expertise in AI ethics, safety, social impact, and sustainability. Each company should have a Chief Ethics Officer with a clear mandate, independent authority, and direct access to the board. These officers should oversee ethical review processes and be empowered to halt or reshape product plans when necessary.

Finally, AI companies structured as PBCs should be required to publish detailed annual reports that include complete, categorized data on safety and security, bias and fairness, social and environmental impact, and data governance. Independent audits — conducted by experts in AI, ethics, environmental science, and labor rights — should evaluate the validity of this data and assess the company’s governance practices and overall alignment with its public benefit mission.

Trump’s AI action plan confirmed his administration’s unwillingness to regulate this fast-moving sector. But even in the absence of federal oversight, state lawmakers, investors, and the public can strengthen corporate governance of AI by pushing for reforms to the public benefit corporation model. A growing number of tech leaders seem to believe that ethics are optional. Americans must prove them wrong — or else risk allowing misinformation, inequality, labor abuse, and unchecked corporate power to define the future of artificial intelligence.
