Prince Harry and Meghan Are Wrong. Banning AI ‘Superintelligence’ Won’t Work.

Oct 22, 2025 14:20:00 -0400 by Martin Baccardax | #AI #Barron's Take

It is naive—and likely counterproductive—to try to pry the powers of AI out of the hands of tech giants and into some sort of Luddite-adjacent purgatory. Above, playing paper, scissors, rock with BERTI the robot at the Science Museum, in London. (Shaun Curry / AFP / Getty Images)

Pandora opened a box. Aladdin rubbed a lamp. Turing devised a test.

The last of the three events might be the only one that actually took place, but each of the narratives reflects a concern with unleashing forces beyond human control and the futility of trying to reverse their release.

A diverse group of public figures, ranging from Apple co-founder Steve Wozniak to Prince Harry and Meghan Markle to former White House advisor Steve Bannon, signed a petition Wednesday that effectively seeks to do what Pandora and Aladdin couldn’t do, and what mathematician and code breaker Alan Turing likely wouldn’t want to do: put the problems that AI has created back into some sort of airtight vault.

“We call for a prohibition on the development of superintelligence,” the nonprofit Future of Life Institute (FLI) said in a statement signed by more than 800 scientists, celebrities, and political and religious leaders, at least until there is “broad scientific consensus that it will be done safely and controllably” and with “strong public buy-in.”

Good luck with that.

Aside from the fact that “superintelligence” is as ambiguous as “strong public buy-in,” the notion that either can be used as levers with which to pry the powers of AI out of the hands of tech giants and into some sort of Luddite-adjacent purgatory is breathtakingly naive—and likely counterproductive.

Government efforts to control global capital markets have largely resulted in fewer listed companies and the proliferation of a shadow banking system that took a massive amount of risk out of the public eye and transferred it into the opaque world of private finance—with predictably disastrous results.

AI may ultimately prove to be a threat to the current economic order. And “superintelligence,” which the FLI defines as technology that can “significantly outperform all humans on essentially all cognitive tasks,” may alter it further. But economies and markets will adapt and create new sectors and jobs that we can’t yet imagine.

Industrial automation in the 1970s, for example, was considered the death knell for American manufacturing that would lead to waves of unemployment and the loss of U.S. economic leadership. What actually occurred was the rapid and historic expansion of the services sector and three decades of growth in labor-force participation, which cemented America’s place as the world’s largest and most important economy.

That’s been true in the tech world, as well.

President Lyndon Johnson’s effort to use the country’s developing expertise in computing to create a “National Data Bank” was shot down by privacy advocates and a reluctant Congress in 1965.

As lawmakers took their eyes off the ball, and advances in computers and digital communication made it easier to find, retrieve, and share information electronically, personal data became an effective form of currency for Big Tech, the exchange of which ultimately led to an even more damaging erosion of personal privacy.

Retired tech titans and social media luminaries can no more control the development of AI by banning it than the Catholic Church could undo the fact that Earth revolves around the sun by imprisoning Galileo.

Turing said that a computer would deserve to be called intelligent only “if it could deceive a human into believing that it was human.”

The best way to prevent AI from achieving that, it would seem, is by ensuring that it is developed in the full glare of the public, not by trying to prohibit its development outright.

Write to Martin Baccardax at martin.baccardax@barrons.com