What if Regulation Makes the AI Monopoly Worse?

By Bhaskar Chakravorti

Apart from being artificial intelligence’s breakout year, 2023 was also the year when the AI community splintered into rival tribes in the race to steer the technology’s development: accelerationists, doomers, and regulators.

By year’s end, it seemed as if the accelerationists had won. Power had consolidated around a handful of the largest Big Tech companies investing in the hottest start-ups; generative AI products were being rushed to market; and the doomers, with their dire warnings of AI risks, were in retreat. The regulators pursued the accelerationists with uncharacteristic agility, unveiling bold regulatory proposals and, with a year of many elections and an anticipated surge in AI-powered disinformation ahead, corralling bills to rush into law.

Ironically, though, the regulators may have put wind at the accelerationists’ backs: New regulations may inadvertently entrench the accelerationists’ market power.

How could regulators tasked with preserving the public interest take actions that make matters worse? Do we now need different regulations to rein in an even more powerful industry? Are there creative alternatives for safeguarding the public interest?

Consider, first, why the AI industry is already primed for concentration.