US, UK, and EU finally come together to prevent AI monopoly (catastrophic market failure, not the game)

As we've continued developing AI to boldly go where nobody has gone before and where many of us never wanted to go in the first place, we regular folk have had plenty to worry about, whether it's the potential for AI job takeovers or deepfake misinformation. One concern that's not been discussed so much, however, is the risk of an AI monopoly. Thankfully, the US, UK, and EU are on the ball enough to see this risk and are now coming together to prevent it.

Four governing bodies spanning the US, UK, and EU have signed a joint statement which, according to the UK government, “affirms commitment to unlock the opportunity, growth and innovation that AI technologies could provide with fair and open competition.”

That's the positive spin, anyway. But we all know the flipside to competition is monopoly, and it seems that this is precisely what these agencies are hoping to prevent. The four agencies in question—the CMA (UK Competition and Markets Authority), EC (European Commission), DoJ (US Department of Justice), and FTC (US Federal Trade Commission)—have a few specific monopolistic risks in mind.

One such risk that these governing bodies identify is the “concentrated control of key inputs,” such as chips, data centres, and specialist expertise. Now, it might be because I'm a PC Gamer, but when I hear this I think of Nvidia. When we have popular industry experts such as Jim Keller saying that “Nvidia is slowly becoming the IBM of the AI era,” we can't ignore that Nvidia is the closest thing we have to a monopoly (without actually being a monopoly) in the AI chip market.

However, as I argued when reporting on Keller's statement, monopolies rarely remain so. And if we throw in some pro-competitive aid of the kind this international statement signals? Well, things might just turn out okay after all.

But that's enough positivity, let's return to the doom and gloom. The joint statement also mentions the risk of “entrenching or extending market power in AI-related markets,” presumably referring to Big Tech companies that already have monopolies in certain areas.

Digitally generated image of a data server. (Image credit: Andriy Onufriyenko via Getty Images)

It goes on to mention the risk that such large firms would have “the ability to protect against AI-driven disruption, or harness it to their particular advantage, including through control of the channels of distribution of AI or AI-enabled services to people and businesses.” As an analogy, think Google's control over search, but apply this to the burgeoning AI industry.

The joint statement also mentions the risk of “arrangements involving key players”, i.e. good, old-fashioned market collusion. Which has me thinking, actually: although AI is new, aren't these market risks the same ones we've always faced, at any point in capitalism's history, in any industry?

The answer, I think, is a little yes and a little no. Yes, the risk of monopoly and the way it might manifest are the same as ever, but with AI the problem could occur much more quickly and on a much larger scale. At least, so we're supposed to believe if we buy into all the talk of the “next industrial revolution.”

I suppose the argument might go as follows: AI isn't just like any other technology; it's going to cut across and affect all industries to a much greater degree than anything since the industrial revolution. So whoever controls AI will control not just a market segment but the entire market. Furthermore, because AI supposedly improves itself at an exponential rate, this is going to come about faster than we can regulate it.


That's certainly a scary prospect, and now that I've put words to the thought I'm starting to think the CMA, EC, DoJ, and FTC are on the right track here. Let's just hope their talk of “fair dealing,” “interoperability,” and “choice” is backed up by action, because as it stands all we really have is a statement of principle—a valiant principle, but just a principle nonetheless.

Principles, and the serious thought required to come up with and enforce them, are what we direly need as AI continues along its seemingly inevitable path. It doesn't even seem like the big players in the tech industry are on the same page when it comes to AI and its role in the market. We can see this clearly in Elon Musk's lawsuit against OpenAI, in which he claims that the AI company was supposed to be working to help humanity rather than chase profits. Forget what AI companies are doing, is there even widespread agreement on what they should be doing?

Of course, none of this might matter if the AI market's built on a bubble that's bound to bust. We've already seen Sequoia analyst David Cahn (via Tom's Hardware) point out the stupendous amount of money the AI industry needs to accrue to essentially pay off its investment debts. 

Then again, I'm not sure a bubble of this size bursting would be much better than an AI monopoly. Both would suck. I'll leave you with that cheery thought.
