
Why Elon Musk Is Right About Needing to Regulate AI Now

Commentary: It's not for the reason you think.


Something needs to be done about artificial intelligence before it's too late.

You probably heard some variation of that statement dozens of times in the years before ChatGPT. You've probably heard it a hundred times in the months since ChatGPT. 

Something needs to be done about AI or else. If the development of artificial intelligence continues at its current pace, the worry goes, catastrophe is likely to follow. Be it a tsunami of misinformation, millions of jobs lost or the apocalypse itself, the AI revolution carries enormous risks. 

In March an open letter called for all labs to pause development on AI for six months, during which time the government could work on sensible regulation. It was signed by Elon Musk, Apple co-founder Steve Wozniak and, among other tech and academic luminaries, Yuval Noah Harari, the author of the book Sapiens.  

"Over the past couple of years, new AI tools have emerged that threaten the survival of human civilization," Harari wrote last month. "AI has gained some remarkable abilities to manipulate and generate language … AI has thereby hacked the operating system of our civilization." 

Some alarming words from the guy who literally wrote the book on the human race. 

The open letter argues that now's the time to put guardrails in place because AI will soon be too intelligent to constrain. Or in the words of Musk, if we "only put in regulations after something terrible has happened, it may be too late to actually put the regulations in place. The AI may be in control at that point."

But there is another reason why lawmakers should jump on AI now. History tells us there's a limited amount of time where it's politically possible to regulate AI. 

The problem is, as always, the culture war; the way in which many important issues are co-opted and made partisan by politicians and online grifters hell bent on weaponizing the sort of tribalism that's been rendered visible every day on social media platforms like Twitter. If AI becomes part of the culture war, thoughtful and extensive regulation will be much harder to achieve. 

The process of politicization may have already begun. That Musk quote above? He gave it during an appearance on Tucker Carlson's show, back when Carlson still had a show. This is how the former Fox host introduced one of the Musk segments:

"Longer term, AI may become autonomous and take over the world. But in the short term, it's being used by politicians to control what you think, to end your independent judgment and end democracy on the eve of a presidential election." 

Elon Musk is one of many tech luminaries calling for a six-month pause in AI development. (Bloomberg/Getty)

Bad precedents 

The unchecked spread of AI could be a prelude to disaster. But if there's one thing US lawmakers have proven themselves adept at, it's courting disaster for political gain. This is often done via fearmongering. Casting AI as a plot to end democracy, as Carlson did, is one of many ways this could happen. Once the blood-boiling talking points are devised, tempers can prove difficult to calm. 

You don't have to look far to see examples of pathological partisanship. As I write these words, Congress is playing a game of chicken over raising the debt ceiling. GOP leaders are refusing to authorize the government to borrow money to pay its bills unless the White House agrees to cut green-energy incentives, rescind Biden's student loan forgiveness initiative and reduce spending on Social Security. 

It's an example of politics corrupting what should be a simple process. Raising the debt ceiling is typically an administrative ritual, but in recent decades it has become a political football. And there are real risks attached: If neither side blinks and the ceiling isn't raised, millions would lose access to Medicare, the military wouldn't get paid and global markets would be disrupted by the US failing to pay its debt obligations. 

Again, this should be easy -- far easier than regulating AI. But it demonstrates how even the clearest objectives can be corrupted by politics. 

Climate change, and the persistent resistance from governments around the world to adequately tackle it, is perhaps the best example of the culture war stalling action. Compromise becomes difficult when one side says climate change is apocalyptic while the other maintains it's overblown or not real. A similar divide would make regulating AI impossible or, at best, slow. Too slow. 

Even on issues where there is a bipartisan consensus that something should be done, Democrats and Republicans often run in opposing directions. Practically everyone agrees that Big Tech should be regulated. Democrats fret that hugely profitable tech companies don't protect data enough and bully smaller competitors. Republicans cry foul over censorship and claim Silicon Valley elites are eroding free speech. No major bill cracking down on Big Tech has passed, ever. 

The same inertia could plague AI regulation if the parties, despite agreeing that something should be done, prescribe different solutions. 

First answers for AI regulation 

Comprehensive regulations tackling the possible externalities of AI will take years to develop. But there are some quick-and-easy rules that could and should be applied now. These are called for by the nearly 28,000 people who signed the Musk-backed open letter. 

First, regulation should enforce more transparency on the part of AI developers. That would mean transparency about when AI is being used, as in the case of companies using AI algorithms to sort through job or rental applications. California is already tackling this, with a bill that would require companies to notify people when AI-powered algorithms are used to make decisions on a company's behalf.  

We also need companies like OpenAI, which is behind ChatGPT, to make available to researchers the data on which chatbots are trained. Copyright claims are likely to abound in the AI age. (How do we pay news publications for the stories chatbots like GPT base their answers on, or the photographers whose images AI-art generators use as inputs?) More transparency about what data AI systems are trained on will help make those disputes coherent.

Perhaps most importantly, AI should declare that it is AI. 

One big worry about artificial intelligence is its ability to sound convincing and persuasive. Dangerous qualities in the wrong hands. Prior to the 2016 elections, Russia used fake social media accounts in an attempt to sow discord around contentious issues like immigration and racial tension. If powered by sophisticated AI, such attempts at rabble rousing would be more effective and harder to spot. 

In the same way that Instagram forces influencers to tag #ad when they're paid for a post, Facebook and Twitter screeds should have to declare themselves AI. Deepfake videos should be tagged in a way that makes them recognizable as products of artificial intelligence. 

A Democratic congresswoman from New York, Yvette Clarke, proposed such measures in a bill submitted earlier this month. It came in response to the Republican National Committee releasing an anti-Joe Biden ad created with AI imagery, auguring more AI malarkey to come as the 2024 elections approach. 

AI is not yet in the culture war like climate change or even Big Tech companies. But how long will that be the case?

Editors' note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.
