It’s Time to Build AI Weapons

OpenAI’s board, these wannabe Oppenheimers, were revealed to be — as Logan Roy of “Succession” would put it — “not serious people.”


The past week has been a wild, chaotic, “Succession”-level drama for artificial intelligence development. This anarchy did not arise from a technical breakthrough, radical product launch or computing failure. Rather, the fault lay with all-too-human operators.

To summarize events: on Friday afternoon, November 17, OpenAI announced that the board had fired Sam Altman, its CEO and co-founder. Mr. Altman had taken the company from aspirational non-profit to $80-billion ultra-unicorn, but — as far as we understand from its muddy, confused communications — the board wanted OpenAI to go slowly and carefully in managing its potent technology. It feared Mr. Altman was being too reckless and aggressive in his rollout of new products, and for that, he was sacked.

Then nearly all of the company’s employees threatened to resign; the board hired two interim CEOs in the space of four days; Microsoft, which owns 49 percent of OpenAI, offered Mr. Altman a job, sending its share price soaring, and said any OpenAI employee could join him; internal workplace dynamics were communicated through coordinated love-heart emoji quote-tweets; and, this morning, OpenAI announced that its board had totally capitulated and Mr. Altman would be returning as CEO.

The manic fever dream of a tech news cycle had ended, achieving little except deeply discrediting the most notable people in the AI industry.

The wannabe Oppenheimers on OpenAI’s board were exposed to be — as Logan Roy of “Succession” would put it — “not serious people.” On Sunday, The Atlantic reported that, at a 2023 leadership offsite, board member Ilya Sutskever commissioned and burned a wooden effigy representing unaligned AI — AI that doesn’t follow human objectives and interests. Mr. Sutskever, by all accounts a brilliant software engineer, should be respected in his domain of expertise — building AI systems — but should be taken a lot less seriously on other topics, including how AI should be used.

That holds when discussing AI’s most contentious use case: weaponry. Among the “accelerationists,” “decelerationists,” doomers and dreamers, those fearful that AI will destroy the world and those shaking with excitement, the loudest, most notable voices in the AI community — and in the tech space more broadly — are almost completely united in insisting that their technology must never be used in weaponry.

If the defense industry had its way, the best minds in tech would be using deep learning systems to improve our detection software, sharpen the targeting precision of drone strikes, and find and track targets in dense, low-visibility environments. But Silicon Valley engineers — who are happy to spend their days building privacy-invasive ad systems, new social media tools and expensive, energy-hogging gimmicks — are outraged when asked to contribute to the national defense that secures their freedom.

In 2018, Google employees protested their company’s participation in the Pentagon’s “Project Maven,” which proposed using AI to identify buildings and items “of interest” in drone footage. The chief executive, Sundar Pichai, folded, writing a servile blog entry that June entitled “AI at Google: Our Principles,” promising that his company would not develop AI for use in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” The same year, Microsoft employees similarly complained about the proposed $10 billion Joint Enterprise Defense Infrastructure project, under which they would build cloud services for military applications of AI, and the next year they demanded that Microsoft stop making HoloLens headsets for the U.S. Army.

The idea that AI weaponry is unacceptably dangerous has even seeped into the White House. Before last week’s meeting between President Joe Biden and the Communist Chinese dictator, Xi Jinping, the South China Morning Post — a Communist Chinese state paper — reported that the two leaders would sign a pledge limiting the use of AI systems in weaponry. No such pledge materialized, but the topic was a point of discussion between the two leaders, and the mere existence of the rumor suggests how dominant this view has become.

Among serious and unserious tech people, the risk of using AI in weapons is the one point almost everyone agrees upon. But they are gravely, dangerously wrong.

Many working in the field compare building AI to the splitting of the atom: a tool that could save or burn down the world. It’s a more than slightly self-serving analogy — one that strokes technologists’ egos while incentivizing regulation to strangle new, upstart competition — but even were you to grant it, it wouldn’t support the argument against weaponization.

Contrary to conventional wisdom, the existential risk of nuclear weapons does not come from their power or from nuclear war itself. Rather, the risks fall into three categories:

  1. Accidental use. The chief example is the 1961 Goldsboro incident, when two 3.8-megaton nuclear bombs fell from a B-52 that broke apart over North Carolina. It was only luck that stopped one of them — which was semi-armed when it hit the ground — from going off.
  2. Misuse based on faulty information. The chief example here is the 1983 Soviet false alarm incident, when computer readouts showed America had fired multiple missiles at the Soviet Union, an “attack” against which Soviet protocol required a retaliatory nuclear response. However, the duty officer, Stanislav Petrov, refused to pass the information up the chain of command, dismissing it as a false alarm. His sound insolence may have averted our destruction.
  3. Use of nuclear weapons by our enemies. 

In each case, better, more advanced, more secure technology reduces these risks. It makes weapons less likely to be launched by accident or through misuse, and harder for our enemies to steal. The same is true for AI weapons systems.

However, though I’ve granted that both AI systems and nuclear warheads are powerful, in most other ways they are radically dissimilar. Nukes are a blunt instrument — so potent they can instantly dematerialize large areas — whereas AI simply improves upon the precision, accuracy and effectiveness of existing systems. It could allow drones to fly faster and more efficiently, hit targets more consistently, and do so with fewer civilian casualties, less environmental damage and lower operational expenses.

And AI tools, no matter how sophisticated, are just tools. They are programs designed to achieve the ends we ask of them, and they can only do what we allow them to do. The fear-mongering example is that AI weaponry could pull the trigger without any human authorization, that we could experience a scenario akin to that in James Cameron’s 1984 film “The Terminator.” And though we may get to a stage where the technology is so advanced that we would choose to let it push the nuclear button, so to speak, that would not be the default but a deliberate choice.

Most crucially, though, even if Silicon Valley were to convince the White House to abandon all use of AI weaponry, Beijing would not follow suit, nor would Moscow.

We should be concerned about our enemies having AI weapons. But the best way to keep ourselves safe is to have better tech than they do. There are some awesome small start-ups building cool drone systems, and if Ukraine wins, we should expect a flurry of Kyiv-based companies to join the fray. But Palmer Luckey’s Anduril is the only big-budget, Silicon Valley unicorn-type startup working on AI weapons. We need to do more.

So, to those developing AI, I implore you: stop trying to build the best chatbot on the market. Instead, ensure that America has the best, smartest weapons on the planet. 

