“War Games” IRL

Washington, D.C. – Senator Edward J. Markey (D-Mass.) and Representatives Ted W. Lieu (CA-36), Don Beyer (VA-08), and Ken Buck (CO-04) introduced the bipartisan and bicameral Block Nuclear Launch by Autonomous Artificial Intelligence Act, legislation to safeguard the nuclear command-and-control process from any future policy change that would allow AI to make nuclear launch decisions.

This proposed legislation raises so many critically important questions that it's hard to know where to start. First and foremost, in a nuclear conflict our adversaries will use AI to attack. Removing our ability to apply the very latest technology to our defense (which is often a good offense) seems wrong on its face. Even if you think the ban is a good idea – which it may be – there are no rules when it comes to deploying nuclear weapons, which are purpose-built to destroy everything and kill everyone in the target zone (on the scale of cities). Exactly which "human" or "ethical" rule(s) does that follow?

In my Sunday essay, Default to Distrust: A New Paradigm for Engaging with Generative AI Models, I explore the idea that as the output from large language models (LLMs) and generative AI becomes increasingly indistinguishable from that produced by humans, it's time to consider a new paradigm: "default to distrust." That is essentially what Senator Markey and his colleagues are suggesting. They don't want us to be "fooled" into a nuclear conflict by an AI hallucination. Based on what people have been reading – and the ignorant (sensationalist) reporting surrounding generative AI – it's not an irrational thought.

Two things:

1) Go deep. Learn about generative AI for yourself. Start with my free online course, Generative AI for Executives, but don't stop there. There is a wealth of academic and commercial writing about how these tools work. Become an expert.

2) Stop talking anecdotally about ChatGPT and start talking to experts about what it can and cannot do. To get the most out of these tools, you need to understand their potential and their limitations. This knowledge comes from talking to experts and implementing workflows and processes that utilize the very latest tech.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.

About Shelly Palmer

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He's a bestselling author, and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.
