The Ethics of Artificial Intelligence: Tech Journalist Jacob Ward on the Dangers of AI and Reclaiming Human Choice

The conversation around artificial intelligence is often framed as a technical one: a whirlwind of large language models, neural networks, and processing power. But journalist and author Jacob Ward argues that this is the wrong conversation. The real story isn’t about the ones and zeros. It’s about us.

Jacob is a Lavin Exclusive Speaker whose book The Loop predicted the current commercial AI mania nearly a year before ChatGPT’s debut. A former NBC technology correspondent who’s reported for The New York Times, WIRED, and many other outlets, he’s spent his career using technology as a lens to understand humanity. “Over and over again,” he explains, “technology has been a really interesting way for me to unpack what the pressures and predictions are for humanity.” From this vantage point, the ethics of artificial intelligence aren’t a far-off philosophical debate. They are an urgent, practical question about the choices we’re making right now—and the ones we’ll need to make next.

Jacob sat down with us on our podcast, Lavin Voices, to share what everyone should know about the ethics of AI and how to reclaim human autonomy in an AI-driven world. Find some of his insights below, and contact us to book him to speak at your event!

It’s Not Sci-Fi, It’s Capitalism

When covering technology for NBC News, Jacob and his team had a rule: “no dark typists and no ones and zeros.” Instead of leaning on tired stereotypes of shadowy hackers, they focused on technology’s real-world impact. This approach revealed that the true dangers of AI and technology are rarely about rogue robots; they stem from flawed human systems, particularly democracy and capitalism.

“You’re either talking about people’s free and open access to a shared pool of accurate information… or you’re talking about the commercial pressures that push people to adopt innovations in sometimes rushed and unethical ways,” Jacob says.

He points to the debate over self-driving cars. A typical story might celebrate the novelty of a driverless ride. Jacob’s approach asks a more fundamental question: Why are we trying to do away with taxi drivers? This reveals a deeper truth: the technology is often a solution in search of a problem, driven by market pressures that overlook profound social consequences.

How AI Influences Human Decision-Making

Jacob’s central thesis, articulated in his book The Loop, is that AI-powered systems are becoming dangerously adept at decoding and exploiting our ancient, instinctive decision-making circuits. This creates a feedback loop:

  1. Snap Judgments: Our brains rely on evolved, pattern-recognizing shortcuts to make quick decisions.
  2. Behavioral Analysis: AI systems analyze our behavior to predict the choices we’re most likely to make.
  3. Limited Options: These systems then present us with a narrow set of options designed to guide us toward a predictable, profitable outcome.

The result is a “downward spiral of shrinking choices.” As we hand over more of our decision-making to these convenient systems, we risk losing the very ability to choose for ourselves. Jacob paints a stark picture of the end state: “We’re just drinking weird smoothies for our dinner, drinking Soylent and wearing beige, and we don’t know how to talk to our spouses anymore.” The efficiency of the algorithm strips away the messy, inefficient, but essential parts of being human.

The Case for Inefficiency

The primary allure of AI is convenience. But Jacob cautions against embracing it uncritically, sharing a powerful concept from a federal judge: “weak perfection.”

Weak perfection is the idea that you could, theoretically, make a life-altering decision—like entering a guilty plea—as easy as swiping left or right on a phone. It’s perfectly convenient but disastrously weak. The justice system, in contrast, is designed to be deliberately inefficient. It forces you to show up in person, consult with counsel, and engage your higher, more rational cognitive functions.

“Our creativity, our rationality, our caution, our sense of equality—all of that is exhausting to engage,” Jacob argues. “There are certain human functions that we’re going to want to keep full of friction so that we keep engaging our brains in it.”

This is the core ethical challenge: society must deliberately preserve its inefficiencies to protect the best parts of being human.

How to Break the Loop

While regulation and cultural pushback (like the younger generation’s term “clanker” for people over-reliant on AI) will play a role, Jacob insists that the responsibility lies with the leaders implementing this technology today.

“This innovation is absolutely in the hands of you in the audience,” he says. “You are experimenting with live ammunition, and it’s important to understand the responsibility that you carry.”

For companies grappling with AI implementation, he offers a practical starting point he calls the “Super Villain test.”

He asks leadership teams: “If you became hell-bent on doing something bad with what you have created here… what would it look like?” By identifying the potential for misuse, companies can build processes to prevent it, protecting both society and their own reputation.

“Like Adolescents With a Car”

Ultimately, the future of AI isn’t about the technology itself, but about the choices we make. Will we use it to amplify the best, most thoughtful parts of our humanity? Or will we allow it to cater to our most primitive, easily manipulated instincts? As Jacob concludes, we are like “adolescents with a car right now,” and it’s time we learned how to drive.

Want More From Jacob?

Watch his Lavin Voices podcast episode below, and get in touch with us to learn more about him and our other top AI speakers! https://www.youtube.com/watch?v=MkwIfP7HLe8
