It’s Not Sci-Fi, It’s Capitalism
When covering technology for NBC News, Jacob and his team had a rule: “no dark typists and no ones and zeros.” Instead of leaning on tired stereotypes of shadowy hackers, they focused on technology’s real-world impact. That approach revealed that the true dangers of AI are rarely rogue robots; they are flawed human systems, particularly democracy and capitalism.
“You’re either talking about people’s free and open access to a shared pool of accurate information… or you’re talking about the commercial pressures that push people to adopt innovations in sometimes rushed and unethical ways,” Jacob says.
He points to the debate over self-driving cars. A typical story might celebrate the novelty of a driverless ride. Jacob’s approach asks a more fundamental question: Why are we trying to do away with taxi drivers? This reveals a deeper truth: the technology is often a solution in search of a problem, driven by market pressures that overlook profound social consequences.
How AI Influences Human Decision-Making
Jacob’s central thesis, articulated in his book The Loop, is that AI-powered systems are becoming dangerously adept at decoding and exploiting our ancient, instinctive decision-making circuits. This creates a feedback loop:
- Snap Judgments: Our brains use ancient, pattern-recognizing shortcuts to make quick decisions.
- Behavioral Analysis: AI systems analyze our behavior to predict the choices we’re most likely to make.
- Limited Options: These systems then present us with a narrow set of options designed to guide us toward a predictable, profitable outcome.
The result is a “downward spiral of shrinking choices.” As we hand over more of our decision-making to these convenient systems, we risk losing the very ability to choose for ourselves. Jacob paints a stark picture of the end state: “We’re just drinking weird smoothies for our dinner, drinking Soylent and wearing beige, and we don’t know how to talk to our spouses anymore.” The efficiency of the algorithm strips away the messy, inefficient, but essential parts of being human.
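Jacob describes the loop in human terms, but its mechanics are easy to see in a toy simulation. The sketch below is purely illustrative: the function name run_the_loop, the weighting scheme, and every number are assumptions made for demonstration, not anything drawn from The Loop. It models a recommender that watches which options a user picks, predicts their favorites, and rebuilds each menu around those predictions.

```python
import random
from collections import Counter

def run_the_loop(catalog, rounds=30, menu_size=5, seed=0):
    """Toy version of the three-step loop above: snap judgments,
    behavioral analysis, and a deliberately limited menu."""
    rng = random.Random(seed)
    history = Counter()                    # what the system has observed so far
    menu = rng.sample(catalog, menu_size)  # the first menu is random

    for _ in range(rounds):
        # 1. Snap judgment: the user picks quickly, favoring familiar options.
        weights = [1 + 2 * history[item] for item in menu]
        choice = rng.choices(menu, weights=weights, k=1)[0]
        history[choice] += 1

        # 2. Behavioral analysis: predict the options the user is most likely to pick.
        predicted = [item for item, _ in history.most_common(menu_size - 1)]

        # 3. Limited options: rebuild the menu around those predictions,
        #    leaving only the leftover slots for anything new.
        fillers = [c for c in catalog if c not in predicted]
        menu = predicted + rng.sample(fillers, menu_size - len(predicted))

    return history

catalog = [f"option_{i}" for i in range(100)]
history = run_the_loop(catalog)
print("picks per option:", history.most_common())
```

Because predicted favorites always reappear and carry extra weight, a handful of options quickly absorbs most of the picks even though the catalog holds a hundred: a miniature version of the “downward spiral of shrinking choices.”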
The Case for Inefficiency
The primary allure of AI is convenience. But Jacob cautions against embracing it uncritically, sharing a powerful concept from a federal judge: “weak perfection.”
Weak perfection is the idea that you could, theoretically, make a life-altering decision—like entering a guilty plea—as easy as swiping left or right on a phone. It’s perfectly convenient but disastrously weak. The justice system, in contrast, is designed to be deliberately inefficient. It forces you to show up in person, consult with counsel, and engage your higher, more rational cognitive functions.
“Our creativity, our rationality, our caution, our sense of equality—all of that is exhausting to engage,” Jacob argues. “There are certain human functions that we’re going to want to keep full of friction so that we keep engaging our brains in it.”
This is the core ethical challenge: society must deliberately preserve its inefficiencies to protect the best parts of being human.
How to Break the Loop
While regulation and cultural pushback (like the younger generation adopting “clanker” as a derisive term for AI and robots) will play a role, Jacob insists that the responsibility lies with the leaders implementing this technology today.
“This innovation is absolutely in the hands of you in the audience,” he says. “You are experimenting with live ammunition, and it’s important to understand the responsibility that you carry.”
For companies grappling with AI implementation, he offers a practical starting point he calls the “Super Villain test.”
He asks leadership teams: “If you became hell-bent on doing something bad with what you have created here… what would it look like?” By identifying the potential for misuse, companies can build processes to prevent it, protecting both society and their own reputation.
“Like Adolescents With a Car”
Ultimately, the future of AI isn’t about the technology itself, but about the choices we make. Will we use it to amplify the best, most thoughtful parts of our humanity? Or will we allow it to cater to our most primitive, easily manipulated instincts? As Jacob concludes, we are like “adolescents with a car right now,” and it’s time we learned how to drive.
Want More From Jacob?
Watch his Lavin Voices podcast episode below, and get in touch with us to learn more about him and our other top AI speakers!