My Mom Is Scared of AI. She’s Scared of the Wrong Thing.

The immediate danger isn’t AI itself. It’s humans using AI as a tool for unchecked power.

Aroet Hale is in a Monday video chat. On one screen is his mom. On another, his younger brother, calling on his one day off a week from his mission in Washington state. They’re just catching up.

And then, somehow, the conversation drifts from family news to the future of the human race.

The topic is AI.

And Aroet’s mom says the thing that millions of people are feeling.

“I’m scared,” she says. It’s not a detailed, itemized fear. It’s a general, atmospheric dread. She doesn’t know a lot about AI, she admits, and she’s scared of what it’s going to do.

She’s not alone. Half of Americans are more concerned than excited about the increased use of AI in daily life. When asked to weigh the technology’s impact, 57% rate the societal risks of AI as high, while only 25% rate its benefits as high.

[Figure: Bar chart, “50% of Americans are more concerned than excited about the increased use of AI in daily life.” Survey data from 2021 to 2025 shows rising concern, with 50% of respondents in 2023, 2024, and 2025 more concerned than excited. Source: Pew Research Center.]
[Figure: Bar chart, “Majority of Americans rate the risks of AI for society as high; fewer rate the benefits of AI as high.” 57% rate AI’s societal risks as Very High/High versus 25% for its benefits. Source: Pew Research Center, June 2023.]

For most people, AI is a force of nature—not a tool. It’s a giant, glowing spaceship that’s appeared in the sky, just like in the movie Independence Day. People are standing on rooftops, looking up at this immense, silent, unknowable thing. They don’t know what it is or what it wants. They feel a threat because they don’t understand it. It’s the fear of the unknown, of a loss of control. 61% of Americans wish they had more control over how AI is used in their life, while 57% feel they have “not too much or no control” at all.

The simple explanation for this fear is a lack of understanding. Yet the data reveals a more complex picture. A YouGov poll that quantified the fear of extinction also found that among people who self-identify as knowing “a great deal about AI,” concern about it causing the end of the human race jumps to 49%.

[Figure: YouGov bar chart of concern about AI among U.S. adults, segmented by self-reported knowledge of AI. Those who know a great deal about AI are the most likely to be “very concerned” (23%). Data collected November 27–December 3, 2024.]

This presents a paradox. If fear were simply a product of ignorance, expertise should be its antidote. Instead, for a significant portion of those closest to the technology, deeper knowledge seems to amplify the dread. This suggests that the narrative of existential risk is not just a product of confusion. It is a powerful idea with deep roots inside the technical community itself, a story being told by the very people building the machine and those who use it. It’s a story we’re all telling ourselves. And it deserves closer scrutiny.

Why “Existential Risk” Is the Wrong Way to Think About AI

Aroet isn’t scared of AI.

Six months before this call, he might have been. He had the same concerns many of us have—that AI would erode human connection, that it would hollow out creativity. But something had changed. He’d taken a job at OpenTeams, and in doing so, he’d begun working with the machine itself. After months spent helping to build transparent language models, he discovered AI wasn’t Independence Day. It was a complicated tool, but one built and controlled by people.

What he explains to his mom on that Monday call is the gap between what people fear about AI and what the real problem is.

His mom, like so many of us, is worried about the existential risk—that AI will wake up, become Skynet, and decide humanity is obsolete. This is the humans-vs-machine narrative. Aroet gently pushes back, telling her this is a distraction. The far more immediate danger isn’t the AI becoming a bad actor; it’s humans using AI as a tool for unchecked power. It’s not the machine that’s the problem; it’s the person at the keyboard.

Data Transparency Makes AI Safer

The resource fueling the issue is data. Good data builds good AI models. That’s why the greatest threat isn’t rogue AI models, but rather opaque systems being built with our information.

A shadow economy of data brokers is treating humanity’s collective knowledge as raw material to be extracted and gatekept. This is not a niche business. The global data broker market was valued at $257.2 billion in 2023 and is projected to reach $441.4 billion by 2032. This data is fed into proprietary, “black box” AI models. These systems, whose internal logic is often inscrutable even to their creators, can then be used to make decisions that affect our lives, from loan applications to job prospects, entrenching biases and centralizing immense power. These companies offer a solution every person and business will eventually need, and they are trying to convince the world that the price of admission is your data, your privacy, and your control.

They want you to think there’s no other option. But there is.

This is why Aroet isn’t scared anymore. He understands the fight is between two competing philosophies of technological development. The antidote to unaccountable actors building AI with obscured data is a commitment to building this technology transparently, with data you can own and models you can control. This approach, rooted in open source principles, is not without risk. Proponents of closed systems rightly argue that releasing powerful models publicly could allow bad actors to create disinformation or malware more easily. Yet this risk must be weighed against the systemic danger of allowing a handful of corporations to operate unaccountable, privately owned systems that shape our society in secret. The transparency of the open approach allows for public scrutiny, independent auditing, and a true distribution of power that is ultimately safer.

This is not a fringe movement fighting an uphill battle. Hugging Face, an open source platform, has become the central hub for AI. It hosts over 2 million models and is actively used by more than 50,000 organizations, including Meta, Google, and Microsoft.

He didn’t completely erase his mom’s fear in one conversation. How could he? You can’t undo a feeling that’s in the air all around you. But he did something more important: he gave her fear the right name.

And this is the choice we all face. We have been told a story about the future of AI. It’s a simple, terrifying, and convenient story, because it asks nothing of us. It turns us all into spectators instead of actors.

That story is a distraction. The real story is about control. It’s a quiet, slow-motion battle over the plumbing of our new world. Are we going to let this new power be centralized and privatized? Or are we going to demand that it be a tool that empowers everyone?

Open Source Is the Answer

AI is going to become as important to day-to-day life as plumbing. The problem is not that the plumbing exists; the problem is when one company controls all of it and gets to decide, arbitrarily, who gets fresh water and who doesn’t.

This is where Aroet’s work at OpenTeams becomes so interesting. OpenTeams designs, builds, and maintains the plumbing, but it doesn’t try to own it. OpenTeams wants every customer to own their own AI. They build infrastructure that lets organizations run AI systems with the same transparency one would expect from public institutions. It’s a fundamentally different philosophy. Anyone can check their work. Anyone can own it.

That’s the genius of open source. Power comes from radical transparency because it distributes the ability to act. If a bank is hiding a bias in a proprietary credit-scoring algorithm, how would you ever know? But if that model is open source, independent auditors can test it, developers can patch it, and regulators can hold it accountable.
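The auditing argument can be made concrete. Here is a minimal sketch of what an independent audit looks like when a scoring model is open: the rule and the applicant data below are invented for illustration, but the point stands. Because the logic is inspectable, an auditor can see exactly which factor penalizes one group and measure its effect on approval rates—something impossible when a closed API returns only a number.

```python
# Hypothetical illustration: auditing an open scoring rule for bias.
# The scoring formula and applicant records are invented for this sketch.

def score(income, debt, zip_risk):
    """A transparent (open) credit-scoring rule anyone can read."""
    return 0.5 * income - 0.3 * debt - 0.2 * zip_risk

applicants = [
    {"group": "A", "income": 70, "debt": 20, "zip_risk": 10},
    {"group": "A", "income": 55, "debt": 30, "zip_risk": 10},
    {"group": "B", "income": 70, "debt": 20, "zip_risk": 60},
    {"group": "B", "income": 55, "debt": 30, "zip_risk": 60},
]

THRESHOLD = 15  # approve applicants whose score exceeds this

def approval_rate(group):
    """Fraction of a group's applicants who clear the threshold."""
    members = [a for a in applicants if a["group"] == group]
    approved = [a for a in members
                if score(a["income"], a["debt"], a["zip_risk"]) > THRESHOLD]
    return len(approved) / len(members)

# Because the rule is open, an auditor can see that zip_risk acts as a
# proxy penalty and quantify the resulting gap in approval rates.
print("Group A approval rate:", approval_rate("A"))  # 1.0
print("Group B approval rate:", approval_rate("B"))  # 0.5
```

Against a black-box system, the auditor could only probe inputs and guess; with the open rule, the source of the disparity is visible in a single line of code.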

OpenTeams Gives You Ownership

OpenTeams’ philosophy is to give AI back to the people who use it. They call it AI Sovereignty. Instead of renting a closed system you can’t control, you own it. You inspect the code, you control the data, and you decide how the machine should behave.

And this is why Aroet isn’t scared anymore. He knows the fight isn’t about whether we should have AI. It’s about whether we will hand over the controls or insist on keeping them in our own hands.

The antidote to our fear is transparency. We shouldn’t be scared of AI; we should be scared of its misuse. That is a fear we can manage, because it puts the power back where it belongs: with people.

Aroet Hale

Aroet Hale (first name pronounced “eh-row”) is a Federal Account Manager at OpenTeams.
