Why AI Will Never Become an Evil Overlord

When ChatGPT was released in 2022, many speculated it might eventually wake up and become self-aware. But fears like this are rooted more in Hollywood fiction than in the reality of how AI actually works. Let me explain why AI taking over the world is not just improbable—it's impossible.

Why People Are Scared

If you're a Gen-Xer or just an action movie lover, you've probably seen the 1984 classic The Terminator. In its story, an AI military defense system called Skynet becomes self-aware and starts a war against humanity by launching nuclear warheads.

While movies like this are entertaining Hollywood sci-fi, they feed a collective fear that an intelligent computer system will one day wake up and take over the world. A general lack of understanding about how AI works stokes that uneasiness into something closer to panic.

To make matters worse, we rarely hear clear explanations of how AI models learn or why they behave the way they do. This leaves an underlying feeling that AI is unpredictable. It isn't. What feels like unpredictability is really just complexity: algorithms processing vast amounts of data, still following programmed instructions and certainly not acting on their own.

To illustrate, let's take it one step further and look at how AI actually learns. If a model is trained to identify pine trees using only ten images of Christmas trees, it won't be able to recognize a pine tree without ornaments. But after thousands of images of pine trees are fed through the model, it can identify all kinds of pine trees, with or without tinsel, by picking up on visual patterns.
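To make that concrete, here is a minimal sketch of what such a training loop looks like in code. It uses PyTorch (one common framework among several), and random tensors stand in for real labeled photos; the tiny network and the numbers are illustrative only, not a recipe.

    # A toy "pine tree vs. not a pine tree" classifier.
    # Random tensors stand in for real labeled photos; with genuine data,
    # this loop is what "feeding thousands of images through the model" means.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),  # detect low-level visual patterns
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),                     # summarize the whole image
        nn.Flatten(),
        nn.Linear(16, 2),                            # two scores: pine tree / not
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):                      # each step shows the model one batch
        images = torch.randn(32, 3, 64, 64)      # placeholder for 32 labeled photos
        labels = torch.randint(0, 2, (32,))      # placeholder labels: 1 = pine tree
        loss = loss_fn(model(images), labels)    # how wrong were the predictions?
        optimizer.zero_grad()
        loss.backward()                          # work out how to adjust the weights
        optimizer.step()                         # nudge the weights to be less wrong

Notice what's happening: numeric weights are nudged to reduce a prediction error, over and over. There is no step where the model reflects, wants, or chooses; it only gets better at the one task it was set up for.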

So yes, the AI system appears to grow more and more intelligent over time, but keep in mind it's still only doing exactly what it was programmed to do: identify pine trees. AI can become a pine tree expert, but it will never ponder the meaning of Christmas or decide it prefers a spruce over a Douglas fir.

Leveling up its pine tree skills is worlds away from becoming self-aware.


AI Can't & Won't Make Decisions on Its Own

No matter how sophisticated an AI system becomes, self-actualization isn't possible, because computers lack the basic traits that drive decision making:

  • Computers and AI systems lack consciousness.
  • They lack intrinsic motivation.
  • Their behavior depends solely on programming logic.
  • They have no identity.
  • They lack personal values and ethics.

This lack of inherent human characteristics prevents computers from making decisions on their own: they don't have hopes, dreams, or desires. So even if they were smart enough to make independent decisions, they wouldn't care to.


፨ ፨ ፨

With over 25 years of technical industry experience, Sarah Writtenhouse shares her insights about job market trends, emerging technology, and software development. For more, find her on Medium.com and subscribe to receive new articles in your inbox.
