Welcome to Insights with Dr. D. I’m your host, Dr D, and today we’re tackling a subject that’s not just about technology—it’s about humanity, our purpose, and the choices we make in shaping the future.
Artificial intelligence—AI—has become one of the most defining topics of our time. It’s in our homes, our workplaces, and increasingly, in our minds. But why? Why did we create AI? Why is it rising now, and what does it reveal about the world we’re living in?
Today, we’re going to explore the current zeitgeist—the spirit of the age—and dive into the fascinating and sometimes troubling reasons behind humanity’s relationship with AI. We’ll look at its history, its moral dilemmas, and what it might mean for our future. So settle in, because this conversation is about to get real.
Let’s start with the question that frames this entire discussion: What is the current zeitgeist? What is the spirit of our age, and how does AI fit into it?
Right now, it feels as though we’re living in a paradox. On the one hand, we’re more connected than ever before, thanks to technology. On the other hand, many of us feel disconnected—from ourselves, from each other, and from the natural world.
AI has risen in this context, not just as a tool but as a response to our collective needs and fears. Some say AI is here to make life easier, to help us work smarter, faster, and harder. But is that the whole story? Or is it also a reflection of deeper issues—like our inability to self-actualize, as Maslow would put it?
Maslow’s hierarchy of needs tells us that once we’ve met our basic needs—food, shelter, safety—we start striving for higher levels of fulfillment: love, belonging, esteem, and finally, self-actualization. But what if we’ve stalled out? What if, instead of climbing higher, we’ve become stuck in the lower rungs, distracted by convenience, entertainment, and endless consumption?
In many ways, AI has become a stand-in for our unrealized potential. It promises to do what we can’t—or won’t—do for ourselves. But is that a promise we want to accept?
To understand where we are now, we need to look back. AI didn’t appear out of nowhere. It’s the result of a long history of human innovation, each step building on the one before it.
The calculator was one of the earliest tools that could be considered “artificial intelligence.” It didn’t think or learn, but it could solve problems faster than any human. There was also a checkers program, built in 1952, that actually did learn independently. These inventions were groundbreaking in their time, but they were also simple compared to the AI of today.
What’s interesting is that each of these tools was designed to make life easier. The calculator freed us from doing complex math by hand. And now, AI is taking things further—it’s not just a tool; it’s a collaborator, a partner, even a decision-maker. But as Carl Jung observed, technology isn’t making life simpler; it’s cramming our lives with so much work that we have no free time anymore, and we have lost touch with our spiritual nature. And he wrote this in 1941! Imagine if he saw what life was like now—he would be spinning in his grave!
So here’s the question: As we’ve advanced technologically, have we also advanced emotionally, spiritually, and morally? Or have we let our tools outpace our growth as people?
Let’s talk about why we create in the first place. Human beings are inherently creative. We build, we innovate, we dream. But let’s be honest—there’s also a little bit of laziness in our creativity.
Think about it: The wheel was invented to make transportation easier. The washing machine saved hours of manual labor. The internet gave us instant access to information. And AI? It’s the ultimate convenience.
But here’s the paradox: While laziness drives innovation, it can also lead to stagnation. If we rely too much on AI to do the thinking, the creating, and the problem-solving for us, what happens to our own abilities?
Are we creating tools to help us grow, or are we creating crutches that keep us from evolving? It’s a tough question, but one we need to face if we’re going to use AI responsibly.
Everyone talks about how Google made us all dumber. That’s a bit simplistic, but search does keep us from needing to retain information. Because—hey—we can just Google it again if we need to!
But we need to retain information, to learn, to grow – otherwise we are dead in the water. And do we really want AI to not only outsmart us but to do that while making decisions for us? It could be a dystopian nightmare come true!
Let’s take a step back and think about what it means to learn. Learning is one of the most defining characteristics of being human. It’s how we grow, adapt, and innovate. There are some hallmarks of learning that have remained constant throughout human history, no matter how advanced our technology becomes.
The first hallmark of learning is curiosity. It’s that innate desire to explore, ask questions, and solve problems. Think about a child learning to walk or talk—they aren’t following a manual or waiting for instructions. They’re experimenting, failing, and trying again because their curiosity drives them.
But when it comes to AI, there’s a risk that we stop being curious ourselves. Why? Because AI answers our questions before we even have to ask them. Need to know the weather? AI’s already told you. Want to learn how to cook a new recipe? AI’s delivered a video tutorial right to your phone.
In this convenience-driven world, we risk losing the joy of discovery. When AI gives us the answers, do we stop seeking them out for ourselves?
Another hallmark of learning is trial and error. This is how humans have developed everything from fire to flight. We try, we fail, we learn, and we improve.
But here’s the thing: AI doesn’t learn like we do. It processes vast amounts of data almost instantaneously, finding patterns and solutions that would take us years to discover. While this is an incredible tool, it also changes our relationship with learning.
For example, when you use AI to generate ideas, solve problems, or even create art, you bypass the messy, imperfect process of trial and error. While this might seem like a time-saver, it also means you’re missing out on the deeper learning that comes from making mistakes and figuring things out for yourself.
So, the question becomes: If we are outsourcing the learning process to AI, what are we losing in the process?
The third hallmark of learning is reflection—taking the time to think critically about what you’ve learned and how to apply it. This is where true understanding happens.
AI, however, can shortcut this process by delivering pre-packaged answers and solutions. For example, when AI recommends a decision—whether it’s the best stock to invest in, the fastest route to your destination, or the perfect gift for a loved one—do we stop to reflect on why that’s the best choice? Or do we simply trust the machine?
The danger here is that we might lose our ability to think critically. If we rely too heavily on AI to “do the thinking” for us, we risk becoming passive consumers of information rather than active learners and thinkers. At that point, we have to let AI take over, because we have lost our ability to make choices. And what happens when those systems fail? Will we still have the skills and resilience to make choices and adapt to a world without them?
Now, let’s shift gears and talk about the role of media in shaping how we think about AI. Malcolm Gladwell, in his book Revenge of the Tipping Point, quoted Larry Gross, who said, “The media creates the cultural consciousness about how the world works... and what the rules are.”
This is especially true when it comes to AI. The media doesn’t just report on AI—it shapes the narrative. Think about the headlines you’ve seen: “AI will replace millions of jobs.” “AI will revolutionize healthcare.” “AI will destroy humanity.” These stories don’t just inform us; they influence how we feel and how we act.
But is the media telling us the full story? Or is it feeding us a version of events designed to elicit a specific reaction? Deepfakes are a perfect example. They blur the line between reality and fiction, making it harder than ever to trust what we see and hear. Are deepfakes a form of art, a technological breakthrough, or a dangerous tool for manipulation?
And what about AI-generated content? Is it just another way for the media to churn out stories faster and cheaper, or is it something more? These questions matter because they shape how we interact with AI—and with each other.
Think about the iconic films that have shaped our cultural imagination: 2001: A Space Odyssey, Blade Runner, The Terminator, Minority Report, Her, Ex Machina. These movies didn’t just entertain us; they planted seeds in our collective consciousness about what the future might look like—and what it might mean to coexist with technology.
But here’s the fascinating question: Are these movies simply reflecting our hopes and fears, or are they actively shaping the technologies we create?
Consider this: The videophones in 2001: A Space Odyssey predated the invention of video calls by decades. The touchscreens and gesture-based controls in Minority Report inspired real-world innovations in user interfaces. Even Star Trek’s communicator is often credited with inspiring the development of modern mobile phones.
In many ways, movies act as a kind of blueprint for the future. They give us permission to imagine what’s possible, and that imagination often turns into reality. But there’s a flip side to this, too.
When movies show us dystopian futures dominated by AI—think The Terminator or The Matrix—do they make us more cautious, or do they create a kind of self-fulfilling prophecy? If we internalize the idea that AI will eventually become our overlord, does that influence the way we design it? Do we, in some subconscious way, build toward the very futures we fear?
Movies also shape how we feel about technology. They don’t just show us what a future with AI might look like; they tell us how to feel about it. In Her, for example, we’re drawn into an emotional relationship with an AI assistant, which raises profound questions about love, connection, and what it means to be human. Meanwhile, Ex Machina explores themes of power, manipulation, and control, forcing us to confront the darker sides of AI.
It’s a fascinating dynamic: Fiction inspires innovation, and innovation, in turn, influences fiction. It’s a feedback loop that has the power to create both incredible advancements and deeply unsettling consequences.
This brings us to a fascinating question: Is AI art, technology, or both? When an AI generates a painting or writes a song, can we call it art?
Some say yes. After all, art is about evoking emotion, and if an AI creation moves you, isn’t that enough? But others argue that art is inherently human—it’s an expression of our experiences, our struggles, our dreams.
The danger is that if we start valuing AI-generated art the same way we value human art, we risk devaluing the human experience itself. Art isn’t just about the end product; it’s about the process, the journey, the imperfections.
So where do we draw the line? Can AI be a tool for creativity without replacing the creator? These are questions we’ll need to answer as AI continues to evolve.
One of the most profound impacts of AI is its ability to tap into our emotional and psychological systems. Social media algorithms, for example, are designed to keep us scrolling, liking, and sharing. They feed on our dopamine responses, creating a cycle of reward and addiction.
AI is taking this to the next level. It can generate content that feels deeply personal—whether it’s a playlist tailored to your mood, a chatbot that mimics a friend, or art that speaks to your soul. But here’s the catch: It’s not real.
So why do we feel a connection to it? It’s because AI is incredibly good at mimicry. It doesn’t create; it imitates. And while that can be powerful, it also raises a troubling question: Are we building relationships with machines because we’ve failed to build them with each other?
This is where the dissociation comes in. As we spend more time in digital spaces and less time in the real world, we risk losing touch with what’s real. And as AI becomes more sophisticated, that line will only get blurrier.
So let’s talk about the moral dilemmas of AI, because there are many. Should AI have rights? If it creates art, who owns it? If it makes a mistake, who’s responsible? Because, at least at this point, humans still have to prompt AI to generate output.
As we continue to dream, innovate, and build, let’s remember that for now we are the authors of this story. The future is unwritten, and it’s up to us to decide how it unfolds.
The rise of AI is inevitable, but its impact is not. We have a choice in how we use it, how we regulate it, and how we integrate it into our lives.
Will we use AI to enhance our humanity, or will we let it replace the very things that make us human? Will we confront the moral dilemmas head-on, or will we sweep them under the rug?
The answers to these questions will define the next chapter of our story. And as we stand on the brink of this new era, one thing is clear: The future of AI isn’t just about technology—it’s about us.