…and How We Can Save Ourselves
Based on conversations with economists and AI specialists, this essay looks at what AI can really do for society – and where I see its limits. I’ve come to believe that our future depends far more on human integrity, education, and our collective will than on any machine. Keyword: Responsible AI. I share what I’ve observed, what a careful analysis reveals, and where I stand. But of course, I’d love to hear your perspective.

Beyond the Hype – A Sober Look at the AI Revolution
Let’s be honest: artificial intelligence has become the new religion of progress.
We are told it will cure cancer, reverse climate change, run our companies, and maybe even fix our marriages if we ask politely enough. Every conference stage, TED Talk, and LinkedIn post seems to promise salvation through algorithms.
And yet, beneath all this digital euphoria runs a deep unease.
Will AI take our jobs? Entrench inequality? Decide who gets healthcare or a mortgage?
Or worse: is there a risk that it will quietly make us irrelevant?
After years of observing this debate – from the front row of academia and the trenches of corporate decision-making (although this was before AI became so widespread and available to everybody) – I’ve come to a simple conclusion:
AI is not our saviour. It’s our amplifier and our mirror.
It amplifies whatever we feed into it – brilliance or bias, empathy or greed – and reflects our collective systems, values, and flaws back at us with unnerving accuracy.
AI has no soul, no conscience, no intrinsic sense of “good.” Nevertheless, I always end my prompts with “Thank You”. What it has is scale. It executes human intent – good or bad – faster, louder, and wider than ever before.
So, the question isn’t just what AI will do to us.
It’s what we will do with AI.
And whether we have the courage, education, and moral clarity to steer it wisely, under the umbrella “responsible AI” – before it steers us.

What AI Really Is – and Why That Matters
Before we can talk about impact, we need to clear the fog. AI doesn’t “think.” It doesn’t “learn” like a human. It doesn’t “understand” your business, your feelings, or your cat videos – although many users seem to believe it does. There is even a disturbing trend of treating AI as a religion: ChatGPT Religion: The Disturbing AI Cult.
What large language models (like ChatGPT) do is predict the next statistically likely word, based on trillions of examples. It’s a breathtakingly sophisticated guessing machine – I compare it to a parrot with a PhD in probability.
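At the risk of oversimplifying for a general audience, the “guessing machine” idea can be made concrete with a toy sketch. This little bigram counter is nothing like a real LLM internally (those use neural networks trained on trillions of words, not simple word counts), but the principle is the same: predict the statistically most likely next word, with no understanding of what any word means.

```python
from collections import Counter, defaultdict

# A tiny toy corpus; real models are trained on trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word seen in the corpus."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" – simply the most frequent follower of "the"
```

Note what this toy also demonstrates: the prediction is only ever as good, or as biased, as the text it was counted from.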
That means AI doesn’t create truth; it recombines it. It doesn’t generate wisdom; it synthesizes what’s already out there. And since most of what’s “out there” is written by humans with blind spots, biases, and occasionally questionable judgment, those same biases are baked into every digital prediction.
When you ask AI to summarize “the typical professional,” it might over-represent men. When you ask it to “suggest a good leader,” it might prefer youth. When you ask it to “write a diet plan for women,” it might use unrealistic, data-skewed health metrics. AI is biased – as are the texts it has been trained on.
Unfortunately, these are not innocent errors; they are reflections of the data we’ve produced as a society. And because AI amplifies patterns, it doesn’t just mirror inequality – it multiplies it.
So, when I say AI is a mirror, I mean it quite literally.
The question is: do we like what we see?

History Repeats – Only Faster
If all this sounds familiar, it’s because we’ve been here before. Well, if you are my age, you have seen economic bubbles burst. Every industrial revolution has promised liberation and delivered disruption first.
The steam engine freed us from physical labour but trapped millions in factories.
The computer promised “the paperless office” and gave us inboxes overflowing with digital busywork.
The pattern is always the same: early adopters profit, while ordinary people adjust, often painfully.
Yes, society eventually catches up – but only after decades of inequality, policy failure, and public backlash.
The Industrial Revolution generated immense wealth but concentrated it in a few hands for nearly a century. Real wages stagnated while profits soared.
And now, as AI begins its own revolution, we are watching the same movie again – only in high definition.
Here’s the unromantic truth: technology doesn’t automatically create fairness.
It creates potential. What happens next depends on governance, education, and human decency.
Without deliberate intervention, the “AI revolution” will follow the same pattern – immense wealth for a few, lost livelihoods for many, and a widening gap between those who understand the tools and those who are used by them. Some experts are sure this will happen sooner rather than later. That makes it all the more important to focus on “responsible AI”: think about the consequences before blindly following a trend.

The Productivity Illusion
There’s a persistent fantasy that AI will finally make the economy boom – that by automating drudgery, we’ll all have time for creativity, family, or yoga retreats. Lovely idea. Unfortunately, reality isn’t playing along.
Decades of data show that massive investments in technology do not automatically lead to higher productivity. Economists call it the “productivity paradox”: we see the gadgets everywhere – but not in the GDP.
Why? Because plugging in new technology doesn’t automatically fix broken systems.
Real productivity comes from humans – educated, healthy, motivated humans – who know how to integrate new tools into meaningful work.
When companies adopt AI, they often see an initial drop in productivity before any long-term gains appear. Systems must be redesigned, staff retrained, data cleaned up – and all of that takes time and money.
Most companies don’t have a strategy or plan in place, yet hope that AI will fix a lack of clear vision. It won’t. The flashy dashboards might impress shareholders, but transformation only works when people understand what the machines are actually doing. Following a trend without knowing what the goal should be is not responsible AI.
That’s why the smartest nations invest not just in AI labs, but in education, healthcare, and social stability. Those are the real engines of progress. You can’t build a high-tech future on a foundation of burnout, misinformation, and economic precarity – a lack of predictability, job security, and material or psychological welfare.
For women 50-plus – many of whom are balancing professional expertise, caregiving, and lifelong learning – this point hits home. We are told that we are no longer needed and can easily be replaced by AI – that AI can do so much more than we can. But productivity isn’t about doing more; it’s about doing what matters, not what is merely possible.
Unfortunately, there’s no shortage of “experts” claiming that success is possible without expertise, experience, wisdom, or skill – that all you need is the right “AI agent” and a set of “prompts”, and prosperity will follow. The reality? The results are mediocre at best, and often just plain ridiculous.

The Human Equation
The conversation about automation frequently boils down to numbers: jobs lost versus jobs created. But let’s pause for a moment. Work is not just an economic function – it’s an emotional anchor for many professionals.
Our professions give us structure, purpose, and identity. Losing that doesn’t just shrink a bank account; it unravels a sense of belonging.
We’ve seen this before, in deindustrialized towns across Europe and the U.S., where the collapse of local industries led to depression, substance abuse, and an erosion of community life. Those scars didn’t fade with retraining programs or “upskilling.” They lingered for generations.
Now imagine a world where whole categories of intellectual work – teaching, writing, design, customer service – are hollowed out by automation. Not because humans can’t do them better, but because corporations can do them cheaper.
This isn’t innovation; it’s extraction, and it carries a human cost. It is not responsible AI; it is highly irresponsible. Here’s the question: if companies like Amazon automate everything and make large numbers of people redundant, what happens next? Scale that up. When incomes fall and markets contract, who’s left to buy their products?
If we truly want a humane digital era, we must redefine what “work” means. It might involve more flexible roles, part-time projects, mentorship, or community engagement. The point isn’t to cling to old models, but to protect what matters: dignity, connection, and purpose.

The Power Problem – and Why It Should Worry You
Let’s talk about the elephant in the server room: power.
The AI revolution isn’t being driven by small startups or visionary scientists. It’s being shaped by a handful of tech giants who control the data, the chips, and the cloud.
Nvidia, Microsoft, Google, Meta, Amazon – they own the infrastructure, the talent, and increasingly, the narrative.
Innovation? I call it oligopoly. That is not responsible AI; it is driven by greed.
The irony is hard to miss: the technology that was supposed to democratize knowledge is concentrating power more aggressively than any industrial monopoly before it. When a handful of CEOs can decide how billions of people access information, we don’t have a technology problem – we have a governance crisis.
Even worse, many AI systems operate as “black boxes.” They make decisions – about hiring, credit, healthcare, even criminal sentencing – that no one can fully explain. Not even their creators. And when no one can explain a decision, no one can be held accountable.
This is not some abstract ethical issue. It’s about democracy. When governments start using private AI systems to make public decisions (heaven forbid), citizens lose their right to understand – or challenge – the logic behind them. A government that can’t explain its own algorithms can’t be trusted to protect its people.

Reclaiming Control – The Ethics and Governance We Deserve
There’s a growing international movement to bring AI under democratic control – from the EU’s AI Act to UNESCO’s ethical frameworks and the G7’s “Hiroshima Process.” All these initiatives share a simple principle: AI must serve people, not the other way around.
The EU’s approach is a good start – banning manipulative or high-risk systems, demanding transparency, and requiring human oversight for algorithms that affect lives. It’s bureaucratic, yes, but necessary. Because without guardrails, even well-intentioned innovation can turn into digital authoritarianism. And just imagine bad actors putting these systems to use.
But regulation is only half the story. The other half is mindset. We must shift from “replacement” to “augmentation.” From building machines that do our jobs to building tools that amplify our skills.
The most promising AI applications today – in medicine, education, and research – follow this principle.
AI helps doctors detect early disease but doesn’t replace their judgment.
It helps teachers personalize lessons but doesn’t replace their empathy.
It helps scientists connect data points faster but doesn’t replace curiosity.
That’s the model we need: human at the centre, AI as assistant.

How We Can Save Ourselves
The future of AI isn’t written in code. It’s written in us – in our choices, values, and collective courage to think critically. If you’ve read my earlier posts, you know I’m happily teamed up with AI – as in, it fetches the coffee, does the time-consuming searches and files the spreadsheets. I let it handle the grunt work, but I don’t hand over the keys. On those terms, ChatGPT, Gemini, and I are on excellent speaking terms.
Here’s what that means in practice, for all of us:
1. For individuals: Learn, question, and stay awake.
Don’t fear AI – understand it. Use it, experiment with it, but never outsource your judgment.
If something looks too perfect, it probably is. Verify. Challenge. Be curious.
The best antidote to manipulation is literacy – not technical, but critical literacy.
2. For society: Invest in people, not just platforms.
Education, healthcare, and care work are not “cost centres.” They’re the infrastructure of resilience.
We don’t need more apps that tell women how to “optimize” their mornings – we need policies that support them through midlife transitions, flexible careers, and ongoing learning.
3. For policymakers: Be brave.
Stop outsourcing regulation to the very companies creating the risk.
Demand transparency, enforce accountability, and redistribute the wealth generated by automation.
If AI boosts profits, those gains should serve public welfare, not just shareholder dividends.
A fair future is not anti-technology. It’s pro-human.

The Takeaway: The Mirror Never Lies
AI is not a monster. It’s not a messiah, either.
It’s a mirror – and a megaphone. It will amplify whatever we feed it.
If we feed it greed, it will optimize for profit.
If we feed it compassion, it might help us heal.
If we feed it wisdom – real, hard-earned, human wisdom – it might just become the greatest tool we’ve ever built.
But make no mistake: the algorithm won’t save us from ourselves.
That’s still our job.
And perhaps that’s the uncomfortable but liberating truth of this whole conversation.
AI can reflect what we are – but only we can decide who we become.

About the Author
I’m Dr. Heike Franz – researcher, coach, and unapologetic myth-buster.
After decades in global corporate leadership – and two doctorates – I’ve come to believe that the biggest threat in the AI age isn’t the technology itself; it’s our willingness to stop thinking critically.
Through my work at Dr. Franz Consulting, I help women 50-plus combine longevity science, leadership, and digital intelligence to build sharp minds, strong bodies, and smart decisions – without the hype.
If this article resonated with you, join me for more straight talk on AI, health, and human intelligence at www.drfranz-consulting.com, or connect with me on LinkedIn.
Because the future won’t be written by machines – it will be written by women who refuse to hand over their power.
