Wax Wings & AI
A long time ago, a boy named Icarus flew with wings made of feathers and wax. Drunk on freedom, he flew too high. The sun melted the wax. He fell into the sea.
The myth gets remembered as a warning about arrogance.
But it’s also a story about fundamental misunderstanding.
Icarus didn’t fall because he flew too high. He fell because he mistook the appearance of flight for the ability to fly.
Contrast Icarus’ flight with the day the Wright brothers flew.
They didn’t flap harder. They didn’t glue feathers to wood. They tested wing shapes in a wind tunnel. Measured pressure. Studied airflow.
They figured out what actually holds something in the sky: lift.
And once they understood lift, they flew by building machines that looked nothing like nature’s fliers.
Icarus failed because his flight was built on mimicry.
The Wright brothers succeeded because theirs was built on understanding.
And that’s exactly where we are with artificial intelligence.
We’ve built machines that mimic intelligence. They talk, they write, and they might even surprise us with solutions to complex problems. But it’s all just pattern matching.
And pattern matching isn’t thinking; computation isn’t comprehension.
Today’s most advanced AI doesn’t truly know what it’s doing. It has no beliefs, no intrinsic goals, no subjective experience. It mirrors the patterns in the data it’s been fed. It reflects us — cleverly, eerily, and often usefully.
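To make “pattern matching” concrete, here is a deliberately tiny sketch: a bigram model in Python. This is my own illustration, not how any modern system is actually built; the corpus and names are invented, and a real LLM is vastly larger and statistically far more sophisticated. But the family resemblance is the point: it strings words together by replaying regularities in its training text, with nothing behind them.

```python
# A toy bigram "language model": pure pattern matching over a tiny corpus.
# It produces fluent-looking word sequences with no model of meaning at all.
import random
from collections import defaultdict

# Hypothetical training text, chosen only for illustration.
corpus = (
    "the sun melted the wax and icarus fell into the sea "
    "the wright brothers studied lift and flew into the sky"
).split()

# Record which words were seen following which: the system's entire "knowledge".
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Replay observed word transitions to produce plausible-sounding text."""
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break  # dead end: this word was never seen followed by anything
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the wax and flew into the sky"
```

Run it a few times and the output sounds loosely like the corpus. Scale the same basic move up by many orders of magnitude, with far better statistics, and you get something that sounds like us.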
But it doesn’t understand.
That’s not a mind.
That’s wax and feathers.
But the real problem is that we’re starting to believe the illusion. And because we don’t understand what we’ve built, that belief is curdling into fear. Pioneers like Geoffrey Hinton now fear that AI will spontaneously develop goals, subgoals, even a drive for self-preservation. Hinton worries it could resist being turned off.
On the other hand, theoretical physicist Roger Penrose offers a different perspective. He argues that consciousness isn’t just an emergent property of sufficient computational complexity. Understanding, he posits, can’t simply be simulated by replicating neuron functions in silicon. It requires something more fundamental, perhaps rooted in undiscovered physics.
Penrose and Hinton cannot both be right, though.
Penrose's view makes a lot more sense to me, and here's why.
We’ve built astonishing systems that mimic intelligence. But we still haven’t cracked what intelligence fundamentally is. We haven’t discovered the “lift” of genuine thought—the irreducible principle that actually holds a conscious mind aloft.
It’s that simple.
Yes, computation is clearly a part of intelligence. But it’s not the whole thing. Not by a long shot.
Hinton poses a compelling thought experiment: replace each neuron in a brain, one by one, with a nanochip that replicates its function perfectly. At no single swap would anything observable change, so, he argues, the person would remain conscious throughout. Therefore, the fully replicated system is a conscious mind.
This functionalist argument seems logical, but it relies on a critical assumption. It presumes consciousness arises solely from the functional arrangement of parts, regardless of substrate.
But consider this: hydrogen is explosive, and oxygen feeds fire. Combining them should, by that logic, produce something volatile. Yet what emerges is water: stable, life-giving, and wet.
Wetness isn't a property of hydrogen or oxygen alone. It's an emergent property of their specific interaction under the right conditions.
In the same way, consciousness might not be a guaranteed output of any complex computation, but an emergent phenomenon arising from specific, poorly understood biological (or potentially other) processes. Mimicking the function of the parts doesn't guarantee you recreate the emergent whole.
We’re flying with systems that look like minds. But we still don’t know what makes a mind fly.
That’s why the true danger isn’t malevolent machines. It’s trusting a pattern-matching system with real power: healthcare, warfare, finance, justice.
Just as Icarus trusted wings of wax to keep him safe in flight.
Let’s not confuse the flawless mimicry of intelligence with real understanding. That confusion — not AI itself — is the biggest threat.