AI is an Impostor (Fundamental Flaw Explained)



What if today’s incredible AI is just a brilliant “impostor”?

This episode features host Dr. Tim Scarfe in conversation with guests Prof. Kenneth Stanley (ex-OpenAI), Dr. Keith Duggar (MIT), and Akarsh Kumar (MIT).

While AI today produces amazing results on the surface, its internal understanding is a complete mess, described as “total spaghetti” [00:00:49]. This is because it’s trained with a brute-force method, stochastic gradient descent (SGD), which is like building a sandcastle: it looks right from a distance, but has no real structure holding it together [00:01:45].
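
For context, SGD just nudges every weight a small step downhill on the error, one example (or batch) at a time; whatever “understanding” forms is only what happens to accumulate along the way. A toy sketch of the update rule (a made-up linear model, not anything from the episode):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)                 # model parameters, randomly initialised
X = rng.normal(size=(100, 3))          # toy inputs
y = X @ np.array([1.0, -2.0, 0.5])     # targets generated by a hidden linear rule

lr = 0.05
for step in range(500):
    i = rng.integers(len(X))           # pick one random example ("stochastic")
    err = X[i] @ w - y[i]              # prediction error on that example
    w -= lr * err * X[i]               # gradient step on the squared error

print(w)  # lands near [1.0, -2.0, 0.5] without the rule ever being represented explicitly
```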

To explain the difference, Keith Duggar shares a great analogy from his high school physics classes [00:03:18]. One class relied on memorizing lots of formulas for specific situations (like the “impostor” AI); the other used calculus to derive answers from a deeper understanding, which turned out to be both easier and more powerful. That is the core difference: one method memorizes, the other truly understands.

The episode then introduces a different, more powerful way to build AI, based on Kenneth Stanley’s earlier experiment, “Picbreeder” [00:04:45]. This method creates AI with a shockingly clean and intuitive internal model of the world. For example, it might develop a model of a skull in which the “mouth” is a separate component it can open and close, despite never being explicitly trained on that action [00:06:15]. This deep understanding emerges bottom-up, without massive datasets.
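
Picbreeder evolved images encoded as CPPNs (compositional pattern-producing networks): small networks that map pixel coordinates to intensity through composed simple functions, so global structure is built in rather than memorised pixel by pixel. A rough, hypothetical sketch of the idea (not the actual Picbreeder code):

```python
import numpy as np

def cppn(x, y):
    # Each node composes a simple function of the coordinates, so global
    # regularities (symmetry, repetition) come for free:
    d = np.sqrt(x**2 + y**2)                     # radial symmetry
    stripes = np.sin(4.0 * x)                    # horizontal repetition
    return np.tanh(stripes + 2.0 * np.cos(3.0 * d))

xs, ys = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
img = cppn(xs, ys)   # a 64x64 image; nudging one weight (e.g. the 4.0)
                     # deforms the whole pattern coherently, rather than
                     # scrambling individual pixels
```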

The secret is to abandon a fixed goal and embrace “deception” [00:08:42]: the idea that the stepping stones to a great discovery often look nothing like the final result, so optimizing directly for the result can lead you astray. Instead of optimizing for a target, the AI is built through an open-ended process of exploring whatever is “interesting” [00:09:15]. This creates a more flexible and adaptable foundation, much as evolvability wins out in nature [00:10:30].
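
One concrete instantiation of this idea is novelty search (Lehman & Stanley): rank candidates by how different their behaviour is from anything seen so far, rather than by progress toward a goal. A toy sketch (the 2-D behaviour space and all parameters here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def novelty(b, archive, k=5):
    # Novelty = mean distance to the k nearest behaviours seen so far.
    if not archive:
        return float("inf")
    d = np.sort([np.linalg.norm(b - a) for a in archive])
    return float(d[:k].mean())

archive = []
pop = [rng.normal(size=2) for _ in range(20)]   # "behaviours" as 2-D points
for gen in range(50):
    scored = sorted(pop, key=lambda b: novelty(b, archive), reverse=True)
    archive.extend(scored[:3])                  # remember the most novel
    parents = scored[:len(pop) // 2]            # select FOR novelty...
    pop = [p + 0.1 * rng.normal(size=2)         # ...then mutate offspring
           for p in parents for _ in range(2)]
```

There is no objective anywhere in the loop; the population keeps spreading into unexplored behaviours, which is exactly the “interestingness” pressure the episode describes.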

The show concludes by arguing that this choice matters immensely. The “impostor” path may be hitting a wall: it demands insane amounts of money and energy for each step of progress while failing to deliver true creativity or continual learning [00:13:00]. The ultimate message is a call not to put all our eggs in one basket [00:14:25]. We should explore these open-ended, creative paths to discover a more genuine form of intelligence, which may be found where we least expect it.

Extended interview here: https://www.youtube.com/watch?v=KKUKikuV58o

REFS:
Questioning Representational Optimism in Deep Learning:
The Fractured Entangled Representation Hypothesis
Akarsh Kumar, Jeff Clune, Joel Lehman, Kenneth O. Stanley
https://arxiv.org/pdf/2505.11581

Why Greatness Cannot Be Planned: The Myth of the Objective
Kenneth O. Stanley, Joel Lehman
https://amzn.to/44xLaXK

Original show with Kenneth from 4 years ago:

Kenneth Stanley is SVP Open Endedness at Lila Sciences
https://x.com/kenneth0stanley

Akarsh Kumar (MIT)
https://akarshkumar.com/

AND… Kenneth is HIRING (this is the OPPORTUNITY OF A LIFETIME!)
Research Engineer: https://job-boards.greenhouse.io/lila/jobs/7890007002
Research Scientist: https://job-boards.greenhouse.io/lila/jobs/8012245002

Tim’s code visualisation of FER (Fractured Entangled Representation), based on Akarsh’s repo: https://github.com/ecsplendid/fer

TRANSCRIPT: https://app.rescript.info/public/share/YKAZzZ6lwZkjTLRpVJreOOxGhLI8y4m3fAyU8NSavx0
