THE RESET | Day 3: Hitting Walls
Researching adaptive learning path generation reveals a hard truth: the problem is NP-hard, meaning there is no known efficient algorithm that produces a perfect path for every learner. The reframe changes everything — with a human leader in the loop to review and adjust, AI doesn't need to be perfect, it needs to be good enough.
Disclaimer: The content provided in this series is designed to provide helpful information on the subjects. I am not responsible for any actions you take or do not take as a result of reading this series, and I'm not liable for any damages or negative consequences from action or inaction to any person reading or following the information in this series.
Last time, we went through what makes a good offer, how to select a target avatar, and how I’m planning the main offer for this new venture.
Today is about what happened when theory met reality.
1. The Deep Dive
With the offer framed and the business side sketched out, I had to go back to the only thing that really matters in the long run:
The product.
So I went down the rabbit hole.
I started mapping out what a perfect Adaptive Fully Automated E2E L&D Platform would look like:
- What existing platforms are actually doing under the hood
- How they structure knowledge, content, and assessments
- What the latest research says about knowledge decomposition and learning path generation
Then I ran into this line in one of the more recent papers:
"It is not possible to have a single learning path that suits every learner in the widely heterogeneous e-Learning environment. […] this problem is NP-hard."
In computer-science terms, NP-hard means you’ve picked a fight with one of the hardest classes of problems we know of. There’s no known way to solve it efficiently (in polynomial time) — and throwing bigger hardware at it only delays the wall, because the search space grows faster than any hardware can keep up with.
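To get a feel for why brute force fails here, ignore prerequisites and learner differences entirely and just count orderings: a curriculum of n topics has n! candidate sequences. A three-line sketch (the topic counts are arbitrary, just for illustration):

```python
from math import factorial

# Even the naive search space explodes: ordering n topics has n! candidate
# sequences, before any per-learner constraints are layered on top.
for n in (5, 10, 20):
    print(n, factorial(n))
```

At 20 topics you are already past 2.4 quintillion orderings; the real problem, with prerequisites, skill levels, and time budgets, is harder still.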
So what does that mean for the dream of:
“Click a button and get a perfect, ready-to-use learning path for your entire team.”
It means: no one really knows how to do that yet.
At least, not in the fully general, “works for everyone, everywhere, on anything” sense.
I re-read the line:
"It is not possible to have a single learning path…"
The problem is obvious when you think about it:
- People start from different skill levels and backgrounds
- One person learns something in 3 days, another in 3 weeks
- Prior knowledge, time availability, learning preferences, and context all matter
There is no one path.
The reframe
NP-hard doesn’t mean “give up.” It means there’s no known efficient algorithm for the fully general case.
There is no single algorithm that produces the perfect path for every person on every topic in every context.
But that’s not what I’m building.
I’m not trying to solve learning for “every learner in a widely heterogeneous environment.”
I’m solving it for:
- A specific team
- Learning specific knowledge
- In a specific context
- With a human leader reviewing and steering the outcome
That constraint changes everything.
The leader:
- Knows their team
- Knows what actually matters
- Knows what to cut and what to keep
The AI doesn’t need to be perfect.
It needs to be good enough that the leader can:
- Review the generated path
- Adjust it quickly
- Ship it with confidence
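That "good enough draft, then human review" loop can be sketched in a few lines. Everything below is hypothetical — the topic names, the prerequisite map, and the `draft_path` helper are illustrative, not part of any real platform. The point is that once the scope is one team and one body of knowledge, a plain topological sort over prerequisites already yields a valid (not optimal) path for the leader to edit:

```python
from graphlib import TopologicalSorter

# Hypothetical prerequisite map for one team's body of knowledge:
# topic -> set of topics that must come first.
PREREQS = {
    "variables": set(),
    "functions": {"variables"},
    "testing": {"functions"},
    "apis": {"functions"},
    "deployment": {"testing", "apis"},
}

def draft_path(prereqs, already_known):
    """Return a prerequisite-respecting draft path, skipping known topics.

    This is the 'good enough' draft a leader reviews and adjusts --
    a valid ordering, with no claim of optimality.
    """
    order = TopologicalSorter(prereqs).static_order()
    return [topic for topic in order if topic not in already_known]

# One learner already knows the basics; the draft covers only their gap.
path = draft_path(PREREQS, already_known={"variables", "functions"})
print(path)  # deployment always comes last; ties may appear in either order
```

The AI’s real job in this framing is producing the `PREREQS` map and a sensible default ordering; the leader’s job is cutting, reordering, and shipping.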
This pattern already works elsewhere:
- AI code assistants don’t write perfect code
- They write code that’s roughly 80% correct
- A developer reviews, fixes, and ships in a fraction of the time
The human-in-the-loop isn’t a limitation.
It’s the feature.
So instead of panicking at “NP-hard,” I got excited.
Because the academic world is obsessed with solving the fully automated version of the problem —
And almost no one is building the human-assisted version.
They’re chasing perfect automation.
I’m building a power tool for the leader.
2. Yann LeCun and the Convergence
While I’m deep in papers and competitor architectures, the news drops:
Yann LeCun is leaving Meta.
For context:
- One of the three “Godfathers of AI”
- Turing Award winner
- Former Chief AI Scientist at Meta
He launches AMI Labs (Advanced Machine Intelligence).
- First round: $1 billion raised
- Valuation: $3.5 billion
- Revenue: $0
Normally, I’d scroll past. Another AI lab. Another billion. Whatever.
But this time I read what he’s actually trying to build.
And it stopped me.
Beyond next-token prediction
LeCun has been saying for years that current LLMs are fundamentally limited.
They:
- Predict the next token
- Don’t actually understand anything
His JEPA (Joint-Embedding Predictive Architecture) work lays out what he thinks comes next:
AI that builds world models.
Not systems that just describe the world.
Systems that understand:
- How the world got to its current state
- What can happen next
- How interventions change outcomes
He’s approaching this from physics:
- Energy-based models
