A new paper from Meta AI maps three eras of supply chain intelligence - from ERP extensions to agent harnesses to something nobody in the industry is talking about yet.
Last week, a team from Meta AI and KAUST - including Jürgen Schmidhuber, co-inventor of the LSTM - published a paper called Neural Computers.
The idea in one sentence: stop building AI that uses a computer. Build AI that is the computer.
Not a smarter agent. Not a better copilot. A fundamentally different machine - one where computation, memory, and I/O live inside a single learned system instead of being stitched together from code, APIs, and scaffolding.
It's early. The prototype can't reliably do two-digit addition. The authors estimate a working version is three years out.
But the question they're asking landed differently for me than it probably did for most readers. Because I've spent 20+ years living through the exact problem they're describing - from the inside of supply chain operating rooms, not from a research lab.
The question, translated for the operations floor
Your ERP processed 400,000 transactions last quarter. Your TMS routed 12,000 shipments. Your demand planning system generated forecasts every week for 18 months straight.
None of them got any better at their job because of it.
I keep coming back to a specific memory. LafargeHolcim, post-merger. Eighty countries, four business units, CHF 5B in logistics and procurement spend. We had just finished integrating two massive ERP landscapes. The system worked. It executed. It was stable.
And then a port closure in West Africa cascaded into a raw material shortage that hit three plants in two countries simultaneously. The ERP had no idea. It had processed millions of transactions across those exact routes - and none of that operational history helped it anticipate or adapt to what was happening. My team did. The system waited for us to tell it what to do.
That was 2016. Ten years later, the fundamental problem hasn't changed.
We've tried to fix this twice already. Both attempts hit a ceiling.
Three eras of making supply chains smarter
Most companies are stuck somewhere between the first two. The Neural Computer paper accidentally maps the third.
Era 1: ERP Extensions. This is where I started. P&G in the mid-2000s, managing logistics across the Nordics and Central-Eastern Europe during the Gillette and Wella acquisitions. Every new capability meant a new module, a new configuration cycle, a new round of testing. At Zeppelin, I led a full TMS and WMS deployment from scratch - operating model redesign, Lean Six Sigma, the whole transformation playbook.
You know what it felt like? Like translating human intelligence into machine language, one business rule at a time. Every workaround our best operators invented had to be reverse-engineered into a spec, handed to a consultant, configured, tested, and deployed. Six months later, the business had already moved on. The system was always running yesterday's playbook.
The ceiling: the system can never be smarter than the person who configured it. And it never gets smarter on its own.
Era 2: Agent Harnesses. This is where the industry's energy is right now. You wrap AI agents around your existing systems - copilots that read emails and create POs, exception-handling bots that triage alerts, demand sensing agents that pull signals from external data.
I've been building this. The Orchestrator - nine specialised agents, each doing one thing well, communicating through Postgres, governed by a policy layer the human controls. Vessel Watch tracking AIS positions in real time. Doc Processor running five-step validation pipelines. Risk Monitor scoring route risk and invoking other agents when thresholds are crossed. All running on a €5 VPS, deployed through Telegram, under fifty cents a day.
But I have to be honest about what it doesn't do. The agents complete tasks inside the existing environment. They make the harness smarter - the prompts, the workflows, the memory stores - not the system underneath. When Vessel Watch encounters a completely new pattern of port congestion it's never seen before, it escalates or fails. That pattern doesn't become part of how the system operates next time. It becomes a note in my Postgres log and a prompt update I make manually on Saturday morning.
The ceiling: every new capability still requires me to build it. A new workflow, a new tool integration, a new agent. The scaffold gets thicker. The underlying system stays exactly the same.
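To make the Era 2 pattern concrete, here is a minimal sketch of a threshold-triggered harness like the Risk Monitor described above: one agent scores incoming signals and invokes registered downstream agents when risk crosses a threshold. All names, weights, and the scoring formula are illustrative placeholders, not the actual Orchestrator code.

```python
# Minimal sketch of an Era 2 agent harness: a monitor agent scores
# signals and invokes other agents when a threshold is crossed.
# Names and weights are illustrative, not the real Orchestrator.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Signal:
    route: str
    congestion_days: float
    weather_score: float  # 0 (calm) to 1 (severe)


def score_route_risk(s: Signal) -> float:
    """Toy risk score: a weighted blend of congestion and weather."""
    return 0.6 * min(s.congestion_days / 10, 1.0) + 0.4 * s.weather_score


class RiskMonitor:
    def __init__(self, threshold: float):
        self.threshold = threshold
        self.handlers: list[Callable[[Signal, float], None]] = []

    def on_breach(self, handler):
        """Register another agent to invoke when risk crosses the threshold."""
        self.handlers.append(handler)
        return handler

    def ingest(self, s: Signal) -> float:
        risk = score_route_risk(s)
        if risk >= self.threshold:
            for h in self.handlers:
                h(s, risk)  # escalate to downstream agents
        return risk


monitor = RiskMonitor(threshold=0.7)
alerts = []


@monitor.on_breach
def notify_planner(s, risk):
    alerts.append(f"{s.route}: risk {risk:.2f} - reroute review needed")


monitor.ingest(Signal("Abidjan-Dakar", congestion_days=9, weather_score=0.8))
print(alerts)
```

Note what this sketch makes visible: the escalation logic lives entirely in hand-written scaffolding. If a novel congestion pattern appears, nothing in the scoring function updates itself - exactly the ceiling described above.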
Era 3: Capability Extension. This is what the Neural Computer paper describes - and this is where it got genuinely interesting for me.
Instead of extending the ERP (Era 1) or harnessing agents around it (Era 2), capabilities enter the runtime itself. The system doesn't just execute what was coded or retrieve what was stored. It accumulates operational intelligence through interaction. Every exception resolved, every routing decision corrected, every demand signal validated - these don't just get logged. They become part of how the machine operates next time.
Think about what that would mean for the West Africa port closure scenario. Instead of my team scrambling to reconfigure the system while three plants waited, the machine would already carry the pattern from every previous disruption it had processed - not as a rule someone coded, not as a memory an agent retrieved, but as an operational capability that was now part of how it runs.
The Neural Computer paper calls this the difference between organising around explicit programs (Era 1), organising around tasks (Era 2), and organising around runtime (Era 3).
For supply chains, that distinction matters more than most of the AI coverage I read. We've spent 30 years making ERP systems more configurable and 3 years wrapping agents around them. The next shift is systems that get structurally better at running your operation because they ran your operation yesterday.
Four conditions that should become your evaluation framework
The paper defines four conditions for a "Completely Neural Computer." After reading them, I realised they work better as a vendor evaluation checklist than anything Gartner has published:
1. Turing complete. Can it handle genuinely novel operational scenarios, or does it only work for the three use cases the demo showed?
I've sat through enough vendor presentations to know the difference. Show me a system that handles a scenario your team didn't pre-configure, and now we're talking.
2. Universally programmable. Can you install new capabilities through interaction, demonstrations, and constraints - or does every change require a dev sprint and a deployment cycle?
At J&J, we integrated Synthes - a CHF 21B acquisition. Every supply chain process had to be rebuilt. If those capabilities could have been installed through operational interaction rather than 18 months of configuration, we would have saved a year and a significant amount of organisational pain.
3. Behaviour-consistent. Does it stay stable unless you explicitly update it? Or does it silently drift until someone notices the forecasts are wrong and nobody knows when they started?
This is the one that matters most for regulated industries. When I was managing CHF 5B in spend across 80 countries at Holcim, even a small drift in procurement routing logic could cascade into compliance failures across multiple jurisdictions. Any system that learns must also prove it hasn't changed when it wasn't supposed to.
4. Machine-native semantics. Does it operate in ways natural to its own architecture? Or is it just imitating a spreadsheet with more steps?
Most "AI-powered" supply chain tools I've seen are spreadsheets with a chatbot bolted on. The genuinely interesting systems are the ones that work in fundamentally different ways - and deliver insights that the old architecture couldn't produce at all.
Nobody meets these four conditions today. But they're the right questions. They'll separate the vendors doing real work from the ones selling polished demos.
How this changes where you put the money
If you're making supply chain software decisions right now, the three eras aren't just a framework. They're an investment thesis.
Era 1 investment logic: pay for configuration. You buy a module, hire consultants to configure it, amortise the cost over years of deterministic execution. I've run these projects. At Olam, the supply chain system deployment took the better part of two years. The ROI was real - but every new capability cost roughly the same as the last one. No compounding returns. Year five looks like year one, just with more modules.
Era 2 investment logic: pay for scaffolding. You invest in workflow platforms, API integrations, memory infrastructure, prompt engineering. The ROI shows up as faster task completion and headcount reallocation. Better than Era 1 - but every new capability still has a marginal cost. A new agent, a new workflow, a new integration. I know this because I build them every week.
Era 3 investment logic: pay for runtime infrastructure. This is where the economics change fundamentally. If the system accumulates capability through operation, the marginal cost of the next capability drops toward zero. You're no longer paying for each individual improvement. You're investing in a substrate that gets better because it operates.
That's a compounding return on operational data - something no ERP extension or agent harness can deliver.
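The economic shape of the three eras can be sketched with toy numbers. All figures below are invented for illustration - only the shape of the two curves matters: flat marginal cost (Eras 1 and 2) versus a high upfront investment followed by decaying marginal cost (Era 3).

```python
# Toy comparison of cumulative cost per new capability.
# All figures are invented; only the curve shapes matter.

def cumulative_costs(marginal_cost_fn, n_capabilities=10):
    """Running total of cost after each new capability is added."""
    total, out = 0.0, []
    for k in range(1, n_capabilities + 1):
        total += marginal_cost_fn(k)
        out.append(round(total, 1))
    return out

# Eras 1/2: each new capability costs roughly the same as the last one.
flat = cumulative_costs(lambda k: 100.0)

# Era 3: expensive runtime substrate upfront, then marginal cost decays
# toward zero as the system accumulates capability through operation.
decaying = cumulative_costs(lambda k: 300.0 if k == 1 else 100.0 * 0.5 ** (k - 1))

print(flat)      # linear growth: capability ten costs as much as capability one
print(decaying)  # flattening curve: later capabilities are nearly free
```

Under these assumed numbers the flat curve keeps climbing linearly while the decaying one plateaus - the "compounding return" claim is just this crossover, stated in prose.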
What to do right now
Stop treating operational data as exhaust. Every workaround, every exception resolution, every judgment call your best operator makes at 3 PM - this is training data for runtime-class systems. If it lives in someone's head, it's already lost. I learned this the hard way. The best logistics coordinator I ever worked with at P&G retired and took thirty years of Baltic shipping intuition with her. No system captured any of it.
Evaluate vendors against the four CNC conditions above. Nobody passes today. But the vendors building toward behaviour-consistency, installable capabilities, and interaction-as-programming are worth watching. The ones selling you another agent scaffold wrapped around your 2019 ERP are not.
Build governance infrastructure now, while your systems are still deterministic. When the machine starts learning from operating, governance gets exponentially harder - drift detection, rollback, audit trails, behavioural versioning. The organisations that build this infrastructure early won't just be compliant. They'll be the only ones who can safely adopt Era 3 systems when they arrive.
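One of those governance primitives - behavioural versioning - is simple to prototype today. The idea: pin a fingerprint of the system's answers on a fixed regression suite at release time; any silent drift changes the hash and can trigger rollback and audit. The routing function and test cases below are hypothetical stand-ins, assuming a deterministic system under governance.

```python
# Sketch of behavioural versioning for a learning system: hash the
# system's (input, output) pairs on a fixed regression suite and
# compare against a fingerprint pinned at release time.
# The routing function and cases are hypothetical placeholders.
import hashlib
import json

REGRESSION_SUITE = [
    {"origin": "Rotterdam", "dest": "Lagos", "priority": "cost"},
    {"origin": "Rotterdam", "dest": "Lagos", "priority": "speed"},
    {"origin": "Gdansk", "dest": "Riga", "priority": "cost"},
]


def route(case):  # stand-in for the system under governance
    return "sea" if case["priority"] == "cost" else "air"


def behaviour_fingerprint(system, suite):
    """Deterministic hash over (input, output) pairs for the whole suite."""
    pairs = [(json.dumps(c, sort_keys=True), system(c)) for c in suite]
    return hashlib.sha256(json.dumps(pairs).encode()).hexdigest()


PINNED = behaviour_fingerprint(route, REGRESSION_SUITE)  # stored at release

# Later, in production: fail loudly if behaviour drifted without an
# explicit, versioned update.
current = behaviour_fingerprint(route, REGRESSION_SUITE)
assert current == PINNED, "behavioural drift detected - trigger rollback/audit"
print("behaviour unchanged:", current == PINNED)
```

This only covers the deterministic case; for a system that is supposed to learn, the same mechanism versions each intended behaviour change, so the audit trail distinguishes sanctioned updates from silent drift.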