Big Tech is currently facing a physical infrastructure crisis. Data centres are consuming energy at a rate that strains national power grids. Microsoft, Google, and Amazon are investing in nuclear energy to power them. Some are seriously discussing server infrastructure beyond Earth. The cloud, it turns out, is not infinite.
And yet the organizations building this infrastructure are grappling with a problem that has nothing to do with storage capacity or energy consumption. The problem is not that they cannot hold more data. The problem is that having the data is not the same as having the answer.
After two decades in Knowledge Management across technology, healthcare, and consulting environments, I have watched organizations build progressively more sophisticated systems for capturing and storing knowledge. The repositories got bigger. The search got faster. The platforms got smarter. And the fundamental problem persisted: critical knowledge sat in the right place and still never reached the right person.
"The infrastructure question is not whether your organization can store more. It is whether what you store ever becomes useful to the person making a decision under pressure."
The Last Mile Problem
Concept
In logistics and telecoms, the Last Mile Problem describes the final and most expensive leg of delivery. The infrastructure is built. The network is connected. The product travels thousands of miles without friction. And then it fails at the last step: getting from the distribution point to the person who actually needs it.
Knowledge has exactly the same problem. Organizations invest heavily in capturing, storing, and indexing knowledge. The repositories are built. The platforms are deployed. The content is there. And then it fails at the last mile: getting from the system to the person making a decision, at the moment they are making it, in a form they can actually use.
Unlike visible costs — a failed project, a compliance breach, a customer complaint — the Last Mile failure is invisible. Nobody files a report on the decision made with outdated information. Nobody tracks the time spent reconstructing knowledge that already existed somewhere in the organization. But it compounds. Every policy change that takes three months to reach the frontline team. Every expert whose knowledge lives entirely in their head and leaves with them when they resign. Every AI system delivering confidently wrong answers because the knowledge it retrieved was never governed or verified.
Consider this scenario
A regulatory requirement changes. The updated guidance is published internally, reviewed by compliance, and filed correctly in the knowledge base. Six months later, a frontline team is still operating on the previous version, not because they ignored the update, but because the knowledge existed and never moved.
Three client-facing decisions were made in that window using outdated information. One is now under review. The cost of correction — legal review, client communication, process audit — runs to tens of thousands. The cost of the knowledge infrastructure failure that caused it: invisible on every report.
That is the Last Mile Problem. Paid quietly. Paid repeatedly. Never attributed to the right cause.
The Last Mile Problem is not a technology problem. More storage does not solve it. Faster search does not solve it. Even AI does not solve it; in fact, AI amplifies it. An AI system trained on or retrieving from ungoverned, unverified knowledge does not just fail to deliver the answer. It delivers the wrong answer with authority. The Air Canada chatbot case, ruled on by the British Columbia Civil Resolution Tribunal in February 2024, is one documented example — not a criticism but an illustration of a governance challenge that every organization deploying AI in customer-facing roles now faces. It will not be the last.
The Empathy Gate
Concept
If the Last Mile Problem describes the cost of knowledge that never moves, the Empathy Gate describes the discipline that stops the wrong knowledge from moving instead.
The Empathy Gate is the human filter in the knowledge flow. It is the moment — designed deliberately into the infrastructure — where someone asks: is this actually useful to the person making this decision, or is it just more data added to the noise they are already drowning in?
Most knowledge systems are built to maximise what is captured and stored. Very few are built to filter ruthlessly for what is actually needed. The result is that well-intentioned KM implementations make the information overload problem worse, not better, because they add volume without adding judgement.
The Empathy Gate is not a technology feature. It is a design choice built into taxonomy decisions, into content ownership models, into the criteria for what gets published versus what gets archived versus what gets retired. In practice it looks like this: a knowledge base that surfaces three results instead of fifty. A content owner who retires an article rather than updating it endlessly. A governance model that asks "who needs this and when" before anything gets published. These are not small decisions. They are the difference between a system that helps people think and one that makes thinking harder.
This is why Knowledge Management needs judgement, not just storage. And judgement is not something that scales by buying a better platform. The Empathy Gate has to be designed in deliberately because every system, left to its own logic, will default to capturing more rather than curating better.
The Cognitive Reservoir
But even when the right knowledge reaches the right person at the right moment, there is still the question of whether they have the cognitive space to use it.
We are building faster ways to move more information. We are not increasing the human capacity to process it.
Every piece of irrelevant information that reaches a decision-maker costs cognitive resource. Every search that returns fifty results when three would have been sufficient costs time and attention. Every AI-generated summary that requires fact-checking before it can be trusted costs exactly the kind of mental energy that should be going toward the actual decision.
A mature knowledge infrastructure should not just deliver more. It should protect the cognitive space that makes it possible to use what is delivered. It should handle the mechanical friction — the searching, the verification, the synthesis of scattered sources — so that the human at the end of the pipeline can do what only a human can do: apply context, weigh competing priorities, and make a judgement call that the data alone cannot make.
This is what I mean when I talk about knowledge as infrastructure rather than knowledge as content. Infrastructure is not noticed when it works and is only noticed when it fails. The water comes out of the tap. The lights turn on. The road holds the weight of the vehicle. Nobody thinks about the pipes, the grid, or the engineering that makes those outcomes reliable until they stop being reliable.
The same standard applies to knowledge infrastructure. When it is working, people make better decisions faster and attribute it to good instincts or a strong team. When it fails, they file incident reports and look for someone to blame. The infrastructure is invisible in both cases. The consequences are not.
Knowledge as Infrastructure
The most useful shift I have made in how I think about knowledge work is treating knowledge not as content to be managed but as infrastructure to be maintained.
After two decades building these systems I can say that the architecture evolves, the tools change, and what worked three years ago needs rethinking today. The organizations that handle this best are not the ones that declared victory at go-live. They are the ones that kept the maintenance schedule, stayed curious about what was not working, and treated the infrastructure as a living network of knowledge, people, and practice — one that breathes, adapts, and needs tending — rather than a finished product.
Knowledge infrastructure is not a solved problem. It is a practice. And like all good practice, it rewards those who keep showing up. This is why treating knowledge as infrastructure is the practical answer to the Last Mile Problem. When you design a knowledge system as infrastructure, with ownership built in, maintenance cycles planned from day one, and delivery calibrated to the moment of need, the Last Mile stops being a gap and becomes part of the design.
What follows is not a prescription. It is what I keep observing across industries, at different scales, in organizations that genuinely tried to get this right.
What this Means in Practice
Two decades of building knowledge systems across different industries and scales has produced a consistent observation: the gap is almost never in the technology. It is almost always in the same three places.
Ownership without accountability. Every piece of knowledge should carry not only the name of the person who created it but also the name of someone accountable for verifying its accuracy at regular intervals. Knowledge without an owner decays silently and gets retrieved with the same authority as knowledge that is current and verified.
Infrastructure without maintenance cycles. A knowledge system built with a go-live date and no built-in maintenance schedule is a red flag. Knowledge infrastructure whose maintenance consists of a quarterly reminder email that nobody opens is, in effect, unmaintained.
Delivery without a human filter. The Empathy Gate is designed out of most implementations in the name of efficiency. Automated ingestion. Bulk migration. Comprehensive indexing. The result is comprehensive noise, delivered faster and at greater scale than before.
"The question worth asking is not whether your organization has a knowledge infrastructure. It is whether that infrastructure is protecting the people inside it or simply adding to the noise they already cannot process. Whether it is visible enough to act on, and whether the people responsible for it have the mandate, the ownership model, and the governance to close that final gap."
References
Moffatt v. Air Canada, 2024 BCCRT 149 — British Columbia Civil Resolution Tribunal, February 14, 2024. https://www.canlii.org/en/bc/bccrt/doc/2024/2024bccrt149/2024bccrt149.html
IEEE Spectrum — Big Tech Embraces Nuclear Power to Fuel AI and Data Centers. https://spectrum.ieee.org/nuclear-powered-data-center
MIT Technology Review — Should We Be Moving Data Centers to Space? March 3, 2025. https://www.technologyreview.com/2025/03/03/1112758/should-we-be-moving-data-centers-to-space/
American Bar Association Business Law Today — BC Tribunal Confirms Companies Remain Liable for Information Provided by AI Chatbot. February 2024. https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-february/bc-tribunal-confirms-companies-remain-liable-information-provided-ai-chatbot/
Part of KnowledgeNova Insights