In this episode of Spatial Experience Design, MapsPeople’s CPTO Christian Christensen joins host Søren Vasø to unpack the rising role of spatial MCPs—Model Context Protocols—that enable AI to understand tools, data, and context without custom code. From finding a quiet spot at a convention center to answering, “Will I make my flight?”, this conversation explores how MCPs simplify complex logic behind the scenes and create smarter, more human interactions. Whether you're building workplace apps, digital concierges, or spatially aware assistants, this episode makes the case for MCP as AI’s new favorite format.
Key Topics:
When people talk about AI today, the focus is often on what the model can do. But the real test is how well the AI understands the world around it, especially when the stakes are tied to real-world movement, decisions, or environments. That’s where MCPs come in.
MCP stands for Model Context Protocol, an emerging standard designed to help AI systems interact with backend tools and APIs, without requiring developers to hardcode every integration. Think of it as a universal translator between AI models and the systems they need to query: calendars, occupancy sensors, maps, flight data, weather services, and more.
Rather than building custom logic to tell the AI what to ask, where to get the data, and how to interpret it, MCP handles that heavy lifting. It exposes the available tools, data, and functions, and the AI figures out how to use them in response to natural-language prompts. This is the difference between an AI that returns canned answers and one that can deliver real-time, personalized, spatially aware responses.
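As a rough illustration, here is what exposing a couple of spatial tools over MCP might look like. This is a minimal sketch assuming the official Python MCP SDK's FastMCP helper; the tool names and canned return values are illustrative placeholders, not MapsPeople's actual API.

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical MCP server exposing two spatial tools. An AI client discovers
# them through the protocol and decides when to call them; no prompt-specific
# integration code is written on either side.
mcp = FastMCP("spatial-demo")

@mcp.tool()
def get_walking_time(from_location: str, to_location: str) -> str:
    """Estimate walking time between two named indoor locations."""
    # A real implementation would call a routing backend; this is a stand-in.
    return f"About 3 minutes from {from_location} to {to_location}."

@mcp.tool()
def find_quiet_spot(venue: str) -> str:
    """Suggest a low-occupancy area in the venue right now."""
    # A real implementation would combine occupancy sensors and map data.
    return f"The lounge on level 2 of {venue} is currently quiet."

if __name__ == "__main__":
    mcp.run()
```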
At MapsPeople, we’re building a Spatial MCP because traditional map interfaces are just one piece of a much bigger puzzle. Spatial data lives across your APIs: routing, search, positioning, departments, and resources. That data is the context people need to get helpful answers.
Imagine someone in an airport asking, “Will I make my flight?” The system not only understands the question; it also taps into flight status, gate location, walking distances, and current position through connected MCPs, then returns a confident answer like “Yes. You’re at Gate A2. Boarding starts in 12 minutes, and it’s a 3-minute walk, so you’re in good shape on time.”
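In practice the model composes those tool results itself; the sketch below just makes the underlying check explicit, using made-up values standing in for what flight-status, positioning, and routing tools would return.

```python
from datetime import datetime, timedelta

# Made-up values standing in for results from connected MCP tools.
now = datetime.now()
boarding_starts = now + timedelta(minutes=12)  # from a flight-status tool
gate = "A2"                                    # from a flight-status tool
walk = timedelta(minutes=3)                    # from a routing tool (current position -> gate)

spare = boarding_starts - now - walk
if spare >= timedelta(0):
    print(f"Yes. Gate {gate} is a {walk.seconds // 60}-minute walk and boarding "
          f"starts in 12 minutes, leaving about {spare.seconds // 60} minutes to spare.")
else:
    print(f"It will be tight. Head to gate {gate} now.")
```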
That’s not just maps. That’s a better experience powered by spatial understanding.
MCP allows AI to process the context it needs to help people without unnecessary extra steps.
All of that becomes possible when your AI can understand space, time, people, and purpose. The MCP handles the logic, the tools, and the data sources behind the scenes. You just write the prompt.
This is a new way of thinking about digital interfaces: less clicking and configuring, more asking and responding, just like you would with a colleague.
Good News: You don’t have to go all-in from day one. Most teams should begin with a single, isolated use case, one that would clearly benefit from richer, more contextual responses.
Whether that’s enhancing an internal help bot, offering location-aware facility search, or powering a digital concierge, MCP gives you a future-proof way to expand. Each new tool you add, like a calendar, a weather plugin, or a search engine, is like plugging in another USB-C cable. The AI simply knows more, so people get better answers.
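To make the “USB-C” point concrete, here is a sketch of what adding capabilities might look like on the server side, again assuming the Python MCP SDK's FastMCP helper and using invented tool names with stand-in data. Registering a new tool is all it takes for a connected AI client to discover and start using it.

```python
from mcp.server.fastmcp import FastMCP

# The same registration pattern extends beyond spatial data to calendars,
# weather, or anything else the assistant should know about.
mcp = FastMCP("workplace-assistant")

@mcp.tool()
def find_available_room(building: str, min_capacity: int = 4) -> list[str]:
    """List rooms in a building that are free right now (stand-in data)."""
    # A real tool would query calendar and occupancy backends.
    return ["Room 2.14", "Room 3.02"]

@mcp.tool()
def get_weather(city: str) -> str:
    """Return a short weather summary (stand-in data)."""
    return f"Light rain and 12°C in {city}."

if __name__ == "__main__":
    mcp.run()
```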
The MCP standard is gaining momentum, with backing from the biggest players in AI. Authentication controls, cross-tool interactions, and developer-friendly tooling are all evolving rapidly. For product teams building the next generation of spatial experiences, MCPs are the bridge to smarter, faster, more helpful interactions.
Additional Resources: