In this episode of Spatial Experience Design, host Søren Vasø is joined by Christian Christensen, Chief Product & Technology Officer at MapsPeople, for a hands-on look at how AI can be applied to real-world spatial data. Using Gemini 2.0 Flash and a live MapsIndoors prototype, they explore how AI understands context, responds to open-ended queries, and adapts across different environments without structured inputs. From booking meeting rooms to locating vegan lunch options, the demo reveals practical, scalable use cases that go far beyond basic search. Whether you're in workplace strategy, product design, or just AI-curious, this conversation breaks down what’s actually possible today, and what to watch next.
Key Topics:
- AI demo
- Gemini 2.0 Flash
- Context-aware AI assistants
- Large language models (LLMs) for real environments
- Real-time spatial intelligence
You Don’t Need to “Use AI.” You Need Better Results.
A lot of companies are wondering how to “use AI” right now. But the question that matters more is this: Can it actually do something useful with your data—today?
Our Chief Product & Technology Officer, Christian, built a working demo over a weekend, connecting Gemini 2.0 Flash to MapsIndoors and feeding it real spatial data.
No perfect inputs. No curated tags. Just the scattered, semi-structured information most teams already have.
Then he hit “run.”
The result? Smarter answers, spatial awareness, and a glimpse at where AI is finally delivering actual value in environments like offices, stadiums, and public venues.
What’s Changed (And Why This Matters Now)
AI isn’t new. But three things have changed rapidly:
- It’s faster than ever (responses feel real-time)
- It’s cheaper to run and scale
- It can finally hold onto context—and act on it
That last one’s a big deal. It means we no longer need to format every detail or build rigid logic trees. The model can infer what a user means, even when the language is messy or the request spans multiple steps.
In short: AI doesn’t just search anymore. It understands.
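To make that concrete, here is a minimal sketch of multi-step context retention, assuming the google-genai Python SDK (`pip install google-genai`); the questions and the colleague’s name are illustrative, not from the demo itself:

```python
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

# A chat session accumulates conversation history, so each follow-up
# is interpreted against everything said before it.
chat = client.chats.create(model="gemini-2.0-flash")

print(chat.send_message("Where does Anna sit?").text)

# "near there" is never spelled out; the model resolves it from the
# previous turn instead of a hand-built logic tree.
print(chat.send_message("Is there anywhere vegan to eat near there?").text)
```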
The Demo: Real Data, Real Environment, Real Results
Here’s what the prototype could do with a real MapsIndoors setup (a sketch of the underlying pattern follows the list):
- Locate a colleague by name—then show their exact desk on a map
- Answer follow-up questions based on that location (like where to eat nearby)
- Provide dietary options without any food items being tagged as “vegan”
- Understand phrases like “where can I charge my car?” and return a relevant map
- Find Zoom-enabled meeting rooms, understand nicknames, and sort by context
- Retain the thread of conversation across multiple steps
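None of this required training a custom model. The common pattern, sketched below under the assumption of the google-genai Python SDK, is tool calling: you declare the search functions your map app exposes, and the model replies with a structured call rather than prose. The `find_location` function here is a hypothetical wrapper around a MapsIndoors search, not an actual SDK method:

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Hypothetical tool the host app would implement around a MapsIndoors
# location search; the model only ever sees this declaration.
find_location = {
    "name": "find_location",
    "description": "Search indoor locations (desks, rooms, amenities) by free text.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Free-text search, e.g. a colleague's name or 'EV charger'",
            },
        },
        "required": ["query"],
    },
}

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Where does Anna sit?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(function_declarations=[find_location])],
    ),
)

# Instead of text, the model can emit a structured call that the
# frontend executes, then pans the map to the result.
part = response.candidates[0].content.parts[0]
if part.function_call:
    print(part.function_call.name, dict(part.function_call.args))
    # e.g. find_location {'query': 'Anna'}
```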
AI That Understands Environments, Not Just Queries
One of the biggest unlocks is spatial awareness. The AI now knows what floor you’re on, which canteen you’re near, who you work with, and what’s available based on the time of day. For example:
- Ask what vegan options are available, and you won’t see breakfast items if you’re asking at noon
- Ask “Can I charge my car?” and it links to the EV lot on your current campus
- Ask for a meeting room near “The Shining” and it connects the dots back to author Stephen King
- Ask “Can I book it?” and it can be configured to trigger external tools like Google Workspace or Outlook
What’s different here is that it supports people in the way they actually interact with their environment. It’s the beginning of real user experiences that save time, reduce confusion, and make spaces feel more responsive.
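One plausible way to achieve that awareness, sketched below with hypothetical field names, is to inject the user’s live context directly into the system message, so every answer is grounded in where, and when, the question is asked:

```python
from datetime import datetime

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Hypothetical context the host app already tracks; the exact fields
# will differ per deployment.
user_context = {
    "building": "HQ North",
    "floor": 2,
    "nearest_canteen": "Canteen B",
    "local_time": datetime.now().strftime("%A %H:%M"),
}

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What vegan options are available right now?",
    config=types.GenerateContentConfig(
        system_instruction=(
            "You are an indoor assistant. Current user context: "
            f"{user_context}. Prefer answers near the user's floor and "
            "relevant to the time of day (no breakfast items at noon)."
        ),
    ),
)
print(response.text)
```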
What It Takes to Build Something Like This
You don’t need to fine-tune your own AI model or build from scratch. But you do need to make smart choices about:
- Which LLM you use (we chose Gemini 2.0 Flash for this experiment for its price, speed, and 1M-token context window)
- What data you feed it (structured and unstructured can work—context is key)
- How you design the prompt (the system message tells the model what tools it can use, and what actions it’s allowed to take; see the sketch after this list)
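As a rough illustration of that last point, here is what a system message plus an action declaration might look like; `book_room` and its parameters are hypothetical, and the actual booking would be delegated to an external tool like Google Workspace or Outlook:

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Hypothetical action tool: the model may *request* a booking, but the
# backend decides whether and how to execute it (e.g. via a calendar API).
book_room = {
    "name": "book_room",
    "description": "Book a meeting room for the current user.",
    "parameters": {
        "type": "object",
        "properties": {
            "room_id": {"type": "string"},
            "start": {"type": "string", "description": "ISO 8601 start time"},
            "duration_minutes": {"type": "integer"},
        },
        "required": ["room_id", "start"],
    },
}

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Can you book the Stephen King room at 14:00 for 30 minutes?",
    config=types.GenerateContentConfig(
        # The system message scopes the assistant: what it is, which tools
        # it may call, and what it must never do on its own.
        system_instruction=(
            "You are the indoor assistant for this office. You may propose "
            "bookings via the book_room tool, but only after the user "
            "explicitly confirms. Never invent rooms."
        ),
        tools=[types.Tool(function_declarations=[book_room])],
    ),
)
```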
If you’re thinking about how to integrate AI into your space, whether it's an office, campus, stadium, or mall, start by asking:
- What are people already trying to do here?
- What do they wish they could ask, instead of clicking around?
- Where would spatial context help make the answer better?
Why This One’s Worth Paying Attention To
What makes this demo different is that it’s grounded in real problems—the kind that affect people’s time, confidence, and ability to navigate a space.
It's not about replacing maps. It's about making them feel useful in the moment:
- When you're late and need to find the nearest meeting room
- When you want to grab food that fits your diet and schedule
- When you need to know where your team is sitting
- When you’re planning your office days and want to simplify bookings
What’s Next
The more context we layer in, from schedules and availability to roles and permissions, the more helpful and human the experience becomes. Not robotic, but smarter.
AI isn’t the goal. Better answers are.
When context meets capability, it finally starts to click.