The Seven Principles of Ambient Agents

By Alex Dean
May 14, 2025

The Ambient Agent Manifesto, Part Two

Missed Part One? Check it out: What Are Ambient Agents and Why Are We So Excited About Them at Snowplow?

Prolog: Prompting fast and making things

After an explosion of AI agent build-outs, along with frameworks to support these builds, the community is taking a beat to discern the underlying principles of these systems.

There has been a robust debate between the OpenAI and LangChain teams, thoughtful contributions from Anthropic and LlamaIndex, and a standout “back to basics” manifesto from Dexter Horthy (“12-factor agents”). And everybody has now filed their 1,000 words on MCP and Agent2Agent (including yours truly).

To be clear, as an industry we are making exciting progress. AI agents represent a “strategic inflection point”, dramatically impacting everything from internal business processes to multi-channel customer journeys. AI engineers in the enterprise are not asking permission to integrate Foundation Models into core workflows: in many organizations a hundred PoCs have bloomed, with some (but by no means all) being promoted from lab to live.

At Snowplow, we have been tracking this ecosystem evolution closely – I was lucky enough to attend the CrewAI event in New York in April, and Yali will be at LangChain’s Interrupt event in San Francisco this week. Snowplow is investing heavily in “Customer RAG” – supplying customer behavior to chatbots and assistants – and this brings us into close contact with specific agentic frameworks plus protocols such as MCP and Agent2Agent.

What is becoming clear to us is that we are reaching a stage of maturity where AI agents are starting to be woven into all software and technology. And this maturity level demands a ‘leveling up’ of the design principles and architectural patterns we use to build agentic systems. The industry is starting to use the term “ambient agent” as an umbrella to describe this new approach.

The speedrun to ambient multi-agent systems

Yali and I have been building Snowplow since 2012 and we’ve never seen an invention-and-adoption cycle quite like agentic AI – it’s a veritable speedrun. At this pace we are quickly reaching a state where:

  • Agents are being integrated into all non-legacy software and technology
  • Individual agents are being trusted with more, well, agency: they are being set long-term goals, like improving process efficiency or increasing customer loyalty, and are ‘always on’ to achieve these goals
  • Agents are gaining more autonomy in pursuit of said goals: more steps in the control flow are being determined by agents; agents are no longer simply being instructed to perform steps in a control flow authored by a human
  • More and more systems are multi-agent. This specialization is a natural next step in breaking down complex tasks into smaller steps and assigning them to specialist agents that can be optimized on those steps
  • More and more companies are running multiple multi-agent systems – according to Accenture’s Chief AI Officer, Lan Guan, 10% to 15% of her clients are already using multi-agent systems, and she expects that to grow to 30% in 18-24 months (source)
  • More and more software vendors are building agents into their offerings, so customers can integrate these agents into their own multi-agent systems, crossing organizational boundaries in the process

Put all this together and “ambient agents” aren’t a specific type of agent – they are a set of principles for how enterprises (and software vendors) should successfully build and scale multi-agentic systems.

Ambient agents in the arena

In the first post in this series, I described a set of “ambient agents” working together to improve an online shopping experience. Ambient agents are found in the arena: both reactive to changes in their environment and proactive about solving problems without being instructed.

Let’s share two more examples to really illustrate the idea:

Executive assistant

The professional executive assistant has been the holy grail of applied use cases for AI researchers and builders since Alan Kay’s Knowledge Navigator concept video for Apple in 1987.

I like Azeem Azhar’s vision for what an agentic executive assistant can become:

Your agent should be capable of proactive problem-solving. The irritating package delivery that needs rescheduling? You merely express the intent: “This is a pain point for me, deal with it.” … Your agent takes over, interfacing directly with the vendor’s AI agent. It analyzes your calendar for true availability (understanding your preference for uninterrupted deep work blocks or the variability of picking your kids up from dance class), negotiates a new slot, and updates your schedule, all without further intervention. It simply informs you of the resolution.

LangChain is actively exploring these types of ambient agents – check out their open-source project executive-ai-assistant, which itself leverages a concept they call “Agent Inbox”. Executive assistants are an area of rapid experimentation for ambient agents because you can ‘animate’ these agents by giving them access to just a few key messaging technologies (think email, text, chat).

Video game director

I love this example because this ambient agent was actually built – and before LLMs existed! The co-operative zombie shooter game Left 4 Dead (2008) featured an internal system called the “Director” to adapt the gameplay in real-time to keep it challenging but rewarding.

The Director was really an agentic system before the term was fashionable. It had all the core ingredients:

  • A clear goal of making the game enjoyable to play
  • The ability to constantly observe how the players were engaging with the game environment, each other and the zombie enemies
  • The ability to influence the game in real-time, by spawning enemies and items in varying positions and quantities that it determined

In fact the Director concept was so effective at optimizing gameplay that Left 4 Dead actually included a second agentic concept – a musical director to keep the soundtrack interesting throughout the game.

Seven principles of ambient agents

From these examples we can start to discern the core principles of these ambient multi-agent systems – I count seven core principles.

But first, credit where credit’s due: these principles come from a combination of our own experiences and the thinking coming out of Confluent and Akka (shoutouts to Sean Falconer and Tyler Jewell in particular).

The seven principles are as follows:

  1. Goal-oriented – agents are set a clear primary objective which gives them purpose and drives their behavior
  2. Autonomous operation – agents act independently without human prompting, making decisions and taking actions based on the changing world around them
  3. Continuous perception – agents continuously observe and monitor their environment
  4. Semantic reasoning – agents need a semantic understanding of the environment, and the agent’s role within it, to make effective decisions
  5. Persistence across interactions – agents must be able to remember their prior experiences in order to make progress towards their long-term goals
  6. Multi-agent collaboration – individual agents with specialized capabilities will work together to solve complex problems, even across organizational boundaries
  7. Asynchronous communication via event streams – agents communicate through shared event streams, enabling loose coupling, fault tolerance, and many-to-many information flow

The following diagram integrates all seven principles into a single illustrative set of ambient agents:

[Diagram: an illustration of the seven principles of ambient agents]

Let’s go through each principle in turn.

Goal-oriented

The agent has to be goaled – it has to know what it exists to do. In many ways this is the highest-order principle: without a clear purpose, an agent is really just a task executor in support of someone else’s goal; it’s really a tool, not an agent.

An example of a goal could be: make sure that 90% of incoming support requests are deflected from our (human) call center operators; or, keep monthly stockouts on our best-selling products to less than 1%.

Note that in addition to being given this goal, the agent will likely also have a set of constraint KPIs and safety guardrails to ensure a balanced system – for example (sketched in code after this list):

  • Increase sales but don’t allow discounting to reduce margin below 80%
  • Increase sales but don’t manipulate or deceive the customer in any way
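Here is a minimal sketch of how such a goal plus numeric guardrails might be represented and enforced. All of the names are hypothetical rather than a real Snowplow or framework API, and a qualitative guardrail like “never deceive the customer” would need a separate policy check rather than a KPI threshold:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A primary objective plus the constraint KPIs that keep pursuit of it balanced."""
    objective: str
    constraints: dict[str, float] = field(default_factory=dict)  # KPI -> floor value

def action_allowed(goal: Goal, projected_kpis: dict[str, float]) -> bool:
    """Block any action whose projected outcome would breach a constraint KPI."""
    return all(
        projected_kpis.get(kpi, floor) >= floor
        for kpi, floor in goal.constraints.items()
    )

sales_goal = Goal(
    objective="increase_sales",
    constraints={"gross_margin": 0.80},  # never discount margin below 80%
)

# A deep discount projected to drop margin to 72% is blocked before execution.
print(action_allowed(sales_goal, {"gross_margin": 0.72}))  # False
print(action_allowed(sales_goal, {"gross_margin": 0.85}))  # True
```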

By clearly setting goals, we enable the ambient agent to operate autonomously…

Autonomous operation

Our agents need to be autonomous operators, so that they are not bottlenecked by the availability – and capability – of the humans overseeing them. If we are to realize the full potential of agents, we need to give them the right span of control to be as effective as possible.

Ambient agents aren't ‘coin-operated’, stuck waiting for external input from humans. They know what their goal is, they know what their action space is, and they independently make decisions and take action in pursuit of those goals. 

Note that autonomous operation does not mean that there aren't other agents or humans in the loop for authorization purposes. Like a military officer, the agent has a clear span of control, but decisions that fall outside of that span have to be sent up the “chain of command” for approval.
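As a hedged sketch of that chain of command – the action names and risk labels below are hypothetical, and a production system would escalate to a human-in-the-loop tool or a supervisor agent rather than returning a string:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    risk: str  # "low", "medium" or "high"

# The agent's span of control: the action types it may take unilaterally.
SPAN_OF_CONTROL = {"send_reminder_email", "apply_small_discount"}

def execute_or_escalate(action: Action) -> str:
    """Act autonomously inside the span of control; send everything else up the chain."""
    if action.name in SPAN_OF_CONTROL and action.risk != "high":
        return f"executed: {action.name}"
    return f"escalated for approval: {action.name}"

print(execute_or_escalate(Action("send_reminder_email", "low")))  # executed
print(execute_or_escalate(Action("refund_full_order", "high")))   # escalated
```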

The autonomous principle is so clear and compelling that it is starting to appear in vendor marketing for AI agents, for example from Bloomreach, emphasis mine:

Unify your data, personalize every journey, and let AI-powered agents autonomously execute campaigns in real time across 13+ channels—driving higher LTV while you focus on strategy, not workflows.

As mentioned above – autonomy depends on the agent’s ability to make continuous sense of the world around them. This leads us to perception…

Continuous perception

Ambient agents continuously monitor the world as things happen, responding to their real-time observations rather than waiting for explicit instructions.

The world is always changing and one of the most exciting possibilities with agents is that they can respond to it in real-time – but they can only do that if they’re able to perceive it continuously. It is this constant perception of their environment that ‘animates’ the agent and enables the rest of our principles.

Both the Confluent and Akka teams have been doing solid foundational work in this area – check out Sean Falconer’s recent post, The Future of AI Agents is Event-Driven, and Akka’s spring webinar ‘Design patterns for agentic AI’.

For this principle, I am deliberately using the term ‘perception’ rather than the more common language of ‘event-driven’. Why? Ambient agents don’t consume the raw firehose of every granular environmental event, any more than my prefrontal cortex directly consumes my eyeballs’ rod and cone data.

Instead, an ambient agent needs to perceive higher-level events which are semantically meaningful in the context of its goal.

If we take an ecommerce example, it’s the difference between sending the agent “shopper Harper Deckard just abandoned her cart after 10 minutes” versus sending all the raw signal around Harper’s site activity for the agent to try to deduce the same.

For any data architects or engineers reading this – think of this as the ambient agent consuming the gold-layer signal, not the bronze-layer data (per the Medallion Architecture). Stream processing is essential to turn the firehose of raw events into clear signals that can be incorporated into the agent’s context window (pull), or delivered in real-time to the agent via Kafka events or Akka messages (push).
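To make this concrete, here is a minimal sketch (not the actual Snowplow pipeline) of the kind of stream-processing fold that collapses bronze-layer clickstream events into a single gold-layer signal:

```python
def detect_cart_abandonment(raw_events, idle_seconds=600):
    """Fold raw 'bronze' behavioral events into semantic 'gold' signals.

    raw_events: a time-ordered iterable of dicts such as
        {"user": "harper", "type": "add_to_cart", "ts": 1715600000}
    Yields higher-level events such as {"user": ..., "signal": "cart_abandoned"}.
    """
    last_cart_activity = {}
    for event in raw_events:
        user, ts = event["user"], event["ts"]
        if event["type"] in ("add_to_cart", "update_cart"):
            last_cart_activity[user] = ts
        elif event["type"] == "checkout_complete":
            last_cart_activity.pop(user, None)
        # Any cart idle beyond the threshold becomes one meaningful signal
        # for the agent, instead of dozens of raw page events.
        for idle_user, last_ts in list(last_cart_activity.items()):
            if ts - last_ts >= idle_seconds:
                del last_cart_activity[idle_user]
                yield {"user": idle_user, "signal": "cart_abandoned"}
```

Push the resulting signals onto a stream, or pull them into the context window, and the agent perceives “Harper abandoned her cart” rather than fifty page views.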

Semantic reasoning

Ambient agents work towards specific goals, and they perceive their environment continuously in support of these goals. Next, ambient agents need to connect the two together, and understand:

  • What the observations from the environment mean for their goals
  • What actions they could take
  • What impact those actions are likely to have
  • And thus: choose the action most likely to advance their goal

Over time they should even be able to improve their understanding, based on the actions they have taken to date and the impact those actions had – learning tasks through repetition.

This all requires a semantic understanding of the environment, how that environment works and what the agent’s potential role is in that environment. The better that semantic understanding, the better able the agent should be to make effective decisions – to perform semantic reasoning.
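Here is a toy sketch of that interpret-enumerate-predict-act loop. Every action name and score is invented for illustration – a real agent would back each step with an LLM and learned impact predictions:

```python
class ToyAgent:
    """A toy agent illustrating the reason-act-learn loop (all logic illustrative)."""

    def __init__(self):
        self.history = []  # (observation, action, outcome) triples for learning

    def interpret(self, observation):
        # Map an observation onto goal-relevant meaning.
        if observation.get("cart_value", 0) > 100:
            return "high_value_cart_abandoned"
        return "low_value_cart_abandoned"

    def possible_actions(self, meaning):
        return ["send_discount", "send_reminder", "do_nothing"]

    def predict_goal_impact(self, action, meaning):
        # A real agent would learn these scores from outcomes; here they are fixed.
        scores = {"send_discount": 0.7, "send_reminder": 0.4, "do_nothing": 0.0}
        return scores[action] if meaning == "high_value_cart_abandoned" else 0.1

    def step(self, observation):
        meaning = self.interpret(observation)                 # 1. interpret
        candidates = self.possible_actions(meaning)           # 2. enumerate actions
        best = max(candidates,                                # 3. predict and choose
                   key=lambda a: self.predict_goal_impact(a, meaning))
        outcome = f"executed {best}"                          # 4. act
        self.history.append((observation, best, outcome))     # 5. learn from outcome
        return outcome

agent = ToyAgent()
print(agent.step({"user": "harper", "cart_value": 240}))  # executed send_discount
```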

Today, most agentic applications rely on the world-ontology that is baked into the underlying LLM through its own training data. But given how critical semantic reasoning is, it's not surprising that more AI engineers are working to encode this understanding formally, and build agents that can develop that semantic representation over time. In pursuit of this, engineers are embracing semantic layers, knowledge graphs and ontologies; VCs are also joining the dots.

The end game for semantic reasoning includes the entire loop of perception, semantic interpretation, action prediction, decision-making, and learning from outcomes – it's about integrating all environmental information into a coherent semantic framework to guide decision-making and continuously improve. A good place to learn more is the 2024 paper ‘Unifying Large Language Models and Knowledge Graphs: A Roadmap’.

Persistence across interactions

LLMs are a fundamentally stateless technology – but an ambient agent needs to have a memory like an elephant, not like a goldfish. Being set a long-term goal, an ambient agent has to be able to remember its observations, its actions over time, and its own impact on the environment and other agents; in short, it needs to be able to track its progress against its goal.

My favorite example of this is Leonard Shelby, Guy Pearce’s character in the Christopher Nolan movie Memento (2000). Shelby is trying to find out who murdered his wife, but his anterograde amnesia makes it almost impossible for him to make progress; his behavior devolves into a mix of trial-and-error, mistaken hunches and leaving himself ambiguous aides-mémoire.

Long-term persistence is the central design concept of a set of “agentic memory” startups, including Cognee, Mem0 and Zep. There is also some interesting work coming out of ServiceNow Research, with their idea of "tape agents", a skeuomorphic concept of a central and shared cassette tape that stores all of the history of these agents.
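At its simplest, that memory can be an append-only journal the agent writes to and replays. The sketch below is illustrative only, deliberately not modeled on the API of Cognee, Mem0, Zep or TapeAgents:

```python
import json, time

class AgentMemory:
    """An append-only journal the agent replays to track long-term progress."""

    def __init__(self, path="agent_memory.jsonl"):
        self.path = path

    def record(self, kind, payload):
        # Persist every observation, action and measured impact as it happens.
        entry = {"ts": time.time(), "kind": kind, "payload": payload}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    def replay(self, kind=None):
        # Recall prior experience – the antidote to LLM statelessness.
        with open(self.path) as f:
            for line in f:
                entry = json.loads(line)
                if kind is None or entry["kind"] == kind:
                    yield entry

memory = AgentMemory()
memory.record("action", {"name": "send_discount", "user": "harper"})
memory.record("impact", {"orders_recovered": 1})
print(list(memory.replay("impact")))
```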

By talking about these agents sharing their history with each other, we arrive at the next principle: collaboration between ambient agents!

Multi-agent collaboration

Individual agents with specialized capabilities will work together to solve complex problems, sharing context and coordinating through structured protocols.

These agents will be assembled inside each enterprise, inside each SaaS vendor, and across organizational boundaries; this is what Google’s Agent2Agent Protocol exists to facilitate.

A multi-agent approach embraces both agent specialization and agent composability:

Agent specialization means that individual agents can be designed to solve specific problems – they have a clear purpose and remit. This is especially important given the non-deterministic ("stochastic") nature of these agents: just like micro-services are easier to reason about than monoliths, a set of specialized agents with legible boundaries and interfaces is much easier to build and operate.
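A minimal sketch of what those legible boundaries can look like – the specialist agents and the task format here are entirely hypothetical:

```python
from typing import Protocol

class Agent(Protocol):
    """The narrow, legible interface every specialist exposes."""
    def can_handle(self, task: str) -> bool: ...
    def handle(self, task: str) -> str: ...

class RefundAgent:
    def can_handle(self, task): return task.startswith("refund:")
    def handle(self, task): return f"refund processed for {task.split(':', 1)[1]}"

class ShippingAgent:
    def can_handle(self, task): return task.startswith("ship:")
    def handle(self, task): return f"shipment rebooked for {task.split(':', 1)[1]}"

def route(task: str, agents: list[Agent]) -> str:
    """A coordinator composes specialists instead of one monolithic agent."""
    for agent in agents:
        if agent.can_handle(task):
            return agent.handle(task)
    return "escalated: no specialist found"

print(route("refund:order-1042", [RefundAgent(), ShippingAgent()]))
```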

Agent composability is very exciting to me. When agents dynamically discover and collaborate with each other, they can compose together higher-level solutions to business problems. I called this out in my recent post on MCP and Agent2Agent:

At Snowplow, we have spent many years helping two software categories (digital analytics and Customer Data Platform) to move to much more composable approaches atop the enterprise data warehouse. What is so exciting about Agent2Agent is the potential for composability to break out of individual software categories and become an emergent property across all enterprise SaaS.

This leads us on to the last of our seven principles, on the optimal way for these agents to communicate with each other…

Asynchronous communication via event streams

How will multi-agent systems work in practice? Agents will communicate with each other asynchronously through shared event streams rather than formalized point-to-point connections.

There are multiple reasons to take this approach, including:

  • Scalability – as agents proliferate, the number of point-to-point integrations skyrockets. By communicating via asynchronous event streams, it becomes easy for multiple agents to consume the same streams, reducing the number of connections and improving efficiency and consistency. This is very similar to the original rationale for the Enterprise Service Bus
  • Consistency – with two agents consuming the same event stream, they are both working with the same source of truth. If, however, those two agents are talking independently to a third agent (the source of the stream), they will inevitably be consuming different versions of the truth (because agents are stochastic systems)
  • Resilience and fault tolerance – by replacing point-to-point integrations with loosely coupled event streams, the overall system is much less fragile in the case of a single agent falling over. When the agent is recovered, it can simply pick up where it left off with the event stream
  • Emergent intelligence – agents can discover and respond to events they weren’t explicitly programmed to monitor. Asynchronous communication will allow powerful, unexpected collaborations to emerge between agents in pursuit of their assigned goals

All of this is really bringing the Event-Driven Architecture (EDA) pattern popularized by Apache Kafka and others into the agentic ecosystem. EDAs enable loose coupling, fault tolerance, and many-to-many information flow. They are fundamentally ‘open’ architectures for agent composition, versus more tightly bound, ‘closed’ architectures such as DAG-based orchestration.
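As a toy stand-in for Kafka (in-memory, illustrative only), the following shows that many-to-many, loosely coupled flow – two agents subscribe to the same stream and the producer never addresses either directly:

```python
from collections import defaultdict

class EventStream:
    """A minimal in-memory topic bus standing in for Kafka."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Any number of agents can consume the same topic: many-to-many flow.
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # The producer never addresses consumers directly: loose coupling.
        for handler in self.subscribers[topic]:
            handler(event)

bus = EventStream()
bus.subscribe("cart_abandoned", lambda e: print("retention agent saw", e))
bus.subscribe("cart_abandoned", lambda e: print("inventory agent saw", e))
bus.publish("cart_abandoned", {"user": "harper"})
```

Note that neither subscriber knows about the other, and the publisher knows about neither – adding a third consuming agent requires no new connections.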

Asynchronous communication is my seventh and final principle, but don’t mistake it for the least important! To be truly high-agency, ambient agents must escape the bounds of pre-ordained, human-authored, DAG-bound orchestration.

To learn more on this principle, check out the great thinking coming out of the teams at Confluent and Akka: Event-Driven Agent Mesh, AI Agents are Microservices with Brains and Design patterns for agentic AI.

Conclusion

We have now run through the seven principles that collectively define ambient agents: a vision of AI agents that are environmentally aware, proactive, and capable of operating within a larger ecosystem of interconnected agents.

Ambient agents are not a new type of agent. Instead, they represent the coming wave of architectural patterns for building sophisticated, goal-oriented agents. These principles put the high agency – the autonomy – into AI agents.

Thanks for reading this far! All content written and reviewed by humans, all emdashes painstakingly typed out with double-hyphens.

Coming next…

For our next post in this manifesto, we will focus exclusively on one of the seven principles: continuous perception for ambient agents.

Snowplow’s long pedigree in observing customer digital behavior puts us in an exciting position to help fuel continuous perception for agents, helping them to see how customers are interacting online, and even how customers are reacting to the agent’s own behavior. Indeed, perception for AI agents is a major part of the Snowplow 2025 product roadmap.

Stay tuned for our next post to understand the core concepts underpinning agentic perception.
