These Agent Terms describe the default framework under which runi.services designs, builds, and operates AI agents on behalf of a client. They are written to be short, honest, and useful to someone deciding whether to engage us — not to be maximally defensive legalese.
1. Scope of the agent
Each engagement produces one or more named agents with an explicit written scope. The scope document covers: what the agent is allowed to do, what it is expressly not allowed to do, what systems it may read and write, and which human it escalates to when it is unsure.
Changes to scope are made in writing, through the same review path as the original scope. Verbal expansion does not bind the agent.
2. Accountability
An agent’s actions are the client’s actions. runi.services builds the agent, governs its boundaries, and is accountable for the quality of that construction. The client is accountable for the business outcomes of the agent’s work, in the same way they would be accountable for the outcomes of an employee using a tool we sold them.
Every production agent has a named human operator on the client side with authority to pause, modify scope, or decommission the agent. The operator is listed in the engagement contract.
3. Data
What the agent sees
Only data the engagement scope permits. Access is configured at the platform layer (Entra, Azure, M365) using least-privilege service principals or delegated permissions, not shared credentials.
What the agent stores
Agent memory — the identity layer that makes this firm’s work distinctive — is stored in client-controlled infrastructure by default. Memory is human-readable. The client can inspect, export, or delete any memory entry at any time.
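To illustrate what "human-readable" means in practice, a minimal sketch follows. The file layout, field names, and helper functions here are hypothetical, chosen for the example; they are not our actual memory schema. The point is only that a client can open, export, or delete entries with ordinary tools:

```python
import json

# Hypothetical memory store: one JSON object per entry,
# readable in any text editor without special tooling.
entries = [
    {"id": "m-001", "ts": "2024-05-01T09:30:00Z",
     "note": "Client prefers weekly summaries on Fridays."},
    {"id": "m-002", "ts": "2024-05-03T14:10:00Z",
     "note": "Escalate invoice disputes over EUR 5000."},
]

def export_memory(entries):
    """Full export in a durable, human-readable format (JSON Lines)."""
    return "\n".join(json.dumps(e) for e in entries)

def delete_entry(entries, entry_id):
    """Client-side deletion of any single memory entry."""
    return [e for e in entries if e["id"] != entry_id]

# A client deleting one entry is a one-line filter, not a support ticket.
remaining = delete_entry(entries, "m-001")
```

Because the format is plain text, inspection and deletion need no vendor involvement, which is what makes the "inspect, export, or delete at any time" commitment credible.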
Retention
Memory is retained for the duration of the engagement plus any handover period agreed in the contract. On termination we provide a full export in a durable format and then delete our copies within 30 days, unless the client requests us to keep operating the agent under a continuation agreement.
4. Governance
Every agent is wired to a governance layer of our own design. At a minimum this includes:
- A conscience hook that intercepts actions against the agent’s stated values before execution
- A rate limit appropriate to the agent’s role, to bound damage from a bad day
- An escalation path to the human operator for decisions the agent isn’t authorised to make alone
- An audit trail of significant actions, retained alongside memory
These are not features we upsell. They are non-negotiable conditions of how this firm deploys agents.
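The four pieces above can be sketched as a single wrapper that every action passes through. This is a minimal illustration under stated assumptions, not our implementation; the names (GovernedAgent, values_check, operator_notify) are hypothetical:

```python
import time
from collections import deque

class GovernanceError(Exception):
    """Raised when an action is blocked before execution."""

class GovernedAgent:
    """Illustrative wrapper: conscience hook, rate limit,
    escalation path, and audit trail around every action."""

    def __init__(self, values_check, operator_notify, max_actions_per_minute=10):
        self.values_check = values_check        # conscience hook
        self.operator_notify = operator_notify  # escalation path
        self.max_per_minute = max_actions_per_minute
        self._recent = deque()                  # sliding rate-limit window
        self.audit_trail = []                   # retained alongside memory

    def act(self, action, authorised=True):
        now = time.time()
        # Rate limit: drop timestamps older than a minute, then check the cap.
        while self._recent and now - self._recent[0] > 60:
            self._recent.popleft()
        if len(self._recent) >= self.max_per_minute:
            raise GovernanceError("rate limit exceeded")
        # Conscience hook: intercept against stated values before execution.
        if not self.values_check(action):
            self.audit_trail.append(("blocked", action))
            raise GovernanceError(f"blocked by conscience hook: {action}")
        # Escalation: decisions the agent can't make alone go to the operator.
        if not authorised:
            self.operator_notify(action)
            self.audit_trail.append(("escalated", action))
            return "escalated"
        self._recent.append(now)
        self.audit_trail.append(("executed", action))
        return "executed"
```

Note the ordering: the conscience hook and rate limit run before anything executes, and every outcome, including blocks and escalations, lands in the audit trail.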
5. Termination
Either party may terminate for convenience with 30 days’ written notice. Either party may terminate immediately for material breach after a 14-day cure period.
On termination we hand over: the agent’s memory, configuration, governance rules, audit trail, and enough documentation for a competent successor (internal team or other firm) to keep it running. We do not hold client data hostage.
6. Limitations
We build agents that we believe are useful and governed. We do not warrant that an agent will never be wrong. The point of the governance layer is not to prevent all mistakes; it is to make mistakes bounded, visible, and recoverable.
We disclaim liability for consequential and indirect damages. Our total liability in a given engagement is capped at the fees paid during the preceding twelve months. These are standard specialist-firm terms and are open to negotiation in writing.
7. Changes to this document
While this document is in Draft v0, we revise it openly. Material changes are dated at the bottom of this page. When we publish v1.0 — meaning counsel has signed off and we are willing to be bound by it without a parallel contract — we will announce the change in Writing and on LinkedIn.