A deep marketing agent that runs your outbound pipeline as a continuous learning loop on the Single Data Backbone. Missions build the target database, Playbooks turn your marketing knowledge into agent skills, and every email send waits for human approval before it goes out.
Growth Engine architecture
Missions build target databases. Playbooks turn your marketing knowledge into reusable skills. The agent runs them continuously, but every email send waits for human approval before it goes out.
Continuous
Missions and Playbooks run on a schedule, not in one-shot batches.
Human-in-the-loop
Every email campaign waits for human approval before sending.
One backbone
Validation reads the same record the CRM displays. No sidecar warehouse.
Why this is hard
According to industry analyses, the autonomous AI SDR cohort sold as a full SDR replacement saw 50–70% customer churn within 90 days. Email-accuracy rates of ~79% on agent-sourced lists meant roughly one in five messages bounced, well above the 2–5% ceiling a healthy program tolerates and far above Gmail's February 2024 spam-complaint cap of 0.30%.
< 0.30% · Gmail spam-complaint ceiling · Google bulk-sender requirements, Feb 2024
50–70% · 90-day churn for full-replacement AI SDRs · industry analyses, 2024–25 cohort
~79% · email accuracy on agent-sourced lists · ≈ 1-in-5 bounce at agent volume
The lesson is not "agents don't work." It is that autonomy without governance amplifies whatever was already broken in the pipeline. Growth Engine is built around the parts that have to be governed: sourcing against the canonical CRM record, training on your data only, and a human approval gate before any send. Full research in our State of Outbound Agentic Pipelines (2026), below.
Pillar 1: Missions
A Mission is a continuously refreshed, ICP-fit account list assembled from compliant data sources. The agent does the work of scraping, enriching, and deduplicating against your CRM so your reps see only accounts that are net-new and worth pursuing.
Compliant enrichment, not scraped lists.
Sources include Google Places (firmographic + location), Placer.ai (foot-traffic signal for retail/services ICPs), CRM history (won, lost, existing customers), and any first-party intent data you connect. Every source is rate-limited and consent-checked.
Deduplicated against the canonical record.
Validation queries the same `accounts` table the CRM displays. There is no eventual-consistency window where the agent prospects an account that closed-won this morning — the failure mode behind the most public 2024–25 AI SDR incidents.
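In code, that validation is nothing more than a lookup against the canonical table. A minimal sketch, using an in-memory SQLite database as a stand-in for the backbone; the `domain` and `stage` columns are illustrative, not the real schema:

```python
import sqlite3

# Illustrative stand-in for the same `accounts` table the CRM displays.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (domain TEXT PRIMARY KEY, stage TEXT)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?)",
    [("acme.com", "closed_won"), ("globex.com", "open_opportunity")],
)

def is_net_new(domain: str) -> bool:
    """A sourced account passes only if no canonical record already exists."""
    row = conn.execute(
        "SELECT 1 FROM accounts WHERE domain = ?", (domain,)
    ).fetchone()
    return row is None

rejected = is_net_new("acme.com")     # closed-won this morning: filtered out
admitted = is_net_new("initech.com")  # no canonical record: net-new
```

Because the check and the CRM read the same table, there is no replication lag for the agent to fall into.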
ICP-fit gate before any contact is added.
Each candidate is scored against the ICP definition you control (industry, size, signal pattern). Below the threshold, the contact never enters outreach — limiting downstream spam-complaint exposure at the source.
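The gate itself reduces to a score-and-threshold check. A toy sketch with made-up weights and fields; the real scoring model is trained per tenant:

```python
from dataclasses import dataclass

@dataclass
class IcpDefinition:
    industries: set[str]
    min_employees: int
    threshold: float  # below this, the contact never enters outreach

def icp_score(candidate: dict, icp: IcpDefinition) -> float:
    """Illustrative weighted score over industry fit and size band."""
    score = 0.0
    if candidate.get("industry") in icp.industries:
        score += 0.6
    if candidate.get("employees", 0) >= icp.min_employees:
        score += 0.4
    return score

icp = IcpDefinition(industries={"retail"}, min_employees=50, threshold=0.7)
candidate = {"industry": "retail", "employees": 120}
admitted = icp_score(candidate, icp) >= icp.threshold
```

The point is structural: a below-threshold candidate is dropped before outreach, so spam-complaint exposure is capped at sourcing time, not cleaned up after sending.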
Continuous, not batch.
Missions run on a schedule. New accounts surface as they appear in the source data; stale accounts decay out. The target list is always current without you re-running anything.
Pillar 2: Playbooks
A Playbook is a typed bundle of agent skills (ICP scoring, outreach copy, reply triage, re-engagement) trained on your company's own marketing knowledge — past campaigns, win/loss notes, brand voice, sales calls. You own the data and the trained skills. They are bound to your tenant and used only for your work.
You own the training data and the trained skills.
Customer marketing data (past campaigns, win/loss notes, ICP definitions, brand voice, sales call transcripts) is uploaded to your tenant, used to train your playbook skills, and never pooled with other customers or used to train shared models.
Skills are typed, testable, and composable.
Each skill (e.g., `score-icp`, `draft-first-touch`, `triage-reply`) has a defined input/output contract and a versioned set of training examples. You can swap, retrain, or roll back individual skills without touching the rest of the Playbook.
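A sketch of what "typed and versioned" means in practice, with hypothetical names; the registry pattern below is illustrative, not the production interface:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Skill:
    name: str                     # e.g. "score-icp"
    version: str                  # pinned, so rollback is a pointer change
    run: Callable[[dict], dict]   # defined input/output contract

def score_icp_v2(payload: dict) -> dict:
    """Toy implementation honoring the skill's dict-in, dict-out contract."""
    return {"score": 0.8 if payload.get("industry") == "retail" else 0.2}

# Swapping or rolling back one skill is a registry change,
# leaving the rest of the Playbook untouched.
registry = {("score-icp", "v2"): Skill("score-icp", "v2", score_icp_v2)}
result = registry[("score-icp", "v2")].run({"industry": "retail"})
```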
Brand voice and policy are enforced, not suggested.
A Playbook includes hard constraints — words to avoid, claims that require citation, sequence length caps — that are checked before generated copy reaches a draft. Reward-hacking shortcuts ("write spammier copy to maximize replies") are filtered out structurally, not socially.
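A minimal sketch of such a structural gate, with invented policy values; the real constraint set is defined per Playbook:

```python
BANNED_PHRASES = {"act now", "100% guaranteed"}  # illustrative policy terms
MAX_SEQUENCE_LENGTH = 4                          # illustrative cap

def passes_policy(draft: str, sequence_len: int) -> bool:
    """Hard gate that runs before generated copy reaches a draft."""
    lowered = draft.lower()
    if any(phrase in lowered for phrase in BANNED_PHRASES):
        return False
    return sequence_len <= MAX_SEQUENCE_LENGTH

rejected = passes_policy("Act now for a 100% guaranteed reply!", 3)
allowed = passes_policy("Saw your team is hiring SDRs this quarter.", 3)
```

Because the check is code, a reward-hacking shortcut like "write spammier copy" fails the gate deterministically; no reviewer goodwill is required.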
Refined from production outcomes.
Reflection events (replies, opportunities, opt-outs) flow back into skill training. The Playbook gets sharper over time on your specific funnel, not on a generic outbound benchmark.
Pillar 3: Human-in-the-loop sends
Missions and Playbooks run continuously and autonomously. The single step that does not run autonomously is the email send. Every campaign — every batch of outbound — is staged for human review against an approval gate before any message leaves the sender domain.
The autonomous parts: source, validate, reflect.
The agent runs Missions, scores leads, drafts copy, schedules cadence, and writes reflection events without waiting for a human. These are the parts where autonomy compounds value.
The supervised part: send.
When a campaign is ready, it queues for review. A human sees the cohort, the message, the predicted spam-complaint impact, and the deliverability state of the sender domain. Nothing sends without approval. That gate is the dividing line between governed AI SDRs that ship pipeline and ungoverned ones that don't.
Approval thresholds are configurable.
You set what requires approval: every campaign, only campaigns over N recipients, only campaigns to a new ICP, or only campaigns whose predicted spam-complaint rate is within X of Gmail's 0.30% ceiling. Routine campaigns can be auto-approved with a fast-track rule; novel ones always wait.
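A sketch of how such a rule might evaluate, with hypothetical defaults (`max_auto`, `margin` are assumptions, not shipped values):

```python
from dataclasses import dataclass

GMAIL_COMPLAINT_CEILING = 0.003  # 0.30%, Google bulk-sender requirements

@dataclass
class Campaign:
    recipients: int
    new_icp: bool
    predicted_complaint_rate: float

def needs_human_approval(c: Campaign, *, max_auto: int = 500,
                         margin: float = 0.001) -> bool:
    """Fast-track routine campaigns; novel or risky ones always wait."""
    if c.new_icp:                    # new ICP: never auto-approved
        return True
    if c.recipients > max_auto:      # large cohort: route to a human
        return True
    # Predicted complaint rate within `margin` of the ceiling: hold for review.
    return c.predicted_complaint_rate > GMAIL_COMPLAINT_CEILING - margin

routine = Campaign(recipients=200, new_icp=False, predicted_complaint_rate=0.0005)
risky = Campaign(recipients=200, new_icp=False, predicted_complaint_rate=0.0025)
```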
Append-only audit log on the SDB.
Every Mission run, every skill execution, every approval event, every send is logged to the same backbone the CRM and ERP read from. Reproducible after the fact. Auditable for compliance review — CAN-SPAM, GDPR legitimate-interest, EU AI Act disclosure.
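The append-only pattern can be sketched in a few lines; here a Python list stands in for the backbone table, and the field names are illustrative:

```python
import json
import time

audit_log: list[str] = []  # stand-in for an append-only table on the SDB

def log_event(actor: str, action: str, payload: dict) -> None:
    """Append only: events are added, never updated or deleted."""
    audit_log.append(json.dumps(
        {"ts": time.time(), "actor": actor, "action": action, "payload": payload}
    ))

log_event("agent", "mission_run", {"mission": "retail-expansion"})
log_event("jane@example.com", "campaign_approved", {"campaign": "q3-first-touch"})

# Replaying the log reproduces the sequence of decisions after the fact.
actions = [json.loads(line)["action"] for line in audit_log]
```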
The contract you can hold us to
Growth Engine is evaluated against the same measurement contract we publish for any autonomous outbound system. These are not vendor-pitch numbers: the quality gates come from Google's bulk-sender requirements and M3AAWG's industry best practices.
| Stage | Primary metric | Quality gate (with source) |
|---|---|---|
| Sourcing | ICP-fit rate of net-new accounts / week | Verified firmographics + intent · zero overlap with existing-customer or open-opportunity records |
| Validation | % of sourced leads passing dedupe + bounce + CRM checks | Hard-bounce removal per M3AAWG BCP · bounce rate < 2% · no duplicate against active CRM records |
| Outreach | Positive reply rate by cohort, first-touch and sequence | Spam-complaint rate < 0.30% (Google bulk-sender requirements) · ≤ 0.1% feedback-loop (M3AAWG) · unsubscribe < 2% |
| Reflection | Closed-loop % of agent learnings tied to opportunity-stage changes | Every ICP, message, and channel change logged with the agent action that caused it · reproducible from CRM event log |
See the full reasoning, primary sources, and autonomy-specific failure modes in State of Outbound Agentic Pipelines (2026) · § A measurement contract.
How it connects
Growth Engine reads and writes to the same Single Data Backbone as the rest of the platform. Validation joins against the canonical accounts table. Reflection writes to the same event log that drives revenue dashboards. Closed-loop attribution is a query, not an integration project.
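"A query, not an integration project" can be shown concretely. A sketch with an in-memory SQLite stand-in and invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (account TEXT, kind TEXT);           -- shared event log
CREATE TABLE opportunities (account TEXT, stage TEXT);   -- CRM view
INSERT INTO events VALUES ('acme.com', 'agent_outreach');
INSERT INTO opportunities VALUES ('acme.com', 'qualified');
""")

# Attribution is a join on the shared backbone, not an ETL pipeline:
# which agent-touched accounts reached a qualified opportunity stage?
rows = conn.execute("""
    SELECT e.account
    FROM events e
    JOIN opportunities o ON o.account = e.account
    WHERE e.kind = 'agent_outreach' AND o.stage = 'qualified'
""").fetchall()
```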
Drop your email and we'll walk you through a Mission build, a Playbook execution, and a HIL approval, audit log visible, on the same SDB that runs CRM and ERP. Early access; partner cohorts forming now.