1 · What AI models we use
SwarmEngines runs on AWS Bedrock AgentCore. The underlying foundation models we currently call include:
- Anthropic Claude Haiku 4.5 — low-latency reasoning, classification, templated drafting.
- Anthropic Claude Sonnet 4.6 — default for brand-voice drafting, scoring, multi-step reasoning.
- Anthropic Claude Opus 4.6 — reserved for high-stakes deep research and complex synthesis.
- Amazon Nova Sonic — real-time speech-to-speech for voice skills (AI Receptionist, voice callbots).
Each skill declares its model tier in the admin runbook. When we change the underlying model in a skill (e.g., upgrade Sonnet 4.6 → 5), we give customers at least 14 days' notice by email. You can always see the current model on your skill's detail page.
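The tier declaration can be pictured as a simple routing table from skill to tier to model. A minimal sketch — the skill names and the shape of the tables are illustrative, not SwarmEngines' actual configuration:

```python
# Illustrative only: a hypothetical skill -> tier -> model routing table.
MODEL_TIERS = {
    "haiku": "Claude Haiku 4.5",        # low-latency classification/drafting
    "sonnet": "Claude Sonnet 4.6",      # default brand-voice drafting, scoring
    "opus": "Claude Opus 4.6",          # high-stakes research and synthesis
    "nova-sonic": "Amazon Nova Sonic",  # real-time speech-to-speech
}

# Hypothetical per-skill declarations, as an admin runbook might record them.
SKILL_TIER = {
    "ai-receptionist": "nova-sonic",
    "review-responder": "sonnet",
    "deep-research": "opus",
    "intent-classifier": "haiku",
}

def model_for_skill(skill: str) -> str:
    """Resolve a skill name to the model of its declared tier."""
    return MODEL_TIERS[SKILL_TIER[skill]]
```

Keeping the tier, rather than a hard-coded model name, as the unit of declaration is what makes a model upgrade (Sonnet 4.6 → 5) a single-row change with a notice period rather than a per-skill migration.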
2 · Customer data and AI training
We do not use your data to train AI models. Ever.
Our contractual arrangements with Anthropic (via AWS Bedrock) and Amazon (for Nova Sonic) prohibit the use of customer inputs/outputs for model training. AWS Bedrock provides this as a default guarantee — see AWS Bedrock Data Protection.
If we ever wanted to use customer data to improve the platform (not the foundation model), we would obtain explicit opt-in consent first, and you would be able to revoke at any time.
3 · AI outputs — nature and limits
Probabilistic, not deterministic
AI outputs are probabilistic. Two runs with the same inputs can produce different outputs. We use techniques (temperature controls, prompt engineering, output validation) to reduce variance for customer-facing actions, but variance cannot be eliminated entirely.
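One concrete variance-reduction lever is pinning the sampling temperature low in the inference request. A sketch using the shape of the AWS Bedrock Converse API request — the model ID, prompt, and limits here are placeholders, and this builds the payload without sending it:

```python
def low_variance_request(model_id: str, prompt: str) -> dict:
    """Build a Bedrock Converse request tuned for low output variance.

    Sketch only: model_id and the token limit are illustrative, not
    SwarmEngines' production settings.
    """
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {
            "temperature": 0.0,  # minimize sampling randomness
            "topP": 0.9,
            "maxTokens": 1024,
        },
    }

# Sent (hypothetically) via boto3:
#   bedrock = boto3.client("bedrock-runtime")
#   bedrock.converse(**low_variance_request("anthropic.claude-...", "..."))
```

Even at temperature 0, identical inputs are not guaranteed identical outputs across runs or model versions — which is exactly why output validation stays in the pipeline.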
Factual accuracy is not guaranteed
AI models sometimes produce plausible-sounding but incorrect information ("hallucinations"). For any factual claim that matters (pricing, policies, guarantees, technical specs, legal claims), you are responsible for verifying the output before it goes out.
No professional advice
Outputs may touch legal, medical, tax, financial, or engineering subjects. They are not professional advice. Regulated decisions must be reviewed by a licensed professional. SwarmEngines skills that touch regulated areas (dental insurance verification, legal conflict check, etc.) are tools that inform human review — not substitutes for it.
4 · Customer disclosure obligations
AI disclosure laws are evolving fast. You are responsible for making the disclosures applicable to your jurisdiction(s). At minimum, you should:
- Disclose that communications are AI-generated when the law requires it. California's SB 1001 (bot disclosure for sales and voter-influence bots), the Colorado AI Act, and many other emerging state and international laws create disclosure duties. When in doubt, disclose.
- Voice calls in two-party-consent states: play an audible disclosure at the start of any recorded call ("This call may be recorded and handled by an AI assistant"). Our platform can inject this disclosure; you must enable it for calls to or from those states.
- Disclose at the moment of data collection if you're collecting information via an AI agent that you'll later use for marketing or sales. Consent must be informed.
- Be accurate about identity. Our agents are named by default (e.g., "Aria" for AI Receptionist). Do not configure them to claim they are a specific human employee who does not exist, and do not use first-person framing ("I went to the job site this morning") in a way that deceives the recipient.
5 · Agent I/O retention
We log the inputs and outputs of every agent run for 30 days, then auto-delete them. We retain this data for:
- Debugging when a customer reports a skill misbehaving.
- Replay for error recovery (re-running a failed job).
- Accurate billing and cost attribution.
- Abuse investigation on report.
Inputs and outputs are stored encrypted in our tenant-isolated stores. Only designated on-call engineers can access them on a case-specific, audited basis. Access logs are immutable.
Customers on Scale and Company tier plans can request shorter retention (7 days minimum). Contact contact@swarmengines.com.
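One common way a 30-day auto-delete is enforced on object storage is a lifecycle expiration rule. This sketch builds such a rule in the shape S3 expects — the prefix, the choice of S3, and the bucket name in the comment are assumptions for illustration, not a description of our actual storage layer:

```python
def retention_rule(prefix: str, days: int = 30) -> dict:
    """Build an S3-style lifecycle rule that expires agent I/O logs
    after `days` days. Illustrative sketch only; the real storage
    layer may differ."""
    if days < 7:
        raise ValueError("minimum retention is 7 days")
    return {
        "ID": f"expire-{prefix.strip('/')}-{days}d",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Expiration": {"Days": days},
    }

# Applied (hypothetically) via boto3:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="tenant-logs",
#       LifecycleConfiguration={"Rules": [retention_rule("agent-io/")]},
#   )
```

Pushing deletion into a storage-level lifecycle rule, rather than an application cron job, means retention keeps being enforced even if the application is down.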
6 · Human-oversight controls
Every skill that produces customer-facing actions exposes configuration knobs for human oversight:
- Approval-required mode — outputs are drafted and queued for a team member's one-click approval.
- Escalation thresholds — for reputation skills, any review or complaint below a configurable star rating is routed to your inbox for manual handling instead of receiving an automated public response.
- Tone limits — for voice skills, you configure acceptable tone (formal/balanced/warm) and the agent cannot deviate.
- Rate limits — per-skill caps on messages per contact per day.
- Auto-handoff triggers — keyword or sentiment triggers that switch an agent to a human.
We recommend leaving approval-required ON for the first 7–14 days of a new skill activation, then progressively relaxing as you build confidence. The wizard defaults to "approval required" for reputation and support skills on first run.
7 · Prohibited AI uses
In addition to the prohibited-content list in the Acceptable Use Policy, you may not use the Services to:
- Generate deepfakes, voice clones, or signatures of real people without their written permission.
- Manipulate elections or generate political disinformation.
- Impersonate a government agency, law-enforcement body, or regulated professional.
- Create content that fraudulently claims false credentials (medical, legal, engineering).
- Perform automated decision-making with legal or similarly significant effect on individuals (credit decisions, hiring, benefits, parole/probation assessment) without human review. EU customers note: GDPR Article 22 applies.
- Discriminate against protected classes in any way prohibited by applicable anti-discrimination law.
- Circumvent safety filters or try to jailbreak the underlying LLMs.
8 · AI incidents
If you observe an AI output that is seriously inappropriate — toxic, factually dangerous, privacy-violating, or legally problematic — report it to ai-safety@swarmengines.com. Include the skill, timestamp, and as much of the I/O as you can share. We triage within one business day and will let you know what changes we make.
Material incidents (systemic problems affecting many customers) are reported in a post-mortem on /security, with a summary of cause, scope, and remediation.