Responsible AI
Last updated: April 3, 2026
neww.ai operates an AI platform with 30 vertical products and 53+ engines. This page explains how we build, evaluate, and govern that platform: what we do, what we don't do, and how we're accountable.
Our principles
Safety by default
Safety controls are defaults, not opt-ins. Guardrails, rate limits, and content policies ship on every product.
Human oversight
Agents act, humans decide. High-impact actions require explicit confirmation. Every tool call is audit-logged.
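As an illustration of this pattern, here is a minimal sketch of a confirmation gate with audit logging. The tool names and helper functions are hypothetical, not our production API:

```python
import json
from datetime import datetime, timezone

# Hypothetical example: tools an operator has marked as high-impact.
HIGH_IMPACT = {"send_email", "delete_record"}

def audit_log(entry: dict) -> None:
    # Stand-in for an append-only audit store; here we just print JSON.
    print(json.dumps(entry))

def execute(tool: str, args: dict) -> dict:
    # Stand-in for real tool dispatch.
    return {"tool": tool, "status": "ok"}

def call_tool(agent: str, tool: str, args: dict, confirmed: bool = False) -> dict:
    """Log every call; refuse high-impact tools without explicit confirmation."""
    if tool in HIGH_IMPACT and not confirmed:
        raise PermissionError(f"'{tool}' is high-impact and requires user confirmation")
    audit_log({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "tool": tool,
        "args": args,
    })
    return execute(tool, args)
```

The point of the shape: logging happens on the same code path as execution, so no tool call can skip it.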
Continuous evaluation
We run evals on every model change, every prompt change, every deploy. Quality regressions block release.
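A sketch of what such a release gate can look like. The metric names, baseline scores, and tolerance are invented for the example:

```python
# Hypothetical baseline scores from the last accepted release.
BASELINE = {"accuracy": 0.91, "groundedness": 0.88}
TOLERANCE = 0.01  # largest per-metric regression we tolerate

def gate_release(candidate: dict) -> bool:
    """Return False (block the deploy) if any metric regresses past tolerance."""
    for metric, base in BASELINE.items():
        score = candidate.get(metric, 0.0)
        if score < base - TOLERANCE:
            print(f"blocked: {metric} regressed to {score:.2f} (baseline {base:.2f})")
            return False
    return True

assert gate_release({"accuracy": 0.92, "groundedness": 0.88})       # passes
assert not gate_release({"accuracy": 0.85, "groundedness": 0.88})   # blocked
```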
Transparency
We publish our system prompts, our eval results, and our incident reports. Trust is earned by being legible.
What we do
- Model selection is transparent: users can see, on request, which engine handled each interaction.
- We default to the smallest capable model. Larger models are used only when needed.
- Every agent has a named tool surface. Agents cannot call tools outside their declared capabilities (see the sketch after this list).
- Prompt injection mitigations are part of every product that reads untrusted input.
- PII is redacted before logs are written. Sensitive fields never enter analytics.
- We maintain a red-team playbook and run adversarial evals monthly.
- High-risk domains (medical, legal, financial) display clear disclaimers and route to human review where required.
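A minimal sketch of tool-surface enforcement, assuming a static allowlist per agent. The agent and tool names are invented for the example:

```python
# Hypothetical declared tool surfaces: each agent lists its allowed tools.
TOOL_SURFACE = {
    "support_agent": {"search_docs", "create_ticket"},
    "billing_agent": {"lookup_invoice"},
}

def dispatch(agent: str, tool: str) -> None:
    """Reject any call to a tool outside the agent's declared surface."""
    allowed = TOOL_SURFACE.get(agent, set())
    if tool not in allowed:
        raise PermissionError(f"{agent} may not call {tool}")
    print(f"{agent} -> {tool}")  # stand-in for real execution

dispatch("support_agent", "create_ticket")    # ok
# dispatch("support_agent", "lookup_invoice") # raises PermissionError
```

Because the check lives in the dispatcher rather than in each agent, an agent that is tricked into requesting an out-of-surface tool still cannot execute it.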
What we don't do
- We do not train our models on your data by default.
- We do not sell your data. Ever.
- We do not share identifiable user data with model providers beyond what's strictly needed to generate a response.
- We do not use dark patterns to extend engagement.
- We do not pretend AI output is human.
Reporting concerns
If you believe a neww.ai product has produced harmful, biased, or unsafe output, or if you discover a vulnerability, we want to hear from you.
Related: Acceptable Use Policy, Trust Center, Privacy.