Orepli reads every inbound email, finds the answer in your knowledge base, and drafts a reply. Auto-send when it's confident. Escalate when it isn't. Every step cited and logged.
Support inboxes are a queue of repeats. The answer is already in your docs — a policy, a product spec, a how-to. But the human in the loop still has to find it, phrase it, and paste it. Meanwhile first-response time slips, tone drifts, and nobody can audit why a reply said what it said.
Every email runs through four gates: connect, ground, classify, reply. You keep control at each one. Turn auto-send on when you're comfortable. Leave it off forever if you prefer every reply reviewed.
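The four gates can be pictured as a simple pipeline. This is an illustrative sketch of the flow, not Orepli's actual API; every name here is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class Email:
    """Inbound message plus the state each gate adds (hypothetical shape)."""
    sender: str
    body: str
    trail: list = field(default_factory=list)  # audit trail of gates passed

def connect(email):   # gate 1: ingest from the connected mailbox
    email.trail.append("connect"); return email

def ground(email):    # gate 2: retrieve KB chunks for context (stubbed here)
    email.trail.append("ground"); return email

def classify(email):  # gate 3: route by intent
    email.trail.append("classify"); return email

def reply(email):     # gate 4: draft, then auto-send or escalate
    email.trail.append("reply"); return email

def run_pipeline(email):
    for gate in (connect, ground, classify, reply):
        email = gate(email)
    return email
```

Because each gate returns the email with its trail appended, the full journey of any message is inspectable after the fact.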
Plug in Gmail in a click, or any IMAP/SMTP mailbox with encrypted credentials. Multiple mailboxes per workspace — one per queue, per team, per region.
Drop in FAQs, help articles, refund policies, product specs — anything in plain text or Markdown. Orepli chunks it, embeds it, and scopes it to the mailbox that should use it.
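Chunking can be as simple as a fixed window with overlap, so no answer gets split across a chunk boundary. A minimal sketch of the idea, not Orepli's actual chunking strategy:

```python
def chunk_text(text, max_chars=500, overlap=50):
    """Split text into fixed-size chunks that overlap, so sentences
    near a boundary appear whole in at least one chunk."""
    if overlap >= max_chars:
        raise ValueError("overlap must be smaller than max_chars")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap  # step forward, keeping the overlap
    return chunks
```

Each chunk is then embedded and stored, scoped to its mailbox.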
Pick a tone, set the confidence threshold a draft must clear before auto-send, and choose how many KB chunks to retrieve per reply. Start in draft-only mode until you trust the output.
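Those per-mailbox knobs might look like this as a config object. The field names and defaults are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class MailboxSettings:
    """Illustrative per-mailbox settings; names and defaults are assumptions."""
    tone: str = "warm"                   # voice used for drafts
    confidence_threshold: float = 0.85   # a draft must clear this to auto-send
    top_k: int = 5                       # KB chunks retrieved per reply
    draft_only: bool = True              # safe default: never auto-send

def can_auto_send(settings: MailboxSettings, confidence: float) -> bool:
    # Draft-only mode wins regardless of confidence.
    return (not settings.draft_only) and confidence >= settings.confidence_threshold
```

Note the default: a fresh mailbox never auto-sends until you flip it.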
Every inbound email runs through the pipeline. You see the full timeline — classification, retrieved snippets with citations, the draft, and either the send confirmation or the escalation reason.
Built on the stack that serious RAG teams already run — Postgres with pgvector, Claude for generation, explicit citations, per-tenant isolation. No black box.
Every draft cites the KB chunks it was built from. pgvector under the hood. Swap the embedder if you need HF, OpenAI, or local.
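A retrieval query against pgvector could look like the following. The table and column names are assumptions, not Orepli's schema; `<=>` is pgvector's cosine-distance operator.

```python
# Cosine-similarity search over KB chunks, scoped to one mailbox.
# Pass query_vec, mailbox_id, and top_k as bound parameters.
RETRIEVAL_SQL = """
SELECT id, source_doc, content,
       1 - (embedding <=> %(query_vec)s::vector) AS similarity
FROM kb_chunks
WHERE mailbox_id = %(mailbox_id)s
ORDER BY embedding <=> %(query_vec)s::vector
LIMIT %(top_k)s;
"""
```

Returning `source_doc` alongside `content` is what makes per-chunk citations in the draft possible.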
Replies only auto-send when model confidence clears your threshold. Everything else becomes a draft or a routed escalation.
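The send-or-escalate decision reduces to a small routing function. The escalation floor and the three outcomes are illustrative, not Orepli's actual values:

```python
def route_reply(confidence, threshold, auto_send_enabled, escalation_floor=0.3):
    """Decide a draft's fate: 'send', 'draft', or 'escalate'.
    Values are hypothetical; only the shape of the gate matters."""
    if auto_send_enabled and confidence >= threshold:
        return "send"       # clears your threshold: goes out automatically
    if confidence < escalation_floor:
        return "escalate"   # too uncertain to even draft: route to a human
    return "draft"          # in between: held for review
```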
Per-mailbox voice — terse, warm, formal, technical. Different queues, different voices, same knowledge base.
Edge cases forward to the right human with the reason, the retrieved context, and the draft we would have sent.
Received → classified → retrieved → drafted → sent. Every step is logged, queryable, and isolated per tenant.
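An append-only, tenant-scoped log is enough to make that timeline queryable. A sketch of the logging idea, with hypothetical field names:

```python
import time

class PipelineLog:
    """Append-only event log; every query is isolated to one tenant."""
    def __init__(self):
        self._events = []

    def record(self, tenant_id, email_id, stage, detail=None):
        self._events.append({
            "tenant_id": tenant_id, "email_id": email_id,
            "stage": stage, "detail": detail, "ts": time.time(),
        })

    def timeline(self, tenant_id, email_id):
        # Full journey of one email, never crossing tenant boundaries.
        return [e["stage"] for e in self._events
                if e["tenant_id"] == tenant_id and e["email_id"] == email_id]
```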
Failures never silently disappear. Retry, inspect, or escalate. Your ops team gets a real picture of what broke and why.
Cited chunks, confidence score, model, latency, token count. When a draft ships, you can trace back why — and when it doesn't, you see exactly which rule caught it.
Pick the plan that fits your mailbox count and monthly token budget. Overage is linear — no surprise bills, no throttles. Upgrade or downgrade any time.
For a founder or a small team automating a single support queue.
For small support teams running separate queues for sales, ops, and support.
For teams scaling RAG-native email across brands or regions.
Still have questions? sales@orepli.com — we reply with Orepli, obviously.
Only when the draft's confidence score is above the threshold you set on the mailbox. Everything under the threshold becomes a reviewable draft or gets escalated. You can also run a mailbox in draft-only mode indefinitely.
Gmail via OAuth, or any IMAP/SMTP inbox with TLS. Credentials are encrypted at rest with a tenant-scoped key. OAuth tokens are refreshed automatically.
Claude from Anthropic by default, with a mock mode for local development. Embeddings default to a deterministic local hash embedder; you can switch to Hugging Face sentence-transformers or a managed provider via a single env var.
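A deterministic hash embedder plus an env-var switch can be sketched in a few lines. This is an illustration of the pattern; the variable name `EMBEDDING_PROVIDER` and the hashing scheme are assumptions, not Orepli's implementation:

```python
import hashlib
import math
import os

def hash_embed(text, dim=64):
    """Deterministic local embedder: hash each token into a bucket,
    then L2-normalize. Same text in, same vector out -- every time."""
    vec = [0.0] * dim
    for token in text.lower().split():
        bucket = int(hashlib.sha256(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def get_embedder():
    # Hypothetical single env var selecting the embedding backend.
    provider = os.environ.get("EMBEDDING_PROVIDER", "local-hash")
    if provider == "local-hash":
        return hash_embed
    raise NotImplementedError(f"wire up provider {provider!r} here")
```

Determinism is the point of the default: local development and tests never depend on a network call.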
Knowledge base documents are scoped to the mailbox you upload them to. Email content is stored per tenant in Postgres with pgvector. You can request export or deletion at any time. See the Privacy page for the full picture.
Yes. Register a workspace, connect a mailbox, run inbound emails through the simulator, and preview drafts before turning on auto-send. No card needed to evaluate.
Yes, for Scale customers and above. The stack is FastAPI + Postgres + pgvector + Next.js, deployable via Docker Compose. Contact sales@orepli.com to discuss.
Spin up a workspace, connect a mailbox, upload a KB, and watch the pipeline work on your real inbox. No credit card to evaluate.