Dentospire
Live data · updated 10 May 2026

Open AI Transparency

Every metric we track — published. Real accuracy, latency, cost, and provider numbers from production.

Early-stage data — Dentospire is in the first 90 days of paid clinic onboarding. The numbers below blend live telemetry with internal smoke-test runs. This page refreshes daily. Once we cross 10K production calls, raw aggregates will fully replace this blended slice.

Why this page exists

No other dental SaaS — Dentrix, Eaglesoft, Practo Ray, Drlogy, Cliniify, Carestream, or any of the new "AI dental" entrants — publishes their AI accuracy, cost, latency, or provider numbers. You either trust their marketing copy, or you don't. There is nothing in between.

Dentospire is built by a practising dentist for clinics that handle real patient data under DPDP Act 2023 and UK GDPR. We think you deserve to see the numbers. So we publish them — every AI call is logged, every cost is summed, every provider fallback is tracked, and this page reads straight from that table. If our numbers ever get worse, you'll see it here before we tell you.

Live AI metrics

Aggregated from ai_usage_events (per-call telemetry table). 5,136 calls logged in the last 30 days.
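The roll-up is straightforward: filter the per-call rows for a feature, then average. A minimal sketch, assuming an illustrative row shape — the field names below are not the production schema of ai_usage_events:

```typescript
// Hypothetical row shape for the ai_usage_events telemetry table.
// Field names are illustrative only, not the production schema.
interface AiUsageEvent {
  feature: "xray" | "voice_soap" | "dara" | "quick_mode";
  latencyMs: number;
  costUsd: number;
  success: boolean;
}

// Roll a window of events up into the headline numbers shown on this page.
function aggregate(events: AiUsageEvent[], feature: AiUsageEvent["feature"]) {
  const rows = events.filter((e) => e.feature === feature);
  const total = rows.length;
  const avgLatencyMs = rows.reduce((s, e) => s + e.latencyMs, 0) / total;
  const avgCostUsd = rows.reduce((s, e) => s + e.costUsd, 0) / total;
  return { total, avgLatencyMs, avgCostUsd };
}

// Tiny sample window (made-up values) to show the shape of the result.
const sample: AiUsageEvent[] = [
  { feature: "dara", latencyMs: 1700, costUsd: 0.011, success: true },
  { feature: "dara", latencyMs: 2000, costUsd: 0.011, success: true },
  { feature: "xray", latencyMs: 8400, costUsd: 0.042, success: true },
];

const dara = aggregate(sample, "dara");
// dara.total === 2, dara.avgLatencyMs === 1850
```

The same pass over the full 30-day window produces every figure in the cards below.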

AI X-ray analysis

Vision · Multi-provider
Total analyses
1,284
Avg latency
8.4s
Cost per analysis
$0.0420
2nd-opinion concordance
91.0%
Provider mix
Anthropic 62.0% · Bedrock 21.0% · Gemini 12.0%

Voice-to-SOAP

Speech · Multilingual
Total transcriptions
642
Avg WER
8.4%
Languages used
en-IN · hi-IN · ta-IN · te-IN · kn-IN · mr-IN
Cost per minute
$0.0180

DARA AI assistant

Natural-language · Clinic data
Total queries
3,120
Avg response time
1.85s
Avg tokens / query
920
Cost per query
$0.0110

Quick Mode

3-min consultation
Sessions
218
Avg time-to-completion
174s
Approval rate
93.0%
Doctor edit rate
27.0%

AI provider fallback chain

If any single AI provider is unavailable — rate-limited, quota-exhausted, returning errors, or globally degraded — Dentospire automatically routes to the next provider. We track and publish per-provider success rates.

Tier 1 · AWS Bedrock · Claude Sonnet 4.5
Tier 2 · Anthropic · Claude Sonnet 4.5
Tier 3 · Google Gemini · Gemini 2.5 Pro
Tier 4 · Vertex AI · Gemini 2.0 Flash
Tier 5 · Groq · Llama 3.3 70B
Tier 6 · OpenAI · GPT-4o-mini
Tier 7 · NVIDIA NIM · Llama 3.3 70B
Tier 8 · Anthropic Haiku · Claude Haiku 4.5
Provider / model                       Calls (30d)   Share of traffic
AWS Bedrock (Claude Sonnet 4.5)        2,157         42.0%
Anthropic (Claude Sonnet 4.5)          924           18.0%
Google Gemini (Gemini 2.5 Pro)         616           12.0%
Vertex AI (Gemini 2.0 Flash)           462           9.0%
Groq (Llama 3.3 70B)                   359           7.0%
OpenAI (GPT-4o-mini)                   308           6.0%
NVIDIA NIM (Llama 3.3 70B)             205           4.0%
Anthropic Haiku (Claude Haiku 4.5)     105           2.0%

Data integrity

Logged

Every AI call writes to ai_usage_events. The numbers above read directly from that table.

Per-call detail

Model name, input/output tokens, cache tokens, latency, cost, success flag, error code.

PHI redaction

All PHI is redacted via redactPII() and verified by assertNoPhiLeak before any external AI provider call.
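The pattern is redact first, then verify nothing survived before the request leaves the building. A simplified sketch of that gate — the real redactPII / assertNoPhiLeak cover far more PHI categories, and the two patterns below are illustrative examples only:

```typescript
// Illustrative redact-then-verify gate. The real implementation handles
// many more PHI categories; these two patterns are examples only.
const PHI_PATTERNS: Array<[RegExp, string]> = [
  [/\b\d{10}\b/g, "[PHONE]"],                  // 10-digit phone numbers
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"], // email addresses
];

function redactPII(text: string): string {
  // Replace every match of every pattern with its placeholder tag.
  return PHI_PATTERNS.reduce((t, [re, tag]) => t.replace(re, tag), text);
}

function assertNoPhiLeak(text: string): void {
  // Final check: if any pattern still matches, block the outbound call.
  for (const [re] of PHI_PATTERNS) {
    if (re.test(text)) throw new Error("PHI leak detected — call blocked");
  }
}

// Gate applied before any external provider call:
const prompt = redactPII("Patient at 9876543210 reports molar pain");
assertNoPhiLeak(prompt); // throws if anything slipped through
```

Only the redacted prompt ever reaches an external provider; a failed assertNoPhiLeak aborts the call instead of degrading silently.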

Cost transparency

Last 30 days · all clinics · all features
$128.40
Across 5,136 AI calls — averaging $0.0250 per call.
This is the total cost we pay to AI providers — not what we charge clinics. Dentospire absorbs the cost spread and bills clinics on a fixed-tier subscription.
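The per-call average is just the published spend divided by the published call count — anyone can reproduce it:

```typescript
// Sanity check on the published aggregate: total spend / call count.
const totalCostUsd = 128.4; // last-30-days provider spend (from this page)
const totalCalls = 5136;    // logged AI calls over the same window
const avgPerCall = totalCostUsd / totalCalls;
// avgPerCall.toFixed(4) → "0.0250"
```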

Compliance

DPDP Act 2023 (India) · UK GDPR · DCI-compliant consent · AES-256 encryption · 24-hour EMR edit lock · SHA-256 tamper detection

Full details in our Data Processing Agreement and Sub-processors list.