Internal Research Source
Primary internal technical manuscript covering architecture, safety logic, validation doctrine, and claim boundaries.
Section index
---
doc_id: ASTRACBC-OCULAR-FULL
title: AstraCBC Smartphone-Only Ocular Biomarker SaMD Full Documentation
version: v1.0-draft
status: draft
owner: Tambua Health Engineering
audience: Product, Clinical, Regulatory, ML, QA, Operations
device_name: AstraCBC
updated_at: 2026-02-15
---
| Section | Target pages |
|---|---:|
| 1. Executive Summary | 3 |
| 2. AstraCBC Extract and Gap Analysis | 10 |
| 3. Product Definition and Scope | 12 |
| 4. Regulatory and QMS | 18 |
| 5. Optics, Capture, and Ergonomics | 18 |
| 6. Quality Control and Preprocessing | 12 |
| 7. ML Architecture and Arbitration | 16 |
| 8. Data Strategy and Labeling | 14 |
| 9. Verification, Validation, and Trials | 16 |
| 10. Software Architecture and Operations | 15 |
| 11. UX and WCAG | 8 |
| 12. Risk Register and Safety Case | 10 |
| 13. Business and Deployment | 10 |
| 14. Claim to Evidence Map | 4 |
| Total | 166 |
---
AstraCBC is a smartphone-only ocular biomarker SaMD program that aims to produce early health risk signals from guided eye capture, with no external hardware or attachments. The intended clinical posture is conservative and safety-first: quality-gated outputs, explicit abstention states, confidence bands, and mandatory direction to confirmatory care before any treatment decision.
The inherited internal architecture from the AstraCBC base whitepaper is reused as the core process contract:
`guided capture -> QC -> feature extraction -> ensemble inference -> rules + ML arbitration -> output or abstain -> escalation guidance`
This full document converts that architecture into an implementation-ready dossier covering engineering, validation, regulatory, usability, risk management, and deployment operations.
| Parameter | Provisional target | Notes |
|---|---:|---|
| Capture duration | 6 to 12 s | Multi-frame signal stabilization |
| Minimum usable frame rate | 24 fps | 30 fps preferred |
| Minimum input resolution | 1280 x 720 | Higher accepted if available |
| Max allowed motion blur ratio | <= 0.20 | Above threshold routes to reacquire |
| Pass confidence threshold | >= 0.70 | Below routes to abstain/reacquire |
| Overall abstain target | <= 25% | Tracked by device and subgroup |
| Critical alert false-negative target | <= 10% | Endpoint-specific and conservative |
| Strategy | Benefit | Risk | Decision |
|---|---|---|---|
| Aggressive output with low abstain | Higher completion | Unsafe false confidence | Rejected |
| Conservative output with abstain | Safer clinical posture | More retakes/abstains | Selected |
| No device stratification | Simpler ops | Hidden bias/performance drift | Rejected |
| Device-aware release gates | Better control | Higher operational complexity | Selected |
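The capture-stage routing implied by the provisional targets can be sketched as a small decision function. This is an illustrative sketch only: the constant names and the exact precedence of checks are assumptions, and the thresholds shown are the provisional (not locked) values from the table above.

```python
# Hypothetical sketch of capture-stage routing using the provisional targets.
# Constant names and check ordering are illustrative assumptions.
MAX_MOTION_BLUR_RATIO = 0.20   # above this -> reacquire
PASS_CONFIDENCE = 0.70         # below this -> abstain/reacquire
MIN_FRAME_RATE = 24            # fps (30 preferred)
MIN_RESOLUTION = (1280, 720)   # higher accepted if available

def route_capture(blur_ratio, confidence, fps, resolution):
    """Return 'pass', 'reacquire', or 'abstain' for one capture attempt."""
    w, h = resolution
    if fps < MIN_FRAME_RATE or w < MIN_RESOLUTION[0] or h < MIN_RESOLUTION[1]:
        return "abstain"      # unsupported capture conditions: no output
    if blur_ratio > MAX_MOTION_BLUR_RATIO:
        return "reacquire"    # motion blur over threshold routes to retry
    if confidence < PASS_CONFIDENCE:
        return "reacquire"    # low confidence: retry before abstaining
    return "pass"
```

The ordering reflects the selected conservative posture: hard device gates abstain outright, while recoverable quality issues route to reacquire first.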
```mermaid
flowchart LR
A[Guided Capture] --> B[Quality Gates]
B -->|pass| C[Features]
B -->|reacquire| A
B -->|abstain| Z[Inconclusive + Confirmatory Advice]
C --> D[Model Ensemble]
D --> E[Rules + ML Arbiter]
E --> F[Risk Signal + Confidence + Severity]
F --> G[Escalation Guidance]
```

```text
+--------------------------------------------------------------+
| AstraCBC Result                                              |
| Quality: PASS  Confidence: MODERATE  Severity: GRADE_2       |
| Signal: Anemia-risk pattern                                  |
| Next step: Confirm with clinician within 7 days              |
+--------------------------------------------------------------+
```

---
This section maps what is directly reusable from the existing AstraCBC whitepaper and what must be re-authored for ocular biomarker scope.
Reusable core:
| Extracted control | Current state | Ocular-required update |
|---|---|---|
| QC states | pass/reacquire/abstain | Add ocular ROI confidence threshold |
| Confidence bands | low to very_high | Add calibration drift confidence attenuation |
| Severity grades | grade_1 to grade_4 | Add eye-specific contraindication routing |
| Escalation timer | supported | Bind to endpoint-specific safety policy |
| Area | Reuse as-is | Adapt | Replace |
|---|---|---|---|
| Safety output model | Yes | Minor copy updates | No |
| Disease label taxonomy | Partial | Ocular relevance filter | Yes for non-ocular-only labels |
| Validation framework | Yes | Add ocular capture protocol | No |
| Data contracts | Partial | Add ocular ROI metadata | Yes where CBC-specific |
```mermaid
flowchart TD
S1[ASTRACBC Source Sections] --> M1[Method Extraction]
S1 --> L1[Limit Extraction]
S1 --> A1[Assumption Extraction]
M1 --> O1[Ocular Mapping]
L1 --> O1
A1 --> O1
O1 --> G1[Gap Register]
G1 --> P1[Protocol + SAP + Claim Map]
```

```text
Traceability Matrix
AstraCBC section -> Ocular requirement -> Owner -> Due date -> Status
```

---
Product class: software as a medical device delivered as a smartphone app.
Locked constraints:
Primary output types:
| Requirement | Target |
|---|---:|
| Time-to-first-result | <= 3 min from capture start |
| Capture guidance completion rate | >= 85% |
| Successful first-pass QC | >= 60% on supported devices |
| Reacquire completion | >= 70% |
| Hard safety wording display | 100% of high-severity outputs |
| Product framing | Pros | Cons | Decision |
|---|---|---|---|
| Diagnostic replacement | Strong consumer appeal | Unsafe and non-compliant | Rejected |
| Early risk screening | Safety-aligned | Requires clearer UX | Selected |
| Silent uncertainty | Cleaner UI | Unsafe misuse risk | Rejected |
| Explicit confidence + abstain | Safer behavior | More complexity | Selected |
```mermaid
flowchart LR
U[User] --> C[Capture]
C --> Q[QC]
Q -->|pass| R[Risk Signal]
Q -->|fail| H[Reacquire Help]
R --> E[Escalation Guidance]
E --> F[Confirmatory Path]
```

```text
Primary Result Card
- Risk signal: Infection-risk pattern
- Confidence: Moderate
- Quality: Pass
- Action: Seek clinic evaluation within 24h
```

---
This section defines the compliance backbone for the ocular SaMD dossier and aligns quality artifacts to standards and market expectations.
Minimum aligned frameworks:
| Regulatory artifact | Quantitative requirement |
|---|---:|
| Requirement traceability coverage | 100% requirements mapped to verification |
| Risk-control verification coverage | 100% for high-severity hazards |
| CAPA closure SLA | <= 30 days (critical) |
| Software SOUP inventory completeness | 100% components listed |
| Cybersecurity threat model review cadence | Quarterly minimum |
| Regulatory path style | Benefit | Risk | Decision |
|---|---|---|---|
| Broad first release claims | Larger market | High evidence burden | Rejected |
| Narrow staged claims | Lower initial risk | Slower expansion | Selected |
| Ad-hoc model updates | Velocity | Change risk | Rejected |
| PCCP-governed updates | Controlled iteration | Process overhead | Selected |
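The 100% requirement-traceability target above is mechanically checkable. A minimal sketch, assuming a simple requirement-to-verification mapping (the function names and data shape are illustrative, not an actual QMS tool API):

```python
# Illustrative traceability gate: every requirement must map to at least one
# verification activity before release (the 100% coverage target).
def traceability_gaps(requirements, verification_links):
    """Return requirement IDs with no linked verification activity.

    requirements: iterable of requirement IDs.
    verification_links: dict mapping requirement ID -> list of test IDs.
    """
    return sorted(r for r in requirements if not verification_links.get(r))

def release_gate_ok(requirements, verification_links):
    """True only when traceability coverage is 100%."""
    return not traceability_gaps(requirements, verification_links)
```

In practice this check would run in CI against the exported traceability matrix, blocking release candidates with any unmapped requirement.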
```mermaid
flowchart TD
PRD[Product Requirements] --> TR[Traceability Matrix]
TR --> V[Verification]
TR --> VAL[Validation]
RMF[ISO 14971 RMF] --> TR
UEF[IEC 62366 Usability File] --> VAL
SLP[IEC 62304 Lifecycle Plan] --> V
CEP[Clinical Evaluation Plan] --> VAL
PCCP[PCCP + MLOps] --> POST[Post-market Controls]
```

```text
Regulatory Document Stack
- Intended Use and Labeling
- Clinical Evaluation Plan
- Risk Management File
- Software Lifecycle File
- Usability Engineering File
- Cybersecurity File
```

---
Phone-only ocular capture must operate under uncontrolled real-world conditions while still producing quality-gated signals.
Primary target regions:
Fundus without attachments remains exploratory with high abstain expectation.
| Capture metric | Provisional target |
|---|---:|
| Eye ROI coverage | >= 65% target region visible |
| Focus confidence | >= 0.75 |
| Exposure clipping | <= 5% saturated pixels in ROI |
| Specular glare area | <= 8% of ROI |
| Head movement | <= 2.5 deg/s median during capture |
| Capture mode | Pros | Cons | Decision |
|---|---|---|---|
| Rear camera assisted | Better optics on many devices | Harder self-framing | Optional guided helper mode |
| Front camera self-capture | Better usability | Lower image quality on some devices | Default with capability checks |
| Flash-on always | Better SNR | Comfort/safety concerns | Adaptive only |
| Ambient-only | Comfortable | Quality instability | Allowed with stricter QC |
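The exposure-clipping and glare targets above translate directly into per-ROI pixel fractions. A hedged sketch, assuming an 8-bit grayscale ROI and a precomputed per-pixel glare mask (the saturation cutoff of 250 is an illustrative choice, not a specified value):

```python
# Sketch of the ROI exposure and glare checks from the capture metrics table.
# The 5% saturation and 8% glare ceilings are the provisional targets; the
# 0-255 grayscale representation and cutoff of 250 are assumptions.
SATURATION_LEVEL = 250          # near-max intensity counts as clipped
MAX_CLIPPED_FRACTION = 0.05     # <= 5% saturated pixels in ROI
MAX_GLARE_FRACTION = 0.08       # <= 8% of ROI flagged as specular glare

def roi_photometric_ok(pixels, glare_mask):
    """pixels: flat list of ROI intensities; glare_mask: booleans per pixel."""
    n = len(pixels)
    clipped = sum(1 for p in pixels if p >= SATURATION_LEVEL) / n
    glare = sum(1 for g in glare_mask if g) / n
    return clipped <= MAX_CLIPPED_FRACTION and glare <= MAX_GLARE_FRACTION
```

A failing result here feeds the reacquire path with targeted guidance ("retake in softer lighting") rather than silently degrading the inference input.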
```mermaid
flowchart LR
G[Guidance Overlay] --> F[Frame Acquisition]
F --> R[ROI Detection]
R --> E[Exposure Check]
E --> M[Motion Check]
M --> GL[Glare Check]
GL --> D[Decision: pass/reacquire/abstain]
```

```text
Capture Screen
[Eye alignment oval]
[Lighting meter]
[Stability meter]
[Prompt: Hold still for 4 seconds]
```

---
QC is a hard gate, not a soft score. No clinical output is permitted when minimum signal requirements are not met.
| QC check | Pass threshold | Fail action |
|---|---:|---|
| Blur metric | >= 0.70 | Reacquire |
| Motion artifact score | <= 0.20 | Reacquire |
| ROI confidence | >= 0.75 | Reacquire |
| Glare contamination | <= 0.08 | Reacquire/abstain |
| Illumination stability | >= 0.65 | Reacquire |
| Device compatibility | Supported tier only | Abstain with explanation |
| QC strategy | Safety | UX | Decision |
|---|---|---|---|
| Global static thresholds | Medium | Simple | Interim only |
| Device-tier adaptive thresholds | High | Medium complexity | Selected |
| No abstain branch | Low | Superficially simple | Rejected |
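The hard-gate semantics above can be captured with a worst-outcome-wins rule: every check emits pass/reacquire/abstain, and the most severe result determines the decision. A minimal sketch using the interim global thresholds (the device-tier adaptive variant would look these thresholds up per tier instead of hard-coding them):

```python
# Sketch of a worst-outcome-wins QC decision engine for the table above.
# Any non-pass result blocks clinical output; thresholds are the interim
# global values, shown as constants for illustration only.
SEVERITY = {"pass": 0, "reacquire": 1, "abstain": 2}

def qc_decision(blur, motion, roi_conf, glare, illumination, device_supported):
    results = [
        "pass" if blur >= 0.70 else "reacquire",          # blur metric
        "pass" if motion <= 0.20 else "reacquire",        # motion artifact
        "pass" if roi_conf >= 0.75 else "reacquire",      # ROI confidence
        "pass" if glare <= 0.08 else "reacquire",         # glare contamination
        "pass" if illumination >= 0.65 else "reacquire",  # illumination stability
        "pass" if device_supported else "abstain",        # device compatibility
    ]
    # Hard gate: the most severe outcome across all checks wins.
    return max(results, key=SEVERITY.get)
```

Because QC is a gate and not a score, no weighted averaging is applied: a single failing check is sufficient to withhold output.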
```mermaid
flowchart TD
I[Raw Frames] --> N[Normalization]
N --> C1[Blur Check]
N --> C2[Motion Check]
N --> C3[Glare Check]
N --> C4[ROI Confidence]
C1 --> D[QC Decision Engine]
C2 --> D
C3 --> D
C4 --> D
D -->|pass| O[Feature Extraction]
D -->|reacquire| R[Capture Retry]
D -->|abstain| A[Inconclusive]
```

```text
QC Panel
Focus: PASS
Motion: PASS
Glare: FAIL
Action: Retake in softer lighting
```

---
The ML stack combines feature-level models with rules-based safety arbitration.
Inference output is never shown without QC pass and calibration checks.
| ML metric | Target |
|---|---:|
| Primary endpoint AUC | >= 0.82 (endpoint-specific) |
| Calibration slope | 0.90 to 1.10 |
| Brier score ceiling | <= 0.18 |
| Subgroup performance delta | <= 0.07 |
| Inference latency | <= 1.2 s on supported devices |
| Model family | Pros | Cons | Decision role |
|---|---|---|---|
| Gradient boosted trees | Fast, interpretable features | Limited representation power | Baseline and fallback |
| Lightweight CNN/transformer | Better pattern capture | Higher complexity | Primary in supported tiers |
| Rule-only | Transparent | Low sensitivity to subtle patterns | Safety guardrails only |
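Since output display is conditional on calibration checks, the slope and Brier-score gates above need a concrete form. The Brier score below is the standard mean-squared-error definition; the gate function and its inputs (a precomputed calibration slope plus held-out probabilities and labels) are an illustrative sketch, not the actual evaluation harness:

```python
# Illustrative check of the calibration release gates: slope within
# 0.90-1.10 and Brier score <= 0.18 (targets from the metrics table).
def brier_score(probs, labels):
    """Mean squared error between predicted probabilities and 0/1 labels."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)

def calibration_gate_ok(slope, probs, labels):
    """True when both calibration release gates are met."""
    return 0.90 <= slope <= 1.10 and brier_score(probs, labels) <= 0.18
```

A model failing either gate would be blocked from serving output regardless of its discrimination (AUC) performance, consistent with the QC-first posture.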
```mermaid
flowchart LR
F[Engineered Features] --> M1[Model A]
F --> M2[Model B]
F --> M3[Model C]
M1 --> ENS[Ensemble Aggregator]
M2 --> ENS
M3 --> ENS
ENS --> R[Rules Engine]
R --> UQ[Uncertainty + Abstain Gate]
UQ --> OUT[Final Output]
```

```text
Model Card Snapshot
- Intended use: early risk signal
- Inputs: ocular-derived features only
- Failure modes: glare, low perfusion, extreme motion
- Required warning: confirmatory testing required
```

---
Data governance defines what is collected, how it is labeled, and how it is split for robust generalization.
| Dataset attribute | Minimum requirement |
|---|---:|
| Sites for initial model | >= 3 |
| Distinct device families | >= 8 |
| Each key subgroup sample | >= 200 (pilot target) |
| Holdout by site | 1 full unseen site minimum |
| Holdout by device family | >= 2 unseen families |
| Label type | Strength | Limitation | Use |
|---|---|---|---|
| Lab value paired labels | Strong objective anchor | Timing and logistics burden | Core endpoints |
| Clinician adjudication | Useful for composite phenotypes | Inter-rater variability | Secondary labels |
| Self-reported outcomes | Scalable | Lower reliability | Exploratory only |
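The site and device-family holdout requirements amount to a grouped split: designated sites and families must never contribute training data. A minimal sketch, assuming records carry `site` and `device_family` keys (the record shape and selection rule are illustrative assumptions):

```python
# Sketch of site- and device-family-aware holdouts: at least one full unseen
# site and two unseen device families never appear in training.
def grouped_holdout(records, holdout_sites, holdout_families):
    """Split records into (train, holdout) by site/device-family membership.

    records: iterable of dicts with 'site' and 'device_family' keys.
    holdout_sites / holdout_families: sets of group IDs reserved for eval.
    """
    train, holdout = [], []
    for r in records:
        if r["site"] in holdout_sites or r["device_family"] in holdout_families:
            holdout.append(r)   # unseen-at-training evaluation pool
        else:
            train.append(r)
    return train, holdout
```

Splitting by group rather than by record is what makes the holdout evidence speak to generalization across sites and hardware, not just across patients.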
```mermaid
flowchart TD
CAP[Capture Data] --> META[Metadata: device/OS/context]
CAP --> LAB[Reference Labels]
META --> CUR[Curated Dataset]
LAB --> CUR
CUR --> SPLIT[Train/Val/Test + Site/Device Holdouts]
SPLIT --> TRAIN[Model Development]
SPLIT --> EVAL[Locked Evaluation]
```

```text
Dataset Card
- Cohort size
- Device distribution
- Demographic distribution
- Label timing compliance
- Missingness profile
```

---
Verification confirms software and algorithm correctness. Validation confirms clinical performance in intended use settings.
| Trial metric | Target |
|---|---:|
| Protocol adherence | >= 95% |
| Endpoint confidence interval width | Predefined in SAP |
| Device holdout pass rate | >= 90% of endpoint threshold |
| Severe-case sensitivity | Endpoint-specific, safety-biased |
| Monitoring report cadence | Monthly minimum |
| Trial design | Pros | Cons | Decision |
|---|---|---|---|
| Single-site pilot | Fast start | Weak generalizability | Use for feasibility only |
| Multi-site prospective | Better external validity | Slower and costlier | Required for claims |
| Retrospective validation only | Cheap | High bias risk | Not sufficient |
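The SAP predefines confidence-interval widths for binomial endpoints such as sensitivity. One common interval choice for such endpoints is the Wilson score interval, sketched below from its standard closed form; whether the SAP actually selects Wilson (versus, say, Clopper-Pearson) is an assumption here:

```python
import math

# Wilson score interval for a binomial proportion, e.g. a sensitivity
# endpoint of 90 detected out of 100 severe cases. z = 1.96 gives ~95%.
def wilson_interval(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half
```

For 90/100 this gives roughly (0.83, 0.94), illustrating why the N targets in the claim map run into the hundreds: narrowing the interval enough to lock a Sens >= 0.90 claim requires substantially more cases.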
```mermaid
flowchart LR
P[Protocol] --> ENR[Enrollment]
ENR --> CAP[Guided Capture]
CAP --> REF[Reference Label Collection]
REF --> DB[Locked Trial Dataset]
DB --> SAP[Statistical Analysis]
SAP --> REP[Clinical Report]
REP --> DEC[Claim Decision]
```

```text
Trial Dashboard
- Enrolled: 1,240
- Label-complete: 1,105
- Protocol deviations: 3.2%
- Holdout device pass: 92%
```

---
The system must support safe real-time on-device inference, reliable audit trails, controlled updates, and post-market monitoring.
| Operational metric | Target |
|---|---:|
| App crash-free sessions | >= 99.5% |
| P95 inference latency | <= 1.5 s |
| Audit event completeness | 100% for decision-critical events |
| Incident acknowledgment SLA | <= 4 h (critical) |
| Rollback readiness | <= 60 min to previous stable model |
| Architecture choice | Benefit | Risk | Decision |
|---|---|---|---|
| Pure on-device only | Strong privacy | Limited fleet visibility | Hybrid optional telemetry |
| Full cloud inference | Central control | Latency/privacy dependence | Rejected for core flow |
| Staged rollout by cohort | Controlled risk | Slower full rollout | Selected |
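The monitoring-to-rollback loop can be sketched as a small controller: a drift alert from monitoring flips the release controller back to the previous stable model and records the decision for audit. Class and field names here are illustrative, not the actual service API:

```python
# Minimal sketch of the staged-release control loop: drift alert -> rollback
# to previous stable model, with an audit trail of decisions.
class ReleaseController:
    def __init__(self, current, previous_stable):
        self.current = current
        self.previous_stable = previous_stable
        self.events = []                 # audit trail of rollback decisions

    def on_monitoring_signal(self, drift_alert):
        """Process one monitoring signal; return the active model version."""
        if drift_alert:
            self.events.append(("rollback", self.current))
            self.current = self.previous_stable
        return self.current
```

Keeping the rollback package staged in advance is what makes the <= 60 min rollback-readiness target achievable: the controller only switches pointers, it never rebuilds.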
```mermaid
flowchart TD
APP[Mobile App] --> ENG[Inference Engine]
ENG --> LOG[Audit Logger]
LOG --> BUF[Secure Event Buffer]
BUF --> CLOUD[Optional Telemetry Service]
CLOUD --> MON[Monitoring + Drift]
MON --> REL[Release Controller]
REL --> APP
```

```text
Release Console
- Current model: v1.3.2
- Rollout: 20% cohort
- Drift alert: none
- Rollback package: ready
```

---
UX must balance clarity, safety, and completion. The experience should reduce user error and avoid overconfidence.
| UX metric | Target |
|---|---:|
| Task success (first capture) | >= 75% |
| Critical task error rate | <= 5% |
| Accessibility defects (critical) | 0 |
| Reading level of key safety copy | Grade 6-8 |
| WCAG contrast compliance | >= 4.5:1 text contrast |
| UX pattern | Pros | Cons | Decision |
|---|---|---|---|
| Dense scientific language | Precise | Low user comprehension | Rejected |
| Plain-language with expandable detail | Clear and scalable | Requires careful drafting | Selected |
| Hidden uncertainty | Cleaner UI | Unsafe interpretation | Rejected |
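The 4.5:1 target follows the WCAG 2.x contrast-ratio definition, (L1 + 0.05) / (L2 + 0.05) over the relative luminances of the lighter and darker colors. A direct sketch of that published formula for sRGB colors, usable as an automated check on the app's palette:

```python
# WCAG 2.x contrast ratio for sRGB colors, per the published definition.
def _channel(c):
    c /= 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """(L1 + 0.05) / (L2 + 0.05), lighter luminance on top; max is 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

Running this over every text/background pair in the design tokens makes the "zero critical accessibility defects" target verifiable in CI rather than by manual audit.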
```mermaid
flowchart LR
O[Onboarding] --> C[Guided Capture]
C --> Q[Quality Feedback]
Q --> R[Result Card]
R --> N[Next Step Guidance]
N --> H[Help/Support]
```

```text
Result UI (Accessible)
- Risk signal: moderate
- Confidence: moderate
- Quality: pass
- Action button: Find confirmatory care
- Secondary: Retake scan
```

---
Risk management follows ISO 14971 principles with hazard identification, control implementation, residual risk assessment, and post-market feedback loops.
| Risk control metric | Target |
|---|---:|
| High-severity hazard control verification | 100% |
| Unmitigated high residual risks | 0 accepted without executive waiver |
| Post-release safety signal review cadence | Weekly |
| CAPA closure for critical issues | <= 30 days |
| Safety strategy | Benefit | Risk | Decision |
|---|---|---|---|
| Output-first with warnings | Better completion | Unsafe misuse | Rejected |
| Gate-first with abstain | Lower harm probability | Higher friction | Selected |
| Static controls only | Simpler ops | Weak against drift | Rejected |
| Dynamic surveillance + CAPA | Better long-term safety | Operational overhead | Selected |
```mermaid
flowchart TD
H[Hazard Identification] --> A[Risk Analysis]
A --> C[Control Definition]
C --> V[Control Verification]
V --> R[Residual Risk Evaluation]
R --> PM[Post-Market Monitoring]
PM --> CAPA[Corrective/Preventive Actions]
CAPA --> H
```

```text
Risk Heatmap
Severity (y) vs Probability (x)
- R-001 glare misclassification: medium residual risk
- R-004 subgroup bias: medium residual risk
```

---
Deployment model is staged and safety-gated by country, device capability, and support readiness.
Core assets:
| Deployment metric | Target |
|---|---:|
| Supported-device coverage in launch markets | >= 70% of active smartphone base |
| Staged rollout blast radius | <= 20% per increment |
| Customer support first-response SLA | <= 24 h |
| Drift alert triage SLA | <= 48 h |
| Post-market model review cadence | Monthly minimum |
| Rollout strategy | Pros | Cons | Decision |
|---|---|---|---|
| Big-bang global | Fast awareness | High safety and support risk | Rejected |
| Staged by market/device | Controlled risk | Slower growth | Selected |
| Single static model forever | Stable ops | Performance drift risk | Rejected |
```mermaid
flowchart LR
DEV[Model Build] --> QA[Validation Gates]
QA --> R1[Staged Rollout 10%]
R1 --> R2[Staged Rollout 50%]
R2 --> R3[Full Rollout]
R1 --> MON[Monitoring]
R2 --> MON
R3 --> MON
MON -->|alert| RB[Rollback]
```

```text
Market Launch Card
- Country: KE
- Device support: 72%
- Rollout stage: 20%
- Safety incidents this week: 0 critical
```

```mermaid
gantt
title Ocular Smartphone-only SaMD Program
dateFormat YYYY-MM-DD
section Foundations
Claims + Intended Use Freeze :a1, 2026-03-01, 45d
QMS + Risk Baseline :a2, 2026-03-10, 75d
section Build
Capture + QC v1 :b1, 2026-04-01, 120d
ML + Arbitration v1 :b2, 2026-05-01, 120d
section Evidence
Pilot Study :c1, 2026-06-15, 120d
Holdout Validation :c2, 2026-10-15, 90d
Go/No-Go 1 :milestone, c3, 2027-01-20, 1d
section Clinical
Prospective Multisite Trial :d1, 2027-02-01, 180d
Go/No-Go 2 :milestone, d2, 2027-08-05, 1d
section Release
Staged Market Rollout + PMS :e1, 2027-08-10, 120d
```

---
All external claims must be linked to evidence packages and explicit pass thresholds before release.
| Claim governance metric | Target |
|---|---:|
| Public claims with mapped evidence | 100% |
| Claims missing predefined thresholds | 0 |
| Claims without subgroup gates | 0 |
| Claims without abstain limits | 0 |
| Claim | Risk | Reference standard | N target | Study type | Metrics | Pass threshold | Abstain limit | Subgroup gate | Labeling constraint |
|---|---|---|---:|---|---|---|---|---|---|
| Early anemia-risk signal from ocular capture | Missed risk | IMDRF N41 + AstraCBC policy | 1200 | Prospective paired-label | Sens, Spec, NPV | Sens >= 0.90 at locked threshold | <= 25% | Delta <= 0.07 | Confirmatory testing required |
| Hyperbilirubinemia-risk signal from sclera | Missed severe case | IMDRF N41 | 900 | Multisite prospective | AUC, Sens | AUC >= 0.82 and Sens >= 0.92 at safety cutpoint | <= 30% | Delta <= 0.08 | Not for treatment-only decision |
| Pupillary neuro-risk signal | False alarms | Clinical comparator protocol | 800 | Blinded comparator | ICC, F1 | ICC >= 0.85 and F1 >= 0.80 | <= 20% | Delta <= 0.10 | Use with escalation guidance |
| Universal diagnosis from any phone in any condition | Overclaim | AstraCBC prohibited claims | N/A | N/A | N/A | Not permitted | N/A | N/A | Prohibited |
| No confirmatory testing required | Harmful misuse | AstraCBC prohibited claims | N/A | N/A | N/A | Not permitted | N/A | N/A | Prohibited |
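The zero-tolerance governance metrics above reduce to a structural check: no claim ships unless it carries mapped evidence, a predefined pass threshold, a subgroup gate, and an abstain limit. A minimal sketch, with illustrative field names standing in for the real claim-record schema:

```python
# Sketch of the claim-governance gate: each claim record must carry every
# required governance field before release. Field names are illustrative.
REQUIRED_FIELDS = ("evidence", "pass_threshold", "subgroup_gate", "abstain_limit")

def claim_release_blockers(claim):
    """Return the list of missing governance fields for one claim record."""
    return [f for f in REQUIRED_FIELDS if not claim.get(f)]

def claims_ready(claims):
    """True only when every claim has all governance fields populated."""
    return all(not claim_release_blockers(c) for c in claims)
```

Run against the claim map, this turns "Claims missing predefined thresholds: 0" from a review-meeting assertion into an automated release blocker.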
```mermaid
flowchart LR
C[Claim Draft] --> E[Evidence Mapping]
E --> T[Threshold Definition]
T --> V[Validation Results]
V --> D[Decision: Approve/Reject]
D --> L[Labeling + UX Copy]
```

```text
Claim Review Screen
Claim: "Early anemia-risk signal"
Evidence: 3 studies linked
Status: CONDITIONAL APPROVAL
Missing: subgroup gate review signature
```

---
Do not publish or imply:
Required external wording baseline:
---