Internal Research Source

AstraCBC Ocular SaMD Full Document

Primary internal technical manuscript covering architecture, safety logic, validation doctrine, and claim boundaries.

Evidence status: active source. Updated 2026-02-15.


---
doc_id: ASTRACBC-OCULAR-FULL
title: AstraCBC Smartphone-Only Ocular Biomarker SaMD Full Documentation
version: v1.0-draft
status: draft
owner: Tambua Health Engineering
audience: Product, Clinical, Regulatory, ML, QA, Operations
device_name: AstraCBC
updated_at: 2026-02-15
---

AstraCBC Smartphone-Only Ocular Biomarker SaMD Full Documentation

Table of Contents and Target Pagination

| Section | Target pages |
|---|---:|
| 1. Executive Summary | 3 |
| 2. AstraCBC Extract and Gap Analysis | 10 |
| 3. Product Definition and Scope | 12 |
| 4. Regulatory and QMS | 18 |
| 5. Optics, Capture, and Ergonomics | 18 |
| 6. Quality Control and Preprocessing | 12 |
| 7. ML Architecture and Arbitration | 16 |
| 8. Data Strategy and Labeling | 14 |
| 9. Verification, Validation, and Trials | 16 |
| 10. Software Architecture and Operations | 15 |
| 11. UX and WCAG | 8 |
| 12. Risk Register and Safety Case | 10 |
| 13. Business and Deployment | 10 |
| 14. Claim to Evidence Map | 4 |
| Total | 166 |

---

1) Executive Summary

1.1 Overview

AstraCBC is a smartphone-only ocular biomarker SaMD program that aims to produce early health risk signals from guided eye capture using no external hardware or attachments. The intended clinical posture is conservative and safety-first: quality-gated outputs, explicit abstention states, confidence bands, and mandatory confirmatory-care direction for treatment decisions.

The internal architecture inherited from the AstraCBC base whitepaper is reused as the core process contract:

`guided capture -> QC -> feature extraction -> ensemble inference -> rules + ML arbitration -> output or abstain -> escalation guidance`

This full document converts that architecture into an implementation-ready dossier covering engineering, validation, regulatory, usability, risk management, and deployment operations.
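As a concrete sketch, the process contract above can be expressed as a small dispatcher. This is illustrative only: the callable names (`qc`, `extract`, `ensemble`, `arbiter`) and the string QC states are assumptions for the example, not the production interfaces.

```python
from enum import Enum, auto

class Outcome(Enum):
    RESULT = auto()
    REACQUIRE = auto()
    ABSTAIN = auto()

# Illustrative pipeline following the whitepaper's process contract:
# capture -> QC -> features -> ensemble -> arbitration -> output or abstain.
def run_pipeline(frames, qc, extract, ensemble, arbiter):
    qc_state = qc(frames)                 # "pass" | "reacquire" | "abstain"
    if qc_state == "reacquire":
        return Outcome.REACQUIRE, None
    if qc_state == "abstain":
        return Outcome.ABSTAIN, "inconclusive: seek confirmatory care"
    features = extract(frames)
    scores = ensemble(features)
    decision = arbiter(scores)            # rules + ML arbitration
    if decision is None:                  # the arbiter may still abstain
        return Outcome.ABSTAIN, "inconclusive: seek confirmatory care"
    return Outcome.RESULT, decision
```

Note that abstention can occur at two points: at the QC gate and again at the arbiter, which matches the "output or abstain" branch in the contract.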

1.2 Phone-only quantitative specifications (program-level)

| Parameter | Provisional target | Notes |
|---|---:|---|
| Capture duration | 6 to 12 s | Multi-frame signal stabilization |
| Minimum usable frame rate | 24 fps | 30 fps preferred |
| Minimum input resolution | 1280 x 720 | Higher accepted if available |
| Max allowed motion blur ratio | <= 0.20 | Above threshold routes to reacquire |
| Pass confidence threshold | >= 0.70 | Below routes to abstain/reacquire |
| Overall abstain target | <= 25% | Tracked by device and subgroup |
| Critical alert false-negative target | <= 10% | Endpoint-specific and conservative |

1.3 Trade-offs

  • Strict phone-only scope improves scalability but limits direct chemistry pathways.
  • Conservative abstention improves safety but reduces immediate completion rates.
  • Device coverage breadth increases reach but introduces camera pipeline variability.
  • Strong warning language protects users but may reduce marketing conversion.

1.4 Compare table

| Strategy | Benefit | Risk | Decision |
|---|---|---|---|
| Aggressive output with low abstain | Higher completion | Unsafe false confidence | Rejected |
| Conservative output with abstain | Safer clinical posture | More retakes/abstains | Selected |
| No device stratification | Simpler ops | Hidden bias/performance drift | Rejected |
| Device-aware release gates | Better control | Higher operational complexity | Selected |

1.5 Block diagram

```mermaid
flowchart LR
  A[Guided Capture] --> B[Quality Gates]
  B -->|pass| C[Features]
  B -->|reacquire| A
  B -->|abstain| Z[Inconclusive + Confirmatory Advice]
  C --> D[Model Ensemble]
  D --> E[Rules + ML Arbiter]
  E --> F[Risk Signal + Confidence + Severity]
  F --> G[Escalation Guidance]
```

1.6 Figure mock

```text
+----------------------------------------------------------+
| AstraCBC Result                                          |
| Quality: PASS  Confidence: MODERATE  Severity: GRADE_2   |
| Signal: Anemia-risk pattern                              |
| Next step: Confirm with clinician within 7 days          |
+----------------------------------------------------------+
```

1.7 Checklist

  • [ ] Product statement remains smartphone-only with no attachments.
  • [ ] Abstain state is visible in all user-facing results.
  • [ ] All high-risk outputs include confirmatory-care text.
  • [ ] Program-level metrics and thresholds are versioned.

1.8 Citations

  • AstraCBC base whitepaper sections 4-9, 11-16, 19.
  • IMDRF SaMD Clinical Evaluation (N41).
  • ISO 14971, IEC 62304, IEC 62366-1.

1.9 Acceptance criteria

  • Cross-functional leads approve product boundary and safety posture.
  • Program-level thresholds are frozen for pilot release planning.

---

2) AstraCBC Extract and Gap Analysis

2.1 Overview

This section maps what is directly reusable from the existing AstraCBC whitepaper and what must be re-authored for ocular biomarker scope.

Reusable core:

  • Smartphone-only boundary and no-hardware constraint.
  • Capture -> QC -> features -> inference -> arbitration -> abstain pattern.
  • Misclassification governance and conservative escalation language.
  • Risk and drift management philosophy.

2.2 Phone-only quantitative specifications

| Extracted control | Current state | Ocular-required update |
|---|---|---|
| QC states | pass/reacquire/abstain | Add ocular ROI confidence threshold |
| Confidence bands | low to very_high | Add calibration drift confidence attenuation |
| Severity grades | grade_1 to grade_4 | Add eye-specific contraindication routing |
| Escalation timer | supported | Bind to endpoint-specific safety policy |

2.3 Trade-offs

  • Reusing CBC-first wording accelerates drafting but can create claim confusion for ocular endpoints.
  • Preserving existing governance minimizes risk but may require stricter thresholds due to ocular confounders.
  • Generic failure modes are reusable, but ocular optics introduces additional glare and eyelid-occlusion complexity.

2.4 Compare table

| Area | Reuse as-is | Adapt | Replace |
|---|---|---|---|
| Safety output model | Yes | Minor copy updates | No |
| Disease label taxonomy | Partial | Ocular relevance filter | Yes for non-ocular-only labels |
| Validation framework | Yes | Add ocular capture protocol | No |
| Data contracts | Partial | Add ocular ROI metadata | Yes where CBC-specific |

2.5 Block diagram

```mermaid
flowchart TD
  S1[ASTRACBC Source Sections] --> M1[Method Extraction]
  S1 --> L1[Limit Extraction]
  S1 --> A1[Assumption Extraction]
  M1 --> O1[Ocular Mapping]
  L1 --> O1
  A1 --> O1
  O1 --> G1[Gap Register]
  G1 --> P1[Protocol + SAP + Claim Map]
```

2.6 Figure mock

```text
Traceability Matrix
AstraCBC section -> Ocular requirement -> Owner -> Due date -> Status
```

2.7 Checklist

  • [ ] Every inherited method has an ocular equivalent.
  • [ ] Every inherited limitation is represented in risk controls.
  • [ ] Every assumption is testable through planned experiments.
  • [ ] Gap owners are assigned with target dates.

2.8 Citations

  • AstraCBC sections 4-9, 11-16, 19.
  • IMDRF N41 evidence framework.

2.9 Acceptance criteria

  • Gap register is complete and signed off by systems, clinical, and regulatory leads.
  • No inherited claim appears without mapped ocular evidence.

---

3) Product Definition and Scope

3.1 Overview

Product class: software as a medical device delivered as a smartphone app.

Locked constraints:

  • No attachments.
  • No specimen collection.
  • No external calibration card.
  • No definitive diagnosis claims without confirmatory testing.

Primary output types:

  • Early risk signal label.
  • Confidence band.
  • Severity grade.
  • Escalation timer and next-step guidance.

3.2 Phone-only quantitative specifications

| Requirement | Target |
|---|---:|
| Time-to-first-result | <= 3 min from capture start |
| Capture guidance completion rate | >= 85% |
| Successful first-pass QC | >= 60% on supported devices |
| Reacquire completion | >= 70% |
| Hard safety wording display | 100% of high-severity outputs |

3.3 Trade-offs

  • Faster onboarding vs thorough safety education.
  • Broad market messaging vs precise claim boundaries.
  • Single global flow vs country-specific escalation rules.

3.4 Compare table

| Product framing | Pros | Cons | Decision |
|---|---|---|---|
| Diagnostic replacement | Strong consumer appeal | Unsafe and non-compliant | Rejected |
| Early risk screening | Safety-aligned | Requires clearer UX | Selected |
| Silent uncertainty | Cleaner UI | Unsafe misuse risk | Rejected |
| Explicit confidence + abstain | Safer behavior | More complexity | Selected |

3.5 Block diagram

```mermaid
flowchart LR
  U[User] --> C[Capture]
  C --> Q[QC]
  Q -->|pass| R[Risk Signal]
  Q -->|fail| H[Reacquire Help]
  R --> E[Escalation Guidance]
  E --> F[Confirmatory Path]
```

3.6 Figure mock

```text
Primary Result Card
- Risk signal: Infection-risk pattern
- Confidence: Moderate
- Quality: Pass
- Action: Seek clinic evaluation within 24h
```

3.7 Checklist

  • [ ] Intended use text aligns with claim map.
  • [ ] Contraindications are visible before use.
  • [ ] All outputs include quality status.
  • [ ] Confirmatory-care reminder is persistent.

3.8 Citations

  • AstraCBC sections 4, 7, 9, 19.
  • IMDRF N41.
  • IEC 62366-1.

3.9 Acceptance criteria

  • Product requirement document is approved by product, safety, and regulatory stakeholders.

---

4) Regulatory and QMS

4.1 Overview

This section defines the compliance backbone for the ocular SaMD dossier and aligns quality artifacts to standards and market expectations.

Minimum aligned frameworks:

  • IMDRF SaMD clinical evaluation.
  • ISO 13485 quality management process.
  • ISO 14971 risk management and residual risk acceptance.
  • IEC 62304 software lifecycle.
  • IEC 62366-1 usability engineering.
  • ISO 15004-1/2 and IEC 62471 for eye-directed light safety context.
  • FDA software, cybersecurity, and PCCP guidance.
  • HIPAA/GDPR privacy controls where applicable.

4.2 Phone-only quantitative specifications

| Regulatory artifact | Quantitative requirement |
|---|---:|
| Requirement traceability coverage | 100% requirements mapped to verification |
| Risk-control verification coverage | 100% for high-severity hazards |
| CAPA closure SLA | <= 30 days (critical) |
| Software SOUP inventory completeness | 100% components listed |
| Cybersecurity threat model review cadence | Quarterly minimum |

4.3 Trade-offs

  • Faster premarket filing vs stronger prospective evidence package.
  • Broad indications vs tighter claim-specific labeling.
  • Frequent model updates vs conservative PCCP governance.

4.4 Compare table

| Regulatory path style | Benefit | Risk | Decision |
|---|---|---|---|
| Broad first release claims | Larger market | High evidence burden | Rejected |
| Narrow staged claims | Lower initial risk | Slower expansion | Selected |
| Ad-hoc model updates | Velocity | Change risk | Rejected |
| PCCP-governed updates | Controlled iteration | Process overhead | Selected |

4.5 Block diagram

```mermaid
flowchart TD
  PRD[Product Requirements] --> TR[Traceability Matrix]
  TR --> V[Verification]
  TR --> VAL[Validation]
  RMF[ISO 14971 RMF] --> TR
  UEF[IEC 62366 Usability File] --> VAL
  SLP[IEC 62304 Lifecycle Plan] --> V
  CEP[Clinical Evaluation Plan] --> VAL
  PCCP[PCCP + MLOps] --> POST[Post-market Controls]
```

4.6 Figure mock

```text
Regulatory Document Stack
- Intended Use and Labeling
- Clinical Evaluation Plan
- Risk Management File
- Software Lifecycle File
- Usability Engineering File
- Cybersecurity File
```

4.7 Checklist

  • [ ] Intended use and contraindications approved.
  • [ ] RMF, SLP, UEF, and CEP baselines exist.
  • [ ] Cybersecurity and privacy controls documented.
  • [ ] PCCP change classes and validations defined.

4.8 Citations

  • IMDRF N41.
  • ISO 13485, ISO 14971.
  • IEC 62304, IEC 62366-1.
  • ISO 15004-1/2, IEC 62471.
  • FDA software/cybersecurity/PCCP guidance.

4.9 Acceptance criteria

  • Regulatory strategy memo approved and traceability matrix initialized with owners.

---

5) Optics, Capture, and Ergonomics

5.1 Overview

Phone-only ocular capture must operate under uncontrolled real-world conditions while still producing quality-gated signals.

Primary target regions:

  • Sclera
  • Conjunctiva
  • Pupil

Fundus imaging without attachments remains exploratory, with a high expected abstain rate.

5.2 Phone-only quantitative specifications

| Capture metric | Provisional target |
|---|---:|
| Eye ROI coverage | >= 65% target region visible |
| Focus confidence | >= 0.75 |
| Exposure clipping | <= 5% saturated pixels in ROI |
| Specular glare area | <= 8% of ROI |
| Head movement | <= 2.5 deg/s median during capture |
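The exposure-clipping target can be checked with a small helper. This is a minimal sketch assuming 8-bit luma samples from the detected eye ROI; the function names and the 5% ceiling wiring are illustrative, not the shipped capture code.

```python
def exposure_clipping_fraction(roi_pixels, saturation_level=255):
    """Fraction of ROI pixels at the sensor's saturation level.

    roi_pixels: iterable of 8-bit luma values inside the detected eye ROI.
    Illustrative sketch only; production would operate on camera frames.
    """
    pixels = list(roi_pixels)
    if not pixels:
        return 1.0  # an empty ROI is treated as fully unusable
    saturated = sum(1 for p in pixels if p >= saturation_level)
    return saturated / len(pixels)

def exposure_ok(roi_pixels, max_fraction=0.05):
    # 0.05 mirrors the provisional "<= 5% saturated pixels" target.
    return exposure_clipping_fraction(roi_pixels) <= max_fraction
```

A frame failing this check would route to reacquire under the QC policy in section 6.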

5.3 Trade-offs

  • Rear camera quality vs front camera usability.
  • Flash-assisted illumination vs user comfort and safety.
  • Strict posture guidance vs user dropout risk.

5.4 Compare table

| Capture mode | Pros | Cons | Decision |
|---|---|---|---|
| Rear camera assisted | Better optics on many devices | Harder self-framing | Optional guided helper mode |
| Front camera self-capture | Better usability | Lower image quality on some devices | Default with capability checks |
| Flash-on always | Better SNR | Comfort/safety concerns | Adaptive only |
| Ambient-only | Comfortable | Quality instability | Allowed with stricter QC |

5.5 Block diagram

```mermaid
flowchart LR
  G[Guidance Overlay] --> F[Frame Acquisition]
  F --> R[ROI Detection]
  R --> E[Exposure Check]
  E --> M[Motion Check]
  M --> GL[Glare Check]
  GL --> D[Decision: pass/reacquire/abstain]
```

5.6 Figure mock

```text
Capture Screen
[Eye alignment oval]
[Lighting meter]
[Stability meter]
[Prompt: Hold still for 4 seconds]
```

5.7 Checklist

  • [ ] Device capability matrix is implemented.
  • [ ] Capture SOPs include low-vision and accessibility accommodations.
  • [ ] Eye illumination safety assumptions are documented.
  • [ ] Failure prompts are plain language and actionable.

5.8 Citations

  • AstraCBC section 8.
  • ISO 15004-1/2, IEC 62471.
  • Smartphone ocular imaging literature (sclera/pupil methods).

5.9 Acceptance criteria

  • Pilot capture protocol reproducibility is demonstrated across target device tiers.

---

6) Quality Control and Preprocessing

6.1 Overview

QC is a hard gate, not a soft score. No clinical output is permitted when minimum signal requirements are not met.

6.2 Phone-only quantitative specifications

| QC check | Pass threshold | Fail action |
|---|---:|---|
| Blur metric | >= 0.70 | Reacquire |
| Motion artifact score | <= 0.20 | Reacquire |
| ROI confidence | >= 0.75 | Reacquire |
| Glare contamination | <= 0.08 | Reacquire/abstain |
| Illumination stability | >= 0.65 | Reacquire |
| Device compatibility | Supported tier only | Abstain with explanation |
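A minimal sketch of how these thresholds could drive the pass/reacquire/abstain decision. The threshold values come from the table; the dictionary layout, the glare-escalation rule, and the function names are assumptions for illustration, not the production decision engine.

```python
# Illustrative QC gate using the provisional thresholds above.
# "min" checks require value >= threshold; "max" checks require value <= threshold.
QC_THRESHOLDS = {
    "blur": ("min", 0.70),
    "motion": ("max", 0.20),
    "roi_confidence": ("min", 0.75),
    "glare": ("max", 0.08),
    "illumination": ("min", 0.65),
}

def qc_decide(metrics, device_supported=True):
    """Return ("pass" | "reacquire" | "abstain", [failed check names])."""
    if not device_supported:
        return "abstain", ["device_compatibility"]
    failed = []
    for name, (kind, threshold) in QC_THRESHOLDS.items():
        value = metrics[name]
        ok = value >= threshold if kind == "min" else value <= threshold
        if not ok:
            failed.append(name)
    if not failed:
        return "pass", []
    # Severe glare may be unrecoverable in the current environment: route
    # to abstain when it is the sole failure and far over threshold,
    # otherwise ask for a retake (hypothetical escalation rule).
    if failed == ["glare"] and metrics["glare"] > 2 * QC_THRESHOLDS["glare"][1]:
        return "abstain", failed
    return "reacquire", failed
```

Returning the failed check names supports the reason-code logging required by the 6.7 checklist.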

6.3 Trade-offs

  • Tight thresholds improve safety but increase abstain rate.
  • Looser thresholds improve conversion but risk unreliable outputs.
  • Device-specific thresholds improve performance but increase maintenance burden.

6.4 Compare table

| QC strategy | Safety | UX | Decision |
|---|---|---|---|
| Global static thresholds | Medium | Simple | Interim only |
| Device-tier adaptive thresholds | High | Medium complexity | Selected |
| No abstain branch | Low | Superficially simple | Rejected |

6.5 Block diagram

```mermaid
flowchart TD
  I[Raw Frames] --> N[Normalization]
  N --> C1[Blur Check]
  N --> C2[Motion Check]
  N --> C3[Glare Check]
  N --> C4[ROI Confidence]
  C1 --> D[QC Decision Engine]
  C2 --> D
  C3 --> D
  C4 --> D
  D -->|pass| O[Feature Extraction]
  D -->|reacquire| R[Capture Retry]
  D -->|abstain| A[Inconclusive]
```

6.6 Figure mock

```text
QC Panel
Focus: PASS
Motion: PASS
Glare: FAIL
Action: Retake in softer lighting
```

6.7 Checklist

  • [ ] QC thresholds are versioned and auditable.
  • [ ] QC outputs are logged with reason codes.
  • [ ] Reacquire UX is tested on low-end devices.
  • [ ] Abstain policy is covered by regression tests.

6.8 Citations

  • AstraCBC sections 8, 12, 16.
  • IEC 62304 verification requirements.

6.9 Acceptance criteria

QC decision determinism is validated on a fixed test corpus with no nondeterministic drift.

---

7) ML Architecture and Arbitration

7.1 Overview

The ML stack combines feature-level models with rules-based safety arbitration.

Inference output is never shown without QC pass and calibration checks.

7.2 Phone-only quantitative specifications

| ML metric | Target |
|---|---:|
| Primary endpoint AUC | >= 0.82 (endpoint-specific) |
| Calibration slope | 0.90 to 1.10 |
| Brier score ceiling | <= 0.18 |
| Subgroup performance delta | <= 0.07 |
| Inference latency | <= 1.2 s on supported devices |
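The Brier score and calibration slope gates can be sketched in plain Python. The slope here is the coefficient from refitting outcomes against logit-transformed predictions, a standard recalibration check: a slope near 1.0 means risk estimates are neither over- nor under-confident. The gradient-descent fit is illustrative, assumes probabilities strictly between 0 and 1, and a production pipeline would use a statistics library.

```python
import math

def brier_score(probs, labels):
    """Mean squared error between predicted probabilities and 0/1 labels."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)

def calibration_slope(probs, labels, lr=0.1, steps=5000):
    """Slope b from fitting y ~ sigmoid(a + b * logit(p)) by gradient descent.

    Requires 0 < p < 1 for every prediction. The release gate above
    accepts slopes in [0.90, 1.10].
    """
    logits = [math.log(p / (1 - p)) for p in probs]
    a, b = 0.0, 0.0
    n = len(labels)
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for x, y in zip(logits, labels):
            pred = 1 / (1 + math.exp(-(a + b * x)))
            grad_a += (pred - y) / n
            grad_b += (pred - y) * x / n
        a -= lr * grad_a
        b -= lr * grad_b
    return b
```

Both metrics would be computed on the locked evaluation split before a release gate decision.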

7.3 Trade-offs

  • Complex ensembles improve accuracy but reduce interpretability.
  • Simpler models aid explainability but may reduce robustness.
  • On-device inference improves privacy but constrains model size.

7.4 Compare table

| Model family | Pros | Cons | Decision role |
|---|---|---|---|
| Gradient boosted trees | Fast, interpretable features | Limited representation power | Baseline and fallback |
| Lightweight CNN/transformer | Better pattern capture | Higher complexity | Primary in supported tiers |
| Rule-only | Transparent | Low sensitivity to subtle patterns | Safety guardrails only |

7.5 Block diagram

```mermaid
flowchart LR
  F[Engineered Features] --> M1[Model A]
  F --> M2[Model B]
  F --> M3[Model C]
  M1 --> ENS[Ensemble Aggregator]
  M2 --> ENS
  M3 --> ENS
  ENS --> R[Rules Engine]
  R --> UQ[Uncertainty + Abstain Gate]
  UQ --> OUT[Final Output]
```

7.6 Figure mock

```text
Model Card Snapshot
- Intended use: early risk signal
- Inputs: ocular-derived features only
- Failure modes: glare, low perfusion, extreme motion
- Required warning: confirmatory testing required
```

7.7 Checklist

  • [ ] Model card exists for each release candidate.
  • [ ] Calibration and subgroup metrics are reviewed before release.
  • [ ] Uncertainty thresholds are fixed pre-validation.
  • [ ] Rollback package exists for every production model.

7.8 Citations

  • AstraCBC sections 9, 12, 16.
  • FDA PCCP guidance.
  • STARD-AI/CONSORT-AI/SPIRIT-AI reporting recommendations.

7.9 Acceptance criteria

  • Model release gates pass for endpoint, calibration, subgroup, and abstain limits.

---

8) Data Strategy and Labeling

8.1 Overview

Data governance defines what is collected, how it is labeled, and how it is split for robust generalization.

8.2 Phone-only quantitative specifications

| Dataset attribute | Minimum requirement |
|---|---:|
| Sites for initial model | >= 3 |
| Distinct device families | >= 8 |
| Each key subgroup sample | >= 200 (pilot target) |
| Holdout by site | 1 full unseen site minimum |
| Holdout by device family | >= 2 unseen families |
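The site and device-family holdout rule can be sketched as a grouped split with a built-in leakage assertion: no capture from a holdout site or holdout device family may appear in training data. The record field names are hypothetical, and a real pipeline would likely use a grouped splitter from an ML library.

```python
def split_with_holdouts(records, holdout_sites, holdout_device_families):
    """Partition records so holdout groups never leak into training.

    records: iterable of dicts with "site" and "device_family" keys
    (illustrative schema, not the production data contract).
    """
    train, holdout = [], []
    for rec in records:
        if rec["site"] in holdout_sites or rec["device_family"] in holdout_device_families:
            holdout.append(rec)
        else:
            train.append(rec)
    # Leakage check: holdout groups must be completely absent from train.
    assert not any(r["site"] in holdout_sites for r in train)
    assert not any(r["device_family"] in holdout_device_families for r in train)
    return train, holdout
```

The assertions implement the "leakage checks run for all splits" item in the 8.7 checklist.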

8.3 Trade-offs

  • Broader inclusion improves generalization but increases collection time.
  • Strict label timing windows improve validity but lower enrollment throughput.
  • Heavy synthetic augmentation improves robustness but may introduce unrealistic artifacts.

8.4 Compare table

| Label type | Strength | Limitation | Use |
|---|---|---|---|
| Lab value paired labels | Strong objective anchor | Timing and logistics burden | Core endpoints |
| Clinician adjudication | Useful for composite phenotypes | Inter-rater variability | Secondary labels |
| Self-reported outcomes | Scalable | Lower reliability | Exploratory only |

8.5 Block diagram

```mermaid
flowchart TD
  CAP[Capture Data] --> META[Metadata: device/OS/context]
  CAP --> LAB[Reference Labels]
  META --> CUR[Curated Dataset]
  LAB --> CUR
  CUR --> SPLIT[Train/Val/Test + Site/Device Holdouts]
  SPLIT --> TRAIN[Model Development]
  SPLIT --> EVAL[Locked Evaluation]
```

8.6 Figure mock

```text
Dataset Card
- Cohort size
- Device distribution
- Demographic distribution
- Label timing compliance
- Missingness profile
```

8.7 Checklist

  • [ ] Data dictionary includes all ocular ROI fields.
  • [ ] Label windows are defined per endpoint.
  • [ ] Leakage checks run for all splits.
  • [ ] Synthetic data generation is documented and bounded.

8.8 Citations

  • AstraCBC sections 11-13.
  • IMDRF N41 clinical evidence domains.
  • STARD-AI protocol transparency guidance.

8.9 Acceptance criteria

  • Dataset and labeling SOP are frozen before model lock.

---

9) Verification, Validation, and Trials

9.1 Overview

Verification confirms software and algorithm correctness. Validation confirms clinical performance in intended use settings.

9.2 Phone-only quantitative specifications

| Trial metric | Target |
|---|---:|
| Protocol adherence | >= 95% |
| Endpoint confidence interval width | Predefined in SAP |
| Device holdout pass rate | >= 90% of endpoint threshold |
| Severe-case sensitivity | Endpoint-specific, safety-biased |
| Monitoring report cadence | Monthly minimum |
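Endpoint confidence-interval width checks of the kind the SAP predefines can be sketched with a Wilson score interval for a proportion such as sensitivity. This is an illustrative helper, not the locked SAP code.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a proportion (z=1.96 gives ~95% coverage).

    Example use: sensitivity = true positives / diseased cases; the SAP
    would then compare (hi - lo) against its predefined width ceiling.
    """
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half
```

The Wilson interval behaves better than the normal approximation near 0 and 1, which matters for safety-biased sensitivity targets close to 1.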

9.3 Trade-offs

  • Earlier trials with narrower endpoints reduce risk but limit immediate claim breadth.
  • Broad endpoint sets increase value but raise sample size and protocol complexity.

9.4 Compare table

| Trial design | Pros | Cons | Decision |
|---|---|---|---|
| Single-site pilot | Fast start | Weak generalizability | Use for feasibility only |
| Multi-site prospective | Better external validity | Slower and costlier | Required for claims |
| Retrospective validation only | Cheap | High bias risk | Not sufficient |

9.5 Block diagram

```mermaid
flowchart LR
  P[Protocol] --> ENR[Enrollment]
  ENR --> CAP[Guided Capture]
  CAP --> REF[Reference Label Collection]
  REF --> DB[Locked Trial Dataset]
  DB --> SAP[Statistical Analysis]
  SAP --> REP[Clinical Report]
  REP --> DEC[Claim Decision]
```

9.6 Figure mock

```text
Trial Dashboard
- Enrolled: 1,240
- Label-complete: 1,105
- Protocol deviations: 3.2%
- Holdout device pass: 92%
```

9.7 Checklist

  • [ ] Trial protocol approved by ethics and regulatory teams.
  • [ ] SAP locked prior to unblinding.
  • [ ] DSMB and monitoring plans active.
  • [ ] Endpoint pass/fail gates predefined.

9.8 Citations

  • AstraCBC sections 11-14.
  • IMDRF N41.
  • STARD-AI, CONSORT-AI, SPIRIT-AI.

9.9 Acceptance criteria

  • Primary endpoint and subgroup gates meet predefined criteria with locked analysis.

---

10) Software Architecture and Operations

10.1 Overview

The system must support safe real-time on-device inference, reliable audits, controlled updates, and post-market monitoring.

10.2 Phone-only quantitative specifications

| Operational metric | Target |
|---|---:|
| App crash-free sessions | >= 99.5% |
| P95 inference latency | <= 1.5 s |
| Audit event completeness | 100% for decision-critical events |
| Incident acknowledgment SLA | <= 4 h (critical) |
| Rollback readiness | <= 60 min to previous stable model |
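The P95 latency target can be monitored with a simple nearest-rank percentile over logged inference times. This is a sketch only; where the samples come from is outside its scope.

```python
import math

def p95(samples_ms):
    """Nearest-rank 95th percentile of latency samples in milliseconds.

    Compared against the <= 1.5 s (1500 ms) operational target.
    """
    ordered = sorted(samples_ms)
    rank = max(1, math.ceil(0.95 * len(ordered)))
    return ordered[rank - 1]
```

Nearest-rank is deterministic and needs no interpolation, which keeps fleet-side and dashboard-side computations trivially consistent.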

10.3 Trade-offs

  • On-device processing improves privacy but limits model complexity.
  • Optional cloud telemetry improves drift detection but requires strict privacy controls.
  • Faster release cadence improves iteration but increases operational risk.

10.4 Compare table

| Architecture choice | Benefit | Risk | Decision |
|---|---|---|---|
| Pure on-device only | Strong privacy | Limited fleet visibility | Hybrid optional telemetry |
| Full cloud inference | Central control | Latency/privacy dependence | Rejected for core flow |
| Staged rollout by cohort | Controlled risk | Slower full rollout | Selected |

10.5 Block diagram

```mermaid
flowchart TD
  APP[Mobile App] --> ENG[Inference Engine]
  ENG --> LOG[Audit Logger]
  LOG --> BUF[Secure Event Buffer]
  BUF --> CLOUD[Optional Telemetry Service]
  CLOUD --> MON[Monitoring + Drift]
  MON --> REL[Release Controller]
  REL --> APP
```

10.6 Figure mock

```text
Release Console
- Current model: v1.3.2
- Rollout: 20% cohort
- Drift alert: none
- Rollback package: ready
```

10.7 Checklist

  • [ ] Security threat model reviewed and versioned.
  • [ ] All decision-critical events logged with immutable IDs.
  • [ ] Staged rollout and rollback runbooks validated.
  • [ ] Incident response tabletop test completed.

10.8 Citations

  • AstraCBC sections 15-16.
  • IEC 62304.
  • FDA device software and cybersecurity guidance.

10.9 Acceptance criteria

  • Operational readiness review passes for observability, rollback, and incident response.

---

11) UX and WCAG

11.1 Overview

UX must balance clarity, safety, and completion. The experience should reduce user error and avoid overconfidence.

11.2 Phone-only quantitative specifications

| UX metric | Target |
|---|---:|
| Task success (first capture) | >= 75% |
| Critical task error rate | <= 5% |
| Accessibility defects (critical) | 0 |
| Reading level of key safety copy | Grade 6-8 |
| WCAG contrast compliance | >= 4.5:1 text contrast |
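The 4.5:1 contrast target follows the WCAG relative-luminance definition, which can be computed directly. The formula below is the standard WCAG 2.x one; only the function names are our own.

```python
def _linear_channel(c):
    # sRGB channel (0-255) to linear light, per the WCAG relative-luminance
    # definition: values below the 0.04045 knee use the linear segment.
    c = c / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linear_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    """WCAG contrast ratio (L1 + 0.05) / (L2 + 0.05), lighter color first."""
    l1, l2 = sorted((relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

Black on white yields the maximum ratio of 21:1; the release gate requires at least 4.5:1 for body text.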

11.3 Trade-offs

  • Simpler copy improves comprehension but can omit nuance.
  • Detailed medical disclaimers improve legal safety but may reduce readability.
  • Strong warning prominence improves safety but can increase anxiety.

11.4 Compare table

| UX pattern | Pros | Cons | Decision |
|---|---|---|---|
| Dense scientific language | Precise | Low user comprehension | Rejected |
| Plain-language with expandable detail | Clear and scalable | Requires careful drafting | Selected |
| Hidden uncertainty | Cleaner UI | Unsafe interpretation | Rejected |

11.5 Block diagram

```mermaid
flowchart LR
  O[Onboarding] --> C[Guided Capture]
  C --> Q[Quality Feedback]
  Q --> R[Result Card]
  R --> N[Next Step Guidance]
  N --> H[Help/Support]
```

11.6 Figure mock

```text
Result UI (Accessible)
- Risk signal: moderate
- Confidence: moderate
- Quality: pass
- Action button: Find confirmatory care
- Secondary: Retake scan
```

11.7 Checklist

  • [ ] Keyboard and screen-reader support for all critical flows.
  • [ ] Focus order and labels tested.
  • [ ] Safety copy is plain-language and persistent.
  • [ ] Localization-ready copy keys established.

11.8 Citations

  • IEC 62366-1.
  • WCAG 2.2.
  • AstraCBC sections 12 and 19 for safety messaging constraints.

11.9 Acceptance criteria

  • Summative usability test passes critical task success threshold with no severe safety defects.

---

12) Risk Register and Safety Case

12.1 Overview

Risk management follows ISO 14971 principles with hazard identification, control implementation, residual risk assessment, and post-market feedback loops.

12.2 Phone-only quantitative specifications

| Risk control metric | Target |
|---|---:|
| High-severity hazard control verification | 100% |
| Unmitigated high residual risks | 0 accepted without executive waiver |
| Post-release safety signal review cadence | Weekly |
| CAPA closure for critical issues | <= 30 days |

12.3 Trade-offs

  • Strict risk controls improve safety but can slow release cadence.
  • Lower abstain thresholds improve user throughput but may increase hazard exposure.

12.4 Compare table

| Safety strategy | Benefit | Risk | Decision |
|---|---|---|---|
| Output-first with warnings | Better completion | Unsafe misuse | Rejected |
| Gate-first with abstain | Lower harm probability | Higher friction | Selected |
| Static controls only | Simpler ops | Weak against drift | Rejected |
| Dynamic surveillance + CAPA | Better long-term safety | Operational overhead | Selected |

12.5 Block diagram

```mermaid
flowchart TD
  H[Hazard Identification] --> A[Risk Analysis]
  A --> C[Control Definition]
  C --> V[Control Verification]
  V --> R[Residual Risk Evaluation]
  R --> PM[Post-Market Monitoring]
  PM --> CAPA[Corrective/Preventive Actions]
  CAPA --> H
```

12.6 Figure mock

```text
Risk Heatmap
Severity (y) vs Probability (x)
- R-001 glare misclassification: medium residual risk
- R-004 subgroup bias: medium residual risk
```

12.7 Checklist

  • [ ] Risk file includes hazard-control traceability.
  • [ ] Residual risk acceptance rationale documented.
  • [ ] Safety signal monitoring is operational.
  • [ ] CAPA workflow and ownership are clear.

12.8 Citations

  • ISO 14971.
  • AstraCBC sections 11, 12, 16, 19.

12.9 Acceptance criteria

  • Risk management review board signs off release with no unresolved high-risk hazards.

---

13) Business and Deployment

13.1 Overview

Deployment model is staged and safety-gated by country, device capability, and support readiness.

Core assets:

  • Device compatibility matrix.
  • App store staged rollout and rollback plan.
  • Privacy and data-flow documentation.
  • Incident response and post-market drift surveillance.

13.2 Phone-only quantitative specifications

| Deployment metric | Target |
|---|---:|
| Supported-device coverage in launch markets | >= 70% of active smartphone base |
| Staged rollout blast radius | <= 20% per increment |
| Customer support first-response SLA | <= 24 h |
| Drift alert triage SLA | <= 48 h |
| Post-market model review cadence | Monthly minimum |
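The <= 20% blast-radius rule can be sketched as a tiny rollout controller: each stage may only expand when monitoring reports no alert, and any alert routes back to the previous stable model. The stage fractions and the rollback-to-zero policy are assumptions for the example, not the production release controller.

```python
# Hypothetical cohort fractions for a staged rollout.
ROLLOUT_STAGES = [0.05, 0.20, 0.40, 0.60, 0.80, 1.00]

def next_rollout_fraction(current, monitoring_alert):
    """Advance one stage when healthy, capped at +0.20 per increment."""
    if monitoring_alert:
        return 0.0  # roll back: new model withdrawn from the fleet
    for stage in ROLLOUT_STAGES:
        if stage > current + 1e-9:
            # Enforce the <= 20% blast-radius cap on each increment.
            return min(stage, current + 0.20)
    return current  # already at full rollout
```

A real controller would also gate on the drift-triage SLA and rollback-readiness targets from the table above.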

13.3 Trade-offs

  • Wider immediate rollout increases growth and risk simultaneously.
  • Narrow rollout reduces risk but slows evidence accumulation.

13.4 Compare table

| Rollout strategy | Pros | Cons | Decision |
|---|---|---|---|
| Big-bang global | Fast awareness | High safety and support risk | Rejected |
| Staged by market/device | Controlled risk | Slower growth | Selected |
| Single static model forever | Stable ops | Performance drift risk | Rejected |

13.5 Block diagram

```mermaid
flowchart LR
  DEV[Model Build] --> QA[Validation Gates]
  QA --> R1[Staged Rollout 10%]
  R1 --> R2[Staged Rollout 50%]
  R2 --> R3[Full Rollout]
  R1 --> MON[Monitoring]
  R2 --> MON
  R3 --> MON
  MON -->|alert| RB[Rollback]
```

13.6 Figure mock

```text
Market Launch Card
- Country: KE
- Device support: 72%
- Rollout stage: 20%
- Safety incidents this week: 0 critical
```

13.7 Checklist

  • [ ] Compatibility matrix published and user-visible.
  • [ ] Store release checklist includes safety copy validation.
  • [ ] Incident response on-call schedule active.
  • [ ] Drift monitoring dashboards and thresholds configured.

13.8 Citations

  • AstraCBC sections 15 and 16.
  • HIPAA and GDPR requirements.
  • FDA cybersecurity and PCCP guidance.

13.9 Acceptance criteria

  • Operations readiness review passes before each rollout stage increase.

13.10 Program Gantt

```mermaid
gantt
  title Ocular Smartphone-only SaMD Program
  dateFormat  YYYY-MM-DD
  section Foundations
  Claims + Intended Use Freeze      :a1, 2026-03-01, 45d
  QMS + Risk Baseline               :a2, 2026-03-10, 75d
  section Build
  Capture + QC v1                   :b1, 2026-04-01, 120d
  ML + Arbitration v1               :b2, 2026-05-01, 120d
  section Evidence
  Pilot Study                       :c1, 2026-06-15, 120d
  Holdout Validation                :c2, 2026-10-15, 90d
  Go/No-Go 1                        :milestone, c3, 2027-01-20, 1d
  section Clinical
  Prospective Multisite Trial       :d1, 2027-02-01, 180d
  Go/No-Go 2                        :milestone, d2, 2027-08-05, 1d
  section Release
  Staged Market Rollout + PMS       :e1, 2027-08-10, 120d
```

---

14) Claim to Evidence Map

14.1 Overview

All external claims must be linked to evidence packages and explicit pass thresholds before release.

14.2 Phone-only quantitative specifications

| Claim governance metric | Target |
|---|---:|
| Public claims with mapped evidence | 100% |
| Claims missing predefined thresholds | 0 |
| Claims without subgroup gates | 0 |
| Claims without abstain limits | 0 |

14.3 Trade-offs

  • More aggressive claim language may improve CTR but increases clinical and regulatory risk.
  • Conservative claim language protects safety but requires stronger UX explanation to maintain conversion.

14.4 Master claim-evidence table

| Claim | Risk | Reference standard | N target | Study type | Metrics | Pass threshold | Abstain limit | Subgroup gate | Labeling constraint |
|---|---|---|---:|---|---|---|---|---|---|
| Early anemia-risk signal from ocular capture | Missed risk | IMDRF N41 + AstraCBC policy | 1200 | Prospective paired-label | Sens, Spec, NPV | Sens >= 0.90 at locked threshold | <= 25% | Delta <= 0.07 | Confirmatory testing required |
| Hyperbilirubinemia-risk signal from sclera | Missed severe case | IMDRF N41 | 900 | Multisite prospective | AUC, Sens | AUC >= 0.82 and Sens >= 0.92 at safety cutpoint | <= 30% | Delta <= 0.08 | Not for treatment-only decision |
| Pupillary neuro-risk signal | False alarms | Clinical comparator protocol | 800 | Blinded comparator | ICC, F1 | ICC >= 0.85 and F1 >= 0.80 | <= 20% | Delta <= 0.10 | Use with escalation guidance |
| Universal diagnosis from any phone in any condition | Overclaim | AstraCBC prohibited claims | N/A | N/A | N/A | Not permitted | N/A | N/A | Prohibited |
| No confirmatory testing required | Harmful misuse | AstraCBC prohibited claims | N/A | N/A | N/A | Not permitted | N/A | N/A | Prohibited |
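A sketch of a content-pipeline gate that blocks any claim lacking the governance fields above and blocks prohibited claims outright. Field names mirror the table columns but are otherwise hypothetical; the real gate would live in the content publication workflow.

```python
# Every releasable claim must carry these governance fields, mirroring
# the master claim-evidence table (names are illustrative).
REQUIRED_FIELDS = ("pass_threshold", "abstain_limit", "subgroup_gate", "evidence")

def claim_releasable(claim):
    """Return True only if the claim is fully governed and not prohibited."""
    if claim.get("prohibited"):
        return False  # prohibited claims are blocked outright
    return all(claim.get(field) not in (None, "") for field in REQUIRED_FIELDS)
```

This enforces the 14.7 checklist item that prohibited claims are blocked in the content pipeline and that no claim ships without thresholds, abstain limits, subgroup gates, and a linked evidence package.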

14.5 Block diagram

```mermaid
flowchart LR
  C[Claim Draft] --> E[Evidence Mapping]
  E --> T[Threshold Definition]
  T --> V[Validation Results]
  V --> D[Decision: Approve/Reject]
  D --> L[Labeling + UX Copy]
```

14.6 Figure mock

```text
Claim Review Screen
Claim: "Early anemia-risk signal"
Evidence: 3 studies linked
Status: CONDITIONAL APPROVAL
Missing: subgroup gate review signature
```

14.7 Checklist

  • [ ] Every claim has owner and evidence package.
  • [ ] Every claim has pass/fail threshold and CI requirement.
  • [ ] Every claim has abstain and subgroup limits.
  • [ ] Prohibited claims are blocked in content pipeline.

14.8 Citations

  • AstraCBC section 19 (prohibited overclaims).
  • IMDRF N41 evidence framework.
  • ISO 14971 risk linkage principles.

14.9 Acceptance criteria

  • Claim review board signs off all launch claims with evidence, thresholds, and labeling constraints.

---

Appendix A - Prohibited Claims Policy (Operational)

Do not publish or imply:

  1. Definitive diagnosis without confirmatory testing.
  2. Direct blood chemistry or direct blood cell counting from smartphone-only ocular capture.
  3. Universal performance across unsupported devices.
  4. Guaranteed accuracy in all lighting or user conditions.

Required external wording baseline:

  • "This app provides early risk signals from smartphone ocular capture."
  • "Results should be confirmed by licensed medical professionals before treatment decisions."

---

Appendix B - Required Deliverables Checklist

  • Wireframes and HiFi flows for onboarding, capture, QC, output, escalation.
  • Software architecture and API contracts.
  • QC metric definitions and threshold registry.
  • Model cards and release checklist.
  • Trial protocol and SAP.
  • Risk register and hazard-control traceability.
  • Device compatibility matrix.
  • Incident response and post-market drift monitoring playbooks.

---

Appendix C - Citation Index

Internal

  • [I1] AstraCBC whitepaper: `docs/astracbc-whitepaper/ASTRACBC_PHONE_ONLY_CBC_WHITEPAPER.md`

External standards/guidance (to cite in authored sections)

  • [R1] IMDRF SaMD Clinical Evaluation (N41)
  • [R2] ISO 13485
  • [R3] ISO 14971
  • [R4] IEC 62304
  • [R5] IEC 62366-1
  • [R6] ISO 15004-1
  • [R7] ISO 15004-2
  • [R8] IEC 62471
  • [R9] FDA Device Software Functions and Mobile Medical Applications guidance set
  • [R10] FDA Cybersecurity guidance for medical devices
  • [R11] FDA AI/ML PCCP guidance
  • [R12] HIPAA Privacy Rule
  • [R13] GDPR
  • [R14] STARD-AI
  • [R15] CONSORT-AI
  • [R16] SPIRIT-AI

External domain literature groups

  • [L1] Smartphone sclera/chromaticity and ambient subtraction literature
  • [L2] Smartphone pupillometry and PLR validation literature
  • [L3] Ocular image quality and segmentation robustness literature