Dermatology Image Annotation FHIR Implementation Guide
0.1.0 - ci-build
This IG implements the HL7 AI Transparency on FHIR IG (ballot January 2026) to ensure that AI-generated dermatology annotations are identifiable, auditable, and distinguishable from human-produced results.
The AIPrediction profile requires meta.security to include the AIAST code from the v3-ObservationValue CodeSystem. This code, defined by the AI Transparency IG, signals that the resource contains content asserted by an artificial intelligence system.
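A minimal sketch of what that requirement looks like on an instance, expressed here as a Python dict mirroring the FHIR JSON. The resource type, status, and display text are illustrative, not mandated by this IG; only the `meta.security` coding with system `http://terminology.hl7.org/CodeSystem/v3-ObservationValue` and code `AIAST` reflects the stated requirement.

```python
# Illustrative AIPrediction-shaped instance; only the meta.security AIAST
# entry is the point here, the rest of the resource is a placeholder.
ai_prediction = {
    "resourceType": "Observation",
    "meta": {
        "security": [{
            "system": "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
            "code": "AIAST",  # AI-asserted content, per the AI Transparency IG
        }]
    },
    "status": "preliminary",
    "code": {"text": "Dermatology image annotation"},  # placeholder
}

def has_aiast(resource: dict) -> bool:
    """Return True if the resource's meta.security includes the AIAST code."""
    return any(
        coding.get("code") == "AIAST"
        for coding in resource.get("meta", {}).get("security", [])
    )
```

A consumer can apply `has_aiast` to any resource to decide whether AI participated in producing it, without knowing anything else about the profile.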
Any resource that originates from or is informed by AI analysis carries the AIAST tag. This includes pure AI predictions and AI-assisted annotations where a clinician confirmed or corrected an AI result (see ExampleAIAssistedConfirmed).
For detailed audit trails, this IG uses Provenance resources to document which AI system produced a prediction, when it was created, and what source data it consumed. The ExampleAIPredictionProvenance demonstrates this pattern, recording the AI model Device as the agent and the source clinical photograph as the entity.
Provenance resources complement the AIAST tag by providing richer context that the security tag alone cannot convey: model identity, version, input data references, and temporal information.
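The Provenance pattern above can be sketched as follows. The reference ids (`Observation/ai-prediction-1`, `Device/derma-ai-model`, `Media/source-photo`), the timestamp, and the agent type code are illustrative assumptions; ExampleAIPredictionProvenance in this IG is the actual example to follow.

```python
# Hedged sketch of the audit-trail pattern: the AI model Device as agent,
# the source clinical photograph as entity. All ids here are placeholders.
provenance = {
    "resourceType": "Provenance",
    "target": [{"reference": "Observation/ai-prediction-1"}],  # the AI result
    "recorded": "2026-01-15T10:30:00Z",  # when the prediction was created
    "agent": [{
        "type": {"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
            "code": "assembler",  # illustrative choice of participant type
        }]},
        "who": {"reference": "Device/derma-ai-model"},  # the AI model Device
    }],
    "entity": [{
        "role": "source",
        "what": {"reference": "Media/source-photo"},  # input clinical photograph
    }],
}
```

This is exactly the context the security tag alone cannot carry: which model, acting when, on which inputs.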
The AIAST security tag answers a binary question: did AI participate in creating this content? Clinical workflows require more granularity. The AnnotationMethod CodeSystem refines that binary signal into five levels of AI involvement.
The AIAST tag alone cannot distinguish between a clinician who accepted an AI suggestion unchanged and one who corrected it before accepting. This distinction matters for clinical quality, training data curation, and AI performance monitoring. The annotation method codes fill that gap while maintaining compatibility with the AIAST tag for systems that only need the binary signal.
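One annotation can carry both signals at once, which is how the compatibility described above works in practice. In this sketch, `ai-corrected` and the method CodeSystem URL are placeholders; the real values are defined by this IG's AnnotationMethod CodeSystem, while the AIAST tag is the required `meta.security` entry.

```python
# Hedged sketch: an AI-assisted annotation carrying the coarse AIAST tag
# alongside a finer-grained method code. Code values are placeholders.
assisted_annotation = {
    "resourceType": "Observation",
    "meta": {
        "security": [{
            "system": "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
            "code": "AIAST",  # binary signal: AI participated
        }]
    },
    "method": {
        "coding": [{
            "system": "http://example.org/CodeSystem/annotation-method",  # placeholder URL
            "code": "ai-corrected",  # placeholder: clinician corrected the AI result
        }]
    },
    "status": "final",
    "code": {"text": "Lesion boundary annotation"},  # placeholder
}
```

A system that only understands the binary signal reads `meta.security` and ignores `method`; a quality-monitoring system reads both.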
Systems that consume annotations from this IG can filter and sort by AI involvement at two levels of granularity. A simple filter on meta.security = AIAST identifies all AI-touched content. A more detailed query on method distinguishes clinician-confirmed from clinician-corrected results, enabling AI performance tracking and annotation quality analysis.
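The two-level filtering described above can be sketched as plain in-memory filtering over resources already fetched from a server. The method code values (`ai-confirmed`, `ai-corrected`, `manual`) are placeholders for whatever this IG's AnnotationMethod CodeSystem actually defines; only the AIAST check reflects the stated `meta.security` requirement.

```python
# Two-level filtering sketch: coarse AIAST filter, then grouping by the
# finer-grained (placeholder) method code.
def is_ai_touched(resource: dict) -> bool:
    """Coarse filter: does meta.security carry the AIAST code?"""
    return any(c.get("code") == "AIAST"
               for c in resource.get("meta", {}).get("security", []))

def method_code(resource: dict) -> str:
    """Finer filter: first method coding's code, or 'unknown'."""
    codings = resource.get("method", {}).get("coding", [])
    return codings[0].get("code", "unknown") if codings else "unknown"

annotations = [
    {"meta": {"security": [{"code": "AIAST"}]},
     "method": {"coding": [{"code": "ai-confirmed"}]}},   # AI accepted unchanged
    {"meta": {"security": [{"code": "AIAST"}]},
     "method": {"coding": [{"code": "ai-corrected"}]}},   # AI corrected first
    {"method": {"coding": [{"code": "manual"}]}},          # human-only, no AIAST
]

ai_touched = [a for a in annotations if is_ai_touched(a)]
by_method: dict[str, list] = {}
for a in ai_touched:
    by_method.setdefault(method_code(a), []).append(a)
```

Counting the `by_method` buckets over time is one simple way to build the AI performance tracking the paragraph above mentions.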