
Data First for NAMs: why FAIR-by-design now determines reuse and review speed

Damien Huzard, PhD

If NAM data is not structured at source, evidence quality becomes difficult to scale. The assay can be strong while the submission asset remains weak.

The bottleneck has moved from assay to data model

The human Liver-Chip benchmark showed strong predictive performance in a blinded setting, including 87% sensitivity and 100% specificity for drug-induced liver injury in the cited panel [1]. That kind of result changes the NAM conversation. But performance alone is not enough for reuse and review at scale. When study context, endpoint semantics, and provenance remain inconsistent across labs, each reuse attempt becomes a bespoke reconstruction project [2][3].

In my view, this is now the central NAM infrastructure gap: not insufficient biology, but insufficient structure around biology. The white paper linked below expands this argument with a full position-paper treatment and implementation framing [4].

Why this is now a regulatory issue, not only a data-science issue

FDA's March 2026 draft NAM guidance frames evaluation around context of use, human relevance, technical characterization, and fit-for-purpose adequacy [5]. Those are metadata-heavy constructs. If the context is not encoded in a structured and auditable way, reviewers cannot evaluate claims efficiently, and sponsors cannot compare evidence consistently across studies [5][6].

This is also where tension with submission standards appears: formats like SEND are mandatory in many nonclinical pathways but do not yet map naturally onto all NAM endpoint classes [6]. The practical implication: conversion at the end is expensive; structured capture at source is cheap.

A practical minimum: schema-first, then validation-first

A workable first step is not enterprise architecture. It is one assay-family pilot with a minimum metadata contract: identifiers, biological context, assay setup, endpoint definitions, and provenance checks [2][4]. Then enforce validation before acceptance. This shifts work from downstream cleanup to upstream quality control.
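The contract described above can be sketched as a small deterministic validator. This is an illustrative minimum, not a published standard: the field names, the controlled vocabulary, and the rules are all assumptions standing in for whatever one assay family actually needs.

```python
# Minimal sketch of a metadata contract with validation before acceptance.
# All field names and vocabulary values below are hypothetical examples.
REQUIRED_FIELDS = {
    "study_id": str,          # identifier
    "species_context": str,   # biological context, e.g. "human hepatocyte tri-culture"
    "assay_platform": str,    # assay setup
    "endpoint_code": str,     # endpoint definition, ideally from a controlled vocabulary
    "source_system": str,     # provenance: where the record originated
}

# Illustrative controlled vocabulary for one endpoint family.
CONTROLLED_ENDPOINTS = {"ALT_RELEASE", "ALBUMIN_SECRETION", "LDH_RELEASE"}


def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record is accepted."""
    errors = []
    for name, expected_type in REQUIRED_FIELDS.items():
        if name not in record:
            errors.append(f"missing required field: {name}")
        elif not isinstance(record[name], expected_type):
            errors.append(f"wrong type for {name}: expected {expected_type.__name__}")
    if record.get("endpoint_code") not in CONTROLLED_ENDPOINTS:
        errors.append("endpoint_code not in controlled vocabulary")
    return errors
```

The point of the sketch is the placement of the check: records that fail validation never enter the shared store, so cleanup work moves upstream, where the study context is still fresh.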

AI tools can reduce mapping workload, especially for extracting candidate metadata from protocols and reports, but they should remain proposal layers under deterministic schema validation and human review [7]. Neuronautix interpretation: teams that treat AI as curation support and schemas as trust boundaries will move faster with less compliance risk.
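One way to picture "AI as proposal layer, schema as trust boundary" is a gate between the extractor and the record store. The extractor, field names, confidence threshold, and vocabulary here are all stand-ins for illustration; only the pattern matters: model output never bypasses the deterministic check, and anything the check cannot accept goes to a human queue rather than into the data.

```python
# Sketch: an AI extractor proposes candidate metadata; a deterministic gate
# accepts high-confidence, vocabulary-valid fields and routes the rest to review.
# Names, rules, and thresholds are hypothetical.

def mock_ai_extract(protocol_text: str) -> dict:
    """Stand-in for an extraction model: field -> (proposed value, confidence)."""
    return {
        "endpoint_code": ("ALT_RELEASE", 0.93),
        "species_context": ("human hepatocyte", 0.41),  # low confidence
    }

CONTROLLED_ENDPOINTS = {"ALT_RELEASE", "LDH_RELEASE"}
CONFIDENCE_FLOOR = 0.85  # below this, a human must confirm the proposal


def gate(proposals: dict) -> tuple[dict, dict]:
    """Split proposals into auto-accepted fields and fields needing human review."""
    accepted, for_review = {}, {}
    for name, (value, confidence) in proposals.items():
        vocab_ok = name != "endpoint_code" or value in CONTROLLED_ENDPOINTS
        if vocab_ok and confidence >= CONFIDENCE_FLOOR:
            accepted[name] = value
        else:
            for_review[name] = value
    return accepted, for_review
```

The design choice worth noting is that the gate is boring on purpose: it is auditable, versionable, and identical for every record, regardless of which model produced the proposal.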

White paper and deck

The full position paper is available as a web page and downloadable PDF:

References

Work with Neuronautix

Build one FAIR-by-design pilot first

A narrow pilot with explicit metadata contracts and validation gates usually creates enough evidence to scale internally with confidence.