Isnad (إسناد) comes from the Islamic scholarly tradition — it's the chain of transmission used to authenticate hadith (sayings of the Prophet). A hadith is only as trustworthy as its chain of narrators. Each narrator must be verified for integrity, memory, and connection to the previous link.
We apply this ancient wisdom to modern AI provenance. A resource is only as trustworthy as its chain of auditors — and unlike traditional isnad, we can see exactly how much each auditor has at stake.
AI agents are proliferating. They increasingly rely on shared resources—skills, configurations, prompts, memory, models—from untrusted sources. A single compromised resource can exfiltrate credentials, corrupt data, manipulate behavior, or compromise entire systems.
Yet there is no standardized way to assess trust before consumption. Without tooling, the answer to "Is this safe?" is always a guess.
$ISNAD introduces a decentralized trust layer where auditors stake tokens to attest to resource safety. Auditors who vouch for a resource that turns out to be malicious have their stake burned; stakes behind clean resources earn yield. The result: a market-priced trust signal that scales without a central authority.
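To make the incentive mechanics concrete, here is a minimal TypeScript sketch of the data an attestation might carry and how a stake-weighted trust signal could be derived from it. All names and the scoring formula (`Attestation`, `trustScore`, `slashedStake`) are illustrative assumptions, not the protocol's actual API.

```typescript
// Illustrative only: field names and scoring are assumptions, not the ISNAD spec.
interface Attestation {
  auditor: string;       // auditor address
  resourceHash: string;  // content hash of the attested resource
  stake: bigint;         // tokens locked behind this attestation
  verdict: "safe" | "malicious";
  timestamp: number;
}

// A naive market-priced trust signal: share of total stake backing a "safe" verdict.
function trustScore(attestations: Attestation[]): number {
  let safe = 0n;
  let total = 0n;
  for (const a of attestations) {
    total += a.stake;
    if (a.verdict === "safe") safe += a.stake;
  }
  return total === 0n ? 0 : Number((safe * 10_000n) / total) / 10_000;
}

// If a resource is later proven malicious, stake behind "safe" verdicts is burned.
function slashedStake(attestations: Attestation[]): bigint {
  return attestations
    .filter((a) => a.verdict === "safe")
    .reduce((sum, a) => sum + a.stake, 0n);
}
```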
Unlike IPFS-based approaches that require pinning services and external infrastructure, ISNAD inscribes resources directly in Base L2 calldata:
```
ISNAD | v1 | type | flags | metadata | content

type:     SKILL | CONFIG | PROMPT | MEMORY | MODEL | API
flags:    COMPRESSED | ENCRYPTED | CHUNKED | ...
metadata: { name, version, author, contentHash, ... }
content:  raw resource bytes
```
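As a rough illustration of how a client might read this envelope back out of calldata, the sketch below assumes the first five fields are pipe-delimited UTF-8 with comma-separated flags, followed by raw content bytes. The actual wire encoding, field widths, and helper names (`IsnadEnvelope`, `decodeEnvelope`) are assumptions, not the protocol definition.

```typescript
// Hypothetical decoder for the ISNAD envelope. The real wire format (delimiters,
// field widths, compression handling) is defined by the protocol, not this sketch.
type ResourceType = "SKILL" | "CONFIG" | "PROMPT" | "MEMORY" | "MODEL" | "API";

interface IsnadEnvelope {
  version: string;
  type: ResourceType;
  flags: string[];                   // assumed comma-separated within the flags field
  metadata: Record<string, unknown>; // { name, version, author, contentHash, ... }
  content: Uint8Array;               // raw resource bytes
}

function decodeEnvelope(calldata: Uint8Array): IsnadEnvelope {
  const text = new TextDecoder();

  // Locate the 5th '|' (0x7c): everything after it is raw content bytes.
  let seen = 0;
  let headerEnd = -1;
  for (let i = 0; i < calldata.length; i++) {
    if (calldata[i] === 0x7c && ++seen === 5) { headerEnd = i; break; }
  }
  if (headerEnd < 0) throw new Error("not an ISNAD inscription");

  // Naive: assumes '|' never appears inside the metadata JSON.
  const [magic, version, kind, flags, metadata] = text
    .decode(calldata.slice(0, headerEnd))
    .split("|")
    .map((s) => s.trim());
  if (magic !== "ISNAD") throw new Error("missing ISNAD magic prefix");

  return {
    version,
    type: kind as ResourceType,
    flags: flags.split(",").map((f) => f.trim()).filter(Boolean),
    metadata: JSON.parse(metadata) as Record<string, unknown>,
    content: calldata.slice(headerEnd + 1),
  };
}
```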
ISNAD supports attestation for any content-addressable AI resource:

| Type | Description |
|------|-------------|
| SKILL | Executable code packages, tools, API integrations |
| CONFIG | Agent configurations, gateway settings, capabilities |
| PROMPT | System prompts, personas, behavioral instructions |
| MEMORY | Knowledge bases, context files, RAG documents |
| MODEL | Fine-tunes, LoRAs, model adapters |
| API | External service attestations, endpoint integrity |
Future resource types can be added via governance without protocol upgrades.
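One way a client could tolerate governance-added types without an upgrade is to treat the type field as an identifier resolved against a registry. The sketch below is an assumption about client-side handling (`KNOWN_TYPES`, `resolveResourceType` are hypothetical), not the protocol's governance design.

```typescript
// Hypothetical client-side handling of governance-defined resource types.
// Unknown identifiers remain decodable and can be resolved against an
// on-chain registry once governance has registered them.
const KNOWN_TYPES: Record<number, string> = {
  0: "SKILL",
  1: "CONFIG",
  2: "PROMPT",
  3: "MEMORY",
  4: "MODEL",
  5: "API",
};

function resolveResourceType(id: number): string {
  // Fall back to a stable placeholder so existing clients keep working
  // when they encounter a type added after they shipped.
  return KNOWN_TYPES[id] ?? `UNKNOWN_TYPE_${id}`;
}
```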