إسناد
The Chain of Attribution
Etymology

Isnad (إسناد) comes from the Islamic scholarly tradition — it's the chain of transmission used to authenticate hadith (sayings of the Prophet). A hadith is only as trustworthy as its chain of narrators. Each narrator must be verified for integrity, memory, and connection to the previous link.

We apply this ancient wisdom to modern AI provenance. A resource is only as trustworthy as its chain of auditors — and unlike traditional isnad, we can see exactly how much each auditor has at stake.

The Problem

AI agents are proliferating. They increasingly rely on shared resources—skills, configurations, prompts, memory, models—from untrusted sources. A single compromised resource can exfiltrate credentials, corrupt data, manipulate behavior, or compromise entire systems.

Yet there is no standardized way to assess trust before consumption:

Manual code review: doesn't scale, and most agents can't audit code themselves.
Central approval: a bottleneck and a single point of failure.
Reputation scores: gameable, and new authors can't bootstrap them.
Sandboxing: incomplete, since many resources need real permissions.

Without tooling, the answer to "Is this safe?" is always a guess.

The Solution

$ISNAD introduces a decentralized trust layer in which auditors stake tokens to attest to resource safety. Stakes on malicious resources are slashed; stakes on clean resources earn yield. The result: a market-priced trust signal that scales without a central authority.

1. Resources are inscribed on Base L2 with content and metadata: permanent, censorship-resistant.
2. Auditors review the resource and stake $ISNAD tokens to attest to its safety.
3. Stakes are locked for a fixed period (30-90 days).
4. If issues are reported, a jury deliberates; if the issue is confirmed, staked tokens are slashed.
5. If the resource remains clean, auditors earn yield from the reward pool.
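The staking lifecycle above can be sketched as a toy model. The slash fraction and yield rate below are illustrative assumptions, not parameters the protocol specifies here:

```python
from dataclasses import dataclass

SLASH_FRACTION = 1.0      # assumption: a confirmed issue burns the full stake
ANNUAL_YIELD = 0.10       # assumption: illustrative reward-pool rate

@dataclass
class Attestation:
    """Hypothetical model of one auditor's stake on one resource."""
    staked: float          # $ISNAD tokens locked
    lock_days: int         # lock window, 30-90 days

    def resolve(self, issue_confirmed: bool) -> float:
        """Auditor's balance once the lock period resolves."""
        if issue_confirmed:
            # Jury upheld a report: the stake is slashed.
            return self.staked * (1 - SLASH_FRACTION)
        # Resource stayed clean: stake returns plus pro-rated yield.
        return self.staked * (1 + ANNUAL_YIELD * self.lock_days / 365)

a = Attestation(staked=1_000, lock_days=90)
print(a.resolve(issue_confirmed=False))  # stake plus ~90 days of yield
print(a.resolve(issue_confirmed=True))   # stake fully slashed
```

The asymmetry is the point: a clean outcome pays a modest yield, while a confirmed issue costs the entire stake, so attesting is only rational when the auditor is genuinely confident.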
On-Chain Inscriptions

Unlike IPFS-based approaches that require pinning services and external infrastructure, ISNAD inscribes resources directly on Base L2 calldata:

~$0.01 per KB inscribed
Permanent on-chain storage
Zero external dependencies
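The ~$0.01/KB figure can be sanity-checked with a back-of-envelope calldata estimate. The gas price and ETH price below are illustrative assumptions, not protocol constants, and the L1 data-fee component of Base transactions is ignored:

```python
CALLDATA_GAS_PER_BYTE = 16     # EVM cost per nonzero calldata byte (EIP-2028)
GAS_PRICE_GWEI = 0.2           # assumed Base L2 execution gas price
ETH_USD = 3_000                # assumed ETH price

def inscription_cost_usd(n_bytes: int) -> float:
    """Rough USD cost of inscribing n_bytes as L2 calldata."""
    gas = n_bytes * CALLDATA_GAS_PER_BYTE
    eth = gas * GAS_PRICE_GWEI * 1e-9   # gwei -> ETH
    return eth * ETH_USD

print(inscription_cost_usd(1024))  # roughly $0.01 per KB under these assumptions
```

Actual cost moves with L2 gas prices and the ETH price, so the per-KB figure should be read as an order of magnitude, not a quote.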
Inscription format:
ISNAD | v1 | type | flags | metadata | content

type:     SKILL | CONFIG | PROMPT | MEMORY | MODEL | API
flags:    COMPRESSED | ENCRYPTED | CHUNKED | ...
metadata: { name, version, author, contentHash, ... }
content:  raw resource bytes
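The envelope above can be sketched as a simple encoder. The field encodings, flag values, and literal `|` separator below are illustrative assumptions, not the normative wire format:

```python
import json

MAGIC = b"ISNAD"
VERSION = 1
# Assumed type IDs, in the order the spec lists them.
TYPES = {"SKILL": 0, "CONFIG": 1, "PROMPT": 2, "MEMORY": 3, "MODEL": 4, "API": 5}
FLAG_COMPRESSED = 0x01
FLAG_ENCRYPTED = 0x02
FLAG_CHUNKED = 0x04

def encode_inscription(rtype: str, flags: int, metadata: dict, content: bytes) -> bytes:
    """Pack one ISNAD v1 inscription payload: magic | version | type | flags | metadata | content."""
    meta = json.dumps(metadata, separators=(",", ":")).encode()
    return b"|".join([
        MAGIC,
        bytes([VERSION]),
        bytes([TYPES[rtype]]),
        bytes([flags]),
        meta,
        content,
    ])

payload = encode_inscription(
    "SKILL",
    FLAG_COMPRESSED,
    {"name": "example-skill", "version": "1.0.0"},
    b"raw resource bytes",
)
assert payload.startswith(b"ISNAD|")
```

A production format would more likely use length-prefixed fields than a separator byte, since raw content can contain any byte value; the sketch only shows the field order.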
Resource Types

ISNAD supports attestation for any content-addressable AI resource:

SKILL: executable code packages, tools, API integrations
CONFIG: agent configurations, gateway settings, capabilities
PROMPT: system prompts, personas, behavioral instructions
MEMORY: knowledge bases, context files, RAG documents
MODEL: fine-tunes, LoRAs, model adapters
API: external service attestations, endpoint integrity

Future resource types can be added via governance without protocol upgrades.
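Governance extensibility can work as a type registry: new resource types get a fresh ID without touching the envelope format. The class and the `DATASET` type below are hypothetical illustrations:

```python
class ResourceTypeRegistry:
    """Toy registry: governance assigns IDs; the wire format never changes."""

    def __init__(self):
        self._types = {name: i for i, name in enumerate(
            ["SKILL", "CONFIG", "PROMPT", "MEMORY", "MODEL", "API"])}

    def register(self, name: str) -> int:
        """Governance action: assign the next free ID to a new type."""
        if name in self._types:
            raise ValueError(f"{name} already registered")
        new_id = max(self._types.values()) + 1
        self._types[name] = new_id
        return new_id

    def type_id(self, name: str) -> int:
        return self._types[name]

reg = ResourceTypeRegistry()
reg.register("DATASET")          # hypothetical future type
print(reg.type_id("DATASET"))    # → 6
```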

Principles
Skin in the game
Auditors risk real value. False attestations have consequences.
Self-selecting expertise
Only confident auditors stake. The market filters for competence.
Permanently verifiable
Everything on-chain. No trust in external infrastructure.
Future-proof
Extensible resource types, versioned protocol, governance upgrades.
Attack resistant
Sybil attacks require capital. Collusion burns all colluders.
Links
GitHub: source code, whitepaper, discussion
X / Twitter: updates and announcements
Documentation: technical guides and API reference