v0.2.0 — 2026-03-09

Design Document

This is the canonical technical reference for ItsGoin. It describes the vision, the architecture, and the current state of every subsystem — with full implementation detail. This document is versioned; each update records what changed.

Changelog

v0.2.0 (2026-03-09): Major design updates — three-layer architecture (Mesh/Social/File), N+10 identification, keep-alive sessions, 3-tier revocation, multi-device identity, growth loop redesign, pull sync from social/file layers, relay pipes default to own-device-only, remove anchor register loop.

v0.1.0 (2026-03-09): First versioned edition. Consolidated from ARCHITECTURE.md, code review, and gap analysis into a single source of truth.

1. The Vision

"A decentralized fetch-cache-re-serve content network that supports public and private sharing without a central server. It replaces 'upload to a platform' with 'publish into a swarm' where attention creates distribution, privacy is client-side encryption, and availability comes from caching, not money."

The honest promise: Cold content survives only if someone intentionally keeps hosting it. The system is a loss-risk network — best-effort availability, not durability guarantees.

Guiding principles

2. Identity & Bootstrap

First startup

  1. Identity: Load or generate ed25519 keypair from {data_dir}/identity.key. NodeId = 32-byte public key. A unique device identity is also generated for multi-device coordination (see Section 21).
  2. Storage: Open SQLite database (distsoc.db), auto-migrate schema.
  3. Blob store: Create {data_dir}/blobs/ with 256 hex-prefix shards (00/ through ff/).
  4. Bootstrap anchors: Load from {data_dir}/anchors.json. If missing, use hardcoded default anchor.
  5. Bootstrap: If the peers table is empty, connect to a bootstrap anchor. Request referrals and matchmaking (unless self or the other node is an anchor). Remain on that anchor's referral list until released (when the referral count limit is reached), and begin the growth loop immediately.
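The shard layout from step 3 can be sketched as a small path helper (a sketch only; `blob_path` is an illustrative name, not the actual function, assuming the hex-encoded CID from Section 16):

```rust
// Sketch of the 256-shard blob layout: {data_dir}/blobs/{hex[0..2]}/{hex}.
// `blob_path` is a hypothetical helper name for illustration.
fn blob_path(data_dir: &str, cid_hex: &str) -> String {
    // The first two hex characters select one of 256 shard dirs (00/ .. ff/).
    format!("{}/blobs/{}/{}", data_dir, &cid_hex[..2], cid_hex)
}

fn main() {
    let p = blob_path("/var/itsgoin", "ab12cd34");
    assert_eq!(p, "/var/itsgoin/blobs/ab/ab12cd34");
}
```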

Startup cycles

Spawned after bootstrap completes:

Cycle | Interval | Purpose
Pull sync | On demand (3h Self Last Encounter threshold) | Pull new posts from social + upstream file peers
Routing diff | 120s (2 min) | Broadcast N1/N2 changes to mesh + keep-alive sessions
Rebalance | 600s (10 min) | Clean dead connections, reconnect preferred, signal growth
Growth loop | 60s + reactive (on N2/N3 receipt) | Fill empty mesh slots until 101 (90% threshold for reactive mode)
Recovery loop | Reactive (mesh empty) | Emergency reconnect via anchors
Social/File connectivity check | 60s | Verify <N4 access to N+10 of active social + file peers; open keep-alive sessions as needed

Removed: Anchor register loop. Anchors are for forming initial mesh connections when bootstrapping, not for ongoing registration. Nodes only connect to anchors during bootstrap or recovery.

3. N+10 Identification

Concept

Every node is identified not just by its NodeId but by its N+10: the node's own NodeId plus the NodeIds of its 10 preferred peers. This makes any node faster to find: if you can reach any of the 11 nodes in someone's N+10, you can find them.

Where N+10 appears

Context | What's included
Self identification | All self-identification messages include the sender's N+10
Following someone | When you follow a peer, you store and maintain their N+10 in your social routes
Post headers | Every post header includes the author's current N+10, updated whenever they post
Blob headers | Blob/file headers include: (1) the author's N+10, (2) the upstream file source's N+10 (if not the author), (3) N+10s of up to 100 downstream file hosts
Recent post lists | Author manifests include the author's N+10 alongside their recent post list

Why this works

Preferred peers are bilateral agreements — stable, long-lived connections. By including them in identification, any node that can find any of your 10 preferred peers can transitively find you within one hop. This eliminates most discovery cascades for socially-connected nodes.

Status: Partial

N+10 is partially implemented — preferred peers exist and are tracked, but N+10 is not yet included in all identification contexts (post headers, blob headers, self-identification messages). Currently preferred_tree in social routes provides similar functionality for relay selection.

4. Connections & Growth

Connection types

Slot architecture

Slot kind | Desktop | Mobile | Purpose
Preferred | 10 | 3 | Bilateral agreements, eviction-protected
Non-preferred | 91 | 12 | Growth loop fills these with diverse peers
Total mesh | 101 | 15 | Long-lived routing backbone
Keep-alive sessions | No hard limit | No hard limit | Social/file layer peers not in mesh (max 50% of session capacity reserved for keep-alive)
Sessions (interactive) | No hard limit | No hard limit | Active DMs, group interaction, anchor matchmaking
Relay pipes | 10 | 2 | Own-device relay by default; opt-in for relaying for others

v0.2.0 change: Removed the distinction between "local" (71) and "wide" (20) non-preferred slots. The growth loop goes wide by default. Session counts are no longer hard-limited — an average computer can sustain ~1000 QUIC sessions without strain. The 50% keep-alive reservation ensures sessions remain available for interactive use.

MeshConnection struct

Each mesh connection tracks: node_id, connection (QUIC), slot_kind (Preferred or NonPreferred), remote_addr (captured from Incoming before accept), last_activity (AtomicU64), created_at.

Keepalive

5. Connection Lifecycle

5.1 Growth Loop (60s timer + reactive on N2/N3 receipt)

Timer: Fires every 60 seconds. Checks current mesh count. If < 101, runs a growth cycle.

Reactive trigger: Fires immediately after receiving a peer's N2/N3 list (from initial exchange or routing diff). Continues firing on each new N2/N3 receipt until mesh is 90% full (~91 connections). After 90%, switches to timer-only mode.

Candidate selection (N2 diversity scoring):

score = 1.0 / reporter_count + (0.3 if not_in_N3)
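The scoring rule translates directly into code (a sketch; `diversity_score` is an illustrative name):

```rust
// Sketch of the N2 diversity score:
// score = 1.0 / reporter_count + (0.3 if not in N3)
// Fewer reporters means a more "diverse" candidate; absence from N3 adds a bonus.
fn diversity_score(reporter_count: u32, in_n3: bool) -> f64 {
    let base = 1.0 / reporter_count.max(1) as f64;
    if in_n3 { base } else { base + 0.3 }
}

fn main() {
    // A peer reported by one mesh peer and absent from N3 outscores a
    // widely-reported peer already visible in N3.
    assert!(diversity_score(1, false) > diversity_score(4, true));
}
```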

Connection attempt cascade:

  1. Direct connect (15s timeout) — use stored/resolved address
  2. Introduction fallback — find N2 reporters who know this peer, ask each to relay-introduce us

Failure handling: Track consecutive failures. After 3 consecutive failures, back off (break loop, wait for next signal). Mark unreachable peers for future skipping.

5.2 Rebalance Cycle (every 600s)

Executed in priority order:

  1. Dead connection removal: Remove connections with close_reason() set, or idle > 600s (zombie)
  2. Stale entry pruning: N2/N3 entries older than 7 days, social route watchers older than 24 hours
  3. Priority 0 — Preferred peer reconnection: Iterate preferred_peers table, reconnect any that are disconnected. If at capacity, evict the lowest-diversity non-preferred peer to make room. Prune preferred peers unreachable for 24+ hours.
  4. Priority 1 — Reconnect recently dead: Re-establish dropped non-preferred connections
  5. Priority 2 — Signal growth loop: Fill remaining empty slots via growth loop
  6. Idle session cleanup: Reap interactive sessions idle > 300s (5 min). Keep-alive sessions are NOT reaped by idle timeout.
  7. Relay intro dedup pruning: Clear seen_intros entries older than 30s, cap at 500
Note: Low diversity score alone does NOT trigger eviction. The only eviction path is Priority 0 (making room for a preferred peer).

5.3 Recovery Loop (reactive, mesh empty)

Trigger: disconnect_peer() fires when last mesh connection drops.

  1. Debounce 2 seconds (wait for cascading disconnects to settle)
  2. Gather anchors: known_anchors table (top-5 by success_count) → fallback to anchor_peers list
  3. For each anchor: connect, request referrals and matchmaking, try direct connect to each referral, fallback to hole punch via anchor for unreachable referrals
  4. Persist on anchor's referral list until released, begin growth loop immediately

5.4 Initial Exchange (on every new connection)

When two nodes connect, they exchange:

Processing: Their N1 → our N2 table (tagged to reporter). Their N2 → our N3 table (tagged to reporter). Store profile, apply deletes, record replica overlaps. Trigger growth loop immediately with new N2/N3 candidates if mesh < 90% full.
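The reporter-tagging rule above can be sketched with plain maps (illustrative types; the real tables live in SQLite as reachable_n2/reachable_n3):

```rust
use std::collections::{HashMap, HashSet};

type NodeId = String; // 32-byte public key; a string here for illustration

// N2/N3 entries are NodeIds tagged by the reporter who shared them.
#[derive(Default)]
struct ReachTable {
    reporters: HashMap<NodeId, HashSet<NodeId>>, // target -> set of reporters
}

impl ReachTable {
    fn record(&mut self, target: NodeId, reporter: NodeId) {
        self.reporters.entry(target).or_default().insert(reporter);
    }
    fn reporter_count(&self, target: &str) -> usize {
        self.reporters.get(target).map_or(0, |s| s.len())
    }
}

fn main() {
    let mut n2 = ReachTable::default();
    // Peer "a" shares its N1 containing "x": "x" lands in our N2, tagged to "a".
    n2.record("x".into(), "a".into());
    n2.record("x".into(), "b".into());
    assert_eq!(n2.reporter_count("x"), 2);
}
```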

5.5 Incremental Routing Diffs (every 120s + on change)

NodeListUpdate (0x01) contains N1 added/removed, N2 added/removed. Sent via uni-stream to all mesh peers and keep-alive sessions. Receiver processes: their N1 adds → our N2 adds, their N2 adds → our N3 adds, etc.

6. Network Knowledge Layers (N1/N2/N3)

Layer | Source | Contains | Shared? | Stored in
N1 | Our connections + social contacts | NodeIds only | Yes (as "N1 share") | mesh_peers + social_routes
N2 | Peers' N1 shares | NodeIds tagged by reporter | Yes (as "N2 share") | reachable_n2
N3 | Peers' N2 shares | NodeIds tagged by reporter | Never | reachable_n3

<N4 access

A node has <N4 access to a target if the target appears in its N1, N2, or N3 tables. This means the target is reachable within 3 hops without needing worm search or relay introduction. The social/file connectivity check (see Section 14) uses <N4 access to determine whether keep-alive sessions are needed.

What is NEVER shared

Address resolution cascade (connect_by_node_id)

Step | Method | Timeout | Source
0 | Social route cache | - | social_routes table (cached addresses for follows/audience)
1 | Peers table | - | Stored address from previous connection
2 | N2 ask reporter | varies | Ask the mesh peer who reported target in their N1
3 | N3 chain resolve | varies | Ask reporter's reporter (2-hop chain)
4 | Worm search | 3s total | Fan-out to all peers → bloom to wide referrals
5 | Relay introduction | 15s | Hole punch via intermediary relay
6 | Session relay | - | Pipe traffic through intermediary (own-device or opt-in)

7. Three-Layer Architecture (Mesh / Social / File)

The network operates across three distinct layers, each with its own connections, routing, and purpose. The separation enables specialized behavior without the layers interfering with each other.

Layer | Purpose | Connections | Sync trigger
Mesh | Structural backbone: N1/N2/N3 routing, diversity, discovery | 101 mesh slots (preferred + non-preferred) | N/A (mesh is infrastructure, not content)
Social | Follows, audience, DMs (the human relationships) | Social routes + keep-alive sessions as needed | Pull posts when Self Last Encounter > 3 hours
File | Content storage and distribution (blobs, CDN trees) | Upstream/downstream file peers + keep-alive sessions as needed | Pull on blob request, push on post creation

Key principle: mesh is not for content

Pull sync does not pull posts from mesh peers. Mesh connections exist for routing diversity and discovery. Content flows through the social layer (posts from people you follow) and the file layer (blobs from upstream/downstream hosts). This separation means mesh connections can be optimized purely for network topology without social bias.

Cross-layer benefits

Each layer's connections contribute to finding nodes and referrals for the other layers. Keep-alive sessions from the social and file layers participate in N2/N3 routing, which improves <N4 access for all three layers. A social keep-alive session might provide the N2 entry that helps the mesh growth loop find a diverse new peer, and vice versa.

8. Anchors

Intent

Anchors are "just peers that are always on" — standard ItsGoin nodes on stable servers. They run the same code with no special protocol. Their value comes from being consistently available for bootstrapping new nodes into the network and matchmaking (introducing peers to each other).

Each profile can carry a preferred anchor list — infrastructure addresses, not social signals.

Status: Complete (with gaps)

When anchors are used

Anchor referral mechanics

When a bootstrapping node connects, the anchor provides referrals from its mesh and referral list. The node persists on the anchor's referral list until released at the referral count limit. During this time, the anchor can matchmake — introducing the new node to other peers requesting referrals.

Session fallback for full anchors

When an anchor's mesh is full (101/101), new nodes fall back to a session connection for matchmaking. The anchor accepts referral requests over session connections, not just mesh.

Remaining gaps

Gap | Impact
Profile anchor lists not used for discovery | Profiles have an anchors field, but it's not consulted during address resolution
No anchor-to-anchor awareness | Anchors don't discover each other unless they connect through normal mesh growth
Bootstrap chicken-and-egg | A fresh anchor with few peers produces few N2 candidates for new nodes; growth stalls because there's nothing to grow from

9. Referrals

Status: Complete

Referral list mechanics (anchor side)

Anchors maintain an in-memory HashMap of registered peers. Each entry: { node_id, addresses, use_count, disconnected_at }.

Property | Value
Tiered usage caps | 3 uses if list < 50, 2 uses at 50+, 1 use at 100+
Disconnect grace | 2 minutes before pruning
Sort order | Least-used first (distributes load)
Auto-supplement | When the explicit list is sparse (< 3 entries), supplement with random mesh peers
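The tiered caps reduce to a one-line function (a sketch; `referral_use_cap` is an illustrative name):

```rust
// Sketch of the tiered referral usage caps:
// 3 uses while the list is under 50, 2 at 50+, 1 at 100+.
fn referral_use_cap(list_len: usize) -> u32 {
    if list_len >= 100 {
        1
    } else if list_len >= 50 {
        2
    } else {
        3
    }
}

fn main() {
    // Larger lists hand out each referral fewer times, spreading load.
    assert_eq!(referral_use_cap(10), 3);
    assert_eq!(referral_use_cap(75), 2);
    assert_eq!(referral_use_cap(150), 1);
}
```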

10. Relay & NAT Traversal

Status: Complete

Relay selection (find_relays_for)

Find up to 3 relay candidates, prioritized:

  1. Preferred tree intersection: Target's preferred_tree (from social_routes, ~100 NodeIds) intersected with our connections. Prefer our own preferred peers within that tree. TTL=0.
  2. N2 reporters: Our mesh peers who reported the target in their N1 share. TTL=0.
  3. N3 via preferred tree: Target's preferred_tree intersected with N3 reporters. TTL=1.
  4. N3 reporters: Any N3 reporter for the target. TTL=1.

RelayIntroduce flow (0xB0/0xB1)

  1. Requester → opens bi-stream to relay, sends RelayIntroduce { target, requester, requester_addresses, ttl }
  2. Relay handles three cases:
    • We ARE the target: Return our addresses, spawn hole punch to requester
    • Target is our mesh peer: Forward request to target on new bi-stream, relay response back. Inject observed public addresses for both parties.
    • TTL > 0 and target in our N2: Forward to the reporter with TTL-1 (chain forwarding, max TTL=2)
  3. Requester receives RelayIntroduceResult { target_addresses, relay_available }, then:
    • hole_punch_parallel(): Try all returned addresses in parallel, retry every 2s, 30s total timeout
    • If hole punch fails and relay_available: open SessionRelay (0xB2) pipe through the intermediary

Session relay (relay pipes)

Intermediary splices bi-streams between requester and target. Desktop: max 10 concurrent pipes. Mobile: max 2. Each pipe has a 50MB byte cap and 2-min idle timeout.

v0.2.0 change: Relay pipes are own-device-only by default. A node will only relay traffic between its own devices (same identity key, different device identity). Users can opt in to relaying for others in Settings, but this is not enabled automatically. This prevents nodes from unknowingly burning bandwidth for random peers while still enabling personal multi-device routing.

Deduplication & cooldowns

Mechanism | Window | Purpose
seen_intros | 30s | Prevents forwarding loops
relay_cooldowns | 5 min per target | Prevents relay spamming

Hole punch mechanics

Parse all returned addresses into QUIC EndpointAddr. Spawn parallel connect attempts to every address simultaneously. Each attempt: 2s timeout, retried until 30s total deadline. First successful connection wins, all others aborted.

11. Worm Search

Status: Complete

Used at step 4 of connect_by_node_id, after N2/N3 resolution fails.

Algorithm

  1. Build needles: target NodeId + target's N+10 (up to 10 preferred peers from their profile/cached N+10)
  2. Local check: Search own connections + N2/N3 for any of the 11 needles
  3. Fan-out (500ms timeout): Send WormQuery{ttl=0} (0x60) to all mesh peers in parallel. Each peer checks their local connections + N2/N3.
  4. Bloom round (1.5s timeout): Each fan-out response includes a random "wide referral" peer. Connect to those referrals and send WormQuery{ttl=1} (they fan-out to their peers with ttl=0).
  5. Total timeout: 3 seconds for the entire search.
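Step 1's needle construction can be sketched as follows (illustrative; real NodeIds are 32-byte keys, plain strings here for brevity):

```rust
use std::collections::HashSet;

// Sketch: the needle set is the target plus up to 10 preferred peers from
// its cached N+10, deduplicated while preserving order (target first).
fn build_needles(target: &str, preferred: &[&str]) -> Vec<String> {
    let mut seen = HashSet::new();
    let mut needles = Vec::new();
    for id in std::iter::once(target).chain(preferred.iter().take(10).copied()) {
        if seen.insert(id.to_string()) {
            needles.push(id.to_string());
        }
    }
    needles
}

fn main() {
    // The target appearing in its own N+10 cache is deduplicated.
    let needles = build_needles("t", &["p1", "p2", "t"]);
    assert_eq!(needles.len(), 3);
}
```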

Dedup & cooldown

Mechanism | Window | Purpose
seen_worms | 10s | Prevents loops during fan-out
Miss cooldown | 5 min (in DB) | Prevents repeated searches for unreachable targets

12. Preferred Peers

Status: Complete

Negotiation (MeshPrefer, 0xB3)

Properties

13. Social Routing

Status: Complete

Caches addresses for follows and audience members, separate from mesh connections.

social_routes table

Field | Purpose
node_id | The social contact's NodeId
nplus10 | Their N+10 (NodeId + 10 preferred peers)
addresses | Their known IP addresses
peer_addresses | Their N+10 contacts (PeerWithAddress list)
relation | Follow / Audience / Mutual
status | Online / Disconnected
last_connected_ms | When we last connected
reach_method | Direct / Relay / Indirect
preferred_tree | ~100 NodeIds for relay tree

Wire messages

Code | Name | Stream | Purpose
0x70 | SocialAddressUpdate | Uni | Sent when a social contact's address changes or they reconnect
0x71 | SocialDisconnectNotice | Uni | Sent when a social contact disconnects
0x72 | SocialCheckin | Bi | Keepalive with address + N+10 updates

Reconnect watchers

reconnect_watchers table: when peer A asks about disconnected peer B, A is registered as a watcher. When B reconnects, A gets a SocialAddressUpdate notification. Watchers pruned after 24 hours.

Social route lifecycle

14. Keep-Alive Sessions

Status: Planned

Purpose

When the mesh 101 doesn't provide <N4 access to all the nodes we need for social and file operations, keep-alive sessions bridge the gap. These are long-lived connections that participate in N2/N3 routing but are not part of the mesh 101.

Social/File connectivity check (every 60s)

Periodically check whether we have <N4 access (within N1/N2/N3) to the N+10 of every node we need:

For any node whose N+10 is NOT reachable within N3, open a keep-alive session to the closest available node in their N+10 (or to them directly if possible). This ensures we can always find and reach our social and file contacts without worm search.
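The check above can be sketched as a set-membership test (illustrative names; the real N1/N2/N3 stores are the tables from Section 6):

```rust
use std::collections::HashSet;

// Sketch of the <N4 check: a keep-alive session is needed only when NONE of
// the contact's N+10 appears in our N1, N2, or N3 tables.
fn needs_keepalive(
    nplus10: &[&str],
    n1: &HashSet<&str>,
    n2: &HashSet<&str>,
    n3: &HashSet<&str>,
) -> bool {
    !nplus10
        .iter()
        .any(|id| n1.contains(id) || n2.contains(id) || n3.contains(id))
}

fn main() {
    let n1: HashSet<&str> = ["peer_a"].into_iter().collect();
    let n2: HashSet<&str> = ["peer_b"].into_iter().collect();
    let n3: HashSet<&str> = HashSet::new();
    // Contact whose N+10 overlaps our N2: already <N4 reachable.
    assert!(!needs_keepalive(&["peer_b", "peer_z"], &n1, &n2, &n3));
    // Contact with no overlap anywhere: open a keep-alive session.
    assert!(needs_keepalive(&["peer_y", "peer_z"], &n1, &n2, &n3));
}
```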

Keep-alive session behavior

Cross-layer benefit

Keep-alive sessions from the social and file layers feed N2/N3 entries back into the mesh layer. A social keep-alive to a friend's preferred peer might provide N2 entries that help the mesh growth loop. Similarly, a file keep-alive to an upstream host might provide access to nodes the mesh has never seen. The three layers compound each other's reach.

15. Content Propagation

Intent

"Attention creates propagation": when you view something, you cache it. The cache is optionally offered for serving. Hot content spreads naturally through demand. Cold content decays unless intentionally hosted.

The CDN vision: every file by author X carries an author manifest with the author's N+10 and recent post list. If you hold any file by author X, you passively know X's recent posts and can find X through their N+10.

Status: Partial

What's missing

Gap | Impact
No passive file-chain propagation | AuthorManifest only travels with explicit BlobResponse, not passively; holding old files by an author doesn't notify you of their new posts
N+10 not yet in file headers | Blob headers should include author N+10, upstream N+10, and downstream N+10s; currently only AuthorManifest travels with blobs
No "fetch from any peer who has it" | Blobs are fetched from specific peers; no content-addressed routing ("who has blob X?")

16. Files & Storage

Blob storage

Status: Complete

Property | Value
CID format | BLAKE3 hash of blob data (32 bytes, hex-encoded)
Filesystem path | {data_dir}/blobs/{hex[0..2]}/{hex} (256 shards)
Metadata table | blobs (cid, post_id, author, size_bytes, created_at, last_accessed_at, pinned)
Max blob size | 10 MB
Max attachments per post | 4

File headers (intent)

Every post and blob carries header information that enables discovery and routing:

Blob transfer flow (0x90/0x91)

  1. Requester sends BlobRequest { cid, requester_addresses }
  2. Host checks local BlobStore:
    • Has blob: Return base64-encoded data + CDN manifest + file header (N+10s, recent posts). Try to register requester as downstream (max 100). If full, return existing downstream as redirect candidates.
    • No blob: Return found: false
  3. Requester verifies CID, stores blob locally, records upstream in blob_upstream table. Updates Self Last Encounter for the author based on file header.

CDN hosting tree

Status: Complete

Blob eviction

Status: Complete

priority = pin_boost + (relationship * heart_recency * freshness / (peer_copies + 1))
Factor | Calculation
pin_boost | 1000.0 if pinned, else 0.0; own blobs are auto-pinned
relationship | 5.0 (us), 3.0 (mutual follow+audience), 2.0 (follow), 1.0 (audience), 0.1 (stranger)
heart_recency | Linear decay over 30 days: max(0, 1 - age/30d)
freshness | 1 / (1 + post_age_days)
peer_copies | Known replica count (from post_replicas, only if < 1 hour old)
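The formula and factor table translate directly into code (a sketch; `blob_priority` is an illustrative name):

```rust
// Sketch of the eviction priority score:
// priority = pin_boost + (relationship * heart_recency * freshness / (peer_copies + 1))
fn blob_priority(
    pinned: bool,
    relationship: f64,    // 5.0 us .. 0.1 stranger
    heart_age_days: f64,  // days since last heart
    post_age_days: f64,   // days since posting
    peer_copies: u32,     // known replicas elsewhere
) -> f64 {
    let pin_boost = if pinned { 1000.0 } else { 0.0 };
    let heart_recency = (1.0 - heart_age_days / 30.0).max(0.0); // linear 30-day decay
    let freshness = 1.0 / (1.0 + post_age_days);
    pin_boost + relationship * heart_recency * freshness / (peer_copies as f64 + 1.0)
}

fn main() {
    // A pinned blob always outranks any unpinned one (max unpinned score is 5.0).
    assert!(blob_priority(true, 0.1, 30.0, 365.0, 99) > blob_priority(false, 5.0, 0.0, 0.0, 0));
}
```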

Hosting quota & pin modes

Status: Planned

Concept | Status
3x hosting quota tracking & enforcement | Not started; every node must host 3x the bytes they personally posted
Anchor pin vs Fork pin | Not started; anchor pin = host the original (author retains control), fork pin = independent copy (you become key owner)
Personal vault | Not started; private durability for saved/pinned items, not counted toward the 3x quota

17. Sync Protocol

Wire format

[1 byte: MessageType] [4 bytes: length (big-endian)] [length bytes: JSON payload]

Max payload: 16 MB. ALPN: distsoc/3.
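The framing can be sketched as an encoder/decoder pair (illustrative names; the real implementation reads from QUIC streams rather than byte slices):

```rust
// Sketch of the wire framing: [type: 1 byte][length: 4 bytes big-endian][JSON payload].
const MAX_PAYLOAD: usize = 16 * 1024 * 1024; // 16 MB cap from the spec

fn encode_frame(msg_type: u8, payload: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(5 + payload.len());
    frame.push(msg_type);
    frame.extend_from_slice(&(payload.len() as u32).to_be_bytes());
    frame.extend_from_slice(payload);
    frame
}

// Returns (message type, payload) or None on a short/oversized frame.
fn decode_frame(frame: &[u8]) -> Option<(u8, &[u8])> {
    if frame.len() < 5 {
        return None;
    }
    let len = u32::from_be_bytes([frame[1], frame[2], frame[3], frame[4]]) as usize;
    if len > MAX_PAYLOAD || frame.len() < 5 + len {
        return None;
    }
    Some((frame[0], &frame[5..5 + len]))
}

fn main() {
    // 0x01 = NodeListUpdate; payload is JSON.
    let frame = encode_frame(0x01, br#"{"added":[]}"#);
    let (t, body) = decode_frame(&frame).unwrap();
    assert_eq!(t, 0x01);
    assert_eq!(body, &br#"{"added":[]}"#[..]);
}
```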

Pull sync: social + file layers, not mesh

v0.2.0 change: Pull sync pulls posts from social layer peers (follows, audience) and upstream file peers, NOT from mesh peers. Mesh connections exist for routing diversity, not content. This separates infrastructure from content flow.

Self Last Encounter: For each peer we sync with, we track the timestamp of our last successful sync. When Self Last Encounter ages beyond 3 hours, a pull sync is triggered. Self Last Encounter is updated to the newer of: (a) what's currently stored, or (b) the "file last update" timestamp from file headers received during blob transfers. Since file headers include the author's recent post list, downloading a blob from any peer hosting that author's content can update Self Last Encounter for the author.
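The bookkeeping above reduces to two small functions (illustrative names, matching the threshold in Appendix A; timestamps are unix millis):

```rust
// Sketch of Self Last Encounter tracking.
const SELF_LAST_ENCOUNTER_THRESHOLD_MS: u64 = 3 * 60 * 60 * 1000; // 3 hours

// The timestamp only moves forward: take the newer of the stored value and
// a "file last update" timestamp observed in a file header during a blob transfer.
fn update_self_last_encounter(stored_ms: u64, file_header_ms: u64) -> u64 {
    stored_ms.max(file_header_ms)
}

// A pull sync fires once the last encounter ages past the threshold.
fn pull_sync_due(now_ms: u64, last_encounter_ms: u64) -> bool {
    now_ms.saturating_sub(last_encounter_ms) > SELF_LAST_ENCOUNTER_THRESHOLD_MS
}

fn main() {
    let stored = update_self_last_encounter(1_000, 5_000);
    assert_eq!(stored, 5_000);
    assert!(pull_sync_due(stored + SELF_LAST_ENCOUNTER_THRESHOLD_MS + 1, stored));
    assert!(!pull_sync_due(stored + 1, stored));
}
```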

Pull sync filtering

Message types (37 total)

Hex | Name | Stream | Purpose
0x01 | NodeListUpdate | Uni | Incremental N1/N2 diff broadcast
0x02 | InitialExchange | Bi | Full state exchange on connect
0x03 | AddressRequest | Bi | Resolve NodeId → address via reporter
0x04 | AddressResponse | Bi | Address resolution reply
0x05 | RefuseRedirect | Uni | Refuse mesh + suggest alternative
0x40 | PullSyncRequest | Bi | Request posts filtered by follows
0x41 | PullSyncResponse | Bi | Respond with filtered posts
0x42 | PostNotification | Uni | Lightweight "new post" push to social contacts
0x43 | PostPush | Uni | Direct encrypted post delivery to recipients
0x44 | AudienceRequest | Bi | Request audience member list
0x45 | AudienceResponse | Bi | Audience list reply
0x50 | ProfileUpdate | Uni | Push profile changes
0x51 | DeleteRecord | Uni | Signed post deletion
0x52 | VisibilityUpdate | Uni | Re-wrapped visibility after revocation
0x60 | WormQuery | Bi | Fan-out search beyond N3
0x61 | WormResponse | Bi | Worm search reply
0x70 | SocialAddressUpdate | Uni | Social contact address changed
0x71 | SocialDisconnectNotice | Uni | Social contact disconnected
0x72 | SocialCheckin | Bi | Keepalive + address + N+10 update
0x90 | BlobRequest | Bi | Fetch blob by CID
0x91 | BlobResponse | Bi | Blob data + CDN manifest + file header
0x92 | ManifestRefreshRequest | Bi | Check manifest freshness
0x93 | ManifestRefreshResponse | Bi | Updated manifest reply
0x94 | ManifestPush | Uni | Push updated manifests downstream
0x95 | BlobDeleteNotice | Uni | CDN tree healing on eviction
0xA0 | GroupKeyDistribute | Uni | Distribute circle group key to member
0xA1 | GroupKeyRequest | Bi | Request group key for a circle
0xA2 | GroupKeyResponse | Bi | Group key reply
0xB0 | RelayIntroduce | Bi | Request relay introduction
0xB1 | RelayIntroduceResult | Bi | Introduction result with addresses
0xB2 | SessionRelay | Bi | Splice bi-streams (own-device default)
0xB3 | MeshPrefer | Bi | Preferred peer negotiation
0xB4 | CircleProfileUpdate | Uni | Encrypted circle profile variant
0xC0 | AnchorRegister | Uni | Register with anchor (bootstrap/recovery only)
0xC1 | AnchorReferralRequest | Bi | Request peer referrals from anchor
0xC2 | AnchorReferralResponse | Bi | Referral list reply
0xE0 | MeshKeepalive | Uni | 30s connection heartbeat

18. Encryption

Envelope encryption (1-layer)

Status: Complete

  1. Generate random 32-byte CEK (Content Encryption Key)
  2. Encrypt content: ChaCha20-Poly1305(plaintext, CEK, random_nonce)
  3. Store as: base64(nonce[12] || ciphertext || tag[16])
  4. For each recipient (including self):
    • X25519 DH: our_ed25519_private (as X25519) * their_ed25519_public (as montgomery)
    • Derive wrapping key: BLAKE3_derive_key("distsoc/cek-wrap/v1", shared_secret)
    • Wrap CEK: ChaCha20-Poly1305(CEK, wrapping_key, random_nonce) → 60 bytes per recipient
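The stored layout from step 3 (after base64 decoding) can be split back apart with plain slicing (a sketch; real code would hand the parts to a ChaCha20-Poly1305 implementation for decryption):

```rust
// Sketch: split decoded envelope bytes laid out as nonce[12] || ciphertext || tag[16].
fn split_envelope(bytes: &[u8]) -> Option<(&[u8], &[u8], &[u8])> {
    // Reject anything too short to hold a 12-byte nonce and 16-byte tag.
    if bytes.len() < 12 + 16 {
        return None;
    }
    let (nonce, rest) = bytes.split_at(12);
    let (ciphertext, tag) = rest.split_at(rest.len() - 16);
    Some((nonce, ciphertext, tag))
}

fn main() {
    let mut blob = vec![0u8; 12]; // nonce
    blob.extend_from_slice(b"secret"); // ciphertext
    blob.extend_from_slice(&[1u8; 16]); // tag
    let (nonce, ct, tag) = split_envelope(&blob).unwrap();
    assert_eq!((nonce.len(), ct, tag.len()), (12, &b"secret"[..], 16));
}
```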

Visibility variants

Variant | Overhead | Audience limit
Public | None | Unlimited
Encrypted { recipients } | ~60 bytes per recipient | ~500 (256KB cap)
GroupEncrypted { group_id, epoch, wrapped_cek } | ~100 bytes total | Unlimited (one CEK wrap for the group)

PostId integrity

PostId = BLAKE3(Post) covers the ciphertext, NOT the recipient list. Visibility is separate metadata. This means visibility can be updated (re-wrapped) without changing the PostId.

Group keys (circles)

Status: Complete

Three-tier access revocation

Three levels of revocation, chosen based on threat level:

Tier 1: Remove Going Forward (default)

Revoked member is excluded from future posts automatically. They retain access to anything they already received. This is the default behavior when removing a circle member — no special action needed.

When to use: Normal membership changes. Someone leaves a group, you unfollow someone. The common case.

Cost: Zero. Just stop including them in future recipient lists.

Tier 2: Rewrap Old Posts (cleanup)

Same CEK, re-wrap for remaining recipients only. The revoked member can no longer unwrap the CEK even if they later obtain the ciphertext. Propagate updated visibility headers via VisibilityUpdate (0x52).

When to use: Revoked member never synced the post (common with pull-based sync — encrypted posts only sent to recipients). You want to clean up access lists.

Cost: One WrappedKey operation per remaining recipient, no content re-encryption.

Tier 3: Delete & Re-encrypt (nuclear)

Generate new CEK, re-encrypt content, wrap new CEK for remaining recipients, push delete for old post ID, repost with new content but same logical identity. Well-behaved nodes honor the delete.

When to use: Revoked member already has the ciphertext and could unwrap the old CEK. Only for content that poses an actual danger/risk if the revoked member retains access. Recommended against in most cases.

Cost: Full re-encryption + delete propagation + new post propagation. Heavy.

Trust model: The app honors delete requests from content authors by default. A modified client could ignore deletes, but this is true of any decentralized system. For legal purposes: the author has proof they issued the delete and revoked access.

Private profiles (Phase D-4)

Status: Complete

Different profile versions per circle, encrypted with the circle/group key. A peer sees the profile version for the most-privileged circle they belong to. CircleProfileUpdate (0xB4) wire message. Public profiles can be hidden (public_visible=false strips display_name/bio).

19. Delete Propagation

Status: Complete

Delete records

DeleteRecord { post_id, author, timestamp_ms, signature } — ed25519-signed by author. Stored in deleted_posts table (INSERT OR IGNORE). Applied: DELETE from posts table WHERE post_id AND author match.

Propagation paths

  1. InitialExchange: All delete records exchanged on connect
  2. DeleteRecord message (0x51): Pushed via uni-stream to connected peers on creation
  3. PullSync: Included in responses for eventual consistency

CDN cascade on delete

  1. Send BlobDeleteNotice to all downstream hosts (with our upstream info for tree healing)
  2. Send BlobDeleteNotice to upstream
  3. Clean up blob metadata, manifests, downstream/upstream records
  4. Delete blob from filesystem

20. Social Graph Privacy

Status: Complete

Known temporary weakness: An observer who diffs your N1 share over time can infer your social contacts (they're the stable members while mesh peers rotate). This will be addressed when CDN file-swap peers are added to N1, making the stable set larger and harder to distinguish.

21. Multi-Device Identity

Status: Planned

Concept

Multiple devices share the same identity key (ed25519 keypair, same NodeId). All devices ARE the same node from the network's perspective. Posts from any device appear as the same author.

Device identity

Each device also generates a unique device identity (separate ed25519 keypair). This device-specific key is used to:

Setup

Export identity.key from one device, import on another. The device identity is generated automatically on each device. Once two devices share an identity key, they can discover each other through normal network routing (same NodeId appears at multiple addresses).

22. Phase 2: Global Reciprocity / QoS

Status: Planned

The MVP is explicitly a "friends-only swarm" that works at small scale. Phase 2 adds:

  1. Hosting Pledge — signed assertion: "I host ≥ max(3x my posted bytes, 128MB minimum)"
  2. Random chunk audits — probabilistic proof of storage
  3. Tit-for-tat QoS — serve contributors first, guests last when overloaded
  4. Soft enforcement — degrade service gracefully, don't hard-ban NAT/mobile users

"Without Phase 2, MVP network will behave like a friends-only swarm. That's fine — just don't market it as resilient public infra until reciprocity exists."

Appendix A: Timeout Reference

Constant | Value | Purpose
MESH_KEEPALIVE_INTERVAL | 30s | Ping to prevent zombie detection
ZOMBIE_TIMEOUT | 600s (10 min) | No activity → dead connection
SESSION_IDLE_TIMEOUT | 300s (5 min) | Reap idle interactive sessions (NOT keep-alive)
SELF_LAST_ENCOUNTER_THRESHOLD | 10800s (3 hours) | Trigger pull sync when last encounter exceeds this
QUIC_CONNECT_TIMEOUT | 15s | Direct connection establishment
HOLE_PUNCH_TIMEOUT | 30s | Overall hole punch window
HOLE_PUNCH_ATTEMPT | 2s | Per-address attempt within window
RELAY_INTRO_TIMEOUT | 15s | Relay introduction request
RELAY_PIPE_IDLE | 120s (2 min) | Relay pipe idle before close
RELAY_COOLDOWN | 300s (5 min) | Per-target relay cooldown
RELAY_INTRO_DEDUP | 30s | Dedup intro forwarding
WORM_TOTAL_TIMEOUT | 3s | Entire worm search
WORM_FAN_OUT_TIMEOUT | 500ms | Per-peer fan-out query
WORM_BLOOM_TIMEOUT | 1.5s | Bloom round to wide referrals
WORM_DEDUP | 10s | In-flight worm dedup
WORM_COOLDOWN | 300s (5 min) | Miss cooldown before retry
REFERRAL_DISCONNECT_GRACE | 120s (2 min) | Anchor keeps peer in referral list after disconnect
N2/N3_STALE_PRUNE | 7 days | Remove old reach entries
PREFERRED_UNREACHABLE_PRUNE | 24 hours | Remove preferred peers that can't be reached
GROWTH_LOOP_TIMER | 60s | Periodic growth loop check
CONNECTIVITY_CHECK | 60s | Social/file <N4 access check for keep-alive sessions
DM_RECENCY_WINDOW | 14400s (4 hours) | DM'd nodes included in connectivity check

Appendix B: Design Constraints

Constraint | Value | Notes
Visibility metadata cap | 256 KB | Applies to WrappedKey lists in encrypted posts
Max recipients (per-recipient wrapping) | ~500 | 256KB / ~500 bytes JSON per WrappedKey
Max blob size | 10 MB | Per attachment
Max attachments per post | 4 | -
Public post encryption overhead | Zero | No WrappedKeys, no sharding, unlimited audience
Max payload (wire) | 16 MB | Length-prefixed JSON framing
Mesh slots | 101 (Desktop) / 15 (Mobile) | Preferred + non-preferred, no local/wide distinction
Keep-alive session cap | 50% of session capacity | Ensures interactive sessions remain available

Appendix C: Implementation Scorecard

Area | Status
Mesh connection architecture (101 slots, preferred/non-preferred) | Complete
N1/N2/N3 knowledge layers | Complete
Growth loop (60s timer + reactive on N2/N3) | Partial (timer exists, reactive trigger needs update)
Preferred peers + bilateral negotiation | Complete
N+10 identification | Partial (preferred peers exist, N+10 not in all headers)
Worm search | Complete
Relay introduction + hole punch | Complete
Session relay (own-device default) | Partial (relay works, own-device restriction not implemented)
Social routing cache | Complete
Three-layer architecture (Mesh/Social/File) | Partial (layers exist conceptually, pull sync still uses mesh)
Keep-alive sessions | Planned
Self Last Encounter sync trigger | Planned
Algorithm-free reverse-chronological feed | Complete
Envelope encryption (1-layer) | Complete
Group keys for circles | Complete
Three-tier access revocation | Partial (Tier 1+2 work, Tier 3 crypto exists but no UI)
Private profiles per circle | Complete
Pull-based sync with follow filtering | Complete
Push notifications (post/profile/delete) | Complete
Blob storage + transfer | Complete
CDN hosting tree + manifests | Complete
Blob eviction with priority scoring | Complete
Anchor bootstrap + referrals | Complete
Delete propagation + CDN cascade | Complete
Multi-device identity | Planned
Content propagation via attention | Partial
3x hosting quota | Planned
Phase 2 reciprocity/QoS | Planned
Audience sharding | Planned
Custom feeds | Planned

Appendix D: Critical Path Forward

The highest-impact items, in priority order:

1. Three-layer separation (pull sync from social/file, not mesh)

Implement Self Last Encounter tracking and move pull sync to social + upstream file peers. This is the foundation for the layered architecture.
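A hypothetical sketch of how Self Last Encounter tracking could gate pull sync — pull only from social/file peers whose content may have moved on since we last met them (type and method names are illustrative, not the actual crate API):

```rust
use std::collections::HashMap;

type NodeId = [u8; 32];

/// Illustrative Self Last Encounter table: remember when we last synced
/// with each author, and pull again only when that knowledge is stale.
#[derive(Default)]
pub struct LastEncounter {
    seen: HashMap<NodeId, u64>, // author -> unix seconds of last sync
}

impl LastEncounter {
    pub fn record(&mut self, author: NodeId, now: u64) {
        self.seen.insert(author, now);
    }

    /// True if we have never synced with this author, or if the last
    /// encounter is older than `max_age_secs`.
    pub fn needs_pull(&self, author: &NodeId, now: u64, max_age_secs: u64) -> bool {
        match self.seen.get(author) {
            None => true,
            Some(&t) => now.saturating_sub(t) > max_age_secs,
        }
    }
}
```

The point of the table is that pull sync becomes a per-author decision driven by social/file relationships, rather than a blanket sweep over whatever mesh peers happen to be connected.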

2. N+10 in all identification

Add N+10 (NodeId + 10 preferred peers) to self-identification, post headers, blob headers, and social routes. This dramatically improves findability.
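
A minimal sketch of the N+10 shape — a node plus up to ten of its preferred peers, attached to headers so any holder of a post or blob can route back toward the author's neighborhood (struct and method names are assumptions, not the actual wire format):

```rust
type NodeId = [u8; 32];

/// Hypothetical N+10 identification: the node itself plus up to ten
/// preferred peers, embedded in self-identification and content headers.
pub struct NPlusTen {
    pub node: NodeId,
    pub preferred: Vec<NodeId>, // at most 10 entries
}

impl NPlusTen {
    /// Clamp the preferred list to ten entries so headers stay small.
    pub fn new(node: NodeId, mut preferred: Vec<NodeId>) -> Self {
        preferred.truncate(10);
        Self { node, preferred }
    }
}
```

Even if the author is offline or unreachable, a searcher holding an N+10 has ten more stable addresses to try before falling back to worm search.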

3. Keep-alive sessions

Implement the social/file connectivity check and keep-alive sessions for peers not reachable within N3, plus cross-layer N2/N3 routing over those sessions.
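
The admission decision could be sketched as follows — a social/file peer gets a keep-alive session only if it is not already reachable within N1/N2/N3, and only while keep-alive sessions stay under the 50%-of-capacity cap from Appendix B (function name and parameters are illustrative):

```rust
use std::collections::HashSet;

type NodeId = [u8; 32];

/// Hypothetical keep-alive admission check: open a keep-alive session
/// only for peers outside N1/N2/N3 reach, and never let keep-alive
/// sessions consume more than half of session capacity, so interactive
/// sessions remain available.
pub fn should_keep_alive(
    peer: &NodeId,
    reachable_n1_n3: &HashSet<NodeId>,
    keep_alive_count: usize,
    session_capacity: usize,
) -> bool {
    !reachable_n1_n3.contains(peer) && keep_alive_count < session_capacity / 2
}
```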

4. Growth loop reactive trigger

Fire the growth loop immediately on N2/N3 receipt until the mesh is 90% full. Currently the loop is timer-based only.
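
The trigger condition is a one-liner; a sketch under the assumption that "90% full" means filled mesh slots versus total slots (the function is illustrative, not actual crate code):

```rust
/// Hypothetical reactive trigger: on receipt of an N2/N3 update, fire the
/// growth loop immediately while the mesh is under 90% full; above that,
/// rely on the 60s GROWTH_LOOP_TIMER alone.
pub fn should_fire_growth_loop(filled_slots: usize, total_slots: usize) -> bool {
    // Integer comparison avoids floats: filled/total < 0.9  <=>  filled*10 < total*9
    filled_slots * 10 < total_slots * 9
}
```

With 101 desktop slots the reactive trigger would stop at 91 filled slots, leaving the last stretch to the periodic timer.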

5. Multi-device identity

Use the same identity key across all devices, with a device-specific identity for self-discovery and own-device relay.

6. File-chain propagation

Make the AuthorManifest (N+10 plus recent posts) propagate passively, enabling discovery of new content from any blob holder.

7. Own-device relay restriction

Restrict relay pipes to a user's own devices by default, with an explicit opt-in to relay for others.
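
The policy reduces to a single predicate; a sketch assuming multi-device identity (Section 21) means devices of one user share an identity key (names and the boolean flag are assumptions, not the actual crate API):

```rust
type NodeId = [u8; 32];

/// Hypothetical relay-pipe policy: by default a node relays only between
/// its own devices (same identity key); relaying for third parties
/// requires an explicit opt-in.
pub fn relay_allowed(
    requester_identity: &NodeId,
    own_identity: &NodeId,
    relay_for_others: bool,
) -> bool {
    requester_identity == own_identity || relay_for_others
}
```

Defaulting to own-device-only keeps a node's bandwidth from being consumed by strangers while still letting altruistic nodes opt in.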

Appendix E: Features Designed But Not Built

| Feature | Source | Status |
| --- | --- | --- |
| Three-layer pull sync (social/file, not mesh) | v0.2.0 design | Planned |
| N+10 in all identification & headers | v0.2.0 design | Planned |
| Keep-alive sessions | v0.2.0 design | Planned |
| Multi-device identity | v0.2.0 design | Planned |
| Own-device relay restriction | v0.2.0 design | Planned |
| Self Last Encounter sync trigger | v0.2.0 design | Planned |
| 3x hosting quota tracking & enforcement | project discussion.txt | Planned |
| Anchor pin vs Fork pin distinction | project discussion.txt | Planned |
| Audience sharding for groups > 250 | ARCHITECTURE.md | Planned |
| Repost as first-class post type | project discussion.txt | Planned |
| Custom feeds (keyword/media/family rules) | project discussion.txt | Planned |
| Bounce routing (social graph as routing) | ARCHITECTURE.md | Planned |
| Reactions (pin/thumbs up/thumbs down) | TODO.md | Planned |
| RefuseRedirect handling (retry suggested peer) | protocol.rs | Partial (send-only) |
| Profile anchor list used for discovery | ARCHITECTURE.md | Partial (field exists) |
| File-chain propagation (passive post discovery) | Design | Partial (manifest exists) |
| Anchor-to-anchor gossip/registry | Observed gap | Planned |

Appendix F: File Map

crates/core/
  src/
    lib.rs          — module registration, parse_connect_string, parse_node_id_hex
    types.rs        — Post, PostId, NodeId, PublicProfile, PostVisibility, WrappedKey,
                      VisibilityIntent, Circle, PeerRecord, Attachment
    content.rs      — compute_post_id (BLAKE3), verify_post_id
    crypto.rs       — X25519 key conversion, DH, encrypt_post, decrypt_post, BLAKE3 KDF
    blob.rs         — BlobStore, compute_blob_id, verify_blob
    storage.rs      — SQLite: posts, peers, follows, profiles, circles, circle_members,
                      mesh_peers, reachable_n2/n3, social_routes, blobs, group_keys,
                      preferred_peers, known_anchors; auto-migration
    protocol.rs     — MessageType enum (37 types), ALPN (distsoc/3),
                      length-prefixed JSON framing, read/write helpers
    connection.rs   — ConnectionManager: mesh QUIC connections (MeshConnection),
                      session connections, slot management, initial exchange,
                      N1/N2 diff broadcast, pull sync, relay introduction
    network.rs      — iroh Endpoint, accept loop, connect_to_peer,
                      connect_by_node_id (7-step cascade), mDNS discovery
    node.rs         — Node struct (ties identity + storage + network), post CRUD,
                      follow/unfollow, profile CRUD, circle CRUD, encrypted post creation,
                      startup cycles, bootstrap, anchor register cycle

crates/cli/
  src/main.rs       — interactive REPL: post, feed, circles, connect, sync, etc.

crates/tauri-app/
  src/lib.rs        — Tauri v2 commands (38 IPC handlers), DTOs

frontend/
  index.html        — single-page UI: 5 tabs (Feed / My Posts / People / Messages / Settings)
  app.js            — Tauri invoke calls, rendering, identicon generator, circle CRUD
  style.css         — dark theme, post cards, visibility badges, transitions

License

ItsGoin is released under the Apache License, Version 2.0. You may use, modify, and distribute this software freely under the terms of that license.

This is a gift. Use it well.