# j17 Documentation

Last updated: 2026-04-15T04:12:51Z
Generated: 2026-04-15T06:41:14Z
Source: https://j17.dev/docs

---



# Getting Started

Get up and running with j17 in under five minutes.

## Prerequisites

You'll need:
- A j17 account (sign up at [j17.dev](https://j17.dev))
- An API key for your instance
- `curl` or any HTTP client

## 1. Create your spec

Define your aggregate types and events in JSON:

```json
{
  "aggregate_types": {
    "user": {
      "events": {
        "was_created": {
          "schema": {
            "type": "object",
            "properties": {
              "name": { "type": "string" },
              "email": { "type": "string", "format": "email" }
            },
            "required": ["name", "email"]
          },
          "handler": [
            { "set": { "target": "", "value": "$.data" } },
            { "set": { "target": "created_at", "value": "$.metadata.timestamp" } }
          ]
        },
        "had_email_updated": {
          "schema": {
            "type": "object",
            "properties": {
              "email": { "type": "string", "format": "email" }
            },
            "required": ["email"]
          },
          "handler": [
            { "merge": { "target": "", "value": "$.data" } }
          ]
        }
      }
    }
  },
  "agent_types": ["user", "admin"]
}
```

The spec defines:
- **aggregate_types**: Kinds of entities in your system
- **events**: What can happen to those entities
- **handlers**: How events transform state (using the [Tick language](/docs/reference/tick))
- **agent_types**: Who can trigger events

Upload via the dashboard or [Admin API](/docs/api/admin-api).

## 2. Post an event

```bash
curl -X POST https://myapp.j17.dev/user/550e8400-e29b-41d4-a716-446655440000/was_created \
  -H "Authorization: Bearer $J17_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "data": { "name": "Alice", "email": "alice@example.com" },
    "metadata": {
      "actor": { "type": "admin", "id": "550e8400-e29b-41d4-a716-446655440001" }
    }
  }'
```

Every event needs:
- **URL path**: `/{aggregate_type}/{aggregate_id}/{event_type}`
- **data**: The event payload (validated against your schema)
- **metadata.actor**: Who performed the action

## 3. Query the aggregate

```bash
curl https://myapp.j17.dev/user/550e8400-e29b-41d4-a716-446655440000 \
  -H "Authorization: Bearer $J17_API_KEY"
```

Response:

```json
{
  "ok": true,
  "data": {
    "name": "Alice",
    "email": "alice@example.com",
    "created_at": 1705312800
  },
  "metadata": {
    "length": 1,
    "created_at": 1705312800,
    "updated_at": 1705312800
  }
}
```

That's it. You're event sourcing.

## What just happened?

1. You defined the shape of your data (the spec)
2. You recorded a fact (the event)
3. j17 computed current state by applying the handler

The event is stored forever. The aggregate is derived on demand. This is the core trade-off: you give up simple SQL queries; in exchange, you get immutable history.
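Conceptually, "derived on demand" is a fold over the event list. The sketch below is illustrative only (j17 executes the real Tick handlers server-side, in Zig); the handler logic mirrors the spec from step 1:

```javascript
// Illustrative sketch: the aggregate is a pure fold of handlers over events.
// Handler names mirror the spec above; this is not j17's implementation.
function applyHandler(state, event) {
  switch (event.type) {
    case 'was_created':
      // set "" to $.data, then set created_at to $.metadata.timestamp
      return { ...event.data, created_at: event.metadata.timestamp };
    case 'had_email_updated':
      // merge $.data into the root
      return { ...state, ...event.data };
    default:
      return state;
  }
}

// Current state is a left fold of the handler over the event stream.
const deriveAggregate = (events) => events.reduce(applyHandler, {});
```

Replaying the `was_created` event from step 2 through this fold yields exactly the `data` object returned in step 3.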

## Next steps

- Learn the [core concepts](/docs/concepts): events, aggregates, handlers
- Build something real with [AI-assisted development](/docs/guides/ai-development)
- Explore the [full API reference](/docs/api)
- Read the [spec reference](/docs/reference/spec) to model complex domains

## Common gotchas

**Events are immutable.** There's no DELETE. To "undo," write a compensating event.

**IDs matter.** Use UUIDv4 for most things. For human-readable codes (booking references, promo codes), use 9-character humane codes.

**Actors are required.** Every event must include `metadata.actor` with `type` and `id`. This isn't bureaucracy -- it's your audit trail.

---


# Core Concepts

Event sourcing is simple: store facts, derive state. But simple doesn't mean obvious. These concepts explain the mental model.

## Events are facts

An event says "this happened." It's not a command ("do this") or a state ("this is"). It's history:

- `was_placed` not `place_order`
- `had_email_updated` not `update_email`

Events are immutable. Once written, they never change. If you need to "undo," write a compensating event.

## Aggregates are derived

An aggregate is current state computed from all its events. There's no "update the user record." There's "write a `was_updated` event, compute state fresh."

This seems inefficient. It's not:
- j17 caches aggressively
- Events are small (hundreds of bytes)
- Computation is fast (Zig handlers)
- You get history for free

## The spec defines everything

Your spec is JSON that declares:
- What aggregate types exist
- What events can happen
- How events transform state (handlers)
- Who can trigger events (agents)

No code deployment needed. Upload a new spec, behavior changes immediately.

## Handlers are pure functions

A handler takes (state, event) and returns new state. No side effects, no external calls. Just data transformation.

```
state + event -> handler -> new_state
```

Tick provides 17 declarative operations in four categories: basic (set, merge, append, remove, increment, decrement), array (filter, map, update_where, upsert, append_unique), dynamic key (set_at, merge_at, remove_at, increment_at), and control flow (conditional, let).

This makes them:
- Testable (input -> expected output)
- Cacheable (same input, same output)
- Parallelizable (no shared state)
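To make the purity concrete, here is a hypothetical JavaScript mini-interpreter for three of the basic operations. It is a sketch only -- the real implementation is Zig on the server, and real Tick targets are paths, not single keys:

```javascript
// Hypothetical mini-interpreter for set, merge, and increment.
// Targets are single-level keys here for brevity.
function resolve(ref, event, state) {
  // Resolve "$.data", "$.metadata.timestamp", "$.state.foo"; literals pass through.
  if (typeof ref !== 'string' || !ref.startsWith('$.')) return ref;
  return ref.slice(2).split('.').reduce((obj, key) => obj?.[key], { ...event, state });
}

function applyOp(state, op, event) {
  if (op.set) {
    const value = resolve(op.set.value, event, state);
    // An empty target means "replace the whole state".
    return op.set.target === '' ? value : { ...state, [op.set.target]: value };
  }
  if (op.merge) {
    return { ...state, ...resolve(op.merge.value, event, state) };
  }
  if (op.increment) {
    const delta = op.increment.value ?? 1;
    return { ...state, [op.increment.target]: (state[op.increment.target] ?? 0) + delta };
  }
  return state;
}

// A handler is just a reduce over its operation list: no I/O, no clocks.
function runHandler(handler, state, event) {
  return handler.reduce((s, op) => applyOp(s, op, event), state);
}
```

Because everything the handler can see is in `(state, event)`, a test is a plain input/output assertion with no mocks.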

## Time is explicit

Every event has a timestamp. Query historical state:

```bash
GET /order/123?at=1705312800  # State on Jan 15
```

Audit trails are free. Just look at the events.

## Consistency is optional

j17 supports three OCC modes: internal (default) for automatic request-time safety, external (via `previous_length`) for read-modify-write consistency, and disabled (via `skip_occ`) for append-only patterns. Pick the right model for your use case.

## Learn more

- [Events](/docs/concepts/events) - Writing immutable facts
- [Aggregates](/docs/concepts/aggregates) - Computing current state
- [Handlers](/docs/concepts/handlers) - Transforming state
- [Atomicity](/docs/concepts/atomicity) - Concurrency and consistency

# Events

Events are immutable facts about something that happened in your system.

## Writing an event

Events are written via HTTP POST to `/{aggregate_type}/{aggregate_id}/{event_type}`:

```bash
curl -X POST https://myapp.j17.dev/order/550e8400-e29b-41d4-a716-446655440000/was_placed \
  -H "Authorization: Bearer $J17_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "data": {
      "items": [{ "sku": "WIDGET-1", "quantity": 2 }],
      "total": 59.98
    },
    "metadata": {
      "actor": { "type": "user", "id": "550e8400-e29b-41d4-a716-446655440001" }
    }
  }'
```

Every event requires:

- **aggregate_type** - In the URL path (e.g., `order`)
- **aggregate_id** - In the URL path (UUIDv4 or humane code)
- **event_type** - In the URL path (e.g., `was_placed`)
- **data** - The event payload
- **metadata.actor** - Who or what caused this event

## Metadata structure

Every event includes metadata:

| Field | Required | Description |
|-------|----------|-------------|
| `actor` | Yes | `{ "type": "user", "id": "..." }` - who caused this event |
| `target` | No | `{ "type": "item", "id": "..." }` - what the action targeted |
| `previous_length` | No | Expected event count for external OCC |

Actor and target types must be declared in your spec's `agent_types` and `target_types` arrays.

## Events are immutable

Once written, an event can never be changed or deleted. This is the core principle of event sourcing.

To "undo" something, write a new event representing the reversal:

```bash
curl -X POST https://myapp.j17.dev/order/550e8400-.../was_cancelled \
  -H "Authorization: Bearer $J17_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "data": { "reason": "Customer request" },
    "metadata": {
      "actor": { "type": "user", "id": "..." }
    }
  }'
```

The order isn't deleted -- both `was_placed` and `was_cancelled` exist in history. The aggregate state reflects the cancellation because the handler for `was_cancelled` updates the status.

## Naming conventions

Use snake_case past-tense verbs that describe what happened:

- `was_created` not `create`
- `had_email_updated` not `updateEmail`
- `was_cancelled` not `cancel`

This makes it clear that events are facts about the past, not commands requesting future action.

# Aggregates

An aggregate is the current state derived from replaying all events for a given type and ID.

## How aggregates work

When you query an aggregate, j17 replays all events and applies each handler in order:

```
Event 1: was_created { name: "Alice" }        -> State: { name: "Alice", created_at: 1705312800 }
Event 2: had_email_updated { email: "..." }    -> State: { name: "Alice", email: "...", created_at: ... }
Event 3: had_name_updated { name: "Alicia" }   -> State: { name: "Alicia", email: "...", created_at: ... }
```

Each event type has a handler defined in your spec that determines how it transforms state. For example:

```json
{
  "was_created": {
    "handler": [
      { "set": { "target": "", "value": "$.data" } },
      { "set": { "target": "created_at", "value": "$.metadata.timestamp" } }
    ]
  },
  "had_name_updated": {
    "handler": [
      { "merge": { "target": "", "value": "$.data" } }
    ]
  }
}
```

## Querying aggregates

```bash
curl https://myapp.j17.dev/user/550e8400-e29b-41d4-a716-446655440000 \
  -H "Authorization: Bearer $J17_API_KEY"
```

Response:

```json
{
  "ok": true,
  "data": {
    "name": "Alicia",
    "email": "alice@example.com",
    "created_at": 1705312800
  },
  "metadata": {
    "length": 3,
    "created_at": 1705312800,
    "updated_at": 1705399200
  }
}
```

`metadata.length` is the number of events applied to this aggregate. Pass it as `previous_length` when writing to ensure no concurrent changes occurred between your read and write. `created_at` and `updated_at` are Unix timestamps from the first and most recent events.

## Aggregate IDs

The aggregate ID is the second segment of the URL path when writing or reading events:

```
POST /{aggregate_type}/{aggregate_id}/{event_type}
GET  /{aggregate_type}/{aggregate_id}
```

You supply the ID -- j17 never generates one for you. The ID determines which event stream the event is appended to.

### Accepted formats

| Format | Example | Use case |
|--------|---------|----------|
| UUID v4 | `550e8400-e29b-41d4-a716-446655440000` | Default for all entities |
| UUID v5 | `a4339497-daa0-5c39-a0d3-8e894750d2b0` | Deterministic IDs derived from external data |
| Tagged UUID | `uuid:2026` | Time-partitioned streams (fiscal years, quarters) |
| Humane code | `ABC123XYZ` | Human-readable 9-character codes (promos, bookings) |
| `global` | `global` | Built-in singleton (no spec changes needed) |
| Custom singleton | `all` | Named singletons (requires spec config) |

Use UUIDs by default. The other formats exist for specific use cases described below.

### UUIDs (v4 and v5)

Most aggregates should use UUIDs. v4 (random) is the most common. v5 (deterministic, namespace-based) is useful when you need to derive a stable ID from external data -- for example, generating a consistent aggregate ID from an external user's email address.

### Humane codes

9-character identifiers using Crockford base32 (e.g. `ABC123XYZ`). Useful when humans need to type or read aggregate IDs, such as booking codes or promo codes. j17 normalizes ambiguous characters at the edge: `I` becomes `1`, `L` becomes `1`, `O` becomes `0`, `U` becomes `V`, and lowercase is uppercased.
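The normalization rules above can be sketched client-side (illustrative only -- j17 already applies them server-side, so doing it in the client is purely cosmetic):

```javascript
// Normalize a humane code the way the docs describe: uppercase first,
// then map the ambiguous characters I -> 1, L -> 1, O -> 0, U -> V.
function normalizeHumaneCode(code) {
  const map = { I: '1', L: '1', O: '0', U: 'V' };
  return code.toUpperCase().replace(/[ILOU]/g, (c) => map[c]);
}

normalizeHumaneCode('oiluabc12'); // → '011VABC12'
```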

### Tagged UUIDs

A tagged UUID is a UUID followed by a colon and a short tag (1-10 lowercase alphanumeric characters): `{uuid}:{tag}`. This creates a separate aggregate stream for each tag while keeping the same entity identity.

The primary use case is **time-partitioned aggregates** -- when you need to close out one period and start fresh without losing history:

```bash
# Write to the current fiscal year's ledger
curl -X POST https://myapp.j17.dev/ledger/a433...d2b0:2025/entry_posted \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"data": {"amount": 500, "account": "revenue"}, "metadata": {"actor": {"type": "system", "id": "..."}}}'

# Close the year, start a new stream
curl -X POST https://myapp.j17.dev/ledger/a433...d2b0:2026/was_opened \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"data": {"carried_forward": 12500}, "metadata": {"actor": {"type": "system", "id": "..."}}}'
```

Each tagged variant is a fully independent aggregate with its own event stream, length, and state. The old period's aggregate remains readable and immutable while the new one starts from zero.

Common tag patterns:

| Tag | Example | Use case |
|-----|---------|----------|
| `2026` | `uuid:2026` | Fiscal year partitioning |
| `202603` | `uuid:202603` | Monthly partitioning |
| `q1` | `uuid:q1` | Quarterly periods |
| `v2` | `uuid:v2` | Schema versioning / migrations |

Tags must be 1-10 characters, lowercase letters and digits only. The UUID portion must be a valid v4 or v5 UUID.
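A client-side validity check for this format can be sketched as a single regular expression (illustrative; the server remains authoritative):

```javascript
// Matches {uuid}:{tag} where the UUID is v4 or v5 (version nibble 4 or 5,
// RFC 4122 variant) and the tag is 1-10 lowercase letters or digits.
const TAGGED_UUID =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[45][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}:[a-z0-9]{1,10}$/;

TAGGED_UUID.test('550e8400-e29b-41d4-a716-446655440000:2026'); // true
TAGGED_UUID.test('550e8400-e29b-41d4-a716-446655440000:Q1!');  // false
```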

## Singleton aggregates

A singleton is an aggregate type where you only need one instance -- app-wide config, a global counter, a shared audit log. Instead of generating a UUID, you use a fixed, well-known ID.

### The `global` keyword

Every aggregate type automatically supports the keyword `global` as an ID, with no spec changes required:

```bash
# Write to a singleton config aggregate
curl -X POST https://myapp.j17.dev/config/global/was_updated \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"data": {"dark_mode": true}, "metadata": {"actor": {"type": "admin", "id": "..."}}}'

# Read it back
curl https://myapp.j17.dev/config/global \
  -H "Authorization: Bearer $API_KEY"
```

This is the simplest way to create a singleton. Use it for settings, feature flags, rate limit config, or any "one per type" aggregate.

### Custom singletons

If you need more than one named singleton per type, or want a more descriptive name than `global`, add custom singletons to your spec:

```json
{
  "singletons": ["all", "daily_summary"],
  "aggregate_types": {
    "company_audit": {
      "events": {
        "RecordCreated": {
          "schema": { "type": "object" },
          "handler": [
            { "increment": { "target": "total_events" } },
            { "set": { "target": "last_activity", "value": "$.metadata.timestamp" } }
          ]
        }
      }
    }
  }
}
```

Then use the singleton name as the aggregate ID:

```bash
curl -X POST https://myapp.j17.dev/company_audit/all/RecordCreated \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"data": {"action": "user_login"}, "metadata": {"actor": {"type": "system", "id": "..."}}}'
```

Custom singletons are defined at the top level of the spec (not per aggregate type) and are available to all aggregate types in the instance. Use them sparingly -- `global` handles most singleton cases.

### Don't fake singletons with UUIDs

If you need a singleton, use `global` or a custom singleton name. Don't generate a fixed UUID (like `00000000-0000-5000-8000-000000000000`) and use it as a pseudo-singleton -- it works, but it's fighting the system. Singletons are a first-class concept with a cleaner API and clearer intent.

## Aggregate sizing

Each aggregate is replayed from its event stream on read. This is fast -- under 1ms for small aggregates and under 50ms for 10,000 events -- but aggregates are designed to represent individual entities, not unbounded collections.

If an aggregate grows unboundedly, you have several options:

- **[Tagged UUIDs](#tagged-uuids)** -- partition by time period. A ledger aggregate that accumulates entries all year can be split into `uuid:2025`, `uuid:2026`, etc. Each period gets its own stream that stops growing when the period closes.
- **[Projections](/docs/guides/projections)** -- maintain a queryable view across aggregates. An audit log with one event per action can sometimes be better modeled as many small aggregates (one per audited entity) with a projection that provides the cross-cutting query view.

Use whatever is most natural to your domain.

# Atomicity and Concurrency

Event sourcing makes strong consistency simple -- events append to a stream in order. But when multiple clients write concurrently, you need a strategy.

## The problem

Two users update the same order simultaneously:

```
User A: GET order/123 (sees length=5)
User B: GET order/123 (sees length=5)

User A: POST was_shipped (expects length=5, creates event 6)
User B: POST was_delivered (expects length=5, creates event 6)
```

Without protection, both writes succeed, and each client's change lands on top of state it never saw -- a classic lost update.

## Optimistic Concurrency Control (OCC)

j17 uses OCC to prevent this. There are three modes: internal (default), external, and disabled.

### Internal OCC (default)

When you don't provide `previous_length`, j17 uses internal OCC:

1. Read current stream length
2. Validate and prepare your event
3. Atomically write if length hasn't changed

```json
{
  "data": { "status": "shipped" },
  "metadata": {
    "actor": { "type": "user", "id": "user-456" }
  }
}
```

Internal OCC catches races during request processing (another write arriving in the same millisecond window). On conflict, j17 automatically retries once with a fresh length. If the retry also fails (sustained contention), you get a 409.

This is the right default for most workloads. You don't need to think about concurrency unless you have read-modify-write patterns.

### External OCC

When you provide `previous_length`, j17 uses external OCC:

```json
{
  "data": { "status": "shipped" },
  "metadata": {
    "actor": { "type": "user", "id": "user-456" },
    "previous_length": 5
  }
}
```

j17 checks: does this aggregate currently have 5 events?

- Yes: Write succeeds, returns new length (6)
- No: Write fails with 409 Conflict

External OCC catches everything internal OCC catches, plus modifications since your read. No automatic retry -- the conflict is meaningful because your client's view was stale.

Use external OCC for:
- Read-modify-write patterns (read aggregate, decide, write)
- Long-running user sessions where data might change
- Critical operations where you need guaranteed consistency

### Disabling OCC (skip_occ)

For append-only patterns where conflicts don't matter:

```json
{
  "data": { "entry": "user logged in" },
  "metadata": {
    "actor": { "type": "system", "id": "auth-service" },
    "skip_occ": true
  }
}
```

Requires spec opt-in per event type:

```json
{
  "aggregate_types": {
    "audit": {
      "events": {
        "entry_was_added": {
          "allow_skip_occ": true,
          "schema": { "..." : "..." }
        }
      }
    }
  }
}
```

Safe for append-only logs, analytics, activity feeds. Dangerous for state machines, balances, or anything with check-then-act logic.

### OCC summary

| Mode | `previous_length` | Catches | Auto-retry | Use case |
|------|-------------------|---------|------------|----------|
| Internal | Not provided | Request-time races | Yes (once) | Default safety net |
| External | Provided | All modifications since read | No | Read-modify-write |
| Disabled | `skip_occ: true` | Nothing | N/A | Append-only logs |

## Handling conflicts

When you get a 409:

```json
{
  "ok": false,
  "error": {
    "code": "conflict",
    "message": "Optimistic concurrency check failed",
    "details": {
      "expected": 5,
      "actual": 6
    }
  }
}
```

Options:

1. **Refetch and retry**: Get current state, reapply your change, try again
2. **Reject**: Tell the user someone else modified the data
3. **Merge**: Combine your change with the new state

### Refetch and retry

```javascript
async function writeWithRetry(aggregateType, id, eventType, data, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    // Get current state
    const current = await fetchAggregate(aggregateType, id);

    try {
      return await writeEvent(aggregateType, id, eventType, {
        data,
        metadata: {
          actor: { type: 'user', id: 'user-456' },
          previous_length: current.length
        }
      });
    } catch (err) {
      if (err.code === 'conflict' && attempt < maxRetries - 1) {
        continue; // Retry
      }
      throw err;
    }
  }
}
```

### Reject

Show the user a message: "Someone else updated this. Please refresh and try again."

Safer than silent retries that might overwrite important changes.

### Merge (advanced)

For non-conflicting changes, merge automatically:

```javascript
// User A changes shipping_address
// User B changes status
// These don't conflict -- apply both

const current = await fetchAggregate('order', id);
const newData = mergeChanges(current.state, userInput);

await writeEvent('order', id, 'was_updated', {
  data: newData,
  metadata: { previous_length: current.length }
});
```

Only safe for orthogonal changes. Don't merge conflicting status updates.

## Batch writes (same aggregate)

j17 provides atomic batch writes for multiple events targeting the same aggregate. All events succeed or none are written:

```bash
curl -X POST https://myapp.j17.dev/order/550e8400-e29b-41d4-a716-446655440000 \
  -H "Authorization: Bearer $J17_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "events": [
      {"type": "was_created", "data": {"customer_id": "abc"}},
      {"type": "had_item_added", "data": {"sku": "WIDGET-1", "qty": 2}},
      {"type": "had_item_added", "data": {"sku": "GADGET-3", "qty": 1}}
    ],
    "metadata": {
      "actor": {"type": "admin", "id": "system"},
      "previous_length": 0
    }
  }'
```

Response:

```json
{
  "ok": true,
  "stream_ids": ["1706789012345-0", "1706789012345-1", "1706789012345-2"],
  "count": 3,
  "implied_count": 5
}
```

The `previous_length` applies to the batch as a whole. Each event in the batch can trigger implications; all implications see the pre-batch state.

Good use cases for batch writes:
- Initial entity creation with multiple setup events
- Complex state transitions that are logically atomic
- Bulk imports or data migrations

## Cross-aggregate consistency

OCC protects single aggregates. What about operations across multiple?

Example: Place an order AND reserve inventory.

```
order:123  -> was_placed
inventory:456 -> was_reserved
```

If the first succeeds but the second fails, you have an order without reserved inventory. With N independent writes, you have 2^N possible success/failure combinations.

### The solution: implications

Instead of submitting multiple events from the client, submit ONE trigger event. j17 automatically emits derived events atomically:

```json
{
  "aggregate_types": {
    "order": {
      "events": {
        "was_placed": {
          "schema": { "..." : "..." },
          "handler": [
            { "set": { "target": "", "value": "$.data" } }
          ],
          "implications": [
            {
              "emit": {
                "aggregate_type": "inventory",
                "id": "$.data.product_id",
                "event_type": "was_reserved",
                "data": {
                  "quantity": "$.data.qty",
                  "order_id": "$.key"
                }
              }
            }
          ]
        }
      }
    }
  }
}
```

Now your client code is simple:

```javascript
const result = await writeEvent('order', orderId, 'was_placed', {
  data: orderData,
  metadata: { actor: { type: 'user', id: userId } }
});
// Either everything succeeded, or nothing was written
// result.implied_count tells you how many derived events were created
```

The trigger event contains all the context needed for derived events. Placing an order is the business fact; inventory reservation follows from that fact.

### When implications don't fit

Implications work when derived events follow deterministically from the trigger event data. They don't fit when:

- You need external data at decision time (inventory check from external system)
- Steps take time or can fail externally (payment processing, shipping)
- Different authorization contexts are required

For these cases, use the saga pattern: trigger event starts the workflow, external processes respond asynchronously, confirmation/failure events drive next steps, and compensation events handle rollback.

## Read-modify-write cycles

Common pattern:

1. GET aggregate
2. Modify state in memory
3. POST event with `previous_length`

This is safe with external OCC. If someone else writes between 1 and 3, your POST fails and you retry.

Alternative: **Calculated events**

Instead of sending the new state, send the operation:

```json
{
  "data": { "amount": 50 },
  "metadata": {
    "actor": { "type": "user", "id": "user-456" },
    "previous_length": 5
  }
}
```

Handler computes new balance: `state.balance + event.data.amount`

This is naturally mergeable. If the balance starts at $100 and someone else also adds $50, both writes succeed and the balance ends at $200.
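The difference between the two styles can be seen in a quick client-side sketch (plain JavaScript reductions, standing in for handler replay):

```javascript
// Two concurrent clients both want to add $50 to a $100 balance.

// Absolute style: each client sends the new total it computed from a
// stale read. The second write silently overwrites the first increment.
const absolute = [{ balance: 150 }, { balance: 150 }]
  .reduce((state, event) => ({ ...state, ...event }), { balance: 100 });
// absolute.balance === 150 -- one increment lost

// Calculated style: each client sends only the delta; the handler
// does the arithmetic, so both increments apply in either order.
const calculated = [{ amount: 50 }, { amount: 50 }]
  .reduce((state, event) => ({ ...state, balance: state.balance + event.amount }),
          { balance: 100 });
// calculated.balance === 200 -- both increments applied
```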

## Performance trade-offs

| Mode | Latency | Consistency | Use case |
|------|---------|-------------|----------|
| Internal | Baseline | Request-time safety | Default for most writes |
| External | Baseline | Strong | Read-modify-write, critical ops |
| Disabled | Baseline | None | Append-only logs |

The overhead is minimal. OCC adds a single integer comparison.

## Debugging conflicts

High conflict rates indicate:
- Hotspots (everyone updating the same aggregate)
- Long gaps between GET and POST
- Missing calculated events (sending absolute values instead of deltas)

Fix hotspots by sharding (add sub-aggregates) or using calculated events.

## See also

- [Writing events](/docs/api/writing-events) - OCC in practice
- [Handlers](/docs/concepts/handlers) - How events transform state
- [Events](/docs/concepts/events) - Event structure and metadata

# Handlers

Handlers transform aggregate state when events occur. They're pure functions: (state, event) -> new_state.

## What handlers do

When you query an aggregate, j17:
1. Loads all events for that aggregate
2. Applies handlers in chronological order
3. Returns the final state

```
Event 1: was_created { name: "Alice" }
  -> Handler: set name -> State: { name: "Alice" }

Event 2: had_email_updated { email: "alice@example.com" }
  -> Handler: merge email -> State: { name: "Alice", email: "..." }

Final state: { name: "Alice", email: "alice@example.com" }
```

## Declarative vs imperative

**Imperative** (traditional code):
```javascript
function applyEvent(state, event) {
  if (event.type === 'was_created') {
    state.name = event.data.name;
    state.created_at = Date.now();
  } else if (event.type === 'had_email_updated') {
    state.email = event.data.email;
  }
  return state;
}
```

Problems:
- Hidden logic (Date.now() is a side effect)
- Hard to test (mock Date)
- Runtime errors (undefined properties)

**Declarative** (j17 Tick):
```json
{
  "was_created": {
    "handler": [
      { "set": { "target": "", "value": "$.data" } },
      { "set": { "target": "created_at", "value": "$.metadata.timestamp" } }
    ]
  }
}
```

Benefits:
- No side effects (timestamp comes from event metadata)
- Easy to test (input -> output)
- Validated at upload (typos caught early)

## Tick operations

Tick provides 17 declarative operations in four categories:

### Basic operations

The foundation. Cover most use cases.

- `set` - Replace value at a target path
- `merge` - Shallow merge an object into the target
- `append` - Add an element to an array
- `remove` - Remove an element from an array
- `increment` - Add to a numeric value
- `decrement` - Subtract from a numeric value

### Array operations

Work on arrays of objects (e.g., line items, members, tags).

- `filter` - Keep only elements matching a predicate
- `map` - Transform each element in an array
- `update_where` - Update elements that match a predicate
- `upsert` - Update a matching element or append if none match
- `append_unique` - Append only if no matching element exists

### Dynamic key operations

Operate on keys determined at runtime from event data.

- `set_at` - Set a value at a dynamic key path
- `merge_at` - Merge into a dynamic key path
- `remove_at` - Remove a dynamic key
- `increment_at` - Increment a value at a dynamic key path

### Control flow

- `conditional` - If/then/else branching with predicates
- `let` - Bind a variable (e.g., find an element) for use in subsequent operations

See [Tick reference](/docs/reference/tick) for complete syntax.

## Handler context

Handlers access:

| Path | Meaning |
|------|---------|
| `$.data` | Event payload |
| `$.metadata.timestamp` | Event timestamp |
| `$.metadata.actor` | Who triggered the event |
| `$.state` | Current aggregate state (before this event) |
| `$.key` | Aggregate key (type/id) |

## Common patterns

### Initialize state

```json
{
  "was_created": {
    "handler": [
      { "set": { "target": "", "value": "$.data" } },
      { "set": { "target": "status", "value": "active" } },
      { "set": { "target": "created_at", "value": "$.metadata.timestamp" } }
    ]
  }
}
```

### Update field

```json
{
  "had_email_updated": {
    "handler": [
      { "set": { "target": "email", "value": "$.data.email" } }
    ]
  }
}
```

### Conditional update

```json
{
  "had_status_changed": {
    "handler": [
      { "set": { "target": "status", "value": "$.data.status" } },
      {
        "if": { "eq": ["$.data.status", "completed"] },
        "then": [
          { "set": { "target": "completed_at", "value": "$.metadata.timestamp" } }
        ]
      }
    ]
  }
}
```

### Append to array

```json
{
  "had_item_added": {
    "handler": [
      {
        "append": {
          "target": "items",
          "value": {
            "$merge": [
              { "$": "$.data" },
              { "added_at": { "$": "$.metadata.timestamp" } }
            ]
          }
        }
      }
    ]
  }
}
```

## Handler ordering

Handlers execute in the order defined:

```json
{
  "handler": [
    { "set": { "target": "status", "value": "processing" } },
    { "increment": { "target": "processing_count", "value": 1 } },
    { "append": { "target": "log", "value": { "action": "processed" } } }
  ]
}
```

Each operation sees the result of previous operations.

## Validation

j17 validates handlers when you upload your spec:

- Target paths are valid
- Array operations target arrays
- Numeric operations target numbers
- JSONPath expressions parse
- No circular references

Invalid specs are rejected with line numbers.

## Testing handlers

Test locally before deploying:

```bash
# Test specific handler
j17 handler test spec.json --aggregate user --event was_created --state '{}' --event-data '{"name":"Alice"}'
```

Or use the CLI dry-run mode:

```bash
j17 events dry-run spec.json events.jsonl
```

## Performance

Tick handlers compile to Zig. Typical throughput: 300k+ operations/second per core.

## See also

- [Tick reference](/docs/reference/tick) - Complete syntax
- [Events](/docs/concepts/events) - What triggers handlers
- [Aggregates](/docs/concepts/aggregates) - What handlers build

---


# API Overview

j17's API is HTTP-first and minimal. POST events, GET aggregates. That's the core. Everything else -- batching, admin, implications -- is built on these primitives.

## Base URLs

```
Production:  https://{instance}.j17.dev
Staging:     https://{instance}-staging.j17.dev
Test:        https://{instance}-test.j17.dev
```

All requests require HTTPS.

## Authentication

Two methods depending on what you're doing:

| Method | Header | Use for |
|--------|--------|---------|
| API Key | `Authorization: Bearer {key}` | Instance data (events, aggregates) |
| JWT | `Authorization: Bearer {token}` | Admin operations, dashboards |

See [Authentication](/docs/api/authentication) for details.

## Core operations

### Write an event

```bash
POST /{aggregate_type}/{aggregate_id}/{event_type}
```

```bash
curl -X POST https://myapp.j17.dev/order/abc123/was_placed \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "data": { "items": [...], "total": 59.99 },
    "metadata": {
      "actor": { "type": "user", "id": "550e8400-e29b-41d4-a716-446655440000" }
    }
  }'
```

Returns the stream ID of the written event.

### Read an aggregate

```bash
GET /{aggregate_type}/{aggregate_id}
```

```bash
curl https://myapp.j17.dev/order/abc123 \
  -H "Authorization: Bearer $API_KEY"
```

Returns current state derived from all events.

### Batch operations

```bash
POST /{aggregate_type}/{aggregate_id}
```

Atomic multi-event writes to a single aggregate. See [Batch operations](/docs/api/batch-operations).

## Request format

### Event POST body

```json
{
  "data": { ... },           // Required. Validated against event schema
  "metadata": {              // Required
    "actor": {               // Required. Who performed the action
      "type": "user",        // Must be in spec's agent_types
      "id": "550e8400-e29b-41d4-a716-446655440000"
    },
    "target": {              // Optional. What was affected
      "type": "order",
      "id": "target-id"
    },
    "previous_length": 5     // Optional. For OCC
  }
}
```

Idempotency is handled via the `X-Idempotency-Key` request header, not in metadata. See [Writing events](/docs/api/writing-events) for details.

### Content-Type

Always `application/json`. j17 rejects other content types with 400.

## Response format

### Success (201 Created) -- single event

```json
{
  "ok": true,
  "stream_id": "1234567890123-0",
  "implied_count": 2
}
```

| Field | Type | Description |
|-------|------|-------------|
| `ok` | boolean | Always true on success |
| `stream_id` | string | Redis stream ID of the written event |
| `implied_count` | integer | Number of implied events (omitted if 0) |

### Success (201 Created) -- batch

```json
{
  "ok": true,
  "stream_ids": ["1234567890123-0", "1234567890123-1"],
  "count": 2,
  "implied_count": 3
}
```

### Error (4xx/5xx)

```json
{
  "ok": false,
  "error": "Event data failed schema validation",
  "path": "data.email"
}
```

Common error codes:

| HTTP | Meaning |
|------|---------|
| 400 | Validation error (bad schema, missing fields) |
| 401 | Invalid or missing API key |
| 403 | Valid auth, but not allowed (wrong environment, read-only key on a write) |
| 404 | Aggregate type or event type not in spec |
| 409 | OCC conflict (stale `previous_length`) |
| 413 | Event data too large |
| 422 | Idempotency key reused with different body |
| 429 | Rate limited |
| 500 | Internal error |

## Query parameters

### GET /{type}

| Parameter | Type | Description |
|-----------|------|-------------|
| `resolve` | flag | Return full aggregate data instead of IDs |
| `synchronous` | boolean | Skip cache when resolving, read from event stream |
| `limit` | integer | Page size (default 50, max 200) |
| `cursor` | string | Last ID from previous page for pagination |

### GET /{type}/{id}

| Parameter | Type | Description |
|-----------|------|-------------|
| `synchronous` | boolean | Bypass cache, compute fresh (`?synchronous=true`) |

### GET /{type}/{id}/events

| Parameter | Type | Description |
|-----------|------|-------------|
| `start` | string | Start from this stream ID (exclusive) |
| `count` | integer | Max events to return (default 100) |

## Rate limits

| Scope | Limit |
|-------|-------|
| Per API key | 500 requests/minute |
| Per IP | 2,000 requests/minute |

Rate limit headers on every response:

```http
X-RateLimit-Limit: 500
X-RateLimit-Remaining: 499
X-RateLimit-Scope: api-key
```

Exceeding limits returns 429.
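
Clients can watch the remaining-budget header and back off before hitting the limit. A minimal sketch, with an arbitrary threshold; since no reset timestamp appears in these headers, choosing how long to wait is left to the caller:

```javascript
// Returns true when the per-key budget is running low and the client
// should pause before the next request. `headers` is anything with a
// .get() method (the Fetch Headers object, or a Map in tests).
function shouldThrottle(headers, threshold = 10) {
  const remaining = Number(headers.get('X-RateLimit-Remaining'));
  return Number.isFinite(remaining) && remaining < threshold;
}
```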

## Idempotency

POST events support idempotency via the `X-Idempotency-Key` request header:

```bash
curl -X POST https://myapp.j17.dev/order/abc123/was_placed \
  -H "X-Idempotency-Key: order-123-was_placed-20240101" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"data": {...}, "metadata": {"actor": {...}}}'
```

Same key + same body = cached response replayed (with `X-Idempotency-Replayed: true` header).

Same key + different body = 422 error.

Keys expire after 24 hours.

## CORS

j17 supports cross-origin requests from browser clients:

```http
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Authorization, Content-Type
```

## Webhooks (Listeners)

j17 delivers events to your HTTP endpoints via listeners defined in your spec. When a matching event is written, j17 POSTs a signed JSON payload to your URL.

Delivery is guaranteed with retry: failed deliveries are retried with exponential backoff (5s, 25s, 125s) up to 3 attempts. Payloads are signed with HMAC-SHA256 using your listener secret, delivered in the `X-J17-Signature` header.

**Signature behavior:** The listener secret is read from your spec at delivery time (not stored with the delivery). This means you can rotate secrets by redeploying your spec — in-flight retries will use the new secret. If j17 cannot read your spec at delivery time (e.g., during a brief cache miss), the payload is delivered unsigned (no `X-J17-Signature` header). Your endpoint should treat a missing signature as an error if you require signed payloads.

## Admin API

Separate endpoints for instance management, accessible via the headnode:

```
POST   /api/instances/:id/spec    # Deploy spec
GET    /api/instances/:id/keys    # List API keys
POST   /api/instances/:id/keys    # Create API key
```

See [Admin API](/docs/api/admin-api) for details.

## Testing

Use staging or test environments:

```bash
# Staging
curl https://myapp-staging.j17.dev/order/abc123/was_placed ...

# Production
curl https://myapp.j17.dev/order/abc123/was_placed ...
```

Different API keys, isolated data. Test freely in staging.

## See also

- [Writing events](/docs/api/writing-events) - Deep dive on POST
- [Reading aggregates](/docs/api/reading-aggregates) - Deep dive on GET
- [Authentication](/docs/api/authentication) - Keys, JWT, environments
- [Batch operations](/docs/api/batch-operations) - Atomic multi-event writes
- [Admin API](/docs/api/admin-api) - Instance management

# Authentication

j17 uses different auth methods depending on what you're doing. API keys for data operations, JWT for admin tasks. Each environment (prod/staging/test) has isolated credentials.

## API Keys

Use API keys for instance data: writing events, reading aggregates, batch operations.

```bash
curl https://myapp.j17.dev/order/abc123 \
  -H "Authorization: Bearer j17_0_prod_abc123xyz"
```

### Key format

```
j17_{version}_{environment}_{random}
```

| Prefix | Environment | Works on |
|--------|-------------|----------|
| `j17_0_prod_*` | Production | `*.j17.dev` |
| `j17_0_staging_*` | Staging | `*-staging.j17.dev` |
| `j17_0_test_*` | Test | `*-test.j17.dev` |

Keys are environment-scoped. A staging key won't work on production. This prevents accidents.
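
A client-side guard can catch a mismatch before a request is ever sent. The helpers below are illustrative, not part of any j17 SDK; they follow the prefix and hostname conventions in the table above.

```javascript
// Extract the environment from a key's prefix (j17_{version}_{env}_{random}).
function keyEnvironment(apiKey) {
  const match = /^j17_\d+_(prod|staging|test)_/.exec(apiKey);
  return match ? match[1] : null;
}

// Infer the environment from the instance hostname.
function hostEnvironment(hostname) {
  if (hostname.endsWith('-staging.j17.dev')) return 'staging';
  if (hostname.endsWith('-test.j17.dev')) return 'test';
  if (hostname.endsWith('.j17.dev')) return 'prod';
  return null;
}

// Throw early instead of getting a 403 from the server.
function assertKeyMatchesHost(apiKey, hostname) {
  const keyEnv = keyEnvironment(apiKey);
  const hostEnv = hostEnvironment(hostname);
  if (keyEnv !== hostEnv) {
    throw new Error(`${keyEnv} key used against ${hostEnv} host`);
  }
}
```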

### Creating keys

Via dashboard: Instance settings page, API Keys section.

Via API (requires JWT, on the headnode):

```bash
curl -X POST https://control.j17.dev/api/instances/$INSTANCE_ID/keys \
  -H "Authorization: Bearer $JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Production Backend",
    "scope": "write",
    "environment": "prod"
  }'
```

### Key scopes

| Scope | Permissions |
|-------|-------------|
| `read` | GET aggregates, query events, read projections |
| `write` | POST events + all read permissions |

Use `read` keys for frontend clients that only display data. Use `write` keys for backend services.

### Rotating keys

```bash
curl -X POST https://control.j17.dev/api/keys/$KEY_ID/rotate \
  -H "Authorization: Bearer $JWT_TOKEN"
```

You can also schedule revocation with a grace period:

```bash
curl -X POST https://control.j17.dev/api/keys/$KEY_ID/schedule_revocation \
  -H "Authorization: Bearer $JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"revoke_at": "2024-02-01T00:00:00Z"}'
```

### Storing keys

**Don't** commit keys to git. Use environment variables:

```bash
# .env
J17_API_KEY=j17_0_prod_abc123xyz
```

```javascript
// Read from env, not hardcoded
const apiKey = process.env.J17_API_KEY;
```

**Never** expose write keys in frontend code. Browser clients should use read-only keys or go through your backend.

## JWT Tokens

Use JWT for admin operations: managing API keys, deploying specs, configuring instances. All JWT-authenticated endpoints are on the **headnode** (control plane).

```bash
curl https://control.j17.dev/api/instances \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIs..."
```

### Getting a token

Login with email/password on the headnode:

```bash
curl -X POST https://control.j17.dev/api/login \
  -H "Content-Type: application/json" \
  -d '{
    "email": "admin@example.com",
    "password": "..."
  }'
```

Response:

```json
{
  "token": "eyJhbGciOiJIUzI1NiIs...",
  "expires_at": 1705312800
}
```

Tokens expire after 24 hours. Refresh by logging in again.
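
A small token cache can handle the refresh transparently. A sketch: `login` stands in for the POST `/api/login` call above and returns `{ token, expires_at }` (Unix seconds); the 60-second skew is an arbitrary safety margin.

```javascript
// Returns a getToken() function that re-logs-in when the cached JWT
// is missing or within `skewSeconds` of expiry.
function makeTokenProvider(login, skewSeconds = 60) {
  let cached = null;
  return async function getToken(now = Math.floor(Date.now() / 1000)) {
    if (!cached || cached.expires_at - skewSeconds <= now) {
      cached = await login();
    }
    return cached.token;
  };
}
```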

### Token contents

j17 JWTs contain:

```json
{
  "sub": "user-uuid",
  "email": "admin@example.com",
  "role": "instance_admin",
  "instance_id": "instance-uuid",
  "iat": 1705226400,
  "exp": 1705312800
}
```

Don't decode JWTs client-side to make permission decisions. Always verify server-side.

## Environment isolation

Each environment is completely isolated:

| Environment | URL | Credentials |
|-------------|-----|-------------|
| Production | `myapp.j17.dev` | `j17_0_prod_*` keys |
| Staging | `myapp-staging.j17.dev` | `j17_0_staging_*` keys |
| Test | `myapp-test.j17.dev` | `j17_0_test_*` keys |

Data doesn't flow between environments. A user created in staging doesn't exist in production.

### Testing against staging

```bash
# Always hit staging first
curl https://myapp-staging.j17.dev/user/abc123/was_created \
  -H "Authorization: Bearer $STAGING_KEY" \
  -d '{...}'

# Verify it works, then switch to production
curl https://myapp.j17.dev/user/abc123/was_created \
  -H "Authorization: Bearer $PROD_KEY" \
  -d '{...}'
```

## Rate limits

| Scope | Limit |
|-------|-------|
| Per API key | 500 requests/minute |
| Per IP | 2,000 requests/minute |

Platform admins are exempt from rate limiting.

Rate limit headers on every response:

```http
X-RateLimit-Limit: 500
X-RateLimit-Remaining: 499
X-RateLimit-Scope: api-key
```

## Error responses

### Invalid key

```json
{
  "ok": false,
  "error": "Invalid API key"
}
```

HTTP 401. Check the key is correct and not revoked.

### Wrong environment

```json
{
  "ok": false,
  "error": "Staging key cannot access production"
}
```

HTTP 403. You're using a staging key on production (or vice versa).

### Insufficient scope

```json
{
  "ok": false,
  "error": "Read-only key cannot write events"
}
```

HTTP 403. Get a write-scoped key or use a different endpoint.

### Expired JWT

```json
{
  "ok": false,
  "error": "Token expired"
}
```

HTTP 401. Log in again to get a fresh token.

## Best practices

**Use separate keys per service.** Don't share one key across frontend, backend, and admin tools. If one leaks, you only rotate that one.

**Read keys for frontend.** If your browser code needs to fetch aggregates, use a read-only key. Better yet, proxy through your backend.

**Monitor usage.** Dashboard shows which keys are active. Disable unused keys.

**Rotate regularly.** Set a calendar reminder. Quarterly rotation catches leaked keys before they're exploited.

**Never log keys.** If you debug HTTP requests, redact the Authorization header.

## Node secrets (internal)

If you're self-hosting j17 workers, internal communication (headnode-to-worker) uses node secrets:

```http
X-Node-Secret: {random-long-string}
```

You don't need this for normal API usage. It's for the infrastructure layer between headnode and worker nodes.

## See also

- [Writing events](/docs/api/writing-events) - POST with API keys
- [Reading aggregates](/docs/api/reading-aggregates) - GET with API keys
- [Admin API](/docs/api/admin-api) - Requires JWT

# Writing Events

POST events to `/{aggregate_type}/{aggregate_id}/{event_type}`. That's the fundamental operation in j17. Everything else -- batching, implications, projections -- builds on this.

## URL structure

```
POST /{aggregate_type}/{aggregate_id}/{event_type}
```

| Component | Description | Example |
|-----------|-------------|---------|
| `aggregate_type` | Defined in your spec | `order`, `user`, `task` |
| `aggregate_id` | UUIDv4 or humane code | `550e8400-e29b-41d4-a716-446655440000` |
| `event_type` | Event name from spec | `was_placed`, `had_email_updated` |

## Request body

```json
{
  "data": {
    "items": [
      { "sku": "WIDGET-1", "quantity": 2, "price": 29.99 }
    ],
    "shipping_address": {
      "street": "123 Main St",
      "city": "Austin",
      "zip": "78701"
    }
  },
  "metadata": {
    "actor": {
      "type": "user",
      "id": "550e8400-e29b-41d4-a716-446655440001"
    },
    "previous_length": 0
  }
}
```

### data (required)

The event payload. Validated against the JSON Schema in your spec.

```json
{
  "data": {
    "email": "alice@example.com",
    "name": "Alice Smith"
  }
}
```

Validation failures return 400 with details:

```json
{
  "ok": false,
  "error": "Event data failed schema validation",
  "path": "data.email"
}
```

### metadata (required)

#### actor (required)

Who performed the action.

```json
{
  "actor": {
    "type": "user",
    "id": "550e8400-e29b-41d4-a716-446655440001"
  }
}
```

The `type` must match an entry in your spec's `agent_types`. Common types: `user`, `admin`, `system`. The `id` must be a valid UUIDv4.

#### target (optional)

What was affected, if different from the aggregate.

```json
{
  "target": {
    "type": "order",
    "id": "550e8400-e29b-41d4-a716-446655440002"
  }
}
```

Useful for cross-aggregate queries. "Show me all events targeting this order."

#### previous_length (optional)

For [optimistic concurrency control](/docs/concepts/atomicity). The expected number of events in the aggregate before this write.

```json
{
  "previous_length": 5
}
```

If the aggregate has a different length (concurrent write), j17 returns 409:

```json
{
  "ok": false,
  "error": "Concurrent write detected. Stream has 6 events, expected 5."
}
```

Retry with the new length or handle the conflict.
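
A typical client loop: on 409, refetch the length and try again. In this sketch, `readLength` and `writeEvent` stand in for the GET `/length` and POST event calls described in these docs.

```javascript
// Retry a write under OCC: refetch the stream length after each 409.
// `readLength()` returns the current length; `writeEvent(data, prev)`
// performs the POST and returns an object with a `status` field.
async function writeWithOcc(readLength, writeEvent, data, maxAttempts = 3) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const previousLength = await readLength();
    const result = await writeEvent(data, previousLength);
    if (result.status !== 409) return result; // success, or a non-OCC failure
    // 409: another writer got in first -- loop to refetch the length
  }
  throw new Error('OCC conflict persisted after retries');
}
```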

#### skip_occ (optional)

For append-only patterns where conflicts don't matter. Requires `allow_skip_occ: true` in the event's spec definition.

```json
{
  "skip_occ": true
}
```

## Idempotency

Use the `X-Idempotency-Key` request header to prevent duplicate writes. This is a header, not a metadata field.

```bash
curl -X POST https://myapp.j17.dev/order/abc123/was_placed \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Idempotency-Key: order-abc123-was_placed-20240115" \
  -d '{"data": {...}, "metadata": {"actor": {...}}}'
```

### How it works

On success (2xx), j17 caches the response body, status, and a hash of your request body. The cache lasts **24 hours**.

- **Retry with same key + same body**: Returns the cached response with `X-Idempotency-Replayed: true` header. No duplicate event written.
- **Retry with same key + different body**: Returns 422 error. This catches accidental key reuse bugs.
- **Failed requests (4xx/5xx)**: Not cached. You can retry with the same key.

### Generating keys

Any string of 1-255 printable ASCII characters. What matters is that keys are unique per distinct operation.

Good patterns:
- `{operation}-{entity_id}-{timestamp}` -- `order-placement-abc123-1705312800`
- `{user_id}-{action}-{nonce}` -- `user-789-checkout-a1b2c3d4`
- UUID v4 -- `550e8400-e29b-41d4-a716-446655440000`

The idempotency key represents the *intent*, not the data. If a user clicks "Submit Order" twice, both clicks should use the same key. But if they add an item and click again, that's a new operation with a new key.
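
A helper that builds `{operation}-{entity_id}-{timestamp}` keys and enforces the length and character rule might look like this. The field names are illustrative; j17 itself only requires 1-255 printable ASCII characters.

```javascript
// Build an intent-based idempotency key and validate it against the
// 1-255 printable ASCII rule before sending.
function idempotencyKey(operation, entityId, timestamp) {
  const key = `${operation}-${entityId}-${timestamp}`;
  if (key.length < 1 || key.length > 255 || !/^[\x20-\x7e]+$/.test(key)) {
    throw new Error('invalid idempotency key');
  }
  return key;
}
```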

### When to use idempotency keys

**Always use them for:**
- Payment processing
- Order placement
- Any operation with real-world side effects (sending emails, charging cards)
- Batch operations

**Not needed for:**
- GET requests (inherently idempotent)
- Operations already protected by your own deduplication logic

## Response

Success returns 201 Created:

```json
{
  "ok": true,
  "stream_id": "1234567890123-0",
  "implied_count": 2
}
```

| Field | Description |
|-------|-------------|
| `stream_id` | Redis stream ID of the written event |
| `implied_count` | Number of implied events triggered (omitted if 0) |

## Example: Complete flow

Creating an order:

```bash
# 1. Place the order
curl -X POST https://myapp.j17.dev/order/ord_123/was_placed \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Idempotency-Key: order-ord_123-was_placed-20240115" \
  -d '{
    "data": {
      "customer_id": "550e8400-e29b-41d4-a716-446655440003",
      "items": [
        { "sku": "WIDGET-1", "name": "Widget", "price": 29.99, "quantity": 2 }
      ]
    },
    "metadata": {
      "actor": { "type": "user", "id": "550e8400-e29b-41d4-a716-446655440003" }
    }
  }'

# Response: {"ok": true, "stream_id": "...", "implied_count": 0}

# 2. Add an item (with OCC)
curl -X POST https://myapp.j17.dev/order/ord_123/had_item_added \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "data": {
      "sku": "GADGET-2",
      "name": "Gadget",
      "price": 19.99,
      "quantity": 1
    },
    "metadata": {
      "actor": { "type": "user", "id": "550e8400-e29b-41d4-a716-446655440003" },
      "previous_length": 1
    }
  }'

# Response: {"ok": true, "stream_id": "..."}

# 3. Mark as paid
curl -X POST https://myapp.j17.dev/order/ord_123/was_paid \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "data": {
      "payment_id": "pay_789",
      "amount": 79.97
    },
    "metadata": {
      "actor": { "type": "system", "id": "550e8400-e29b-41d4-a716-446655440004" },
      "previous_length": 2
    }
  }'

# Response: {"ok": true, "stream_id": "..."}
```

## Aggregate IDs

Three formats accepted:

### UUIDv4 (recommended)

Standard UUIDs:

```
550e8400-e29b-41d4-a716-446655440000
```

Use for most entities. Generate with any UUID library.

### Humane codes

Human-readable identifiers:

```
ABC123XYZ
```

9 characters, Crockford base32. Good for:
- Booking references
- Promo codes
- Support tickets
- Anything customers type

### global

Singleton aggregates:

```bash
curl https://myapp.j17.dev/config/global
```

One per aggregate type. Use for app-wide settings, feature flags.

## Error handling

### 400 Bad Request

Validation error. Schema mismatch, missing actor, invalid aggregate ID.

### 404 Not Found

```json
{
  "ok": false,
  "error": "Event type 'was_placed' not found in spec for aggregate 'order'"
}
```

Your spec doesn't define this event type. Check for typos or update your spec.

### 409 Conflict

OCC check failed. Retry with the current length.

### 413 Request Entity Too Large

Event data exceeds size limit.

### 422 Unprocessable Entity

Idempotency key reused with a different request body.

### 429 Rate Limited

Slow down. Rate limits: 500/min per API key, 2,000/min per IP.

## Retry strategies

For 5xx errors or network failures:

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function writeEventWithRetry(path, data, options = {}) {
  const idempotencyKey = options.idempotencyKey || crypto.randomUUID();
  const maxRetries = options.maxRetries || 3;

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    let response;
    try {
      response = await fetch(`https://myapp.j17.dev${path}`, {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${apiKey}`,
          'Content-Type': 'application/json',
          'X-Idempotency-Key': idempotencyKey,
        },
        body: JSON.stringify(data),
      });
    } catch (e) {
      // Network failure: retry with exponential backoff
      if (attempt === maxRetries - 1) throw e;
      await sleep(Math.pow(2, attempt) * 100);
      continue;
    }

    if (response.ok) return response.json();
    // Client errors won't succeed on retry -- fail immediately
    if (response.status === 422) throw new Error('Idempotency key mismatch');
    if (response.status < 500) throw new Error(`Client error: ${response.status}`);
    // 5xx: retry with backoff
    if (attempt === maxRetries - 1) throw new Error(`Server error: ${response.status}`);
    await sleep(Math.pow(2, attempt) * 100);
  }
}
```

Always use idempotency keys when retrying to avoid duplicates.

## See also

- [Reading aggregates](/docs/api/reading-aggregates) - GET to query state
- [Batch operations](/docs/api/batch-operations) - Atomic multi-event writes
- [Atomicity concepts](/docs/concepts/atomicity) - OCC and consistency

# Reading Aggregates

GET aggregates to fetch current state. j17 replays events and applies handlers on demand. Handlers are declarative rules defined in your spec -- for example, a `was_created` event might use `{"set": {"target": "", "value": "$.data"}}` to set initial state, and a `had_name_updated` event might merge a new name into it.

## Basic query

```bash
GET /{aggregate_type}/{aggregate_id}
```

```bash
curl https://myapp.j17.dev/order/ord_123 \
  -H "Authorization: Bearer $API_KEY"
```

Response:

```json
{
  "ok": true,
  "data": {
    "items": [
      { "sku": "WIDGET-1", "quantity": 2, "price": 29.99 }
    ],
    "status": "paid",
    "customer_id": "550e8400-e29b-41d4-a716-446655440003",
    "total": 79.97,
    "placed_at": 1705312800,
    "paid_at": 1705312900
  },
  "metadata": {
    "length": 5,
    "created_at": 1705312800,
    "updated_at": 1705312900
  }
}
```

| Field | Description |
|-------|-------------|
| `data` | Current aggregate state after applying all events through handlers |
| `metadata.length` | Total number of events in this aggregate |
| `metadata.created_at` | Timestamp of the first event (Unix seconds) |
| `metadata.updated_at` | Timestamp of the most recent event (Unix seconds) |

## Stream length (for OCC)

Get just the stream length without computing the full aggregate. This is an O(1) Redis operation, useful for optimistic concurrency control.

```bash
GET /{aggregate_type}/{aggregate_id}/length
```

```bash
curl https://myapp.j17.dev/order/ord_123/length \
  -H "Authorization: Bearer $API_KEY"
```

Response:

```json
{
  "ok": true,
  "length": 5
}
```

Use this length as `previous_length` in your next write:

```javascript
const { length } = await fetch('/order/ord_123/length').then(r => r.json());
await writeEvent('/order/ord_123/was_updated', {
  data: { ... },
  metadata: { actor: { ... }, previous_length: length }
});
```

If another write happened between your read and write, you'll get a 409 Conflict.

## Query parameters

### Synchronous reads

By default, j17 may serve cached aggregate state. To bypass the cache and compute fresh from events:

```bash
curl "https://myapp.j17.dev/order/ord_123?synchronous=true"
```

Use when you need guaranteed-fresh state, such as immediately after a write.

### Get events

Get the raw event stream for an aggregate:

```bash
GET /{aggregate_type}/{aggregate_id}/events
```

```bash
curl "https://myapp.j17.dev/order/ord_123/events" \
  -H "Authorization: Bearer $API_KEY"
```

Response:

```json
{
  "ok": true,
  "events": [
    {
      "stream_id": "1234567890123-0",
      "key": "order:ord_123",
      "type": "was_placed",
      "data": { ... },
      "metadata": {
        "actor": { "type": "user", "id": "..." },
        "timestamp": 1705312800
      }
    },
    {
      "stream_id": "1234567890124-0",
      "key": "order:ord_123",
      "type": "had_item_added",
      "data": { ... },
      "metadata": { ... }
    }
  ]
}
```

Query parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| `start` | string | Start from this stream ID (exclusive) |
| `count` | integer | Max events to return (default 100) |

### List aggregates

```bash
GET /{aggregate_type}
```

```bash
curl https://myapp.j17.dev/order \
  -H "Authorization: Bearer $API_KEY"
```

Returns known aggregate IDs for a type with cursor-based pagination.

| Parameter | Type | Description |
|-----------|------|-------------|
| `resolve` | flag | Return full aggregate data instead of IDs (`?resolve` or `?resolve=true`) |
| `synchronous` | boolean | Skip cache when resolving, read from event stream (`?synchronous=true`) |
| `limit` | integer | Page size (default 50, max 200) |
| `cursor` | string | Last ID from previous page for pagination |

**IDs only (default):**

```json
{"ok": true, "data": ["order-123", "order-456", "order-789"]}
```

**With `?resolve` — full aggregate data:**

```json
{
  "ok": true,
  "data": [
    {"id": "order-123", "data": {"status": "pending", ...}, "metadata": {...}},
    {"id": "order-456", "data": {"status": "shipped", ...}, "metadata": {...}}
  ]
}
```

Resolved aggregates use the cache when available, falling back to sync reads from the event stream. Use `?synchronous=true` to always read from the stream.

**Pagination:**

```bash
# First page
curl "https://myapp.j17.dev/order?limit=25"
# Response includes cursor: {"ok": true, "data": [...], "cursor": "order-xyz"}

# Next page
curl "https://myapp.j17.dev/order?limit=25&cursor=order-xyz"
```
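
Walking all pages then looks like this. A sketch: `fetchPage` stands in for an authenticated GET returning `{ok, data, cursor}`, and it assumes the `cursor` field is absent on the last page.

```javascript
// Collect every aggregate ID by following the cursor until it runs out.
async function listAllIds(fetchPage, limit = 50) {
  const ids = [];
  let cursor = null;
  do {
    const page = await fetchPage(limit, cursor);
    ids.push(...page.data);
    cursor = page.cursor; // assumed absent/undefined on the last page
  } while (cursor);
  return ids;
}
```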

### Projections

Request a named projection:

```bash
curl "https://myapp.j17.dev/_projections/user_dashboard/cust_456" \
  -H "Authorization: Bearer $API_KEY"
```

Projections are pre-computed views maintained by the platform. Use `?synchronous=true` for a fresh computation instead of the cached value.

Export as CSV: `GET /_projections/:name/export.csv`

Query with SQL: `POST /_projections/query` with `{"sql": "SELECT ...", "params": [...]}`

See the [projections guide](/docs/guides/projections) for full details including limits and restrictions.

## Singleton aggregates

Use the literal `global` as the aggregate ID for one-per-type aggregates:

```bash
curl https://myapp.j17.dev/config/global \
  -H "Authorization: Bearer $API_KEY"
```

If you've defined custom singletons in your spec (`"singletons": ["all"]`), use them the same way:

```bash
curl https://myapp.j17.dev/company_audit/all \
  -H "Authorization: Bearer $API_KEY"
```

See [Aggregates: Singleton aggregates](/docs/concepts/aggregates#singleton-aggregates) for details on `global` vs custom singletons and how to configure them in your spec.

## Response codes

### 200 OK

Aggregate exists. Returns state.

### 404 Not Found

Aggregate doesn't exist (no events yet) or aggregate type not in spec.

```json
{
  "ok": false,
  "error": "Aggregate not found"
}
```

An aggregate with no events returns 404, not empty state.
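
Client code can fold the 404 case into a `null` result instead of an exception. A sketch: `fetchAggregate` stands in for the authenticated GET and returns a Response-like object.

```javascript
// Read an aggregate, treating 404 ("no events yet") as null rather
// than an error. Any other non-2xx status is surfaced as an exception.
async function readAggregateOrNull(fetchAggregate) {
  const res = await fetchAggregate();
  if (res.status === 404) return null; // no events yet
  if (!res.ok) throw new Error(`Unexpected status ${res.status}`);
  return res.json();
}
```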

### 401 Unauthorized

Invalid or missing API key.

### 403 Forbidden

Valid key, but wrong environment (staging key on prod) or insufficient scope.

## Performance

Aggregate computation is fast:

- Small aggregates (< 100 events): < 1ms
- Medium aggregates (< 1,000 events): < 5ms
- Large aggregates (< 10,000 events): < 50ms

If your aggregates grow larger, use [checkpoints](/docs/api/admin-api) to snapshot state periodically. Root checkpoints capture all aggregates of a type at once.

## Optimistic reads

For optimistic UI patterns:

```javascript
// 1. Get stream length (O(1))
const { length } = await fetch('/order/ord_123/length').then(r => r.json());

// 2. Fetch current state
const { data } = await fetch('/order/ord_123').then(r => r.json());

// 3. Render UI
renderOrder(data);

// 4. User makes change -- write with OCC
try {
  await writeEvent('/order/ord_123/was_shipped', {
    data: { ... },
    metadata: { actor: { ... }, previous_length: length }
  });
} catch (err) {
  // 409 Conflict -- refetch and retry
  const fresh = await fetch('/order/ord_123').then(r => r.json());
  renderOrder(fresh.data);
}
```

## Comparing to traditional databases

| Operation | SQL | j17 |
|-----------|-----|-----|
| Read by ID | `SELECT * FROM orders WHERE id = ?` | `GET /order/ord_123` |
| Read length | `SELECT COUNT(*) FROM events WHERE key = ?` | `GET /order/ord_123/length` |
| Read full history | Multiple queries/joins | `GET /order/ord_123/events` |
| List/filter | `SELECT * FROM orders WHERE status = 'paid'` | Not directly (use projections) |

The trade-off: you lose ad-hoc queries, you gain immutable history.

## Best practices

**Store aggregate IDs.** When you create an order, save the ID in your database. You'll need it to query later.

**Use projections for lists.** Don't try to "list all orders." Create a projection that maintains the list.

**Use `?synchronous=true` sparingly.** Default reads may use cached state, which is faster. Only force synchronous when you need guaranteed freshness.

**Handle 404 gracefully.** A 404 just means no events yet. It's not an error condition.

## See also

- [Writing events](/docs/api/writing-events) - POST to modify state
- [Batch operations](/docs/api/batch-operations) - Atomic multi-event writes
- [Admin API](/docs/api/admin-api) - Checkpoints, backups, data export

# Batch Operations

Batch writes submit multiple events to the same aggregate atomically. All events succeed or none are written. A single OCC check protects the entire batch.

## When to batch

**Good for:**
- Initial entity creation with multiple setup events
- Complex state transitions that are logically atomic
- Bulk imports and data migrations

**Not for:**
- Events to different aggregates (use implications instead)
- Single events (no benefit, adds complexity)
- Operations needing immediate per-event feedback

## Batch write

```bash
POST /{aggregate_type}/{aggregate_id}
```

```json
{
  "events": [
    { "type": "was_created", "data": { "name": "Alice" } },
    { "type": "had_email_updated", "data": { "email": "alice@example.com" } },
    { "type": "had_role_assigned", "data": { "role": "admin" } }
  ],
  "metadata": {
    "actor": { "type": "admin", "id": "550e8400-e29b-41d4-a716-446655440000" },
    "previous_length": 0
  }
}
```

Each event needs:
- `type`: Event type from spec
- `data`: Event payload (optional, depends on event schema)

Metadata is shared across all events in the batch (actor, target, previous_length).
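
Assembling the batch body client-side, with the same validation j17 applies server-side, can be sketched like this (field shapes follow this page; the helper name is illustrative):

```javascript
// Build a batch request body: shared metadata, per-event type/data.
// Mirrors the two request-shape errors j17 returns for bad batches.
function buildBatchBody(events, actor, previousLength) {
  if (!Array.isArray(events)) throw new Error("Missing 'events' array in request body");
  if (events.length === 0) throw new Error('Events array cannot be empty');
  const metadata = { actor };
  if (previousLength !== undefined) metadata.previous_length = previousLength;
  return { events, metadata };
}
```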

## Response format

```json
{
  "ok": true,
  "stream_ids": ["1706789012345-0", "1706789012345-1", "1706789012345-2"],
  "count": 3,
  "implied_count": 5
}
```

| Field | Description |
|-------|-------------|
| `stream_ids` | Redis stream IDs for each written event |
| `count` | Number of events written |
| `implied_count` | Total implied events across all triggers (omitted if 0) |

## OCC in batches

The `previous_length` in metadata applies to the batch as a whole:

- If you specify `previous_length: 0`, the stream must be empty before writing
- All events are written atomically after the OCC check passes
- If OCC fails, none of the events are written

### Getting stream length for OCC

Use the length endpoint for an O(1) check:

```bash
curl https://myapp.j17.dev/order/ord_123/length \
  -H "Authorization: Bearer $API_KEY"

# Response: {"ok": true, "length": 5}
```

Then pass it as `previous_length` in your batch write.

## Implications in batches

Each event in a batch can trigger implications. All implications see the **pre-batch state**, not intermediate states:

```
Submit: [A, B, C] to order:123

S0 (pre-batch state)
+-- Implications for A see S0
+-- Implications for B see S0
+-- Implications for C see S0

After commit: state = S0 + A + B + C
```

## Idempotency

Batch writes support the `X-Idempotency-Key` header just like single writes:

```bash
curl -X POST https://myapp.j17.dev/user/user_123 \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Idempotency-Key: user-setup-user_123-20240115" \
  -d '{
    "events": [
      { "type": "was_created", "data": { "name": "Alice" } },
      { "type": "had_role_assigned", "data": { "role": "admin" } }
    ],
    "metadata": {
      "actor": { "type": "system", "id": "550e8400-e29b-41d4-a716-446655440001" }
    }
  }'
```

If you retry with the same key and same body, the cached response is returned with `X-Idempotency-Replayed: true`. Same key with different body returns 422.

## Error handling

### 400 Bad Request

Validation error on one or more events. No events written.

```json
{
  "ok": false,
  "error": "Event data failed schema validation",
  "event_index": 1,
  "path": "data.email"
}
```

The `event_index` field tells you which event in the array failed (0-based).

### 409 Conflict

OCC check failed. No events written. Refetch the length and retry.

### Empty or missing events

```json
{
  "ok": false,
  "error": "Missing 'events' array in request body"
}
```

Or:

```json
{
  "ok": false,
  "error": "Events array cannot be empty"
}
```

## Bulk import example

Migrating from an existing database:

```javascript
for (const user of users) {
  const events = [
    { type: 'was_created', data: { name: user.name, email: user.email } },
    { type: 'had_profile_updated', data: { bio: user.bio, avatar: user.avatar } }
  ];

  const res = await fetch(`https://myapp.j17.dev/user/${user.uuid}`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
      'X-Idempotency-Key': `migration-user-${user.uuid}`,
    },
    body: JSON.stringify({
      events,
      metadata: {
        actor: { type: 'system', id: '550e8400-e29b-41d4-a716-446655440002' }
      }
    })
  });
  // Stop on the first failure; the idempotency key makes re-running safe.
  if (!res.ok) throw new Error(`Import failed for ${user.uuid}: ${res.status}`);
}
```

Use idempotency keys so you can resume if interrupted.

## When not to batch

**Events to different aggregates**: Batches only target a single aggregate. For cross-aggregate writes, use implications -- define them in your spec so one trigger event atomically creates derived events in other aggregates.

**Events that are truly independent**: If event A failing shouldn't block event B, use separate POSTs.

**Interactive user actions**: Individual POSTs give clearer error feedback than batch failures.

## See also

- [Writing events](/docs/api/writing-events) - Single event API
- [Atomicity concepts](/docs/concepts/atomicity) - OCC and consistency
- [Reading aggregates](/docs/api/reading-aggregates) - GET to query state

# Admin API

Manage specs, API keys, backups, and configure your instance via the Admin API. Admin operations are split between the **headnode** (identity, billing, spec deployment) and the **worker** (data operations proxied through the headnode).

## Authentication

All admin endpoints require JWT authentication on the headnode:

```bash
curl https://control.j17.dev/api/instances \
  -H "Authorization: Bearer $JWT_TOKEN"
```

Get a JWT via login:

```bash
curl -X POST https://control.j17.dev/api/login \
  -H "Content-Type: application/json" \
  -d '{"email": "admin@example.com", "password": "..."}'
```

## Spec management

### Deploy spec

```bash
POST /api/instances/:instance_id/spec
Authorization: Bearer $JWT
Content-Type: application/json

{
  "environment": "prod",
  "spec": {
    "aggregate_types": { ... },
    "agent_types": [ ... ]
  }
}
```

The spec goes inside the `"spec"` key. The dashboard upload expects the same format.

Specs are validated before deployment. Invalid specs return 422 with error details. Incompatible changes (removing types, changing required fields) are rejected unless `"force": true` is passed (non-prod environments only).

### Get current spec

The current spec for an instance/environment is served via the internal API (headnode-to-worker). Operators view specs through the dashboard UI.

## API key management

All key management is on the **headnode**.

### List keys

```bash
GET /api/instances/:instance_id/keys
Authorization: Bearer $JWT
```

### Create key

```bash
POST /api/instances/:instance_id/keys
Authorization: Bearer $JWT

{
  "name": "Production Backend",
  "scope": "write",
  "environment": "prod"
}
```

Response includes the key (shown once):

```json
{
  "id": "key-uuid",
  "name": "Production Backend",
  "key": "j17_0_prod_xyz789...",
  "scope": "write",
  "environment": "prod",
  "created_at": "2024-01-15T10:00:00Z"
}
```

### Rotate key

```bash
POST /api/keys/:id/rotate
Authorization: Bearer $JWT
```

Creates a new key and revokes the old one.

### Schedule revocation

```bash
POST /api/keys/:id/schedule_revocation
Authorization: Bearer $JWT

{
  "revoke_at": "2024-02-01T00:00:00Z"
}
```

### Revoke key

```bash
DELETE /api/keys/:id
Authorization: Bearer $JWT
```

Revoked keys fail immediately on next use.

## Instance operations

These endpoints are on the headnode and proxy to the appropriate worker node. All are under `/api/instances/:id/ops/:environment/`.

### Checkpoints

Checkpoints snapshot aggregate state for faster replay.

| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/ops/:env/checkpoints` | List checkpoints |
| `POST` | `/ops/:env/checkpoints` | Create checkpoint |
| `POST` | `/ops/:env/checkpoints/:checkpoint_id/restore` | Restore checkpoint |
| `DELETE` | `/ops/:env/checkpoints/:checkpoint_id` | Delete checkpoint |

Root anchors capture Merkle roots for cryptographic verification:

| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | (internal) `/root-anchors` | List root anchors |
| `POST` | (internal) `/root-anchors` | Create root anchor |
| `GET` | (internal) `/root-anchors/latest` | Get latest root anchor |
| `DELETE` | (internal) `/root-anchors/:id` | Delete root anchor |
| `GET` | (internal) `/root-anchor-settings` | Get auto-anchor settings |
| `PUT` | (internal) `/root-anchor-settings` | Update auto-anchor settings |

### Backups

| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/ops/:env/backups` | List backups |
| `POST` | `/ops/:env/backups` | Create backup |
| `DELETE` | `/ops/:env/backups/:backup_id` | Delete backup |

Backup settings (offsite S3 configuration):

| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/api/instances/:id/backup_settings` | Get backup settings |
| `PUT` | `/api/instances/:id/backup_settings` | Update backup settings |

### Blobs

Binary data storage (e.g., WASM modules, config files).

| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | (internal) `/blobs` | List blobs |
| `POST` | (internal) `/blobs` | Upload blob |
| `GET` | (internal) `/blobs/:name` | Get blob |
| `DELETE` | (internal) `/blobs/:name` | Delete blob |

### Scheduled events

| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | (internal) `/scheduled` | List scheduled events |
| `POST` | `/ops/:env/scheduled/:event_id/cancel` | Cancel scheduled event |
| `POST` | `/ops/:env/scheduled/:event_id/retry` | Retry failed event |
| `GET` | (internal) `/scheduled/dead` | List dead letters |

### Sagas

| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | (internal) `/sagas` | List sagas |
| `GET` | (internal) `/sagas/:saga_id` | Get saga detail |
| `POST` | `/ops/:env/sagas/:saga_id/retry` | Retry failed saga |

### Tombstones (GDPR erasure)

Tombstone endpoints require node secret authentication (internal API only). They replace event payloads with tombstone markers while preserving stream structure.

| Method | Endpoint | Description |
|--------|----------|-------------|
| `POST` | (internal) `/tombstone/:type/:id` | Create tombstone request |
| `GET` | (internal) `/tombstones` | List tombstones |
| `GET` | (internal) `/tombstones/:id` | Get tombstone status |
| `DELETE` | (internal) `/tombstones/:id` | Cancel (while pending) |
| `POST` | (internal) `/tombstones/:id/execute` | Execute tombstone |

Tombstones have a configurable grace period (minimum 72 hours) before execution. After execution, event payloads are replaced with `_was_tombstoned` markers containing the original content hash. See the tombstones documentation for details on transitive cascade rules via `onTombstone` spec configuration.

### Listener deliveries (webhooks)

Listeners deliver events to HTTP endpoints with HMAC-SHA256 signed payloads. Failed deliveries retry with exponential backoff (5s, 25s, 125s) up to 3 attempts.

| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | (internal) `/deliveries` | List listener deliveries |

Listeners are configured in your spec, not via API. The delivery system handles:
- Automatic retry with exponential backoff
- HMAC-SHA256 payload signing (`X-J17-Signature` header)
- Delivery cleanup (delivered > 7 days, failed > 30 days)

### Audit

Cryptographic verification of event integrity.

| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | (internal) `/audit/merkle-root/:type/:id` | Get Merkle root for aggregate |
| `GET` | (internal) `/audit/merkle-proof/:type/:id/:index` | Get Merkle proof for event |
| `POST` | (internal) `/audit/merkle-verify` | Verify Merkle proof |
| `GET` | (internal) `/audit/verify-chain/:type/:id` | Verify hash chain integrity |

These are also available via API key auth at `/:type/:id/audit/...`.

### Data loading

| Method | Endpoint | Description |
|--------|----------|-------------|
| `POST` | (internal) `/inject` | Inject test data (staging/test only) |
| `POST` | (internal) `/import` | Import historical events |
| `POST` | (internal) `/import_jsonl` | Import events in JSONL format |
| `POST` | (internal) `/cold_start` | Initial production data load |
| `GET` | (internal) `/export` | Export all events |

### Error lookup

```bash
GET /api/instances/:id/ops/:environment/errors/:error_id
Authorization: Bearer $JWT
```

Returns details for a specific error, including the full error context and stack trace.

## Usage and billing

Usage and billing endpoints are on the headnode.

| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/api/instances/:id` | Instance details including plan/tier |

Usage metrics are tracked per-instance and pushed from workers to the headnode.

## Projections

| Method | Endpoint | Description |
|--------|----------|-------------|
| `POST` | `/api/instances/:id/projections` | Deploy a projection definition |
| `GET` | `/api/instances/:id/projections` | List configured projections |
| `DELETE` | `/api/instances/:id/projections/:name` | Remove a projection |

The deploy body wraps the definition:

```json
{
  "name": "order_summary",
  "environment": "prod",
  "projection": {
    "for_each": "order",
    "sources": {"order": "$key"},
    "select": {"total": "$.order.total"}
  }
}
```

See the [projections guide](/docs/guides/projections) for full details on sources, select transforms, and SQL queries.

## See also

- [Authentication](/docs/api/authentication) - JWT and API keys
- [Writing events](/docs/api/writing-events) - Data plane API
- [Reading aggregates](/docs/api/reading-aggregates) - GET aggregates

---


# Projections

Projections are read-only views that combine data from multiple aggregates into a single response. Your frontend needs an order with customer name, product details, and shipping info -- that's four aggregates. Without projections, you either make 4 API calls or build a custom endpoint. Projections solve this: define the shape you need in JSON, and the platform composes it for you.

## When to use projections

**Good for:**
- Dashboards combining multiple data sources
- API responses needing denormalized data
- Complex queries that would need multiple round-trips

**Not for:**
- Simple single-aggregate reads (use GET directly)
- Write operations (projections are read-only)

## Defining projections

Projections are deployed via the admin API. Define them in JSON:

```json
{
  "name": "order_with_details",
  "for_each": "order",
  "sources": {
    "order": "$key",
    "customer": {"type": "customer", "id": "$.order.customer_id"},
    "products": {"type": "product", "id": "$.order.line_items[*].product_id", "mode": "array"}
  },
  "include": {
    "pricing": "pricing_rules:global"
  },
  "select": {
    "order_id": "$.order.id",
    "total": "$.order.total",
    "customer_name": "$.customer.name",
    "tax_rate": "$.pricing.default_tax_rate",
    "items": {
      "$map": "$.order.line_items",
      "as": "$item",
      "to": {
        "product_id": "$item.product_id",
        "product_name": {
          "$lookup": "$.products",
          "where": {"id": "$item.product_id"},
          "select": "$.name"
        }
      }
    }
  }
}
```

### Fields

| Field | Required | Purpose |
|-------|----------|---------|
| `name` | Yes | Projection identifier (lowercase, alphanumeric with underscores) |
| `for_each` | Yes | Trigger aggregate type -- one projection instance per aggregate of this type |
| `sources` | Yes | Aggregates derived from the trigger's state tree |
| `include` | No | Global singletons via `type:global` syntax |
| `select` | Yes | Output shape with JSONPath expressions |

## Sources

Sources define which aggregates to fetch. The first source can use `"$key"` shorthand to reference the trigger aggregate:

```json
"sources": {
  "order": "$key",
  "customer": {"type": "customer", "id": "$.order.customer_id"}
}
```

### Source options

| Option | Default | Description |
|--------|---------|-------------|
| `type` | Required | Aggregate type to fetch |
| `id` | Required | ID expression (`$key`, literal, or JSONPath) |
| `mode` | `"single"` | `"single"` or `"array"` for fetching multiple |

### Array mode

Use `mode: "array"` when the ID path returns multiple values:

```json
"products": {
  "type": "product",
  "id": "$.order.line_items[*].product_id",
  "mode": "array"
}
```

This fetches all products referenced by the order's line items.

## Include

The `include` field is for global singleton aggregates that are not in the trigger's state tree:

```json
"include": {
  "config": "settings:global",
  "pricing": "pricing_rules:global"
}
```

`include` is restricted to `:global` IDs to guide users toward proper patterns. For non-global aggregates, use `sources` with appropriate ID resolution.

## Select expressions

The `select` field defines the output shape using JSONPath expressions.

### Basic paths

Reference any source or include binding:

```json
"select": {
  "customer_name": "$.customer.name",
  "order_total": "$.order.total"
}
```

### $map transform

Map over arrays to transform each element:

```json
"items": {
  "$map": "$.order.line_items",
  "as": "$item",
  "to": {
    "name": "$item.name",
    "qty": "$item.quantity"
  }
}
```

### $lookup transform

Find and extract from arrays:

```json
"product_name": {
  "$lookup": "$.products",
  "where": {"id": "$item.product_id"},
  "select": "$.name"
}
```

## API endpoints

### Admin (headnode, JWT authentication)

```
POST   /api/instances/:id/projections        # Deploy definition
GET    /api/instances/:id/projections        # List all
DELETE /api/instances/:id/projections/:name  # Remove
```

The deploy body wraps the definition under a `projection` key:

```json
{
  "name": "order_summary",
  "environment": "prod",
  "projection": {
    "for_each": "order",
    "sources": {"order": "$key"},
    "select": {"total": "$.order.total"}
  }
}
```

The dashboard upload expects the same format — use the same JSON file you'd use with curl.

### Data (API key authentication)

```
GET /_projections/:name/:id              # Read cached
GET /_projections/:name/:id?synchronous  # Compute fresh
```

Use `?synchronous` when you need guaranteed fresh data. Otherwise, projections automatically refresh when any source aggregate changes.

## Example: User dashboard

```json
{
  "name": "user_dashboard",
  "for_each": "user",
  "sources": {
    "user": "$key",
    "org": {"type": "organization", "id": "$.user.org_id"},
    "recent_orders": {"type": "order", "id": "$.user.recent_order_ids", "mode": "array"}
  },
  "include": {
    "features": "feature_flags:global"
  },
  "select": {
    "user_name": "$.user.name",
    "org_name": "$.org.name",
    "orders": "$.recent_orders",
    "dark_mode_enabled": "$.features.dark_mode"
  }
}
```

Request:

```bash
curl https://myapp.j17.dev/_projections/user_dashboard/abc123 \
  -H "Authorization: Bearer $API_KEY"
```

Response:

```json
{
  "ok": true,
  "data": {
    "user_name": "Alice",
    "org_name": "Acme Corp",
    "orders": [],
    "dark_mode_enabled": true
  },
  "cached_at": 1706745600
}
```

## Deployment example

```bash
# Deploy a projection (headnode)
curl -X POST "https://control.j17.dev/api/instances/$INSTANCE_ID/projections" \
  -H "Authorization: Bearer $JWT" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "order_summary",
    "environment": "prod",
    "projection": {
      "for_each": "order",
      "sources": {
        "order": "$key",
        "customer": {"type": "customer", "id": "$.order.customer_id"}
      },
      "select": {
        "id": "$.order.id",
        "total": "$.order.total",
        "customer_name": "$.customer.name"
      }
    }
  }'

# Read a projection (worker, API key auth)
curl https://myapp.j17.dev/_projections/order_summary/ord_12345 \
  -H "Authorization: Bearer $API_KEY"
```

## Performance

Projections automatically cache and refresh when source data changes. No manual invalidation required.

Guidelines:
- Keep to 3-4 sources max
- Use array mode sparingly (each ID is a separate fetch)
- For high-traffic projections, use the cached endpoint (default) rather than `?synchronous`

## Limitations

- No joins across instances
- No aggregation across all aggregates (use analytics export)
- Max 10 sources per projection
- `include` is restricted to `:global` IDs

## Compared to read models

Traditional event sourcing uses "read models" -- separate databases updated by event handlers. Projections are simpler:
- No separate data store
- No eventual consistency lag (computed from live data)
- Computed on demand with automatic caching

But projections are not a substitute for heavy analytics. If you need complex cross-aggregate queries, export to a data warehouse.

## CSV Export

Download all rows of a projection as CSV:

```bash
curl https://myapp.j17.dev/_projections/order_summary/export.csv \
  -H "Authorization: Bearer $API_KEY" \
  -o orders.csv
```

Columns: `_key_id` (aggregate ID) followed by `select` keys alphabetically. Nested values are JSON-serialized. A trailer comment reports row counts for completeness verification.

- 100,000 row limit
- 10 exports/min per API key
- Stale-cache rows omitted (reported in trailer)

## SQL Queries

Projection data is automatically materialized into per-instance SQLite tables. Query them with read-only SQL:

```bash
curl -X POST https://myapp.j17.dev/_projections/query \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "sql": "SELECT name, email FROM user_summary WHERE status = ?1 ORDER BY name LIMIT 50",
    "params": ["active"]
  }'
```

```json
{
  "ok": true,
  "columns": ["name", "email"],
  "rows": [["Alice", "alice@example.com"], ["Bob", "bob@example.com"]]
}
```

Each table has `_key_id` (primary key), `_updated_at` (Unix timestamp), plus the `select` columns (all TEXT).

**Restrictions**: SELECT only, no semicolons, 10,000 row default limit, 5-second timeout, 60 queries/min per API key.

**Not for OLAP**: For analytical queries across large datasets, pipe events into ClickHouse, DuckDB, or BigQuery. SQL queries cover operational needs — active users, orders by status, recent activity.

## See also

- [Spec reference](/docs/reference/spec) - Projection definition syntax
- [Caching guide](/docs/guides/caching) - How caching interacts with projections

# Implications

Implications are reactive: when one event happens, they trigger another. All implied events are written atomically with the trigger event. Order placed? Reserve inventory. Low stock? Reorder. User upgraded? Grant features.

This guide covers practical patterns. For full syntax details, see [Implications reference](/docs/reference/implications-reference).

## Basic implication

Define in your spec:

```json
{
  "aggregate_types": {
    "order": {
      "events": {
        "was_placed": {
          "schema": {"type": "object", "properties": {"items": {"type": "array"}}},
          "handler": [{"set": {"target": "", "value": "$.data"}}],
          "implications": [
            {
              "emit": {
                "aggregate_type": "notification",
                "id": "admin",
                "event_type": "was_queued",
                "data": {"message": "New order received"}
              }
            }
          ]
        }
      }
    }
  }
}
```

When `order.was_placed` fires, j17 emits `notification.admin.was_queued` with the given data.

## Conditional implications

Only trigger when conditions are met:

```json
{
  "was_paid": {
    "schema": {"type": "object", "properties": {"amount": {"type": "number"}}},
    "handler": [{"set": {"target": "", "value": "$.data"}}],
    "implications": [
      {
        "condition": {"gte": ["$.data.amount", 100]},
        "emit": {
          "aggregate_type": "loyalty",
          "id": "$.metadata.actor.id",
          "event_type": "had_points_credited",
          "data": {"points": 10}
        }
      }
    ]
  }
}
```

Orders of $100 or more credit loyalty points. Conditions use the same predicate syntax as handlers.
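Conceptually, each condition names one operator and a list of operands, with JSONPath operands resolved against the event before comparison. A toy evaluator (illustrative only -- the real engine's resolution and operator set live in the spec reference):

```javascript
// Minimal model of condition evaluation: resolve() stands in for JSONPath
// resolution against the event; literals pass through unchanged.
const predicates = {
  equals: ([a, b]) => a === b,
  gte: ([a, b]) => a >= b,
  lt: ([a, b]) => a < b,
};

function evalCondition(condition, resolve) {
  const [op, operands] = Object.entries(condition)[0];
  return predicates[op](operands.map(resolve));
}
```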

## Cross-aggregate implications

Implications can target different aggregate types:

```json
{
  "user": {
    "events": {
      "had_tier_upgraded": {
        "schema": {"type": "object", "properties": {"new_tier": {"type": "string"}}},
        "handler": [{"set": {"target": "tier", "value": "$.data.new_tier"}}],
        "implications": [
          {
            "condition": {"equals": ["$.data.new_tier", "premium"]},
            "emit": {
              "aggregate_type": "features",
              "id": "$.metadata.actor.id",
              "event_type": "had_premium_enabled",
              "data": {"enabled_at": "$.metadata.timestamp"}
            }
          }
        ]
      }
    }
  }
}
```

User upgrade triggers a feature flag event on a different aggregate.

## Dynamic target IDs

Implications have access to all [standard event paths](/docs/reference/jsonpath#common-event-paths) (`$.key`, `$.id`, `$.type`, `$.data.*`, `$.metadata.*`, `@.*`). Use them to route implied events dynamically:

```json
{
  "emit": {
    "aggregate_type": "user_timeline",
    "id": "$.metadata.actor.id",
    "event_type": "had_activity_added",
    "data": {"event": "$.key", "type": "$.type"}
  }
}
```

## State-based conditions

Access the source aggregate's current state in conditions:

```json
{
  "condition": {"equals": ["@.notifications_enabled", true]},
  "emit": {
    "aggregate_type": "notification",
    "id": "$.metadata.actor.id",
    "event_type": "was_queued",
    "data": {}
  }
}
```

When accessing `@.*` in implications, you get the aggregate's state **before any events in the current request** are applied (S0). In a batch write, all implications in the batch see the same pre-batch state. The trigger event's data is available via `$.data.*`. (`$.state.*` is a deprecated alias for `@.*`.)

## Fan-out with map

Emit one event per item in an array:

```json
{
  "map": {
    "in": "$.data.items",
    "as": "$item",
    "emit": {
      "aggregate_type": "inventory",
      "id": "$item.product_id",
      "event_type": "was_reserved",
      "data": {
        "quantity": "$item.qty",
        "order_id": "$.key"
      }
    }
  }
}
```

### Map with condition filter

Only emit for items matching a condition:

```json
{
  "map": {
    "in": "$.data.items",
    "as": "$item",
    "condition": {"equals": ["$item.priority", "high"]},
    "emit": {
      "aggregate_type": "warehouse",
      "id": "$item.product_id",
      "event_type": "needs_urgent_pick",
      "data": {"qty": "$item.qty"}
    }
  }
}
```

## Data template operators

For complex data transformations, use DSL operators in your `data` template.

### concat - String concatenation

```json
{
  "data": {
    "message": {"concat": ["Order ", "$.key", " was placed"]}
  }
}
```

### coalesce - First non-null value

```json
{
  "data": {
    "name": {"coalesce": ["$.data.display_name", "$.data.name", "Unknown"]}
  }
}
```

### merge - Shallow object merge

```json
{
  "data": {"merge": [
    "@.defaults",
    {"override": "value", "timestamp": "$.metadata.timestamp"}
  ]}
}
```

Later values override earlier ones. Operators can be nested (e.g., a `merge` element can contain a `coalesce`), up to **32 levels** deep.
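The three operators map onto familiar string, array, and object primitives. A toy model (illustrative, not the engine -- the real operators take resolved JSONPath values as input):

```javascript
// Toy versions of the three data template operators:
// concat joins strings, coalesce picks the first non-null value,
// merge is a shallow object merge where later values win.
const ops = {
  concat: (parts) => parts.join(''),
  coalesce: (values) => values.find((v) => v !== null && v !== undefined) ?? null,
  merge: (objects) => Object.assign({}, ...objects),
};
```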

## Scheduled implications

Delay the triggered event. See the [scheduled events guide](/docs/guides/scheduled-events) for full details.

```json
{
  "was_placed": {
    "implications": [
      {
        "schedule": {
          "delay": "24h",
          "emit": {
            "aggregate_type": "notification",
            "id": "$.metadata.actor.id",
            "event_type": "cart_abandonment_reminder",
            "data": {"cart_id": "$.key"}
          },
          "cancel_on": [
            {
              "aggregate_type": "cart",
              "id": "$.key",
              "event_type": "was_checked_out"
            }
          ]
        }
      }
    ]
  }
}
```

If the cart is checked out within 24 hours, the reminder is automatically canceled.

## Safety limits

Implications have built-in protection against runaway chains:

| Limit | Default | Description |
|-------|---------|-------------|
| `max_depth` | 5 | Maximum chain depth (A implies B implies C implies D implies E) |
| `max_total` | 100 | Maximum total implied events per trigger |
| Template nesting | 32 | Maximum depth of nested template operators (`merge`/`concat`/`coalesce`) |

Exceeding limits returns an error; the entire transaction (trigger + implied) is rejected.

The engine detects cycles at spec validation time. A spec with `A:e1` implying `B:e2` implying `A:e1` will be rejected.
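Validation-time cycle detection amounts to a depth-first search for back edges over the implication graph. A sketch, treating `type:event` keys as nodes (illustrative, not the validator's code):

```javascript
// Detect cycles in an implication graph. `edges` maps a "type:event" key to
// the keys its implications emit. A node seen again while still on the
// current DFS path is a back edge, i.e. a cycle.
function hasCycle(edges) {
  const visiting = new Set(); // nodes on the current DFS path
  const done = new Set();     // nodes fully explored
  const visit = (node) => {
    if (done.has(node)) return false;
    if (visiting.has(node)) return true; // back edge => cycle
    visiting.add(node);
    const found = (edges[node] || []).some(visit);
    visiting.delete(node);
    done.add(node);
    return found;
  };
  return Object.keys(edges).some(visit);
}
```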

## Audit trail

All implied events include metadata for traceability:

```json
{
  "implied_by": {
    "key": "order:abc123",
    "event_type": "was_placed",
    "depth": 1
  }
}
```

## Ordering and consistency

Implied events are written within microseconds of the trigger event, but there is **no** guarantee that an implied event will be the next event on the target aggregate's stream. Another write to the same target aggregate can land between the trigger and the implied event. Design implied-event handlers to be additive (set fields, append to arrays) rather than dependent on being exactly the N+1th event.

The hash chain on each aggregate stream remains intact regardless of interleaving — chain hashes are computed at write time from the actual previous event, not from what the implication expected.

## Error handling

Implication failures don't block the original event. The parent event succeeds, and failed implications are retried separately.

## Common patterns

### Inventory management

```json
{
  "aggregate_types": {
    "order": {
      "events": {
        "was_placed": {
          "handler": [{"set": {"target": "", "value": "$.data"}}],
          "implications": [
            {
              "map": {
                "in": "$.data.items",
                "as": "$item",
                "emit": {
                  "aggregate_type": "inventory",
                  "id": "$item.sku",
                  "event_type": "had_reservation_requested",
                  "data": {
                    "order_id": "$.key",
                    "quantity": "$item.quantity"
                  }
                }
              }
            }
          ]
        }
      }
    },
    "inventory": {
      "events": {
        "had_reservation_requested": {
          "handler": [{"decrement": {"target": "available", "value": "$.data.quantity"}}],
          "implications": [
            {
              "condition": {"gte": ["@.available", "$.data.quantity"]},
              "emit": {
                "aggregate_type": "inventory",
                "id": "$.key",
                "event_type": "was_reserved",
                "data": {"quantity": "$.data.quantity"}
              }
            },
            {
              "condition": {"lt": ["@.available", "$.data.quantity"]},
              "emit": {
                "aggregate_type": "order",
                "id": "$.data.order_id",
                "event_type": "had_backorder_created",
                "data": {"sku": "$.key", "quantity": "$.data.quantity"}
              }
            }
          ]
        }
      }
    }
  }
}
```

### Notifications

```json
{
  "was_posted": {
    "handler": [{"set": {"target": "", "value": "$.data"}}],
    "implications": [
      {
        "emit": {
          "aggregate_type": "notification",
          "id": "$.data.post_author_id",
          "event_type": "was_created",
          "data": {
            "type": "new_comment",
            "post_id": "$.data.post_id",
            "commenter": "$.metadata.actor.id"
          }
        }
      }
    ]
  }
}
```

### Audit logging

```json
{
  "was_created": {
    "handler": [{"set": {"target": "", "value": "$.data"}}],
    "implications": [
      {
        "emit": {
          "aggregate_type": "audit",
          "id": "global",
          "event_type": "had_entry_added",
          "data": {
            "action": "create",
            "target": "$.key",
            "actor": "$.metadata.actor",
            "timestamp": "$.metadata.timestamp"
          }
        }
      }
    ]
  }
}
```

## Testing implications

Write events in staging, verify implications fire:

```bash
# Place order
curl -X POST https://myapp-staging.j17.dev/order/123/was_placed \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"data": {"items": [{"sku": "widget_1", "quantity": 2}]}}'

# Check inventory reservation fired
curl https://myapp-staging.j17.dev/inventory/widget_1 \
  -H "Authorization: Bearer $API_KEY"
```

## Compared to sagas

| | Implications | Sagas |
|--|-------------|-------|
| Trigger | Automatic on event | Explicit (trigger event) |
| Scope | Single event chain | Multi-step workflow |
| Compensation | Manual (write compensating events) | Built-in |
| Use case | Simple reactions | Complex business processes |

Use implications for simple if-this-then-that. Use [sagas](/docs/guides/sagas) for multi-step workflows with compensation.

## When not to use implications

**You need immediate confirmation** -- Implications are atomic with the trigger, but if you need the implied event's response before returning to the client, use a saga.

**Complex conditional logic** -- More than 3-4 conditions? Consider a [WASM handler](/docs/reference/wasm) or saga.

**External calls needed** -- Both Tick and WASM implications are pure data transformations with no network calls or I/O. Fetch external data before writing the trigger event.

## See also

- [Implications reference](/docs/reference/implications-reference) - Full syntax details
- [Sagas guide](/docs/guides/sagas) - Complex workflows
- [Scheduled events](/docs/guides/scheduled-events) - Delayed execution

# Sagas

Sagas orchestrate long-running workflows across multiple aggregates. Unlike implications (which react automatically), sagas are explicit: a trigger event starts them, they execute steps in order, and they handle failure with compensation.

## When to use sagas

**Good for:**
- Multi-step business processes (checkout, onboarding, provisioning)
- Operations spanning external services (payment APIs, email, webhooks)
- Processes needing human approval
- Anything where partial failure requires rollback

**Not for:**
- Simple single-aggregate updates (use regular events)
- Automatic reactions to events (use [implications](/docs/guides/implications))
- Single delayed actions (use [scheduled events](/docs/guides/scheduled-events))

## You might not need a saga

If it's a single delayed action -- send a reminder in 24 hours, expire a token next week -- use a scheduled event. Sagas are for multi-step workflows where steps depend on each other and failures need to unwind previous work.

## Defining sagas

Sagas are defined in `modules.sagas` in your spec:

```json
{
  "modules": {
    "sagas": {
      "checkout": {
        "trigger": {
          "aggregate_type": "cart",
          "event_type": "checkout_started"
        },
        "steps": [
          {
            "name": "reserve_inventory",
            "emit": {
              "aggregate_type": "inventory",
              "id": "$.data.product_id",
              "event_type": "reservation_requested",
              "data": {
                "quantity": "$.data.quantity"
              }
            },
            "await": {
              "aggregate_type": "inventory",
              "event_types": ["was_reserved", "reservation_failed"]
            },
            "compensate": {
              "aggregate_type": "inventory",
              "id": "$.data.product_id",
              "event_type": "reservation_released",
              "data": {}
            },
            "timeout_ms": 30000
          },
          {
            "name": "charge_payment",
            "condition": {"equals": ["$prev.type", "was_reserved"]},
            "emit": {
              "aggregate_type": "payment",
              "id": "$.data.payment_id",
              "event_type": "charge_requested",
              "data": {"amount": "$.data.total"}
            },
            "await": {
              "aggregate_type": "payment",
              "event_types": ["was_charged", "charge_failed"]
            },
            "compensate": {
              "aggregate_type": "payment",
              "id": "$.data.payment_id",
              "event_type": "refund_requested",
              "data": {}
            },
            "timeout_ms": 30000
          },
          {
            "name": "notify",
            "condition": {"equals": ["$prev.type", "was_charged"]},
            "emit": {
              "aggregate_type": "notification",
              "id": "$.data.customer_id",
              "event_type": "order_confirmation_sent",
              "data": {
                "transaction_id": "$prev.data.transaction_id"
              }
            }
          }
        ],
        "on_complete": {
          "aggregate_type": "order",
          "id": "$.data.order_id",
          "event_type": "checkout_completed",
          "data": {}
        },
        "on_failed": {
          "aggregate_type": "order",
          "id": "$.data.order_id",
          "event_type": "checkout_failed",
          "data": {"error": "$error.message"}
        }
      }
    }
  }
}
```

### Trigger

```json
"trigger": {
  "aggregate_type": "cart",
  "event_type": "checkout_started"
}
```

When `checkout_started` is written to any cart, this saga starts.

### Steps

Each step has a name, an event to emit, and optionally:
- `await` - event types to wait for (step completes when one arrives)
- `compensate` - event to emit if a later step fails
- `timeout_ms` - how long to wait (required for steps with `await`)
- `condition` - only run if true

### Referencing data

Templates pull data from the trigger event and previous steps. Sagas have access to all [standard event paths](/docs/reference/jsonpath#common-event-paths) (`$.key`, `$.id`, `$.type`, `$.data.*`, `$.metadata.*`, `@.*`) plus saga-specific context:

| Template | Description |
|----------|-------------|
| `$prev.type` | Previous step's response type |
| `$prev.data.*` | Previous step's response data |
| `$context.<step>.*` | Earlier step's result by name |
| `$error.message` | Error message (in `on_failed`) |
| `$error.step` | Failed step name (in `on_failed`) |
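For example, the `notify` step in the checkout saga above could reference the payment step's response by name instead of positionally (a hypothetical variant of that step -- the original uses `$prev`):

```json
{
  "name": "notify",
  "condition": {"equals": ["$prev.type", "was_charged"]},
  "emit": {
    "aggregate_type": "notification",
    "id": "$.data.customer_id",
    "event_type": "order_confirmation_sent",
    "data": {
      "transaction_id": "$context.charge_payment.data.transaction_id"
    }
  }
}
```

`$context` matters once a step needs a result from any step other than the immediately previous one.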

`@.*` gives you the trigger aggregate's computed state at the moment the saga was created. This is a snapshot -- if the aggregate changes after the saga starts, steps still see the original state. Use this when a saga step needs data from the aggregate that wasn't part of the trigger event (e.g., a `customer_id` that was set at creation time but isn't in the `had_followup_converted` event). (`$.state.*` is a deprecated alias for `@.*`.)

**Type preservation.** Templates pass values through without type coercion. If `@.customer_id` resolves to an object but the target schema expects `"type": "string"`, the step will fail with a type mismatch. Make sure the data types in your aggregate state match what the target event schema declares.

**Optional fields.** Append `?` to any template path to make it optional. If the path resolves to null, the key is omitted from the emitted event data instead of being set to null. This is useful when the trigger event may or may not include a field:

```json
"data": {
  "customer_id": "$.data.customer_id",
  "is_tax_exempt": "$.data.is_tax_exempt?"
}
```

If `is_tax_exempt` isn't present in the trigger event, it's simply left out of the emitted event rather than being set to `null` (which would fail a `"type": "boolean"` schema check).
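The omit-on-null behavior can be pictured with a small resolver sketch (hypothetical -- the real resolution happens inside j17, and `lookup` here handles only simple `$.a.b` paths):

```javascript
// Sketch of optional-template resolution: a trailing "?" means "omit the
// key when the path resolves to nothing" instead of emitting null.
function resolveData(template, event) {
  const out = {};
  for (const [key, path] of Object.entries(template)) {
    const optional = path.endsWith("?");
    const value = lookup(event, optional ? path.slice(0, -1) : path);
    if (value == null && optional) continue; // omit the key entirely
    out[key] = value;
  }
  return out;
}

// Minimal "$.data.x"-style lookup, for illustration only.
function lookup(event, path) {
  return path
    .replace(/^\$\./, "")
    .split(".")
    .reduce((acc, k) => (acc == null ? undefined : acc[k]), event);
}
```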

The `condition` on `charge_payment` checks that inventory was actually reserved before charging. If the previous step returned `reservation_failed`, this step is skipped.
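That check can be sketched as a small evaluator (an assumption about how the runner behaves, shown only for `$prev.*` templates):

```javascript
// Sketch: {"equals": [a, b]} compares two values after resolving any
// "$prev.*" templates against the previous step's response.
function evalCondition(condition, prev) {
  if (!condition) return true; // no condition -> step always runs
  const resolve = (v) =>
    typeof v === "string" && v.startsWith("$prev.")
      ? v
          .slice("$prev.".length)
          .split(".")
          .reduce((acc, k) => (acc == null ? undefined : acc[k]), prev)
      : v;
  const [left, right] = condition.equals.map(resolve);
  return left === right;
}
```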

## How sagas execute

1. **Trigger event** is written, saga starts
2. **Execute step 1**: Emit `inventory.reservation_requested`
3. **Wait**: For a matching `await` event or timeout
4. **On success**: Proceed to step 2
5. **On failure**: Run compensations in reverse order, mark saga failed

Each step emits its event, then waits for a response matching one of the `await.event_types`. If no response arrives within `timeout_ms`, the step fails.
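A step's outcome can be sketched as a small function over the step definition and the awaited response (illustrative only -- here a missing response stands in for a timeout):

```javascript
// Steps without `await` complete immediately; awaited steps complete on a
// matching response and fail on timeout. Non-matching events are ignored.
function resolveStep(step, response) {
  if (!step.await) return { status: "completed" }; // fire-and-forget
  if (response === undefined) return { status: "failed", reason: "timeout" };
  if (step.await.event_types.includes(response.type)) {
    return { status: "completed", response };
  }
  return { status: "waiting" };
}
```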

### Steps without await

Steps without `await` succeed immediately after emitting. Use this for fire-and-forget actions like logging:

```json
{
  "name": "audit_log",
  "emit": {
    "aggregate_type": "audit",
    "id": "global",
    "event_type": "checkout_attempted",
    "data": {"cart_id": "$.key"}
  }
}
```

## Compensation

When a step fails, the saga walks backward through completed steps and emits their `compensate` events:

```
Step 3 fails
-> Compensate step 2
-> Compensate step 1
-> Saga marked failed
```

Steps without `compensate` are skipped during rollback (like `notify` -- nothing to undo).

If compensation itself fails after 3 attempts (`max_compensation_attempts`), the saga is dead-lettered for manual review.
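The rollback order can be sketched as a pure function (illustrative -- the saga runner does this internally):

```javascript
// Completed steps before the failure, compensated in reverse order;
// steps without a `compensate` block are skipped.
function compensationPlan(steps, failedIndex) {
  return steps
    .slice(0, failedIndex)
    .filter((step) => step.compensate)
    .reverse()
    .map((step) => step.compensate.event_type);
}
```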

### Lifecycle events

`on_complete` fires when all steps succeed. `on_failed` fires after compensation completes. Both are optional.

## Saga state

Query saga status via the admin API:

```http
GET /_admin/sagas/:id
Authorization: Bearer <operator-jwt>
```

Response:

```json
{
  "saga_id": "saga_abc123",
  "type": "checkout",
  "status": "running",
  "current_step": "charge_payment",
  "started_at": 1705312800,
  "steps": [
    {"name": "reserve_inventory", "status": "completed", "completed_at": 1705312801},
    {"name": "charge_payment", "status": "waiting", "waiting_since": 1705312802}
  ]
}
```

## Admin API

```http
GET  /_admin/sagas?status=running     # List sagas
GET  /_admin/sagas/:id                # Get saga details
POST /_admin/sagas/:id/retry          # Retry failed saga
```

Filter by `status` (running, completed, failed, dead_lettered), `environment`, `limit`, `offset`.

Retry restarts from the failed step; completed steps are not re-run. The saga runner polls for work every 1000 ms.

## Dead letters

If compensation fails 3 times, the saga is dead-lettered. This requires manual intervention:

1. Check the dashboard (dead letters appear prominently on the home page)
2. Fix the underlying issue
3. Retry via the admin API

## Example: User onboarding

```json
{
  "modules": {
    "sagas": {
      "user_onboarding": {
        "trigger": {
          "aggregate_type": "user",
          "event_type": "was_created"
        },
        "steps": [
          {
            "name": "send_welcome_email",
            "emit": {
              "aggregate_type": "email",
              "id": "$.data.user_id",
              "event_type": "was_queued",
              "data": {"template": "welcome", "email": "$.data.email"}
            }
          },
          {
            "name": "provision_trial",
            "emit": {
              "aggregate_type": "subscription",
              "id": "$.data.user_id",
              "event_type": "trial_was_started",
              "data": {"plan": "starter"}
            },
            "compensate": {
              "aggregate_type": "subscription",
              "id": "$.data.user_id",
              "event_type": "trial_was_cancelled",
              "data": {}
            }
          }
        ]
      }
    }
  }
}
```

## Compared to implications

| | Implications | Sagas |
|--|-------------|-------|
| Trigger | Automatic on event | Trigger event in spec |
| Scope | Event chain (atomic) | Multi-step workflow |
| Compensation | Manual | Built-in |
| Human interaction | No | Yes |
| Failure handling | Retry | Compensation + retry |
| Use case | Simple reactions | Complex processes |

Use implications for if-this-then-that. Use sagas for processes requiring coordination and rollback.

## Limitations

- Max 50 steps per saga
- Max 100 concurrent sagas per instance
- Max 3 compensation attempts before dead-lettering
- **Saga chain depth limit: 3.** A saga step can emit an event that triggers another saga, which can trigger a third. Beyond depth 3, saga and scheduled event hooks are skipped and an audit log entry is recorded. If you see `saga_depth_exceeded` in your audit log, simplify your saga chain or consolidate steps. Listener/webhook hooks are unaffected by the depth limit.

## Best practices

**Keep steps small.** Each step should do one thing. Easier to compensate, easier to debug.

**Design compensations first.** Before writing the action, know how to undo it.

**Set realistic timeouts.** Account for external API latency, human response time.

**Monitor compensation rate.** High compensation means the process design needs rethinking.

## Troubleshooting

### emit_failed: type mismatch

The most common saga step failure. The step error looks like:

```json
{
  "message": "emit_failed",
  "type": "validation_error",
  "detail": "type mismatch: expected string, got object",
  "path": "data.customer_id"
}
```

This means the value resolved by the template doesn't match the target event's schema type. The `path` tells you which property failed, and `detail` tells you what was expected vs. what was provided.

**Common causes:**
- A `merge` handler stored an object where a string was expected. For example, if `@.customer_id` resolves to `{"name": "Alice", "id": "abc"}` instead of `"abc"`, a schema expecting `"type": "string"` will reject it.
- An integer in state is being passed to a schema expecting `"type": "string"`. Templates do not coerce types.
- A `set` handler with `"value": "$.data"` copied the entire event payload into a field, creating an object where a scalar was expected.

**To diagnose:**
1. Check the step error via the admin API: `GET /_admin/sagas/:id`
2. Look at `error.path` to identify which field failed
3. Look at `error.detail` for the expected vs. actual type
4. Check the trigger aggregate's state to see what the template actually resolved to

**To fix:** adjust the handler that builds the source aggregate state so the field has the correct type, or adjust the target event schema to accept the type being provided.

### emit_failed: missing required property

```json
{
  "message": "emit_failed",
  "type": "validation_error",
  "detail": "missing required property",
  "path": "data"
}
```

A template resolved to `null` because the path doesn't exist in the source data. For example, `@.email` returns null if the trigger aggregate's state doesn't have an `email` field, and the resulting event data omits the key entirely.

**To fix:** verify that the trigger event or aggregate state always includes the fields your saga steps reference, or make the field optional in the target schema.

### General debugging

1. **Inspect the saga:** `GET /_admin/sagas/:id` shows each step's status and error details
2. **Check step errors:** failed steps include `type`, `detail`, and `path` fields that identify exactly what went wrong
3. **Verify template sources:** use the aggregates endpoint to confirm what `@.*` templates resolve to at runtime
4. **Check schema alignment:** ensure the data types in your source (trigger event data, aggregate state) match the declared types in the target event schema

## See also

- [Implications guide](/docs/guides/implications) - Simple reactive chains
- [Scheduled events](/docs/guides/scheduled-events) - Delayed execution

# Scheduled Events

Schedule events to fire in the future. Reminders, expirations, delayed notifications -- anything that shouldn't happen immediately.

## How it works

When an event is written, your spec can schedule a future event:

```
User adds item to cart -> Schedule "send reminder" in 24 hours
                          (unless cart is checked out first)
```

If the cart is checked out within 24 hours, the reminder is automatically canceled. If not, the reminder event fires.

## Spec syntax

Add a `schedule` block to any event's implications:

```json
{
  "aggregate_types": {
    "cart": {
      "events": {
        "had_item_added": {
          "schema": {"type": "object", "properties": {"product_id": {"type": "string"}}},
          "handler": [{"append": {"target": "items", "value": "$.data"}}],
          "implications": [
            {
              "schedule": {
                "delay": "24h",
                "emit": {
                  "aggregate_type": "notification",
                  "id": "$.metadata.actor.id",
                  "event_type": "cart_abandonment_reminder",
                  "data": {
                    "cart_id": "$.key"
                  }
                },
                "cancel_on": [
                  {
                    "aggregate_type": "cart",
                    "id": "$.key",
                    "event_type": "was_checked_out"
                  }
                ]
              }
            }
          ]
        }
      }
    }
  }
}
```

### Fields

| Field | Required | Description |
|-------|----------|-------------|
| `delay` | Yes | How long to wait before firing. See [delay formats](#delay-formats). |
| `emit` | Yes | The event to emit when the delay expires. |
| `cancel_on` | No | Events that cancel this scheduled event before it fires. |

### Delay formats

| Format | Duration | Example |
|--------|----------|---------|
| `"Ns"` | N seconds | `"30s"` = 30 seconds |
| `"Nm"` | N minutes | `"30m"` = 30 minutes |
| `"Nh"` | N hours | `"24h"` = 24 hours |
| `"Nd"` | N days | `"7d"` = 7 days |

You can also use `delay_ms` as an integer (milliseconds) instead of the string format.

**Minimum delay: 5 minutes.** Scheduled events are designed for coarse-grained business logic, not real-time operations. Delays under 5 minutes are rejected at spec validation time.
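As a client-side sketch, these strings map to milliseconds like so (hypothetical helper, not part of the j17 API -- the authoritative validation happens server-side at spec upload):

```javascript
// Convert a j17-style delay string ("30m", "24h", "7d", ...) to
// milliseconds, rejecting delays under the 5-minute minimum.
function delayToMs(delay) {
  const match = /^(\d+)([smhd])$/.exec(delay);
  if (!match) throw new Error(`invalid delay: ${delay}`);
  const unitMs = { s: 1000, m: 60000, h: 3600000, d: 86400000 }[match[2]];
  const ms = Number(match[1]) * unitMs;
  if (ms < 5 * 60000) throw new Error(`delay below 5-minute minimum: ${delay}`);
  return ms;
}
```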

### The emit block

The emit block defines what event fires when the delay expires:

```json
{
  "emit": {
    "aggregate_type": "notification",
    "id": "$.metadata.actor.id",
    "event_type": "reminder_sent",
    "data": {
      "original_cart": "$.key",
      "user_name": "$.data.user_name"
    }
  }
}
```

- `aggregate_type` - Target aggregate type (string literal)
- `id` - Target aggregate ID (JSONPath or string literal)
- `event_type` - Event type to emit (string literal)
- `data` - Event payload (values can be JSONPath expressions)

JSONPath expressions are resolved at scheduling time against the triggering event.

### Cancel conditions

Cancel conditions specify which future events should cancel this scheduled event:

```json
{
  "cancel_on": [
    {
      "aggregate_type": "order",
      "id": "$.key",
      "event_type": "was_completed"
    },
    {
      "aggregate_type": "order",
      "id": "$.key",
      "event_type": "was_cancelled"
    }
  ]
}
```

When any matching event is written, the scheduled event is canceled. All three fields must match for cancellation to occur.
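Conceptually, cancellation is a three-field match against each incoming event. A sketch, assuming the condition's `id` has already been resolved from its JSONPath (the actual matching happens inside j17):

```javascript
// A scheduled event is canceled when an incoming event matches all three
// fields of any one cancel condition.
function shouldCancel(cancelOn, event) {
  return cancelOn.some(
    (c) =>
      c.aggregate_type === event.aggregate_type &&
      c.id === event.id &&
      c.event_type === event.type
  );
}
```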

### Race condition warning

There is an inherent race between firing and cancellation. If a cancel event arrives at nearly the same moment the scheduled event is due to fire, the cancellation may not prevent the scheduled event from firing.

For most use cases (reminders, follow-ups, soft deadlines), this is fine -- a redundant reminder is harmless. But if your domain requires strict either-or semantics (payment processed OR refund issued, never both), scheduled events alone are not sufficient. Use full [saga patterns](/docs/guides/sagas) for those cases.

## Fired event metadata

When a scheduled event fires, it includes metadata for audit trails:

```json
{
  "key": "notification:user123",
  "type": "cart_abandonment_reminder",
  "data": {"cart_id": "cart:abc"},
  "metadata": {
    "timestamp": 1705312800,
    "actor": {
      "type": "system_agent",
      "id": "event_scheduler"
    },
    "implied_by": {
      "key": "cart:abc",
      "event_type": "had_item_added",
      "depth": 1
    },
    "scheduled": {
      "scheduled_id": "a1b2c3d4-e5f6-...",
      "fire_at": "2024-01-15T12:00:00Z"
    }
  }
}
```

- `actor.type` is `"system_agent"` -- a reserved actor type for system operations that bypasses normal `agent_types` validation
- `actor.id` is `"event_scheduler"` -- identifies this as a scheduled event
- `implied_by` references the original triggering event
- `scheduled` contains scheduling metadata
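If a downstream consumer needs to treat scheduler-fired events differently, it can key off that metadata (hypothetical helper based on the shape above):

```javascript
// True for events emitted by the scheduler, per the metadata shape above.
function isScheduledFire(event) {
  const actor = event.metadata && event.metadata.actor;
  return !!actor && actor.type === "system_agent" && actor.id === "event_scheduler";
}
```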

## Use cases

### Trial expiration

```json
{
  "was_activated": {
    "handler": [{"set": {"target": "", "value": "$.data"}}],
    "implications": [
      {
        "schedule": {
          "delay": "14d",
          "emit": {
            "aggregate_type": "subscription",
            "id": "$.data.user_id",
            "event_type": "trial_should_expire",
            "data": {"user_id": "$.data.user_id"}
          },
          "cancel_on": [
            {
              "aggregate_type": "subscription",
              "id": "$.data.user_id",
              "event_type": "was_converted"
            }
          ]
        }
      }
    ]
  }
}
```

User converts to paid? Expiration cancels automatically.

### Subscription renewal

```json
{
  "was_activated": {
    "handler": [{"set": {"target": "", "value": "$.data"}}],
    "implications": [
      {
        "schedule": {
          "delay": "30d",
          "emit": {
            "aggregate_type": "billing",
            "id": "$.metadata.actor.id",
            "event_type": "renewal_due",
            "data": {
              "subscription_id": "$.key",
              "plan": "$.data.plan"
            }
          },
          "cancel_on": [
            {
              "aggregate_type": "subscription",
              "id": "$.key",
              "event_type": "was_cancelled"
            }
          ]
        }
      }
    ]
  }
}
```

### Recurring events

j17 doesn't natively support cron-like recurring events. Build recurrence by rescheduling -- each event schedules the next:

```json
{
  "weekly_report_sent": {
    "handler": [{"set": {"target": "last_sent", "value": "$.metadata.timestamp"}}],
    "implications": [
      {
        "schedule": {
          "delay": "7d",
          "emit": {
            "aggregate_type": "reports",
            "id": "global",
            "event_type": "should_send_weekly",
            "data": {}
          }
        }
      }
    ]
  }
}
```

## Admin API

### List pending events

```http
GET /_admin/scheduled?status=pending&limit=50
Authorization: Bearer <operator-jwt>
```

Query params:
- `status` - Filter by status: `pending`, `fired`, `canceled`, `dead_letter`
- `limit` - Max results (default 100)
- `offset` - Pagination offset

### Cancel a scheduled event

```http
DELETE /_admin/scheduled/:scheduled_id
Authorization: Bearer <operator-jwt>
```

### List dead letters

```http
GET /_admin/scheduled/dead_letters
Authorization: Bearer <operator-jwt>
```

Events that failed after max retries (10 attempts).

### Retry dead letter

```http
POST /_admin/scheduled/:scheduled_id/retry
Authorization: Bearer <operator-jwt>
```

Reset a dead-lettered event to pending for another attempt.

## Error handling

If a scheduled event fails to fire (e.g., spec not deployed, validation error), it is retried on the next poll cycle. After 10 failures, it moves to `dead_letter` status.

Dead-lettered events remain in the database for inspection, include `last_error` with the failure reason, and can be manually retried via the admin API.

| Error | Cause | Resolution |
|-------|-------|------------|
| `no_spec_deployed` | No spec in target environment | Deploy a spec |
| Schema validation | Emitted data doesn't match schema | Fix emit data or schema |
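The retry policy above can be sketched as a status transition after each failed fire attempt (illustrative -- the actual bookkeeping is internal to j17):

```javascript
// Retry (stay pending) until 10 failures, then move to dead_letter with
// the failure reason recorded in last_error.
function afterFailedFire(scheduled, error) {
  const attempts = (scheduled.attempts || 0) + 1;
  return {
    ...scheduled,
    attempts,
    last_error: String(error),
    status: attempts >= 10 ? "dead_letter" : "pending"
  };
}
```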

## Telemetry events

| Event | When |
|-------|------|
| `[:j17, :scheduled, :created]` | Scheduled event stored |
| `[:j17, :scheduled, :fired]` | Event successfully fired |
| `[:j17, :scheduled, :canceled]` | Canceled by matching event |
| `[:j17, :scheduled, :dead_lettered]` | Max retries exceeded |

## Precision

Scheduled events fire within about a minute of the target time. This is intentional -- scheduled events are for business logic like "send a reminder tomorrow" or "expire this offer in 7 days" where the exact second doesn't matter.

## Environment behavior

Scheduled events fire in the same environment they were created in. An event scheduled in staging fires in staging, and an event scheduled in prod fires in prod. This allows testing scheduled event logic in staging without affecting production.

## Limits

| Limit | Value |
|-------|-------|
| Min delay | 5 minutes |
| Max delay | 1 year |
| Max scheduled per aggregate | 100 |
| Max total per instance | 100,000 |

## Best practices

**Use cancellation liberally.** Always cancel if the triggering condition changes. It's cheaper to cancel than to handle no-op events.

**Use implications for short delays.** Scheduled events are for delays of 5 minutes or more (the enforced minimum). For sub-minute coordination, use implications or sagas.

**Monitor pending count.** Too many scheduled events indicates a leak -- something that should cancel isn't.

**All times are UTC.** Convert in your application before scheduling.

## See also

- [Implications guide](/docs/guides/implications) - Trigger scheduled events from implications
- [Sagas guide](/docs/guides/sagas) - Complex workflows where race conditions must be explicitly resolved

# Caching

j17 can cache aggregate state in the background, giving you fast reads and helping you stay within your tier's usage limits.

## The trade-off

Without caching, every `GET /{type}/{id}` replays all events to compute the current state. This is accurate but uses resources.

With caching enabled:
- **Faster responses**: Reads return instantly from pre-computed state
- **Lower resource usage**: Helps you stay within tier limits
- **Eventually consistent**: There's a brief window (typically under 1 second, worst case ~10 seconds) where reads might be slightly stale after a write

You can always bypass the cache with `?synchronous=true` when you need guaranteed fresh data.

## Enabling caching

Cache is configured in your spec's `modules` section. List the aggregate types you want cached:

```json
{
  "aggregate_types": {
    "user": { "events": { "..." : {} } },
    "order": { "events": { "..." : {} } },
    "product": { "events": { "..." : {} } },
    "audit_log": { "events": { "..." : {} } }
  },
  "modules": {
    "cache": ["user", "order", "product"]
  }
}
```

Only types listed in `modules.cache` are cached. Types not listed (like `audit_log` above) are always computed fresh.

## Reading cached data

Once enabled, `GET /{type}/{id}` automatically returns cached state:

```bash
curl https://myapp.j17.dev/user/abc123 \
  -H "Authorization: Bearer $API_KEY"
# Returns cached aggregate - fast
```

### When you need fresh data

Use `?synchronous=true` to bypass the cache and compute from events:

```bash
curl https://myapp.j17.dev/user/abc123?synchronous=true \
  -H "Authorization: Bearer $API_KEY"
# Computes from events, guaranteed fresh
```

### Cache invalidation

Caches invalidate automatically when:
- New events are written to the aggregate
- Spec is redeployed (cache clears automatically)

You do not manage invalidation. The system handles it.

## When to cache

**Good candidates:**
- User profiles, settings
- Product catalogs
- Dashboards and lists
- Anything read more often than written

**Skip caching for:**
- Audit logs (append-only, rarely read as aggregates)
- Aggregates where even brief staleness is unacceptable (financial balances, inventory counts during checkout)

## Staleness expectations

- **Typical**: Under 1 second after a write
- **Worst case**: ~10 seconds for very large deployments
- **After spec deploy**: Cache automatically invalidates

## Optimistic UI pattern

Often you don't need to read after a write at all:

1. User clicks "Update Profile"
2. You POST the event to j17
3. On success, immediately update the UI optimistically
4. Don't bother reading the aggregate back

```javascript
async function updateProfile(userId, changes) {
  await fetch(`https://myapp.j17.dev/user/${userId}/had_profile_updated`, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      data: changes,
      metadata: { actor: { type: "user", id: userId } }
    })
  });

  // Update UI immediately - don't wait for read
  updateLocalState(changes);
  showToast("Profile updated!");
}
```

**Why this works**: Event writes are atomic. If the POST succeeds, the event is persisted. The cache will catch up. Your user sees instant feedback.

**Handling the rare failure**: If the write fails (validation error, network issue), show an error. For conflicts, refetch and let the user retry. Most UIs can show optimistic updates and handle the rare failure gracefully.

### Optimistic updates with OCC

For cases where conflicts matter (like adding items to a shared order):

```javascript
async function addItem(orderId, item, userId) {
  const current = await fetch(
    `https://myapp.j17.dev/order/${orderId}?synchronous=true`,
    { headers: { "Authorization": `Bearer ${apiKey}` } }
  ).then(r => r.json());

  // Optimistically update UI
  renderOrder({
    ...current.state,
    items: [...current.state.items, item]
  });

  // Write event with OCC. Note: fetch only rejects on network errors,
  // so check res.ok to detect a conflict response.
  const res = await fetch(`https://myapp.j17.dev/order/${orderId}/had_item_added`, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      data: item,
      metadata: {
        actor: { type: "user", id: userId },
        previous_length: current.length
      }
    })
  });

  if (!res.ok) {
    // Conflict - someone else wrote first. Refetch and re-render.
    const fresh = await fetch(
      `https://myapp.j17.dev/order/${orderId}?synchronous=true`,
      { headers: { "Authorization": `Bearer ${apiKey}` } }
    ).then(r => r.json());
    renderOrder(fresh.state);
  }
}
```

Note the use of `?synchronous=true` to bypass cache when fresh data is needed for conflict detection.

## Performance comparison

| | Without Cache | With Cache |
|--|---------------|------------|
| Read speed | Slower (replays events) | Fast (pre-computed) |
| Resource usage | Higher | Lower |
| Consistency | Always fresh | Eventually consistent |
| Best for | Critical reads | Most reads |

## Best practices

**Start without caching.** Add it when you see performance issues or want to reduce usage.

**Cache the right types.** High-read, low-write aggregates benefit most. Don't cache types that change every second.

**Use `?synchronous=true` sparingly.** It bypasses the cache entirely -- use it for critical reads where staleness matters, not as a default.

**Monitor your usage.** Caching reduces compute usage, which helps you stay within tier limits.

**Don't cache user-specific data in shared CDN caches.** If you put j17 behind a CDN, use `Vary: Authorization` to avoid leaking data between users.

## See also

- [Reading aggregates](/docs/api/reading-aggregates) - Query parameters
- [Projections guide](/docs/guides/projections) - Multi-aggregate views that also benefit from caching

# AI-Assisted Development

j17 is designed to work well with AI coding assistants. Your entire backend -- spec, docs, and patterns -- fits comfortably in a context window.

## Why j17 works with AI

| Property | Why It Matters |
|----------|----------------|
| **JSON specs** | LLMs parse JSON perfectly. No compilation errors. |
| **HTTP API** | Universal, no SDK lock-in. Every language, every framework. |
| **Declarative handlers** | No hidden state. Predictable behavior. |
| **Small surface area** | Docs + spec + examples fit in ~50k tokens. |
| **Consistent patterns** | Once the AI learns one aggregate, it knows them all. |

## The workflow

1. **Load the docs** -- Download the full docs as a single file from [j17.dev/j17-docs.md](/j17-docs.md) and add it to your AI context, or point Claude Code / Cursor at the j17 docs directory.

2. **Describe your domain** -- "I'm building an invoicing system. Invoices can be created, have line items added, be sent, and be paid."

3. **Get a spec** -- The AI generates your `spec.json` with event types, schemas, and handlers.

4. **Refine** -- "Add a `was_overdue` event that fires when payment is 30 days late."

5. **Ship** -- Upload the spec, start POSTing events.

## Example prompt

```
I'm using j17 for a task management app. I need:
- Tasks with title, description, due date, status
- Tasks can be created, updated, completed, deleted
- Tasks belong to projects
- Projects can be created and archived

Generate a j17 spec.json with appropriate event types and Tick handlers.
```

## No Rust required. WASM when you need it.

Other "logic on the server" platforms *require* writing Rust or TypeScript, compiling to WASM, and debugging opaque runtime errors -- for everything, even simple operations.

j17 specs are JSON. Your AI already speaks JSON fluently. The six declarative Tick operations (set, merge, append, remove, increment, decrement) cover 80% of use cases without any code.

When Claude writes a j17 handler, it's writing this:

```json
{
  "handler": [{"merge": {"target": "", "value": "$.data"}}]
}
```

Not this:

```rust
#[spacetimedb::reducer]
fn update_task(ctx: &ReducerContext, id: u64, title: String) -> Result<(), String> {
    if let Some(mut task) = Task::filter_by_id(&id) {
        task.title = title;
        Task::update_by_id(&id, task);
        Ok(())
    } else {
        Err("Task not found".to_string())
    }
}
```

One of these is easy for an AI to get right. One is a minefield of type errors, borrow checker fights, and WASM compilation failures.

**But if you need custom logic**, j17 supports WASM handlers. Write complex validation, call external services, implement business rules that don't fit declarative patterns. The escape hatch exists -- you just don't have to use it for simple operations.

## Reference applications

We maintain reference applications that demonstrate j17 patterns in real-world contexts. Load these into your AI's context for better results:

| App | Description | Good for learning |
|-----|-------------|-------------------|
| **gather** | Social event planning | Multi-aggregate relationships, invitations, RSVPs |
| **invoicer** | Simple invoicing | Line items, state machines, calculated fields |
| **taskboard** | Kanban-style tasks | Projects, ordering, status transitions |

### Loading into Claude Code

```bash
# Add a reference app to your context
claude --add-dir /path/to/j17-examples/gather

# Or fetch directly
claude "Read the gather app spec at https://github.com/17jewels/examples/gather/spec.json and use it as a reference for my project"
```

### Loading into Cursor / Copilot

Add the examples directory to your workspace, or paste the relevant `spec.json` into your conversation.

The AI will pick up patterns like:
- How to structure multi-step workflows (sagas)
- When to use `append` vs `set`
- How to model relationships between aggregates
- Naming conventions for events (`was_created`, `had_item_added`, etc.)

## Tips for AI-assisted j17 development

**Be specific about your domain** -- "Users have profiles with name and email" gives better results than "I need user management."

**Ask for the spec first** -- Get the data model right before asking for frontend code or API calls.

**Iterate on handlers** -- "That `had_item_added` handler should append to `items` array, not replace it."

**Request test events** -- "Generate 5 example events I can POST to test this spec."

## Testing AI-generated output

If you're just exploring, POST events directly to your instance and see what happens. But for real projects:

### Use staging environments

Every j17 instance comes with separate staging and production environments. Different URLs, different API keys, isolated data.

```bash
# Staging
curl -X POST https://myapp-staging.j17.dev/user/123/was_created \
  -H "Authorization: Bearer $STAGING_KEY" \
  -H "Content-Type: application/json" \
  -d '{"data": {"name": "Alice"}, "metadata": {"actor": {"type": "user", "id": "123"}}}'

# Production
curl -X POST https://myapp.j17.dev/user/123/was_created \
  -H "Authorization: Bearer $PROD_KEY" \
  -H "Content-Type: application/json" \
  -d '{"data": {"name": "Alice"}, "metadata": {"actor": {"type": "user", "id": "123"}}}'
```

Let your AI generate specs and events against staging. Promote to production when you're confident.

### Automated testing

Your spec is JSON. Your events are JSON. Test them like any other data:

```bash
# Validate spec schema
j17 spec validate spec.json

# Dry-run events against a spec
j17 events validate spec.json events.json

# Run your test suite against staging
j17 test --env staging
```

### Seeding and test data

Ask your AI to generate seed data:

```
Generate 50 realistic test events for my invoicing spec:
- 10 customers created
- 20 invoices with 2-5 line items each
- Mix of paid, pending, and overdue states
```

### Mass event ingress

For cold starts or migrating existing data, use batch ingestion:

```bash
# Import from JSON lines file
j17 events import events.jsonl --env staging

# Pipe from another source
your-export-script | j17 events import - --env staging
```

Full documentation for each of these is coming. For now, the CLI help (`j17 --help`) covers the basics.

## Coming soon

- Claude Code custom commands for j17
- Prompt templates for common patterns
- One-click "Generate spec from description"

---


# The Spec

The spec is a JSON document that defines your entire data model: aggregate types, events, handlers, validation schemas, and platform features. It's declarative, version-controlled, and the single source of truth for your backend.

## Top-level structure

```json
{
  "aggregate_types": {},
  "agent_types": [],
  "target_types": [],
  "modules": {},
  "geo_types": {},
  "projections": {},
  "singletons": []
}
```

Only `aggregate_types` and `agent_types` are required. Everything else is optional.

## aggregate_types

The core of your spec. Each aggregate type defines what events can happen to it, a JSON Schema for each event's data, and tick handlers that transform state.

```json
{
  "aggregate_types": {
    "order": {
      "events": {
        "was_placed": {
          "schema": {
            "type": "object",
            "properties": {
              "items": { "type": "array" },
              "total": { "type": "number" }
            },
            "required": ["items", "total"]
          },
          "handler": [
            { "set": { "target": "", "value": "$.data" } }
          ]
        },
        "had_item_added": {
          "schema": {
            "type": "object",
            "properties": {
              "item": { "type": "object" }
            },
            "required": ["item"]
          },
          "handler": [
            { "append": { "target": "items", "value": "$.data.item" } }
          ]
        }
      }
    }
  }
}
```

### Event definitions

Each event needs:

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `schema` | JSON Schema | Yes* | Validates event data |
| `handler` | Tick array | Yes | Transforms aggregate state |
| `implications` | Array | No | Reactive events to emit |

*Reserved event types (see below) must not define a schema.

The handler can be a single operation or an array of operations. Use `[]` for events that don't modify aggregate state (audit-only or projection-only events). See [Tick reference](/docs/reference/tick) for all available operations.

### Schema design

Declare every field your events will carry, even fields your handlers don't reference. This catches typos and unexpected data at write time rather than silently storing them. If `email` is a required field and an event arrives carrying `emai` instead, the schema rejects it immediately.

If you don't set `additionalProperties: false`, events may include fields beyond what's declared in `properties`. This can be useful for attaching extra context (audit metadata, analytics tags) that handlers don't process but is available when querying the event stream directly. Undeclared fields are stored as-is but aren't validated.
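To lock a schema down instead, set `additionalProperties: false` explicitly. A sketch of a stricter variant of the earlier email schema:

```json
{
  "schema": {
    "type": "object",
    "properties": {
      "email": { "type": "string", "format": "email" }
    },
    "required": ["email"],
    "additionalProperties": false
  }
}
```

With this in place, an event carrying any undeclared field is rejected at write time rather than stored unvalidated.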

See [JSON Schema reference](/docs/reference/json-schema) for supported keywords.

### Schema evolution

Events are immutable, but your understanding changes. Use optional fields and sensible defaults:

```json
{
  "was_placed": {
    "schema": {
      "type": "object",
      "properties": {
        "items": { "type": "array" },
        "shipping_address": { "type": "object" },
        "gift_message": { "type": "string" }
      },
      "required": ["items", "shipping_address"]
    }
  }
}
```

Old events without `gift_message` validate fine. New events can include it. This approach lets you evolve your schema without breaking existing event streams.

### Reserved event types

Event types starting with `_` are reserved for system-generated events (e.g., `_was_tombstoned` for GDPR erasure). These events:

- Cannot be written via the API -- writes with `_*` event types are rejected
- Must not define a schema -- the system owns the event format
- May define a handler to update aggregate state when replayed, or omit it (the reader skips events with no handler)
- Can use `[]` as a no-op handler if you want the event registered but don't need state changes

```json
{
  "user": {
    "events": {
      "was_created": {
        "schema": { "type": "object", "required": ["name"] },
        "handler": [{ "set": { "target": "", "value": "$.data" } }]
      },
      "_was_tombstoned": {
        "handler": []
      }
    }
  }
}
```

## agent_types

Defines who can trigger events. Every event requires an actor, and the actor's `type` must appear in this list.

```json
{
  "agent_types": ["user", "admin", "staff", "webhook"]
}
```

When writing events, the actor must have a `type` from your `agent_types` list and an `id` that is a valid UUID:

```json
{
  "metadata": {
    "actor": {
      "type": "user",
      "id": "550e8400-e29b-41d4-a716-446655440000"
    }
  }
}
```

Common patterns:
- `user` -- end users of your application
- `admin` -- staff and administrators
- `webhook` -- external services calling your webhooks
- `integration` -- third-party API integrations

### System agents

System agents handle platform operations. They use the reserved `system_agent` type and are not part of your `agent_types` list.

| Agent ID | Purpose | Details |
|----------|---------|---------|
| `event_scheduler` | Fires scheduled events | Adds `metadata.scheduled` block for audit trail |
| `test_data_importer` | Test data injection (non-prod only) | Used when injecting test data without an explicit actor |

When a scheduled event fires:

```json
{
  "metadata": {
    "actor": { "type": "system_agent", "id": "event_scheduler" },
    "scheduled": {
      "scheduled_id": "a1b2c3d4-...",
      "fire_at": "2024-01-15T12:00:00Z"
    }
  }
}
```

### Reserved prefixes

| Reserved | Description |
|----------|-------------|
| `system_*` | Actor type prefix, reserved for platform operations |
| `_*` | Event type prefix, reserved for system-generated events |
| `global` | Aggregate key ID for singletons (one per type) |

Your `agent_types` cannot include any type starting with `system_` -- this is enforced at spec deploy time.

## target_types

Optional. Used when events affect something other than the primary aggregate.

```json
{
  "target_types": ["user", "order", "product"]
}
```

Include in event metadata:

```json
{
  "metadata": {
    "actor": { "type": "user", "id": "..." },
    "target": { "type": "order", "id": "..." }
  }
}
```

Useful for cross-aggregate queries: "show me all events where the target was this order."
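As a sketch, such a lookup might look like the following — note the query route here is an assumption for illustration, not a documented endpoint:

```bash
# Hypothetical route: fetch events filtered by target
curl "https://myapp.j17.dev/events?target_type=order&target_id=550e8400-e29b-41d4-a716-446655440000" \
  -H "Authorization: Bearer $J17_API_KEY"
```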

## modules

Optional configuration for platform features.

```json
{
  "modules": {
    "cache": ["user", "order"],
    "sagas": {
      "order_fulfillment": { ... }
    }
  }
}
```

| Module | Type | Description |
|--------|------|-------------|
| `cache` | Array of strings | Aggregate types whose computed state is cached in Redis |
| `sagas` | Object | Long-running process definitions |

The `cache` array lists aggregate types that benefit from caching. Cached aggregates are served from Redis instead of being recomputed from the event stream on every read.

See [Implications guide](/docs/guides/implications) for saga definitions.

## geo_types

Define geographic shapes for geospatial queries.

```json
{
  "geo_types": {
    "delivery_zone": {
      "type": "polygon",
      "properties": ["name", "delivery_fee"]
    }
  }
}
```

Attach to aggregates for location-based lookups:

```json
{
  "store": {
    "events": {
      "was_registered": {
        "schema": {
          "type": "object",
          "properties": {
            "name": { "type": "string" },
            "location": { "type": "object" }
          },
          "required": ["name", "location"]
        },
        "handler": [
          { "set": { "target": "", "value": "$.data" } }
        ],
        "geo": {
          "type": "point",
          "path": "$.data.location"
        }
      }
    }
  }
}
```

Query stores near a location:

```bash
curl "https://myapp.j17.dev/store/nearby?lat=40.7&lng=-74.0&radius=5000"
```

## projections

Multi-aggregate views that combine data from multiple sources.

```json
{
  "projections": {
    "user_dashboard": {
      "sources": [
        { "aggregate": "user", "id": "$.user_id" },
        { "aggregate": "order", "query": "user_id = $.user_id" }
      ]
    }
  }
}
```

Projections are read-only and recomputed on demand. Use them when you need data from multiple aggregates in one call.

See [Projections guide](/docs/guides/projections) for details.

## singletons

Custom singleton identifiers beyond the default `global`.

```json
{
  "singletons": ["config", "rate_limits", "feature_flags"]
}
```

Every aggregate type implicitly supports the `global` key for a singleton instance. Custom singletons let you define additional well-known keys.

Note: singletons are not built-in objects — you define the aggregate type in your spec and write events to it like any other aggregate. The singleton name is just the key ID. For example, with a `settings` aggregate type in your spec:

```bash
# Query using the singleton key as the ID
curl https://myapp.j17.dev/settings/global \
  -H "Authorization: Bearer $J17_API_KEY"
```
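Writes follow the same event endpoint pattern, with the singleton key in place of a UUID. A sketch, assuming your spec defines a `settings` aggregate type with a `was_updated` event:

```bash
curl -X POST https://myapp.j17.dev/settings/global/was_updated \
  -H "Authorization: Bearer $J17_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"data": {"maintenance_mode": true}, "metadata": {"actor": {"type": "admin", "id": "550e8400-e29b-41d4-a716-446655440001"}}}'
```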

Use sparingly. Singletons are convenient but can become bottlenecks under high write concurrency.

## Validation at spec deploy

When you deploy a spec, j17 validates it in full before accepting it:

1. **Structure** -- required fields present, correct types, no unknown keys
2. **agent_types** -- non-empty array, no `system_*` prefix
3. **JSON Schema** -- only [supported keywords](/docs/reference/json-schema), valid Draft 2020-12 syntax
4. **Handlers** -- valid [tick operations](/docs/reference/tick) and JSONPath expressions
5. **Implications** -- valid emit templates and JSONPath expressions

Invalid specs are rejected with detailed error messages pointing to the exact location of the problem.

### Backward compatibility

When deploying a spec update to an environment that already has a spec, j17 checks that the new spec is backward-compatible with the existing one. This prevents changes that would make existing events unreadable or fail validation.

**Blocked changes** (checked recursively through nested object schemas):
- Removing an aggregate type (existing event streams would become unreadable)
- Removing an event type (existing events would fail aggregation)
- Adding a new required field to an existing event schema (existing events may not have it)
- Changing the type of an existing property (existing events have the old type)
- Setting `additionalProperties: false` on a schema that previously allowed them (existing events with extra fields would fail)

**Always allowed:**
- Adding new aggregate types
- Adding new event types to existing aggregates
- Adding optional fields to existing event schemas
- Changing handlers (aggregates are recomputed from events, not stored)

If you need to make an incompatible change during early development, use `"force": true` in the request body to bypass the compatibility check. This is only available on staging and test environments -- production always enforces compatibility.
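If you do use `force`, the request is the normal spec deploy with one extra field. A sketch — the deploy route shown is an assumption (see the [Admin API](/docs/api/admin-api) for the real one); only the `"force"` field itself is documented behavior:

```bash
# Hypothetical deploy route; "force" bypasses the compatibility check on staging/test only
curl -X POST https://myapp-staging.j17.dev/admin/spec \
  -H "Authorization: Bearer $STAGING_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"spec\": $(cat spec.json), \"force\": true}"
```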

**Deprecating event types:** If you need to consolidate or rename event types in production, keep the old types in your spec with permissive schemas (no `required` fields) and no-op handlers (`[]`). Your application writes only the new types going forward. Old events remain readable and aggregate computation still works. This is the recommended pattern for production schema evolution.

### Cache invalidation

When a spec is deployed, j17 automatically invalidates cached aggregates that were computed with the old spec. This ensures handler changes take effect immediately without manual cache flushing.

## Validation at event write

When an event is written, j17 validates six things:

1. **Aggregate type** -- must exist in spec's `aggregate_types`
2. **Event type** -- must exist for that aggregate type
3. **Actor** -- `type` must be in `agent_types` (or be `system_agent`), `id` must be a valid UUID
4. **Target type** -- if present, must be in `target_types`
5. **Event data** -- must validate against the event's JSON Schema
6. **Key ID** -- must be a valid UUID, humane code, or configured singleton

All six checks must pass before the event is persisted. Failures return a structured error response with the specific check that failed.

## Complete example

A task management app with projects, tasks, system events, and implications:

```json
{
  "aggregate_types": {
    "project": {
      "events": {
        "was_created": {
          "schema": {
            "type": "object",
            "properties": {
              "name": { "type": "string", "minLength": 1 },
              "description": { "type": "string" }
            },
            "required": ["name"]
          },
          "handler": [
            { "set": { "target": "", "value": "$.data" } },
            { "set": { "target": "status", "value": "active" } },
            { "set": { "target": "created_at", "value": "$.metadata.timestamp" } }
          ]
        },
        "was_archived": {
          "schema": { "type": "object" },
          "handler": [
            { "set": { "target": "status", "value": "archived" } },
            { "set": { "target": "archived_at", "value": "$.metadata.timestamp" } }
          ]
        }
      }
    },
    "task": {
      "events": {
        "was_created": {
          "schema": {
            "type": "object",
            "properties": {
              "project_id": { "type": "string", "format": "uuid" },
              "title": { "type": "string", "minLength": 1 },
              "description": { "type": "string" }
            },
            "required": ["project_id", "title"]
          },
          "handler": [
            { "set": { "target": "", "value": "$.data" } },
            { "set": { "target": "status", "value": "todo" } },
            { "set": { "target": "created_at", "value": "$.metadata.timestamp" } }
          ]
        },
        "had_status_changed": {
          "schema": {
            "type": "object",
            "properties": {
              "status": { "enum": ["todo", "in_progress", "done"] }
            },
            "required": ["status"]
          },
          "handler": [
            { "set": { "target": "status", "value": "$.data.status" } },
            {
              "if": { "eq": ["$.data.status", "done"] },
              "then": [
                { "set": { "target": "completed_at", "value": "$.metadata.timestamp" } }
              ]
            }
          ]
        },
        "had_assignee_changed": {
          "schema": {
            "type": "object",
            "properties": {
              "assignee_id": { "type": "string", "format": "uuid" }
            },
            "required": ["assignee_id"]
          },
          "handler": [
            { "set": { "target": "assignee_id", "value": "$.data.assignee_id" } }
          ]
        },
        "_was_tombstoned": {
          "handler": []
        }
      }
    }
  },
  "agent_types": ["user", "admin"],
  "target_types": ["project", "task"],
  "modules": {
    "cache": ["task"]
  }
}
```

## Best practices

**Start small.** Define one aggregate type with 2-3 events. Add complexity as needed.

**Use past tense for event names.** `was_created`, not `create`. Events are facts about the past. Use snake_case: `had_profile_updated`, `was_placed`, `had_item_added`.

**Declare complete schemas.** Include every field in your schema, even ones your handlers don't use. This catches typos and malformed data at write time.

**Keep handlers simple.** If you need more than 3-4 operations per handler, consider whether you're modeling the right thing. Complex logic might need a saga or multiple events.

**Version in git.** The spec is code. Track it, review it, deploy it like any other code change.

**Validate before deploying.** Use the CLI to catch errors early:

```bash
j17 spec validate spec.json
```

## See also

- [Tick reference](/docs/reference/tick) -- handler operations
- [JSON Schema reference](/docs/reference/json-schema) -- validation keywords
- [Projections guide](/docs/guides/projections) -- multi-aggregate views
- [Implications guide](/docs/guides/implications) -- reactive event chains

# Tick - Declarative Handlers

Tick is j17's declarative handler language. Seventeen operations organized into four categories let you build aggregate state from events using pure JSON — no code to compile, no runtime to manage.

## Philosophy

Imperative code has hidden state. Declarative operations have none. When you read a Tick handler, you know exactly what it does — no side effects, no surprises.

Compare:

**Imperative (what you might write by hand):**
```javascript
function applyEvent(state, event) {
  if (event.type === 'had_item_added') {
    if (!state.items) state.items = [];
    state.items.push({
      ...event.data,
      added_at: Date.now()
    });
  }
}
```

**Declarative (Tick):**
```json
{
  "append": {
    "target": "items",
    "value": { "$merge": [{ "$": "$.data" }, { "added_at": { "$": "$.metadata.timestamp" } }] }
  }
}
```

The Tick version is explicit, testable, and optimizable. The imperative version has bugs waiting to happen (what if `items` isn't an array? what if `data` is null?).

## Handler structure

A handler is an array of operations applied sequentially to aggregate state:

```json
{
  "was_created": {
    "schema": { ... },
    "handler": [
      { "set": { "target": "", "value": "$.data" } },
      { "set": { "target": "status", "value": "active" } }
    ]
  }
}
```

Each operation reads from the event (via [JSONPath](/docs/reference/jsonpath)) and writes to aggregate state. A handler is required for every non-reserved event type. Use `[]` (empty array) for events that don't modify aggregate state — useful for audit-only or projection-only events.

## Accessing event data

Operations reference event data using JSONPath expressions:

| Path | Resolves to |
|------|-------------|
| `$.data` | Event payload |
| `$.data.field` | Specific field from payload |
| `$.data.field?` | Optional field (no-op if missing) |
| `$.metadata.timestamp` | Event timestamp (Unix epoch) |
| `$.metadata.actor` | Actor object |
| `$.metadata.actor.id` | Actor ID |
| `$.key` | Event key (e.g., `user:abc123`) |
| `$.type` | Event type (e.g., `was_created`) |
| `@.field` | Current aggregate state |

## Basic operations

### set

Replace the value at a path.

```json
{ "set": { "target": "status", "value": "active" } }
```

The `target` is a dot-separated path in aggregate state. The `value` can be a literal or a JSONPath expression:

```json
{ "set": { "target": "profile.name", "value": "$.data.name" } }
```

Target `""` (empty string) means the root — use this to initialize state from event data:

```json
{ "set": { "target": "", "value": "$.data" } }
```

Creates intermediate objects automatically. Setting `{ "target": "a.b.c", "value": 1 }` on an empty state produces `{ "a": { "b": { "c": 1 } } }`.

### merge

Shallow merge an object into state at a path.

```json
{ "merge": { "target": "profile", "value": "$.data" } }
```

Given state `{ "profile": { "name": "Alice" } }` and event data `{ "email": "alice@example.com" }`, the result is `{ "profile": { "name": "Alice", "email": "alice@example.com" } }`.

Shallow only — nested objects are replaced, not recursively merged.
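For example, a payload that touches a nested object replaces that object wholesale — sibling keys inside it are lost:

```
State:   { "profile": { "name": "Alice", "prefs": { "theme": "dark", "lang": "en" } } }
Event:   { "data": { "prefs": { "theme": "light" } } }
Op:      merge target="profile" value="$.data"
Result:  { "profile": { "name": "Alice", "prefs": { "theme": "light" } } }
```

The `lang` key is gone because `prefs` was replaced, not recursively merged.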

### append

Add an item to an array.

```json
{ "append": { "target": "items", "value": "$.data.item" } }
```

Creates the array if it doesn't exist.

### remove

Remove elements from an array. Three forms:

**By value** (primitive arrays):
```json
{ "remove": { "target": "tags", "value": "$.data.tag" } }
```

**By field match** (object arrays):
```json
{ "remove": { "target": "items", "where": { "id": "$.data.item_id" } } }
```

**By predicate** (complex conditions):
```json
{
  "remove": {
    "target": "sessions",
    "match": {
      "expired": {
        "timestamp": "$item.created_at",
        "maxAgeSeconds": 86400,
        "now": "$.metadata.timestamp"
      }
    }
  }
}
```

Within `match`, use `$item` to reference the current array element being tested.

### increment

Add to a numeric field.

```json
{ "increment": { "target": "balance", "by": "$.data.amount" } }
```

Creates the field with value `0` if it doesn't exist, then adds `by`. The `by` field accepts a literal number or a JSONPath expression.
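For example, incrementing a field that doesn't exist yet:

```
State:   { }
Event:   { "data": { "amount": 50 } }
Op:      increment target="balance" by="$.data.amount"
Result:  { "balance": 50 }
```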

### decrement

Subtract from a numeric field.

```json
{ "decrement": { "target": "stock", "by": "$.data.quantity" } }
```

Same behavior as `increment` but subtracts.

## Array operations

### filter

Keep only array elements that match a predicate, removing the rest.

```json
{
  "filter": {
    "target": "sessions",
    "keep": {
      "not": {
        "expired": {
          "timestamp": "$item.created_at",
          "maxAgeSeconds": 1209600,
          "now": "$.metadata.timestamp"
        }
      }
    }
  }
}
```

The `keep` field takes any [predicate](#predicates). Use `$item` to reference the current element.

### map

Transform every element in an array. Each element becomes a temporary state that the `apply` operations act on.

```json
{
  "map": {
    "target": "stages",
    "as": "$stage",
    "apply": [
      { "set": { "target": "reviewed", "value": true } },
      { "set": { "target": "reviewed_at", "value": "$.metadata.timestamp" } }
    ]
  }
}
```

The `as` field (default: `$item`) names the binding for use in nested expressions. Operations inside `apply` treat the array element as their state.

### update_where

Merge fields into array elements that match a condition.

**By field match:**
```json
{
  "update_where": {
    "target": "addresses",
    "match": { "id": "$.data.address_id" },
    "merge": { "verified": true, "verified_at": "$.metadata.timestamp" }
  }
}
```

**By predicate:**
```json
{
  "update_where": {
    "target": "line_items",
    "match": { "equals": ["$item.status", "pending"] },
    "merge": { "status": "confirmed" }
  }
}
```

The field match shorthand `{ "id": "$.data.address_id" }` is equivalent to `{ "equals": ["$item.id", "$.data.address_id"] }`. Use the shorthand for simple field equality (the common case) and explicit predicates for complex conditions.

### upsert

Replace a matching element or insert if not found. Effectively "replace if exists, insert if not" for object arrays keyed by a field.

```json
{
  "upsert": {
    "target": "addresses",
    "match": { "id": "$.data.address.id" },
    "value": "$.data.address"
  }
}
```

The `match` field takes a single key-value pair: the field name and the value to match. Removes any existing item where the field equals the value, then appends the new `value`.
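For example, upserting an address that already exists replaces it — and, per the remove-then-append semantics, moves it to the end of the array:

```
State:   { "addresses": [ { "id": "a1", "city": "Oslo" }, { "id": "a2", "city": "Bergen" } ] }
Event:   { "data": { "address": { "id": "a1", "city": "Trondheim" } } }
Op:      upsert target="addresses" match={"id": "$.data.address.id"} value="$.data.address"
Result:  { "addresses": [ { "id": "a2", "city": "Bergen" }, { "id": "a1", "city": "Trondheim" } ] }
```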

### append_unique

Add to an array only if the value isn't already present.

**Primitive arrays:**
```json
{ "append_unique": { "target": "tags", "value": "$.data.tag" } }
```

**Object arrays** (check uniqueness by a specific field):
```json
{
  "append_unique": {
    "target": "members",
    "value": "$.data.member",
    "uniqueField": "id"
  }
}
```

No-op if the value (or an object with a matching `uniqueField`) already exists.

## Dynamic key operations

These operations work on objects where the key is determined at runtime — useful for maps, lookup tables, and per-key counters.

### set_at

Set a value at a dynamic key within an object.

```json
{ "set_at": { "target": "preferences", "key": "$.data.setting_name", "value": "$.data.setting_value" } }
```

Sets `target[key] = value`. Creates the target object if it doesn't exist. The `key` must resolve to a string.

### merge_at

Shallow merge into a dynamic key within an object.

```json
{
  "merge_at": {
    "target": "members",
    "key": "$.data.user_id",
    "value": { "role": "$.data.role", "joined_at": "$.metadata.timestamp" }
  }
}
```

Preserves existing fields at that key. If the key doesn't exist, creates it. Both the existing value and the new `value` must be objects — use `set_at` to replace non-object values.

**Example:**
```
State:   { "members": { "u1": { "role": "viewer", "status": "pending" } } }
Event:   { "data": { "user_id": "u1" } }
Op:      merge_at target="members" key="$.data.user_id" value={"status": "active"}
Result:  { "members": { "u1": { "role": "viewer", "status": "active" } } }
```

### remove_at

Remove a key from an object.

```json
{ "remove_at": { "target": "permissions", "key": "$.data.permission_name" } }
```

No-op if the key or target doesn't exist.

### increment_at

Increment a numeric value at a dynamic key.

```json
{ "increment_at": { "target": "vote_counts", "key": "$.data.option_id", "by": 1 } }
```

Creates the key with value `0` if it doesn't exist, then adds `by`.
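For example, tallying votes per option:

```
State:   { "vote_counts": { "opt_a": 2 } }
Event:   { "data": { "option_id": "opt_b" } }
Op:      increment_at target="vote_counts" key="$.data.option_id" by=1
Result:  { "vote_counts": { "opt_a": 2, "opt_b": 1 } }
```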

## Control flow

### Conditional (if/then/else)

Execute operations conditionally based on a predicate.

```json
{
  "if": { "equals": ["$.data.status", "completed"] },
  "then": [
    { "set": { "target": "completed_at", "value": "$.metadata.timestamp" } }
  ],
  "else": [
    { "set": { "target": "status", "value": "$.data.status" } }
  ]
}
```

The `else` branch is optional. Both `then` and `else` take arrays of operations.

Conditionals nest:

```json
{
  "if": { "equals": ["$.data.action", "escalate"] },
  "then": [
    {
      "if": { "equals": ["@.priority", "critical"] },
      "then": [
        { "set": { "target": "escalation_level", "value": 2 } }
      ],
      "else": [
        { "set": { "target": "escalation_level", "value": 1 } }
      ]
    }
  ]
}
```

### let

Bind a variable by finding an element in an array. The variable is available to all subsequent operations in the handler.

```json
{
  "let": {
    "name": "$address",
    "find": {
      "in": "addresses",
      "where": { "id": "$.data.address_id" }
    }
  }
}
```

The `find` clause searches the array at the given state path. The `where` clause takes a single key-value pair: the field to match and the value to match against.

Reference the bound variable with its name:

```json
{ "set": { "target": "primary_street", "value": "$address.street" } }
```

Variables work with `$merge` expressions:

```json
{
  "set": {
    "target": "snapshot",
    "value": {
      "$merge": [
        "$address",
        { "verified": true, "verified_at": { "$": "$.metadata.timestamp" } }
      ]
    }
  }
}
```

## Predicates

Predicates are boolean conditions used in `if/then/else`, `filter`, `remove` (with `match`), `update_where` (with predicate match), `every`, and `some`.

### equals

Compare two values for equality.

```json
{ "equals": ["$.data.status", "active"] }
{ "equals": ["$item.id", "$.data.item_id"] }
```

Both arguments can be JSONPath expressions, literals, or item references.

### includes

Check if a value exists in an array.

```json
{ "includes": { "array": "@.approved_ids", "value": "$.data.user_id" } }
```

### min_items

Check that an array has at least N elements.

```json
{ "minItems": { "array": "@.items", "min": 1 } }
```

### max_items

Check that an array has at most N elements.

```json
{ "maxItems": { "array": "@.tokens", "max": 5 } }
```

### expired

Check if a timestamp has exceeded a maximum age.

```json
{
  "expired": {
    "timestamp": "$item.created_at",
    "maxAgeSeconds": 1209600,
    "now": "$.metadata.timestamp"
  }
}
```

Returns true if `now - timestamp > maxAgeSeconds`. Useful in `filter` and `remove` to clean up stale data.

### every

Check that ALL elements in an array match a predicate.

```json
{ "every": { "in": "@.stages", "match": { "equals": ["$item.status", "done"] } } }
```

Within `match`, `$item` refers to the current element.

### some

Check that at least one element matches.

```json
{ "some": { "in": "@.approvals", "match": { "equals": ["$item.role", "admin"] } } }
```

### subset_of

Check that every item in one array exists in another.

```json
{ "subset_of": { "items": "@.required_steps", "array": "@.completed_steps" } }
```

Returns true if every element in `items` exists in `array`. Useful for checking completeness (e.g., all required steps finished).
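Combined with a conditional, this can gate a status transition — for instance, marking an aggregate complete only once every required step appears in the completed list:

```json
{
  "if": { "subset_of": { "items": "@.required_steps", "array": "@.completed_steps" } },
  "then": [
    { "set": { "target": "status", "value": "complete" } }
  ]
}
```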

### Logical operators

Combine predicates with `not`, `and`, and `or`:

```json
{ "not": { "equals": ["$.data.status", "deleted"] } }

{ "and": [
  { "minItems": { "array": "@.items", "min": 1 } },
  { "equals": ["@.status", "pending"] }
] }

{ "or": [
  { "equals": ["$.data.role", "admin"] },
  { "equals": ["$.data.role", "owner"] }
] }
```

### Predicates summary

| Predicate | JSON key | Description |
|-----------|----------|-------------|
| equals | `equals` | Two values are equal |
| includes | `includes` | Value exists in array |
| min_items | `minItems` | Array has >= N elements |
| max_items | `maxItems` | Array has <= N elements |
| expired | `expired` | Timestamp exceeds max age |
| every | `every` | All array items match |
| some | `some` | Any array item matches |
| subset_of | `subset_of` | All items in A exist in B |
| not | `not` | Negate a predicate |
| and | `and` | All predicates true |
| or | `or` | Any predicate true |

Predicates can be nested — `not`, `and`, `or`, `every`, and `some` all accept predicates as arguments, so you can build complex conditions like `{"and": [{"not": {"equals": ...}}, {"or": [...]}]}`. Nesting is limited to **32 levels**, which is far beyond any practical use case.

## $merge expressions

`$merge` composes objects dynamically anywhere a value is expected.

```json
{
  "append": {
    "target": "activity_log",
    "value": {
      "$merge": [
        { "action": "item_added" },
        { "$": "$.data" },
        { "timestamp": { "$": "$.metadata.timestamp" } },
        { "actor_id": { "$": "$.metadata.actor.id" } }
      ]
    }
  }
}
```

`$merge` takes an array of objects and shallow-merges them left to right. Within the array:
- Plain objects contribute their fields directly
- `{ "$": "$.path" }` evaluates a JSONPath and includes the result
- Variable references like `"$found"` resolve to bound variables

Later entries overwrite earlier ones for the same key, just like `Object.assign` in JavaScript.

If an element in the `$merge` array cannot be resolved (the path doesn't exist, a variable is unbound), that element is **silently skipped** rather than causing an error. This makes `$merge` forgiving by design — you can merge defaults with optional overrides without worrying about which fields are present. This differs from operations like `set` or `append`, where a missing required path (`$.data.foo` without `?`) is an error.

`$merge` expressions can be nested (a `$merge` element can itself resolve to an object containing another `$merge`), but nesting is limited to **16 levels**. Beyond that, the value resolves as not-found and the operation fails. In practice, you should never need more than 2-3 levels of nesting.
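The skip-on-unresolved behavior can be sketched as follows. This is a simplified Python model, not the engine; `resolve_path` is a stand-in for the real JSONPath lookup, and variable references are omitted:

```python
# Sketch of $merge resolution: shallow merge left to right, silently
# skipping any element whose path cannot be resolved.

def resolve_path(ctx, path):
    node = ctx
    for part in path.lstrip("$.").split("."):
        if not isinstance(node, dict) or part not in node:
            return None  # not found
        node = node[part]
    return node

def merge(elements, ctx):
    out = {}
    for el in elements:
        if "$" in el and len(el) == 1:        # {"$": "$.path"} → whole object
            resolved = resolve_path(ctx, el["$"])
            if resolved is None:
                continue                      # unresolved: skipped, not an error
            out.update(resolved)
        else:
            fields, skip = {}, False
            for k, v in el.items():
                if isinstance(v, dict) and set(v) == {"$"}:
                    val = resolve_path(ctx, v["$"])
                    if val is None:
                        skip = True           # this element is dropped
                        break
                    fields[k] = val
                else:
                    fields[k] = v
            if not skip:
                out.update(fields)            # later entries overwrite earlier
    return out

ctx = {"data": {"sku": "A-1"}, "metadata": {"timestamp": 1710000000}}
entry = merge([
    {"action": "item_added"},
    {"$": "$.data"},
    {"timestamp": {"$": "$.metadata.timestamp"}},
    {"actor_id": {"$": "$.metadata.actor.id"}},   # actor missing → skipped
], ctx)
print(entry)  # → {'action': 'item_added', 'sku': 'A-1', 'timestamp': 1710000000}
```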

## Automatic timestamps

The engine automatically injects two fields into every aggregate:

- **`created_at`** — timestamp of the first event (set once, never overwritten)
- **`updated_at`** — timestamp of the most recent event (updated on every event)

These are Unix epoch integers (same format as `$.metadata.timestamp`). You do not need handlers for these — they are injected after all handlers run. If your handlers also set `created_at` or `updated_at`, the engine's values overwrite them.

> **Reserved field names**: Do not use `created_at` or `updated_at` as handler targets unless you intend the engine to overwrite them.
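The injection order described above can be sketched in a few lines of Python. This is a hypothetical model of the documented behavior, not engine code:

```python
# Minimal sketch of timestamp injection: handlers run first, then the
# engine stamps created_at (from the first event) and updated_at
# (current event), overwriting any handler-set values.

def apply_event(state, handler_result, event_ts):
    first_ts = state.get("created_at", event_ts)  # remembered from the first event
    state.update(handler_result)                  # handlers run first
    state["created_at"] = first_ts                # set once; engine re-stamps it
    state["updated_at"] = event_ts                # refreshed on every event
    return state

s = apply_event({}, {"name": "Alice"}, 1710000000)
s = apply_event(s, {"email": "alice@example.com"}, 1710000060)
print(s["created_at"], s["updated_at"])  # → 1710000000 1710000060
```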

## Complete example: order management

This spec defines an `order` aggregate with five event types demonstrating most operation categories:

```json
{
  "aggregate_types": {
    "order": {
      "events": {
        "was_placed": {
          "schema": {
            "type": "object",
            "properties": {
              "customer_id": { "type": "string" },
              "items": {
                "type": "array",
                "items": {
                  "type": "object",
                  "properties": {
                    "sku": { "type": "string" },
                    "name": { "type": "string" },
                    "price": { "type": "number" },
                    "quantity": { "type": "integer", "minimum": 1 }
                  },
                  "required": ["sku", "name", "price", "quantity"]
                }
              }
            },
            "required": ["customer_id", "items"]
          },
          "handler": [
            { "set": { "target": "", "value": "$.data" } },
            { "set": { "target": "status", "value": "pending" } },
            { "set": { "target": "placed_at", "value": "$.metadata.timestamp" } }
          ]
        },

        "had_item_added": {
          "schema": {
            "type": "object",
            "properties": {
              "sku": { "type": "string" },
              "name": { "type": "string" },
              "price": { "type": "number" },
              "quantity": { "type": "integer", "minimum": 1 }
            },
            "required": ["sku", "name", "price", "quantity"]
          },
          "handler": [
            {
              "upsert": {
                "target": "items",
                "match": { "sku": "$.data.sku" },
                "value": "$.data"
              }
            }
          ]
        },

        "had_item_removed": {
          "schema": {
            "type": "object",
            "properties": {
              "sku": { "type": "string" }
            },
            "required": ["sku"]
          },
          "handler": [
            {
              "remove": {
                "target": "items",
                "where": { "sku": "$.data.sku" }
              }
            }
          ]
        },

        "had_status_changed": {
          "schema": {
            "type": "object",
            "properties": {
              "status": { "type": "string", "enum": ["confirmed", "shipped", "delivered", "cancelled"] }
            },
            "required": ["status"]
          },
          "handler": [
            { "set": { "target": "status", "value": "$.data.status" } },
            {
              "if": { "equals": ["$.data.status", "shipped"] },
              "then": [
                { "set": { "target": "shipped_at", "value": "$.metadata.timestamp" } }
              ]
            },
            {
              "if": { "equals": ["$.data.status", "delivered"] },
              "then": [
                { "set": { "target": "delivered_at", "value": "$.metadata.timestamp" } }
              ]
            }
          ]
        },

        "had_note_added": {
          "schema": {
            "type": "object",
            "properties": {
              "text": { "type": "string" }
            },
            "required": ["text"]
          },
          "handler": [
            {
              "append": {
                "target": "notes",
                "value": {
                  "$merge": [
                    { "$": "$.data" },
                    { "added_at": { "$": "$.metadata.timestamp" } },
                    { "author": { "$": "$.metadata.actor.id" } }
                  ]
                }
              }
            }
          ]
        }
      }
    }
  }
}
```

After placing an order and adding a note, the aggregate state looks like:

```json
{
  "customer_id": "cust_001",
  "items": [
    { "sku": "WIDGET-A", "name": "Widget A", "price": 29.99, "quantity": 2 }
  ],
  "status": "pending",
  "placed_at": 1710000000,
  "notes": [
    { "text": "Rush delivery requested", "added_at": 1710000060, "author": "op_123" }
  ],
  "created_at": 1710000000,
  "updated_at": 1710000060
}
```
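To see how the handlers fold into that state, here is a simplified Python replay of the two events. The `set`, `append`, and `$merge` operations are reduced to plain dict updates; the timestamps at the end stand in for the engine's automatic injection:

```python
# Simplified replay of was_placed + had_note_added from the spec above.

def was_placed(state, data, ts):
    state.update(data)                      # set target "" ← $.data
    state["status"] = "pending"             # set status ← "pending"
    state["placed_at"] = ts                 # set placed_at ← $.metadata.timestamp
    return state

def had_note_added(state, data, ts, actor):
    note = {**data, "added_at": ts, "author": actor}   # the $merge template
    state.setdefault("notes", []).append(note)         # append to notes
    return state

state = was_placed({}, {"customer_id": "cust_001",
                        "items": [{"sku": "WIDGET-A", "name": "Widget A",
                                   "price": 29.99, "quantity": 2}]}, 1710000000)
state = had_note_added(state, {"text": "Rush delivery requested"},
                       1710000060, "op_123")
state["created_at"], state["updated_at"] = 1710000000, 1710000060  # engine-injected
print(state["status"], len(state["notes"]))  # → pending 1
```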

## Limitations

| Limitation | Workaround |
|------------|------------|
| No type coercion (`"123"` != `123`) | Ensure event data has correct types via schema |
| No arithmetic (`price * quantity`) | Use WASM handler |
| No string operations (concat, split) | Use WASM handler |
| No date/time math (add days) | Use WASM handler |
| No cross-aggregate reads | Use [implications](/docs/reference/implications-reference) |

## When to escape to WASM

Tick covers the vast majority of state-building patterns. Use [WASM handlers](/docs/reference/wasm) when you need:

- Arithmetic beyond increment/decrement
- String manipulation
- Date/time calculations
- Complex business logic
- Cryptographic operations

## Performance

Tick handlers are interpreted by the Zig engine and run in-process. No VM overhead, no garbage collection. Typical throughput: 300k+ operations/second per core.

## Validation

j17 validates handlers when you upload your spec:

- JSONPath expressions are syntactically valid
- Target paths can be created or already exist
- Required fields are present for each operation
- Predicate structures are well-formed

Invalid handlers are rejected with error details and hints.

## See also

- [JSONPath reference](/docs/reference/jsonpath) — Path syntax for accessing event and state data
- [WASM reference](/docs/reference/wasm) — Escape hatch for complex logic
- [Spec reference](/docs/reference/spec) — Full spec structure including handler placement
- [Implications reference](/docs/reference/implications-reference) — Cross-aggregate side effects

# JSON Schema

j17 validates event data against JSON Schema Draft 2020-12. Each event type in your spec can have a `schema` that validates the event's `data` field. Events with invalid data are rejected before the handler runs.

## Supported keywords

### Type validation

```json
{
  "type": "object"
}
```

Types: `object`, `array`, `string`, `number`, `integer`, `boolean`, `null`

Multi-type syntax is also supported:

```json
{
  "type": ["string", "null"]
}
```

### Enumerations

```json
{
  "enum": ["pending", "processing", "completed", "failed"]
}
```

Or a single fixed value:

```json
{
  "const": "active"
}
```

### Object properties

```json
{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "age": { "type": "integer" },
    "email": { "type": "string", "format": "email" }
  },
  "required": ["name", "email"],
  "additionalProperties": false
}
```

| Keyword | Description |
|---------|-------------|
| `properties` | Define fields and their schemas |
| `required` | Array of required field names |
| `additionalProperties` | `false` to reject unknown fields, or a schema to validate them against. Excludes keys matched by `properties` and `patternProperties` |
| `minProperties` | Minimum number of properties |
| `maxProperties` | Maximum number of properties |

### patternProperties

Validate properties whose names match a regex pattern:

```json
{
  "type": "object",
  "properties": {
    "name": { "type": "string" }
  },
  "patternProperties": {
    "^x-": { "type": "string" }
  },
  "additionalProperties": false
}
```

Properties matching `^x-` (like `x-custom`, `x-label`) are validated against the given schema. Properties matched by either `properties` or `patternProperties` are excluded from `additionalProperties` checks.

### propertyNames

Constrain the names of properties (not their values):

```json
{
  "type": "object",
  "propertyNames": {
    "pattern": "^[a-z_]+$",
    "maxLength": 30
  }
}
```

Every property name in the object must match the given schema. Useful for enforcing naming conventions.

### dependentRequired

If a property exists, other properties become required:

```json
{
  "type": "object",
  "properties": {
    "credit_card": { "type": "string" },
    "billing_address": { "type": "string" },
    "cvv": { "type": "string" }
  },
  "dependentRequired": {
    "credit_card": ["billing_address", "cvv"]
  }
}
```

If `credit_card` is present, `billing_address` and `cvv` must also be present.

### dependentSchemas

If a property exists, the entire object must also match an additional schema:

```json
{
  "type": "object",
  "properties": {
    "type": { "enum": ["personal", "business"] },
    "name": { "type": "string" }
  },
  "dependentSchemas": {
    "type": {
      "if": {
        "properties": { "type": { "const": "business" } }
      },
      "then": {
        "properties": {
          "tax_id": { "type": "string" }
        },
        "required": ["tax_id"]
      }
    }
  }
}
```

### String constraints

```json
{
  "type": "string",
  "minLength": 3,
  "maxLength": 100,
  "pattern": "^[a-zA-Z0-9]+$",
  "format": "email"
}
```

| Keyword | Description |
|---------|-------------|
| `minLength`/`maxLength` | Character count bounds |
| `pattern` | Regex pattern match (see [regex engine note](#regex-engine)) |
| `format` | Predefined format validation (see below) |

Supported formats:

- `email` — Email address
- `date-time` — ISO 8601 date-time
- `date` — YYYY-MM-DD
- `time` — HH:MM:SS
- `uri` — Absolute URI
- `uri-reference` — URI or relative reference
- `uuid` — RFC 4122 UUID (any version)
- `hostname` — Hostname (includes IDNA/punycode A-label validation per RFC 5891)
- `ipv4` / `ipv6` — IP addresses
- `regex` — ECMA-262 regular expression
- `json-pointer` — JSON Pointer (RFC 6901)

**Note:** `format` is treated as an assertion (validator), not an annotation. Invalid formats are rejected. See [compliance testing](#compliance-testing) for details.

### Numeric constraints

```json
{
  "type": "number",
  "minimum": 0,
  "maximum": 1000,
  "exclusiveMaximum": 1000,
  "multipleOf": 0.01
}
```

| Keyword | Description |
|---------|-------------|
| `minimum`/`maximum` | Inclusive bounds |
| `exclusiveMinimum`/`exclusiveMaximum` | Exclusive bounds |
| `multipleOf` | Number must be a multiple of this value |

### Array constraints

```json
{
  "type": "array",
  "items": { "type": "string" },
  "minItems": 1,
  "maxItems": 10,
  "uniqueItems": true
}
```

| Keyword | Description |
|---------|-------------|
| `items` | Schema applied to all array items |
| `prefixItems` | Schemas for specific array positions (tuple validation) |
| `minItems`/`maxItems` | Array length bounds |
| `uniqueItems` | All items must be distinct |
| `contains` | At least one item must match the given schema |
| `minContains` | Minimum number of items matching `contains` |
| `maxContains` | Maximum number of items matching `contains` |

### prefixItems (tuple validation)

Validate specific positions in an array:

```json
{
  "type": "array",
  "prefixItems": [
    { "type": "string" },
    { "type": "integer" },
    { "type": "boolean" }
  ],
  "items": false
}
```

With `"items": false`, the array may contain at most three elements, each validated against its positional schema; shorter arrays are still valid unless `minItems` requires more. Without it, items beyond the prefix are allowed and unconstrained (or validated by the `items` schema if one is given).
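As a rough model of these semantics, the following Python sketch checks only JSON types (the real validator covers all keywords):

```python
# Sketch of tuple validation with prefixItems and "items": false.

TYPES = {"string": str, "integer": int, "boolean": bool}

def check_tuple(arr, prefix, extra_items_allowed):
    for i, schema_type in enumerate(prefix):
        if i >= len(arr):
            break                               # shorter arrays are still valid
        v = arr[i]
        # bool is a subclass of int in Python, so exclude it for "integer"
        if schema_type == "integer" and isinstance(v, bool):
            return False
        if not isinstance(v, TYPES[schema_type]):
            return False
    if not extra_items_allowed and len(arr) > len(prefix):
        return False                            # "items": false rejects extras
    return True

prefix = ["string", "integer", "boolean"]
print(check_tuple(["a", 1, True], prefix, False))       # → True
print(check_tuple(["a", 1, True, "x"], prefix, False))  # → False
```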

### contains

Require that at least one array item matches:

```json
{
  "type": "array",
  "contains": { "type": "string", "minLength": 1 }
}
```

Combined with `minContains`/`maxContains` for range checks:

```json
{
  "type": "array",
  "contains": { "type": "integer", "minimum": 100 },
  "minContains": 2,
  "maxContains": 5
}
```

At least 2 and at most 5 items must be integers >= 100.
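The range check amounts to counting matching items, as this small Python sketch shows (the `contains` subschema is reduced to a predicate function here):

```python
# Sketch of contains with minContains/maxContains: count items matching
# the contains schema, then check the count falls in range.

def check_contains(arr, match, min_c=1, max_c=None):
    n = sum(1 for v in arr if match(v))
    if n < min_c:
        return False
    return max_c is None or n <= max_c

big_int = lambda v: isinstance(v, int) and not isinstance(v, bool) and v >= 100
print(check_contains([50, 120, 300, 7], big_int, min_c=2, max_c=5))  # → True
print(check_contains([50, 120], big_int, min_c=2, max_c=5))          # → False
```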

## Composition

### allOf

Must match all schemas:

```json
{
  "allOf": [
    { "type": "object" },
    {
      "properties": {
        "id": { "type": "string" }
      }
    },
    {
      "properties": {
        "created_at": { "type": "integer" }
      }
    }
  ]
}
```

### anyOf

Must match at least one:

```json
{
  "anyOf": [
    { "type": "string" },
    { "type": "integer" }
  ]
}
```

### oneOf

Must match exactly one:

```json
{
  "oneOf": [
    {
      "properties": { "type": { "const": "email" } },
      "required": ["email"]
    },
    {
      "properties": { "type": { "const": "sms" } },
      "required": ["phone"]
    }
  ]
}
```

### not

Must not match:

```json
{
  "not": { "type": "null" }
}
```

## Conditionals

### if/then/else

```json
{
  "properties": {
    "shipping_method": { "enum": ["standard", "express"] }
  },
  "if": {
    "properties": {
      "shipping_method": { "const": "express" }
    }
  },
  "then": {
    "properties": {
      "express_fee": { "type": "number", "minimum": 0 }
    },
    "required": ["express_fee"]
  }
}
```

If `shipping_method` is `express`, then `express_fee` is required.

## References

### $defs and $ref

Define reusable schemas:

```json
{
  "$defs": {
    "address": {
      "type": "object",
      "properties": {
        "street": { "type": "string" },
        "city": { "type": "string" },
        "zip": { "type": "string" }
      },
      "required": ["street", "city", "zip"]
    }
  },
  "type": "object",
  "properties": {
    "shipping_address": { "$ref": "#/$defs/address" },
    "billing_address": { "$ref": "#/$defs/address" }
  }
}
```

Only local `$ref` within `$defs` is supported. Remote URLs (`$ref: "https://..."`) are not supported.

## Common patterns

### Timestamp

```json
{
  "timestamp": {
    "type": "integer",
    "description": "Unix epoch seconds"
  }
}
```

### Money

```json
{
  "amount": {
    "type": "number",
    "minimum": 0,
    "multipleOf": 0.01,
    "description": "Amount in dollars"
  }
}
```

### UUID

```json
{
  "id": {
    "type": "string",
    "format": "uuid"
  }
}
```

### Email

```json
{
  "email": {
    "type": "string",
    "format": "email"
  }
}
```

### URL

```json
{
  "website": {
    "type": "string",
    "format": "uri"
  }
}
```

### Nested object

```json
{
  "profile": {
    "type": "object",
    "properties": {
      "name": { "type": "string" },
      "avatar": { "type": "string", "format": "uri" }
    },
    "required": ["name"]
  }
}
```

### Array of objects

```json
{
  "items": {
    "type": "array",
    "items": {
      "type": "object",
      "properties": {
        "sku": { "type": "string" },
        "quantity": { "type": "integer", "minimum": 1 }
      },
      "required": ["sku", "quantity"]
    }
  }
}
```

### Nullable field

```json
{
  "properties": {
    "middle_name": {
      "anyOf": [
        { "type": "string" },
        { "type": "null" }
      ]
    }
  }
}
```

Or using multi-type syntax:

```json
{
  "properties": {
    "middle_name": {
      "type": ["string", "null"]
    }
  }
}
```

### Conditional required fields

```json
{
  "type": "object",
  "properties": {
    "payment_type": { "type": "string", "enum": ["card", "bank", "crypto"] },
    "card_number": { "type": "string" },
    "bank_account": { "type": "string" },
    "wallet_address": { "type": "string" }
  },
  "required": ["payment_type"],
  "allOf": [
    {
      "if": { "properties": { "payment_type": { "const": "card" } } },
      "then": { "required": ["card_number"] }
    },
    {
      "if": { "properties": { "payment_type": { "const": "bank" } } },
      "then": { "required": ["bank_account"] }
    },
    {
      "if": { "properties": { "payment_type": { "const": "crypto" } } },
      "then": { "required": ["wallet_address"] }
    }
  ]
}
```

### Reusable definitions

```json
{
  "$defs": {
    "money": {
      "type": "object",
      "properties": {
        "amount": { "type": "integer", "minimum": 0 },
        "currency": { "type": "string", "pattern": "^[A-Z]{3}$" }
      },
      "required": ["amount", "currency"]
    }
  },
  "type": "object",
  "properties": {
    "subtotal": { "$ref": "#/$defs/money" },
    "tax": { "$ref": "#/$defs/money" },
    "total": { "$ref": "#/$defs/money" }
  },
  "required": ["subtotal", "total"]
}
```

## Regex engine

The `pattern` keyword uses the [mvzr](https://github.com/mnemnion/mvzr) regex engine (a Zig-native implementation). j17 carries a patched version that fixes a bug with `(group)*` zero-matches after greedy quantifiers. Track the upstream repository for fixes — if a new release includes the patch, the vendored copy can be updated.

mvzr supports standard regex syntax. If you encounter unexpected behavior with complex patterns, test against the mvzr engine directly or simplify the pattern.

## Unsupported keywords

Using these keywords will reject the spec at upload time:

| Keyword | Reason |
|---------|--------|
| `unevaluatedProperties` | Requires cross-subschema evaluation tracking |
| `unevaluatedItems` | Requires cross-subschema evaluation tracking |
| `$dynamicRef` / `$dynamicAnchor` | Dynamic reference resolution not supported |
| `$anchor` | Named schema anchors not supported (use `$defs` + `$ref` instead) |
| `$vocabulary` | Vocabulary declarations not supported |

Remote `$ref` (URLs like `$ref: "https://..."`) is also not supported. All schemas must be inline or defined in `$defs`.

### Annotation-only keywords

These keywords are accepted but have no validation effect (parsed and ignored):

| Keyword | Description |
|---------|-------------|
| `contentEncoding` | Content encoding hint (e.g., `base64`) |
| `contentMediaType` | Content media type hint (e.g., `application/json`) |
| `contentSchema` | Schema for decoded content |

Per the JSON Schema spec, these are annotations — they describe content but do not constrain it.

## Validation errors

When validation fails, the API returns a 400 error with details:

```json
{
  "ok": false,
  "error": {
    "code": "validation_failed",
    "message": "Event data failed schema validation",
    "details": {
      "field": "email",
      "constraint": "format",
      "expected": "email",
      "received": "not-an-email"
    }
  }
}
```

Common errors:

| Error | Fix |
|-------|-----|
| `required` | Add missing field |
| `type` | Wrong data type (string vs number) |
| `format` | Invalid format (malformed email) |
| `additionalProperties` | Remove unexpected field |
| `enum` | Value not in allowed list |

## Compliance testing

The implementation is tested against the [official JSON Schema Test Suite](https://github.com/json-schema-org/JSON-Schema-Test-Suite) for Draft 2020-12. The test harness embeds the official fixture files directly.

### Keyword tests

44 of the 46 required keyword test files from the suite's `tests/draft2020-12/` directory are run. Tests for unsupported keywords (`unevaluatedProperties`, `unevaluatedItems`, `$dynamicRef`, `$anchor`, `$vocabulary`) are skipped at parse time rather than failing.

Within test files that are run, individual test groups using unsupported features (e.g. remote `$ref`, `$dynamicRef`) are also skipped. The harness tracks these as "skipped" rather than "failed".

### Format tests

The official suite has two sets of format tests:

- **`tests/draft2020-12/format.json`** tests format as a pure annotation (the Draft 2020-12 default). Under annotation semantics, `"format": "uuid"` never rejects invalid values.

- **`tests/draft2020-12/optional/format/*.json`** tests format as a validator (assertion semantics). These expect invalid formats to be rejected.

j17 uses the `optional/format/` suite. Our implementation treats `format` as an assertion — invalid formats are rejected — which is the behavior users expect from a validation layer. This matches the `format-assertion` vocabulary from the spec and is explicitly supported by Draft 2020-12 as an implementation choice.

### Known limitations

One known test failure remains: the `vocabulary` test suite includes a test for custom metaschemas with no validation vocabulary, which requires remote `$schema` resolution (not supported).

All hostname format tests pass, including IDNA/punycode A-label validation (RFC 5891). The implementation includes a full punycode decoder (RFC 3492) and IDNA2008 codepoint classification with contextual rules.

## Best practices

**Be strict.** Use `additionalProperties: false` to catch typos.

**Add descriptions.** Help future you (and your team) understand the schema.

**Version carefully.** Add fields as optional first. Make required after confirming no old events lack them.

**Validate early.** Use the CLI to check schemas before deploying:

```bash
j17 spec validate spec.json
```

**Keep schemas small.** Large nested schemas are hard to reason about. Break into referenced components with `$defs`.

## Limitations summary

| Limitation | Impact |
|------------|--------|
| No remote `$ref` | All schemas must be inline or in `$defs` |
| No dynamic refs | `$dynamicRef`/`$dynamicAnchor` not supported |
| No `$anchor` | Named anchors not supported (use `$defs` + `$ref` instead) |
| No unevaluated* | Cannot catch unvalidated properties/items across composition |
| mvzr regex engine | `pattern` uses mvzr, not PCRE or ECMA-262 (see [regex engine note](#regex-engine)) |

For complex validation beyond JSON Schema capabilities, validate in your application before submitting events.

## Open source

j17's JSON Schema validation is powered by [zig-jsonschema](https://github.com/whiskeytuesday/zig-jsonschema), an open source Zig implementation of Draft 2020-12.

## See also

- [Spec reference](/docs/reference/spec) — Event schema definition
- [JSONPath reference](/docs/reference/jsonpath) — Accessing schema data in handlers

# Implications Reference

Complete reference for the implications system. See [Implications guide](/docs/guides/implications) for practical patterns.

## Overview

Implications are j17's reactive event system: when event A happens, automatically create event(s) B. All implied events are written atomically with the trigger event -- the trigger and its implications succeed or fail as a unit.

Implications are defined per event type in your spec:

```json
{
  "aggregate_types": {
    "order": {
      "events": {
        "was_placed": {
          "schema": { ... },
          "handler": [ ... ],
          "implications": [
            { ... },
            { ... }
          ]
        }
      }
    }
  }
}
```

There are seven implication types, from pure-data declarative to full custom code:

| Type | Key | Purpose |
|------|-----|---------|
| [tick](#tick) | `emit` | Declarative condition + emit |
| [map](#map) | `map` | Fan-out: one event per array item |
| [scheduled](#scheduled) | `schedule` | Delayed emission with cancel conditions |
| [pipeline](#pipeline) | `pipeline` | Chained steps |
| [wasm](#wasm) | `wasm` | WASM blob for complex logic |
| [binary_container](#binary-container) | `binary_container` | Pre-built executable image |
| [runtime_container](#runtime-container) | `runtime_container` | Code + lockfile (you write code, we build) |

## Tick

The most common type. Optionally test a condition, then emit an event to a target aggregate.

### Basic emit

```json
{
  "emit": {
    "aggregate_type": "notification",
    "id": "admin",
    "event_type": "was_queued",
    "data": {"message": "New order received"}
  }
}
```

| Field | Required | Description |
|-------|----------|-------------|
| `aggregate_type` | yes | Target aggregate type |
| `id` | yes | Target aggregate ID -- literal string or JSONPath expression |
| `event_type` | yes | Event type to emit on the target |
| `data` | no | Event data -- literals, JSONPath expressions, or template operators |

### Conditional emit

Add a `condition` to gate the implication on a predicate. Conditions use the same predicate syntax as tick handlers (`equals`, `not_equals`, `gt`, `gte`, `lt`, `lte`, `in`, `not_in`, `exists`, `and`, `or`, `not`).

```json
{
  "condition": {"equals": ["$.data.priority", "urgent"]},
  "emit": {
    "aggregate_type": "alert",
    "id": "ops-team",
    "event_type": "was_triggered",
    "data": {"source": "$.key"}
  }
}
```

This implication only fires when `$.data.priority` equals `"urgent"`.

### Dynamic target ID

Use a JSONPath expression for the `id` field to route implied events dynamically. Any string starting with `$` is treated as a path expression:

```json
{
  "emit": {
    "aggregate_type": "user_timeline",
    "id": "$.metadata.actor.id",
    "event_type": "had_activity_added",
    "data": {
      "source_key": "$.key",
      "source_type": "$.type"
    }
  }
}
```

### Accessing event data

All JSONPath expressions resolve against a context object containing:

| Path | Description |
|------|-------------|
| `$.key` | Trigger event key (e.g., `"order:abc123"`) |
| `$.type` | Trigger event type (e.g., `"was_placed"`) |
| `$.data.*` | Trigger event payload |
| `$.metadata.*` | Trigger event metadata (actor, timestamp, etc.) |
| `@.*` | Source aggregate's current state (see [State access semantics](#state-access-semantics)) |
| `$.state.*` | Deprecated alias for `@.*` (will be removed at or before 1.0) |

```json
{
  "condition": {"gte": ["$.data.amount", 100]},
  "emit": {
    "aggregate_type": "loyalty",
    "id": "$.metadata.actor.id",
    "event_type": "had_points_earned",
    "data": {
      "order_key": "$.key",
      "amount": "$.data.amount",
      "customer_tier": "@.tier"
    }
  }
}
```
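Conceptually, each value in the emit template resolves against either the trigger-event context (`$.`) or the source state (`@.`), with everything else passed through as a literal. A minimal Python sketch of that resolution (assuming all referenced paths exist):

```python
# Sketch of emit-template data resolution: "$." reads the trigger-event
# context, "@." reads the source aggregate's state, anything else is literal.

def resolve(value, ctx, state):
    if isinstance(value, str) and value.startswith("$."):
        node = ctx
        for part in value[2:].split("."):
            node = node[part]
        return node
    if isinstance(value, str) and value.startswith("@."):
        node = state
        for part in value[2:].split("."):
            node = node[part]
        return node
    return value

ctx = {"key": "order:abc123", "data": {"amount": 250},
       "metadata": {"actor": {"id": "user:456"}}}
state = {"tier": "gold"}
template = {"order_key": "$.key", "amount": "$.data.amount",
            "customer_tier": "@.tier"}
data = {k: resolve(v, ctx, state) for k, v in template.items()}
print(data)  # → {'order_key': 'order:abc123', 'amount': 250, 'customer_tier': 'gold'}
```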

## Map

Fan-out pattern: emit one event per item in an array. Useful for order line items, batch operations, and similar one-to-many patterns.

```json
{
  "map": {
    "in": "$.data.items",
    "as": "$item",
    "emit": {
      "aggregate_type": "inventory",
      "id": "$item.product_id",
      "event_type": "was_reserved",
      "data": {
        "quantity": "$item.qty",
        "order_id": "$.key"
      }
    }
  }
}
```

| Field | Required | Description |
|-------|----------|-------------|
| `in` | yes | JSONPath to the array to iterate |
| `as` | yes | Binding name for the current item (e.g., `$item`) |
| `emit` | yes | Emit template -- can use both event paths (`$.data.*`) and item binding (`$item.*`) |
| `condition` | no | Predicate to filter items before emitting |

### Map with condition filter

Only emit for items matching a condition:

```json
{
  "map": {
    "in": "$.data.items",
    "as": "$item",
    "condition": {"equals": ["$item.requires_shipping", true]},
    "emit": {
      "aggregate_type": "warehouse",
      "id": "$item.warehouse_id",
      "event_type": "had_pick_requested",
      "data": {
        "product_id": "$item.product_id",
        "qty": "$item.qty",
        "order_id": "$.key"
      }
    }
  }
}
```
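The fan-out with a filter reduces to a filtered comprehension: one implied event per item that passes the condition. A hedged Python sketch of that shape:

```python
# Sketch of map fan-out with a condition filter.

def map_implication(items, condition, make_event):
    return [make_event(item) for item in items if condition(item)]

items = [
    {"product_id": "p1", "qty": 2, "requires_shipping": True,  "warehouse_id": "w1"},
    {"product_id": "p2", "qty": 1, "requires_shipping": False, "warehouse_id": "w2"},
]
events = map_implication(
    items,
    condition=lambda it: it["requires_shipping"] is True,
    make_event=lambda it: {"aggregate_type": "warehouse",
                           "id": it["warehouse_id"],
                           "event_type": "had_pick_requested",
                           "data": {"product_id": it["product_id"],
                                    "qty": it["qty"]}},
)
print(len(events), events[0]["id"])  # → 1 w1
```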

### Object iteration

The `map` construct works with arrays only. To iterate over object keys, use `$entries` to convert an object to an array of `{key, value}` pairs:

```json
{
  "map": {
    "in": "$.data.role_changes.$entries",
    "as": "$entry",
    "emit": {
      "aggregate_type": "user",
      "id": "$entry.key",
      "event_type": "had_role_updated",
      "data": {"role": "$entry.value"}
    }
  }
}
```
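The `$entries` conversion itself is straightforward, as this Python sketch shows:

```python
# Sketch of the $entries conversion: an object becomes an array of
# {key, value} pairs that map can iterate.

def entries(obj):
    return [{"key": k, "value": v} for k, v in obj.items()]

role_changes = {"user:1": "admin", "user:2": "viewer"}
print(entries(role_changes))
# → [{'key': 'user:1', 'value': 'admin'}, {'key': 'user:2', 'value': 'viewer'}]
```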

## Scheduled

Delayed implications emit events at a future time, with optional cancel conditions. This is a saga-lite pattern for coarse-grained business logic delays.

```json
{
  "schedule": {
    "delay": "24h",
    "emit": {
      "aggregate_type": "notification",
      "id": "$.metadata.actor.id",
      "event_type": "cart_abandonment_reminder",
      "data": {"cart_id": "$.key"}
    },
    "cancel_on": [
      {
        "aggregate_type": "cart",
        "id": "$.key",
        "event_type": "was_checked_out"
      }
    ]
  }
}
```

| Field | Required | Description |
|-------|----------|-------------|
| `delay` | yes | Duration string before firing |
| `emit` | yes | Event to emit after delay |
| `cancel_on` | no | Array of event patterns that cancel this scheduled event if they occur before the delay expires |

### Delay format

The `delay` field accepts duration strings with a numeric value and unit suffix:

| Suffix | Unit | Example | Milliseconds |
|--------|------|---------|--------------|
| `s` | seconds | `"300s"` | 300,000 |
| `m` | minutes | `"30m"` | 1,800,000 |
| `h` | hours | `"24h"` | 86,400,000 |
| `d` | days | `"7d"` | 604,800,000 |

**Minimum delay is 5 minutes.** Any delay shorter than `"5m"` (300,000ms) is rejected at spec validation time.
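A minimal parser for this format, including the 5-minute floor, might look like the following Python sketch (assuming only the documented `s`/`m`/`h`/`d` suffixes):

```python
# Sketch of delay-string parsing with the 5-minute minimum.

UNITS_MS = {"s": 1_000, "m": 60_000, "h": 3_600_000, "d": 86_400_000}
MIN_DELAY_MS = 300_000  # "5m"

def parse_delay(s):
    value, unit = int(s[:-1]), s[-1]
    if unit not in UNITS_MS:
        raise ValueError(f"unknown unit: {unit}")
    ms = value * UNITS_MS[unit]
    if ms < MIN_DELAY_MS:
        raise ValueError("delay below 5-minute minimum")
    return ms

print(parse_delay("24h"))  # → 86400000
```

Note that `"300s"` parses to exactly 300,000ms and is accepted; anything shorter is rejected.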

### Cancel conditions

Each cancel condition matches on a specific event pattern. If any matching event occurs during the delay window, the scheduled event is cancelled.

```json
{
  "cancel_on": [
    {
      "aggregate_type": "order",
      "id": "$.key",
      "event_type": "was_completed"
    },
    {
      "aggregate_type": "order",
      "id": "$.key",
      "event_type": "was_cancelled"
    }
  ]
}
```

| Field | Required | Description |
|-------|----------|-------------|
| `aggregate_type` | yes | Aggregate type to watch |
| `id` | yes | Aggregate ID -- JSONPath expression resolved at schedule time |
| `event_type` | yes | Event type that triggers cancellation |

## Pipeline

Chain multiple steps where transform steps enrich context for downstream emit steps. Pipelines can mix any implication type as steps.

```json
{
  "pipeline": [
    {"wasm": {"blob_name": "enrich.wasm", "mode": "transform"}},
    {
      "emit": {
        "aggregate_type": "notification",
        "id": "$.enriched.recipient",
        "event_type": "was_queued",
        "data": "$.enriched.payload"
      }
    }
  ]
}
```

Pipeline steps are executed in order. A step in `transform` mode returns enriched context that becomes available to subsequent steps. A step in `emit` mode produces implied events.

Valid step types: `tick`, `map`, `wasm`, `binary_container`, `runtime_container`, `scheduled`.

## Wasm

For logic that exceeds what declarative tick can express, use a WASM blob. The blob receives the trigger event and context as JSON, and returns an array of events to emit.

### Short form

```json
{
  "wasm": "order-notifier.wasm"
}
```

### Long form

```json
{
  "wasm": {
    "blob_name": "order-processor.wasm",
    "entrypoint": "compute_implications",
    "mode": "emit"
  }
}
```

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| `blob_name` | yes | -- | Name of the WASM blob (uploaded via admin API) |
| `entrypoint` | no | `"compute_implications"` | Exported function name |
| `mode` | no | `"emit"` | `"emit"` returns events; `"transform"` returns enriched context (for pipelines) |

The WASM function receives JSON input:

```json
{
  "event": {
    "key": "order:abc123",
    "type": "was_placed",
    "data": { ... },
    "metadata": { ... },
    "state": { ... }
  }
}
```

And returns an array of events to emit (in `emit` mode), or an enriched context object (in `transform` mode).

## Binary Container

Ship a pre-built executable image. The container follows the same JSON contract as WASM -- it receives the event JSON on stdin and writes events to stdout -- and runs sandboxed.

```json
{
  "binary_container": {
    "image": "myregistry/order-processor:v2",
    "mode": "emit"
  }
}
```

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| `image` | yes | -- | Container image reference |
| `mode` | no | `"emit"` | `"emit"` or `"transform"` |

## Runtime Container

Ship your code and lockfile; j17 builds and runs the container. Supported runtimes: `node`, `elixir`, `ruby`, `python`.

```json
{
  "runtime_container": {
    "runtime": "node",
    "entrypoint": "implications/order.js",
    "mode": "emit"
  }
}
```

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| `runtime` | yes | -- | One of: `node`, `elixir`, `ruby`, `python` |
| `entrypoint` | yes | -- | Path to your handler file |
| `mode` | no | `"emit"` | `"emit"` or `"transform"` |

## Data Template Operators

The `data` field in emit templates supports three DSL operators for constructing complex event payloads. Operators are recognized as single-key objects where the key is the operator name.

### `concat` -- string concatenation

Concatenates an array of values into a single string. JSONPath expressions are resolved first. Numbers and booleans are coerced to strings.

```json
{
  "data": {
    "message": {"concat": ["Order ", "$.key", " was placed by ", "$.metadata.actor.id"]}
  }
}
```

Result, assuming key `order:abc123` and actor id `user:456`: `"Order order:abc123 was placed by user:456"`

### `coalesce` -- first non-null value

Returns the first resolved value that is not null. Useful for fallback chains.

```json
{
  "data": {
    "display_name": {"coalesce": ["$.data.display_name", "$.data.name", "Anonymous"]}
  }
}
```

If `$.data.display_name` is null or missing, falls back to `$.data.name`, then to `"Anonymous"`.

### `merge` -- shallow object merge

Merges an array of objects. Later values override earlier ones. Non-object values are skipped.

```json
{
  "data": {"merge": [
    "@.defaults",
    {
      "updated_by": "$.metadata.actor.id",
      "timestamp": "$.metadata.timestamp"
    }
  ]}
}
```

### Nesting operators

Operators can be nested inside each other:

```json
{
  "data": {"merge": [
    {"message": {"concat": ["Order ", "$.data.order_id", " processed"]}},
    {"name": {"coalesce": ["$.data.display_name", "$.data.email"]}},
    {"status": "pending"}
  ]}
}
```
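The three operators can be modeled as one recursive resolver. A sketch under stated assumptions: `resolve_path` is a hypothetical lookup for JSONPath/state strings, and the `str()` coercion for `concat` is approximate (j17's exact number/boolean formatting may differ).

```python
OPS = {"concat", "coalesce", "merge"}

def resolve(value, resolve_path):
    """Resolve a data-template value: single-key operator objects
    recurse, path strings go to `resolve_path`, literals pass through."""
    if isinstance(value, dict) and len(value) == 1:
        (op, args), = value.items()
        if op == "concat":
            # coerce each resolved piece to a string and join
            return "".join(str(resolve(a, resolve_path)) for a in args)
        if op == "coalesce":
            for a in args:  # first non-null resolved value wins
                r = resolve(a, resolve_path)
                if r is not None:
                    return r
            return None
        if op == "merge":
            out = {}
            for a in args:  # shallow merge; later values override
                r = resolve(a, resolve_path)
                if isinstance(r, dict):
                    out.update(r)   # non-objects silently skipped
            return out
    if isinstance(value, dict):
        return {k: resolve(v, resolve_path) for k, v in value.items()}
    if isinstance(value, str) and value.startswith(("$.", "@.")):
        return resolve_path(value)
    return value
```

Because resolution recurses before each operator consumes its arguments, nesting (as in the example above) falls out for free.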

## State Access Semantics

### Pre-batch state (S0)

When accessing `@.*` in implications, you get the aggregate's state **before any events in the current request** are applied. This is called S0 (state zero).

**Single event write:**
- You submit event A
- Implications for A see S0 (state before A)
- After commit: state is S0 + A = S1

**Batch event write:**
- You submit events A, B, C in one request
- Implications for A see S0
- Implications for B see S0 (not S0 + A)
- Implications for C see S0 (not S0 + A + B)
- After commit: state is S0 + A + B + C = S3

### Why S0?

For any single trigger event, S0 plus that event's data carries the same information as the intermediate state would. The trigger event data is available via `$.data.*`, so implications can reference both the prior state and the event payload without ambiguity.

All implications in a batch see the same state -- no coupling between sibling events, and no ordering dependencies within a batch.

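The batch write sequence can be sketched as follows. `apply` and `implications_for` are hypothetical stand-ins for the engine's event fold and implication evaluation:

```python
def write_batch(s0, events, apply, implications_for):
    """Evaluate implications for every event against the same S0
    snapshot, then fold all events into the committed state.
    `apply(state, event)` folds one event; `implications_for(event,
    state)` returns that event's implied events."""
    implied = []
    for event in events:
        implied.extend(implications_for(event, s0))  # every sibling sees S0
    state = s0
    for event in events:
        state = apply(state, event)                  # S0 + A + B + ...
    return state, implied
```

The sketch makes the tradeoff concrete: implications never observe a sibling's changes, so sibling order can't matter.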
### Best practices

Reference stable fields from state and changed fields from the event:

```json
{
  "emit": {
    "aggregate_type": "audit",
    "id": "log",
    "event_type": "was_recorded",
    "data": {
      "customer_name": "@.customer_name",
      "new_email": "$.data.email"
    }
  }
}
```

Avoid designing implications that expect to see state changes from sibling events in the same batch. If event A sets a field and event B's implication needs that field, either include the value in event B's data, use a single combined event, or make event B a separate request after A completes.

## Audit Trail

All implied events automatically include `implied_by` metadata for traceability:

```json
{
  "implied_by": {
    "key": "order:abc123",
    "event_type": "was_placed",
    "depth": 1
  }
}
```

| Field | Description |
|-------|-------------|
| `key` | Key of the event that triggered this implication |
| `event_type` | Type of the trigger event |
| `depth` | How many levels deep in the implication chain (1 = direct implication, 2 = implied by an implied event, etc.) |

## Safety Limits

Implications have built-in protection against runaway chains:

| Limit | Default | Description |
|-------|---------|-------------|
| `max_depth` | 5 | Maximum implication chain depth (A implies B implies C implies D implies E) |
| `max_total` | 100 | Maximum total implied events from a single trigger event |

Exceeding either limit returns an error and the entire transaction (trigger event plus all implied events) is rejected. Nothing is written.
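The two limits interact during chain expansion roughly as sketched below. `expand` is a hypothetical function returning an event's direct implications, and the exact depth accounting here is my reading of the table above:

```python
def run_implications(trigger, expand, max_depth=5, max_total=100):
    """Expand the implication chain level by level, enforcing both
    limits. Raising here models j17 rejecting the whole transaction."""
    implied, frontier, depth = [], [trigger], 0
    while frontier:
        depth += 1
        frontier = [child for event in frontier for child in expand(event)]
        if frontier and depth > max_depth:
            raise ValueError("max_depth exceeded")
        implied.extend(frontier)
        if len(implied) > max_total:
            raise ValueError("max_total exceeded")
    return implied
```

Either error leaves nothing written: the trigger event and all implied events are rejected together.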

### Cycle detection

The engine detects static cycles at spec validation time. A spec where aggregate type A's event implies an event on aggregate type B, which in turn implies the same event back on A, will be rejected when the spec is submitted.

For example, if `order:was_placed` implies `inventory:was_reserved`, and `inventory:was_reserved` implies `order:was_placed`, the spec will fail validation with a cycle error.
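Static cycle detection over the spec's implication graph amounts to a standard depth-first search for back edges. A sketch, assuming the graph is represented as a map from `(aggregate_type, event_type)` nodes to the nodes their implications emit:

```python
def find_cycle(edges):
    """Return a node on a cycle in the implication graph, or None.
    `edges` maps (aggregate_type, event_type) to implied nodes."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / in progress / done
    color = {}

    def visit(node):
        color[node] = GRAY
        for nxt in edges.get(node, []):
            c = color.get(nxt, WHITE)
            if c == GRAY:            # back edge: nxt is on the DFS stack
                return nxt
            if c == WHITE:
                found = visit(nxt)
                if found is not None:
                    return found
        color[node] = BLACK
        return None

    for node in list(edges):
        if color.get(node, WHITE) == WHITE:
            found = visit(node)
            if found is not None:
                return found
    return None
```

Running this on the `order`/`inventory` example above reports the cycle, which is exactly when spec validation fails.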

## API Response

When implications fire, the write API response includes the count of implied events:

```json
{
  "stream_id": "1706789012345-0",
  "implied_count": 3
}
```

## Limitations

1. **Atomic only.** Tick, map, and pipeline implications are written atomically with the trigger event. There is no async retry for these types. Scheduled implications are the mechanism for deferred execution.

2. **No external calls.** All implication types that run in the write path (tick, map, wasm, containers) are pure data transformations. No network calls, no I/O. If you need external data, fetch it before writing the trigger event and include it in the event data.

3. **Arrays only for map.** The `map` construct iterates arrays. Use `$entries` to convert objects to `[{key, value}, ...]` arrays before mapping.

4. **Depth limits compound with fan-out.** If a map emits 10 events and each of those has implications that emit 10 more, you hit 100 events at depth 2. Design fan-out chains carefully against the `max_total` limit.

## See Also

- [Implications guide](/docs/guides/implications) -- practical patterns and recipes
- [Handlers reference](/docs/reference/handlers-reference) -- tick predicate syntax (shared with implication conditions)
- [WASM reference](/docs/reference/wasm) -- custom logic via WebAssembly

# JSONPath

j17 uses a subset of JSONPath for accessing values in events, state, and bindings. This document covers the syntax, resolution contexts, and extensions available in handlers and implications.

## Basic syntax

All paths start with `$` representing the root of the document being resolved against.

### Field access

```jsonpath
$.name                   // Top-level field
$.data.profile.name      // Nested field access
```

Given this JSON:

```json
{"name": "Alice", "profile": {"email": "alice@example.com"}}
```

| Path | Result |
|------|--------|
| `$` | Entire object |
| `$.name` | `"Alice"` |
| `$.profile.email` | `"alice@example.com"` |
| `$.missing` | `null` (not found) |

### Array indexing

```jsonpath
$.items[0]               // First element
$.items[2]               // Third element
$.items[-1]              // Last element
$.items[-2]              // Second to last
```

Given `{"items": ["a", "b", "c", "d"]}`:

| Path | Result |
|------|--------|
| `$.items[0]` | `"a"` |
| `$.items[-1]` | `"d"` |
| `$.items[-2]` | `"c"` |
| `$.items[99]` | `null` (out of bounds) |

### Mixed access

Dot notation and array indexing can be combined freely:

```jsonpath
$.data.items[0].name
$.users[-1].profile.email
$.matrix[0][1]
```

## Resolution contexts

JSONPath resolves against different data depending on where it appears.

### Common event paths

These paths are available in **all** contexts (handlers, implications, sagas):

| Path | Description | Example value |
|------|-------------|---------------|
| `$.key` | Aggregate key (type:id) | `"user:a1b2c3d4-..."` |
| `$.id` | Aggregate ID (the ID portion after the colon — typically a UUID, [humane code](/docs/concepts/aggregates), or `global`) | `"a1b2c3d4-..."` |
| `$.type` | Event type | `"was_created"` |
| `$.data.*` | Event data payload | `$.data.user_id` |
| `$.metadata.*` | Event metadata | `$.metadata.timestamp` |
| `$.metadata.actor.*` | Who performed the action | `$.metadata.actor.id` |
| `@.*` | Aggregate state | `@.status` |

### In handlers

In Tick handlers, `$` resolves against the **event** and `@` resolves against the **current aggregate state**:

| Path prefix | Resolves against |
|-------------|------------------|
| `$.*` | Event |
| `@.*` | Current aggregate state (mutates as operations run) |
| `@` | Entire state object |

- **`target` fields** use state paths with no prefix (`"profile.name"`) -- they name where to write in state, while the `$.*` and `@.*` paths above name what to read

```json
{
  "if": { "equals": ["@.status", "pending"] },
  "then": [
    {"set": {"target": "status", "value": "$.data.new_status"}}
  ]
}
```

Here `@.status` reads from the current aggregate state, `$.data.new_status` reads from the event, and `"status"` in the target writes to state.

Note: `@` in handlers reflects the *accumulating* state — if an earlier operation in the same handler changes `status`, a later `@.status` sees the updated value.

### In implications

In implications, `@` resolves against the **source aggregate state** (S0 snapshot — the state at the time the event was written, before implications run):

| Path prefix | Resolves against |
|-------------|------------------|
| `$.*` | Trigger event |
| `@.*` | Source aggregate state (S0 snapshot) |

`$.state.*` is a deprecated alias for `@.*`. It will be removed at or before 1.0.

```json
{
  "condition": {"equals": ["@.notifications_enabled", true]},
  "emit": {
    "aggregate_type": "notification",
    "id": "$.metadata.actor.id",
    "event_type": "was_sent",
    "data": {"placed_by": "@.user_name"}
  }
}
```

### In sagas

Sagas have the same event paths plus additional context from the saga workflow:

| Path | Description |
|------|-------------|
| `$prev.type` | Previous step's response event type |
| `$prev.data.*` | Previous step's response data |
| `$context.<step>.*` | Earlier step's result by name |
| `$error.message` | Error message (in `on_failed`) |
| `$error.step` | Failed step name (in `on_failed`) |

`@.*` in sagas resolves against the trigger aggregate's state **at saga creation time** — a frozen snapshot that doesn't change as the saga progresses.

## State paths vs JSONPath

Handlers use two kinds of paths:

| Type | Prefix | Used in | Resolves against |
|------|--------|---------|------------------|
| **State path** | None | `target` fields | Aggregate state |
| **JSONPath** | `$` | `value` fields | Event data |

```json
{"set": {"target": "profile.name", "value": "$.data.name"}}
```

- `profile.name` — state path (where to write in the aggregate)
- `$.data.name` — JSONPath (what value to read from the event)

State paths support nested access with dots (`profile.address.zip`) and are used with `set`, `merge`, `append`, `remove`, `increment`, `decrement`, and `filter` targets.

## Optional paths

Append `?` to any JSONPath to make it optional. When the path does not exist, the operation becomes a no-op instead of failing:

```json
{"set": {"target": "memo", "value": "$.data.memo?"}}
```

| Condition | Behavior |
|-----------|----------|
| Field missing | No-op (field not added to state) |
| Field is `null` | Sets to `null` |
| Field present | Sets value normally |

Optional paths work with bindings too:

```json
{"set": {"target": "note", "value": "$found.note?"}}
```

This is particularly useful for events where some fields are only sometimes present, avoiding the need for conditional wrappers.
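The missing-vs-null distinction in the table above can be sketched with a sentinel value (the `MISSING` marker is hypothetical; the engine's internal representation may differ):

```python
MISSING = object()  # sentinel: the optional path did not exist

def set_op(state, target, value):
    """Apply a `set` operation: no-op when an optional path resolved
    to MISSING, otherwise write the value -- including an explicit
    null (None), which is set normally."""
    if value is MISSING:
        return state
    return {**state, target: value}
```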

## Variable bindings

Operations that iterate or look up values create named bindings accessible via `$name` syntax.

### `$item` — iteration binding

Created by `map`, `every`, `some`, and `filter` operations. Defaults to `$item` but can be renamed with the `as` field:

```json
{
  "every": {
    "in": "items",
    "match": {"equals": ["$item.active", true]}
  }
}
```

```json
{
  "map": {
    "target": "items",
    "as": "$entry",
    "apply": [
      {"set": {"target": "updated_at", "value": "$.metadata.timestamp"}}
    ]
  }
}
```

### `$found` — lookup binding

Created by `let` operations that find an item in a state array:

```json
{
  "let": {
    "name": "$address",
    "find_target": "addresses",
    "find_field": "id",
    "find_value": "$.data.address_id"
  }
}
```

After this, `$address.street` resolves to the `street` field of the found address. Bindings created by `let` are available to all subsequent operations in the same handler.

### Binding sub-paths

Bindings work like JSONPath against the bound value:

```json
"$item.name"          // field access on bound value
"$found.profile.email" // nested field access
"$item.tags[0]"       // array index on bound value
```

Append `?` to a binding sub-path to make it optional when the field may not exist:

```json
"$item.optional_field?"
```

## `$entries` — object to array

Append `.$entries` to a path to convert an object into an array of `{key, value}` pairs:

```jsonpath
$.data.settings.$entries
```

Given `{"settings": {"theme": "dark", "lang": "en"}}`, this produces:

```json
[{"key": "theme", "value": "dark"}, {"key": "lang", "value": "en"}]
```

Use with `map` in implications to iterate over object keys:

```json
{
  "map": {
    "in": "$.data.config.$entries",
    "as": "$entry",
    "emit": {
      "aggregate_type": "audit",
      "id": "$entry.key",
      "event_type": "config_changed",
      "data": {"key": "$entry.key", "value": "$entry.value"}
    }
  }
}
```

`$entries` returns `null` if the value at the base path is not an object.
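`$entries` is a plain object-to-pairs conversion, sketched here for reference:

```python
def entries(value):
    """Convert an object into [{key, value}, ...] pairs, preserving
    insertion order. Non-objects yield None, matching `$entries`
    returning null for non-object values."""
    if not isinstance(value, dict):
        return None
    return [{"key": k, "value": v} for k, v in value.items()]
```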

## `$merge` — object composition

Combine multiple objects into one using `$merge`. Objects are merged left to right, with later values overriding earlier ones:

```json
{
  "set": {
    "target": "result",
    "value": {"$merge": ["$found", {"updated": true}, "$.data.extra"]}
  }
}
```

Each item in the `$merge` array can be:
- A JSONPath resolving to an object (`"$.data.profile"`)
- A binding resolving to an object (`"$found"`)
- A literal object (`{"status": "active"}`)

Non-object items and optional-missing items are silently skipped.

## Predicate paths

Predicates in `if`/`then`/`else` conditionals and array operations use JSONPath for comparisons:

```json
{
  "if": {"equals": ["$.data.status", "active"]},
  "then": [
    {"set": {"target": "status", "value": "$.data.status"}}
  ]
}
```

Array predicates (`every`, `some`) resolve their `in` field against state for simple paths and against the event for JSONPaths:

```json
{
  "every": {
    "in": "items",
    "match": {"equals": ["$item.done", true]}
  }
}
```

Here `"items"` resolves against the current aggregate state (no `$` prefix), while `"$item.done"` accesses each array element via the iteration binding.

## Practical examples

### Set from event data

```json
{"set": {"target": "name", "value": "$.data.name"}}
{"set": {"target": "created_at", "value": "$.metadata.timestamp"}}
{"set": {"target": "created_by", "value": "$.metadata.actor.id"}}
```

### Deep property access

```json
{"set": {"target": "shipping_zip", "value": "$.data.shipping.address.zip_code"}}
```

### Array element access

```json
{"set": {"target": "first_item", "value": "$.data.items[0].name"}}
{"set": {"target": "last_item", "value": "$.data.items[-1].name"}}
```

### Optional fields

```json
{"set": {"target": "note", "value": "$.data.note?"}}
{"set": {"target": "priority", "value": "$.data.priority?"}}
```

### Using bindings with `$merge`

```json
[
  {"let": {"name": "$found", "find_target": "items", "find_field": "id", "find_value": "$.data.item_id"}},
  {"set": {"target": "selected", "value": {"$merge": ["$found", {"flag": true}]}}}
]
```

### Implication routing

```json
{
  "emit": {
    "aggregate_type": "user",
    "id": "$.metadata.actor.id",
    "event_type": "had_order_placed",
    "data": {"order_key": "$.key", "customer": "@.customer_name"}
  }
}
```

### Iterating object keys

```json
{
  "map": {
    "in": "$.data.settings.$entries",
    "as": "$setting",
    "emit": {
      "aggregate_type": "audit",
      "id": "$setting.key",
      "event_type": "setting_changed",
      "data": {"value": "$setting.value"}
    }
  }
}
```

## Not supported

j17 uses a focused subset of JSONPath. The following standard JSONPath features are not implemented:

| Feature | Syntax | Alternative |
|---------|--------|-------------|
| Wildcard | `$.*`, `$.items[*]` | Use explicit paths or `map` |
| Recursive descent | `$..name` | Use explicit nested paths |
| Filter expressions | `$.items[?(@.active)]` | Use `filter` operation with predicates |
| Slice | `$.items[0:3]` | Use individual indices |
| Union | `$.items[0,2]` | Use individual indices |
| Script expressions | `$.items[(@.length-1)]` | Use `[-1]` for last element |
| Bracket notation for fields | `$.data["name"]` | Use dot notation `$.data.name` |

For complex data transformation, use Tick operations (`filter`, `map`, `every`, `some`) or [WASM handlers](/docs/reference/wasm).

## Quick reference

### Event paths (all contexts)

| Syntax | Description |
|--------|-------------|
| `$.key` | Aggregate key (`user:abc123`) |
| `$.id` | Aggregate ID (`abc123`) |
| `$.type` | Event type |
| `$.data.*` | Event data |
| `$.metadata.*` | Event metadata (actor, timestamp, target) |
| `@.*` | Aggregate state |
| `$.path?` | Optional (no-op if missing) |

### Handler-specific

| Syntax | Description |
|--------|-------------|
| `"field"` (no prefix) | State path (handler targets, predicate `in`) |
| `$binding.field` | Variable binding (after `let`, in `map`/`every`/`some`) |
| `$.path.$entries` | Object to `{key, value}` array |
| `{"$merge": [...]}` | Shallow object merge |

### Saga-specific

| Syntax | Description |
|--------|-------------|
| `$prev.type` | Previous step's response type |
| `$prev.data.*` | Previous step's response data |
| `$context.<step>.*` | Step result by name |
| `$error.message` | Error message (in `on_failed`) |
| `$error.step` | Failed step name (in `on_failed`) |

### Deprecated

| Syntax | Use instead |
|--------|-------------|
| `$.state.*` | `@.*` |

## See also

- [Tick reference](/docs/reference/tick) — using JSONPath in handlers
- [Implications guide](/docs/guides/implications) — using JSONPath in implications
- [Sagas guide](/docs/guides/sagas) — saga-specific templates

# WASM Handlers

Tick covers 80% of use cases. For the rest -- arithmetic, string manipulation, complex business rules -- you can escape to WebAssembly for full programmatic control.

## When to use WASM

**Use Tick when you can.** Declarative handlers are simpler to reason about, easier to audit, and have no build step.

**Escape to WASM when you need:**

- Arithmetic calculations (`price * quantity * (1 - discount)`)
- String manipulation (formatting, parsing, concatenation)
- Date/time logic (add days, calculate durations)
- Complex business rules (multi-condition logic trees)
- Loops with complex termination conditions
- Data transformations Tick can't express

## What WASM can and cannot do

**Can do:**
- Any pure computation (math, string ops, data transforms)
- Read event data and current aggregate state
- Return new state (handlers) or emit events (implications)
- Complex conditional logic

**Cannot do:**
- Network calls or I/O
- Access filesystem
- Call external services
- Persist data outside the aggregate

WASM handlers are pure functions. No side effects, no external access.

## The contracts

WASM is used in two places: **handlers** (computing aggregate state) and **implications** (reactive event creation). Each has a different contract.

### Handler contract

Handlers receive the current aggregate state and the event, and return the new state.

**Input:**
```json
{
  "state": { "items": [], "total": 0 },
  "event": {
    "key": "order:abc123",
    "type": "had_item_added",
    "data": { "sku": "WIDGET-1", "price": 9.99, "quantity": 2 },
    "metadata": { "actor": { "type": "user", "id": "u001" } }
  }
}
```

**Output (success):**
```json
{
  "state": {
    "items": [{ "sku": "WIDGET-1", "price": 9.99, "quantity": 2, "line_total": 19.98 }],
    "total": 19.98
  }
}
```

The returned `state` replaces the aggregate's state entirely. This is the key difference from implications -- handlers transform state, implications emit new events.

**Output (error):**
```json
{
  "error": {
    "type": "validation_failed",
    "message": "Quantity must be positive",
    "code": "INVALID_QUANTITY",
    "context": { "field": "quantity", "value": -5 }
  }
}
```

Error fields:
- `type` (required) -- Error category like `"validation_failed"`, `"business_rule"`, `"precondition"`
- `message` (required) -- Human-readable description
- `code` (optional) -- Application-specific code for programmatic handling
- `context` (optional) -- Structured data about the error

Handler errors are returned to the client with a 422 status.

### Implication contract

Implications receive the triggering event and context, and return actions (events to emit).

**Input:**
```json
{
  "event": {
    "key": "order:abc123",
    "type": "was_placed",
    "data": { "items": [{ "product_id": "p-123", "quantity": 2 }] },
    "metadata": { "actor": { "type": "user", "id": "u001" } }
  },
  "context": {}
}
```

**Output:**
```json
{
  "actions": [
    {
      "emit": {
        "aggregate_type": "inventory",
        "id": "p-123",
        "event_type": "was_reserved",
        "data": { "quantity": 2, "order_id": "abc123" }
      }
    }
  ]
}
```

Each action's `emit` object requires:
- `key` (as `"type:id"`) **or** both `aggregate_type` and `id`
- `event_type` -- The event type to create
- `data` -- Event payload (optional, defaults to `{}`)

Implications can also return errors using the same error format as handlers.

## Spec configuration

### Handler

Short form -- blob name only, uses default entrypoint `apply_handler`:

```json
{
  "aggregate_types": {
    "order": {
      "events": {
        "had_item_added": {
          "schema": { "..." : "..." },
          "handler": { "wasm": "order-handler.wasm" }
        }
      }
    }
  }
}
```

Long form -- custom entrypoint and timeout:

```json
{
  "handler": {
    "wasm": {
      "blob_name": "order-handler.wasm",
      "entrypoint": "handle_item_added",
      "timeout_ms": 100
    }
  }
}
```

Handler config fields:
- `blob_name` (required) -- Name of the uploaded WASM blob
- `entrypoint` (optional) -- Function to call, defaults to `"apply_handler"`
- `timeout_ms` (optional) -- Execution timeout, defaults to `100`

### Implication

Short form:

```json
{
  "implications": [
    { "wasm": "order-notifier.wasm" }
  ]
}
```

Long form:

```json
{
  "implications": [
    {
      "wasm": {
        "blob_name": "order-enricher.wasm",
        "entrypoint": "compute_notifications",
        "mode": "emit"
      }
    }
  ]
}
```

Implication config fields:
- `blob_name` (required) -- Name of the uploaded WASM blob
- `entrypoint` (optional) -- Function to call, defaults to `"compute_implications"`
- `mode` (optional) -- `"emit"` (return events) or `"transform"` (enrich context for pipeline chaining), defaults to `"emit"`

WASM implications can be used in [pipelines](/docs/reference/implications-reference) alongside tick implications:

```json
{
  "implications": [
    {
      "pipeline": [
        { "wasm": { "blob_name": "enrich.wasm", "mode": "transform" } },
        {
          "emit": {
            "aggregate_type": "notification",
            "id": "$.metadata.actor",
            "event_type": "was_queued",
            "data": {}
          }
        }
      ]
    }
  ]
}
```

## Memory interface

j17 uses a simple memory protocol for passing data between the host and your WASM module. Your module exports the following functions (`free` is optional):

| Export | Signature | Purpose |
|--------|-----------|---------|
| `malloc` | `(size: i32) -> i32` | Allocate a buffer, return pointer |
| `free` | `(ptr: i32)` | Free a buffer (optional, not called by j17) |
| `apply_handler` | `(ptr: i32, len: i32) -> i64` | Process input, return packed result |

The entrypoint (default `apply_handler`) returns a packed i64: `(output_ptr << 32) | output_len`.

Data flow:
1. j17 calls `malloc(input_size)` to allocate a buffer in WASM memory
2. j17 copies the input JSON into that buffer
3. j17 calls `apply_handler(ptr, len)` with the buffer location
4. Your function reads the input, computes the result, allocates an output buffer, and returns the packed pointer+length
5. j17 reads the output JSON from WASM memory
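The packed return value is plain bit arithmetic, shown here from the host's perspective:

```python
def pack(ptr, length):
    """Pack an output pointer and length into one i64: (ptr << 32) | len."""
    return (ptr << 32) | length

def unpack(packed):
    """Recover (ptr, len) from the i64 the entrypoint returned --
    this is what the host does in step 5 before reading WASM memory."""
    return (packed >> 32) & 0xFFFFFFFF, packed & 0xFFFFFFFF
```

A common failure mode is returning just the pointer or just the length; round-tripping through `pack`/`unpack` shows why both halves are needed.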

## Building WASM

Any language that compiles to WASM works. The module must export `malloc` and `apply_handler` (or your custom entrypoint name).

### AssemblyScript

TypeScript-like syntax that compiles directly to WASM with the `asc` compiler.

```typescript
import { JSON } from "assemblyscript-json";

export function malloc(size: i32): i32 {
  return heap.alloc(size) as i32;
}

export function free(ptr: i32): void {
  heap.free(ptr);
}

export function apply_handler(ptr: i32, len: i32): i64 {
  // Read input JSON from memory
  const input = String.UTF8.decodeUnsafe(ptr, len);
  const json = <JSON.Obj>JSON.parse(input);

  const event = json.getObj("event")!;
  const state = json.getObj("state")!;

  // Your logic here...
  const output = '{"state": {"processed": true}}';

  // Write output to memory and return packed pointer+length
  const outBytes = String.UTF8.encode(output);
  const outPtr = changetype<i32>(outBytes);
  const outLen = outBytes.byteLength;

  return (i64(outPtr) << 32) | i64(outLen);
}
```

Build:

```bash
asc handler.ts -o handler.wasm --optimize
```

### Zig

```zig
const std = @import("std");

var allocator = std.heap.wasm_allocator;

export fn malloc(size: i32) i32 {
    const slice = allocator.alloc(u8, @intCast(size)) catch return 0;
    return @intCast(@intFromPtr(slice.ptr));
}

export fn free(ptr: i32) void {
    _ = ptr;
    // wasm_allocator doesn't support individual frees
}

export fn apply_handler(ptr: i32, len: i32) i64 {
    const input = @as([*]u8, @ptrFromInt(@intCast(ptr)))[0..@intCast(len)];

    // Parse input JSON, compute result...
    const parsed = std.json.parseFromSlice(
        std.json.Value, allocator, input, .{}
    ) catch return 0;
    defer parsed.deinit();

    // Build output
    const output = "{\"state\": {\"computed\": true}}";
    const out_buf = allocator.alloc(u8, output.len) catch return 0;
    @memcpy(out_buf, output);

    const out_ptr: i64 = @intCast(@intFromPtr(out_buf.ptr));
    const out_len: i64 = @intCast(out_buf.len);
    return (out_ptr << 32) | out_len;
}
```

Build:

```bash
zig build-lib handler.zig -target wasm32-freestanding -O ReleaseSmall
```

### Rust

```rust
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};

#[derive(Deserialize)]
struct Input {
    event: Value,
    state: Value,
}

#[derive(Serialize)]
struct Output {
    state: Value,
}

#[no_mangle]
pub extern "C" fn malloc(size: i32) -> *mut u8 {
    let mut buf = Vec::with_capacity(size as usize);
    let ptr = buf.as_mut_ptr();
    std::mem::forget(buf);
    ptr
}

#[no_mangle]
pub extern "C" fn free(ptr: *mut u8, size: i32) {
    unsafe {
        let _ = Vec::from_raw_parts(ptr, 0, size as usize);
    }
}

#[no_mangle]
pub extern "C" fn apply_handler(ptr: *const u8, len: i32) -> i64 {
    let input_bytes = unsafe {
        std::slice::from_raw_parts(ptr, len as usize)
    };
    let input: Input = serde_json::from_slice(input_bytes).unwrap();

    // Your logic here...
    let output = Output {
        state: json!({"processed": true}),
    };

    let output_bytes = serde_json::to_vec(&output).unwrap();
    let out_ptr = output_bytes.as_ptr() as i64;
    let out_len = output_bytes.len() as i64;
    std::mem::forget(output_bytes);

    (out_ptr << 32) | out_len
}
```

Build:

```bash
cargo build --target wasm32-unknown-unknown --release
```

### C

```c
#include <stdlib.h>
#include <string.h>

__attribute__((export_name("malloc")))
void* wasm_malloc(int size) {
    return malloc(size);
}

__attribute__((export_name("free")))
void wasm_free(void* ptr) {
    free(ptr);
}

__attribute__((export_name("apply_handler")))
long long apply_handler(char* ptr, int len) {
    // Parse input JSON (use a JSON library like cJSON)
    // ...

    // Build output
    const char* output = "{\"state\": {\"done\": true}}";
    int out_len = strlen(output);
    char* out_buf = malloc(out_len);
    memcpy(out_buf, output, out_len);

    long long result = ((long long)(size_t)out_buf << 32)
                     | (long long)out_len;
    return result;
}
```

Build:

```bash
# the code uses libc (malloc, strlen, memcpy), so build against a
# WASM libc such as the one shipped with wasi-sdk
clang --target=wasm32-wasi -O2 \
  -mexec-model=reactor -Wl,--no-entry \
  -o handler.wasm handler.c
```

## Uploading WASM blobs

Upload your compiled `.wasm` file via the admin API:

```bash
curl -X POST https://myapp.j17.dev/_admin/blobs \
  -H "Authorization: Bearer $OPERATOR_JWT" \
  -F "name=order-handler.wasm" \
  -F "file=@./dist/order-handler.wasm"
```

Blobs are versioned. The spec references a blob by name, and j17 uses the latest version; pin a specific version with `"blob_name": "order-handler.wasm:v2"`.

## Output format

When using WASM as a handler, the output is a complete state replacement:

```json
{ "state": { "status": "processed", "total": 42.50 } }
```

When using WASM as an implication, the output contains actions:

```json
{
  "actions": [
    { "emit": { "key": "user:u001", "event_type": "was_notified", "data": { "message": "Order placed" } } },
    { "emit": { "aggregate_type": "inventory", "id": "p-123", "event_type": "was_reserved", "data": { "quantity": 2 } } }
  ]
}
```

## Error handling

Both handlers and implications can return structured errors:

```json
{
  "error": {
    "type": "business_rule",
    "message": "Cannot add items to a completed order",
    "code": "ORDER_COMPLETED",
    "context": { "order_status": "completed" }
  }
}
```

j17 distinguishes between:
- **Handler errors** -- returned to the caller as a 422 response
- **Runtime errors** -- WASM traps, timeouts, memory violations -- returned as 500 responses with diagnostic hints

## Testing WASM locally

Test your handler before uploading:

```bash
# Run your WASM with test input using any WASM runtime
echo '{"state": {}, "event": {"key": "order:123", "type": "had_item_added", "data": {"sku": "W1", "price": 9.99}}}' \
  | your-wasm-runner handler.wasm

# Or upload to a test environment and exercise via the API
curl -X POST https://myapp-test.j17.dev/order/test-123/had_item_added \
  -H "Authorization: Bearer $TEST_KEY" \
  -d '{"data": {"sku": "W1", "price": 9.99, "quantity": 1}}'
```

For unit testing in your language's native test framework, extract your business logic into testable functions and test the JSON-in/JSON-out contract independently of the WASM memory interface.

## Performance

j17 executes WASM via the **wasm3 interpreter** (not a JIT). This is a deliberate choice for security and predictability.

Performance characteristics:
- **Cold start**: Microseconds (module parse + instantiate)
- **Warm execution**: Sub-millisecond for typical handlers
- **Default timeout**: 100ms per invocation
- **Stack size**: 64KB default

Compared to Tick handlers (~0.01ms), WASM has measurable overhead. The difference adds up at scale, so use WASM only when Tick can't express your logic.

Tips for fast WASM handlers:
- Keep modules small (<1MB recommended)
- Avoid heavy allocations in hot paths
- Reuse buffers where possible
- Pre-compute what you can at build time

## Debugging

If your WASM handler fails, check these common issues in order:

1. **Missing exports** -- Are `malloc` and your entrypoint (default `apply_handler`) exported? Use `wasm-objdump -x handler.wasm` or `wasm-tools print handler.wasm` to inspect exports.

2. **Wrong return value** -- The entrypoint must return an i64 with `(ptr << 32) | len`. A common mistake is returning just a pointer or just a length.

3. **Invalid JSON** -- Is your output valid JSON matching the contract? Missing `"state"` key (for handlers) or `"actions"` key (for implications) will produce an `invalid_output` error.

4. **Panics** -- Unhandled errors in Rust/Zig abort the WASM instance, surfacing as a trap. Handle all errors gracefully and return an error JSON instead.

5. **Memory bounds** -- Writing past the end of allocated memory causes a trap. Ensure your output buffer is large enough.
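The packing from issue 2 can be checked in isolation. A TypeScript sketch of both directions, using `BigInt` because a JavaScript number cannot hold a full i64:

```typescript
// Sketch: pack a pointer and length into one i64 as the entrypoint must
// return, and unpack it as the host does. BigInt avoids 53-bit number
// precision limits.
const SHIFT = BigInt(32);
const MASK = BigInt(0xffffffff);

function packPtrLen(ptr: number, len: number): bigint {
  return (BigInt(ptr) << SHIFT) | BigInt(len);
}

function unpackPtrLen(packed: bigint): { ptr: number; len: number } {
  return { ptr: Number(packed >> SHIFT), len: Number(packed & MASK) };
}
```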

Enable debug tracing to see WASM input/output in the j17 logs:

```bash
curl -X POST https://myapp-test.j17.dev/_admin/debug \
  -H "Authorization: Bearer $JWT" \
  -d '{"wasm_trace": true}'
```

## Security

WASM handlers run with defense-in-depth isolation:

| Layer | Protection |
|-------|------------|
| WASM | Memory sandboxing, no system calls |
| wasm3 | Resource limits, interpreter-based (no JIT = no JIT bugs) |
| Zig NIF | Crash isolation via BEAM NIF semantics |
| BEAM | Process isolation, fault tolerance |

Constraints enforced at runtime:
- 64KB stack (configurable)
- 100ms execution timeout (configurable per handler)
- No filesystem access
- No network access
- Memory isolated per invocation
- No access to other instances' data

Exceeding limits returns a 500 with a diagnostic error (timeout, OOM, or trap).

## Example: Order line total calculation

A handler that computes line totals -- something Tick can't do because it requires multiplication:

```typescript
// AssemblyScript
import { JSON } from "assemblyscript-json";

export function malloc(size: i32): i32 {
  return heap.alloc(size) as i32;
}

export function free(ptr: i32): void {
  heap.free(ptr);
}

export function apply_handler(ptr: i32, len: i32): i64 {
  const input = String.UTF8.decodeUnsafe(ptr, len);
  const json = <JSON.Obj>JSON.parse(input);

  const event = json.getObj("event")!;
  const data = event.getObj("data")!;
  const state = json.getObj("state")!;

  const price = data.getNum("price")!.valueOf();
  const quantity = data.getInteger("quantity")!.valueOf();
  const line_total = price * f64(quantity);

  // Get current total from state, add line total
  const current_total = state.getNum("total")
    ? state.getNum("total")!.valueOf()
    : 0.0;

  const output = `{"state": {"total": ${current_total + line_total}}}`;

  const outBytes = String.UTF8.encode(output);
  const outPtr = changetype<i32>(outBytes);
  const outLen = outBytes.byteLength;
  return (i64(outPtr) << 32) | i64(outLen);
}
```

## Example: Fraud detection implication

An implication that checks order amount and emits a review event for high-value orders:

```typescript
// AssemblyScript -- malloc/free exports omitted (same as the previous example)
export function apply_handler(ptr: i32, len: i32): i64 {
  const input = String.UTF8.decodeUnsafe(ptr, len);
  const json = <JSON.Obj>JSON.parse(input);

  const event = json.getObj("event")!;
  const data = event.getObj("data")!;
  const metadata = event.getObj("metadata")!;
  const actor = metadata.getObj("actor")!;

  const amount = data.getNum("amount")!.valueOf();
  const user_id = actor.getString("id")!.valueOf();

  let output: string;

  if (amount > 1000.0) {
    output = `{
      "actions": [{
        "emit": {
          "aggregate_type": "review",
          "id": "${user_id}",
          "event_type": "was_flagged",
          "data": {"amount": ${amount}, "reason": "high_value_order"}
        }
      }]
    }`;
  } else {
    output = '{"actions": []}';
  }

  const outBytes = String.UTF8.encode(output);
  const outPtr = changetype<i32>(outBytes);
  const outLen = outBytes.byteLength;
  return (i64(outPtr) << 32) | i64(outLen);
}
```

## See also

- [Tick reference](/docs/reference/tick) -- Declarative handlers (use these first)
- [Implications reference](/docs/reference/implications-reference) -- Reactive event creation
- [Spec reference](/docs/reference/spec) -- Using WASM in specs

# Tombstones (GDPR Erasure)

j17 provides built-in tombstone support for GDPR Article 17 ("Right to Erasure", also known as the "Right to be Forgotten") and similar data deletion requirements. Tombstoning replaces event payloads with tombstone markers while preserving stream structure, hash chain integrity, and an audit trail.

## Why tombstones

Event sourcing and GDPR sit in tension: event stores are append-only, but data subjects can request erasure. Deleting events would break hash chains and lose the audit history that regulators expect.

Tombstones resolve this by replacing event payloads in place. The stream keeps its structure and length, the hash chain is recomputed, and the original content hash is preserved so you can prove an event existed without retaining its contents.

## How it works

1. **Request** a tombstone for an aggregate (e.g., `user:alice`).
2. **Grace period** (minimum 72 hours) allows cancellation.
3. **Execution** replaces all events in the stream with `_was_tombstoned` markers.
4. **Transitive cascade** optionally tombstones related streams (e.g., direct messages authored by Alice).
5. **Audit record** captures what was deleted, when, by whom, and the legal basis.

After tombstoning, the stream still exists with the same number of events, but every event's payload is replaced with a tombstone marker containing the original content hash (for auditability) and no PII.

## Spec configuration

Configure transitive tombstone behavior per aggregate type using `onTombstone` in your spec:

```json
{
  "aggregate_types": {
    "user": {
      "events": { "..." : "..." }
    },
    "message": {
      "events": { "..." : "..." },
      "onTombstone": {
        "actor": "cascade",
        "target": "preserve"
      }
    },
    "order": {
      "events": { "..." : "..." },
      "onTombstone": {
        "actor": "preserve",
        "target": "preserve"
      }
    }
  }
}
```

### Cascade rules

Each role (`actor`, `target`) can be set to:

| Value | Behavior |
|-------|----------|
| `"cascade"` | Tombstone the stream if the deleted entity appears in this role |
| `"preserve"` | Leave the stream untouched |

In the example above, when `user:alice` is tombstoned:

- **`message:m1`** where Alice is the **actor** (sender) -- tombstoned (`actor: "cascade"`)
- **`message:m2`** where Alice is the **target** (recipient) -- preserved (`target: "preserve"`)
- **`order:o1`** where Alice is the **actor** (buyer) -- preserved (`actor: "preserve"`)

Aggregate types without an `onTombstone` configuration are never scanned during transitive discovery.

### How transitive discovery works

When a tombstone executes, j17 scans all aggregate types that have at least one `"cascade"` role in their `onTombstone` config. For each stream of those types, it reads all events and checks whether the tombstoned entity appears as `metadata.actor` or `metadata.target`. If it does, the cascade rule for that role determines whether the stream is tombstoned or preserved.

This means transitive erasure follows the actor/target relationships already present in your event metadata -- no additional configuration required beyond the `onTombstone` block.
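The classification rule can be sketched as a predicate (illustrative types; real event metadata stores `actor` and `target` as `{type, id}` objects, simplified here to string keys -- this mirrors the documented rule, it is not j17's implementation):

```typescript
// Sketch of the documented cascade/preserve decision for one stream.
type CascadeRule = "cascade" | "preserve";
interface OnTombstone { actor?: CascadeRule; target?: CascadeRule; }

// Simplified: actor/target reduced to string keys for illustration.
interface EventMeta { actor?: string; target?: string; }

// True if any event in the stream triggers a cascade for the
// tombstoned entity.
function shouldCascade(
  rules: OnTombstone,
  events: EventMeta[],
  tombstonedKey: string
): boolean {
  return events.some(
    (e) =>
      (rules.actor === "cascade" && e.actor === tombstonedKey) ||
      (rules.target === "cascade" && e.target === tombstonedKey)
  );
}
```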

## Tombstone lifecycle

```
create ──→ pending ──→ executing ──→ completed
               │
               └──→ cancelled
```

| Status | Meaning |
|--------|---------|
| `pending` | Created, within grace period or awaiting manual execution |
| `executing` | Tombstone execution in progress (streams being rewritten) |
| `completed` | All streams rewritten, caches cleared, scheduled work cancelled |
| `cancelled` | Tombstone cancelled during grace period |

### Grace period

Every tombstone has a minimum grace period of **72 hours** (enforced server-side). You can request a longer grace period with the `grace_period_days` parameter. If you pass a value that results in less than 72 hours, the minimum is applied.

During the grace period, the tombstone can be cancelled. After the grace period elapses, the tombstone can be executed.

To execute before the grace period elapses, pass `force=true` to the execute endpoint. This is logged as a warning in the server logs.
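The clamping behaves like this (a sketch of the documented rule, not j17's implementation):

```typescript
// Sketch: the effective grace period never drops below the documented
// 72-hour server-side minimum, whatever grace_period_days requests.
const MIN_GRACE_HOURS = 72;

function effectiveGraceHours(gracePeriodDays?: number): number {
  const requested = (gracePeriodDays ?? 0) * 24;
  return Math.max(requested, MIN_GRACE_HOURS);
}
```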

## Tombstone event format

After tombstoning, each original event in the stream is replaced with:

```json
{
  "key": "user:alice",
  "type": "_was_tombstoned",
  "data": {
    "original_hash": "7f3a2b9c8d1e...",
    "chain_hash": "a4b2c1d3e5f6...",
    "original_type": "was_created",
    "tombstone_id": "550e8400-...",
    "tombstoned_at": "2026-03-08T14:30:00Z"
  },
  "metadata": {
    "actor": {"type": "system", "id": "system"},
    "timestamp": 1741444200
  }
}
```

| Field | Description |
|-------|-------------|
| `original_hash` | SHA-256 of the original event JSON (proves an event existed without retaining its contents) |
| `chain_hash` | The event's hash chain position before tombstoning (allows reconstruction of the pre-tombstone Merkle root) |
| `original_type` | The event type that was replaced (e.g., `was_created`, `had_profile_updated`) |
| `tombstone_id` | Links back to the tombstone record for audit purposes |
| `tombstoned_at` | ISO 8601 timestamp of when the tombstone was executed |

The hash chain is recomputed after rewriting. The `chain_hash` field preserves the pre-tombstone hash so the original Merkle root can be verified against external anchors.
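If you retained a copy of an event's JSON from before erasure (e.g., an export), you can check it against `original_hash`. A sketch in TypeScript; the exact byte-level serialization j17 hashes is an assumption here -- verify against your own data before relying on it:

```typescript
import { createHash } from "node:crypto";

// Sketch: verify a retained copy of an event against a tombstone
// marker's original_hash. ASSUMPTION: the hash covers the event JSON
// bytes exactly as originally stored; confirm the canonicalization
// before relying on this check.
function matchesOriginalHash(originalJson: string, originalHash: string): boolean {
  const digest = createHash("sha256").update(originalJson, "utf8").digest("hex");
  return digest === originalHash;
}
```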

## Execution details

When a tombstone executes, the following steps occur in order:

1. **Snapshot root anchor** -- a Merkle root is computed and stored as `pre_tombstone_root` on the tombstone record. This is the cryptographic proof of the state before erasure.

2. **Discover transitive references** -- all aggregate types with `onTombstone` cascade rules are scanned. Streams where the tombstoned entity appears as actor or target are identified and classified as `cascade` or `preserve`.

3. **Rewrite streams** -- the primary stream and all cascade-marked transitive streams are rewritten. Each event is replaced with a `_was_tombstoned` marker. The hash chain is recomputed for each stream. Stream lengths are verified with XLEN guards to prevent concurrent write conflicts. If any rewrite fails, the operation is retried (up to 3 attempts).

4. **Clear caches** -- cached aggregates for all tombstoned streams are deleted.

5. **Cancel scheduled work** -- any pending scheduled events targeting tombstoned streams are cancelled.

6. **Mark completed** -- the tombstone record is updated with `completed` status, `affected_streams` summary, and `executed_at` timestamp.

If execution fails after all retries, the tombstone remains in `executing` status and the failure is logged. Execution can be reattempted.

## Audit trail

Each completed tombstone records:

| Field | Description |
|-------|-------------|
| `pre_tombstone_root` | Merkle root computed immediately before erasure (cryptographic proof of prior state) |
| `affected_streams` | List of streams that were rewritten, with event counts and actions |
| `executed_at` | When the tombstone was executed |
| `legal_basis` | The legal justification provided at creation time |
| `request_id` | External request tracking ID |
| `requested_by` | Who requested the erasure |

The `affected_streams` field contains entries like:

```json
[
  {"key": "user:alice", "event_count": 12, "action": "tombstoned"},
  {"key": "message:m1", "event_count": 3, "action": "tombstoned"},
  {"key": "message:m2", "event_count": 5, "action": "tombstoned"}
]
```

## Writes after tombstoning

j17 does **not** block writes to a tombstoned aggregate key. After tombstoning `user:alice`, your application can still write new events to `user:alice`.

This is deliberate:

- **j17 stores opaque event payloads.** The platform does not know whether your events contain PII. Whether a post-tombstone write reintroduces personal data is a concern for your application layer, not the event store.
- **The audit trail is preserved.** The tombstone record, pre-tombstone Merkle root, and original content hashes remain regardless of subsequent writes. New events append after the tombstone markers.
- **Your application controls the boundary.** If you need to prevent writes to tombstoned aggregates, check tombstone status in your application before submitting events.

If re-creating a tombstoned entity reintroduces PII (e.g., because your system still maps the aggregate ID to a real person), that is a data controller responsibility under GDPR, not a storage platform concern.

## Best practices

**Record the legal basis.** Always provide `legal_basis` and `request_id` when creating tombstones. These fields are stored in the audit trail and demonstrate compliance to regulators.

**Use meaningful `requested_by` values.** Include the operator or system that initiated the request so the audit trail is traceable.

**Respect the grace period.** The 72-hour minimum exists so accidental tombstone requests can be caught and cancelled. Avoid routine use of `force=true`.

**Design your spec's `onTombstone` rules carefully.** Think through which aggregate types should cascade and which should be preserved. A direct message sent by a user contains PII and should cascade; an order placed by a user might need to be retained for financial records. The right answer depends on your domain and legal requirements.

**Tombstoning erases the user, not their footprint.** When a user is tombstoned, events they authored on other aggregates (e.g., a `was_commented_upon` event on an article) remain intact -- only the user's own aggregate stream is erased (plus any cascade-configured streams). The actor reference in those events will point to a tombstoned aggregate, so your application should handle missing actors gracefully (e.g., display "Deleted User"). This is the standard pattern used by GitHub, Reddit, and similar platforms.

**Aggregate-level, not event-level.** Tombstones erase entire streams, not individual events. If you need to erase specific content (e.g., a single comment's text) without erasing the whole aggregate, store that content in a mutable store outside the event stream and reference it by ID. Then you can delete the content without touching the event history.

**Check tombstone status before re-creating entities.** If your application allows re-registration with the same aggregate ID after tombstoning, the new events will append after the tombstone markers. The aggregate will re-compute from the tombstone markers (which produce no state) plus the new events. This works correctly, but you should be aware of it.

---


# FAQ

Common questions about j17. Don't see yours answered here? Email support@j17.dev.

## Getting started

**How do I get an API key?**

Sign up at j17.dev, create an instance, then generate keys in Settings -> API Keys.

**Is there a free tier?**

Yes. 1,000 events/month, 1GB storage, 100 req/min. No credit card required.

**What's the difference between staging and production?**

Separate environments with isolated data and API keys. Test freely in staging without affecting production.

**Can I use j17 from the browser?**

Yes. j17 supports CORS. Use read-only API keys for client-side code, or proxy writes through your backend.

## Events and aggregates

**Can I delete an event?**

No. Events are immutable. Write a compensating event instead (e.g., a `was_cancelled` event on the order).

**How many events can an aggregate have?**

Soft limit of 10,000 events per aggregate. Beyond that, use checkpoints or reconsider your aggregate boundaries.

**What's a good aggregate size?**

Most aggregates have 10-100 events. Hundreds is fine. Thousands suggests you need checkpoints or a different model.

**Can I query across aggregates?**

Not directly. Use projections for multi-aggregate views, or export to a data warehouse for analytics.

**How do I list all users?**

Event sourcing isn't table scanning. Maintain a projection with user IDs, or track IDs in your database when you create them.

## Spec and handlers

**Do I need to redeploy when I change the spec?**

No. Upload the new spec via API or dashboard. Changes apply immediately.

**Can I have multiple event types with the same name?**

No. Event types are unique within an aggregate type. `user.was_created` and `order.was_created` are fine.

**What happens to old events when I change a handler?**

Nothing. Old events stay as-is. New events use the new handler. If you need to migrate, consider upcasters (store handler version in event metadata).

**Can I use TypeScript/JavaScript for handlers?**

Not as scripts. You can, however, compile AssemblyScript (a TypeScript dialect) to WASM and register it as a handler -- see the WASM examples above. Prefer Tick's 17 declarative operations where they suffice; reach for WASM only when Tick can't express your logic.

**How do I validate complex business rules?**

Either:
1. Validate client-side before writing (for UX)
2. Use Tick conditionals for simple rules
3. Accept the event and handle invalid states via implications
4. Use external validation in your backend before posting to j17

## Performance

**How fast is j17?**

- Event write: ~5ms
- Aggregate read: ~2ms (small), ~10ms (medium), ~50ms (large)
- Tick handler: 300k+ ops/sec per core

**When should I use caching?**

For aggregates queried > 100 times/minute. Start without caching, add when needed.

**What are the rate limits?**

- 500 requests/minute per API key
- 2,000 requests/minute per IP address

**Can I use j17 for real-time applications?**

Use polling. SSE/WebSocket subscriptions are not yet available.

## Data and storage

**Where is my data stored?**

EU (Germany) by default. US and Asia regions coming soon. Contact support to request a region.

**Is my data encrypted?**

Yes. Encrypted at rest and in transit. TLS 1.3 for all connections.

**Can I export my data?**

Yes. Use the Admin API to export aggregates or events as JSON/NDJSON.

**What happens if I exceed my plan limits?**

We don't throttle hard -- your app keeps working. You'll get an email notification. Upgrade or contact us within 48 hours.

**How long do you keep my data?**

Forever, unless you delete it. We don't auto-purge old events.

## Safety limits

j17 enforces safety limits on spec complexity:

- **Handler depth**: max 5 levels of nesting
- **Handler operations**: max 100 total operations per event type

These exist to prevent accidentally creating specs that are expensive to evaluate.

## Security

**Can I rotate API keys?**

Yes. Create new keys, update your services, then delete the old keys. Requests made with deleted keys fail immediately.

**Are webhooks signed?**

Yes. j17 signs webhooks with HMAC-SHA256. Verify the signature to ensure authenticity.
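Verification might look like this (a sketch; the header carrying the signature and its encoding depend on your webhook configuration -- hex encoding is assumed here):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";
import { Buffer } from "node:buffer";

// Sketch: verify an HMAC-SHA256 webhook signature. Constant-time
// comparison avoids timing side channels. ASSUMPTION: hex-encoded
// signature over the raw request body; check your webhook settings
// for the exact format.
function verifySignature(payload: string, secret: string, signature: string): boolean {
  const expected = createHmac("sha256", secret).update(payload, "utf8").digest("hex");
  if (expected.length !== signature.length) return false; // timingSafeEqual requires equal lengths
  return timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}
```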

**Can j17 see my data?**

Technically yes -- we store it. But we:
- Don't access it unless debugging a support issue (with permission)
- Don't sell or share it

**Is j17 SOC 2 compliant?**

In progress. Contact sales@j17.dev for compliance questions.

## Pricing and billing

**How is pricing calculated?**

Calculated daily, based on:
- Events written
- Storage used
- API calls made

See pricing page for tiers.

**Do you charge for reads?**

Yes, but minimally. Reads are 1/10th the cost of writes.

**What counts as an "event"?**

Every POST to `/{type}/{id}/{event}` counts as one event, regardless of size.

**Do scheduled/implied events count?**

Yes. Any event written to your instance counts toward your quota.

**Can I set billing alerts?**

Yes. Configure alerts at 50%, 80%, and 100% of your plan limits.

## Troubleshooting

**I get 409 Conflict. What does that mean?**

Optimistic concurrency check failed. Someone else wrote to the aggregate between your GET and POST. Refetch and retry.
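A retry loop for this could look like the following sketch; `readAggregate` and `postEvent` are placeholders for your own API calls, since how you convey the expected version depends on your client:

```typescript
// Sketch: refetch-and-retry around a 409. The callbacks stand in for
// your own GET/POST calls; the loop just re-reads the latest aggregate
// before each attempt.
interface HttpResult { status: number; }

async function withConcurrencyRetry<T>(
  readAggregate: () => Promise<T>,
  postEvent: (current: T) => Promise<HttpResult>,
  maxAttempts = 3
): Promise<HttpResult> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const current = await readAggregate(); // refetch latest state
    const res = await postEvent(current);
    if (res.status !== 409) return res;    // success or a different error
  }
  throw new Error(`still conflicting after ${maxAttempts} attempts`);
}
```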

**I get 422 Validation failed. How do I fix it?**

Your event data doesn't match the schema. Check the `details` field in the error response -- it tells you exactly what failed.

**Events aren't triggering implications.**

Check:
1. Is the implication defined in your spec?
2. Does the `when` condition evaluate to true?
3. Are there errors listed at `/admin/implications/failed`?

**My aggregate loads slowly.**

- Enable caching for frequently-read aggregates
- Create a checkpoint for large aggregates
- Check if you're including events unnecessarily

**I'm rate limited.**

Check your current usage. API keys are limited to 500 req/min and IPs to 2,000 req/min. Implement client-side backoff with exponential retry.
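A common schedule doubles the delay per attempt and adds jitter so concurrent clients don't retry in lockstep (a sketch; the base delay and cap are arbitrary choices, not j17 recommendations):

```typescript
// Sketch: exponential backoff with "equal jitter". Delay doubles per
// attempt up to a cap, then half of it is randomized. Base and cap
// values are illustrative.
function backoffDelayMs(attempt: number, baseMs = 250, capMs = 10_000): number {
  const exp = Math.min(baseMs * 2 ** attempt, capMs);
  return exp / 2 + Math.random() * (exp / 2);
}

// Usage: await sleep(backoffDelayMs(attempt)) before retrying a 429.
```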

## Comparisons

**How is j17 different from Postgres?**

Postgres stores current state. j17 stores history and derives state. Trade-off: you get audit trails and temporal queries, you lose simple SQL.

**vs Redis?**

Redis is a cache. j17 is persistent storage with cache characteristics. Don't use Redis as your primary event store.

**vs Kafka?**

Kafka is a message bus. j17 is a database. Kafka doesn't compute aggregates or validate schemas.

**vs EventStoreDB/KurrentDB?**

j17 is managed, cheaper, and simpler. EventStoreDB is more powerful (projections, complex queries) but requires expertise and budget.

**vs Supabase/Firebase?**

Those are general-purpose databases. j17 is purpose-built for event sourcing. Use them for CRUD, j17 for audit trails and temporal data.

## Advanced

**Can I self-host j17?**

Not yet. We're focused on the managed service. Enterprise on-prem may come later.

**Do you support multi-region?**

Not yet. Single region per instance. Multi-region replication is on the roadmap.

**Can I write events in a transaction?**

Yes. Use batch writes to post multiple events to the same aggregate atomically. For cross-aggregate atomicity, use implications.

**How do I migrate from another database?**

1. Identify events in your existing data (every UPDATE is a potential event)
2. Write a migration script that POSTs events to j17
3. Dual-write for a period (write to both old and new)
4. Cut over reads to j17
5. Stop writing to old database

See our migration guide for details.

**Can I use j17 with AI/LLMs?**

Yes. j17 is AI-friendly:
- Spec is JSON (LLMs understand it)
- API is HTTP (universal)
- Entire backend fits in a context window

Load our docs into Claude/Cursor and describe your app. The AI will generate your spec.

## Getting help

**Documentation:** docs.j17.dev
**Email:** support@j17.dev
**Status:** status.j17.dev

We typically respond within 24 hours. Enterprise customers get priority support with 4-hour SLA.
