
Session Lifecycle

Cortex does not run as a daemon that watches sessions. It runs as a collection of scripts triggered by the AI client’s own lifecycle events. The hook surface is small and stable.

SessionStart

Fires when an AI client starts a fresh session for a given working directory.

Cortex’s SessionStart hook:

  1. Resolves the active project from the working directory.
  2. Calls into the recall path with two scopes:
    • L0 — records scoped specifically to the active project.
    • L1 — records scoped to the broader area or domain.
  3. Selects the highest-ranked records using activation + recency.
  4. Renders them as a compact context block.
  5. Emits the block to stdout, where the client injects it into the model’s context.
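The five steps above can be sketched roughly as follows. This is a minimal illustration, not Cortex's actual code: the `Record` fields, the additive `activation + recency` score, and the block delimiters are all assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    activation: float  # long-term usefulness signal (assumed field)
    recency: float     # freshness signal, higher = more recent (assumed field)

def rank(records: list[Record], limit: int) -> list[Record]:
    """Select the highest-ranked records using activation + recency."""
    return sorted(records, key=lambda r: r.activation + r.recency, reverse=True)[:limit]

def render_block(records: list[Record]) -> str:
    """Render selected records as a compact context block."""
    lines = ["<cortex-memory>"]
    lines += [f"- {r.text}" for r in records]
    lines.append("</cortex-memory>")
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical recall results for the two scopes.
    l0 = [Record("project uses uv, not pip", activation=0.9, recency=0.8)]
    l1 = [Record("prefer ruff for linting", activation=0.5, recency=0.3)]
    # Emit to stdout; the client injects stdout into the model's context.
    print(render_block(rank(l0 + l1, limit=10)))
```

Merging the L0 and L1 result lists before ranking lets a very strong area-level record outrank a weak project-level one, rather than reserving fixed slots per scope.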

The block has a budget. Both the count and the rendered length are capped so that SessionStart never floods the model’s context.
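The dual cap might look something like this sketch; the specific limits are illustrative, not Cortex's real values.

```python
MAX_RECORDS = 10   # cap on count (illustrative value)
MAX_CHARS = 4000   # cap on total rendered length (illustrative value)

def apply_budget(rendered: list[str],
                 max_records: int = MAX_RECORDS,
                 max_chars: int = MAX_CHARS) -> list[str]:
    """Keep records until either the count cap or the character cap is hit,
    so the injected block can never flood the model's context."""
    kept: list[str] = []
    used = 0
    for text in rendered[:max_records]:
        if used + len(text) > max_chars:
            break
        kept.append(text)
        used += len(text)
    return kept
```

Because records arrive already ranked, truncating from the tail drops the least valuable ones first.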

PreCompact

Fires when the AI client is about to compact, i.e. summarise the running conversation and discard the original turns to free up context.

Cortex’s PreCompact hook:

  1. Receives the conversation snapshot from the client.
  2. Writes a pre_compact event to inbox/pending/.
  3. Returns immediately so compaction can proceed.
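A hook in this style can be sketched as a single atomic file write. The filename scheme and payload shape are assumptions for the example; the write-temp-then-rename pattern is the standard way to ensure a reader scanning `inbox/pending/` never sees a half-written event.

```python
import json
import os
import time
import uuid
from pathlib import Path

INBOX = Path("inbox/pending")  # directory layout taken from the text

def write_event(kind: str, payload: dict) -> Path:
    """Persist one lifecycle event and return immediately.
    Written to a temp file first, then renamed, so the event appears
    in inbox/pending/ all at once or not at all."""
    INBOX.mkdir(parents=True, exist_ok=True)
    name = f"{int(time.time())}-{kind}-{uuid.uuid4().hex[:8]}.json"
    tmp = INBOX / (name + ".tmp")
    tmp.write_text(json.dumps({"kind": kind, "ts": time.time(), **payload}))
    final = INBOX / name
    os.replace(tmp, final)  # atomic rename on POSIX filesystems
    return final

# PreCompact: persist the snapshot, then let compaction proceed.
# write_event("pre_compact", {"snapshot": conversation_snapshot})
```

Nothing here normalizes, queries, or calls an LLM; the hook's only job is to get the bytes onto disk and exit.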

The motivation is recovery. Compaction is a lossy operation; if anything useful happened in the run-up to it, it should be preserved as a record before the originals disappear.

Stop

Fires when the session ends cleanly.

Cortex’s Stop hook:

  1. Receives the final session transcript or summary from the client.
  2. Writes a stop event to inbox/pending/.
  3. Returns immediately.
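A Stop hook along these lines might read the client's payload from stdin and persist it verbatim. This is a sketch under assumptions: the exact fields the client sends (transcript path, summary, session id) vary by client, so the payload is stored opaquely for the normalizer to interpret later.

```python
import json
import sys
import time
import uuid
from pathlib import Path

def handle_stop(raw: str, inbox: Path) -> Path:
    """Parse the client's hook payload and persist it as a stop event.
    No normalization here: the hook returns immediately and the
    normalizer interprets the payload on its next pass."""
    payload = json.loads(raw)  # field names depend on the client
    inbox.mkdir(parents=True, exist_ok=True)
    out = inbox / f"{int(time.time())}-stop-{uuid.uuid4().hex[:8]}.json"
    out.write_text(json.dumps({"kind": "stop", "payload": payload}))
    return out

# Typical wiring when the client pipes its payload over stdin:
# handle_stop(sys.stdin.read(), Path("inbox/pending"))
```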

This is the canonical capture for what happened in a session. The normalizer turns it into one or more records during its next pass.

Why hooks instead of a daemon?

Three reasons:

  • No background process to manage. Hooks live and die with the client invocation. There is nothing to forget to start, no service to debug, no process tree to inspect.
  • Failure-isolated. A hook crash is a hook crash. It does not bring down a service. The next session’s hook fires fresh.
  • Triggered at the right moments. The lifecycle events are the natural read/write points for memory; a polling daemon would either poll too often (waste) or too rarely (miss the moment).

The hook scripts only write events. They do not normalize, do not query the database, do not call the LLM. This keeps them fast (so they do not slow down session start or stop), simple (so they almost never fail), and isolated (so a normalize bug does not break session lifecycle).

Normalize runs on its own schedule, separately, when nothing is waiting for it.
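One pass of that separate normalize step can be sketched as a drain of the inbox. The `inbox/processed/` destination and the `make_records` callback are assumptions standing in for the real record-creation logic (which may call the LLM and update the index).

```python
import json
from pathlib import Path

PENDING = Path("inbox/pending")      # layout taken from the text
PROCESSED = Path("inbox/processed")  # assumed sibling directory

def normalize_pass(make_records) -> int:
    """One normalize pass: turn each pending event into records,
    then move the event file out of pending so it is never
    processed twice. Returns the number of events handled."""
    PROCESSED.mkdir(parents=True, exist_ok=True)
    handled = 0
    for event_file in sorted(PENDING.glob("*.json")):
        event = json.loads(event_file.read_text())
        make_records(event)  # stand-in for record creation + indexing
        event_file.rename(PROCESSED / event_file.name)
        handled += 1
    return handled
```

Because the hooks and the normalizer share only this directory, the pass can run from cron, a manual invocation, or anything else, with no coordination beyond the filesystem.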

┌──────────────────┐
│     AI client    │
│   (Claude Code,  │
│    Codex, etc.)  │
└────────┬─────────┘
         │ SessionStart
         ▼
┌──────────────────┐       inject relevant
│ cortex-session-  │  ──▶  records into
│ start.py         │       model context
└──────────────────┘

         │ PreCompact / Stop
         ▼
┌──────────────────┐       write event to
│ cortex-pre-      │  ──▶  inbox/pending
│ compact.py       │
│ cortex-stop.py   │
└──────────────────┘

    (later, separately)
┌──────────────────┐
│ cortex-          │  ──▶  records + index
│ normalize.py     │
└──────────────────┘

The arrows in the bottom block are deliberately disconnected from the top ones. There is no synchronous dependency between session lifecycle and normalization — only a shared filesystem.