Reflect

reflect is the operation that asks Cortex to summarise. It is the only operation other than distillation that calls the LLM, and it is the bridge between an active session and its persistent record.

Reflect takes:

| Argument | Description |
| --- | --- |
| scope | Optional scope to bound the reflection |
| since | Optional time bound (default: current session) |
| style | Optional rendering style, e.g. brief, full, or a section keyword |

It returns a structured rollup, suitable for either display to the user or capture as a new record.
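As a concrete shape, a call might look like the sketch below; the client function and the rollup's fields are assumptions for illustration, not a documented API.

```python
# Hypothetical reflect call; the three arguments mirror the table above,
# and the returned rollup shape is illustrative only.
def reflect(scope=None, since=None, style="full"):
    return {
        "style": style,
        "scope": scope,
        "sections": {
            "opening_intent": "Track down the flaky scheduler test.",
            "key_discoveries": "The scheduler only fires on scope boundaries.",
        },
    }

rollup = reflect(scope="project/cortex", style="brief")
```

The same structure serves both paths the text describes: render the sections for display, or persist the whole rollup as a record.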

The operation proceeds in five steps:

  1. Selects the input set: the recent session’s events and accessed records, optionally filtered by scope or time bound.
  2. Composes a structured prompt that asks the LLM to produce a sectioned reflection: opening intent, conversation journey, key discoveries, friction points, recurring patterns, and learning opportunities.
  3. Calls the LLM through the standard wrapper.
  4. Parses the response into discrete sections.
  5. Returns the structured result.
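The five steps can be sketched end to end. Everything here — the function names, the event fields, the "## " heading convention for parsing — is an assumption for illustration, not Cortex's actual internals.

```python
from dataclasses import dataclass

@dataclass
class Reflection:
    sections: dict  # section name -> rendered text

def reflect(events, llm, scope=None, since=None):
    # 1. Select the input set, optionally bounded by scope or time.
    selected = [e for e in events
                if (scope is None or e["scope"] == scope)
                and (since is None or e["time"] >= since)]
    # 2. Compose a structured prompt asking for fixed sections.
    prompt = ("Reflect on the session below in sections, each under a "
              "'## ' heading.\n\n" + "\n".join(e["text"] for e in selected))
    # 3. Call the LLM through the standard wrapper.
    response = llm(prompt)
    # 4. Parse the response into discrete sections.
    sections, current = {}, None
    for line in response.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    # 5. Return the structured result.
    return Reflection({k: "\n".join(v).strip() for k, v in sections.items()})
```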

Optionally, the result can be captured as a record in its own right via the --save flag — producing a reflect-typed record with provenance back to the input set.

The temptation with reflection is to ask for prose. “Summarise the session” returns a paragraph; the paragraph is fine to read once and useless to search later.

A structured reflection has fixed sections. Each section is independently addressable, separately renderable, and individually queryable. The structure is what makes the reflection a record rather than just a paragraph.
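For instance, if a saved reflection is a record with named sections (the JSON shape here is an assumption, not a documented schema), one section can be pulled out without re-reading the prose:

```python
import json

# Hypothetical stored reflection record; the shape is illustrative.
record = json.loads("""
{
  "type": "reflect",
  "sections": {
    "friction_points": "Three rebuilds were forced by a stale cache.",
    "key_discoveries": "The scheduler only fires on scope boundaries."
  }
}
""")

# Each section is independently addressable.
friction = record["sections"]["friction_points"]
```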

The default section set is opinionated and stable:

  1. Opening intent. What was the session trying to do?
  2. Prompt styles. What kinds of prompts were used?
  3. Questions asked. What did the user actually ask?
  4. Conversation journey. What was the arc of the session?
  5. Key discoveries. What was learned?
  6. Friction points. Where did things stall, fail, or backtrack?
  7. Recurring patterns. What kept coming up?
  8. Tooling opportunities. What would have made the session easier?
  9. Learning opportunities. What is worth carrying forward?

The structure is borrowed from a longer-running reflection practice and tuned for sessions with AI agents. Other section sets are possible by swapping the prompt template.
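A minimal sketch of that swap, assuming the template is simply an ordered list of section names fed into the prompt (the text does not specify the mechanism):

```python
# The default list mirrors the nine sections above; the template
# mechanism itself is an assumption for illustration.
DEFAULT_SECTIONS = [
    "Opening intent", "Prompt styles", "Questions asked",
    "Conversation journey", "Key discoveries", "Friction points",
    "Recurring patterns", "Tooling opportunities", "Learning opportunities",
]

def build_prompt(events_text, sections=DEFAULT_SECTIONS):
    headers = "\n".join(f"- {s}" for s in sections)
    return ("Reflect on the session below. Respond with exactly these "
            f"sections, each under a '## ' heading:\n{headers}\n\n{events_text}")

# Swapping in a different section set changes the reflection's shape.
retro_prompt = build_prompt("...", sections=["What went well", "What to change"])
```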

Reflect operates on a single session or short time window. It is synchronous, called explicitly, and produces one structured artefact.

Distillation operates on a whole scope across long time horizons. It runs in the background, called by the scheduler, and produces multiple summary records.

They share the same underlying capability — LLM-driven rollup — but the access patterns are different enough to warrant separate operations.

The expected pattern is invocation at session boundaries: end-of-session, end-of-day, end-of-investigation. The MCP client invokes reflect, the structured result is shown to the user, and, optionally, it is captured as a record.

Calling reflect mid-session is also fine; the input set is whatever the session has produced so far. A reflection partway through a long session can be useful for re-orienting.