# Time-travel debugging

Trace recording captures complete execution state at each step, enabling post-execution debugging through replay.

## Basic usage

Enable tracing at the mission level:
```
mission DebuggablePipeline {
  trace: full

  action Process {
    get "/data"
    store response -> data { key: .id }

    for item in data {
      map item -> Processed { ... }
    }
  }

  run Process
}
```
## Trace modes

### full

Captures complete state snapshots:

```
mission FullTrace {
  trace: full

  action Process {
    // Every step captures: variables, response, store state
  }

  run Process
}
```
Captures:

- All variable values
- Current response
- Store snapshots
- Loop context and iteration state
- Step timing information
### minimal

Captures lightweight state markers:

```
mission MinimalTrace {
  trace: minimal

  action Process {
    // Captures step transitions, not full state
  }

  run Process
}
```
Captures:
- Step type and name
- Timestamps
- Errors (if any)
- Basic flow information
## Trace snapshots

### What's captured

Each trace snapshot includes:

| Field | Description |
|---|---|
| `id` | Unique snapshot ID |
| `index` | Sequential snapshot number |
| `timestamp` | When the snapshot was taken |
| `mission` | Mission name |
| `action` | Current action name |
| `stepIndex` | Step index within the action |
| `stepType` | Type of step (`fetch`, `store`, `map`, etc.) |
| `phase` | `before` or `after` the step |
| `variables` | All variable values |
| `stores` | Store state (keys/counts) |
| `stepDuration` | Execution time (`after` phase only) |
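The fields above can be sketched as a TypeScript type. This is an illustration derived from the table, not reqon's actual exported type, which may name or nest these fields differently:

```typescript
// Hypothetical shape of a trace snapshot, following the field table above.
interface TraceSnapshot {
  id: string;                // unique snapshot ID
  index: number;             // sequential snapshot number
  timestamp: string;         // when the snapshot was taken (ISO 8601)
  mission: string;           // mission name
  action: string;            // current action name
  stepIndex: number;         // step index within the action
  stepType: string;          // 'fetch', 'store', 'map', ...
  phase: 'before' | 'after'; // which side of the step
  variables: Record<string, unknown>;
  stores: Record<string, { count: number }>;
  stepDuration?: number;     // ms; present on 'after' snapshots only
}

const example: TraceSnapshot = {
  id: 'snap-000000',
  index: 0,
  timestamp: '2024-01-20T09:00:00Z',
  mission: 'DebuggablePipeline',
  action: 'Process',
  stepIndex: 0,
  stepType: 'fetch',
  phase: 'before',
  variables: { page: 1 },
  stores: {},
};
```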
### Snapshot phases

Each step generates two snapshots:

```
Step 1: get "/data"
  → Snapshot (before): variables = { page: 1 }
  → Snapshot (after):  variables = { page: 1 }, response = {...}

Step 2: store response -> data
  → Snapshot (before): response = {...}
  → Snapshot (after):  stores = { data: { count: 100 } }
```
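Because every step emits a before/after pair, per-step wall-clock time can be recovered by matching the pairs. A minimal sketch, assuming only the `stepIndex`, `phase`, and `timestamp` fields described above:

```typescript
type PhaseSnap = { stepIndex: number; phase: 'before' | 'after'; timestamp: string };

// Pair each 'before' snapshot with its matching 'after' snapshot
// and compute the elapsed time for that step, in milliseconds.
function stepDurations(snaps: PhaseSnap[]): Map<number, number> {
  const started = new Map<number, number>();
  const durations = new Map<number, number>();
  for (const s of snaps) {
    const t = Date.parse(s.timestamp);
    if (s.phase === 'before') {
      started.set(s.stepIndex, t);
    } else if (started.has(s.stepIndex)) {
      durations.set(s.stepIndex, t - started.get(s.stepIndex)!);
    }
  }
  return durations;
}
```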
## Using the trace replayer

### Loading a trace

```typescript
import { TraceReplayer, FileTraceStore } from 'reqon';

const store = new FileTraceStore('.vague-data/traces');
const replayer = new TraceReplayer(store);

// Load a trace by execution ID
const session = await replayer.loadTrace('exec-abc123');
console.log(session.totalSnapshots); // 42
console.log(session.currentIndex);   // 0
```
### Navigating snapshots

```typescript
// Step forward
const next = await replayer.next();
console.log(next.snapshot.stepType); // 'fetch'
console.log(next.hasNext);           // true

// Step backward
const prev = await replayer.previous();

// Jump to a specific snapshot
const result = await replayer.goToStep(10);
console.log(result.snapshot.variables);

// Jump to an action
await replayer.goToAction('ProcessData');
```
### Comparing snapshots

```typescript
// See what changed between two snapshots
const diff = replayer.compareSnapshots(5, 6);

console.log(diff.variableChanges);
// [
//   { name: 'response', type: 'added', newValue: {...} },
//   { name: 'page', type: 'modified', oldValue: 1, newValue: 2 }
// ]

console.log(diff.storeChanges);
// [
//   { store: 'data', type: 'modified', itemsAdded: 100 }
// ]
```
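`compareSnapshots` is provided by the replayer, but the underlying variable diff is straightforward. A sketch of how such a comparison can be computed, using the `added`/`modified` change types shown above plus a `removed` case (this is an illustration, not reqon's implementation):

```typescript
type VariableChange =
  | { name: string; type: 'added'; newValue: unknown }
  | { name: string; type: 'removed'; oldValue: unknown }
  | { name: string; type: 'modified'; oldValue: unknown; newValue: unknown };

// Compare two variable maps and report additions, removals, and modifications.
// JSON stringification is a simple (if approximate) deep-equality check.
function diffVariables(
  before: Record<string, unknown>,
  after: Record<string, unknown>
): VariableChange[] {
  const changes: VariableChange[] = [];
  for (const name of Object.keys(after)) {
    if (!(name in before)) {
      changes.push({ name, type: 'added', newValue: after[name] });
    } else if (JSON.stringify(before[name]) !== JSON.stringify(after[name])) {
      changes.push({ name, type: 'modified', oldValue: before[name], newValue: after[name] });
    }
  }
  for (const name of Object.keys(before)) {
    if (!(name in after)) {
      changes.push({ name, type: 'removed', oldValue: before[name] });
    }
  }
  return changes;
}
```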
### Timeline view

```typescript
// Get the execution timeline
const timeline = replayer.getTimeline();

for (const event of timeline) {
  console.log(`${event.timestamp}: ${event.type} - ${event.action}`);
}
// 2024-01-20T09:00:00: action-start - FetchData
// 2024-01-20T09:00:01: step-complete - fetch
// 2024-01-20T09:00:02: step-complete - store
// 2024-01-20T09:00:02: action-complete - FetchData
```
## Trace storage

### File storage (default)

```
.vague-data/traces/
├── exec-abc123/
│   ├── meta.json          # Trace metadata
│   └── snapshots/
│       ├── 000000.json    # First snapshot
│       ├── 000001.json
│       └── ...
└── exec-def456/
    └── ...
```
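Given this layout, an individual snapshot file can be located without the replayer. A hypothetical helper (the root directory and six-digit zero-padded file names follow the tree above; adjust the root if you configured a different one):

```typescript
// Build the on-disk path of one snapshot file, following the layout above.
function snapshotPath(
  executionId: string,
  index: number,
  root = '.vague-data/traces'
): string {
  const file = String(index).padStart(6, '0') + '.json';
  return `${root}/${executionId}/snapshots/${file}`;
}

// snapshotPath('exec-abc123', 7)
// → '.vague-data/traces/exec-abc123/snapshots/000007.json'
```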
### Memory storage

For testing or ephemeral traces:

```typescript
import { MemoryTraceStore, TraceRecorder } from 'reqon';

const store = new MemoryTraceStore();
const recorder = new TraceRecorder({ store, mode: 'full' });
```
## Use cases

### Debugging failed executions

```
mission DataPipeline {
  trace: full

  action Process {
    get "/data"

    for item in response.items {
      validate item {
        assume .amount > 0  // Might fail
      }
    }
  }

  run Process
}
```
When validation fails, replay the trace:
```typescript
const replayer = new TraceReplayer(store);
await replayer.loadTrace('exec-failed');

// Find the failure point
const timeline = replayer.getTimeline();
const failure = timeline.find(e => e.type === 'error');

// Go to the step before the failure
await replayer.goToStep(failure.snapshotIndex - 1);

// Inspect the data that caused the failure
console.log(replayer.current().variables.item);
```
### Understanding data transformations

```
mission TransformPipeline {
  trace: full

  action Transform {
    for item in raw {
      map item -> CleanedItem {
        name: upper(.name),
        amount: .price * .quantity,
        status: match .state {
          "A" => "active",
          _ => "inactive"
        }
      }
    }
  }

  run Transform
}
```
Replay to see input/output at each transformation:
```typescript
const replayer = new TraceReplayer(store);
await replayer.loadTrace('exec-123');

// Walk forward through the trace and inspect each map step,
// using the hasNext flag to know when the trace is exhausted
let step = await replayer.next();
while (step) {
  const snap = step.snapshot;
  if (snap.stepType === 'map') {
    const diff = replayer.compareSnapshots(snap.index - 1, snap.index);
    console.log('Input:', diff.variableChanges.find(v => v.name === 'item')?.oldValue);
    console.log('Output:', diff.variableChanges.find(v => v.name === 'response')?.newValue);
  }
  step = step.hasNext ? await replayer.next() : null;
}
```
### Performance analysis

```typescript
const replayer = new TraceReplayer(store);
await replayer.loadTrace('exec-123');

const timeline = replayer.getTimeline();

// Find steps that took longer than one second
const slowSteps = timeline
  .filter(e => e.type === 'step-complete' && e.duration > 1000)
  .sort((a, b) => b.duration - a.duration);

console.log('Slowest steps:');
for (const step of slowSteps.slice(0, 5)) {
  console.log(`${step.stepType}: ${step.duration}ms`);
}
```
## Data handling

### Truncation

Large data is automatically truncated in traces:

```typescript
import { truncateForTrace } from 'reqon';

// Arrays longer than 100 items are truncated
const largeArray = Array.from({ length: 500 }, (_, i) => i);
const truncated = truncateForTrace(largeArray, 100);
// [0, 1, ..., 99, "[truncated: 400 more items]"]

// Strings longer than 1000 chars are truncated
const longString = 'x'.repeat(5000);
const truncatedStr = truncateForTrace(longString, 100, 1000);
// "xxx...[truncated: 4000 more chars]"
```
### Circular references

Circular references are safely handled:

```typescript
import { safeClone } from 'reqon';

const obj = { name: 'test' };
obj.self = obj;

const cloned = safeClone(obj);
// { name: 'test', self: '[circular reference]' }
```
## Best practices

- Use `full` for debugging - when you need complete visibility
- Use `minimal` for production - lower overhead, basic flow tracking
- Clean up old traces - traces can grow large over time
- Combine with checkpoint - full durability and debuggability
## Performance impact

| Mode | Storage | CPU | Memory |
|---|---|---|---|
| `full` | ~1KB/step | Moderate | Moderate |
| `minimal` | ~100B/step | Low | Low |
| None | None | None | None |
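These per-step figures make rough capacity planning possible. A back-of-the-envelope sketch, using the ~1KB/step estimate for `full` mode from the table above (the real per-step size depends on variable and store sizes):

```typescript
// Rough trace storage estimate: steps per run × number of runs × bytes per step.
// 1024 bytes/step approximates the 'full' mode figure from the table above.
function traceStorageBytes(
  stepsPerRun: number,
  runs: number,
  bytesPerStep = 1024
): number {
  return stepsPerRun * runs * bytesPerStep;
}

// A 50-step mission run 1,000 times at ~1KB/step is about 50MB of traces,
// which is why pruning old traces matters in production.
```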
For production with many executions:
```typescript
// Set up trace cleanup
const store = new FileTraceStore('.vague-data/traces', {
  maxTraces: 100,  // Keep the last 100 traces
  maxAge: '7d'     // Delete traces older than 7 days
});
```