Debugging

A comprehensive guide to finding and fixing issues in Constellation pipelines.

Overview

Constellation provides multiple debugging tools:

| Tool | Use Case |
|------|----------|
| Dashboard | Visual DAG inspection, node status, execution history |
| Step-through Execution | Batch-by-batch execution with state inspection |
| Execution Tracker | Per-node timing and value capture |
| Debug Mode | Runtime type validation with configurable levels |
| API Endpoints | Programmatic access to execution state |
| LSP Diagnostics | Real-time editor feedback |

Start with the Dashboard

The dashboard at http://localhost:8080/dashboard is the fastest way to debug. Run your script, see which nodes fail (red border), and click them to inspect values and errors.

1. Common Debugging Patterns

Pattern: Isolate the Failing Step

When a pipeline fails, narrow down which step is causing the issue.

# debug-isolate.cst
# Comment out later steps to find where failure occurs

@example("test-input")
in data: String

step1 = ProcessA(data)
# step2 = ProcessB(step1) # Commented out
# step3 = ProcessC(step2) # Commented out

out step1 # Test each step individually

Pattern: Add Intermediate Outputs

Expose intermediate values to understand data flow.

# debug-intermediate.cst

@example("raw data with issues")
in rawData: String

cleaned = Trim(rawData)
normalized = Lowercase(cleaned)
processed = ParseJson(normalized)

# Expose intermediate values for debugging
out cleaned # See what Trim produces
out normalized # See what Lowercase produces
out processed # Final output

Pattern: Type Inspection

When type errors occur, explicitly annotate types to catch mismatches early.

# debug-types.cst

@example(42)
in value: Int

# Explicit type annotation helps catch errors
result: String = IntToString(value)

out result

2. Step-Through Execution

Constellation supports batch-by-batch execution for debugging, where modules at the same dependency level execute together.

How Batches Work

Batch 0: Input data nodes (provided values)
Batch 1: Modules with no dependencies
Batch 2: Modules depending on Batch 1 outputs
Batch 3: Modules depending on Batch 2 outputs
...
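The batch assignment above is just longest-path depth in the dependency DAG. A small Python sketch of the rule (the dependency map and function names are illustrative, not Constellation internals):

```python
from collections import defaultdict

def batch_levels(deps, inputs):
    """Assign each node to a batch. Inputs are batch 0; a module runs
    one batch after the deepest of its dependencies (batch 1 if none)."""
    levels = {name: 0 for name in inputs}

    def level(node):
        if node not in levels:
            parents = deps.get(node, [])
            levels[node] = 1 if not parents else 1 + max(level(p) for p in parents)
        return levels[node]

    for node in deps:
        level(node)

    batches = defaultdict(list)
    for node, lvl in sorted(levels.items()):
        batches[lvl].append(node)
    return dict(batches)

# Dependency map for the isolate-the-failing-step example earlier
deps = {"step1": ["data"], "step2": ["step1"], "step3": ["step2"]}
print(batch_levels(deps, inputs={"data"}))
# {0: ['data'], 1: ['step1'], 2: ['step2'], 3: ['step3']}
```

Nodes that land in the same batch have no data dependency on each other, which is why they can execute together.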

Viewing Batch Structure

The DAG visualization in the dashboard shows the execution hierarchy. Nodes at the same horizontal level execute in the same batch.

Node States During Execution

| State | Description | Visual Indicator |
|-------|-------------|------------------|
| Pending | Not yet executed | Gray border |
| Running | Currently executing | Blue dashed border, pulsing |
| Completed | Successfully finished | Green solid border |
| Failed | Execution error | Red border |

Using the Dashboard for Step-Through

  1. Open the Dashboard at http://localhost:8080/dashboard
  2. Select a script from the file browser
  3. View the DAG to understand execution order
  4. Run the script and observe node states update
  5. Click nodes to see computed values

3. Inspecting Intermediate Values

Via the Dashboard

When you run a script through the dashboard:

  1. Each node shows its execution status with a colored border
  2. Hover over any node to see a tooltip with:
    • Node kind (Input, Operation, Output, etc.)
    • Type signature
    • Computed value (truncated if large)
    • Execution duration
  3. Click a node to open the details panel with full information

Via the API

Get detailed execution information programmatically:

# Execute and get outputs
curl -X POST http://localhost:8080/run \
  -H "Content-Type: application/json" \
  -d '{
    "source": "in x: Int\nresult = Add(x, 10)\nout result",
    "inputs": {"x": 5}
  }'

Response includes execution status:

{
  "success": true,
  "outputs": {"result": 15},
  "status": "completed",
  "executionId": "550e8400-e29b-41d4-a716-446655440000"
}
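When scripting against /run, it helps to fail fast on anything but success. A small helper built around the response shape shown above (the helper itself is illustrative, not part of Constellation):

```python
def check_run_response(resp):
    """Return the outputs from a /run response, raising on failure."""
    if not resp.get("success"):
        raise RuntimeError(
            f"execution {resp.get('executionId', '?')} "
            f"{resp.get('status', 'failed')}: {resp.get('error', 'unknown error')}"
        )
    return resp["outputs"]

# The sample response from the docs above
sample = {
    "success": True,
    "outputs": {"result": 15},
    "status": "completed",
    "executionId": "550e8400-e29b-41d4-a716-446655440000",
}
print(check_run_response(sample))  # {'result': 15}
```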

Via Execution History

# List recent executions
curl http://localhost:8080/api/v1/executions?limit=10

# Get details for a specific execution
curl http://localhost:8080/api/v1/executions/{executionId}

# Get DAG visualization with execution state
curl http://localhost:8080/api/v1/executions/{executionId}/dag

4. Using the Dashboard for Debugging

Scripts View

The main debugging interface:

+--------------+------------------------+-------------------+--------------+
| File Browser | Inputs Panel           | Outputs Panel     | Node Details |
+--------------+------------------------+-------------------+--------------+
| docs/        | name:  [Alice]         | {"greeting": ...} | Kind:  Op    |
|  example.cst | count: [5]             |                   | Type:  Str   |
|  debug.cst   |                        |                   | Value: ...   |
|              | [Run]                  |                   |              |
+--------------+------------------------+-------------------+--------------+
| Code Editor                           | DAG (visual)                     |
| in name: String                       |              [O]                 |
| greeting = Hello(name)                |               |                  |
| out greeting                          |              [O]                 |
+---------------------------------------+----------------------------------+

Key Dashboard Features

| Feature | How to Use | Debugging Value |
|---------|------------|-----------------|
| File Browser | Navigate to .cst files | Find the script to debug |
| Inputs Panel | Fill in test values | Test with different inputs |
| Outputs Panel | View execution results | See final output or error |
| Code Editor | Edit script in place | Make quick fixes |
| DAG Visualization | View pipeline structure | Understand data flow |
| Node Details | Click any node | Inspect intermediate values |
| Execution History | Switch to History view | Compare past executions |

Live Preview

As you type in the code editor:

  • The DAG updates in real-time
  • Compilation errors appear in the error banner
  • Input forms update to match declared inputs

Keyboard Shortcuts

| Shortcut | Action |
|----------|--------|
| Ctrl+Enter | Execute the current script |
| + / - | Zoom DAG in / out |
| Fit | Fit DAG to viewport |
| TB / LR | Toggle layout direction |

5. Troubleshooting Failed Pipelines

Execution Status Values

| Status | Meaning | Action |
|--------|---------|--------|
| completed | All outputs computed | Success |
| suspended | Waiting for more inputs | Provide missing inputs |
| failed | Error during execution | Check error message |

Handling Suspended Executions

When a pipeline suspends due to missing inputs:

# List suspended executions
curl http://localhost:8080/executions

# Resume with additional inputs
curl -X POST http://localhost:8080/executions/{id}/resume \
  -H "Content-Type: application/json" \
  -d '{"additionalInputs": {"missingInput": "value"}}'
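The same resume call can be prepared from Python using only the standard library (the execution ID below is a placeholder):

```python
import json
import urllib.request

def resume_request(base_url, execution_id, additional_inputs):
    """Build (but do not send) the resume request for a suspended execution."""
    body = json.dumps({"additionalInputs": additional_inputs}).encode()
    return urllib.request.Request(
        f"{base_url}/executions/{execution_id}/resume",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# "abc-123" is a placeholder execution ID
req = resume_request("http://localhost:8080", "abc-123", {"missingInput": "value"})
print(req.full_url)  # http://localhost:8080/executions/abc-123/resume
# urllib.request.urlopen(req) sends it once the server is running
```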

Common Failure Patterns

| Symptom | Likely Cause | Solution |
|---------|--------------|----------|
| Missing required input | Input not provided | Add to inputs JSON |
| Input type mismatch | Wrong JSON type | Match expected type |
| Module not found | Typo in module name | Check case-sensitive spelling |
| Output not found | Output name mismatch | Verify out declaration |
| Compilation failed | Syntax error | Check LSP diagnostics |

6. Reading Error Messages

Compilation Errors

Compilation errors include source location:

Line 3, Column 15: Type mismatch
Expected: Int
Found: String

result = Add(name, 10)
             ^^^^

Interpretation:

  • Line/Column: Exact location in source
  • Expected/Found: The type conflict
  • Context: The offending expression
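The caret underline is mechanical: spaces up to the reported column, then one caret per character of the offending token. A hypothetical formatter, not part of Constellation, shows the rule:

```python
def underline(source_line, column, length):
    """Render a source line with carets under the offending span.
    `column` is 1-indexed, matching the compiler's error messages."""
    return source_line + "\n" + " " * (column - 1) + "^" * length

print(underline("result = Add(name, 10)", 14, 4))
# result = Add(name, 10)
#              ^^^^
```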

Runtime Errors

Runtime errors show the execution context:

{
  "success": false,
  "error": "Module 'FetchUser' failed: Connection refused to api.example.com",
  "status": "failed"
}

Interpretation:

  • Module name: Which step failed
  • Error message: Root cause from the module

API Error Responses

HTTP errors include structured information:

{
  "error": "InputError",
  "message": "Input 'count' expected Int, got String",
  "requestId": "abc-123"
}

Use the requestId to correlate failed requests with server logs.


7. Debugging Type Errors

Common Type Error Patterns

Mismatched Parameter Types

# ERROR: Add expects Int, got String
in text: String
result = Add(text, 10) # Type error!

Fix: Use type conversion:

in text: String
num = ParseInt(text)
result = Add(num, 10)
out result

Record Field Access Errors

# ERROR: Field 'email' not found
in user: { name: String }
email = user.email # Error: field doesn't exist

Fix: Check the record type definition:

in user: { name: String, email: String }
email = user.email
out email

Optional Type Handling

# ERROR: Cannot use Optional<String> where String expected
in maybeValue: Optional<String>
result = Uppercase(maybeValue) # Error!

Fix: Use coalesce to provide default:

in maybeValue: Optional<String>
value = maybeValue ?? "default"
result = Uppercase(value)
out result

Warning:

Always coalesce Optional<T> before passing to modules. Uppercase(maybeValue) will fail; use Uppercase(maybeValue ?? "default") instead.

Using Debug Mode for Type Validation

Set the CONSTELLATION_DEBUG environment variable:

| Value | Behavior |
|-------|----------|
| off | No validation (zero overhead) |
| errors | Log violations, continue execution (default) |
| full | Throw on violations (development) |

# Development mode - strict type checking
export CONSTELLATION_DEBUG=full
make server

# Production mode - log only
export CONSTELLATION_DEBUG=errors
make server
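The three levels can be mimicked in client-side tooling with a small dispatcher. This is a Python sketch of the documented semantics, not Constellation's implementation:

```python
import os

def report_violation(message, mode=None):
    """Apply the documented CONSTELLATION_DEBUG semantics to one violation."""
    mode = mode or os.environ.get("CONSTELLATION_DEBUG", "errors")
    if mode == "off":
        return None                          # zero overhead: ignore entirely
    if mode == "errors":
        print(f"type violation: {message}")  # log, but keep executing
        return None
    if mode == "full":
        raise TypeError(message)             # fail fast during development
    raise ValueError(f"unknown CONSTELLATION_DEBUG value: {mode!r}")

report_violation("Input 'count' expected Int, got String", mode="errors")
```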

8. Performance Debugging

Identifying Slow Modules

After execution, check the /metrics endpoint:

curl http://localhost:8080/metrics

Response includes timing data:

{
  "cache": {
    "hits": 150,
    "misses": 12,
    "hitRate": 0.926,
    "evictions": 0
  },
  "server": {
    "uptime_seconds": 3600,
    "requests_total": 1000
  }
}
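hitRate is derivable from hits and misses; checking the arithmetic on the sample response:

```python
# Cache section of the sample /metrics response
metrics = {"cache": {"hits": 150, "misses": 12, "hitRate": 0.926, "evictions": 0}}

cache = metrics["cache"]
hit_rate = cache["hits"] / (cache["hits"] + cache["misses"])
print(round(hit_rate, 3))  # 0.926 -- matches the reported hitRate
```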

Dashboard Node Timing

In the dashboard:

  1. Run your pipeline
  2. Click on any node
  3. View durationMs in the details panel
  4. Identify nodes taking longer than expected

Execution History Analysis

Use the History view to compare execution times:

# Get execution with timing
curl http://localhost:8080/api/v1/executions/{id}

Response includes:

{
  "startTime": 1704067200000,
  "endTime": 1704067201234,
  "nodeCount": 15
}
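Since startTime and endTime are epoch milliseconds, total duration is a subtraction:

```python
# Timing fields from the sample execution response above
execution = {"startTime": 1704067200000, "endTime": 1704067201234, "nodeCount": 15}

duration_ms = execution["endTime"] - execution["startTime"]
print(duration_ms)  # 1234 -- total wall-clock time in milliseconds
```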

Common Performance Issues

| Issue | Symptom | Solution |
|-------|---------|----------|
| No caching | Same requests repeated | Add cache: 60s option |
| Serial execution | Long total time | Pipeline parallelizes automatically |
| Large payloads | Slow serialization | Reduce data size, use projection |
| External API slow | High latency on one node | Add timeout, retry options |

Adding Caching for Performance

# Cache expensive operations
result = ExpensiveService(input) with cache: 300s

9. Logging and Tracing

Server Logs

The server logs important events to stderr. Control logging verbosity with the CONSTELLATION_LOG_LEVEL environment variable:

# Development: see all debug information
export CONSTELLATION_LOG_LEVEL=DEBUG
make server 2>&1 | tee server.log

# Production: minimal logging (default)
make server

# Quiet: errors only
export CONSTELLATION_LOG_LEVEL=ERROR
make server

Log categories:

  • DEBUG: Cache hits/misses, compilation details (hidden by default)
  • INFO: Pipeline execution lifecycle, starts and completions
  • WARN: Module timeouts
  • ERROR: Failures with full stacktraces
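If you capture server.log as above, a level filter is easy to script. This Python sketch assumes each line starts with its level name, which may not match your exact log format:

```python
LEVELS = ["DEBUG", "INFO", "WARN", "ERROR"]

def filter_log(lines, min_level="INFO"):
    """Keep log lines at or above min_level (prefix format is an assumption)."""
    threshold = LEVELS.index(min_level)
    return [
        line for line in lines
        if any(line.startswith(lvl) and LEVELS.index(lvl) >= threshold
               for lvl in LEVELS)
    ]

log = ["DEBUG cache hit", "INFO pipeline started", "ERROR module failed"]
print(filter_log(log, "INFO"))  # ['INFO pipeline started', 'ERROR module failed']
```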

See also: Logging Configuration for detailed setup guide, library integration, and production deployment strategies.

Request Tracing

Add request IDs for correlation:

curl -X POST http://localhost:8080/run \
  -H "X-Request-ID: my-trace-id-123" \
  -H "Content-Type: application/json" \
  -d '{"source": "...", "inputs": {}}'

Error responses include the request ID:

{
  "error": "ExecutionError",
  "message": "...",
  "requestId": "my-trace-id-123"
}
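Generating a fresh X-Request-ID per call keeps correlation unambiguous. A minimal sketch using Python's uuid module:

```python
import uuid

def traced_headers():
    """Headers for a /run call carrying a fresh correlation ID."""
    return {
        "Content-Type": "application/json",
        "X-Request-ID": str(uuid.uuid4()),  # unique per call
    }

headers = traced_headers()
print(headers["X-Request-ID"])  # a random UUID, echoed back in error responses
```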

Execution Tracker

The runtime includes an execution tracker that captures per-node data:

// Programmatic access to execution traces
val tracker = ExecutionTracker.create
val execId = tracker.startExecution("my-dag")
// ... execution happens ...
val trace = tracker.getTrace(execId)
// trace.nodeResults contains per-node status, values, timing

Traces include:

  • Node status: Pending, Running, Completed, Failed
  • Computed values: JSON-serialized (truncated if large)
  • Timing: Per-node durationMs
  • Errors: Error messages for failed nodes
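For intuition, here is a rough Python analogue of that bookkeeping; the real API is the Scala one above, and all names here are illustrative:

```python
import uuid

class PyExecutionTracker:
    """Minimal per-node trace collector mirroring the fields listed above."""

    def __init__(self):
        self.traces = {}

    def start_execution(self, dag_name):
        exec_id = str(uuid.uuid4())
        self.traces[exec_id] = {"dag": dag_name, "nodeResults": {}}
        return exec_id

    def record_node(self, exec_id, node, status, value=None, duration_ms=0, error=None):
        self.traces[exec_id]["nodeResults"][node] = {
            "status": status,
            "value": value,
            "durationMs": duration_ms,
            "error": error,
        }

    def get_trace(self, exec_id):
        return self.traces[exec_id]

tracker = PyExecutionTracker()
exec_id = tracker.start_execution("my-dag")
tracker.record_node(exec_id, "step1", "Completed", value=15, duration_ms=3)
print(tracker.get_trace(exec_id)["nodeResults"]["step1"]["status"])  # Completed
```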

10. LSP Diagnostics

The Language Server Protocol integration provides real-time feedback in editors.

VSCode Extension

Install the Constellation VSCode extension for:

  • Syntax highlighting for .cst files
  • Real-time error diagnostics as you type
  • Autocomplete for module names and stdlib functions
  • Hover documentation for types and functions
  • Run shortcut (Ctrl+Shift+R) to execute scripts

Diagnostic Categories

| Severity | Icon | Meaning |
|----------|------|---------|
| Error | Red | Prevents compilation |
| Warning | Yellow | May cause issues |
| Info | Blue | Suggestions |

Common Diagnostics

| Message | Cause | Fix |
|---------|-------|-----|
| Unknown module 'X' | Module not registered | Check spelling, import namespace |
| Type mismatch | Incompatible types | Add conversion or fix declaration |
| Unused variable | Declared but never used | Remove or use in output |
| Missing output | No out declaration | Add out variableName |

Quick Reference

Debug Checklist

When a pipeline fails:

  1. Check the error message for the failing module and cause
  2. Open the dashboard and load the script
  3. Run the pipeline and observe which nodes fail (red border)
  4. Click the failing node to see the error details
  5. Check the inputs - are they the right type?
  6. Isolate the step by commenting out later operations
  7. Add intermediate outputs to see values at each step
  8. Check the logs for additional context

Environment Variables

| Variable | Purpose | Values |
|----------|---------|--------|
| CONSTELLATION_LOG_LEVEL | Logging verbosity | DEBUG, INFO (default), WARN, ERROR |
| CONSTELLATION_DEBUG | Type validation level | off, errors, full |
| CONSTELLATION_PORT | Server port | Default: 8080 |

Useful API Endpoints

| Endpoint | Method | Purpose |
|----------|--------|---------|
| /run | POST | Execute and debug pipelines |
| /executions | GET | List suspended executions |
| /executions/{id} | GET | Get execution details |
| /executions/{id}/dag | GET | Get DAG with execution state |
| /metrics | GET | Performance metrics |
| /health/detail | GET | Detailed diagnostics |