Every tools/call request passes through a deterministic pipeline. The stages are identical for DB-backed and script-backed tools — only the execution step differs. If any stage fails, execution halts immediately and an error response is returned.
[Diagram] Execution pipeline: tool resolution → auth → input transform → execution → output transform → response

Tool resolution

The runtime matches the tool name from the tools/call request to a compiled tool definition loaded at startup. If no tool matches, a JSON-RPC error is returned with code -32601.
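The lookup can be sketched as follows. This is illustrative only; `toolRegistry` and the error shape are assumptions, not the runtime's actual identifiers.

```javascript
// Illustrative sketch of the resolution stage; identifiers are assumptions.
const toolRegistry = new Map([["get-user", { name: "get-user" }]]);

function resolveTool(name) {
  const tool = toolRegistry.get(name);
  if (!tool) {
    // Unknown tool: JSON-RPC "method not found", code -32601.
    const err = new Error(`tool not found: ${name}`);
    err.code = -32601;
    throw err;
  }
  return tool;
}
```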

Authentication

If the tool defines an auth block, the configured plugin is invoked with the request context and policy parameters.
  • Plugin returns nil: authentication passes, pipeline continues.
  • Plugin returns an error: pipeline halts, error returned to caller.
  • No auth block: stage is skipped entirely.
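The three outcomes above amount to this control flow. A minimal sketch, assuming the plugin is callable with the request context and the tool's auth block; the real plugin interface may differ.

```javascript
// Sketch of the auth stage's decision logic; the plugin signature is assumed.
function authStage(tool, plugin, requestContext) {
  if (!tool.auth) return;                        // no auth block: skip stage
  const err = plugin(requestContext, tool.auth); // invoke configured plugin
  if (err) throw err;                            // error: halt pipeline
  // nil/undefined return: authentication passed, pipeline continues
}
```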

Input transform

If mappers.input is configured or discovered by convention, the mapper script’s export default function executes in the embedded runtime. Payload:
{
  "inputs": { "user_id": 42 },
  "tool": "get-user"
}
The returned object replaces the inputs for all subsequent stages. Throwing an error aborts the pipeline. If no input transform is configured, inputs pass through unchanged.
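For example, an input mapper might coerce or validate fields before execution. The coercion shown here is illustrative, not required behavior; in a mapper file this function would be the `export default`.

```javascript
// Example input mapper; in the mapper file this would be `export default`.
function inputMapper({ inputs, tool }) {
  // Coerce user_id to a number and reject obviously invalid values.
  const userId = Number(inputs.user_id);
  if (!Number.isInteger(userId) || userId <= 0) {
    throw new Error(`${tool}: user_id must be a positive integer`);
  }
  // The returned object replaces `inputs` for all later stages.
  return { ...inputs, user_id: userId };
}
```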

Execution

The execution stage branches based on the tool’s configuration. Only one path runs per invocation.

Script-backed tools

When handler is configured, the handler script’s export default function executes in the embedded runtime. The return value becomes the execution result.
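A minimal handler might look like the following. The payload shape is assumed to mirror the mapper payload; confirm against your runtime's handler contract.

```javascript
// Example handler; in the tool's handler file this would be `export default`.
// The payload shape ({ inputs }) is an assumption for illustration.
function handler({ inputs }) {
  // Whatever the handler returns becomes the execution result.
  return { greeting: `Hello, user ${inputs.user_id}` };
}
```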

DB-backed tools

When use and statement are configured, the executor runs six substeps:
  1. Input validation. Validate all declared inputs against type definitions. Missing required inputs and type mismatches produce errors.
  2. Environment substitution. Resolve {{ env.VAR }} placeholders. Missing variables produce errors.
  3. Input substitution. Replace {{ inputs.field }} placeholders with post-transform values. Substitution is textual.
  4. Cache check. If caching is enabled, compute the cache key from tool name + statement hash. On hit, return cached result and skip connector execution.
  5. Connector execution. Execute the statement against the configured connector and return row/object results.
  6. Cache store. On miss, store the result with the configured TTL.
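Substeps 2 and 3 can be pictured as plain text replacement, which is why substitution is described as textual: the executor splices resolved values into the statement string rather than binding typed parameters. A rough sketch, not the executor's actual implementation:

```javascript
// Sketch of substeps 2-3: purely textual {{ env.* }} / {{ inputs.* }} substitution.
function substitute(statement, env, inputs) {
  return statement.replace(/\{\{\s*(env|inputs)\.(\w+)\s*\}\}/g, (_, scope, key) => {
    const source = scope === "env" ? env : inputs;
    if (!(key in source)) throw new Error(`missing ${scope} value: ${key}`);
    return String(source[key]);
  });
}
```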

Output transform

If mappers.output is configured or discovered by convention, the mapper script’s export default function executes in the embedded runtime. Payload:
{
  "results": [{ "id": 42, "name": "Jane Doe", "email": "jane@example.com" }],
  "tool": "get-user"
}
The returned value replaces the result for response serialization. Throwing an error aborts the pipeline. If no output transform is configured, the raw result is used directly.
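For example, an output mapper might strip sensitive fields before serialization. The redaction shown is illustrative; in a mapper file this function would be the `export default`.

```javascript
// Example output mapper; in the mapper file this would be `export default`.
function outputMapper({ results, tool }) {
  // Drop the email field; the returned value replaces the result.
  return results.map(({ email, ...rest }) => rest);
}
```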

Response serialization

The final result is JSON-encoded and wrapped in an MCP content block:
{
  "content": [
    {
      "type": "text",
      "text": "[{\"id\":42,\"name\":\"Jane Doe\",\"email\":\"jane@example.com\"}]"
    }
  ]
}
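The wrapping step above reduces to a one-liner; this sketch reproduces the documented envelope:

```javascript
// Sketch of response serialization: JSON-encode and wrap in an MCP content block.
function serialize(result) {
  return { content: [{ type: "text", text: JSON.stringify(result) }] };
}
```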

Error propagation

Stage                 Error condition                   JSON-RPC code
Tool resolution       Tool name not found               -32601
Authentication        Plugin returns error              -32000
Input transform       Script throws                     -32000
Input validation      Missing input or type mismatch    -32000
Env substitution      Missing variable                  -32000
Connector execution   Query error                       -32000
Handler execution     Script throws                     -32000
Output transform      Script throws                     -32000
Error messages from scripts and connectors are included in the response. Stack traces are logged at debug level but not exposed to callers.
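A failed stage therefore produces a standard JSON-RPC error envelope carrying the code from the table and the script or connector message, with no stack trace. A sketch of that envelope:

```javascript
// Sketch of the JSON-RPC error envelope produced when a stage fails.
// The message is surfaced to the caller; stack traces are not.
function errorResponse(id, code, message) {
  return { jsonrpc: "2.0", id, error: { code, message } };
}
```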

Observability hooks

Each pipeline execution is instrumented with OpenTelemetry spans when tracing is enabled. Span attributes include tool name, execution stage, cache hit/miss, connector type, and duration. Sensitive values are redacted. See Observability.