Every AI agent that queries a database, calls an API, or runs business logic needs the same infrastructure: connection management, input validation, authentication, caching, observability, and a transport protocol that LLMs understand. With Hyperterse, you define your tools in YAML configuration files. The framework compiles, validates, bundles, and serves them as a standards-compliant Model Context Protocol server. You own the data and the logic — the framework handles everything else.

What you can build

You use Hyperterse to expose databases and custom logic as MCP tools that any AI agent can call. A typical tool definition looks like this:
app/tools/get-user/config.terse
description: 'Get user by ID'
use: primary-db
statement: 'SELECT id, name, email FROM users WHERE id = {{ inputs.user_id }}'
inputs:
  user_id:
    type: int
    required: true
auth:
  plugin: api_key
  policy:
    value: '{{ env.API_KEY }}'
This configuration creates an MCP tool called get-user that queries your database, validates inputs, and enforces API key authentication — without writing any application code.
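The use: primary-db line refers to a connection that lives in its own .terse file (step 1 below covers where these files go). This page does not show the schema for connection files, so the sketch that follows is only illustrative: the directory path, the adapter and url keys, and the DATABASE_URL variable are assumptions rather than documented options, kept in the same templating style as the tool config above.
app/connections/primary-db/config.terse
# Hypothetical sketch: the file location and key names are assumptions,
# mirroring the {{ env.* }} templating shown in the tool config above.
description: 'Primary application database'
adapter: postgres
url: '{{ env.DATABASE_URL }}'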

How it works

1. Define your tools

Create .terse YAML files for your database connections and tools. Each directory under app/tools/ becomes one MCP tool.

2. Compile your project

Run hyperterse build to validate configuration, bundle scripts, and serialize everything into a single deployable artifact.

3. Serve your MCP server

Run hyperterse serve to boot from the compiled artifact. Your tools are immediately available at /mcp over Streamable HTTP.

During development, run hyperterse start --watch to skip the separate build step. The framework recompiles automatically on every file change.
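Putting the steps together, a minimal command-line workflow might look like the sketch below. The hyperterse commands are the ones named in the steps above; the port (3000) and the curl smoke test are assumptions, and in practice an MCP client performs the MCP initialize handshake and session handling before listing or calling tools.
# Compile: validate configuration, bundle scripts, produce a single artifact
hyperterse build

# Serve the compiled artifact (tools exposed at /mcp over Streamable HTTP)
hyperterse serve

# During development, skip the separate build step and recompile on change
hyperterse start --watch

# Smoke test (assumptions: port 3000; a real MCP client first sends an
# initialize request and any required session headers)
curl -X POST http://localhost:3000/mcp \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'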
Hyperterse lifecycle: compile-time steps flow into runtime initialization

Key capabilities

Next steps