anthropic Extension

This extension allows rill scripts to access Anthropic’s Claude API. The host binds it to a namespace with prefixFunctions('llm', ext), and scripts call llm::message(), llm::embed(), and so on. Switching to OpenAI or Google means changing one line of host config. Scripts stay identical.

Five functions cover the core LLM operations. message sends a single prompt. messages continues a multi-turn conversation. embed and embed_batch generate vector embeddings. tool_loop runs an agentic loop in which the model calls rill closures as tools. The chat functions (message, messages, tool_loop) all return the same dict shape (content, model, usage, stop_reason, id, messages), so scripts work across providers without changes.

The host sets API key, model, and temperature at creation time — scripts never handle credentials. Each call emits a structured event (anthropic:message, anthropic:tool_call) for host-side logging and metrics.

Quick Start

import { createRuntimeContext, prefixFunctions } from '@rcrsr/rill';
import { createAnthropicExtension } from '@rcrsr/rill-ext-anthropic';

const ext = createAnthropicExtension({
  api_key: process.env.ANTHROPIC_API_KEY!,
  model: 'claude-sonnet-4-5-20250929',
});
const functions = prefixFunctions('anthropic', ext);
const ctx = createRuntimeContext({ functions });

// Script: anthropic::message("Explain TCP handshakes")
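
Under the hood, prefixing is just a re-keying of the extension's function map. A toy model of that step, assuming prefixFunctions maps each exported name to 'prefix::name' — the real @rcrsr/rill implementation may differ, and fakeExt below is a stand-in, not the actual extension:

```typescript
// Toy model of prefixFunctions: re-key an extension's function map
// under "prefix::name". Illustrative only -- the real @rcrsr/rill
// implementation may differ.
type RillFn = (...args: unknown[]) => unknown;

function prefixFunctions(
  prefix: string,
  ext: Record<string, RillFn>,
): Record<string, RillFn> {
  const out: Record<string, RillFn> = {};
  for (const [name, fn] of Object.entries(ext)) {
    out[`${prefix}::${name}`] = fn;
  }
  return out;
}

// Stand-in extension for illustration.
const fakeExt: Record<string, RillFn> = {
  message: (text) => ({ content: `echo: ${text}` }),
  embed: () => ({ dimensions: 3 }),
};

const fns = prefixFunctions("llm", fakeExt);
console.log(Object.keys(fns)); // keys: llm::message, llm::embed
```

This is why swapping providers is a one-line host change: scripts only ever see the prefixed names, never the extension behind them.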

Configuration

const ext = createAnthropicExtension({
  api_key: process.env.ANTHROPIC_API_KEY!,
  model: 'claude-sonnet-4-5-20250929',
  temperature: 0.7,
  max_tokens: 4096,
  system: 'You are a helpful assistant.',
  embed_model: 'voyage-3',
  base_url: 'https://custom-endpoint.example.com',
  max_retries: 3,
  timeout: 30000,
});
Parameter    Type    Default  Description
api_key      string  —        API key (required)
model        string  —        Model identifier (required)
temperature  number  —        Response randomness, 0.0–2.0
max_tokens   number  4096     Maximum response tokens
system       string  —        Default system prompt
embed_model  string  —        Model for embed operations
base_url     string  —        Custom API endpoint
max_retries  number  —        Retry attempts for failures
timeout      number  —        Request timeout in ms
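
max_retries and timeout govern transport behavior: transient failures are retried before an error ever surfaces to the script. A simplified model of that retry policy — not the actual SDK internals; the exponential backoff schedule and baseDelayMs parameter are illustrative assumptions:

```typescript
// Simplified retry-with-backoff model showing how a max_retries
// setting is typically applied. Not the actual Anthropic SDK
// internals; the backoff schedule here is an assumption.
async function withRetries<T>(
  call: () => Promise<T>,
  maxRetries: number,
  baseDelayMs = 100,
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await call();
    } catch (err) {
      lastErr = err;
      if (attempt === maxRetries) break;
      // Exponential backoff between attempts: 100ms, 200ms, 400ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastErr;
}
```

With max_retries: 3, a call is attempted at most four times before the script sees a RuntimeError.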

Functions

message(text, options?) — Send a single prompt:

anthropic::message("Explain TCP handshakes") => $result
$result.content      # Response text
$result.model        # Model used
$result.usage.input  # Input tokens
$result.usage.output # Output tokens

messages(messages, options?) — Multi-turn conversation:

[
  [role: "user", content: "What is rill?"],
  [role: "assistant", content: "A scripting language."],
  [role: "user", content: "Tell me more."],
] -> anthropic::messages => $result
$result.content   # Latest response
$result.messages  # Full conversation history

embed(text) — Generate text embedding:

anthropic::embed("sample text") => $vec
$vec.dimensions  # Vector size
$vec.model       # Embedding model used

embed_batch(texts) — Batch embeddings:

["first text", "second text"] -> anthropic::embed_batch => $vectors
$vectors.len  # Number of vectors
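
Comparing the returned vectors is host- or script-side work; cosine similarity is the usual measure. A plain TypeScript helper operating on raw number arrays — the exact shape of the vectors inside the result dict is an assumption here:

```typescript
// Cosine similarity between two embedding vectors -- the standard way
// to compare vectors like those returned by embed / embed_batch.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1 (identical direction)
console.log(cosineSimilarity([1, 0], [0, 1])); // 0 (orthogonal)
```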

tool_loop(prompt, options?) — Agentic tool-use loop:

tool("get_weather", "Get current weather", [city: "string"], {
  "Weather in {$city}: 72F sunny"
}) => $weather_tool

anthropic::tool_loop("What's the weather in Paris?", [
  tools: [$weather_tool],
  max_turns: 5,
]) => $result
$result.content  # Final response
$result.turns    # Number of LLM round-trips
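
The loop's control flow is the standard agentic shape: send the conversation, run any tool the model requests, append the result, and repeat until the model produces a final answer or max_turns is reached. A schematic with a stubbed model call — the ModelReply type and the stub are illustrative assumptions, not the extension's real code:

```typescript
// Schematic tool loop: call the model, execute requested tools, feed
// results back, stop on a final answer or when max_turns is exceeded.
// The ModelReply type and the model stub are illustrative assumptions.
type ToolCall = { tool: string; args: Record<string, string> };
type ModelReply =
  | { kind: "final"; content: string }
  | { kind: "tool_call"; call: ToolCall };

function toolLoop(
  callModel: (history: string[]) => ModelReply,
  tools: Record<string, (args: Record<string, string>) => string>,
  maxTurns: number,
): { content: string; turns: number } {
  const history: string[] = [];
  for (let turn = 1; turn <= maxTurns; turn++) {
    const reply = callModel(history);
    if (reply.kind === "final") {
      return { content: reply.content, turns: turn };
    }
    const fn = tools[reply.call.tool];
    if (!fn) throw new Error(`unknown tool '${reply.call.tool}'`);
    history.push(fn(reply.call.args)); // tool result goes back to the model
  }
  throw new Error(`tool loop exceeded ${maxTurns} turns`);
}
```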

Per-Call Options

Option      Type    Applies To                    Description
system      string  message, messages, tool_loop  Override system prompt
max_tokens  number  message, messages, tool_loop  Override max tokens
tools       list    tool_loop (required)          Tool descriptors
max_turns   number  tool_loop                     Limit LLM round-trips
max_errors  number  tool_loop                     Consecutive error limit (default: 3)
messages    list    tool_loop                     Prepend conversation history

Result Dict

All functions except embed and embed_batch return:

Field         Type    Description
content       string  Response text
model         string  Model identifier
usage.input   number  Input token count
usage.output  number  Output token count
stop_reason   string  Why generation stopped
id            string  Request identifier
messages      list    Conversation history

The tool_loop result adds turns (number of LLM round-trips).

Error Behavior

Validation errors (before API call):

  • Empty prompt → RuntimeError RILL-R004: prompt text cannot be empty
  • Missing role → RuntimeError RILL-R004: message missing required 'role' field
  • Invalid role → RuntimeError RILL-R004: invalid role '{value}'
  • Missing content → RuntimeError RILL-R004: {role} message requires 'content'
  • No embed_model → RuntimeError RILL-R004: embed_model not configured
  • Missing tools → RuntimeError RILL-R004: tool_loop requires 'tools' option

API errors (from provider):

  • Rate limit → RuntimeError RILL-R004: Anthropic: rate limit
  • Auth failure → RuntimeError RILL-R004: Anthropic: authentication failed (401)
  • Timeout → RuntimeError RILL-R004: Anthropic: request timeout
  • Other → RuntimeError RILL-R004: Anthropic: {detail} ({status})

Tool loop errors:

  • Unknown tool → RuntimeError RILL-R004: unknown tool '{name}'
  • Error limit → RuntimeError RILL-R004: tool loop aborted after {n} consecutive errors

Events

Event                  Emitted When
anthropic:message      message() completes
anthropic:messages     messages() completes
anthropic:embed        embed() completes
anthropic:embed_batch  embed_batch() completes
anthropic:tool_loop    tool_loop() completes
anthropic:tool_call    Tool invoked during loop
anthropic:tool_result  Tool returns during loop
anthropic:error        Any operation fails

Test Host

A runnable example at packages/ext/anthropic/examples/test-host.ts demonstrates integration:

# Set API key
export ANTHROPIC_API_KEY="sk-ant-..."

# Built-in demo
pnpm exec tsx examples/test-host.ts

# Inline expression
pnpm exec tsx examples/test-host.ts -e 'llm::message("Tell me a joke") -> $.content -> log'

# Script file
pnpm exec tsx examples/test-host.ts script.rill

Override model or endpoint with ANTHROPIC_MODEL and ANTHROPIC_BASE_URL.

See Also