QuantOracle

Add Reliable Quant Finance Math to Your Vercel AI SDK Agent in 5 Minutes

GPT-4o computing a Black-Scholes price in-context is wrong by ~5%. Greeks drift 5–30%. Kelly fractions get the sign right but the magnitude wrong. And the model doesn't know it's wrong. If your AI SDK app does anything quantitative — options pricing, position sizing, portfolio projections — it's eating that error silently.

Published May 14, 2026

The fix is the standard pattern: don't ask the model to compute, give it a tool that does. The Vercel AI SDK makes this clean — define a tool with a Zod schema and an execute function, and the model invokes it automatically when the user asks something quantitative. @quantoracle/ai-tools is a single npm install that wires 15 of these tools into your agent. Black-Scholes, Kelly Criterion, Monte Carlo, VaR, Sharpe, correlation, impermanent loss, liquidation price — the deterministic math that LLMs fumble.

Here's how to get it working end-to-end, including the Next.js useChat streaming pattern. No API key, no signup, and the free tier covers 13 of the 15 tools.

Install

pnpm add @quantoracle/ai-tools ai zod
pnpm add @ai-sdk/openai  # or any model provider

Zero config beyond that. The package ships ESM + CJS + .d.ts, has zero runtime dependencies of its own, and the API behind it requires no auth for 1,000 calls/day per IP.

First tool call

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { quantoracleTools } from "@quantoracle/ai-tools";

const result = await generateText({
  model: openai("gpt-4o"),
  tools: quantoracleTools(),
  maxSteps: 3,
  prompt:
    "Price a 30-day SPY $500 call with vol=18%, spot=$498, rate=5%. " +
    "Then size a $50k account position with a 55% win rate, " +
    "$1,200 average win, and $800 average loss.",
});

console.log(result.text);

What happens: the model reads its tool list, picks price_option for the first half of the prompt, fills in the parameters, hits the QuantOracle API (~70ms), gets a deterministic response with the option price + full Greeks. Then it picks calculate_kelly for the second half. maxSteps: 3 lets the model do both in one response.
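
As a sanity check, the Kelly math behind that second request is simple enough to verify by hand. This is the textbook formula, not the package's implementation:

```typescript
// Textbook Kelly criterion for asymmetric payoffs: f* = p - q/b,
// where b is the win/loss payoff ratio. A hand-check for the
// calculate_kelly result, not QuantOracle's actual code.
function kellyFraction(winRate: number, avgWin: number, avgLoss: number): number {
  const b = avgWin / avgLoss;          // payoff ratio (1200 / 800 = 1.5)
  return winRate - (1 - winRate) / b;  // fraction of bankroll to stake
}

const f = kellyFraction(0.55, 1200, 800); // ≈ 0.25, i.e. $12,500 of a $50k account
```

Full Kelly is aggressive in practice; many practitioners stake a half or a quarter of the computed fraction.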

The output is structured JSON (not Markdown), so it streams cleanly to UI components and the model can reason over specific fields rather than re-parse strings.

The bundle picker — pick only the tools your agent needs

The default quantoracleTools() call ships 5 tools — the highest-leverage ones for any quantitative agent. That's deliberate: past ~20 tools, LLM tool selection accuracy drops noticeably. So we curate by default and let you opt into more.

// Default — 5 core tools
quantoracleTools()

// Options-focused — 9 tools
quantoracleTools({ include: ["core", "options"] })

// Quant research / risk dashboard — 13 tools
quantoracleTools({ include: ["core", "options", "risk"] })

// DeFi onchain agent — 7 tools (adds IL + liquidation price)
quantoracleTools({ include: ["core", "defi"] })

// All 15 tools
quantoracleTools({ include: "all" })
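
Under the hood, bundle selection amounts to filtering a tool registry by tags. A hypothetical sketch of the idea (the tool names appear in this article, but the tag assignments are assumptions, and this is not the package's source):

```typescript
// Hypothetical tag-based bundle filtering. Tool names are from the
// article; which bundles each belongs to is an assumption.
type Bundle = "core" | "options" | "risk" | "defi";

const registry: Record<string, Bundle[]> = {
  price_option: ["core", "options"],
  calculate_kelly: ["core", "risk"],
  liquidation_price: ["defi"],
};

function pickTools(include: Bundle[] | "all"): string[] {
  if (include === "all") return Object.keys(registry);
  // Keep a tool if any of its tags appears in the requested bundles
  return Object.keys(registry).filter((name) =>
    registry[name].some((tag) => include.includes(tag)),
  );
}
```

Tag filtering lets one tool belong to several bundles without duplication, which is how a "core" tool can also show up in a themed bundle.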


Streaming to a Next.js useChat client

The tools work identically in an API route that streams to useChat:

// app/api/chat/route.ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { quantoracleTools } from "@quantoracle/ai-tools";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    tools: quantoracleTools({ include: ["core", "options"] }),
    maxSteps: 5,
  });

  return result.toDataStreamResponse();
}

On the client side, useChat automatically renders tool invocations as separate parts of the message. Because the tools return structured JSON, you can render them as cards or tables instead of parsing markdown:

'use client';
import { useChat } from "ai/react";

export default function QuantChat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong>
          {m.parts.map((part, i) => {
            if (part.type === "tool-invocation") {
              const inv = part.toolInvocation;
              // Only invocations in the "result" state carry a result;
              // narrowing on state also keeps TypeScript happy.
              if (inv.toolName === "price_option" && inv.state === "result") {
                // OptionCard is your own component — render however you want
                return (
                  <OptionCard
                    key={i}
                    price={inv.result.price}
                    greeks={inv.result.greeks}
                  />
                );
              }
            }
            if (part.type === "text") return <span key={i}>{part.text}</span>;
            return null;
          })}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}

You get the structured benefit of tool calls (typed fields, no string parsing) plus the streaming UX of useChat. The model still drives — it picks the tool, validates the arguments against the Zod schema, calls execute, then synthesizes a natural-language summary citing the structured result.
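
Because tool results arrive untyped over the wire, it's worth narrowing them before rendering. A minimal type-guard sketch, assuming the price and greeks fields used by the OptionCard snippet above (the exact response shape of price_option is an assumption here):

```typescript
// Narrow an unknown tool result before rendering. Field names follow
// the OptionCard usage; treat the full shape as an assumption about
// price_option's response. The check is shallow: it trusts the inner
// greeks fields once the outer shape matches.
interface Greeks { delta: number; gamma: number; theta: number; vega: number }
interface OptionResult { price: number; greeks: Greeks }

function isOptionResult(r: unknown): r is OptionResult {
  if (typeof r !== "object" || r === null) return false;
  const o = r as Record<string, unknown>;
  return typeof o.price === "number" &&
         typeof o.greeks === "object" && o.greeks !== null;
}
```

Inside the render loop, a check like isOptionResult(result) gives you typed access to result.price and result.greeks with no casts.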

Why deterministic math matters here

The pitch isn't "LLMs are bad at math, use a tool." The pitch is more subtle: LLMs are bad at knowing when they're wrong at math. A model that's 5% off on a Black-Scholes price will confidently state the price as if it's exact. A user who builds a hedge on top of that gets a hedge sized to the wrong delta. The error compounds.

QuantOracle's endpoints are pure-function HTTP: same inputs in, same outputs out, every time. Sub-70ms per call. Citation-tested against Hull / Wilmott / Lopez de Prado. There's no model behind them — just deterministic numerical methods, exactly the kind of code that's easy to write once and impossible to do reliably from inside a language model.
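
To make "pure function" concrete, here is what a deterministic Black-Scholes call price looks like, using the textbook formula from Hull. This is a sketch of the kind of code behind price_option, not QuantOracle's actual implementation:

```typescript
// Textbook Black-Scholes call price; a sketch of the sort of pure
// function behind price_option, not QuantOracle's actual code.
function normCdf(x: number): number {
  // Abramowitz & Stegun 7.1.26 erf approximation (error < 1.5e-7)
  const t = 1 / (1 + (0.3275911 * Math.abs(x)) / Math.SQRT2);
  const poly =
    t * (0.254829592 + t * (-0.284496736 + t * (1.421413741 +
    t * (-1.453152027 + t * 1.061405429))));
  const erf = 1 - poly * Math.exp(-(x * x) / 2);
  return x >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
}

function bsCall(S: number, K: number, vol: number, r: number, T: number) {
  const sqrtT = Math.sqrt(T);
  const d1 = (Math.log(S / K) + (r + (vol * vol) / 2) * T) / (vol * sqrtT);
  const d2 = d1 - vol * sqrtT;
  return {
    price: S * normCdf(d1) - K * Math.exp(-r * T) * normCdf(d2),
    delta: normCdf(d1), // the hedge ratio the intro worries about
  };
}

// The article's example: 30-day SPY $500 call, spot $498, vol 18%, rate 5%
const { price, delta } = bsCall(498, 500, 0.18, 0.05, 30 / 365);
// price ≈ 10.28, delta ≈ 0.51: same inputs, same outputs, every run
```

Run it twice and you get bit-identical numbers, which is exactly the property an in-context LLM computation can't offer.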

The mental model: the LLM is the part of your agent that's good at choosing what to compute and how to explain the result. QuantOracle is the part that's good at doing the computation reliably. That's the same division of labor as arithmetic-via-tool-calling, just applied to quant.

Two of the core tools are paid composites: assess_portfolio_risk (Sharpe + Sortino + Calmar + VaR + CVaR + Kelly + Hurst in one call) and recommend_hedge (ranked hedge structures with costs and floors). They cost $0.04 USDC each, settled via x402 on Base or Solana mainnet.

You don't need these to get started — the free tier covers the 13 non-composite tools at 1,000 calls per IP per day. If you do want them, wire an x402 payment handler:

import { quantoraclePaidTools } from "@quantoracle/ai-tools";

const tools = quantoraclePaidTools({
  include: "all",
  x402PayHandler: async (paymentRequirements) => {
    // Sign payment with your viem wallet client (Base)
    // or @solana/web3.js Keypair (Solana).
    return await signX402Header(paymentRequirements);
  },
});

The package retries the request automatically once the payment header is signed. The model sees the result as if it were a normal tool call — no special handling needed in your prompt.
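
The 402-then-retry dance is handled for you, but it helps to picture it. A hypothetical sketch with the transport and signer injected (the real package does this internally; x402 conventionally signals payment terms with HTTP 402 and accepts the signed payment in an X-PAYMENT header):

```typescript
// Hypothetical sketch of a retry-on-402 fetch wrapper. The package
// handles this internally; header name and flow follow the x402
// convention (HTTP 402 -> sign terms -> retry with X-PAYMENT).
type Signer = (requirements: unknown) => Promise<string>;

async function fetchWithX402(
  url: string,
  init: RequestInit,
  sign: Signer,
  fetchImpl: typeof fetch = fetch,
): Promise<Response> {
  const first = await fetchImpl(url, init);
  if (first.status !== 402) return first;  // free-tier tools return 200 directly
  const requirements = await first.json(); // server states its payment terms
  const header = await sign(requirements); // wallet signs the payment payload
  return fetchImpl(url, {
    ...init,
    headers: { ...(init.headers as Record<string, string>), "X-PAYMENT": header },
  });
}
```

The signer is the only wallet-specific piece, which is why the package exposes it as the single x402PayHandler callback.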


When you need more than 15 tools

The full QuantOracle API has 73 endpoints — fixed income, FX/macro, technical indicators, derivative exotics (barrier/Asian/lookback), TVM, GARCH forecasting, cointegration. We expose 15 in this package because that's where LLM tool-selection still works well. For broader coverage there are two options:

  • Call the REST API directly. Every endpoint accepts JSON, returns JSON, and is CORS-enabled. Browse the full catalogue at quantoracle.dev/api-docs.
  • Use the QuantOracle MCP server. Best for general-purpose agents that need full breadth — the model only sees tool definitions for the tools it actually invokes per call, so the context cost stays low even with 73 tools available.