WEB SEARCH API
FOR AI BUILDERS
Give your agents, scripts and LLM pipelines real-time web access. No scraping — just clean JSON ready for your context window.
Try Prismfy API
Low Latency
Fast enough for real-time agent loops and streaming LLM responses.
LLM-Ready JSON
Every result returns title, URL, snippet and relevance score — token-efficient and ready for your prompt.
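For example, a result with those four fields can be packed into a single compact prompt line instead of raw JSON. A purely illustrative sketch (the helper `to_prompt_line` and the 200-character cap are not part of the API):

```python
# Illustrative only: pack one search result into a token-lean prompt line
# rather than dumping raw JSON into the context window.
def to_prompt_line(result: dict) -> str:
    # Truncate the snippet so each result has a predictable token cost.
    snippet = result["content"][:200]
    return f"[{result['score']:.2f}] {result['title']} ({result['url']}) - {snippet}"
```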
Multi-Engine Merge
Query different search engines in one call. Results are merged, ranked and returned as a single array.
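Prismfy performs this merge server-side; as a purely illustrative sketch of the behavior, a merge step roughly dedupes by URL, keeps the best score per page, and sorts descending:

```python
# Client-side illustration of a multi-engine merge (the API does this for
# you): dedupe by URL, keep the highest score, return one ranked list.
def merge_results(*engine_batches):
    best = {}
    for batch in engine_batches:
        for r in batch:
            seen = best.get(r["url"])
            if seen is None or r["score"] > seen["score"]:
                best[r["url"]] = r
    return sorted(best.values(), key=lambda r: r["score"], reverse=True)
```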
Your API Control Panel
Monitor usage, rotate keys and inspect logs — all in one place.
Queries Remaining
—
Sign up to get 3,000 / mo
Free tier
Recent Queries
"claude code mcp tools 2025"
2 min ago
"openai gpt-5 release date"
4 min ago
"YC W25 batch companies list"
8 min ago
Built for how AI developers actually work
From Claude Code plugins to production RAG pipelines — drop Prismfy in anywhere.
Web Search inside Claude Code
Register Prismfy as an MCP tool so Claude can browse the web during your coding sessions without leaving the terminal.
// .claude/settings.json
{
  "mcpServers": {
    "prismfy": {
      "command": "npx prismfy-mcp",
      "env": {
        "PRISMFY_API_KEY": "ss_live_..."
      }
    }
  }
}
LLM Tool / Function Calling
Expose Prismfy as a tool to GPT-4o, Claude or Gemini. Your agent calls real-time web search only when it needs fresh data.
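With OpenAI-style function calling, you first describe the tool to the model as a JSON schema. A minimal Python sketch of that shape (the tool name `web_search` and its single `query` parameter are assumptions chosen to match the handler shown here):

```python
# Sketch of an OpenAI-style tool definition for a web_search handler.
# The model decides when to call it; your own code runs the search.
web_search_tool = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the live web and return title/url/snippet results.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The search query."}
            },
            "required": ["query"],
        },
    },
}
```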
import { Prismfy } from '@prismfy/prismfy'
const client = new Prismfy({ apiKey: 'ss_live_...' })
// tool handler
async function web_search({ query }) {
  const result = await client.search(query)
  return result.results.map(r => ({
    title: r.title,
    url: r.url,
    snippet: r.content
  }))
}
Company & Lead Research
Build scripts that automatically search, qualify and enrich company data — funding rounds, hiring signals, news — all in one pipeline.
from prismfy import Prismfy
client = Prismfy(api_key="ss_live_...")
companies = ["Stripe", "Linear", "Vercel"]
for name in companies:
    result = client.search(
        f"{name} funding CEO latest news",
        engines=["google", "hackernews", "reddit"]
    )
    sheet.append_row([name, result.results[0].url])
Real-time RAG Context
Fetch fresh web results before every LLM call. Keep your knowledge base current without manual updates or expensive re-indexing.
from prismfy import Prismfy
client = Prismfy() # reads PRISMFY_API_KEY from env
def get_context(query: str) -> str:
    result = client.search(
        query,
        engines=["google", "arxiv"],
        time_range="week"
    )
    return "\n".join(
        f"{r.title}: {r.content[:300]}"
        for r in result.results
    )
News & Signal Monitoring
Schedule keyword searches on a cron, pipe results into Slack, Discord or your AI pipeline. Never miss a competitor move or market signal.
from prismfy import Prismfy
client = Prismfy(api_key="ss_live_...")
# runs every 15 min via cron
result = client.search(
    "OpenAI GPT-5 release date",
    engines=["google", "hackernews", "reddit"],
    time_range="day"
)
if result.results:
    slack.post(result.results[0].title)
Build Your Own Perplexity
Combine Prismfy with any LLM to ship answer engines, research assistants, or chatbots with live web access in under a day.
import { Prismfy } from '@prismfy/prismfy'
import OpenAI from 'openai'
const search = new Prismfy({ apiKey: 'ss_live_...' })
const openai = new OpenAI()
const result = await search.search(userQuery)
const context = result.results
  .map(r => `${r.title}\n${r.content}`)
  .join('\n\n')
const answer = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{
    role: 'user',
    content: `${context}\n\nQ: ${userQuery}`
  }]
})
Request
curl -X POST https://api.prismfy.io/v1/search \
  -H "Authorization: Bearer ss_live_xxxx" \
  -H "Content-Type: application/json" \
  -d '{"query":"Claude 4 release notes","engines":["google","bing"]}'
Response
{
  "query": "Claude 4 release notes",
  "cached": false,
  "meta": {
    "taskId": "tsk_abc123",
    "durationMs": 1400,
    "engines": ["google", "bing"],
    "page": 1
  },
  "results": [
    {
      "title": "Claude 4 — Anthropic",
      "url": "https://anthropic.com/news/claude-4",
      "content": "Anthropic releases Claude 4 with...",
      "engine": "google",
      "score": 0.98
    }
  ]
}
Simple, transparent pricing
Start free. Ship fast. Scale when you need to.
Pay As You Go
- ✓No monthly commitment
- ✓From $10 top-up
- ✓60 RPM
- ✓All 15 search engines
Data Digger
- ✓10,000 queries/month
- ✓$2.50 per extra 1,000
- ✓60 RPM
- ✓All 15 search engines
- ✓Priority support
Enterprise
- ✓Unlimited queries
- ✓300 RPM
- ✓Custom SLA
- ✓Dedicated account manager
- ✓24/7 support
Stop scraping. Start building.
Free tier · 3,000 queries/month · No credit card needed