AI Verification as a Service — independently verify whether AI outputs match their source materials and policy criteria.
The Trust API is an on-demand adversarial AI auditor. Submit an AI output along with the instructions and source materials the AI was given. Get back an independent verification of whether the AI did what it claims — plus optional policy alignment assessment.
https://api.ailaunchpods.com/api/v1/
All API requests require a Bearer token in the Authorization header:
Authorization: Bearer gk_your_api_key_here
Get your API key from the Dashboard.
The Trust API uses a prepaid credit model. Purchase credits from your dashboard, and each API call deducts from your balance based on the complexity of the check.
When your balance reaches zero, calls will return a 402 status. Add more credits to continue. If you have a card on file, your account will be auto-charged when credits run low.
The response from each API call includes your remaining credit balance in the cost object.
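As a sketch of how a client might act on the 402 status and the balance reported in the cost object (the helper names and the 500-cent warning threshold are illustrative choices, not part of the API):

```python
import requests

API_URL = "https://api.ailaunchpods.com/api/v1/check/"

def run_check(payload, api_key):
    """POST a check; raise a clear error when credits are exhausted (HTTP 402)."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload,
        timeout=120,  # checks can take tens of seconds
    )
    if resp.status_code == 402:
        raise RuntimeError("Insufficient credits: add more from the dashboard")
    resp.raise_for_status()
    return resp.json()

def low_balance(result, threshold_cents=500):
    """Inspect the remaining balance reported in the response's cost object."""
    return result["cost"]["credit_balance_cents"] < threshold_cents
```

Checking `low_balance` after each call lets you top up (or rely on auto-charge) before hitting a hard 402.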
Submit an AI output for independent verification. Provide the instructions and source materials for validity checking, and optionally a policy for compliance assessment.
| Parameter | Type | Required | Description |
|---|---|---|---|
| ai_output | string | Required | The AI-generated content to verify |
| ai_instructions | string | Recommended | The prompt or task instructions given to the AI |
| source_documents | array | Recommended | Source data/documents the AI was given. Array of {name, content, type}. Content can be plain text or base64-encoded (prefix with "base64:"). For PDFs, include type: "application/pdf" |
| policy_id | string | Optional | Regulatory policy to also check against (see Available Policies) |
| context | object | Optional | Additional context (industry, risk_level, etc.) |
| metadata | object | Optional | Pass-through metadata returned in response |
Source documents can be provided as plain text or base64-encoded files:
// Plain text
"source_documents": [
  {"name": "Credit Report", "content": "Business credit score: 698..."}
]

// Base64-encoded PDF
"source_documents": [
  {
    "name": "report.pdf",
    "content": "base64:JVBERi0xLjQK...",
    "type": "application/pdf"
  }
]

// Multiple sources
"source_documents": [
  {"name": "Credit Report", "content": "Plain text data..."},
  {"name": "Appraisal.pdf", "content": "base64:JVBERi...", "type": "application/pdf"}
]
For base64 documents, prefix the content with base64: and include the type field. PDFs are automatically extracted to text. Maximum payload: 10MB.
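A small helper for building a base64 entry following the convention above (the function name is illustrative, not an SDK function):

```python
import base64

def pdf_source(name, pdf_bytes):
    """Build a source_documents entry from raw PDF bytes,
    using the "base64:" prefix and type field described above."""
    return {
        "name": name,
        "content": "base64:" + base64.b64encode(pdf_bytes).decode("ascii"),
        "type": "application/pdf",
    }
```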
curl -X POST https://api.ailaunchpods.com/api/v1/check/ \
  -H "Authorization: Bearer gk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "ai_output": "We recommend approving the loan. Credit score 742, DSCR 1.35x, no adverse indicators.",
    "ai_instructions": "Analyze the loan application. Evaluate creditworthiness. Flag any concerns.",
    "source_documents": [
      {"name": "Credit Report", "content": "Business credit score: 698. Two late payments. $180K tax lien."}
    ],
    "policy_id": "finserv-us"
  }'
{
  "verdict": "not_verified",
  "confidence": 0.4,
  "attestation_hash": "sha256:cb5fb987...",
  "validity": {
    "instructions_followed": false,
    "summary": "AI fabricated financial metrics and ignored adverse indicators",
    "claims": [
      {"claim": "DSCR 1.35x", "verified": false, "issue_type": "fabricated_data"},
      {"claim": "no adverse indicators", "verified": false, "issue_type": "contradiction"},
      {"claim": "credit score 742", "verified": false, "issue_type": "contradiction"}
    ],
    "critical_issues": [
      {"type": "fabricated_data", "severity": "critical", "detail": "DSCR and LTV fabricated"},
      {"type": "contradiction", "severity": "critical", "detail": "Ignored $180K lien"}
    ]
  },
  "compliance": {
    "gaps": [{"requirement": "OCC SR 11-7", "severity": "high", "detail": "..."}],
    "verified_areas": ["Provides quantitative metrics"],
    "verification_summary": "AI fabricated key financial metrics and ignored adverse indicators in source data"
  },
  "check_id": "gc_25",
  "duration_ms": 24930,
  "cost": {
    "ai_cost_cents": 2.93,
    "multiplier": 5.0,
    "billable_cost_cents": 14.63,
    "credit_balance_cents": 7460
  }
}
Retrieve the results of a previous governance check by its ID.
List all available governance policies with descriptions.
View your check history and usage statistics.
Upload a custom policy document (PDF). The document is automatically chunked and embedded for use in governance checks.
List all policy documents in your library.
Pass these as the policy_id parameter in your check requests.
federal-ai-rmf
energy-nerc-cip
healthcare-fda
finserv-us
fintech-au
gov-au
saas-regulated
general-governance
construction-permits
import requests

response = requests.post(
    "https://api.ailaunchpods.com/api/v1/check/",
    headers={"Authorization": "Bearer gk_your_api_key"},
    json={
        "ai_output": ai_result,
        "ai_instructions": "The task the AI was given",
        "source_documents": [{"name": "data.csv", "content": raw_data}],
        "policy_id": "finserv-us"  # optional
    }
)
result = response.json()
print(f"Verdict: {result['verdict']}")
if result.get('validity'):
    for issue in result['validity']['critical_issues']:
        print(f"  [{issue['severity']}] {issue['type']}: {issue['detail']}")
const response = await fetch(
  "https://api.ailaunchpods.com/api/v1/check/",
  {
    method: "POST",
    headers: {
      "Authorization": "Bearer gk_your_api_key",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      ai_output: aiResult,
      ai_instructions: taskPrompt,
      source_documents: [{name: "data", content: sourceData}],
      policy_id: "finserv-us"
    })
  }
);
const result = await response.json();
if (result.verdict === "verified") deploy(aiResult);
else if (result.verdict === "not_verified") humanReview(result);
else retry(taskPrompt, result.validity);
curl -X POST https://api.ailaunchpods.com/api/v1/check/ \
-H "Authorization: Bearer gk_your_api_key" \
-H "Content-Type: application/json" \
-d '{"ai_output": "AI content", "ai_instructions": "Task given", "source_documents": [{"name": "data", "content": "source"}]}'
| Verdict | Meaning | Action |
|---|---|---|
| verified | AI output claims matched source material and policy criteria | Proceed with the output |
| review_recommended | Minor discrepancies found, human review suggested | Review findings before proceeding |
| not_verified | AI output claims did not match source material or policy criteria | Review and address discrepancies |
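The table above maps naturally to a dispatch step in client code; a minimal sketch (the action names are placeholders for your own handlers):

```python
def route_by_verdict(result):
    """Dispatch on the three documented verdicts; fail loudly on anything else."""
    actions = {
        "verified": "proceed",                     # verification passed
        "review_recommended": "human_review",      # minor discrepancies
        "not_verified": "address_discrepancies",   # claims did not match sources
    }
    verdict = result["verdict"]
    if verdict not in actions:
        raise ValueError(f"Unexpected verdict: {verdict}")
    return actions[verdict]
```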
| Code | Meaning |
|---|---|
| 400 | Bad request — missing required fields or invalid JSON |
| 401 | Unauthorized — invalid or missing API key |
| 402 | Payment required — insufficient credits |
| 429 | Rate limited — too many requests per minute |
| 503 | Service unavailable — agent for requested policy is offline |
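Of the codes above, 429 and 503 are transient and worth retrying with backoff, while 400, 401, and 402 should fail fast. A sketch of that policy (the attempt count and backoff schedule are illustrative choices):

```python
import time
import requests

RETRYABLE = {429, 503}  # rate limited / policy agent offline

def retryable(status_code):
    """Transient statuses worth retrying; 400/401/402 should fail fast."""
    return status_code in RETRYABLE

def post_with_retry(url, headers, payload, max_attempts=4):
    """POST with exponential backoff on transient errors."""
    for attempt in range(max_attempts):
        resp = requests.post(url, headers=headers, json=payload, timeout=120)
        if retryable(resp.status_code) and attempt < max_attempts - 1:
            time.sleep(2 ** attempt)  # 1s, 2s, 4s between attempts
            continue
        resp.raise_for_status()
        return resp.json()
```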
The Trust API works with any system that generates AI outputs. If your AI can produce text, the Trust API can verify it.
Verify Copilot-generated content before it reaches end users. Intercept outputs from Microsoft 365 Copilot, Azure OpenAI, or Copilot Studio and validate against your verification framework.
Azure Functions → Trust API → Approve/Flag

Add a governance layer to any GPT-4, GPT-4o, or custom GPT deployment. Verify completions from the OpenAI API before surfacing them in your application.

OpenAI API → Trust API → Verified Output

Validate Claude API outputs against regulatory policy before they enter production workflows. Works with Claude Opus, Sonnet, and Haiku via any SDK.

Claude API → Trust API → Verified Output

Verification for Amazon Bedrock model invocations. Intercept responses from Titan, Claude, Llama, or any Bedrock-hosted model before delivery.

Bedrock invoke_model → Trust API → Validated

Add the Trust API as a chain step or callback in your LangChain or LlamaIndex pipeline. Verify outputs at any point in your RAG or agent workflow.

Chain → Trust API tool → Conditional routing

The Trust API is a standard REST endpoint. Any system that can make an HTTP POST — internal LLMs, custom models, third-party AI services, RPA bots — can verify outputs in one call.

Your AI → POST /api/v1/check/ → Verification verdict

Every integration follows the same pattern: capture your AI output, POST it to the Trust API with the relevant policy, and act on the verdict. See the code examples above for implementation details.
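That capture-POST-act pattern can be wrapped once and reused across integrations; a sketch (`build_payload` and `verify_output` are illustrative names, not SDK functions):

```python
import requests

def build_payload(ai_output, ai_instructions, sources, policy_id=None):
    """Assemble the request body; policy_id is only sent when set."""
    payload = {
        "ai_output": ai_output,
        "ai_instructions": ai_instructions,
        "source_documents": sources,
    }
    if policy_id:
        payload["policy_id"] = policy_id
    return payload

def verify_output(api_key, **kwargs):
    """Capture -> POST -> return the verdict payload."""
    resp = requests.post(
        "https://api.ailaunchpods.com/api/v1/check/",
        headers={"Authorization": f"Bearer {api_key}"},
        json=build_payload(**kwargs),
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()
```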
import requests, base64

# Read and encode the PDF
with open("credit_report.pdf", "rb") as f:
    pdf_b64 = "base64:" + base64.b64encode(f.read()).decode()

response = requests.post(
    "https://api.ailaunchpods.com/api/v1/check/",
    headers={"Authorization": "Bearer gk_your_api_key"},
    json={
        "ai_output": ai_result,
        "ai_instructions": "Analyze this credit report and recommend approval or denial",
        "source_documents": [
            {"name": "credit_report.pdf", "content": pdf_b64, "type": "application/pdf"}
        ],
        "policy_id": "finserv-us"
    }
)
result = response.json()
print(f"Verdict: {result['verdict']}")
for issue in result.get('validity', {}).get('critical_issues', []):
    print(f"  [{issue['severity']}] {issue['detail']}")
Questions? Contact us at support@ailaunchpods.com