AI Launch Pods / Trust API

API Documentation

AI Verification as a Service — independently verify whether AI outputs match their source materials and policy criteria.

Overview

The Trust API is an on-demand adversarial AI auditor. Submit an AI output along with the instructions and source materials the AI was given. Get back an independent verification of whether the AI did what it claims — plus optional policy alignment assessment.

Base URL

https://api.ailaunchpods.com/api/v1/

Authentication

All API requests require a Bearer token in the Authorization header:

Authorization: Bearer gk_your_api_key_here

Get your API key from the Dashboard.

Billing

The Trust API uses a prepaid credit model. Purchase credits from your dashboard, and each API call deducts from your balance based on the complexity of the check.

When your balance reaches zero, calls will return a 402 status. Add more credits to continue. If you have a card on file, your account will be auto-charged when credits run low.

The response from each API call includes your remaining credit balance in the cost object.
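Because every response reports the remaining balance, you can watch for a low balance client-side and top up before calls start failing with 402. A minimal sketch; the helper name and threshold are illustrative, not part of the API:

```python
def needs_topup(cost: dict, threshold_cents: int = 1000) -> bool:
    """Return True when the balance reported in a response's cost
    object has dropped below a chosen threshold."""
    return cost.get("credit_balance_cents", 0) < threshold_cents

# The cost object from a check response (values from the example below)
cost = {"ai_cost_cents": 2.93, "multiplier": 5.0,
        "billable_cost_cents": 14.63, "credit_balance_cents": 7460}
if needs_topup(cost, threshold_cents=10000):
    print("Balance low: add credits in the dashboard")
```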

Endpoints

POST /api/v1/check/

Submit an AI output for independent verification. Provide the instructions and source materials for validity checking, and optionally a policy for compliance assessment.

Request body

| Parameter | Type | Required | Description |
|---|---|---|---|
| ai_output | string | Required | The AI-generated content to verify |
| ai_instructions | string | Recommended | The prompt or task instructions given to the AI |
| source_documents | array | Recommended | Source data/documents the AI was given. Array of {name, content, type}. Content can be plain text or base64-encoded (prefix with "base64:"). For PDFs, include type: "application/pdf" |
| policy_id | string | Optional | Regulatory policy to also check against (see Available Policies) |
| context | object | Optional | Additional context (industry, risk_level, etc.) |
| metadata | object | Optional | Pass-through metadata returned in response |

Source Documents Format

Source documents can be provided as plain text or base64-encoded files:

// Plain text
"source_documents": [
  {"name": "Credit Report", "content": "Business credit score: 698..."}
]

// Base64-encoded PDF
"source_documents": [
  {
    "name": "report.pdf",
    "content": "base64:JVBERi0xLjQK...",
    "type": "application/pdf"
  }
]

// Multiple sources
"source_documents": [
  {"name": "Credit Report", "content": "Plain text data..."},
  {"name": "Appraisal.pdf", "content": "base64:JVBERi...", "type": "application/pdf"}
]

For base64 documents, prefix the content with base64: and include the type field. PDFs are automatically extracted to text. Maximum payload: 10MB.
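The prefix-and-type rule above is easy to centralize in small builder helpers. A sketch using the standard library; the helper names are illustrative, not part of the API:

```python
import base64

def text_source(name: str, content: str) -> dict:
    """Build a plain-text source document entry."""
    return {"name": name, "content": content}

def file_source(name: str, raw: bytes, mime: str) -> dict:
    """Build a base64-encoded source document entry: binary content
    gets the "base64:" prefix and must carry a type field."""
    return {"name": name,
            "content": "base64:" + base64.b64encode(raw).decode("ascii"),
            "type": mime}

source_documents = [
    text_source("Credit Report", "Business credit score: 698..."),
    file_source("report.pdf", open_bytes := b"%PDF-1.4\n", "application/pdf"),
]
```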

Example request

curl -X POST https://api.ailaunchpods.com/api/v1/check/ \
  -H "Authorization: Bearer gk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "ai_output": "We recommend approving the loan. Credit score 742, DSCR 1.35x, no adverse indicators.",
    "ai_instructions": "Analyze the loan application. Evaluate creditworthiness. Flag any concerns.",
    "source_documents": [
      {"name": "Credit Report", "content": "Business credit score: 698. Two late payments. $180K tax lien."}
    ],
    "policy_id": "finserv-us"
  }'

Response

{
  "verdict": "not_verified",
  "confidence": 0.4,
  "attestation_hash": "sha256:cb5fb987...",
  "validity": {
    "instructions_followed": false,
    "summary": "AI fabricated financial metrics and ignored adverse indicators",
    "claims": [
      {"claim": "DSCR 1.35x", "verified": false, "issue_type": "fabricated_data"},
      {"claim": "no adverse indicators", "verified": false, "issue_type": "contradiction"},
      {"claim": "credit score 742", "verified": true, "issue_type": "supported"}
    ],
    "critical_issues": [
      {"type": "fabricated_data", "severity": "critical", "detail": "DSCR and LTV fabricated"},
      {"type": "contradiction", "severity": "critical", "detail": "Ignored $180K lien"}
    ]
  },
  "compliance": {
    "gaps": [{"requirement": "OCC SR 11-7", "severity": "high", "detail": "..."}],
    "verified_areas": ["Provides quantitative metrics"],
    "verification_summary": "AI fabricated key financial metrics and ignored adverse indicators in source data"
  },
  "check_id": "gc_25",
  "duration_ms": 24930,
  "cost": {
    "ai_cost_cents": 2.93,
    "multiplier": 5.0,
    "billable_cost_cents": 14.63,
    "credit_balance_cents": 7460
  }
}

GET /api/v1/check/{check_id}/

Retrieve the results of a previous governance check by its ID.

GET /api/v1/policies/

List all available governance policies with descriptions.

GET /api/v1/usage/

View your check history and usage statistics.
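All of the read endpoints above are plain GETs with the same Bearer header as POST /check/. A sketch of a small URL helper (the function name is illustrative; only the routes are from this reference):

```python
BASE = "https://api.ailaunchpods.com/api/v1"

def endpoint(*parts: str) -> str:
    """Join path segments onto the base URL, keeping the trailing
    slash the API uses on every route."""
    return BASE + "/" + "/".join(parts) + "/"

# Usage (requires a valid key; check_id comes from a prior POST):
# import requests
# headers = {"Authorization": "Bearer gk_your_api_key"}
# requests.get(endpoint("check", "gc_25"), headers=headers)
# requests.get(endpoint("policies"), headers=headers)
# requests.get(endpoint("usage"), headers=headers)
```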

POST /api/v1/documents/

Upload a custom policy document (PDF). The document is automatically chunked and embedded for use in governance checks.

GET /api/v1/documents/

List all policy documents in your library.

Available Policies

Pass these as the policy_id parameter in your check requests.

| policy_id | Name |
|---|---|
| federal-ai-rmf | Federal AI RMF |
| energy-nerc-cip | Energy NERC CIP |
| healthcare-fda | Healthcare FDA |
| finserv-us | FinServ US |
| fintech-au | FinTech AU |
| gov-au | Gov AU |
| saas-regulated | SaaS Regulated |
| general-governance | General Governance |
| construction-permits | Construction Permits |

Integration Examples

Python

import requests

response = requests.post(
    "https://api.ailaunchpods.com/api/v1/check/",
    headers={"Authorization": "Bearer gk_your_api_key"},
    json={
        "ai_output": ai_result,
        "ai_instructions": "The task the AI was given",
        "source_documents": [{"name": "data.csv", "content": raw_data}],
        "policy_id": "finserv-us"  # optional
    }
)

result = response.json()
print(f"Verdict: {result['verdict']}")
if result.get('validity'):
    for issue in result['validity']['critical_issues']:
        print(f"  [{issue['severity']}] {issue['type']}: {issue['detail']}")

Node.js

const response = await fetch(
  "https://api.ailaunchpods.com/api/v1/check/",
  {
    method: "POST",
    headers: {
      "Authorization": "Bearer gk_your_api_key",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      ai_output: aiResult,
      ai_instructions: taskPrompt,
      source_documents: [{name: "data", content: sourceData}],
      policy_id: "finserv-us"
    })
  }
);

const result = await response.json();
if (result.verdict === "verified") deploy(aiResult);
else if (result.verdict === "not_verified") humanReview(result);
else retry(taskPrompt, result.validity);

cURL

curl -X POST https://api.ailaunchpods.com/api/v1/check/ \
  -H "Authorization: Bearer gk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"ai_output": "AI content", "ai_instructions": "Task given", "source_documents": [{"name": "data", "content": "source"}]}'

Verdicts

| Verdict | Meaning | Action |
|---|---|---|
| verified | AI output claims matched source material and policy criteria | Verification passed |
| review_recommended | Minor discrepancies found, human review suggested | Review findings before proceeding |
| not_verified | AI output claims did not match source material or policy criteria | Review and address discrepancies |

Error Codes

| Code | Meaning |
|---|---|
| 400 | Bad request — missing required fields or invalid JSON |
| 401 | Unauthorized — invalid or missing API key |
| 402 | Payment required — insufficient credits |
| 429 | Rate limited — too many requests per minute |
| 503 | Service unavailable — agent for requested policy is offline |
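A reasonable client-side policy, sketched below, retries the transient codes and surfaces the rest; the retry rules here are an assumption, not API guidance:

```python
def retry_after_error(status: int, attempt: int, max_attempts: int = 3) -> bool:
    """Decide whether a failed call is worth retrying: 429 (rate
    limited) and 503 (policy agent offline) are transient, while
    400/401/402 need a fix on the caller's side first."""
    return status in (429, 503) and attempt < max_attempts

# Sketch of a call loop (post_check is your own HTTP wrapper):
# for attempt in range(1, 4):
#     resp = post_check(payload)
#     if resp.status_code == 200:
#         break
#     if not retry_after_error(resp.status_code, attempt):
#         raise RuntimeError(f"check failed: {resp.status_code}")
#     time.sleep(2 ** attempt)  # simple exponential backoff
```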

Integrations

The Trust API works with any system that generates AI outputs. If your AI can produce text, the Trust API can verify it.

Microsoft Copilot

Verify Copilot-generated content before it reaches end users. Intercept outputs from Microsoft 365 Copilot, Azure OpenAI, or Copilot Studio and validate against your verification framework.

Azure Functions → Trust API → Approve/Flag
OpenAI / ChatGPT

Add a governance layer to any GPT-4, GPT-4o, or custom GPT deployment. Verify completions from the OpenAI API before surfacing them in your application.

OpenAI API → Trust API → Verified Output
Anthropic Claude

Validate Claude API outputs against regulatory policy before they enter production workflows. Works with Claude Opus, Sonnet, and Haiku via any SDK.

Claude API → Trust API → Verified Output
AWS Bedrock

Verification for Amazon Bedrock model invocations. Intercept responses from Titan, Claude, Llama, or any Bedrock-hosted model before delivery.

Bedrock invoke_model → Trust API → Validated
LangChain / LlamaIndex

Add the Trust API as a chain step or callback in your LangChain or LlamaIndex pipeline. Verify outputs at any point in your RAG or agent workflow.

Chain → Trust API tool → Conditional routing
Any AI System

The Trust API is a standard REST endpoint. Any system that can make an HTTP POST — internal LLMs, custom models, third-party AI services, RPA bots — can verify outputs in one call.

Your AI → POST /api/v1/check/ → Verification verdict

Every integration follows the same pattern: capture your AI output, POST it to the Trust API with the relevant policy, and act on the verdict. See the code examples above for implementation details.
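The "act on the verdict" step can live in one dispatcher shared by all integrations. A sketch mapping each verdict from the Verdicts table to an action; the function and action names are illustrative:

```python
def route(verdict: str) -> str:
    """Map a Trust API verdict to a next action: ship verified
    output, send minor discrepancies to human review, and hold
    back unverified output."""
    return {
        "verified": "deploy",
        "review_recommended": "human_review",
        "not_verified": "block",
    }.get(verdict, "block")  # unknown verdicts fail closed
```

Failing closed on unrecognized verdicts keeps the gate safe if new verdict values are ever introduced.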

Sending PDF Source Documents in a Check

Python — Send a PDF for verification

import requests, base64

# Read and encode the PDF
with open("credit_report.pdf", "rb") as f:
    pdf_b64 = "base64:" + base64.b64encode(f.read()).decode()

response = requests.post(
    "https://api.ailaunchpods.com/api/v1/check/",
    headers={"Authorization": "Bearer gk_your_api_key"},
    json={
        "ai_output": ai_result,
        "ai_instructions": "Analyze this credit report and recommend approval or denial",
        "source_documents": [
            {"name": "credit_report.pdf", "content": pdf_b64, "type": "application/pdf"}
        ],
        "policy_id": "finserv-us"
    }
)

result = response.json()
print(f"Verdict: {result['verdict']}")
for issue in result.get('validity', {}).get('critical_issues', []):
    print(f"  [{issue['severity']}] {issue['detail']}")

Support

Questions? Contact us at support@ailaunchpods.com