Your agent already works. Your compliance team says no. Change one line, setting base_url to api.voltagegpu.com/v1/confidential, and every LLM call runs inside an Intel TDX enclave. Same SDK, same code, hardware-encrypted.

The viral open-source AI agent. Connects to Telegram, Signal, and Discord; reads files, sends emails, browses the web, and calls APIs. Point it at our TDX API and every LLM call is hardware-encrypted.
Best for: Personal AI assistant for lawyers, consultants, analysts who handle confidential documents daily.
# ~/.openclaw/config.yaml
providers:
- name: voltagegpu-confidential
type: openai
base_url: https://api.voltagegpu.com/v1/confidential
api_key: vgpu_YOUR_API_KEY
model: contract-analyst # or any of the 8 agents
# That's it. Every LLM call now runs in a TDX enclave.

Multi-agent orchestration framework. Build pipelines where multiple AI agents collaborate on complex tasks. Swap the LLM backend to our TDX API with zero code changes in your agents.
Best for: Dev teams building document processing pipelines for finance, legal, healthcare — where data must stay confidential.
from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI
# One line change: point to VoltageGPU TDX
llm = ChatOpenAI(
base_url="https://api.voltagegpu.com/v1/confidential",
api_key="vgpu_YOUR_API_KEY",
model="financial-analyst", # or any of the 8 agents
)
analyst = Agent(role="Senior Financial Analyst", llm=llm, ...)
crew = Crew(agents=[analyst], tasks=[...])
crew.kickoff()  # All inference is now TDX-encrypted

The most popular LLM framework. RAG pipelines, chains, agents, tool use: everything works with our OpenAI-compatible API. Same SDK, same code, confidential inference.
Best for: Any LangChain app that processes sensitive data: legal search, medical Q&A, financial analysis, compliance automation.
from langchain_openai import ChatOpenAI
# Drop-in replacement — same OpenAI SDK, TDX-encrypted
llm = ChatOpenAI(
base_url="https://api.voltagegpu.com/v1/confidential",
api_key="vgpu_YOUR_API_KEY",
model="compliance-officer",
)
# Use in any chain, agent, or RAG pipeline
# (retriever = any LangChain retriever, e.g. one built from your vector store)
from langchain.chains import RetrievalQA
qa = RetrievalQA.from_chain_type(llm=llm, retriever=retriever)
result = qa.invoke("Does this policy comply with GDPR Art. 25?")

If your code uses the OpenAI SDK (or any OpenAI-compatible client), it works with us. Change the base URL and keep everything else. curl, Python, Node.js, Go: all supported.
Best for: Any existing AI application that needs to become GDPR-compliant or handle sensitive documents.
# Python
from openai import OpenAI
client = OpenAI(
base_url="https://api.voltagegpu.com/v1/confidential",
api_key="vgpu_YOUR_API_KEY",
)
response = client.chat.completions.create(
model="contract-analyst",
messages=[{"role": "user", "content": "Review this NDA..."}],
)
# Node.js — identical pattern
# import OpenAI from 'openai';
# const client = new OpenAI({ baseURL: "https://api.voltagegpu.com/v1/confidential", apiKey: "vgpu_..." });

Try our pre-built templates: 8 domain-expert agents ready to use in your browser. Upload a document and get structured analysis in seconds, no code required.
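For anyone curious what all of the recipes above actually send over the wire, here is a minimal stdlib sketch of the HTTP request an OpenAI-compatible client produces. It assumes the confidential endpoint exposes the standard /chat/completions path and Bearer auth header, as the OpenAI-compatible examples imply; the request is only built for inspection, never sent.

```python
import json
import urllib.request

# Sketch: the raw HTTP request an OpenAI-compatible client would send
# to the confidential endpoint. Built for inspection only, never sent.
def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.voltagegpu.com/v1/confidential/chat/completions",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("vgpu_YOUR_API_KEY", "contract-analyst", "Review this NDA...")
print(req.get_full_url())  # the base URL is the only part that differs from stock OpenAI
```

Pointing an existing client at this URL is the whole migration; the payload shape and headers stay exactly as your SDK already produces them.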