Helicone

1.1 API Configuration

Method 1: Helicone AI Gateway

The Helicone AI Gateway forwards each request to the provider endpoint specified in the Helicone-Target-URL header, logging it along the way:

import openai

# Point the OpenAI client at the Helicone AI Gateway; Helicone forwards
# the request to the URL given in Helicone-Target-URL.
client = openai.OpenAI(
    api_key="YOUR_HPC_AI_API_KEY",
    base_url="https://ai-gateway.helicone.ai/v1",
    default_headers={
        "Helicone-Auth": "Bearer YOUR_HELICONE_API_KEY",
        "Helicone-Target-URL": "https://api.hpc-ai.com/inference/v1",
    },
)

response = client.chat.completions.create(
    model="minimax/minimax-m2.5",
    messages=[{"role": "user", "content": "Hello!"}],
)

Method 2: Self-Hosted Helicone

A self-hosted Helicone instance exposes the same gateway; send requests to its local endpoint (port 8585 in this example) and set Helicone-Target-URL as before:

export HPC_AI_API_KEY="sk-your-hpc-ai-api-key"
export HELICONE_API_KEY="sk-helicone..."

curl --request POST \
  --url http://localhost:8585/v1/gateway/oai/v1/chat/completions \
  --header "Authorization: Bearer $HPC_AI_API_KEY" \
  --header "Helicone-Auth: Bearer $HELICONE_API_KEY" \
  --header "Helicone-Target-URL: https://api.hpc-ai.com/inference/v1" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "minimax/minimax-m2.5",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
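The same self-hosted request can be sketched in Python using only the standard library. This is a hypothetical equivalent of the curl call above: the endpoint, header names, and payload mirror the curl example, the API keys are placeholders, and build_request is an illustrative helper (not part of any Helicone SDK). Sending the request is left to the caller.

```python
import json
import urllib.request

# Local Helicone gateway endpoint from the curl example above (assumption:
# your self-hosted instance listens on port 8585).
GATEWAY_URL = "http://localhost:8585/v1/gateway/oai/v1/chat/completions"


def build_request(hpc_ai_key: str, helicone_key: str) -> urllib.request.Request:
    """Assemble the POST request; call urllib.request.urlopen(req) to send it."""
    payload = {
        "model": "minimax/minimax-m2.5",
        "messages": [{"role": "user", "content": "Hello!"}],
    }
    return urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            # Provider key goes in Authorization; Helicone key in Helicone-Auth.
            "Authorization": f"Bearer {hpc_ai_key}",
            "Helicone-Auth": f"Bearer {helicone_key}",
            # Helicone forwards the request to this upstream API.
            "Helicone-Target-URL": "https://api.hpc-ai.com/inference/v1",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_request("sk-your-hpc-ai-api-key", "sk-helicone-key")
```

Because the helper only builds the request, you can inspect the headers and payload before sending, which is handy when debugging gateway routing.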

1.2 References