Portkey

1.1 Usage

Method 1: Model Catalog

  1. Log in to the Portkey dashboard and go to Model Catalog

  2. Select Add Provider → Self-hosted / Custom

  3. Enter the API Endpoint and API Key

  4. Add a custom model and set its Model Slug

  5. Call the model with @hpc-ai/minimax/minimax-m2.5
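Once the slug is configured, the gateway accepts a standard chat-completions request body addressed to it. A minimal sketch of that payload, assuming the catalog entry created in step 4 lives under a workspace provider named hpc-ai:

```python
import json

# Request body for a chat completion against the catalog model.
# The "@hpc-ai/..." slug is the Model Slug set in Model Catalog (assumption:
# the workspace provider is named "hpc-ai").
payload = {
    "model": "@hpc-ai/minimax/minimax-m2.5",
    "messages": [{"role": "user", "content": "Hello!"}],
}

print(json.dumps(payload, indent=2))
```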

Method 2: Passthrough Mode

from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY",
    provider="passthrough",
    custom_host="https://api.hpc-ai.com/inference/v1",
    default_headers={
        "authorization": "Bearer YOUR_HPC_AI_API_KEY"
    }
)

response = client.chat.completions.create(
    model="minimax/minimax-m2.5",
    messages=[{"role": "user", "content": "Hello!"}]
)

Method 3: cURL

curl https://api.portkey.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -H "x-portkey-provider: passthrough" \
  -H "x-portkey-custom-host: https://api.hpc-ai.com/inference/v1" \
  -H "authorization: Bearer YOUR_HPC_AI_API_KEY" \
  -d '{
    "model": "minimax/minimax-m2.5",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Method 4: OpenAI SDK Compatibility

from openai import OpenAI

client = OpenAI(
    api_key="PORTKEY_API_KEY",
    base_url="https://api.portkey.ai/v1"
)

# Specify the provider via headers
response = client.chat.completions.create(
    model="minimax/minimax-m2.5",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_headers={
        "x-portkey-provider": "passthrough",
        "x-portkey-custom-host": "https://api.hpc-ai.com/inference/v1"
    }
)
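Hardcoding keys as in the examples above is only for illustration; a safer pattern is to read them from the environment. A sketch of that, where the variable names are assumptions rather than anything Portkey requires:

```python
import os

# Read both keys from the environment; the variable names are illustrative.
portkey_api_key = os.environ.get("PORTKEY_API_KEY", "")
hpc_ai_api_key = os.environ.get("HPC_AI_API_KEY", "")

# Headers forwarded to the passthrough provider, mirroring the example above.
extra_headers = {
    "x-portkey-provider": "passthrough",
    "x-portkey-custom-host": "https://api.hpc-ai.com/inference/v1",
    "authorization": f"Bearer {hpc_ai_api_key}",
}
```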

1.2 Troubleshooting

Common Issue       Solution
Request failed     Verify the custom_host URL format
Auth failed        Ensure the authorization header is passed correctly
Model not found    Ensure the Model Slug matches the added custom model
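For the "Request failed" case, a quick local sanity check of the custom_host value can rule out a malformed URL before blaming the network. A hypothetical helper; the no-trailing-slash rule is an assumption, so match whatever format your endpoint actually requires:

```python
from urllib.parse import urlparse

def check_custom_host(url: str) -> bool:
    """Rough sanity check: https scheme, a hostname, and no trailing slash
    (trailing-slash rule is a convention assumed here, not a Portkey rule)."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and bool(parsed.netloc) and not url.endswith("/")

print(check_custom_host("https://api.hpc-ai.com/inference/v1"))   # True
print(check_custom_host("https://api.hpc-ai.com/inference/v1/"))  # False
```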

1.3 References