# Portkey

## 1.1 Usage

### Method 1: Add Custom Provider via Model Catalog (Recommended)
- Log in to the Portkey dashboard and open Model Catalog
- Select Add Provider → Self-hosted / Custom
- Enter the API Endpoint and API Key
- Add a custom model and set its Model Slug
- Call the model as `@hpc-ai/minimax/minimax-m2.5`
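Once the provider and model are saved, the catalog slug can be used directly with the Portkey SDK. A minimal sketch, assuming the provider was saved under the slug `hpc-ai` with model slug `minimax/minimax-m2.5`:

```python
from portkey_ai import Portkey

# Sketch only: the "@hpc-ai/..." slug below assumes the provider and model
# slugs chosen in the Model Catalog steps above.
client = Portkey(api_key="PORTKEY_API_KEY")

response = client.chat.completions.create(
    model="@hpc-ai/minimax/minimax-m2.5",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```

Because the API endpoint and key live in the catalog entry, no passthrough headers are needed on the request.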
### Method 2: Passthrough Mode

```python
from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY",
    provider="passthrough",
    custom_host="https://api.hpc-ai.com/inference/v1",
    default_headers={
        "authorization": "Bearer YOUR_HPC_AI_API_KEY"
    }
)

response = client.chat.completions.create(
    model="minimax/minimax-m2.5",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
### Method 3: cURL

```bash
curl https://api.portkey.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -H "x-portkey-provider: passthrough" \
  -H "x-portkey-custom-host: https://api.hpc-ai.com/inference/v1" \
  -H "authorization: Bearer YOUR_HPC_AI_API_KEY" \
  -d '{
    "model": "minimax/minimax-m2.5",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
### Method 4: OpenAI SDK Compatibility

```python
from openai import OpenAI

client = OpenAI(
    api_key="PORTKEY_API_KEY",
    base_url="https://api.portkey.ai/v1"
)

# Specify the provider, upstream host, and upstream key via per-request headers
response = client.chat.completions.create(
    model="minimax/minimax-m2.5",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_headers={
        "x-portkey-provider": "passthrough",
        "x-portkey-custom-host": "https://api.hpc-ai.com/inference/v1",
        "authorization": "Bearer YOUR_HPC_AI_API_KEY"
    }
)
```
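If every request targets the same upstream, the headers can be set once at client construction instead of per call. A sketch using the OpenAI SDK's `default_headers` parameter:

```python
from openai import OpenAI

# Sketch: moving the passthrough headers onto the client means each
# chat.completions.create call no longer needs extra_headers.
client = OpenAI(
    api_key="PORTKEY_API_KEY",
    base_url="https://api.portkey.ai/v1",
    default_headers={
        "x-portkey-provider": "passthrough",
        "x-portkey-custom-host": "https://api.hpc-ai.com/inference/v1",
        "authorization": "Bearer YOUR_HPC_AI_API_KEY"
    }
)
```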
## 1.2 Troubleshooting

| Common Issue | Solution |
|---|---|
| Request failed | Verify that `custom_host` is a full base URL, e.g. `https://api.hpc-ai.com/inference/v1` |
| Auth failed | Ensure the upstream `authorization: Bearer <key>` header is passed in addition to the Portkey API key |
| Model not found | Ensure the model name in the request matches the Model Slug of the added custom model |
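For the "Request failed" row, a malformed `custom_host` is the usual culprit. A quick local sanity check, using only the standard library (the helper name is illustrative):

```python
from urllib.parse import urlparse

def looks_like_valid_host(url: str) -> bool:
    """Return True if url has an https scheme and a network location."""
    parts = urlparse(url)
    return parts.scheme == "https" and bool(parts.netloc)

print(looks_like_valid_host("https://api.hpc-ai.com/inference/v1"))  # True
print(looks_like_valid_host("api.hpc-ai.com/inference/v1"))          # False: scheme missing
```

Running this before wiring the value into `custom_host` catches the common mistake of pasting the host without its `https://` prefix.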