# Codex

Codex is OpenAI's coding agent that runs in your terminal. Official site: developers.openai.com/codex
## 1.1 Usage
Codex CLI and the Codex IDE extension share the same `config.toml` layers. For HPC-AI deployments, the safest setup is to define a custom model provider and load the API key from an environment variable.
### Method 1: User Config File (Recommended)
Add the following to `~/.codex/config.toml`:

```toml
model = "minimax/minimax-m2.5"
model_provider = "hpcai"

[model_providers.hpcai]
name = "HPC-AI Model APIs"
base_url = "https://api.hpc-ai.com/inference/v1"
env_key = "INFERENCE_API_KEY"
env_key_instructions = "Export INFERENCE_API_KEY before starting Codex"
wire_api = "responses"
request_max_retries = 4
stream_max_retries = 5
stream_idle_timeout_ms = 300000
# supports_websockets = false
```
Then export your API key and start Codex:

```shell
export INFERENCE_API_KEY="your-hpc-ai-api-key"
codex
```
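On shared HPC login nodes it is easy to forget the export step, so you may want to fail fast when the key is missing. A minimal POSIX `sh` sketch (`require_env` is a hypothetical helper name, not part of Codex):

```shell
#!/bin/sh
# require_env NAME: fail with a message when NAME is unset or empty,
# so Codex never starts without credentials for the provider.
require_env() {
  eval "_val=\${$1:-}"
  if [ -z "$_val" ]; then
    echo "error: $1 is not set; export it before starting Codex" >&2
    return 1
  fi
}

# Launch Codex only when the provider key from the config above is present.
if require_env INFERENCE_API_KEY; then
  codex
fi
```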
### Method 2: Project Config
If you want the settings to apply to only one repository, place the same configuration in `<project>/.codex/config.toml`. Note that Codex loads project-level config only for trusted projects.
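One way to mark a repository as trusted is a `projects` entry in the user config. A sketch, assuming your Codex version supports the `projects` table and `trust_level` key (the path is an example):

```toml
# ~/.codex/config.toml — trust a specific repository so that its
# .codex/config.toml is loaded (replace the path with your checkout).
[projects."/home/user/my-repo"]
trust_level = "trusted"
```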
### Method 3: Named Profile

Add a reusable profile in `~/.codex/config.toml`:

```toml
[profiles.hpcai]
model = "minimax/minimax-m2.5"
model_provider = "hpcai"
```
Then start Codex with:

```shell
export INFERENCE_API_KEY="your-hpc-ai-api-key"
codex --profile hpcai
```
## 1.2 Troubleshooting
| Common Issue | Solution |
|---|---|
| WebSocket transport errors | Set `supports_websockets = false` if the provider supports HTTP/SSE streaming but not the Responses API WebSocket transport |
| Project config is ignored | Trust the project, or move the provider config to `~/.codex/config.toml` |
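For the WebSocket case, the fix is the commented-out key from Method 1. A minimal fragment:

```toml
# ~/.codex/config.toml — force HTTP/SSE streaming for this provider
[model_providers.hpcai]
supports_websockets = false
```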