GPT-OSS 120B is OpenAI’s open-weight, 120-billion-parameter language model, offering powerful text generation with advanced reasoning and instruction-following abilities.
Try in playground
Test GPT-OSS 120B in the Runpod Hub playground.
| Property | Value |
|---|---|
| Endpoint | `https://api.runpod.ai/v2/gpt-oss-120b/runsync` |
| Pricing | $10.00 per 1M tokens |
| Type | Text generation |
This endpoint is fully compatible with the OpenAI API. See the OpenAI compatibility examples below.
Request
All parameters are passed within the `input` object in the request body.

- Prompt for text generation.
- Maximum number of tokens to generate.
- Randomness of the output; lower values make output more predictable and deterministic. Range: 0.0–1.0.
- Nucleus sampling threshold; samples from the smallest set of tokens whose cumulative probability exceeds this threshold.
- Restricts sampling to the K most probable tokens.
- Stops generation when the given string is encountered.
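As a sketch, a `/runsync` call using these parameters could look like the following. The field names inside `input` (`prompt`, `max_tokens`, `temperature`) are assumed from common Runpod worker conventions, not confirmed by this page, so verify them against your endpoint's schema.

```python
import requests  # third-party: pip install requests

RUNSYNC_URL = "https://api.runpod.ai/v2/gpt-oss-120b/runsync"

def build_payload(prompt: str, max_tokens: int = 256, temperature: float = 0.7) -> dict:
    """All generation parameters are nested inside the `input` object."""
    return {
        "input": {
            "prompt": prompt,          # assumed field name
            "max_tokens": max_tokens,  # assumed field name
            "temperature": temperature,
        }
    }

def generate(api_key: str, prompt: str) -> dict:
    """Send a synchronous generation request and return the parsed JSON response."""
    response = requests.post(
        RUNSYNC_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json=build_payload(prompt),
        timeout=120,
    )
    response.raise_for_status()
    return response.json()
```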
Response
- Unique identifier for the request.
- Request status. Returns `COMPLETED` on success, `FAILED` on error.
- Time in milliseconds the request spent in queue before processing began.
- Time in milliseconds the model took to generate the response.
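A `/runsync` response might look like the following sketch. The field names (`id`, `status`, `delayTime`, `executionTime`, `output`) follow Runpod's standard serverless response shape and the values are illustrative; confirm both against a live response.

```json
{
  "id": "sync-12345678-abcd-illustrative",
  "status": "COMPLETED",
  "delayTime": 123,
  "executionTime": 4567,
  "output": "..."
}
```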
OpenAI API compatibility
GPT-OSS 120B is fully compatible with the OpenAI API format. You can use the OpenAI Python client to interact with this endpoint.
To stream tokens as they are generated, set `stream=True` in the request.
Cost calculation
GPT-OSS 120B charges $10.00 per 1M tokens. Example costs:

| Tokens | Cost |
|---|---|
| 1,000 tokens | $0.01 |
| 10,000 tokens | $0.10 |
| 100,000 tokens | $1.00 |
| 1,000,000 tokens | $10.00 |
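The table reduces to simple arithmetic; a small helper to estimate spend at the $10.00-per-1M-token rate:

```python
PRICE_PER_MILLION_TOKENS = 10.00  # USD, from the pricing table above

def cost_usd(tokens: int) -> float:
    """Estimated cost in USD for a given token count."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS
```

For example, `cost_usd(10_000)` estimates a spend of about $0.10, matching the table.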