API Reference

Completions

Body Params
stream
boolean

Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events.

best_of
integer

Generates best_of completions server-side and returns the best one. When used with n, best_of must be greater than n.

echo
boolean

Echo back the prompt in addition to the completion.

frequency_penalty
number

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text.

logit_bias
object

Modify the likelihood of specified tokens appearing in the completion. Maps tokens to bias values from -100 to 100.

logprobs
integer

Include log probabilities of the most likely tokens. Maximum value is 5.

max_tokens
integer

The maximum number of tokens that can be generated in the completion.

n
integer

How many completions to generate for each prompt.

presence_penalty
number

Number between -2.0 and 2.0. Positive values penalize new tokens based on their presence in the text so far.

seed
integer

If specified, attempts to generate deterministic samples. Determinism is not guaranteed.

stop
string or array

Up to 4 sequences where the API will stop generating further tokens.

stream_options
object

Options for streaming responses. Only set this when stream is true.

suffix
string

The suffix that comes after a completion of inserted text. Only supported for gpt-3.5-turbo-instruct.

temperature
number

Sampling temperature between 0 and 2. Higher values make output more random, lower more focused.

top_p
number

Alternative to temperature. Consider only tokens with top_p probability mass. Range 0-1.

user
string

A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.

model
string
required

Model specified as model_vendor/model, for example openai/gpt-4o.

prompt
string
required

The prompt to generate completions for, encoded as a string.
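
As a rough illustration, the sketch below assembles a JSON request body from the parameters above and POSTs it as application/json. The base URL, the /completions path, and the bearer-token Authorization header are assumptions for the example and are not specified in this reference; only the body fields themselves come from the parameter list.

```python
import requests

# Placeholder values: the base URL and auth scheme are not shown in this
# reference, so substitute whatever your deployment documents.
BASE_URL = "https://api.example.com/v1"   # assumed
API_KEY = "YOUR_API_KEY"                  # assumed

payload = {
    # Required
    "model": "openai/gpt-4o",             # model_vendor/model format
    "prompt": "Write a haiku about the sea.",

    # Optional (illustrative values)
    "max_tokens": 64,
    "temperature": 0.7,        # 0-2; higher is more random
    "top_p": 1.0,              # alternative to temperature
    "n": 1,                    # completions per prompt
    "frequency_penalty": 0.0,  # -2.0 to 2.0
    "presence_penalty": 0.0,   # -2.0 to 2.0
    "stop": ["\n\n"],          # up to 4 stop sequences
    "stream": False,           # true streams server-sent events
}

resp = requests.post(
    f"{BASE_URL}/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```

If stream is set to true, the response would arrive as data-only server-sent events rather than a single JSON object, so you would pass stream=True to requests.post and read resp.iter_lines() instead of calling resp.json().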
