Request Headers

authorization (string, required)

The authorization token.

Request Path

author (string, required)

The author of the App.

id (string, required)

The unique identifier of the App.

Request Body

arguments (json_value, optional)

A JSON object containing the arguments to pass to the App.

retry_token (string, optional)

A token that can be used to retry the request from the point of failure.

seed (number, optional)

If specified, inference will sample deterministically, so that repeated requests with the same seed and parameters return the same result. Determinism is not guaranteed for some models.

service_tier (enum, optional)

Specifies the processing type used for serving the request.

Variants
Auto ("auto")
Default ("default")
Flex ("flex")

stream (boolean, optional)

If set to true, the model response data is streamed to the client as it is generated, using server-sent events.

stream_options (object, optional)

Options for the streaming response.

Properties
include_usage (boolean, optional)

If set, an additional chunk is streamed before the data: [DONE] message. The usage field on this chunk reports token usage statistics for the entire request, as well as the cost, if requested.

usage (object, optional)

OpenRouter accounting configuration.

Properties
include (boolean, optional)

Whether to include cost in the response usage.
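Taken together, the request fields above can be assembled as in the following sketch. The /apps/{author}/{id} path layout and the Bearer token format are assumptions for illustration, not confirmed by this reference:

```typescript
// Shape of the request body documented above.
interface AppRequestBody {
  arguments?: unknown;
  retry_token?: string;
  seed?: number;
  service_tier?: "auto" | "default" | "flex";
  stream?: boolean;
  stream_options?: { include_usage?: boolean };
  usage?: { include?: boolean };
}

// Build the pieces of an App request. The path layout below is a
// hypothetical stand-in inferred from the documented path parameters.
function buildAppRequest(author: string, id: string, body: AppRequestBody) {
  return {
    path: `/apps/${author}/${id}`,
    method: "POST",
    headers: {
      authorization: "Bearer <token>", // required request header
      "content-type": "application/json",
    },
    body: JSON.stringify(body),
  };
}

// A streamed request that also reports usage and cost.
const req = buildAppRequest("acme", "my-app", {
  arguments: { question: "What is 2 + 2?" },
  seed: 42,
  stream: true,
  stream_options: { include_usage: true },
  usage: { include: true },
});
```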

Response Body (Unary)

completions (array, required)

Completions returned by the App.

Items
Multichat Completion (object)

A Multichat completion.

Properties
type ("multichat", required)
id (string, required)

A unique identifier for the chat completion.

choices (array, required)

An array of choices returned by the Query Tool.

Items
deltaobject*required

An object containing the incremental updates to the chat message.

Properties
contentstringoptional

The content of the message generated by the model.

refusalstringoptional

The refusal information if the model refused to generate a message.

role"assistant"*required

The role of the message, which is always assistant for model-generated messages.

annotationsarrayoptional

The annotations added by the model in this message.

Items
Annotationobject
Properties
type"url_citation"*required
url_citationobject*required
Properties
end_indexnumber*required

The end index of the citation in the message content.

start_indexnumber*required

The start index of the citation in the message content.

titlestring*required

The title of the cited webpage.

urlstring*required

The URL of the cited webpage.

audioobjectoptional

The audio generated by the model in this message.

Properties
idstring*required
datastring*required
expires_atnumber*required
transcriptstring*required
tool_callsarrayoptional

The tool calls made by the model in this delta.

Items
idstring*required

The tool call ID.

type"function"*required
functionobject*required
Properties
namestring*required

The name of the function being called.

argumentsstring*required

The arguments passed to the function.

reasoningstringoptional

The reasoning text generated by the model in this message.

finish_reason (enum, required)

The reason why the model finished generating the response.

Variants
Stop ("stop")

The model finished generating because it reached a natural stopping point.

Length ("length")

The model finished generating because it reached the maximum token limit.

ToolCalls ("tool_calls")

The model finished generating because it made one or more tool calls.

ContentFilter ("content_filter")

The model finished generating because it triggered a content filter.

Error ("error")

The model finished generating because an error occurred.

indexnumber*required

The index of the choice in the list of choices.

logprobsobjectoptional

The log probabilities of the tokens in the delta.

Properties
contentarrayoptional

An array of log probabilities for each token in the content.

Items
Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumber*required

The log probability of the token.

top_logprobsarray*required
Items
Top Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumberoptional

The log probability of the token.

refusalarrayoptional

An array of log probabilities for each token in the refusal.

Items
Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumber*required

The log probability of the token.

top_logprobsarray*required
Items
Top Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumberoptional

The log probability of the token.

errorobjectoptional

If an error occurred while generating this choice, the error object.

Properties
codenumber*required

The HTTP status code for the error.

messagejson_value

A JSON message describing the error. Typically, either a string or an object.

modelstring*required

The base62 22-character unique identifier for the LLM that produced this choice.

model_indexnumberoptional

The index of the LLM in the Multichat Model that produced this choice.

completion_metadataobject*required

Details about the chat completion which produced this choice.

Properties
idstring*required

A unique identifier for the chat completion.

creatednumber*required

The Unix timestamp (in seconds) when the chat completion was created.

modelstring*required

The model used for the chat completion.

service_tier (enum, optional)

The service tier used for the chat completion.

Variants
Auto ("auto")
Default ("default")
Flex ("flex")
system_fingerprintstringoptional

A fingerprint representing the system configuration used for the chat completion.

usageobjectoptional

An object containing token usage statistics for the chat completion.

Properties
completion_tokensnumber*required

The number of tokens generated in the completion.

prompt_tokensnumber*required

The number of tokens in the input prompt.

total_tokensnumber*required

The total number of tokens used (prompt + completion).

completion_tokens_detailsobjectoptional
Properties
accepted_prediction_tokensnumberoptional
rejected_prediction_tokensnumberoptional
audio_tokensnumberoptional

The number of audio tokens generated.

reasoning_tokensnumberoptional

The number of reasoning tokens generated.

prompt_tokens_detailsobjectoptional
Properties
audio_tokensnumberoptional

The number of audio tokens in the input prompt.

cached_tokensnumberoptional

The number of cached tokens in the input prompt.

costnumberoptional

The cost incurred for this chat completion, in Credits.

cost_detailsobjectoptional
Properties
upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

providerstringoptional

The upstream (or upstream upstream) LLM provider used for the chat completion.

creatednumber*required

The Unix timestamp (in seconds) when the chat completion was created.

modelstring*required

The 22-character unique identifier for the Multichat Model which generated the completion.

object"chat.completion"*required
usageobjectoptional

An object containing token usage statistics for the chat completion.

Properties
completion_tokensnumber*required

The number of tokens generated in the completion.

prompt_tokensnumber*required

The number of tokens in the input prompt.

total_tokensnumber*required

The total number of tokens used (prompt + completion).

completion_tokens_detailsobjectoptional
Properties
accepted_prediction_tokensnumberoptional
rejected_prediction_tokensnumberoptional
audio_tokensnumberoptional

The number of audio tokens generated.

reasoning_tokensnumberoptional

The number of reasoning tokens generated.

prompt_tokens_detailsobjectoptional
Properties
audio_tokensnumberoptional

The number of audio tokens in the input prompt.

cached_tokensnumberoptional

The number of cached tokens in the input prompt.

costnumberoptional

The cost incurred for this chat completion, in Credits.

cost_detailsobjectoptional
Properties
upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

indexnumber*required
Score Completion (object)

A Score completion.

Properties
type ("score", required)
id (string, required)

A unique identifier for the chat completion.

choices (array, required)

An array of choices returned by the Score Model.

Items
messageobject*required

The message generated by the model for this choice.

Properties
contentstringoptional

The content of the message generated by the model.

refusalstringoptional

The refusal information if the model refused to generate a message.

role"assistant"*required

The role of the message, which is always assistant for model-generated messages.

annotationsarrayoptional

The annotations added by the model in this message.

Items
Annotationobject
Properties
type"url_citation"*required
url_citationobject*required
Properties
end_indexnumber*required

The end index of the citation in the message content.

start_indexnumber*required

The start index of the citation in the message content.

titlestring*required

The title of the cited webpage.

urlstring*required

The URL of the cited webpage.

audioobjectoptional

The audio generated by the model in this message.

Properties
idstring*required
datastring*required
expires_atnumber*required
transcriptstring*required
tool_callsarrayoptional

The tool calls made by the model in this message.

Items
idstring*required

The tool call ID.

type"function"*required
functionobject*required
Properties
namestring*required

The name of the function being called.

argumentsstring*required

The arguments passed to the function.

reasoningstringoptional

The reasoning text generated by the model in this message.

imagesarrayoptional

The images generated by the model in this message.

Items
Imageobject
Properties
type"image_url"*required
image_urlobject*required
Properties
urlstring*required
finish_reason (enum, required)

The reason why the model finished generating the response.

Variants
Stop ("stop")

The model finished generating because it reached a natural stopping point.

Length ("length")

The model finished generating because it reached the maximum token limit.

ToolCalls ("tool_calls")

The model finished generating because it made one or more tool calls.

ContentFilter ("content_filter")

The model finished generating because it triggered a content filter.

Error ("error")

The model finished generating because an error occurred.

indexnumber*required

The index of the choice in the list of choices.

logprobs (object, optional)

The log probabilities of the tokens in the message.

Properties
contentarrayoptional

An array of log probabilities for each token in the content.

Items
Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumber*required

The log probability of the token.

top_logprobsarray*required
Items
Top Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumberoptional

The log probability of the token.

refusalarrayoptional

An array of log probabilities for each token in the refusal.

Items
Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumber*required

The log probability of the token.

top_logprobsarray*required
Items
Top Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumberoptional

The log probability of the token.

confidence_weightnumberoptional

The weight of the LLM that produced this choice.

confidencenumberoptional

The Confidence Score of the choice.

errorobjectoptional

If an error occurred while generating this choice, the error object.

Properties
codenumber*required

The HTTP status code for the error.

messagejson_value

A JSON message describing the error. Typically, either a string or an object.

modelstring*required

The base62 22-character unique identifier for the LLM that produced this choice.

model_indexnumberoptional

The index of the LLM in the Score Model that produced this choice.

completion_metadataobject*required

Details about the chat completion which produced this choice.

Properties
idstring*required

A unique identifier for the chat completion.

creatednumber*required

The Unix timestamp (in seconds) when the chat completion was created.

modelstring*required

The model used for the chat completion.

service_tier (enum, optional)

The service tier used for the chat completion.

Variants
Auto ("auto")
Default ("default")
Flex ("flex")
system_fingerprintstringoptional

A fingerprint representing the system configuration used for the chat completion.

usageobjectoptional

An object containing token usage statistics for the chat completion.

Properties
completion_tokensnumber*required

The number of tokens generated in the completion.

prompt_tokensnumber*required

The number of tokens in the input prompt.

total_tokensnumber*required

The total number of tokens used (prompt + completion).

completion_tokens_detailsobjectoptional
Properties
accepted_prediction_tokensnumberoptional
rejected_prediction_tokensnumberoptional
audio_tokensnumberoptional

The number of audio tokens generated.

reasoning_tokensnumberoptional

The number of reasoning tokens generated.

prompt_tokens_detailsobjectoptional
Properties
audio_tokensnumberoptional

The number of audio tokens in the input prompt.

cached_tokensnumberoptional

The number of cached tokens in the input prompt.

costnumberoptional

The cost incurred for this chat completion, in Credits.

cost_detailsobjectoptional
Properties
upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

providerstringoptional

The upstream (or upstream upstream) LLM provider used for the chat completion.

creatednumber*required

The Unix timestamp (in seconds) when the chat completion was created.

modelstring*required

The 22-character unique identifier for the Score Model which generated the completion.

object"chat.completion"*required
usageobjectoptional

An object containing token usage statistics for the chat completion.

Properties
completion_tokensnumber*required

The number of tokens generated in the completion.

prompt_tokensnumber*required

The number of tokens in the input prompt.

total_tokensnumber*required

The total number of tokens used (prompt + completion).

completion_tokens_detailsobjectoptional
Properties
accepted_prediction_tokensnumberoptional
rejected_prediction_tokensnumberoptional
audio_tokensnumberoptional

The number of audio tokens generated.

reasoning_tokensnumberoptional

The number of reasoning tokens generated.

prompt_tokens_detailsobjectoptional
Properties
audio_tokensnumberoptional

The number of audio tokens in the input prompt.

cached_tokensnumberoptional

The number of cached tokens in the input prompt.

costnumberoptional

The cost incurred for this chat completion, in Credits.

cost_detailsobjectoptional
Properties
upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

weight_data (enum, optional)

Details about how the weights were computed for the Score Model.

Variants
Static Weight Data (object)

Indicates that static weights were used for the Score Model.

Properties
type ("static", required)

Training Table Weight Data (object)

Indicates that training table weights were used for the Score Model.

Properties
type ("training_table", required)

embeddings_response (object, required)

Properties
data (array, required)

An array of embedding objects.

Items
Embeddingobject

An embedding vector.

Properties
embeddingarray*required

The embedding vector as an array of floats.

Items
Floatnumber

A float in the embedding vector.

indexnumber*required
object"embedding"*required
modelstring*required

The name of the model used to generate the embeddings.

object"list"*required
usageobjectoptional

An object containing token usage statistics for the chat completion.

Properties
completion_tokensnumber*required

The number of tokens generated in the completion.

prompt_tokensnumber*required

The number of tokens in the input prompt.

total_tokensnumber*required

The total number of tokens used (prompt + completion).

completion_tokens_detailsobjectoptional
Properties
accepted_prediction_tokensnumberoptional
rejected_prediction_tokensnumberoptional
audio_tokensnumberoptional

The number of audio tokens generated.

reasoning_tokensnumberoptional

The number of reasoning tokens generated.

prompt_tokens_detailsobjectoptional
Properties
audio_tokensnumberoptional

The number of audio tokens in the input prompt.

cached_tokensnumberoptional

The number of cached tokens in the input prompt.

costnumberoptional

The cost incurred for this chat completion, in Credits.

cost_detailsobjectoptional
Properties
upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

indexnumber*required
output (json_value, optional)

The final output of the App.

retry_token (string, optional)

A token that can be used to retry the request from the point of failure.

error (object, optional)

An error object containing details about any error that occurred during the request.

Properties
code (number, required)

The HTTP status code for the error.

message (json_value)

A JSON message describing the error. Typically either a string or an object.

app_published (boolean, optional)

Indicates whether the app has been successfully published. Present only if requested.
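Because each item in completions carries a discriminating type field ("multichat" or "score"), a client can branch on it to read the generated text. A minimal sketch, with the documented schemas pared down to only the fields read here:

```typescript
// Pared-down unary completion shapes; the full objects carry many more
// fields (usage, logprobs, completion_metadata, ...).
type UnaryCompletion =
  | { type: "multichat"; choices: { delta: { content?: string } }[] }
  | { type: "score"; choices: { message: { content?: string }; confidence?: number }[] };

// Collect the text content of every choice across all completions.
function collectContent(completions: UnaryCompletion[]): string[] {
  const texts: string[] = [];
  for (const completion of completions) {
    if (completion.type === "multichat") {
      // Multichat choices expose their text under delta.
      for (const choice of completion.choices) texts.push(choice.delta.content ?? "");
    } else {
      // Score choices expose their text under message.
      for (const choice of completion.choices) texts.push(choice.message.content ?? "");
    }
  }
  return texts;
}
```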

Response Body (Streaming)

completions (array, required)

Completion chunks streamed by the App.

Items
Multichat Completion Chunk (object)

A chunk of a streaming Multichat completion.

Properties
type ("multichat", required)
id (string, required)

A unique identifier for the chat completion.

choices (array, required)

An array of choices returned by the Query Tool.

Items
deltaobject*required

An object containing the incremental updates to the chat message.

Properties
contentstringoptional

The content of the message delta.

refusalstringoptional

The refusal reason if the model refused to generate a response.

role"assistant"optional

The role of the message delta.

tool_callsarrayoptional

The tool calls made by the model in this delta.

Items
indexnumber*required

The index of the tool call in the message.

idstringoptional

The tool call ID.

type"function"optional
functionobjectoptional
Properties
namestringoptional

The name of the function being called.

argumentsstringoptional

The arguments passed to the function.

reasoningstringoptional

The reasoning text generated by the model in this delta.

imagesarrayoptional

The images generated by the model in this delta.

Items
Imageobject
Properties
type"image_url"*required
image_urlobject*required
Properties
urlstring*required
finish_reason (enum, optional)

The reason why the model finished generating the response.

Variants
Stop ("stop")

The model finished generating because it reached a natural stopping point.

Length ("length")

The model finished generating because it reached the maximum token limit.

ToolCalls ("tool_calls")

The model finished generating because it made one or more tool calls.

ContentFilter ("content_filter")

The model finished generating because it triggered a content filter.

Error ("error")

The model finished generating because an error occurred.

indexnumber*required

The index of the choice in the list of choices.

logprobsobjectoptional

The log probabilities of the tokens in the delta.

Properties
contentarrayoptional

An array of log probabilities for each token in the content.

Items
Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumber*required

The log probability of the token.

top_logprobsarray*required
Items
Top Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumberoptional

The log probability of the token.

refusalarrayoptional

An array of log probabilities for each token in the refusal.

Items
Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumber*required

The log probability of the token.

top_logprobsarray*required
Items
Top Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumberoptional

The log probability of the token.

errorobjectoptional

If an error occurred while generating this choice, the error object.

Properties
codenumber*required

The HTTP status code for the error.

messagejson_value

A JSON message describing the error. Typically, either a string or an object.

modelstringoptional

The base62 22-character unique identifier for the LLM that produced this choice.

model_indexnumberoptional

The index of the LLM in the Multichat Model that produced this choice.

completion_metadataobjectoptional

Details about the chat completion which produced this choice.

Properties
idstring*required

A unique identifier for the chat completion.

creatednumber*required

The Unix timestamp (in seconds) when the first chat completion chunk was created.

modelstring*required

The model used for the chat completion.

service_tier (enum, optional)

The service tier used for the chat completion chunk.

Variants
Auto ("auto")
Default ("default")
Flex ("flex")
system_fingerprintstringoptional

A fingerprint representing the system configuration used for the chat completion chunk.

usageobjectoptional

An object containing token usage statistics for the chat completion.

Properties
completion_tokensnumber*required

The number of tokens generated in the completion.

prompt_tokensnumber*required

The number of tokens in the input prompt.

total_tokensnumber*required

The total number of tokens used (prompt + completion).

completion_tokens_detailsobjectoptional
Properties
accepted_prediction_tokensnumberoptional
rejected_prediction_tokensnumberoptional
audio_tokensnumberoptional

The number of audio tokens generated.

reasoning_tokensnumberoptional

The number of reasoning tokens generated.

prompt_tokens_detailsobjectoptional
Properties
audio_tokensnumberoptional

The number of audio tokens in the input prompt.

cached_tokensnumberoptional

The number of cached tokens in the input prompt.

costnumberoptional

The cost incurred for this chat completion, in Credits.

cost_detailsobjectoptional
Properties
upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

providerstringoptional

The upstream (or upstream upstream) LLM provider used for the chat completion chunk.

creatednumber*required

The Unix timestamp (in seconds) when the first chat completion chunk was created.

modelstring*required

The 22-character unique identifier for the Multichat Model which generated the completion.

object"chat.completion.chunk"*required
usageobjectoptional

An object containing token usage statistics for the chat completion.

Properties
completion_tokensnumber*required

The number of tokens generated in the completion.

prompt_tokensnumber*required

The number of tokens in the input prompt.

total_tokensnumber*required

The total number of tokens used (prompt + completion).

completion_tokens_detailsobjectoptional
Properties
accepted_prediction_tokensnumberoptional
rejected_prediction_tokensnumberoptional
audio_tokensnumberoptional

The number of audio tokens generated.

reasoning_tokensnumberoptional

The number of reasoning tokens generated.

prompt_tokens_detailsobjectoptional
Properties
audio_tokensnumberoptional

The number of audio tokens in the input prompt.

cached_tokensnumberoptional

The number of cached tokens in the input prompt.

costnumberoptional

The cost incurred for this chat completion, in Credits.

cost_detailsobjectoptional
Properties
upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

indexnumber*required
Score Completion Chunk (object)

A chunk of a streaming Score completion.

Properties
type ("score", required)
id (string, required)

A unique identifier for the chat completion.

choices (array, required)

An array of choices returned by the Score Model.

Items
deltaobject*required

An object containing the incremental updates to the chat message.

Properties
contentstringoptional

The content of the message delta.

refusalstringoptional

The refusal reason if the model refused to generate a response.

role"assistant"optional

The role of the message delta.

tool_callsarrayoptional

The tool calls made by the model in this delta.

Items
indexnumber*required

The index of the tool call in the message.

idstringoptional

The tool call ID.

type"function"optional
functionobjectoptional
Properties
namestringoptional

The name of the function being called.

argumentsstringoptional

The arguments passed to the function.

reasoningstringoptional

The reasoning text generated by the model in this delta.

imagesarrayoptional

The images generated by the model in this delta.

Items
Imageobject
Properties
type"image_url"*required
image_urlobject*required
Properties
urlstring*required
finish_reason (enum, optional)

The reason why the model finished generating the response.

Variants
Stop ("stop")

The model finished generating because it reached a natural stopping point.

Length ("length")

The model finished generating because it reached the maximum token limit.

ToolCalls ("tool_calls")

The model finished generating because it made one or more tool calls.

ContentFilter ("content_filter")

The model finished generating because it triggered a content filter.

Error ("error")

The model finished generating because an error occurred.

indexnumber*required

The index of the choice in the list of choices.

logprobsobjectoptional

The log probabilities of the tokens in the delta.

Properties
contentarrayoptional

An array of log probabilities for each token in the content.

Items
Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumber*required

The log probability of the token.

top_logprobsarray*required
Items
Top Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumberoptional

The log probability of the token.

refusalarrayoptional

An array of log probabilities for each token in the refusal.

Items
Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumber*required

The log probability of the token.

top_logprobsarray*required
Items
Top Logprobobject
Properties
tokenstring*required

The token text.

bytesarrayoptional

The byte representation of the token.

Items
Bytenumber

A byte in the token's byte representation.

logprobnumberoptional

The log probability of the token.

confidence_weightnumberoptional

The weight of the LLM that produced this choice.

confidencenumberoptional

The Confidence Score of the choice.

errorobjectoptional

If an error occurred while generating this choice, the error object.

Properties
codenumber*required

The HTTP status code for the error.

messagejson_value

A JSON message describing the error. Typically, either a string or an object.

modelstringoptional

The base62 22-character unique identifier for the LLM that produced this choice.

model_indexnumberoptional

The index of the LLM in the Score Model that produced this choice.

completion_metadataobjectoptional

Details about the chat completion which produced this choice.

Properties
idstring*required

A unique identifier for the chat completion.

creatednumber*required

The Unix timestamp (in seconds) when the first chat completion chunk was created.

modelstring*required

The model used for the chat completion.

service_tier (enum, optional)

The service tier used for the chat completion chunk.

Variants
Auto ("auto")
Default ("default")
Flex ("flex")
system_fingerprintstringoptional

A fingerprint representing the system configuration used for the chat completion chunk.

usageobjectoptional

An object containing token usage statistics for the chat completion.

Properties
completion_tokensnumber*required

The number of tokens generated in the completion.

prompt_tokensnumber*required

The number of tokens in the input prompt.

total_tokensnumber*required

The total number of tokens used (prompt + completion).

completion_tokens_detailsobjectoptional
Properties
accepted_prediction_tokensnumberoptional
rejected_prediction_tokensnumberoptional
audio_tokensnumberoptional

The number of audio tokens generated.

reasoning_tokensnumberoptional

The number of reasoning tokens generated.

prompt_tokens_detailsobjectoptional
Properties
audio_tokensnumberoptional

The number of audio tokens in the input prompt.

cached_tokensnumberoptional

The number of cached tokens in the input prompt.

costnumberoptional

The cost incurred for this chat completion, in Credits.

cost_detailsobjectoptional
Properties
upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

providerstringoptional

The upstream (or upstream upstream) LLM provider used for the chat completion chunk.

creatednumber*required

The Unix timestamp (in seconds) when the first chat completion chunk was created.

modelstring*required

The 22-character unique identifier for the Score Model which generated the completion.

object"chat.completion.chunk"*required
usageobjectoptional

An object containing token usage statistics for the chat completion.

Properties
completion_tokensnumber*required

The number of tokens generated in the completion.

prompt_tokensnumber*required

The number of tokens in the input prompt.

total_tokensnumber*required

The total number of tokens used (prompt + completion).

completion_tokens_detailsobjectoptional
Properties
accepted_prediction_tokensnumberoptional
rejected_prediction_tokensnumberoptional
audio_tokensnumberoptional

The number of audio tokens generated.

reasoning_tokensnumberoptional

The number of reasoning tokens generated.

prompt_tokens_detailsobjectoptional
Properties
audio_tokensnumberoptional

The number of audio tokens in the input prompt.

cached_tokensnumberoptional

The number of cached tokens in the input prompt.

costnumberoptional

The cost incurred for this chat completion, in Credits.

cost_detailsobjectoptional
Properties
upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

weight_data (enum, optional)

Details about how the weights were computed for the Score Model.

Variants
Static Weight Data (object)

Indicates that static weights were used for the Score Model.

Properties
type ("static", required)

Training Table Weight Data (object)

Indicates that training table weights were used for the Score Model.

Properties
type ("training_table", required)

embeddings_response (object, required)

Properties
data (array, required)

An array of embedding objects.

Items
Embeddingobject

An embedding vector.

Properties
embeddingarray*required

The embedding vector as an array of floats.

Items
Floatnumber

A float in the embedding vector.

indexnumber*required
object"embedding"*required
modelstring*required

The name of the model used to generate the embeddings.

object"list"*required
usageobjectoptional

An object containing token usage statistics for the chat completion.

Properties
completion_tokensnumber*required

The number of tokens generated in the completion.

prompt_tokensnumber*required

The number of tokens in the input prompt.

total_tokensnumber*required

The total number of tokens used (prompt + completion).

completion_tokens_detailsobjectoptional
Properties
accepted_prediction_tokensnumberoptional
rejected_prediction_tokensnumberoptional
audio_tokensnumberoptional

The number of audio tokens generated.

reasoning_tokensnumberoptional

The number of reasoning tokens generated.

prompt_tokens_detailsobjectoptional
Properties
audio_tokensnumberoptional

The number of audio tokens in the input prompt.

cached_tokensnumberoptional

The number of cached tokens in the input prompt.

costnumberoptional

The cost incurred for this chat completion, in Credits.

cost_detailsobjectoptional
Properties
upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_costnumberoptional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

indexnumber*required
output (json_value, optional)

The final output of the App.

retry_token (string, optional)

A token that can be used to retry the request from the point of failure.

error (object, optional)

An error object containing details about any error that occurred during the request.

Properties
code (number, required)

The HTTP status code for the error.

message (json_value)

A JSON message describing the error. Typically either a string or an object.

app_published (boolean, optional)

Indicates whether the app has been successfully published. Present only if requested.
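Streaming responses are delivered as server-sent events that end with a data: [DONE] sentinel (preceded by a usage-bearing chunk when stream_options.include_usage is set), so a client must frame and decode the event stream. A sketch of a line-oriented decoder, assuming each event's JSON payload fits on a single data: line:

```typescript
// Decode SSE "data:" lines into JSON chunks, stopping at [DONE].
function parseSseData(raw: string): unknown[] {
  const chunks: unknown[] = [];
  for (const line of raw.split("\n")) {
    if (!line.startsWith("data: ")) continue; // skip blanks and comments
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break;          // end-of-stream sentinel
    chunks.push(JSON.parse(payload));
  }
  return chunks;
}
```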

Objective Artificial Intelligence, Inc.