Translation

Context-aware English-Korean translation that leverages previous dialogues to ensure unmatched coherence and continuity in your conversations.

Available models

Model | Release date | Context length | Description
solar-1-mini-translate-enko | 2024-02-22 (beta) | 32768 | Specialized English-to-Korean translation model based on solar-mini. Maximum context length is 32k tokens.
solar-1-mini-translate-koen | 2024-02-22 (beta) | 32768 | Specialized Korean-to-English translation model based on solar-mini. Maximum context length is 32k tokens.

Request

POST https://api.upstage.ai/v1/solar/chat/completions

Parameters

The messages parameter is a list of message objects. Each message object has a role (either "user" or "assistant") and content. The "system" role is not used in the Translation API.

A user message contains the text the user wants translated. An assistant message either echoes a previous response from the model or holds the result of a translation. With multi-turn input, the Translation API adapts to the tone of the conversation and translates only the latest user message, using the earlier turns as context.

Currently, the timeout for non-stream mode is 60 seconds, so use stream mode for long text translation; a minimal streaming sketch follows.
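
As a minimal sketch of a streaming request, assuming Python with the requests library (YOUR_API_KEY is a placeholder for your key):

import requests

# Request a streamed translation; tokens arrive as data-only
# server-sent events rather than as a single JSON body.
resp = requests.post(
    "https://api.upstage.ai/v1/solar/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "solar-1-mini-translate-koen",
        "messages": [{"role": "user", "content": "아버지가방에들어가셨다"}],
        "stream": True,
    },
    stream=True,  # keep the HTTP connection open and read incrementally
)
for line in resp.iter_lines():
    if line:
        print(line.decode("utf-8"))  # raw "data: {...}" SSE lines

See "The chat completion chunk object" below for the shape of each chunk and a sketch of assembling them into the full translation.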

Request headers

Authorization string Required
Authentication token, format: Bearer API_KEY

Request body

messages list Required
A list of messages comprising the conversation so far.

messages[].content string Required
The contents of the message.

messages[].role string Required
The role of the message's author, either "user" or "assistant".

model string Required
The name of the model to use for the completion.

max_tokens integer Optional
An optional parameter that limits the maximum number of tokens to generate. If max_tokens is set, the sum of the input tokens and max_tokens must be less than or equal to the model's context length. The default value is inf (no limit).

stream boolean Optional
An optional parameter that specifies whether the response should be sent as a stream. If set to true, partial message deltas are sent as data-only server-sent events. The default value is false.

temperature float Optional
An optional parameter to set the sampling temperature. The value should lie between 0 and 2. Higher values like 0.8 result in a more random output, whereas lower values such as 0.2 enhance focus and determinism in the output. Default value is 0.7.

top_p float Optional
An optional parameter that triggers nucleus sampling: only the tokens comprising the top_p probability mass are considered. For example, setting this value to 0.1 considers only the tokens in the top 10% of probability mass.
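
As a sketch of how the request body fields fit together (Python with the requests library; YOUR_API_KEY and the parameter values are illustrative placeholders):

import requests

payload = {
    "model": "solar-1-mini-translate-koen",
    "messages": [
        {"role": "user", "content": "아버지가방에들어가셨다"},
    ],
    # max_tokens plus the prompt's token count must fit within the
    # model's 32k context length.
    "max_tokens": 512,
    "temperature": 0.7,
    "stream": False,
}
resp = requests.post(
    "https://api.upstage.ai/v1/solar/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json=payload,
)
data = resp.json()  # a chat.completion object, documented below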

Response

Return values

Returns a chat.completion object, or a streamed sequence of chat.completion.chunk objects if the request is streamed.

The chat completion object

id string
A unique identifier for the chat completion.

object string
The object type, which is always chat.completion

created integer
The Unix timestamp (in seconds) of when the chat completion was created.

model string
A string representing the version of the model being used.

system_fingerprint null
This field is not yet available.

choices list
A list of chat completion choices.

choices[].finish_reason string
The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached.

choices[].index integer
The index of the choice in the list of choices.

choices[].message object
A chat completion message generated by the model.

choices[].message.content string
The contents of the message.

choices[].message.role string
The role of the author of this message.

choices[].logprobs null
This field is not yet available.

usage object
Usage statistics for the completion request.

usage.completion_tokens integer
Number of tokens in the generated completion.

usage.prompt_tokens integer
Number of tokens in the prompt.

usage.total_tokens integer
Total number of tokens used in the request (prompt + completion).
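
Continuing the non-streaming sketch above, the translation and token accounting can be read directly off the parsed object:

# data is the parsed chat.completion object from the earlier sketch
translation = data["choices"][0]["message"]["content"]
finish_reason = data["choices"][0]["finish_reason"]  # "stop" or "length"
total = data["usage"]["total_tokens"]  # prompt + completion
print(f"{translation} ({finish_reason}, {total} tokens)")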

The chat completion chunk object

id string
A unique identifier for the chat completion. Each chunk has the same ID.

object string
The object type, which is always chat.completion.chunk

created integer
The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.

model string
A string representing the version of the model being used.

system_fingerprint null
This field is not yet available.

choices list
A list of chat completion choices.

choices[].finish_reason string
The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached.

choices[].index integer
The index of the choice in the list of choices.

choices[].delta object
The partial message delta generated by the model for this chunk.

choices[].delta.content string
The contents of the message.

choices[].delta.role string or null
The role of the author of this message.

choices[].logprobs null
This field is not yet available.
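
A sketch of assembling streamed chunks into the full translation. It assumes the stream follows the common OpenAI-style SSE convention, with each event on a "data: ..." line and a final "data: [DONE]" sentinel; verify this against an actual response:

import json

def collect_translation(resp):
    """Concatenate choices[0].delta.content across streamed chunks."""
    parts = []
    for line in resp.iter_lines():  # resp from a stream=True request
        if not line:
            continue
        text = line.decode("utf-8")
        if not text.startswith("data: "):
            continue
        payload = text[len("data: "):]
        if payload == "[DONE]":  # assumed end-of-stream sentinel
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)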

Example

Request

curl --location 'https://api.upstage.ai/v1/solar/chat/completions' \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "solar-1-mini-translate-koen",
    "messages": [
      {
        "role": "user",
        "content": "아버지가방에들어가셨다"
      },
      {
        "role": "assistant",
        "content": "Father went into his room"
      },
      {
        "role": "user",
        "content": "엄마도들어가셨다"
      }
    ]
}'

Response

{
  "id": "3e5649eb-50aa-416a-b999-92a3eb75b0d5",
  "object": "chat.completion",
  "created": 1708441325,
  "model": "upstage/solar-1-mini-translate-koen",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Mother went into her room"
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 37,
    "completion_tokens": 10,
    "total_tokens": 47
  },
  "system_fingerprint": null
}