Groundedness Check

Check the groundedness of an assistant's response to a user-provided context.

Large Language Models (LLMs) are capable of generating elaborate, information-rich texts, but they are prone to hallucinations -- they can produce factually incorrect (i.e., ungrounded) responses. A popular approach to overcoming this limitation of LLMs is to provide chunks of text, often called "contexts," which LLMs can use as a point of reference to generate factually correct outputs. This approach is known as Retrieval-Augmented Generation, or RAG.

However, RAG does not always guarantee truthful answers from LLMs. Therefore, an additional step is required to check whether a model-generated output is indeed grounded in a given context. The Groundedness Check API is specifically designed for this purpose: to check the groundedness of an assistant's response to a context provided by a user. Given two messages – a user-provided context and a model response – the API will return whether the response is grounded, not grounded, or if it is unsure about the groundedness of the response to the context.

Available models

| Model | Release date | Context length | Description |
| --- | --- | --- | --- |
| solar-1-mini-groundedness-check | 2024-05-02 (beta) | 32768 | Solar-based groundedness check model with a 32k context limit. |

solar-1-mini-groundedness-check is an alias for our latest groundedness check model (currently solar-1-mini-groundedness-check-240502).

The messages parameter should be a list of message objects containing two elements: 1) a user-provided context and 2) an assistant's response to be checked. Each message object must specify the type of message as either user (context) or assistant (response) using the role attribute, and set the content attribute with the corresponding text string.
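The two-message structure described above can be sketched in Python. The helper name `build_messages` is illustrative, not part of the API:

```python
def build_messages(context: str, response: str) -> list[dict]:
    """Build the two-element messages list expected by the
    groundedness check: one user context, one assistant response."""
    if not context or not response:
        raise ValueError("content must not be an empty string")
    return [
        {"role": "user", "content": context},
        {"role": "assistant", "content": response},
    ]

messages = build_messages(
    "Mauna Kea's peak is 4,207.3 m above sea level.",
    "Mauna Kea is 5,207.3 meters tall.",
)
```

Note that the list must contain exactly one user message and one assistant message; sending more or fewer of either results in a 400 error.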

The API response will be a string with a value of either grounded, notGrounded, or notSure. The notSure response is returned when the groundedness of the assistant's response to the provided context cannot be clearly determined.
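A caller can branch on the three possible verdict strings. Treating notSure conservatively, i.e. the same as notGrounded, is one plausible policy; the policy and the function name below are assumptions for illustration:

```python
VERDICTS = {"grounded", "notGrounded", "notSure"}

def is_safe_to_show(verdict: str) -> bool:
    """Example policy: only surface answers the checker explicitly
    judged grounded; treat notSure as a failed check."""
    if verdict not in VERDICTS:
        raise ValueError(f"unexpected verdict: {verdict!r}")
    return verdict == "grounded"
```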

Request headers

Authorization string Required
Authentication token, format: Bearer API_KEY

Request body

messages list Required
A list of two message objects: 1) a user-provided context and 2) an assistant's response to be checked for groundedness.

messages[].role string Required
The role attribute of a message object must be set to either "user" to indicate the user-provided context or "assistant" to indicate the assistant's response.

messages[].content string Required
The content attribute of a message object with role: "user" should contain the source context provided by the user. Similarly, the content attribute of a message object with role: "assistant" should contain the assistant's response that needs to be checked for groundedness. The content string must not be empty ("").

model string Required
The name of the model used to perform the groundedness check. Currently, the only available model is "solar-1-mini-groundedness-check".

temperature float Optional
A parameter to control the randomness of the output; it should be a float value between 0 and 2. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic. If not provided, the default value is 0.7.
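The request-body parameters above can be assembled into a JSON payload as follows; this is a minimal sketch, and the function name is illustrative rather than part of any SDK:

```python
import json

def build_request_body(context: str, response: str,
                       temperature: float = 0.7) -> str:
    """Serialize a groundedness-check request body whose field
    names mirror the documented request-body parameters."""
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    payload = {
        "model": "solar-1-mini-groundedness-check",
        "messages": [
            {"role": "user", "content": context},
            {"role": "assistant", "content": response},
        ],
        "temperature": temperature,
    }
    return json.dumps(payload)
```

The resulting string is what goes into the request body, as in the curl example further down.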


Return values

Returns a chat.completion object.

The chat completion object

id string
A unique identifier for the chat completion. When the response is streamed, every chunk of the same completion shares this ID.

object string
The object type, which is always chat.completion.

created integer
The creation time of the chat completion, marked by a Unix timestamp in seconds.

model string
The model, including its version suffix, that was used for the request.

choices list
An array of choices provided as chat completion outcomes.

choices[].index integer
The index of the choice in the list of choices.

choices[].message object
A chat completion message generated by the model.

choices[].message.role string
The role of the author of this message.

choices[].message.content string
The contents of the message.

choices[].logprobs null
This field is not yet available.

choices[].finish_reason string
The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, or length if the maximum number of tokens specified in the request was reached.

usage object
Usage statistics for the completion request.

usage.completion_tokens integer
Number of tokens in the generated completion.

usage.prompt_tokens integer
Number of tokens in the prompt.

usage.total_tokens integer
Total number of tokens used in the request (prompt + completion).

system_fingerprint null
This field is not yet available.
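Given a parsed chat.completion object like the success response shown below, the verdict lives at choices[0].message.content, and total_tokens is the sum of the two other usage counters. The sample dict and helper here are illustrative:

```python
# Sample mirroring the documented success response (abridged).
sample = {
    "choices": [{
        "index": 0,
        "message": {"role": "assistant", "content": "notGrounded"},
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 132, "completion_tokens": 3,
              "total_tokens": 135},
}

def extract_verdict(completion: dict) -> str:
    """Pull the groundedness verdict out of a chat.completion dict."""
    return completion["choices"][0]["message"]["content"]

# total_tokens is prompt + completion: 132 + 3 == 135.
usage = sample["usage"]
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```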



curl --location '' \
--header 'Authorization: Bearer UPSTAGE_API_KEY' \
--header 'Content-Type: application/json' \
--data '{
  "model": "solar-1-mini-groundedness-check",
  "messages": [
    {
      "role": "user",
      "content": "Mauna Kea is an inactive volcano on the island of Hawaiʻi. Its peak is 4,207.3 m above sea level, making it the highest point in Hawaii and second-highest peak of an island on Earth."
    },
    {
      "role": "assistant",
      "content": "Mauna Kea is 5,207.3 meters tall."
    }
  ],
  "temperature": 0.5
}'


Success - HTTP Status 200 OK

{
  "id": "c43ecfa6-31a9-4884-a920-a5f44fb727df",
  "object": "chat.completion",
  "created": 1710338020,
  "model": "solar-1-mini-groundedness-check-240502",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "notGrounded"
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 132,
    "completion_tokens": 3,
    "total_tokens": 135
  },
  "system_fingerprint": ""
}

Error - HTTP Status: 400 Bad Request (Reason: found more/less than 1 user or assistant message)

{
  "error": {
    "message": "invalid request: 1 user message expected, found 2",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

Error - HTTP Status: 400 Bad Request (Reason: missing user or assistant message content)

{
  "error": {
    "message": "invalid request: user message content is required",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}