POST /v1/logs/{log_id}/rating

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Path Parameters

log_id
string
required

Body

application/json
good_answer
boolean | null

The rating given to this request: good (true), bad (false), or no rating (null)

feedback
string | null
default:

An optional text that accompanies the rating
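The request above can be sketched as a small helper that assembles the URL, the Bearer authorization header, and the JSON body. This is a minimal sketch: the base URL and token are placeholders, not values given in this reference.

```python
import json
from typing import Optional, Tuple, Dict

# Assumption: replace with the real API host; it is not stated in this reference.
BASE_URL = "https://api.example.com"

def build_rating_request(token: str, log_id: str,
                         good_answer: Optional[bool],
                         feedback: Optional[str] = None
                         ) -> Tuple[str, Dict[str, str], str]:
    """Return (url, headers, body) for POST /v1/logs/{log_id}/rating."""
    url = f"{BASE_URL}/v1/logs/{log_id}/rating"
    headers = {
        # Bearer authentication header of the form "Bearer <token>"
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    # good_answer may be true, false, or null; feedback is optional free text
    body = json.dumps({"good_answer": good_answer, "feedback": feedback})
    return url, headers, body

url, headers, body = build_rating_request("my-token", "log_123", True, "Helpful answer")
```

The returned triple can be passed to any HTTP client (e.g. `urllib.request` or `requests`) to perform the actual call.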

Response

200
application/json
conversation_id
string | null
required
parent_request_id
string | null
required
app_id
string | null
required

The ID of the app that performed the request

auth0_id
string | null
required

The Auth0 ID of the user that performed the request

response
object
required

The response object

timestamp
integer
required

The timestamp of the request, in milliseconds

id
string
required

ID of the request

good_answer
boolean | null

The rating given to this request: good (true), bad (false), or no rating (null)

feedback
string | null
default:

An optional text that accompanies the rating

prompt_tokens
integer
default:
0

Number of tokens the request used

completion_tokens
integer
default:
0

Number of tokens the response used

total_tokens
integer
default:
0

Total number of tokens (request + response)

prompt_tokens_cost
number
default:
0

Cost (in $) of the prompt

completion_tokens_cost
number
default:
0

Cost (in $) of the response

total_tokens_cost
number
default:
0

Total cost (in $) of the request + response

prompt_tokens_cost_savings
number
default:
0

Cost (in $) saved on the prompt, in comparison to the benchmark model

completion_tokens_cost_savings
number
default:
0

Cost (in $) saved on the completion, in comparison to the benchmark model

total_tokens_cost_savings
number
default:
0

Cost (in $) saved in total, in comparison to the benchmark model
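The descriptions above suggest the token and cost fields are additive: each `total_*` field is the sum of its `prompt_*` and `completion_*` counterparts. A small illustrative check, with made-up values (the relationship is an assumption inferred from the field descriptions, not stated as an invariant):

```python
# Illustrative log fragment; all numbers are made up.
log = {
    "prompt_tokens": 120,
    "completion_tokens": 80,
    "total_tokens": 200,          # prompt_tokens + completion_tokens
    "prompt_tokens_cost": 0.0012,
    "completion_tokens_cost": 0.0024,
    "total_tokens_cost": 0.0036,  # prompt cost + completion cost
}

# Verify the additive relationship on this sample.
assert log["total_tokens"] == log["prompt_tokens"] + log["completion_tokens"]
assert abs(log["total_tokens_cost"]
           - (log["prompt_tokens_cost"] + log["completion_tokens_cost"])) < 1e-9
```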

costs_incurred
boolean
default:
true

When a request requires multiple intermediate calls, those calls are stored with no costs incurred: their costs are recorded, but the user is not charged for them

namespace
string | null

The name of the provider's model that was used to answer the request

payload
object

The payload sent with the request

privacy_level
enum<integer>
default:
1

How much is logged. 1: everything; 2: request and response masked (the log itself is still shown); 3: not visible, not retrievable, no information stored.

Available options:
1,
2,
3
prompt
string | null

The prompt in text format

request_type
enum<string> | null
default:
chat_completions

The type of request (text completion or chat) the user sends and expects back

Available options:
completions,
chat_completions
response_text
string

The response in text format

status_code
integer
default:
-1

The status code of the request to the AI model

is_test
boolean
default:
false

True if the request was performed from a sandbox app

added_on
string | null

When the request was performed

latency
number
default:
-1

Time it took for the LLM to respond

parent_id
string | null

Reference to the ID of the parent of this log. A log has a parent when it's a subrequest used to retrieve the final answer.

is_saved_for_later
boolean
default:
false
comment_count
integer
default:
0
parent
object | null

The parent of the Request, if any. Requests that are part of a series of sub-requests (such as multiple LLM calls, or RAG) will have the final, resulting Log as their parent.
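A consumer of the 200 response needs to tolerate the nullable fields and defaults described above. The following is a minimal sketch; the sample values (model name, IDs) are illustrative, not taken from this reference:

```python
from typing import Any, Dict

def summarize_log(log: Dict[str, Any]) -> str:
    """Build a one-line summary of a log, tolerating nullable fields."""
    # good_answer is boolean or null: good (true), bad (false), unrated (null)
    rating = {True: "good", False: "bad", None: "unrated"}[log.get("good_answer")]
    # namespace may be null when the provider's model name is unknown
    model = log.get("namespace") or "unknown model"
    tokens = log.get("total_tokens", 0)
    cost = log.get("total_tokens_cost", 0)
    return f"{log['id']}: {rating}, {tokens} tokens, ${cost} via {model}"

# Illustrative response fragment.
sample = {
    "id": "req_42",
    "good_answer": True,
    "namespace": "gpt-4o",  # hypothetical model name
    "total_tokens": 200,
    "total_tokens_cost": 0.0036,
}
print(summarize_log(sample))  # → req_42: good, 200 tokens, $0.0036 via gpt-4o
```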