GPT Chat Task (LLM Task)
Result
Properties
response
The response as it was received from OpenAI.
response.id
The unique identifier for the chat completion in OpenAI's system. This is not used in Stubber.
response.object
The type of the response object; this should always be chat.completion.
response.created
The timestamp at which the response was created in OpenAI's system.
response.model
The exact model that was used for the chat completion. This property can be significant, since it is not always identical to the model passed as a parameter: OpenAI may resolve a model alias to a specific versioned snapshot. In the Basic Usage section no model was passed, so the default gpt-4o-mini was used, and this property reports the exact model that served the request.
response.choices
The choices array contains the message along with some additional information. It should very rarely be necessary to use this property directly: since the message is the desired result of this task, it is also included at the top level of the response for convenience.
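The relationship between the nested message and the top-level convenience copy can be sketched as follows. The field names follow OpenAI's chat.completion schema; the id and content values are placeholders, and the top-level message copy is the Stubber-specific convenience described above.

```python
# Hypothetical shape of the response; only schema-level fields are real,
# the concrete values are illustrative placeholders.
response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }
    ],
}

# The message is nested inside the first choice...
nested = response["choices"][0]["message"]

# ...and is surfaced at the top level of the task result for convenience,
# so consumers do not need to index into choices themselves.
response["message"] = nested
print(response["message"]["content"])  # same content as nested["content"]
```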
response.usage
The details of the token usage can be found in this property.
- prompt_tokens: the number of tokens used by the prompt, i.e. the messages.
- completion_tokens: the number of tokens in the completion, i.e. the assistant's response.
- total_tokens: the sum of prompt_tokens and completion_tokens.
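The arithmetic relationship between these fields can be sanity-checked directly; the token counts below are illustrative, not taken from a real response.

```python
# Illustrative usage object with placeholder counts.
usage = {
    "prompt_tokens": 57,
    "completion_tokens": 17,
    "total_tokens": 74,
}

# total_tokens is simply the sum of the other two fields.
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```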
response.system_fingerprint
This is a unique identifier that OpenAI uses for the backend configuration that handled the request.
response.message
This is the assistant's response to the prompt or messages provided for the task. This message is also passed as the message of the feedback action, _update_from_chat_assistant_task.
_gpt_chat_details.message_data_extracted
When message_data_extraction is enabled, this object contains any data that was extracted from HTML-style tags or JSON code blocks in the assistant's response. The data is keyed by tag name, or by "json" for JSON blocks. If there are multiple JSON blocks, the keys are numbered: json, json_2, etc.
For example:
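A minimal sketch of what such extraction could look like, assuming the key naming described above (tag name for HTML-style tags; "json", "json_2", ... for successive JSON blocks). This is an illustration, not Stubber's actual implementation, and the helper name extract_message_data is hypothetical.

```python
import json
import re

def extract_message_data(message: str) -> dict:
    """Sketch: pull data out of HTML-style tags and fenced JSON blocks."""
    extracted = {}

    # HTML-style tags, e.g. <status>approved</status> -> {"status": "approved"}
    for tag, body in re.findall(r"<(\w+)>(.*?)</\1>", message, re.DOTALL):
        extracted[tag] = body.strip()

    # Fenced JSON code blocks; the first is keyed "json", later ones "json_2", ...
    blocks = re.findall(r"```json\s*(.*?)```", message, re.DOTALL)
    for i, block in enumerate(blocks):
        key = "json" if i == 0 else f"json_{i + 1}"
        extracted[key] = json.loads(block)

    return extracted
```

Running this on an assistant message containing one status tag and two JSON blocks would yield an object with the keys status, json and json_2.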