1. Tasks
  2. GPT Chat Task

Tasks

GPT Chat Task

Start a GPT chat completion task. Pass messages to this task to get a response from an AI chat assistant. The assistant's response will be available on the stub the task ran from via an _update_from_chat_assistant feedback action. When suitable, the task can call Stubber actions marked as AI callable (ai_function_calling: true) using GPT's function calling functionality.

The rest of this document assumes familiarity with adding a task to a stub. See the tasks documentation.

Basic usage

Start a chat with the default model, gpt-3.5-turbo.

Note: This task has a wide variety of possible use cases, see the Examples section.

loading...

Failsafes

Due to the nature of AI chat assistants, there is a risk of infinitely repeating chats when two auto-responding bots find themselves talking to one another.

We have three failsafe limits in place to minimize the damage from such occurrences:

  • 200 messages in a chat
  • 1000 interactions with the model
  • $50 worth of tokens used in the chat (calculated using gpt-4 token rates only)

Each of these failsafe limits can be adjusted independently via its own parameter. They can be found in the Parameters section.
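The failsafe behaviour can be pictured with the following sketch. The parameter names come from this page, but the internal logic is an illustration and an assumption, not Stubber's actual implementation:

```python
# Hypothetical sketch of the failsafe checks described above.
# Parameter names (stop_on_high_messages, stop_on_high_interactions) are
# from this page; the logic itself is illustrative only.

def chat_is_blocked(message_count, interaction_count, token_cost_usd,
                    stop_on_high_messages=200, stop_on_high_interactions=1000,
                    max_token_cost_usd=50.0):
    """Return True when any failsafe limit has been exceeded."""
    if stop_on_high_messages is not False and message_count >= stop_on_high_messages:
        return True
    if stop_on_high_interactions is not False and interaction_count >= stop_on_high_interactions:
        return True
    if token_cost_usd >= max_token_cost_usd:
        return True
    return False
```

Setting a limit parameter to False disables that check, which is why disabling is an explicit opt-out rather than the default.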

Parameters

messages

optional
array

The messages to submit for a chat completion. This is an array of objects, each with the structure {"role": {{role}}, "content": {{content}}}, where role is one of system, user or assistant, and content is the string value of the message text. The messages should be sorted in ascending chronological order: the first element of the array is the earliest message and the last element is the latest.

Each chat has an array of messages that is automatically handled by the Stubber system. When there is a back and forth discussion between an assistant and a user, the messages are appended to this array, unless message_operation is set to set, in which case the messages are overwritten by the messages of the current task.

Default: []
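For illustration, a short chat history (with made-up content) takes the following shape:

```python
# Illustrative chat history; the content values are made up.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "What is its population?"},
]

# Earliest message first, latest message last.
assert all(m["role"] in ("system", "user", "assistant") for m in messages)
```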


messages.role

optional
string

The role of a specific message in the messages array. This can be one of system, user or assistant:

  • system: a message to the system, used to give instructions or rules to the assistant.
  • user: a message from the user; use this role for questions or statements that the user wants the assistant to respond to.
  • assistant: a message the assistant responded with. Assistant messages can also be added to messages to put words in the assistant's mouth.

Default: null


message_operation

optional
string

This can be either set or append. It defines the operation to use for the messages in the active chat (via gptchattaskuuid). The set operation erases the chat's previous messages and uses the messages passed in this instance of the task as the messages for the chat completion. The append operation appends the messages of the current task to the existing messages in the chat.

Default: append
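The difference between the two operations can be sketched as follows (illustrative logic only, not Stubber's implementation):

```python
def apply_message_operation(chat_messages, task_messages, message_operation="append"):
    """Sketch of how set vs append affect the chat's stored messages."""
    if message_operation == "set":
        # Erase the chat's previous messages; only this task's messages remain.
        return list(task_messages)
    # Default: append this task's messages to the existing chat history.
    return list(chat_messages) + list(task_messages)
```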


model

optional
string

The model that the task should use for this specific instance of chat completion. On the first instance of a chat, the model is saved for subsequent instances in the same chat. This parameter is therefore not required, but it is recommended to set the desired model on at least the initial chat. Common models are:

  • gpt-3.5-turbo: Fast and inexpensive, but at the cost of reasoning capabilities.
  • gpt-4: Slower than gpt-3.5-turbo, with better reasoning capabilities.
  • gpt-4-turbo: State-of-the-art model: large context window, fast, good reasoning capabilities, better instruction following, JSON mode and more.
  • gpt-4o: State-of-the-art multimodal ("omni") model: large context window, fast, good reasoning capabilities, better instruction following, JSON mode and more.

See Models for a full list of supported models.

Default: gpt-3.5-turbo


submit_to_model

optional
boolean

Whether or not the task should actually submit the messages to the model/assistant for a response. Setting this to false can be useful when a chat should be initiated by the assistant with a default message. Example:

loading...

Default: true


set_model

optional
string

Override the model in the specific chat with this model. This changes the model for future chat instances in cases where no specific model is specified. If a single task specifies both model and set_model, the set_model value is used.

Examples:

  • gpt-3.5-turbo
  • gpt-4o

Default: null


gptchattaskuuid
optional
string

The gptchattaskuuid is the globally unique reference used inside Stubber to keep track of the specific details of a chat, such as the model details, the previous messages, token usage, etc. By default this is a deterministic uuid generated from the stub's stubref, so that each stub automatically has a unique chat. It only needs to be specified if you want fine-grained control over which GPT tasks use which chats across various stubs. If you want multiple different chats on a single stub, use chat_name instead.

Default: {{#deterministicuuid stub.stubref}}


chat_name
optional
string

The human-readable name of the chat, such as "supervisor" or "contractor". Each chat has a unique id, gptchattaskuuid, which enables tasks to keep appending messages and instructions to the same chat, continuously building on the chat context. This unique id is generated from a combination of the stubref and the chat_name, so chats on different stubs are automatically different. To have different chats on the same stub, give each chat its own chat_name, and use that chat_name in all usages of the gpt_chat_task for that chat.

Default: ""
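The determinism can be illustrated with a name-based UUID. The namespace and exact algorithm used by Stubber are assumptions here; only the behaviour matters: the same inputs always produce the same chat id.

```python
import uuid

NAMESPACE = uuid.NAMESPACE_URL  # assumed namespace, for illustration only

def chat_uuid(stubref, chat_name=""):
    # The same stubref and chat_name always produce the same chat id,
    # so repeated gpt_chat_task runs keep building on the same chat.
    return uuid.uuid5(NAMESPACE, stubref + "/" + chat_name)
```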


function_call

optional
string or object

This parameter determines whether functions will be added to the chat completion task. With this parameter set to auto, the assistant automatically chooses when to call a function, and which function to call. If the assistant calls a function and everything is set up correctly, as described here, the action with the same name as the function will be run on the chat's stub.

This parameter can also be used to force a specific function (and consequently, action) to be run. This is done by specifying the parameter as {"name": "get_weather"}, which forces the assistant to call the get_weather function.

Default: auto


functions

optional
array
An array of function objects to make available to the assistant. Note that actions whose action_meta contains ai_function_calling: true are automatically added to the functions the assistant has access to.

This parameter should only rarely be required. Below is an example of a single function object:

loading...

Function objects that are generated automatically from actions have the same structure. The action's fields become the function's properties, and each field's help becomes the corresponding property's description. The action's name becomes the function's name, and the action's description becomes the function's description.

Default: null
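As a reference point, OpenAI-style function objects use a JSON Schema shape like the following. This is a generic get_weather illustration; the names and descriptions are not taken from Stubber's system:

```python
# Generic OpenAI-style function object; names and descriptions are illustrative.
get_weather_function = {
    "name": "get_weather",
    "description": "Get the current weather for a location",
    "parameters": {
        # JSON Schema describing the function's arguments.
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city to get the weather for",
            },
        },
        "required": ["location"],
    },
}
```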


temperature

optional
number

This parameter changes the consistency of the assistant's responses. Lower values, such as 0.15, result in more consistent responses; higher values, such as 0.80, generate more creative and diverse results.

Default: null


action_result_inject_data

optional
any

This parameter defines the data that is returned to the chat assistant when the assistant calls an action. By default the entire stubpost is passed, which can use quite a lot of unnecessary tokens. You can pass the result of a specific task with {{{{skip}}}}~~stubpost.tasks.savedata{{{{/skip}}}}. You can pass multiple fields or task results by making this parameter an array or object of such values. The skip Handlebars helper is required.

Default: {{{{skip}}}}~~stubpost{{{{/skip}}}}


max_tokens

optional
integer

The maximum number of tokens that the assistant can respond with. In general this is not required, and should mainly be used if an abnormally short response is required.

Default: null


set_system_message

optional
string

The system message passed here replaces the system message in the existing chat. It becomes the system message for all subsequent completions in the same chat; it does not last just a single task instance, but remains the system message until it is changed again.

Default: null


response_format

optional
object

This only works for models that support JSON mode, such as gpt-4-1106-preview. For the assistant to respond in JSON only, this parameter has to be set to {"type": "json_object"}, and you have to mention the word "json" somewhere in the system message.

Default: null


disable_model_response

optional
boolean

When this is set to true, the task will not publish the feedback action, by default _update_from_chat_assistant_task. The task can still execute actions, which can be disabled with disable_model_action_execution.

Default: false


disable_model_action_execution

optional
boolean

If set up correctly, when an assistant decides to run a function, it can run an action on Stubber. When this is set to true, the task will not execute actions on Stubber's system.

Default: false


assistant_response_action_name

optional
string

The feedback action that the task will call with the result of the chat completion.

Default: _update_from_chat_gpt


append_response_to_messages

optional
boolean

Each chat has an array of messages that is automatically handled by the Stubber system. When there is a back and forth discussion between an assistant and a user, the messages are appended to this array, unless message_operation is set to set, in which case the messages are overwritten. When append_response_to_messages is false, the assistant's response is not appended to the chat's array of messages.

Default: true


stop_on_high_messages

optional
integer or boolean

This parameter sets the number of messages allowed in a chat before the chat is blocked, as a failsafe mechanism for bot-vs-bot chats. To disable it (we recommend not disabling this), you have to set this parameter explicitly to false.

The default can be relatively low because chats can often reset the messages by setting the message_operation parameter to set.

Examples:

  • false
  • 5
  • 5000
  • true (uses default value)

Default: 200


stop_on_high_interactions

optional
integer or boolean

This parameter sets the number of interactions with the assistant allowed in a chat before the chat is blocked, as a failsafe mechanism for bot-vs-bot chats. To disable it (we recommend not disabling this), you have to set this parameter explicitly to false.

Examples:

  • false
  • 5
  • 5000
  • true (uses default value)

Default: 1000


action_call_result_method

optional
string

By default, when the assistant decides to run an action, a dynamic task is added to that action to return the result of the action back to the assistant, allowing the assistant to respond to the result of the action it initiated. A custom dynamic task can be passed instead of the default one by setting this parameter to custom.

Default: null


dynamic_tasks

optional
array

The tasks to dynamically add to an action that the assistant has chosen to run. The default task that is added, return_gpt_function_data, returns the result of the action, the stubpost, to the assistant. The assistant will then respond normally with the passed feedback action, by default _update_from_chat_assistant_task.

This parameter is only used if action_call_result_method is set to custom. If it is specified as custom, only the tasks specified in dynamic_tasks will be added to the action.

Default:

loading...

Result

loading...

Properties

response

The response as it was received from OpenAI.


response.id

The unique identifier for the chat completion in OpenAI's system. This is not used in Stubber.


response.object

The type of the response object; this should always be chat.completion.


response.created

The timestamp at which the response was created in OpenAI's system.


response.model

The exact model that was used for the chat completion. This property can be significant, since it is not always exactly the model that was passed as a parameter, as is the case here: no model was passed in the Basic usage section, so the default gpt-3.5-turbo was used, yet the value in the response is gpt-3.5-turbo-0613.


response.choices

The choices object contains the message along with some additional information. It should very rarely be necessary to use this property directly. Since the message is the desired result of this task, it is also included at the top level of the response for convenience.


response.usage

The details of the token usage can be found in this property.

  • prompt_tokens: the number of tokens that the prompt, i.e. the messages, used.
  • completion_tokens: the number of tokens in the completion, i.e. the assistant's response.
  • total_tokens: the sum of prompt_tokens and completion_tokens.
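In other words, with made-up token counts:

```python
# Illustrative usage object; the token counts are made up.
usage = {"prompt_tokens": 57, "completion_tokens": 40, "total_tokens": 97}
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```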

response.system_fingerprint

A unique identifier used by OpenAI for the system that was responsible for handling the request.


response.message

This is the assistant's response to the prompt or messages that were provided to the task. This message is also passed as the message of the feedback action, _update_from_chat_assistant_task.


Action Meta

All Stubber actions have the action_meta property. In gpt_chat_task, this property can be used to make actions available for calling by AI assistants. It can also be used to change the behaviour when specific actions are run by the AI assistant.

Basic Usage

Here is an example action_meta object for a get_weather function. The initial ai_function_calling: true is what makes the action available for calling by AI assistants. Additional parameters are then all nested inside of ai_details.

loading...

Parameters

ai_function_calling

optional
boolean

Whether or not an action is callable by an AI assistant.

Default: false


ai_details

required if ai_function_calling is true
object

The object that contains the additional parameters relevant to AI action calling. It is only relevant if ai_function_calling is set to true.

Default: null


ai_details.action_result_inject_data

required if ai_function_calling is true
object

The data that will be passed back to the AI assistant after a successful AI action call. Good inject data is as concise as possible: you don't want to return the full stubpost, as this uses a lot of tokens and is quite expensive. Return only the data that the model will need from the result of the action it chose to run.

Example:

loading...

Default:

loading...

ai_details.clear_function_call_results

optional
boolean

With this parameter set to true, all previous function call results will be cleared. This can be used to reduce the token count in long chats where function results account for a lot of unnecessary tokens.

Default: false


ai_details.clear_intermediate_system_messages

optional
boolean

With this parameter set to true, all system messages except for the first (the one in the 0th message position) and the most recent will be cleared. This can help reduce token count and also helps with keeping the instructions to the model as concise as possible.

Default: false


ai_details.disable_dynamic_tasks

optional
boolean

Whenever an AI assistant runs an action, a dynamic task that returns the result of the action to the chat is added to that action automatically. This can be disabled for all actions in the gpt_chat_task params by setting action_call_result_method to custom. However, sometimes we want to do this only when a specific action is run.

This parameter disables the addition of the default dynamic task when the specific action is run.

Default: false


Examples

In these examples, the wider picture of what happens is shown, as the most important part of the task result is the assistant's response message, which is fairly simple to understand. Knowing which parameters to use in which scenario is the secret sauce of this task.

Greet new user with the weather

We have an action called greet_new_users with the gpt_chat_task definition below.

The task definition:

loading...

The assistant calls a Stubber action, get_current_weather, to get the weather in a location, and then generates a welcome message which includes the weather. This requires an action, get_current_weather, with the action_meta field ai_function_calling set to true, and a text field named location available on the stub in the correct state.

For our get_current_weather action, we added a task with an API call to api.weatherapi.com. This task is shown below:

loading...

We added an API key for api.weatherapi.com in our stub data. As for {{stubpost.data.location}}: when the assistant decides to call a function, the properties the assistant uses to call the function are added to the stubpost.data of the action that the Stubber system runs on behalf of the assistant. The stubpost.data.location value will therefore come from the assistant.

Result

Note that in the order of events below, the disable_model_response: true parameter ensures that there is no additional assistant response between steps 2 and 3. Sometimes the assistant informs the user that it is going to call a function before actually calling it.

The order of things happening in this example is as follows:

  1. We run the greet_new_users action with the message "I am flying to New York".
  2. The action runs the gpt_chat_task defined above, calling the assistant.
  3. The assistant responds that the action get_current_weather, with the location parameter set to "New York", should be run. The dynamic task that will return the result of the action to the assistant is added to the get_current_weather action request.
  4. The Stubber system automatically runs the get_current_weather action, and thus the task that makes an API call to api.weatherapi.com.
  5. The dynamic task returns the entire stubpost of the get_current_weather action as an appended message to the assistant.
  6. The assistant generates a greeting response as instructed in the system message; this response is added to the stub via the task feedback action, _update_from_chat_assistant_task.

Here is a screenshot of the flow in Stubber: ![greet-with-weather](/images/docs/templates/actions/tasks/gpt-chat-task/greet-with-weather.png)


Force the model to respond in JSON only

This is only available for select models. You have to mention the word "json" in the system message, or the task will error.

The task definition:

loading...
Result

Here is the result with the stubpost message "Create a recommended people structure for my company, 'The Pink Factory'. We have technical, operations, support and sales teams with 5, 5, 10, 10 members in each respectively."

loading...

Supported Models

Supported models

[
  "gpt-3.5-turbo",
  "gpt-3.5-turbo-0613",
  "gpt-3.5-turbo-1106",
  "gpt-3.5-turbo-0125",
  "gpt-3.5-turbo-16k",
  "gpt-3.5-turbo-16k-0613",
  "gpt-4",
  "gpt-4-0613",
  "gpt-4-32k",
  "gpt-4-32k-0613",
  "gpt-4-1106-preview",
  "gpt-4-vision-preview",
  "gpt-4-0125-preview",
  "gpt-4-turbo-preview",
  "gpt-4-turbo",
  "gpt-4-turbo-2024-04-09",
  "gpt-4o",
  "gpt-4o-2024-05-13",
  "claude-3-haiku-20240307",
  "claude-3-sonnet-20240229",
  "claude-3-opus-20240229",
  "di-llama-3-70b-instruct",
  "gq-llama-3-8b",
  "gq-llama-3-70b"
]

Supported function models

[
  "gpt-3.5-turbo",
  "gpt-3.5-turbo-0613",
  "gpt-3.5-turbo-16k-0613",
  "gpt-3.5-turbo-1106",
  "gpt-3.5-turbo-0125",
  "gpt-4",
  "gpt-4-0613",
  "gpt-4-32k-0613",
  "gpt-4-1106-preview",
  "gpt-4-turbo-preview",
  "gpt-4-0125-preview",
  "gpt-4-turbo",
  "gpt-4-turbo-2024-04-09",
  "gpt-4o",
  "gpt-4o-2024-05-13",
  "claude-3-opus-20240229",
  "claude-3-sonnet-20240229",
  "claude-3-haiku-20240307",
  "gq-llama-3-8b",
  "gq-llama-3-70b",
  "di-llama-3-70b-instruct"
]