Actions: AI
GPT chat completion
This submits the prompt to the GPT Chat Completion endpoint. It also includes the previous messages and responses in the conversation, which can be accessed independently using the ‘GPTConversations’ array functions. The documentation for this endpoint can be found here: https://platform.openai.com/docs/api-reference/chat/create.
| Argument | Description |
|---|---|
| ResponseOutput | Where the resulting response text should be placed. |
| ResponseSchema | JSON Schema for structured response. To learn more, and to use our helper bot, visit https://platform.openai.com/docs/guides/structured-outputs/ |
| ConversationName | The name that will be used to track this conversation. This name gives you the ability to access the conversation’s responses directly using the ‘GPTConversations’ variable array. |
| Model | The model that should be used for the request. If you use ‘chatgpt-4o-latest’ your code will always use the latest 4o model. Visit https://platform.openai.com/docs/models for more info. |
| BasePrompt | The prompt which defines the desired behavior of the AI. This is where you tell the AI what role to play in this conversation. Describe it in as much detail as you can. For example: You are a world class designer, and you will help me design an app based on proper design rules and conventions. |
| RequestPrompt | Where the user’s next prompt should come from. |
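For orientation, the sketch below shows roughly what this action does in terms of the OpenAI Python SDK. It is an illustration of the assumed mapping between the action’s arguments and the chat completions endpoint, not the action’s actual implementation; the history contents and model name are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumed mapping: BasePrompt -> system message, RequestPrompt -> the latest user message,
# ConversationName -> the list of prior messages resent with each call.
history = [
    {"role": "system", "content": "You are a world class designer ..."},  # BasePrompt
    {"role": "user", "content": "Help me pick a colour palette."},        # earlier turn
    {"role": "assistant", "content": "Here are three palettes ..."},      # earlier reply
]
history.append({"role": "user", "content": "Now suggest a typeface."})    # RequestPrompt

completion = client.chat.completions.create(
    model="chatgpt-4o-latest",  # Model argument
    messages=history,
)
print(completion.choices[0].message.content)  # ResponseOutput
```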
GPT delete file
Delete an uploaded file from the OpenAI File API.
| Argument | Description |
|---|---|
| FileID | Variable containing the ID of the file to delete. |
| DeleteResult | Variable to store the API response JSON as an array. |
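For reference, deleting a file through the OpenAI Files API looks roughly like this with the official Python SDK; the file ID shown is a placeholder.

```python
from openai import OpenAI

client = OpenAI()
# FileID argument -> the ID string; DeleteResult -> the returned deletion object.
result = client.files.delete("file-abc123")  # placeholder ID
print(result.deleted)  # True if the file was removed
```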
GPT file info
Retrieve metadata for a specific OpenAI file by its file ID.
| Argument | Description |
|---|---|
| FileID | Select a variable that contains the OpenAI file ID. |
| FileInfo | An array containing the information of the file. |
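The equivalent OpenAI Files API call, sketched with the official Python SDK (placeholder file ID):

```python
from openai import OpenAI

client = OpenAI()
info = client.files.retrieve("file-abc123")      # FileID argument (placeholder ID)
print(info.filename, info.bytes, info.purpose)   # fields that end up in FileInfo
```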
GPT list files
List files accessible to the configured OpenAI account.
| Argument | Description |
|---|---|
| Files | Array of the file objects present in your organization’s Files API. |
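The underlying Files API call, sketched with the official Python SDK:

```python
from openai import OpenAI

client = OpenAI()
for f in client.files.list():      # the Files argument receives this list as an array
    print(f.id, f.filename, f.purpose)
```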
GPT response
Unified Responses API with conversation state, optional file attachments, and optional inline image input.
| Argument | Description |
|---|---|
| DetailMode | Optional vision detail mode for inline images: auto, low, high. |
| ImageData | Include images in the request. Supports multiple files. Each line can be a URL to a remote file, a Base64-encoded image, or a FileID from the OpenAI Files API. |
| ResponseSchema | Optional JSON schema for structured output. |
| Model | The model to use for the response. |
| BasePrompt | Define how the AI should behave and respond in this conversation. |
| ResponseOutput | Where to place the response output. The full JSON response is available in the GPTConversations array. |
| ConversationName | Name of the conversation thread in GPTConversations. |
| RequestPrompt | Next prompt from the user. |
| VectorStoreIDs | Optional variable holding a comma-delimited list of Vector Store IDs from the GPT upload file action. |
| ReasoningEffort | When using reasoning models like GPT-5 or o3, you can specify the amount of reasoning effort the model puts in. |
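As a rough illustration, the sketch below shows how these arguments could map onto OpenAI’s Responses API using the official Python SDK. The mapping, the image URL, and the vector store ID are assumptions and placeholders, not the action’s actual implementation.

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o",                                        # Model argument
    instructions="You are a helpful design assistant.",    # BasePrompt
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "Describe this screenshot."},   # RequestPrompt
            {"type": "input_image",
             "image_url": "https://example.com/shot.png",  # ImageData (placeholder URL)
             "detail": "auto"},                            # DetailMode
        ],
    }],
    tools=[{"type": "file_search",
            "vector_store_ids": ["vs_abc123"]}],           # VectorStoreIDs (placeholder)
    # On reasoning models, effort can be set with: reasoning={"effort": "high"}  # ReasoningEffort
)
print(response.output_text)                                # ResponseOutput
```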
GPT upload file
Uploads a file you already have in a variable to the OpenAI Files API. Returns file_id and the full JSON response.
| Argument | Description |
|---|---|
| VectorStoreID | Variable to receive the created vector_store id. |
| ResponseOutput | Where to store the full response (Variable or Element). |
| FileName | Filename to send in multipart Content-Disposition (e.g., report.pdf). |
| Purpose | OpenAI file purpose (e.g., assistants, batch, fine-tune). Choose Variable or Input. |
| LifeTime | |
| FileData | Variable that contains the raw file data to upload. |
| FileID | Variable to receive the returned file_id string. |
| CreateVectorStore | Whether to create a vector embedding for your file. This is required to use documents with the GPT response action. |
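A minimal sketch of the equivalent Files API upload with the official Python SDK, assuming the action maps FileData, FileName, and Purpose onto a standard file upload; the file name is a placeholder. (In older SDK versions the vector store client lives under `client.beta.vector_stores`.)

```python
from openai import OpenAI

client = OpenAI()

# FileData/FileName -> the uploaded bytes and name; Purpose -> the OpenAI file purpose.
uploaded = client.files.create(
    file=("report.pdf", open("report.pdf", "rb")),   # placeholder file
    purpose="assistants",
)
print(uploaded.id)   # the FileID argument receives this value

# CreateVectorStore: index the file so it can be searched by the GPT response action.
store = client.vector_stores.create(name="my-docs", file_ids=[uploaded.id])
print(store.id)      # the VectorStoreID argument receives this value
```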
GPT vision
This sends an image to the OpenAI Vision API, returning the GPT’s response to your image based on the prompt. The call is synchronous.
| Argument | Description |
|---|---|
| ResponseSchema | JSON Schema for structured response. To learn more, and to use our helper bot, visit https://platform.openai.com/docs/guides/structured-outputs/ |
| ResponseOutput | Where the response text will be placed. |
| DetailMode | The input detail mode. Low = 512x512; High = 768x2048 or 2048x768. OpenAI resizes input images to these dimensions, so it is best to send images at the appropriate size. The limit per image is 12 MB. |
| ImageData | The binary data of the image. Your image will be resized based on the detail level you selected. When this variable is a numerical array, the ‘ImageData’ key is used to locate the image data. |
| ResponseModel | The GPT model to use to generate the text response. |
| TopicName | The name that will be used to track calls. This name gives you the ability to access the returned data directly using the ‘GPTVision’ array variable. |
| RequestPrompt | The prompt for the request, e.g. “What is in the image?” |
| BasePrompt | The prompt which defines the desired behavior of the AI. This is where you tell the AI what role to play in this conversation. Describe it in as much detail as you can. For example: You are a world class designer, and you will help me design an app based on proper design rules and conventions. |
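Vision requests go through the same chat completions endpoint with image content attached. The sketch below, using the official Python SDK, is an assumed illustration of the mapping; the image file, model, and prompts are placeholders.

```python
import base64
from openai import OpenAI

client = OpenAI()

# ImageData -> base64-encoded image bytes sent as a data URL; DetailMode -> "low" or "high".
with open("photo.jpg", "rb") as fh:                  # placeholder image
    b64 = base64.b64encode(fh.read()).decode()

completion = client.chat.completions.create(
    model="gpt-4o",                                  # ResponseModel
    messages=[
        {"role": "system", "content": "You are a world class designer ..."},  # BasePrompt
        {"role": "user", "content": [
            {"type": "text", "text": "What is in the image?"},                # RequestPrompt
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}", "detail": "low"}},
        ]},
    ],
)
print(completion.choices[0].message.content)         # ResponseOutput
```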
clear GPT conversation
Clear the content of a GPT Conversation.
| Argument | Description |
|---|---|
| ConversationName | Select the GPT conversation to be cleared. |
convert text to speech
This converts text to speech using OpenAI’s Text-to-Speech API. Documentation for this endpoint can be found here: https://platform.openai.com/docs/api-reference/audio/createSpeech.
| Argument | Description |
|---|---|
| Response | Where the resulting audio data should be placed. |
| Text | The text to be turned into audio. |
| Model | The model that should be used for the request. |
| Voice | The OpenAI TTS voice to be used. |
| Speed | The speed of the speech. |
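A minimal sketch of the underlying speech endpoint with the official Python SDK; the model, voice, text, and output file name are placeholders for illustration.

```python
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",                                     # Model argument
    voice="alloy",                                     # Voice argument
    input="Hello from the text to speech action.",     # Text argument
    speed=1.0,                                         # Speed argument
)
speech.write_to_file("hello.mp3")  # the Response argument receives this audio data
```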
generate image
Generates an image using OpenAI API and places the image in a variable or image. The documentation for this endpoint can be found here: https://platform.openai.com/docs/api-reference/images.
| Argument | Description |
|---|---|
| ResponseFormat | Return image binary data or a URL. |
| Resolution | The resolution of the resulting image. |
| RequestPrompt | The prompt for the image generation request. This is where you describe the image you wish to generate in as much detail as you can. |
| Style | Generation style to use. |
| ConversationName | The name that will be used to track this conversation. This name gives you the ability to access the conversation’s data directly using the GPTConversations array. |
| Quality | The quality setting to use for the request. ‘standard’ produces lower-quality results, and ‘hd’ produces the highest-quality images the DALL·E 3 model is capable of. |
| ResponseOutput | Where the URL or image data should go. |
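For reference, an image generation request against the documented endpoint looks roughly like this with the official Python SDK; the prompt and settings are placeholders.

```python
from openai import OpenAI

client = OpenAI()

image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolour of a lighthouse at dawn",  # RequestPrompt
    size="1024x1024",                                # Resolution
    quality="hd",                                    # Quality
    style="vivid",                                   # Style
    response_format="url",                           # ResponseFormat ("url" or "b64_json")
)
print(image.data[0].url)                             # ResponseOutput
```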
This chapter was last updated on Fri 24 Oct 2025 16:52:19 BST