Actions: AI

GPT chat completion

This submits the prompt to the GPT Chat Completion endpoint. It also includes the previous messages and responses in the conversation; these can be accessed independently using the ‘GPTConversations’ array functions. The documentation for this endpoint can be found here: https://platform.openai.com/docs/api-reference/chat/create.

Argument Description
ResponseOutput Where the resulting response text should be placed.
ConversationName The name that will be used to track this conversation. This name gives you the ability to access the conversation’s responses directly using the ‘GPTConversations’ variable array.
Model The model that should be used for the request. If you use ‘chatgpt-4o-latest’ your code will always use the latest 4o model. Visit https://platform.openai.com/docs/models for more info.
BasePrompt The prompt which defines the desired behavior of the AI. This is where you tell the AI what role to play in this conversation. Describe it in as much detail as you can. For example: You are a world class designer, and you will help me design an app based on proper design rules and conventions.
RequestPrompt Where the user’s next prompt should come from.
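
For reference, the sketch below shows roughly the request this action makes under the hood, using the official OpenAI Python SDK. The prompt text, model name and variable names are illustrative only; the action manages the conversation history for you via the ‘GPTConversations’ array.

```python
# Minimal sketch of the underlying Chat Completions call (OpenAI Python SDK).
# The prompt strings and in-memory conversation list are illustrative only;
# the action tracks conversation history for you via 'GPTConversations'.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

conversation = [
    # BasePrompt: defines the desired behaviour of the AI
    {"role": "system", "content": "You are a world class designer, and you will "
                                  "help me design an app based on proper design "
                                  "rules and conventions."},
    # RequestPrompt: the user's next prompt
    {"role": "user", "content": "Suggest a colour scheme for a travel app."},
]

response = client.chat.completions.create(
    model="chatgpt-4o-latest",  # Model argument
    messages=conversation,
)

# ResponseOutput: the resulting response text
print(response.choices[0].message.content)
```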

GPT vision

This sends an image to the OpenAI Vision API and returns the GPT's response to your image based on the prompt. The call is synchronous.

Argument Description
ResponseOutput Where the response text will be placed.
DetailMode The input detail mode. Low = 512x512; High = 768x2048 or 2048x768. Input images are resized to these dimensions when uploaded, so it is best to send appropriately sized images.
ImageData The binary data of the image. Your image will be resized based on the detail level you selected.
ResponseModel The GPT model to use to generate the text response.
TopicName The name that will be used to track calls. This name gives you the ability to access the returned data directly using the ‘GPTVision’ array variable.
RequestPrompt The prompt for the request, e.g. ‘What is in the image?’
BasePrompt The prompt which defines the desired behavior of the AI. This is where you tell the AI what role to play in this conversation. Describe it in as much detail as you can. For example: You are a world class designer, and you will help me design an app based on proper design rules and conventions.
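
For reference, the sketch below shows roughly how an equivalent Vision request looks when made directly with the OpenAI Python SDK. The file name, prompts and model are illustrative assumptions; the action sends the ImageData for you and stores the result in the ‘GPTVision’ array.

```python
# Minimal sketch of the underlying Vision request (OpenAI Python SDK).
# The file name, prompts and model are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()

# ImageData: binary image data, base64-encoded for the request
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # ResponseModel argument
    messages=[
        # BasePrompt
        {"role": "system", "content": "You are a helpful visual assistant."},
        {
            "role": "user",
            "content": [
                # RequestPrompt
                {"type": "text", "text": "What is in the image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{image_b64}",
                        "detail": "low",  # DetailMode argument
                    },
                },
            ],
        },
    ],
)

# ResponseOutput: the returned description text
print(response.choices[0].message.content)
```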

clear GPT conversation

Clears the content of a GPT conversation.

Argument Description
ConversationName Select the GPT conversation to be cleared.

convert text to speech

This converts text to speech using OpenAI's Text to Speech API. Documentation for this endpoint can be found here: https://platform.openai.com/docs/api-reference/audio/createSpeech.

Argument Description
Response Where the resulting audio data should be placed.
Text The text to be turned into audio.
Model The model that should be used for the request.
Voice The OpenAI TTS voice to be used.
Speed The speed of the speech.
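
For reference, the sketch below shows roughly the equivalent call made directly with the OpenAI Python SDK; the model, voice, text and output file name are illustrative assumptions.

```python
# Minimal sketch of the underlying Text To Speech call (OpenAI Python SDK).
# Model, voice, text and output file name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",               # Model argument
    voice="alloy",               # Voice argument
    speed=1.0,                   # Speed argument
    input="Welcome to the app.", # Text argument
)

# Response: the resulting audio data (MP3 by default)
speech.write_to_file("welcome.mp3")
```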

generate image

Generates an image using the OpenAI API and places the image in a variable or an image. The documentation for this endpoint can be found here: https://platform.openai.com/docs/api-reference/images.

Argument Description
ResponseFormat Whether to return the image as binary data or as a URL.
Resolution The resolution of the resulting image.
RequestPrompt The prompt for the image generation request. This is where you describe the image you wish to generate in as much detail as you can.
Style Generation style to use.
ConversationName The name that will be used to track this conversation. This name gives you the ability to access the conversation’s responses directly using the ‘GPTConversations’ array.
Quality The quality setting to use for the request. ‘standard’ produces lower-quality results, and ‘hd’ produces the highest-quality images the DALL-E 3 model is capable of.
ResponseOutput Where the URL or image data should go.
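
For reference, the sketch below shows roughly the equivalent request made directly with the OpenAI Python SDK; the prompt, size, quality and style values are illustrative assumptions.

```python
# Minimal sketch of the underlying image generation call (OpenAI Python SDK).
# Prompt, size, style and quality values are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A minimalist app icon of a paper plane on a blue gradient",  # RequestPrompt
    size="1024x1024",       # Resolution argument
    quality="hd",           # Quality argument: 'standard' or 'hd'
    style="vivid",          # Style argument
    response_format="url",  # ResponseFormat: 'url' or 'b64_json' for binary data
)

# ResponseOutput: the URL (or base64 data) of the generated image
print(result.data[0].url)
```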

This chapter was last updated on Fri 13 Dec 2024 12:14:28 GMT