OpenAI (ChatGPT, Whisper, DALL-E)

OpenAI modules

After connecting to the OpenAI (ChatGPT, Whisper, DALL-E) app, you can use the following modules to build your {{scenario plural lowercase}}.

Triggers

Watch Batch Completed

Triggers when a batch is completed.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Limit: Enter the maximum number of results to be worked with during one execution cycle.

AI

Message an Assistant

Sends messages to a specified or newly created thread and executes it seamlessly. This action can send the arguments for your function calls to the specified URLs (POST HTTP method only). Works with Assistants v2.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Assistant: Select or map the assistant you would like to use.
Role: Indicate whether to send a message on behalf of the user or the assistant.
Message: Enter the message to send to the assistant.
Image Files: Select images you want to include. Map or select the binary data of the image. You can retrieve the binary data of an image using the HTTP Get a File module, or another app such as Dropbox. Images are only supported on certain models. For more information, see OpenAI vision-compatible models.
Image URLs: Add images you want to include. Enter the URL address to a public resource of the image, for example, https://getmyimage.com/myimage.png. Images are only supported on certain models. For more information, see OpenAI vision-compatible models.
Thread ID: Enter the thread ID where the message will be stored. To find your thread ID, go to the OpenAI playground and open your assistant; the thread ID will be visible. If Thread ID is left empty, a new thread is created. You can find the new thread's Thread ID value in the module's response.
Tool Choice: Select the tool that is called by the model. The options are:
- None: the model will not call any tool and instead generates a message.
- Auto: the model can pick between generating a message or calling one or more tools.
- Required: the model must call one or more tools.
- A specific tool's name.
Model: Select or map the model you want to use. Note: Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.
Tools: Specify tools in order to override the tools the assistant can use when generating the response.
File Search Resources: Select the vector store that will become available to the file search tool in this thread.
Code Interpreter Resources: Select the files that will become available to the code interpreter tool in this thread.
Instructions: Enter instructions in order to override the default system message of the assistant when generating the response.
Max Prompt Tokens: The maximum number of tokens to use in the prompts (input, including files). If the run exceeds the number of prompt tokens specified, the run will end with the status incomplete. Refer to the OpenAI platform documentation (https://platform.openai.com/docs/assistants/deep-dive/max-completion-and-max-prompt-tokens) to learn more.
Max Completion Tokens: The maximum number of tokens to use in the completion (output). If the run exceeds the number of completion tokens specified, the run will end with the status incomplete. Note: The o1 models require tokens for both output and reasoning; insert the sum of both if you use the o1 models. Refer to the OpenAI platform documentation to learn more.
Temperature: Specify the sampling temperature to use. Higher temperatures generate more diverse and creative responses, for example, 0.8. Lower temperatures generate more focused and well-defined responses, for example, 0.2. The default value is 1. The value must be lower than or equal to 2.
Top P: Specify the top p value to use nucleus sampling. This will consider the results of the tokens with top p probability mass. The default value is 1. The value must be lower than or equal to 1.
Response Format: Select the format in which the response will be returned.
Parse JSON Response: If you selected JSON object for the Response Format, you can choose whether or not the response will be parsed.
Truncation Strategy: Select the truncation strategy for the response.
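The module wraps OpenAI's Assistants API. As a rough, hedged illustration only (not the module's actual implementation), the sketch below uses the OpenAI Python SDK; the assistant ID, message text, and polling loop are placeholder assumptions. It creates a thread when none is supplied, posts a user message, and runs the selected assistant:

```python
# Illustrative sketch of what "Message an Assistant" does via the Assistants API.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ASSISTANT_ID = "asst_example123"       # hypothetical assistant ID
thread = client.beta.threads.create()  # "If Thread ID is left empty, a new thread is created"

client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",                                               # the Role field
    content="Summarize our refund policy in two sentences.",   # the Message field
)

run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=ASSISTANT_ID,
    max_prompt_tokens=2048,      # Max Prompt Tokens
    max_completion_tokens=1024,  # Max Completion Tokens
    temperature=1,
    top_p=1,
)

# Poll until the run finishes (the module handles this waiting for you).
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```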
Create a Completion (Prompt) (GPT and o-series Models)

Creates a completion for a prompt or chat. See the OpenAI model endpoint compatibility section for the list of supported models.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Select Method: Select a method to create a completion.
Model: Select or map the model you want to use: o1, o3, o4, GPT-4.1, GPT-4o, GPT-4 Turbo, GPT-4, GPT-4.5, GPT-3.5 Turbo. Note: Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.
Messages: Add the messages for which you want to create the completion by selecting the role and entering the message content. For some models you can manage the image input type and image details, as well as the audio filename and audio data. For more information about chat completion, refer to the OpenAI documentation.
Web Search Options: Add web search capabilities to improve the relevance of responses. Use Search Context Size to define how much search content to include, and optionally set User Location with City, Country, Region, and Timezone to refine location-based results.
Max Completion Tokens: The maximum number of tokens to generate in the completion. The default value is 2048. Note: Low values may cause the output to be truncated, and high values may use a lot of OpenAI credit. The o1 and o3 models require tokens for both output and reasoning; insert the sum of both if you use the o1 and o3 models. Refer to the OpenAI platform documentation to learn more about reasoning models.
Predicted Outputs: Enter the content that is going to be used as a predicted output. This is often the text of a file you are regenerating with minor changes. Predicted outputs are available for the latest GPT-4o and GPT-4o mini models. For more information, refer to the OpenAI predicted outputs documentation.
Temperature: Specify the sampling temperature to use. A higher value means the model will take more risks. Try 0.9 for more creative applications and 0 (argmax sampling) for ones with a well-defined answer. Defaults to 1.
Top P: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Defaults to 1.
Number: The number of completions to generate for each prompt. Defaults to 1.
Frequency Penalty: Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. This value must be a number between -2 and 2.
Presence Penalty: Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. This value must be a number between -2 and 2.
Token Probability: Add the token probability by selecting the token and the probability. The probability must be a number between -100 and 100.
Response Format: Choose the format for the response.
Reasoning Effort: Constrain the effort on reasoning for reasoning models (https://platform.openai.com/docs/guides/reasoning) by selecting Low, Medium, or High. Reducing the reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. Defaults to Medium.
Audio Output Options: If you selected the GPT-4o audio preview (gpt-4o-audio-preview) model and the audio + text response format, indicate a voice and an audio format in the Voice and File Format fields accordingly.
Parse JSON Response: If you selected JSON object for the Response Format, you can choose whether or not the response will be parsed.
Seed: This feature is in beta. If specified, the system will make a best effort to sample deterministically, so that repeated requests with the same seed and parameters should return the same result. Refer to the system_fingerprint response parameter to monitor changes in the backend. For more information, refer to the OpenAI documentation.
Stop Sequences: Add up to 4 sequences where the API will stop generating further tokens.
Other Input Parameters: Add any additional input parameters by selecting the Parameter Name and the Input Type, and entering the Parameter Value. For more information, refer to the OpenAI documentation.
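Most of these fields map directly onto parameters of OpenAI's Chat Completions endpoint. As a hedged sketch only (OpenAI Python SDK; the model name and values are placeholders, not module defaults), an equivalent direct call looks roughly like this:

```python
# Rough equivalent of "Create a Completion (Prompt)" as a direct API call.
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",                  # Model
    messages=[                       # Messages: role + content pairs
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Write a tagline for a coffee shop."},
    ],
    max_completion_tokens=2048,      # Max Completion Tokens (module default: 2048)
    temperature=0.9,                 # Temperature
    top_p=1,                         # Top P
    n=1,                             # Number of completions
    frequency_penalty=0,             # Frequency Penalty (-2 to 2)
    presence_penalty=0,              # Presence Penalty (-2 to 2)
    stop=["\n\n"],                   # Stop Sequences (up to 4)
    seed=42,                         # Seed (beta, best-effort determinism)
)

print(completion.choices[0].message.content)
```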
Transform Text to Structured Data

Identifies specified information in a prompt's raw text and returns it as structured data.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Model: Select or map the model you want to use. Note: Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.
Text to Parse: Enter the text containing the data that you want to transform.
Prompt: Enter a short description explaining what type of data should be extracted from the text entered above.
Structured Data Definition: Enter parameters, or map the way in which the structured data should be returned.
- Parameter Name: Enter a name for the parameter. Adhere to the following rules: the first character must be either a letter or an underscore; do not include any commas or spaces; do not use any special symbols other than an underscore.
- Description: Enter a short description of the parameter. This description is part of the prompt, so the clearer your description, the more accurate the response. For example, if you would like OpenAI to search for coordinates in the text, you can enter "latitude and longitude of the location".
- Data Type: Select or map the data type format in which the parameter is returned. If there will be multiple occurrences of similar data types, select an array option. For example, if there are coordinates of multiple locations to be returned, select Array (Text).
- Value Examples: Enter or map examples of possible values to be returned. The more examples you provide, the more accurate the response will be.
- Is Parameter Required?: Select or map whether the parameter is required. When selecting Yes, OpenAI will be forced to generate a value in the output, even if no value is detected in the text.
Object Definitions: Enter the object property details.
- Parameter Name: Enter a name for the object parameter. Adhere to the following rules: the first character must be either a letter or an underscore; do not include any commas or spaces; do not use any special symbols other than an underscore.
- Description: Enter a short description of the object parameter. For example, if you would like OpenAI to search for coordinates in the text, you can enter "latitude and longitude of the location".
- Data Type: Select or map the data type format in which the object parameter is returned. If there will be multiple occurrences of similar data types, select an array option. For example, if there are coordinates of multiple locations to be returned, select Array (Text).
- Is Parameter Required?: When selecting Yes, OpenAI will be forced to generate a value in the output, even if no value is detected in the text.
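This kind of extraction can be expressed as a JSON-schema-constrained completion. The sketch below is not the module's actual implementation; it only illustrates, with the OpenAI Python SDK and a made-up schema, how a parameter name, description, data type, and required flag translate into a structured response:

```python
# Illustrative only: structured extraction via a JSON-schema response format.
import json
from openai import OpenAI

client = OpenAI()

schema = {
    "type": "object",
    "properties": {
        # Parameter Name + Description + Data Type from the module's definition
        "latitude": {"type": "number", "description": "Latitude of the location"},
        "longitude": {"type": "number", "description": "Longitude of the location"},
    },
    "required": ["latitude", "longitude"],  # Is Parameter Required? = Yes
    "additionalProperties": False,
}

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Extract the requested fields from the text."},  # Prompt
        {"role": "user", "content": "The lighthouse stands at 48.3819 N, 4.4956 W."},  # Text to Parse
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "location", "schema": schema, "strict": True},
    },
)

print(json.loads(completion.choices[0].message.content))
```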
Analyze Image(s) (Vision)

Accepts an array of images as an input and provides an analysis result for each of the images, following the instructions specified in the prompt.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Prompt: Enter instructions for how to analyze the image(s).
Images: Add or map the images you want to analyze. You can add images by entering an image URL or image file data.
- Image URL: Enter the URL address to a public resource of the image, for example, https://getmyimage.com/myimage.png.
- Image File: Map or select the binary data of the image. You can retrieve the binary data of an image using the HTTP Get a File module, or another app such as Dropbox.
Model: Select or map the model you want to use. Note: Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.
Max Completion Tokens: Enter the maximum number of tokens to use for the completion. The default value is 2048.
Temperature: Specify the sampling temperature to use. Higher temperatures generate more diverse and creative responses, for example, 0.8. Lower temperatures generate more focused and well-defined responses, for example, 0.2. The default value is 1. The value must be lower than or equal to 2.
Top P: Specify the top p value to use nucleus sampling. This will consider the results of the tokens with top p probability mass. The default value is 1. The value must be lower than or equal to 1.
Number: Enter the number of responses to generate. If more than 1 response is generated, the results can be found in the module's output within Choices. The default value is 1.

Generate an Image

Generates an image using GPT Image or DALL-E.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Model: Select or map the model you want to use: GPT Image 1, DALL-E 3, DALL-E 2. Note: Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.
Prompt: Enter details of the images you want to generate. The maximum number of characters allowed is 1000 if your model is DALL-E 2 and 4000 if your model is DALL-E 3.
Size: Select or map the size of the images to generate. It must be one of 256x256, 512x512, or 1024x1024.
Number: Enter the number of images to generate. Enter a number between 1 and 10.
Response Format: Select or map the format in which the generated images are returned. If you select Base64 encoded PNG, the raw PNG image can be obtained by using the IML function toBinary(<module no>.data[].b64_json; base64) in the subsequent modules.
Quality: Select the quality of the generated image. HD generates images with finer details and greater consistency across the image.
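For orientation, here is a hedged sketch of the same operation as a direct API call (OpenAI Python SDK; the prompt and file name are placeholders). It also shows the plain-Python counterpart of the toBinary(...; base64) step, decoding the b64_json payload back into a binary PNG:

```python
# Illustrative sketch of "Generate an Image" plus base64 decoding of the result.
import base64
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",                          # Model
    prompt="A watercolor lighthouse at dawn",  # Prompt
    size="1024x1024",                          # Size
    n=1,                                       # Number
    response_format="b64_json",                # Base64 encoded PNG
    quality="hd",                              # Quality
)

# Equivalent of toBinary(<module no>.data[].b64_json; base64) in later modules.
png_bytes = base64.b64decode(result.data[0].b64_json)
with open("generated.png", "wb") as f:
    f.write(png_bytes)
```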
Edit Image(s)

Creates an edited or extended image given one or more source images and a prompt.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Model: Select or map the model you want to use: GPT Image 1, DALL-E 2. Note: Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.
Image: Enter an image to edit. The file must be in PNG format, less than 4 MB, and square. If a mask file is not provided, the image must have transparency, which will be used as the mask.
Prompt: Enter details of the images you want to edit. The maximum number of characters allowed is 1000.
Mask: Enter an additional image with fully transparent areas (where alpha is zero). The transparent areas indicate where the image will be edited. The file must be in PNG format, less than 4 MB, and have the same dimensions as the image above.
Number: Enter the number of images to generate. Enter a number between 1 and 10.
Size: Select or map the size of the images to generate. It must be one of 256x256, 512x512, or 1024x1024.
Response Format: Select or map the format in which the generated images are returned. If you select Base64 encoded PNG, the raw PNG image can be obtained by using the IML function toBinary(<module no>.data[].b64_json; base64) in the subsequent modules.

Create a Translation (Whisper)

Creates a translation of an audio file into English.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
File Name: Enter the name of the file you want to translate.
File Data: Enter the data of the file you want to translate.
Model: Select or map the model you want to use. Refer to the OpenAI audio API documentation for information on available models. Note: Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.
Prompt: Enter text to guide the model's style or continue a previous audio segment (optional). The prompt should be in English.
Temperature: Enter a sampling temperature between 0 and 1. Higher values, such as 0.8, will make the output more random. Lower values, such as 0.2, will make it more focused and deterministic.

Create a Transcription

Creates a transcription of an audio file to text.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
File Name: Enter the name of the file you want to transcribe.
File Data: Enter the data of the file you want to transcribe.
Model: Select or map the model you want to use. Refer to the OpenAI audio API documentation for information on available models. Note: Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.
Prompt: Enter text to guide the model's style or continue a previous audio segment (optional). The prompt should be in the same language as the audio.
Temperature: Enter a sampling temperature between 0 and 1. Higher values, such as 0.8, will make the output more random. Lower values, such as 0.2, will make it more focused and deterministic.
Language: Enter the two-letter ISO code of the input audio's language.

Create a Moderation

Classifies whether the provided image or text(s) contain violent, hateful, illicit, or adult content.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Input Format: Select the format of the input content: Text, Array of Texts, or Image.
Input Text: Enter the text for which you want to create the moderation. The module will classify the content against OpenAI's content policy.
Model: Select or map the model you want to use. If empty, the default model will change based on the selected input format.
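As a point of reference, the transcription and translation modules correspond to the audio endpoints of the API. A minimal sketch, assuming the OpenAI Python SDK and a purely illustrative local file name:

```python
# Illustrative sketch of "Create a Transcription" and "Create a Translation (Whisper)".
from openai import OpenAI

client = OpenAI()

with open("meeting.mp3", "rb") as audio:           # File Name / File Data
    transcript = client.audio.transcriptions.create(
        model="whisper-1",                         # Model
        file=audio,
        prompt="Names mentioned: Acme, Q3 roadmap.",  # Prompt, same language as the audio
        temperature=0.2,                           # Temperature (0 to 1)
        language="en",                             # two-letter ISO language code
    )
print(transcript.text)

with open("meeting.mp3", "rb") as audio:
    translation = client.audio.translations.create(model="whisper-1", file=audio)
print(translation.text)                            # English translation
```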
Generate an Audio

Generates an audio file based on the text input and the settings specified.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Input: Enter the text to generate into audio. The text must be between 1 and 4096 characters long.
Model: Select or map the model you want to use. Refer to the OpenAI audio API documentation for information on available models. Note: Ensure you have access to the model you want to work with. Only models available for the account are listed for selection.
Voice: Select or map the voice to use in the audio. For voice samples, see the OpenAI voice options guide (https://platform.openai.com/docs/guides/text-to-speech/voice-options).
Output Filename: Enter a name for the generated audio file. Do not include the file extension.
Response Format: Select or map the file format for the generated audio file.
Speed: Enter a value for the speed of the audio. This must be a number between 0.25 and 4.

Files

Add Files to a Vector Store

Adds files to a specified vector store or, if none is specified, creates a new vector store based on the configuration.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Batch Create Mode: Choose whether you would like to create a new vector store or use an existing vector store.
Vector Store ID: Select the vector store.
Vector Store Name: Enter a name for a new vector store.
Days Expires After: Enter the number of days of inactivity for the vector store. When the specified number of days has passed, the vector store expires.
File IDs: Select the files to add to a vector store. For a list of supported file formats, see OpenAI file search supported files (https://platform.openai.com/docs/assistants/tools/file-search/supported-files).

Upload a File

Uploads a file that will be available for further usage across the OpenAI platform.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
File Name: Enter the name of the file to be uploaded, including the file extension, for example, myfile.png. Supported file types depend on the option selected in the Purpose field below. The fine-tune purpose supports only .jsonl files. See the assistants tools guide to learn more about the supported file types.
File Data: Enter the file data to be uploaded. You can retrieve file data using the HTTP Get a File module.
Purpose: Select or map the purpose. Select Assistants for assistants (https://platform.openai.com/docs/api-reference/assistants) and messages (https://platform.openai.com/docs/api-reference/messages), and Fine-tune for fine-tuning (https://platform.openai.com/docs/api-reference/fine-tuning).

Batches

If you are using restricted permissions for your API key, make sure the files.read permission is enabled in the OpenAI API platform. This is required for the batch modules to work without errors.

List Batches

Retrieves a list of batches.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Limit: Enter the maximum number of returned results (bundles).

Get a Batch

Retrieves details of the specified batch.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Batch ID: Select the batch whose details you want to retrieve.
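Taken together, Upload a File, Create a Batch (described next), and Get a Batch mirror the batch workflow of the API. A hedged sketch with the OpenAI Python SDK; the .jsonl file name and its contents are assumptions for illustration:

```python
# Illustrative batch workflow: upload a .jsonl request file, create a batch, check its status.
from openai import OpenAI

client = OpenAI()

# Each line of the .jsonl file is one API request (upload purpose must be "batch").
batch_input = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

batch = client.batches.create(
    input_file_id=batch_input.id,     # Input File ID
    endpoint="/v1/chat/completions",  # Endpoint used for all requests in the batch
    completion_window="24h",
)

# "Get a Batch" equivalent: check the batch status and output file.
status = client.batches.retrieve(batch.id)
print(status.status, status.output_file_id)
```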
Create a Batch

Creates and executes a batch of API calls.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Input File ID: Select a file to use as input to create a batch.
Endpoint: Select the endpoint to use for all requests in the batch. Note that /v1/embeddings batches are also restricted to a maximum of 50,000 embedding inputs across all requests in the batch.

Cancel a Batch

Cancels an in-progress batch. The batch will be in the status "cancelling" for up to 10 minutes before changing to "cancelled", where it will have partial results (if any) available in the output file.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Batch ID: Select a batch to cancel.

Responses

List Input Items

Lists input items.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Response ID: Enter the ID of the response that you want to list the input items for.
Include: Select the additional output data to include in the response.
Order: Select the order to return the input items in.
Limit: Enter the maximum number of returned results (bundles).

Get a Model Response

Retrieves an existing model response.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Response ID: Enter the ID of the model response that you want to retrieve.

Create a Model Response

Creates a new model response.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Model: Select the model that will generate the response.
Prompt Type: Select the prompt type for the response: Text Prompt or Input Array.
Max Output Tokens: Enter the upper limit for the number of tokens that can be generated for the response, including visible output tokens and reasoning tokens.
Temperature: Enter the sampling temperature to use. Higher values, like 0.8, will make the output more random, while lower values, like 0.2, will make it more focused and deterministic. We generally recommend altering this or Top P, but not both. Must be lower than or equal to 2.
Top P: Enter the nucleus sampling value, which is an alternative to sampling with temperature. This considers the results of the tokens with top p probability mass. The value 0.1 means only the tokens comprising the top 10% probability mass are considered.
Store: Select whether to store the generated model response for later retrieval via the API. Defaults to Yes.
Previous Response ID: Enter the ID of the previous response to the module.
Parallel Tool Calls: Select whether the model can run tool calls in parallel. Defaults to No.
Instructions: Enter a system (or developer) message as the first item in the model's context. When used along with Previous Response ID, the instructions from a previous response will not be carried over to the next response.
Reasoning Effort: Select how the model performs reasoning. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
Summary: Select the summary of the reasoning performed by the model.
Format Type: Select the format that the module must output: Text, JSON Schema, or JSON Object.
Tool Choice: Select how the model should choose tools during response generation: None, Auto, Required, File Search, Web Search Preview, Computer Use Preview, or Function.
Tools: Add one or more tools that the model may call while generating a response.
- Type: Select the type of tool to add.
Truncation: Select the truncation strategy to use for the model response: Auto or Disabled.
Include: Select additional output data to include in the model response.
Metadata: Add key-value pairs that can be attached to an object.
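These fields correspond to OpenAI's Responses API. A minimal hedged sketch using the OpenAI Python SDK (the model, instructions, and values are placeholders, not module defaults):

```python
# Illustrative sketch of "Create a Model Response" via the Responses API.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4.1",                                # Model
    instructions="Answer in one short paragraph.",  # Instructions
    input="What is nucleus sampling?",              # Prompt Type: text prompt
    max_output_tokens=512,                          # Max Output Tokens
    temperature=0.7,                                # Temperature
    top_p=1,                                        # Top P
    store=True,                                     # Store (defaults to Yes)
    truncation="auto",                              # Truncation
    metadata={"source": "docs-example"},            # Metadata key-value pairs
)
print(response.output_text)

# "Get a Model Response" / "Delete a Model Response" equivalents:
fetched = client.responses.retrieve(response.id)
client.responses.delete(response.id)
```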
Delete a Model Response

Deletes an existing model response.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
Response ID: Enter the ID of the model response that you want to delete.

Other

Make an API Call

Performs an arbitrary authorized API call. For the list of available endpoints, see the OpenAI ChatGPT API documentation.

Connection: OpenAI (ChatGPT, Whisper, DALL-E).
URL: Enter a path relative to https://api.openai.com. For example, /v1/models.
Method: Select the HTTP method you want to use:
- GET to retrieve information for an entry.
- POST to create a new entry.
- PUT to update/replace an existing entry.
- PATCH to make a partial entry update.
- DELETE to delete an entry.
Headers: Enter the desired request headers. You don't have to add authorization headers; we already did that for you.
Query String: Enter the request query string.
Body: Enter the body content for your API call.

Examples of Use - List Models

The following API call returns all models from your OpenAI account:

URL: /v1/models
Method: GET

The search matches can be found in the module's output under Bundle > Body > data. In our example, 69 models were returned.
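For comparison, the same "List Models" call made outside of the module with plain Python and the requests library (the API key variable is a placeholder; the module injects the authorization header for you):

```python
# Illustrative only: the "List Models" example as a raw HTTP request.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]   # placeholder; the module handles auth itself

resp = requests.get(
    "https://api.openai.com/v1/models",              # URL: path relative to https://api.openai.com
    headers={"Authorization": f"Bearer {API_KEY}"},  # added automatically by the module
    timeout=30,
)
resp.raise_for_status()

models = resp.json()["data"]   # corresponds to Bundle > Body > data in the module output
print(len(models), "models returned")
```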