# VLM Run by Maxmel Tech
VLM Run is a developer-focused platform offering a unified API and dashboard for extracting structured, validated data from images using visual AI models. Integrating it with Make enables automated, reliable image data processing and seamless connection to other workflows and databases.

> This is AI-generated content based on official VLM Run documentation. The content may still contain errors; please verify important information. If you have questions, contact VLM Run support directly.

## How to get support on VLM Run

VLM Run is a community-developed application and is subject to the developer's terms and conditions, which may include applicable fees. Make does not maintain or support this integration. For assistance, please contact the developer: https://f.make.com/r/reachout?app_name=vlm%20run&app_slug=vlm-run-community&app_unique_id=vlm-run-l25hv4

## Requirements

To use the VLM Run app in Make, you must have an active VLM Run account and an API key. Create your account on VLM Run.

## App installation

To install this app, you need Admin, Owner, or App Developer permissions in your organization.

1. Go to the Make integration page.
2. Click the **Install** button.
3. Follow the on-screen instructions to complete the setup.

## Connect VLM Run and Make

To get started, you must first create a connection between VLM Run and Make, allowing the two services to communicate. You can connect using the following method: API keys.

### API keys instructions

You need to retrieve your VLM Run API key from the dashboard:

1. Log in to the VLM Run platform using your credentials.
2. Navigate to the dashboard.
3. Locate the section where your API keys are displayed.
4. Retrieve your VLM Run API key.

Some community developers require an additional, app-specific API key. Retrieve the required API key from the community developer's dashboard, generally found under Settings/API. For more information, please contact the developer: https://f.make.com/r/reachout?app_name=vlm%20run&app_slug=vlm-run-community&app_unique_id=vlm-run-l25hv4

### Create the connection in Make

1. Log in to your Make account, add a VLM Run module to your scenario, and click **Create a connection**. Note: if you add a module with an **instant** tag, click **Create a webhook**, then **Create a connection**.
2. Optional: In the **Connection name** field, enter a name for the connection.
3. Enter the authentication credentials you copied from VLM Run, or follow the on-screen instructions to create the connection. You must also enter any required third-party API keys obtained earlier.

For detailed information on connecting an application, see the Connect an application page.

## VLM Run modules

After connecting the VLM Run app, you can choose from a list of available modules to build your scenarios.

- **Analyze an Image**: creates a detailed, structured prediction based on the content of the provided image.
- **Get a Completion**: fetches and displays the results generated from previous parsing or transcription processes, allowing you to view the extracted or converted data (a submit-and-poll sketch of this lifecycle appears at the end of this page).
- **List Files**: fetches and displays a list of all files that have been uploaded to your account, making it easy to view and manage your stored files.
- **Make an API Call**: allows you to make a custom, authorized API request to the service, giving you the flexibility to access specific endpoints or features not covered by other modules (a minimal request sketch follows this list).
- **Parse a Document**: creates organized predictions based on the content of the provided document, helping you extract and understand key information in a structured format.
- **Parse a Video**: analyzes the provided video and generates detailed, structured predictions about its content.
- **Transcribe an Audio**: analyzes the provided audio file and generates a detailed, structured prediction based on its content.
- **Upload a File**: uploads a selected file directly to your VLM Run account, making it available for use within your VLM Run workspace.
- **Watch Completed Predictions**: triggers when a prediction process finishes and the results are available.
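To give a concrete sense of what the Make an API Call module does on your behalf, here is a minimal Python sketch of a custom, authorized request. The base URL, the `/v1/files` endpoint path, and the response shape are illustrative assumptions based on common REST conventions, not confirmed VLM Run API details; consult the VLM Run API documentation for the actual contract.

```python
import requests

API_KEY = "YOUR_VLM_RUN_API_KEY"   # retrieved from the VLM Run dashboard
BASE_URL = "https://api.vlm.run"   # assumed base URL; check the VLM Run API docs

# Every authorized call carries the API key as a Bearer token; this is
# the credential the Make connection stores for you.
headers = {"Authorization": f"Bearer {API_KEY}"}

# Assumed endpoint: list the files uploaded to your account
# (roughly what the List Files module surfaces in Make).
response = requests.get(f"{BASE_URL}/v1/files", headers=headers, timeout=30)
response.raise_for_status()

# The response shape is an assumption; inspect it in your own runs.
print(response.json())
```

Inside Make, the Make an API Call module exposes the same idea declaratively: you supply the relative path, method, and body, and the module injects the stored Authorization header for you.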
## Templates

You can look for more templates in Make's template gallery, where you'll find thousands of pre-created scenarios.

## VLM Run resources

You can access more resources related to this app via the following links:

- VLM Run website
- VLM Run documentation
- VLM Run API documentation
- VLM Run page on Make
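For readers who want to understand the prediction lifecycle behind the Analyze an Image, Get a Completion, and Watch Completed Predictions modules, here is a hedged submit-then-poll sketch in Python. The endpoint paths, the multipart upload format, and the field names (`id`, `status`, `response`) are hypothetical placeholders chosen for illustration; the real VLM Run API may differ, so treat this as a sketch of the pattern rather than a definitive implementation.

```python
import time

import requests

API_KEY = "YOUR_VLM_RUN_API_KEY"
BASE_URL = "https://api.vlm.run"   # assumed; see the VLM Run API docs
headers = {"Authorization": f"Bearer {API_KEY}"}

# Step 1: submit an image for analysis (hypothetical endpoint and payload;
# roughly what the Analyze an Image module does when a scenario runs).
with open("invoice.png", "rb") as image:
    submit = requests.post(
        f"{BASE_URL}/v1/image/generate",   # assumed path
        headers=headers,
        files={"file": image},
        timeout=60,
    )
submit.raise_for_status()
prediction_id = submit.json()["id"]        # assumed field name

# Step 2: poll until the prediction completes. This is what Get a Completion
# fetches; the Watch Completed Predictions trigger replaces this loop with a
# webhook so Make is notified instead of polling.
while True:
    status = requests.get(
        f"{BASE_URL}/v1/predictions/{prediction_id}",   # assumed path
        headers=headers,
        timeout=30,
    )
    status.raise_for_status()
    body = status.json()
    if body.get("status") == "completed":   # assumed status value
        print(body.get("response"))         # the structured prediction payload
        break
    time.sleep(5)
```

The same pattern applies to Parse a Document, Parse a Video, and Transcribe an Audio: submit the media, receive a prediction ID, and retrieve the structured result once processing finishes.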