VLM Run
VLM Run is a platform for extracting structured JSON data from images, videos, and documents using vision language models, enabling automation and analysis of visual content. Integrating with Make.com allows seamless, automated visual data processing in user workflows.

This is AI-generated content based on official VLM Run documentation. The content may still contain errors, so please verify important information. If you have questions, contact VLM Run support directly.

How to get support on VLM Run

VLM Run is a community-developed application. Make does not maintain or support this integration. For assistance, please reach out to the VLM Run community: https://f.make.com/r/reachout?app_name=vlm%20run&app_slug=vlm-run

Requirements

You need an active account and an API key to use the API. Create your account on the VLM Run app.

Installation

To install this app, you need Admin, Owner, or App Developer permissions. Start by installing the app from the Make integration page: click the Install button and follow the on-screen instructions to complete the setup.

Connect VLM Run and {{product name}}

To get started, you must first create a connection between VLM Run and {{product name}}, allowing the two services to communicate. See Connect an application.

You can connect using the following method: API keys.

API keys instructions

You need to obtain your VLM Run API key and use it as a bearer token in the Authorization header for all API requests.

1. Log in to your VLM Run account using your credentials.
2. Navigate to the API or Developer section in your account dashboard.
3. Locate or generate your VLM Run API key.
4. Copy the generated API key and store it securely.
5. When making API requests, include the API key in the Authorization header as follows:

   Authorization: Bearer <VLM Run API key>

VLM Run modules

After connecting to the VLM Run app, you can choose from a list of available modules to build your {{scenario plural lowercase}}.

Actions

- Analyze an Image: Creates a detailed, organized prediction based on the content of the provided image, helping you understand what the image contains.
- Get a Completion: Fetches and provides the detailed results generated from a previous parsing or transcription process, allowing you to access the extracted or transcribed data.
- Parse a Document: Creates organized predictions based on the content of the provided document, helping you extract meaningful insights or information from your files.
- Parse a Video: Creates detailed, organized predictions based on the content of the provided video.
- Transcribe an Audio: Creates a detailed analysis or prediction based on the audio file you provide, helping you interpret or classify its content.
- Upload a File: Uploads a selected file directly to your VLM account, making it available for use within your VLM workspace.

Searches

- List Files: Fetches and displays a list of files that have been previously uploaded, allowing you to view all available uploaded files in your account.

Triggers

- Watch Completed Predictions: Triggers when a prediction process finishes and the results are available.

Universals

- Make an API Call: Allows you to make a custom API request to the connected service using your own parameters and settings, giving you the flexibility to access features or endpoints not covered by the standard modules.

Templates

You can look for more templates in Make's template gallery, where you'll find thousands of pre-created {{scenario plural lowercase}}.

VLM Run resources

You can access more resources related to this app at the following links:

- VLM Run website
- VLM Run documentation
- VLM Run API documentation
- VLM Run page on Make
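The bearer-token scheme from the API keys instructions can be sketched in Python. This is a minimal illustration, not official VLM Run client code: the base URL and the /files path are assumptions for demonstration, so check the VLM Run API documentation for the exact endpoints before sending real requests.

```python
import urllib.request

# Assumed base URL for illustration; verify against the VLM Run API docs.
API_BASE = "https://api.vlm.run/v1"

def authed_request(path: str, api_key: str) -> urllib.request.Request:
    """Build a request that carries the VLM Run API key as a bearer token."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

# The request is only constructed here, not sent.
req = authed_request("/files", "<VLM Run API key>")
print(req.get_header("Authorization"))  # prints: Bearer <VLM Run API key>
```

The same header shape applies to any endpoint, including custom requests you build with the Make an API Call module.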