Video understanding artificial intelligence (AI) company Twelve Labs (CEO Jae-sung Lee) announced on the 7th that it would provide its large multimodal models (LMMs) ‘Marengo’ and ‘Pegasus’ on Amazon Web Services (AWS) ‘Amazon Bedrock’.
Amazon Bedrock is a service that allows developers to access AI models through a single API. It provides foundation models from global AI companies such as Amazon, Meta, Anthropic, Mistral AI, DeepSeek, and Stability AI.
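To illustrate the single-API idea, the sketch below builds a request for one Bedrock model invocation. In practice the call would go through the AWS SDK (for example, boto3's `bedrock-runtime` client and its `invoke_model` method); the model ID and the request body's field names here are assumptions for illustration, not confirmed Twelve Labs schemas.

```python
import json

# Hypothetical model ID -- the actual Twelve Labs model IDs on Bedrock
# may differ; consult the Bedrock model catalog.
MODEL_ID = "twelvelabs.pegasus-v1"  # assumption, not a confirmed ID


def build_invoke_request(prompt: str, video_s3_uri: str) -> dict:
    """Build the arguments for a single Bedrock InvokeModel call.

    The payload fields ("prompt", "video", "s3Uri") are illustrative
    assumptions about how a video-understanding model might take input.
    """
    return {
        "modelId": MODEL_ID,
        "body": json.dumps({
            "prompt": prompt,                   # natural-language query about the video
            "video": {"s3Uri": video_s3_uri},   # assumed input field name
        }),
    }


request = build_invoke_request(
    "Summarize the key scenes in this clip.",
    "s3://my-bucket/demo.mp4",
)
# With credentials configured, this dict could be passed to
# boto3.client("bedrock-runtime").invoke_model(**request).
print(request["modelId"])
```

The point of the sketch is that switching providers on Bedrock changes only the `modelId` and body schema, not the calling code.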
Twelve Labs’ video understanding models efficiently process vast amounts of video data to provide video content search, analysis, and insight generation. Specifically, the models can search and understand elements such as objects, actions, and background sounds within a video, maximizing the value of video data that was previously difficult to use.
Amazon Bedrock provides complete control over data, enterprise-grade security, and cost control. Based on this, the company said customers can “search videos, classify scenes, summarize content, and extract insights in natural language; build sophisticated video understanding without specialized AI expertise; scale consistent performance from small video collections to large-scale libraries; and apply enterprise-grade security and governance systems.”
“Video accounts for about 80% of the world’s data, but most of it is inaccessible and not fully utilized,” said Jae-sung Lee, CEO of Twelve Labs.
By Jang Se -min, reporter semim99@aitimes.com