Models

Liquid AI Launches Liquid Foundation Models: A Game-Changer in Generative AI

In a groundbreaking announcement, Liquid AI, an MIT spin-off, has introduced its first series of Liquid Foundation Models (LFMs). These models, designed from first principles, set a new benchmark in the generative AI...

An exclusive look into Google’s new AI models

Welcome, AI enthusiasts. We have an exclusive for you today. In case you missed it, last week Google released two newly upgraded Gemini 1.5 models, achieving new state-of-the-art performance across math benchmarks. We partnered with Google...

AI models let robots perform tasks in unfamiliar environments

These models were deployed on Stretch, a robot consisting of a wheeled unit, a tall pole, and a retractable arm holding an iPhone, to test how successfully they were able to execute the tasks...

The Evolution of Text-to-Video Models

Simplifying the neural nets behind Generative Video Diffusion. It's speculated that OpenAI has collected a fairly large annotated dataset of video-text data, which they're using to train conditional video generation models. Combining all of the strengths...

5 Best Large Language Models (LLMs) (September 2024)

The field of artificial intelligence is evolving at a breathtaking pace, with large language models (LLMs) leading the charge in natural language processing and understanding. As we navigate this, a new generation of...

Inside “Large World Models”

Good morning. It’s Monday, September 16th. Did you know: On this day in 1997, Steve Jobs was named interim CEO of Apple. Large World Models o1-preview Classified as...

‘AI Godmother’ Startup Officially Launched… “We Will Develop a ‘World Model’ That Goes Beyond Language Models”

World Labs, a 'spatial intelligence' startup led by Stanford University professor Fei-Fei Li, has reportedly raised 230 million dollars (about 300 billion won) in investment. The company announced that it is going...

EAGLE: Exploring the Design Space for Multimodal Large Language Models with a Mixture of Encoders

The ability to accurately interpret complex visual information is a crucial focus of multimodal large language models (MLLMs). Recent work shows that enhanced visual perception significantly reduces hallucinations and improves performance on resolution-sensitive tasks,...
