Good morning. It’s Friday, September thirteenth.
Did you know? On this day in 2003, Steam officially launched.
You read. We listen. Tell us what you think by replying to this email.
In Partnership with GROWTH SCHOOL
Still struggling to achieve work-life balance and manage your time efficiently?
Join this 3-hour Intensive Workshop on AI & ChatGPT tools (normally $399), FREE for the first 100 readers.
An AI-powered professional will earn 10x more. 💰 An AI-powered founder will build & scale their company 10x faster 🚀 An AI-first company will grow 50x more! 📊

Today’s trending AI news stories
OpenAI’s new ‘o1’ model thinks longer to give smarter answers
OpenAI’s latest release, o1, redefines AI reasoning by extending its thought process before answering. Unlike its predecessors, which prioritized pre-training, o1 invests in prolonged inference, sharpening its logical prowess. Though it doesn’t always outperform GPT-4o across the board, o1 shines in tasks requiring deep reasoning.
Accompanying this launch are o1-preview and o1-mini. The former is a compact version aimed at reasoning use cases, while o1-mini, a cheaper variant, delivers nearly the same performance as o1 on STEM challenges. Both models are now available to ChatGPT Plus and Team users, with a broader rollout expected. Usage is currently limited to 30 messages per week on o1-preview and 50 per week on o1-mini.
Looking ahead, OpenAI anticipates that o1 models will be capable of extended reasoning times, ranging from seconds to potentially weeks, which could lead to advances in fields like drug discovery and theoretical mathematics. Read more.
We’re releasing a preview of OpenAI o1—a new series of AI models designed to spend more time thinking before they respond.
These models can reason through complex tasks and solve harder problems than previous models in science, coding, and math.
— OpenAI (@OpenAI)
5:09 PM • Sep 12, 2024
In our exclusive one-word interview with OpenAI CEO Sam Altman, he denied the new model’s AGI status.
OpenAI reportedly seeking $6.5B investment at $150B valuation
OpenAI is reportedly courting $6.5 billion in new funding, setting its valuation at a staggering $150 billion. Thrive Capital is expected to lead the round with a $1 billion investment, while Microsoft, which has already invested $13 billion, may also participate. This funding would support the launch of OpenAI’s upcoming model, o1, known for its advanced reasoning capabilities but requiring more hardware resources, likely increasing operational costs. The capital will also be earmarked for AI infrastructure, as indicated in an internal memo about acquiring more compute resources.
Alongside the $6.5 billion funding round, OpenAI is seeking a $5 billion revolving credit facility, a move often seen before a company goes public. While the company’s complex corporate structure, which combines a nonprofit and a for-profit arm, might complicate an IPO, sources suggest OpenAI may consider restructuring to allow for greater investor returns, potentially easing a path to the stock market. Read more.
Suno releases new “Covers” feature to reimagine the music you love
Reimagine the music you love with Covers! Covers can transform anything, from a simple voice recording to a fully-produced track, into an entirely new style while preserving the melody that’s uniquely yours. Our newest feature, now available in early-access beta,… x.com/i/web/status/1…
— Suno (@suno_ai_)
8:44 PM • Sep 12, 2024
Suno’s new feature, Covers, now in early-access beta, allows users to reimagine their music by transforming it into different styles while maintaining the original melody. This tool supports various audio inputs, such as voice recordings and instrumentals, enabling users to experiment with new genres and add lyrics to instrumental tracks.
To create a cover, users can select a song from the Library or Create page, select “Cover Song,” and pick a new music style. The feature will automatically adapt the original lyrics to fit the chosen style, though users can modify the lyrics as desired. This feature is available to Pro/Premier subscribers with an initial allocation of 100 free covers. Suno invites feedback during this beta phase to improve the tool’s performance. Read more.
Adobe announces Firefly Video Model AI video tool
Adobe is launching Firefly Video Model, an AI-powered video editing tool, with a limited beta version due later this year. This tool, part of Adobe’s Firefly suite, marks the company’s first step into AI-driven video editing. It allows users to generate five-second video clips from text or image prompts, with capabilities for custom camera angles, pans, and zoom effects. Adobe claims the tool offers superior prompt accuracy and performance compared to competitors like Runway and Pika Labs.
The Firefly Video Model will be trained exclusively on public and licensed content, avoiding Adobe customer data. Alongside this, Adobe will introduce Generative Extend in Premiere Pro, a feature that extends clips by generating two-second inserts. Enthusiasts can join the waiting list for beta access. Read more.
Etcetera: Stories you may have missed

3 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.



Your feedback is invaluable. Reply to this email and let us know how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email.