Good morning. It’s Monday, December ninth.
Did you know: You can watch a livestream of Day 3 of OpenAI’s 12 Days of Christmas today at 10:00am PST?
-
Sora Drop Coming Soon?
-
Aurora Image Generator Pulled
-
Meta Llama 3.3
-
OpenAI’s 12 Days of Christmas Continues
-
5 Latest AI Tools
-
Latest AI Research Papers
You read. We listen. Tell us what you think by replying to this email.
In partnership with MASTERWORKS
Over the past seven elections, this asset class has outpaced the S&P 500
Instead of trying to predict which party will win, and where to position your money afterwards, why not invest in an ‘election-proof’ alternative asset? The sector is currently in a softer cycle, but over the past seven elections (1995-2023) blue-chip contemporary art has outpaced the S&P 500 by 64%, regardless of the victors, and even despite the recent dip we have conviction it will rebound to those levels long-term.
Now, thanks to Masterworks’ art investing platform, you can easily diversify into this asset class, without needing tens of millions or art expertise, alongside 65,000+ other art investors. From their 23 exits to date, Masterworks investors have realized representative annualized net returns like +17.6%, +17.8%, and +21.5% (among assets held longer than one year), even despite a recent dip in the art market.*
Past performance not indicative of future returns. Investing Involves Risk. See Important Disclosures at masterworks.com/cd.
Thanks for supporting our sponsors!
Today’s trending AI news stories
OpenAI Signals Imminent Release of Sora Video Generator
OpenAI is on the cusp of unveiling an enhanced version of its Sora video generator, integrating sophisticated capabilities like text-to-video, text-and-image-to-video, and text-and-video-to-video generation, and accommodating clips up to one minute in duration.
Sora v2 release is impending:
* 1-minute video outputs
* text-to-video
* text+image-to-video
* text+video-to-video

OpenAI’s Chad Nelson showed this at the C21Media Keynote in London. And he said we will see it very, very soon, as @sama has foreshadowed.
— Ruud van der Linden (@RuudNL)
3:57 PM • Dec 7, 2024
OpenAI’s Chad Nelson revealed these details during the C21Media event in London, corroborated by recent API leaks indicating a faster, more efficient design. The rollout, potentially aligned with OpenAI’s winter event, could also introduce GPT-4.5 and advanced image-generation updates for GPT-4o. If realized, this suite of enhancements would further consolidate OpenAI’s standing in generative AI. Read more.
𝕏 adds, then quickly removes, Grok’s new ‘Aurora’ image generator
Elon Musk’s 𝕏 briefly debuted Aurora, a photorealistic image generator integrated into the Grok assistant, only to remove it shortly after. Listed as “Grok 2 + Aurora (beta),” it was replaced by “Grok 2 + Flux (beta)” within days. Musk acknowledged the tool’s beta status, promising swift upgrades. While Aurora’s ability to conjure images of public figures and copyrighted characters drew attention, flaws like anatomical distortions and provocative outputs sparked scrutiny over content moderation and technical maturity.
Aurora’s rollout aligns with Grok’s pivot to a freemium model, offering 10 prompts and three image analyses every two hours, expanding access beyond its initial paywall. With developer xAI having recently secured $6 billion in funding, these shifts suggest a calculated push to broaden AI adoption while grappling with the thorny challenges of compliance and refinement. Read more.
Meta Launches Open Source Llama 3.3, Shrinking Powerful Model Into Smaller Size
Meta’s Llama 3.3 is a leaner, meaner version of its predecessor, packing the punch of a 405-billion-parameter model into a far more efficient 70 billion parameters. This upgrade slashes GPU memory needs, requiring up to 24 times less load for similar performance and making it a cost-effective choice for developers.
At just $0.01 per million tokens for generation, it’s a competitive alternative to GPT-4 and Claude. Llama 3.3 shines in multilingual reasoning with 91.1% accuracy on MGSM, outdoing Amazon’s Nova Pro in key NLP benchmarks. Trained on 15 trillion tokens and fine-tuned on 25 million synthetic examples, it features a 128k-token context window and Grouped Query Attention for enhanced scalability.
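For developers who want to kick the tires, here is a minimal sketch of loading the instruct variant through Hugging Face transformers. The repo name, 4-bit quantization settings, and prompt are illustrative assumptions rather than official guidance; gated-repo access and sufficient GPU memory are still required.

```python
# Minimal sketch: running Llama 3.3 70B Instruct via Hugging Face transformers.
# Assumes access to the gated "meta-llama/Llama-3.3-70B-Instruct" repo and the
# transformers, accelerate, and bitsandbytes packages installed. 4-bit
# quantization is shown purely to illustrate the reduced memory footprint.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.3-70B-Instruct"  # assumed repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # cut GPU memory
    device_map="auto",  # spread layers across available GPUs
)

messages = [{"role": "user", "content": "Explain Grouped Query Attention in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```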
Meta also made strides in sustainability, ensuring a net-zero emission footprint despite the hefty computational demands. Llama 3.3 is a strong, budget-friendly tool with an environmental conscience. Read more.
OpenAI’s ‘Ship-Mas’ Continues with Reinforcement Fine-Tuning for Expert Models
OpenAI’s latest Reinforcement Fine-Tuning (RFT) method gives its o1 models the ability to tackle complex tasks with minimal training data. Unlike traditional fine-tuning, RFT allows models to explore potential solutions before evaluation, reinforcing effective reasoning and penalizing mistakes.
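OpenAI hasn’t published the training internals, but the idea can be pictured with a toy sketch like the one below. The string-overlap grader, `model.sample`, and `model.update` are hypothetical placeholders standing in for the real trainer, not OpenAI’s API.

```python
# Toy sketch of the reinforcement fine-tuning loop described above (not
# OpenAI's API): sample several candidate answers per prompt, grade each one
# against a reference, and nudge the model toward above-average answers.

def grade(answer: str, reference: str) -> float:
    """Illustrative grader: full credit for an exact match, partial for overlap."""
    if answer.strip() == reference.strip():
        return 1.0
    overlap = set(answer.split()) & set(reference.split())
    return len(overlap) / max(len(reference.split()), 1)

def rft_step(model, prompt: str, reference: str, n_samples: int = 8) -> None:
    """One reinforcement step over a single graded example."""
    candidates = [model.sample(prompt) for _ in range(n_samples)]  # explore solutions
    scores = [grade(c, reference) for c in candidates]             # evaluate afterwards
    baseline = sum(scores) / len(scores)                           # variance reduction
    for candidate, score in zip(candidates, scores):
        advantage = score - baseline
        # Positive advantage reinforces effective reasoning; negative penalizes mistakes.
        model.update(prompt, candidate, advantage)
```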
In collaboration with Thomson Reuters, RFT demonstrated its prowess, with the o1 Mini model outperforming standard versions in legal tasks. RFT also proved valuable in bioinformatics, hitting 45% accuracy in gene identification for rare genetic diseases.
OpenAI is now inviting select organizations to participate in an early access research program, with wider availability slated for 2025, cementing RFT’s potential in expert systems and niche AI applications. Read more.
3 new AI-powered tools from around the web
arXiv is a free online library where researchers share pre-publication papers.
Your feedback is valuable. Reply to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on X!