OpenAI’s $1B Disney blindside


Good morning, { AI enthusiasts }. OpenAI’s Sora shutdown caught the AI video world off guard last week. Disney, it seems, was blindsided even harder, learning the product was dead less than an hour before everyone else did.

A report with details including a $1M-a-day burn rate, a Sora enterprise pilot in progress, compute crunches, and more just shed new light on the AI leader’s sudden shift away from its once-viral platform.

In today’s AI rundown:

  • Inside Sora’s $1M-a-day collapse at OpenAI

  • Microsoft pits Claude against ChatGPT for research

  • Build a travel itinerary with Perplexity Computer

  • Stanford exposes AI’s people-pleasing problem

  • 4 new AI tools, community workflows, and more

LATEST DEVELOPMENTS

OPENAI

Image source: Reve / The Rundown

The Rundown: A WSJ investigation just revealed the behind-the-scenes chaos of OpenAI’s Sora video generator shutdown, including a $1M daily burn rate, a blindsided Disney, and the internal code-named model that claimed Sora’s compute budget.

The details:

  • Sora was reportedly burning “roughly $1 million a day” and using significant compute, with Sora 3 training set to start just as the product was axed.

  • The WSJ said Disney learned about the shutdown “less than an hour” before the announcement, with the relationship now “effectively dormant.”

  • The freed-up chips went to “Spud,” a model targeting coding and enterprise use in response to Anthropic’s strong moves in the sector.

  • An enterprise version of Sora was already in pilot with Disney for marketing and VFX work, with a spring launch expected before OpenAI pulled the plug.

Why it matters: We covered the shutdown when it broke, but the WSJ’s details put things into context: the generator was bleeding money and compute. The strangest part of the story is the Disney blindside, which is certainly an odd way to handle a potential $1B partnership with one of the biggest media companies on the planet.

TOGETHER WITH YOU.COM

The Rundown: It happens: LLMs hallucinate. Grounding your LLM, however, can dramatically improve accuracy. In this guide, You.com explains what AI grounding is and how organizations can implement it to achieve more reliable outputs.

The playbook covers:

  • A 3-part approach that outperforms RAG alone

  • Why grounding isn’t set-and-forget, and how to build audit trails

  • The open vs. closed platform trade-off (and what it means for your next model switch)
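The core grounding idea from the playbook can be sketched in a few lines: retrieve evidence first, then constrain the model to answer only from that evidence, refusing when nothing relevant turns up. This is an illustrative toy, not You.com’s actual API; the keyword-match `retrieve` stands in for a real search backend.

```python
import re

# Toy grounding sketch: pin the model's answer to retrieved snippets,
# and surface "I don't know" when retrieval comes back empty.

def retrieve(query, documents):
    """Return documents sharing at least one word with the query (toy stand-in for real search)."""
    terms = set(re.findall(r"\w+", query.lower()))
    return [d for d in documents if terms & set(re.findall(r"\w+", d.lower()))]

def grounded_prompt(query, documents):
    """Build an LLM prompt that restricts the answer to retrieved, citable sources."""
    snippets = retrieve(query, documents)
    if not snippets:
        return None  # nothing to ground on: refuse instead of hallucinating
    sources = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using ONLY the sources below, citing them as [n].\n"
        "If the sources are insufficient, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

docs = [
    "Rime offers cloud and on-prem TTS deployment.",
    "Grounding ties model output to retrieved evidence.",
]
prompt = grounded_prompt("What is grounding?", docs)
```

A production pipeline swaps the keyword match for a real retrieval API and logs each source list, which is where the audit trails mentioned above come in.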

MICROSOFT

Image source: Microsoft

The Rundown: Microsoft released Critique and Council, two new features that turn its Copilot Researcher into a multi-model system that can review and edit research reports and run two models side by side to see where they agree and disagree.

The details:

  • Copilot’s Researcher already uses OpenAI models for multi-step work, with Critique now adding Claude as a second model to review every report before it ships.

  • One model drafts the research, and the second tears it apart on source quality, completeness, and evidence grounding behind the scenes.

  • A separate Model Council mode runs both models side by side, then flags where they agree, where they split, and what each uniquely surfaced.

  • The updates come alongside a broader rollout of Copilot Cowork into Frontier, Microsoft’s Claude-based agentic tool for handling multi-step tasks.
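The Critique pattern described above reduces to a simple loop: one model drafts, a second critiques, and the draft only ships once the critic signs off. A minimal sketch, with stand-in functions rather than Microsoft’s actual implementation:

```python
# Drafter/critic loop: draft -> critique -> revise until approved.
# `drafter` and `critic` are pluggable callables (in practice, calls
# to two different LLMs); the toys below are illustrative stand-ins.

def critique_loop(drafter, critic, task, max_rounds=3):
    """Ship a draft only after the critic approves (or rounds run out)."""
    draft = drafter(task, feedback=None)
    for _ in range(max_rounds):
        verdict = critic(task, draft)  # e.g. checks sources, completeness
        if verdict == "approve":
            return draft
        draft = drafter(task, feedback=verdict)  # revise using the critique
    return draft  # best effort after max_rounds

# Toy stand-ins: the critic demands a citation, the drafter adds one
# when given that feedback.
def toy_drafter(task, feedback):
    text = f"Report on {task}."
    return text + " [source: WSJ]" if feedback else text

def toy_critic(task, draft):
    return "approve" if "[source:" in draft else "add a citation"

report = critique_loop(toy_drafter, toy_critic, "Sora shutdown")
```

The key design choice, and presumably why Microsoft uses two different vendors, is that the critic should not share the drafter’s blind spots.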

Why it matters: With orchestration systems like Perplexity Computer out in the wild, the future of LLM use feels multi-model, and for good reason. OpenAI co-founder Andrej Karpathy’s post proved the point when an LLM helped perfect an argument, then shredded it on command: one model will sell you on anything, so you’d better ask two.

AI TRAINING

The Rundown: In this guide, you’ll learn how to use Perplexity Computer to plan a full trip itinerary with flights, a day-by-day schedule, and sources in a single run. It’s the fastest way to turn travel tab chaos into a usable plan you can actually book from.

Step-by-step:

  1. Open Perplexity and look for the Computer toggle. If you have a Pro account, you should be able to try it for free

  2. Prompt: “Plan a trip itinerary for [DESTINATION] for [DATES / LENGTH]. Departing from: [AIRPORT] Budget: [range] Style: [relaxed/outdoors/etc.] Must-haves: [2-4 must-haves]. Make a full PDF as if you were a travel agent, with suggestions on where to stay and transportation between cities”

  3. Let Perplexity Computer run for 15-20 minutes. When it’s done, you’ll have a PDF laying out your trip

  4. While you wait, you can try your prompt in regular Perplexity search to see the difference

Pro tip: Perplexity Computer can deploy sub-agents to code. Ask it to create an interactive calendar website that you can use to plan and tweak your trip.

PRESENTED BY RIME

The Rundown: Rime is the enterprise TTS platform built for businesses where voice quality is non-negotiable, with AI voices that callers are 61% less likely to hang up on, per independent testing against Google and ElevenLabs.

With Rime, you get:

  • Cloud or on-prem deployment

  • Human-quality voices

  • Low latency in production

  • Free to start, plus $100 in credits included

Sign up for free to see how Rime transforms AI voice agent interactions.

AI RESEARCH

Image source: Stanford University

The Rundown: Stanford researchers published a new study showing that major AI chatbots consistently take users’ side in personal conflicts, even backing harmful or illegal behavior, while also making users measurably more self-righteous in the process.

The details:

  • The researchers tested 11 LLMs on 2K Reddit posts where crowds agreed the poster was in the wrong, but the chatbots still sided with the user over half the time.

  • Over 2,400 participants then chatted with each agreeable and neutral AIs and preferred the sycophantic version, rating it as more trustworthy.

  • After chatting with the agreeable model, users also doubled down on their position, lost interest in apologizing, and couldn’t tell the AI was biased.

Why it matters: When you think of people-pleasing AI, OpenAI’s 4o model might come to mind. But it turns out that most other frontier models aren’t much different, and are potentially even more worrisome, with agreeableness that’s more convincing and less obvious than the drama seen with 4o.

QUICK HITS

  • 🗣️ Unwrap Customer Intelligence – Turn unstructured customer feedback into data-backed insights that inform your product roadmap*

  • 🧠 Qwen3.5-Omni – Alibaba’s AI with text, image, audio, video understanding

  • 🔍 Critique – Microsoft’s deep research tool that pits AIs against one another

  • 🤖 Hermes Agent – AI agent with memory and cross-platform messaging

Anthropic launched computer use in Claude Code, letting the AI open apps, click through UIs, and visually confirm its own builds from the terminal.

Mistral raised $830M in debt to power its own 13,800-GPU Nvidia AI infrastructure in France, part of a broader push to cut reliance on U.S. cloud providers.

Alibaba released Qwen3.5-Omni, a new multimodal AI that processes text, images, audio, and video, with an “Audio-Visual vibe coding” mode that builds apps from audio.

Starcloud raised $170M at a $1.1B valuation to build GPU-powered data centers in orbit, betting on SpaceX’s Starship to make space compute cost-competitive.

Apple mistakenly rolled out Apple Intelligence in China before quickly removing the update, with the features not yet approved for use in the region.

COMMUNITY

Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.

Today’s workflow comes from reader Paul M. in Woodland Park, NJ:

“I’m combining 3 tools to help write a dissertation. I’m isolating articles for each topic into separate notebooks in NotebookLM to help with disciplined synthesis of ideas. I’m using Gemini to coach me through initial writing drafts. I’m using Claude to edit and refine my writing.

Each tool brings a different logic, and it’s like having a team to help brainstorm ideas and break through writer’s block. Sharing the output of one tool with the others has helped make each prompt better than the last.”

How do you use AI? Let us know here.

That’s it for today!

Before you go, we’d love to know what you thought of today’s newsletter to help us improve The Rundown experience for you.
  • ⭐️⭐️⭐️⭐️⭐️ Nailed it
  • ⭐️⭐️⭐️ Average
  • ⭐️ Fail

Login or Subscribe to participate

See you soon,
