OpenAI’s e-commerce takeover


Good morning, AI enthusiasts. Your next online purchase won’t occur on Amazon or a brand website, but mid-conversation with ChatGPT.

With OpenAI rolling out its new Instant Checkout with support for tens of millions of merchants, AI is about to become the new one-stop storefront for the web.

In today’s AI rundown:

  • OpenAI brings direct purchasing to ChatGPT

  • Anthropic launches Claude Sonnet 4.5

  • Create talking head videos using your voice

  • OpenAI’s TikTok-style app for Sora 2

  • 4 new AI tools, community workflows, and more

LATEST DEVELOPMENTS

OPENAI

🤑 OpenAI brings direct purchasing to ChatGPT

Image source: OpenAI

The Rundown: OpenAI just rolled out direct purchasing inside ChatGPT for U.S. users, letting shoppers complete transactions without leaving the conversation through a new feature called Instant Checkout.

The details:

  • The company partnered with Stripe to build the system, initially supporting Etsy sellers, with availability for over 1M Shopify merchants coming soon.

  • Users can click a “Buy” button after ChatGPT suggests products, then review order details and pay in chat.

  • OAI open-sourced the underlying Agentic Commerce Protocol, enabling any retailer to integrate it — with Stripe merchants needing minimal code changes.

  • The company collects fees from merchants on completed sales, but product rankings remain organic, still determined by relevance.
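OpenAI hasn't published the full schema here, but an ACP-style checkout call presumably bundles the product, quantity, and a payment token (e.g., from Stripe) into one structured request. A purely hypothetical sketch — the field names below are ours, not the actual Agentic Commerce Protocol spec:

```python
def build_checkout(product_id: str, quantity: int,
                   unit_price_cents: int, payment_token: str) -> dict:
    """Assemble a hypothetical agentic-checkout payload (illustrative only)."""
    return {
        "product_id": product_id,
        "quantity": quantity,
        "total_cents": quantity * unit_price_cents,
        # In practice this would be a tokenized payment credential,
        # e.g. a Stripe shared payment token, never raw card data.
        "payment_token": payment_token,
    }

order = build_checkout("etsy-12345", quantity=2,
                       unit_price_cents=1999, payment_token="tok_demo")
print(order["total_cents"])  # 3998
```

The point of a shared protocol like this is that the agent never scrapes a checkout page; the merchant exposes one well-typed endpoint and any AI surface can call it.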

Why it matters: We’ll be curious to see if OAI eventually incorporates ads into the flow, but Instant Checkout and the ACP feel like an inflection point for the shift to the era of agentic AI commerce. The structure is also an interesting new revenue stream for the AI giant, and could seriously add up as shopping shifts to ChatGPT.

TOGETHER WITH TURING

The Rundown: While data factories churn out quantity, leading AI labs need partners who co-own research goals and engineer the complex human-AI loops that push models from promising to state-of-the-art. Turing focuses on closing capability gaps through custom research acceleration.

Turing’s research-focused approach includes:

  • Co-owned experimental outcomes, not just data delivery, and vendor neutrality

  • Quality-by-design workflows with transparent data lineage and auditable results

  • Custom RL environments and SFT/RLHF/DPO pipelines designed around your benchmarks

Partner with the research accelerator that understands what frontier AI labs really need.

ANTHROPIC

🚀 Anthropic launches Claude Sonnet 4.5

Image source: Anthropic

The Rundown: Anthropic just released Claude Sonnet 4.5, calling it the “best coding model in the world” and showcasing top-tier performance on development benchmarks while maintaining the same API pricing as its predecessor.

The details:

  • Sonnet 4.5 achieves SOTA results on real-world software development (SWE-bench Verified), and a nearly 20% upgrade over Opus 4.1 on computer use.

  • Testing showed Sonnet 4.5 coding autonomously for 30+ hours to deliver 11,000 lines of code, a huge jump from GPT-5-Codex’s 7+ hour sessions.

  • Anthropic rolled out new updates, including Claude Code checkpoints, memory and context editing in the API, and a Claude Agent SDK for agent building.

  • The company also released “Imagine with Claude” as a 5-day research preview for Max users, showcasing real-time software generation.
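Since pricing is unchanged, trying the new model is mostly a matter of swapping the model id. A minimal sketch using Anthropic's Python SDK (`pip install anthropic`, `ANTHROPIC_API_KEY` set); we assume `claude-sonnet-4-5` as the model alias, so check Anthropic's model list before relying on it:

```python
# Build the Messages API request for Claude Sonnet 4.5.
# Model id is an assumption based on Anthropic's naming scheme.
request = {
    "model": "claude-sonnet-4-5",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Refactor this function to be iterative."}
    ],
}

# The actual call (needs a valid API key):
# from anthropic import Anthropic
# reply = Anthropic().messages.create(**request)
# print(reply.content[0].text)
```

Existing Sonnet 4 integrations should work as-is after the id swap, which is presumably the point of keeping the pricing identical.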

Why it matters: OAI’s Codex stole some of Claude Code’s thunder this summer, but the release of a new top coding model and platform upgrades could give Anthropic a renewed edge. A 30+ hour agentic session is also a wild achievement, and points to a future of long-horizon tasks that unlock unfathomable new capabilities.

AI TRAINING

🗣️ Create talking head videos using your voice

The Rundown: In this tutorial, you’ll learn how to create professional talking avatar presentations by generating a headshot with Google Gemini and animating it with your voice using Wan Video — no filming required.

Step-by-step:

  1. Go to Gemini and upload a photo with the prompt: “Give me a professional headshot of this person as a talking head, facing the camera, wearing [outfit], with [background]. Close-up shot, professional lighting, high resolution”

  2. Visit create.wan.video, create a new project, and change the media type from Video → Avatar

  3. Upload your Gemini headshot and add audio by recording 10-15 seconds of your script, or type up to 300 words to use Wan’s built-in voices

  4. Hit Generate to sync lip movements with your audio, then click “Send to Timeline” and add segments using the “+” button to build your complete video

Pro tip: Write your script before you start and split it into short sections with natural pauses. This makes each clip flow smoothly and sound like a real presenter.

PRESENTED BY INVISIBLE

The Rundown: The AI industry is openly split on whether evaluations even matter — with some shipping “on vibes” and others insisting evals are the only way to measure progress. Invisible’s new brief cuts through the noise, showing why “benchmaxxing” distorts reality and what to measure instead.

Inside, you’ll learn:

  • Evaluations explained: the missing step between AI pilots and ROI

  • A practical framework for custom evaluations aligned to your use cases

  • How to build inputs, catch faulty training data, and run behavioral and safety checks

  • A client case that reduced harmful behaviors by 97% with 4k rows, not 100k

OPENAI

🤳 OpenAI’s TikTok-style app for Sora 2

Image source: Sora

The Rundown: OpenAI is reportedly developing a standalone social platform powered by its upcoming Sora 2 video model, designed to mimic TikTok’s vertical scrolling feed but exclusively featuring AI-generated content instead of user uploads.

The details:

  • The platform will limit clips to 10 seconds and include identity verification, allowing users to authorize their likeness for video generations.

  • The WSJ said OAI will allow copyrighted material in videos unless rights holders actively request exclusion, though public figures would require consent.

  • The app includes remix functionality and algorithmic recommendations similar to For You pages, with notifications sent when a user’s likeness is used.

  • The news comes just days after Meta revealed Vibes, an entirely AI video feed within the Meta AI app.

Why it matters: Sora 2 seems imminent, but it will require some big upgrades to bring the model up to the level of rivals. Both OAI and Meta are heading down the AI social feed route — and given the negative reactions to the Vibes launch, these types of apps are likely to be associated with the slop-ification of the web until proven otherwise.

QUICK HITS

🛠️ Trending AI Tools

  • 🔒 Incogni – Remove your personal data from the web so scammers and identity thieves can’t access it. Use code RUNDOWN to get 55% off*

  • 🚀 Claude Sonnet 4.5 – Anthropic’s new top-performing coding model

  • 🧾 Instant Checkout – Shop mid-conversation inside ChatGPT

  • ❤️ Lovable AI & Cloud – Add AI app functionality with a built-in backend

*Sponsored Listing

📰 Everything else in AI today

DeepSeek launched V3.2-Exp, a model with a new “sparse attention” mechanism that cuts API costs by over 50% while matching its predecessor’s performance.

California Governor Gavin Newsom signed SB 53 into law, requiring transparency from AI giants, alongside a computing cluster consortium and whistleblower protections.

OpenAI rolled out a new safety routing system that switches to GPT-5-thinking during sensitive conversations, alongside the launch of new parental controls.

Quantum computing expert Scott Aaronson published a new paper, revealing that a key technical step came from GPT-5-Thinking.

Lovable launched Lovable Cloud and AI, enabling users to build full-stack apps through prompts with integrated backend services and Gemini-powered AI features.

COMMUNITY

🤝 Community AI workflows

Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.

Today’s workflow comes from reader Tauseef M. in Bangalore, India:

“In my daily practice as a clinician, I use AI (ChatGPT & Perplexity) to quickly interpret clinical data, review the latest guidelines, and create tailored lifestyle and treatment plans. This enhances my decision-making, improves efficiency, and allows me to focus more on meaningful patient interactions.”

How do you utilize AI? Tell us here.

🎓 Highlights: News, Guides & Events

  • Read our last AI newsletter: Hollywood’s synthetic actor showdown

  • Read our last Tech newsletter: Amazon fined $2.5B for Prime ‘trickery’

  • Read our last Robotics newsletter: Google’s robots learn to ‘think’ first

  • Today’s AI tool guide: Create talking head videos using your voice

  • RSVP to next workshop @ 4PM Friday: Vibe coding in Cursor for non-devs 

That’s it for today!

Before you go, we’d love to know what you thought of today’s newsletter, to help us improve The Rundown experience for you.
  • ⭐️⭐️⭐️⭐️⭐️ Nailed it
  • ⭐️⭐️⭐️ Average
  • ⭐️ Fail


See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown

