ChatGPT’s “Juice 200” Mode


Good morning. It’s Monday, September 1st.

On this day in tech history: In 2000, the release of OpenCV delivered a widely used, open-source library of computer-vision and AI algorithms (k-NN, SVMs, decision trees, etc.), laying the groundwork for accessible machine learning tools in academia and industry.

  • ChatGPT’s “Juice 200”

  • Microsoft To Distance Itself from OpenAI?

  • $10k/mo UBI Feasible With AI Growth?

  • 5 Latest AI Tools

  • Latest AI Research Papers

You read. We listen. Tell us what you think by replying to this email.

How 433 Investors Unlocked 400X Return Potential

Institutional investors back startups to unlock outsized returns. Regular investors had to wait on the sidelines. But not anymore. Thanks to regulatory updates, some companies are doing things differently.

Take Revolut. In 2016, 433 regular people invested an average of $2,730. Today? They got a 400X buyout offer from the company, as Revolut’s valuation increased 89,900% over the same period.

Founded by a former Zillow exec, Pacaso is reshaping the $1.3T vacation home market with its co-ownership tech. The company has earned $110M+ in gross profit to date, including 41% YoY growth in 2024 alone. It has even reserved the Nasdaq ticker PCSO.

Paid advertisement for Pacaso’s Regulation A offering. Read the offering circular at invest.pacaso.com. Reserving a ticker symbol is not a guarantee that the company will go public. Listing on the Nasdaq is subject to approvals.

Today’s trending AI news stories

ChatGPT experiments with a max-thinking mode called Juice 200

OpenAI is experimenting with a new “Thinking Effort” selector in the ChatGPT web app, letting users adjust the AI’s cognitive intensity. Options range from Light thinking (5) to Max thinking (200), with intermediate tiers like Standard (18) and Extended (48). Max thinking, or “Juice 200,” is currently limited to Pro and Enterprise users due to the heavy compute it demands.

The update also includes other experiments: showing the selected model in the composer, a fully collapsed tool menu, and model visibility in the Plus menu. The company wants to give users smarter AI without forcing them to guess which model or power level to pick. Early tests suggest this could let ChatGPT scale from casual chat to heavy reasoning tasks while keeping the experience smooth. Read more.
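For developers who want a similar dial today, the public OpenAI API already exposes a coarser version of this control through its reasoning-effort setting; the web app’s internal “juice” values are not surfaced by any public parameter. A minimal sketch with the OpenAI Python SDK (the model name and the mapping to the leaked tiers are assumptions for illustration, not something the report confirms):

```python
# Minimal sketch: `reasoning_effort` is a coarse public analogue of ChatGPT's leaked
# "Thinking Effort" tiers. The internal juice values (5/18/48/200) are not exposed here,
# and the model name is an assumption for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",              # any reasoning-capable model (assumed)
    reasoning_effort="high",      # accepted values: "low" | "medium" | "high"
    messages=[{"role": "user", "content": "Outline a migration plan from REST to gRPC."}],
)
print(response.choices[0].message.content)
```

Higher effort generally means more reasoning tokens, more latency, and more cost, which mirrors why the Max tier is gated to Pro and Enterprise plans.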

With new in-house models, Microsoft lays the groundwork for independence from OpenAI

Microsoft is no longer content being just OpenAI’s biggest customer. Last week, it rolled out two homegrown AI models designed to show it can build world-class systems in-house while keeping costs under control. MAI-Voice-1 is a speech-generation model tuned for speed and efficiency: it cranks out a full minute of expressive, multi-speaker audio in under a second on a single GPU. It’s already powering Copilot features like Daily and Podcasts, a clear play for voice to become a mainstream interface.

The second model, MAI-1-preview, is a large language model trained on around 15,000 Nvidia H100 GPUs, far fewer than rivals like xAI’s Grok, but engineered to run inference on a single GPU. That balance of scale and efficiency is no accident; AI chief Mustafa Suleyman calls it “punching above its weight,” crediting data curation and training discipline over brute-force compute. The model is in public testing now, with a wider Copilot rollout coming. These systems give Microsoft a hedge against overreliance on OpenAI. Read more.

Ex-OpenAI researcher says $10K UBI payments ‘feasible’ with AI growth

Miles Brundage, who once led AGI readiness at OpenAI, argues that current UBI pilots offering $500–$1,500 a month are relics of a pre-AI economy. With trillion-dollar data center buildouts and Nvidia’s Blackwell Ultra GPUs already reshaping global output, he says $10,000 monthly stipends could be both economically and politically viable in the near term. The bottleneck isn’t money; it’s whether governments can rewrite policy fast enough to manage a post-work transition without stagnation.
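To see why the headline hinges on growth, here is a rough back-of-envelope; every input below is our own assumption for illustration, not a figure from Brundage:

```python
# Back-of-envelope only: all inputs are assumptions, not Brundage's figures.
US_ADULTS = 260_000_000       # rough US adult population (assumed)
STIPEND_PER_MONTH = 10_000    # the $10k/month figure from the article
GDP_TODAY = 29e12             # approximate current US GDP in USD/year (assumed)
GROWTH_RATE = 0.20            # hypothetical AI-driven annual growth rate
AFFORDABLE_SHARE = 0.25       # assumed share of GDP a UBI could plausibly claim

annual_cost = US_ADULTS * STIPEND_PER_MONTH * 12   # roughly $31T per year
required_gdp = annual_cost / AFFORDABLE_SHARE      # GDP needed for the cost to fit that share

years, gdp = 0, GDP_TODAY
while gdp < required_gdp:                          # compound growth until the math works
    gdp *= 1 + GROWTH_RATE
    years += 1

print(f"Annual cost: ${annual_cost / 1e12:.1f}T")
print(f"GDP needed at a {AFFORDABLE_SHARE:.0%} share: ${required_gdp / 1e12:.0f}T")
print(f"Years of {GROWTH_RATE:.0%} growth to get there: {years}")
```

Even on generous assumptions the annual bill exceeds today’s entire US GDP, which is why the case rests on AI-driven growth rather than on reshuffling the current budget.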

Nvidia CEO Jensen Huang sees the same forces driving not redistribution but reconfiguration: AI as the long-awaited solution to the “productivity paradox.” By automating drudge work, he argues, firms can finally execute on shelved ideas, making a four-day workweek not only possible but efficient. Pilot programs back him up: 24% productivity gains, burnout cut in half, turnover down.

Brundage and Huang sketch two diverging but linked paths: direct redistribution of AI’s economic surplus, or structural redesign of labor itself. Read more.

5 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.

Your feedback is valuable. Reply to this email and let us know how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!
