Good morning. It’s Friday, November fifteenth.
Did you know: On this day in 2001, the original Xbox went on sale?
-
OpenAI’s Policy Paper Hopes to Persuade US Gov
-
“Operator” Agent Launch Coming in Jan?
-
Run open-source AI on a Mac
-
Latest Experimental Gemini Model
-
Robot Learns Surgery By Observation
-
4 Latest AI Tools
-
Latest AI Research Papers
You read. We listen. Tell us what you think by replying to this email.
In partnership with OUTREAD AI
Read 15-minute simplified summaries of groundbreaking research in tech, economics, psychology and more.
Quick insights, big impact – no heavy reading needed!
Thanks for supporting our sponsors!

Today’s trending AI news stories
Report: OpenAI Drafts Policy Paper With Daring Suggestions For US AI Strategy

OpenAI’s latest policy paper proposes a targeted U.S. AI strategy, urging the creation of AI-focused economic zones to streamline infrastructure projects, from data centers to renewable and nuclear energy. A proposed National Transmission Highway Act would ease energy and data transmission development, attracting private investment through government-backed energy purchases.
Further, OpenAI advocates a North American AI alliance to consolidate competitiveness against global players like China. Collaboration with public universities is outlined to foster AI research hubs and specialized workforce training. These initiatives, OpenAI contends, promise substantial economic gains, driving job creation, GDP growth, and contributions to semiconductor manufacturing and grid modernization. Read more.
OpenAI is reportedly planning AI agent “Operator” for January launch
OpenAI is preparing to launch “Operator,” an AI assistant that can autonomously perform tasks such as coding and travel bookings, set for release in January. Initially available as a research preview and through an API for developers, Operator will focus on automating browser-based functions.
CEO Sam Altman sees AI agents as the next stage in AI evolution, optimizing the use of existing models. This development mirrors industry efforts, with firms like Anthropic, Microsoft, and Google advancing similar assistants. These agents facilitate the automation of complex workflows by coordinating subtasks across multiple systems.
OpenAI’s “Project Swarm,” an open-source framework, illustrates this vision, enabling assistants to transfer control and execute tasks sequentially, showcasing the growing momentum of multi-agent systems in AI-driven automation. Read more.
You can now run some of the most powerful open-source AI models locally on a Mac
Exo Labs has enabled running powerful open-source AI models like Qwen 2.5 Coder-32B and Nemotron-70B on local Mac M4 devices, bypassing costly cloud infrastructure. By connecting multiple Mac Minis and a MacBook Pro M4, Exo Labs provides a cost-effective, secure, and private alternative for AI model execution.
The open-source Exo software distributes workloads across devices, making high-performance AI accessible to individuals and enterprises. Leveraging the M4’s advanced capabilities, Exo offers a decentralized solution with faster, more efficient AI operations. Exo Labs is expanding with enterprise offerings and a benchmarking site to guide users in optimizing their hardware setups. Read more.
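The core idea behind distributing a model across several machines can be illustrated with a small sketch. This is not Exo's actual code; it is a hypothetical illustration of one common approach, splitting a model's layers into contiguous ranges sized proportionally to each device's memory, so no single machine has to hold the full model.

```python
# Hedged sketch of layer partitioning across devices (hypothetical, not Exo's
# implementation): assign contiguous layer ranges proportional to device memory.

def partition_layers(num_layers, device_mem_gb):
    """Return (start, end) layer ranges, one per device, covering all layers."""
    total = sum(device_mem_gb)
    bounds, start = [], 0
    for i, mem in enumerate(device_mem_gb):
        # The last device takes the remainder so rounding leaves no gaps.
        if i == len(device_mem_gb) - 1:
            end = num_layers
        else:
            end = start + round(num_layers * mem / total)
        bounds.append((start, end))
        start = end
    return bounds

# Example: a 64-layer model over two 16 GB Mac Minis and a 32 GB MacBook Pro.
print(partition_layers(64, [16, 16, 32]))  # [(0, 16), (16, 32), (32, 64)]
```

With a split like this, each device runs only its own layer range and streams activations to the next device in the chain, which is why adding machines extends the total model size the cluster can serve.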
Latest Experimental Gemini Model
The new experimental Gemini model, “gemini-exp-1114,” is now available in Google AI Studio and is making waves in the AI community. Recently ranked #1 in overall performance and vision on the Chatbot Arena, the model has seen significant improvements across several domains, including math, hard prompts, creative writing, and coding.
This surge follows a 40-point leap in its Arena score, surpassing previous iterations of the model.
As “gemini-exp-1114” continues to gain traction, it is expected to become accessible via API soon, offering users a chance to test its capabilities firsthand and provide feedback on its performance. Read more.
AI-powered robot learned to perform surgery by watching doctors work
Researchers at Johns Hopkins and Stanford have pioneered a technique that enables robots to learn surgery by watching video footage of expert surgeons, bypassing the need for tedious hand-coded instructions. Using imitation learning, the robot replicates tasks like suturing and tissue manipulation with precision on par with human surgeons.
The system, built on machine learning models akin to those powering language models like ChatGPT, decodes surgical movements into kinematic data. By training on an enormous archive of recorded procedures, the robot adapts autonomously, even correcting its actions if necessary. This breakthrough accelerates the training process, offering a path toward faster, more accurate robotic surgeries and inching closer to fully autonomous operations. Read more.
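At its simplest, imitation learning of this kind reduces to supervised regression: map observed states (e.g. features extracted from video) to the expert's kinematic actions. The sketch below is a hypothetical toy with synthetic data, not the researchers' model; it only illustrates the behavioral-cloning principle of fitting a policy to expert demonstrations.

```python
import numpy as np

# Toy behavioral-cloning sketch (synthetic data, hypothetical setup):
# learn a policy that maps observed states to expert kinematic actions.
rng = np.random.default_rng(0)

# Synthetic "expert demonstrations": states X and the actions Y the
# expert took in those states (linear expert for illustration only).
true_W = rng.normal(size=(4, 2))        # hidden expert mapping
X = rng.normal(size=(200, 4))           # 200 observed states, 4 features each
Y = X @ true_W                          # expert actions, 2 kinematic dims

# Fit the imitation policy by least squares: argmin_W ||X W - Y||^2
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The cloned policy now reproduces expert actions on unseen states.
X_new = rng.normal(size=(10, 4))
max_err = np.max(np.abs(X_new @ W_hat - X_new @ true_W))
print(max_err < 1e-8)  # True: the policy matches the expert closely
```

Real systems replace the linear map with deep networks and the synthetic states with video-derived features, but the core loop, fit a policy to expert state-action pairs, is the same.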


4 latest AI-powered tools from around the net

arXiv is a free online library where researchers share pre-publication papers.



Your feedback is valuable. Reply to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on X!