Good morning. It’s Monday, July fifteenth.
-
OpenAI’s Recent Secret Project
-
Gemini AI Caught Spying on Docs
-
DeepMind’s PEER
-
OpenAI’s Illegal NDAs?
-
5 New AI Tools
-
Latest AI Research Papers
You read. We listen. Tell us what you think by replying to this email.
Today’s trending AI news stories
Exclusive: OpenAI working on new reasoning technology under code name ‘Strawberry’
OpenAI is developing a new AI project, “Strawberry,” to strengthen its models’ reasoning capabilities. Formerly known as Q* or Q-Star, Strawberry leverages a specialized type of post-training to adapt pre-trained models for autonomous web searches and “deep research.”
Internal documents reveal Strawberry’s specialized post-training process, which refines AI models after initial training on large datasets. Drawing parallels with Stanford’s “Quiet-STaR” method, Strawberry aims to enhance AI’s logical reasoning by integrating large language models with planning algorithms and reinforcement learning techniques.
This initiative aligns with OpenAI’s strategy to empower AI models not only to generate answers but also to plan and navigate the web autonomously. Elon Musk has acknowledged OpenAI’s Strawberry project, expressing bullish optimism for AI’s future. Enhancing AI’s human-like reasoning is crucial for breakthroughs in scientific research and engineering, addressing common challenges like intuitive problem-solving and logical fallacies.
Recently, OpenAI signaled the release of technology with improved reasoning abilities through specialized post-training processes. Read more.
Google’s Gemini AI caught scanning Google Drive hosted PDF files without permission — user complains feature cannot be disabled
Google’s Gemini AI has come under scrutiny for allegedly scanning PDF files stored on Google Drive without explicit user permission, prompting concerns raised by Kevin Bankston, a Senior Advisor on AI Governance. Bankston criticized the practice after discovering that Gemini automatically summarized his private documents without prior user interaction.
Despite efforts to disable these features through settings, users like Bankston found the process convoluted and ineffective, highlighting a gap in Google’s transparency and user control measures. This incident fuels ongoing discussions about AI privacy and user consent, challenging the effectiveness of current safeguards as AI continues to integrate into daily applications.
Google has not clarified the technical rationale behind Gemini’s actions, leaving users wary about the privacy implications of AI-driven functionalities within Google’s ecosystem. Read more.
DeepMind’s PEER scales language models with millions of tiny experts
DeepMind has introduced Parameter Efficient Expert Retrieval (PEER) to scale Mixture-of-Experts (MoE) models, addressing current limitations of MoE architectures. MoE enhances large language models (LLMs) by routing data to specialized “expert” modules, thus increasing model capacity without escalating computational costs. However, traditional MoE is limited to a small number of experts. PEER overcomes this by using a learned index to route input data efficiently to millions of tiny expert modules, each with a single neuron in the hidden layer. This design improves parameter efficiency and knowledge transfer.
PEER’s architecture can replace existing transformer feedforward (FFW) layers, optimizing the performance-compute tradeoff. It leverages a multi-head retrieval approach, similar to the multi-head attention mechanism in transformer models. Evaluations on various benchmarks show PEER achieves lower perplexity scores and better performance-compute tradeoffs compared to dense FFW layers and other MoE architectures. Read more.
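To make the routing idea concrete, here is a minimal NumPy sketch of the general pattern described above: each expert is a single hidden neuron (one down-projection vector and one up-projection vector), and a per-expert key lets each input retrieve only its top-k experts from a large pool. All names, shapes, and the flat key lookup are illustrative assumptions; the actual PEER design uses a more efficient product-key retrieval and learned parameters, not random ones.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 16        # token embedding size (assumed for illustration)
num_experts = 1024  # pool of tiny experts (millions in the real system)
top_k = 8           # experts activated per token

# One retrieval key per expert, plus the expert itself:
# a single-neuron MLP = one input vector and one output vector.
keys = rng.normal(size=(num_experts, d_model))
w_down = rng.normal(size=(num_experts, d_model))
w_up = rng.normal(size=(num_experts, d_model))

def peer_layer(x):
    """Stand-in for a dense FFW layer: route x to its top-k tiny experts."""
    scores = keys @ x                                # similarity to every key
    idx = np.argpartition(scores, -top_k)[-top_k:]   # indices of top-k experts
    gates = np.exp(scores[idx] - scores[idx].max())
    gates /= gates.sum()                             # softmax over retrieved experts
    # Each expert computes a scalar: relu(w_down . x); its output is that
    # scalar times w_up. Sum the gated expert outputs.
    acts = np.maximum(w_down[idx] @ x, 0.0)
    return (gates * acts) @ w_up[idx]

x = rng.normal(size=d_model)
y = peer_layer(x)   # same shape as a dense FFW output: (d_model,)
```

The key property is that per-token compute scales with top_k, not with num_experts, which is what lets the pool grow to millions of experts without a matching growth in FLOPs.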
Whistleblowers accuse OpenAI of ‘illegally restrictive’ NDAs
Whistleblowers have accused OpenAI of unlawfully restricting its employees from communicating with government regulators, as reported in a letter obtained by The Washington Post. The letter, addressed to SEC Chair Gary Gensler, raises concerns about OpenAI’s severance, non-disparagement, and non-disclosure agreements (NDAs). It alleges that these agreements discourage employees from reporting securities violations to the SEC and force them to waive whistleblower incentives.
Furthermore, the whistleblowers claim that previous NDAs violated labor laws by placing overly restrictive conditions on employees seeking employment, severance payments, and other financial benefits. OpenAI has not yet responded to requests for comment. A company spokesperson did highlight that their whistleblower policy is designed to protect employees’ rights to make protected disclosures. Read more.
Etcetera: Stories you may have missed

5 new AI-powered tools from around the web
PngMaker.io is a free online tool that converts text to professional PNG images with transparent backgrounds in seconds, ideal for digital content.
AI Web Designer quickly redesigns websites using AI, allowing users to edit and export results. It democratizes design, simplifying web development.
AyeHigh offers user-friendly generative AI tools for students and professionals, including resume shortlisting, ATS evaluation, and content optimization.
Move AI converts 2D video into 3D motion data for lifelike animation using advanced AI, computer vision, biomechanics, and physics technologies.
Phaie AI by Creatr is an open-source tool and Figma plugin for generating, editing, and fixing design systems using AI.

arXiv is a free online library where researchers share pre-publication papers.



Your feedback is helpful. Reply to this email and tell us how you think we could add more value to this newsletter.