Good morning. It’s Monday, May 27th.
Did you know: On this day in 1988, Microsoft released Windows 2.1? We should really bring back that stunning UI.
- OpenAI Ex-Board Member’s Warning
- Elon’s Gigafactory of Compute
- Apple’s Potential WWDC Partnerships
- Big Tech Agrees on AI Killswitch
- 5 Latest AI Tools
- Latest AI Research Papers
You read. We listen. Tell us what you think by replying to this email.
In partnership with SEO Blog Generator
Boost your revenue with the SEO Blog Generator
Drive more traffic to your website and increase your revenue with SEO-optimized blog posts, eye-catching images, and automatic social media post suggestions.
Drive Traffic: Attract more visitors and turn them into customers with perfectly optimized posts that rank higher on search engines.
Meta Titles & Descriptions: Optimize your posts further with automatically generated meta titles and descriptions.
Automatic Image Generation: Enhance your posts with stunning visuals generated automatically.
Social Media Automation: Get tailored post suggestions for Facebook, Twitter, Instagram, and LinkedIn to keep your feeds active and engaging.
Multi-Language: Reach a global audience with multi-language options.

Today’s trending AI news stories
Former OpenAI Board Members: “The company cannot be trusted to govern itself.”

In a scathing op-ed for The Economist, former OpenAI board members Helen Toner and Tasha McCauley assert that AI companies cannot effectively self-govern and advocate for third-party regulation. They stand by their decision to remove CEO Sam Altman, citing allegations that he created a “toxic culture of lying” and engaged in behavior amounting to “psychological abuse,” as reported by senior leaders within OpenAI.
Altman’s reinstatement, coupled with safety concerns, including the use of a voice resembling Scarlett Johansson’s in ChatGPT-4o, has cast doubt on OpenAI’s experiment in self-regulation. Toner and McCauley view these developments as a cautionary tale, urging external oversight to prevent a “governance-by-gaffe” approach.
While acknowledging the Department of Homeland Security’s recent AI safety initiatives, they express concern about the influence of “profit-driven entities” on policy. Independent regulation, they argue, is the linchpin for ensuring ethical and competitive AI development, free from the undue influence of corporate interests. Read more.
Elon Musk’s xAI Plans “Gigafactory of Compute” to Dwarf Meta’s Massive GPU Clusters

Elon Musk’s xAI is developing an ambitious “Gigafactory of Compute” – a supercomputer intended to surpass rivals like Meta. Targeted for fall 2025, this Nvidia H100 GPU-powered behemoth is projected to quadruple the processing power of today’s largest GPU clusters.
While its location remains under wraps, Musk has personally committed to its timely completion. A potential collaboration with Oracle suggests the platform will power xAI’s Grok chatbot on the X platform.
Musk foresees AI outpacing human intelligence by next year, given ample computing power. Microsoft and OpenAI also plan a $100 billion supercomputer, “Stargate,” aiming for full deployment by 2030, contingent on significant AI research progress. Read more.
Apple’s WWDC may include AI-generated emoji and an OpenAI partnership

Mark Gurman’s Bloomberg report suggests WWDC 2024 may see an expansion of Apple’s “applied intelligence” initiatives. This could include AI-powered emoji generation and deeper integration of on-device capabilities like voice memo transcription.
While the rumored partnership with OpenAI remains unconfirmed, whispers hint at advancements in chatbot technology. Local processing will likely take precedence, with M2 Ultra servers handling computationally intensive tasks. Also expect refinements to Siri and potential feature extensions built on Apple’s large language models.
The rumored ability to customize app icons and layouts in iOS 18 further underscores a user-centric approach. Overall, the main focus appears to be on leveraging AI for practical enhancements across the Apple ecosystem. Read more.
Tech companies have agreed to an AI ‘kill switch’ to stop Terminator-style risks

Tech companies, including Anthropic, Microsoft, and OpenAI, alongside 10 countries and the EU, have committed to responsible AI development by agreeing to implement an AI ‘kill switch.’ This policy aims to halt the advancement of their most sophisticated AI models if they cross certain risk thresholds.
However, the effectiveness of this measure is uncertain, as it lacks legal enforcement and specific risk criteria. The agreement, signed at a summit in Seoul, follows previous gatherings like the Bletchley Park AI Safety Summit, which was criticized for its lack of actionable commitments. Concerns about AI development’s potential hostile effects, reminiscent of sci-fi scenarios like The Terminator, have prompted calls for regulatory frameworks. While individual governments have taken steps, global initiatives have remained largely non-binding. Read more.
Who will make AlphaFold3 open source? Scientists race to crack AI model: Following the release of DeepMind’s AlphaFold3 in Nature, a “gold rush” for open-source alternatives has begun. The absence of accompanying code, while prompting community concerns (including a delayed-release promise from DeepMind), has spurred initiatives like Columbia’s “OpenFold” project. Transparency remains a concern, echoing Nature’s code-sharing policies. While DeepMind’s future release details are unclear, particularly regarding protein-drug interactions, open-source versions offer the potential for retraining and improved performance – crucial for pharmaceutical applications. Scientists like David Baker and Phil Wang are seeking insights from AlphaFold3 for their own models. Hacked versions are already emerging, indicating the demand for accessibility and transparency in AI tool development. Read more.
Google scrambles to manually remove weird AI answers in search: Google is facing challenges with its AI Overview product, which has been generating bizarre responses like suggesting users put glue on pizza or eat rocks. The company is manually removing these strange answers as they appear on social media, indicating a hasty response to the issue. Despite being in beta since May 2023 and processing over a billion queries, the product’s rollout has been marred by unexpected outputs. Google claims to be “swiftly addressing the issue” and is using instances of weird responses to improve its systems. Read more.
Etcetera: Stories you may have missed

5 latest AI-powered tools from around the web
Invisibility integrates advanced AI models (GPT-4o, Claude 3 Opus, Gemini, Llama 3) for Mac, offering seamless multitasking via a simple keyboard shortcut.
HyperCrawl offers zero-latency web crawling optimized for retrieval-based LLM development, enhancing data gathering and efficiency in AI research projects.
Forloop.ai is a no-code platform for web scraping and data automation, enabling rapid data gathering, preparation, and process automation for teams.
PitchFlow auto-generates pitch decks in one minute from startup inputs, allowing entrepreneurs to focus on product development and user engagement.
Zycus Generative AI boosts Source-to-Pay (S2P) productivity by 10x, enhancing efficiency, cost savings, and risk management in procurement processes.

arXiv is a free online library where researchers share pre-publication papers.



Your feedback is valuable. Reply to this email and tell us how you think we could add more value to this newsletter.