Good morning, AI enthusiasts. After a wave of exits, including key members of the founding team, Elon Musk’s xAI is stepping on the gas.
The company just hosted its first all-hands meeting since the SpaceX merger (and posted it online), covering everything from the much-discussed organizational restructure to an ambitious plan to establish deep-space data centers via the Moon.
In today’s AI rundown:
- xAI’s restructure, product roadmap, Moon ambitions
- Z.ai’s GLM-5: the new open-source king
- Turn SOP docs into talking-head training videos
- Anthropic details Claude Opus 4.6’s sabotage risk
- 4 new AI tools, community workflows, and more
LATEST DEVELOPMENTS
XAI

Image source: xAI
The Rundown: xAI hosted its first all-hands since merging with SpaceX, with CEO Elon Musk outlining a significant reorganization, product roadmap updates, and lunar ambitions, all geared toward outpacing rivals and taking xAI to the forefront of AI.
The details:
- Musk acknowledged the departure of team members and outlined a new structure for xAI, saying the move was meant to be “simpler” at scale.
- The new structure has four core teams: Grok (chat and voice), a coding-focused unit, the Imagine team, and Macrohard (agents emulating companies).
- He also spoke about future infrastructure plans with SpaceX, including establishing AI satellite factories on the Moon, using lunar resources and solar energy.
- Musk added that SpaceX will also build an electromagnetic mass driver to “shoot” AI satellites/components into space for large deep-space data centers.
Why it matters: Musk is no stranger to audacious promises, and his timelines often shift. But by broadcasting xAI’s tightened focus, product roadmap, and bold lunar plans, he’s making sure the world knows he’s aiming to build advanced AI in a way no other AI giant is: scaling beyond Earth’s resource limits instead of draining them.
TOGETHER WITH MODULATE
The Rundown: Voice-specialized AI is here, and unlike OpenAI, xAI, and other leaders, it understands conversations and meaning, not just transcripts. Velma 2.0 is the world’s first voice-native AI designed to deliver human-level, real-time conversation intelligence.
By orchestrating 100+ sub-models purpose-built for voice, Velma lets you:
- Decode intent, emotion, stress, and authenticity in messy, multilingual audio
- Analyze audio 100x faster, cheaper, and more accurately than with LLMs
- Get traceable outputs with an explainable path
Try Velma for yourself to understand the true meaning of your conversations.
Z.AI

Image source: Artificial Analysis
The Rundown: China’s Z.ai just launched GLM-5, a 744B-parameter open-weights model that further closes the gap with the West’s frontier, sitting just behind Claude Opus 4.6 and GPT-5.2 on Artificial Analysis benchmarks.
The details:
- GLM-5 scored 50 on Artificial Analysis’ Intelligence Index, surpassing closed models like Gemini 3 Pro and Grok 4 as well as open-source ones like Kimi K2.5.
- The model uses DeepSeek’s Sparse Attention architecture with just 40B active parameters, and runs inference on Chinese chips, including Huawei Ascend.
- On Humanity’s Last Exam, it hit 50.4 with tools, beating Opus 4.5, Gemini 3 Pro, and GPT-5.2. Its coding performance on SWE-Bench also came in close behind the frontier.
- GLM-5 is open-source under an MIT license, available now on Hugging Face, Z.ai’s own platform, and via API at $1 per million input tokens (see the sketch below).
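Since the weights and API are public, trying GLM-5 takes only a few lines. A minimal sketch, assuming Z.ai exposes an OpenAI-compatible endpoint; the base URL and model identifier below are illustrative guesses, so check Z.ai’s API docs for the real values:

```python
# Minimal sketch: calling GLM-5 through an assumed OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.z.ai/api/paas/v4",  # assumed endpoint; verify in Z.ai's docs
    api_key="YOUR_ZAI_API_KEY",               # placeholder; substitute your real key
)

response = client.chat.completions.create(
    model="glm-5",  # assumed model identifier
    messages=[{"role": "user", "content": "Summarize GLM-5's headline benchmark results."}],
)
print(response.choices[0].message.content)
```

At $1 per million input tokens, a short prompt like this costs a tiny fraction of a cent, which is the pricing angle that makes the model so competitive.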
Why it matters: The wave of Seedance 2.0’s viral AI clips hasn’t even faded, and here we have another near-frontier model from China already knocking on the door. The gap with the West isn’t closed yet, but with open weights, competitive pricing, and domestic chip support, it’s definitely narrowing faster than ever.
AI TRAINING

The Rundown: In this guide, you’ll learn how to turn boring onboarding docs into engaging training videos narrated by an AI avatar. We tried a number of tools and found the most efficient system for producing quality AI training videos in bulk.
Step-by-step:
- Take your training doc and prompt Claude/ChatGPT with: “Turn this into a three-minute training video script for an AI-generated avatar. Only include text overlays with bullets. The avatar can be seated, standing, head-on, etc.”
- Save the script as a text file and go to Synthesia.io > Create New Video > Create from AI > Upload the script file, along with an objective and audience description
- Select a template and click Create Outline. Review the outline and follow the steps to generate your video. It should take 10-25 minutes to generate
- When the video is complete, you can download it and embed it somewhere like Notion or Google Docs
Pro tip: Repeat this for all onboarding docs to establish one-page onboarding that can be handed to any trainee! The first step can even be automated in bulk, as sketched below.
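If you have a pile of SOP docs, step 1 can be scripted instead of pasted by hand. A minimal sketch, assuming the `anthropic` Python SDK, a folder of plain-text docs, and an ANTHROPIC_API_KEY in your environment; the model name is an assumption you’d swap for whatever is current:

```python
# Minimal sketch: batch-generate avatar scripts from a folder of SOP docs.
# Assumes `pip install anthropic`, plain-text docs in ./docs, and
# ANTHROPIC_API_KEY set in the environment.
from pathlib import Path
import anthropic

PROMPT = (
    "Turn this into a three-minute training video script for an AI-generated "
    "avatar. Only include text overlays with bullets. The avatar can be "
    "seated, standing, head-on, etc.\n\n"
)

client = anthropic.Anthropic()
out_dir = Path("scripts")
out_dir.mkdir(exist_ok=True)

for doc in Path("docs").glob("*.txt"):
    reply = client.messages.create(
        model="claude-sonnet-4-5",  # assumed model name; use whatever is current
        max_tokens=2000,
        messages=[{"role": "user", "content": PROMPT + doc.read_text()}],
    )
    # One script file per doc, ready for the Synthesia upload in step 2.
    (out_dir / f"{doc.stem}_script.txt").write_text(reply.content[0].text)
```

Each file in scripts/ then goes through steps 2-4 unchanged.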
PRESENTED BY SLACK FROM SALESFORCE
The Rundown: Slackbot is a context-aware AI agent built directly into Slack, understanding your conversations, files, and workflows to deliver what you need, right when you need it, with zero setup.
Watch this 2-minute demo to see how Slackbot:
- Makes your entire workspace searchable (docs, convos, apps)
- Enhances every teammate with role-specific automations
- Learns your projects and preferences over time for even smarter outputs
- Synthesizes what you need instantly, respecting permissions and using only what you can already see
AI SAFETY

Image source: Nano Banana / The Rundown
The Rundown: Anthropic published its latest Sabotage Risk Report, revealing that its new Claude Opus 4.6 model displays an “elevated susceptibility” to being misused for “heinous crimes,” including assisting in the development of chemical weapons.
The details:
- Anthropic found Opus 4.6 knowingly supported crimes like chemical weapon development in small ways, but couldn’t execute attacks on its own.
- When tasked with achieving a specific goal in a multi-agent test, the model proved far more willing to manipulate and deceive other agents than previous models.
- Considering these findings, Anthropic deemed the overall sabotage risk “very low but not negligible,” due to the model’s lack of coherent misaligned goals.
- The company also classified the model’s capabilities as entering a “gray zone” that triggered this mandatory report under its Responsible Scaling Policy.
Why it matters: Anthropic’s CEO Dario Amodei recently highlighted the risks of advanced AI, and now one of his own models appears to be moving into the gray zone. With growing competition from OpenAI, Google, xAI, and Chinese labs, the pressure to push capabilities forward may only intensify the very risks he has warned about.
QUICK HITS
- 🗣️ Unwrap Customer Intelligence – Connect your entire organization to the true voice of the customer with AI-driven insights from customer feedback*
- 🧑‍💻 GLM-5 – Zhipu AI’s new open-source frontier model
- 🤖 Claude – Anthropic’s AI assistant, now with more features for free users
- 🧠 Ming-flash-omni 2.0 – Ant’s omni AI with speech, vision, and image capabilities
Apple’s long-awaited Gemini-powered Siri AI upgrade has reportedly been pushed back (again) due to new testing snags, now likely to arrive with iOS 26.5 or 27.
OpenAI elevated its “Mission Alignment” head, Joshua Achiam, to the role of Chief Futurist, responsible for studying “AI impacts and engaging the world to discuss them.”
Meta broke ground on a new data center in Lebanon, Indiana, one of its largest infrastructure bets, adding 1GW of capacity to power its AI and core products.
Anthropic announced it will cover electricity price increases from its data centers, shielding local ratepayers, in line with similar pledges from Microsoft and OpenAI.
Google is rolling out UCP-powered checkout in Gemini and AI Mode in the U.S., integrating Veo into Google Ads, and testing sponsored retailer ads in AI Mode.
COMMUNITY
Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.
Today’s workflow comes from reader Lindsay F. in Kingsville, Ontario:
“I own a 1970 Chevelle SS and am converting it into a modern-driving ‘restomod.’ I’m using both ChatGPT & Copilot to research and develop the complete restoration plan. The restoration of the vehicle will happen in phases, and the agents have provided me with a priority list, options for what parts to buy, and where to source them from.
They’ve also developed a budget for the project, including parts & local labor rates and what the finished project will look like upon completion. I’m 72 years old and just love how much this helps me.”
How do you use AI? Let us know here.
- Read our last AI newsletter: xAI’s co-founder exodus continues
- Read our last Tech newsletter: Musk’s ‘self-growing’ Moon city
- Read our last Robotics newsletter: Uber to launch robotaxis in 15 cities
- Today’s AI tool guide: Turn SOP docs into talking-head training videos
- RSVP to our next workshop on Feb 18: Agentic Workflows Bootcamp pt. 2
That’s it for today!
- ⭐️⭐️⭐️⭐️⭐️ Nailed it
- ⭐️⭐️⭐️ Average
- ⭐️ Fail
See you soon,



