Never miss a new edition of our weekly newsletter, featuring a top-notch selection of editors’ picks, deep dives, community news, and more.
As we wrap up the first month of 2026, it might be a tad too early to detect major changes or emerging themes. One thing is clear, though: our readers are keen to stay on top of industry trends and cutting-edge tools.
Fortunately (and as always), TDS contributors have started the year on a strong note, delivering timely and insightful reads on these and many other topics. This week, we’re highlighting our most-read and most-shared articles from January, covering LLM context, Claude Code, and the future of big data platforms, to name a few standout examples.
The Great Data Closure: Why Databricks and Snowflake Are Hitting Their Ceiling
“How big can a data company really grow?” Hugo Lu begins his thought-provoking deep dive by questioning the current business model of big platforms like Databricks and Snowflake. He goes on to unpack the various factors at play and to offer some bold predictions for the coming year.
How LLMs Handle Infinite Context With Finite Memory
Can you truly do (much) more with (much) less? Moulik Gupta offers a thorough and accessible explainer on Infini-attention.
How to Maximize Claude Code Effectiveness
Eivind Kjosbakken’s handy guide outlines key optimization techniques for using the popular agentic-coding tool.
Other January Highlights
Here are a few more of last month’s most popular stories, with insights on fused kernels, context engineering, and federated learning, among other topics:
Beyond Prompting: The Power of Context Engineering, by Mariya Mansurova
Using ACE to create self-improving LLM workflows and structured playbooks.
Cutting LLM Memory by 84%: A Deep Dive into Fused Kernels, by Ryan Pégoud
Why your final LLM layer is OOMing and how to fix it with a custom Triton kernel.
Why Human-Centered Data Analytics Matters More Than Ever, by Rashi Desai
From optimizing metrics to designing meaning: putting people back into data-driven decisions.
Retrieval for Time-Series: How Looking Back Improves Forecasts, by Sara Nobrega
An introduction to retrieval in time-series forecasting.
Why Supply Chain is the Best Domain for Data Scientists in 2026 (And How to Learn It), by Samir Saci
My take, after 10 years in supply chain, on why this can be a great playground for data scientists who want to see their skills valued.
Federated Learning, Part 1: The Basics of Training Models Where the Data Lives, by Parul Pandey
Understanding the foundations of federated learning.
Authors in the Spotlight
We hope you take the time to read our latest author Q&A, and explore excellent work from our newest contributors:
- Diana Schneider zoomed in on evaluation methods for multi-step LLM-generated content, like customer journeys.
- Kaixuan Chen and Bo Ma shared their work on constructing a neural machine translation system for Dongxiang, a low-resource language.
- Pushpak Bhoge devoted his debut article to benchmarking the performance of Meta’s SAM 3 against specialist models.
Do your New Year’s resolutions include publishing on TDS and joining our Author Payment Program? Now’s the time to send along your latest draft!
