TDS Newsletter: November Must-Reads on GraphRAG, ML Projects, LLM-Powered Time-Series Analysis, and More


Never miss a new edition of our weekly newsletter, featuring a top-notch selection of editors’ picks, deep dives, community news, and more.

With the end of the year just a few weeks away, neither our authors nor our readers are showing any signs of slowing down.

We’re thrilled to have published some of our strongest articles of the year this past month: practical guides on LLM workflows, resources on career growth, Python-focused tutorials, and deep dives on recently launched tools, among other standout topics. Read on to catch up with (or revisit) November’s most-read stories.


Graph RAG vs SQL RAG

Which database paradigm delivers more accurate and insightful results? Reinhard Sellmair sets out to evaluate the performance of two kinds of RAG systems by pitting GraphRAG and SQL RAG against each other, using the same dataset and questions.

LLM-Powered Time-Series Analysis

In the second part of Sara Nobrega’s popular series, we learn about the prompts needed for advanced model development (think ARIMA and LSTM).

How to Build Machine Learning Projects That Help You Get Hired

Not all ML portfolios are created equal. Egor Howell shares time-tested insights on what works — and what doesn’t.


Other November Highlights

Don’t miss our other top reads from the past month, tackling NumPy, multimodal RAG, marimo notebooks, and many other topics, both evergreen and cutting-edge.

NumPy for Absolute Beginners: A Project-Based Approach to Data Analysis, by Ibrahim Salami

Understanding Convolutional Neural Networks (CNNs) Through Excel, by Angela Shi

Run Python Up to 150× Faster with C, by Thomas Reid

How to Build an Over-Engineered Retrieval System, by Ida Silfverskiöld

Building a Multimodal RAG That Responds with Text, Images, and Tables from Sources, by Partha Sarkar

Why I’m Making the Switch to marimo Notebooks, by Parul Pandey

Your Next ‘Large’ Language Model Might Not Be Large After All, by Moulik Gupta


In Case You Missed It: Our Latest Creator Q&As

We love sharing our authors’ expertise, career insights, and views on recent developments in the world of data science and AI. Here are our most recent Author Spotlights.

  • “Systems thinking helps me put the big picture front and center”
    Shuai Guo on deep research agents, analytical AI vs LLM-based agents, and systems thinking.
  • “The success of an AI product depends on how intuitively users can interact with its capabilities”
    Janna Lipenkova on AI strategy, AI products, and how domain knowledge can change the entire shape of an AI solution.

Meet Our New Authors

We hope you’ll take the time to explore the excellent work from the latest cohort of TDS contributors:

  • Jure Leskovec, a Stanford professor of computer science and entrepreneur, explains why LLMs aren’t a one-size-fits-all solution for companies.
  • Sherin Sunny, a senior engineer at Walmart, walked us through the creation of a computer vision project aimed at detecting leaves.
  • Manuel Franco de la Peña introduced us to ShaTS, a novel Shapley-based explainability method specifically designed for time-series models, which he co-created.

We love publishing articles from new authors, so if you’ve recently written an interesting project walkthrough, tutorial, or theoretical reflection on any of our core topics, why not share it with us?


We’d Love Your Feedback, Authors!

Are you an existing TDS author? We invite you to fill out a 5-minute survey so we can improve the publishing process for all contributors.


Subscribe to Our Newsletter
