AI agent

Claude Skills and Subagents: Escaping the Prompt Engineering Hamster Wheel

If you've been building with LLMs for a while, you've probably lived through this loop again and again: you spend time crafting an important prompt that produces excellent results, after...

Build Effective Internal Tooling with Claude Code

is incredibly effective at quickly building new applications. That is, in fact, super useful for any programming task, whether it's working on an existing legacy application or a brand-new codebase. However, from...

Can AI Solve Failures in Your Supply Chain?

chain is a goal-oriented network of processes and stock points that delivers finished goods to stores. Imagine a luxury fashion retailer with a central distribution center that delivers to stores worldwide (the USA, Asia-Pacific, and EMEA) from a...

Building a LangGraph Agent from Scratch

The term "AI agent" is probably the most popular one right now. Agents emerged after the LLM hype, when people realized that the latest LLM capabilities are impressive but that they will only perform tasks...

Building an AI Agent to Detect and Handle Anomalies in Time-Series Data

As a data scientist working on time-series forecasting, I have run into anomalies and outliers more times than I can count. Across demand forecasting, finance, traffic, and sales data, I keep running into spikes...

Prompt Fidelity: Measuring How Much of Your Intent an AI Agent Actually Executes

Spotify just shipped "Prompted Playlists" in beta. I built a few playlists and found that the LLM behind the agent tries to fulfill your request, but fails because it doesn't know enough yet...

Plan–Code–Execute: Designing Agents That Create Their Own Tools

today deal with how multiple agents coordinate while choosing tools from a predefined toolbox. While effective, this design quietly assumes that the tools required for a task are known in advance. Let's challenge that assumption...

Why Your Multi-Agent System is Failing: Escaping the 17x Error Trap of the “Bag of Agents”

landed on arXiv just before Christmas 2025, very much an early present from the team at Google DeepMind, with the title "Towards a Science of Scaling Agent Systems." I found this paper to be a...
