“…when combined with analytics products, can transform into powerful tools for supply chain optimisation.”
This quote is from a plant director who contacted us to implement an algorithm enhancing their Master Production Schedule (MPS).
The goal was to generate production schedules that balance economies of scale with minimal inventory.
Our initial solution was a web application connected to the factory systems.

While it generated optimal production plans, it still required planners to navigate dashboards and export results.
As an experiment, we packaged the optimisation engine into a FastAPI microservice embedded in an AI workflow built with n8n.
The tool evolved into an AI assistant (integrated within the planners’ workflow) that can understand inputs, run the algorithm and deliver optimised plans with explanations in plain English.

In this article, I present how we conducted this experiment of using AI agents for supply chain optimisation with n8n.
This is the first of a longer series of experiments aiming to build a Supply Chain Optimisation “super-agent” equipped with algorithms packaged as FastAPI microservices.
Production Planning with Python
Scenario
Let us assume we are supporting a medium-sized factory in Europe.
The master production schedule is the main communication tool between the business team and production.
In our client’s factory, customers send purchase orders (PO) with quantities and expected delivery dates to their planning team.

For instance, the expected delivery quantity in Month 2 is 150 boxes.
Initial Solution
The goal of the planning team is to find the optimal production plan that minimises production costs, considering:
- Setup Costs: fixed costs incurred every time you set up a production line
- Holding Costs: cost of storage per unit per time
If you produce only the quantity needed each month, you minimise the holding costs.

But setup costs will explode, as you have to set up the production line 12 times.
On the contrary, if you produce the entire quantity in the first month, you only have one setup, but your holding costs will explode.

You build an inventory of 2,000 boxes in the first month, which is slowly consumed over the 12 months.
There is an optimal scenario between these two edge cases.
In another article, I explain how to use the Wagner-Whitin algorithm to generate an optimised plan.
It is a dynamic programming method for production planning that finds the cost-optimal schedule over multiple periods.

It finds the best balance between setup and holding costs by evaluating all feasible production plans.
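To make the recursion concrete, here is a minimal sketch of the Wagner-Whitin dynamic program; the function name, input format (one demand value per period) and the flat per-unit holding cost are simplifying assumptions, not the production implementation described in the article.

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Return (minimal total cost, list of production periods, 1-indexed)."""
    T = len(demand)
    best = [0.0] + [float("inf")] * T   # best[t] = min cost to cover periods 1..t
    last_setup = [0] * (T + 1)          # period of the last setup in the best plan
    for t in range(1, T + 1):
        for j in range(1, t + 1):
            # produce in period j everything needed for periods j..t
            hold = sum(holding_cost * (i - j) * demand[i - 1] for i in range(j, t + 1))
            cost = best[j - 1] + setup_cost + hold
            if cost < best[t]:
                best[t], last_setup[t] = cost, j
    # backtrack the production periods
    plan, t = [], T
    while t > 0:
        plan.append(last_setup[t])
        t = last_setup[t] - 1
    return best[T], sorted(plan)
```

For a two-month demand of 50 and 60 boxes with a setup cost of 100 and a holding cost of 1 per box per month, a single setup in Month 1 (cost 100 + 60 of holding = 160) beats two setups (cost 200), and the function returns exactly that plan.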

The output is an optimal plan:
- Only 4 production months: Months 1, 6, 9 and 11;
- The inventory is consumed between each production batch.
In the animated GIF below, you can see a demo of the solution deployed as a web application.

Users can
- Upload their demand forecasts by month, week or day
- Select the parameters (setup costs, holding costs, …)
To improve the productivity of planners, we aim to eliminate the UI and integrate the solution directly into their workflow using AI.
In the next section, I will share the experiments we conducted using a prototype of this AI-powered workflow built with n8n.
👉 Check the video linked below for a live demo of the workflow
AI Workflow with FastAPI and n8n
From the feedback received during the User Acceptance Tests (UAT), we understood that planners would need the tool to be better integrated into their current processes and workflows.
AI Agents equipped with tools
The planning optimisation algorithm has been packaged in a FastAPI backend with multiple endpoints:
- /upload_prod: receives a POST request containing the demand dataset and uploads it to the backend
- /launch_plan: receives a GET request with parameters such as setup cost, holding cost and time unit

How do we connect this backend to an AI agent?
We use an AI Agent node in n8n, equipped with a tool node that can send and receive HTTP requests.

For all use cases, the architecture of this AI Agent node is identical:
- Large Language Model: in the example above, we use an OpenAI model
- HTTP Request tool node with a system message that explains how to connect to the API and what kind of data to expect as output
This node is used to generate a summary of the optimal production plan, which is sent via email.
AI Workflow: Automated Email Reply
Production planners typically receive their requests from the business team via email, which includes details in the body and requested volumes by period in the attachment.

They wanted to answer these requests automatically, without manually downloading the attachment, uploading it to the UI and drafting an email based on the results shown in the UI.
We agreed with them on a specific email format to ensure all required information is included:
- Attachment: demand dataset in (.csv) format
- Email body: all the parameters needed, such as holding costs, setup costs and currency

The AI agent presented above receives data from another agent that parses the email to extract the relevant parameters.

Step 1: Collect Email and Download the Attachment
The Gmail trigger node collects the email body and downloads the attachment.

The (.csv) file is converted into JSON and sent via a POST request to the backend.
Now that the dataset is uploaded, we can provide the email body to the first AI Agent node.
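The conversion step can be sketched in a few lines of standard-library Python; the CSV column names (`period`, `quantity`), the upload URL and the payload shape are assumptions for illustration, since n8n performs this step with its own built-in nodes.

```python
import csv
import io
import json
from urllib import request

def csv_to_records(csv_text: str) -> list[dict]:
    """Convert the demand CSV attachment into a JSON-ready list of records."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [{"period": int(r["period"]), "quantity": float(r["quantity"])}
            for r in reader]

def post_demand(records, url="http://localhost:8000/upload_prod"):
    """POST the converted dataset to the backend (URL is an assumption)."""
    body = json.dumps({"dataset_id": "email-attachment",
                       "records": records}).encode()
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)
```

In the n8n workflow itself, the equivalent of `csv_to_records` is the spreadsheet-to-JSON conversion node, and `post_demand` corresponds to the HTTP Request node pointed at /upload_prod.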
Step 2: Generating the Optimal Production Plan
We have two AI Agent nodes in this workflow:
- The AI Agent Parser parses the email content to extract the parameters, which are returned in JSON format.
- The AI Agent API Request ingests these parameters and queries the FastAPI backend to retrieve the outputs used to generate the written analysis.

In the system prompt of the first AI Agent node, we detail how to parse the email to collect the right parameters.

The outputs of this AI Agent Parser are sent to the second AI Agent, which queries the backend.

In a minimal system prompt, we instruct the AI Agent API Request on how to use the tool.
We provide an overview of the available parameters:

We list the outputs of the API endpoint:

We detail the expected task:

The output of the second agent is sent back to the business team via email using the last Gmail node.

The summary includes the minimal set of data (costs, production batches) needed by the business team to provide a quotation to the customer.
Conclusion
This workflow has been deployed as a POC with two users, who provided encouraging feedback.
We are looking for business cases to productise this approach and propose the feature across all our analytics products.
So far, it has been decided that this feature will be used for orders of non-critical items.
AI Workflows as enablers
Based on my experience designing and implementing analytics products, the main obstacle to the rapid adoption of a tool is its integration into existing processes and workflows.
For production planners who manage 100+ references with dozens of customers and internal stakeholders, it is preferable to avoid learning a new tool with a specific interface and additional manual steps.
Therefore, AI agents can be used to integrate a tool into any existing workflow with minimal impact and additional workload for users.
With the support of n8n, we have experimented with integrating our analytics products with Jira for workforce planning, as well as using Telegram for transportation routing and the ERP modules used by our customers.
What’s next?
This workflow can be enhanced to leverage the full potential of large language models (LLMs).
For instance, we can ask the agents to simulate multiple volume scenarios to advise the customer on whether to increase or reduce their ordered quantity to obtain a better price.
As long as we have explained to the agent how to use the tool (i.e., our analytics product, packaged in a FastAPI microservice), we can work with it as if we had an analyst who can run scenarios.
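The scenario loop the agent would run can be sketched as a small helper that scales the demand and re-solves each variant; the function name, the scaling factors and the injected `solver` callable are assumptions chosen so the sketch stays decoupled from any particular backend.

```python
def simulate_scenarios(demand, solver, factors=(0.8, 1.0, 1.2)):
    """Re-run the planning solver on scaled demand scenarios.

    demand:  list of quantities per period
    solver:  callable taking a demand list and returning a total cost
    factors: volume multipliers to simulate (e.g. -20%, baseline, +20%)
    Returns a dict mapping each factor to the solver's result.
    """
    return {f: solver([q * f for q in demand]) for f in factors}
```

In the real workflow, `solver` would be a call to the /launch_plan endpoint; the agent could then compare the returned costs across scenarios and phrase a recommendation in plain English.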
About Me
Let’s connect on LinkedIn and Twitter. I am a Supply Chain Engineer who uses data analytics to improve logistics operations and reduce costs.
For consulting or advice on analytics and sustainable supply chain transformation, feel free to contact me via Logigreen Consulting.
If you are interested in Data Analytics and Supply Chain, have a look at my website.