than convincing someone of a truth they cannot see in their own data.
Data science and sustainability experts face the same problem: our concepts may be too abstract and theoretical, making them difficult for decision-makers to relate to.
I learned this the hard way while developing my startup!
When I published a case study on Green Inventory Management on TDS in 2024, I believed the logic was solid and convincing, but the impact was limited.
The article explained the mathematical theory behind it and used an actual case study to demonstrate the sustainability benefits.
Yet it didn’t convert sceptics.
Customer: “I’m sure it won’t work with our operations!”
Why? Because it wasn’t connected to their data and constraints.
So I decided to change the approach.
I packaged the simulation tool in a FastAPI microservice and gave my customers the ability to test the model themselves using an MCP Server connected to Claude Desktop.
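To give an idea of the setup, here is a minimal sketch of such a microservice; the run_simulation() engine is a hypothetical placeholder, and only a reduced version of the parameter model is shown here (the full model appears later in the article).

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class LaunchParamsGrinv(BaseModel):
    # Reduced version of the parameter model; the full model is shown later in the article
    n_day: int = 30   # Number of days in the simulation
    R: float = 2      # Review period (days)

def run_simulation(params: LaunchParamsGrinv) -> dict:
    # Hypothetical placeholder for the actual simulation engine
    return {"co2_total_kg": 0.0, "truck_deliveries": 0}

@app.post("/grinv/launch_grinv")
def launch_grinv(params: LaunchParamsGrinv) -> dict:
    # Run the Green Inventory Management simulation and return the KPIs
    return {"params": params.dict(), "results": run_simulation(params)}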

The objective was to have them ask the LLM to run their own scenarios, adjust their parameters, and see how CO₂ emissions dropped in response to different inventory policies.
In this article, I’ll share the approach I used for this experiment and the feedback I received from a prospect, the Supply Chain Director of a retail group based in the Asia Pacific region.
What’s Green Inventory Management?
In this section, I want to briefly explain the concept of Green Inventory Management so you have the context to understand the tool’s added value.
Context: Inventory Management for a Retail Company
Let us put ourselves in our Supply Chain Director’s shoes.
His teams (inventory, warehouse and transportation operations) are responsible for replenishing stores from a central distribution centre.

When they need specific products, stores automatically send replenishment orders via their ERP to the Warehouse Management System.

These automated orders follow rules implemented by the inventory team, known as the periodic review “Order-Up-To-Level (R, S)” policy.
- The ERP reviews each store’s inventory level, also called inventory on hand (IOH), every R days
- The delta between the target inventory S and the inventory level is calculated: Δ = S − IOH
- A Replenishment Order is created and transmitted to the warehouse with the quantity: Q = S − IOH
After transmission, the order is prepared at the warehouse and delivered to the store within a specific lead time (LD) in days.

To be more concrete, let us take the example above:
- R = 25 days: we review the inventory every 25 days, as you can see in the blue scatter plot
- S = 1,995 units: we order up to this level, as shown in the last graph.
The inventory teams usually set these parameters in the systems, and the replenishment orders are automatically triggered.
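To make the replenishment logic tangible, here is a minimal sketch of the (R, S) periodic review policy for a single SKU, assuming a simple random daily demand; it is only illustrative and far simpler than the simulation behind the actual tool.

import random

# Minimal sketch of a periodic review (R, S) policy for one SKU.
# Illustrative only: the real simulation covers many SKUs, cartons,
# pallets, trucks and the related CO₂ emissions.
random.seed(42)

R, S, LD = 25, 1995, 1       # review period (days), order-up-to level, lead time (days)
ioh = S                       # inventory on hand at day 0
pipeline = {}                 # orders in transit: arrival day -> quantity

for day in range(1, 91):
    ioh += pipeline.pop(day, 0)               # receive the order arriving today
    ioh -= min(ioh, random.randint(40, 80))   # serve the day's demand (no backorders)
    if day % R == 0:                          # review day: order up to S
        q = max(S - ioh, 0)                   # Q = S - IOH
        pipeline[day + LD] = pipeline.get(day + LD, 0) + q
        print(f"Day {day}: replenishment order of {q} units")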
What if we optimise these parameters?
Impacts on Logistics Operations
In my experience, these parameters are, more often than not, not set optimally.
The problem is that they significantly impact the efficiency of your warehouse and transportation operations.
This increases carton and plastic consumption and reduces productivity.

In the example above, items are stored in cartons containing units that can be picked individually.
If the order quantity is five, the operator will:
- Open a box of 20 units and take five units;
- Take a new box and put these items in it;
- Palletise the boxes using plastic film.
The other impact is on truck fill rate and CO₂ emissions.

With a high delivery frequency, you reduce the quantity per shipment.
This leads to using smaller trucks that may not be full.
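As a back-of-the-envelope illustration, the sketch below estimates cartons, pallets, packaging material and truck fill for a given order quantity, using the default parameter values of the simulation model presented later; the figures are illustrative, not outputs of the actual tool.

import math

# Back-of-the-envelope impact of an order quantity on packaging and truck fill.
# Parameter values mirror the defaults of the simulation model shown later.
pcs_carton, cartons_pal, pallet_truck = 15, 25, 10
carton_weight, plastic_weight = 0.3, 0.173   # kg per carton, kg of film per pallet

def packaging_for_order(q_pieces: int) -> dict:
    cartons = math.ceil(q_pieces / pcs_carton)     # full + mixed cartons
    pallets = math.ceil(cartons / cartons_pal)
    return {
        "cartons": cartons,
        "pallets": pallets,
        "carton_kg": round(cartons * carton_weight, 2),
        "plastic_kg": round(pallets * plastic_weight, 2),
        "truck_fill": pallets / pallet_truck,       # share of one truck used
    }

# Frequent small orders (5 pieces) vs. a consolidated order (300 pieces)
print(packaging_for_order(5))
print(packaging_for_order(300))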
What can we do?
Objectives of Green Inventory Management
We can test multiple scenarios, with different operational parameters, to find the optimal setup.
For that, I have loaded customer data into the simulation model to test the tool with real scenarios.

Users can adjust some of these parameters to simulate different scenarios.
from pydantic import BaseModel

class LaunchParamsGrinv(BaseModel):
    n_day: int = 30                # Number of days in the simulation
    n_ref: int = 20                # Number of SKUs in the simulation
    pcs_carton: int = 15           # Number of pieces per full carton
    cartons_pal: int = 25          # Number of cartons per pallet
    pallet_truck: int = 10         # Number of pallets per truck
    k: float = 3                   # Safety factor for safety stock
    CSL: float = 0.95              # Cycle service level target
    LD: float = 1                  # Lead time for delivery (days)
    R: float = 2                   # Review period (days)
    carton_weight: float = 0.3     # Carton material weight (kg)
    plastic_weight: float = 0.173  # Plastic film weight per pallet (kg)
These parameters include:
- n_day and n_ref: define the scope of the simulation
- pcs_carton, cartons_pal, LD and pallet_truck: parameters linked to warehousing and transportation operations
- carton_weight, plastic_weight: sustainability parameters
- R, k and CSL: parameters set by the inventory team
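For instance, a team can build a scenario by overriding only the defaults in its own scope; the values below are illustrative, not recommendations.

# Illustrative scenarios built by overriding a few defaults of the model above
scenario_inventory = LaunchParamsGrinv(R=1, CSL=0.93)            # inventory team: shorter review period
scenario_sustainability = LaunchParamsGrinv(carton_weight=0.25)  # sustainability team: lighter cartons
print(scenario_inventory.dict())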
I want our Supply Chain Director to sit down with his teams (inventory, warehouse, transportation and sustainability) to challenge the status quo.
If they want to achieve a specific goal, our director can:
- Challenge his inventory teams to find better review periods (R) or cycle service level (CSL) targets
- Ask the sustainability team to find lighter carton materials
- Redesign his warehouse operations to reduce the lead time (LD)

For that, we need to provide them with a tool to simulate the impact of specific changes.

This is what we are going to do with the support of an MCP Server connected to Claude AI.
Demo of the Green Inventory Management AI Assistant
Now that we understand how this simulation tool can add value to my customers, let me show you examples of the analyses they performed.
These tests were performed using customer data over a simulation horizon of up to 90 days.
Onboarding of users
I have connected the MCP server to the Claude environment used by the Supply Chain managers so they can “play with the tool”.
Most of them didn’t take the time to review the initial case study and directly asked Claude about the tool.

Fortunately, I have documented the MCP tools to provide context to the agent, as in the tool launch_greeninv shared below.
@mcp.tool()
def launch_greeninv(params: LaunchParamsGrinv):
    """
    Launch a complete Green Inventory Management simulation.
    This tool sends the input parameters to the FastAPI microservice
    (via POST /grinv/launch_grinv) and returns detailed sustainability
    and operational KPIs for the chosen replenishment rule (Review Period R).
    -------------------------------------------------------------------------
    🌱 WHAT THIS TOOL DOES
    -------------------------------------------------------------------------
    It runs the full simulation described in the "Green Inventory Management"
    case study, reproducing the behavior of a real retail replenishment system
    using a (R, S) Periodic Review Policy.
    The simulation estimates:
    - Replenishment quantities and order frequency
    - Stock levels and stockouts
    - Number of full and mixed cartons
    - Number of pallets and truck deliveries
    - CO₂ emissions for each store and globally
    - Carton material and plastic usage
    - Operator productivity (order lines and pieces per line)
    [REMAINDER OF DOC-STRING OMITTED FOR CONCISION]
    """
    logging.info(f"[GreenInv] Running simulation with params: {params.dict()}")
    try:
        # Call the FastAPI microservice and return its payload to the agent
        with httpx.Client(timeout=120) as client:
            response = client.post(LAUNCH, json=params.dict())
            response.raise_for_status()
            result = response.json()
        return {
            "status": "success",
            "message": "Simulation completed",
            "results": result
        }
    except Exception as e:
        logging.error(f"[GreenInv] Error during API call: {e}")
        return {
            "status": "error",
            "message": str(e)
        }
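For completeness, here is a minimal sketch of how this tool could be wired into an MCP server with the official Python SDK; in the actual script, the FastMCP instance and the LAUNCH constant sit above the tool definition, and the base URL below is a placeholder for wherever the FastAPI microservice is deployed.

import logging

import httpx
from mcp.server.fastmcp import FastMCP

logging.basicConfig(level=logging.INFO)

BASE_URL = "http://localhost:8000"           # placeholder for the deployed microservice
LAUNCH = f"{BASE_URL}/grinv/launch_grinv"    # endpoint called by launch_greeninv

mcp = FastMCP("green-inventory-management")  # hypothetical server name

# ... @mcp.tool() definitions such as launch_greeninv go here ...

if __name__ == "__main__":
    # Claude Desktop starts this script over stdio, as declared in its MCP configuration
    mcp.run(transport="stdio")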
I was quite satisfied with Claude’s introduction to the tool.
It starts by introducing the core capabilities of the tool from an operational perspective.

Quickly, our director began to send me long emails with questions about how to use the tool:
- How to set up the parameters?
- Who should I involve in this exercise?
My initial reflex was to reply: “Why don’t you ask Claude?”
That is what they did, and the results were excellent: Claude proposed an analysis framework.

This framework is almost perfect; I would just have put the lead time (LD) also in the scope of the Warehouse Manager.
However, I have to admit that I would never have been able to generate such a concise and well-formatted framework by myself.
Then, Claude proposed a plan for this study with multiple phases.

Let me take you through the different phases from the user’s perspective.
Phase 1: Baseline Assessment
I advised the team to systematically ask Claude for a clear dashboard with a concise executive summary.
That’s what they did for Phase 1.

As you can see in the screenshot above, Claude used the MCP Server tool launch_greeninv to run an analysis with the default parameters defined in the Pydantic model.
With the outputs, it generated the Executive Summary for our director.

The summary is concise and straight to the point.
It compares the outputs (key performance indicators) to the targets shared in the MCP docstring and the master prompt.
What about the managers?
Then it generated team-specific outputs, including tables and comments that clearly highlighted the most significant issues, as shown in the example below.

What’s interesting here is that our warehouse manager had only mentioned the target pieces per line in a previous message.
This means we can have the tool learn not only from the MCP tools’ docstrings, master prompt, and Pydantic models, but also from user interactions.

Finally, the tool demonstrated its ability to take a strategic approach, providing mid-term projections and alerting on the key indicators.

However, nothing is perfect.
When you have weak prompting, Claude never misses an opportunity to hallucinate and propose decisions outside the scope of the study.
Let us continue the exercise, following Anthropic’s model, and move on to Phase 2.
Phase 2: Scenario Planning
After brainstorming with his teams, our director collected multiple scenarios from each manager.

What we can see here is that each manager wanted to challenge the parameters within their scope of responsibility.
This thought process is then translated into actions.

Claude decided to run the six scenarios listed above.
The challenge here is to compile all the results into a synthetic, insight-driven summary.
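Outside Claude, the same comparison could be scripted directly against the microservice, as in the sketch below; the scenario definitions and the KPI keys are illustrative examples, not the actual six scenarios or the real result schema.

import httpx

# Illustrative scenario comparison run directly against the FastAPI microservice.
# Scenario definitions and KPI keys ("co2_total_kg", "truck_fill_rate") are
# hypothetical examples, not the six scenarios proposed by the managers.
scenarios = {
    "Baseline": LaunchParamsGrinv(),
    "Weekly review": LaunchParamsGrinv(R=7),
    "Lower service level": LaunchParamsGrinv(CSL=0.90),
    "Lighter cartons": LaunchParamsGrinv(carton_weight=0.25),
}

with httpx.Client(timeout=120) as client:
    for name, params in scenarios.items():
        kpis = client.post(LAUNCH, json=params.dict()).json()
        print(name, kpis.get("co2_total_kg"), kpis.get("truck_fill_rate"))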

In the case study published in 2024, I focused only on the first three scenarios, examining each performance indicator individually.
What about Claude?
Claude was smarter.

Although we had the same kind of data available, it produced something more “cross-functional” and decision-driven.
- We have business-friendly names for each scenario that are understandable across functions.
- Each scenario is linked to the team that pushed for it.
Finally, it provided an optimal scenario that represents a consensus between the teams.

We are even provided with a scorecard that explains to each team why the scenario is best for everyone.
For a more detailed breakdown of the agent’s outputs, feel free to have a look at this tutorial:
Conclusion
A new hope for the concept of Green Inventory Management
After a few weeks of experimentation, the Supply Chain Director is convinced of the need to implement Green Inventory Management.
The only bottleneck is now on their side.
With Claude’s support, the four managers involved in the study understood the impact of their roles on the overall efficiency of the distribution chain.

This helps us at LogiGreen onboard Supply Chain departments for complex optimisation exercises like this one.
In my opinion, it is easier to conduct a green transformation when all teams have ownership and sponsorship.
And the only way to get that is to make sure that everybody understands what we are doing.
Based on the initial results of this modest experiment, I think we have found an excellent tool for that.
Would you like other case studies using MCP Servers for Supply Chain Optimisation?
AI Agent for Supply Chain Network Optimisation
In another article published on Towards Data Science, I share a similar experiment focused on the Supply Chain Network Design exercise.

The objective here is more macro-level.
We want to determine where goods are produced to serve markets at the lowest cost and in an environmentally friendly way.

While the algorithm differs, the approach stays the same.
We try multiple scenarios with parameters that favour different teams (finance, sustainability, logistics, manufacturing) to reach a consensus.

As in this case study, Claude does a great job of synthesising the results and providing data-driven recommendations.
For more details, you can watch this video.
About Me
Let’s connect on LinkedIn and Twitter. I am a Supply Chain Engineer who uses data analytics to improve logistics operations and reduce costs.
For consulting or advice on analytics and sustainable supply chain transformation, feel free to contact me via Logigreen Consulting.
If you are interested in Data Analytics and Supply Chain, have a look at my website.
